Learn the difference between row locks, table locks, and advisory locks, how deadlocks form and are resolved, and why understanding lock granularity prevents the most common write-contention bottlenecks.
27 min read · 2026-03-28 · medium · Tags: locking, databases, deadlocks, transactions, postgresql
Transaction A starts. It updates a row in the orders table where order_id = 1001, acquiring an exclusive row lock. Transaction B starts simultaneously. It updates a row in the shipments table where shipment_id = 5, acquiring an exclusive row lock. Then:
Transaction A needs to update the shipment row for order 1001. Blocked. That row is locked by Transaction B.
Transaction B needs to update order 1001's status. Blocked. That row is locked by Transaction A.
Neither will ever proceed. PostgreSQL detects the cycle after about one second and kills one transaction with ERROR: deadlock detected. The application retries. The exact same interleaving happens again, because both retry paths acquire locks in the same order. By the time the on-call engineer wakes up, hundreds of write queries are queued and p99 latency is measured in seconds.
Deadlocks are the most common write-contention failure in production databases. They are entirely preventable once you understand why they form.
Database locking is the mechanism that prevents two transactions from making conflicting changes to the same data simultaneously. A lock is an exclusive claim on a resource (a row, a page, or a table) that blocks other transactions from claiming it in an incompatible mode until the first transaction commits or rolls back.
Think of it like single-occupancy restrooms in a small restaurant. One person is inside (holds the lock). Everyone else queues outside (waits). When the person inside leaves (commits or rolls back), the first person in the queue enters. Multiple people can look through the window at the same time (read locks), but only one can occupy the room (write lock).
Every lock request goes through four stages: request, grant or queue, hold for the duration of the transaction, and release on commit or rollback. The deadlock scenario happens when two transactions form a cycle in the wait-for graph.
Step by step:
T1 acquires an exclusive lock on orders row 1001. Succeeds immediately.
T2 acquires an exclusive lock on shipments row 5. Succeeds immediately.
T1 requests a lock on shipments row 5. Held by T2, so T1 is queued.
T2 requests a lock on orders row 1001. Held by T1, so T2 is queued.
The lock manager detects the cycle (T1 waits for T2, T2 waits for T1). It aborts T2 as the victim and releases its locks. T1 proceeds.
// Pseudocode: deadlock detection via wait-for graph traversal
function check_deadlock(requesting_txn, blocking_txn):
    visited = {}
    queue = [blocking_txn]
    while queue is not empty:
        node = queue.pop()
        if node == requesting_txn:
            return DEADLOCK_DETECTED  // cycle found
        if node in visited:
            continue
        visited.add(node)
        for each lock in node.waiting_for:
            queue.append(lock.holder)  // follow the wait chain
    return NO_DEADLOCK
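The same traversal can be made runnable. A minimal Python sketch, using a hypothetical `Txn` object to model the wait-for edges (this is an illustration of the algorithm, not PostgreSQL's actual implementation):

```python
# Hypothetical model: each transaction records the transactions
# that hold the locks it is currently waiting on.
class Txn:
    def __init__(self, name):
        self.name = name
        self.waiting_for = []  # transactions holding locks we requested

def check_deadlock(requesting_txn, blocking_txn):
    """Return True if letting requesting_txn wait would create a cycle."""
    visited = set()
    queue = [blocking_txn]
    while queue:
        node = queue.pop()
        if node is requesting_txn:
            return True  # the blocker (transitively) waits on us: cycle
        if node in visited:
            continue
        visited.add(node)
        queue.extend(node.waiting_for)  # follow the wait chain
    return False

# The two-transaction scenario from the text:
t1, t2 = Txn("T1"), Txn("T2")
t1.waiting_for.append(t2)  # T1 waits on shipments row 5, held by T2
t2.waiting_for.append(t1)  # T2 waits on orders row 1001, held by T1
print(check_deadlock(t2, t1))  # True: cycle detected
```

A transaction with no path back to the requester (say, a third transaction waiting only on T1) would return `False`: a chain of waiters is fine, only a cycle is fatal.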
PostgreSQL runs this check after deadlock_timeout (default: 1 second) has elapsed, not on every lock request. This means a deadlock takes at least 1 second to detect even if the cycle formed immediately.
Not all lock modes conflict with each other. Reads can proceed alongside other reads. Intent locks allow the manager to check higher-level conflicts without scanning all row locks.
| Held \ Requested | Shared (S) | Exclusive (X) | Intent Shared (IS) | Intent Exclusive (IX) |
|---|---|---|---|---|
| Shared (S) | Compatible | Conflict | Compatible | Conflict |
| Exclusive (X) | Conflict | Conflict | Conflict | Conflict |
| Intent Shared (IS) | Compatible | Conflict | Compatible | Compatible |
| Intent Exclusive (IX) | Conflict | Conflict | Compatible | Compatible |
Intent locks (IS, IX) are taken on a table or page before acquiring row-level locks. This lets the lock manager detect table-level conflicts cheaply. For example: if Transaction A holds an Intent Exclusive lock on the orders table (meaning it will write some rows), Transaction B cannot acquire an Exclusive lock on the orders table (needed for TRUNCATE TABLE).
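The compatibility matrix is small enough to encode as a lookup table. A sketch in Python (the mode names and the grant rule follow the matrix above; `can_grant` is an illustrative helper, not a real database API):

```python
# Lock-mode compatibility matrix: True = both modes can be held at once.
COMPAT = {
    ("S", "S"): True,   ("S", "X"): False,  ("S", "IS"): True,   ("S", "IX"): False,
    ("X", "S"): False,  ("X", "X"): False,  ("X", "IS"): False,  ("X", "IX"): False,
    ("IS", "S"): True,  ("IS", "X"): False, ("IS", "IS"): True,  ("IS", "IX"): True,
    ("IX", "S"): False, ("IX", "X"): False, ("IX", "IS"): True,  ("IX", "IX"): True,
}

def can_grant(requested, held_modes):
    """A lock is grantable only if compatible with every mode already held."""
    return all(COMPAT[(requested, h)] for h in held_modes)

# Two readers coexist; a writer must wait for both:
print(can_grant("S", ["S", "IS"]))  # True
print(can_grant("X", ["S"]))        # False
```

The grant rule is the key point: a requested mode must be compatible with *all* currently held modes, which is why a single reader is enough to block an exclusive writer.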
In PostgreSQL, you can observe live locks directly:
-- All locks currently held
SELECT pid, mode, granted, relation::regclass
FROM pg_locks
WHERE relation IS NOT NULL;

-- Which transactions are blocking which
SELECT blocked.pid AS blocked_pid,
       blocking.pid AS blocking_pid,
       blocked.query AS blocked_query,
       blocking.query AS blocking_query
FROM pg_stat_activity blocked
JOIN pg_stat_activity blocking
  ON blocking.pid = ANY(pg_blocking_pids(blocked.pid))
WHERE cardinality(pg_blocking_pids(blocked.pid)) > 0;
The granularity at which locks are acquired determines how much concurrency is possible.
| Database | Default write granularity | DDL granularity | Notes |
|---|---|---|---|
| PostgreSQL | Row-level | AccessExclusiveLock on table | Very high concurrency for row-level DML |
| MySQL InnoDB | Row-level with gap locks | Metadata lock on table | Gap locks prevent phantom reads in range queries |
| SQLite (default) | Table-level write lock | Same | Single writer; an active write lock blocks readers until commit |
| SQLite (WAL mode) | Effectively page-level | Same | One writer and unlimited readers simultaneously |
Row-level locking maximizes concurrency: two transactions writing different rows of the same table never conflict. The cost is overhead: a bulk UPDATE modifying 100,000 rows holds 100,000 simultaneous row locks. Each lock consumes memory in the lock table, and other transactions touching those rows must wait for every one of them to release.
Pessimistic locking assumes conflicts will happen. It acquires a lock at read time, before any modification, so other transactions cannot modify the row until the first transaction commits or rolls back. Use it when you always need to write what you just read and the cost of a conflicting write would be incorrect business state.
-- Acquire an exclusive row lock at read time
BEGIN;
SELECT quantity FROM inventory
WHERE product_id = 7
FOR UPDATE;
-- No other transaction can modify this row until COMMIT or ROLLBACK.
-- quantity = 5, confirmed exclusively.
UPDATE inventory SET quantity = quantity - 1 WHERE product_id = 7;
COMMIT;
PostgreSQL provides four row-level lock modes for SELECT:
| Clause | Blocks | Compatible with |
|---|---|---|
| FOR UPDATE | Every other row-level lock, including FOR KEY SHARE | Nothing |
| FOR NO KEY UPDATE | FOR UPDATE, other FOR NO KEY UPDATE, FOR SHARE | FOR KEY SHARE |
| FOR SHARE | All writers (FOR UPDATE, FOR NO KEY UPDATE) | Other FOR SHARE, FOR KEY SHARE |
| FOR KEY SHARE | FOR UPDATE only | FOR SHARE, FOR NO KEY UPDATE, other FOR KEY SHARE |
FOR UPDATE is the strongest mode and the usual choice. FOR NO KEY UPDATE is appropriate when you modify non-primary-key columns of a row that has foreign key dependents: it does not conflict with the FOR KEY SHARE locks that foreign-key checks acquire. FOR SHARE lets multiple transactions hold a shared read lock that blocks writers.
Two modifiers control what happens when the target row is already locked:
-- Raise an error immediately if the row is already locked (do not queue)
SELECT quantity FROM inventory WHERE product_id = 7 FOR UPDATE NOWAIT;

-- Skip rows currently locked by other transactions (for job queues)
SELECT id FROM jobs WHERE status = 'pending'
ORDER BY created_at
LIMIT 1
FOR UPDATE SKIP LOCKED;
NOWAIT is correct when queuing behind a lock is never acceptable: reject the request now rather than wait. SKIP LOCKED is the standard pattern for concurrent job queues where each worker should claim uncontested work rather than queue behind another worker claiming the same job.
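The SKIP LOCKED claim semantics can be simulated in a few lines of Python with an in-memory stand-in for the jobs table (illustrative only; real workers run the SQL above, and the `locked` set plays the role of row locks held by other transactions):

```python
# In-memory stand-in for the jobs table: id -> status.
jobs = {1: "pending", 2: "pending", 3: "pending"}
locked = set()  # job ids currently row-locked by some worker

def claim_next_job():
    """FOR UPDATE SKIP LOCKED: take the first pending job nobody holds."""
    for job_id in sorted(jobs):
        if jobs[job_id] == "pending" and job_id not in locked:
            locked.add(job_id)  # acquire the "row lock"
            return job_id
    return None  # every pending job is already claimed by another worker

a = claim_next_job()  # worker A claims job 1
b = claim_next_job()  # worker B skips locked job 1, claims job 2
print(a, b)  # 1 2
```

Without SKIP LOCKED, worker B would queue behind worker A on job 1 and both workers would end up processing jobs serially; skipping locked rows is what makes the queue scale with worker count.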
Optimistic locking assumes conflicts are rare. It reads the row without acquiring a lock, then writes with a check that the row has not changed since the read. If the check fails (a concurrent transaction modified the row), the application retries from the read step. No lock is held during the window between read and write.
The standard implementation uses a version column incremented on every write:
-- Schema: add a version column to the table
ALTER TABLE orders ADD COLUMN version BIGINT NOT NULL DEFAULT 1;

-- Step 1: Read without holding a lock
SELECT id, status, total, version FROM orders WHERE id = 1001;
-- Returns: { id: 1001, status: 'pending', version: 5 }

-- Step 2: Write with an atomic version check
UPDATE orders
SET status = 'processing', version = version + 1
WHERE id = 1001 AND version = 5; -- fails if version changed since the read

-- 0 rows affected: a concurrent transaction changed the row. Retry.
-- 1 row affected: this transaction won. Proceed.
The row is never locked during the read-to-write window. Under low contention, this is a strict win over pessimistic locking: reads never block each other, and the retry path is rarely exercised.
Optimistic locking breaks when conflicts are frequent. If 20 workers simultaneously read version = 5 and all attempt to increment it, one wins and the other 19 must retry. With high write concurrency on the same rows, retries become the common case and latency degrades. Use pessimistic locking (FOR UPDATE) in those situations.
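The application-side retry loop around the version check can be sketched in Python. This uses a hypothetical in-memory row and a `compare_and_set` helper standing in for the UPDATE-and-check-rows-affected step; a real implementation would issue the SQL shown earlier:

```python
# In-memory stand-in for the orders row.
row = {"id": 1001, "status": "pending", "version": 5}

def compare_and_set(new_status, expected_version):
    """Mimics: UPDATE ... SET version = version + 1 WHERE version = expected.
    Returns the number of rows affected (0 or 1)."""
    if row["version"] != expected_version:
        return 0  # 0 rows affected: a concurrent writer won
    row["status"] = new_status
    row["version"] += 1
    return 1

def update_with_retry(new_status, max_attempts=3):
    for _ in range(max_attempts):
        seen_version = row["version"]            # Step 1: read, no lock held
        if compare_and_set(new_status, seen_version):
            return True                          # Step 2: atomic check won
    return False  # persistent contention: consider FOR UPDATE instead

update_with_retry("processing")
print(row)  # {'id': 1001, 'status': 'processing', 'version': 6}
```

Bounding the retry count matters: returning `False` after a few attempts is the signal that the workload is too contended for optimistic locking, which is exactly the failure mode the next paragraph describes.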
Advisory locks are named locks stored inside the database that are not tied to any table row or transaction. The database has no idea what they protect. They exist purely to let application processes coordinate with each other using the database as a central authority.
-- Session-level: held until explicitly released or the session ends
SELECT pg_advisory_lock(12345);   -- acquire (blocks if already held)
-- ... critical section ...
SELECT pg_advisory_unlock(12345); -- release

-- Transaction-level: released automatically at COMMIT or ROLLBACK
BEGIN;
SELECT pg_advisory_xact_lock(12345); -- acquire within this transaction
-- ... critical section ...
COMMIT; -- lock released automatically
The lock identifier (12345 above) is a 64-bit integer chosen by the application. Common conventions: hash the resource name, or keep a lock_keys table mapping names to stable integers.
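The hash-the-name convention can be sketched in Python. The scheme below (SHA-256, truncated to 8 bytes, wrapped to a signed 64-bit range) is one reasonable assumption, not a PostgreSQL built-in; any stable hash that fits `bigint` works:

```python
import hashlib

def advisory_key(name: str) -> int:
    """Derive a stable signed 64-bit key for pg_advisory_lock(bigint)."""
    digest = hashlib.sha256(name.encode("utf-8")).digest()
    unsigned = int.from_bytes(digest[:8], "big")
    # Wrap into the signed bigint range PostgreSQL expects.
    return unsigned - 2**64 if unsigned >= 2**63 else unsigned

key = advisory_key("nightly-report-job")
print(-2**63 <= key < 2**63)  # True: always fits PostgreSQL's bigint
```

Stability is the requirement: every process hashing the same resource name must get the same key, which is why a cryptographic hash is preferable to Python's built-in `hash()` (randomized per process).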
Advisory locks are the right tool when:
A distributed cron job should run on exactly one app server at a time. Workers try pg_advisory_xact_lock(job_id) and the one that wins executes the job.
A background process must prevent concurrent processing of the same entity. The lock does not correspond to a single row and does not need row-level locking semantics.
You need mutual exclusion across separate services sharing the same PostgreSQL instance.
Advisory locks are lightweight (sub-millisecond, no table storage). Their risk is that session-level locks must be released explicitly. A crashed worker that held a session-level advisory lock leaves it held for the lifetime of the database session, which can be until the connection pool recycles the connection.
PostgreSQL detects deadlocks after they happen. The far better approach is to prevent them entirely through consistent lock ordering: every transaction that acquires multiple row locks does so in the same predetermined sequence.
-- DEADLOCK-PRONE: two paths acquire locks in different orders

-- Transaction A (order processing):
BEGIN;
UPDATE orders SET status = 'processing' WHERE order_id = 1001;  -- locks orders first
UPDATE shipments SET status = 'pending' WHERE order_id = 1001;  -- then shipments
COMMIT;

-- Transaction B (concurrent fulfillment):
BEGIN;
UPDATE shipments SET status = 'shipped' WHERE order_id = 1001;  -- locks shipments first
UPDATE orders SET status = 'fulfilled' WHERE order_id = 1001;   -- then orders
COMMIT;

-- A holds orders, B holds shipments. Each waits for the other. Deadlock.

-- DEADLOCK-SAFE: always acquire locks in the same order

-- Transaction A:
BEGIN;
UPDATE orders SET status = 'processing' WHERE order_id = 1001;  -- orders FIRST
UPDATE shipments SET status = 'pending' WHERE order_id = 1001;  -- shipments SECOND
COMMIT;

-- Transaction B (fixed to match A's order):
BEGIN;
UPDATE orders SET status = 'fulfilled' WHERE order_id = 1001;   -- orders FIRST
UPDATE shipments SET status = 'shipped' WHERE order_id = 1001;  -- shipments SECOND
COMMIT;

-- Now both paths must hold the orders lock before the shipments lock.
-- They can never form a cycle.
If both transactions always lock orders before shipments, they serialize on the orders row. One gets it, runs both updates, and releases. The other then proceeds. No cycle is possible. The rule: sort lock acquisitions by a consistent key (often the primary key) or by table hierarchy.
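The sort-by-consistent-key rule is easy to enforce in application code with a small helper that canonicalizes lock targets before acquisition. A Python sketch (the `TABLE_RANK` hierarchy and `(table, pk)` tuple shape are assumptions for illustration):

```python
# Canonical order: table hierarchy first, then primary key.
# The ranking must be documented and shared by every code path.
TABLE_RANK = {"orders": 0, "shipments": 1}

def canonical_lock_order(targets):
    """targets: iterable of (table, pk) pairs to lock.
    Returns them sorted so every code path acquires locks identically."""
    return sorted(targets, key=lambda t: (TABLE_RANK[t[0]], t[1]))

# Both code paths, however they were written, now lock in the same order:
path_a = canonical_lock_order([("orders", 1001), ("shipments", 5)])
path_b = canonical_lock_order([("shipments", 5), ("orders", 1001)])
print(path_a == path_b)  # True: no cycle is possible
```

Each transaction would then issue its SELECT ... FOR UPDATE or UPDATE statements in the returned order. Centralizing the sort in one helper is what turns the convention into something code review can actually verify.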
Consistent lock ordering is a code review rule, not just a runtime optimization. Add it to your team's PR checklist: "Does this transaction acquire multiple row locks? Are they acquired in a documented canonical order?" One code path that acquires locks in the reverse order of every other code path is enough to enable deadlocks under load. In PostgreSQL, the pg_stat_activity view does not tell you which paths are deadlock-prone; you need to audit the application code.
Deadlocks are always preventable but require discipline. Any codebase where transactions acquire multiple row locks in varying orders will deadlock under load. Enforcing consistent ordering across a large team requires code review and documentation.
deadlock_timeout = 1s means one second of wasted work per deadlock. Under high contention, deadlocks fire repeatedly and each costs at least one second of wasted transaction time before the victim is aborted. Monitor pg_stat_database.deadlocks.
Long-held locks cascade into queue pile-ups. A transaction holding a row lock for 30 seconds (due to network I/O, slow application logic, or a stuck job) causes every other transaction waiting for that row to queue for 30 seconds. Always set lock_timeout in production.
lock_timeout is not set by default in PostgreSQL. A transaction can hold a lock indefinitely, and every waiting transaction queues behind it indefinitely. In production, set lock_timeout = '5s' (or lower for critical write paths) at the connection or session level. This surfaces lock contention as errors immediately rather than silently degrading throughput until the connection pool exhausts.
Bulk updates hold thousands of simultaneous row locks. An UPDATE ... WHERE created_at < '2024-01-01' touching 500K rows holds 500K locks simultaneously. Any application query touching one of those rows waits for the bulk job to finish. Batch large updates into smaller transactions.
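The batching advice can be sketched as a range generator: each chunk becomes its own short transaction, so at most one chunk's worth of row locks is held at any moment (the table name and chunking-by-primary-key scheme are illustrative assumptions):

```python
def batch_ranges(min_id, max_id, batch_size):
    """Yield (low, high] primary-key ranges for chunked UPDATEs."""
    low = min_id
    while low < max_id:
        high = min(low + batch_size, max_id)
        yield (low, high)
        low = high

# Each range would run as its own transaction, e.g.:
#   UPDATE big_table SET ... WHERE id > low AND id <= high;  COMMIT;
chunks = list(batch_ranges(0, 500_000, 200_000))
print(chunks)  # [(0, 200000), (200000, 400000), (400000, 500000)]
```

Committing between chunks is the whole point: it releases the locks and lets queued application queries interleave with the bulk job, at the cost of the job no longer being atomic as a whole.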
SELECT FOR UPDATE on a non-existent row acquires no lock. If the row you FOR UPDATE does not exist yet, PostgreSQL returns zero rows and holds no lock. A concurrent INSERT of that row is not prevented. This is a common "check then insert" race condition.
Optimistic locking eliminates blocking at the cost of retry logic. For high-contention workloads, consider the version column pattern described in the optimistic locking section above. No blocking, but the application must safely handle retries on version conflicts.
When asked how deadlocks form: Two transactions each hold a lock the other needs, forming a wait cycle. The lock manager detects the cycle and aborts one transaction as the victim. State this clearly before any other detail.
When asked how to prevent deadlocks: Consistent lock ordering. All code paths acquiring multiple row locks do so in the same order, typically sorted by primary key or table hierarchy. This is the correct answer before discussing optimistic locking or retries.
When asked about shared vs exclusive locks: Shared locks (reads) are mutually compatible. Exclusive locks (writes) conflict with everything. Intent locks let the manager detect table-level conflicts without scanning all row locks.
When asked how PostgreSQL detects deadlocks: It builds a wait-for graph and traverses it for cycles after deadlock_timeout (default 1 s). The transaction that runs the check and finds the cycle aborts itself as the victim, breaking the cycle.
When asked about MySQL vs PostgreSQL locking: Both use row-level locks for DML. MySQL InnoDB adds gap locks and next-key locks to prevent phantom reads in range queries. PostgreSQL uses MVCC snapshots to handle phantoms without gap locks.
When asked about lock_timeout in production: Always set it. SET lock_timeout = '5s' prevents a slow transaction from causing indefinite cascading queue buildup. This is a non-optional production configuration.
When asked about advisory locks: They are application-level semaphores stored in the database. Useful for distributed mutual exclusion ("only one job runs at a time"). The database cannot see what they protect; it enforces nothing beyond the mutual exclusion itself.
When asked if MVCC eliminates locking: No. MVCC eliminates read-write conflicts by giving readers their own snapshot. Write-write conflicts on the same row still require locks. MVCC and row-level locking coexist in every major relational database.
When asked about pessimistic vs optimistic locking: Use pessimistic (SELECT FOR UPDATE) when you must write what you just read and a conflict is expensive to retry (inventory, seat booking, financial debit). Use optimistic (version column) when conflicts are rare and reads greatly outnumber writes. State the tradeoff: pessimistic blocks; optimistic retries. Neither is universally better.
Database locks prevent conflicting concurrent writes by giving exclusive ownership of a resource to one transaction at a time until it commits or rolls back.
Deadlocks form when two transactions each hold a lock the other needs, creating a wait cycle that the lock manager resolves by aborting one transaction as the victim.
PostgreSQL detects deadlocks by traversing the wait-for graph after deadlock_timeout (default 1 s); the transaction that detects the cycle is aborted as the victim.
Deadlocks are completely preventable through consistent lock ordering: all transactions that acquire multiple row locks must acquire them in the same sequence, sorted by primary key or table hierarchy.
Long-held locks cause cascading queue buildup; always set lock_timeout in production and batch large bulk updates into small transactions to limit simultaneous lock counts.
Use pessimistic locking (SELECT FOR UPDATE) when you must write what you just read; optimistic locking (version column) when conflicts are rare; SKIP LOCKED for concurrent job queues; and advisory locks (pg_advisory_xact_lock) for cross-process coordination outside row scope.
MVCC — MVCC eliminates read-write lock conflicts by giving each reader its own snapshot, but write-write conflicts on the same row still require the row-level locking described here; the two mechanisms coexist in every major relational database.
Databases — Lock granularity (row vs table vs page) is one of the sharpest differentiators between database engines and directly determines write throughput under concurrent workloads.
B-tree indexes — Index page locks interact with row locks during index scans; understanding both explains why a bulk UPDATE on an indexed column can trigger more lock contention than expected.