Intermediate · Reading time: ~8 min

Synchronization

The synchronized keyword, intrinsic locks, ReentrantLock, and ReadWriteLock

Definition

Synchronization is the set of techniques that make shared mutable state safe under concurrent access. In Java this is not only about “preventing two threads from entering the same code at once”; it is about establishing visibility guarantees, preserving invariants across multiple fields, and creating well-defined happens-before relationships under the Java Memory Model. A synchronized program is one where readers observe a state that could have been produced by some valid execution order, not a random mixture of half-finished writes.

At the JVM level, synchronization is implemented through monitor operations, memory barriers, lock metadata in object headers, and higher-level lock implementations built on top of queued synchronizers. Correct synchronization therefore combines language constructs, library choices, and mechanical sympathy for contention patterns.

Core Concepts

The synchronized keyword provides intrinsic locking. Entering a synchronized block performs monitor acquisition; leaving it performs monitor release, even when the block exits via exception. This gives mutual exclusion and a memory effect: a successful monitor release happens-before a subsequent successful acquisition of the same monitor. That is why synchronized solves both atomicity and visibility for the guarded state.

Intrinsic locks are reentrant, so the same thread can enter the same monitor multiple times without deadlocking itself. Every object can act as a monitor, but using publicly accessible objects such as this, boxed values, or string literals is risky because external code may accidentally lock on the same object. Production code often uses a private final lock object to make the lock scope explicit.

wait(), notify(), and notifyAll() operate on the monitor’s wait set. A thread must own the monitor before calling them. wait() atomically releases the monitor and suspends the thread until notification, interruption, timeout, or spurious wakeup. Because spurious wakeups are legal, the correct pattern is always “wait in a loop while the condition is false”. notify() wakes one waiter, notifyAll() wakes all; the latter is usually safer when multiple conditions may be multiplexed on the same monitor.

Explicit locks such as ReentrantLock expose the same core idea with more control. You gain timed acquisition, interruptible acquisition, optional fairness, and multiple Condition queues. ReadWriteLock allows multiple concurrent readers with exclusive writers, which helps only when reads are frequent, long enough to amortize coordination, and contention is real. On write-heavy or tiny critical sections it can be slower than a plain mutex.

Practical Usage

Use synchronized for small, local invariants where simple mutual exclusion is enough. It is ideal for protecting a small object graph, lazy initialization with a clear guard, or condition queues implemented with wait/notifyAll. Its biggest strength is readability: the lock is lexical, automatic on exit, and easy to review.

Use ReentrantLock when you need features intrinsic locking does not offer. Examples include tryLock() for deadlock avoidance, lockInterruptibly() for cancellation-aware blocking, or several Condition objects so different waiters do not wake each other unnecessarily. The cost is discipline: forgetting unlock() in a finally block is a correctness bug, not just a style problem.

Use ReadWriteLock only after measuring. Teams often reach for it because the name sounds scalable, but real speedups depend on read/write ratio, critical-section duration, cache effects, and writer starvation behavior. Sometimes immutable snapshots or ConcurrentHashMap deliver better throughput with less complexity.

Code Examples

class Counter {
    private final Object lock = new Object();
    private int value;

    int incrementAndGet() {
        synchronized (lock) {
            value++;
            return value;
        }
    }

    int get() {
        synchronized (lock) {
            return value;
        }
    }
}

This example protects both reads and writes with the same monitor. That matters because unsynchronized reads could observe stale data even if all writes are synchronized.

import java.util.ArrayDeque;
import java.util.Queue;

class BoundedBuffer<T> {
    private final Queue<T> queue = new ArrayDeque<>();
    private final int capacity;

    BoundedBuffer(int capacity) {
        this.capacity = capacity;
    }

    public synchronized void put(T item) throws InterruptedException {
        while (queue.size() == capacity) {
            wait();
        }
        queue.add(item);
        notifyAll();
    }

    public synchronized T take() throws InterruptedException {
        while (queue.isEmpty()) {
            wait();
        }
        T item = queue.remove();
        notifyAll();
        return item;
    }
}

The important part is the guarded loop. Using if instead of while breaks under spurious wakeups and under races where another thread consumes the condition before the awakened thread reacquires the monitor.
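The same buffer can be expressed with an explicit lock and two Condition queues, so producers park on one queue and consumers on another, and signal() wakes only a relevant waiter instead of the whole wait set. This is a sketch; the class and condition names (ConditionBuffer, notFull, notEmpty) are illustrative, not from the text above.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class ConditionBuffer<T> {
    private final Queue<T> queue = new ArrayDeque<>();
    private final int capacity;
    private final Lock lock = new ReentrantLock();
    // Separate wait queues: producers park on notFull, consumers on notEmpty.
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();

    ConditionBuffer(int capacity) {
        this.capacity = capacity;
    }

    public void put(T item) throws InterruptedException {
        lock.lock();
        try {
            while (queue.size() == capacity) {
                notFull.await();       // releases the lock while waiting
            }
            queue.add(item);
            notEmpty.signal();         // wake one consumer, not every waiter
        } finally {
            lock.unlock();
        }
    }

    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (queue.isEmpty()) {
                notEmpty.await();
            }
            T item = queue.remove();
            notFull.signal();          // wake one producer
            return item;
        } finally {
            lock.unlock();
        }
    }
}
```

Note that await() still sits inside a while loop: Condition documents spurious wakeups just as Object.wait() does.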

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class AccountService {
    private final Lock lock = new ReentrantLock();

    void transfer(Account from, Account to, long amount) throws InterruptedException {
        if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                from.debit(amount);
                to.credit(amount);
            } finally {
                lock.unlock();
            }
        } else {
            throw new IllegalStateException("Could not acquire transfer lock");
        }
    }
}

tryLock adds a failure path instead of indefinite blocking. That is valuable in production systems where bounded latency matters more than blindly waiting forever.
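The earlier caution about ReadWriteLock can be made concrete with a minimal read-mostly cache sketch. The class name ReadMostlyCache is illustrative; whether this beats a plain synchronized map depends on the measurements discussed above.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class ReadMostlyCache<K, V> {
    private final Map<K, V> map = new HashMap<>();
    private final ReadWriteLock rw = new ReentrantReadWriteLock();

    public V get(K key) {
        rw.readLock().lock();       // many readers may hold this concurrently
        try {
            return map.get(key);
        } finally {
            rw.readLock().unlock();
        }
    }

    public void put(K key, V value) {
        rw.writeLock().lock();      // exclusive: blocks readers and other writers
        try {
            map.put(key, value);
        } finally {
            rw.writeLock().unlock();
        }
    }
}
```

Acquiring the write lock while holding the read lock deadlocks with ReentrantReadWriteLock, which is the upgrade pitfall mentioned later in this article.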

Trade-offs

Intrinsic locking is compact, hard to misuse accidentally on normal code paths, and optimized heavily by the JVM. It is usually the right default for narrow critical sections. Its weakness is limited expressiveness: no timed acquisition, no non-blocking acquisition, one implicit condition queue, and no easy instrumentation hooks.

ReentrantLock offers flexibility and better fit for advanced coordination, but with more ceremony and more room for bugs. Fair locks can reduce starvation, yet fairness often lowers throughput because it suppresses beneficial lock barging. ReadWriteLock may increase read concurrency, but can also increase bookkeeping, hurt cache locality, and complicate upgrade/downgrade scenarios.

The deepest trade-off is not syntax but contention strategy. If many threads fight over one hot lock, the real fix may be sharding, immutability, batching, actor-style ownership, or a concurrent data structure. Better lock APIs do not rescue a poor ownership model.

Common Mistakes

The first mistake is using synchronization on writes but not on reads. That protects against lost updates but not stale visibility. Another classic bug is synchronizing on the wrong object: locking on this in one method and a private lock in another means the state is not actually protected consistently.
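The wrong-object mistake is easy to illustrate with a short sketch (the class name BrokenCounter is illustrative). The two methods compile and run, but they never exclude each other:

```java
class BrokenCounter {
    private final Object lock = new Object();
    private int value;

    // Guarded by the intrinsic lock on `this`.
    public synchronized void increment() {
        value++;
    }

    // Guarded by a different monitor (`lock`), so increment() and get()
    // can run concurrently: `value` is not consistently protected.
    public int get() {
        synchronized (lock) {
            return value;
        }
    }
}
```

Single-threaded tests pass, which is exactly why this bug survives review; only contention exposes it.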

With monitor methods, developers often call wait() or notify() outside a synchronized block and get IllegalMonitorStateException. More subtle is using if around wait() instead of while, which fails under spurious wakeups or competing consumers. Another recurring issue is calling notify() when multiple logical conditions share one monitor; the awakened thread may not be the one that can make progress.

For explicit locks, forgetting unlock() in finally is catastrophic. So is holding a lock across slow I/O, remote calls, or logging frameworks that might block. Long critical sections amplify contention and can turn harmless bursts into cascading latency incidents. With ReadWriteLock, teams also underestimate writer starvation or assume a read lock can always be upgraded safely to a write lock, which is not generally true.

Senior-level Insights

Under HotSpot, lock behavior is dynamic. Uncontended locks are cheap; contended locks may inflate and involve parking. The JVM can eliminate some locks via escape analysis or coarsen adjacent locks when profitable. That means microbenchmarks around synchronization can be misleading unless they reflect realistic contention and object escape patterns.

Happens-before is the key mental model. synchronized, volatile, thread start, thread join, and completion of certain concurrent utilities all create ordering edges. Senior engineers reason about which thread publishes data, which thread consumes it, and what edge connects them. Without that edge, code may “work on my machine” for years and then fail under a different CPU, JIT phase, or load pattern.
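A minimal illustration of such an edge, using only thread start and join (the class name is illustrative): the field is deliberately non-volatile, yet the read is still guaranteed to see the write because start() and join() create the ordering.

```java
class Publication {
    static int data;   // deliberately non-volatile

    static int publishAndRead() throws InterruptedException {
        Thread writer = new Thread(() -> data = 42);
        writer.start();   // start() happens-before the new thread's run()
        writer.join();    // run()'s completion happens-before join() returning
        return data;      // guaranteed to observe 42 via those two edges
    }
}
```

Remove the join() and the guarantee disappears: the main thread would race the writer with no edge connecting them.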

Diagnosis is also part of synchronization expertise. In production, a lock problem is investigated with thread dumps, JFR lock profiling, contention metrics, and code review of ownership boundaries. If a lock protects too much, reduce the shared mutable surface. If fairness is needed, ask what starvation signal forced the decision. If deadlocks are possible, redesign for lock ordering or timed acquisition rather than hoping they never happen.

Glossary

  • Intrinsic lock: Built-in monitor used by synchronized.
  • Monitor: JVM synchronization construct tied to an object.
  • Reentrancy: Ability of the same thread to acquire the same lock repeatedly.
  • Happens-before: Ordering relation that guarantees visibility and ordering of memory effects.
  • Spurious wakeup: Legal wakeup from wait() without a matching notification.
  • Condition: Explicit wait queue associated with a Lock.
  • Fair lock: Lock that tries to honor acquisition order.
  • Contention: Multiple threads competing for the same synchronization resource.

Cheatsheet

  • synchronized gives mutual exclusion and visibility.
  • Guard each shared invariant with one well-defined lock.
  • Prefer private final lock objects over public lock targets.
  • Use while, not if, around wait().
  • Call wait/notify/notifyAll only while holding the monitor.
  • Prefer notifyAll() when multiple wait conditions exist.
  • Always unlock() in finally with explicit locks.
  • Use tryLock or lockInterruptibly when bounded waiting matters.
  • Benchmark ReadWriteLock; do not assume it is automatically faster.
  • If contention is high, revisit the data ownership model, not only the lock choice.
