The Loom Revolution: Java's 30-Year Journey to Easy Concurrency
The introduction of Virtual Threads (Project Loom) in Java 21 is more than just a new feature; it’s the culmination of a nearly 30-year quest for a concurrency model that is both easy to use and massively scalable. To understand why Virtual Threads are so revolutionary, we must look back at the two concurrency models that came before.
The evolution of concurrency in Java is a three-act story, as illustrated in the diagram below.
Act I: The Early Dream - Green Threads (Java 1.1)
In the earliest days of Java, not all operating systems had good (or any) support for native threads. To fulfill the “Write Once, Run Anywhere” promise, Java implemented its own threading system inside the JVM. These were called Green Threads.
- What they were: Threads managed entirely by the JVM, invisible to the underlying operating system. The JVM would schedule these Green Threads to run on a single OS thread.
- The Problem: As shown in the first panel of the diagram, this model had a fatal flaw. Since the OS only saw one thread, if any Green Thread made a blocking I/O call (like reading from a network socket), the entire OS thread would block. This froze all other Green Threads in the JVM. Furthermore, this model could never take advantage of multi-core CPUs, as all work was ultimately funneled through one OS thread. They were quickly abandoned.
Act II: The Workhorse - Platform Threads (Java 1.2 to Today)
To solve the problems of Green Threads, Java was re-engineered to use the operating system’s native threads directly. This gave us Platform Threads, the model we’ve used for over two decades.
- What they are: A thin Java wrapper around a native OS thread. When you write `new Thread().start()`, you are asking the OS to create a real, heavyweight thread.
- The Advantage: As seen in the second panel, this provides true parallelism. Multiple threads can run on multiple CPU cores simultaneously. If one thread blocks on I/O, others can continue to run on other cores.
- The Problem: OS threads are a scarce and expensive resource. Each one reserves significant memory for its stack (often around 1 MB by default), and context-switching between them requires a trip through the OS kernel. An application can typically only handle a few thousand platform threads before the system grinds to a halt. This forced developers into complex, asynchronous, non-blocking programming models (like reactive streams) to achieve high scalability, sacrificing code readability.
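The classic thread-per-task style looks like this minimal sketch (class and variable names are illustrative, not from any particular codebase). Each `Thread` created this way maps 1:1 to a kernel thread, which is exactly why the count cannot grow into the millions:

```java
public class PlatformThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            try {
                // Simulate a blocking I/O call; the underlying OS thread
                // is parked for the entire duration and can do nothing else.
                Thread.sleep(100);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start(); // asks the OS to create a real kernel thread
        worker.join();  // block the main thread until the worker finishes
        System.out.println("isVirtual = " + worker.isVirtual()); // prints false
    }
}
```

The code itself is pleasantly simple; the trouble only appears at scale, when thousands of such threads exhaust memory and scheduler capacity.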
Act III: The Revolution - Virtual Threads (Java 21)
Virtual Threads aim to give us the best of both worlds: the simple, “thread-per-request” programming model of Platform Threads, combined with the massive scalability required for modern applications.
- What they are: Extremely lightweight threads managed by the JVM. The JVM maintains a small pool of Platform Threads (called “carrier threads”) and efficiently “mounts” and “unmounts” millions of Virtual Threads onto them.
- The Solution: As illustrated in the final panel, when a Virtual Thread executes a blocking I/O operation, the JVM automatically unmounts it from its carrier thread. The carrier thread is immediately freed to run another Virtual Thread. Once the I/O operation is complete, the JVM will find an available carrier thread to “mount” the original Virtual Thread on so it can continue its work.
- The Result: Developers can write simple, easy-to-read, blocking-style code. You can create a new virtual thread for every incoming request, even if you have millions of them. The JVM handles all the complex scheduling work, achieving incredible scalability without sacrificing simplicity. This is not just an improvement; it’s a paradigm shift that makes high-performance, concurrent programming accessible to all Java developers.
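As a sketch of what this looks like in practice (Java 21+; the task count and sleep duration here are arbitrary illustration values), the standard library provides an executor that starts a fresh virtual thread for every submitted task:

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class VirtualThreadDemo {
    public static void main(String[] args) {
        // One new virtual thread per task -- no pool sizing required.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                executor.submit(() -> {
                    // Blocking call: the JVM unmounts this virtual thread
                    // from its carrier while it sleeps, so the carrier is
                    // free to run other virtual threads in the meantime.
                    Thread.sleep(Duration.ofMillis(100));
                    return Thread.currentThread().isVirtual(); // true
                });
            }
        } // close() waits for all submitted tasks to complete
        System.out.println("done");
    }
}
```

Note that the task bodies are plain blocking code; with a fixed platform-thread pool, the same 10,000 sleeping tasks would monopolize every pool thread, while here they cost only a few kilobytes each.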
Want to go deeper?
Now that you understand the what and why of Virtual Threads, explore the mechanics behind the magic in our follow-up article: Loom Deep Dive: Continuations, Schedulers, and Stack Traces.