Managing Throughput with Virtual Threads
Abstract
- Rate limit with Semaphore.
- Configure platform threads:
  - jdk.virtualThreadScheduler.parallelism: configures the number of platform threads available to the virtual thread scheduler (by default, the number of processors available to the JVM).
  - jdk.virtualThreadScheduler.maxPoolSize: configures the maximum number of platform threads that will be created.
- Use JFR and jcmd to debug virtual threads.
Virtual threads, the long-awaited headline feature of Project Loom, became a final feature in JDK 21. Virtual threads are the start of a new chapter of concurrency in Java by providing developers with lightweight threads, allowing the easy and resource-cheap breaking up of tasks to be executed concurrently. However, with these changes come questions about how to manage this throughput best. Let’s take a look at how developers can manage throughput when using virtual threads.
Creating Virtual Threads with ExecutorService
In most cases, developers will not need to create virtual threads themselves. For example, in the case of web applications, underlying frameworks like Tomcat or Jetty will automatically generate a virtual thread for every incoming request.
If a portion of your application could benefit from being broken into individual tasks and executed concurrently, consider creating new virtual threads with Executors.newVirtualThreadPerTaskExecutor().
Here, in this hypothetical example, three random numbers are being generated serially:
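A minimal sketch of what that serial version might look like; the slowRandom helper is a stand-in, not the article's original listing:

```java
import java.util.concurrent.ThreadLocalRandom;

public class SerialRandoms {

    // Hypothetical helper standing in for a slow source of random numbers.
    static int slowRandom() throws InterruptedException {
        Thread.sleep(1_000); // simulate a slow operation
        return ThreadLocalRandom.current().nextInt();
    }

    public static void main(String[] args) throws InterruptedException {
        // The three numbers are generated one after another, roughly three seconds in total.
        int first = slowRandom();
        int second = slowRandom();
        int third = slowRandom();
        System.out.println(first + " " + second + " " + third);
    }
}
```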
This could be re-written so that the random numbers are generated in parallel:
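A corresponding sketch of the parallel version, submitting each task to an executor that starts a new virtual thread per task (again, slowRandom is a hypothetical stand-in):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadLocalRandom;

public class ParallelRandoms {

    // Long-lived executor: one new virtual thread per submitted task.
    private static final ExecutorService EXECUTOR =
            Executors.newVirtualThreadPerTaskExecutor();

    // Hypothetical helper standing in for a slow source of random numbers.
    static int slowRandom() throws InterruptedException {
        Thread.sleep(1_000);
        return ThreadLocalRandom.current().nextInt();
    }

    public static void main(String[] args) throws Exception {
        Callable<Integer> task = ParallelRandoms::slowRandom;
        // The three tasks run concurrently, each on its own virtual thread,
        // so the total time is roughly one second instead of three.
        Future<Integer> first = EXECUTOR.submit(task);
        Future<Integer> second = EXECUTOR.submit(task);
        Future<Integer> third = EXECUTOR.submit(task);
        System.out.println(first.get() + " " + second.get() + " " + third.get());
    }
}
```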
AutoCloseable ExecutorService
The ExecutorService interface extends AutoCloseable, so the ExecutorService instance returned from Executors.newVirtualThreadPerTaskExecutor() can be placed in a try-with-resources block to be automatically closed if needed, as in the example below:
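A minimal sketch, assuming a couple of trivial print tasks; close() waits for the submitted tasks to finish before shutting the executor down:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AutoCloseExample {
    public static void main(String[] args) {
        // The executor is closed automatically when the block exits,
        // after waiting for the submitted tasks to complete.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            executor.submit(() -> System.out.println("task one"));
            executor.submit(() -> System.out.println("task two"));
        }
    }
}
```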
However, in most cases it would likely be preferable to have the ExecutorService live for the lifetime of the application, or at least that of its encapsulating object, as is the case in the first example.
Structured Concurrency Cometh
Using Executors.newVirtualThreadPerTaskExecutor() to create and execute tasks as virtual threads is largely a temporary solution. Ideally, Structured Concurrency should be utilized when breaking work down into individual tasks to be executed concurrently. Structured Concurrency is in preview mode as of JDK 21; you can read about preview features here. For more on Structured Concurrency, check out this video from José Paumard.
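To give a flavor of the direction, here is a rough sketch using the preview StructuredTaskScope API (JEP 453), which must be compiled and run with --enable-preview; findUser and fetchCart are hypothetical placeholders:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.StructuredTaskScope;

public class StructuredExample {

    // Hypothetical helpers standing in for calls to downstream services.
    static String findUser() { return "duke"; }
    static String fetchCart() { return "cart-42"; }

    public static void main(String[] args) throws InterruptedException, ExecutionException {
        // Each fork runs in its own virtual thread; if either subtask fails,
        // the other is cancelled and throwIfFailed() rethrows the failure.
        try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
            var user = scope.fork(StructuredExample::findUser);
            var cart = scope.fork(StructuredExample::fetchCart);

            scope.join().throwIfFailed();

            System.out.println(user.get() + " / " + cart.get());
        }
    }
}
```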
Rate Limiting Virtual Threads
When I have presented on virtual threads, a common question I have heard is how to prevent an external service from being overwhelmed by too many calls when using virtual threads. The answer is to use a java.util.concurrent.Semaphore. A Semaphore works like a pool, but instead of pooling a scarce resource, such as connections (or threads, before the introduction of virtual threads), it pools permits, which are simply a counter.
The code example below demonstrates how to use a Semaphore for rate limiting. The number of permits should be set to the acceptable rate limit for the external service, and the call to the service placed between an acquire(), which takes a permit, and a release(), which returns a permit. Be sure to place the release() within a finally block to prevent permits from leaking.
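A minimal sketch, assuming a hypothetical callExternalService method and a limit of 10 concurrent calls (the limit is purely illustrative):

```java
import java.util.concurrent.Semaphore;

public class RateLimitedClient {

    // Allow at most 10 concurrent calls to the external service
    // (the limit here is an assumption, not a recommendation).
    private final Semaphore permits = new Semaphore(10);

    public String callExternalService(String request) throws InterruptedException {
        permits.acquire(); // blocks until a permit is available
        try {
            return doCall(request);
        } finally {
            permits.release(); // always return the permit, even on failure
        }
    }

    // Hypothetical placeholder for the actual remote call.
    private String doCall(String request) {
        return "response for " + request;
    }
}
```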
⚠️ Warning: Don’t pool virtual threads, as they are not a scarce resource.
Pools within Pools
If you already use a connection pool to manage the connections to a service, avoid adding a Semaphore on top of it. If, after migrating to JDK 21 and switching to virtual threads, you find that a service is being overwhelmed with traffic, instead configure the relevant connection pool with a smaller number of connections, one the external service can handle.
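As one illustration, assuming HikariCP is the connection pool in use, the cap would be set with maximumPoolSize; the JDBC URL and pool size below are placeholders:

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolConfig {
    public static HikariDataSource dataSource() {
        HikariConfig config = new HikariConfig();
        // Hypothetical connection details.
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/app");
        // Cap the pool at a size the database can comfortably serve;
        // virtual threads waiting for a connection simply block cheaply.
        config.setMaximumPoolSize(20); // illustrative value
        return new HikariDataSource(config);
    }
}
```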
Configuring Platform Threads
Virtual threads run on top of platform threads, a subject I cover in this video. The number of platform threads that are created when the JVM starts up is based upon the number of cores available to the JVM, with the default max size of the platform thread pool being 256. In the vast majority of cases, these defaults should work fine. If, however, you have an edge case where these default values don’t meet your requirements, they can be modified with the VM arguments:
- jdk.virtualThreadScheduler.parallelism: configures the number of platform threads available to the virtual thread scheduler (by default, the number of processors available to the JVM).
- jdk.virtualThreadScheduler.maxPoolSize: configures the maximum number of platform threads that will be created.
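Both can be set as system properties on the command line; the values below are purely illustrative:

```
java -Djdk.virtualThreadScheduler.parallelism=8 \
     -Djdk.virtualThreadScheduler.maxPoolSize=512 \
     -jar my-app.jar
```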
Debugging Virtual Threads
When issues arise in applications using virtual threads, remember that observability tools like JDK Flight Recorder (JFR) and jcmd are already equipped to handle virtual threads.
JFR and Virtual Threads
JFR has been updated with four new events for virtual threads:
- jdk.VirtualThreadStart
- jdk.VirtualThreadEnd
- jdk.VirtualThreadPinned
- jdk.VirtualThreadSubmitFailed
By default, jdk.VirtualThreadStart and jdk.VirtualThreadEnd are disabled, but they can be enabled through a JFR configuration file or in JDK Mission Control. You can read how to enable JFR events here.
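As a rough illustration, the events can also be enabled programmatically with the jdk.jfr.Recording API; the output file name and the sample virtual thread below are assumptions:

```java
import java.nio.file.Path;
import jdk.jfr.Recording;

public class RecordVirtualThreadEvents {
    public static void main(String[] args) throws Exception {
        try (Recording recording = new Recording()) {
            // These two events are disabled by default, so enable them explicitly.
            recording.enable("jdk.VirtualThreadStart");
            recording.enable("jdk.VirtualThreadEnd");
            recording.start();

            // Run a sample virtual thread so the events have something to record.
            Thread.ofVirtual().start(() -> System.out.println("hello")).join();

            recording.stop();
            recording.dump(Path.of("virtual-threads.jfr"));
        }
    }
}
```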
Thread dumps with Virtual Threads
Thread dumps can be executed with the following jcmd command, where <pid> and the output file name are placeholders:
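```
jcmd <pid> Thread.dump_to_file -format=json threads.json
```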
The thread dump will include virtual threads that are blocked in network I/O operations, as well as virtual threads created through the ExecutorService interface, which is one of the reasons to prefer using an ExecutorService for creating virtual threads.
Object addresses, locks, JNI statistics, heap statistics, and other information that appears in traditional thread dumps are not included.