Performance
Blazing Fast
RESTHeart has been designed and developed with a focus on lightness and performance. This section presents the performance test results gathered by the SoftInstigate development team.
IMPORTANT: Starting from version 8, RESTHeart uses Virtual Threads, a feature introduced in Java 21 via JEP 444. These lightweight threads provide significant performance improvements and simplify concurrent programming.
Test Outcome in a Nutshell
With RESTHeart, you can serve 2.6 million authenticated requests on the MongoDB REST API in just 5 minutes using a single server with only 4 cores.
MongoDB REST API Performance
This test evaluates the throughput and latency of performing GET requests on a MongoDB collection using the RESTHeart REST API.
The requests are authenticated using Basic Authentication and the mongoRealmAuthenticator (which authenticates users defined in a MongoDB collection) to simulate real-world usage scenarios.
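For illustration, each request in this test is simply a Basic-Auth GET on the collection URI. The following is a minimal Java sketch of such a request; the host, collection name, and credentials are placeholders, not the values used in the actual test environment.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class AuthenticatedGet {
    public static void main(String[] args) throws Exception {
        // Placeholder credentials: in the test, the user is defined in the
        // MongoDB collection read by mongoRealmAuthenticator
        var credentials = Base64.getEncoder()
                .encodeToString("admin:secret".getBytes());

        var request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/coll")) // placeholder host and collection
                .header("Authorization", "Basic " + credentials)
                .GET()
                .build();

        var response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```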
The following startup log message shows the thread configuration used in the test:
```
10:56:49.167 [main] INFO org.restheart.Bootstrapper - Available processors: 4, IO threads: 1, worker scheduler parallelism (auto detected): 3, worker scheduler max pool size: 256
```
The log message highlights the following threads setup:

- Available processors: 4
- IO threads: 1
- worker scheduler parallelism: 3
- worker scheduler max pool size: 256
The Virtual Threads are unlimited and are executed by the JVM using 3 platform threads. (Note that these are not the default values, which are io-threads=number of cores and worker-threads-scheduler-parallelism=1.5*number of cores.)
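This is not RESTHeart code, but a minimal Java 21 sketch of the JEP 444 model described above: a virtually unlimited number of virtual threads, multiplexed by the JVM over a small pool of platform (carrier) threads.

```java
import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class VirtualThreadsDemo {
    public static void main(String[] args) {
        // Each submitted task runs on its own virtual thread; the JVM
        // schedules them over a small number of platform (carrier) threads.
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 10_000).forEach(i ->
                executor.submit(() -> {
                    // Blocking parks the virtual thread and frees its
                    // carrier thread to run other virtual threads.
                    Thread.sleep(Duration.ofMillis(100));
                    return i;
                }));
        } // close() waits for all submitted tasks to finish
    }
}
```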
The following screenshot shows the result of a performance test measured with the monitoring feature of RESTHeart, which allows plotting data with Prometheus.
Here we have the average rate of GET /coll requests executed with the performance tool wrk. The requests retrieve the documents stored in a MongoDB collection using the REST API. The average rate grows to about 8,700 requests/sec on a c7a.xlarge AWS EC2 instance (with 4 cores).

This is the output of wrk, showing a throughput of 8,690 requests/sec, an amazing total of 2.61 million GET requests executed in 5 minutes 🚀 and a 75th-percentile latency of 18.34ms, with a JSON payload of 411 bytes.
```
$ wrk -c 128 -t 8 -d 300 --latency --timeout 1m -s get.lua http://172.31.44.217:8080/coll?filter='{"links.url":{"$regex":".*pach.*"}}'
Running 5m test @ http://172.31.44.217:8080/coll?filter={"links.url":{"$regex":".*pach.*"}}
  8 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    14.97ms    6.99ms  138.69ms   75.41%
    Req/Sec     1.09k     74.63     1.73k    72.94%
  Latency Distribution
     50%   14.06ms
     75%   18.34ms
     90%   23.19ms
     99%   37.80ms
  2607924 requests in 5.00m, 3.33GB read
Requests/sec:   8690.23
Transfer/sec:     11.37MB
```
Change Streams
This test measures RESTHeart’s notification throughput while n WebSocket clients are listening for targeted notifications. During the test, RESTHeart processes 180 POSTs in 60 seconds (3 RPS), and every client waits until all notifications have been received. A minimal sketch of such a WebSocket client follows the results table below.

Observing the graph, RESTHeart delivers near real-time notifications even for a very large number of clients:
| Clients | TPS | Mean Notification Time (333ms = Real Time) |
|---|---|---|
| 10 | 27 | 357ms |
| 100 | 278 | 359ms |
| 1000 | 2790 | 358ms |
| 10000 | 27909 | 358ms |
| 15000 | 41812 | 358ms |
| 20000 | 54125 | 369ms |
| 25000 | 61995 | 403ms |
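For reference, a listening client for this kind of test can be sketched with Java's built-in java.net.http.WebSocket API. The stream URI, credentials, and expected notification count used below are illustrative assumptions, not the actual test harness.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.Base64;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.CountDownLatch;

public class ChangeStreamListener {
    public static void main(String[] args) throws Exception {
        // Assumption: one notification per POST issued during the test run
        var received = new CountDownLatch(180);
        var credentials = Base64.getEncoder()
                .encodeToString("admin:secret".getBytes()); // placeholder user

        var listener = new WebSocket.Listener() {
            @Override
            public CompletionStage<?> onText(WebSocket ws, CharSequence data, boolean last) {
                System.out.println("notification: " + data);
                received.countDown();
                ws.request(1); // ask for the next message
                return null;
            }
        };

        // Placeholder change stream URI: adapt it to the stream defined
        // on the collection being watched
        HttpClient.newHttpClient()
                .newWebSocketBuilder()
                .header("Authorization", "Basic " + credentials)
                .buildAsync(URI.create("ws://localhost:8080/coll/_streams/changes"), listener)
                .join();

        received.await(); // block until all notifications have been received
    }
}
```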