Monitoring RESTHeart
Introduction
RESTHeart provides comprehensive monitoring capabilities through Prometheus-compatible metrics. This allows you to track application performance, identify bottlenecks, and integrate with popular monitoring stacks like Prometheus and Grafana.
What You’ll Learn
- How to enable metrics collection in RESTHeart
- Understanding HTTP request metrics (count, duration, rate)
- Monitoring JVM metrics (memory, garbage collection)
- Configuring metric collection with include/exclude patterns
- Using path templates for tenant-specific metrics
- Querying metrics in Prometheus format
- Best practices for production monitoring
By the end of this guide, you’ll have RESTHeart metrics configured and ready for monitoring dashboards.
RESTHeart enables tracking of various key performance indicators (KPIs) for HTTP requests and the JVM. These metrics can be accessed through an API in Prometheus format, allowing you to query and visualize graphs using Prometheus.
The following HTTP request metrics are collected by RESTHeart:
- http_requests_count (count of HTTP requests)
- http_requests_duration (duration of HTTP requests)
- http_requests_rate (rate of HTTP requests)
Additionally, RESTHeart captures JVM metrics such as memory usage and garbage collector data.
Tutorial
Run RESTHeart with metrics enabled and specify a configuration:
$ docker run --rm -p "8080:8080" -e RHO="/http-listener/host->'0.0.0.0';/mclient/connection-string->'mongodb://host.docker.internal';/ping/uri->'/acme/ping';/requestsMetricsCollector/enabled->true;/jvmMetricsCollector/enabled->true;/requestsMetricsCollector/include->['/{tenant}/*']" softinstigate/restheart
With the given RHO env variable, the configuration is:
ping:
  uri: /acme/ping # change the ping service uri for testing purposes
metrics:
  enabled: true
  uri: /metrics
requestsMetricsCollector:
  enabled: true
  include:
    - /{tenant}/*
  exclude:
    - /metrics
    - /metrics/*
jvmMetricsCollector:
  enabled: true
Metrics will be gathered for requests that match the path templates specified in the include criteria and do not match those listed in the exclude criteria.
Note that when using the variable {tenant} in the include path templates, the metrics will be tagged with path_template_param_tenant=<value>. This tagging does not apply when using wildcards in path templates.
Now, make a few requests to /acme/ping using [httpie](https://httpie.io/).
$ http -b :8080/acme/ping
{
    "client_ip": "127.0.0.1",
    "host": "localhost:8080",
    "message": "Greetings from RESTHeart!",
    "version": "8.4.0"
}

$ http -b :8080/acme/ping
{
    "client_ip": "127.0.0.1",
    "host": "localhost:8080",
    "message": "Greetings from RESTHeart!",
    "version": "8.4.0"
}

$ http -b :8080/acme/ping
{
    "client_ip": "127.0.0.1",
    "host": "localhost:8080",
    "message": "Greetings from RESTHeart!",
    "version": "8.4.0"
}
Now we can ask for available metrics:
$ http -b -a admin:secret :8080/metrics
[
    "/jvm",
    "/{tenant}/ping"
]
Let’s get the metrics for requests matching "/{tenant}/*":
$ http -b -a admin:secret :8080/metrics/{tenant}/\*
(omitting many rows)
http_requests_count{request_method="GET",path_template="/{tenant}/*",response_status_code="200",path_template_param_tenant="acme",} 3.0
The response is in Prometheus format. The highlighted row is the metric http_requests_count with value 3 and the following tags:
- request_method="GET"
- path_template="/{tenant}/*"
- response_status_code="200"
- path_template_param_tenant="acme"
Use Prometheus to display metrics
Define the following Prometheus configuration file, prometheus.yml:
global:
  scrape_interval: 5s
  evaluation_interval: 5s
scrape_configs:
  - job_name: 'restheart http /{tenant}/*'
    static_configs:
      - targets: ['host.docker.internal:8080']
    metrics_path: '/metrics/{tenant}/*'
    basic_auth:
      username: admin
      password: secret
  - job_name: 'restheart jvm'
    static_configs:
      - targets: ['host.docker.internal:8080']
    metrics_path: '/metrics/jvm'
    basic_auth:
      username: admin
      password: secret
Run Prometheus with:
$ docker run --rm --name prometheus -p 9090:9090 -v ./prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus --config.file=/etc/prometheus/prometheus.yml
Prometheus will start scraping the RESTHeart metrics. Note that, thanks to the default exclude path templates, metrics are not collected for the requests Prometheus itself makes to /metrics.
Open localhost:9090 in your browser and check the metrics.
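For example, you can plot the per-tenant request rate in the Prometheus expression browser with a query along these lines (the metric and label names come from the sample output above; the 1m range window is arbitrary):

rate(http_requests_count{path_template="/{tenant}/*", path_template_param_tenant="acme"}[1m])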
Handling Missing Metrics Registries
added in RESTHeart v. 8.9.0
By default, RESTHeart returns HTTP 404 when a metrics registry for a requested path does not exist (i.e., no matching traffic has occurred yet). This behavior is compatible with legacy clients that expect a 404 for non-existent metrics.
However, some monitoring tools (such as Prometheus) expect a 200 OK response from metrics endpoints, even if no metrics are available yet. To support this, you can configure RESTHeart to return HTTP 200 with an empty body for missing registries.
To enable Prometheus compatibility, starting from RESTHeart v8.9.0 you can add the following to your configuration:
metrics:
  enabled: true
  uri: /metrics
  missing-registry-status-code: 200 # Return 200 OK with empty body for missing registries
Note: if missing-registry-status-code is not set, RESTHeart will return 404 by default.
Summary of behaviors:
- missing-registry-status-code: 404 (default): Returns 404 for missing registries (legacy clients)
- missing-registry-status-code: 200: Returns 200 OK with empty body (Prometheus-friendly; see the example below)
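For example, with missing-registry-status-code: 200 enabled, querying a registry before any matching traffic has occurred returns an empty 200 response (illustrative output):

$ http -a admin:secret :8080/metrics/{tenant}/\*
HTTP/1.1 200 OK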
Add custom metric labels from a Service
The org.restheart.metrics.Metrics.attachMetricLabels(Request<?> request, List<MetricLabel> labels) method lets you add custom labels to the metrics collected for a request.
For example, the GraphQLService utilizes this method to include the query label in the metrics, which corresponds to the name of the executed GraphQL query.
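For example, a Service can attach its own label before the metrics for the request are collected. The following is a minimal sketch, assuming a MetricLabel(name, value) constructor; the plugin name and the label are illustrative:

import java.util.List;

import com.google.gson.JsonObject;

import org.restheart.exchange.JsonRequest;
import org.restheart.exchange.JsonResponse;
import org.restheart.metrics.MetricLabel;
import org.restheart.metrics.Metrics;
import org.restheart.plugins.JsonService;
import org.restheart.plugins.RegisterPlugin;

@RegisterPlugin(name = "labeledService", description = "Attaches a custom metric label")
public class LabeledService implements JsonService {
    @Override
    public void handle(JsonRequest req, JsonResponse res) {
        // the attached label is added to the metrics collected for this request
        Metrics.attachMetricLabels(req, List.of(new MetricLabel("feature", "demo")));

        var body = new JsonObject();
        body.addProperty("message", "labeled");
        res.setContent(body);
    }
}

The attached label then appears on the request metrics for this service alongside the standard tags such as request_method and response_status_code.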
Custom Metrics (RESTHeart v9)
RESTHeart v9 introduces a programmatic API for registering and collecting custom application-specific metrics. This allows you to track business metrics, performance indicators, and domain-specific measurements alongside the built-in HTTP and JVM metrics.
Supported Metric Types
RESTHeart supports four standard Prometheus metric types:
| Type | Description | Use Case |
|---|---|---|
| Counter | Monotonically increasing value (can only go up) | Request counts, error counts, orders processed |
| Gauge | Value that can increase or decrease | Active connections, queue size, temperature |
| Histogram | Samples observations and counts them in configurable buckets | Request durations, response sizes |
| Summary | Similar to histogram, with configurable quantiles | Request latencies, SLA measurements |
Registering Custom Metrics
Custom metrics must be registered during application startup using an Initializer plugin.
Example: Counter Metric
@RegisterPlugin(
    name = "customMetricsInit",
    description = "Registers custom metrics",
    enabledByDefault = true)
public class CustomMetricsInitializer implements Initializer {
    @Override
    public void init() {
        // Register a counter with labels
        Metrics.registerCounter(
            "custom_requests_total",
            "Total number of custom requests",
            "endpoint", // Label names
            "method"
        );
    }
}
Example: Gauge Metric
@RegisterPlugin(
    name = "cacheMetricsInit",
    description = "Registers cache size gauge",
    enabledByDefault = true)
public class CacheMetricsInitializer implements Initializer {
    private final CacheService cacheService;

    @Inject("cacheService")
    public CacheMetricsInitializer(CacheService cacheService) {
        this.cacheService = cacheService;
    }

    @Override
    public void init() {
        // Register a gauge with a supplier function
        Metrics.registerGauge(
            "cache_size",
            "Current cache size",
            () -> cacheService.getCacheSize()
        );
    }
}
Example: Histogram Metric
@Override
public void init() {
    // Register a histogram with custom buckets
    Metrics.registerHistogram(
        "order_value_dollars",
        "Distribution of order values in USD",
        new double[]{10, 50, 100, 500, 1000, 5000} // Bucket boundaries
    );
}
Example: Summary Metric
@Override
public void init() {
    // Register a summary with quantiles
    Metrics.registerSummary(
        "api_latency_seconds",
        "API request latency",
        0.5, 0.9, 0.99 // 50th, 90th, 99th percentiles
    );
}
Updating Custom Metrics
Once registered, you can update metrics from Services or Interceptors during request handling.
Updating Counters
@RegisterPlugin(
    name = "orderService",
    description = "Order processing service")
public class OrderService implements JsonService {
    @Override
    public void handle(JsonRequest req, JsonResponse res) {
        // Process order...

        // Increment counter with labels
        Metrics.counter("custom_requests_total")
            .labels("orders", req.getMethod().toString())
            .inc();

        // orderSuccessful is the outcome of the order processing above
        if (orderSuccessful) {
            Metrics.counter("orders_processed_total").inc();
        }
    }
}
Updating Gauges
// Increment gauge
Metrics.gauge("active_connections").inc();
// Decrement gauge
Metrics.gauge("active_connections").dec();
// Set to specific value
Metrics.gauge("queue_size").set(42);
Recording Histogram/Summary Values
@Override
public void handle(JsonRequest req, JsonResponse res) {
    long startTime = System.nanoTime();

    // Process request...

    long duration = System.nanoTime() - startTime;

    // Record observation in histogram
    Metrics.histogram("request_duration_seconds")
        .observe(duration / 1_000_000_000.0);

    // Record observation in summary
    Metrics.summary("api_latency_seconds")
        .observe(duration / 1_000_000_000.0);
}
Complete Example: Business Metrics
This example shows how to track business metrics for an e-commerce application:
@RegisterPlugin(
    name = "businessMetricsInit",
    description = "Registers business metrics",
    enabledByDefault = true)
public class BusinessMetricsInitializer implements Initializer {
    @Override
    public void init() {
        // Order metrics
        Metrics.registerCounter(
            "orders_total",
            "Total number of orders",
            "status" // pending, completed, failed
        );

        Metrics.registerHistogram(
            "order_value_dollars",
            "Order value distribution",
            new double[]{10, 50, 100, 500, 1000}
        );

        // Cart metrics
        Metrics.registerGauge(
            "active_carts",
            "Number of active shopping carts"
        );

        // Performance metrics
        Metrics.registerSummary(
            "checkout_duration_seconds",
            "Time to complete checkout",
            0.5, 0.95, 0.99
        );
    }
}
Using the metrics:
@RegisterPlugin(name = "orderService")
public class OrderService implements JsonService {
    @Override
    public void handle(JsonRequest req, JsonResponse res) {
        var order = req.getContent().getAsJsonObject();
        double orderValue = order.get("total").getAsDouble();

        long startTime = System.nanoTime();

        try {
            // Process order...
            processOrder(order);

            // Track successful order
            Metrics.counter("orders_total")
                .labels("completed")
                .inc();

            Metrics.histogram("order_value_dollars")
                .observe(orderValue);
        } catch (Exception e) {
            // Track failed order
            Metrics.counter("orders_total")
                .labels("failed")
                .inc();
        } finally {
            // Track checkout duration
            long duration = System.nanoTime() - startTime;
            Metrics.summary("checkout_duration_seconds")
                .observe(duration / 1_000_000_000.0);
        }
    }
}
Querying Custom Metrics
Custom metrics are exposed at the same /metrics endpoint as built-in metrics:
# Get all metrics
$ http -a admin:secret :8080/metrics
# Custom metrics appear in the list
[
    "/jvm",
    "/{tenant}/ping",
    "/custom"  # Your custom metrics
]
# Query custom metrics
$ http -a admin:secret :8080/metrics/custom
# TYPE orders_total counter
orders_total{status="completed",} 1523.0
orders_total{status="failed",} 12.0
# TYPE order_value_dollars histogram
order_value_dollars_bucket{le="10.0",} 45.0
order_value_dollars_bucket{le="50.0",} 234.0
order_value_dollars_bucket{le="100.0",} 456.0
order_value_dollars_sum 45678.90
order_value_dollars_count 789.0
# TYPE active_carts gauge
active_carts 42.0
# TYPE checkout_duration_seconds summary
checkout_duration_seconds{quantile="0.5",} 0.234
checkout_duration_seconds{quantile="0.95",} 1.456
checkout_duration_seconds{quantile="0.99",} 2.789
checkout_duration_seconds_sum 1234.567
checkout_duration_seconds_count 789.0
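If you also want Prometheus to scrape the custom registry, you could add a job analogous to the ones defined earlier under scrape_configs in prometheus.yml (the /metrics/custom path matches the listing above):

  - job_name: 'restheart custom'
    static_configs:
      - targets: ['host.docker.internal:8080']
    metrics_path: '/metrics/custom'
    basic_auth:
      username: admin
      password: secret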
Configuration
Configure the metrics endpoint in restheart.yml:
metrics:
  enabled: true
  uri: /metrics
  missing-registry-status-code: 200 # Prometheus-friendly
Best Practices
- Register at startup - Always register metrics in an Initializer, not during request handling
- Use descriptive names - Follow Prometheus naming conventions: subsystem_metric_unit
- Add helpful descriptions - Make metrics self-documenting
- Choose appropriate types:
  - Use Counters for things that only increase
  - Use Gauges for values that fluctuate
  - Use Histograms/Summaries for distributions
- Label carefully - Labels create new time series; avoid high-cardinality labels (see the sketch after this list)
- Monitor costs - Each unique label combination creates a separate metric
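As a sketch of the label-cardinality advice above, using the registration API shown earlier (the metric names are illustrative):

// Fine: "status" takes a small, bounded set of values (pending, completed, failed)
Metrics.registerCounter(
    "orders_total",
    "Total number of orders",
    "status"
);

// Risky: a per-user label creates a separate time series for every user
// Metrics.registerCounter(
//     "orders_by_user_total",
//     "Orders per user",
//     "user_id" // unbounded cardinality: avoid
// );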
Retrieving Existing Metrics
You can also retrieve and manipulate existing metrics programmatically:
// Get an existing counter
Counter counter = Metrics.getCounter("http_requests_total");
if (counter != null) {
    counter.inc();
}

// Get an existing gauge
Gauge gauge = Metrics.getGauge("cache_size");
if (gauge != null) {
    gauge.set(newSize);
}
This allows multiple plugins to update the same metrics without re-registering them.
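For instance, an Interceptor could increment the counter registered by the business-metrics Initializer above. This is a minimal sketch: the interceptor name and the path check are illustrative, and it assumes the counter handle supports the same labels(...) accessor used in the earlier examples:

import org.restheart.exchange.JsonRequest;
import org.restheart.exchange.JsonResponse;
import org.restheart.metrics.Metrics;
import org.restheart.plugins.Interceptor;
import org.restheart.plugins.RegisterPlugin;

@RegisterPlugin(
    name = "ordersCounterInterceptor",
    description = "Updates orders_total from a separate plugin")
public class OrdersCounterInterceptor implements Interceptor<JsonRequest, JsonResponse> {
    @Override
    public void handle(JsonRequest req, JsonResponse res) {
        // look up the counter registered elsewhere; no need to re-register it
        var counter = Metrics.getCounter("orders_total");

        if (counter != null) {
            counter.labels("completed").inc();
        }
    }

    @Override
    public boolean resolve(JsonRequest req, JsonResponse res) {
        // only intercept requests handled by the order service (illustrative path)
        return "/orders".equals(req.getPath());
    }
}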