Internal Metrics
This guide describes the Prometheus metrics endpoint exposed by Mermin and provides a comprehensive breakdown of all available metrics, their types, and descriptions. See the metrics configuration document for configuration details.
Metrics Endpoint
Mermin exposes Prometheus metrics in the standard Prometheus text format at multiple HTTP endpoints on port 10250 (configurable via internal.metrics.port):
/metrics - All metrics (standard + debug if enabled)
/metrics/standard - Standard metrics only (no high-cardinality labels)
/metrics/debug - Debug metrics only (returns 404 if disabled)
/metrics:summary - JSON summary of all available metrics with metadata (name, type, description, labels, category)
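The /metrics:summary endpoint lends itself to programmatic discovery of what is exposed. A minimal Python sketch, assuming the endpoint returns a JSON array of objects with the metadata fields listed above (name, type, description, labels, category); the actual response shape may differ:

```python
import json
from collections import defaultdict
from urllib.request import urlopen

def group_by_category(entries):
    """Group metric metadata entries by their 'category' field."""
    groups = defaultdict(list)
    for entry in entries:
        groups[entry["category"]].append(entry["name"])
    return dict(groups)

def fetch_summary(url="http://localhost:10250/metrics:summary"):
    """Fetch and parse the summary JSON (port is internal.metrics.port)."""
    with urlopen(url) as resp:
        return json.load(resp)

# Example with a hypothetical summary payload:
sample = [
    {"name": "mermin_ebpf_map_size", "type": "gauge", "category": "ebpf"},
    {"name": "mermin_flow_spans_created_total", "type": "counter", "category": "flow"},
]
print(group_by_category(sample))
# → {'ebpf': ['mermin_ebpf_map_size'], 'flow': ['mermin_flow_spans_created_total']}
```

The grouping helper is independent of the endpoint, so it can be tested against any payload; fetch_summary is an unverified sketch of how the endpoint would be consumed.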
Standard vs Debug Metrics:
Standard metrics: Always enabled, aggregated across resources, safe for production.
Debug metrics: High-cardinality labels (per-interface, per-resource); must be explicitly enabled via metrics.debug_metrics_enabled = true.
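A sketch of the two configuration keys named above; the file layout here is a guess, so consult the metrics configuration document for the authoritative schema:

```
# Hypothetical layout; only these two keys appear in this guide.
internal.metrics.port = 10250          # HTTP port serving the /metrics endpoints
metrics.debug_metrics_enabled = true   # additionally expose /metrics/debug
```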
Prometheus Scraping
Prometheus can be configured in several ways: annotation-based or Kubernetes service discovery, Prometheus Operator CRDs (e.g. ServiceMonitor, PodMonitor), or engine-specific CRDs. Prometheus-compatible engines such as VictoriaMetrics use similar CRDs (VMServiceScrape, VMPodScrape). The following options work with Mermin's metrics endpoint.
Pod annotations — for annotation-based discovery, see Expose Mermin metrics to Prometheus in Advanced Scenarios.
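With annotation-based discovery, the pod template typically carries the conventional prometheus.io annotations. A sketch; these keys only take effect if the Prometheus scrape configuration is written to honor them, and the exact keys Mermin's chart uses may differ:

```yaml
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "10250"    # must match internal.metrics.port
    prometheus.io/path: "/metrics"
```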
A PodMonitor example for Mermin is in values_prom_stack.yaml (see prometheus.additionalPodMonitors); it applies when the Prometheus Operator or another compatible controller is deployed.
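A hand-written PodMonitor along the same lines might look as follows. This is a sketch only: the namespace, label selector, and port name are assumptions that must match the actual Mermin deployment:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: mermin
  namespace: monitoring              # assumption: wherever Mermin runs
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: mermin # assumption: must match the pod labels
  podMetricsEndpoints:
    - port: metrics                  # assumption: container port name for 10250
      path: /metrics
```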
Further reading:
Prometheus configuration — scrape config and discovery
GKE Managed Service for Prometheus — PodMonitoring — Google Cloud's PodMonitoring CR for managed collection
See also the Kubernetes Helm deployment guide, the Helm deployment examples, and Advanced Scenarios for more deployment examples.
Metrics Reference
All metrics follow the naming convention: mermin_<subsystem>_<name>. Metrics are categorized into logical subsystems that correspond to different components of Mermin:
ebpf: eBPF-specific metrics
channel: internal Mermin channel metrics
export: export-related metrics
flow: metrics on the flow spans
interface: network interface-related metrics
k8s: Kubernetes watcher metrics
taskmanager: internal Mermin task metrics
eBPF Metrics (mermin_ebpf_*)
This section describes metrics from the eBPF layer, which is responsible for capturing packets at a low level. These metrics provide visibility into the status of loaded eBPF programs and the usage of eBPF maps. Monitoring them is crucial for ensuring that Mermin's foundational data collection mechanism functions as expected.
mermin_ebpf_bpf_fs_writable
Whether /sys/fs/bpf is writable for TCX link pinning (1 = writable, 0 = not writable).
Type: gauge

mermin_ebpf_map_capacity
Maximum capacity of eBPF maps. For hash maps (FLOW_STATS, TCP_STATS, ICMP_STATS, LISTENING_PORTS) this is max entries. For ring buffers (FLOW_EVENTS) this is size in bytes.
Type: gauge
Labels:
map: FLOW_STATS, FLOW_EVENTS, TCP_STATS, ICMP_STATS, LISTENING_PORTS
unit: entries (for hash maps), bytes (for ring buffers)
mermin_ebpf_map_ops_total
Total number of eBPF map operations. Not all maps track all operation types:
FLOW_EVENTS: read only (ring buffer consumed by userspace)
FLOW_STATS, TCP_STATS, ICMP_STATS: read and delete (hash maps read during flow processing, deleted on eviction)
LISTENING_PORTS: write only (populated at startup from /proc)
Type: counter
Labels:
map: FLOW_STATS, FLOW_EVENTS, TCP_STATS, ICMP_STATS, LISTENING_PORTS
operation: read, write, delete
status: ok, error, not_found
mermin_ebpf_map_size
Current size of eBPF maps. For hash maps (FLOW_STATS, TCP_STATS, ICMP_STATS, LISTENING_PORTS) this is the entry count. For ring buffers (FLOW_EVENTS) this is pending bytes (producer_pos - consumer_pos).
Type: gauge
Labels:
map: FLOW_STATS, FLOW_EVENTS, TCP_STATS, ICMP_STATS, LISTENING_PORTS
unit: entries (for hash maps), bytes (for ring buffers)
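A useful derived signal is map utilization, i.e. current size divided by capacity. A PromQL sketch; both metrics carry the same map and unit labels, so the series match one-to-one:

```
# Fraction of each eBPF map in use (values near 1.0 mean the map is nearly full)
mermin_ebpf_map_size / mermin_ebpf_map_capacity
```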
mermin_ebpf_method
Current eBPF attachment method used (tc or tcx).
Type: gauge
Labels:
attachment: tc, tcx
Network Interface Metrics (mermin_interface_*)
These metrics provide visibility into network traffic processed by Mermin across all monitored interfaces. They are essential for understanding the overall throughput and packet rates Mermin handles.
mermin_interface_bytes_total
Total number of bytes processed across all interfaces.
Type: counter
Unit: bytes
Labels:
interface: Network interface name (e.g., eth0)
direction: ingress, egress
mermin_interface_packets_total
Total number of packets processed across all interfaces.
Type: counter
Unit: packets (count)
Labels:
interface: Network interface name (e.g., eth0)
direction: ingress, egress
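Throughput can be derived from these counters with a standard rate() query. A PromQL sketch; adjust the range window to your scrape interval:

```
# Bytes per second by direction, summed over all interfaces
sum by (direction) (rate(mermin_interface_bytes_total[5m]))

# Packets per second per interface
sum by (interface) (rate(mermin_interface_packets_total[5m]))
```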
Flow Metrics (mermin_flow_*)
mermin_flow_spans_active_total
Current number of active flow spans across all interfaces.
Type: gauge
Unit: spans (count)

mermin_flow_spans_created_total
Total number of flow spans created across all interfaces.
Type: counter
Unit: spans (count)
Kubernetes Watcher Metrics (mermin_k8s_watcher_*)
These metrics track events and performance of the Kubernetes resource watchers used by Mermin for metadata enrichment and resource monitoring.
mermin_k8s_watcher_events_total
Total number of K8s kind watcher events (aggregated across resources).
Type: counter
Labels:
event: apply, delete, init, init_done, error
kind: Kubernetes resource types (e.g., Pod, Service, Node, Deployment, ReplicaSet, DaemonSet, StatefulSet, EndpointSlice)
mermin_k8s_watcher_ip_index_update_duration_seconds
Duration of K8s IP index updates.
Type: histogram
Unit: seconds
Default buckets: [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1.0] (1ms to 1s)
Kubernetes Decorator Metrics (mermin_k8s_decorator_*)
These metrics expose details of the Kubernetes decorator stage.
mermin_k8s_decorator_flow_spans_total
Total number of flow spans processed by the K8s decorator.
Type: counter
Unit: spans (count)
Labels:
status: ok, dropped, error, undecorated
Flow Span Export Metrics (mermin_export_*)
These metrics track the export of flow spans from Mermin to external systems (such as OTLP collectors), providing insight into export performance and reliability.
mermin_export_batch_size
Number of spans per export batch.
Type: histogram
Unit: spans (count)
Default buckets: [1, 10, 50, 100, 250, 500, 1000]

mermin_export_flow_spans_total
Total number of flow spans exported to external systems.
Type: counter
Unit: spans (count)
Labels:
exporter: otlp, stdout, noop
status: ok, error, noop
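Export reliability can be expressed as an error ratio over this counter. A PromQL sketch:

```
# Fraction of exported spans that failed over the last 5 minutes
sum(rate(mermin_export_flow_spans_total{status="error"}[5m]))
  /
sum(rate(mermin_export_flow_spans_total[5m]))
```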
Channel Metrics (mermin_channel_*)
These metrics offer insight into the internal channels used for data transmission.
mermin_channel_capacity
Capacity of internal channels.
Type: gauge
Unit: items (count)
Labels:
channel: packet_worker, producer_output, decorator_output

mermin_channel_entries
Current number of items in channels.
Type: gauge
Unit: items (count)
Labels:
channel: packet_worker, producer_output, decorator_output

mermin_channel_sends_total
Total number of send operations to internal channels.
Type: counter
Labels:
channel: packet_worker, producer_output, decorator_output
status: success, error, backpressure
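Channel fill ratio and backpressure rate are natural alerting signals here. A PromQL sketch:

```
# Fill ratio per channel (approaching 1.0 means the consumer is falling behind)
mermin_channel_entries / mermin_channel_capacity

# Backpressure events per second, per channel
sum by (channel) (rate(mermin_channel_sends_total{status="backpressure"}[5m]))
```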
Pipeline Metrics (mermin_pipeline_*)
These metrics offer insight into the internal pipelines used for data mutation (flow generation, decoration).
mermin_pipeline_duration_seconds
Processing duration by pipeline stage.
Type: histogram
Unit: seconds
Labels:
stage:
flow_producer_out: time spent reading and processing flow events from the eBPF ring buffer (typically microseconds to milliseconds)
k8s_decorator_out: time spent enriching flow spans with Kubernetes metadata (pod, service, namespace lookups)
export_out: time spent exporting spans to configured exporters (OTLP or stdout), including serialization and network I/O
Default buckets: [0.00001, 0.00005, 0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1.0, 5.0, 10.0, 30.0, 60.0] (10μs to 60s)
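Per-stage latency percentiles follow the usual Prometheus histogram pattern over the metric's _bucket series. A PromQL sketch:

```
# 99th-percentile processing time per pipeline stage over 5-minute windows
histogram_quantile(0.99,
  sum by (stage, le) (rate(mermin_pipeline_duration_seconds_bucket[5m])))
```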
TaskManager Metrics (mermin_taskmanager_*)
These metrics track the number and type of active background tasks managed by Mermin.
mermin_taskmanager_tasks_active
Current number of active tasks across all task types.
Type: gauge
Unit: tasks (count)
Labels:
task: Task names are dynamic and correspond to spawned background tasks (e.g., watcher tasks, producer tasks)
Label Values Reference
This section provides a quick reference for all label values used across metrics.
map: FLOW_STATS, FLOW_EVENTS, TCP_STATS, ICMP_STATS, LISTENING_PORTS
unit: entries, bytes
operation: read, write, delete
status (eBPF): ok, error, not_found
attachment: tc, tcx
channel: packet_worker, producer_output, decorator_output
status (channel): success, error, backpressure
exporter: otlp, stdout, noop
status (export): ok, error, noop
status (decorator): ok, dropped, error, undecorated
event: apply, delete, init, init_done, error
kind: Pod, Service, Node, Deployment, ReplicaSet, DaemonSet, StatefulSet, EndpointSlice, etc.
stage: flow_producer_out, k8s_decorator_out, export_out
Histogram Buckets
Histogram metrics use configurable bucket boundaries. The default buckets are optimized for typical workloads but can be customized via configuration. See metrics configuration for details.
mermin_pipeline_duration_seconds: [0.00001, 0.00005, 0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1.0, 5.0, 10.0, 30.0, 60.0] (10μs to 60s)
mermin_export_batch_size: [1, 10, 50, 100, 250, 500, 1000] (1 to 1000 spans)
mermin_k8s_watcher_ip_index_update_duration_seconds: [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1.0] (1ms to 1s)
mermin_taskmanager_shutdown_duration_seconds (debug): [0.1, 0.5, 1.0, 5.0, 10.0, 30.0, 60.0, 120.0] (100ms to 120s)
Grafana Dashboard
The Grafana dashboard can be imported from the Dashboard JSON.
Next Steps
Configure Prometheus Endpoint: Customize metrics exposure
Set Up Alerting: Configure health checks
Diagnose Performance Issues: Use metrics to identify bottlenecks
Tune the Pipeline: Optimize based on metrics
Need Help?
GitHub Discussions: Share dashboards and alerting configurations