Internal Tracing
This page documents the internal "traces" configuration block, which controls how Mermin exports its own telemetry for self-monitoring and debugging.
Overview
Mermin can export traces about its own operation, enabling you to:
Monitor Mermin's internal performance
Debug issues with flow processing
Track eBPF program execution
Observe internal component interactions
This is separate from network flow export and is primarily used for Mermin development and advanced troubleshooting.
Configuration
internal "traces" {
span_fmt = "full"
stdout = {
format = "text_indent"
}
otlp = {
endpoint = "http://otel-collector:4317"
protocol = "grpc"
}
}Configuration Options
span_fmt
Type: String (enum)
Default: "full"
Span event format for internal traces.
Valid Values:
"full": Record all span events (enter, exit, close)
Example:
internal "traces" {
span_fmt = "full" # Complete span lifecycle
}stdout
Type: Object
Default: null (disabled)
Stdout exporter configuration for internal traces.
Sub-options:
format
Type: String (enum)
Output format for the stdout exporter.
Valid Values: "text_indent"
Example:
internal "traces" {
stdout = {
format = "text_indent"
}
}otlp
Type: Object
Default: null (disabled)
OTLP exporter configuration for internal traces.
Uses the same configuration options as the main OTLP exporter (see OTLP Exporter).
Example:
internal "traces" {
otlp = {
endpoint = "http://otel-collector:4318"
protocol = "http_binary"
timeout = "10s"
max_batch_size = 512
max_batch_interval = "5s"
auth = {
basic = {
user = "mermin-internal"
pass = "password"
}
}
}
}Use Cases
Debugging Mermin Issues
Enable internal traces to debug Mermin behavior:
log_level = "debug"
internal "traces" {
span_fmt = "full"
stdout = {
format = "text_indent"
}
}Useful for:
eBPF program loading issues
Flow processing bottlenecks
Informer synchronization problems
Export pipeline issues
Performance Analysis
Send internal traces to OTLP for performance analysis:
internal "traces" {
span_fmt = "full"
otlp = {
endpoint = "http://otel-collector:4317"
protocol = "grpc"
}
}Analyze:
Span duration for operations
Bottlenecks in the processing pipeline (see the query sketch after this list)
Resource usage patterns
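For instance, with the Tempo setup shown later on this page, a bottleneck hunt over the flow-processing path might look like the TraceQL sketch below; the mermin-internal service name is carried over from the Tempo examples on this page, and the 10ms threshold is purely illustrative:
# Flow-processing spans slower than 10ms
{ resource.service.name = "mermin-internal" && name = "process_packet" && duration > 10ms }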
Mermin Development
Essential for developing and testing Mermin:
log_level = "trace"
internal "traces" {
span_fmt = "full"
stdout = {
format = "text_indent"
}
}Internal Trace Examples
eBPF Program Loading
Span: load_ebpf_program
Start: 2025-10-27T15:30:00.000Z
Duration: 250ms
Attributes:
program.name: mermin
program.type: classifier
interface: eth0
Events:
- attach_to_interface (eth0)
- verify_program_loaded
Flow Processing
Span: process_packet
Start: 2025-10-27T15:30:01.234Z
Duration: 0.5ms
Attributes:
packet.size: 1500
flow.exists: true
flow.state: established
Events:
- lookup_flow_table
- update_counters
- check_timeouts
Kubernetes Informer Sync
Span: sync_k8s_informers
Start: 2025-10-27T15:30:05.000Z
Duration: 2.5s
Attributes:
informer.type: pod
resources.count: 1234
Events:
- connect_to_api_server
- list_resources
- populate_cache
- watch_started
Separating Network Flows and Internal Traces
You can send network flows and internal traces to different backends:
# Network flows to production collector
export "traces" {
  otlp = {
    endpoint = "http://flow-collector:4317"
    protocol = "grpc"
  }
}

# Internal traces to development collector
internal "traces" {
  otlp = {
    endpoint = "http://debug-collector:4317"
    protocol = "grpc"
  }
}
Benefits:
Separate production Flow Traces from debug data
Different retention policies
Isolate development traffic
Performance Impact
Internal tracing has minimal performance overhead:
Stdout only:
CPU: < 1%
Memory: Negligible
OTLP export:
CPU: < 2%
Memory: ~10-20 MB (for buffering)
Safe to enable in production for troubleshooting.
Disabling Internal Traces
To completely disable internal traces:
# No internal block = internal traces disabled
# Or explicitly:
# internal "traces" {}
This is the default and recommended for most deployments.
Troubleshooting
Internal Traces Not Appearing
Symptoms: No internal trace data visible
Solutions:
Verify the internal "traces" block is configured
Check the exporter configuration (stdout or otlp); the stdout-only sketch below is a quick way to rule out collector issues
Ensure the log level is sufficient:
log_level = "debug"
Check that the OTLP collector is receiving data
Too Much Internal Trace Data
Symptoms: Overwhelming volume of internal traces
Solutions:
Disable internal traces if not needed
Send to separate collector
Use sampling (if supported)
Filter by span name in collector
Internal Traces Interfering with Flow Traces
Symptoms: Internal traces mixed with network Flow Traces
Solutions:
Send internal traces to different endpoint
Use different collector instances
Filter by service name in the backend (see the TraceQL sketch below)
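For example, in a TraceQL-capable backend such as Tempo, flow-trace queries can exclude Mermin's own spans; the mermin-internal service name is the one assumed in the Tempo examples below:
# Exclude Mermin's internal spans from a flow-trace query
{ resource.service.name != "mermin-internal" }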
Best Practices
Disable by default: Only enable when needed
Use separate collectors: Don't mix with production Flow Traces
Enable for debugging: Temporarily enable for troubleshooting
Monitor overhead: Watch resource usage if enabled
Document usage: Note why internal traces are enabled
Complete Configuration Examples
Disabled (Default)
# No internal block = disabled (recommended for production)
Stdout Only (Debugging)
log_level = "debug"
internal "traces" {
span_fmt = "full"
stdout = {
format = "text_indent"
}
}OTLP Export (Development)
internal "traces" {
span_fmt = "full"
otlp = {
endpoint = "http://debug-collector:4317"
protocol = "grpc"
timeout = "10s"
}
}Both Stdout and OTLP
internal "traces" {
span_fmt = "full"
stdout = {
format = "text_indent"
}
otlp = {
endpoint = "http://debug-collector:4317"
protocol = "grpc"
}
}Integration with Observability Stack
Grafana Tempo
Query internal traces in Tempo:
# Find slow operations
{ resource.service.name = "mermin-internal" && duration > 1s }
# Find eBPF loading spans
{ resource.service.name = "mermin-internal" && name = "load_ebpf_program" }
Jaeger
Filter internal traces:
Service: mermin-internal
Operation: process_packet, sync_k8s_informers, etc.
Next Steps
Global Options: Configure logging levels
API and Metrics: Monitor Mermin with Prometheus
Troubleshooting: Debug common issues
OTLP Exporter: Configure trace export