Deployment Overview
This section provides comprehensive guidance for deploying Mermin in various environments, from local development to production Kubernetes clusters.
Deployment Options
Mermin supports multiple deployment scenarios, from standard Kubernetes clusters (via Helm) to managed cloud platforms (GKE, EKS, AKS) and Docker on bare metal or virtual machines.
Architecture Considerations
DaemonSet Pattern
Mermin is typically deployed as a Kubernetes DaemonSet, which ensures:
One Pod Per Node: Each node runs its own Mermin agent
Automatic Scaling: New nodes automatically get Mermin pods
Node Affinity: Pods can target specific node pools or architectures
Resource Isolation: Each agent operates independently
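The node-affinity point above can be sketched as a Helm values fragment. The nodeSelector and tolerations fields are standard Kubernetes pod-spec fields; whether the chart passes them through under these exact values keys is an assumption, so check the chart's values.yaml:

```yaml
# values.yaml (sketch; assumes the chart forwards these fields
# to the DaemonSet pod spec, as most charts do)
nodeSelector:
  kubernetes.io/arch: amd64        # Target only amd64 nodes
tolerations:
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule             # Also schedule on control-plane nodes
```

Tolerations matter for a monitoring DaemonSet: without them, tainted nodes (control plane, dedicated pools) run no agent and their traffic goes unobserved.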
Resource Requirements
Plan your deployment based on these resource guidelines:
Minimum Resources (for low-traffic environments):
CPU: 100m (0.1 cores)
Memory: 128 Mi
Recommended Resources (for moderate traffic):
CPU: 500m (0.5 cores)
Memory: 256 Mi
High-Traffic Resources (for busy production nodes):
CPU: 1-2 cores
Memory: 512 Mi - 1 Gi
Actual requirements vary based on:
Network traffic volume
Number of pods per node
Flow timeout configurations
OTLP batch sizes and export frequency
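The guidelines above map directly onto Kubernetes resource requests and limits. As a sketch for a moderate-traffic deployment (assuming the chart exposes a standard resources block):

```yaml
# values.yaml (sketch)
resources:
  requests:
    cpu: 500m       # Recommended baseline for moderate traffic
    memory: 256Mi
  limits:
    cpu: "2"        # Headroom for high-traffic nodes
    memory: 1Gi
```

Setting requests at the moderate tier and limits at the high-traffic tier lets busy nodes burst without the scheduler over-reserving capacity cluster-wide.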
Network Interface Selection
Mermin captures traffic from network interfaces matching your configured patterns. The default configuration provides complete visibility without flow duplication:
Complete Visibility (default):
```hcl
discovery "instrument" {
  interfaces = [
    "veth*",    # Same-node pod-to-pod traffic
    "tunl*",    # Calico IPIP tunnels (IPv4)
    "ip6tnl*",  # IPv6 tunnels (dual-stack)
    "flannel*", # Flannel interfaces
    "cali*",    # Calico interfaces
    "cilium_*", # Cilium overlays
    # ... additional CNI-specific patterns
  ]
}
```

Captures all traffic (same-node + inter-node, IPv4 + IPv6) without duplication. Works with most CNIs, including Flannel, Calico, Cilium, kindnetd, and cloud providers. Supports dual-stack clusters.
Lower Overhead (inter-node only):
```hcl
discovery "instrument" {
  interfaces = ["eth*", "ens*"]
}
```

Captures only inter-node traffic. Misses same-node pod-to-pod communication but monitors fewer interfaces.
See Network Interface Discovery for detailed strategies and CNI-specific patterns.
Network Namespace Switching
Mermin uses an advanced technique to monitor host network interfaces without requiring hostNetwork: true. This provides better network isolation while maintaining full monitoring capabilities.
How it works:
Mermin starts in its own pod network namespace
During eBPF program attachment, it temporarily switches to the host network namespace
After attachment, it switches back to the pod namespace
eBPF programs remain attached in the host namespace (kernel space)
Mermin operates normally in pod namespace (userspace)
Benefits:
Network Isolation: Pod has its own network namespace, separate from the host
Kubernetes DNS: Can resolve service names for OTLP endpoints (e.g., http://otel-collector.observability:4317)
Service Communication: Other pods can communicate with Mermin on predictable IP addresses
Better Security: Doesn't expose host network interfaces to the pod
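The service DNS name in the benefit above resolves because the collector is exposed as an ordinary Kubernetes Service. A minimal sketch of such a Service (the name, namespace, and selector labels match the example endpoint but are assumptions about your collector deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: otel-collector
  namespace: observability
spec:
  selector:
    app: otel-collector
  ports:
    - name: otlp-grpc
      protocol: TCP
      port: 4317        # Resolves as otel-collector.observability:4317
      targetPort: 4317
```

With hostNetwork: true, this resolution would fail unless the DNS policy were overridden, which is one reason the namespace-switching approach is the default.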
Requirements:
hostPID: true - Required to access /proc/1/ns/net (the host network namespace)
CAP_SYS_ADMIN - Required for the setns() syscall to switch namespaces
CAP_SYS_PTRACE - Required to open namespace files of other processes (/proc/1/ns/net)
Automatic DNS Policy - The Helm chart sets dnsPolicy: ClusterFirstWithHostNet when hostNetwork: false
Configuration:
The default Helm chart configuration uses namespace switching:
```yaml
# values.yaml
hostNetwork: false        # Use pod namespace (not host)
hostPidEnrichment: true   # Required for namespace switching
securityContext:
  privileged: false       # No longer requires full privileged mode
  capabilities:
    add:
      - NET_ADMIN     # TC attachment
      - BPF           # eBPF operations
      - PERFMON       # Ring buffers
      - SYS_ADMIN     # Namespace switching
      - SYS_PTRACE    # Access process namespaces
      - SYS_RESOURCE  # Memory limits
```

The DaemonSet automatically sets the appropriate DNS policy to enable Kubernetes service resolution.
Prerequisites by Environment
All Environments
Linux kernel 4.18 or newer with eBPF support
Privileged container support
Network access to OTLP collector endpoint
Kubernetes
Kubernetes 1.20 or newer
Helm 3.x
kubectl configured for cluster access
Permissions to create ClusterRole and ClusterRoleBinding
Privileged DaemonSets allowed (most clusters)
Cloud Platforms
GKE (Google Kubernetes Engine):
GKE Standard or Autopilot (with Autopilot limitations)
Node OS: Container-Optimized OS (COS) or Ubuntu
Workload Identity (optional, for managed identity)
EKS (Amazon Elastic Kubernetes Service):
EKS 1.20 or newer
Amazon Linux 2 or Bottlerocket node OS
IAM roles for service accounts (optional)
AKS (Azure Kubernetes Service):
AKS 1.20 or newer
Ubuntu or Azure Linux node OS
Azure AD pod identity (optional)
Bare Metal / Virtual Machines
Linux distribution with kernel 4.18+
Docker or containerd installed
Root/sudo access to run privileged containers
No Kubernetes metadata enrichment available
Security Considerations
Required Privileges
Mermin requires elevated privileges to function:
```yaml
securityContext:
  privileged: true
  capabilities:
    add:
      - NET_ADMIN     # TC attachment
      - BPF           # eBPF operations (kernel 5.8+)
      - PERFMON       # Ring buffers (kernel 5.8+)
      - SYS_ADMIN     # Namespace switching and BPF filesystem access
      - SYS_RESOURCE  # memlock limits
```

These privileges are necessary to:
Load eBPF programs into the kernel
Attach to network interfaces
Access the host network namespace
Switch between network namespaces
Never reduce these privileges. Mermin will fail to start without them.
RBAC Permissions
Mermin needs read access to Kubernetes resources for metadata enrichment:
get, list, and watch on pods, services, deployments, etc.
Cluster-wide access (all namespaces)
Non-sensitive data only (no secrets)
The Helm chart creates a minimal ClusterRole with only necessary permissions.
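A ClusterRole along these lines illustrates the scope described above. This is a sketch of what the chart creates, not its exact contents; the actual resource list may differ:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: mermin  # Name is illustrative; the chart sets its own
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "nodes", "namespaces"]
    verbs: ["get", "list", "watch"]   # Read-only metadata access
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets", "daemonsets", "statefulsets"]
    verbs: ["get", "list", "watch"]
```

Note the absence of secrets and of any write verbs, consistent with the non-sensitive, read-only access described above.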
Network Policies
If using Kubernetes NetworkPolicies:
Egress to OTLP Collector: Allow traffic to your collector endpoint
Egress to Kubernetes API: Allow access to the API server (typically allowed by default)
No Ingress Required: Mermin doesn't accept inbound connections (except health checks)
Example egress policy:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: mermin-egress
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: mermin
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: otel-collector
      ports:
        - protocol: TCP
          port: 4317
    - to:
        - namespaceSelector: {}
      ports:
        - protocol: TCP
          port: 6443 # Kubernetes API
```

Deployment Checklist
Before deploying Mermin to production:
Verify nodes run Linux kernel 4.18 or newer with eBPF support
Confirm the cluster allows privileged DaemonSets
Confirm permissions to create ClusterRole and ClusterRoleBinding
Verify network access from nodes to the OTLP collector endpoint
Size CPU and memory requests for the expected traffic volume
Upgrade Strategy
When upgrading Mermin:
Review Release Notes: Check for breaking changes or new features
Update Helm Chart: Run helm repo update for chart updates
Test in Staging: Always test upgrades in non-production first
Rolling Update: The DaemonSet controller performs rolling updates automatically
Monitor Health: Watch pod status and metrics during rollout
Rollback if Needed: Run helm rollback mermin to revert
The DaemonSet updateStrategy controls upgrade behavior:
```yaml
updateStrategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1 # Update one node at a time
```

Next Steps
Choose your deployment path:
Standard Kubernetes: Kubernetes with Helm
Cloud Platform: GKE, EKS, or AKS
Advanced Setup: Custom CNI, Multi-Cluster
Non-Kubernetes: Docker on Bare Metal
After deploying, configure Mermin for your environment.