3 min read

HAProxy Prometheus Metrics

So TL;DR: HAProxy now has a built-in Prometheus exporter: https://github.com/haproxy/haproxy/tree/master/contrib/prometheus-exporter

In on-prem Kubernetes clusters there are no IaaS load-balancer options, so a common setup is to run HAProxy and keepalived as static pods. This adds some complexity up front, but it runs rock solid in production: the control plane VIP is always available, and HAProxy automatically removes unhealthy control plane nodes from rotation.

I’ve been maintaining a Kubernetes cluster at home for dev work, and one of the more recent things I did was spin up some monitoring, i.e. Grafana and Prometheus. By the way, the prometheus-operator ( https://github.com/coreos/prometheus-operator ) from the former CoreOS folks works perfectly for this. I’ve been slowly going through the different cluster components and adding scrapers and exporters, and one of the recent targets has been my cluster’s control plane load balancer and VIP (haproxy + keepalived), so I’ve been looking into exporters. Conveniently, the haproxy 2.0.0 release ships with an embedded exporter. Anyways, here’s the container I’ve been working on: https://hub.docker.com/r/whisperos/haproxy
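
If you want a quick sanity check that the image actually ships 2.0, you can invoke the binary directly (the /usr/sbin/haproxy path comes from the manifest below; the entrypoint override is just to be safe, since the image's default entrypoint may vary):

docker pull whisperos/haproxy:2.0.0
docker run --rm --entrypoint /usr/sbin/haproxy whisperos/haproxy:2.0.0 -v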

Below is an example static pod manifest that lives on the master nodes:

---
apiVersion: v1
kind: Pod
metadata:
  name: ha-lb
  namespace: kube-system
  labels:
    k8s-app: ha-control-plane
spec:
  # Run on the host network so haproxy and keepalived can bind node ports and the VIP directly
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/haproxy
    name: etc-haproxy
  - hostPath:
      path: /etc/keepalived
    name: etc-keepalived
  - name: var-iptables
    hostPath:
      path: /var/lib/iptables
  - name: xtables-lock
    hostPath:
      path: /run/xtables.lock
      type: FileOrCreate
  containers:
  - name: haproxy
    image: docker.io/whisperos/haproxy:2.0.0
    command:
    - /usr/sbin/haproxy
    args:
    - -db
    - -f
    - /etc/haproxy/haproxy.cfg
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /etc/haproxy
      name: etc-haproxy
      readOnly: true
  - name: keepalived
    image: docker.io/whisperos/keepalived:2.0.16
    command:
    - /usr/sbin/keepalived
    args:
    - --no-syslog
    - --log-console
    - --dont-fork
    - --use-file=/etc/keepalived/keepalived.conf
    securityContext:
      # keepalived needs full network privileges to speak VRRP and move the VIP
      privileged: true
    volumeMounts:
    - mountPath: /etc/keepalived
      name: etc-keepalived
      readOnly: true
    - mountPath: /var/lib/iptables
      name: var-iptables
      readOnly: false
    - mountPath: /run/xtables.lock
      name: xtables-lock
      readOnly: false
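
The manifest mounts /etc/keepalived from the host, but I haven't shown its contents. For reference, a minimal keepalived.conf sketch that would pair with this setup looks something like the following; the interface name, router ID, and priorities are placeholders you'd tune per node:

vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # healthy while a haproxy process exists
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state MASTER                  # BACKUP on the other masters
    interface eth0                # assumed NIC name, adjust to your host
    virtual_router_id 51
    priority 101                  # set lower on the BACKUP nodes
    advert_int 1
    virtual_ipaddress {
        10.12.4.3/24              # the control plane VIP haproxy binds
    }
    track_script {
        chk_haproxy
    }
}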

Then my HAProxy config looks like this:

global
    maxconn 200000
    nbthread 8
    tune.ssl.default-dh-param 2048

defaults
    log               global
    retries           3
    maxconn           200000
    timeout connect   5s
    timeout client    50s
    timeout server    50s

# Stats page and Prometheus metrics endpoint, both served on :8404
frontend exporter
    mode http
    bind *:8404
    option http-use-htx  # HTX is the default in 2.0 and required by the exporter
    http-request use-service prometheus-exporter if { path /metrics }
    stats enable
    stats uri /stats
    stats refresh 10s

frontend kubernetes_lb
    bind 10.12.4.3:6443
    default_backend kubernetes_back
    mode tcp

backend kubernetes_back
    balance leastconn
    mode tcp
    server titan01.iag.d3fy.net 10.12.4.114:6443 check port 6443
    server titan02.iag.d3fy.net 10.12.4.175:6443 check port 6443
    server titan03.iag.d3fy.net 10.12.4.171:6443 check port 6443

frontend etcd_lb
    bind 10.12.4.3:2379
    default_backend etcd_back
    mode tcp

backend etcd_back
    balance leastconn
    mode tcp
    server titan01.iag.d3fy.net 10.12.4.114:2379 check port 2379
    server titan02.iag.d3fy.net 10.12.4.175:2379 check port 2379
    server titan03.iag.d3fy.net 10.12.4.171:2379 check port 2379

# vi:syntax=haproxy
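
With that in place, each master should now be serving metrics on :8404. A quick sanity check against one of the masters (address taken from the backend config above):

curl -s http://10.12.4.114:8404/metrics | head -n 20

You should see the haproxy_process_*, haproxy_frontend_*, haproxy_backend_*, and haproxy_server_* metric families, and the human-readable stats page is on the same port at /stats.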

Now I’ll configure the prometheus-operator with a ServiceMonitor:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app: prom-haproxy-ha
    release: mon
  name: prom-haproxy-ha
  namespace: monitoring
spec:
  endpoints:
  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    interval: 15s
    port: http-metrics
  # Pull the Prometheus job name from the service's "jobLabel" label
  jobLabel: jobLabel
  namespaceSelector:
    matchNames:
    - kube-system
  selector:
    matchLabels:
      app: prom-ha-control-plane

And don’t forget the headless service that gives the ServiceMonitor endpoints to scrape:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: prom-ha-control-plane
    jobLabel: ha-control-plane
  name: prom-ha-control-plane
  namespace: kube-system
spec:
  clusterIP: None
  ports:
  - name: http-metrics
    port: 8404
    protocol: TCP
    targetPort: 8404
  selector:
    k8s-app: ha-control-plane
  type: ClusterIP
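
Once both manifests are applied, it's worth confirming the headless service actually resolved the static pods before blaming Prometheus:

kubectl -n kube-system get endpoints prom-ha-control-plane

It should list one address per master on port 8404. If the new job never shows up under Status -> Targets in Prometheus, double-check that the release: mon label on the ServiceMonitor matches your Prometheus serviceMonitorSelector (here I'm assuming a prometheus-operator Helm release named mon).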