
Observability

Logging

Kolab is distributed with a logging pipeline that works like this:

  • Applications running in containers log to stdout/stderr, which is captured by Kubernetes.
  • The vector.dev DaemonSet picks up the Kubernetes log files and turns each log line into a structured log event.
  • vector.dev further processes each event, extracting structured fields from log lines in their various formats (apache, logfmt, ...).
  • The resulting log events are forwarded to VictoriaLogs for storage and retrieval.
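
A minimal sketch of what such a pipeline can look like in vector.dev's YAML configuration; the component names, the parsing logic, and the sink are illustrative, not the shipped Kolab configuration. VictoriaLogs accepts the Elasticsearch bulk protocol, so vector's elasticsearch sink is one common way to ship events to it:

sources:
  k8s:
    type: kubernetes_logs
transforms:
  parse:
    type: remap
    inputs: [k8s]
    source: |
      # Try logfmt first, then the apache common log format;
      # keep the raw event if neither parser matches.
      parsed, err = parse_logfmt(.message)
      if err != null {
        parsed, err = parse_apache_log(.message, format: "common")
      }
      if err == null {
        . = merge(., parsed)
      }
sinks:
  victorialogs:
    type: elasticsearch
    inputs: [parse]
    endpoints: ["http://victorialogs:9428/insert/elasticsearch/"]
    mode: bulk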

Kubernetes logging and vector.dev in a K3s setup

K3s stores pod logs in files like this:

/var/log/pods/kolab_scheduler-29320110-5z858_77c55590-3932-4dc5-a77e-7aef4d32af73/scheduler/0.log

These files are detected and ingested by vector.dev (see the output of the vector container).

Under heavy log output these files rotate quickly (every couple of seconds), so it is important that vector.dev notices new files fast enough; otherwise parts of the log output may be missing from the log stream. vector.dev is therefore configured with a relatively low glob_minimum_cooldown_ms.
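
A sketch of what lowering the cooldown on the kubernetes_logs source looks like (the value is illustrative; the upstream default is much higher):

sources:
  k8s:
    type: kubernetes_logs
    # Rescan /var/log/pods for new (rotated) log files at most
    # once per second instead of the much higher default.
    glob_minimum_cooldown_ms: 1000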

VictoriaLogs

VictoriaLogs provides a simple web interface at /select/vmui/, which can be used to query logs.

A simple query might look like this:

{pod_name=~"postfix.*"} disconnect

This queries logs from all postfix pods and filters for the string "disconnect".

Alternatively, logs can be queried via the HTTP API.
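
For example, the query shown above can be sent to the /select/logsql/query endpoint with curl (hostname is a placeholder; 9428 is VictoriaLogs' default port):

curl http://victorialogs:9428/select/logsql/query \
  -d 'query={pod_name=~"postfix.*"} disconnect' \
  -d 'limit=10'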

Metrics

Kolab is distributed with a Prometheus instance that scrapes internal metrics and ships with various pre-configured alerts.
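
As an illustration of what such a pre-configured alert looks like, here is a hypothetical Prometheus alerting rule (the rule name, job label, and threshold are made up, not the rules Kolab actually ships):

groups:
  - name: kolab
    rules:
      - alert: PostfixDown
        # Fires when the postfix scrape target has been
        # unreachable for five minutes.
        expr: up{job="postfix"} == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Postfix target {{ $labels.instance }} is down"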

Alerting

Kolab can be configured to run an internal Alertmanager instance, or to use an external one, as the receiver for alerts generated by both Prometheus and VictoriaLogs.
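
For reference, a minimal Alertmanager configuration that routes every alert to a single e-mail receiver could look like this (all addresses and the smarthost are placeholders, not Kolab defaults):

global:
  smtp_smarthost: "mail.example.com:587"
  smtp_from: "alertmanager@example.com"
route:
  receiver: ops-mail
receivers:
  - name: ops-mail
    email_configs:
      - to: "ops@example.com"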