Introduction
In previous blogs, we explored creating a Kubernetes cluster, deploying an application on a Kubernetes cluster, and monitoring a Kubernetes cluster. This blog gives an overview of logging with Kubernetes and the logging methods commonly used with it.
Logging Kubernetes Cluster
Application and system level logs are useful for understanding problems with a system. They help with troubleshooting and finding the root cause of a problem. Like traditional applications and systems, containerized applications also require their logs to be recorded and stored somewhere. The most standard method of logging is to write to the standard output and standard error streams. When logs are recorded in separate storage, independent of the lifecycle of nodes, pods, and containers, the mechanism is called cluster-level logging.
Basic Logging with Kubernetes:
In the most basic form of logging, it is possible to write logs to standard output using just the Pod specification.
For Example:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
```
The logs are then available via standard output:
```
$ kubectl create -f log-example-pod.yaml
pod "counter" created
$ kubectl get po
NAME                          READY     STATUS    RESTARTS   AGE
counter                       1/1       Running   0          8s
hello-world-493621601-1ffrp   1/1       Running   3          19d
hello-world-493621601-mmnzw   1/1       Running   3          19d
hello-world-493621601-nqd67   1/1       Running   3          19d
hello-world-493621601-qkfcx   1/1       Running   3          19d
hello-world-493621601-xbf6s   1/1       Running   3          19d
$ kubectl logs counter
0: Fri Dec 1 16:37:36 UTC 2017
1: Fri Dec 1 16:37:37 UTC 2017
2: Fri Dec 1 16:37:38 UTC 2017
3: Fri Dec 1 16:37:39 UTC 2017
4: Fri Dec 1 16:37:40 UTC 2017
5: Fri Dec 1 16:37:41 UTC 2017
6: Fri Dec 1 16:37:42 UTC 2017
7: Fri Dec 1 16:37:43 UTC 2017
8: Fri Dec 1 16:37:44 UTC 2017
```
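The counter's output is produced by an ordinary shell loop. As a quick sanity check, a bounded variant of the same loop (capped at three iterations here so it terminates, instead of `while true`) can be run in any local shell to see the same output format:

```shell
# Bounded variant of the counter loop from the Pod spec above;
# 'while true' is replaced with a three-iteration limit.
i=0
while [ "$i" -lt 3 ]; do
  echo "$i: $(date)"
  i=$((i+1))
done
```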
Node level logging with Kubernetes:
The containerized application writes its logs to stdout and stderr. A logging driver is then responsible for writing those logs to a file in JSON format. In the case of the Docker engine, the Docker logging driver handles this.
The most important part of node-level logging is log rotation. Log rotation ensures that logs do not consume all of the node's storage space.
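With the Docker engine, for example, log rotation can be configured through the json-file logging driver's options in /etc/docker/daemon.json. A minimal sketch follows; the size and file-count values are illustrative, not recommendations:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

With these options, Docker rotates a container's log file once it reaches 10 MB and keeps at most three files per container.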
Cluster Level Logging with Kubernetes:
Kubernetes does not provide native cluster-level logging, but it can be achieved with the following approaches:
- Run an agent on each node for log collection
- Run a sidecar container in each pod that is responsible for log collection
- Push logs directly from the application to a storage backend
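As a sketch of the sidecar approach, the Pod below runs an application container that writes to a file on a shared volume, plus a sidecar that streams that file to its own stdout, where node-level logging picks it up. All names and paths here are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging-sidecar   # illustrative name
spec:
  containers:
  # Application container writing logs to a file instead of stdout
  - name: app
    image: busybox
    args: [/bin/sh, -c, 'while true; do echo "$(date) app log line" >> /var/log/app/app.log; sleep 1; done']
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  # Sidecar streams the log file to its own stdout
  - name: log-sidecar
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/app/app.log']
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  volumes:
  - name: app-logs
    emptyDir: {}
```

The trade-off of this pattern is one extra container per pod, but it lets `kubectl logs` and the node agent see logs from applications that cannot write to stdout themselves.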
The most widely used and recommended method is running a node-level agent that collects logs and ships them to log storage.
Stackdriver and Elasticsearch are commonly used for logging with Kubernetes, though other solutions such as logz.io and Sematext are also available. Fluentd, with a custom configuration, acts as the node agent for both Stackdriver and Elasticsearch.
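As a node agent, Fluentd typically tails the container log files written on each node and forwards them to a backend. A minimal, illustrative Fluentd configuration along these lines might look like the following; the Elasticsearch host name and file paths are assumptions for the sketch:

```
# Tail the JSON log files the container runtime writes on the node
<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  <parse>
    @type json
  </parse>
</source>

# Forward everything to an Elasticsearch backend
<match kubernetes.**>
  @type elasticsearch
  host elasticsearch-logging
  port 9200
  logstash_format true
</match>
```

Running this agent as a DaemonSet ensures one Fluentd pod per node, which is what makes it a node-level agent.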
For a Kubernetes cluster created through Minikube, Giant Swarm provides a ready-made solution: an ELK (Elasticsearch, Logstash and Kibana) stack for logging with Minikube. It is also possible to deploy all of these components manually with manifests.
Start Minikube:
```
minikube start --memory 4096
```
Download all manifests and start Kibana:
```
kubectl apply --filename https://raw.githubusercontent.com/giantswarm/kubernetes-elastic-stack/master/manifests-all.yaml
minikube service kibana
```
Logging will then be enabled, and you can verify it through the Kibana dashboard. If you are using Google Kubernetes Engine, Stackdriver is the default logging option for GKE.
If you want custom logging with Elasticsearch, Kibana and Fluentd, Kubernetes provides a supported addon for logging.
Please check: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch
For more information check our other blogs too.