As part of responsibly running applications, it’s important to have all the supporting Day 2 operations covered. That way, when something goes bump in the night, you’re immediately prepared and can quickly find the source of the issue. Logging is one critical component of this overall architecture. Many shops already run mature logging processes with vRealize LogInsight supporting their vSphere infrastructure. Wouldn’t it be great to use this existing logging setup for your Kubernetes clusters? You can!
Note: If you’d like help setting up a simple, single node test cluster see this blog.
Setting It Up
Fluentd is an open source project that provides a “unified logging layer.” It is a great project that provides a lot of capabilities, outside of Kubernetes as well. For our purposes, it will be deployed as a DaemonSet within our Kubernetes cluster to provide log collection and shipping to our vRealize LogInsight Appliance.
Luckily for us, the project maintains a set of templates that make it very easy to deploy fluentd as a DaemonSet within a Kubernetes cluster. Remember, a DaemonSet (DS) is a Kubernetes workload type that ensures a pod of this kind is always running on every node in the cluster. Perfect for the logging use case.
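As a rough illustration of the shape of a DaemonSet (this is a made-up minimal example, not the fluentd template from the repo below), the manifest looks much like a Deployment but has no replica count, since "one pod per node" is implied:

```yaml
# Minimal illustrative DaemonSet (example only, not the fluentd template).
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: example-agent
  template:
    metadata:
      labels:
        app: example-agent
    spec:
      containers:
        - name: agent
          image: busybox
          command: ["sh", "-c", "sleep infinity"]
```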
Github for templates: https://github.com/fluent/fluentd-kubernetes-daemonset
For our implementation with LogInsight, we will be using the Kubernetes syslog template.
If you click on that file, you will see the manifest file that shows the configuration that will be deployed into the Kubernetes cluster.
You can see that it will:
- Create a ServiceAccount and ClusterRole for fluentd
- Deploy as a DaemonSet
- Deploy into the kube-system namespace
- Pull the container image from Fluent’s repository
Within the manifest file, the only parameters we need to change are the IP address and desired port of our LogInsight Appliance. Once you change the value: field under SYSLOG_HOST to the LogInsight IP address, you can simply apply that YAML file to deploy fluentd to the cluster! This will automatically create the DS and start shipping logs to your LogInsight Appliance.
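For reference, the relevant part of the syslog template looks roughly like this (the exact placeholder text may differ in the current version of the repo, and 192.0.2.10 is just an example IP):

```yaml
# Approximate excerpt of the fluentd syslog DaemonSet env section.
# Replace the value under SYSLOG_HOST with your LogInsight appliance IP.
env:
  - name: SYSLOG_HOST
    value: "192.0.2.10"   # example value; use your LogInsight IP
  - name: SYSLOG_PORT
    value: "514"          # default syslog port; change if yours differs
```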
Step by step for the deployment (assumes you have your cluster up and running and your kubeconfig set up):
1. Clone the fluentd DaemonSet templates repository
git clone https://github.com/fluent/fluentd-kubernetes-daemonset.git
2. Use a text editor to set the correct value for your LogInsight Appliance in the syslog template file
sudo vim fluentd-kubernetes-daemonset/fluentd-daemonset-syslog.yaml
Edit the value field under SYSLOG_HOST to the LogInsight IP, then save and quit (press Esc, then type :wq! and hit Enter)
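If you prefer a non-interactive edit, a sed one-liner can make the same change. This is a sketch against a stand-in file with a hypothetical CHANGE_ME placeholder; the actual placeholder string in the real template may differ, so check your copy before pointing sed at it:

```shell
# Demo on a stand-in file; aim sed at the real template only after
# confirming the placeholder text it actually contains.
cat > /tmp/fluentd-syslog-demo.yaml <<'EOF'
env:
  - name: SYSLOG_HOST
    value: "CHANGE_ME"
  - name: SYSLOG_PORT
    value: "514"
EOF
# 192.0.2.10 is an example IP; substitute your LogInsight appliance address.
sed -i 's/CHANGE_ME/192.0.2.10/' /tmp/fluentd-syslog-demo.yaml
grep 'SYSLOG_HOST' -A 1 /tmp/fluentd-syslog-demo.yaml
```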
3. Apply the DS to the Kubernetes cluster
kubectl apply -f fluentd-kubernetes-daemonset/fluentd-daemonset-syslog.yaml
4. Verify the DaemonSet was created in the kube-system namespace
kubectl get ds -n kube-system
It should be listed along with kube-proxy and whichever CNI you’re leveraging for your Kubernetes cluster; for me, that is Antrea.
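A quick sanity check is to compare the DESIRED and READY columns of the DS. The line below is made-up sample output for a hypothetical three-node cluster, not captured from a real system; on your cluster, populate DS_LINE from kubectl instead (assuming the DaemonSet is named fluentd, as in the template):

```shell
# Made-up sample line in `kubectl get ds --no-headers` column order:
# NAME  DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE-SELECTOR  AGE
DS_LINE='fluentd   3   3   3   3   3   <none>   2m'
# On a live cluster, use instead:
#   DS_LINE=$(kubectl get ds fluentd -n kube-system --no-headers)
echo "$DS_LINE" | awk '{ if ($2 == $4) print "all pods ready"; else print "still rolling out" }'
```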
Testing to make sure it works
In order to test that logs are shipping and being received, let’s deploy a simple webserver and send it a few requests. I’ve added the label app=nginx so that when we create the NodePort service, it will select this pod as its endpoint to communicate with.
kubectl run nginx --image=nginx --restart=Never --labels=app=nginx
Then create a NodePort service so we can access the default webpage from Nginx. By default, this command creates a service named nginx that uses app=nginx as its selector.
kubectl create svc nodeport nginx --tcp=80
kubectl get svc

This will show us the port we need to access the test nginx webserver.
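The PORT(S) column shows a mapping like 80:31234/TCP, where the second number is the NodePort. Here is a sketch of pulling that number out; the service line below is made-up sample output, so on a real cluster populate SVC_LINE from kubectl, then curl http://<node-ip>:<port>/ a few times to generate requests:

```shell
# Made-up sample line; on a real cluster use:
#   SVC_LINE=$(kubectl get svc nginx --no-headers)
SVC_LINE='nginx   NodePort   10.96.142.31   <none>   80:31234/TCP   5s'
# PORT(S) is field 5; split "80:31234/TCP" on ":" and "/" to get the NodePort.
NODE_PORT=$(echo "$SVC_LINE" | awk '{ split($5, a, /[:\/]/); print a[2] }')
echo "$NODE_PORT"
# Then hit the server to generate log entries:
#   curl "http://<node-ip>:${NODE_PORT}/"
```

On a real cluster you can skip the text parsing entirely with kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'.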
Okay! There should now be some HTTP requests we can view from LogInsight, which is acting as our syslog server via the fluentd DaemonSet running in our cluster!
Log into LogInsight and select Interactive Analytics; a simple ‘http’ search in the search bar should show our Nginx logs.
There you have it! Logs are now flowing from our Kubernetes cluster into our existing LogInsight appliance, and we are able to search them.
You can match these against the logs being output within the Kubernetes cluster with the kubectl logs nginx command.
It’s not just our app logs that will be shipped, but Kubernetes logs as well. Within LogInsight, in the Interactive Analytics window, filter the app name to fluentd and you should see all the logs being sent from the K8s cluster. For example, I had a failed postgres deployment, which can be seen in the screenshot below.
That is a lot of material, but the steps are fairly simple and easy thanks to the work done by the fluentd project.
In part 2 of this blog, we will look at creating some dashboards within LogInsight that will help us more easily monitor and analyze the logs coming in from the Kubernetes cluster.
Helpful source docs:
- Fluentd Getting Started: https://docs.fluentd.org/v/0.12/quickstart/getting-started
- Kubernetes Logging: https://kubernetes.io/docs/concepts/cluster-administration/logging/