Author: Kevin Fitzpatrick

Kevin Fitzpatrick is a Senior Solution Engineer at VMware, supporting enterprise accounts in the New England region. Kevin has been with VMware since November 2017 and is particularly focused on cloud native technologies and Kubernetes. He is a US Air Force veteran and new father who enjoys boating, camping, and BJJ.

Leveraging LogInsight for Kubernetes

As part of responsibly running applications, it’s important to have all the supporting Day 2 operations covered. That way, when something goes bump in the night, you’re immediately prepared and able to quickly find the source of the issue. Logging is one critical component of this overall architecture. Many shops already run mature logging processes with vRealize LogInsight to support their vSphere infrastructures. Wouldn’t it be great to use this existing logging setup for your Kubernetes clusters? You can!

Note: If you’d like help setting up a simple, single-node test cluster, see the Quick Start post included below.

Setting It Up

Fluentd is an open source project that provides a “unified logging layer.” It is a great project that provides plenty of capabilities outside of Kubernetes as well. For our purposes, it will be deployed as a DaemonSet within our Kubernetes cluster to collect logs and ship them to our vRealize LogInsight Appliance.

Luckily for us, the project maintains a set of templates that make it very easy to deploy fluentd as a DaemonSet within a Kubernetes cluster. Remember, a DaemonSet (DS) is a Kubernetes resource that ensures a pod of this type is always running on every node within our cluster. Perfect for the logging use case.

GitHub for templates: https://github.com/fluent/fluentd-kubernetes-daemonset

For our implementation with LogInsight, we will be using the Kubernetes syslog template, fluentd-daemonset-syslog.yaml.

If you open that file, you will see the manifest that defines the configuration to be deployed into the Kubernetes cluster.

You can see that it will:

  • Create a ServiceAccount and ClusterRole for fluentd
  • Deploy as a DaemonSet
  • Deploy into the kube-system namespace
  • Pull the container image from Fluent’s repository

Within the manifest file, the only parameters we need to change are the IP address and desired port for our LogInsight Appliance.

Once you change the value: field to the LogInsight IP address, you can simply apply that YAML file to deploy fluentd to the cluster! This will automatically create the DS and start shipping logs to your LogInsight Appliance.
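For reference, the relevant section of the template looks roughly like this (the variable names come from the template; the IP and port values here are placeholders to swap for your own):

env:
  - name: SYSLOG_HOST
    value: "10.0.0.50"
  - name: SYSLOG_PORT
    value: "514"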

Step by step for the deployment (assumes you have your cluster up and running and kubeconfig set up):

1. git clone https://github.com/fluent/fluentd-kubernetes-daemonset.git

2. Use a text editor to change the syslog template file to have the correct values for your LogInsight Appliance

sudo vim fluentd-kubernetes-daemonset/fluentd-daemonset-syslog.yaml

Edit the value field under SYSLOG_HOST to the LogInsight IP address and save (press Esc, then type :wq! and hit Enter).

3. Apply the DS to the Kubernetes cluster:

kubectl apply -f fluentd-kubernetes-daemonset/fluentd-daemonset-syslog.yaml

4. Verify the deployment within the kube-system namespace:

kubectl get ds -n kube-system

It should be listed along with kube-proxy and whichever CNI you’re leveraging for your Kubernetes cluster; for me, that is Antrea.
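To check the fluentd pods themselves, you can filter on the label the template applies to them (I believe the template labels its pods k8s-app=fluentd-logging; adjust the selector if your copy differs):

kubectl get pods -n kube-system -l k8s-app=fluentd-logging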

Testing to make sure it works

In order to test that logs are shipping and being received, let’s deploy a simple webserver and send it a few requests. I’ve added the label app=nginx so that when we create the NodePort service, it will select this pod as its endpoint to communicate with.

kubectl run nginx --image=nginx --restart=Never --labels=app=nginx

Then create a NodePort service so we can access the default webpage from Nginx. By default, this command sets the selector to app=nginx, taken from the name of the service, which matches the label we applied to our pod.

kubectl create svc nodeport nginx --tcp=80
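To confirm the service actually selected the pod, check its endpoints; the nginx pod’s IP should be listed:

kubectl get endpoints nginx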

kubectl get svc

This will show us the port we need to access the test nginx webserver.
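With the port in hand, send a few requests to generate some log traffic. A quick sketch (the node IP is a placeholder for your machine’s address; the jsonpath pulls the assigned NodePort):

NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl http://<node-ip>:$NODE_PORT/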

Okay! There should now be some HTTP requests we can view in LogInsight, which is acting as our syslog server via the fluentd DaemonSet running in our cluster!

Log into LogInsight and select Interactive Analytics; a simple ‘http’ search in the search bar should show our Nginx logs.

There you have it! Logs are now flowing from our Kubernetes cluster into our existing LogInsight Appliance, and we are able to search for them.

You can match these against the logs being output within the Kubernetes cluster with the kubectl logs nginx command.

It’s not just our app logs that are shipped, but Kubernetes logs as well. Within LogInsight, in the Interactive Analytics window, filter the app name to fluentd and you should see all the logs being sent from the K8s cluster. For example, I had a failed Postgres deployment, which can be seen in the screenshot below.

That is a lot of material, but the steps are fairly simple thanks to the work done by the fluentd project.

In part 2 of this blog, we will look at creating some dashboards within LogInsight that will help us more easily monitor and analyze the logs coming in from the Kubernetes cluster.


Quick Start: Kubernetes Test Cluster w/ Antrea CNI

Recently, VMware announced an open source Kubernetes networking project called Antrea. This project uses Open vSwitch (more here) as the data plane for a compatible Container Network Interface (CNI). To run Kubernetes (k8s) clusters, you are required to provide a CNI to allow for pod-to-pod communication. It is assumed the hosts (physical or VMs) making up the cluster are already networked together.

In this post, I’d like to go over setting up a single-node k8s cluster using Kubeadm on Ubuntu 18.04 with the Antrea CNI (latest versions of each). For me, this is an easy way to spin up a cluster to mess around with or do some quick testing. A couple of other ways that I’ve used and love are KinD (here) and simply enabling Kubernetes in Docker for Desktop (probably the easiest way for most).
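As a quick aside, KinD can stand up a throwaway cluster with a single command, assuming Docker and the kind CLI are already installed:

kind create cluster

With that said, back to the kubeadm route.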

To start, you’ll need a single Ubuntu 18.04 machine. I’ve done this on AWS and with VMware Workstation on my laptop, and it’s worked well on both. The recommendation is to make sure you have 2 vCPU and 2 GB of RAM (if you use the script below, the install will fail without these resources).

To prepare the Ubuntu machine for k8s, we need to install Docker (original Docker doc):

#install prerequisites
sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl software-properties-common

#add Docker's GPG key and apt repository
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"

#install Docker CE
sudo apt update
apt-cache policy docker-ce
sudo apt install -y docker-ce

#add our user to the docker group so we can skip sudo for docker commands
sudo usermod -aG docker $USER

In the commands above, the Docker repository was added to apt, then Docker was downloaded and installed. We then added our current user to the docker group so we don’t have to use sudo with all of the Docker commands. If someone knows differently, please let me know, but it has always required a restart for me for that change to take effect. We will do that after we download the rest of the required k8s system components, the CLI, and the kubelet.
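As an aside on that restart: since group membership is normally picked up at login, starting a fresh login shell may be enough in place of a full reboot. It’s worth a try; the command below starts a new shell with the docker group applied:

newgrp docker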

#add the Kubernetes apt key and repository
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

#install kubelet, kubeadm, and kubectl, then pin their versions
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

sudo reboot

With that, we should have everything needed to continue. Before we begin the Kubeadm bootstrap, we need to ensure swap is turned off, because the kubelet will not run with swap enabled by default.

#turn off swap
sudo swapoff -a

#initialize the cluster control plane
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

#get cluster credentials and copy them into our kube-config file
#(this must happen before we can run kubectl commands)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

#remove taint from master to use that node
kubectl taint nodes --all node-role.kubernetes.io/master-

#apply antrea CNI network latest version (no sudo needed now that kubectl has our credentials)
kubectl apply -f https://raw.githubusercontent.com/vmware-tanzu/antrea/master/build/yamls/antrea.yml

In the code above, we turn off swap for this machine and then kick off kubeadm, which pulls down the images used to create our cluster components. This includes our API server, etcd database, controller manager, and scheduler.

After that, we are initializing our cluster with the kubeadm init --pod-network-cidr= command. The network address range we pass in here will be used for our pods and controlled by Antrea, which we install in the final command. **Important: grab the join token given by Kubeadm if you want to grow your cluster with additional worker nodes!
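If you scroll past that token or lose it, you can generate a fresh join command at any time on the master:

sudo kubeadm token create --print-join-command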

After that, we are simply removing the taint from the master node so that we can run our pod/container workloads on the same node. By default, a taint is applied to the master so that workloads do not interfere with the operation of our control plane; obviously the right thing to do when it matters!
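To confirm the taint is gone, you can grep the node description (the node name placeholder is whatever kubectl get nodes reports):

kubectl describe node <node-name> | grep -i taint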

Make it faster for me: https://github.com/fitz0017/k8s

I have the script broken down into two parts, because I can’t get Docker to run properly without a full reboot. To run the scripts, log into your Ubuntu machine that has 2 vCPU and at least 2 GB of RAM and:

git clone https://github.com/fitz0017/k8s.git
source k8s/install_k8s_1.sh

At this point you may need to input your sudo password and, if doing this on Workstation, select ‘yes’ when asked whether to allow system services to be restarted.

When that completes, you should see a full reboot of your Ubuntu machine. So log yourself back in and:

source k8s/install_k8s.sh 

This will kick off the initialization of the cluster and the application of the Antrea CNI. Again, make sure to copy the discovery token output at the end of initialization if you want to grow this cluster.

From here, whenever you’re SSH’d into that machine, you have access to a k8s cluster for testing and learning! Please tell me if you notice any problems or give feedback in the comments.

Good luck!

KubeCon 2019 – VMware Recap

VMware was very busy this year at KubeCon, announcing three new open source projects and a new podcast with rockstar hosts, and presenting lots of sessions. These new open source projects are in addition to the already very popular and widely adopted Velero, Contour, Sonobuoy, and Octant. The commitment and number of employees at VMware working in the cloud native and open source space is truly impressive.

The first announcement was Project Antrea, an open source CNI for Kubernetes based on Open vSwitch (OVS). This project aims to deliver simple and secure Kubernetes networking. One fantastic feature is its plugin for Octant, another developer-focused open source project that provides a very powerful GUI for visibility into and management of Kubernetes applications. With Project Antrea and Octant, you can get even more visibility into your microservices and their connectivity.

The next project announced was Project Hamlet, a joint effort between VMware, Google Anthos, HashiCorp, and Pivotal to create an interoperable API for the federation of service meshes. The end goal is an API that allows for interconnectivity of service meshes across heterogeneous cloud environments.

The third project is Crash Diagnostics for Kubernetes, which is a way to automate the investigation of unhealthy or unresponsive Kubernetes clusters. It does this by automating the collection of diagnostics from all of the nodes within a cluster and bundling them into a TAR file for further analysis.

If that wasn’t enough, a new podcast, “The Podlets,” was announced: https://blogs.vmware.com/cloudnative/2019/11/20/introducing-podlets-podcast-audio-guide-to-cloud-native-concepts/

The hosts include an impressive list of experts in cloud native and distributed systems topics, and great all-around people. This will be a great resource for keeping up with the latest news in this fast-paced ecosystem. The direct link to “The Podlets” is thepodlets.io. It will be available on the usual podcast distribution platforms, as well as the Cloud Native Applications YouTube channel here.

Kubernetes at VMware

What is the Strategy?

So many exciting announcements this year at VMworld have been around the cloud strategy of build, run, and manage. This strategy is outlined perfectly by Paul Fazzone here.

At the heart of these announcements is the integration of Kubernetes (K8s) into all things vSphere. With Project Pacific, Kubernetes will be embedded into vSphere to provide native K8s functionality within ESXi, as well as pure, open-source K8s clusters on demand for developers.

With Tanzu Mission Control, VMware is enabling companies to manage their K8s clusters from a single location, bringing together operations and developers, and creating a single point of management to apply policies and governance to clusters deployed across a variety of environments on-premise and in public clouds.

Free Open-Source Kubernetes Training

With the increasing importance of Kubernetes to all IT professionals, it is important to provide resources that enable people to master this new skill set. In that vein, another amazing announcement was free, vendor-agnostic training for open-source Kubernetes from VMware, available at kubernetes.academy. These courses provide a fantastic overview of containers and Kubernetes, led by highly experienced instructors who have been deep in this ecosystem since the beginning.

Sign up today and start up-leveling your skills!