Accessing Logs in Kubernetes

Before getting into dashboards for LogInsight, this blog post will briefly go through how to access the different logs stored in a Kubernetes cluster without using tools like Fluentd and a log aggregation service (assuming the cluster was bootstrapped with kubeadm).  This is a great way to really get under the covers and see what’s happening within your Kubernetes cluster!

Accessing control plane component logs —

Using the kubectl command line, pod logs are available via the `kubectl logs <pod-name>` command.  This applies to any pod, including the cluster control plane components, which run as static pods in the kube-system namespace.  *Note: for the control plane components, the kube-system namespace must be specified.

To access other control plane component logs, simply use their pod names. First, list them by running `kubectl get pods -n kube-system`, then run `kubectl logs <pod-name> -n kube-system`. Every deployment will have different suffixes for these static pods.

For example, here is how to access the etcd pod logs on my test cluster:
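Since the static pod names carry a node-specific suffix, the exact name will differ per cluster. A sketch of the lookup (the pod name `etcd-k8s-master` below is an example; substitute whatever `get pods` returns on your cluster):

```shell
# List the control plane pods to find the etcd pod's full name
kubectl get pods -n kube-system

# Tail the last 20 lines of the etcd logs (pod name is an example)
kubectl logs etcd-k8s-master -n kube-system --tail=20
```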

Accessing kubelet logs — 

The kubelet is responsible for interacting with the container engine (Docker in this case) and the kube-apiserver, so a lot of good information is stored in these logs. If the nodes of the Kubernetes cluster are running with systemd, then kubelet logs are written to journald and can be accessed via journalctl.  Otherwise, they will be located in the /var/log/ directory, written to a .log file.

Kubeadm deployment using an Ubuntu 18.04 node:

journalctl --unit kubelet

Since the kubelet is running as a service under systemd control, the logs are accessible via journalctl as shown above.
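A few journalctl flags that are handy here (these are standard journalctl options, not anything Kubernetes-specific):

```shell
# Follow the kubelet logs live, like tail -f
journalctl --unit kubelet --follow

# Only show entries from the current boot
journalctl --unit kubelet --boot

# Restrict to a recent time window
journalctl --unit kubelet --since "1 hour ago"
```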

Accessing pod/application logs — 

To show that this works with applications as well, there is an Nginx pod running with a NodePort service exposing it.

To access the logs for this pod —

kubectl logs nginx

* Note: the namespace didn’t need to be specified because the pod was deployed in the default namespace.

And we have logs!  The access logs from my browser are visible in the output.  

If you are in a situation where you have multiple containers within a pod, the syntax to choose which container’s logs to view is:

kubectl logs <pod name> <container name>
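For example, if a pod named `webapp` had containers `nginx` and `log-sidecar` (hypothetical names for illustration), either the positional form or the `-c` flag works:

```shell
# Positional form: pod name, then container name
kubectl logs webapp log-sidecar

# Equivalent, using the -c flag
kubectl logs webapp -c log-sidecar

# Or grab logs from every container in the pod at once
kubectl logs webapp --all-containers=true
```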

That does it for this quick post on accessing cluster and application logs. In a previous post, I covered getting up and running with Fluentd as a DaemonSet agent on every node, forwarding all of these logs to vRealize LogInsight (a log aggregator) for analysis and storage outside of the cluster. The next post will cover LogInsight dashboards and queries using the Interactive Analytics dashboard.


Leveraging LogInsight for Kubernetes

As part of responsibly running applications, it’s important to have all the supporting Day 2 operations covered. That way, when something goes bump in the night, you’re immediately prepared and able to quickly find the source of the issue. Logging is one critical component of this overall architecture. Many shops already run mature logging processes with vRealize LogInsight to support their vSphere infrastructure. Wouldn’t it be great to use this existing logging setup for your Kubernetes clusters? You can!

Note: If you’d like help setting up a simple, single node test cluster see this blog.

Setting It Up

Fluentd is an open source project that provides a “unified logging layer.” It is a great project that provides a lot of capabilities, outside of Kubernetes as well. For our purposes, it will be deployed as a DaemonSet within our Kubernetes cluster to provide log collection and shipping to our vRealize LogInsight Appliance.

Luckily for us, the project maintains a set of templates that make it very easy to deploy fluentd as a DaemonSet within a Kubernetes cluster. Remember, a DaemonSet (DS) is a Kubernetes capability that ensures we always have a pod of this type running on every node within our cluster. Perfect for the logging use case.

Github for templates: https://github.com/fluent/fluentd-kubernetes-daemonset

For our implementation with LogInsight, we will be using the Kubernetes syslog template.

If you click on that file, you will see the manifest showing the configuration that will be deployed into the Kubernetes cluster.

You can see that it will:

  • Create a ServiceAccount and ClusterRole for fluentd
  • Deploy as a DaemonSet
  • Deploy into the kube-system namespace
  • Pull the container image from Fluent’s repository

Within the manifest file, the parameters that we need to change are only the IP address and desired port for our LogInsight Appliance.

Once you change the value: field to the LogInsight IP address, you can simply apply that YAML file to deploy fluentd to the cluster! This will automatically create the DS and start shipping logs to your LogInsight Appliance.

Step by step for the deployment (assumes you have your cluster up and running and your kubeconfig set up):

1. git clone https://github.com/fluent/fluentd-kubernetes-daemonset.git

2. Use a text editor to change the syslog template file to have correct value for your LogInsight Appliance

sudo vim fluentd-kubernetes-daemonset/fluentd-daemonset-syslog.yaml

Edit the value field under SYSLOG_HOST to the LogInsight IP and save (in vim: Esc, then `:wq!`).
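To double-check the edit before applying, you can grep the manifest for the env entry (the SYSLOG_HOST variable name comes from the fluentd syslog template; verify against your copy of the file):

```shell
# Show the SYSLOG_HOST env var and the line after it (its value:)
grep -A 1 'SYSLOG_HOST' fluentd-kubernetes-daemonset/fluentd-daemonset-syslog.yaml
```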

3. Apply the DS to the Kubernetes cluster: kubectl apply -f fluentd-kubernetes-daemonset/fluentd-daemonset-syslog.yaml

Verify success within the kube-system namespace: kubectl get ds -n kube-system

It should be listed alongside kube-proxy and whichever CNI you’re leveraging for your Kubernetes cluster; for me, that is Antrea.
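To confirm the fluentd pods themselves came up cleanly, you can check them by label (the `k8s-app=fluentd-logging` label is what the fluentd DaemonSet templates use at the time of writing; confirm against your manifest):

```shell
# One fluentd pod should be Running per node
kubectl get pods -n kube-system -l k8s-app=fluentd-logging -o wide

# Check the fluentd pods' own logs for connection errors to the syslog host
kubectl logs -n kube-system -l k8s-app=fluentd-logging --tail=20
```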

Testing to make sure it works

In order to test that the logs are shipping and being received, let’s deploy a simple webserver and send it a few requests. I’ve added the label app=nginx so that when we create the NodePort service, it will select this pod as its endpoint.

kubectl run nginx --image=nginx --restart=Never --labels=app=nginx

Then create a NodePort service so we can access the default webpage from Nginx. By default, this command names the service nginx and sets its selector to app=nginx, matching the label we applied above.

kubectl create svc nodeport nginx --tcp=80

Run kubectl get svc to see the port we need to access the test nginx webserver.
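Putting it together, a quick way to generate some requests from a cluster node itself (the jsonpath query just pulls the assigned NodePort; localhost works when you are on a node):

```shell
# Grab the randomly assigned NodePort for the nginx service
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')

# Fire a few requests so there is something to find in LogInsight
for i in 1 2 3; do curl -s -o /dev/null "http://localhost:${NODE_PORT}/"; done
```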

Okay! There should now be some HTTP requests we can view from LogInsight, which is acting as our syslog server via the fluentd DaemonSet running in our cluster!

After logging into LogInsight and selecting Interactive Analytics, a simple ‘http’ search in the search bar should show our Nginx logs.

There you have it! Logs are now flowing from our Kubernetes cluster into our existing LogInsight appliance, and we are able to search them.

You can match these against the logs being output within the Kubernetes cluster with the kubectl logs nginx command.

It’s not just our app logs that will be shipped, but Kubernetes logs as well. Within LogInsight’s Interactive Analytics window, filter the app name to fluentd and you should see all the logs being sent from the K8s cluster. For example, I had a failed postgres deployment, which can be seen in the screenshot below.

That is a lot of material, but the steps are fairly simple and easy thanks to the work done by the fluentd project.

In part 2 of this blog, we will look at creating some dashboards within LogInsight that will help us more easily monitor and analyze the logs coming in from the Kubernetes cluster.


Quick Start: Kubernetes Test Cluster w/ Antrea CNI

Recently, VMware announced an open source Kubernetes networking project called Antrea. This project uses Open vSwitch (more here) as the data plane for a compatible Container Network Interface (CNI). To run Kubernetes (k8s) clusters, you must provide a CNI to allow pod-to-pod communication. It is assumed the hosts (physical or VMs) making up the cluster are already networked together.

In this post, I’d like to go over setting up a single-node k8s cluster using Kubeadm on Ubuntu 18.04 with the Antrea CNI (latest versions). For me, this is an easy way to spin up a cluster to mess around with or do some quick testing. A couple of other ways that I’ve used and love are KinD (here) and simply enabling Kubernetes in Docker Desktop (probably the easiest way for most).

To start, you’ll need a single Ubuntu 18.04 machine. I’ve done this on AWS and with VMware Workstation on my laptop, and it worked well on both. The recommendation is at least 2 vCPU and 2 GB RAM (if you use the script below, the install will fail without these resources).

To prepare the Ubuntu machine for k8s we need to install Docker (original Docker doc):

sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
sudo apt update
apt-cache policy docker-ce
sudo apt install -y docker-ce

sudo usermod -aG docker $USER 

In the commands above, the Docker apt repository was added, then Docker was downloaded and installed. Then we added our current user to the docker group so we don’t have to use sudo with every Docker command. If someone knows differently, please let me know, but for me that has always required a restart to take effect, which we will do after downloading the rest of the required k8s components: the kubelet, kubeadm, and the kubectl CLI.

sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

sudo reboot

With that, we should have everything needed to continue. Before we begin the Kubeadm bootstrap, we need to ensure swap is turned off because it will cause us problems if we don’t.

#turn off swap
sudo swapoff -a 

#initialize master cluster
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 

#Remove taint from master to use that node
kubectl taint nodes --all node-role.kubernetes.io/master-

#get cluster credentials and copy them into our kube-config file
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

#apply antrea CNI network latest version
sudo kubectl apply -f https://raw.githubusercontent.com/vmware-tanzu/antrea/master/build/yamls/antrea.yml

In the code above, we turn off swap for this machine, and then kubeadm pulls down the images used to create our cluster components. This includes our API server, etcd database, controller manager, and scheduler.

After that, we initialize our cluster with the kubeadm init --pod-network-cidr= command. The network address range we pass in here will be used for our pods and controlled by Antrea, which we install in the next command. **Important: grab the join token given by Kubeadm if you want to grow your cluster with additional worker nodes!

After that, we simply remove the taint from the master node so that we can run our pod/container workloads on the same node. By default, a taint is applied to the master so that workloads do not interfere with the operation of our control plane; obviously the right thing to do when it matters!
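A quick sanity check after the taint removal (output will vary per cluster; the point is that the single node reports Ready and the taint is gone):

```shell
# Node should show Ready once Antrea is up
kubectl get nodes

# Control plane and Antrea pods should all be Running
kubectl get pods -n kube-system

# Confirm the master taint has been removed
kubectl describe node | grep -i taint
```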

Make it faster for me: https://github.com/fitz0017/k8s

I have the script broken into 2 parts, because I can’t get Docker to run properly without a full reboot. To run the scripts, log into your Ubuntu machine (2 vCPU and at least 2 GB RAM) and:

git clone https://github.com/fitz0017/k8s.git
source k8s/install_k8s_1.sh

At this point you may need to input your sudo password and, if doing this on Workstation, select ‘yes’ when asked whether to allow system services to be restarted.

When that completes, you should see a full reboot of your Ubuntu machine. So log yourself back in and:

source k8s/install_k8s.sh 

This will kick off the initialization of the cluster and application of the Antrea CNI. Again, make sure to copy the discovery token output at the end of initialization if you want to grow this cluster.

From here, when SSH’d into that machine, you have access to a k8s cluster for testing and learning! Please tell me if you notice any problems or give feedback in the comments.

Good luck!

KubeCon 2019 – VMware Recap

VMware was very busy this year at KubeCon with the announcement of three new open source projects, a new podcast with rockstar hosts, and presenting lots of sessions.  These new open source projects are in addition to the already very popular and widely adopted Velero, Contour, Sonobuoy and Octant.  The commitment and number of employees at VMware working in the cloud native and open source space is truly impressive.

The first announcement was Project Antrea, an open source CNI for Kubernetes based on Open vSwitch (OVS).  This project aims to deliver simple and secure Kubernetes networking.  One fantastic feature is its plugin for Octant, another developer-focused open source project that provides a very powerful GUI for visibility and management of Kubernetes applications.  With Project Antrea and Octant, you can get even more visibility into your microservices and connectivity.

The next project announced was Project Hamlet, a joint effort between VMware, Google Anthos, HashiCorp, and Pivotal to create an interoperable API for the federation of service meshes.  The end goal is an API that allows for interconnectivity of service meshes across heterogeneous cloud environments.

The third project is Crash Diagnostics for Kubernetes, which is a way to automate the investigation of unhealthy or unresponsive Kubernetes clusters.  It does this by automating the collection of diagnostics from all of the nodes within a cluster and bundling that into a TAR file for further analysis.

If that wasn’t enough, a new podcast, “The Podlets,” was announced: https://blogs.vmware.com/cloudnative/2019/11/20/introducing-podlets-podcast-audio-guide-to-cloud-native-concepts/

The hosts include an impressive list of experts in cloud native and distributed systems topics, and great all-around people.  This will be a great resource for keeping up with the latest news in this fast-paced ecosystem.  The direct link to “The Podlets” is thepodlets.io.  It will be available on the usual podcast distribution platforms, as well as the Cloud Native Applications YouTube channel here.

Weekly Update – Week of 11/18/2019

New and Noteworthy:
Announcing Project Antrea – Open Source Kubernetes Networking – We are excited to announce Project Antrea – an open source networking and security project for Kubernetes clusters. Antrea uses Kubernetes extension mechanisms and the Open vSwitch (OVS) data plane to provide pod networking and help enforce network policies for Kubernetes clusters.

Security Advisory VMSA-2019-0020 – VMware has released Hypervisor-Specific Mitigations for two speculative-execution vulnerabilities impacting Intel processors known as Machine Check Error on Page Size Change (MCEPSC) and TSX Asynchronous Abort (TAA) identified by CVE-2018-12207 and CVE-2019-11135 respectively. Please see this page for details

Updated KB Articles:
New KB articles published for the week ending 9 November 2019
New KB articles published for the week ending 2 November 2019

Upcoming Events:
Gartner IOCS – Las Vegas – 12/09/2019 to 12/12/2019 – Register
Dell Technologies World 2020 – Las Vegas – 05/04/2020 to 05/07/2020 – Register

Upcoming Webinars:
vSAN Encryption: Tales from the Field – 11/19/2019 – Register
Site Recovery Manager (SRM) 8.2: What’s New – 11/20/2019 – Register
What’s New with VMware Cloud Services – 11/21/2019 – Register
Instructor Hour covering ‘What’s New with VMware Cloud Services’ – 11/21/2019 – Register
vSAN View and Dashboard Development in vROps – 12/12/2019 – Register
[Full Live Event List]

New Releases:
2019-11-12
VMware Workstation 14.1.8 Pro for Windows [Download]
VMware Fusion 11.5.1 (for Intel-based Macs) [Download]
VMware Workstation 15.5.1 Pro for Windows [Download]
VMware Workstation 15.5.1 Pro for Linux [Download]

Weekly Update – Week of 10/7/2019

New and Noteworthy:
Grant Shipley, OpenShift boss, leaves Red Hat for VMware – This is great news for VMware and for our cloud-native strategy over the next couple of years, lending further legitimacy to the work that we’ve been doing in this space. This is a view shared outside of our circle of evangelists here at VMware – the CRN article linked above published the following quote from an industry executive:

“Look, VMware is going all in on Kubernetes. They acquired Heptio. They’re investing tons of money on containers and transforming really quickly into the modern developer leader and hybrid cloud leader. There’s endless potential for combining VMware and Pivotal,” said the CEO, who did not wish to be identified. “So if you see someone like [Shipley] leaving Red Hat and their new home at IBM to take over Kubernetes at VMware – that speaks volumes to where this market is heading.”

Updated KB Articles:
New KB articles published for the week ending 28th September, 2019

Upcoming Events:
vForum – Hartford, CT – 10/16/2019 – Register
vForum – Online – 10/16/2019 – Register
VMworld 2019 Europe – Barcelona – 11/04/2019 to 11/07/2019 – Register
KubeCon + CloudNativeCon – San Diego – 11/18/2019 to 11/21/2019 – Register
Gartner IOCS – Las Vegas – 12/09/2019 to 12/12/2019 – Register

Upcoming Webinars:
New Workspace ONE Features and Training Options – 10/10/2019 – Register
Instructor Hour Covering Workspace ONE – 10/10/2019 – Register
GSS Webinar: vSAN Best Practices from the Field – 10/29/2019 – Register
The Latest on Containers – 10/31/2019 – Register
Instructor Hour Covering Containers – 10/31/2019 – Register
Tanzu: Any App, Any Cloud, Any Cluster – 11/13/2019 – Register
What’s New with VMware Cloud Services – 11/21/2019 – Register
Instructor Hour covering ‘What’s New with VMware Cloud Services’ – 11/21/2019 – Register
[Full Live Event List]

New Releases:
No new releases since last week

vForum 2019 Events

As we put VMworld and its many exciting announcements in our rearview mirror, it’s time to focus on spreading the news for those who were unable to attend, and diving deeper into the technologies that were discussed during the big event. For many of you, your local account team will bring a lot of that messaging to you directly, but another way that VMware does this is through our local and online vForum events.

This year, these events will take place on October 16th. vForum is a great way to engage with technical experts and executives that you may not be able to meet with during your day-to-day dealings, network with other professionals in your geography, and gain additional insight into VMware’s strategy as we move into a container and cloud-centric world. Please read on to learn more about how you can participate both locally and online.

vForum Hartford – Wednesday, October 16, 2019

11:00 AM – 5:30 PM (ET)
Thomas Hooker Brewery
140 Huyshope Avenue
Hartford, CT 06106
REGISTER NOW

Join us for our free local vForum event that will be packed with technical deep dives, peer to peer networking, and fun. Reserve your spot today to join us on October 16th. You will have the opportunity to hear recaps of the key announcements from VMworld and engage 1:1 with VMware technical experts on the newest developments in NSX, vSAN, and Cloud.

Here’s why you should attend:
Watch a livestream keynote with Pat Gelsinger, VMware CEO, followed by an Office of the CTO Expert Panel
Engage with technical experts on deep technical content
Compete for limited edition prizes that include a VMware Lego Set and T-Shirt
Access our latest Hands-on Labs with your own device to compete for a special VMware jacket
Give Back to your community cancer mission while testing your basketball skills

vForum Online – Wednesday, October 16, 2019

9:00 AM – 3:00 PM PDT
12:00 PM – 6:00 PM EDT
Agenda at a Glance
Register Now!

Disruptive technologies are changing the way organizations are looking at cloud, networking, security, containers and the digital workspace to power their next wave of innovation. Join us at vForum Online, VMware’s largest virtual IT event for expert insight into:
Accelerating your cloud journey with VMware Cloud on AWS, vSphere Platinum, vSAN, Kubernetes and cloud-native apps.
Building the next generation network virtualization and security platform with NSX Data Center, SD-WAN by VeloCloud and App Defense.
Helping your employees work more easily and securely from anywhere, at any time, and on any device with Workspace ONE and Horizon.

Here’s why you should attend:
Exclusive thoughts and observations from the Office of the CTO Expert Panel and guest customers.
38 technical breakouts on building, running, managing, and securing business-critical applications on any cloud; deploying network and security virtualization; and delivering seamless access to apps and services with a secure, integrated digital workspace.
Live Q&A video chats with more than 130 VMware experts who are ready to answer your toughest questions on cloud migration, networking, security, storage and the digital workspace.
10 instructor-led Hands-On Labs where you can test drive vSphere, vSAN, VMware Cloud on AWS, NSX, and Workspace ONE.

Weekly Update – Week of 9/9/2019

New and Noteworthy:

Introducing Project Pacific – At VMworld we announced Project Pacific.  Project Pacific evolves vSphere to be a native Kubernetes platform.  This new architecture fuses vSphere with Kubernetes to enable our customers to accelerate development and operation of modern apps on vSphere.  It provides a foundation that gives developers and IT practitioners alike a secure, robust, and streamlined platform to run their modern applications.

VMware Project Pacific Architecture

New KB articles published for the week ending 31st August, 2019 (re-posted due to a broken link last week)

Upcoming Events:

CloudLive by CloudHealth – Boston – 09/24/2019 to 09/26/2019 – Register
VMUG UserCon – Boston – 09/25/2019 – Register
VMUG UserCon – NY/NJ – Jersey City – 09/27/2019 – Register
VMworld 2019 Europe – Barcelona – 11/04/2019 to 11/07/2019 – Register

Upcoming Webinars:

VVD for the vSphere Admin – 09/19/2019 – Register
NSX-V Upgrades and Best Practices – 09/24/2019 – Register
New Workspace ONE Features and Training Options – 10/10/2019 – Register
Instructor Hour Covering Workspace ONE – 10/10/2019 – Register

  [Full Live Event List]

New Releases:

2019-09-05
VMware Pulse IoT Center 2.0.0 [Release Notes] [Download]
2019-09-03
VMware Integrated OpenStack 6.0.0 [Release Notes] [Download]
VMware Smart Assurance SMARTS 10.1.0 [Release Notes] [Download]
VMware Smart Assurance NCM 10.1.0 [Release Notes] [Download]
2019-08-29
VMware Integrated OpenStack 5.1.0.3 [Release Notes] [Download]
VMware Integrated OpenStack 4.1.2.3 [Release Notes] [Download]
2019-08-28
Lenovo Custom Image for ESXi 6.7 U3 Install CD [Download]
2019-08-26
DellEMC Custom Image for ESXi 6.7U3 Install CD [Download]
2019-08-22
VMware Enterprise PKS Management Console [Download]
2019-08-21
Hitachi Custom Image for ESXi 6.5U3 GA Install CD [Download]

Kubernetes at VMware

What is the Strategy?

So many exciting announcements this year at VMworld have been around the cloud strategy of build, run, and manage. This strategy is outlined perfectly by Paul Fazzone here.

At the heart of these announcements is the integration of Kubernetes (K8s) into all things vSphere. With Project Pacific, Kubernetes will be embedded into vSphere to provide native K8s functionality within ESXi, as well as pure, open-source K8s clusters on demand for developers.

With Tanzu Mission Control, VMware is enabling companies to manage their K8s clusters from a single location, bringing together operations and developers, and creating a single point of management to apply policies and governance to clusters deployed across a variety of environments on-premises and in public clouds.

Free Open-Source Kubernetes Training

With the increasing importance of Kubernetes to all IT professionals, it is important to provide resources that enable people to master this new skill set. In that vein, another amazing announcement was free, vendor-agnostic training for open-source Kubernetes from VMware, available at kubernetes.academy. These courses provide a fantastic overview of containers and Kubernetes, led by highly experienced instructors who have been deep in this ecosystem since the beginning.

Sign up today and start up-leveling your skills!