Accessing Logs in Kubernetes

Before getting into dashboards for LogInsight, this blog post will briefly go through how to access the different logs stored in a Kubernetes cluster without using tools like Fluentd and a log aggregation service (assuming kubeadm was used to bootstrap the cluster).  This is a great way to really get under the covers and see what’s happening within your Kubernetes cluster!

Accessing control plane component logs —

Using the kubectl command line, pod logs are available via the `kubectl logs <pod-name>` command.  This applies to any pod, including the cluster control plane components, which run as static pods in the kube-system namespace.  *Note: for the control plane components, the kube-system namespace must be specified.

To access the other control plane component logs, simply use their pod names. First, list them with kubectl get pods -n kube-system and then run kubectl logs <pod name here> -n kube-system . Every deployment will have different suffixes on these static pod names.

For example, here is accessing the etcd pod logs on my test cluster:
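Something along these lines (the etcd pod’s exact suffix varies per deployment, so the pod name below is a placeholder):

```shell
# List the control plane pods to find the full etcd pod name
kubectl get pods -n kube-system

# Then view its logs; "etcd-k8s-master" is a placeholder name here
kubectl logs etcd-k8s-master -n kube-system --tail=20
```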

Accessing kubelet logs — 

The kubelet is responsible for interacting with the container engine (Docker in this case) and the kube-apiserver, so a lot of good information is stored in these logs. If the nodes of the Kubernetes cluster are running systemd, then kubelet logs are written to journald and can be accessed via journalctl.  Otherwise, they are written to a .log file in the /var/log/ directory.

Kubeadm deployment using an Ubuntu 18.04 node:

journalctl --unit kubelet

Since the kubelet is running as a service under systemd control, the logs are accessible via journalctl as shown above. 
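A couple of handy journalctl variations for narrowing down kubelet output (these are standard journalctl flags):

```shell
# Follow kubelet logs live, similar to tail -f
journalctl --unit kubelet --follow

# Only warnings and errors from the current boot
journalctl --unit kubelet --boot --priority warning
```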

Accessing pod/application logs — 

To show that this works with applications as well, there is an Nginx pod running with a NodePort service exposing it.

To access the logs for this pod —

kubectl logs nginx

* Note: there was no need to specify a namespace here because the pod was deployed in the default namespace. 

And we have logs!  The access logs from my browser are visible in the output.  

If you are in a situation where you have multiple containers within a pod, the syntax to choose which container’s logs to view is: 

kubectl logs <pod name> -c <container name>
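A few related flags that help once pods have more than one container (all standard kubectl options):

```shell
# Logs from every container in the pod at once
kubectl logs <pod name> --all-containers=true

# Stream logs from one container, limited to the last 50 lines
kubectl logs <pod name> -c <container name> --follow --tail=50
```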

That does it for this quick post on accessing cluster and application logs. In a previous post, I covered getting up and running with Fluentd running as a DaemonSet agent on every node and forwarding all of these logs to vRealize LogInsight (a log aggregator) for analysis and storage outside of the cluster. Next post will be on LogInsight dashboards and queries using the Interactive Analytics dashboard.

Sources of truth:

Custom Script Monitoring with vRealize Operations 8.0

One of the cool new features that VMware introduced in vRealize Operations 7.5 was the ability to deploy agents to monitor the operating systems and applications inside your virtual machines. With vRealize Operations 8.0, we have added the ability to run custom scripts using the Application Monitoring agent and collect the script output as a metric. This provides a lot of flexibility and robustness to our in-guest monitoring feature, since now you can monitor any information that can be pulled by running a script inside your operating system. 

In this blog, I will show off a simple bash script that checks for security patches in an Ubuntu VM and then passes that metric to vRealize Operations, where we can create an alert to let us know if there are any patches available for our OS. This lets us centralize our Linux patch management in vRealize Operations, and lets us correlate our patching with other metrics collected by vRealize Operations to do things like patch when the system is least busy, or when our app is least busy as reported by the application monitoring features in vRealize Operations.  
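As a sketch of what such a script could look like: on stock Ubuntu, the update-notifier checker prints a "total;security" pair, and the agent collects whatever the script writes to stdout. The path and output format here are assumptions based on a standard Ubuntu install, not the exact script from the post.

```shell
#!/bin/bash
# Hypothetical patch-check sketch for the vROps Application Monitoring agent.

# Extract the security-update count from apt-check's "total;security" output.
parse_security_count() {
  echo "$1" | cut -d';' -f2
}

# On a real Ubuntu VM you would use:
#   counts=$(/usr/lib/update-notifier/apt-check 2>&1)
counts="45;12"                     # sample output: 45 updates total, 12 security
parse_security_count "$counts"     # the agent reads this value as the metric
```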

Weekly Update – Week of 2/3/2020

New and Noteworthy:
Following some changes to our team we took a bit of a hiatus from the weekly update component of the site, so apologies for the lack of news over that time. Since we last posted an update, there have been some major changes at VMware, most notably the completion of the acquisition of Pivotal. This acquisition is key to our mission of solving business problems through software, as Pivotal’s software stack combined with our Tanzu portfolio allow us to help developers accelerate deployment of new applications in the hybrid cloud. You can read more about the Pivotal acquisition here.

Additionally, we acquired a company called Nyansa, a producer of a highly-regarded AI-based network analytics platform. Nyansa will be tightly integrated with our VeloCloud offering, and will enable better end-to-end monitoring of the SD-WAN platform as well as the Virtual Cloud Network, enabling us to offer a truly self-healing network. Read more about VMware’s acquisition of Nyansa here.

Finally, our Workspace One offering was chosen as the leader by IDC in three separate End User Computing (EUC) vendor assessments: Unified Endpoint Management, Mobility Management, and IoT/Ruggedized Device Management. This likely isn’t a surprise to anyone who has used the software, as Workspace One is one of the most transformational platforms that I have ever been exposed to. It completely changed the way that I work (for the better) upon joining VMware, and really completes our mission of allowing users to securely access any application, running on any cloud, from any device, whenever they want it. Read more about our wins from IDC here.

Updated KB Articles:
New KB articles published for the week ending 2 February, 2020
New KB articles published for the week ending 26 January, 2020
New KB articles published for the week ending 19 January, 2020
New KB articles published for the week ending 12 January, 2020
New KB articles published for the week ending 5 January, 2020

Upcoming Events:
Dell Technologies World 2020 – Las Vegas – 05/04/2020 to 05/07/2020 – Register

Upcoming Webinars:
Go Beyond Break/Fix with Skyline – 13 Feb, 2020 – Register
3 Strategies to Advance your Career with Training – 19 Feb, 2020 – Register
Implementing VM Storage Policies with vSAN – 20 Feb, 2020 – Register
Go Beyond Break/Fix with Skyline – 27 Feb, 2020 – Register
Multi-Cloud Load Balancing 101 – 18 March, 2020 – Register
[Full Live Event List]

New Releases:

VMware Workspace ONE Access 20.01 [Release Notes] [Download]

VMware vRealize Network Insight 5.1.0 [Release Notes] [Download]
VMware Tools 11.0.5 [Release Notes] [Download]
VMware App Volumes 4.0 [Release Notes] [Download]
VMware vCloud Director for Service Providers [Release Notes] [Download]
VMware Cloud Foundation 3.9.1 Downloads [Release Notes] [Download]
VMware App Volumes 4 [Release Notes] [Download]

VMware Horizon 7.10.1 Standard (ESB Release) [Release Notes] [Download]
VMware Horizon 7.10.1 Advanced (ESB Release) [Release Notes] [Download]
VMware Horizon 7.10.1 Enterprise (ESB Release) [Release Notes] [Download]
VMware Horizon 7.10.1 Enterprise Add-On [Release Notes] [Download]
VMware Horizon 7.5.4 Standard (ESB Release) [Release Notes] [Download]
VMware Horizon 7.5.4 Advanced (ESB Release) [Release Notes] [Download]
VMware Horizon 7.5.4 Enterprise (ESB Release) [Release Notes] [Download]
VMware Horizon 7.5.4 Enterprise Add-On [Release Notes] [Download]

VMware vRealize Orchestrator Appliance 8.0.1 [Release Notes] [Download]
VMware vRealize Suite Lifecycle Manager 8.0.1 [Release Notes] [Download]
VMware vRealize Operations 8.0.1 [Release Notes] [Download]
VMware vRealize Automation 8.0.1 [Release Notes] [Download]
VMware vCenter Server 6.5U3f [Release Notes] [Download]
VMware NSX-T Data Center 2.5.1 [Release Notes] [Download]
VMware NSX Intelligence 1.0.1 [Release Notes] [Download]
VMware NSX Cloud 2.5.1 [Download]

VMware Dynamic Environment Manager 9.10.0 [Release Notes] [Download]
VMware Horizon 7.11.0 Enterprise [Release Notes] [Download]
VMware Horizon Apps Advanced 7.11.0 [Release Notes] [Download]
VMware Horizon 7.11.0 Subscription [Release Notes] [Download]
VMware Horizon Apps 7.11.0 Subscription [Release Notes] [Download]
VMware Unified Access Gateway 3.8 [Release Notes] [Download]
VMware Horizon 7.11.0 Standard [Release Notes] [Download]
VMware Horizon 7.11.0 Advanced [Release Notes] [Download]
VMware Horizon 7.11.0 Enterprise Add-On [Release Notes] [Download]

VMware Unified Access Gateway 3.7.2 [Release Notes] [Download]

VMware vCenter Server 6.7U3b [Release Notes] [Download]
VMware vSphere Hypervisor (ESXi) 6.7U3b [Release Notes] [Download]

Leveraging LogInsight for Kubernetes

As part of responsibly running applications, it’s important to have all the supporting Day 2 operations covered. That way, when something goes bump in the night, you’re immediately prepared and able to quickly find the source of the issue. Logging is one critical component of this overall architecture. Many shops already run mature logging processes with vRealize LogInsight to support their vSphere infrastructures. Wouldn’t it be great to use this existing logging setup for your Kubernetes clusters? You can!

Note: If you’d like help setting up a simple, single node test cluster see this blog.

Setting It Up

Fluentd is an open source project that provides a “unified logging layer.” It is a great project that provides a lot of capabilities, outside of Kubernetes as well. For our purposes, it will be deployed as a DaemonSet within our Kubernetes cluster to provide log collection and shipping to our vRealize LogInsight Appliance.

Luckily for us, the project maintains a set of templates that make it very easy to deploy fluentd as a DaemonSet within a Kubernetes cluster. Remember, a DaemonSet (DS) is a Kubernetes capability that ensures we always have a pod of this type running on every node within our cluster. Perfect for the logging use case.

Github for templates:

For our implementation with LogInsight, we will be using the Kubernetes syslog template.

If you click on that file, you will see the manifest file that shows the configuration that will be deployed into the Kubernetes cluster.

You can see that it will:

  • Create a ServiceAccount and ClusterRole for fluentd
  • Deploy as a DaemonSet
  • Deploy into the kube-system namespace
  • Pull the container image from Fluent’s repository

Within the manifest file, the parameters that we need to change are only the IP address and desired port for our LogInsight Appliance.
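Those settings live in the DaemonSet’s env section. The snippet below is a sketch of the relevant portion of the template: the SYSLOG_HOST variable name appears in the template itself, while the port variable name, IP, and port shown are example values to replace with your own.

```yaml
        env:
          - name: SYSLOG_HOST
            value: "192.168.10.25"   # your LogInsight appliance IP (example)
          - name: SYSLOG_PORT
            value: "514"             # standard syslog port (example)
```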

Once you change the value field to the LogInsight IP address, you can simply apply that yaml file to deploy fluentd to the cluster! This will automatically create the DS and start shipping logs to your LogInsight Appliance.

Step by step for the deployment (this assumes you have your cluster up and running and your kubeconfig set up):

1. git clone

2. Use a text editor to change the syslog template file to have correct value for your LogInsight Appliance

sudo vim fluentd-kubernetes-daemonset/fluentd-daemonset-syslog.yaml

Edit the value field under SYSLOG_HOST to the LogInsight IP and save (Esc, then :wq!).

3. Apply the DS to the Kubernetes cluster:

kubectl apply -f fluentd-kubernetes-daemonset/fluentd-daemonset-syslog.yaml

Verify the success within the kube-system namespace:

kubectl get ds -n kube-system

It should be listed along with kube-proxy and whichever CNI you’re leveraging for your Kubernetes cluster; for me, that is Antrea.

Testing to make sure it works

In order to test that the logs are shipping and being received, let’s deploy a simple webserver and send it a few requests. I’ve added the label app=nginx so that when we create the NodePort service, it will select this pod as its endpoint.

kubectl run nginx --image=nginx --restart=Never --labels=app=nginx

Then create a NodePort service so we can access the default webpage from Nginx. By default, this command creates a service named nginx with app=nginx as its selector.

kubectl create svc nodeport nginx --tcp=80

kubectl get svc

This will show us the port we need to access the test nginx webserver.
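With the port in hand, a quick way to generate some access-log entries is to curl the NodePort directly. The jsonpath query below is standard kubectl; substitute any cluster node’s address for the placeholder.

```shell
# Grab the NodePort assigned to the nginx service
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')

# Hit the webserver a few times to generate access-log entries
for i in 1 2 3; do curl -s "http://<node-ip>:${NODE_PORT}" > /dev/null; done
```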

Okay! There should now be some HTTP requests we can view in LogInsight, which is acting as our syslog server via the fluentd DaemonSet running in our cluster!

Log into LogInsight, select Interactive Analytics, and a simple ‘http’ search in the search bar should show our Nginx logs.

There you have it! Logs are now flowing from our Kubernetes cluster into our existing LogInsight appliance, and we are able to search them.

You can match these against the logs being output within the Kubernetes cluster with the kubectl logs nginx command.

It’s not just our app logs that will be shipped, but Kubernetes logs as well. Within LogInsight and the Interactive Analytics window, filter the app name to fluentd and you should see all the logs being sent from the K8s cluster. For example, I had a failed postgres deployment which can be seen in the screenshot below.

That is a lot of material, but the steps are fairly simple and easy thanks to the work done by the fluentd project.

In part 2 of this blog, we will look at creating some dashboards within LogInsight that will help us more easily monitor and analyze the logs coming in from the Kubernetes cluster.

Helpful source docs:

AWS re:Invent 2019 Recap

Amazon AWS introduced almost eighty new services or service enhancements this year at re:Invent. Let’s go over a few of the more important ones.


Serverless was one of the main focuses of re:Invent 2019. The big announcement was the launch of ‘provisioned concurrency’ for Lambda. Currently, there is some latency the first time a Lambda function is invoked because of ‘cold starts’: the containers that do the processing for your functions need to initialize in the background. Provisioned concurrency mitigates this by allocating a pool of pre-initialized Lambda containers in the background, which should allow for better latency when a function is invoked for the first time.
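For reference, provisioned concurrency can be enabled from the AWS CLI. This sketch assumes a function named my-function with a published version 1; substitute your own names.

```shell
# Keep 10 pre-initialized execution environments warm for version 1
aws lambda put-provisioned-concurrency-config \
  --function-name my-function \
  --qualifier 1 \
  --provisioned-concurrent-executions 10
```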

Link to announcement:

A few other major announcements in the serverless compute space:


IAM Access Analyzer was the biggest security announcement from re:Invent. This new feature continuously analyzes the resource policies in your account and flags any that grant access to outside entities, so when a policy violates your security and access standards, it can be remediated faster.

Link to announcement:

Other major releases and announcements in the security space:


ARM-based compute is the coolest thing that came out of the major compute announcements. Graviton processors, custom ARM-based CPUs designed by Amazon, can perform almost as well as x86 CPUs at a fraction of the cost.

Link to announcement:

Other major releases and announcements in the AWS compute space:


The big announcement here seems to be AWS Wavelength, which embeds AWS services in the datacenters of telecommunications providers. This will provide very low latency for latency-sensitive applications.

Link to Announcement:

Other major networking releases and announcements:



AWS Outposts was announced in 2018 but is now generally available. This allows for true hybrid cloud functionality, with AWS services both on-premises and in the public cloud. VMware also offers VMware Cloud on AWS Outposts for customers that want to bring the strengths of AWS and VMware together in their datacenters.

Link to Outposts GA announcement:

AWS re:Invent 2019 Keynotes & Further Announcements

If you’re interested in watching any of the keynotes, the re:Invent 2019 YouTube channel has them all here:

Announcements for the dozens of other new technologies we didn’t cover here can be found on the 2019 re:Invent announcement page:

That’s about everything we’re going to cover. There was so much more announced this year, but these are what I think the key highlights are. Thanks for reading!

Quick Start: Kubernetes Test Cluster w/ Antrea CNI

Recently, VMware announced an open source Kubernetes networking project called Antrea. This project uses Open vSwitch (more here) as the data plane for a compatible Container Network Interface (CNI). To run Kubernetes (k8s) clusters, you are required to provide a CNI to allow for pod-to-pod communication. It is assumed the hosts (physical or VMs) making up the cluster are already networked together.

In this post, I’d like to go over setting up a single-node k8s cluster using Kubeadm on Ubuntu 18.04 with the Antrea CNI (latest versions of each). For me, this is an easy way to spin up a cluster to mess around with or do some quick testing. A couple of other ways that I’ve used and love are KinD (here) and simply enabling it in Docker for Desktop (probably the easiest way for most).

To start, you’ll need a single Ubuntu 18.04 machine. I’ve done this on AWS and using VMware Workstation on my laptop, and it worked well on both. The recommendation is to make sure you have 2 vCPUs and 2 GB of RAM (if you use the script below, the install will fail without these resources).

To prepare the Ubuntu machine for k8s we need to install Docker (original Docker doc):

sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
sudo apt update
apt-cache policy docker-ce
sudo apt install -y docker-ce

sudo usermod -aG docker $USER 

In the commands above, the Docker repository was added to apt, then Docker was downloaded and installed. Then we added our current user to the docker group so we don’t have to use sudo with all the Docker commands. If someone knows differently, please let me know, but for me that has always required a restart to take effect. We will do that after we download the rest of the required k8s system components, the CLI tools and kubelet.

sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

sudo reboot

With that, we should have everything needed to continue. Before we begin the Kubeadm bootstrap, we need to ensure swap is turned off because it will cause us problems if we don’t.

#turn off swap
sudo swapoff -a 

#initialize master cluster
sudo kubeadm init --pod-network-cidr= 

#Remove taint from master to use that node
kubectl taint nodes --all node-role.kubernetes.io/master-

#get cluster credentials and copy them into our kube-config file
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

#apply antrea CNI network latest version
sudo kubectl apply -f

In the code above, we turn off swap on the machine, and then kubeadm pulls down the images used to create our cluster components. This includes our API server, etcd database, controller manager, and scheduler.

After that, we initialize our cluster with the kubeadm init --pod-network-cidr= command. The network address range we pass in here will be used for our pods and controlled by Antrea, which we are installing in the next command. **Important: grab the token given by Kubeadm if you want to grow your cluster with additional worker nodes!

After that, we are simply removing the taint from the master node so that we can run our pod/container workloads on the same node. By default, a taint is applied to the master so that workloads do not interfere with the operation of our control plane, which is obviously the right thing to do when it matters!

Make it faster for me:

I have the script broken down into 2 parts because I can’t get Docker to run properly without a full reboot. To run the scripts, log into your Ubuntu machine that has 2 vCPUs and at least 2 GB of RAM and:

git clone
source k8s/

At this point you may need to input your sudo password and select ‘yes’ when asked whether system services may be restarted, if doing this on Workstation.

When that completes, you should see a full reboot of your Ubuntu machine. So log yourself back in and:

source k8s/ 

This will kick off the initialization of the cluster and application of the Antrea CNI. Again, make sure to copy the discovery token output at the end of initialization if you want to grow this cluster.

From here, when SSH’d into that machine, you have access to a k8s cluster for testing and learning! Please tell me if you notice any problems or give feedback in the comments.

Good luck!

Weekly Update – Week of 12/09/2019

Updated KB Articles:
New KB articles published for the week ending 1 December, 2019

Upcoming Events:
Gartner IOCS – Las Vegas – 12/09/2019 to 12/12/2019 – Register
Dell Technologies World 2020 – Las Vegas – 05/04/2020 to 05/07/2020 – Register

Upcoming Webinars:
vSAN View and Dashboard Development in vROps – 12/12/2019 – Register
vCenter Upgrades, What’s in it for You? – 12/17/2019 – Register
[Full Live Event List]

New Releases:
VMware Horizon Cloud Connector [Download]

Weekly Update – Week of 12/2/2019

New and Noteworthy:
VMware Cloud on AWS Outposts Enters Beta – At AWS re:Invent 2019, VMware is announcing the VMware Cloud on AWS Outposts Beta program. We are beginning the process for Beta nominations, so if you have an interest in expanding your AWS capabilities to your on-premises datacenter, definitely reach out to your solutions engineer ASAP. For those unfamiliar with the solution, VMware Cloud on AWS Outposts is a jointly engineered on-premises as-a-service offering, powered by VMware Cloud Foundation. It integrates our Software-Defined Data Center software that runs on next-generation, dedicated, elastic Amazon EC2 bare-metal infrastructure, delivered on-premises with optimized access to local and remote AWS cloud services.

Updated KB Articles:
New KB articles published for the week ending 1 December, 2019

Upcoming Events:
Gartner IOCS – Las Vegas – 12/09/2019 to 12/12/2019 – Register
Dell Technologies World 2020 – Las Vegas – 05/04/2020 to 05/07/2020 – Register

Upcoming Webinars:
vSAN View and Dashboard Development in vROps – 12/12/2019 – Register
vCenter Upgrades, What’s in it for You? – 12/17/2019 – Register
[Full Live Event List]

New Releases:
VMware Horizon Cloud Connector [Download]

Weekly Update – Week of 11/25/2019

New and Noteworthy:
Google buys CloudSimple – Google recently announced that they have completed their acquisition of CloudSimple, the leading VMware MaaS (Metal-as-a-Service) provider in Azure and Google’s Cloud Platform. “We believe in a multi-cloud world and will continue to provide choice for our customers to use the best technology in their journey to the cloud,” Rich Sanzi, a vice president of engineering at Google, wrote in a blog post on Monday. This appears to cement Google’s belief in the demand for VMware’s Cloud Foundation platform in the public cloud, but it will be interesting to see how Microsoft responds to the move.

VMware Reports Earnings on Tuesday, 11/26 – VMware will report earnings on 11/26, with analysts including RBC Capital expecting another quarter of strong results. VMware’s stock is up 14% since our last earnings call on 8/22.

Updated KB Articles:
New KB articles published for the week ending 17 November, 2019

Upcoming Events:
Gartner IOCS – Las Vegas – 12/09/2019 to 12/12/2019 – Register
Dell Technologies World 2020 – Las Vegas – 05/04/2020 to 05/07/2020 – Register

Upcoming Webinars:
vSAN View and Dashboard Development in vROps – 12/12/2019 – Register
[Full Live Event List]

New Releases:
VMware Horizon Cloud Connector [Download]