In my last post, I discussed the roles of the Kubernetes master components: etcd, API Server, Controller Manager, and Scheduler. Now, let’s dive into the Kubernetes node components.
Kubernetes Node Components
Basically, a Kubernetes node runs the Kubelet and Service Proxy components as well as a container engine such as Docker or rkt, which in turn runs your containerized applications and makes you and your customers very happy (lol).
The Service Proxy (kube-proxy) runs on each node and watches the API Server for changes to Service and Pod definitions, keeping the network configuration up to date so that every pod can talk to every other pod, every node can talk to every other node, and every container can talk to every other container. It also exposes Kubernetes services by manipulating iptables rules to trap traffic sent to service IPs and redirect it to the correct backends. That’s why you can access a NodePort service using any node’s IP: even if the node you hit runs none of the service’s pods, it is already set up with the appropriate iptables rules to redirect your request to a correct backend. This provides a highly available load-balancing solution with low performance overhead.
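Conceptually, you can picture the state kube-proxy maintains as a lookup table from a service (or NodePort) to the pod endpoints currently backing it, with one backend picked per connection. Here is a minimal Python sketch of that idea — the service name, IPs, and ports are made up for illustration, and real kube-proxy does this with iptables rules, not application code:

```python
import random

# Hypothetical state kube-proxy builds by watching the API Server:
# each service/NodePort maps to the pod endpoints currently backing it.
endpoints = {
    "my-service:30080": ["10.244.1.5:8080", "10.244.2.7:8080"],
}

def route(service_key: str) -> str:
    """Pick a backend for a new connection, as the iptables rules
    programmed by kube-proxy effectively do (random selection)."""
    backends = endpoints[service_key]
    return random.choice(backends)

# Any node holding this table can answer for the NodePort,
# regardless of whether a backing pod runs locally.
print(route("my-service:30080"))
```

Because every node keeps the same table in sync with the API Server, it doesn’t matter which node receives the request.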
The Kubelet is one of the most important components in Kubernetes. Basically, it’s an agent that runs on each node, watches the API Server for pods bound to its node, and makes sure those pods are running (it talks to the Docker daemon through the API over the Docker socket to manage container lifecycles). It then reports the status of those pods back to the API Server.
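The core of that behavior is a reconcile loop: compare what the API Server says should be running on the node against what the container runtime reports, then act on the difference. A toy sketch in Python — the pod names and the action format here are invented for illustration, not the Kubelet’s actual internals:

```python
# Toy reconcile pass in the spirit of the Kubelet: compare the pods the
# API Server has bound to this node (desired) against what the container
# runtime reports (actual), and compute the actions needed to converge.

def reconcile(desired: set, actual: set) -> list:
    actions = []
    for pod in sorted(desired - actual):
        actions.append(("start", pod))  # bound to this node but not running
    for pod in sorted(actual - desired):
        actions.append(("stop", pod))   # running but no longer desired
    return actions

desired_pods = {"web-1", "web-2", "cache-1"}
running_pods = {"web-1", "old-job-1"}
print(reconcile(desired_pods, running_pods))
# -> [('start', 'cache-1'), ('start', 'web-2'), ('stop', 'old-job-1')]
```

After acting on those differences, the Kubelet reports the resulting pod statuses back to the API Server, closing the loop.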
The main Kubelet responsibilities include:
Run the pod’s containers.
Report the status of the node and of each pod to the API Server.
Run container probes.
Retrieve container metrics from cAdvisor, aggregate them, and expose them through the Kubelet Summary API for components (such as Heapster) to consume.
The last responsibility listed above will change in the future, as container stats collection moves away from cAdvisor toward the Container Runtime Interface (CRI).
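Container probes are worth a concrete example: they are declared in the pod spec but executed by the Kubelet on the node, which restarts the container when a liveness probe fails. A minimal example — the pod name, image, path, and timings below are illustrative, not prescriptive:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo          # hypothetical pod name
spec:
  containers:
  - name: app
    image: nginx               # any image serving an HTTP endpoint
    livenessProbe:
      httpGet:                 # the Kubelet performs this GET against the container
        path: /
        port: 80
      initialDelaySeconds: 5   # wait before the first probe
      periodSeconds: 10        # probe interval
```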
The Kubelet also starts an internal HTTP server on port 10255 and exposes some read-only endpoints (mostly for debugging and stats), such as /metrics, /metrics/cadvisor, /pods, /spec, and so on. One-off container operations such as kubectl logs and kubectl exec are also served by the Kubelet, through its main port (10250).
The Kubelet ships with built-in support for cAdvisor, which collects, aggregates, processes and exports metrics (such as CPU, memory, file and network usage) about running containers on a given node. cAdvisor includes a built-in web interface available on port 4194 (just open your browser and navigate to http://<node-ip>:4194/).
You can view the cAdvisor /metrics endpoint by issuing a GET request to http://<node-ip>:4194/metrics.
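The /metrics endpoint serves metrics in the Prometheus text exposition format. As a rough illustration of what comes back and how you might pick a value out of it — the sample lines and numbers below are fabricated, not real cAdvisor output:

```python
# Fabricated sample in Prometheus text format, standing in for what
# `curl http://<node-ip>:4194/metrics` would return from cAdvisor.
sample = """\
# HELP container_cpu_usage_seconds_total Cumulative CPU time consumed.
# TYPE container_cpu_usage_seconds_total counter
container_cpu_usage_seconds_total{name="web-1"} 42.5
container_memory_usage_bytes{name="web-1"} 10485760
"""

def parse_metrics(text: str) -> dict:
    """Tiny parser: keep 'name{labels} value' lines, skip # comments."""
    result = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        key, value = line.rsplit(" ", 1)
        result[key] = float(value)
    return result

metrics = parse_metrics(sample)
print(metrics['container_cpu_usage_seconds_total{name="web-1"}'])  # -> 42.5
```

In practice you would point a scraper such as Prometheus at this endpoint rather than parse it by hand; the sketch just shows the shape of the data.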
If this article was helpful to you, please like and share it with your friends. Plus, follow me on LinkedIn, Medium and subscribe to jorgeacetozi.com to get notified when a new article is published!
This article was based on the contents of my book Continuous Delivery for Java Apps: Build a CD Pipeline Step by Step Using Kubernetes, Docker, Vagrant, Jenkins, Spring, Maven, and Artifactory. I invite you to download and read the free sample (110 pages) by clicking on Read Free Sample and choosing your preferred format (PDF, EPUB, MOBI, or WEB).
Please check out my other articles as well!
Thank you very much for your time!
Jorge Acetozi is a software engineer who spends almost his whole day having fun with things such as AWS, Kubernetes, Docker, Terraform, Ansible, Cassandra, Redis, Elasticsearch, Graylog, New Relic, Sensu, Elastic Stack, Fluentd, RabbitMQ, Kafka, Java, Spring, and much more! He loves deploying applications in production while thousands of users are online, monitoring the infrastructure, and acting quickly when the monitoring tools decide to challenge his heart’s health!