Let's Explore Kubernetes

Summary

Learning Objectives

  • Summarise key content around Containers, Workloads and Kubernetes
  • Recap the full process of setting up Kubernetes and how a user request flows through the system
  • Provide useful follow-on links and documentation to continue your learning

Conclusion

We have learnt that Kubernetes is a Container Orchestration system which can be used to efficiently manage and deploy many containers at scale. In order to set up Kubernetes successfully, we require:

  • Application code to be packaged with its dependencies (containerised)
  • Predefined Workloads describing how each request to the code can be executed
  • A Kubernetes Control Plane and Worker Nodes to run the Workloads on
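
If you already have a cluster to hand (for example one created with minikube or kind), a quick way to confirm the Control Plane is reachable and the Worker Nodes are healthy is to list the nodes through the Kubernetes API. The sketch below is one way to do this with the official Kubernetes Python client, assuming a kubeconfig file is already in place:

```python
# A minimal readiness check, assuming you already have a cluster and a kubeconfig
# (for example from minikube or kind). Uses the official Kubernetes Python client:
# pip install kubernetes
from kubernetes import client, config

config.load_kube_config()        # reads credentials from ~/.kube/config
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    # Each Node reports conditions; the "Ready" condition tells us its Kubelet is healthy.
    ready = next((c.status for c in node.status.conditions if c.type == "Ready"), "Unknown")
    print(f"{node.metadata.name}: Ready={ready}")
```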

Download our infographic here to refer back to the core concepts learnt on this platform.

Final Recap

Let's Explore Kubernetes one final time, following the end-to-end process. We can remind ourselves of the preparation activities needed by the developers, and follow how a user request travels through the system and is managed by the Control Plane.

Containerise the application

Developers write their own Dockerfile to create the Image for the parts of the application they have written themselves, and use community or vendor Images for the parts they don’t want to create from scratch. The resulting containers are grouped together in a Pod.
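
As a rough sketch of what this step can look like in practice, the snippet below builds an Image from a Dockerfile and pulls a community Image using the Docker SDK for Python. The image names and build path are hypothetical examples:

```python
# A sketch only: building our own Image from a Dockerfile and pulling a community
# Image, using the Docker SDK for Python (pip install docker). The image names and
# build path are hypothetical examples.
import docker

docker_client = docker.from_env()    # connect to the local Docker daemon

# Build the application code we wrote ourselves into an Image.
image, build_logs = docker_client.images.build(
    path=".",                                 # directory containing the Dockerfile
    tag="example/noise-reduction:1.0",
)

# Pull a community Image for a component we don't want to build from scratch.
redis_image = docker_client.images.pull("redis", tag="7")

print(image.tags, redis_image.tags)
```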

Gather recipes for Tommy’s Burger

Tommy creates his secret patty recipe and gives it to the prep chefs in the kitchen, who gather it together with any other recipes needed to make the components of Tommy’s burger.

Define a workload

Kubernetes needs to know how to handle each type of request made to the application, so developers define this in a Workload. They include details such as how many containers are needed and what customisations can be made.
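
Workloads are normally written as YAML manifests, but to keep all of the examples here in one language, the sketch below builds an equivalent Deployment object with the official Kubernetes Python client. The names, image and environment variable are hypothetical; the key point is that the Workload states how many containers are needed and what can be customised:

```python
# A sketch of a Workload definition: a Deployment asking Kubernetes to keep two
# replicas of a hypothetical noise-reduction container running. Workloads are
# usually YAML manifests; this builds the equivalent object in Python.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="noise-reduction"),
    spec=client.V1DeploymentSpec(
        replicas=2,                                    # how many containers are needed
        selector=client.V1LabelSelector(match_labels={"app": "noise-reduction"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "noise-reduction"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="noise-reduction",
                        image="example/noise-reduction:1.0",
                        # environment variables are one way to expose customisations
                        env=[client.V1EnvVar(name="EFFECT_STRENGTH", value="medium")],
                    )
                ]
            ),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```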

Write the menu

The prep chefs write a detailed menu which includes a description of how each dish is made and what can be customised in each dish.

User makes request

A user submits a request to carry out a batch job, such as applying the 'noise reduction' audio effect to their audio file.
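
From the user's point of view this is just an ordinary request to the application; Kubernetes stays invisible. Purely as an illustration (the endpoint, file and parameters below are made up), such a request might look like this:

```python
# Illustrative only: how a user request might reach the application.
# The endpoint and parameters are hypothetical and not part of Kubernetes itself.
import requests

with open("recording.wav", "rb") as audio_file:
    response = requests.post(
        "https://audio.example.com/effects",     # hypothetical application API
        files={"file": audio_file},
        data={"effect": "noise-reduction"},
    )

print(response.status_code)
```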

Customer places order

Two customers place an order to the receptionist on the front desk. One asks for Set Menu 2= burger (with extra bacon) + dessert + drink and the other asks for Set Menu 1= burger + salad + drink.

Kube API captures request

The Kubernetes API captures the request and writes it to the etcd database, where the Workload definitions are stored. The Workload tells the system what the desired state is (in this case, a Pod running the 'noise reduction' audio effect container).
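
Because the API Server is the only component that reads and writes etcd, every other component asks it for the desired state rather than going to etcd directly. This sketch reads back the hypothetical Deployment defined earlier and prints what the stored Workload says should be running:

```python
# Ask the API Server for the stored Workload definition (the desired state).
# The Deployment name is the hypothetical one defined earlier.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

stored = apps.read_namespaced_deployment(name="noise-reduction", namespace="default")
print("Desired replicas:", stored.spec.replicas)
print("Desired image:   ", stored.spec.template.spec.containers[0].image)
```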

Receptionist captures order request

The receptionist captures the order and writes it down in the database on her computer. This alerts all staff on their ipads that a new order has come through.

Controller Manager assigned

A Controller Manager is assigned to ensure the request is successfully executed. It constantly works to match the current state of the cluster to the desired state.
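
This is the classic control-loop pattern. The sketch below is not the real Controller Manager (the real controllers run inside the cluster and react to watch events rather than polling), but it illustrates the idea of repeatedly comparing current state with desired state, reusing the hypothetical Deployment from earlier:

```python
# A simplified illustration of the control-loop pattern the Controller Manager
# follows: observe current state, compare with desired state, act on the gap.
import time
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

while True:
    deployment = apps.read_namespaced_deployment("noise-reduction", "default")
    desired = deployment.spec.replicas or 0
    current = deployment.status.ready_replicas or 0

    if current < desired:
        print(f"{current}/{desired} replicas ready - waiting for new Pods to start")
    elif current > desired:
        print(f"{current}/{desired} replicas ready - surplus Pods will be removed")
    else:
        print("Current state matches desired state")

    time.sleep(5)   # real controllers react to watch events rather than sleeping
```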

Dedicated waiter assigned

A dedicated waiter ensures the successful execution of orders by continually checking in on customers, referring back to their order details and the menu to fulfil their expectations.

Scheduler assigns a node

The Scheduler finds a Worker Node with sufficient capacity to run the required containers.
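
A simplified way to picture the filtering step is to look at each Worker Node's allocatable resources, which is part of what the real Scheduler compares against a Pod's resource requests (alongside many other constraints and a scoring phase). A rough sketch:

```python
# A simplified illustration of the Scheduler's filtering step: inspect each
# Worker Node's allocatable resources. The real Scheduler compares these with
# the Pod's resource requests and then scores the remaining candidates.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    allocatable = node.status.allocatable     # e.g. {"cpu": "4", "memory": "16252Mi", ...}
    print(node.metadata.name, "cpu:", allocatable["cpu"], "memory:", allocatable["memory"])
```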

Maître d' assigns a table

The maître d' finds a suitable table with enough seats for both customers.

Kubelet runs containers

The Kubelet communicates with the Kube API to obtain the relevant Workload and ensure that it is running as desired. It starts and stops the containers as required, monitors their health and reports the state back to the API, which records it in etcd.
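
Because the Kubelet reports state back through the API Server, we can see what it has observed by asking the API for the Pod status. This sketch lists the Pods behind the hypothetical Workload from earlier and prints each container's health as last reported:

```python
# Read the state the Kubelet has reported back through the API Server.
# The label selector matches the hypothetical Workload defined earlier.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pods = v1.list_namespaced_pod(namespace="default", label_selector="app=noise-reduction")
for pod in pods.items:
    print(pod.metadata.name, pod.status.phase)
    for status in pod.status.container_statuses or []:
        print(f"  {status.name}: ready={status.ready}, restarts={status.restart_count}")
```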

Table chef starts cooking

The table chef receives an alert from the receptionist on his iPad explaining the order. He starts cooking their burgers, and provides updates on his iPad so everyone is aware of how far along the order is.

Response displayed

Inside the container on the Worker Node, the code processes the batch job request and generates the response, such as the audio file with the noise reduction effect applied. This response is displayed to the user.

Meal is served

The meals are served to the customers once they are ready.

Final Checkpoint

Check in and see what you've learnt so far:

- We now have a defined, automated way of handling any request within Kubernetes.

- When we receive a larger number of requests, Kubernetes gives us the ability to scale automatically and handle the growing load (see the scaling sketch after this list).

- In periods of lower demand, Kubernetes scales back down to reduce unnecessary costs.

- If a container or Worker Node fails, Kubernetes detects the failure and automatically self-heals, which reduces downtime.

- Kubernetes helps to reduce costs by intelligently distributing workloads across multiple worker nodes in the cluster.

- Kubernetes provides a solution to help growing organisations manage multiple containers with ease.
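
To make the scaling point concrete, the sketch below changes the desired replica count by hand through the API and lets the Control Plane do the rest; in practice a HorizontalPodAutoscaler usually adjusts this number for you based on load. The Deployment name is the hypothetical one used throughout these examples:

```python
# A sketch of scaling by hand: ask the API to change the desired replica count
# and let the Controller Manager, Scheduler and Kubelets do the rest.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Scale up for a busy period...
apps.patch_namespaced_deployment_scale(
    name="noise-reduction", namespace="default", body={"spec": {"replicas": 5}}
)

# ...and back down when demand drops, to avoid paying for idle capacity.
apps.patch_namespaced_deployment_scale(
    name="noise-reduction", namespace="default", body={"spec": {"replicas": 2}}
)
```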

In this course, you've learnt that Kubernetes is a container orchestration system that helps manage and coordinate the deployment, scaling and operations of containers across a cluster of machines or nodes. It ensures your applications run smoothly across different environments by automating many aspects of application management. With this foundation in place, you are now equipped to start exploring and using Kubernetes in your projects.

Useful links

Want more? Check out these links.

  • The official Kubernetes site
  • The Kubernetes project at the Cloud Native Computing Foundation
  • The official Docker website
  • The Kubernetes GitHub project