Kasten by Veeam’s Rookie and Pro learning labs are aimed at beginners and experienced practitioners, respectively. The Rookie lab covers Kubernetes networking fundamentals, while the Pro lab has you troubleshoot and fix an issue that can occur in a real-world Kubernetes deployment. Specifically, you will determine why an application (in this case, a content management system) is unavailable and correct the problem.
Rookie Course – Kubernetes Networking Fundamentals
This lab is for those who want to understand Kubernetes networking fundamentals, including Kubernetes services, microservices and load balancing.
What is the structure of the course?
The lab consists of two sections. The first covers Kubernetes networking theory, and the second provides hands-on command-line experience. Each section takes approximately 15 minutes, though your time may vary depending on how quickly you complete each one. There are six challenges in total to complete during this lab.
Important: On multiple-choice questions, note that more than one answer may be correct. The lab is timed, so it is best to complete it in one sitting.
Section 1: Networking Theory
This section covers topics and terminology for Kubernetes networking. Each topic reviews material on-screen, then poses a challenge question. You must answer the question correctly to proceed to the next topic.
The theory section includes the following topics:
- Kubernetes Services
- ClusterIP
- NodePort
- LoadBalancer
- ExternalName
Topic 1: Kubernetes Services
Although certain pods can do their work independently of an external stimulus, many applications are meant to respond to external requests. For example, in the case of microservices, pods will usually respond to HTTP requests coming either from other pods inside the cluster or from clients outside the cluster.
Services allow clients to discover and talk to pods. They are like internal load balancers, designed to distribute traffic to a subset of pods that satisfy a set of rules on their labels.
A Kubernetes Service is a resource you create to make a single, constant point of entry to a group of pods providing the same service.
Each service has an IP address and port that never changes while the service exists.
Clients can open connections to that IP and port, and those connections are then routed to one of the pods backing that service.
This way, clients of a service don’t need to know the location of individual pods providing the service, and pods can be moved around the cluster at any time.
Suppose you have a frontend web server and a backend database server. There may be multiple pods that all act as the frontend, but there may only be a single backend database pod.
You need to solve two problems to make the system function:
- External clients need to connect to the frontend pods without caring if there is only a single web server or hundreds.
- The frontend pods need to connect to the backend database. Because the database runs inside a pod, it may be moved around the cluster over time, causing its IP address to change. You do not want to have to reconfigure the frontend pods every time the backend database is moved.
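A Service solves both problems by giving clients one stable address backed by a label selector. As a sketch, a Service for the frontend pods might look like the following manifest (the names, labels, and ports here are illustrative, not the lab’s exact values):

```yaml
# Hypothetical Service selecting every pod labeled app: frontend.
# Clients connect to the Service's stable IP on port 80; each
# connection is routed to port 8080 on one of the matching pods.
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend
  ports:
    - port: 80          # port the Service exposes
      targetPort: 8080  # port the pods listen on
```

Because no type is set, this Service defaults to ClusterIP: pods can come and go, and the Service’s IP and port stay constant.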
Topic 2: ClusterIP
The primary purpose of a ClusterIP service is to expose a group of pods to other pods in the cluster. Choosing this type makes the service reachable only from within the cluster. ClusterIP is the default ServiceType.
Topic 3: NodePort
You’ll also want to expose services, such as frontend web servers, to the outside so that external clients can access them, as depicted below.
The first method of exposing a set of pods to external clients is to create a service and set its type field to NodePort.
If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range (default: 30000-32767). Each node proxies that port (the same port number on every Node) into your service.
The diagram below shows an external client connecting to a NodePort service, either through node 1 or 2.
Your service reports the allocated port in its .spec.ports[*].nodePort field.
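Put together, a NodePort Service manifest might look like the following sketch (the names are illustrative; only the port value 8080 comes from the lab text, and the nodePort line can be omitted to let the control plane pick a port from the default range):

```yaml
# Hypothetical NodePort Service: reachable inside the cluster on
# port 8080, and from outside on port 30080 of every node.
apiVersion: v1
kind: Service
metadata:
  name: frontend-nodeport
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
    - port: 8080        # Service port inside the cluster
      targetPort: 8080  # container port on the backing pods
      nodePort: 30080   # optional; if omitted, one is allocated from 30000-32767
```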
Topic 4: Load Balancer
Kubernetes clusters running on public clouds usually support the automatic provisioning of a load balancer from the cloud infrastructure. All you need to do is set the service’s type to LoadBalancer instead of NodePort.
The load balancer will have its own unique, publicly accessible IP address and will redirect all connections to your service. In this way, you can access your service through the load balancer’s IP address.
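Switching from NodePort to a cloud load balancer is a one-line change to the type field. A sketch, with illustrative names and ports:

```yaml
# Hypothetical LoadBalancer Service: on a supported cloud, the
# infrastructure provisions an external load balancer that forwards
# connections to the Service.
apiVersion: v1
kind: Service
metadata:
  name: frontend-lb
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - port: 80          # port the load balancer accepts traffic on
      targetPort: 8080  # container port on the backing pods
```

Once provisioned, the load balancer’s public address appears in the Service’s status.loadBalancer.ingress field.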
Topic 5: ExternalName
Instead of exposing an external service by manually configuring the service’s endpoints, a simpler method allows you to refer to an external service by its fully qualified domain name (FQDN).
ExternalName maps the service to the contents of the externalName field, by returning a CNAME record with its value. No proxying of any kind is set up.
This service definition, for example, maps the my-service service in the prod namespace to my.database.example.com.
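That definition, following the example in the Kubernetes documentation, would be:

```yaml
# ExternalName Service: cluster DNS answers lookups for my-service
# with a CNAME record pointing at my.database.example.com.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: prod
spec:
  type: ExternalName
  externalName: my.database.example.com
```

Pods can then reach the external database as my-service, and the target can later be changed (or replaced with an in-cluster Service) without touching the clients.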
Section 2: Hands-on Commands
In the hands-on section of this lab, we will first provision a Kubernetes cluster and virtual machines.
We will then proceed with commands for deploying a pod inside our cluster, exposing the pod via a ClusterIP service, and testing if we can reach the pod through the service internally.
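As a rough preview, the hands-on flow could be sketched with kubectl commands like these (the image and names are illustrative assumptions, not the lab’s exact commands, and they require a working cluster):

```shell
# Deploy a pod running a web server (nginx used here as a stand-in).
kubectl run web --image=nginx --port=80

# Expose the pod via a ClusterIP Service and note its cluster IP.
kubectl expose pod web --name=web-svc --port=80
kubectl get service web-svc

# Test internal reachability from a throwaway pod inside the cluster.
kubectl run curl-test --rm -it --image=curlimages/curl --restart=Never -- curl -s http://web-svc
```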
Is there pre-work for the course?
Yes. Be sure to read this blog post and study the accompanying slides.