Orchestrating Inter-Cluster Networking in Kubernetes
How to set up inter-cluster networking between two Kubernetes clusters using Cilium.
Getting Started with all things Kubernetes.
Kubernetes, also known as K8s (where 8 represents the number of letters between the K and the s!), is one of the most widely used open source tools today. It is a system for automating the deployment, scaling, and management of containerised applications.
Kubernetes is a powerful tool if you are building cloud native applications with numerous microservices. It provides a lot of out-of-the-box features that make deployment and monitoring easier (compared to installing and managing applications directly on physical or virtual machines), with more power than a single container runtime such as Docker.
When you deploy Kubernetes, you get a cluster. A K8s cluster consists of many different components, as illustrated in the diagram below.
The Control Plane (or Master Node) controls your K8s cluster, and it consists of multiple components that are responsible for managing that cluster. Usually, all the components are installed on the same machine for simplicity, but control plane components can of course be distributed among machines within the cluster. The main control plane components are:
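- kube-apiserver: the front end of the control plane, exposing the Kubernetes API that all other components (and kubectl) talk to.
- etcd: a consistent and highly available key-value store holding all cluster data.
- kube-scheduler: watches for newly created Pods and assigns them to nodes.
- kube-controller-manager: runs the controllers that continuously drive the cluster towards its desired state.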
One or more nodes, whether virtual or physical machines, can be part of the cluster. Each node is managed by the control plane and contains the services necessary to run Pods and to communicate directly with the control plane. These are made up of:
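- kubelet: an agent running on every node that makes sure the containers described in Pod specs are running and healthy.
- kube-proxy: a network proxy that maintains the network rules allowing communication to your Pods.
- Container runtime: the software that actually runs the containers, such as containerd.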
There are many ways to install a K8s cluster for production: on premises, or through one of the many public cloud K8s services, like AKS from Microsoft Azure or GKE from Google Cloud. The choice depends on what your cluster needs, as well as the requirements, use case and project you are working on.
I will mention some ways to spin up a local K8s cluster for you to play around with and start getting familiar with core K8s concepts. After all, we learn better when we get our hands dirty! 🔨 💻
- minikube: spins up a single-node local cluster and is the quickest way to get started.
- kind: runs K8s clusters inside Docker containers, which is handy for local testing.
- kubeadm: use kubeadm init and kubeadm join for starting master and worker nodes on your own machines.

Before diving into the main K8s objects, it is good to cover the tool we will use for accessing the cluster: kubectl. It uses the Kubernetes API to communicate with the cluster and carry out commands. Believe me, it is better than using curl (https://kubernetes.io/docs/tasks/tools/#kubectl).
Here, we’ll cover some of the most frequently used commands you’ll need:
$ kubectl get <object type> <object name> -o <output> --sort-by <JSONPath> --selector <selector>
$ kubectl describe <object type> <object name>
$ kubectl create -f <file name>
$ kubectl apply -f <file name>
$ kubectl delete <object type> <object name>
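For example, to list pods in wide output, sorted by creation time and filtered by a label (the tier=backend label here is illustrative):

$ kubectl get pods -o wide --sort-by=.metadata.creationTimestamp --selector tier=backend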
Usually, for creating objects in K8s we use the declarative method, which requires creating a YAML file and running a kubectl apply/create/delete -f file.yaml command to do the job. This method allows us to keep a history of our K8s manifests and store them in a repository, following more of an Infrastructure as Code approach.
However, sometimes you need to spin things up quickly using kubectl directly to experiment or resolve an issue, and imperative commands are handy for this. They can also generate a YAML file for you if you are not very familiar with K8s manifest definitions. Below are some useful imperative commands with their outputs.
$ kubectl run nginx --image=nginx
pod/nginx created
$ kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          58s
$ kubectl run nginx --image=nginx --dry-run=client -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
$ kubectl create deployment nginx --image=nginx --dry-run=client -o yaml > nginx-deployment.yaml
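You can then tweak the generated manifest and apply it declaratively:

$ kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx created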
Kubernetes Objects are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of your cluster, and all objects have a few fields in common: apiVersion, kind, metadata (name, namespace, labels) and spec, as you will see in the manifests throughout this article.
Pods are the smallest computing unit you can create in K8s. Kubernetes doesn't deploy applications directly onto nodes; instead, containers are encapsulated into Pod objects.
A pod can have one or more containers, and containers within the same pod share networking and resources. So instead of running images separately in Docker, setting up volumes and networking by hand, and monitoring the state of each dependent container yourself, don't sweat it: Kubernetes can do this for us.
pod-1 in the diagram below has a NodeJS application that depends on mongoDB for storage and Redis for caching. The NodeJS application is able to access both containers through the networking features provided by the Pod. Whereas pod-2 contains only one application running inside a container, which is the most common case for Pods, unless you need a helper tool for your application in the same pod, as in pod-1.
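As a minimal sketch, a pod-1-style multi-container Pod could look like this (the container names, images and port are illustrative, not from a real project):

apiVersion: v1
kind: Pod
metadata:
  name: pod-1
spec:
  containers:
  - name: nodejs-app
    image: my-node-app:1.0   # illustrative application image
    ports:
    - containerPort: 3000
  - name: mongodb
    image: mongo             # storage, reachable from nodejs-app at localhost:27017
  - name: redis
    image: redis             # caching, reachable from nodejs-app at localhost:6379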
Pods are by nature ephemeral, meaning you can only expect one to last for a limited time. When a pod is terminated or deleted, it doesn't replace itself. For this reason, it's recommended to run Pods inside a ReplicaSet or a Deployment, which provide self-healing.
To maintain stable, self-healing Pods, you can use a ReplicaSet: it ensures the desired number of Pods are met and always running at any given time.
A ReplicaSet is defined by specifying replicas (the number of Pods to replicate), a Pod template, and an extra field called selector, which the ReplicaSet uses to identify the Pods it should acquire and manage.
As you can see in the above diagram, the ReplicaSet has two replicas: that's why two Pods carrying the tier selector are created for it. If for some reason pod-1 or pod-2 were terminated, the ReplicaSet would initialise a new pod to maintain the correct and required number of replicas. Whereas if pod-3 were terminated, it would be gone forever, since it doesn't match the ReplicaSet's selector.
Below is a sample ReplicaSet manifest:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: be
  labels:
    app: customer-svc
    tier: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      tier: backend
  template:
    metadata:
      labels:
        tier: backend
    spec:
      containers:
      - name: customer
        image: customer-svc
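A quick way to see the self-healing behaviour in action; the file name, pod name and output below are illustrative:

$ kubectl apply -f replicaset.yaml
replicaset.apps/be created
$ kubectl get rs be
NAME   DESIRED   CURRENT   READY   AGE
be     2         2         2       12s
$ kubectl delete pod be-x7k2p   # kill one replica; the ReplicaSet immediately creates a replacement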
Creating applications or services is not a one-time process: our applications will always evolve and change. When we deploy a new version of our application, we need a mechanism that updates the containerised applications automatically, as doing this manually is an overwhelming process (notably error prone and time consuming), particularly if you have tens, or many hundreds, of services. The Deployment is a higher object in the hierarchy, as demonstrated in the diagram below: we define a Deployment to encapsulate ReplicaSets and Pods, and its controller gives it the ability to monitor, manage and maintain the desired state of the application we want to deploy. Below is a sample Deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: be-deployment
  labels:
    app: customer-svc
    tier: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: backend
  template:
    metadata:
      labels:
        tier: backend
    spec:
      containers:
      - name: customer
        image: customer-svc
A Deployment provides a rollback mechanism in case a rollout is unstable or failing: all rollout histories are kept and logged in the system, and you can roll back to a specific revision at any time. You describe your desired state in a Deployment, and the Deployment Controller changes the actual state to match the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments. More detail about Deployments can be found in the official Kubernetes documentation.
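These mechanics are exposed through the kubectl rollout subcommands, for example:

$ kubectl rollout status deployment/be-deployment    # watch a rollout progress
$ kubectl rollout history deployment/be-deployment   # list recorded revisions
$ kubectl rollout undo deployment/be-deployment --to-revision=1   # roll back to revision 1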
Namespaces provide isolation among objects, giving you a kind of virtual cluster within your K8s cluster. As shown in the diagram below, two custom namespaces have been created: dev-ns and prod-ns.
By having this separation, you can restrict access to the production cluster objects to administrators and infrastructure managers, while developers can access only the objects found under dev-ns.
Three namespaces are always created with a K8s cluster: default, kube-system, and kube-public. While you can create objects under these namespaces, it's recommended best practice to create your own namespaces.
When creating an object, if a namespace is not provided, default will be assigned to the new object. Alternatively, you can specify the namespace either in the metadata of the object definition or as a flag to the kubectl command.
$ kubectl create namespace dev-ns                        # create a namespace
$ kubectl run nginx --image=nginx --namespace dev-ns     # create and run an nginx pod under the dev-ns namespace
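You can also make dev-ns the default namespace for your current kubectl context, so you don't have to pass --namespace every time:

$ kubectl config set-context --current --namespace dev-ns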
Or specify the namespace in the object definition (line 4):
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: dev-ns
  name: be-deployment
  labels:
    app: customer-svc
    tier: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: backend
  template:
    metadata:
      labels:
        tier: backend
    spec:
      containers:
      - name: customer
        image: customer-svc
Services are used to expose Pods and enable network access to them. Pods are selected using their labels, so when a service is created, it selects all the Pods matching its specified selector.
Use the expose command to enable network access to the pods and create a service:
$ kubectl run nginx --replicas=3 --labels="run=service-example" --image=nginx --port=8080
$ kubectl expose deployment nginx --type=ClusterIP --name=nginx-service
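Note that in recent kubectl versions, kubectl run creates a bare Pod and no longer supports --replicas; kubectl create deployment nginx --image=nginx --replicas=3 --port=8080 is the modern equivalent. The declarative counterpart of the service above would look roughly like this sketch (the port numbers are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: ClusterIP
  selector:
    run: service-example
  ports:
  - port: 8080
    targetPort: 8080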
Types of Services
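- ClusterIP (the default): exposes the Service on an internal cluster IP, reachable only from within the cluster.
- NodePort: exposes the Service on a static port on each node's IP, making it reachable from outside the cluster.
- LoadBalancer: exposes the Service externally through a cloud provider's load balancer.
- ExternalName: maps the Service to an external DNS name by returning a CNAME record.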
And that’s a wrap! Hope you enjoyed my guide to the core concepts in Kubernetes, and are ready to start tinkering with your own projects.
How to set up inter-cluster networking between two Kubernetes clusters using Cilium.
Across the financial services industry, competition is rising at an unprecedented pace while growth is subject to stringent industry regulations.
Gartner's top IT trends for 2021 include multi-cloud