Orchestrating Inter-Cluster Networking in Kubernetes

How to set up inter-cluster networking between two Kubernetes clusters using Cilium.

Introduction


Greetings, Kubernetes enthusiasts! In this blog post, we are going to dive deep into a fascinating and crucial aspect of Kubernetes orchestration: inter-cluster networking. In particular, we'll be exploring how to connect two Kubernetes clusters to enable the orchestration of applications that can communicate with each other.

By the end of this blog post, you'll be familiar with the method of using Cilium's ClusterMesh functionality to establish communication between two clusters.

For our illustration, we'll have a front-end server and a backend database installed on two different clusters. We will ensure these applications can freely communicate with each other.

But before we dive in, let's get the housekeeping out of the way. We must clarify some prerequisites that your clusters need to meet to ensure successful communication between them.

Prerequisites


To effectively connect two Kubernetes clusters, a few conditions must be satisfied:

  1. Network Accessibility: Both clusters must be network accessible to each other. This accessibility could be achieved in various ways. They might be in the same virtual network or VPC, or they may be in different networks with appropriate routing and firewall rules set up to ensure they can communicate.

  2. Unique Cluster CIDRs: Each Kubernetes cluster uses a Cluster CIDR, which is a block of IP addresses assigned to pods within the cluster. These CIDRs must not overlap between your clusters, as overlapping ranges would lead to IP conflicts and communication issues (a quick way to check this is shown after this list).

  3. Kubernetes Version Compatibility: Ensure both clusters are running a version of Kubernetes that is compatible with Cilium. At the time of writing, Cilium supports Kubernetes versions 1.16 and later, although I recommend using at least version 1.24.
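If you want to confirm the Cluster CIDR prerequisite, one quick (though not exhaustive) check is to print the pod CIDR allocated to each node in both clusters. Replace the context names with your own kubeconfig contexts (we'll call them cluster1-config and cluster2-config later in this post); depending on your CNI and IPAM mode the spec.podCIDR field may be empty, in which case consult your cluster's network configuration instead.

kubectl --context=cluster1-config get nodes -o jsonpath='{.items[*].spec.podCIDR}'
kubectl --context=cluster2-config get nodes -o jsonpath='{.items[*].spec.podCIDR}'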

With these prerequisites in place, we can dive into the process.

Step 1: Setting Up Your Kubernetes Clusters


The first step in our process is to have two Kubernetes clusters at our disposal. Each cluster will host one part of our application – the frontend and the backend.

I'm assuming you already have two clusters set up. If not, there are numerous resources available online that walk you through setting up your Kubernetes clusters. If you need further guidance, Cilium themselves have an excellent guide for preparing two Azure AKS clusters for use with ClusterMesh - https://docs.cilium.io/en/v1.13/network/clustermesh/aks-clustermesh-prep/

Ensure you have kubeconfig files for both clusters. These files contain the necessary details to connect to and authenticate with your cluster. We'll call them cluster1-config and cluster2-config for this walkthrough.
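One convenient way to work with both clusters from a single terminal is to point KUBECONFIG at both files and confirm that both contexts resolve. This is just a sketch, assuming the kubeconfig files sit in your current directory:

export KUBECONFIG=./cluster1-config:./cluster2-config
kubectl config get-contexts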


Step 2: Installing the Cilium CLI


After your Kubernetes clusters are set up, the next step is to install Cilium. The installation process is straightforward thanks to the Cilium CLI. Here's how to do it.

First, you will need to grab the Cilium CLI; we will use it to install and configure Cilium on both clusters:

CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable-v0.14.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "arm64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-darwin-${CLI_ARCH}.tar.gz{,.sha256sum}
shasum -a 256 -c cilium-darwin-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-darwin-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-darwin-${CLI_ARCH}.tar.gz{,.sha256sum}


I'm installing on an Intel Mac; the instructions may differ depending on your OS and architecture.
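As a quick sanity check that the CLI landed on your PATH, print its version. It may also try to report the Cilium version running in the current cluster, which will be unknown until we install Cilium in the next step:

cilium version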

Step 3: Installing Cilium on Both Clusters


Installing Cilium is fairly straightforward. Let's walk through the installation process.

  1. First, switch the current context to Cluster1:

kubectl config use-context cluster1-config


  2. Use the Cilium CLI to install Cilium:

cilium install --set cluster.name=cluster1 --set cluster.id=1


  3. Repeat these steps for Cluster2, remembering to change the cluster.name parameter to cluster2 and the cluster.id parameter to 2, as shown in the sketch below.
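For completeness, here's a sketch of the equivalent commands for Cluster2, again assuming the context name cluster2-config. The optional cilium status --wait simply blocks until the Cilium agents report ready:

kubectl config use-context cluster2-config
cilium install --set cluster.name=cluster2 --set cluster.id=2
cilium status --wait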

Step 4: Configuring Cilium ClusterMesh


Having installed Cilium on both our clusters, we're now ready to configure ClusterMesh. This functionality of Cilium will facilitate the communication between our two Kubernetes clusters.

  1. First, we need to ensure the Cilium CA is consistent between clusters. To do this we copy the Kubernetes secret from cluster1 to cluster2:

kubectl --context=cluster1-config get secret -n kube-system cilium-ca -o yaml | \
  kubectl --context=cluster2-config create -f -

  2. Next, enable ClusterMesh on both clusters using the Cilium CLI:

cilium clustermesh enable --context cluster1-config --service-type LoadBalancer
cilium clustermesh enable --context cluster2-config --service-type LoadBalancer

  3. Once enabled, we can check the status of the ClusterMesh enablement:

cilium clustermesh status --context cluster1-config
cilium clustermesh status --context cluster2-config

You should see output similar to the following:

✅ Cluster access information is available:
  - 10.6.0.50:2379
✅ Service "clustermesh-apiserver" of type "LoadBalancer" found
🔌 Cluster Connections:
🔀 Global services: [ min:0 / avg:0.0 / max:0 ]

  4. Now that ClusterMesh is enabled, you can connect the two clusters:

cilium clustermesh connect --context cluster1-config --destination-context cluster2-config


It may take a few minutes for your clusters to connect. To check progress, re-run the status commands from step 3.
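If you want a more thorough verification, the Cilium CLI includes a connectivity test that can exercise cross-cluster paths. This is optional and can take a while to run; the --multi-cluster flag is assumed to be available in your CLI version:

cilium connectivity test --context cluster1-config --multi-cluster cluster2-config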

Step 5: Deploying the Application


Now that we have our ClusterMesh ready, we can deploy our example application. We will be deploying a front-end server on cluster1 and a backend database on cluster2.

Here's a simple example Kubernetes manifest for a front-end server:

apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  annotations:
    service.cilium.io/global: "true"
    service.cilium.io/shared: "false"
spec:
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  selector:
    matchLabels:
      app: frontend
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: frontend:1.0
          ports:
            - containerPort: 80


And for the backend database:

apiVersion: v1
kind: Service
metadata:
  name: postgres
  annotations:
    service.cilium.io/global: "true"
spec:
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: postgres
          image: postgres:13.3
          env:
            - name: POSTGRES_USER
              value: "dbuser"
            - name: POSTGRES_PASSWORD
              value: "dbpass"
          ports:
            - containerPort: 5432
              name: backend
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-pv-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi


Apply the front-end deployment and services to cluster1:

kubectl --context=cluster1-config apply -f frontend.yaml


Apply the backend database deployment and service to cluster2:

kubectl --context=cluster2-config apply -f backend.yaml


Once all deployments and services are up, you will be able to pass traffic from the front-end service deployed on cluster1 to the backend deployment on cluster2.
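To confirm everything came up before testing traffic, a quick look at each cluster is enough (again using this walkthrough's context names):

kubectl --context=cluster1-config get deploy,svc
kubectl --context=cluster2-config get deploy,svc,pvc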

But how? Notice that in the manifest for the backend service we set an annotation:

annotations:
  service.cilium.io/global: "true"


This annotation is picked up by Cilium's service discovery. Any service annotated as global that exists with the same name and namespace in the connected clusters is treated as a single global service, making it available across all clusters in the Cilium ClusterMesh.

In our example, the frontend app connects to the ClusterIP of the postgres service in cluster1, and because that service is a Cilium global service, traffic is forwarded to the backend pods in cluster2 by Cilium's ClusterMesh.
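A simple way to see this in action is to open a TCP connection to the postgres service from inside the frontend pod. This sketch assumes your frontend image ships a shell and the nc utility; substitute psql or another client if it does not:

kubectl --context=cluster1-config exec deploy/frontend -- sh -c 'nc -zv postgres 5432'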

Conclusion


Congratulations! You've successfully set up two Kubernetes clusters, installed and configured Cilium and its ClusterMesh functionality, and deployed a front-end and backend application that can communicate across the clusters.

While the task might seem challenging at first, understanding the nuances of inter-cluster communication is vital for managing larger-scale, more complex Kubernetes environments. Understanding how to establish this kind of communication allows for more flexibility and scalability in managing workloads.

But Wait!


What if I told you that you could achieve the same outcome without all the prerequisites and the requirement to install an advanced CNI such as Cilium?

Ori Global Cloud automates the application deployment lifecycle, including application-centric inter-cluster networking, in a single deployment operation. Stay tuned as we will be covering this in our next networking-focused blog post.

Can't wait? Then head over to https://ori.co/sandbox-request to try it for yourself.
