In this guide, I will try to explain how to set up a Kubernetes RKE2 cluster with Rancher as the graphical management interface. The Kubernetes cluster will be reachable through one central virtual IP address, so we also set up two external HAProxy load balancers. In total we need to set up five Ubuntu Server machines (3x Kubernetes machines; 2x HAProxy machines).
Components to make Kubernetes highly available:
For our running Kubernetes cluster, we need an external load balancer so that network traffic can be distributed across all three machines. In addition, HAProxy can detect the failure of a Kubernetes machine and temporarily stop routing traffic to the failed machine.
Install HAProxy:
The following commands are repeated for the following systems (haproxy1; haproxy2):
sudo apt install -y haproxy
The HAProxy configuration file (/etc/haproxy/haproxy.cfg) is edited as follows to allow load balancing between ubkub1, ubkub2 and ubkub3 (please append the content of the following code-block to the existing /etc/haproxy/haproxy.cfg file):
Each HAProxy instance comes with its own web monitoring interface, http://192.168.0.99:8404/stats and http://192.168.0.98:8404/stats respectively. These web interfaces are protected by a username/password prompt. The line stats auth <username:password> in the following code-block defines that username and password. Please define your own username and password !!!
More information about the HAProxy monitoring web interface can be found on the following page: https://www.haproxy.com/blog/exploring-the-haproxy-stats-page
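Once HAProxy is configured and restarted (see below), you can also query the stats page from the command line with curl and the username/password you defined (admin:password is just the placeholder used in the code-block):
curl -u admin:password http://192.168.0.99:8404/stats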
# Configure Stats-Webpage of HAProxy on Port 8404
frontend stats
    mode http
    bind *:8404
    stats enable
    stats uri /stats
    stats refresh 10s
    stats admin if TRUE
    stats auth admin:password

frontend kubernetes-http
    bind *:80
    mode tcp
    option tcplog
    default_backend kubernetes-http

frontend kubernetes-https
    bind *:443
    mode tcp
    option tcplog
    default_backend kubernetes-https

frontend kubernetes-api
    bind *:6443
    mode tcp
    option tcplog
    default_backend kubernetes-api

frontend kubernetes-supervisor
    bind *:9345
    mode tcp
    option tcplog
    default_backend kubernetes-supervisor

backend kubernetes-http
    mode tcp
    option tcplog
    option tcp-check
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    balance roundrobin
    server ubkub1 192.168.1.100:80 check
    server ubkub2 192.168.1.101:80 check
    server ubkub3 192.168.1.102:80 check

backend kubernetes-https
    mode tcp
    option tcplog
    option tcp-check
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    balance roundrobin
    server ubkub1 192.168.1.100:443 check
    server ubkub2 192.168.1.101:443 check
    server ubkub3 192.168.1.102:443 check

backend kubernetes-api
    mode tcp
    option tcplog
    option tcp-check
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    balance roundrobin
    server ubkub1 192.168.1.100:6443 check
    server ubkub2 192.168.1.101:6443 check
    server ubkub3 192.168.1.102:6443 check

backend kubernetes-supervisor
    mode tcp
    option tcplog
    option tcp-check
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    balance roundrobin
    server ubkub1 192.168.1.100:9345 check
    server ubkub2 192.168.1.101:9345 check
    server ubkub3 192.168.1.102:9345 check
Now we restart the haproxy.service to activate the changes we made:
sudo systemctl restart haproxy.service
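OPTIONAL: If the restart fails, you can check the configuration file for syntax errors with HAProxy's built-in validation mode:
sudo haproxy -c -f /etc/haproxy/haproxy.cfg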
Pacemaker is an open source cluster manager application. Corosync is a cluster engine for Pacemaker and crmsh is a Python-based tool for managing a Pacemaker cluster.
Install Pacemaker, Corosync and crmsh. After that, the "corosync" and "pacemaker" services are temporarily stopped.
The following commands are repeated for the following systems (haproxy1; haproxy2):
sudo apt install -y pacemaker corosync crmsh
sudo systemctl stop corosync
sudo systemctl stop pacemaker
Now Corosync is configured on both machines. First a backup of the default configuration file is made, then the new configuration file is created:
The following commands are repeated for the following systems (haproxy1; haproxy2):
sudo mv /etc/corosync/corosync.conf /etc/corosync/corosync.conf.backup
Now we create the file /etc/corosync/corosync.conf:
sudo nano /etc/corosync/corosync.conf
Paste the content of the following code-block into the nano editor and save the file.
The following commands are repeated for the following systems (haproxy1; haproxy2):
# Please read the corosync.conf.5 manual page
totem {
    version: 2

    # Corosync itself works without a cluster name, but DLM needs one.
    # The cluster name is also written into the VG metadata of newly
    # created shared LVM volume groups, if lvmlockd uses DLM locking.
    cluster_name: lbcluster

    # crypto_cipher and crypto_hash: Used for mutual node authentication.
    # If you choose to enable this, then do remember to create a shared
    # secret with "corosync-keygen".
    # enabling crypto_cipher, requires also enabling of crypto_hash.
    # crypto works only with knet transport
    crypto_cipher: none
    crypto_hash: none

    interface {
        ringnumber: 0
        bindnetaddr: 192.168.0.97
        broadcast: yes
        mcastport: 5405
    }
}

logging {
    # Log the source file and line where messages are being
    # generated. When in doubt, leave off. Potentially useful for
    # debugging.
    fileline: off
    # Log to standard error. When in doubt, set to yes. Useful when
    # running in the foreground (when invoking "corosync -f")
    to_stderr: yes
    # Log to a log file. When set to "no", the "logfile" option
    # must not be set.
    to_logfile: yes
    logfile: /var/log/corosync/corosync.log
    # Log to the system log daemon. When in doubt, set to yes.
    to_syslog: yes
    # Log debug messages (very verbose). When in doubt, leave off.
    debug: off
    # Log messages with time stamps. When in doubt, set to hires (or on)
    timestamp: on
}

quorum {
    # Enable and configure quorum subsystem (default: off)
    # see also corosync.conf.5 and votequorum.5
    provider: corosync_votequorum
    two_node: 1
}

nodelist {
    # Change/uncomment/add node sections to match cluster configuration
    node {
        # Hostname of the node
        name: haproxy1
        # Cluster membership node identifier
        nodeid: 1
        # Address of first link
        ring0_addr: 192.168.0.99
    }
    node {
        # Hostname of the node
        name: haproxy2
        # Cluster membership node identifier
        nodeid: 2
        # Address of first link
        ring0_addr: 192.168.0.98
    }
}

service {
    name: pacemaker
    ver: 0
}
Now the Corosync service and Pacemaker service are activated:
The following commands are repeated for the following systems (haproxy1; haproxy2):
sudo systemctl start corosync
sudo systemctl enable corosync
sudo systemctl start pacemaker
sudo update-rc.d pacemaker defaults 20 01
sudo systemctl enable pacemaker
OPTIONAL: Query status of the cluster:
sudo crm status
sudo corosync-cmapctl | grep members
Now the cluster is configured. To activate the virtual IP and the cluster, execute the following commands on haproxy1.
!!! Only on machine haproxy1 !!!:
sudo crm configure property stonith-enabled=false
sudo crm configure property no-quorum-policy=ignore
sudo crm configure primitive virtual_ip ocf:heartbeat:IPaddr2 params ip="192.168.0.97" cidr_netmask="32" op monitor interval="10s" meta migration-threshold="10"
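OPTIONAL: To verify on which node the virtual IP is currently active, you can list the addresses on each HAProxy machine (a quick check; the interface name depends on your system):
ip address show | grep 192.168.0.97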
Now all that remains is to set up and enable port forwarding on your router for ports 80 and 443 to the virtual IP address 192.168.0.97.
Also set up dynamic DNS on your router. There are many dynamic DNS providers (https://www.noip.com, https://www.duckdns.org, ...) that can map the public IP address of your router to a domain name you define.
Comparison of different dynamic DNS providers: https://www.ionos.de/digitalguide/server/tools/dyndns-anbieter-im-ueberblick/
Good documentation can be found on rancher docs: https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher
After installing Ubuntu Server on all three machines, proceed to install Kubernetes RKE2:
!!! Only on machine ubkub1 !!!:
sudo mkdir -p /etc/rancher/rke2/
sudo nano /etc/rancher/rke2/config.yaml
Content of config.yaml:
Please edit the token. It must be the same on all three machines to form a cluster !!!
!!! Only on machine ubkub1 !!!:
token: my-shared-secret
tls-san:
- 192.168.1.97
Install Kubernetes RKE2:
!!! Only on machine ubkub1 !!!:
sudo curl -sfL https://get.rke2.io | sudo sh -
sudo systemctl enable rke2-server.service
sudo systemctl start rke2-server.service
Install Kubernetes RKE2 on ubkub2 and ubkub3:
The following commands are repeated for the following systems (ubkub2; ubkub3):
sudo mkdir -p /etc/rancher/rke2/
sudo nano /etc/rancher/rke2/config.yaml
Content of config.yaml:
Please edit the token. It must be the same on all three machines to form a cluster !!!
The following commands are repeated for the following systems (ubkub2; ubkub3):
token: my-shared-secret
server: https://192.168.1.97:9345
tls-san:
- 192.168.1.97
Install Kubernetes RKE2:
The following commands are repeated for the following systems (ubkub2; ubkub3):
sudo curl -sfL https://get.rke2.io | sudo sh -
sudo systemctl enable rke2-server.service
sudo systemctl start rke2-server.service
Now we wait until the Kubernetes RKE2 cluster is set up. To check the installation status we can run the following command on ubkub1. Be patient; it can take up to 30 minutes until the Kubernetes cluster is completely built up.
!!! Only on machine ubkub1 !!!:
sudo /var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get nodes
Now we need to install kubectl and helm on all three nodes of our Kubernetes cluster.
Good documentation about kubectl installation can be found on kubernetes docs: https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#install-using-native-package-management
Good documentation about helm installation can be found on helm docs: https://helm.sh/docs/intro/install/
Firstly we install kubectl on all nodes:
The following commands are repeated for the following systems (ubkub1; ubkub2; ubkub3):
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo apt-get install -y apt-transport-https
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl
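OPTIONAL: Verify the kubectl installation:
kubectl version --client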
Now we temporarily switch to the root user and copy the content of the rke2.yaml file to the local ~/.kube folder, so we can use kubectl as the root user or with the sudo command.
The following commands are repeated for the following systems (ubkub1; ubkub2; ubkub3):
sudo -i
mkdir -p ~/.kube && cp /etc/rancher/rke2/rke2.yaml ~/.kube/config
nano ~/.kube/config
#######################################################################################################
# In the nano editor please edit the option server: ... to server: https://192.168.1.97:6443          #
#######################################################################################################
####################################################
# after saving the file please exit from root-mode #
####################################################
exit
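After exiting the root shell, you can verify that kubectl reaches the cluster through the virtual IP (a quick sanity check):
sudo kubectl get nodes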
Now we can proceed with the installation of helm on all three machines:
The following commands are repeated for the following systems (ubkub1; ubkub2; ubkub3):
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
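OPTIONAL: Verify the helm installation:
helm version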
Good documentation can be found on rancher docs: https://ranchermanager.docs.rancher.com/pages-for-subheaders/install-upgrade-on-a-kubernetes-cluster
Now we can install the Rancher management UI as a service on our Kubernetes cluster:
!!! Only on machine ubkub1 !!!:
sudo helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
sudo kubectl create namespace cattle-system
Now we need to choose an SSL configuration for how the Rancher web UI should be certified. In this tutorial I choose the "Rancher-generated TLS certificate" option. For this option we first need to install cert-manager (more information about the other options can be found in the Rancher docs: https://ranchermanager.docs.rancher.com/pages-for-subheaders/install-upgrade-on-a-kubernetes-cluster#3-choose-your-ssl-configuration):
!!! Only on machine ubkub1 !!!:
sudo kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.11.0/cert-manager.crds.yaml
sudo helm repo add jetstack https://charts.jetstack.io
sudo helm repo update
sudo helm install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--version v1.11.0
Verify that cert-manager is up and running:
!!! Only on machine ubkub1 !!!:
sudo kubectl get pods --namespace cert-manager
Now we can install rancher via a helm chart:
!!! Only on machine ubkub1 !!!:
Please edit the value bootstrapPassword. This is the web UI password for Rancher. The default username of the web UI is "admin" !!!
sudo helm install rancher rancher-stable/rancher \
--namespace cattle-system \
--set hostname=192.168.1.97.sslip.io \
--set bootstrapPassword=password
Wait for Rancher to be rolled out:
!!! Only on machine ubkub1 !!!:
sudo kubectl -n cattle-system rollout status deploy/rancher
If you see the following error: "error: deployment "rancher" exceeded its progress deadline", you can check the status of the deployment by running the following command
!!! Only on machine ubkub1 !!!:
sudo kubectl -n cattle-system get deploy rancher
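If the deployment is not progressing, the pod list and the deployment events usually show the reason (for example slow image pulls):
sudo kubectl -n cattle-system get pods
sudo kubectl -n cattle-system describe deploy rancher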
For future services / workloads on our cluster we want to use LetsEncrypt to certify our domain(s)/subdomain(s) automatically. For this we create two ClusterIssuers (letsencrypt-production and letsencrypt-staging).
INFO: With the letsencrypt-production ClusterIssuer we can request valid certificates for the domains that we configure in different Ingress objects. But if we misconfigure our Ingress objects and repeatedly request new certificates from LetsEncrypt within a short time, we can get rate limited by LetsEncrypt. For test purposes we can use the letsencrypt-staging ClusterIssuer to get a test certificate from LetsEncrypt without the risk of running into a rate limit.
!!! Only on machine ubkub1 !!!:
Please change <YOUR_EMAIL> to your own email address !!!
cat << EOF > letsencrypt-production.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
  namespace: default
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <YOUR_EMAIL>
    privateKeySecretRef:
      name: letsencrypt-production
    solvers:
    - selector: {}
      http01:
        ingress:
          class: nginx
EOF
!!! Only on machine ubkub1 !!!:
Please change <YOUR_EMAIL> to your own email address !!!
cat << EOF > letsencrypt-staging.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
  namespace: default
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: <YOUR_EMAIL>
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    - selector: {}
      http01:
        ingress:
          class: nginx
EOF
Apply the LetsEncrypt ClusterIssuers to the Kubernetes cluster:
!!! Only on machine ubkub1 !!!:
sudo kubectl apply -f letsencrypt-production.yaml
sudo kubectl apply -f letsencrypt-staging.yaml
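OPTIONAL: Verify that both ClusterIssuers were created (this assumes cert-manager was installed as described above):
sudo kubectl get clusterissuers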
Good documentation can be found on Longhorn docs: https://longhorn.io/docs/1.5.1/deploy/install/install-with-rancher/
Before we can install Longhorn on our cluster we need to check some requirements with the following check script:
The following commands are repeated for the following systems (ubkub1; ubkub2; ubkub3):
sudo curl -sSfL https://raw.githubusercontent.com/longhorn/longhorn/v1.5.1/scripts/environment_check.sh | sudo bash
On my machines, for example, I had to install the following packages:
sudo apt install jq nfs-common
Now we open a web browser, navigate to https://192.168.0.97.sslip.io and log in with the user "admin" and the password defined in part 5 (Install Rancher via helm chart).
In the web UI navigate to Apps -> Charts and search for "Longhorn".
Click on the Longhorn logo and on the next page click on "Install". In the next few windows you can customize Longhorn, but you can skip all the steps and proceed with the installation.
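OPTIONAL: After the installation has finished, you can check from the command line that all Longhorn pods are running (the chart installs into the longhorn-system namespace by default):
sudo kubectl -n longhorn-system get pods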
In this step, we will deploy a demo web server (nginx) as a service in the Kubernetes cluster.
Firstly we create the webapp1-deployment.yaml file with the following command:
cat << EOF > webapp1-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp1-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp1-deployment
  template:
    metadata:
      labels:
        app: webapp1-deployment
    spec:
      containers:
      - image: nginxdemos/hello
        name: webapp1-deployment
EOF
Secondly we create the webapp1-service.yaml file with the following command:
cat << EOF > webapp1-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp1-service
spec:
  ports:
  - port: 80
  selector:
    app: webapp1-deployment
  type: ClusterIP
EOF
Thirdly we create the webapp1-ingress.yaml file with the following command:
Please change the domain name webapp1.mydnydns.com in the code-block to the domain name you set up on your router (dynamic DNS)!!!
Please change letsencrypt-staging to letsencrypt-production if you are finished with testing and want to deploy the app with a real LetsEncrypt certificate!!!
cat << EOF > webapp1-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp1-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: letsencrypt-staging
spec:
  tls:
  - hosts:
    - webapp1.mydnydns.com
    secretName: webapp1-secret-tls
  rules:
  - host: webapp1.mydnydns.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp1-service
            port:
              number: 80
EOF
Now apply webapp1-deployment.yaml, webapp1-service.yaml and webapp1-ingress.yaml:
sudo kubectl apply -f webapp1-deployment.yaml
sudo kubectl apply -f webapp1-service.yaml
sudo kubectl apply -f webapp1-ingress.yaml
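OPTIONAL: Check that the deployment, service, ingress and the cert-manager certificate were created (the certificate may take a few minutes to become ready):
sudo kubectl get deployment,service,ingress
sudo kubectl get certificate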
In this step, we will deploy a demo WordPress server (web server and MySQL server) as services in the Kubernetes cluster.
Firstly we create the wordpress-pvc.yaml file with the following command:
With this PersistentVolumeClaim we give Longhorn the task of creating a PersistentVolume for us.
The wordpress-deployment.yaml (web server) and wordpress-mysql-deployment.yaml (MySQL server) both use this same volume to store persistent data.
cat << EOF > wordpress-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-pvc
spec:
  storageClassName: longhorn
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi # Set the desired storage size
EOF
Secondly we create the wordpress-deployment.yaml file with the following command:
!!! Please change the password for WORDPRESS_DB_PASSWORD !!!
cat << EOF > wordpress-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-deployment
spec:
  selector:
    matchLabels:
      app: wordpress-deployment
  template:
    metadata:
      labels:
        app: wordpress-deployment
    spec:
      containers:
      - image: wordpress:6.2.1-apache
        name: wordpress-deployment
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mysql-service
        - name: WORDPRESS_DB_PASSWORD
          value: your-wordpress-db-password
        - name: WORDPRESS_DB_USER
          value: wordpress
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wordpress-pvc
EOF
Thirdly we create the wordpress-service.yaml file with the following command:
cat << EOF > wordpress-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-service
spec:
  ports:
  - port: 80
  selector:
    app: wordpress-deployment
  type: ClusterIP
EOF
Fourthly we create the wordpress-ingress.yaml file with the following command:
Please change the domain name wordpress.mydnydns.com in the code-block to the domain name you set up on your router (dynamic DNS)!!!
Please change letsencrypt-staging to letsencrypt-production if you are finished with testing and want to deploy the app with a real LetsEncrypt certificate!!!
cat << EOF > wordpress-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wordpress-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: letsencrypt-staging
spec:
  tls:
  - hosts:
    - wordpress.mydnydns.com
    secretName: wordpress-secret-tls
  rules:
  - host: wordpress.mydnydns.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: wordpress-service
            port:
              number: 80
EOF
Fifthly we create the wordpress-mysql-deployment.yaml file with the following command:
!!! Please change the passwords for MYSQL_ROOT_PASSWORD and MYSQL_PASSWORD (MYSQL_PASSWORD must match the WORDPRESS_DB_PASSWORD you set in wordpress-deployment.yaml) !!!
cat << EOF > wordpress-mysql-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress-mysql-deployment
  template:
    metadata:
      labels:
        app: wordpress-mysql-deployment
    spec:
      containers:
      - image: mysql:5.6
        name: wordpress-mysql-deployment
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: your-mysql-root-password
        - name: MYSQL_DATABASE
          value: wordpress
        - name: MYSQL_USER
          value: wordpress
        - name: MYSQL_PASSWORD
          value: your-wordpress-db-password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: wordpress-pvc
EOF
Sixthly we create the wordpress-mysql-service.yaml file with the following command:
cat << EOF > wordpress-mysql-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql-service
spec:
  ports:
  - port: 3306
  selector:
    app: wordpress-mysql-deployment
  clusterIP: None
EOF
Now apply everything:
sudo kubectl apply -f wordpress-pvc.yaml
sudo kubectl apply -f wordpress-deployment.yaml
sudo kubectl apply -f wordpress-service.yaml
sudo kubectl apply -f wordpress-ingress.yaml
sudo kubectl apply -f wordpress-mysql-deployment.yaml
sudo kubectl apply -f wordpress-mysql-service.yaml
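OPTIONAL: Check that all WordPress resources were created and that Longhorn has bound the PersistentVolumeClaim:
sudo kubectl get deployment,service,ingress,pvc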
If you want to deploy custom container images to your Kubernetes cluster without uploading them to a public container registry like Docker Hub, Amazon Elastic Container Registry or Google Container Registry, you need a private container registry.
In the following steps we set up a private Docker registry on machine haproxy1.
!!! Only on machine haproxy1 !!!:
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
!!! Only on machine haproxy1 !!!:
sudo docker run -d -p 5000:5000 --restart=always --name registry -v /mnt/registry:/var/lib/registry registry:2
Now the private container image registry can be accessed via 192.168.0.99:5000.
We can also use the integrated REST API of the private container image registry to query all saved images in our repository:
curl -X GET 192.168.0.99:5000/v2/_catalog
More information about the private container image registry REST API can be found here: https://docs.docker.com/registry/spec/api/
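For a quick smoke test of the registry you can push an image from haproxy1 itself. Docker treats localhost:5000 as an insecure registry by default, so no extra daemon configuration is needed for this test (pushing via 192.168.0.99:5000 from another Docker host would require an insecure-registries entry):
sudo docker pull alpine
sudo docker tag alpine localhost:5000/alpine
sudo docker push localhost:5000/alpine
curl -X GET 192.168.0.99:5000/v2/_catalog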
Kaniko is a tool to build container images from a Dockerfile, inside a container or Kubernetes cluster.
Kaniko doesn't depend on a Docker daemon and executes each command within a Dockerfile completely in userspace. This enables building container images in environments that can't easily or securely run a Docker daemon, such as a standard Kubernetes cluster.
Good documentation can be found in the Kaniko GitHub repo: https://github.com/GoogleContainerTools/kaniko
First we need to add the private container image registry to all Kubernetes nodes.
To successfully pull images from our private image registry we need to configure all our Kubernetes nodes (ubkub1, ubkub2 and ubkub3).
Good documentation can be found on the RKE2 docs page: https://docs.rke2.io/install/containerd_registry_configuration
The following commands are repeated for the following systems (ubkub1; ubkub2; ubkub3):
cat << EOF | sudo tee /etc/rancher/rke2/registries.yaml
mirrors:
  192.168.0.99:
    endpoint:
      - "http://192.168.0.99:5000"
configs:
  "192.168.0.99:5000":
    tls:
      insecure_skip_verify: true
EOF
!!! To apply the changes please reboot all three Kubernetes nodes !!!
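Assuming the default RKE2 behavior, restarting the rke2-server service on each node should also pick up the new registries.yaml without a full reboot; the reboot is simply the safe option:
sudo systemctl restart rke2-server.service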
With the following command we create a Kaniko pod. The Kaniko pod builds a new container image from the following Dockerfile content: FROM alpine \nRUN echo "created from standard input". After the build process is finished, Kaniko pushes the container image to our private container image registry.
!!! Only on machine ubkub1 !!!:
echo -e 'FROM alpine \nRUN echo "created from standard input"' > Dockerfile && tar -cf - Dockerfile | gzip -9 | sudo kubectl run kaniko --rm --stdin=true --image=gcr.io/kaniko-project/executor:latest --restart=Never --overrides='{
  "apiVersion": "v1",
  "spec": {
    "containers": [
      {
        "name": "kaniko",
        "image": "gcr.io/kaniko-project/executor:latest",
        "stdin": true,
        "stdinOnce": true,
        "args": [
          "--insecure",
          "--insecure-pull",
          "--skip-tls-verify",
          "--skip-tls-verify-pull",
          "--dockerfile=Dockerfile",
          "--context=tar://stdin",
          "--destination=192.168.0.99:5000/myalpine"
        ]
      }
    ]
  }
}'
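OPTIONAL: After the Kaniko pod has finished, you can verify that the image arrived in the private registry via its REST API:
curl -X GET 192.168.0.99:5000/v2/_catalog
curl -X GET 192.168.0.99:5000/v2/myalpine/tags/list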