
Kubernetes The Hard Way

· 13 min read


Resuming the Kubernetes article series in a new format.

This article describes the overall experience of manually deploying Kubernetes without automated tools such as kubeadm. The approach presented is consistent with our documentation, which we maintain according to best practices and IaC methodologies.

All of the configuration below exactly replicates the behavior of kubeadm. As a result, it is hard to tell whether the final cluster was assembled with kubeadm or by hand.


1. Introduction

Kubernetes has become the de facto standard for managing containerized applications. Its installation and configuration have been greatly simplified thanks to tools like kubeadm, which handle certificate generation, component startup, and basic cluster configuration.

However, behind this convenience lies a complex architecture, understanding of which is critical when designing fault-tolerant solutions, creating custom automations, or debugging production issues. To truly understand how a Kubernetes cluster works, it is important to go through the deployment process manually — from initialization to full readiness.

Kubernetes The Hard Way is a guide in which a cluster is deployed step by step, without kubeadm or other automated tools. Instead of a black box, you get the sequential execution of all the steps that are usually performed under the hood.

Each stage corresponds to a specific phase of kubeadm init or kubeadm join, but is implemented manually, with explicit key generation, configuration preparation, process startup, and system state verification.

💡 The result is a fully functional Kubernetes cluster, virtually indistinguishable from one assembled via kubeadm, but built with a complete understanding of all internal dependencies.

Skill Level

This article is intended for readers who are already familiar with the basic concepts of containerization and Kubernetes in general. Without this background, the level of detail will be overwhelming. If you are just getting started, we recommend reviewing the official Kubernetes Bootcamp.


🔧 Preface: Why the Startup Order Matters

Some systems are designed so that components are interdependent, and their management is partially performed within the system itself. This requires a strict order of operations:

  • ⚙️ Component Interdependency
    One component cannot start without another.
    Example: API requires storage, and storage requires networking and configuration.

  • Cannot Start Everything Simultaneously
    Starting components in parallel leads to races and failed startups.
    Example: the scheduler waits for the API, and the API waits for data loading and initialization.

  • 🔄 Some Components Are Started Externally
    Before the system is ready, some processes are started through the environment.
    Example: kubelet is started via systemd, not as part of the cluster.

  • 🛠 A Bootstrap Stage Is Required
    Configs, certificates, and addresses are all prepared manually.
    Example: Initial generation of root CA, kubeconfig, static pod manifests.

  • 🤖 Transition to Self-Management
    After startup, the system begins to manage its own processes and state.
    Example: Control plane components begin to control each other through the API.

Important

Without a strictly defined sequence, such a system will not work. This is exactly why tools like kubeadm exist: they solve the "chicken and egg" problem and establish the correct deployment order.
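The ordering constraints above can be sketched as a strictly sequential script. This is a toy illustration of the bootstrap flow; the phase names are illustrative, not kubeadm's actual phase identifiers:

```shell
#!/usr/bin/env bash
# Toy sketch of the bootstrap ordering problem: each phase may only run
# after its prerequisites, mirroring what kubeadm enforces internally.
set -euo pipefail

phases=(
  "generate-ca-and-certs"        # nothing running yet: pure file generation
  "write-kubeconfigs"            # needs the CA from the previous phase
  "write-static-pod-manifests"   # needs certs and kubeconfigs on disk
  "start-kubelet-via-systemd"    # started by the OS, not by the cluster
  "wait-for-apiserver"           # kubelet launches the static control plane pods
  "hand-over-to-self-management" # from here on, the cluster manages itself
)

for phase in "${phases[@]}"; do
  echo "phase: ${phase}"
done
```

Running the phases out of this order is exactly what breaks a manual deployment.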


Kubernetes components installation diagram

2. Why "The Hard Way"

Deploying Kubernetes manually requires additional effort. However, this approach has several advantages:

  • It provides a deep understanding of the architecture and internal logic of Kubernetes components.
  • It allows flexible configuration of each cluster component to meet specific technical requirements.

3. Deployment Architecture

Component Layer


Below is a list of components required for manual cluster deployment. To ensure compatibility, all versions must be synchronized with each other.

Component                 Version    Purpose
containerd                1.7.19     Container runtime that manages the container lifecycle.
runc                      v1.1.12    Low-level tool for running containers using Linux kernel capabilities.
crictl                    v1.30.0    Utility for debugging CRI runtimes with containerd interaction support.
kubectl                   v1.30.4    Client for interacting with the Kubernetes API.
kubeadm                   v1.30.4    Tool for automating Kubernetes installation and configuration (used for configuration validation).
kubelet                   v1.30.4    Agent running on each node, responsible for pod execution and health monitoring.
etcd                      3.5.12-0   Distributed key-value store for storing cluster configuration and state.
kube-apiserver            v1.30.4    Component providing a REST API for cluster interaction.
kube-controller-manager   v1.30.4    Manages the state of cluster objects using built-in controllers.
kube-scheduler            v1.30.4    Responsible for scheduling pod placement on nodes.
conntrack                 v1.4.+     Utility for tracking network connections (used by iptables and kubelet).
socat                     1.8.+      Utility for port forwarding and TCP tunneling (used for debugging and proxying).

Switching Layer

Network deployment diagram.

Component                 Port    Protocol
etcd-server               2379    TCP
etcd-peer                 2380    TCP
etcd-metrics              2381    TCP
kube-apiserver            6443    TCP
kube-controller-manager   10257   TCP
kube-scheduler            10259   TCP
kubelet-healthz           10248   TCP
kubelet-server            10250   TCP
kubelet-read-only-port    10255   TCP

Load Balancing Layer


IP Address   Target Group                      Port   Target Port
VIP-LB       NODE-IP-1, NODE-IP-2, NODE-IP-3   6443   6443

DNS Records

A Record                                IP Address   TTL
api.my-first-cluster.example.com        VIP-LB       60s
master-1.my-first-cluster.example.com   NODE-IP-1    60s
master-2.my-first-cluster.example.com   NODE-IP-2    60s
master-3.my-first-cluster.example.com   NODE-IP-3    60s

4. Creating the Infrastructure

At this stage, the basic cluster architecture is defined, including its network topology, control plane nodes, and core parameters.

Cluster Information

Name               External Domain   Kubernetes Version
my-first-cluster   example.com       v1.30.4

Control Plane Nodes

Name                                    IP Address   Operating System   Resources
master-1.my-first-cluster.example.com   NODE-IP-1    ubuntu-24-04-lts   2 CPU / 2 GB RAM / 20 GB
master-2.my-first-cluster.example.com   NODE-IP-2    ubuntu-24-04-lts   2 CPU / 2 GB RAM / 20 GB
master-3.my-first-cluster.example.com   NODE-IP-3    ubuntu-24-04-lts   2 CPU / 2 GB RAM / 20 GB

5. Basic Node Setup

This section covers the basic preparation of Kubernetes nodes before installing components. It describes setting up environment variables, changing the hostname, and installing required system utilities. These steps are mandatory for the correct operation of kubelet and other control plane components.

Basic node setup

● Required

Basic node settings

  • Node environment variables.
  • Changing the node name.
  • Installing dependencies.

Node environment variables

export HOST_NAME=master-1
export CLUSTER_NAME="my-first-cluster"
export BASE_DOMAIN="example.com"
export CLUSTER_DOMAIN="cluster.local"
export FULL_HOST_NAME="${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}"
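A quick sanity check that the composed FQDN matches what the DNS records from the deployment architecture expect:

```shell
# Recompute the node FQDN from its parts; for master-1 this must match
# the A record master-1.my-first-cluster.example.com defined earlier.
HOST_NAME=master-1
CLUSTER_NAME="my-first-cluster"
BASE_DOMAIN="example.com"
FULL_HOST_NAME="${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}"
echo "${FULL_HOST_NAME}"   # → master-1.my-first-cluster.example.com
```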

Changing the node name

hostnamectl set-hostname ${FULL_HOST_NAME}

Installing dependencies

sudo apt update
sudo apt install -y conntrack socat jq tree

6. Loading Kernel Modules

This section covers loading kernel modules required for the correct operation of Kubernetes. The setup includes modprobe configuration and activation of the overlay and br_netfilter modules, which provide support for the container filesystem and network functions. These steps are mandatory for the functioning of network policies, iptables, and container runtimes.

Loading kernel modules

● Required

Component installation steps:

  • Modprobe configuration.
  • Loading modules.

Modprobe configuration

cat <<EOF > /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

Loading modules

sudo -i
modprobe overlay
modprobe br_netfilter
Note

The overlay module is used by the OverlayFS filesystem to manage container layers. It allows merging multiple directories into a single virtual filesystem. It is used by runtimes such as Docker and containerd.

The br_netfilter module enables processing of network bridge traffic through the netfilter subsystem. This is necessary for the correct operation of iptables in Kubernetes.


7. Configuring sysctl Parameters

This section covers configuring kernel parameters using sysctl, which are necessary for Kubernetes networking. Changes are made to ensure traffic routing between pods and correct iptables operation for bridges. These parameters are mandatory for enabling IP packet forwarding and network flow filtering in the cluster.

Configuring sysctl parameters

● Required

Component installation steps:

  • Sysctl configuration.
  • Applying configuration.
Note

For correct traffic routing and filtering, the kernel network parameters below must be set.

Sysctl configuration

cat <<EOF > /etc/sysctl.d/99-br-netfilter.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
EOF

Applying configuration

sysctl --system

If the net.ipv4.ip_forward parameter is not enabled, the system will not forward IP packets between interfaces. This can lead to network failures within the cluster, service unavailability, and loss of connectivity between pods.

Sysctl configuration

cat <<EOF > /etc/sysctl.d/99-network.conf
net.ipv4.ip_forward=1
EOF
sysctl --system

8. Installing Components

This section describes the installation process for the core components required for a Kubernetes cluster. The installation is performed manually and prepares the environment for subsequent initialization and control plane configuration stages.

Installation of runc

● Required

Component installation steps

  • Creating working directories.
  • Environment variables.
  • Download instructions.
  • Permissions setup.
  • Download service.
  • Starting the download service.

Creating working directories

mkdir -p /etc/default/runc

Environment variables

cat <<EOF > /etc/default/runc/download.env
COMPONENT_VERSION="v1.1.12"
REPOSITORY="https://github.com/opencontainers/runc/releases/download"
EOF

Download instructions

cat <<"EOF" > /etc/default/runc/download-script.sh
#!/bin/bash
set -Eeuo pipefail

COMPONENT_VERSION="${COMPONENT_VERSION:-v1.1.12}"
REPOSITORY="${REPOSITORY:-https://github.com/opencontainers/runc/releases/download}"
PATH_BIN="${REPOSITORY}/${COMPONENT_VERSION}/runc.amd64"
PATH_SHA256="${REPOSITORY}/${COMPONENT_VERSION}/runc.sha256sum"
INSTALL_PATH="/usr/local/bin/runc"

LOG_TAG="runc-installer"
TMP_DIR="$(mktemp -d)"
# Clean up the scratch directory on any exit path.
trap 'rm -rf "$TMP_DIR"' EXIT

logger -t "$LOG_TAG" "[INFO] Checking current runc version..."

CURRENT_VERSION=$($INSTALL_PATH --version 2>/dev/null | head -n1 | awk '{print $NF}') || CURRENT_VERSION="none"
COMPONENT_VERSION_CLEAN=$(echo "$COMPONENT_VERSION" | sed 's/^v//')

logger -t "$LOG_TAG" "[INFO] Current: $CURRENT_VERSION, Target: $COMPONENT_VERSION_CLEAN"

if [[ "$CURRENT_VERSION" != "$COMPONENT_VERSION_CLEAN" ]]; then
  logger -t "$LOG_TAG" "[INFO] Download URL: $PATH_BIN"
  logger -t "$LOG_TAG" "[INFO] Updating runc to version $COMPONENT_VERSION..."

  cd "$TMP_DIR"
  logger -t "$LOG_TAG" "[INFO] Working directory: $PWD"

  logger -t "$LOG_TAG" "[INFO] Downloading runc..."
  curl -fsSL -o runc.amd64 "$PATH_BIN" || { logger -t "$LOG_TAG" "[ERROR] Failed to download runc"; exit 1; }

  logger -t "$LOG_TAG" "[INFO] Downloading checksum file..."
  curl -fsSL -o runc.sha256sum "$PATH_SHA256" || { logger -t "$LOG_TAG" "[ERROR] Failed to download checksum file"; exit 1; }

  logger -t "$LOG_TAG" "[INFO] Verifying checksum..."
  grep "runc.amd64" runc.sha256sum | sha256sum -c - || { logger -t "$LOG_TAG" "[ERROR] Checksum verification failed!"; exit 1; }

  logger -t "$LOG_TAG" "[INFO] Installing runc..."
  install -m 755 runc.amd64 "$INSTALL_PATH"

  logger -t "$LOG_TAG" "[INFO] runc successfully updated to $COMPONENT_VERSION."
else
  logger -t "$LOG_TAG" "[INFO] runc is already up to date. Skipping installation."
fi
EOF
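One detail worth calling out: runc --version reports 1.1.12 while the release tag is v1.1.12, so the script strips the leading v before comparing. The normalization in isolation:

```shell
# Normalize a release tag (v1.1.12) to the form the binary reports about
# itself (1.1.12), so the two can be compared for the idempotency check.
COMPONENT_VERSION="v1.1.12"
REPORTED_VERSION="1.1.12"   # what `runc --version` prints for this build

COMPONENT_VERSION_CLEAN=$(echo "$COMPONENT_VERSION" | sed 's/^v//')

if [ "$REPORTED_VERSION" = "$COMPONENT_VERSION_CLEAN" ]; then
  echo "runc is already up to date"
else
  echo "update required"
fi
```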

Permissions setup

chmod +x /etc/default/runc/download-script.sh

Download service

cat <<EOF > /usr/lib/systemd/system/runc-install.service
[Unit]
Description=Install and update in-cloud component runc
After=network.target
Wants=network-online.target

[Service]
Type=oneshot
EnvironmentFile=-/etc/default/runc/download.env
ExecStart=/bin/bash -c "/etc/default/runc/download-script.sh"
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF

Starting the download service

systemctl enable runc-install.service
systemctl start runc-install.service

Installation check

Service logs

journalctl -t runc-installer
Command output
***** [INFO] Checking current runc version...
***** [INFO] Current: none, Target: v1.1.12
***** [INFO] Download URL: https://*******
***** [INFO] Updating runc to version v1.1.12...
***** [INFO] Working directory: /tmp/tmp.*****
***** [INFO] Downloading runc...
***** [INFO] Downloading checksum file...
***** [INFO] Verifying checksum...
***** [INFO] Installing runc...
***** [INFO] runc successfully updated to v1.1.12.
Executable files

ls -la /usr/local/bin/ | grep 'runc$'
Command output
-rwxr-xr-x  1 root root  10709696 Jan 23  2024 runc

Executable file version

runc --version
Command output
runc version 1.1.12
commit: v1.1.12-0-g51d5e946
spec: 1.0.2-dev
go: go1.20.13
libseccomp: 2.5.4

9. Configuring Components

This section describes the setup and configuration of Kubernetes components that ensure proper cluster operation.

Configuration of containerd

● Required

Component configuration steps

  • Component configuration
  • Systemd Unit setup for the component
  • Systemd Unit start

Component configuration

Creating working directories

mkdir -p /etc/containerd/
mkdir -p /etc/containerd/conf.d
mkdir -p /etc/containerd/certs.d

Base configuration file

cat <<"EOF" > /etc/containerd/config.toml
version = 2
imports = ["/etc/containerd/conf.d/*.toml"]
EOF

Custom configuration file template

cat <<"EOF" > /etc/containerd/conf.d/in-cloud.toml
version = 2

[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.k8s.io/pause:3.9"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = "/etc/containerd/certs.d/"
EOF

Systemd Unit setup for the component

  • Delegate=yes delegates cgroup subsystem management to the container runtime (required for proper Kubernetes operation).
  • KillMode=process ensures that stopping the service terminates only the main containerd process, not the child containers.
  • OOMScoreAdjust=-999 protects the process from the OOM killer: without the runtime, all containers on the node become unmanageable.

cat <<EOF > /usr/lib/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target containerd-install.service runc-install.service
Wants=containerd-install.service runc-install.service

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF
Systemd Unit start

systemctl enable containerd
systemctl start containerd

Configuration verification

tree /etc/containerd/
Command output
/etc/containerd/
├── certs.d
├── conf.d
│   └── in-cloud.toml
└── config.toml
systemctl status containerd
Command output
● containerd.service - containerd container runtime
Loaded: loaded (/usr/lib/systemd/system/containerd.service; enabled; preset: enabled)
Active: active (running) since Tue 2024-12-31 17:26:21 UTC; 2min 30s ago
Docs: https://containerd.io
Main PID: 839 (containerd)
Tasks: 7 (limit: 2274)
Memory: 62.0M (peak: 62.5M)
CPU: 375ms
CGroup: /system.slice/containerd.service
└─839 /usr/local/bin/containerd

***** level=info msg="Start subscribing containerd event"
***** level=info msg="Start recovering state"
***** level=info msg="Start event monitor"
***** level=info msg="Start snapshots syncer"
***** level=info msg="Start cni network conf syncer for default"
***** level=info msg="Start streaming server"
***** level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
***** level=info msg=serving... address=/run/containerd/containerd.sock
***** level=info msg="containerd successfully booted in 0.065807s"
***** Started containerd.service - containerd container runtime.

10. Verifying Component Readiness

This section describes the process of verifying the readiness of Kubernetes components before cluster initialization or joining new nodes.

Component readiness verification

● Optional

kubeadm init phase preflight --dry-run \
--config=/var/run/kubeadm/kubeadm.yaml
If everything is installed correctly, the command completes without errors and you will see the following output:

Command output

[preflight] Running pre-flight checks
[preflight] Would pull the required images (like 'kubeadm config images pull')

If the process was performed in semi-automatic mode, the acceptable output may look like this:

Command output

[WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[WARNING FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Would pull the required images (like 'kubeadm config images pull')

11. Working with Certificates

This section covers the rules for using certificates in a Kubernetes cluster: which components use certificates, who signs them, and how authentication is performed.

Basic certificate structure

Network communication using certificates


12. Creating Root Certificates

Certificate Authority (CA) is a trusted source that issues root certificates used to sign all other certificates within the Kubernetes cluster.

CA certificates play a key role in establishing trust between components, ensuring authentication, encryption, and integrity of communications.

This section describes the process of obtaining root certificates that are used to sign the remaining certificates in the Kubernetes cluster.

Creating root certificates

● Required

Kubernetes CA

Purpose: Kubernetes root Certificate Authority (CA). Signs the server and client certificates for kube-apiserver, kubelet, kube-controller-manager, and kube-scheduler. All cluster components trust this CA for TLS connection verification.

Note

This block describes only the process of creating the Kubernetes CA root certificates.

Working directory

mkdir -p /etc/kubernetes/openssl
mkdir -p /etc/kubernetes/pki

Configuration

cat <<EOF > /etc/kubernetes/openssl/ca.conf
[req]
distinguished_name = req_distinguished_name
x509_extensions = v3_ca
prompt = no

[req_distinguished_name]
CN = kubernetes

[v3_ca]
keyUsage = critical, keyCertSign, keyEncipherment, digitalSignature
basicConstraints = critical,CA:TRUE
EOF

Private key generation

openssl genrsa \
-out /etc/kubernetes/pki/ca.key 2048

Public key generation

openssl req \
-x509 \
-new \
-nodes \
-key /etc/kubernetes/pki/ca.key \
-sha256 \
-days 3650 \
-out /etc/kubernetes/pki/ca.crt \
-config /etc/kubernetes/openssl/ca.conf
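Once generated, the CA can be inspected with openssl x509. The snippet below is a self-contained demo in a temporary directory, so it can be tried without touching /etc/kubernetes; against the real file, point the same inspection commands at /etc/kubernetes/pki/ca.crt:

```shell
# Generate a throwaway CA with the same openssl config shape as above,
# then print its subject, validity window, and basic constraints.
set -euo pipefail
TMP=$(mktemp -d); trap 'rm -rf "$TMP"' EXIT

cat <<CONF > "$TMP/ca.conf"
[req]
distinguished_name = req_distinguished_name
x509_extensions = v3_ca
prompt = no
[req_distinguished_name]
CN = kubernetes
[v3_ca]
keyUsage = critical, keyCertSign, keyEncipherment, digitalSignature
basicConstraints = critical,CA:TRUE
CONF

openssl genrsa -out "$TMP/ca.key" 2048 2>/dev/null
openssl req -x509 -new -nodes -key "$TMP/ca.key" -sha256 -days 3650 \
  -out "$TMP/ca.crt" -config "$TMP/ca.conf"

# Subject and validity window of the resulting certificate.
openssl x509 -in "$TMP/ca.crt" -noout -subject -dates
# Confirm the CA flag made it into the extensions.
openssl x509 -in "$TMP/ca.crt" -noout -text | grep -A1 "Basic Constraints"
```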
Certificate readiness verification

Note

The cert-report.sh helper used below is created in section 18.

/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/pki/ca.crt
Command output
CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Oct 20, 2034 22:04 UTC   9y              no

13. Creating Application Certificates

Certificates are digital documents that verify the authenticity of components within a Kubernetes cluster. They provide secure communication, authentication, and encryption during interactions between nodes, control components, and users.

All certificates are created based on Public Key Infrastructure (PKI) and contain information about the owner, validity period, and the Certificate Authority (CA) that issued the certificate.

This section generates the certificates required for various Kubernetes components (API server, kubelet, controller-manager, etc.).

Creating application certificates

● Required

Kubelet server

Purpose: kubelet server certificate for TLS on port 10250. Presented when kube-apiserver and other clients connect to the kubelet API. Signed by kubernetes-ca.

Environment variables

export HOST_NAME=master-1   # adjust per node: master-1, master-2, or master-3
export CLUSTER_NAME=my-first-cluster
export BASE_DOMAIN=example.com
export CLUSTER_DOMAIN=cluster.local
export FULL_HOST_NAME=${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}
export MACHINE_LOCAL_ADDRESS=$(ip -4 addr show scope global | awk '/inet/ {print $2; exit}' | cut -d/ -f1)

Working directory

mkdir -p /etc/kubernetes/pki
mkdir -p /etc/kubernetes/openssl/csr
mkdir -p /var/lib/kubelet/pki

Configuration

cat <<EOF > /etc/kubernetes/openssl/kubelet-server.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
req_extensions = req_ext

[ req_ext ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = localhost
DNS.2 = ${HOST_NAME}
DNS.3 = ${FULL_HOST_NAME}
IP.1 = 127.0.0.1
IP.2 = 0:0:0:0:0:0:0:1
IP.3 = ${MACHINE_LOCAL_ADDRESS}

[ dn ]
CN = "system:node:${FULL_HOST_NAME}"
O = "system:nodes"

[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=digitalSignature,keyEncipherment
extendedKeyUsage=serverAuth
subjectAltName=@alt_names
EOF

Private key generation

openssl genrsa \
-out /var/lib/kubelet/pki/kubelet-server-key.pem 2048

CSR generation

openssl req \
-new \
-key /var/lib/kubelet/pki/kubelet-server-key.pem \
-out /etc/kubernetes/openssl/csr/kubelet-server.csr \
-config /etc/kubernetes/openssl/kubelet-server.conf

CSR signing

openssl x509 \
-req \
-days 365 \
-sha256 \
-outform PEM \
-CA /etc/kubernetes/pki/ca.crt \
-CAkey /etc/kubernetes/pki/ca.key \
-CAcreateserial \
-in /etc/kubernetes/openssl/csr/kubelet-server.csr \
-out /var/lib/kubelet/pki/kubelet-server.pem \
-extensions v3_ext \
-extfile /etc/kubernetes/openssl/kubelet-server.conf
# Capture the timestamped name once so cat and ln -s refer to the same file.
CERT_BUNDLE="/var/lib/kubelet/pki/kubelet-server-$(date '+%Y-%m-%d-%H-%M-%S').pem"
cat /var/lib/kubelet/pki/kubelet-server.pem /var/lib/kubelet/pki/kubelet-server-key.pem > "${CERT_BUNDLE}"
ln -s "${CERT_BUNDLE}" /var/lib/kubelet/pki/kubelet-server-current.pem
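Before wiring the certificate into the kubelet, it is worth confirming that it chains back to the CA and carries the expected SANs. Against the real files that is `openssl verify -CAfile /etc/kubernetes/pki/ca.crt /var/lib/kubelet/pki/kubelet-server.pem`; the snippet below demonstrates the same checks end to end with throwaway files in a temporary directory:

```shell
# Self-contained demo: issue a leaf certificate from a throwaway CA,
# then verify the chain and inspect the SAN list, mirroring the
# kubelet-server flow above.
set -euo pipefail
TMP=$(mktemp -d); trap 'rm -rf "$TMP"' EXIT

# Throwaway CA.
openssl req -x509 -new -nodes -newkey rsa:2048 -keyout "$TMP/ca.key" \
  -out "$TMP/ca.crt" -subj "/CN=kubernetes" -days 1 2>/dev/null

# Leaf key and CSR.
openssl req -new -nodes -newkey rsa:2048 -keyout "$TMP/leaf.key" \
  -out "$TMP/leaf.csr" -subj "/CN=system:node:demo" 2>/dev/null

# Extensions applied at signing time, as in the kubelet-server.conf v3_ext block.
cat <<EXT > "$TMP/leaf.ext"
subjectAltName=DNS:localhost,IP:127.0.0.1
extendedKeyUsage=serverAuth
EXT

openssl x509 -req -in "$TMP/leaf.csr" -CA "$TMP/ca.crt" -CAkey "$TMP/ca.key" \
  -CAcreateserial -days 1 -extfile "$TMP/leaf.ext" -out "$TMP/leaf.crt" 2>/dev/null

openssl verify -CAfile "$TMP/ca.crt" "$TMP/leaf.crt"   # should end with ": OK"
openssl x509 -in "$TMP/leaf.crt" -noout -text | grep -A1 "Subject Alternative Name"
```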
Certificate readiness check

Note

The cert-report.sh helper used below is created in section 18.

/etc/kubernetes/openssl/cert-report.sh /var/lib/kubelet/pki/kubelet-server.pem
Command output
CERTIFICATE              EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
kubelet-server-current   Oct 22, 2025 22:06 UTC   364d            kubernetes              no

14. Creating the ServiceAccount Signing Key

In Kubernetes, ServiceAccount is a mechanism that allows applications within the cluster to authenticate when accessing the API server. The private key specified in kube-apiserver and kube-controller-manager is used for signing tokens of these accounts. This ensures secure and verifiable interaction between services and provides the ability for granular access control.

This section creates or connects the key used by Kubernetes to sign ServiceAccount tokens.

Creating ServiceAccount signing key

● Required

openssl genpkey \
-algorithm RSA \
-out /etc/kubernetes/pki/sa.key \
-pkeyopt rsa_keygen_bits:2048
openssl rsa \
-pubout \
-in /etc/kubernetes/pki/sa.key \
-out /etc/kubernetes/pki/sa.pub
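kube-apiserver validates tokens with sa.pub while kube-controller-manager signs them with sa.key, so the two files must be a matching pair. A self-contained check follows; to test the real files, swap the temp paths for /etc/kubernetes/pki/sa.key and sa.pub:

```shell
# Generate a key pair the same way as above, then confirm that the
# public key derived from the private key is byte-identical to the
# exported .pub file.
set -euo pipefail
TMP=$(mktemp -d); trap 'rm -rf "$TMP"' EXIT

openssl genpkey -algorithm RSA -out "$TMP/sa.key" \
  -pkeyopt rsa_keygen_bits:2048 2>/dev/null
openssl rsa -pubout -in "$TMP/sa.key" -out "$TMP/sa.pub" 2>/dev/null

derived=$(openssl rsa -pubout -in "$TMP/sa.key" 2>/dev/null | sha256sum)
stored=$(sha256sum < "$TMP/sa.pub")

if [ "$derived" = "$stored" ]; then
  echo "sa.pub matches sa.key"
else
  echo "MISMATCH: sa.pub was not derived from sa.key"
fi
```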

15*. Creating All Certificates

This section describes the generation of all certificates.

Caution

If you have not performed manual certificate generation, use this block to automatically create the necessary files.

Generation of all certificates

● Optional

Certificate generation

kubeadm init phase certs all \
--config=/var/run/kubeadm/kubeadm.yaml
Command output
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [api.my-first-cluster.example.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.my-first-cluster.example.com pylcozuscb] and IPs [29.64.0.1 31.129.111.153 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-1.my-first-cluster.example.com pylcozuscb] and IPs [31.129.111.153 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-1.my-first-cluster.example.com pylcozuscb] and IPs [31.129.111.153 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key

16. Creating kubeconfig Configurations

Kubeconfig is a configuration file that provides access to a Kubernetes cluster. It contains information about API servers, user credentials (such as tokens or certificates), and contexts that define which cluster and user are being used. Kubeconfig provides authentication and authorization when interacting with the cluster through kubectl or other clients, allowing secure management of cluster resources and settings.

We create kubeconfig files for components and users. This ensures secure and controlled connection to the API server.
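The commands below assemble a file with three linked sections: a cluster, a user, and a context that binds them together. A trimmed sketch of the resulting structure (field values abbreviated, placeholder names illustrative):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    certificate-authority-data: <base64 CA>   # embedded via --embed-certs=true
    server: https://127.0.0.1:6443
users:
- name: <user-name>
  user:
    client-certificate-data: <base64 client cert>
    client-key-data: <base64 client key>
contexts:
- name: default
  context:
    cluster: kubernetes
    user: <user-name>
current-context: default
```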

Creating kubeconfig configurations and certificates

● Required

Super Admin

Working directory

mkdir -p /etc/kubernetes/pki
mkdir -p /etc/kubernetes/openssl/csr
mkdir -p /etc/kubernetes/kubeconfig

Configuration

cat <<EOF > /etc/kubernetes/openssl/super-admin.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn

[ dn ]
CN = kubernetes-super-admin
O = system:masters

[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=digitalSignature,keyEncipherment
extendedKeyUsage=clientAuth
EOF

Private key generation

openssl genrsa \
-out /etc/kubernetes/kubeconfig/super-admin.key 2048

CSR generation

openssl req \
-new \
-key /etc/kubernetes/kubeconfig/super-admin.key \
-out /etc/kubernetes/openssl/csr/super-admin.csr \
-config /etc/kubernetes/openssl/super-admin.conf

CSR signing

openssl x509 \
-req \
-days 365 \
-sha256 \
-CA /etc/kubernetes/pki/ca.crt \
-CAkey /etc/kubernetes/pki/ca.key \
-CAcreateserial \
-in /etc/kubernetes/openssl/csr/super-admin.csr \
-out /etc/kubernetes/kubeconfig/super-admin.crt \
-extensions v3_ext \
-extfile /etc/kubernetes/openssl/super-admin.conf

Kubeconfig setup for super-admin

kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.crt \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=/etc/kubernetes/super-admin.conf

kubectl config set-credentials kubernetes-super-admin \
--client-certificate=/etc/kubernetes/kubeconfig/super-admin.crt \
--client-key=/etc/kubernetes/kubeconfig/super-admin.key \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/super-admin.conf

kubectl config set-context default \
--cluster=kubernetes \
--user=kubernetes-super-admin \
--kubeconfig=/etc/kubernetes/super-admin.conf

kubectl config use-context default \
--kubeconfig=/etc/kubernetes/super-admin.conf
Certificate readiness check

Note

The cert-report.sh helper used below is created in section 18.

/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/kubeconfig/super-admin.crt
Command output
CERTIFICATE        EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
super-admin.conf   Oct 22, 2025 22:06 UTC   364d            kubernetes              no

17*. Creating All kubeconfigs

This section describes the generation of all kubeconfig files.

Caution

If you have not performed manual kubeconfig generation, use this block to automatically create the configurations.

Generation of all kubeconfig files

● Optional

Kubeconfig generation

kubeadm init phase kubeconfig all \
--config=/var/run/kubeadm/kubeadm.yaml
Command output
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file

18. Verifying the Certificate Block

This section covers the verification of the correctness of created certificates and keys, as well as the correspondence between them. This is important for eliminating errors before launching Kubernetes components.

Certificate block verification

● Optional

After configuring the certificates, it is recommended to verify their correctness using the cert-report.sh script below.

Working directory

mkdir -p /etc/kubernetes/openssl

Script creation instructions

Script creation instructions
cat <<'EOF' > /etc/kubernetes/openssl/cert-report.sh
#!/usr/bin/env bash
set -euo pipefail

TMPDIR=$(mktemp -d)
trap 'rm -rf "$TMPDIR"' EXIT

declare -A CN_TO_CA_NAME
declare -A PROCESSED_FINGERPRINTS
CERT_ROWS=()
CA_ROWS=()

CERT_HEADER=$(printf "%-28s %-25s %-15s %-24s %-20s" \
  "CERTIFICATE" "EXPIRES" "RESIDUAL TIME" "CERTIFICATE AUTHORITY" "EXTERNALLY MANAGED")
CA_HEADER=$(printf "%-24s %-25s %-15s %-20s" \
  "CERTIFICATE AUTHORITY" "EXPIRES" "RESIDUAL TIME" "EXTERNALLY MANAGED")

CERT_PATH="${1:-}"

if [ -n "$CERT_PATH" ]; then
  FILES=("$CERT_PATH")
else
  mapfile -t FILES < <(
    find /etc/kubernetes/ \
      -type d -name openssl -prune -o \
      -type f \( -name "*.crt" -o -name "*.pem" -o -name "*.conf" \) -print 2>/dev/null
  )
fi

# Extract an x509 certificate: decode client-certificate-data from a kubeconfig,
# otherwise treat the file as a plain PEM certificate.
extract_cert() {
  local file="$1"
  local out="$2"
  if grep -q "client-certificate-data:" "$file"; then
    awk '/client-certificate-data:/ {print $2}' "$file" | base64 -d > "$out"
  else
    cp "$file" "$out"
  fi
}

# Human-readable residual lifetime: whole years if at least one, otherwise days.
cert_lifetime() {
  local end="$1"
  local end_ts now_ts days years
  end_ts=$(date -d "$end" +%s)
  now_ts=$(date +%s)
  (( end_ts < now_ts )) && echo "expired" && return
  days=$(( (end_ts - now_ts) / 86400 ))
  years=$(( days / 365 ))
  (( years > 0 )) && echo "${years}y" || echo "${days}d"
}

# Report name: strip the extension and prefix etcd/front-proxy certificates.
cert_name() {
  local path="$1"
  local base
  base=$(basename "$path" | sed 's/\.\(crt\|pem\|conf\)$//')
  case "$path" in
    */etcd/*) echo "etcd-$base" ;;
    */front-proxy/*) echo "front-proxy-$base" ;;
    *) echo "$base" ;;
  esac
}

# First pass: map CA subject CNs to report names (CAs carry the "Certificate Sign" key usage).
for file in "${FILES[@]}"; do
  crt="$TMPDIR/ca.crt"
  extract_cert "$file" "$crt" || continue
  openssl x509 -in "$crt" -noout -text 2>/dev/null | grep -A1 "Key Usage" | grep -q "Certificate Sign" || continue
  cn=$(openssl x509 -in "$crt" -noout -subject 2>/dev/null | sed -n 's/.*CN *= *\([^,\/]*\).*/\1/p')
  [[ -n "$cn" ]] && CN_TO_CA_NAME["$cn"]="$(cert_name "$file")"
done

# Second pass: build the report rows, de-duplicating by SHA-256 fingerprint.
for file in "${FILES[@]}"; do
  crt="$TMPDIR/cert.crt"
  extract_cert "$file" "$crt" || continue
  openssl x509 -in "$crt" -noout >/dev/null 2>&1 || continue

  fp=$(openssl x509 -in "$crt" -noout -fingerprint -sha256 | cut -d= -f2)
  [[ -n "${PROCESSED_FINGERPRINTS[$fp]+x}" ]] && continue
  PROCESSED_FINGERPRINTS[$fp]=1

  name=$(cert_name "$file")
  end_raw=$(openssl x509 -in "$crt" -noout -enddate | cut -d= -f2)
  expires=$(date -d "$end_raw" "+%b %d, %Y %H:%M UTC")
  residual=$(cert_lifetime "$end_raw")

  if openssl x509 -in "$crt" -noout -text | grep -A1 "Key Usage" | grep -q "Certificate Sign"; then
    CA_ROWS+=("$(printf "%-24s %-25s %-15s %-20s" "$name" "$expires" "$residual" "no")")
  else
    issuer_cn=$(openssl x509 -in "$crt" -noout -issuer | sed -n 's/.*CN *= *\([^,\/]*\).*/\1/p')
    ca_name="${CN_TO_CA_NAME[$issuer_cn]:-$issuer_cn}"
    CERT_ROWS+=("$(printf "%-28s %-25s %-15s %-24s %-20s" "$name" "$expires" "$residual" "$ca_name" "no")")
  fi
done

echo
echo "$CERT_HEADER"
printf "%s\n" "${CERT_ROWS[@]}" | sort
echo
echo "$CA_HEADER"
printf "%s\n" "${CA_ROWS[@]}" | sort
EOF
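The RESIDUAL TIME column is produced by plain integer arithmetic on epoch seconds. The same calculation, shown standalone on fixed timestamps so the result is reproducible:

```shell
# Same arithmetic as cert_lifetime(), on fixed epoch timestamps
# (1735689600 = 2025-01-01 UTC, 1767225600 = 2026-01-01 UTC):
now_ts=1735689600
end_ts=1767225600
days=$(( (end_ts - now_ts) / 86400 ))   # 365 full days between the two instants
years=$(( days / 365 ))                 # collapses to whole years once past one year
(( years > 0 )) && echo "${years}y" || echo "${days}d"   # prints "1y"
```

Because of the integer division by 365, a certificate expiring in 364 days is reported as `364d`, while one expiring in 10 years is reported as `9y` or `10y` depending on leap days — exactly what the sample outputs below show.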

Setting permissions

chmod +x /etc/kubernetes/openssl/cert-report.sh

Running the script for all certificates/kubeconfigs

/etc/kubernetes/openssl/cert-report.sh
Note
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                   Oct 22, 2025 22:06 UTC    364d            ca                       no
apiserver                    Oct 22, 2025 22:06 UTC    364d            ca                       no
apiserver-etcd-client        Oct 22, 2025 22:06 UTC    364d            etcd-ca                  no
apiserver-kubelet-client     Oct 22, 2025 22:06 UTC    364d            ca                       no
controller-manager.conf      Oct 22, 2025 22:06 UTC    364d            ca                       no
etcd-healthcheck-client      Oct 22, 2025 22:06 UTC    364d            etcd-ca                  no
etcd-peer                    Oct 22, 2025 22:06 UTC    364d            etcd-ca                  no
etcd-server                  Oct 22, 2025 22:06 UTC    364d            etcd-ca                  no
front-proxy-client           Oct 22, 2025 22:06 UTC    364d            front-proxy-ca           no
scheduler.conf               Oct 22, 2025 22:06 UTC    364d            ca                       no
super-admin.conf             Oct 22, 2025 22:06 UTC    364d            ca                       no

CERTIFICATE AUTHORITY    EXPIRES                   RESIDUAL TIME   EXTERNALLY MANAGED
ca                       Oct 20, 2034 22:04 UTC    9y              no
etcd-ca                  Oct 20, 2034 22:04 UTC    9y              no
front-proxy-ca           Oct 20, 2034 22:04 UTC    9y              no

Running the script for a single certificate/kubeconfig

/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/pki/ca.crt
Note
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED


CERTIFICATE AUTHORITY    EXPIRES                   RESIDUAL TIME   EXTERNALLY MANAGED
ca                       Oct 20, 2034 22:04 UTC    9y              no

19. Creating Control Plane Static Pods

Static Pods setup

● Required

This section describes the manual creation of static pod manifests for Kubernetes control plane components.

Kube-API setup

● Required

Note

This section is optional and intended only for cases where this resource needs to be configured separately from the others.

Environment variables

export MACHINE_LOCAL_ADDRESS=$(ip -4 addr show scope global | awk '/inet/ {print $2; exit}' | cut -d/ -f1)
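What this pipeline extracts, demonstrated on a captured sample line of `ip -4 addr show scope global` output (the address here is illustrative):

```shell
# The first global-scope "inet" line is reduced to a bare IPv4 address:
# awk prints the second field (the CIDR), cut strips the prefix length.
sample='    inet 31.129.111.153/24 brd 31.129.111.255 scope global dynamic eth0'
addr=$(printf '%s\n' "$sample" | awk '/inet/ {print $2; exit}' | cut -d/ -f1)
echo "$addr"    # 31.129.111.153
```

Note that on hosts with several global-scope addresses, the `exit` in the awk program makes the pipeline take the first one reported; verify that this is the interface you want the API server to advertise.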

Working directory

mkdir -p /etc/kubernetes/manifests
Static Pod Kube-apiserver

Manifest generation

cat <<EOF > /etc/kubernetes/manifests/kube-apiserver.yaml
---
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: ${MACHINE_LOCAL_ADDRESS}:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=${MACHINE_LOCAL_ADDRESS}
    - --aggregator-reject-forwarding-redirect=true
    - --allow-privileged=true
    - --anonymous-auth=true
    - --api-audiences=konnectivity-server
    - --apiserver-count=1
    - --audit-log-batch-buffer-size=10000
    - --audit-log-batch-max-size=1
    - --audit-log-batch-max-wait=0s
    - --audit-log-batch-throttle-burst=0
    - --audit-log-batch-throttle-enable=false
    - --audit-log-batch-throttle-qps=0
    - --audit-log-compress=false
    - --audit-log-format=json
    - --audit-log-maxage=30
    - --audit-log-maxbackup=10
    - --audit-log-maxsize=1000
    - --audit-log-mode=batch
    - --audit-log-truncate-enabled=false
    - --audit-log-truncate-max-batch-size=10485760
    - --audit-log-truncate-max-event-size=102400
    - --audit-log-version=audit.k8s.io/v1
    - --audit-webhook-batch-buffer-size=10000
    - --audit-webhook-batch-initial-backoff=10s
    - --audit-webhook-batch-max-size=400
    - --audit-webhook-batch-max-wait=30s
    - --audit-webhook-batch-throttle-burst=15
    - --audit-webhook-batch-throttle-enable=true
    - --audit-webhook-batch-throttle-qps=10
    - --audit-webhook-initial-backoff=10s
    - --audit-webhook-mode=batch
    - --audit-webhook-truncate-enabled=false
    - --audit-webhook-truncate-max-batch-size=10485760
    - --audit-webhook-truncate-max-event-size=102400
    - --audit-webhook-version=audit.k8s.io/v1
    - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
    - --audit-log-path=/var/log/kubernetes/audit/audit.log
    - --authentication-token-webhook-cache-ttl=2m0s
    - --authentication-token-webhook-version=v1beta1
    - --authorization-mode=Node,RBAC
    - --authorization-webhook-cache-authorized-ttl=5m0s
    - --authorization-webhook-cache-unauthorized-ttl=30s
    - --authorization-webhook-version=v1beta1
    - --bind-address=0.0.0.0
    - --cert-dir=/var/run/kubernetes
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    # -> Enable if managing state via Cloud Controller Manager
    # - --cloud-provider=external
    - --cloud-provider-gce-l7lb-src-cidrs=130.211.0.0/22,35.191.0.0/16
    - --cloud-provider-gce-lb-src-cidrs=130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16
    - --contention-profiling=false
    - --default-not-ready-toleration-seconds=300
    - --default-unreachable-toleration-seconds=300
    - --default-watch-cache-size=100
    - --delete-collection-workers=1
    - --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,PodSecurity
    - --enable-aggregator-routing=true
    - --enable-bootstrap-token-auth=true
    - --enable-garbage-collector=true
    - --enable-logs-handler=true
    - --enable-priority-and-fairness=true
    - --encryption-provider-config-automatic-reload=false
    - --endpoint-reconciler-type=lease
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-compaction-interval=5m0s
    - --etcd-count-metric-poll-period=1m0s
    - --etcd-db-metric-poll-interval=30s
    - --etcd-healthcheck-timeout=2s
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-prefix=/registry
    - --etcd-readycheck-timeout=2s
    - --etcd-servers=https://127.0.0.1:2379
    - --event-ttl=1h0m0s
    - --feature-gates=RotateKubeletServerCertificate=true
    - --goaway-chance=0
    - --help=false
    - --http2-max-streams-per-connection=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-port=10250
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --kubelet-read-only-port=10255
    - --kubelet-timeout=5s
    - --kubernetes-service-node-port=0
    - --lease-reuse-duration-seconds=60
    - --livez-grace-period=0s
    - --log-flush-frequency=5s
    - --logging-format=text
    - --log-json-info-buffer-size=0
    - --log-json-split-stream=false
    - --log-text-info-buffer-size=0
    - --log-text-split-stream=false
    - --max-connection-bytes-per-sec=0
    - --max-mutating-requests-inflight=200
    - --max-requests-inflight=400
    - --min-request-timeout=1800
    - --permit-address-sharing=false
    - --permit-port-sharing=false
    - --profiling=false
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --request-timeout=1m0s
    - --runtime-config=api/all=true
    - --secure-port=6443
    - --service-account-extend-token-expiration=true
    - --service-account-issuer=https://kubernetes.default.svc.cluster.local
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-account-lookup=true
    - --service-account-max-token-expiration=0s
    - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=29.64.0.0/16
    - --service-node-port-range=30000-32767
    - --shutdown-delay-duration=0s
    - --shutdown-send-retry-after=false
    - --shutdown-watch-termination-grace-period=0s
    - --storage-backend=etcd3
    - --storage-media-type=application/vnd.kubernetes.protobuf
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --v=2
    - --version=false
    - --watch-cache=true
    # IF YOU NEED TO CONNECT CLOUD-CONTROLLER-MANAGER
    # UNCOMMENT THE FOLLOWING
    # ->
    # - --cloud-provider=external
    # Do not specify if value is "" or undefined
    # - --cloud-config=
    # - --strict-transport-security-directives=
    # - --disable-admission-plugins=
    # - --disabled-metrics=
    # - --egress-selector-config-file=
    # - --encryption-provider-config=
    # - --etcd-servers-overrides=
    # - --external-hostname=
    # - --kubelet-certificate-authority=
    # - --oidc-ca-file=
    # - --oidc-client-id=
    # - --oidc-groups-claim=
    # - --oidc-groups-prefix=
    # - --oidc-issuer-url=
    # - --oidc-required-claim=
    # - --oidc-signing-algs=RS256
    # - --oidc-username-claim=sub
    # - --oidc-username-prefix=
    # - --peer-advertise-ip=
    # - --peer-advertise-port=
    # - --peer-ca-file=
    # - --service-account-jwks-uri=
    # - --show-hidden-metrics-for-version=
    # - --tls-cipher-suites=
    # - --tls-min-version=
    # - --tls-sni-cert-key=
    # - --token-auth-file=
    # - --tracing-config-file=
    # - --vmodule=
    # - --watch-cache-sizes=
    # - --authorization-webhook-config-file=
    # - --cors-allowed-origins=
    # - --debug-socket-path=
    # - --authorization-policy-file=
    # - --authorization-config=
    # - --authentication-token-webhook-config-file=
    # - --authentication-config=
    # - --audit-webhook-config-file=
    # - --allow-metric-labels=
    # - --allow-metric-labels-manifest=
    # - --admission-control=
    # - --admission-control-config-file=
    # - --advertise-address=
    image: registry.k8s.io/kube-apiserver:v1.30.4
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: ${MACHINE_LOCAL_ADDRESS}
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-apiserver
    readinessProbe:
      failureThreshold: 3
      httpGet:
        host: ${MACHINE_LOCAL_ADDRESS}
        path: /readyz
        port: 6443
        scheme: HTTPS
      periodSeconds: 1
      timeoutSeconds: 15
    resources:
      requests:
        cpu: 250m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: ${MACHINE_LOCAL_ADDRESS}
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /var/log/kubernetes/audit/
      name: k8s-audit
    - mountPath: /etc/kubernetes/audit-policy.yaml
      name: k8s-audit-policy
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priority: 2000001000
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /var/log/kubernetes/audit/
      type: DirectoryOrCreate
    name: k8s-audit
  - hostPath:
      path: /etc/kubernetes/audit-policy.yaml
      type: File
    name: k8s-audit-policy
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
EOF
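An easy mistake with this heredoc is running it before `MACHINE_LOCAL_ADDRESS` is exported: the manifest is then rendered with literal `${...}` placeholders, which kube-apiserver will reject at startup. A quick sanity check for that failure mode, demonstrated on a throwaway file so it is safe to run anywhere:

```shell
# Simulate a manifest rendered without the variable exported: the quoted
# heredoc delimiter suppresses expansion, reproducing the failure mode.
tmp=$(mktemp)
cat <<'MANIFEST' > "$tmp"
    - --advertise-address=${MACHINE_LOCAL_ADDRESS}
MANIFEST
# The actual check: any surviving "${" means a variable was not expanded.
if grep -q '\${' "$tmp"; then
  echo "unexpanded variables remain"
fi
rm -f "$tmp"
```

Run the same `grep` against `/etc/kubernetes/manifests/kube-apiserver.yaml` after generation; it should match nothing.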

20*. Creating All Control Plane Static Pods

This section describes the automatic generation of static pod manifests for Kubernetes control plane components using kubeadm.

Static Pods setup

● Required

Certificate generation

kubeadm init phase certs all \
--config=/var/run/kubeadm/kubeadm.yaml
Note
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [api.my-first-cluster.example.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.my-first-cluster.example.com pylcozuscb] and IPs [29.64.0.1 31.129.111.153 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-1.my-first-cluster.example.com pylcozuscb] and IPs [31.129.111.153 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-1.my-first-cluster.example.com pylcozuscb] and IPs [31.129.111.153 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
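The files produced under /etc/kubernetes/pki can be inspected with the same openssl calls the cert-report script relies on. Illustrated here on a throwaway self-signed certificate, so the snippet runs without a cluster (the CN and temp paths are made up for the demo):

```shell
# Create a disposable self-signed certificate to inspect:
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -keyout "$tmp/ca.key" -out "$tmp/ca.crt" \
  -days 3650 -nodes -subj "/CN=demo-ca" 2>/dev/null
# Subject and expiry, exactly as cert-report.sh reads them:
openssl x509 -in "$tmp/ca.crt" -noout -subject -enddate
rm -rf "$tmp"
```

On a real node, point `-in` at, for example, `/etc/kubernetes/pki/apiserver.crt` and add `-ext subjectAltName` to verify the DNS names and IPs listed in the kubeadm output above.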

21. Creating ETCD Cluster Static Pods

This section describes the manual creation of static pod manifests for ETCD.

Static Pods setup

● Required

Note

This section is optional and is intended only for cases when you need to configure this resource separately from the others.

Environment variables

export HOST_NAME=master-1
export CLUSTER_NAME=my-first-cluster
export BASE_DOMAIN=example.com
export MACHINE_LOCAL_ADDRESS=$(ip -4 addr show scope global | awk '/inet/ {print $2; exit}' | cut -d/ -f1)
export FULL_HOST_NAME="${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}"
export ETCD_INITIAL_CLUSTER="${FULL_HOST_NAME}=https://${MACHINE_LOCAL_ADDRESS}:2380"
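How these variables compose, worked through with the sample values from this guide (the IP address is illustrative):

```shell
# Compose the etcd member name and initial-cluster string step by step:
HOST_NAME=master-1; CLUSTER_NAME=my-first-cluster; BASE_DOMAIN=example.com
MACHINE_LOCAL_ADDRESS=31.129.111.153   # illustrative address
FULL_HOST_NAME="${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}"
ETCD_INITIAL_CLUSTER="${FULL_HOST_NAME}=https://${MACHINE_LOCAL_ADDRESS}:2380"
echo "$ETCD_INITIAL_CLUSTER"
# master-1.my-first-cluster.example.com=https://31.129.111.153:2380
```

For a multi-member etcd cluster, ETCD_INITIAL_CLUSTER would instead carry a comma-separated list of `name=peer-url` pairs, one per member; each member's `--name` must match its entry in the list.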

Working directory

mkdir -p /etc/kubernetes/manifests
Static Pod ETCD

Manifest generation

cat <<EOF > /etc/kubernetes/manifests/etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/etcd.advertise-client-urls: https://${MACHINE_LOCAL_ADDRESS}:2379
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://${MACHINE_LOCAL_ADDRESS}:2379
    - --auto-compaction-retention=8
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --election-timeout=1500
    - --experimental-initial-corrupt-check=true
    - --experimental-watch-progress-notify-interval=5s
    - --heartbeat-interval=250
    - --initial-advertise-peer-urls=https://${MACHINE_LOCAL_ADDRESS}:2380
    - --initial-cluster=${ETCD_INITIAL_CLUSTER}
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --listen-client-urls=https://0.0.0.0:2379
    - --listen-metrics-urls=http://0.0.0.0:2381
    - --listen-peer-urls=https://0.0.0.0:2380
    - --logger=zap
    - --max-snapshots=10
    - --max-wals=10
    - --metrics=extensive
    - --name=${FULL_HOST_NAME}
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --snapshot-count=10000
    - --quota-backend-bytes=10737418240
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    image: registry.k8s.io/etcd:3.5.12-0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /health?exclude=NOSPACE&serializable=true
        port: 2381
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: etcd
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 127.0.0.1
        path: /health?serializable=false
        port: 2381
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priority: 2000001000
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
status: {}
EOF
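The --quota-backend-bytes value in the manifest is exactly 10 GiB, which raises etcd's default backend size limit (2 GiB by default in etcd 3.5). A quick check of the arithmetic:

```shell
# 10 GiB expressed in bytes, matching --quota-backend-bytes above:
echo $(( 10 * 1024 * 1024 * 1024 ))    # 10737418240
```

When the backend database reaches this quota, etcd raises a NOSPACE alarm and rejects writes — which is also why the liveness probe above excludes NOSPACE, so kubelet does not restart-loop an etcd that needs compaction/defragmentation rather than a restart.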

22. Starting the Kubelet Service

This section covers the manual startup of Kubelet with systemd unit configuration. It describes the creation of a basic kubelet configuration file, setting up environment variables for the kubelet.service, and starting the service itself.

Start/Configure kubelet

● Required

This configuration file is required for Kubelet to start.

Kubelet default config

Basic kubelet configuration file

cat <<EOF > /var/lib/kubelet/config.yaml
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 29.64.0.10
clusterDomain: cluster.local
containerRuntimeEndpoint: ""
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMaximumGCAge: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
    text:
      infoBufferSize: "0"
  verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
resolvConf: /run/systemd/resolve/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
EOF

Environment variables

Note

This configuration block is applicable only when installing Kubernetes manually (using the "Kubernetes the Hard Way" method). When using the kubeadm utility, the configuration file will be created automatically based on the specification provided in the kubeadm-config file.

cat <<EOF > /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.9 --config=/var/lib/kubelet/config-custom.yaml --cluster-domain=cluster.local --cluster-dns=29.64.0.10"
EOF
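For context, kubeadm-flags.env is consumed by the standard kubeadm systemd drop-in (10-kubeadm.conf, visible in the unit status in this guide). Its relevant lines look roughly like the sketch below; the exact paths (in particular the kubelet binary location) vary by distribution and install method, so treat this as an illustration rather than the file to install:

```
# /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf (abridged sketch)
[Service]
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
```

The leading `-` on EnvironmentFile tells systemd not to fail if the file is missing, and the empty `ExecStart=` clears the base unit's start command before redefining it with the environment variables appended.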

This command starts the Kubelet service, which is responsible for deploying all containers defined by the Static Pod manifests.

systemctl start kubelet

Systemd Unit Status

Systemd unit readiness check
Note

Note that when the node is prepared with kubeadm packages but kubeadm init or kubeadm join has not yet been run, the kubelet systemd unit is enabled (added to autostart) but remains inactive until you start it manually.

systemctl status kubelet
Command output
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; preset: enabled)
    Drop-In: /usr/lib/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: active (running) since Sat 2025-02-22 10:33:54 UTC; 17min ago
       Docs: https://kubernetes.io/docs/
   Main PID: 13779 (kubelet)
      Tasks: 14 (limit: 7069)
     Memory: 34.0M (peak: 35.3M)
        CPU: 27.131s
     CGroup: /system.slice/kubelet.service
             └─13779 /usr/local/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml

23. Checking Cluster Status

This section is dedicated to verifying the status of cluster components after kubelet startup. It describes commands for monitoring image pulls, container startup, and successful initialization of cluster resources. This allows you to confirm that the cluster has started correctly before proceeding to the next stages.

Checking Cluster Status

● Not required

After kubelet starts, the cluster initialization process will begin, consisting of three stages:

  • Image download
  • Container startup
  • Migration (initial creation of system resources in the API)

Image download check

crictl images
Command output
registry.k8s.io/etcd                      3.5.12-0            3861cfcd7c04c       57.2MB
registry.k8s.io/kube-apiserver            v1.30.4             8a97b1fb3e2eb       32.8MB
registry.k8s.io/kube-controller-manager   v1.30.4             8398ad49a121d       31.1MB
registry.k8s.io/kube-scheduler            v1.30.4             4939f82ab9ab4       19.3MB
registry.k8s.io/pause                     3.9                 e6f1816883972       322kB

Container state check

crictl ps -a
Command output
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
09c8c2d7b6446       4939f82ab9ab4       2 minutes ago       Running             kube-scheduler            1                   934a798c482c3       kube-scheduler-master-1.my-first-cluster.example.com
15a10517ea727       8398ad49a121d       2 minutes ago       Running             kube-controller-manager   1                   765405114b0a9       kube-controller-manager-master-1.my-first-cluster.example.com
4b2d766a5f129       8a97b1fb3e2eb       2 minutes ago       Running             kube-apiserver            0                   bd281a893a7c1       kube-apiserver-master-1.my-first-cluster.example.com
3fb02d0f802ae       3861cfcd7c04c       2 minutes ago       Running             etcd                      0                   b6b62dc165409       etcd-master-1.my-first-cluster.example.com

Migration check

crictl logs $(crictl ps --name kube-apiserver -o json | jq -r '.containers[0].id') 2>&1 | grep created
Command output
I0325 19:50:24.849116       1 strategy.go:270] "Successfully created " type="suggested" name="node-high"
I0325 19:50:25.015326 1 strategy.go:270] "Successfully created " type="suggested" name="leader-election"
I0325 19:50:25.015272 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0325 19:50:25.062070 1 strategy.go:270] "Successfully created " type="suggested" name="workload-high"
I0325 19:50:25.092785 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0325 19:50:25.093056 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0325 19:50:25.097457 1 strategy.go:270] "Successfully created " type="suggested" name="workload-low"
I0325 19:50:25.122907 1 strategy.go:270] "Successfully created " type="suggested" name="global-default"
I0325 19:50:25.136110 1 strategy.go:270] "Successfully created " type="suggested" name="system-nodes"
I0325 19:50:25.145639 1 strategy.go:270] "Successfully created " type="suggested" name="system-node-high"
I0325 19:50:25.162094 1 strategy.go:270] "Successfully created " type="suggested" name="probes"
I0325 19:50:25.171177 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0325 19:50:25.178987 1 strategy.go:270] "Successfully created " type="suggested" name="system-leader-election"
I0325 19:50:25.189666 1 strategy.go:270] "Successfully created " type="suggested" name="workload-leader-election"
I0325 19:50:25.194349 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0325 19:50:25.201448 1 strategy.go:270] "Successfully created " type="suggested" name="endpoint-controller"
I0325 19:50:25.209644 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:monitoring
I0325 19:50:25.216051 1 strategy.go:270] "Successfully created " type="suggested" name="kube-controller-manager"
I0325 19:50:25.247945 1 strategy.go:270] "Successfully created " type="suggested" name="kube-scheduler"
I0325 19:50:25.254378 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0325 19:50:25.263224 1 strategy.go:270] "Successfully created " type="suggested" name="kube-system-service-accounts"
I0325 19:50:25.270945 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0325 19:50:25.281581 1 strategy.go:270] "Successfully created " type="suggested" name="service-accounts"
I0325 19:50:25.289105 1 strategy.go:270] "Successfully created " type="suggested" name="global-default"
I0325 19:50:25.291001 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/admin
I0325 19:50:25.314232 1 strategy.go:270] "Successfully created " type="mandatory" name="catch-all"
I0325 19:50:25.318737 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/edit
I0325 19:50:25.342170 1 strategy.go:270] "Successfully created " type="mandatory" name="exempt"
I0325 19:50:25.363630 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/view
I0325 19:50:25.364923 1 strategy.go:270] "Successfully created " type="mandatory" name="exempt"
I0325 19:50:25.372538 1 strategy.go:270] "Successfully created " type="mandatory" name="catch-all"
I0325 19:50:25.378771 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0325 19:50:25.390632 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0325 19:50:25.404175 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0325 19:50:25.423981 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0325 19:50:25.455599 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:node
I0325 19:50:25.470607 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0325 19:50:25.476809 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0325 19:50:25.482742 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0325 19:50:25.509907 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0325 19:50:25.518103 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0325 19:50:25.523930 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0325 19:50:25.530724 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0325 19:50:25.536652 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0325 19:50:25.548041 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0325 19:50:25.552946 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0325 19:50:25.563551 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0325 19:50:25.569432 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:legacy-unknown-approver
I0325 19:50:25.587133 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:kubelet-serving-approver
I0325 19:50:25.593244 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:kube-apiserver-client-approver
I0325 19:50:25.601059 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:kube-apiserver-client-kubelet-approver
I0325 19:50:25.610208 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:service-account-issuer-discovery
I0325 19:50:25.618408 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0325 19:50:25.633183 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0325 19:50:25.638420 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0325 19:50:25.646202 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0325 19:50:25.662691 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0325 19:50:25.670479 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0325 19:50:25.695624 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0325 19:50:25.704607 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0325 19:50:25.723784 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0325 19:50:25.730609 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:endpointslice-controller
I0325 19:50:25.739767 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:endpointslicemirroring-controller
I0325 19:50:25.749724 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0325 19:50:25.770915 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:ephemeral-volume-controller
I0325 19:50:25.778952 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0325 19:50:25.789374 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0325 19:50:25.849152 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0325 19:50:25.876849 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0325 19:50:25.911640 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0325 19:50:25.925130 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0325 19:50:25.931132 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0325 19:50:25.937393 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0325 19:50:25.946109 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0325 19:50:25.960711 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0325 19:50:25.966392 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0325 19:50:25.974500 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0325 19:50:26.006739 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0325 19:50:26.024263 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0325 19:50:26.030127 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0325 19:50:26.038301 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0325 19:50:26.052458 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0325 19:50:26.059044 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0325 19:50:26.088843 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-after-finished-controller
I0325 19:50:26.094917 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:root-ca-cert-publisher
I0325 19:50:26.101768 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:validatingadmissionpolicy-status-controller
I0325 19:50:26.116571 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:legacy-service-account-token-cleaner
I0325 19:50:26.124067 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0325 19:50:26.130435 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:monitoring
I0325 19:50:26.135037 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0325 19:50:26.144777 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0325 19:50:26.152784 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0325 19:50:26.165524 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0325 19:50:26.172777 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0325 19:50:26.179247 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0325 19:50:26.197002 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0325 19:50:26.203166 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0325 19:50:26.208714 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0325 19:50:26.217096 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:service-account-issuer-discovery
I0325 19:50:26.226190 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0325 19:50:26.239853 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0325 19:50:26.244226 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0325 19:50:26.257950 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0325 19:50:26.262028 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0325 19:50:26.281103 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0325 19:50:26.294203 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0325 19:50:26.309198 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpointslice-controller
I0325 19:50:26.317701 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpointslicemirroring-controller
I0325 19:50:26.333124 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0325 19:50:26.338934 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ephemeral-volume-controller
I0325 19:50:26.344080 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0325 19:50:26.355286 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0325 19:50:26.365297 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0325 19:50:26.397412 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0325 19:50:26.402716 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0325 19:50:26.452669 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0325 19:50:26.457837 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0325 19:50:26.469955 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0325 19:50:26.477884 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0325 19:50:26.490451 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0325 19:50:26.509024 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0325 19:50:26.543252 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0325 19:50:26.549039 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0325 19:50:26.578269 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0325 19:50:26.592059 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0325 19:50:26.603091 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0325 19:50:26.622458 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0325 19:50:26.630783 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0325 19:50:26.647976 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-after-finished-controller
I0325 19:50:26.662162 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:root-ca-cert-publisher
I0325 19:50:26.701501 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:validatingadmissionpolicy-status-controller
I0325 19:50:26.711935 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:legacy-service-account-token-cleaner
I0325 19:50:26.724206 1 storage_rbac.go:289] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0325 19:50:26.736799 1 storage_rbac.go:289] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0325 19:50:26.747295 1 storage_rbac.go:289] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0325 19:50:26.757544 1 storage_rbac.go:289] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0325 19:50:26.766086 1 storage_rbac.go:289] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0325 19:50:26.773895 1 storage_rbac.go:289] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0325 19:50:26.785505 1 storage_rbac.go:289] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0325 19:50:26.813423 1 storage_rbac.go:321] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0325 19:50:26.822640 1 storage_rbac.go:321] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0325 19:50:26.829331 1 storage_rbac.go:321] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0325 19:50:26.838203 1 storage_rbac.go:321] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0325 19:50:26.848813 1 storage_rbac.go:321] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0325 19:50:26.861183 1 storage_rbac.go:321] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0325 19:50:26.871910 1 storage_rbac.go:321] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public

24. Configuring the Role Model

This section covers the configuration of the role model (RBAC) required for the correct operation of the kubeadm join mechanism. It describes the Roles/ClusterRoles, RoleBindings/ClusterRoleBindings, and Bootstrap token that allow new nodes to securely connect to the cluster, request certificates, and obtain API server configuration information.

Kubeadm role model setup

● Required

Role bindings

Environment variables

export AUTH_EXTRA_GROUPS="system:bootstrappers:kubeadm:default-node-token"

Roles and bindings

This block is required so that kubeadm can check whether a node with the given name is already registered in the cluster.

kubectl \
  --kubeconfig=/etc/kubernetes/super-admin.conf apply -f - <<EOF
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubeadm:get-nodes
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubeadm:get-nodes
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubeadm:get-nodes
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: ${AUTH_EXTRA_GROUPS}
EOF

This block is required so that anonymous clients (e.g., kubeadm during the discovery phase) can retrieve the ConfigMap with cluster information (cluster-info) from the kube-public namespace. This allows loading the initial API server connection parameters and verifying the bootstrap token signature before establishing full authentication.

kubectl \
  --kubeconfig=/etc/kubernetes/super-admin.conf apply -f - <<EOF
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kubeadm:bootstrap-signer-clusterinfo
  namespace: kube-public
rules:
- apiGroups:
  - ""
  resourceNames:
  - cluster-info
  resources:
  - configmaps
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubeadm:bootstrap-signer-clusterinfo
  namespace: kube-public
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubeadm:bootstrap-signer-clusterinfo
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:anonymous
EOF

This block is required to assign cluster-admin rights to all users in the kubeadm:cluster-admins group. This allows granting full cluster access with centralized rights management — unlike the system:masters group, from which access cannot be revoked through RBAC mechanisms. This approach simplifies administrative role setup and integration with external authorization systems.

kubectl \
  --kubeconfig=/etc/kubernetes/super-admin.conf apply -f - <<EOF
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubeadm:cluster-admins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: kubeadm:cluster-admins
EOF
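Group membership in this scheme is conveyed through the client certificate: the API server maps the certificate's Organization (O) field to an RBAC group. As an illustrative sketch (the paths and subject below are assumptions, not files from this article), a key and CSR for a member of kubeadm:cluster-admins could be prepared like this:

```shell
# Sketch: RBAC group membership comes from the certificate's O= field.
# Paths and CN are illustrative; the CSR would still have to be signed by the cluster CA.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout /tmp/admin.key -out /tmp/admin.csr \
  -subj "/O=kubeadm:cluster-admins/CN=kubernetes-admin"

# Inspect the subject the API server would map to a user and group
openssl req -in /tmp/admin.csr -noout -subject
```

Unlike system:masters, access granted through the kubeadm:cluster-admins binding can later be revoked by deleting the ClusterRoleBinding.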

This block is required so that members of the ${AUTH_EXTRA_GROUPS} group (e.g., system:bootstrappers) can use the bootstrap token to initialize the kubelet connection to the cluster. Binding to the system:node-bootstrapper role allows such subjects to request TLS certificates for nodes through CSR (CertificateSigningRequest), which is a necessary step in the kubeadm join process.

kubectl \
  --kubeconfig=/etc/kubernetes/super-admin.conf apply -f - <<EOF
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubeadm:kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: ${AUTH_EXTRA_GROUPS}
EOF

This block is required for automatic approval of client certificate requests from nodes joining the cluster via bootstrap token. It assigns the system:certificates.k8s.io:certificatesigningrequests:nodeclient role to the ${AUTH_EXTRA_GROUPS} group (e.g., system:bootstrappers), which allows kube-controller-manager to automatically sign CSRs from kubelet during kubeadm join.

kubectl \
  --kubeconfig=/etc/kubernetes/super-admin.conf apply -f - <<EOF
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubeadm:node-autoapprove-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: ${AUTH_EXTRA_GROUPS}
EOF

This block is required for automatic approval of kubelet client certificate renewal requests. It grants the system:nodes group rights that allow re-requesting and automatically receiving new certificates through CertificateSigningRequest. This is necessary for the correct operation of the node certificate rotation mechanism without manual intervention.

kubectl \
  --kubeconfig=/etc/kubernetes/super-admin.conf apply -f - <<EOF
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubeadm:node-autoapprove-certificate-rotation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
EOF
note
clusterrole.rbac.authorization.k8s.io/kubeadm:get-nodes created
role.rbac.authorization.k8s.io/kubeadm:bootstrap-signer-clusterinfo created
rolebinding.rbac.authorization.k8s.io/kubeadm:bootstrap-signer-clusterinfo created
clusterrolebinding.rbac.authorization.k8s.io/kubeadm:cluster-admins created
clusterrolebinding.rbac.authorization.k8s.io/kubeadm:get-nodes created
clusterrolebinding.rbac.authorization.k8s.io/kubeadm:kubelet-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/kubeadm:node-autoapprove-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/kubeadm:node-autoapprove-certificate-rotation created
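For context, the object these auto-approve bindings act on is the CertificateSigningRequest that the kubelet submits during bootstrap. A sketch of its shape (the name and request payload are illustrative, not taken from a real cluster):

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: node-csr-example                  # generated name, illustrative
spec:
  request: <base64-encoded PKCS#10 CSR>   # CN=system:node:<name>, O=system:nodes
  signerName: kubernetes.io/kube-apiserver-client-kubelet
  usages:
  - digital signature
  - client auth
```

Because the requester belongs to ${AUTH_EXTRA_GROUPS} (initial join) or system:nodes (rotation), kube-controller-manager approves such CSRs without manual intervention.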
Bootstrap tokens

Environment variables

export AUTH_EXTRA_GROUPS="system:bootstrappers:kubeadm:default-node-token"
export DESCRIPTION="kubeadm bootstrap token"
export EXPIRATION=$(date -d '24 hours' "+%Y-%m-%dT%H:%M:%SZ")
export TOKEN_ID="fjt9ex"
export TOKEN_SECRET="lwzqgdlvoxtqk4yw"
export USAGE_BOOTSTRAP_AUTHENTICATION="true"
export USAGE_BOOTSTRAP_SIGNING="true"

Creating access token

This token is a bootstrap token, and it is needed to allow a new node to securely join the Kubernetes cluster via kubeadm join while it does not yet have its own certificates and a trusted kubeconfig.

Warning

In production environments, it is recommended to create a separate bootstrap token for each node. However, for demonstration purposes (and within this documentation), we have simplified the process and use a single shared token for all control plane nodes.

kubectl \
  --kubeconfig=/etc/kubernetes/super-admin.conf \
  apply -f - <<EOF
---
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-${TOKEN_ID}
  namespace: kube-system
data:
  auth-extra-groups: $(echo -n "$AUTH_EXTRA_GROUPS" | base64)
  description: $(echo -n "$DESCRIPTION" | base64)
  expiration: $(echo -n "$EXPIRATION" | base64)
  token-id: $(echo -n "$TOKEN_ID" | base64)
  token-secret: $(echo -n "$TOKEN_SECRET" | base64)
  usage-bootstrap-authentication: $(echo -n "$USAGE_BOOTSTRAP_AUTHENTICATION" | base64)
  usage-bootstrap-signing: $(echo -n "$USAGE_BOOTSTRAP_SIGNING" | base64)
type: bootstrap.kubernetes.io/token
EOF
note
secret/bootstrap-token-fjt9ex configured
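The token a joining node actually passes to kubeadm join is the concatenation of the two halves stored in the Secret, in the form <token-id>.<token-secret>. Using the demo values from this article, the format check and the base64 encoding of the data fields can be sketched as:

```shell
TOKEN_ID="fjt9ex"
TOKEN_SECRET="lwzqgdlvoxtqk4yw"

# The join token is "<token-id>.<token-secret>" and must match [a-z0-9]{6}.[a-z0-9]{16}
TOKEN="${TOKEN_ID}.${TOKEN_SECRET}"
echo "$TOKEN" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$' && echo "token format OK"

# Each field of the Secret stores its value base64-encoded, as in the manifest above
echo -n "$TOKEN_ID" | base64   # → Zmp0OWV4
```

The token-id doubles as the Secret name suffix (bootstrap-token-fjt9ex) and as the key-id of the JWS signature on cluster-info.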
Cluster-Info

Environment variables

export KUBE_CA_CRT_BASE64=$(base64 -w 0 /etc/kubernetes/pki/ca.crt)
export CLUSTER_API_URL=https://api.my-first-cluster.example.com

Updating Cluster-info

cluster-info is a public source of basic cluster information required for secure bootstrap joining of new nodes via kubeadm.

  • 🔐 Contains a public kubeconfig with CA and API address.
  • 📥 Used by kubeadm join for discovery.
  • 🌐 Accessible anonymously through kube-public.
  • ✅ Allows the node to verify API server authenticity before authentication.
kubectl \
  --kubeconfig=/etc/kubernetes/super-admin.conf \
  apply -f - <<EOF
---
apiVersion: v1
data:
  kubeconfig: |
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: ${KUBE_CA_CRT_BASE64}
        server: ${CLUSTER_API_URL}:6443
      name: ""
    contexts: null
    current-context: ""
    kind: Config
    preferences: {}
    users: null
kind: ConfigMap
metadata:
  name: cluster-info
  namespace: kube-public
EOF
note
configmap/cluster-info created
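The KUBE_CA_CRT_BASE64 value is simply the cluster CA certificate encoded as single-line base64. A self-contained sketch with a throwaway CA (the path and subject are illustrative, not the cluster's real /etc/kubernetes/pki/ca.crt):

```shell
# Generate a throwaway CA certificate (illustrative stand-in for the real cluster CA)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt \
  -subj "/CN=kubernetes"

# -w 0 disables line wrapping so the value fits on a single YAML line
KUBE_CA_CRT_BASE64=$(base64 -w 0 /tmp/demo-ca.crt)

# Round-trip check: decoding yields the original PEM certificate
echo "$KUBE_CA_CRT_BASE64" | base64 -d | head -n 1
```

A joining node decodes this field, trusts the CA, and then validates the JWS signature on the ConfigMap with its bootstrap token.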

25. Uploading Configuration to the Cluster

This section covers uploading the current kubeadm and kubelet configuration to the cluster as a ConfigMap. This configuration is required for the correct execution of the kubeadm join command, as it is used during initialization of new control plane nodes. Uploading the configuration centralizes cluster parameter management and ensures consistency across all nodes, control plane and worker alike.

Uploading configuration to the cluster

● Required

Note

This section describes the instructions for uploading the current Kubeadm and Kubelet configuration to the Kubernetes control plane as a ConfigMap resource. This approach simplifies managing configuration changes for Kubernetes nodes, covering both worker and master nodes.

Environment variables for configuration file template

export CLUSTER_NAME='my-first-cluster'
export BASE_DOMAIN='example.com'
export FULL_HOST_NAME=${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}
export INTERNAL_API=api.${CLUSTER_NAME}.${BASE_DOMAIN}
export MACHINE_LOCAL_ADDRESS=$(ip -4 addr show scope global | awk '/inet/ {print $2; exit}' | cut -d/ -f1)
export ETCD_INITIAL_CLUSTER="${FULL_HOST_NAME}=https://${MACHINE_LOCAL_ADDRESS}:2380"
export AUTH_EXTRA_GROUPS="system:bootstrappers:kubeadm:default-node-token"
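The MACHINE_LOCAL_ADDRESS pipeline above picks out the first global IPv4 address reported by ip. Fed a canned ip -4 addr show scope global line (the address is made up for the example), the same awk/cut stage behaves like this:

```shell
# Same awk/cut pipeline as above, applied to a sample `ip -4 addr show scope global` line:
# awk prints the second field of the first "inet" line (address/prefix), cut strips the prefix
echo "    inet 10.0.0.12/24 brd 10.0.0.255 scope global eth0" \
  | awk '/inet/ {print $2; exit}' | cut -d/ -f1
# → 10.0.0.12
```

With that address, ETCD_INITIAL_CLUSTER expands to a single "name=peer-url" pair such as node.my-first-cluster.example.com=https://10.0.0.12:2380.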
kubeadm-config

This block is required to allow nodes to read the kubeadm-config ConfigMap in the kube-system namespace:

kubectl \
  --kubeconfig=/etc/kubernetes/super-admin.conf \
  apply -f - <<EOF
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kubeadm:nodes-kubeadm-config
  namespace: kube-system
rules:
- apiGroups:
  - ""
  resourceNames:
  - kubeadm-config
  resources:
  - configmaps
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubeadm:nodes-kubeadm-config
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubeadm:nodes-kubeadm-config
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: ${AUTH_EXTRA_GROUPS}
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
EOF

This block is required so that, when executing kubeadm join, the node retrieves the current ClusterConfiguration from the cluster and correctly joins the control plane.

kubectl \
--kubeconfig=/etc/kubernetes/super-admin.conf \
apply -f - <<EOF
---
apiVersion: v1
kind: ConfigMap
metadata:
name: kubeadm-config
namespace: kube-system
data:
ClusterConfiguration: |
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: "${CLUSTER_NAME}"
certificatesDir: /etc/kubernetes/pki
controlPlaneEndpoint: ${INTERNAL_API}:6443
imageRepository: "registry.k8s.io"
networking:
serviceSubnet: 29.64.0.0/16
dnsDomain: cluster.local
kubernetesVersion: v1.30.4
dns: {}
etcd:
local:
imageRepository: "registry.k8s.io"
dataDir: "/var/lib/etcd"
extraArgs:
auto-compaction-retention: "8"
cert-file: "/etc/kubernetes/pki/etcd/server.crt"
client-cert-auth: "true"
data-dir: "/var/lib/etcd"
election-timeout: "1500"
heartbeat-interval: "250"
key-file: "/etc/kubernetes/pki/etcd/server.key"
listen-client-urls: "https://0.0.0.0:2379"
listen-metrics-urls: "http://0.0.0.0:2381"
listen-peer-urls: "https://0.0.0.0:2380"
logger: "zap"
max-snapshots: "10"
max-wals: "10"
metrics: "extensive"
peer-cert-file: "/etc/kubernetes/pki/etcd/peer.crt"
peer-client-cert-auth: "true"
peer-key-file: "/etc/kubernetes/pki/etcd/peer.key"
peer-trusted-ca-file: "/etc/kubernetes/pki/etcd/ca.crt"
snapshot-count: "10000"
quota-backend-bytes: "10737418240" # TODO
experimental-initial-corrupt-check: "true"
experimental-watch-progress-notify-interval: "5s"
trusted-ca-file: "/etc/kubernetes/pki/etcd/ca.crt"
peerCertSANs:
- 127.0.0.1
serverCertSANs:
- 127.0.0.1
apiServer:
extraArgs:
aggregator-reject-forwarding-redirect: "true"
allow-privileged: "true"
anonymous-auth: "true"
api-audiences: "konnectivity-server"
apiserver-count: "1"
audit-log-batch-buffer-size: "10000"
audit-log-batch-max-size: "1"
audit-log-batch-max-wait: "0s"
audit-log-batch-throttle-burst: "0"
audit-log-batch-throttle-enable: "false"
audit-log-batch-throttle-qps: "0"
audit-log-compress: "false"
audit-log-format: "json"
audit-log-maxage: "30"
audit-log-maxbackup: "10"
audit-log-maxsize: "1000"
audit-log-mode: "batch"
audit-log-truncate-enabled: "false"
audit-log-truncate-max-batch-size: "10485760"
audit-log-truncate-max-event-size: "102400"
audit-log-version: "audit.k8s.io/v1"
audit-webhook-batch-buffer-size: "10000"
audit-webhook-batch-initial-backoff: "10s"
audit-webhook-batch-max-size: "400"
audit-webhook-batch-max-wait: "30s"
audit-webhook-batch-throttle-burst: "15"
audit-webhook-batch-throttle-enable: "true"
audit-webhook-batch-throttle-qps: "10"
audit-webhook-initial-backoff: "10s"
audit-webhook-mode: "batch"
audit-webhook-truncate-enabled: "false"
audit-webhook-truncate-max-batch-size: "10485760"
audit-webhook-truncate-max-event-size: "102400"
audit-webhook-version: "audit.k8s.io/v1"
audit-policy-file: /etc/kubernetes/audit-policy.yaml
audit-log-path: /var/log/kubernetes/audit/audit.log
authentication-token-webhook-cache-ttl: "2m0s"
authentication-token-webhook-version: "v1beta1"
authorization-mode: "Node,RBAC"
authorization-webhook-cache-authorized-ttl: "5m0s"
authorization-webhook-cache-unauthorized-ttl: "30s"
authorization-webhook-version: "v1beta1"
bind-address: "0.0.0.0"
cert-dir: "/var/run/kubernetes"
client-ca-file: "/etc/kubernetes/pki/ca.crt"
cloud-provider-gce-l7lb-src-cidrs: "130.211.0.0/22,35.191.0.0/16"
cloud-provider-gce-lb-src-cidrs: "130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16"
contention-profiling: "false"
default-not-ready-toleration-seconds: "300"
default-unreachable-toleration-seconds: "300"
default-watch-cache-size: "100"
delete-collection-workers: "1"
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,PodSecurity"
enable-aggregator-routing: "true"
enable-bootstrap-token-auth: "true"
enable-garbage-collector: "true"
enable-logs-handler: "true"
enable-priority-and-fairness: "true"
encryption-provider-config-automatic-reload: "false"
endpoint-reconciler-type: "lease"
etcd-cafile: "/etc/kubernetes/pki/etcd/ca.crt"
etcd-certfile: "/etc/kubernetes/pki/apiserver-etcd-client.crt"
etcd-compaction-interval: "5m0s"
etcd-count-metric-poll-period: "1m0s"
etcd-db-metric-poll-interval: "30s"
etcd-healthcheck-timeout: "2s"
etcd-keyfile: "/etc/kubernetes/pki/apiserver-etcd-client.key"
etcd-prefix: "/registry"
etcd-readycheck-timeout: "2s"
etcd-servers: "https://127.0.0.1:2379"
event-ttl: "1h0m0s"
feature-gates: "RotateKubeletServerCertificate=true"
goaway-chance: "0"
help: "false"
http2-max-streams-per-connection: "0"
kubelet-client-certificate: "/etc/kubernetes/pki/apiserver-kubelet-client.crt"
kubelet-client-key: "/etc/kubernetes/pki/apiserver-kubelet-client.key"
kubelet-port: "10250"
kubelet-preferred-address-types: "InternalIP,ExternalIP,Hostname"
kubelet-read-only-port: "10255"
kubelet-timeout: "5s"
kubernetes-service-node-port: "0"
lease-reuse-duration-seconds: "60"
livez-grace-period: "0s"
log-flush-frequency: "5s"
logging-format: "text"
log-json-info-buffer-size: "0"
log-json-split-stream: "false"
log-text-info-buffer-size: "0"
log-text-split-stream: "false"
max-connection-bytes-per-sec: "0"
max-mutating-requests-inflight: "200"
max-requests-inflight: "400"
min-request-timeout: "1800"
permit-address-sharing: "false"
permit-port-sharing: "false"
profiling: "false"
proxy-client-cert-file: "/etc/kubernetes/pki/front-proxy-client.crt"
proxy-client-key-file: "/etc/kubernetes/pki/front-proxy-client.key"
requestheader-allowed-names: "front-proxy-client"
requestheader-client-ca-file: "/etc/kubernetes/pki/front-proxy-ca.crt"
requestheader-extra-headers-prefix: "X-Remote-Extra-"
requestheader-group-headers: "X-Remote-Group"
requestheader-username-headers: "X-Remote-User"
request-timeout: "1m0s"
runtime-config: "api/all=true"
secure-port: "6443"
service-account-extend-token-expiration: "true"
service-account-issuer: "https://kubernetes.default.svc.cluster.local"
service-account-key-file: "/etc/kubernetes/pki/sa.pub"
service-account-lookup: "true"
service-account-max-token-expiration: "0s"
service-account-signing-key-file: "/etc/kubernetes/pki/sa.key"
service-cluster-ip-range: "29.64.0.0/16"
service-node-port-range: "30000-32767"
shutdown-delay-duration: "0s"
shutdown-send-retry-after: "false"
shutdown-watch-termination-grace-period: "0s"
storage-backend: "etcd3"
storage-media-type: "application/vnd.kubernetes.protobuf"
tls-cert-file: "/etc/kubernetes/pki/apiserver.crt"
tls-private-key-file: "/etc/kubernetes/pki/apiserver.key"
v: "2"
version: "false"
watch-cache: "true"
# ЕСЛИ НУЖНО ПОДКЛЮЧИТЬ CLOUD-CONTROLLER-MANAGER
# ТРЕБУЕТСЯ РАСКОМЕНТИРОВАТЬ
# ->
# cloud-provider: "external"
# Не указывать если значение "" или undefined
# cloud-config: ""
# strict-transport-security-directives: ""
# disable-admission-plugins: ""
# disabled-metrics: ""
# egress-selector-config-file: ""
# encryption-provider-config: ""
# etcd-servers-overrides: ""
# external-hostname: ""
# kubelet-certificate-authority: ""
# oidc-ca-file: ""
# oidc-client-id: ""
# oidc-groups-claim: ""
# oidc-groups-prefix: ""
# oidc-issuer-url: ""
# oidc-required-claim: ""
# oidc-signing-algs: "RS256"
# oidc-username-claim: "sub"
# oidc-username-prefix: ""
# peer-advertise-ip: ""
# peer-advertise-port: ""
# peer-ca-file: ""
# service-account-jwks-uri: ""
# show-hidden-metrics-for-version: ""
# tls-cipher-suites: ""
# tls-min-version: ""
# tls-sni-cert-key: ""
# token-auth-file: ""
# tracing-config-file: ""
# vmodule: ""
# watch-cache-sizes: ""
# authorization-webhook-config-file: ""
# cors-allowed-origins: ""
# debug-socket-path: ""
# authorization-policy-file: ""
# authorization-config: ""
# authentication-token-webhook-config-file: ""
# authentication-config: ""
# audit-webhook-config-file: ""
# audit-policy-file: "/etc/kubernetes/audit-policy.yaml"
# audit-log-path: "/var/log/kubernetes/audit/audit.log"
# allow-metric-labels: ""
# allow-metric-labels-manifest: ""
# admission-control: ""
# admission-control-config-file: ""
# advertise-address: ""
extraVolumes:
- name: "k8s-audit"
hostPath: "/var/log/kubernetes/audit/"
mountPath: "/var/log/kubernetes/audit/"
readOnly: false
pathType: DirectoryOrCreate
- name: "k8s-audit-policy"
hostPath: "/etc/kubernetes/audit-policy.yaml"
mountPath: "/etc/kubernetes/audit-policy.yaml"
pathType: File
certSANs:
- "127.0.0.1"
# TODO для доабвления внешнего FQDN в сертификаты кластера
# - ${INTERNAL_API}
timeoutForControlPlane: 4m0s
controllerManager:
extraArgs:
cluster-name: "${CLUSTER_NAME}"
allocate-node-cidrs: "false"
allow-untagged-cloud: "false"
attach-detach-reconcile-sync-period: "1m0s"
authentication-kubeconfig: "/etc/kubernetes/controller-manager.conf"
authentication-skip-lookup: "false"
authentication-token-webhook-cache-ttl: "10s"
authentication-tolerate-lookup-failure: "false"
authorization-always-allow-paths: "/healthz,/readyz,/livez,/metrics"
authorization-kubeconfig: "/etc/kubernetes/controller-manager.conf"
authorization-webhook-cache-authorized-ttl: "10s"
authorization-webhook-cache-unauthorized-ttl: "10s"
bind-address: "0.0.0.0"
cidr-allocator-type: "RangeAllocator"
client-ca-file: "/etc/kubernetes/pki/ca.crt"
# -> Включить, если управляете состоянием через Cloud Controller Manager
# cloud-provider: "external"
cloud-provider-gce-lb-src-cidrs: "130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16"
cluster-signing-cert-file: "/etc/kubernetes/pki/ca.crt"
cluster-signing-duration: "720h0m0s"
cluster-signing-key-file: "/etc/kubernetes/pki/ca.key"
concurrent-cron-job-syncs: "5"
concurrent-deployment-syncs: "5"
concurrent-endpoint-syncs: "5"
concurrent-ephemeralvolume-syncs: "5"
concurrent-gc-syncs: "20"
concurrent-horizontal-pod-autoscaler-syncs: "5"
concurrent-job-syncs: "5"
concurrent-namespace-syncs: "10"
concurrent-rc-syncs: "5"
concurrent-replicaset-syncs: "20"
concurrent-resource-quota-syncs: "5"
concurrent-service-endpoint-syncs: "5"
concurrent-service-syncs: "1"
concurrent-serviceaccount-token-syncs: "5"
concurrent-statefulset-syncs: "5"
concurrent-ttl-after-finished-syncs: "5"
concurrent-validating-admission-policy-status-syncs: "5"
configure-cloud-routes: "true"
contention-profiling: "false"
controller-start-interval: "0s"
controllers: "*,bootstrapsigner,tokencleaner"
disable-attach-detach-reconcile-sync: "false"
disable-force-detach-on-timeout: "false"
enable-dynamic-provisioning: "true"
enable-garbage-collector: "true"
enable-hostpath-provisioner: "false"
enable-leader-migration: "false"
endpoint-updates-batch-period: "0s"
endpointslice-updates-batch-period: "0s"
feature-gates: "RotateKubeletServerCertificate=true"
flex-volume-plugin-dir: "/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
help: "false"
horizontal-pod-autoscaler-cpu-initialization-period: "5m0s"
horizontal-pod-autoscaler-downscale-delay: "5m0s"
horizontal-pod-autoscaler-downscale-stabilization: "5m0s"
horizontal-pod-autoscaler-initial-readiness-delay: "30s"
horizontal-pod-autoscaler-sync-period: "30s"
horizontal-pod-autoscaler-tolerance: "0.1"
horizontal-pod-autoscaler-upscale-delay: "3m0s"
http2-max-streams-per-connection: "0"
kube-api-burst: "120"
kube-api-content-type: "application/vnd.kubernetes.protobuf"
kube-api-qps: "100"
kubeconfig: "/etc/kubernetes/controller-manager.conf"
large-cluster-size-threshold: "50"
leader-elect: "true"
leader-elect-lease-duration: "15s"
leader-elect-renew-deadline: "10s"
leader-elect-resource-lock: "leases"
leader-elect-resource-name: "kube-controller-manager"
leader-elect-resource-namespace: "kube-system"
leader-elect-retry-period: "2s"
legacy-service-account-token-clean-up-period: "8760h0m0s"
log-flush-frequency: "5s"
log-json-info-buffer-size: "0"
log-json-split-stream: "false"
log-text-info-buffer-size: "0"
log-text-split-stream: "false"
logging-format: "text"
max-endpoints-per-slice: "100"
min-resync-period: "12h0m0s"
mirroring-concurrent-service-endpoint-syncs: "5"
mirroring-endpointslice-updates-batch-period: "0s"
mirroring-max-endpoints-per-subset: "1000"
namespace-sync-period: "2m0s"
node-cidr-mask-size: "0"
node-cidr-mask-size-ipv4: "0"
node-cidr-mask-size-ipv6: "0"
node-eviction-rate: "0.1"
node-monitor-grace-period: "40s"
node-monitor-period: "5s"
node-startup-grace-period: "10s"
node-sync-period: "0s"
permit-address-sharing: "false"
permit-port-sharing: "false"
profiling: "false"
pv-recycler-increment-timeout-nfs: "30"
pv-recycler-minimum-timeout-hostpath: "60"
pv-recycler-minimum-timeout-nfs: "300"
pv-recycler-timeout-increment-hostpath: "30"
pvclaimbinder-sync-period: "15s"
requestheader-client-ca-file: "/etc/kubernetes/pki/front-proxy-ca.crt"
requestheader-extra-headers-prefix: "x-remote-extra-"
requestheader-group-headers: "x-remote-group"
requestheader-username-headers: "x-remote-user"
resource-quota-sync-period: "5m0s"
root-ca-file: "/etc/kubernetes/pki/ca.crt"
route-reconciliation-period: "10s"
secondary-node-eviction-rate: "0.01"
secure-port: "10257"
service-account-private-key-file: "/etc/kubernetes/pki/sa.key"
terminated-pod-gc-threshold: "0"
unhealthy-zone-threshold: "0.55"
use-service-account-credentials: "true"
v: "2"
version: "false"
volume-host-allow-local-loopback: "true"
    # IF YOU NEED TO ATTACH SERVER CERTIFICATES FOR KUBE-CONTROLLER-MANAGER,
    # NOTE THAT KUBEADM DOES NOT CREATE THESE CERTIFICATES.
    # UNCOMMENT THE LINES BELOW
    # ->
    # tls-cert-file: "/etc/kubernetes/pki/controller-manager-server.crt"
    # tls-private-key-file: "/etc/kubernetes/pki/controller-manager-server.key"
    # <-
    # Do not set these if the value is "" or undefined
    # cluster-signing-kube-apiserver-client-cert-file: ""
    # cluster-signing-kube-apiserver-client-key-file: ""
    # cluster-signing-kubelet-client-cert-file: ""
    # cluster-signing-kubelet-client-key-file: ""
    # cluster-signing-kubelet-serving-cert-file: ""
    # cluster-signing-kubelet-serving-key-file: ""
    # cluster-signing-legacy-unknown-cert-file: ""
    # cluster-signing-legacy-unknown-key-file: ""
    # cluster-cidr: ""
    # cloud-config: ""
    # cert-dir: ""
    # allow-metric-labels-manifest: ""
    # allow-metric-labels: ""
    # disabled-metrics: ""
    # leader-migration-config: ""
    # master: ""
    # pv-recycler-pod-template-filepath-hostpath: ""
    # pv-recycler-pod-template-filepath-nfs: ""
    # service-cluster-ip-range: ""
    # show-hidden-metrics-for-version: ""
    # tls-cipher-suites: ""
    # tls-min-version: ""
    # tls-sni-cert-key: ""
    # vmodule: ""
    # volume-host-cidr-denylist: ""
    # external-cloud-volume-plugin: ""
    # requestheader-allowed-names: ""
  # IF YOU NEED TO ATTACH SERVER CERTIFICATES FOR KUBE-CONTROLLER-MANAGER,
  # NOTE THAT KUBEADM DOES NOT CREATE THESE CERTIFICATES.
  # UNCOMMENT THE LINES BELOW
  # ->
  # extraVolumes:
  #   - name: "controller-manager-crt"
  #     hostPath: "/etc/kubernetes/pki/controller-manager-server.crt"
  #     mountPath: "/etc/kubernetes/pki/controller-manager-server.crt"
  #     pathType: File
  #   - name: "controller-manager-key"
  #     hostPath: "/etc/kubernetes/pki/controller-manager-server.key"
  #     mountPath: "/etc/kubernetes/pki/controller-manager-server.key"
  #     pathType: File
scheduler:
  extraArgs:
    authentication-kubeconfig: "/etc/kubernetes/scheduler.conf"
    authentication-skip-lookup: "false"
    authentication-token-webhook-cache-ttl: "10s"
    authentication-tolerate-lookup-failure: "true"
    authorization-always-allow-paths: "/healthz,/readyz,/livez,/metrics"
    authorization-kubeconfig: "/etc/kubernetes/scheduler.conf"
    authorization-webhook-cache-authorized-ttl: "10s"
    authorization-webhook-cache-unauthorized-ttl: "10s"
    bind-address: "0.0.0.0"
    client-ca-file: ""
    contention-profiling: "true"
    help: "false"
    http2-max-streams-per-connection: "0"
    kube-api-burst: "100"
    kube-api-content-type: "application/vnd.kubernetes.protobuf"
    kube-api-qps: "50"
    kubeconfig: "/etc/kubernetes/scheduler.conf"
    leader-elect: "true"
    leader-elect-lease-duration: "15s"
    leader-elect-renew-deadline: "10s"
    leader-elect-resource-lock: "leases"
    leader-elect-resource-name: "kube-scheduler"
    leader-elect-resource-namespace: "kube-system"
    leader-elect-retry-period: "2s"
    log-flush-frequency: "5s"
    log-json-info-buffer-size: "0"
    log-json-split-stream: "false"
    log-text-info-buffer-size: "0"
    log-text-split-stream: "false"
    logging-format: "text"
    permit-address-sharing: "false"
    permit-port-sharing: "false"
    pod-max-in-unschedulable-pods-duration: "5m0s"
    profiling: "true"
    requestheader-extra-headers-prefix: "[x-remote-extra-]"
    requestheader-group-headers: "[x-remote-group]"
    requestheader-username-headers: "[x-remote-user]"
    secure-port: "10259"
    v: "2"
    version: "false"
    # IF YOU NEED TO ATTACH SERVER CERTIFICATES FOR KUBE-SCHEDULER,
    # NOTE THAT KUBEADM DOES NOT CREATE THESE CERTIFICATES.
    # UNCOMMENT THE LINES BELOW
    # ->
    # tls-cert-file: "/etc/kubernetes/pki/scheduler-server.crt"
    # tls-private-key-file: "/etc/kubernetes/pki/scheduler-server.key"
    # <-
    # allow-metric-labels: "[]"
    # allow-metric-labels-manifest: ""
    # cert-dir: ""
    # config: ""
    # disabled-metrics: "[]"
    # feature-gates: ""
    # master: ""
    # requestheader-allowed-names: "[]"
    # requestheader-client-ca-file: ""
    # show-hidden-metrics-for-version: ""
    # tls-cipher-suites: "[]"
    # tls-min-version: ""
    # tls-sni-cert-key: "[]"
    # vmodule: ""
    # write-config-to: ""
  # IF YOU NEED TO ATTACH SERVER CERTIFICATES FOR KUBE-SCHEDULER,
  # NOTE THAT KUBEADM DOES NOT CREATE THESE CERTIFICATES.
  # UNCOMMENT THE LINES BELOW
  # ->
  # extraVolumes:
  #   - name: "scheduler-crt"
  #     hostPath: "/etc/kubernetes/pki/scheduler-server.crt"
  #     mountPath: "/etc/kubernetes/pki/scheduler-server.crt"
  #     pathType: File
  #   - name: "scheduler-key"
  #     hostPath: "/etc/kubernetes/pki/scheduler-server.key"
  #     mountPath: "/etc/kubernetes/pki/scheduler-server.key"
  #     pathType: File
EOF
kubelet-config

This block is required to allow nodes to read the kubelet-config ConfigMap in the kube-system namespace:

kubectl \
  --kubeconfig=/etc/kubernetes/super-admin.conf \
  apply -f - <<EOF
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kubeadm:kubelet-config
  namespace: kube-system
rules:
  - apiGroups:
      - ""
    resourceNames:
      - kubelet-config
    resources:
      - configmaps
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubeadm:kubelet-config
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubeadm:kubelet-config
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:nodes
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: ${AUTH_EXTRA_GROUPS}
EOF

This block is required so that, during kubeadm join, the node receives the current kubelet-config from the cluster and joins the control plane correctly.

kubectl \
  --kubeconfig=/etc/kubernetes/super-admin.conf \
  apply -f - <<EOF
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubelet-config
  namespace: kube-system
data:
  kubelet: |
    apiVersion: kubelet.config.k8s.io/v1beta1
    authentication:
      anonymous:
        enabled: false
      webhook:
        cacheTTL: 0s
        enabled: true
      x509:
        clientCAFile: "/etc/kubernetes/pki/ca.crt"
    authorization:
      mode: Webhook
      webhook:
        cacheAuthorizedTTL: 0s
        cacheUnauthorizedTTL: 0s
    cgroupDriver: systemd
    containerLogMaxSize: "50Mi"
    containerRuntimeEndpoint: "/var/run/containerd/containerd.sock"
    cpuManagerReconcilePeriod: 0s
    evictionPressureTransitionPeriod: 5s
    fileCheckFrequency: 0s
    healthzBindAddress: 127.0.0.1
    healthzPort: 10248
    httpCheckFrequency: 0s
    imageGCHighThresholdPercent: 55
    imageGCLowThresholdPercent: 50
    imageMaximumGCAge: 0s
    imageMinimumGCAge: 0s
    kind: KubeletConfiguration
    logging:
      flushFrequency: 0
      options:
        json:
          infoBufferSize: "0"
        text:
          infoBufferSize: "0"
      verbosity: 0
    kubeAPIQPS: 50
    kubeAPIBurst: 100
    maxPods: 250
    memorySwap: {}
    nodeStatusReportFrequency: 1s
    nodeStatusUpdateFrequency: 1s
    podPidsLimit: 4096
    registerNode: true
    resolvConf: /run/systemd/resolve/resolv.conf
    rotateCertificates: true
    runtimeRequestTimeout: 0s
    serializeImagePulls: false
    serverTLSBootstrap: true
    shutdownGracePeriod: 15s
    shutdownGracePeriodCriticalPods: 5s
    staticPodPath: /etc/kubernetes/manifests
    streamingConnectionIdleTimeout: 0s
    syncFrequency: 0s
    tlsMinVersion: "VersionTLS12"
    volumeStatsAggPeriod: 0s
    featureGates:
      RotateKubeletServerCertificate: true
      APIPriorityAndFairness: true
    tlsCipherSuites:
      - "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"
      - "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
      - "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"
      - "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
      - "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"
      - "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"
EOF

26. Uploading Root Certificates to the Cluster

This section covers uploading root certificates to the Kubernetes cluster. The kubeadm-certs secret is created manually and contains the keys and certificates required when adding new control plane nodes (kubeadm join). This approach allows sensitive data to be securely transferred between control plane nodes.

Uploading root certificates to Kubernetes

● Required

Note

This section provides instructions for uploading root certificates to the Kubernetes control plane. The certificates are uploaded in encrypted form as a Secret resource, which allows them to be securely transferred and decrypted on another node for managing the control plane node lifecycle.

Environment variables for configuration file template

export AUTH_EXTRA_GROUPS="system:bootstrappers:kubeadm:default-node-token"

Role model preparation

This block prepares the role model for granting access to the kubeadm-certs secret. It allows control plane nodes to securely obtain the root certificates through the Kubernetes API when joining the cluster. The role is bound to the ${AUTH_EXTRA_GROUPS} group, which the bootstrap token used by kubeadm during join belongs to.

kubectl \
  --kubeconfig=/etc/kubernetes/super-admin.conf apply -f - <<EOF
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kubeadm:kubeadm-certs
  namespace: kube-system
rules:
  - apiGroups:
      - ""
    resourceNames:
      - kubeadm-certs
    resources:
      - secrets
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubeadm:kubeadm-certs
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubeadm:kubeadm-certs
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: ${AUTH_EXTRA_GROUPS}
EOF

Working directory

mkdir -p /etc/kubernetes/openssl

Environment variables

export CERTIFICATE_UPLOAD_KEY=0c00c2fd5c67c37656c00d78a9d7e1f2eb794ef8e4fc3e2a4b532eb14323cd59
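For AES-256-GCM, the upload key must decode to exactly 32 bytes (64 hex characters); kubeadm generates keys of this format with `kubeadm certs certificate-key`. A small stdlib sanity check (the helper `valid_upload_key` is hypothetical, not part of the original toolchain):

```python
import secrets

def valid_upload_key(key_hex: str) -> bool:
    """True if key_hex decodes to the 32 bytes AES-256-GCM expects."""
    try:
        return len(bytes.fromhex(key_hex)) == 32
    except ValueError:
        return False

# Prefer generating a fresh random key over hardcoding one:
print(secrets.token_hex(32))  # 64 hex characters = 32 random bytes
```

`secrets.token_hex(32)` produces a key in the same format as the hardcoded example above.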
cat <<'EOF' > /etc/kubernetes/openssl/encrypt.py
#!/usr/bin/env python3
import sys, base64, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = bytes.fromhex(sys.argv[1])
path = sys.argv[2]

with open(path, "rb") as f:
    data = f.read()

nonce = os.urandom(12)
aesgcm = AESGCM(key)
ct = aesgcm.encrypt(nonce, data, None)

# kubeadm expects: nonce + ciphertext (including the auth tag)
payload = nonce + ct
print(base64.b64encode(payload).decode())
EOF
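On a joining node, the payload is processed in reverse: base64-decode, split off the 12-byte nonce, and decrypt with AES-GCM. A minimal sketch of such a counterpart (a hypothetical `decrypt.py`, not part of the original toolchain; it assumes the same nonce-prefixed layout that encrypt.py produces):

```python
#!/usr/bin/env python3
# decrypt.py (hypothetical) -- reverses encrypt.py:
# payload = 12-byte nonce + ciphertext (with the GCM auth tag)
import sys
import base64
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def decrypt(key_hex: str, b64_payload: str) -> bytes:
    key = bytes.fromhex(key_hex)
    payload = base64.b64decode(b64_payload)
    nonce, ct = payload[:12], payload[12:]  # same split encrypt.py performs
    return AESGCM(key).decrypt(nonce, ct, None)

if __name__ == "__main__" and len(sys.argv) == 3:
    sys.stdout.buffer.write(decrypt(sys.argv[1], sys.argv[2]))
```

Decryption fails loudly (InvalidTag) if the key is wrong or the payload was tampered with, which is exactly why GCM is a good fit here.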
cat <<'EOF' > /etc/kubernetes/openssl/upload-certs.sh
#!/bin/bash
set -euo pipefail

CERT_PATH="/etc/kubernetes/pki"
PY_SCRIPT="$(dirname "$0")/encrypt.py"

declare -A files=(
  ["ca.crt"]="$CERT_PATH/ca.crt"
  ["ca.key"]="$CERT_PATH/ca.key"
  ["etcd-ca.crt"]="$CERT_PATH/etcd/ca.crt"
  ["etcd-ca.key"]="$CERT_PATH/etcd/ca.key"
  ["front-proxy-ca.crt"]="$CERT_PATH/front-proxy-ca.crt"
  ["front-proxy-ca.key"]="$CERT_PATH/front-proxy-ca.key"
  ["sa.key"]="$CERT_PATH/sa.key"
  ["sa.pub"]="$CERT_PATH/sa.pub"
)

KEY="${CERTIFICATE_UPLOAD_KEY:-}"
if [[ -z "$KEY" ]]; then
  echo "[ERROR] CERTIFICATE_UPLOAD_KEY is not set"
  exit 1
fi

echo "[INFO] Using certificate key: $KEY"

TMP_DIR=$(mktemp -d)
trap 'rm -rf "$TMP_DIR"' EXIT
SECRET_FILE="$TMP_DIR/secret.yaml"

cat <<EOF_SECRET > "$SECRET_FILE"
apiVersion: v1
kind: Secret
metadata:
  name: kubeadm-certs
  namespace: kube-system
type: Opaque
data:
EOF_SECRET

for name in "${!files[@]}"; do
  path="${files[$name]}"
  if [[ ! -f "$path" ]]; then
    echo "[WARN] Skipping missing file: $path"
    continue
  fi
  echo "[INFO] Encrypting $name..."
  b64=$(python3 "$PY_SCRIPT" "$KEY" "$path")
  echo "  $name: $b64" >> "$SECRET_FILE"
done

echo "[INFO] Applying secret to cluster..."
kubectl apply -f "$SECRET_FILE"

echo "[INFO] Secret successfully uploaded."
EOF

Setting permissions

chmod +x /etc/kubernetes/openssl/upload-certs.sh

Running the script

/etc/kubernetes/openssl/upload-certs.sh
Command output
[INFO] Using certificate key: 0c00c2fd5c67c37656c00d78a9d7e1f2eb794ef8e4fc3e2a4b532eb14323cd59
[INFO] Encrypting front-proxy-ca.key...
[INFO] Encrypting sa.key...
[INFO] Encrypting front-proxy-ca.crt...
[INFO] Encrypting etcd-ca.crt...
[INFO] Encrypting sa.pub...
[INFO] Encrypting ca.key...
[INFO] Encrypting ca.crt...
[INFO] Encrypting etcd-ca.key...
[INFO] Applying secret to cluster...
secret/kubeadm-certs configured
[INFO] Secret successfully uploaded.

27. Labeling and Tainting Nodes

This section covers marking and restricting control plane nodes. It describes how to assign the control-plane role to a node and apply a taint that prevents workload pods from being scheduled on control plane nodes. These actions ensure isolation of control plane components and compliance with the cluster architecture model.

Node marking and restriction

● Required

Note

This section describes the cluster configuration that allows you to set the container scheduling policy in advance and ensure isolation of the control plane from unplanned workloads.

Environment variables

export HOST_NAME=master-1
export CLUSTER_NAME=my-first-cluster
export BASE_DOMAIN=example.com
export FULL_HOST_NAME=${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}

Node labeling

kubectl label node ${FULL_HOST_NAME} node-role.kubernetes.io/control-plane="" \
  --kubeconfig=/etc/kubernetes/super-admin.conf
Command output
node/master-1.my-first-cluster.example.com labeled

Node tainting

kubectl taint node ${FULL_HOST_NAME} node-role.kubernetes.io/control-plane="":NoSchedule \
  --overwrite \
  --kubeconfig=/etc/kubernetes/super-admin.conf
Command output
node/master-1.my-first-cluster.example.com modified
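With this taint in place, ordinary pods are no longer scheduled on the control plane. Components that must run there anyway (CNI DaemonSets, for example) declare a matching toleration in their pod spec; a typical fragment looks like this:

```yaml
tolerations:
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule
```

`operator: Exists` matches the taint regardless of its (empty) value, which is why it is the usual choice for this key.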

🍀 Conclusion

For me, Kubernetes The Hard Way has been a journey spanning nearly two years. It brought a wealth of new knowledge, opportunities... and, of course, challenges 🙂

This is far from my first article on this topic — if you're interested, check out my previous drafts on Habr:

To sum it up: this article took about four months to write. Every script was hand-polished (with the help of ChatGPT) and tested in real-world conditions. No kidding — during all this time I spun up over 400 clusters.

Thanks to those who understood the idea, and special thanks to those who read all the way to the end 🙌 I'd love to hear your feedback and will definitely continue sharing my experience — in the same spirit, but in a new format.

note

🐾 During all four months, no animals were harmed... except for the Good Cat 😼 It was an amazing experience that I wouldn't recommend unless you have a slight inclination toward masochism 😅 And if you do — welcome!