
Kubernetes The Hard Way: Workers

8 min read

Continuing Kubernetes The Hard Way: adding worker nodes to the cluster.

In the previous article we assembled the control plane manually: issued certificates, prepared configurations, and launched the management components. The API server is responding, but the cluster still has no worker nodes.

Without worker nodes there is nowhere to run application pods. In this article we will add a Worker node and walk through the entire path from a bare VM to a registered Kubernetes node.

The format is the same as in the first part: prepare the OS, install containerd and kubelet, set up cluster connectivity, and verify node registration. Two approaches are covered: a manual path via bootstrap tokens and the CSR API, and the standard kubeadm join route.

1. Introduction

The Control Plane makes decisions: which pods to schedule, how to maintain the desired state, and how to react to failures. But actual workloads run on Worker nodes — the data plane of the cluster.

A worker node boils down to two main parts: kubelet and the container runtime, in our case containerd. kubelet communicates with the API server, receives pod specifications, and ensures the containers are actually running.

Unlike master nodes, there is no etcd, kube-apiserver, kube-controller-manager, or kube-scheduler here. No control plane static pods. The CA private key never reaches the worker node either.

This makes the machine setup simpler, but introduces a separate bootstrap challenge: how does a new node join the cluster when it has neither a client certificate nor trust in the API?


2. Infrastructure

Below is the minimal set of parameters for adding a worker node: name, address, DNS, and software. This is enough to reproduce the steps from this article in your own environment.

Worker Nodes

| Name | IP Address | Operating System | Resources |
| --- | --- | --- | --- |
| worker-1.my-first-cluster.example.com | NODE-IP-4 | ubuntu-24-04-lts | 2 CPU / 4 GB RAM / 40 GB |

DNS Records

| A Record | IP Address | TTL |
| --- | --- | --- |
| worker-1.my-first-cluster.example.com | NODE-IP-4 | 60s |

Components

Only the components needed to run workloads are installed on the worker node. Control plane components and etcd are not required here.

| Component | Version | Purpose |
| --- | --- | --- |
| containerd | 1.7.19 | Container runtime managing the lifecycle of containers. |
| runc | v1.1.12 | Low-level tool for running containers using Linux kernel features. |
| crictl | v1.30.0 | Debugging utility for CRI runtimes with containerd support. |
| kubelet | v1.30.4 | Node agent ensuring pods are running and healthy. |
| kubectl | v1.30.4 | CLI client for interacting with the Kubernetes API (optional). |
| kubeadm | v1.30.4 | Tool for automating node joining (optional). |

3. Base OS Setup

First, bring the OS into a predictable state: set environment variables, change the hostname, and install basic utilities. The steps are nearly identical to master nodes — only the worker-specific values differ.

This section covers the basic preparation of Kubernetes worker nodes before installing components. It describes setting up environment variables, changing the hostname, and installing required system utilities. These steps are mandatory for the correct operation of kubelet on worker nodes.

Basic node setup

● Required

Basic node settings

  • Node environment variables.
  • Changing the node name.
  • Installing dependencies.

Node environment variables

export HOST_NAME=worker-1
export CLUSTER_NAME="my-first-cluster"
export BASE_DOMAIN="example.com"
export CLUSTER_DOMAIN="cluster.local"
export FULL_HOST_NAME="${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}"

Changing the node name

hostnamectl set-hostname ${FULL_HOST_NAME}

Installing dependencies

sudo apt update
sudo apt install -y conntrack socat jq tree

4. Loading Kernel Modules

Load the kernel modules required by containerd and the Kubernetes network stack. The set is the same as on master nodes.

This section covers loading kernel modules required for the correct operation of Kubernetes. The setup includes modprobe configuration and activation of the overlay and br_netfilter modules, which provide support for the container filesystem and network functions. These steps are mandatory for the functioning of network policies, iptables, and container runtimes.

Loading kernel modules

● Required

Module loading steps:

  • Modprobe configuration.
  • Loading modules.

Modprobe configuration

The commands below write to /etc, so run them as root (sudo -i).

sudo -i
cat <<EOF > /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

Loading modules

modprobe overlay
modprobe br_netfilter

Note

The overlay module is used by the OverlayFS filesystem to manage container layers. It allows merging multiple directories into a single virtual filesystem. It is used by runtimes such as Docker and containerd.

The br_netfilter module enables processing of network bridge traffic through the netfilter subsystem. This is necessary for the correct operation of iptables in Kubernetes.
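Once loaded, the modules can be verified without parsing lsmod output; a small sketch (the helper name is ours) that checks /sys/module, which also covers modules built into the kernel:

```shell
# Check that the required modules are available via /sys/module, which is
# populated both for loaded modules and for modules built into the kernel.
module_loaded() {
  test -d "/sys/module/${1//-/_}"   # module names use underscores in sysfs
}

for m in overlay br_netfilter; do
  module_loaded "$m" && echo "$m: present" || echo "$m: MISSING"
done
```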


5. Configuring sysctl Parameters

Next, configure sysctl: enable IP forwarding and bridge traffic parameters. These values match the control plane setup.

This section covers configuring kernel parameters using sysctl, which are necessary for Kubernetes networking. Changes are made to ensure traffic routing between pods and correct iptables operation for bridges. These parameters are mandatory for enabling IP packet forwarding and network flow filtering in the cluster.

Configuring sysctl parameters

● Required

Configuration steps:

  • Sysctl configuration.
  • Applying configuration.
Note

Network Parameters

For correct traffic routing and filtering, kernel parameters must be set.

Sysctl configuration

cat <<EOF > /etc/sysctl.d/99-br-netfilter.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
EOF

Applying configuration

sysctl --system

If the net.ipv4.ip_forward parameter is not enabled, the system will not forward IP packets between interfaces. This can lead to network failures within the cluster, service unavailability, and loss of connectivity between pods.

Additional sysctl configuration

cat <<EOF > /etc/sysctl.d/99-network.conf
net.ipv4.ip_forward=1
EOF
sysctl --system
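After sysctl --system, the effective values can be read back directly from /proc/sys; a small verification sketch (the helper name is ours):

```shell
# Read back an applied sysctl value directly from /proc/sys
# (the same files sysctl itself reads and writes).
check_sysctl() {
  local key="$1" want="$2"
  local path="/proc/sys/$(echo "$key" | tr '.' '/')"
  [ "$(cat "$path")" = "$want" ]
}

check_sysctl net.ipv4.ip_forward 1 && echo "ip_forward: ok" || echo "ip_forward: not applied"
```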

6. Installing Components

Only what is needed for running containers and registering the node is installed on the worker: containerd, kubelet, and supporting utilities. Control plane components are not installed here.

This section describes the installation process for the components required on Kubernetes worker nodes. The installation is performed manually and prepares the environment for subsequent node joining to the cluster.

Installation of runc

● Required

Component installation steps

  • Creating working directories.
  • Environment variables.
  • Download instructions.
  • Permissions setup.
  • Download service.
  • Starting the download service.

Creating working directories

mkdir -p /etc/default/runc

Environment variables

cat <<EOF > /etc/default/runc/download.env
COMPONENT_VERSION="v1.1.12"
REPOSITORY="https://github.com/opencontainers/runc/releases/download"
EOF

Download instructions

cat <<"EOF" > /etc/default/runc/download-script.sh
#!/bin/bash
set -Eeuo pipefail

COMPONENT_VERSION="${COMPONENT_VERSION:-v1.1.12}"
REPOSITORY="${REPOSITORY:-https://github.com/opencontainers/runc/releases/download}"
PATH_BIN="${REPOSITORY}/${COMPONENT_VERSION}/runc.amd64"
PATH_SHA256="${REPOSITORY}/${COMPONENT_VERSION}/runc.sha256sum"
INSTALL_PATH="/usr/local/bin/runc"

LOG_TAG="runc-installer"
TMP_DIR="$(mktemp -d)"
# Clean up the temporary directory on any exit path.
trap 'rm -rf "$TMP_DIR"' EXIT

logger -t "$LOG_TAG" "[INFO] Checking current runc version..."

CURRENT_VERSION=$("$INSTALL_PATH" --version 2>/dev/null | head -n1 | awk '{print $NF}') || CURRENT_VERSION="none"
COMPONENT_VERSION_CLEAN=$(echo "$COMPONENT_VERSION" | sed 's/^v//')

logger -t "$LOG_TAG" "[INFO] Current: $CURRENT_VERSION, Target: $COMPONENT_VERSION_CLEAN"

if [[ "$CURRENT_VERSION" != "$COMPONENT_VERSION_CLEAN" ]]; then
  logger -t "$LOG_TAG" "[INFO] Download URL: $PATH_BIN"
  logger -t "$LOG_TAG" "[INFO] Updating runc to version $COMPONENT_VERSION..."

  cd "$TMP_DIR"
  logger -t "$LOG_TAG" "[INFO] Working directory: $PWD"

  logger -t "$LOG_TAG" "[INFO] Downloading runc..."
  curl -fsSL -o runc.amd64 "$PATH_BIN" || { logger -t "$LOG_TAG" "[ERROR] Failed to download runc"; exit 1; }

  logger -t "$LOG_TAG" "[INFO] Downloading checksum file..."
  curl -fsSL -o runc.sha256sum "$PATH_SHA256" || { logger -t "$LOG_TAG" "[ERROR] Failed to download checksum file"; exit 1; }

  logger -t "$LOG_TAG" "[INFO] Verifying checksum..."
  grep "runc.amd64" runc.sha256sum | sha256sum -c - || { logger -t "$LOG_TAG" "[ERROR] Checksum verification failed!"; exit 1; }

  logger -t "$LOG_TAG" "[INFO] Installing runc..."
  install -m 755 runc.amd64 "$INSTALL_PATH"

  logger -t "$LOG_TAG" "[INFO] runc successfully updated to $COMPONENT_VERSION."
else
  logger -t "$LOG_TAG" "[INFO] runc is already up to date. Skipping installation."
fi
EOF

Permissions setup

chmod +x /etc/default/runc/download-script.sh

Download service

cat <<EOF > /usr/lib/systemd/system/runc-install.service
[Unit]
Description=Install and update in-cloud component runc
After=network.target network-online.target
Wants=network-online.target

[Service]
Type=oneshot
EnvironmentFile=-/etc/default/runc/download.env
ExecStart=/bin/bash -c "/etc/default/runc/download-script.sh"
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF

Starting the download service

systemctl enable runc-install.service
systemctl start runc-install.service

Installation check

Installation logs

journalctl -t runc-installer
Command output
***** [INFO] Checking current runc version...
***** [INFO] Current: none, Target: 1.1.12
***** [INFO] Download URL: https://*******
***** [INFO] Updating runc to version v1.1.12...
***** [INFO] Working directory: /tmp/tmp.*****
***** [INFO] Downloading runc...
***** [INFO] Downloading checksum file...
***** [INFO] Verifying checksum...
***** [INFO] Installing runc...
***** [INFO] runc successfully updated to v1.1.12.

Executable files

ls -la /usr/local/bin/ | grep 'runc$'
Command output
-rwxr-xr-x  1 root root  10709696 Jan 23  2024 runc

Executable file version

runc --version
Command output
runc version 1.1.12
commit: v1.1.12-0-g51d5e946
spec: 1.0.2-dev
go: go1.20.13
libseccomp: 2.5.4

7. Configuring Components

After installation, configure containerd, kubelet, crictl, and optionally kubeadm. For the worker node, kubeadm only needs a JoinConfiguration block without the controlPlane section.
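As a reference for the kubeadm route, below is a minimal JoinConfiguration sketch. The file path is illustrative, the token and API endpoint reuse the values from this article, and the caCertHashes entry is a placeholder to be replaced with the real hash of the cluster CA public key:

```shell
# Illustrative JoinConfiguration for the kubeadm route (file name is an
# assumption; replace the caCertHashes placeholder with the real value).
cat <<EOF > /etc/kubernetes/kubeadm-join.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: api.my-first-cluster.example.com:6443
    token: fjt9ex.lwzqgdlvoxtqk4yw
    caCertHashes:
    - "sha256:<hash of the cluster CA public key>"
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
EOF
kubeadm join --config /etc/kubernetes/kubeadm-join.yaml
```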

This section describes the setup and configuration of components required for Kubernetes worker nodes.

Configuration of containerd

● Required

Component configuration steps

  • Component configuration
  • Systemd Unit setup for the component
  • Systemd Unit start

Component configuration

Creating working directories

mkdir -p /etc/containerd/
mkdir -p /etc/containerd/conf.d
mkdir -p /etc/containerd/certs.d

Base configuration file

cat <<"EOF" > /etc/containerd/config.toml
version = 2
imports = ["/etc/containerd/conf.d/*.toml"]
EOF

Custom configuration file template

cat <<"EOF" > /etc/containerd/conf.d/in-cloud.toml
version = 2

[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.k8s.io/pause:3.9"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
  [plugins."io.containerd.grpc.v1.cri".registry]
    config_path = "/etc/containerd/certs.d/"
EOF
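Because config_path points at /etc/containerd/certs.d/, per-registry settings live in `<registry>/hosts.toml` files under that directory. An illustrative mirror entry for docker.io (the mirror URL is an assumption):

```shell
# Illustrative per-registry mirror configuration for containerd.
# With config_path set, containerd looks up /etc/containerd/certs.d/<registry>/hosts.toml.
mkdir -p /etc/containerd/certs.d/docker.io
cat <<EOF > /etc/containerd/certs.d/docker.io/hosts.toml
server = "https://registry-1.docker.io"

[host."https://mirror.example.com"]
  capabilities = ["pull", "resolve"]
EOF
```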

Systemd Unit setup for the component

cat <<EOF > /usr/lib/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target containerd-install.service runc-install.service
Wants=containerd-install.service runc-install.service

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF

Systemd Unit start

systemctl enable containerd
systemctl start containerd

Configuration verification

tree /etc/containerd/
Command output
/etc/containerd/
├── certs.d
├── conf.d
│   └── in-cloud.toml
└── config.toml
systemctl status containerd
Command output
● containerd.service - containerd container runtime
Loaded: loaded (/usr/lib/systemd/system/containerd.service; enabled; preset: enabled)
Active: active (running) since Tue 2024-12-31 17:26:21 UTC; 2min 30s ago
Docs: https://containerd.io
Main PID: 839 (containerd)
Tasks: 7 (limit: 2274)
Memory: 62.0M (peak: 62.5M)
CPU: 375ms
CGroup: /system.slice/containerd.service
└─839 /usr/local/bin/containerd

***** level=info msg="Start subscribing containerd event"
***** level=info msg="Start recovering state"
***** level=info msg="Start event monitor"
***** level=info msg="Start snapshots syncer"
***** level=info msg="Start cni network conf syncer for default"
***** level=info msg="Start streaming server"
***** level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
***** level=info msg=serving... address=/run/containerd/containerd.sock
***** level=info msg="containerd successfully booted in 0.065807s"
***** Started containerd.service - containerd container runtime.

8. Authentication

This is the most important step in the entire article. The worker node has no access to the CA private key, so it needs to securely establish trust with the cluster using one of two methods:

  • Bootstrap Token + CSR API — manual path with full control over TLS Bootstrap
  • Kubeadm — standard joining via kubeadm join
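For the kubeadm route, the join command normally pins the cluster CA with --discovery-token-ca-cert-hash sha256:&lt;hex&gt;. That hash is a SHA-256 digest of the CA's DER-encoded public key; a sketch of the derivation (the helper name is ours):

```shell
# Compute the CA public-key hash used by kubeadm discovery:
# extract the public key, convert it to DER, and hash it with SHA-256.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}
```

Run `ca_cert_hash /etc/kubernetes/pki/ca.crt` on a master node and pass the result to `kubeadm join` as `sha256:<output>`.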

Worker node authentication

● Required

The manual scenario covers fetching ca.crt from cluster-info, building bootstrap-kubelet.conf, and optionally walking through the CSR flow for kubelet client and server certificates.

This section describes authentication options for kubelet on worker nodes when connecting to a Kubernetes cluster. The strategy depends on security requirements and the installation method.

The manual path creates bootstrap-kubelet.conf using a bootstrap token. After starting, kubelet automatically performs TLS Bootstrap: it obtains a client certificate and creates kubelet.conf.

Warning

This example uses a static bootstrap token for all worker nodes. In production environments, it is recommended to generate a unique token for each node with a limited TTL.

Creating a bootstrap token

A bootstrap token is a Secret in the kube-system namespace that allows a new node to join the cluster. The token is created below by applying the Secret manifest directly with kubectl.
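The hard-coded TOKEN_ID and TOKEN_SECRET used below follow the bootstrap token format: a 6-character id and a 16-character secret from [a-z0-9], joined as `<id>.<secret>`. A sketch for generating a random token instead (the function name is ours):

```shell
# Generate a random bootstrap token in the <id>.<secret> format:
# 6-character id and 16-character secret, lowercase alphanumeric only.
gen_token_part() {
  tr -dc 'a-z0-9' < /dev/urandom | head -c "$1"
}

TOKEN_ID="$(gen_token_part 6)"
TOKEN_SECRET="$(gen_token_part 16)"
echo "${TOKEN_ID}.${TOKEN_SECRET}"
```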

🖥️ Master node

The commands below must be executed on a master node or on a host with a kubeconfig that has permissions to create Secrets in the kube-system namespace.

Environment variables

export AUTH_EXTRA_GROUPS="system:bootstrappers:kubeadm:default-node-token"
export DESCRIPTION="kubeadm bootstrap token"
export EXPIRATION=$(date -u -d '24 hours' "+%Y-%m-%dT%H:%M:%SZ")
export TOKEN_ID="fjt9ex"
export TOKEN_SECRET="lwzqgdlvoxtqk4yw"
export USAGE_BOOTSTRAP_AUTHENTICATION="true"
export USAGE_BOOTSTRAP_SIGNING="true"

Create Secret

kubectl \
  --kubeconfig=/etc/kubernetes/super-admin.conf \
  apply -f - <<EOF
---
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-${TOKEN_ID}
  namespace: kube-system
data:
  auth-extra-groups: $(echo -n "$AUTH_EXTRA_GROUPS" | base64)
  description: $(echo -n "$DESCRIPTION" | base64)
  expiration: $(echo -n "$EXPIRATION" | base64)
  token-id: $(echo -n "$TOKEN_ID" | base64)
  token-secret: $(echo -n "$TOKEN_SECRET" | base64)
  usage-bootstrap-authentication: $(echo -n "$USAGE_BOOTSTRAP_AUTHENTICATION" | base64)
  usage-bootstrap-signing: $(echo -n "$USAGE_BOOTSTRAP_SIGNING" | base64)
type: bootstrap.kubernetes.io/token
EOF

Creating bootstrap-kubelet.conf

🖥️ Worker node

All commands in this section are executed on the worker node. The ca.crt file is not yet present on the worker node. CA data is fetched from the public cluster-info ConfigMap in the kube-public namespace, accessible anonymously via kube-apiserver.

Environment variables

export BOOTSTRAP_TOKEN=fjt9ex.lwzqgdlvoxtqk4yw
export API_SERVER="https://api.my-first-cluster.example.com:6443"

Working directory

mkdir -p /etc/kubernetes

Fetch CA from cluster-info

export CA_DATA=$(curl -sk "${API_SERVER}/api/v1/namespaces/kube-public/configmaps/cluster-info" | \
jq -r '.data.kubeconfig' | \
grep 'certificate-authority-data' | \
awk '{print $2}')

Save CA certificate

mkdir -p /etc/kubernetes/pki
echo "${CA_DATA}" | base64 -d > /etc/kubernetes/pki/ca.crt
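The cluster-info request above is made with -k (no TLS verification), so before trusting the saved CA it is worth comparing its fingerprint with one computed on a master node out of band. A sketch (the helper name is ours):

```shell
# Print the SHA-256 fingerprint of a certificate; run the same command on a
# master node against its /etc/kubernetes/pki/ca.crt and compare the output.
ca_fingerprint() {
  openssl x509 -in "$1" -noout -fingerprint -sha256
}

if [ -f /etc/kubernetes/pki/ca.crt ]; then
  ca_fingerprint /etc/kubernetes/pki/ca.crt
fi
```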

Generate kubeconfig

cat <<EOF > /etc/kubernetes/bootstrap-kubelet.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${CA_DATA}
    server: ${API_SERVER}
  name: my-first-cluster
contexts:
- context:
    cluster: my-first-cluster
    user: tls-bootstrap-token-user
  name: tls-bootstrap-token-user@kubernetes
current-context: tls-bootstrap-token-user@kubernetes
kind: Config
preferences: {}
users:
- name: tls-bootstrap-token-user
  user:
    token: ${BOOTSTRAP_TOKEN}
EOF

Kubernetes CSR (TLS Bootstrap simulation)

This approach simulates kubelet's TLS Bootstrap behavior: private keys are generated on the worker node, CSRs are submitted through the Kubernetes API using bootstrap-kubelet.conf, and approval is performed by an administrator on the master node. The CA private key is not required on the worker node.

Kubelet Client Certificate (CSR)

● Required

Purpose: Kubelet client certificate for connecting to kube-apiserver.

1. Generate key and CSR

🖥️ Worker node

All commands in this step are executed on the worker node.

export HOST_NAME=worker-1
export CLUSTER_NAME="my-first-cluster"
export BASE_DOMAIN="example.com"
export FULL_HOST_NAME="${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}"
mkdir -p /var/lib/kubelet/pki
mkdir -p /etc/kubernetes/openssl/csr
cat <<EOF > /etc/kubernetes/openssl/kubelet-client.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn

[ dn ]
CN = system:node:${FULL_HOST_NAME}
O = system:nodes

[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=keyEncipherment,dataEncipherment
extendedKeyUsage=clientAuth
EOF
openssl genrsa \
-out /var/lib/kubelet/pki/kubelet-client-key.pem 2048
openssl req -new \
-key /var/lib/kubelet/pki/kubelet-client-key.pem \
-out /etc/kubernetes/openssl/csr/kubelet-client.csr \
-config /etc/kubernetes/openssl/kubelet-client.conf

2. Submit CSR to Kubernetes API

🖥️ Worker node

Worker node authenticates with the bootstrap token via bootstrap-kubelet.conf.

export HOST_NAME=worker-1
export CSR_NAME="${HOST_NAME}-kubelet-client"
export CSR_CONTENT=$(cat /etc/kubernetes/openssl/csr/kubelet-client.csr | base64 | tr -d '\n')
kubectl \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
  apply -f - <<EOF
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: ${CSR_NAME}
spec:
  request: ${CSR_CONTENT}
  signerName: kubernetes.io/kube-apiserver-client-kubelet
  usages:
  - digital signature
  - key encipherment
  - client auth
EOF

3. Approve CSR

🖥️ Master node

CSR approval is performed on the master node. Specify the name of the worker node for which the CSR is being approved.

export HOST_NAME=worker-1
export CSR_NAME="${HOST_NAME}-kubelet-client"
kubectl \
--kubeconfig=/etc/kubernetes/super-admin.conf \
certificate approve ${CSR_NAME}

4. Retrieve signed certificate

🖥️ Worker node

Certificate is retrieved on the worker node using bootstrap-kubelet.conf.

export HOST_NAME=worker-1
export CSR_NAME="${HOST_NAME}-kubelet-client"
kubectl \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
get csr ${CSR_NAME} \
-o jsonpath='{.status.certificate}' | base64 -d > /var/lib/kubelet/pki/kubelet-client.pem
export TIMESTAMP=$(date '+%Y-%m-%d-%H-%M-%S')
cat /var/lib/kubelet/pki/kubelet-client.pem /var/lib/kubelet/pki/kubelet-client-key.pem > /var/lib/kubelet/pki/kubelet-client-${TIMESTAMP}.pem
ln -sf /var/lib/kubelet/pki/kubelet-client-${TIMESTAMP}.pem /var/lib/kubelet/pki/kubelet-client-current.pem
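Before pointing kubelet at the retrieved pair, it can be sanity-checked locally against the fetched CA; a sketch assuming the paths from above (the helper name is ours):

```shell
# Verify that a retrieved kubelet certificate chains to the cluster CA and
# carries the expected node identity in its subject.
verify_node_cert() {
  local ca="$1" cert="$2" node="$3"
  openssl verify -CAfile "$ca" "$cert" >/dev/null || return 1
  openssl x509 -in "$cert" -noout -subject | grep -q "system:node:${node}"
}

if [ -f /var/lib/kubelet/pki/kubelet-client-current.pem ]; then
  verify_node_cert /etc/kubernetes/pki/ca.crt \
    /var/lib/kubelet/pki/kubelet-client-current.pem "${FULL_HOST_NAME}"
fi
```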

Kubelet Server Certificate (CSR)

● Required

Purpose: Kubelet server certificate for TLS on port 10250.

1. Generate key and CSR

🖥️ Worker node

All commands in this step are executed on the worker node.

export HOST_NAME=worker-1
export CLUSTER_NAME="my-first-cluster"
export BASE_DOMAIN="example.com"
export FULL_HOST_NAME="${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}"
export MACHINE_LOCAL_ADDRESS="$(ip -4 addr show scope global | awk '/inet/ {print $2; exit}' | cut -d/ -f1)"
mkdir -p /var/lib/kubelet/pki
mkdir -p /etc/kubernetes/openssl/csr
cat <<EOF > /etc/kubernetes/openssl/kubelet-server.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
req_extensions = req_ext

[ req_ext ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = localhost
DNS.2 = ${HOST_NAME}
DNS.3 = ${FULL_HOST_NAME}
IP.1 = 127.0.0.1
IP.2 = 0:0:0:0:0:0:0:1
IP.3 = ${MACHINE_LOCAL_ADDRESS}

[ dn ]
CN = system:node:${FULL_HOST_NAME}
O = system:nodes

[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=keyEncipherment,dataEncipherment
extendedKeyUsage=serverAuth
subjectAltName=@alt_names
EOF
openssl genrsa \
-out /var/lib/kubelet/pki/kubelet-server-key.pem 2048
openssl req -new \
-key /var/lib/kubelet/pki/kubelet-server-key.pem \
-out /etc/kubernetes/openssl/csr/kubelet-server.csr \
-config /etc/kubernetes/openssl/kubelet-server.conf

2. Submit CSR to Kubernetes API

🖥️ Worker node

Worker node authenticates with the bootstrap token via bootstrap-kubelet.conf.

export HOST_NAME=worker-1
export CSR_NAME="${HOST_NAME}-kubelet-server"
export CSR_CONTENT=$(cat /etc/kubernetes/openssl/csr/kubelet-server.csr | base64 | tr -d '\n')
kubectl \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
  apply -f - <<EOF
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: ${CSR_NAME}
spec:
  request: ${CSR_CONTENT}
  signerName: kubernetes.io/kubelet-serving
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF

3. Approve CSR

🖥️ Master node

CSR approval is performed on the master node. Specify the name of the worker node for which the CSR is being approved.

export HOST_NAME=worker-1
export CSR_NAME="${HOST_NAME}-kubelet-server"
kubectl \
--kubeconfig=/etc/kubernetes/super-admin.conf \
certificate approve ${CSR_NAME}

4. Retrieve signed certificate

🖥️ Worker node

Certificate is retrieved on the worker node using bootstrap-kubelet.conf.

export HOST_NAME=worker-1
export CSR_NAME="${HOST_NAME}-kubelet-server"
kubectl \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
get csr ${CSR_NAME} \
-o jsonpath='{.status.certificate}' | base64 -d > /var/lib/kubelet/pki/kubelet-server.pem
export TIMESTAMP=$(date '+%Y-%m-%d-%H-%M-%S')
cat /var/lib/kubelet/pki/kubelet-server.pem /var/lib/kubelet/pki/kubelet-server-key.pem > /var/lib/kubelet/pki/kubelet-server-${TIMESTAMP}.pem
ln -sf /var/lib/kubelet/pki/kubelet-server-${TIMESTAMP}.pem /var/lib/kubelet/pki/kubelet-server-current.pem

9. Starting Kubelet

Once authentication is complete, kubelet is ready to start. This step creates the kubelet config, brings up the systemd service, and lets the node register in the cluster.

This section describes connecting a worker node to a Kubernetes cluster and starting Kubelet. For manual installation (Hard Way), you need to create a bootstrap kubeconfig with an authentication token, a base kubelet configuration file, and start the systemd service. When using kubeadm, simply run the kubeadm join command.
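The status output later in this section assumes a kubeadm-style unit layout: a kubelet.service plus a 10-kubeadm.conf drop-in. If kubelet was installed manually, a sketch of that layout, modeled on the drop-in shipped with the kubeadm packages (paths follow this article):

```shell
# Sketch of a kubelet systemd unit with a kubeadm-style drop-in.
cat <<"EOF" > /usr/lib/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/
After=containerd.service
Wants=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

# Drop-in that wires in the bootstrap kubeconfig, the config file,
# and the extra flags from kubeadm-flags.env.
mkdir -p /usr/lib/systemd/system/kubelet.service.d
cat <<"EOF" > /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS
EOF

systemctl daemon-reload
systemctl enable kubelet
```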

Start/Configure kubelet

● Required

This configuration file is required to start Kubelet.

Kubelet default config

Basic kubelet configuration file

cat <<EOF > /var/lib/kubelet/config.yaml
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 29.64.0.10
clusterDomain: cluster.local
containerRuntimeEndpoint: ""
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMaximumGCAge: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
    text:
      infoBufferSize: "0"
  verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
resolvConf: /run/systemd/resolve/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
EOF

Prerequisites

Before starting kubelet, complete the steps in section 8 (Authentication):

  • Fetch the CA certificate (ca.crt)
  • Create bootstrap-kubelet.conf (or generate certificates manually)

Environment variables

Note

This configuration block is applicable only when installing Kubernetes manually (using the "Kubernetes the Hard Way" method). When using the kubeadm utility, the configuration file will be created automatically based on the specification provided in the kubeadm-config file.

cat <<EOF > /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.9 --config=/var/lib/kubelet/config.yaml --cluster-domain=cluster.local --cluster-dns=29.64.0.10"
EOF

This command starts the kubelet service; on a worker node, kubelet performs TLS Bootstrap, registers the node, and runs the containers scheduled to it.

systemctl start kubelet

Systemd Unit Status

Systemd unit readiness check
Note

Note that when kubelet is installed from kubeadm packages but neither kubeadm init nor kubeadm join has been run yet, the systemd unit is enabled but kubelet restarts in a loop until its configuration appears.

systemctl status kubelet
Command output
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; preset: enabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Sat 2025-02-22 10:33:54 UTC; 17min ago
Docs: https://kubernetes.io/docs/
Main PID: 13779 (kubelet)
Tasks: 14 (limit: 7069)
Memory: 34.0M (peak: 35.3M)
CPU: 27.131s
CGroup: /system.slice/kubelet.service
└─13779 /usr/local/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml

10. Verification

After starting kubelet, verify the node has appeared in the cluster. Until a network plugin is installed, it may remain in NotReady state — this is expected.

Run on a master node:

kubectl --kubeconfig=/etc/kubernetes/super-admin.conf get nodes -o wide
Command output
NAME                                    STATUS     ROLES    AGE   VERSION
master-1.my-first-cluster.example.com   NotReady   master   1d    v1.32.0
master-2.my-first-cluster.example.com   NotReady   master   1d    v1.32.0
master-3.my-first-cluster.example.com   NotReady   master   1d    v1.32.0
worker-1.my-first-cluster.example.com   NotReady   <none>   30s   v1.32.0
NotReady Status

The NotReady status is normal behavior until a network plugin (CNI) is installed. After deploying a CNI (Calico, Cilium, Flannel, etc.) the node status will change to Ready.


Conclusion

The worker node has been added to the cluster. Now the control plane can be used as intended — it can accept real workloads.

Along the way we:

  • Prepared the OS and network stack
  • Installed containerd, kubelet, and supporting utilities
  • Connected the node manually via bootstrap token and CSR API, or via kubeadm
  • Started kubelet and verified node registration

The next logical step is to install a CNI plugin, set up in-cluster DNS, and only then move on to deploying application workloads.

Note

If you've made it this far, you already have a manually assembled control plane and your first worker node. From here you can proceed to the network plugin and turn the cluster skeleton into a working environment.