Kubernetes The Hard Way
Resuming the Kubernetes article series in a new format.
This article describes the overall experience of manually deploying Kubernetes without
automated tools such as kubeadm. The presented approach is consistent with our
documentation, which we maintain according to best practices and
IaC (Infrastructure as Code) methodologies.
All configuration below exactly replicates the behavior of kubeadm, so the final
cluster is hard to tell apart from one assembled with kubeadm.

1. Introduction
Kubernetes has become the de facto standard for managing containerized applications. Its installation and configuration have been greatly simplified thanks to tools like kubeadm, which handle certificate generation, component startup, and basic cluster configuration.
However, behind this convenience lies a complex architecture, understanding of which is critical when designing fault-tolerant solutions, creating custom automations, or debugging production issues. To truly understand how a Kubernetes cluster works, it is important to go through the deployment process manually — from initialization to full readiness.
Kubernetes The Hard Way is a guide in which a cluster is deployed step by step, without kubeadm or other automated tools. Instead of a black box, you execute, in sequence, all the steps that are usually performed under the hood.
Each stage corresponds to a specific phase of kubeadm init or kubeadm join, but is implemented manually, with explicit key generation, configuration preparation, process startup, and system state verification.
💡 The result is a fully functional Kubernetes cluster, virtually indistinguishable from one assembled via
kubeadm, but built with a complete understanding of all internal dependencies.
This article is intended for readers who are already familiar with the basic concepts of containerization and Kubernetes in general. Without this background, the level of detail will be overwhelming. If you are just getting started, we recommend reviewing the official Kubernetes Bootcamp.
🔧 Preface: Why the Startup Order Matters
Some systems are designed so that components are interdependent, and their management is partially performed within the system itself. This requires a strict order of operations:
⚙️ Component Interdependency
One component cannot start without another.
Example: the API requires storage, and storage requires networking and configuration.
⏱ Cannot Start Everything Simultaneously
Parallel startup leads to undesirable results.
Example: the scheduler waits for the API, and the API waits for data loading and initialization.
🔄 Some Components Are Started Externally
Before the system is ready, some processes are started through the environment.
Example: kubelet is started via systemd, not as part of the cluster.
🛠 A Bootstrap Stage Is Required
Configs, certificates, and addresses are all prepared manually.
Example: initial generation of the root CA, kubeconfig files, and static pod manifests.
🤖 Transition to Self-Management
After startup, the system begins to manage its own processes and state.
Example: control plane components begin to control each other through the API.
Without a strictly defined sequence, such a system will not work.
This is exactly why tools and utilities like kubeadm exist — they solve the
"chicken and egg" problem and establish the correct deployment order.
Chapters:
- 1. Introduction
- 2. Why "The Hard Way"
- 3. Deployment Architecture
- 4. Creating the Infrastructure
- 5. Basic Node Setup
- 6. Loading Kernel Modules
- 7. Configuring sysctl Parameters
- 8. Installing Components
- 9. Configuring Components
- 10. Verifying Component Readiness
- 11. Working with Certificates
- 12. Creating Root Certificates
- 13. Creating Application Certificates
- 14. Creating the ServiceAccount Signing Key
- 15*. Creating All Certificates
- 16. Creating kubeconfig Configurations
- 17*. Creating All kubeconfigs
- 18. Verifying the Certificate Block
- 19. Creating Control Plane Static Pods
- 20*. Creating All Control Plane Static Pods
- 21. Creating ETCD Cluster Static Pods
- 22. Starting the Kubelet Service
- 23. Checking Cluster Status
- 24. Configuring the Role Model
- 25. Uploading Configuration to the Cluster
- 26. Uploading Root Certificates to the Cluster
- 27. Labeling and Tainting Nodes

2. Why "The Hard Way"
Deploying Kubernetes manually requires additional effort. However, this approach offers clear advantages:
- It provides a deep understanding of the architecture and internal logic of Kubernetes components.
- It allows flexible configuration of each cluster component to meet specific technical requirements.
3. Deployment Architecture
Component Layer
Technology layer.
Below is a list of components required for manual cluster deployment. To ensure compatibility, all versions must be synchronized with each other.
| Component | Version | Purpose |
|---|---|---|
| containerd | 1.7.19 | Container runtime that manages the container lifecycle. |
| runc | v1.1.12 | Low-level tool for running containers using Linux kernel capabilities. |
| crictl | v1.30.0 | Utility for debugging CRI runtimes with containerd interaction support. |
| kubectl | v1.30.4 | Client for interacting with the Kubernetes API. |
| kubeadm | v1.30.4 | Tool for automating Kubernetes installation and configuration (used for configuration validation). |
| kubelet | v1.30.4 | Agent running on each node, responsible for pod execution and health monitoring. |
| etcd | 3.5.12-0 | Distributed key-value store for storing cluster configuration and state. |
| kube-apiserver | v1.30.4 | Component providing a REST API for cluster interaction. |
| kube-controller-manager | v1.30.4 | Manages the state of cluster objects using built-in controllers. |
| kube-scheduler | v1.30.4 | Responsible for scheduling pod placement on nodes. |
| conntrack | v1.4+ | Utility for tracking network connections (used by iptables and kubelet). |
| socat | 1.8+ | Utility for port forwarding and TCP tunneling (used for debugging and proxying). |
Switching Layer
Network deployment diagram.
| Component | Port | Protocol |
|---|---|---|
| etcd-server | 2379 | TCP |
| etcd-peer | 2380 | TCP |
| etcd-metrics | 2381 | TCP |
| kube-apiserver | 6443 | TCP |
| kube-controller-manager | 10257 | TCP |
| kube-scheduler | 10259 | TCP |
| kubelet-healthz | 10248 | TCP |
| kubelet-server | 10250 | TCP |
| kubelet-read-only-port | 10255 | TCP |
Load Balancing Layer
| IP Address | Target Group | Port | Target Port |
|---|---|---|---|
| VIP-LB | NODE-IP-1, NODE-IP-2, NODE-IP-3 | 6443 | 6443 |
DNS Records
| A Record | IP Address | TTL |
|---|---|---|
| api.my-first-cluster.example.com | VIP-LB | 60s |
| master-1.my-first-cluster.example.com | NODE-IP-1 | 60s |
| master-2.my-first-cluster.example.com | NODE-IP-2 | 60s |
| master-3.my-first-cluster.example.com | NODE-IP-3 | 60s |
4. Creating the Infrastructure
At this stage, the basic cluster architecture is defined, including its network topology, control plane nodes, and core parameters.
Cluster Information
| Name | External Domain | Kubernetes Version |
|---|---|---|
| my-first-cluster | example.com | v1.30.4 |
Control Plane Nodes
| Name | IP Address | Operating System | Resources |
|---|---|---|---|
| master-1.my-first-cluster.example.com | NODE-IP-1 | ubuntu-24-04-lts | 2CPU / 2RAM / 20GB |
| master-2.my-first-cluster.example.com | NODE-IP-2 | ubuntu-24-04-lts | 2CPU / 2RAM / 20GB |
| master-3.my-first-cluster.example.com | NODE-IP-3 | ubuntu-24-04-lts | 2CPU / 2RAM / 20GB |
5. Basic Node Setup
This section covers the basic preparation of Kubernetes nodes before installing components. It describes setting up environment variables, changing the hostname, and installing required system utilities. These steps are mandatory for the correct operation of kubelet and other control plane components.
Basic node setup
● Required
Basic node settings
- Node environment variables.
- Changing the node name.
- Installing dependencies.
Node environment variables
Set HOST_NAME to match the node being configured (master-1, master-2, or master-3):
export HOST_NAME=master-1   # use master-2 / master-3 on the other nodes
export CLUSTER_NAME="my-first-cluster"
export BASE_DOMAIN="example.com"
export CLUSTER_DOMAIN="cluster.local"
export FULL_HOST_NAME="${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}"
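Before renaming the node, echoing the composed name is a cheap sanity check that the variables expand as intended (shown here for master-1):

```shell
export HOST_NAME=master-1
export CLUSTER_NAME="my-first-cluster"
export BASE_DOMAIN="example.com"
# FULL_HOST_NAME is assembled from the three variables above
export FULL_HOST_NAME="${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}"
echo "${FULL_HOST_NAME}"   # master-1.my-first-cluster.example.com
```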
Changing the node name
hostnamectl set-hostname ${FULL_HOST_NAME}
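To confirm the rename took effect, you can read the hostname back (a convenience check, not one of the required steps):

```shell
hostname                        # prints the name just set
cat /proc/sys/kernel/hostname   # kernel view; should match the line above
```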
Installing dependencies
- apt
- yum
- dnf
sudo apt update
sudo apt install -y conntrack socat jq tree
sudo yum update
sudo yum install -y conntrack-tools socat jq tree
sudo dnf update
sudo dnf install -y conntrack-tools socat jq tree
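After the package step, a short loop verifies that each required utility is actually on PATH (a convenience check; the list mirrors the dependencies installed above):

```shell
# Report the location of each dependency, or flag it as missing
for bin in conntrack socat jq tree; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "$bin: $(command -v "$bin")"
  else
    echo "$bin: MISSING"
  fi
done
```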
6. Loading Kernel Modules
This section covers loading kernel modules required for the correct operation of Kubernetes. The setup includes modprobe configuration and activation of the overlay and br_netfilter modules, which provide support for the container filesystem and network functions. These steps are mandatory for the functioning of network policies, iptables, and container runtimes.
Loading kernel modules
● Required
Setup steps:
- Modprobe configuration.
- Loading modules.
The overlay module is used by the OverlayFS filesystem to manage container layers. It allows merging multiple directories into a single virtual filesystem. It is used by runtimes such as Docker and containerd.
The br_netfilter module enables processing of network bridge traffic through the netfilter subsystem. This is necessary for the correct operation of iptables in Kubernetes.
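The modprobe configuration and module loading described above can be sketched as follows; the file name k8s.conf under /etc/modules-load.d is an assumption, chosen to match the layout commonly used for kubeadm's prerequisites:

```shell
# Persist the module list so both modules load on every boot
cat <<EOF > /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
# Load the modules immediately, without waiting for a reboot
modprobe overlay
modprobe br_netfilter
# Verify both modules are active
lsmod | grep -E 'overlay|br_netfilter'
```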
7. Configuring sysctl Parameters
This section covers configuring kernel parameters using sysctl, which are necessary for Kubernetes networking. Changes are made to ensure traffic routing between pods and correct iptables operation for bridges. These parameters are mandatory for enabling IP packet forwarding and network flow filtering in the cluster.
Configuring sysctl parameters
● Required
Setup steps:
- Sysctl configuration.
- Applying configuration.
Network Parameters
For correct traffic routing and filtering, kernel parameters must be set.
If the net.ipv4.ip_forward parameter is not enabled, the system will not forward IP packets between interfaces. This can lead to network failures within the cluster, service unavailability, and loss of connectivity between pods.
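The sysctl parameters described above can be sketched as follows; the file name k8s.conf under /etc/sysctl.d is an assumption, matching kubeadm's documented prerequisites:

```shell
# Enable bridge traffic filtering through iptables and IP forwarding
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
# Apply every sysctl configuration file without rebooting
sysctl --system
```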
8. Installing Components
This section describes the installation process for the core components required for a Kubernetes cluster. The installation is performed manually and prepares the environment for subsequent initialization and control plane configuration stages.
- runc
- containerd
- kubelet
- etcd
- kubectl
- crictl
- kubeadm
Installation of runc
● Required
Component installation steps
- Creating working directories.
- Environment variables.
- Download instructions.
- Permissions setup.
- Download service.
- Starting the download service.
- Bash
- Cloud-init
Creating working directories
mkdir -p /etc/default/runc
Environment variables
cat <<EOF > /etc/default/runc/download.env
COMPONENT_VERSION="v1.1.12"
REPOSITORY="https://github.com/opencontainers/runc/releases/download"
EOF
Download instructions
cat <<"EOF" > /etc/default/runc/download-script.sh
#!/bin/bash
set -Eeuo pipefail
COMPONENT_VERSION="${COMPONENT_VERSION:-v1.1.12}"
REPOSITORY="${REPOSITORY:-https://github.com/opencontainers/runc/releases/download}"
PATH_BIN="${REPOSITORY}/${COMPONENT_VERSION}/runc.amd64"
PATH_SHA256="${REPOSITORY}/${COMPONENT_VERSION}/runc.sha256sum"
INSTALL_PATH="/usr/local/bin/runc"
LOG_TAG="runc-installer"
TMP_DIR="$(mktemp -d)"
logger -t "$LOG_TAG" "[INFO] Checking current runc version..."
CURRENT_VERSION=$($INSTALL_PATH --version 2>/dev/null | head -n1 | awk '{print $NF}') || CURRENT_VERSION="none"
COMPONENT_VERSION_CLEAN=$(echo "$COMPONENT_VERSION" | sed 's/^v//')
logger -t "$LOG_TAG" "[INFO] Current: $CURRENT_VERSION, Target: $COMPONENT_VERSION_CLEAN"
if [[ "$CURRENT_VERSION" != "$COMPONENT_VERSION_CLEAN" ]]; then
logger -t "$LOG_TAG" "[INFO] Download URL: $PATH_BIN"
logger -t "$LOG_TAG" "[INFO] Updating runc to version $COMPONENT_VERSION..."
cd "$TMP_DIR"
logger -t "$LOG_TAG" "[INFO] Working directory: $PWD"
logger -t "$LOG_TAG" "[INFO] Downloading runc..."
curl -fsSL -o runc.amd64 "$PATH_BIN" || { logger -t "$LOG_TAG" "[ERROR] Failed to download runc"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Downloading checksum file..."
curl -fsSL -o runc.sha256sum "$PATH_SHA256" || { logger -t "$LOG_TAG" "[ERROR] Failed to download checksum file"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Verifying checksum..."
grep "runc.amd64" runc.sha256sum | sha256sum -c - || { logger -t "$LOG_TAG" "[ERROR] Checksum verification failed!"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Installing runc..."
install -m 755 runc.amd64 "$INSTALL_PATH"
logger -t "$LOG_TAG" "[INFO] runc successfully updated to $COMPONENT_VERSION."
rm -rf "$TMP_DIR"
else
logger -t "$LOG_TAG" "[INFO] runc is already up to date. Skipping installation."
fi
EOF
Permissions setup
chmod +x /etc/default/runc/download-script.sh
Download service
cat <<EOF > /usr/lib/systemd/system/runc-install.service
[Unit]
Description=Install and update runc
After=network-online.target
Wants=network-online.target
[Service]
Type=oneshot
EnvironmentFile=-/etc/default/runc/download.env
ExecStart=/bin/bash -c "/etc/default/runc/download-script.sh"
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
EOF
Starting the download service
systemctl enable runc-install.service
systemctl start runc-install.service
Environment variables
- path: /etc/default/runc/download.env
  owner: root:root
  permissions: '0644'
  content: |
    COMPONENT_VERSION="v1.1.12"
    REPOSITORY="https://github.com/opencontainers/runc/releases/download"
Download instructions/Permissions setup
- path: /etc/default/runc/download-script.sh
  owner: root:root
  permissions: '0755'
  content: |
    #!/bin/bash
    set -Eeuo pipefail
    COMPONENT_VERSION="${COMPONENT_VERSION:-v1.1.12}"
    REPOSITORY="${REPOSITORY:-https://github.com/opencontainers/runc/releases/download}"
    PATH_BIN="${REPOSITORY}/${COMPONENT_VERSION}/runc.amd64"
    PATH_SHA256="${REPOSITORY}/${COMPONENT_VERSION}/runc.sha256sum"
    INSTALL_PATH="/usr/local/bin/runc"
    LOG_TAG="runc-installer"
    TMP_DIR="$(mktemp -d)"
    logger -t "$LOG_TAG" "[INFO] Checking current runc version..."
    CURRENT_VERSION=$($INSTALL_PATH --version 2>/dev/null | head -n1 | awk '{print $NF}') || CURRENT_VERSION="none"
    COMPONENT_VERSION_CLEAN=$(echo "$COMPONENT_VERSION" | sed 's/^v//')
    logger -t "$LOG_TAG" "[INFO] Current: $CURRENT_VERSION, Target: $COMPONENT_VERSION_CLEAN"
    if [[ "$CURRENT_VERSION" != "$COMPONENT_VERSION_CLEAN" ]]; then
      logger -t "$LOG_TAG" "[INFO] Download URL: $PATH_BIN"
      logger -t "$LOG_TAG" "[INFO] Updating runc to version $COMPONENT_VERSION..."
      cd "$TMP_DIR"
      logger -t "$LOG_TAG" "[INFO] Working directory: $PWD"
      logger -t "$LOG_TAG" "[INFO] Downloading runc..."
      curl -fsSL -o runc.amd64 "$PATH_BIN" || { logger -t "$LOG_TAG" "[ERROR] Failed to download runc"; exit 1; }
      logger -t "$LOG_TAG" "[INFO] Downloading checksum file..."
      curl -fsSL -o runc.sha256sum "$PATH_SHA256" || { logger -t "$LOG_TAG" "[ERROR] Failed to download checksum file"; exit 1; }
      logger -t "$LOG_TAG" "[INFO] Verifying checksum..."
      grep "runc.amd64" runc.sha256sum | sha256sum -c - || { logger -t "$LOG_TAG" "[ERROR] Checksum verification failed!"; exit 1; }
      logger -t "$LOG_TAG" "[INFO] Installing runc..."
      install -m 755 runc.amd64 "$INSTALL_PATH"
      logger -t "$LOG_TAG" "[INFO] runc successfully updated to $COMPONENT_VERSION."
      rm -rf "$TMP_DIR"
    else
      logger -t "$LOG_TAG" "[INFO] runc is already up to date. Skipping installation."
    fi
Download service
- path: /usr/lib/systemd/system/runc-install.service
  owner: root:root
  permissions: '0644'
  content: |
    [Unit]
    Description=Install and update runc
    After=network-online.target
    Wants=network-online.target
    [Service]
    Type=oneshot
    EnvironmentFile=-/etc/default/runc/download.env
    ExecStart=/bin/bash -c "/etc/default/runc/download-script.sh"
    RemainAfterExit=yes
    [Install]
    WantedBy=multi-user.target
Starting the download service
- systemctl enable runc-install.service
- systemctl start runc-install.service
Installation check
Executable files
journalctl -t runc-installer
***** [INFO] Checking current runc version...
***** [INFO] Current: none, Target: 1.1.12
***** [INFO] Download URL: https://*******
***** [INFO] Updating runc to version v1.1.12...
***** [INFO] Working directory: /tmp/tmp.*****
***** [INFO] Downloading runc...
***** [INFO] Downloading checksum file...
***** [INFO] Verifying checksum...
***** [INFO] Installing runc...
***** [INFO] runc successfully updated to v1.1.12.
ls -la /usr/local/bin/ | grep 'runc$'
-rwxr-xr-x 1 root root 10709696 Jan 23 2024 runc
Executable file version
runc --version
runc version 1.1.12
commit: v1.1.12-0-g51d5e946
spec: 1.0.2-dev
go: go1.20.13
libseccomp: 2.5.4
Installation of containerd
● Required
Component installation steps
- Creating working directories.
- Environment variables.
- Download instructions.
- Permission setup.
- Download service.
- Starting the download service.
- Bash
- Cloud-init
Creating working directories
mkdir -p /etc/default/containerd
Environment variables
cat <<EOF > /etc/default/containerd/download.env
COMPONENT_VERSION="1.7.19"
REPOSITORY="https://github.com/containerd/containerd/releases/download"
EOF
Download instructions
cat <<"EOF" > /etc/default/containerd/download-script.sh
#!/bin/bash
set -Eeuo pipefail
COMPONENT_VERSION="${COMPONENT_VERSION:-1.7.19}"
REPOSITORY="${REPOSITORY:-https://github.com/containerd/containerd/releases/download}"
PATH_BIN="${REPOSITORY}/v${COMPONENT_VERSION}/containerd-${COMPONENT_VERSION}-linux-amd64.tar.gz"
PATH_SHA256="${REPOSITORY}/v${COMPONENT_VERSION}/containerd-${COMPONENT_VERSION}-linux-amd64.tar.gz.sha256sum"
INSTALL_PATH="/usr/local/bin/"
LOG_TAG="containerd-installer"
TMP_DIR="$(mktemp -d)"
logger -t "$LOG_TAG" "[INFO] Checking current containerd version..."
CURRENT_VERSION=$($INSTALL_PATH/containerd --version 2>/dev/null | awk '{print $3}' | sed 's/v//') || CURRENT_VERSION="none"
COMPONENT_VERSION_CLEAN=$(echo "$COMPONENT_VERSION" | sed 's/^v//')
logger -t "$LOG_TAG" "[INFO] Current: $CURRENT_VERSION, Target: $COMPONENT_VERSION_CLEAN"
if [[ "$CURRENT_VERSION" != "$COMPONENT_VERSION_CLEAN" ]]; then
logger -t "$LOG_TAG" "[INFO] Download URL: $PATH_BIN"
logger -t "$LOG_TAG" "[INFO] Updating containerd to version $COMPONENT_VERSION_CLEAN..."
cd "$TMP_DIR"
logger -t "$LOG_TAG" "[INFO] Working directory: $PWD"
logger -t "$LOG_TAG" "[INFO] Downloading containerd..."
curl -fsSL -o "containerd-${COMPONENT_VERSION}-linux-amd64.tar.gz" "$PATH_BIN" || { logger -t "$LOG_TAG" "[ERROR] Failed to download containerd"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Downloading checksum file..."
curl -fsSL -o "containerd.sha256sum" "$PATH_SHA256" || { logger -t "$LOG_TAG" "[ERROR] Failed to download checksum file"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Verifying checksum..."
sha256sum -c containerd.sha256sum | grep 'OK' || { logger -t "$LOG_TAG" "[ERROR] Checksum verification failed!"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Extracting files..."
tar -C "$TMP_DIR" -xvf "containerd-${COMPONENT_VERSION}-linux-amd64.tar.gz"
logger -t "$LOG_TAG" "[INFO] Installing binaries..."
install -m 755 "$TMP_DIR/bin/containerd" $INSTALL_PATH
install -m 755 "$TMP_DIR/bin/containerd-shim"* $INSTALL_PATH
install -m 755 "$TMP_DIR/bin/ctr" $INSTALL_PATH
logger -t "$LOG_TAG" "[INFO] Containerd successfully updated to $COMPONENT_VERSION."
rm -rf "$TMP_DIR"
else
logger -t "$LOG_TAG" "[INFO] Containerd is already up to date. Skipping installation."
fi
EOF
Permission setup
chmod +x /etc/default/containerd/download-script.sh
Download service
cat <<EOF > /usr/lib/systemd/system/containerd-install.service
[Unit]
Description=Install and update containerd
After=network-online.target
Wants=network-online.target
[Service]
Type=oneshot
EnvironmentFile=-/etc/default/containerd/download.env
ExecStart=/bin/bash -c "/etc/default/containerd/download-script.sh"
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
EOF
Starting the download service
systemctl enable containerd-install.service
systemctl start containerd-install.service
Environment variables
- path: /etc/default/containerd/download.env
  owner: root:root
  permissions: '0644'
  content: |
    COMPONENT_VERSION="1.7.19"
    REPOSITORY="https://github.com/containerd/containerd/releases/download"
Download instructions/Permission setup
- path: /etc/default/containerd/download-script.sh
  owner: root:root
  permissions: '0755'
  content: |
    #!/bin/bash
    set -Eeuo pipefail
    COMPONENT_VERSION="${COMPONENT_VERSION:-1.7.19}"
    REPOSITORY="${REPOSITORY:-https://github.com/containerd/containerd/releases/download}"
    PATH_BIN="${REPOSITORY}/v${COMPONENT_VERSION}/containerd-${COMPONENT_VERSION}-linux-amd64.tar.gz"
    PATH_SHA256="${REPOSITORY}/v${COMPONENT_VERSION}/containerd-${COMPONENT_VERSION}-linux-amd64.tar.gz.sha256sum"
    INSTALL_PATH="/usr/local/bin/"
    LOG_TAG="containerd-installer"
    TMP_DIR="$(mktemp -d)"
    logger -t "$LOG_TAG" "[INFO] Checking current containerd version..."
    CURRENT_VERSION=$($INSTALL_PATH/containerd --version 2>/dev/null | awk '{print $3}' | sed 's/v//') || CURRENT_VERSION="none"
    COMPONENT_VERSION_CLEAN=$(echo "$COMPONENT_VERSION" | sed 's/^v//')
    logger -t "$LOG_TAG" "[INFO] Current: $CURRENT_VERSION, Target: $COMPONENT_VERSION_CLEAN"
    if [[ "$CURRENT_VERSION" != "$COMPONENT_VERSION_CLEAN" ]]; then
      logger -t "$LOG_TAG" "[INFO] Download URL: $PATH_BIN"
      logger -t "$LOG_TAG" "[INFO] Updating containerd to version $COMPONENT_VERSION_CLEAN..."
      cd "$TMP_DIR"
      logger -t "$LOG_TAG" "[INFO] Working directory: $PWD"
      logger -t "$LOG_TAG" "[INFO] Downloading containerd..."
      curl -fsSL -o "containerd-${COMPONENT_VERSION}-linux-amd64.tar.gz" "$PATH_BIN" || { logger -t "$LOG_TAG" "[ERROR] Failed to download containerd"; exit 1; }
      logger -t "$LOG_TAG" "[INFO] Downloading checksum file..."
      curl -fsSL -o "containerd.sha256sum" "$PATH_SHA256" || { logger -t "$LOG_TAG" "[ERROR] Failed to download checksum file"; exit 1; }
      logger -t "$LOG_TAG" "[INFO] Verifying checksum..."
      sha256sum -c containerd.sha256sum | grep 'OK' || { logger -t "$LOG_TAG" "[ERROR] Checksum verification failed!"; exit 1; }
      logger -t "$LOG_TAG" "[INFO] Extracting files..."
      tar -C "$TMP_DIR" -xvf "containerd-${COMPONENT_VERSION}-linux-amd64.tar.gz"
      logger -t "$LOG_TAG" "[INFO] Installing binaries..."
      install -m 755 "$TMP_DIR/bin/containerd" $INSTALL_PATH
      install -m 755 "$TMP_DIR/bin/containerd-shim"* $INSTALL_PATH
      install -m 755 "$TMP_DIR/bin/ctr" $INSTALL_PATH
      logger -t "$LOG_TAG" "[INFO] Containerd successfully updated to $COMPONENT_VERSION."
      rm -rf "$TMP_DIR"
    else
      logger -t "$LOG_TAG" "[INFO] Containerd is already up to date. Skipping installation."
    fi
Download service
- path: /usr/lib/systemd/system/containerd-install.service
  owner: root:root
  permissions: '0644'
  content: |
    [Unit]
    Description=Install and update containerd
    After=network-online.target
    Wants=network-online.target
    [Service]
    Type=oneshot
    EnvironmentFile=-/etc/default/containerd/download.env
    ExecStart=/bin/bash -c "/etc/default/containerd/download-script.sh"
    RemainAfterExit=yes
    [Install]
    WantedBy=multi-user.target
Starting the download service
- systemctl enable containerd-install.service
- systemctl start containerd-install.service
Installation verification
Executable files
journalctl -t containerd-installer
***** [INFO] Checking current containerd version...
***** [INFO] Current: none, Target: 1.7.19
***** [INFO] Download URL: https://*******
***** [INFO] Updating containerd to version 1.7.19...
***** [INFO] Working directory: /tmp/tmp.*****
***** [INFO] Downloading containerd...
***** [INFO] Downloading checksum file...
***** [INFO] Verifying checksum...
***** [INFO] Extracting files...
***** [INFO] Installing binaries...
***** [INFO] Containerd successfully updated to 1.7.19.
ls -la /usr/local/bin/ | grep -E "containerd|ctr"
-rwxr-xr-x 1 root root 54855296 Feb 18 15:12 containerd
-rwxr-xr-x 1 root root 7176192 Feb 18 15:12 containerd-shim
-rwxr-xr-x 1 root root 8884224 Feb 18 15:12 containerd-shim-runc-v1
-rwxr-xr-x 1 root root 12169216 Feb 18 15:12 containerd-shim-runc-v2
-rwxr-xr-x 1 root root 12169216 Feb 18 15:12 ctr
Executable file version
containerd --version
containerd github.com/containerd/containerd v1.7.19 2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41
Installation of kubelet
● Required
Component installation steps
- Creating working directories.
- Environment variables.
- Download instructions.
- Permissions setup.
- Download service.
- Starting the download service.
- Bash
- Cloud-init
Creating working directories
mkdir -p /etc/default/kubelet
Environment variables
cat <<EOF > /etc/default/kubelet/download.env
COMPONENT_VERSION="v1.30.4"
REPOSITORY="https://dl.k8s.io"
EOF
Download instructions
cat <<"EOF" > /etc/default/kubelet/download-script.sh
#!/bin/bash
set -Eeuo pipefail
COMPONENT_VERSION="${COMPONENT_VERSION:-v1.30.4}"
REPOSITORY="${REPOSITORY:-https://dl.k8s.io}"
PATH_BIN="${REPOSITORY}/${COMPONENT_VERSION}/bin/linux/amd64/kubelet"
PATH_SHA256="${REPOSITORY}/${COMPONENT_VERSION}/bin/linux/amd64/kubelet.sha256"
INSTALL_PATH="/usr/local/bin/kubelet"
LOG_TAG="kubelet-installer"
TMP_DIR="$(mktemp -d)"
logger -t "$LOG_TAG" "[INFO] Checking current kubelet version..."
CURRENT_VERSION=$($INSTALL_PATH --version 2>/dev/null | awk '{print $2}' | sed 's/v//') || CURRENT_VERSION="none"
COMPONENT_VERSION_CLEAN=$(echo "$COMPONENT_VERSION" | sed 's/^v//')
logger -t "$LOG_TAG" "[INFO] Current: $CURRENT_VERSION, Target: $COMPONENT_VERSION_CLEAN"
if [[ "$CURRENT_VERSION" != "$COMPONENT_VERSION_CLEAN" ]]; then
logger -t "$LOG_TAG" "[INFO] Download URL: $PATH_BIN"
logger -t "$LOG_TAG" "[INFO] Updating kubelet to version $COMPONENT_VERSION_CLEAN..."
cd "$TMP_DIR"
logger -t "$LOG_TAG" "[INFO] Working directory: $PWD"
logger -t "$LOG_TAG" "[INFO] Downloading kubelet..."
curl -fsSL -o kubelet "$PATH_BIN" || { logger -t "$LOG_TAG" "[ERROR] Failed to download kubelet"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Downloading checksum file..."
curl -fsSL -o kubelet.sha256sum "$PATH_SHA256" || { logger -t "$LOG_TAG" "[ERROR] Failed to download checksum file"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Verifying checksum..."
awk '{print $1"  kubelet"}' kubelet.sha256sum | sha256sum -c - || { logger -t "$LOG_TAG" "[ERROR] Checksum verification failed!"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Installing kubelet..."
install -m 755 kubelet "$INSTALL_PATH"
logger -t "$LOG_TAG" "[INFO] kubelet successfully updated to $COMPONENT_VERSION_CLEAN."
rm -rf "$TMP_DIR"
else
logger -t "$LOG_TAG" "[INFO] kubelet is already up to date. Skipping installation."
fi
EOF
Permissions setup
chmod +x /etc/default/kubelet/download-script.sh
Download service
cat <<EOF > /usr/lib/systemd/system/kubelet-install.service
[Unit]
Description=Install and update kubelet
After=network-online.target
Wants=network-online.target
[Service]
Type=oneshot
EnvironmentFile=-/etc/default/kubelet/download.env
ExecStart=/bin/bash -c "/etc/default/kubelet/download-script.sh"
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
EOF
Starting the download service
systemctl enable kubelet-install.service
systemctl start kubelet-install.service
Environment variables
- path: /etc/default/kubelet/download.env
  owner: root:root
  permissions: '0644'
  content: |
    COMPONENT_VERSION="v1.30.4"
    REPOSITORY="https://dl.k8s.io"
Download instructions/Permissions setup
- path: /etc/default/kubelet/download-script.sh
  owner: root:root
  permissions: '0755'
  content: |
    #!/bin/bash
    set -Eeuo pipefail
    COMPONENT_VERSION="${COMPONENT_VERSION:-v1.30.4}"
    REPOSITORY="${REPOSITORY:-https://dl.k8s.io}"
    PATH_BIN="${REPOSITORY}/${COMPONENT_VERSION}/bin/linux/amd64/kubelet"
    PATH_SHA256="${REPOSITORY}/${COMPONENT_VERSION}/bin/linux/amd64/kubelet.sha256"
    INSTALL_PATH="/usr/local/bin/kubelet"
    LOG_TAG="kubelet-installer"
    TMP_DIR="$(mktemp -d)"
    logger -t "$LOG_TAG" "[INFO] Checking current kubelet version..."
    CURRENT_VERSION=$($INSTALL_PATH --version 2>/dev/null | awk '{print $2}' | sed 's/v//') || CURRENT_VERSION="none"
    COMPONENT_VERSION_CLEAN=$(echo "$COMPONENT_VERSION" | sed 's/^v//')
    logger -t "$LOG_TAG" "[INFO] Current: $CURRENT_VERSION, Target: $COMPONENT_VERSION_CLEAN"
    if [[ "$CURRENT_VERSION" != "$COMPONENT_VERSION_CLEAN" ]]; then
      logger -t "$LOG_TAG" "[INFO] Download URL: $PATH_BIN"
      logger -t "$LOG_TAG" "[INFO] Updating kubelet to version $COMPONENT_VERSION_CLEAN..."
      cd "$TMP_DIR"
      logger -t "$LOG_TAG" "[INFO] Working directory: $PWD"
      logger -t "$LOG_TAG" "[INFO] Downloading kubelet..."
      curl -fsSL -o kubelet "$PATH_BIN" || { logger -t "$LOG_TAG" "[ERROR] Failed to download kubelet"; exit 1; }
      logger -t "$LOG_TAG" "[INFO] Downloading checksum file..."
      curl -fsSL -o kubelet.sha256sum "$PATH_SHA256" || { logger -t "$LOG_TAG" "[ERROR] Failed to download checksum file"; exit 1; }
      logger -t "$LOG_TAG" "[INFO] Verifying checksum..."
      awk '{print $1"  kubelet"}' kubelet.sha256sum | sha256sum -c - || { logger -t "$LOG_TAG" "[ERROR] Checksum verification failed!"; exit 1; }
      logger -t "$LOG_TAG" "[INFO] Installing kubelet..."
      install -m 755 kubelet "$INSTALL_PATH"
      logger -t "$LOG_TAG" "[INFO] kubelet successfully updated to $COMPONENT_VERSION_CLEAN."
      rm -rf "$TMP_DIR"
    else
      logger -t "$LOG_TAG" "[INFO] kubelet is already up to date. Skipping installation."
    fi
Download service
- path: /usr/lib/systemd/system/kubelet-install.service
  owner: root:root
  permissions: '0644'
  content: |
    [Unit]
    Description=Install and update kubelet
    After=network-online.target
    Wants=network-online.target
    [Service]
    Type=oneshot
    EnvironmentFile=-/etc/default/kubelet/download.env
    ExecStart=/bin/bash -c "/etc/default/kubelet/download-script.sh"
    RemainAfterExit=yes
    [Install]
    WantedBy=multi-user.target
Starting the download service
- systemctl enable kubelet-install.service
- systemctl start kubelet-install.service
Installation check
journalctl -t kubelet-installer
***** [INFO] Checking current kubelet version...
***** [INFO] Current: none, Target: 1.30.4
***** [INFO] Download URL: https://*******
***** [INFO] Updating kubelet to version 1.30.4...
***** [INFO] Working directory: /tmp/tmp.*****
***** [INFO] Downloading kubelet...
***** [INFO] Downloading checksum file...
***** [INFO] Verifying checksum...
***** [INFO] Installing kubelet...
***** [INFO] kubelet successfully updated to 1.30.4.
ls -la /usr/local/bin/ | grep 'kubelet$'
-rwxr-xr-x 1 root root 100125080 Aug 14 2024 kubelet
kubelet --version
Kubernetes v1.30.4
Installation of etcd
● Required
Component installation steps
- Creating working directories.
- Environment variables.
- Download instructions.
- Setting permissions.
- Download service.
- Starting the download service.
- Bash
- Cloud-init
Creating working directories
mkdir -p /etc/default/etcd
Environment variables
cat <<EOF > /etc/default/etcd/download.env
COMPONENT_VERSION="v3.5.12"
REPOSITORY="https://github.com/etcd-io/etcd/releases/download"
EOF
Download instructions
cat <<"EOF" > /etc/default/etcd/download-script.sh
#!/bin/bash
set -Eeuo pipefail
COMPONENT_VERSION="${COMPONENT_VERSION:-v3.5.12}"
REPOSITORY="${REPOSITORY:-https://github.com/etcd-io/etcd/releases/download}"
PATH_BIN="${REPOSITORY}/${COMPONENT_VERSION}/etcd-${COMPONENT_VERSION}-linux-amd64.tar.gz"
PATH_SHA256="${REPOSITORY}/${COMPONENT_VERSION}/SHA256SUMS"
INSTALL_PATH="/usr/local/bin/"
LOG_TAG="etcd-installer"
TMP_DIR="$(mktemp -d)"
logger -t "$LOG_TAG" "[INFO] Checking current etcd version..."
CURRENT_VERSION=$($INSTALL_PATH/etcd --version 2>/dev/null | grep 'etcd Version:' | awk '{print $3}' | sed 's/v//') || CURRENT_VERSION="none"
COMPONENT_VERSION_CLEAN=$(echo "$COMPONENT_VERSION" | sed 's/^v//')
logger -t "$LOG_TAG" "[INFO] Current: $CURRENT_VERSION, Target: $COMPONENT_VERSION_CLEAN"
if [[ "$CURRENT_VERSION" != "$COMPONENT_VERSION_CLEAN" ]]; then
logger -t "$LOG_TAG" "[INFO] Download URL: $PATH_BIN"
logger -t "$LOG_TAG" "[INFO] Updating etcd to version $COMPONENT_VERSION_CLEAN..."
cd "$TMP_DIR"
logger -t "$LOG_TAG" "[INFO] Working directory: $PWD"
logger -t "$LOG_TAG" "[INFO] Downloading etcd..."
curl -fsSL -o "etcd-${COMPONENT_VERSION}-linux-amd64.tar.gz" "$PATH_BIN" || { logger -t "$LOG_TAG" "[ERROR] Failed to download etcd"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Downloading checksum file..."
curl -fsSL -o "etcd.sha256sum" "$PATH_SHA256" || { logger -t "$LOG_TAG" "[ERROR] Failed to download checksum file"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Verifying checksum..."
grep "etcd-${COMPONENT_VERSION}-linux-amd64.tar.gz" etcd.sha256sum | sha256sum -c - || { logger -t "$LOG_TAG" "[ERROR] Checksum verification failed!"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Extracting files..."
tar -C "$TMP_DIR" -xvf "etcd-${COMPONENT_VERSION}-linux-amd64.tar.gz"
logger -t "$LOG_TAG" "[INFO] Installing binaries..."
install -m 755 "$TMP_DIR/etcd-${COMPONENT_VERSION}-linux-amd64/etcd" $INSTALL_PATH
install -m 755 "$TMP_DIR/etcd-${COMPONENT_VERSION}-linux-amd64/etcdctl" $INSTALL_PATH
install -m 755 "$TMP_DIR/etcd-${COMPONENT_VERSION}-linux-amd64/etcdutl" $INSTALL_PATH
logger -t "$LOG_TAG" "[INFO] etcd successfully updated to $COMPONENT_VERSION_CLEAN."
rm -rf "$TMP_DIR"
else
logger -t "$LOG_TAG" "[INFO] etcd is already up to date. Skipping installation."
fi
EOF
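The script above is idempotent: both versions are normalized by stripping the leading "v" before comparison, so re-running the service on an up-to-date node is a no-op. The comparison logic can be exercised in isolation (the names below are illustrative, not part of the script):

```shell
# Standalone sketch of the version check used above.
normalize() { echo "$1" | sed 's/^v//'; }

current="none"                      # what the script reports when etcd is absent
target="$(normalize "v3.5.12")"     # COMPONENT_VERSION with the "v" stripped

if [ "$current" != "$target" ]; then
  echo "update needed: $current -> $target"   # prints: update needed: none -> 3.5.12
else
  echo "already up to date"
fi
```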
Setting permissions
chmod +x /etc/default/etcd/download-script.sh
Download service
cat <<EOF > /usr/lib/systemd/system/etcd-install.service
[Unit]
Description=Install and update in-cloud component etcd
After=network.target
Wants=network-online.target
[Service]
Type=oneshot
EnvironmentFile=-/etc/default/etcd/download.env
ExecStart=/bin/bash -c "/etc/default/etcd/download-script.sh"
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
EOF
Download
systemctl enable etcd-install.service
systemctl start etcd-install.service
Environment variables
# write_files:
- path: /etc/default/etcd/download.env
owner: root:root
permissions: '0644'
content: |
COMPONENT_VERSION="v3.5.12"
REPOSITORY="https://github.com/etcd-io/etcd/releases/download"
Download instructions
- path: /etc/default/etcd/download-script.sh
owner: root:root
permissions: '0755'
content: |
#!/bin/bash
set -Eeuo pipefail
COMPONENT_VERSION="${COMPONENT_VERSION:-v3.5.12}"
REPOSITORY="${REPOSITORY:-https://github.com/etcd-io/etcd/releases/download}"
PATH_BIN="${REPOSITORY}/${COMPONENT_VERSION}/etcd-${COMPONENT_VERSION}-linux-amd64.tar.gz"
PATH_SHA256="${REPOSITORY}/${COMPONENT_VERSION}/SHA256SUMS"
INSTALL_PATH="/usr/local/bin/"
LOG_TAG="etcd-installer"
TMP_DIR="$(mktemp -d)"
logger -t "$LOG_TAG" "[INFO] Checking current etcd version..."
CURRENT_VERSION=$($INSTALL_PATH/etcd --version 2>/dev/null | grep 'etcd Version:' | awk '{print $3}' | sed 's/v//') || CURRENT_VERSION="none"
COMPONENT_VERSION_CLEAN=$(echo "$COMPONENT_VERSION" | sed 's/^v//')
logger -t "$LOG_TAG" "[INFO] Current: $CURRENT_VERSION, Target: $COMPONENT_VERSION_CLEAN"
if [[ "$CURRENT_VERSION" != "$COMPONENT_VERSION_CLEAN" ]]; then
logger -t "$LOG_TAG" "[INFO] Download URL: $PATH_BIN"
logger -t "$LOG_TAG" "[INFO] Updating etcd to version $COMPONENT_VERSION_CLEAN..."
cd "$TMP_DIR"
logger -t "$LOG_TAG" "[INFO] Working directory: $PWD"
logger -t "$LOG_TAG" "[INFO] Downloading etcd..."
curl -fsSL -o "etcd-${COMPONENT_VERSION}-linux-amd64.tar.gz" "$PATH_BIN" || { logger -t "$LOG_TAG" "[ERROR] Failed to download etcd"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Downloading checksum file..."
curl -fsSL -o "etcd.sha256sum" "$PATH_SHA256" || { logger -t "$LOG_TAG" "[ERROR] Failed to download checksum file"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Verifying checksum..."
grep "etcd-${COMPONENT_VERSION}-linux-amd64.tar.gz" etcd.sha256sum | sha256sum -c - || { logger -t "$LOG_TAG" "[ERROR] Checksum verification failed!"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Extracting files..."
tar -C "$TMP_DIR" -xvf "etcd-${COMPONENT_VERSION}-linux-amd64.tar.gz"
logger -t "$LOG_TAG" "[INFO] Installing binaries..."
install -m 755 "$TMP_DIR/etcd-${COMPONENT_VERSION}-linux-amd64/etcd" $INSTALL_PATH
install -m 755 "$TMP_DIR/etcd-${COMPONENT_VERSION}-linux-amd64/etcdctl" $INSTALL_PATH
install -m 755 "$TMP_DIR/etcd-${COMPONENT_VERSION}-linux-amd64/etcdutl" $INSTALL_PATH
logger -t "$LOG_TAG" "[INFO] etcd successfully updated to $COMPONENT_VERSION_CLEAN."
rm -rf "$TMP_DIR"
else
logger -t "$LOG_TAG" "[INFO] etcd is already up to date. Skipping installation."
fi
Download service
- path: /usr/lib/systemd/system/etcd-install.service
owner: root:root
permissions: '0644'
content: |
[Unit]
Description=Install and update in-cloud component etcd
After=network.target
Wants=network-online.target
[Service]
Type=oneshot
EnvironmentFile=-/etc/default/etcd/download.env
ExecStart=/bin/bash -c "/etc/default/etcd/download-script.sh"
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
Download
- systemctl enable etcd-install.service
- systemctl start etcd-install.service
Installation verification
journalctl -t etcd-installer
***** [INFO] Checking current etcd version...
***** [INFO] Current: none, Target: 3.5.12
***** [INFO] Download URL: https://*******
***** [INFO] Updating etcd to version 3.5.12...
***** [INFO] Working directory: /tmp/tmp.*****
***** [INFO] Downloading etcd...
***** [INFO] Downloading checksum file...
***** [INFO] Verifying checksum...
***** [INFO] Extracting files...
***** [INFO] Installing binaries...
***** [INFO] etcd successfully updated to 3.5.12.
ls -la /usr/local/bin/ | grep 'etcd'
-rwxr-xr-x 1 root root 23760896 Mar 29 16:21 etcd
-rwxr-xr-x 1 root root 17960960 Mar 29 16:21 etcdctl
-rwxr-xr-x 1 root root 16031744 Mar 29 16:21 etcdutl
etcd --version
etcd Version: 3.5.12
Git SHA: *****
Go Version: go1.20.13
Go OS/Arch: linux/amd64
Installation of kubectl
● Optional
Component installation steps
- Creating working directories.
- Environment variables.
- Download instructions.
- Permission setup.
- Download service.
- Starting the download service.
- Bash
- Cloud-init
Creating working directories
mkdir -p /etc/default/kubectl
Environment variables
cat <<EOF > /etc/default/kubectl/download.env
COMPONENT_VERSION="v1.30.4"
REPOSITORY="https://dl.k8s.io"
EOF
Download instructions
cat <<"EOF" > /etc/default/kubectl/download-script.sh
#!/bin/bash
set -Eeuo pipefail
COMPONENT_VERSION="${COMPONENT_VERSION:-v1.30.4}"
REPOSITORY="${REPOSITORY:-https://dl.k8s.io}"
PATH_BIN="${REPOSITORY}/${COMPONENT_VERSION}/bin/linux/amd64/kubectl"
PATH_SHA256="${REPOSITORY}/${COMPONENT_VERSION}/bin/linux/amd64/kubectl.sha256"
INSTALL_PATH="/usr/local/bin/kubectl"
LOG_TAG="kubectl-installer"
TMP_DIR="$(mktemp -d)"
logger -t "$LOG_TAG" "[INFO] Checking current kubectl version..."
CURRENT_VERSION=$($INSTALL_PATH version -o json --client=true 2>/dev/null | jq -r '.clientVersion.gitVersion' | sed 's/^v//') || CURRENT_VERSION="none"
COMPONENT_VERSION_CLEAN=$(echo "$COMPONENT_VERSION" | sed 's/^v//')
logger -t "$LOG_TAG" "[INFO] Current: $CURRENT_VERSION, Target: $COMPONENT_VERSION_CLEAN"
if [[ "$CURRENT_VERSION" != "$COMPONENT_VERSION_CLEAN" ]]; then
logger -t "$LOG_TAG" "[INFO] Download URL: $PATH_BIN"
logger -t "$LOG_TAG" "[INFO] Updating kubectl to version $COMPONENT_VERSION_CLEAN..."
cd "$TMP_DIR"
logger -t "$LOG_TAG" "[INFO] Working directory: $PWD"
logger -t "$LOG_TAG" "[INFO] Downloading kubectl..."
curl -fsSL -o kubectl "$PATH_BIN" || { logger -t "$LOG_TAG" "[ERROR] Failed to download kubectl"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Downloading checksum file..."
curl -fsSL -o kubectl.sha256sum "$PATH_SHA256" || { logger -t "$LOG_TAG" "[ERROR] Failed to download checksum file"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Verifying checksum..."
awk '{print $1" kubectl"}' kubectl.sha256sum | sha256sum -c - || { logger -t "$LOG_TAG" "[ERROR] Checksum verification failed!"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Installing kubectl..."
install -m 755 kubectl "$INSTALL_PATH"
logger -t "$LOG_TAG" "[INFO] kubectl successfully updated to $COMPONENT_VERSION_CLEAN."
rm -rf "$TMP_DIR"
else
logger -t "$LOG_TAG" "[INFO] kubectl is already up to date. Skipping installation."
fi
EOF
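Note that the checksum handling differs from etcd: dl.k8s.io serves a bare hash rather than a SHA256SUMS listing, so the script rebuilds a "hash  filename" line in the format that sha256sum -c expects. A minimal reproduction with a stand-in file (no real download involved):

```shell
# Build a bare-hash file like kubectl.sha256, then verify it the way the script does.
printf 'demo payload' > kubectl-demo
sha256sum kubectl-demo | awk '{print $1}' > demo.sha256         # bare hash, no filename
awk '{print $1"  kubectl-demo"}' demo.sha256 | sha256sum -c -   # prints: kubectl-demo: OK
rm -f kubectl-demo demo.sha256
```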
Permission setup
chmod +x /etc/default/kubectl/download-script.sh
Download service
cat <<EOF > /usr/lib/systemd/system/kubectl-install.service
[Unit]
Description=Install and update kubectl
After=network.target
Wants=network-online.target
[Service]
Type=oneshot
EnvironmentFile=-/etc/default/kubectl/download.env
ExecStart=/bin/bash -c "/etc/default/kubectl/download-script.sh"
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
EOF
Download
systemctl enable kubectl-install.service
systemctl start kubectl-install.service
Environment variables
- path: /etc/default/kubectl/download.env
owner: root:root
permissions: '0644'
content: |
COMPONENT_VERSION="v1.30.4"
REPOSITORY="https://dl.k8s.io"
Download instructions/Permission setup
- path: /etc/default/kubectl/download-script.sh
owner: root:root
permissions: '0755'
content: |
#!/bin/bash
set -Eeuo pipefail
COMPONENT_VERSION="${COMPONENT_VERSION:-v1.30.4}"
REPOSITORY="${REPOSITORY:-https://dl.k8s.io}"
PATH_BIN="${REPOSITORY}/${COMPONENT_VERSION}/bin/linux/amd64/kubectl"
PATH_SHA256="${REPOSITORY}/${COMPONENT_VERSION}/bin/linux/amd64/kubectl.sha256"
INSTALL_PATH="/usr/local/bin/kubectl"
LOG_TAG="kubectl-installer"
TMP_DIR="$(mktemp -d)"
logger -t "$LOG_TAG" "[INFO] Checking current kubectl version..."
CURRENT_VERSION=$($INSTALL_PATH version -o json --client=true 2>/dev/null | jq -r '.clientVersion.gitVersion' | sed 's/^v//') || CURRENT_VERSION="none"
COMPONENT_VERSION_CLEAN=$(echo "$COMPONENT_VERSION" | sed 's/^v//')
logger -t "$LOG_TAG" "[INFO] Current: $CURRENT_VERSION, Target: $COMPONENT_VERSION_CLEAN"
if [[ "$CURRENT_VERSION" != "$COMPONENT_VERSION_CLEAN" ]]; then
logger -t "$LOG_TAG" "[INFO] Download URL: $PATH_BIN"
logger -t "$LOG_TAG" "[INFO] Updating kubectl to version $COMPONENT_VERSION_CLEAN..."
cd "$TMP_DIR"
logger -t "$LOG_TAG" "[INFO] Working directory: $PWD"
logger -t "$LOG_TAG" "[INFO] Downloading kubectl..."
curl -fsSL -o kubectl "$PATH_BIN" || { logger -t "$LOG_TAG" "[ERROR] Failed to download kubectl"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Downloading checksum file..."
curl -fsSL -o kubectl.sha256sum "$PATH_SHA256" || { logger -t "$LOG_TAG" "[ERROR] Failed to download checksum file"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Verifying checksum..."
awk '{print $1" kubectl"}' kubectl.sha256sum | sha256sum -c - || { logger -t "$LOG_TAG" "[ERROR] Checksum verification failed!"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Installing kubectl..."
install -m 755 kubectl "$INSTALL_PATH"
logger -t "$LOG_TAG" "[INFO] kubectl successfully updated to $COMPONENT_VERSION_CLEAN."
rm -rf "$TMP_DIR"
else
logger -t "$LOG_TAG" "[INFO] kubectl is already up to date. Skipping installation."
fi
Download service
- path: /usr/lib/systemd/system/kubectl-install.service
owner: root:root
permissions: '0644'
content: |
[Unit]
Description=Install and update kubectl
After=network.target
Wants=network-online.target
[Service]
Type=oneshot
EnvironmentFile=-/etc/default/kubectl/download.env
ExecStart=/bin/bash -c "/etc/default/kubectl/download-script.sh"
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
Download
- systemctl enable kubectl-install.service
- systemctl start kubectl-install.service
Installation verification
Executable files
journalctl -t kubectl-installer
***** [INFO] Checking current kubectl version...
***** [INFO] Current: none, Target: 1.30.4
***** [INFO] Download URL: https://*******
***** [INFO] Updating kubectl to version 1.30.4...
***** [INFO] Working directory: /tmp/tmp.*****
***** [INFO] Downloading kubectl...
***** [INFO] Downloading checksum file...
***** [INFO] Verifying checksum...
***** [INFO] Installing kubectl...
***** [INFO] kubectl successfully updated to 1.30.4.
ls -la /usr/local/bin/ | grep 'kubectl$'
-rwxr-xr-x 1 root root 51454104 Aug 14 2024 kubectl
Executable file version
kubectl version
Client Version: v1.30.4
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Installation of crictl
● Optional
Component installation steps
- Creating working directories.
- Environment variables.
- Download instructions.
- Permission setup.
- Download service.
- Starting the download service.
- Bash
- Cloud-init
Creating working directories
mkdir -p /etc/default/crictl
Environment variables
cat <<EOF > /etc/default/crictl/download.env
COMPONENT_VERSION="v1.30.0"
REPOSITORY="https://github.com/kubernetes-sigs/cri-tools/releases/download"
EOF
Download instructions
cat <<"EOF" > /etc/default/crictl/download-script.sh
#!/bin/bash
set -Eeuo pipefail
COMPONENT_VERSION="${COMPONENT_VERSION:-v1.30.0}"
REPOSITORY="${REPOSITORY:-https://github.com/kubernetes-sigs/cri-tools/releases/download}"
PATH_BIN="${REPOSITORY}/${COMPONENT_VERSION}/crictl-${COMPONENT_VERSION}-linux-amd64.tar.gz"
PATH_SHA256="${REPOSITORY}/${COMPONENT_VERSION}/crictl-${COMPONENT_VERSION}-linux-amd64.tar.gz.sha256"
INSTALL_PATH="/usr/local/bin/crictl"
LOG_TAG="crictl-installer"
TMP_DIR="$(mktemp -d)"
logger -t "$LOG_TAG" "[INFO] Checking current crictl version..."
CURRENT_VERSION=$($INSTALL_PATH --version 2>/dev/null | awk '{print $3}' | sed 's/v//') || CURRENT_VERSION="none"
COMPONENT_VERSION_CLEAN=$(echo "$COMPONENT_VERSION" | sed 's/^v//')
logger -t "$LOG_TAG" "[INFO] Current: $CURRENT_VERSION, Target: $COMPONENT_VERSION_CLEAN"
if [[ "$CURRENT_VERSION" != "$COMPONENT_VERSION_CLEAN" ]]; then
logger -t "$LOG_TAG" "[INFO] Download URL: $PATH_BIN"
logger -t "$LOG_TAG" "[INFO] Updating crictl to version $COMPONENT_VERSION_CLEAN..."
cd "$TMP_DIR"
logger -t "$LOG_TAG" "[INFO] Working directory: $PWD"
logger -t "$LOG_TAG" "[INFO] Downloading crictl..."
curl -fsSL -o crictl.tar.gz "$PATH_BIN" || { logger -t "$LOG_TAG" "[ERROR] Failed to download crictl"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Downloading checksum file..."
curl -fsSL -o crictl.sha256sum "$PATH_SHA256" || { logger -t "$LOG_TAG" "[ERROR] Failed to download checksum file"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Verifying checksum..."
awk '{print $1" crictl.tar.gz"}' crictl.sha256sum | sha256sum -c - || { logger -t "$LOG_TAG" "[ERROR] Checksum verification failed!"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Extracting files..."
tar -C "$TMP_DIR" -xvf crictl.tar.gz
logger -t "$LOG_TAG" "[INFO] Installing crictl..."
install -m 755 "$TMP_DIR/crictl" "$INSTALL_PATH"
logger -t "$LOG_TAG" "[INFO] crictl successfully updated to $COMPONENT_VERSION_CLEAN."
rm -rf "$TMP_DIR"
else
logger -t "$LOG_TAG" "[INFO] crictl is already up to date. Skipping installation."
fi
EOF
Permission setup
chmod +x /etc/default/crictl/download-script.sh
Download service
cat <<EOF > /usr/lib/systemd/system/crictl-install.service
[Unit]
Description=Install and update in-cloud component crictl
After=network.target
Wants=network-online.target
[Service]
Type=oneshot
EnvironmentFile=-/etc/default/crictl/download.env
ExecStart=/bin/bash -c "/etc/default/crictl/download-script.sh"
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
EOF
Download
systemctl enable crictl-install.service
systemctl start crictl-install.service
Environment variables
- path: /etc/default/crictl/download.env
owner: root:root
permissions: '0644'
content: |
COMPONENT_VERSION="v1.30.0"
REPOSITORY="https://github.com/kubernetes-sigs/cri-tools/releases/download"
Download instructions/Permission setup
- path: /etc/default/crictl/download-script.sh
owner: root:root
permissions: '0755'
content: |
#!/bin/bash
set -Eeuo pipefail
COMPONENT_VERSION="${COMPONENT_VERSION:-v1.30.0}"
REPOSITORY="${REPOSITORY:-https://github.com/kubernetes-sigs/cri-tools/releases/download}"
PATH_BIN="${REPOSITORY}/${COMPONENT_VERSION}/crictl-${COMPONENT_VERSION}-linux-amd64.tar.gz"
PATH_SHA256="${REPOSITORY}/${COMPONENT_VERSION}/crictl-${COMPONENT_VERSION}-linux-amd64.tar.gz.sha256"
INSTALL_PATH="/usr/local/bin/crictl"
LOG_TAG="crictl-installer"
TMP_DIR="$(mktemp -d)"
logger -t "$LOG_TAG" "[INFO] Checking current crictl version..."
CURRENT_VERSION=$($INSTALL_PATH --version 2>/dev/null | awk '{print $3}' | sed 's/v//') || CURRENT_VERSION="none"
COMPONENT_VERSION_CLEAN=$(echo "$COMPONENT_VERSION" | sed 's/^v//')
logger -t "$LOG_TAG" "[INFO] Current: $CURRENT_VERSION, Target: $COMPONENT_VERSION_CLEAN"
if [[ "$CURRENT_VERSION" != "$COMPONENT_VERSION_CLEAN" ]]; then
logger -t "$LOG_TAG" "[INFO] Download URL: $PATH_BIN"
logger -t "$LOG_TAG" "[INFO] Updating crictl to version $COMPONENT_VERSION_CLEAN..."
cd "$TMP_DIR"
logger -t "$LOG_TAG" "[INFO] Working directory: $PWD"
logger -t "$LOG_TAG" "[INFO] Downloading crictl..."
curl -fsSL -o crictl.tar.gz "$PATH_BIN" || { logger -t "$LOG_TAG" "[ERROR] Failed to download crictl"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Downloading checksum file..."
curl -fsSL -o crictl.sha256sum "$PATH_SHA256" || { logger -t "$LOG_TAG" "[ERROR] Failed to download checksum file"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Verifying checksum..."
awk '{print $1" crictl.tar.gz"}' crictl.sha256sum | sha256sum -c - || { logger -t "$LOG_TAG" "[ERROR] Checksum verification failed!"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Extracting files..."
tar -C "$TMP_DIR" -xvf crictl.tar.gz
logger -t "$LOG_TAG" "[INFO] Installing crictl..."
install -m 755 "$TMP_DIR/crictl" "$INSTALL_PATH"
logger -t "$LOG_TAG" "[INFO] crictl successfully updated to $COMPONENT_VERSION_CLEAN."
rm -rf "$TMP_DIR"
else
logger -t "$LOG_TAG" "[INFO] crictl is already up to date. Skipping installation."
fi
Download service
- path: /usr/lib/systemd/system/crictl-install.service
owner: root:root
permissions: '0644'
content: |
[Unit]
Description=Install and update in-cloud component crictl
After=network.target
Wants=network-online.target
[Service]
Type=oneshot
EnvironmentFile=-/etc/default/crictl/download.env
ExecStart=/bin/bash -c "/etc/default/crictl/download-script.sh"
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
Download
- systemctl enable crictl-install.service
- systemctl start crictl-install.service
Installation verification
journalctl -t crictl-installer
***** [INFO] Checking current crictl version...
***** [INFO] Current: none, Target: 1.30.0
***** [INFO] Download URL: https://*******
***** [INFO] Updating crictl to version 1.30.0...
***** [INFO] Working directory: /tmp/tmp.*****
***** [INFO] Downloading crictl...
***** [INFO] Downloading checksum file...
***** [INFO] Verifying checksum...
***** [INFO] Extracting files...
***** [INFO] Installing crictl...
***** [INFO] crictl successfully updated to 1.30.0.
ls -la /usr/local/bin/ | grep 'crictl$'
-rwxr-xr-x 1 root root 58376628 Apr 18 2024 crictl
crictl --version
crictl version v1.30.0
Installation of kubeadm
● Optional
Component installation steps
- Creating working directories.
- Environment variables.
- Download instructions.
- Permission setup.
- Download service.
- Starting the download service.
- Bash
- Cloud-init
Creating working directories
mkdir -p /etc/default/kubeadm
Environment variables
cat <<EOF > /etc/default/kubeadm/download.env
COMPONENT_VERSION="v1.30.4"
REPOSITORY="https://dl.k8s.io"
EOF
Download instructions
cat <<"EOF" > /etc/default/kubeadm/download-script.sh
#!/bin/bash
set -Eeuo pipefail
COMPONENT_VERSION="${COMPONENT_VERSION:-v1.30.4}"
REPOSITORY="${REPOSITORY:-https://dl.k8s.io}"
PATH_BIN="${REPOSITORY}/${COMPONENT_VERSION}/bin/linux/amd64/kubeadm"
PATH_SHA256="${REPOSITORY}/${COMPONENT_VERSION}/bin/linux/amd64/kubeadm.sha256"
INSTALL_PATH="/usr/local/bin/kubeadm"
LOG_TAG="kubeadm-installer"
TMP_DIR="$(mktemp -d)"
logger -t "$LOG_TAG" "[INFO] Checking current kubeadm version..."
CURRENT_VERSION=$($INSTALL_PATH version -o json 2>/dev/null | jq -r '.clientVersion.gitVersion' | sed 's/^v//') || CURRENT_VERSION="none"
COMPONENT_VERSION_CLEAN=$(echo "$COMPONENT_VERSION" | sed 's/^v//')
logger -t "$LOG_TAG" "[INFO] Current: $CURRENT_VERSION, Target: $COMPONENT_VERSION_CLEAN"
if [[ "$CURRENT_VERSION" != "$COMPONENT_VERSION_CLEAN" ]]; then
logger -t "$LOG_TAG" "[INFO] Download URL: $PATH_BIN"
logger -t "$LOG_TAG" "[INFO] Updating kubeadm to version $COMPONENT_VERSION_CLEAN..."
cd "$TMP_DIR"
logger -t "$LOG_TAG" "[INFO] Working directory: $PWD"
logger -t "$LOG_TAG" "[INFO] Downloading kubeadm..."
curl -fsSL -o kubeadm "$PATH_BIN" || { logger -t "$LOG_TAG" "[ERROR] Failed to download kubeadm"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Downloading checksum file..."
curl -fsSL -o kubeadm.sha256sum "$PATH_SHA256" || { logger -t "$LOG_TAG" "[ERROR] Failed to download checksum file"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Verifying checksum..."
awk '{print $1" kubeadm"}' kubeadm.sha256sum | sha256sum -c - || { logger -t "$LOG_TAG" "[ERROR] Checksum verification failed!"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Installing kubeadm..."
install -m 755 kubeadm "$INSTALL_PATH"
logger -t "$LOG_TAG" "[INFO] kubeadm successfully updated to $COMPONENT_VERSION_CLEAN."
rm -rf "$TMP_DIR"
else
logger -t "$LOG_TAG" "[INFO] kubeadm is already up to date. Skipping installation."
fi
EOF
Permission setup
chmod +x /etc/default/kubeadm/download-script.sh
Download service
cat <<EOF > /usr/lib/systemd/system/kubeadm-install.service
[Unit]
Description=Install and update kubeadm
After=network.target
Wants=network-online.target
[Service]
Type=oneshot
EnvironmentFile=-/etc/default/kubeadm/download.env
ExecStart=/bin/bash -c "/etc/default/kubeadm/download-script.sh"
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
EOF
Download
systemctl enable kubeadm-install.service
systemctl start kubeadm-install.service
Environment variables
- path: /etc/default/kubeadm/download.env
owner: root:root
permissions: '0644'
content: |
COMPONENT_VERSION="v1.30.4"
REPOSITORY="https://dl.k8s.io"
Download instructions/Permission setup
- path: /etc/default/kubeadm/download-script.sh
owner: root:root
permissions: '0755'
content: |
#!/bin/bash
set -Eeuo pipefail
COMPONENT_VERSION="${COMPONENT_VERSION:-v1.30.4}"
REPOSITORY="${REPOSITORY:-https://dl.k8s.io}"
PATH_BIN="${REPOSITORY}/${COMPONENT_VERSION}/bin/linux/amd64/kubeadm"
PATH_SHA256="${REPOSITORY}/${COMPONENT_VERSION}/bin/linux/amd64/kubeadm.sha256"
INSTALL_PATH="/usr/local/bin/kubeadm"
LOG_TAG="kubeadm-installer"
TMP_DIR="$(mktemp -d)"
logger -t "$LOG_TAG" "[INFO] Checking current kubeadm version..."
CURRENT_VERSION=$($INSTALL_PATH version -o json 2>/dev/null | jq -r '.clientVersion.gitVersion' | sed 's/^v//') || CURRENT_VERSION="none"
COMPONENT_VERSION_CLEAN=$(echo "$COMPONENT_VERSION" | sed 's/^v//')
logger -t "$LOG_TAG" "[INFO] Current: $CURRENT_VERSION, Target: $COMPONENT_VERSION_CLEAN"
if [[ "$CURRENT_VERSION" != "$COMPONENT_VERSION_CLEAN" ]]; then
logger -t "$LOG_TAG" "[INFO] Download URL: $PATH_BIN"
logger -t "$LOG_TAG" "[INFO] Updating kubeadm to version $COMPONENT_VERSION_CLEAN..."
cd "$TMP_DIR"
logger -t "$LOG_TAG" "[INFO] Working directory: $PWD"
logger -t "$LOG_TAG" "[INFO] Downloading kubeadm..."
curl -fsSL -o kubeadm "$PATH_BIN" || { logger -t "$LOG_TAG" "[ERROR] Failed to download kubeadm"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Downloading checksum file..."
curl -fsSL -o kubeadm.sha256sum "$PATH_SHA256" || { logger -t "$LOG_TAG" "[ERROR] Failed to download checksum file"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Verifying checksum..."
awk '{print $1" kubeadm"}' kubeadm.sha256sum | sha256sum -c - || { logger -t "$LOG_TAG" "[ERROR] Checksum verification failed!"; exit 1; }
logger -t "$LOG_TAG" "[INFO] Installing kubeadm..."
install -m 755 kubeadm "$INSTALL_PATH"
logger -t "$LOG_TAG" "[INFO] kubeadm successfully updated to $COMPONENT_VERSION_CLEAN."
rm -rf "$TMP_DIR"
else
logger -t "$LOG_TAG" "[INFO] kubeadm is already up to date. Skipping installation."
fi
Download service
- path: /usr/lib/systemd/system/kubeadm-install.service
owner: root:root
permissions: '0644'
content: |
[Unit]
Description=Install and update kubeadm
After=network.target
Wants=network-online.target
[Service]
Type=oneshot
EnvironmentFile=-/etc/default/kubeadm/download.env
ExecStart=/bin/bash -c "/etc/default/kubeadm/download-script.sh"
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
Download
- systemctl enable kubeadm-install.service
- systemctl start kubeadm-install.service
Installation verification
journalctl -t kubeadm-installer
***** [INFO] Checking current kubeadm version...
***** [INFO] Current: none, Target: 1.30.4
***** [INFO] Download URL: https://*******
***** [INFO] Updating kubeadm to version 1.30.4...
***** [INFO] Working directory: /tmp/tmp.*****
***** [INFO] Downloading kubeadm...
***** [INFO] Downloading checksum file...
***** [INFO] Verifying checksum...
***** [INFO] Installing kubeadm...
***** [INFO] kubeadm successfully updated to 1.30.4.
ls -la /usr/local/bin/ | grep 'kubeadm$'
-rwxr-xr-x 1 root root 50253976 Aug 14 2024 kubeadm
kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"30", GitVersion:"v1.30.4", GitCommit:"a51b3b711150f57ffc1f526a640ec058514ed596", GitTreeState:"clean", BuildDate:"2024-08-14T19:02:46Z", GoVersion:"go1.22.5", Compiler:"gc", Platform:"linux/amd64"}
9. Configuring Components
This section describes the setup and configuration of Kubernetes components that ensure proper cluster operation.
- containerd
- kubelet
- crictl
- kubeadm
- Kubernetes Audit
Configuration of containerd
● Required
Component configuration steps
- Component configuration
- Systemd Unit setup for the component
- Systemd Unit start
This section depends on the following documents:
Component configuration
- Bash
- Cloud-init
Creating working directories
mkdir -p /etc/containerd/
mkdir -p /etc/containerd/conf.d
mkdir -p /etc/containerd/certs.d
Base configuration file
cat <<"EOF" > /etc/containerd/config.toml
version = 2
imports = ["/etc/containerd/conf.d/*.toml"]
EOF
Custom configuration file template
cat <<"EOF" > /etc/containerd/conf.d/in-cloud.toml
version = 2
[plugins]
[plugins."io.containerd.grpc.v1.cri"]
sandbox_image = "registry.k8s.io/pause:3.9"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
[plugins."io.containerd.grpc.v1.cri".registry]
config_path = "/etc/containerd/certs.d/"
EOF
Base configuration file
- path: /etc/containerd/config.toml
owner: root:root
permissions: '0644'
content: |
version = 2
imports = ["/etc/containerd/conf.d/*.toml"]
Custom configuration file template
- path: /etc/containerd/conf.d/in-cloud.toml
owner: root:root
permissions: '0644'
content: |
version = 2
[plugins]
[plugins."io.containerd.grpc.v1.cri"]
sandbox_image = "registry.k8s.io/pause:3.9"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
[plugins."io.containerd.grpc.v1.cri".registry]
config_path = "/etc/containerd/certs.d/"
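The config_path setting makes containerd read per-registry drop-ins from /etc/containerd/certs.d/. As an illustration only (the mirror host below is a placeholder, not part of this setup), a pull-through mirror for Docker Hub would be declared as:

```toml
# /etc/containerd/certs.d/docker.io/hosts.toml -- example; mirror.example.com is hypothetical
server = "https://registry-1.docker.io"

[host."https://mirror.example.com"]
  capabilities = ["pull", "resolve"]
```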
Systemd Unit setup for the component
- Delegate=yes delegates cgroup subsystem management to the container runtime (required for proper Kubernetes operation).
- KillMode=process ensures that when the service is stopped, only the main containerd process is terminated, not the child containers.
- OOMScoreAdjust=-999 protects the process from the OOM Killer: without the runtime, all containers on the node become unmanageable.
- Bash
- Cloud-init
cat <<EOF > /usr/lib/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target containerd-install.service runc-install.service
Wants=containerd-install.service runc-install.service
[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
TasksMax=infinity
OOMScoreAdjust=-999
[Install]
WantedBy=multi-user.target
EOF
systemctl enable containerd
systemctl start containerd
# write_files:
- path: /usr/lib/systemd/system/containerd.service
owner: root:root
permissions: '0644'
content: |
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target containerd-install.service runc-install.service
Wants=containerd-install.service runc-install.service
[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
TasksMax=infinity
OOMScoreAdjust=-999
[Install]
WantedBy=multi-user.target
#runcmd:
- systemctl enable containerd
- systemctl start containerd
Configuration verification
tree /etc/containerd/
/etc/containerd/
├── certs.d
├── conf.d
│   └── in-cloud.toml
└── config.toml
systemctl status containerd
● containerd.service - containerd container runtime
Loaded: loaded (/usr/lib/systemd/system/containerd.service; enabled; preset: enabled)
Active: active (running) since Tue 2024-12-31 17:26:21 UTC; 2min 30s ago
Docs: https://containerd.io
Main PID: 839 (containerd)
Tasks: 7 (limit: 2274)
Memory: 62.0M (peak: 62.5M)
CPU: 375ms
CGroup: /system.slice/containerd.service
└─839 /usr/local/bin/containerd
***** level=info msg="Start subscribing containerd event"
***** level=info msg="Start recovering state"
***** level=info msg="Start event monitor"
***** level=info msg="Start snapshots syncer"
***** level=info msg="Start cni network conf syncer for default"
***** level=info msg="Start streaming server"
***** level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
***** level=info msg=serving... address=/run/containerd/containerd.sock
***** level=info msg="containerd successfully booted in 0.065807s"
***** Started containerd.service - containerd container runtime.
Configuration of kubelet
● Required
Component configuration steps
- Component Systemd Unit configuration
- Add Systemd Unit to autostart
- Custom component configuration
This section depends on the following documents:
Component Systemd Unit configuration
The drop-in configuration 10-kubeadm.conf separates parameters into three levels:
- bootstrap-kubeconfig is used during initial node registration in the cluster (before a permanent kubelet.conf is obtained).
- kubeadm-flags.env contains flags that kubeadm init/kubeadm join generate dynamically during initialization.
- extra-args.env allows specifying additional arguments (e.g., --cloud-provider=external when using Cloud Controller Manager).
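For reference, after kubeadm init or kubeadm join the generated kubeadm-flags.env typically looks like the following (example values for a containerd node; the exact flags vary by version and configuration):

```shell
# /var/lib/kubelet/kubeadm-flags.env -- generated by kubeadm; illustrative content
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.9"
```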
- Bash
- Cloud-init
mkdir -p /usr/lib/systemd/system/kubelet.service.d
mkdir -p /var/lib/kubelet/
mkdir -p /etc/default/kubelet
Systemd Unit
cat <<EOF > /usr/lib/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/
Wants=network-online.target containerd.service
After=network-online.target containerd.service
[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
Systemd Unit Config
cat <<EOF > /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet/extra-args.env
ExecStart=
ExecStart=/usr/local/bin/kubelet \$KUBELET_KUBECONFIG_ARGS \$KUBELET_CONFIG_ARGS \$KUBELET_KUBEADM_ARGS \$KUBELET_EXTRA_ARGS
EOF
- Default
- Cloud Controller Manager
Systemd Unit ENV
mkdir -p /etc/default/kubelet
cat <<EOF > /etc/default/kubelet/extra-args.env
KUBELET_EXTRA_ARGS=""
EOF
Systemd Unit ENV
mkdir -p /etc/default/kubelet
cat <<EOF > /etc/default/kubelet/extra-args.env
KUBELET_EXTRA_ARGS="--cloud-provider=external"
EOF
Add Systemd Unit to autostart
systemctl enable kubelet
Systemd Unit
# write_files:
- path: /usr/lib/systemd/system/kubelet.service
owner: root:root
permissions: '0644'
content: |
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/
Wants=network-online.target containerd.service
After=network-online.target containerd.service
[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
Systemd Unit Config
# write_files:
- path: /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
owner: root:root
permissions: '0644'
content: |
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet/extra-args.env
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
Systemd Unit ENV
# write_files:
- path: /etc/default/kubelet/extra-args.env
owner: root:root
permissions: '0644'
content: |
KUBELET_EXTRA_ARGS="--cloud-provider=external"
Systemd Unit Custom ENV
This configuration block is applicable only when installing Kubernetes manually (using the "Kubernetes the Hard Way" method). When using the kubeadm utility, the configuration file will be created automatically based on the specification provided in the kubeadm-config file.
# write_files:
- path: /var/lib/kubelet/kubeadm-flags.env
owner: root:root
permissions: '0644'
content: |
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.9 --config=/var/lib/kubelet/config-custom.yaml"
Add Systemd Unit to autostart
# runcmd:
- systemctl enable kubelet
Custom component configuration
Kubelet config
- Bash
- Cloud-init
Custom kubelet configuration file
cat <<EOF > /var/lib/kubelet/config-custom.yaml
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
anonymous:
enabled: false
webhook:
cacheTTL: 0s
enabled: true
x509:
clientCAFile: "/etc/kubernetes/pki/ca.crt"
authorization:
mode: Webhook
webhook:
cacheAuthorizedTTL: 0s
cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
containerLogMaxSize: "50Mi"
containerRuntimeEndpoint: "unix:///var/run/containerd/containerd.sock"
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 5s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageGCHighThresholdPercent: 55
imageGCLowThresholdPercent: 50
imageMaximumGCAge: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
flushFrequency: 0
options:
json:
infoBufferSize: "0"
text:
infoBufferSize: "0"
verbosity: 0
kubeAPIQPS: 50
kubeAPIBurst: 100
maxPods: 250
memorySwap: {}
nodeStatusReportFrequency: 1s
nodeStatusUpdateFrequency: 1s
podPidsLimit: 4096
registerNode: true
resolvConf: /run/systemd/resolve/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 0s
serializeImagePulls: false
serverTLSBootstrap: true
shutdownGracePeriod: 15s
shutdownGracePeriodCriticalPods: 5s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
tlsMinVersion: "VersionTLS12"
volumeStatsAggPeriod: 0s
featureGates:
RotateKubeletServerCertificate: true
APIPriorityAndFairness: true
tlsCipherSuites:
- "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"
- "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
- "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"
- "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
- "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"
- "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"
EOF
Custom kubelet configuration file
- path: /var/lib/kubelet/config-custom.yaml
owner: root:root
permissions: '0644'
content: |
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
anonymous:
enabled: false
webhook:
cacheTTL: 0s
enabled: true
x509:
clientCAFile: "/etc/kubernetes/pki/ca.crt"
authorization:
mode: Webhook
webhook:
cacheAuthorizedTTL: 0s
cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
containerLogMaxSize: "50Mi"
containerRuntimeEndpoint: "unix:///var/run/containerd/containerd.sock"
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 5s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageGCHighThresholdPercent: 55
imageGCLowThresholdPercent: 50
imageMaximumGCAge: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
flushFrequency: 0
options:
json:
infoBufferSize: "0"
text:
infoBufferSize: "0"
verbosity: 0
kubeAPIQPS: 50
kubeAPIBurst: 100
maxPods: 250
memorySwap: {}
nodeStatusReportFrequency: 1s
nodeStatusUpdateFrequency: 1s
podPidsLimit: 4096
registerNode: true
resolvConf: /run/systemd/resolve/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 0s
serializeImagePulls: false
serverTLSBootstrap: true
shutdownGracePeriod: 15s
shutdownGracePeriodCriticalPods: 5s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
tlsMinVersion: "VersionTLS12"
volumeStatsAggPeriod: 0s
featureGates:
RotateKubeletServerCertificate: true
APIPriorityAndFairness: true
tlsCipherSuites:
- "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"
- "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
- "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"
- "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
- "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"
- "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"
Configuration check
Note that when assembling the cluster manually, without running kubeadm init or kubeadm join, the default kubelet configuration file (/var/lib/kubelet/config.yaml) is not created.
ls -la /var/lib/kubelet/config-custom.yaml
ls -la /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
-rw-r--r-- 1 root root 1721 Feb 19 18:57 /var/lib/kubelet/config-custom.yaml
-rw-r--r-- 1 root root 903 Feb 19 22:10 /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
Note that at this point the systemd unit is enabled (added to autostart) but not yet started: kubeadm init or kubeadm join is what normally starts it.
systemctl status kubelet
○ kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; preset: enabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: inactive (dead)
Docs: https://kubernetes.io/docs/
Configuration of crictl
● Optional
Component configuration steps
- Component configuration
- Configuration verification
This section depends on the following documents:
Component configuration
- Bash
- Cloud-init
Custom configuration file template
cat <<"EOF" > /etc/crictl.yaml
runtime-endpoint: unix:///var/run/containerd/containerd.sock
EOF
Custom configuration file template
- path: /etc/crictl.yaml
owner: root:root
permissions: '0644'
content: |
runtime-endpoint: unix:///var/run/containerd/containerd.sock
Configuration verification
ls -la /etc/crictl.yaml
-rw-r--r-- 1 root root 61 Feb 18 15:18 /etc/crictl.yaml
crictl info |
jq '.status.conditions[] |
select(.type=="RuntimeReady") |
.status'
true
Configuration of kubeadm
● Optional
Component configuration steps
- Creating working directories
- Component configuration
This section depends on the following documents:
Creating working directories
mkdir -p /var/run/kubeadm/
Component configuration
The kubeadm configuration describes InitConfiguration parameters (bootstrap tokens, nodeRegistration, skipPhases) and ClusterConfiguration (controlPlaneEndpoint, network subnets, control plane component arguments). The Init tab is used when creating the first node, and the Join tab when adding subsequent ones.
- Init
- Join
Kubeadm Configuration
- master-1
export HOST_NAME=master-1
Kubeadm configuration for cluster initialization
- Bash
- Cloud-init
Environment variables for configuration file template
export CLUSTER_NAME='my-first-cluster'
export BASE_DOMAIN='example.com'
export FULL_HOST_NAME=${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}
export INTERNAL_API=api.${CLUSTER_NAME}.${BASE_DOMAIN}
export MACHINE_LOCAL_ADDRESS=$(ip -4 addr show scope global | awk '/inet/ {print $2; exit}' | cut -d/ -f1)
export ETCD_INITIAL_CLUSTER="${FULL_HOST_NAME}=https://${MACHINE_LOCAL_ADDRESS}:2380"
export CERTIFICATE_UPLOAD_KEY=0c00c2fd5c67c37656c00d78a9d7e1f2eb794ef8e4fc3e2a4b532eb14323cd59
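The bootstrap token and certificate key above are static examples. Both can be generated locally (kubeadm itself provides `kubeadm token generate` and `kubeadm certs certificate-key`): the token must match the format `[a-z0-9]{6}.[a-z0-9]{16}`, and the upload key is 32 random bytes, hex-encoded. A sketch using only coreutils:

```shell
# Bootstrap token: <6 chars>.<16 chars>, lowercase letters and digits only
token_id=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 6)
token_secret=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 16)
BOOTSTRAP_TOKEN="${token_id}.${token_secret}"

# certificateKey: 32 random bytes, hex-encoded (64 hex characters)
CERTIFICATE_UPLOAD_KEY=$(od -An -tx1 -N32 /dev/urandom | tr -d ' \n')

echo "${BOOTSTRAP_TOKEN}"
echo "${CERTIFICATE_UPLOAD_KEY}"
```

The certificateKey is only used to encrypt control plane certificates uploaded to the kubeadm-certs Secret, so it must be shared with every control plane node that joins with --control-plane.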
Kubeadm configuration file for cluster initialization
Note that in this configuration file the addons installation phase is skipped.
cat <<EOF > /var/run/kubeadm/kubeadm.yaml
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
skipPhases:
- addon
bootstrapTokens:
- token: "fjt9ex.lwzqgdlvoxtqk4yw"
description: "kubeadm bootstrap token"
ttl: "24h"
certificateKey: ${CERTIFICATE_UPLOAD_KEY}
nodeRegistration:
imagePullPolicy: IfNotPresent
taints: null
kubeletExtraArgs:
# -> Enable if managing state via Cloud Controller Manager
# cloud-provider: external
config: "/var/lib/kubelet/config-custom.yaml"
cluster-domain: cluster.local
cluster-dns: "29.64.0.10"
# name: '${FULL_HOST_NAME}'
ignorePreflightErrors:
# > When building the cluster step by step rather than running a single command,
# > you need to specify exceptions in the ignorePreflightErrors parameter
# > so that the kubeadm init phase preflight command runs without obstacles.
# > To do this, the following exceptions are added to nodeRegistration:
- FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml
- FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml
- FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml
- FileAvailable--etc-kubernetes-manifests-etcd.yaml
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: "${CLUSTER_NAME}"
certificatesDir: /etc/kubernetes/pki
controlPlaneEndpoint: ${INTERNAL_API}:6443
imageRepository: "registry.k8s.io"
networking:
serviceSubnet: 29.64.0.0/16
dnsDomain: cluster.local
kubernetesVersion: v1.30.4
dns: {}
etcd:
local:
imageRepository: "registry.k8s.io"
dataDir: "/var/lib/etcd"
extraArgs:
auto-compaction-retention: "8"
cert-file: "/etc/kubernetes/pki/etcd/server.crt"
client-cert-auth: "true"
data-dir: "/var/lib/etcd"
election-timeout: "1500"
heartbeat-interval: "250"
key-file: "/etc/kubernetes/pki/etcd/server.key"
listen-client-urls: "https://0.0.0.0:2379"
listen-metrics-urls: "http://0.0.0.0:2381"
listen-peer-urls: "https://0.0.0.0:2380"
logger: "zap"
max-snapshots: "10"
max-wals: "10"
metrics: "extensive"
peer-cert-file: "/etc/kubernetes/pki/etcd/peer.crt"
peer-client-cert-auth: "true"
peer-key-file: "/etc/kubernetes/pki/etcd/peer.key"
peer-trusted-ca-file: "/etc/kubernetes/pki/etcd/ca.crt"
snapshot-count: "10000"
quota-backend-bytes: "10737418240" # TODO
experimental-initial-corrupt-check: "true"
experimental-watch-progress-notify-interval: "5s"
trusted-ca-file: "/etc/kubernetes/pki/etcd/ca.crt"
peerCertSANs:
- 127.0.0.1
serverCertSANs:
- 127.0.0.1
apiServer:
extraArgs:
aggregator-reject-forwarding-redirect: "true"
allow-privileged: "true"
anonymous-auth: "true"
api-audiences: "konnectivity-server"
apiserver-count: "1"
audit-log-batch-buffer-size: "10000"
audit-log-batch-max-size: "1"
audit-log-batch-max-wait: "0s"
audit-log-batch-throttle-burst: "0"
audit-log-batch-throttle-enable: "false"
audit-log-batch-throttle-qps: "0"
audit-log-compress: "false"
audit-log-format: "json"
audit-log-maxage: "30"
audit-log-maxbackup: "10"
audit-log-maxsize: "1000"
audit-log-mode: "batch"
audit-log-truncate-enabled: "false"
audit-log-truncate-max-batch-size: "10485760"
audit-log-truncate-max-event-size: "102400"
audit-log-version: "audit.k8s.io/v1"
audit-webhook-batch-buffer-size: "10000"
audit-webhook-batch-initial-backoff: "10s"
audit-webhook-batch-max-size: "400"
audit-webhook-batch-max-wait: "30s"
audit-webhook-batch-throttle-burst: "15"
audit-webhook-batch-throttle-enable: "true"
audit-webhook-batch-throttle-qps: "10"
audit-webhook-initial-backoff: "10s"
audit-webhook-mode: "batch"
audit-webhook-truncate-enabled: "false"
audit-webhook-truncate-max-batch-size: "10485760"
audit-webhook-truncate-max-event-size: "102400"
audit-webhook-version: "audit.k8s.io/v1"
audit-policy-file: /etc/kubernetes/audit-policy.yaml
audit-log-path: /var/log/kubernetes/audit/audit.log
authentication-token-webhook-cache-ttl: "2m0s"
authentication-token-webhook-version: "v1beta1"
authorization-mode: "Node,RBAC"
authorization-webhook-cache-authorized-ttl: "5m0s"
authorization-webhook-cache-unauthorized-ttl: "30s"
authorization-webhook-version: "v1beta1"
bind-address: "0.0.0.0"
cert-dir: "/var/run/kubernetes"
client-ca-file: "/etc/kubernetes/pki/ca.crt"
cloud-provider-gce-l7lb-src-cidrs: "130.211.0.0/22,35.191.0.0/16"
cloud-provider-gce-lb-src-cidrs: "130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16"
contention-profiling: "false"
default-not-ready-toleration-seconds: "300"
default-unreachable-toleration-seconds: "300"
default-watch-cache-size: "100"
delete-collection-workers: "1"
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,PodSecurity"
enable-aggregator-routing: "true"
enable-bootstrap-token-auth: "true"
enable-garbage-collector: "true"
enable-logs-handler: "true"
enable-priority-and-fairness: "true"
encryption-provider-config-automatic-reload: "false"
endpoint-reconciler-type: "lease"
etcd-cafile: "/etc/kubernetes/pki/etcd/ca.crt"
etcd-certfile: "/etc/kubernetes/pki/apiserver-etcd-client.crt"
etcd-compaction-interval: "5m0s"
etcd-count-metric-poll-period: "1m0s"
etcd-db-metric-poll-interval: "30s"
etcd-healthcheck-timeout: "2s"
etcd-keyfile: "/etc/kubernetes/pki/apiserver-etcd-client.key"
etcd-prefix: "/registry"
etcd-readycheck-timeout: "2s"
etcd-servers: "https://127.0.0.1:2379"
event-ttl: "1h0m0s"
feature-gates: "RotateKubeletServerCertificate=true"
goaway-chance: "0"
help: "false"
http2-max-streams-per-connection: "0"
kubelet-client-certificate: "/etc/kubernetes/pki/apiserver-kubelet-client.crt"
kubelet-client-key: "/etc/kubernetes/pki/apiserver-kubelet-client.key"
kubelet-port: "10250"
kubelet-preferred-address-types: "InternalIP,ExternalIP,Hostname"
kubelet-read-only-port: "10255"
kubelet-timeout: "5s"
kubernetes-service-node-port: "0"
lease-reuse-duration-seconds: "60"
livez-grace-period: "0s"
log-flush-frequency: "5s"
logging-format: "text"
log-json-info-buffer-size: "0"
log-json-split-stream: "false"
log-text-info-buffer-size: "0"
log-text-split-stream: "false"
max-connection-bytes-per-sec: "0"
max-mutating-requests-inflight: "200"
max-requests-inflight: "400"
min-request-timeout: "1800"
permit-address-sharing: "false"
permit-port-sharing: "false"
profiling: "false"
proxy-client-cert-file: "/etc/kubernetes/pki/front-proxy-client.crt"
proxy-client-key-file: "/etc/kubernetes/pki/front-proxy-client.key"
requestheader-allowed-names: "front-proxy-client"
requestheader-client-ca-file: "/etc/kubernetes/pki/front-proxy-ca.crt"
requestheader-extra-headers-prefix: "X-Remote-Extra-"
requestheader-group-headers: "X-Remote-Group"
requestheader-username-headers: "X-Remote-User"
request-timeout: "1m0s"
runtime-config: "api/all=true"
secure-port: "6443"
service-account-extend-token-expiration: "true"
service-account-issuer: "https://kubernetes.default.svc.cluster.local"
service-account-key-file: "/etc/kubernetes/pki/sa.pub"
service-account-lookup: "true"
service-account-max-token-expiration: "0s"
service-account-signing-key-file: "/etc/kubernetes/pki/sa.key"
service-cluster-ip-range: "29.64.0.0/16"
service-node-port-range: "30000-32767"
shutdown-delay-duration: "0s"
shutdown-send-retry-after: "false"
shutdown-watch-termination-grace-period: "0s"
storage-backend: "etcd3"
storage-media-type: "application/vnd.kubernetes.protobuf"
tls-cert-file: "/etc/kubernetes/pki/apiserver.crt"
tls-private-key-file: "/etc/kubernetes/pki/apiserver.key"
v: "2"
version: "false"
watch-cache: "true"
# TO ENABLE CLOUD-CONTROLLER-MANAGER,
# UNCOMMENT THE FOLLOWING
# ->
# cloud-provider: "external"
# Do not set if the value is "" or undefined
# cloud-config: ""
# strict-transport-security-directives: ""
# disable-admission-plugins: ""
# disabled-metrics: ""
# egress-selector-config-file: ""
# encryption-provider-config: ""
# etcd-servers-overrides: ""
# external-hostname: ""
# kubelet-certificate-authority: ""
# oidc-ca-file: ""
# oidc-client-id: ""
# oidc-groups-claim: ""
# oidc-groups-prefix: ""
# oidc-issuer-url: ""
# oidc-required-claim: ""
# oidc-signing-algs: "RS256"
# oidc-username-claim: "sub"
# oidc-username-prefix: ""
# peer-advertise-ip: ""
# peer-advertise-port: ""
# peer-ca-file: ""
# service-account-jwks-uri: ""
# show-hidden-metrics-for-version: ""
# tls-cipher-suites: ""
# tls-min-version: ""
# tls-sni-cert-key: ""
# token-auth-file: ""
# tracing-config-file: ""
# vmodule: ""
# watch-cache-sizes: ""
# authorization-webhook-config-file: ""
# cors-allowed-origins: ""
# debug-socket-path: ""
# authorization-policy-file: ""
# authorization-config: ""
# authentication-token-webhook-config-file: ""
# authentication-config: ""
# audit-webhook-config-file: ""
# audit-policy-file: "/etc/kubernetes/audit-policy.yaml"
# audit-log-path: "/var/log/kubernetes/audit/audit.log"
# allow-metric-labels: ""
# allow-metric-labels-manifest: ""
# admission-control: ""
# admission-control-config-file: ""
# advertise-address: ""
extraVolumes:
- name: "k8s-audit"
hostPath: "/var/log/kubernetes/audit/"
mountPath: "/var/log/kubernetes/audit/"
readOnly: false
pathType: DirectoryOrCreate
- name: "k8s-audit-policy"
hostPath: "/etc/kubernetes/audit-policy.yaml"
mountPath: "/etc/kubernetes/audit-policy.yaml"
pathType: File
certSANs:
- "127.0.0.1"
# TODO: to add the external FQDN to the cluster certificates
# - ${INTERNAL_API}
timeoutForControlPlane: 4m0s
controllerManager:
extraArgs:
cluster-name: "${CLUSTER_NAME}"
allocate-node-cidrs: "false"
allow-untagged-cloud: "false"
attach-detach-reconcile-sync-period: "1m0s"
authentication-kubeconfig: "/etc/kubernetes/controller-manager.conf"
authentication-skip-lookup: "false"
authentication-token-webhook-cache-ttl: "10s"
authentication-tolerate-lookup-failure: "false"
authorization-always-allow-paths: "/healthz,/readyz,/livez,/metrics"
authorization-kubeconfig: "/etc/kubernetes/controller-manager.conf"
authorization-webhook-cache-authorized-ttl: "10s"
authorization-webhook-cache-unauthorized-ttl: "10s"
bind-address: "0.0.0.0"
cidr-allocator-type: "RangeAllocator"
client-ca-file: "/etc/kubernetes/pki/ca.crt"
# -> Enable if managing state via Cloud Controller Manager
# cloud-provider: "external"
cloud-provider-gce-lb-src-cidrs: "130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16"
cluster-signing-cert-file: "/etc/kubernetes/pki/ca.crt"
cluster-signing-duration: "720h0m0s"
cluster-signing-key-file: "/etc/kubernetes/pki/ca.key"
concurrent-cron-job-syncs: "5"
concurrent-deployment-syncs: "5"
concurrent-endpoint-syncs: "5"
concurrent-ephemeralvolume-syncs: "5"
concurrent-gc-syncs: "20"
concurrent-horizontal-pod-autoscaler-syncs: "5"
concurrent-job-syncs: "5"
concurrent-namespace-syncs: "10"
concurrent-rc-syncs: "5"
concurrent-replicaset-syncs: "20"
concurrent-resource-quota-syncs: "5"
concurrent-service-endpoint-syncs: "5"
concurrent-service-syncs: "1"
concurrent-serviceaccount-token-syncs: "5"
concurrent-statefulset-syncs: "5"
concurrent-ttl-after-finished-syncs: "5"
concurrent-validating-admission-policy-status-syncs: "5"
configure-cloud-routes: "true"
contention-profiling: "false"
controller-start-interval: "0s"
controllers: "*,bootstrapsigner,tokencleaner"
disable-attach-detach-reconcile-sync: "false"
disable-force-detach-on-timeout: "false"
enable-dynamic-provisioning: "true"
enable-garbage-collector: "true"
enable-hostpath-provisioner: "false"
enable-leader-migration: "false"
endpoint-updates-batch-period: "0s"
endpointslice-updates-batch-period: "0s"
feature-gates: "RotateKubeletServerCertificate=true"
flex-volume-plugin-dir: "/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
help: "false"
horizontal-pod-autoscaler-cpu-initialization-period: "5m0s"
horizontal-pod-autoscaler-downscale-delay: "5m0s"
horizontal-pod-autoscaler-downscale-stabilization: "5m0s"
horizontal-pod-autoscaler-initial-readiness-delay: "30s"
horizontal-pod-autoscaler-sync-period: "30s"
horizontal-pod-autoscaler-tolerance: "0.1"
horizontal-pod-autoscaler-upscale-delay: "3m0s"
http2-max-streams-per-connection: "0"
kube-api-burst: "120"
kube-api-content-type: "application/vnd.kubernetes.protobuf"
kube-api-qps: "100"
kubeconfig: "/etc/kubernetes/controller-manager.conf"
large-cluster-size-threshold: "50"
leader-elect: "true"
leader-elect-lease-duration: "15s"
leader-elect-renew-deadline: "10s"
leader-elect-resource-lock: "leases"
leader-elect-resource-name: "kube-controller-manager"
leader-elect-resource-namespace: "kube-system"
leader-elect-retry-period: "2s"
legacy-service-account-token-clean-up-period: "8760h0m0s"
log-flush-frequency: "5s"
log-json-info-buffer-size: "0"
log-json-split-stream: "false"
log-text-info-buffer-size: "0"
log-text-split-stream: "false"
logging-format: "text"
max-endpoints-per-slice: "100"
min-resync-period: "12h0m0s"
mirroring-concurrent-service-endpoint-syncs: "5"
mirroring-endpointslice-updates-batch-period: "0s"
mirroring-max-endpoints-per-subset: "1000"
namespace-sync-period: "2m0s"
node-cidr-mask-size: "0"
node-cidr-mask-size-ipv4: "0"
node-cidr-mask-size-ipv6: "0"
node-eviction-rate: "0.1"
node-monitor-grace-period: "40s"
node-monitor-period: "5s"
node-startup-grace-period: "10s"
node-sync-period: "0s"
permit-address-sharing: "false"
permit-port-sharing: "false"
profiling: "false"
pv-recycler-increment-timeout-nfs: "30"
pv-recycler-minimum-timeout-hostpath: "60"
pv-recycler-minimum-timeout-nfs: "300"
pv-recycler-timeout-increment-hostpath: "30"
pvclaimbinder-sync-period: "15s"
requestheader-client-ca-file: "/etc/kubernetes/pki/front-proxy-ca.crt"
requestheader-extra-headers-prefix: "x-remote-extra-"
requestheader-group-headers: "x-remote-group"
requestheader-username-headers: "x-remote-user"
resource-quota-sync-period: "5m0s"
root-ca-file: "/etc/kubernetes/pki/ca.crt"
route-reconciliation-period: "10s"
secondary-node-eviction-rate: "0.01"
secure-port: "10257"
service-account-private-key-file: "/etc/kubernetes/pki/sa.key"
terminated-pod-gc-threshold: "0"
unhealthy-zone-threshold: "0.55"
use-service-account-credentials: "true"
v: "2"
version: "false"
volume-host-allow-local-loopback: "true"
# TO ENABLE SERVING CERTIFICATES FOR KUBE-CONTROLLER-MANAGER
# NOTE THAT KUBEADM DOES NOT CREATE THESE CERTIFICATES
# UNCOMMENT THE FOLLOWING
# ->
# tls-cert-file: "/etc/kubernetes/pki/controller-manager-server.crt"
# tls-private-key-file: "/etc/kubernetes/pki/controller-manager-server.key"
# Do not set if the value is "" or undefined
# cluster-signing-kube-apiserver-client-cert-file: ""
# cluster-signing-kube-apiserver-client-key-file: ""
# cluster-signing-kubelet-client-cert-file: ""
# cluster-signing-kubelet-client-key-file: ""
# cluster-signing-kubelet-serving-cert-file: ""
# cluster-signing-kubelet-serving-key-file: ""
# cluster-signing-legacy-unknown-cert-file: ""
# cluster-signing-legacy-unknown-key-file: ""
# cluster-cidr: ""
# cloud-config: ""
# cert-dir: ""
# allow-metric-labels-manifest: ""
# allow-metric-labels: ""
# disabled-metrics: ""
# leader-migration-config: ""
# master: ""
# pv-recycler-pod-template-filepath-hostpath: ""
# pv-recycler-pod-template-filepath-nfs: ""
# service-cluster-ip-range: ""
# show-hidden-metrics-for-version: ""
# tls-cipher-suites: ""
# tls-min-version: ""
# tls-sni-cert-key: ""
# vmodule: ""
# volume-host-cidr-denylist: ""
# external-cloud-volume-plugin: ""
# requestheader-allowed-names: ""
# TO ENABLE SERVING CERTIFICATES FOR KUBE-CONTROLLER-MANAGER
# NOTE THAT KUBEADM DOES NOT CREATE THESE CERTIFICATES
# UNCOMMENT THE FOLLOWING
# ->
# extraVolumes:
# - name: "controller-manager-crt"
# hostPath: "/etc/kubernetes/pki/controller-manager-server.crt"
# mountPath: "/etc/kubernetes/pki/controller-manager-server.crt"
# pathType: File
# - name: "controller-manager-key"
# hostPath: "/etc/kubernetes/pki/controller-manager-server.key"
# mountPath: "/etc/kubernetes/pki/controller-manager-server.key"
# pathType: File
scheduler:
extraArgs:
authentication-kubeconfig: "/etc/kubernetes/scheduler.conf"
authentication-skip-lookup: "false"
authentication-token-webhook-cache-ttl: "10s"
authentication-tolerate-lookup-failure: "true"
authorization-always-allow-paths: "/healthz,/readyz,/livez,/metrics"
authorization-kubeconfig: "/etc/kubernetes/scheduler.conf"
authorization-webhook-cache-authorized-ttl: "10s"
authorization-webhook-cache-unauthorized-ttl: "10s"
bind-address: "0.0.0.0"
client-ca-file: ""
contention-profiling: "true"
help: "false"
http2-max-streams-per-connection: "0"
kube-api-burst: "100"
kube-api-content-type: "application/vnd.kubernetes.protobuf"
kube-api-qps: "50"
kubeconfig: "/etc/kubernetes/scheduler.conf"
leader-elect: "true"
leader-elect-lease-duration: "15s"
leader-elect-renew-deadline: "10s"
leader-elect-resource-lock: "leases"
leader-elect-resource-name: "kube-scheduler"
leader-elect-resource-namespace: "kube-system"
leader-elect-retry-period: "2s"
log-flush-frequency: "5s"
log-json-info-buffer-size: "0"
log-json-split-stream: "false"
log-text-info-buffer-size: "0"
log-text-split-stream: "false"
logging-format: "text"
permit-address-sharing: "false"
permit-port-sharing: "false"
pod-max-in-unschedulable-pods-duration: "5m0s"
profiling: "true"
requestheader-extra-headers-prefix: "x-remote-extra-"
requestheader-group-headers: "x-remote-group"
requestheader-username-headers: "x-remote-user"
secure-port: "10259"
v: "2"
version: "false"
# TO ENABLE SERVING CERTIFICATES FOR KUBE-SCHEDULER
# NOTE THAT KUBEADM DOES NOT CREATE THESE CERTIFICATES
# UNCOMMENT THE FOLLOWING
# ->
# tls-cert-file: "/etc/kubernetes/pki/scheduler-server.crt"
# tls-private-key-file: "/etc/kubernetes/pki/scheduler-server.key"
# <-
# allow-metric-labels: "[]"
# allow-metric-labels-manifest: ""
# cert-dir: ""
# config: ""
# disabled-metrics: "[]"
# feature-gates: ""
# master: ""
# requestheader-allowed-names: "[]"
# requestheader-client-ca-file: ""
# show-hidden-metrics-for-version: ""
# tls-cipher-suites: "[]"
# tls-min-version: ""
# tls-sni-cert-key: "[]"
# vmodule: ""
# write-config-to: ""
# TO ENABLE SERVING CERTIFICATES FOR KUBE-SCHEDULER
# NOTE THAT KUBEADM DOES NOT CREATE THESE CERTIFICATES
# UNCOMMENT THE FOLLOWING
# ->
# extraVolumes:
# - name: "scheduler-crt"
# hostPath: "/etc/kubernetes/pki/scheduler-server.crt"
# mountPath: "/etc/kubernetes/pki/scheduler-server.crt"
# pathType: File
# - name: "scheduler-key"
# hostPath: "/etc/kubernetes/pki/scheduler-server.key"
# mountPath: "/etc/kubernetes/pki/scheduler-server.key"
# pathType: File
EOF
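One consistency constraint in the file above is worth calling out: the cluster-dns address passed to the kubelet (29.64.0.10) must fall inside serviceSubnet, and kubeadm conventionally derives it as the tenth address of that range. A minimal sketch of the derivation, using simplified string arithmetic that assumes an octet-aligned subnet such as the /16 used here:

```shell
SERVICE_SUBNET="29.64.0.0/16"

# Strip the prefix length, then replace the last octet with 10.
base="${SERVICE_SUBNET%/*}"      # 29.64.0.0
CLUSTER_DNS="${base%.*}.10"      # 29.64.0.10

echo "${CLUSTER_DNS}"
```

If you change serviceSubnet, cluster-dns in the kubelet arguments must be recomputed to match, or CoreDNS will be unreachable from Pods.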
- path: /var/run/kubeadm/kubeadm.yaml
owner: root:root
permissions: '0644'
content: |
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
skipPhases:
- addon
bootstrapTokens:
- token: "fjt9ex.lwzqgdlvoxtqk4yw"
description: "kubeadm bootstrap token"
ttl: "24h"
certificateKey: 0c00c2fd5c67c37656c00d78a9d7e1f2eb794ef8e4fc3e2a4b532eb14323cd59
nodeRegistration:
imagePullPolicy: IfNotPresent
taints: null
kubeletExtraArgs:
cloud-provider: external
config: "/var/lib/kubelet/config-custom.yaml"
cluster-domain: cluster.local
cluster-dns: "29.64.0.10"
# Uncomment to explicitly specify the node name (recommended when using cloud-init)
# name: {{ local_hostname }}
ignorePreflightErrors:
# > When building the cluster step by step rather than running a single command,
# > you need to specify exceptions in the ignorePreflightErrors parameter
# > so that the kubeadm init phase preflight command runs without obstacles.
# > To do this, the following exceptions are added to nodeRegistration:
- FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml
- FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml
- FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml
- FileAvailable--etc-kubernetes-manifests-etcd.yaml
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: "my-first-cluster"
# For an HA cluster, specify the load balancer VIP instead of {{ local_hostname }}
controlPlaneEndpoint: {{ local_hostname }}:6443
imageRepository: "registry.k8s.io"
networking:
serviceSubnet: 29.64.0.0/16
dnsDomain: cluster.local
kubernetesVersion: v1.30.4
dns: {}
etcd:
local:
imageRepository: "registry.k8s.io"
dataDir: "/var/lib/etcd"
extraArgs:
auto-compaction-retention: "8"
cert-file: "/etc/kubernetes/pki/etcd/server.crt"
client-cert-auth: "true"
data-dir: "/var/lib/etcd"
election-timeout: "1500"
heartbeat-interval: "250"
key-file: "/etc/kubernetes/pki/etcd/server.key"
listen-client-urls: "https://0.0.0.0:2379"
listen-metrics-urls: "http://0.0.0.0:2381"
listen-peer-urls: "https://0.0.0.0:2380"
logger: "zap"
max-snapshots: "10"
max-wals: "10"
metrics: "extensive"
peer-cert-file: "/etc/kubernetes/pki/etcd/peer.crt"
peer-client-cert-auth: "true"
peer-key-file: "/etc/kubernetes/pki/etcd/peer.key"
peer-trusted-ca-file: "/etc/kubernetes/pki/etcd/ca.crt"
snapshot-count: "10000"
quota-backend-bytes: "10737418240" # TODO
experimental-initial-corrupt-check: "true"
experimental-watch-progress-notify-interval: "5s"
trusted-ca-file: "/etc/kubernetes/pki/etcd/ca.crt"
peerCertSANs:
- 127.0.0.1
serverCertSANs:
- 127.0.0.1
apiServer:
extraArgs:
aggregator-reject-forwarding-redirect: "true"
allow-privileged: "true"
anonymous-auth: "true"
api-audiences: "konnectivity-server"
apiserver-count: "1"
audit-log-batch-buffer-size: "10000"
audit-log-batch-max-size: "1"
audit-log-batch-max-wait: "0s"
audit-log-batch-throttle-burst: "0"
audit-log-batch-throttle-enable: "false"
audit-log-batch-throttle-qps: "0"
audit-log-compress: "false"
audit-log-format: "json"
audit-log-maxage: "30"
audit-log-maxbackup: "10"
audit-log-maxsize: "1000"
audit-log-mode: "batch"
audit-log-truncate-enabled: "false"
audit-log-truncate-max-batch-size: "10485760"
audit-log-truncate-max-event-size: "102400"
audit-log-version: "audit.k8s.io/v1"
audit-webhook-batch-buffer-size: "10000"
audit-webhook-batch-initial-backoff: "10s"
audit-webhook-batch-max-size: "400"
audit-webhook-batch-max-wait: "30s"
audit-webhook-batch-throttle-burst: "15"
audit-webhook-batch-throttle-enable: "true"
audit-webhook-batch-throttle-qps: "10"
audit-webhook-initial-backoff: "10s"
audit-webhook-mode: "batch"
audit-webhook-truncate-enabled: "false"
audit-webhook-truncate-max-batch-size: "10485760"
audit-webhook-truncate-max-event-size: "102400"
audit-webhook-version: "audit.k8s.io/v1"
audit-policy-file: /etc/kubernetes/audit-policy.yaml
audit-log-path: /var/log/kubernetes/audit/audit.log
authentication-token-webhook-cache-ttl: "2m0s"
authentication-token-webhook-version: "v1beta1"
authorization-mode: "Node,RBAC"
authorization-webhook-cache-authorized-ttl: "5m0s"
authorization-webhook-cache-unauthorized-ttl: "30s"
authorization-webhook-version: "v1beta1"
bind-address: "0.0.0.0"
cert-dir: "/var/run/kubernetes"
client-ca-file: "/etc/kubernetes/pki/ca.crt"
cloud-provider-gce-l7lb-src-cidrs: "130.211.0.0/22,35.191.0.0/16"
cloud-provider-gce-lb-src-cidrs: "130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16"
contention-profiling: "false"
default-not-ready-toleration-seconds: "300"
default-unreachable-toleration-seconds: "300"
default-watch-cache-size: "100"
delete-collection-workers: "1"
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,PodSecurity"
enable-aggregator-routing: "true"
enable-bootstrap-token-auth: "true"
enable-garbage-collector: "true"
enable-logs-handler: "true"
enable-priority-and-fairness: "true"
encryption-provider-config-automatic-reload: "false"
endpoint-reconciler-type: "lease"
etcd-cafile: "/etc/kubernetes/pki/etcd/ca.crt"
etcd-certfile: "/etc/kubernetes/pki/apiserver-etcd-client.crt"
etcd-compaction-interval: "5m0s"
etcd-count-metric-poll-period: "1m0s"
etcd-db-metric-poll-interval: "30s"
etcd-healthcheck-timeout: "2s"
etcd-keyfile: "/etc/kubernetes/pki/apiserver-etcd-client.key"
etcd-prefix: "/registry"
etcd-readycheck-timeout: "2s"
etcd-servers: "https://127.0.0.1:2379"
event-ttl: "1h0m0s"
feature-gates: "RotateKubeletServerCertificate=true"
goaway-chance: "0"
help: "false"
http2-max-streams-per-connection: "0"
kubelet-client-certificate: "/etc/kubernetes/pki/apiserver-kubelet-client.crt"
kubelet-client-key: "/etc/kubernetes/pki/apiserver-kubelet-client.key"
kubelet-port: "10250"
kubelet-preferred-address-types: "InternalIP,ExternalIP,Hostname"
kubelet-read-only-port: "10255"
kubelet-timeout: "5s"
kubernetes-service-node-port: "0"
lease-reuse-duration-seconds: "60"
livez-grace-period: "0s"
log-flush-frequency: "5s"
logging-format: "text"
log-json-info-buffer-size: "0"
log-json-split-stream: "false"
log-text-info-buffer-size: "0"
log-text-split-stream: "false"
max-connection-bytes-per-sec: "0"
max-mutating-requests-inflight: "200"
max-requests-inflight: "400"
min-request-timeout: "1800"
permit-address-sharing: "false"
permit-port-sharing: "false"
profiling: "false"
proxy-client-cert-file: "/etc/kubernetes/pki/front-proxy-client.crt"
proxy-client-key-file: "/etc/kubernetes/pki/front-proxy-client.key"
requestheader-allowed-names: "front-proxy-client"
requestheader-client-ca-file: "/etc/kubernetes/pki/front-proxy-ca.crt"
requestheader-extra-headers-prefix: "X-Remote-Extra-"
requestheader-group-headers: "X-Remote-Group"
requestheader-username-headers: "X-Remote-User"
request-timeout: "1m0s"
runtime-config: "api/all=true"
secure-port: "6443"
service-account-extend-token-expiration: "true"
service-account-issuer: "https://kubernetes.default.svc.cluster.local"
service-account-key-file: "/etc/kubernetes/pki/sa.pub"
service-account-lookup: "true"
service-account-max-token-expiration: "0s"
service-account-signing-key-file: "/etc/kubernetes/pki/sa.key"
service-cluster-ip-range: "29.64.0.0/16"
service-node-port-range: "30000-32767"
shutdown-delay-duration: "0s"
shutdown-send-retry-after: "false"
shutdown-watch-termination-grace-period: "0s"
storage-backend: "etcd3"
storage-media-type: "application/vnd.kubernetes.protobuf"
tls-cert-file: "/etc/kubernetes/pki/apiserver.crt"
tls-private-key-file: "/etc/kubernetes/pki/apiserver.key"
v: "2"
version: "false"
watch-cache: "true"
# IF YOU NEED TO ENABLE THE CLOUD-CONTROLLER-MANAGER,
# UNCOMMENT THE FOLLOWING
# ->
# cloud-provider: "external"
# Do not specify if the value is "" or undefined
# cloud-config: ""
# strict-transport-security-directives: ""
# disable-admission-plugins: ""
# disabled-metrics: ""
# egress-selector-config-file: ""
# encryption-provider-config: ""
# etcd-servers-overrides: ""
# external-hostname: ""
# kubelet-certificate-authority: ""
# oidc-ca-file: ""
# oidc-client-id: ""
# oidc-groups-claim: ""
# oidc-groups-prefix: ""
# oidc-issuer-url: ""
# oidc-required-claim: ""
# oidc-signing-algs: "RS256"
# oidc-username-claim: "sub"
# oidc-username-prefix: ""
# peer-advertise-ip: ""
# peer-advertise-port: ""
# peer-ca-file: ""
# service-account-jwks-uri: ""
# show-hidden-metrics-for-version: ""
# tls-cipher-suites: ""
# tls-min-version: ""
# tls-sni-cert-key: ""
# token-auth-file: ""
# tracing-config-file: ""
# vmodule: ""
# watch-cache-sizes: ""
# authorization-webhook-config-file: ""
# cors-allowed-origins: ""
# debug-socket-path: ""
# authorization-policy-file: ""
# authorization-config: ""
# authentication-token-webhook-config-file: ""
# authentication-config: ""
# audit-webhook-config-file: ""
# audit-policy-file: "/etc/kubernetes/audit-policy.yaml"
# audit-log-path: "/var/log/kubernetes/audit/audit.log"
# allow-metric-labels: ""
# allow-metric-labels-manifest: ""
# admission-control: ""
# admission-control-config-file: ""
# advertise-address: ""
extraVolumes:
- name: "k8s-audit"
hostPath: "/var/log/kubernetes/audit/"
mountPath: "/var/log/kubernetes/audit/"
pathType: DirectoryOrCreate
certSANs:
- "127.0.0.1"
timeoutForControlPlane: 4m0s
controllerManager:
extraArgs:
cluster-name: "my-first-cluster"
allocate-node-cidrs: "false"
allow-untagged-cloud: "false"
attach-detach-reconcile-sync-period: "1m0s"
authentication-kubeconfig: "/etc/kubernetes/controller-manager.conf"
authentication-skip-lookup: "false"
authentication-token-webhook-cache-ttl: "10s"
authentication-tolerate-lookup-failure: "false"
authorization-always-allow-paths: "/healthz,/readyz,/livez,/metrics"
authorization-kubeconfig: "/etc/kubernetes/controller-manager.conf"
authorization-webhook-cache-authorized-ttl: "10s"
authorization-webhook-cache-unauthorized-ttl: "10s"
bind-address: "0.0.0.0"
cidr-allocator-type: "RangeAllocator"
client-ca-file: "/etc/kubernetes/pki/ca.crt"
# -> Enable if state is managed via a Cloud Controller Manager
# cloud-provider: "external"
cloud-provider-gce-lb-src-cidrs: "130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16"
cluster-signing-cert-file: "/etc/kubernetes/pki/ca.crt"
cluster-signing-duration: "720h0m0s"
cluster-signing-key-file: "/etc/kubernetes/pki/ca.key"
concurrent-cron-job-syncs: "5"
concurrent-deployment-syncs: "5"
concurrent-endpoint-syncs: "5"
concurrent-ephemeralvolume-syncs: "5"
concurrent-gc-syncs: "20"
concurrent-horizontal-pod-autoscaler-syncs: "5"
concurrent-job-syncs: "5"
concurrent-namespace-syncs: "10"
concurrent-rc-syncs: "5"
concurrent-replicaset-syncs: "20"
concurrent-resource-quota-syncs: "5"
concurrent-service-endpoint-syncs: "5"
concurrent-service-syncs: "1"
concurrent-serviceaccount-token-syncs: "5"
concurrent-statefulset-syncs: "5"
concurrent-ttl-after-finished-syncs: "5"
concurrent-validating-admission-policy-status-syncs: "5"
configure-cloud-routes: "true"
contention-profiling: "false"
controller-start-interval: "0s"
controllers: "*,bootstrapsigner,tokencleaner"
disable-attach-detach-reconcile-sync: "false"
disable-force-detach-on-timeout: "false"
enable-dynamic-provisioning: "true"
enable-garbage-collector: "true"
enable-hostpath-provisioner: "false"
enable-leader-migration: "false"
endpoint-updates-batch-period: "0s"
endpointslice-updates-batch-period: "0s"
feature-gates: "RotateKubeletServerCertificate=true"
flex-volume-plugin-dir: "/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
help: "false"
horizontal-pod-autoscaler-cpu-initialization-period: "5m0s"
horizontal-pod-autoscaler-downscale-delay: "5m0s"
horizontal-pod-autoscaler-downscale-stabilization: "5m0s"
horizontal-pod-autoscaler-initial-readiness-delay: "30s"
horizontal-pod-autoscaler-sync-period: "30s"
horizontal-pod-autoscaler-tolerance: "0.1"
horizontal-pod-autoscaler-upscale-delay: "3m0s"
http2-max-streams-per-connection: "0"
kube-api-burst: "120"
kube-api-content-type: "application/vnd.kubernetes.protobuf"
kube-api-qps: "100"
kubeconfig: "/etc/kubernetes/controller-manager.conf"
large-cluster-size-threshold: "50"
leader-elect: "true"
leader-elect-lease-duration: "15s"
leader-elect-renew-deadline: "10s"
leader-elect-resource-lock: "leases"
leader-elect-resource-name: "kube-controller-manager"
leader-elect-resource-namespace: "kube-system"
leader-elect-retry-period: "2s"
legacy-service-account-token-clean-up-period: "8760h0m0s"
log-flush-frequency: "5s"
log-json-info-buffer-size: "0"
log-json-split-stream: "false"
log-text-info-buffer-size: "0"
log-text-split-stream: "false"
logging-format: "text"
max-endpoints-per-slice: "100"
min-resync-period: "12h0m0s"
mirroring-concurrent-service-endpoint-syncs: "5"
mirroring-endpointslice-updates-batch-period: "0s"
mirroring-max-endpoints-per-subset: "1000"
namespace-sync-period: "2m0s"
node-cidr-mask-size: "0"
node-cidr-mask-size-ipv4: "0"
node-cidr-mask-size-ipv6: "0"
node-eviction-rate: "0.1"
node-monitor-grace-period: "40s"
node-monitor-period: "5s"
node-startup-grace-period: "10s"
node-sync-period: "0s"
permit-address-sharing: "false"
permit-port-sharing: "false"
profiling: "false"
pv-recycler-increment-timeout-nfs: "30"
pv-recycler-minimum-timeout-hostpath: "60"
pv-recycler-minimum-timeout-nfs: "300"
pv-recycler-timeout-increment-hostpath: "30"
pvclaimbinder-sync-period: "15s"
requestheader-client-ca-file: "/etc/kubernetes/pki/front-proxy-ca.crt"
requestheader-extra-headers-prefix: "x-remote-extra-"
requestheader-group-headers: "x-remote-group"
requestheader-username-headers: "x-remote-user"
resource-quota-sync-period: "5m0s"
root-ca-file: "/etc/kubernetes/pki/ca.crt"
route-reconciliation-period: "10s"
secondary-node-eviction-rate: "0.01"
secure-port: "10257"
service-account-private-key-file: "/etc/kubernetes/pki/sa.key"
terminated-pod-gc-threshold: "0"
unhealthy-zone-threshold: "0.55"
use-service-account-credentials: "true"
v: "2"
version: "false"
volume-host-allow-local-loopback: "true"
# IF YOU NEED TO ENABLE SERVING CERTIFICATES FOR KUBE-CONTROLLER-MANAGER,
# NOTE THAT KUBEADM DOES NOT CREATE THESE CERTIFICATES.
# UNCOMMENT THE FOLLOWING
# ->
# tls-cert-file: "/etc/kubernetes/pki/controller-manager-server.crt"
# tls-private-key-file: "/etc/kubernetes/pki/controller-manager-server.key"
# Do not specify if the value is "" or undefined
# cluster-signing-kube-apiserver-client-cert-file: ""
# cluster-signing-kube-apiserver-client-key-file: ""
# cluster-signing-kubelet-client-cert-file: ""
# cluster-signing-kubelet-client-key-file: ""
# cluster-signing-kubelet-serving-cert-file: ""
# cluster-signing-kubelet-serving-key-file: ""
# cluster-signing-legacy-unknown-cert-file: ""
# cluster-signing-legacy-unknown-key-file: ""
# cluster-cidr: ""
# cloud-config: ""
# cert-dir: ""
# allow-metric-labels-manifest: ""
# allow-metric-labels: ""
# disabled-metrics: ""
# leader-migration-config: ""
# master: ""
# pv-recycler-pod-template-filepath-hostpath: ""
# pv-recycler-pod-template-filepath-nfs: ""
# service-cluster-ip-range: ""
# show-hidden-metrics-for-version: ""
# tls-cipher-suites: ""
# tls-min-version: ""
# tls-sni-cert-key: ""
# vmodule: ""
# volume-host-cidr-denylist: ""
# external-cloud-volume-plugin: ""
# requestheader-allowed-names: ""
scheduler:
extraArgs:
authentication-kubeconfig: "/etc/kubernetes/scheduler.conf"
authentication-skip-lookup: "false"
authentication-token-webhook-cache-ttl: "10s"
authentication-tolerate-lookup-failure: "true"
authorization-always-allow-paths: "/healthz,/readyz,/livez,/metrics"
authorization-kubeconfig: "/etc/kubernetes/scheduler.conf"
authorization-webhook-cache-authorized-ttl: "10s"
authorization-webhook-cache-unauthorized-ttl: "10s"
bind-address: "0.0.0.0"
client-ca-file: ""
contention-profiling: "true"
help: "false"
http2-max-streams-per-connection: "0"
kube-api-burst: "100"
kube-api-content-type: "application/vnd.kubernetes.protobuf"
kube-api-qps: "50"
kubeconfig: "/etc/kubernetes/scheduler.conf"
leader-elect: "true"
leader-elect-lease-duration: "15s"
leader-elect-renew-deadline: "10s"
leader-elect-resource-lock: "leases"
leader-elect-resource-name: "kube-scheduler"
leader-elect-resource-namespace: "kube-system"
leader-elect-retry-period: "2s"
log-flush-frequency: "5s"
log-json-info-buffer-size: "0"
log-json-split-stream: "false"
log-text-info-buffer-size: "0"
log-text-split-stream: "false"
logging-format: "text"
permit-address-sharing: "false"
permit-port-sharing: "false"
pod-max-in-unschedulable-pods-duration: "5m0s"
profiling: "true"
requestheader-extra-headers-prefix: "x-remote-extra-"
requestheader-group-headers: "x-remote-group"
requestheader-username-headers: "x-remote-user"
secure-port: "10259"
v: "2"
version: "false"
# IF YOU NEED TO ENABLE SERVING CERTIFICATES FOR KUBE-SCHEDULER,
# NOTE THAT KUBEADM DOES NOT CREATE THESE CERTIFICATES.
# UNCOMMENT THE FOLLOWING
# ->
# tls-cert-file: "/etc/kubernetes/pki/scheduler-server.crt"
# tls-private-key-file: "/etc/kubernetes/pki/scheduler-server.key"
# <-
# allow-metric-labels: "[]"
# allow-metric-labels-manifest: ""
# cert-dir: ""
# config: ""
# disabled-metrics: "[]"
# feature-gates: ""
# master: ""
# requestheader-allowed-names: "[]"
# requestheader-client-ca-file: ""
# show-hidden-metrics-for-version: ""
# tls-cipher-suites: "[]"
# tls-min-version: ""
# tls-sni-cert-key: "[]"
# vmodule: ""
# write-config-to: ""
Kubeadm Configuration
- master-2
- master-3
export HOST_NAME=master-2
export HOST_NAME=master-3
Kubeadm configuration for joining a master node to the cluster
Environment variables for configuration file template
export MACHINE_LOCAL_ADDRESS="$(ip -4 addr show scope global | awk '/inet/ {print $2; exit}' | cut -d/ -f1)"
export CLUSTER_API_ENDPOINT=api.my-first-cluster.example.com
export CERTIFICATE_UPLOAD_KEY=0c00c2fd5c67c37656c00d78a9d7e1f2eb794ef8e4fc3e2a4b532eb14323cd59
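The MACHINE_LOCAL_ADDRESS pipeline simply takes the first global-scope IPv4 address reported by `ip` and strips the prefix length. A sketch on a canned line of `ip` output (the address is made up for illustration):

```shell
# One line of `ip -4 addr show scope global` output (sample values):
SAMPLE='    inet 192.168.10.21/24 brd 192.168.10.255 scope global dynamic eth0'
# Field 2 is "ADDR/PREFIX"; cut drops the prefix length.
echo "$SAMPLE" | awk '/inet/ {print $2; exit}' | cut -d/ -f1
# prints: 192.168.10.21
```

Note that on multi-homed hosts the first global address may not be the one you want; verify it before generating the configuration.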
Kubeadm configuration file for joining a master to the cluster
cat <<EOF > /var/run/kubeadm/kubeadm.yaml
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
controlPlane:
  localAPIEndpoint:
    advertiseAddress: ${MACHINE_LOCAL_ADDRESS}
    bindPort: 6443
  certificateKey: ${CERTIFICATE_UPLOAD_KEY}
discovery:
  bootstrapToken:
    apiServerEndpoint: ${CLUSTER_API_ENDPOINT}:6443
    token: "fjt9ex.lwzqgdlvoxtqk4yw"
    unsafeSkipCAVerification: true
EOF
Configuration verification
ls -al /var/run/kubeadm/kubeadm.yaml
-rw-r--r-- 1 root root 6463 Feb 18 15:20 /var/run/kubeadm/kubeadm.yaml
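Beyond checking that the file exists, it is worth making sure the heredoc actually expanded every variable. A small helper for that (a sketch, not part of the original procedure):

```shell
# Fails if any literal "${VAR}" placeholder survived the heredoc expansion,
# which usually means a variable was not exported in the current shell.
check_rendered() {
  if grep -q '\${' "$1"; then
    echo "NOT RENDERED"
    return 1
  fi
  echo "RENDERED"
}
# Usage: check_rendered /var/run/kubeadm/kubeadm.yaml
```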
Configuration of Kubernetes Audit
● Optional
The audit policy defines which requests to the API Server are logged and with what level of detail. The file is loaded at kube-apiserver startup via
--audit-policy-file; changes require a restart.
Component configuration steps
- Creating the working directory
- Preparing the audit policy
Creating the working directory
mkdir -p /var/log/kubernetes/audit
Preparing the audit policy
cat <<EOF > /etc/kubernetes/audit-policy.yaml
---
apiVersion: audit.k8s.io/v1
kind: Policy
# General rules
# We exclude the early "RequestReceived" audit stage to reduce log volume and duplication
# This setting applies globally but is overridden locally in some rules
# omitStages can also be specified inside individual rules
rules:
# Disable logging of "watch" requests from kube-proxy to endpoints and services
- level: None
users: ["system:kube-proxy"]
verbs: ["watch"]
resources:
- group: "" # Core API group
resources: ["endpoints", "services", "services/status"]
# Disable logging of configmap reads in kube-system by "system:unsecured"
- level: None
users: ["system:unsecured"]
namespaces: ["kube-system"]
verbs: ["get"]
resources:
- group: ""
resources: ["configmaps"]
# Disable logging of node reads by the legacy "kubelet" user
- level: None
users: ["kubelet"]
verbs: ["get"]
resources:
- group: ""
resources: ["nodes", "nodes/status"]
# Disable logging of node reads by the "system:nodes" group
- level: None
userGroups: ["system:nodes"]
verbs: ["get"]
resources:
- group: ""
resources: ["nodes", "nodes/status"]
# Disable logging of get/update on endpoints in kube-system by controllers
- level: None
users:
- system:kube-controller-manager
- system:kube-scheduler
- system:serviceaccount:kube-system:endpoint-controller
verbs: ["get", "update"]
namespaces: ["kube-system"]
resources:
- group: ""
resources: ["endpoints"]
# Disable logging of namespace operations by the system apiserver user
- level: None
users: ["system:apiserver"]
verbs: ["get"]
resources:
- group: ""
resources: ["namespaces", "namespaces/status", "namespaces/finalize"]
# Disable logging of configmap and endpoint operations in kube-system by cluster-autoscaler
- level: None
users: ["cluster-autoscaler"]
verbs: ["get", "update"]
namespaces: ["kube-system"]
resources:
- group: ""
resources: ["configmaps", "endpoints"]
# Disable logging of metrics requests from kube-controller-manager
- level: None
users: ["system:kube-controller-manager"]
verbs: ["get", "list"]
resources:
- group: "metrics.k8s.io"
# Disable logging of system non-resource URLs (health, version, swagger, etc.)
- level: None
nonResourceURLs:
- /healthz*
- /version
- /swagger*
# Disable logging of events: they are noisy and rarely critical
- level: None
resources:
- group: ""
resources: ["events"]
# Log node and pod status updates from kubelet and node-problem-detector
- level: Request
users:
- kubelet
- system:node-problem-detector
- system:serviceaccount:kube-system:node-problem-detector
verbs:
- update
- patch
resources:
- group: ""
resources:
- nodes/status
- pods/status
omitStages:
- "RequestReceived"
# The same for all members of the "system:nodes" group
- level: Request
userGroups: ["system:nodes"]
verbs:
- update
- patch
resources:
- group: ""
resources:
- nodes/status
- pods/status
omitStages:
- "RequestReceived"
# Log bulk deletions (deletecollection) by the namespace-controller
- level: Request
users: ["system:serviceaccount:kube-system:namespace-controller"]
verbs: ["deletecollection"]
omitStages:
- "RequestReceived"
# Log metadata for sensitive resources: secrets, tokens, and token reviews
- level: Metadata
resources:
- group: ""
resources: ["secrets", "configmaps", "serviceaccounts/token"]
- group: authentication.k8s.io
resources: ["tokenreviews"]
omitStages:
- "RequestReceived"
# Log all safe read operations (get/list/watch) across all known API groups
- level: Request
verbs: ["get", "list", "watch"]
resources:
- group: "" # Core
- group: "admissionregistration.k8s.io"
- group: "apiextensions.k8s.io"
- group: "apiregistration.k8s.io"
- group: "apps"
- group: "authentication.k8s.io"
- group: "authorization.k8s.io"
- group: "autoscaling"
- group: "batch"
- group: "certificates.k8s.io"
- group: "extensions"
- group: "metrics.k8s.io"
- group: "networking.k8s.io"
- group: "policy"
- group: "rbac.authorization.k8s.io"
- group: "scheduling.k8s.io"
- group: "settings.k8s.io"
- group: "storage.k8s.io"
omitStages:
- "RequestReceived"
# Log all operations, including request and response bodies (RequestResponse)
- level: RequestResponse
resources:
- group: "" # Core
- group: "admissionregistration.k8s.io"
- group: "apiextensions.k8s.io"
- group: "apiregistration.k8s.io"
- group: "apps"
- group: "authentication.k8s.io"
- group: "authorization.k8s.io"
- group: "autoscaling"
- group: "batch"
- group: "certificates.k8s.io"
- group: "extensions"
- group: "metrics.k8s.io"
- group: "networking.k8s.io"
- group: "policy"
- group: "rbac.authorization.k8s.io"
- group: "scheduling.k8s.io"
- group: "settings.k8s.io"
- group: "storage.k8s.io"
omitStages:
- "RequestReceived"
# Final catch-all: log metadata for everything else
- level: Metadata
omitStages:
- "RequestReceived"
EOF
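Each line written to audit.log is a single JSON event, so `jq` is enough for spot checks. The event below is a fabricated sample, only to show the fields most useful when tuning the policy; on a live node you would tail /var/log/kubernetes/audit/audit.log instead.

```shell
# Fabricated audit event (illustrative values only)
EVENT='{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","stage":"ResponseComplete","verb":"get","user":{"username":"system:apiserver"},"requestURI":"/api/v1/namespaces/default"}'
# Pull out the level, verb, and user for a quick overview
echo "$EVENT" | jq -r '[.level, .verb, .user.username] | @tsv'
```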
10. Verifying Component Readiness
This section describes the process of verifying the readiness of Kubernetes components before cluster initialization or joining new nodes.
- Init
- Join
Component readiness verification
● Optional
kubeadm init phase preflight --dry-run \
--config=/var/run/kubeadm/kubeadm.yaml
If everything is installed correctly, the command will complete without errors, and you will see the following output:
[preflight] Running pre-flight checks
[preflight] Would pull the required images (like 'kubeadm config images pull')
If the process was performed in semi-automatic mode, output like the following is also acceptable:
[WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[WARNING FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Would pull the required images (like 'kubeadm config images pull')
Component readiness verification
● Optional
kubeadm join phase preflight --dry-run \
--config=/var/run/kubeadm/kubeadm.yaml
If everything is installed correctly, the command will complete without errors, and you will see the following output:
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Would pull the required images (like 'kubeadm config images pull')
If the process was performed in semi-automatic mode, output like the following is also acceptable:
[WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[WARNING FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Would pull the required images (like 'kubeadm config images pull')
11. Working with Certificates
This section covers the rules for using certificates in a Kubernetes cluster: which components use certificates, who signs them, and how authentication is performed.
- masters
- workers
12. Creating Root Certificates
A Certificate Authority (CA) is a trusted entity that issues the root certificates used to sign all other certificates within the Kubernetes cluster.
CA certificates play a key role in establishing trust between components, providing authentication, encryption, and integrity of communications.
This section describes how to create the root certificates that are used to sign the remaining certificates in the Kubernetes cluster.
- Init
- Join
Creating root certificates
● Required
- Kubernetes CA
- FrontProxy CA
- ETCD CA
Kubernetes CA
Purpose: Kubernetes root Certificate Authority (CA). Signs the server and client certificates for kube-apiserver, kubelet, kube-controller-manager, and kube-scheduler. All cluster components trust this CA for TLS connection verification.
Note: this block describes only the process of creating Kubernetes CA root certificates.
- HardWay
- Kubeadm
Working directory
mkdir -p /etc/kubernetes/openssl
mkdir -p /etc/kubernetes/pki
Configuration
cat <<EOF > /etc/kubernetes/openssl/ca.conf
[req]
distinguished_name = req_distinguished_name
x509_extensions = v3_ca
prompt = no
[req_distinguished_name]
CN = kubernetes
[v3_ca]
keyUsage = critical, keyCertSign, keyEncipherment, digitalSignature
basicConstraints = critical,CA:TRUE
EOF
Private key generation
openssl genrsa \
-out /etc/kubernetes/pki/ca.key 2048
Public key generation
openssl req \
-x509 \
-new \
-nodes \
-key /etc/kubernetes/pki/ca.key \
-sha256 \
-days 3650 \
-out /etc/kubernetes/pki/ca.crt \
-config /etc/kubernetes/openssl/ca.conf
Certificate readiness verification
/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/pki/ca.crt
CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Oct 20, 2034 22:04 UTC   9y              no
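The same commands can also be rehearsed in a scratch directory before touching the node, and the result inspected with `openssl x509`. The paths below are temporary; on a real control plane node you would point the inspection commands at /etc/kubernetes/pki/ca.crt.

```shell
# Rehearsal in a scratch directory: generate a throwaway CA with the same
# parameters as above, then confirm the CA:TRUE basic constraint is present.
CA_TMP=$(mktemp -d)
openssl genrsa -out "$CA_TMP/ca.key" 2048
openssl req -x509 -new -nodes \
  -key "$CA_TMP/ca.key" \
  -sha256 -days 3650 \
  -subj "/CN=kubernetes" \
  -addext "basicConstraints=critical,CA:TRUE" \
  -addext "keyUsage=critical,keyCertSign,keyEncipherment,digitalSignature" \
  -out "$CA_TMP/ca.crt"
# Inspect the subject, validity window, and extensions
openssl x509 -in "$CA_TMP/ca.crt" -noout -subject -dates
openssl x509 -in "$CA_TMP/ca.crt" -noout -text | grep 'CA:TRUE'
```

(`-addext` requires OpenSSL 1.1.1 or newer; the config-file approach above works everywhere.)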
Certificate generation
kubeadm init phase certs ca \
--config=/var/run/kubeadm/kubeadm.yaml
After executing the commands, we get the following output.
[certs] Generating "ca" certificate and key
FrontProxy CA
Purpose: CA for the API aggregation mechanism (extension API servers). Signs the front-proxy-client certificate, which kube-apiserver uses when proxying requests to extension API servers (e.g., metrics-server, custom API servers).
Note: this block describes only the process of creating Front Proxy CA root certificates.
- HardWay
- Kubeadm
Working directory
mkdir -p /etc/kubernetes/openssl
mkdir -p /etc/kubernetes/pki
Configuration
cat <<EOF > /etc/kubernetes/openssl/front-proxy-ca.conf
[req]
distinguished_name = req_distinguished_name
x509_extensions = v3_ca
prompt = no
[req_distinguished_name]
CN = front-proxy-ca
[v3_ca]
keyUsage = critical, keyCertSign, keyEncipherment, digitalSignature
basicConstraints = critical,CA:TRUE
EOF
Private key generation
openssl genrsa \
-out /etc/kubernetes/pki/front-proxy-ca.key 2048
Public key generation
openssl req \
-x509 \
-new \
-nodes \
-key /etc/kubernetes/pki/front-proxy-ca.key \
-sha256 \
-days 3650 \
-out /etc/kubernetes/pki/front-proxy-ca.crt \
-config /etc/kubernetes/openssl/front-proxy-ca.conf
Certificate readiness verification
/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/pki/front-proxy-ca.crt
CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
front-proxy-ca          Oct 20, 2034 22:04 UTC   9y              no
Certificate generation
kubeadm init phase certs front-proxy-ca \
--config=/var/run/kubeadm/kubeadm.yaml
After executing the commands, we get the following output.
[certs] Generating "front-proxy-ca" certificate and key
ETCD CA
Purpose: CA for all etcd cluster certificates. Signs server, client, and peer certificates for etcd: etcd-server (client connections, port 2379), etcd-peer (inter-node replication, port 2380), and etcd-healthcheck-client (health checks). Also used by kube-apiserver to verify the connection to etcd via the apiserver-etcd-client certificate.
Note: this section only describes the process of creating ETCD CA root certificates.
- HardWay
- Kubeadm
Working directory
mkdir -p /etc/kubernetes/openssl
mkdir -p /etc/kubernetes/pki/etcd
Configuration
cat <<EOF > /etc/kubernetes/openssl/etcd-ca.conf
[req]
distinguished_name = req_distinguished_name
x509_extensions = v3_ca
prompt = no
[req_distinguished_name]
CN = etcd-ca
[v3_ca]
keyUsage = critical, keyCertSign, keyEncipherment, digitalSignature
basicConstraints = critical,CA:TRUE
EOF
Private key generation
openssl genrsa \
-out /etc/kubernetes/pki/etcd/ca.key 2048
Public key generation
openssl req \
-x509 \
-new \
-nodes \
-key /etc/kubernetes/pki/etcd/ca.key \
-sha256 \
-days 3650 \
-out /etc/kubernetes/pki/etcd/ca.crt \
-config /etc/kubernetes/openssl/etcd-ca.conf
Certificate readiness check
/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/pki/etcd/ca.crt
CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
etcd-ca                 Oct 20, 2034 22:04 UTC   9y              no
Certificate generation
kubeadm init phase certs etcd-ca \
--config=/var/run/kubeadm/kubeadm.yaml
After running the command, we get the following output.
[certs] Generating "etcd/ca" certificate and key
Downloading existing CAs
● Required
- HardWay
- Kubeadm
This section provides instructions for downloading root certificates from the Kubernetes control plane. The certificates are downloaded in encrypted form from the Secret resource, which allows them to be securely transferred and decrypted on the node for managing the control plane node lifecycle.
Working directory
mkdir -p /etc/kubernetes/openssl
mkdir -p /etc/kubernetes/pki
mkdir -p /etc/kubernetes/pki/etcd
Environment variables
In production environments, it is recommended to create a separate bootstrap token for each node. However, for demonstration purposes (and within this documentation), we have simplified the process and use a single shared token for all control plane nodes.
export CERTIFICATE_UPLOAD_KEY=0c00c2fd5c67c37656c00d78a9d7e1f2eb794ef8e4fc3e2a4b532eb14323cd59
export KUBE_API_BOOTSTRAP_TOKEN=fjt9ex.lwzqgdlvoxtqk4yw
export KUBE_API_SERVER=https://api.my-first-cluster.example.com:6443
cat <<EOF > /etc/kubernetes/openssl/decrypt.py
#!/usr/bin/env python3
import sys, base64
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
key = bytes.fromhex(sys.argv[1])
payload = base64.b64decode(sys.argv[2])
nonce, ct = payload[:12], payload[12:]
aesgcm = AESGCM(key)
plain = aesgcm.decrypt(nonce, ct, None)
sys.stdout.buffer.write(plain)
EOF
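The payload layout decrypt.py expects is base64 of a 12-byte AES-GCM nonce followed by the ciphertext. A stdlib-only sketch of that split, with no real encryption involved (the bytes are placeholders):

```shell
# Stdlib-only illustration of the payload layout decrypt.py expects:
# base64( nonce[12 bytes] || ciphertext ).
python3 - <<'PY'
import base64

payload = base64.b64encode(b"\x00" * 12 + b"ciphertext-bytes").decode()

raw = base64.b64decode(payload)
nonce, ct = raw[:12], raw[12:]   # same split as decrypt.py
print(len(nonce), ct.decode())
PY
# prints: 12 ciphertext-bytes
```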
cat <<'EOF' > /etc/kubernetes/openssl/download-certs.sh
#!/bin/bash
set -euo pipefail
CERT_PATH="/etc/kubernetes/pki"
KEY="${CERTIFICATE_UPLOAD_KEY:-}"
PY_SCRIPT="$(dirname "$0")/decrypt.py"
KUBE_API_SERVER="${KUBE_API_SERVER:-https://127.0.0.1:6443}"
TOKEN="${KUBE_API_BOOTSTRAP_TOKEN:?KUBE_API_BOOTSTRAP_TOKEN is required}"
declare -A files=(
["ca.crt"]="$CERT_PATH/ca.crt"
["ca.key"]="$CERT_PATH/ca.key"
["etcd-ca.crt"]="$CERT_PATH/etcd/ca.crt"
["etcd-ca.key"]="$CERT_PATH/etcd/ca.key"
["front-proxy-ca.crt"]="$CERT_PATH/front-proxy-ca.crt"
["front-proxy-ca.key"]="$CERT_PATH/front-proxy-ca.key"
["sa.key"]="$CERT_PATH/sa.key"
["sa.pub"]="$CERT_PATH/sa.pub"
)
mkdir -p "$CERT_PATH"
echo "[INFO] Using certificate key: $KEY"
echo "[WARN] TLS verification is DISABLED (insecure mode)"
echo "[INFO] Fetching Secret kubeadm-certs from API..."
RESPONSE=$(curl -sSL -k \
-H "Authorization: Bearer $TOKEN" \
"$KUBE_API_SERVER/api/v1/namespaces/kube-system/secrets/kubeadm-certs")
echo "$RESPONSE" | jq -r '.data | to_entries[] | @base64' | while read -r item; do
name=$(echo "$item" | base64 -d | jq -r '.key')
b64=$(echo "$item" | base64 -d | jq -r '.value' | tr -d '\n')
out_path="${files[$name]:-}"
if [[ -z "$out_path" ]]; then
echo "[WARN] Unknown certificate: $name — skipping"
continue
fi
mkdir -p "$(dirname "$out_path")"
echo "[INFO] Decrypting $name → $out_path"
python3 "$PY_SCRIPT" "$KEY" "$b64" > "$out_path"
done
echo "[INFO] Certificates unpacked."
EOF
Setting permissions
chmod +x /etc/kubernetes/openssl/download-certs.sh
Running the script
/etc/kubernetes/openssl/download-certs.sh
[INFO] Using certificate key: 0c00c2fd5c67c37656c00d78a9d7e1f2eb794ef8e4fc3e2a4b532eb14323cd59
[WARN] TLS verification is DISABLED (insecure mode)
[INFO] Fetching Secret kubeadm-certs from API...
[INFO] Decrypting ca.crt → /etc/kubernetes/pki/ca.crt
[INFO] Decrypting ca.key → /etc/kubernetes/pki/ca.key
[INFO] Decrypting etcd-ca.crt → /etc/kubernetes/pki/etcd/ca.crt
[INFO] Decrypting etcd-ca.key → /etc/kubernetes/pki/etcd/ca.key
[INFO] Decrypting front-proxy-ca.crt → /etc/kubernetes/pki/front-proxy-ca.crt
[INFO] Decrypting front-proxy-ca.key → /etc/kubernetes/pki/front-proxy-ca.key
[INFO] Decrypting sa.key → /etc/kubernetes/pki/sa.key
[INFO] Decrypting sa.pub → /etc/kubernetes/pki/sa.pub
[INFO] Certificates unpacked.
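After unpacking, it is worth verifying that each private key actually matches its certificate. A minimal sketch for the root CA pair (the same check applies to every pair unpacked above): equal SHA-256 digests of the two public keys confirm the pair belongs together.

```shell
# Extract the public key from the certificate and from the private key,
# hash both, and compare; equal digests mean the pair matches.
crt_pub=$(openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -pubkey | openssl sha256)
key_pub=$(openssl pkey -in /etc/kubernetes/pki/ca.key -pubout | openssl sha256)
[ "$crt_pub" = "$key_pub" ] && echo "[INFO] ca.crt matches ca.key" || echo "[ERROR] key mismatch"
```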
This section depends on the following sections:
Manifest generation
kubeadm join phase control-plane-prepare download-certs \
--config=/var/run/kubeadm/kubeadm.yaml
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[download-certs] Saving the certificates to the folder: "/etc/kubernetes/pki"
13. Creating Application Certificates
Certificates are digital documents that verify the authenticity of components within a Kubernetes cluster. They provide secure communication, authentication, and encryption during interactions between nodes, control components, and users.
All certificates are created based on Public Key Infrastructure (PKI) and contain information about the owner, validity period, and the Certificate Authority (CA) that issued the certificate.
This section generates the certificates required for various Kubernetes components (API server, kubelet, controller-manager, etc.).
- Init
- Join
Creating application certificates
● Required
- Kubelet Server
- API -> Etcd
- API -> Kubelet
- API Server
- Proxy -> API
- Etcd Client
- Etcd Server
- Etcd Peer
- Controller server
- Scheduler server
Kubelet server
Purpose: kubelet server certificate for TLS on port 10250. Presented when kube-apiserver and other clients connect to the kubelet API. Signed by kubernetes-ca.
- HardWay
- Kubeadm
Environment variables
export CLUSTER_NAME=my-first-cluster
export BASE_DOMAIN=example.com
export CLUSTER_DOMAIN=cluster.local
export FULL_HOST_NAME=${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}
export MACHINE_LOCAL_ADDRESS=$(ip -4 addr show scope global | awk '/inet/ {print $2; exit}' | cut -d/ -f1)
Working directory
mkdir -p /etc/kubernetes/pki
mkdir -p /etc/kubernetes/openssl/csr
mkdir -p /var/lib/kubelet/pki
Configuration
cat <<EOF > /etc/kubernetes/openssl/kubelet-server.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
req_extensions = req_ext
[ req_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = localhost
DNS.2 = ${HOST_NAME}
DNS.3 = ${FULL_HOST_NAME}
IP.1 = 127.0.0.1
IP.2 = 0:0:0:0:0:0:0:1
IP.3 = ${MACHINE_LOCAL_ADDRESS}
[ dn ]
CN = "system:node:${FULL_HOST_NAME}"
O = "system:nodes"
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=digitalSignature,keyEncipherment
extendedKeyUsage=serverAuth
subjectAltName=@alt_names
EOF
Private key generation
openssl genrsa \
-out /var/lib/kubelet/pki/kubelet-server-key.pem 2048
CSR generation
openssl req \
-new \
-key /var/lib/kubelet/pki/kubelet-server-key.pem \
-out /etc/kubernetes/openssl/csr/kubelet-server.csr \
-config /etc/kubernetes/openssl/kubelet-server.conf
CSR signing
openssl x509 \
-req \
-days 365 \
-sha256 \
-outform PEM \
-CA /etc/kubernetes/pki/ca.crt \
-CAkey /etc/kubernetes/pki/ca.key \
-CAcreateserial \
-in /etc/kubernetes/openssl/csr/kubelet-server.csr \
-out /var/lib/kubelet/pki/kubelet-server.pem \
-extensions v3_ext \
-extfile /etc/kubernetes/openssl/kubelet-server.conf
BUNDLE="/var/lib/kubelet/pki/kubelet-server-$(date '+%Y-%m-%d-%H-%M-%S').pem"
cat /var/lib/kubelet/pki/kubelet-server.pem /var/lib/kubelet/pki/kubelet-server-key.pem > "$BUNDLE"
ln -sf "$BUNDLE" /var/lib/kubelet/pki/kubelet-server-current.pem
Certificate readiness check
/etc/kubernetes/openssl/cert-report.sh /var/lib/kubelet/pki/kubelet-server.pem
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
kubelet-server-current Oct 22, 2025 22:06 UTC 364d kubernetes no
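To confirm that the signed certificate really carries the expected SAN entries and extended key usage, it can be decoded directly. The `-ext` option requires OpenSSL 1.1.1 or newer:

```shell
# Print subject, expiry, SAN list, and EKU of the kubelet serving certificate.
openssl x509 -in /var/lib/kubelet/pki/kubelet-server.pem -noout \
  -subject -enddate -ext subjectAltName,extendedKeyUsage
```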
kubeadm does not manage the server certificate used by the kubelet component.
When the kubelet systemd unit starts, it initiates a certificate signing request.
To complete the process, manual approval is required using the command:
kubectl certificate approve $CERT_NAME.
RotateKubeletServerCertificate
For automatic kubelet certificate rotation, additional configuration is required:
Kube-Apiserver configuration
spec:
containers:
- command:
- --feature-gates=RotateKubeletServerCertificate=true
apiServer:
extraArgs:
feature-gates: "RotateKubeletServerCertificate=true"
Kube-Controller-Manager configuration
spec:
containers:
- command:
- --feature-gates=RotateKubeletServerCertificate=true
controllerManager:
extraArgs:
feature-gates: "RotateKubeletServerCertificate=true"
Kubelet configuration
rotateCertificates: true
featureGates:
RotateKubeletServerCertificate: true
If you are using a Cloud Controller Manager (CCM), the certificate will not be issued until
the CCM assigns the Node an address in the InternalIP field.
Certificate readiness check
kubeadm certs check-expiration
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
kubelet-server-current Oct 22, 2025 22:06 UTC 364d kubernetes no
K8S-API client > Etcd server
Purpose: API Server client certificate for connecting to etcd. Used by kube-apiserver when accessing the cluster data store. Signed by etcd-ca.
- HardWay
- Kubeadm
Working directory
mkdir -p /etc/kubernetes/pki
mkdir -p /etc/kubernetes/openssl/csr
Configuration
cat <<EOF > /etc/kubernetes/openssl/apiserver-etcd-client.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[ dn ]
CN = kube-apiserver-etcd-client
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=digitalSignature,keyEncipherment
extendedKeyUsage=clientAuth
EOF
Private key generation
openssl genrsa \
-out /etc/kubernetes/pki/apiserver-etcd-client.key 2048
CSR generation
openssl req \
-new \
-key /etc/kubernetes/pki/apiserver-etcd-client.key \
-out /etc/kubernetes/openssl/csr/apiserver-etcd-client.csr \
-config /etc/kubernetes/openssl/apiserver-etcd-client.conf
CSR signing
openssl x509 \
-req \
-days 365 \
-sha256 \
-CA /etc/kubernetes/pki/etcd/ca.crt \
-CAkey /etc/kubernetes/pki/etcd/ca.key \
-CAcreateserial \
-in /etc/kubernetes/openssl/csr/apiserver-etcd-client.csr \
-out /etc/kubernetes/pki/apiserver-etcd-client.crt \
-extensions v3_ext \
-extfile /etc/kubernetes/openssl/apiserver-etcd-client.conf
Certificate readiness check
/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/pki/apiserver-etcd-client.crt
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
apiserver-etcd-client Oct 22, 2025 22:06 UTC 364d etcd-ca no
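A quick way to confirm the client certificate chains to the intended root is `openssl verify`. This certificate must validate against etcd-ca, not the main kubernetes-ca:

```shell
# Expect "<path>: OK" when the certificate was signed by etcd-ca.
openssl verify -CAfile /etc/kubernetes/pki/etcd/ca.crt \
  /etc/kubernetes/pki/apiserver-etcd-client.crt
```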
Certificate generation
kubeadm init phase certs apiserver-etcd-client \
--config=/var/run/kubeadm/kubeadm.yaml
After executing the commands, we get the following output.
[certs] Generating "apiserver-etcd-client" certificate and key
Certificate readiness check
kubeadm certs check-expiration
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
apiserver-etcd-client Oct 22, 2025 22:06 UTC 364d etcd-ca no
K8S-API client > Kubelet server
Purpose: API Server client certificate for connecting to kubelet. Used by kube-apiserver when accessing the kubelet API (fetching logs, exec, port-forward). Signed by kubernetes-ca.
- HardWay
- Kubeadm
Working directory
mkdir -p /etc/kubernetes/pki
mkdir -p /etc/kubernetes/openssl/csr
Configuration
cat <<EOF > /etc/kubernetes/openssl/apiserver-kubelet-client.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[ dn ]
CN = kube-apiserver-kubelet-client
O = kubeadm:cluster-admins
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=digitalSignature,keyEncipherment
extendedKeyUsage=clientAuth
EOF
Private key generation
openssl genrsa \
-out /etc/kubernetes/pki/apiserver-kubelet-client.key 2048
CSR generation
openssl req \
-new \
-key /etc/kubernetes/pki/apiserver-kubelet-client.key \
-out /etc/kubernetes/openssl/csr/apiserver-kubelet-client.csr \
-config /etc/kubernetes/openssl/apiserver-kubelet-client.conf
CSR signing
openssl x509 \
-req \
-days 365 \
-sha256 \
-CA /etc/kubernetes/pki/ca.crt \
-CAkey /etc/kubernetes/pki/ca.key \
-CAcreateserial \
-in /etc/kubernetes/openssl/csr/apiserver-kubelet-client.csr \
-out /etc/kubernetes/pki/apiserver-kubelet-client.crt \
-extensions v3_ext \
-extfile /etc/kubernetes/openssl/apiserver-kubelet-client.conf
Certificate readiness check
/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/pki/apiserver-kubelet-client.crt
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
apiserver-kubelet-client Oct 22, 2025 22:06 UTC 364d ca no
Certificate generation
kubeadm init phase certs apiserver-kubelet-client \
--config=/var/run/kubeadm/kubeadm.yaml
After executing the commands, we get the following output.
[certs] Generating "apiserver-kubelet-client" certificate and key
Certificate readiness check
kubeadm certs check-expiration
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
apiserver-kubelet-client Oct 22, 2025 22:06 UTC 364d ca no
K8S-API server
Purpose: API Server serving certificate, presented to clients during the TLS handshake. Contains SAN (Subject Alternative Names) entries for all API access addresses: node IP addresses, the load balancer VIP, DNS names, and the ClusterIP of the kubernetes.default service. Signed by kubernetes-ca.
- HardWay
- Kubeadm
Environment variables
export CLUSTER_NAME=my-first-cluster
export BASE_DOMAIN=example.com
export CLUSTER_DOMAIN=cluster.local
export FULL_HOST_NAME=${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}
export CLUSTER_API_ENDPOINT=api.my-first-cluster.example.com
Working directory
mkdir -p /etc/kubernetes/pki
mkdir -p /etc/kubernetes/openssl/csr
Configuration
cat <<EOF > /etc/kubernetes/openssl/apiserver.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
req_extensions = req_ext
[ req_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.${CLUSTER_DOMAIN}
DNS.5 = ${FULL_HOST_NAME}
DNS.6 = ${CLUSTER_API_ENDPOINT}
IP.1 = $(ip -4 addr show scope global | awk '/inet/ {print $2; exit}' | cut -d/ -f1)
IP.2 = 127.0.0.1
[ dn ]
CN = kube-apiserver
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=digitalSignature,keyEncipherment
extendedKeyUsage=serverAuth
subjectAltName=@alt_names
EOF
Private key generation
openssl genrsa \
-out /etc/kubernetes/pki/apiserver.key 2048
CSR generation
openssl req \
-new \
-key /etc/kubernetes/pki/apiserver.key \
-out /etc/kubernetes/openssl/csr/apiserver.csr \
-config /etc/kubernetes/openssl/apiserver.conf
CSR signing
openssl x509 \
-req \
-days 365 \
-sha256 \
-CA /etc/kubernetes/pki/ca.crt \
-CAkey /etc/kubernetes/pki/ca.key \
-CAcreateserial \
-in /etc/kubernetes/openssl/csr/apiserver.csr \
-out /etc/kubernetes/pki/apiserver.crt \
-extensions v3_ext \
-extfile /etc/kubernetes/openssl/apiserver.conf
Certificate readiness check
/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/pki/apiserver.crt
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
apiserver Oct 22, 2025 22:06 UTC 364d ca no
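Whether a given API endpoint name is actually covered by the SAN list can be checked offline with `-checkhost` (OpenSSL 1.0.2+). The hostname below is the example endpoint used throughout this article:

```shell
# Prints whether the hostname matches one of the certificate's SAN entries.
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout \
  -checkhost api.my-first-cluster.example.com
```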
Certificate generation
kubeadm init phase certs apiserver \
--config=/var/run/kubeadm/kubeadm.yaml
After executing the commands, we get the following output.
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [api.my-first-cluster.example.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master-1.my-first-cluster.example.com] and IPs [29.64.0.1 10.0.0.16]
Certificate readiness check
kubeadm certs check-expiration
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
apiserver Oct 22, 2025 22:06 UTC 364d ca no
FrontProxy client > K8S-API
Purpose: Client certificate for the API aggregation mechanism (extension API servers). Used by kube-apiserver when proxying requests to extended API servers (e.g., metrics-server). Signed by front-proxy-ca.
- HardWay
- Kubeadm
Working directory
mkdir -p /etc/kubernetes/pki
mkdir -p /etc/kubernetes/openssl/csr
Configuration
cat <<EOF > /etc/kubernetes/openssl/front-proxy-client.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[ dn ]
CN = front-proxy-client
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=digitalSignature,keyEncipherment
extendedKeyUsage=clientAuth
EOF
Private key generation
openssl genrsa \
-out /etc/kubernetes/pki/front-proxy-client.key 2048
CSR generation
openssl req \
-new \
-key /etc/kubernetes/pki/front-proxy-client.key \
-out /etc/kubernetes/openssl/csr/front-proxy-client.csr \
-config /etc/kubernetes/openssl/front-proxy-client.conf
CSR signing
openssl x509 \
-req \
-days 365 \
-sha256 \
-CA /etc/kubernetes/pki/front-proxy-ca.crt \
-CAkey /etc/kubernetes/pki/front-proxy-ca.key \
-CAcreateserial \
-in /etc/kubernetes/openssl/csr/front-proxy-client.csr \
-out /etc/kubernetes/pki/front-proxy-client.crt \
-extensions v3_ext \
-extfile /etc/kubernetes/openssl/front-proxy-client.conf
Certificate readiness check
/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/pki/front-proxy-client.crt
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
front-proxy-client Oct 22, 2025 22:06 UTC 364d front-proxy-ca no
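Because the aggregation layer uses its own root, it is easy to demonstrate that this client certificate validates only against front-proxy-ca and not against the main cluster CA, i.e. the two trust domains are fully separate:

```shell
# Validates against front-proxy-ca:
openssl verify -CAfile /etc/kubernetes/pki/front-proxy-ca.crt \
  /etc/kubernetes/pki/front-proxy-client.crt
# Fails against the main kubernetes-ca (separate trust domain):
openssl verify -CAfile /etc/kubernetes/pki/ca.crt \
  /etc/kubernetes/pki/front-proxy-client.crt
```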
Certificate generation
kubeadm init phase certs front-proxy-client \
--config=/var/run/kubeadm/kubeadm.yaml
After executing the commands, we get the following output.
[certs] Generating "front-proxy-client" certificate and key
Certificate readiness check
kubeadm certs check-expiration
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
front-proxy-client Oct 22, 2025 22:06 UTC 364d front-proxy-ca no
Etcd client > Etcd
Purpose: Client certificate for etcd healthcheck probes (liveness probe). Used for connecting to the etcd client API when checking cluster availability. Signed by etcd-ca.
- HardWay
- Kubeadm
Working directory
mkdir -p /etc/kubernetes/pki/etcd
mkdir -p /etc/kubernetes/openssl/csr
Configuration
cat <<EOF > /etc/kubernetes/openssl/healthcheck-client.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[ dn ]
CN = kube-etcd-healthcheck-client
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=digitalSignature,keyEncipherment
extendedKeyUsage=clientAuth
EOF
Private key generation
openssl genrsa \
-out /etc/kubernetes/pki/etcd/healthcheck-client.key 2048
CSR generation
openssl req \
-new \
-key /etc/kubernetes/pki/etcd/healthcheck-client.key \
-out /etc/kubernetes/openssl/csr/etcd-client.csr \
-config /etc/kubernetes/openssl/healthcheck-client.conf
CSR signing
openssl x509 \
-req \
-days 365 \
-sha256 \
-CA /etc/kubernetes/pki/etcd/ca.crt \
-CAkey /etc/kubernetes/pki/etcd/ca.key \
-CAcreateserial \
-in /etc/kubernetes/openssl/csr/etcd-client.csr \
-out /etc/kubernetes/pki/etcd/healthcheck-client.crt \
-extensions v3_ext \
-extfile /etc/kubernetes/openssl/healthcheck-client.conf
Certificate readiness check
/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/pki/etcd/healthcheck-client.crt
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
etcd-healthcheck-client Oct 22, 2025 22:06 UTC 364d etcd-ca no
Certificate generation
kubeadm init phase certs etcd-healthcheck-client \
--config=/var/run/kubeadm/kubeadm.yaml
After running the command, we get the following output.
[certs] Generating "etcd/healthcheck-client" certificate and key
Certificate readiness check
kubeadm certs check-expiration
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
etcd-healthcheck-client Oct 22, 2025 22:06 UTC 364d etcd-ca no
Etcd server
Purpose: Server certificate for etcd serving client connections on port 2379. Presented during TLS connection from kube-apiserver and other etcd clients. Signed by etcd-ca.
- HardWay
- Kubeadm
Environment variables
export CLUSTER_NAME=my-first-cluster
export BASE_DOMAIN=example.com
export FULL_HOST_NAME=${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}
export MACHINE_LOCAL_ADDRESS=$(ip -4 addr show scope global | awk '/inet/ {print $2; exit}' | cut -d/ -f1)
Working directory
mkdir -p /etc/kubernetes/pki/etcd
mkdir -p /etc/kubernetes/openssl/csr
Configuration
cat <<EOF > /etc/kubernetes/openssl/etcd-server.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
req_extensions = req_ext
[ req_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = localhost
DNS.2 = ${HOST_NAME}
DNS.3 = ${FULL_HOST_NAME}
IP.1 = 127.0.0.1
IP.2 = 0:0:0:0:0:0:0:1
IP.3 = ${MACHINE_LOCAL_ADDRESS}
[ dn ]
CN = ${FULL_HOST_NAME}
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=digitalSignature,keyEncipherment
extendedKeyUsage=serverAuth,clientAuth
subjectAltName=@alt_names
EOF
Private key generation
openssl genrsa \
-out /etc/kubernetes/pki/etcd/server.key 2048
CSR generation
openssl req \
-new \
-key /etc/kubernetes/pki/etcd/server.key \
-out /etc/kubernetes/openssl/csr/etcd-server.csr \
-config /etc/kubernetes/openssl/etcd-server.conf
CSR signing
openssl x509 \
-req \
-days 365 \
-sha256 \
-CA /etc/kubernetes/pki/etcd/ca.crt \
-CAkey /etc/kubernetes/pki/etcd/ca.key \
-CAcreateserial \
-in /etc/kubernetes/openssl/csr/etcd-server.csr \
-out /etc/kubernetes/pki/etcd/server.crt \
-extensions v3_ext \
-extfile /etc/kubernetes/openssl/etcd-server.conf
Certificate readiness check
/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/pki/etcd/server.crt
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
etcd-server Oct 22, 2025 22:06 UTC 364d etcd-ca no
Certificate generation
kubeadm init phase certs etcd-server \
--config=/var/run/kubeadm/kubeadm.yaml
After running the command, we get the following output.
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-1.my-first-cluster.example.com] and IPs [192.168.10.27 127.0.0.1 ::1]
Certificate readiness check
kubeadm certs check-expiration
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
etcd-server Oct 22, 2025 22:06 UTC 364d etcd-ca no
Etcd peer > Etcd
Purpose: Certificate for mutual authentication (mutual TLS) between etcd cluster nodes on port 2380. Each cluster member uses the peer certificate for both the server and client side of the connection. Signed by etcd-ca.
- HardWay
- Kubeadm
Environment variables
export CLUSTER_NAME=my-first-cluster
export BASE_DOMAIN=example.com
export FULL_HOST_NAME=${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}
export MACHINE_LOCAL_ADDRESS=$(ip -4 addr show scope global | awk '/inet/ {print $2; exit}' | cut -d/ -f1)
Working directory
mkdir -p /etc/kubernetes/pki/etcd
mkdir -p /etc/kubernetes/openssl/csr
Configuration
cat <<EOF > /etc/kubernetes/openssl/etcd-peer.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
req_extensions = req_ext
[ req_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = localhost
DNS.2 = ${HOST_NAME}
DNS.3 = ${FULL_HOST_NAME}
IP.1 = 127.0.0.1
IP.2 = 0:0:0:0:0:0:0:1
IP.3 = ${MACHINE_LOCAL_ADDRESS}
[ dn ]
CN = ${FULL_HOST_NAME}
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=digitalSignature,keyEncipherment
extendedKeyUsage=serverAuth,clientAuth
subjectAltName=@alt_names
EOF
Private key generation
openssl genrsa \
-out /etc/kubernetes/pki/etcd/peer.key 2048
CSR generation
openssl req \
-new \
-key /etc/kubernetes/pki/etcd/peer.key \
-out /etc/kubernetes/openssl/csr/etcd-peer.csr \
-config /etc/kubernetes/openssl/etcd-peer.conf
CSR signing
openssl x509 \
-req \
-days 365 \
-sha256 \
-CA /etc/kubernetes/pki/etcd/ca.crt \
-CAkey /etc/kubernetes/pki/etcd/ca.key \
-CAcreateserial \
-in /etc/kubernetes/openssl/csr/etcd-peer.csr \
-out /etc/kubernetes/pki/etcd/peer.crt \
-extensions v3_ext \
-extfile /etc/kubernetes/openssl/etcd-peer.conf
Certificate readiness check
/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/pki/etcd/peer.crt
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
etcd-peer Oct 22, 2025 22:06 UTC 364d etcd-ca no
Certificate generation
kubeadm init phase certs etcd-peer \
--config=/var/run/kubeadm/kubeadm.yaml
After running the command, we get the following output.
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-1.my-first-cluster.example.com] and IPs [192.168.10.27 127.0.0.1 ::1]
Certificate readiness check
kubeadm certs check-expiration
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
etcd-peer Oct 22, 2025 22:06 UTC 364d etcd-ca no
Controller server
Purpose: kube-controller-manager server certificate for TLS on the metrics port and healthz endpoints. Signed by kubernetes-ca. Note: kubeadm does not manage this certificate — it is only created in HardWay mode.
- HardWay
- Kubeadm
Environment variables
export MACHINE_LOCAL_ADDRESS=$(ip -4 addr show scope global | awk '/inet/ {print $2; exit}' | cut -d/ -f1)
Working directory
mkdir -p /etc/kubernetes/pki
mkdir -p /etc/kubernetes/openssl/csr
Configuration
cat <<EOF > /etc/kubernetes/openssl/controller-manager-server.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
req_extensions = req_ext
[ req_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = kube-controller-manager
DNS.2 = kube-controller-manager.kube-system
DNS.3 = kube-controller-manager.kube-system.svc
IP.1 = 127.0.0.1
IP.2 = 0:0:0:0:0:0:0:1
IP.3 = ${MACHINE_LOCAL_ADDRESS}
[ dn ]
CN = "system:kube-controller-manager-server"
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=digitalSignature,keyEncipherment
extendedKeyUsage=serverAuth
subjectAltName=@alt_names
EOF
Private key generation
openssl genrsa \
-out /etc/kubernetes/pki/controller-manager-server.key 2048
CSR generation
openssl req \
-new \
-key /etc/kubernetes/pki/controller-manager-server.key \
-out /etc/kubernetes/openssl/csr/controller-manager-server.csr \
-config /etc/kubernetes/openssl/controller-manager-server.conf
CSR signing
openssl x509 \
-req \
-days 365 \
-sha256 \
-outform PEM \
-CA /etc/kubernetes/pki/ca.crt \
-CAkey /etc/kubernetes/pki/ca.key \
-CAcreateserial \
-in /etc/kubernetes/openssl/csr/controller-manager-server.csr \
-out /etc/kubernetes/pki/controller-manager-server.crt \
-extensions v3_ext \
-extfile /etc/kubernetes/openssl/controller-manager-server.conf
Certificate readiness check
/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/pki/controller-manager-server.crt
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
controller-manager-server Oct 22, 2025 22:06 UTC 364d kubernetes no
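For the component to actually serve TLS with this certificate, kube-controller-manager must be pointed at the pair via its standard --tls-cert-file and --tls-private-key-file flags. A hypothetical excerpt of the static pod manifest (only the relevant lines are shown):

```yaml
spec:
  containers:
  - command:
    - kube-controller-manager
    - --tls-cert-file=/etc/kubernetes/pki/controller-manager-server.crt
    - --tls-private-key-file=/etc/kubernetes/pki/controller-manager-server.key
```

kube-scheduler accepts the same pair of flags for its own serving certificate.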
Please note that kubeadm does not manage this certificate; use the HardWay steps.
Scheduler server
Purpose: kube-scheduler server certificate for TLS on the metrics port and healthz endpoints. Signed by kubernetes-ca. Note: kubeadm does not manage this certificate — it is only created in HardWay mode.
- HardWay
- Kubeadm
Environment variables
export MACHINE_LOCAL_ADDRESS=$(ip -4 addr show scope global | awk '/inet/ {print $2; exit}' | cut -d/ -f1)
Working directory
mkdir -p /etc/kubernetes/pki
mkdir -p /etc/kubernetes/openssl/csr
Configuration
cat <<EOF > /etc/kubernetes/openssl/scheduler-server.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
req_extensions = req_ext
[ req_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = kube-scheduler
DNS.2 = kube-scheduler.kube-system
DNS.3 = kube-scheduler.kube-system.svc
IP.1 = 127.0.0.1
IP.2 = 0:0:0:0:0:0:0:1
IP.3 = ${MACHINE_LOCAL_ADDRESS}
[ dn ]
CN = "system:kube-scheduler-server"
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=digitalSignature,keyEncipherment
extendedKeyUsage=serverAuth
subjectAltName=@alt_names
EOF
Private key generation
openssl genrsa \
-out /etc/kubernetes/pki/scheduler-server.key 2048
CSR generation
openssl req \
-new \
-key /etc/kubernetes/pki/scheduler-server.key \
-out /etc/kubernetes/openssl/csr/scheduler-server.csr \
-config /etc/kubernetes/openssl/scheduler-server.conf
CSR signing
openssl x509 \
-req \
-days 365 \
-sha256 \
-outform PEM \
-CA /etc/kubernetes/pki/ca.crt \
-CAkey /etc/kubernetes/pki/ca.key \
-CAcreateserial \
-in /etc/kubernetes/openssl/csr/scheduler-server.csr \
-out /etc/kubernetes/pki/scheduler-server.crt \
-extensions v3_ext \
-extfile /etc/kubernetes/openssl/scheduler-server.conf
Certificate readiness check
/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/pki/scheduler-server.crt
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
scheduler-server Oct 22, 2025 22:06 UTC 364d kubernetes no
Please note that kubeadm does not manage this certificate; use the HardWay steps.
Creating application certificates
● Required
- Kubelet Server
- API -> Etcd
- API -> Kubelet
- API Server
- Proxy -> API
- Etcd Client
- Etcd Server
- Etcd Peer
- Controller server
- Scheduler server
Kubelet server
- HardWay
- Kubeadm
Environment variables
export CLUSTER_NAME=my-first-cluster
export BASE_DOMAIN=example.com
export CLUSTER_DOMAIN=cluster.local
export FULL_HOST_NAME=${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}
export MACHINE_LOCAL_ADDRESS=$(ip -4 addr show scope global | awk '/inet/ {print $2; exit}' | cut -d/ -f1)
Working directory
mkdir -p /etc/kubernetes/pki
mkdir -p /etc/kubernetes/openssl/csr
mkdir -p /var/lib/kubelet/pki
Configuration
cat <<EOF > /etc/kubernetes/openssl/kubelet-server.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
req_extensions = req_ext
[ req_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = localhost
DNS.2 = ${HOST_NAME}
DNS.3 = ${FULL_HOST_NAME}
IP.1 = 127.0.0.1
IP.2 = 0:0:0:0:0:0:0:1
IP.3 = ${MACHINE_LOCAL_ADDRESS}
[ dn ]
CN = "system:node:${FULL_HOST_NAME}"
O = "system:nodes"
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=digitalSignature,keyEncipherment
extendedKeyUsage=serverAuth
subjectAltName=@alt_names
EOF
Private key generation
openssl genrsa \
-out /var/lib/kubelet/pki/kubelet-server-key.pem 2048
CSR generation
openssl req \
-new \
-key /var/lib/kubelet/pki/kubelet-server-key.pem \
-out /etc/kubernetes/openssl/csr/kubelet-server.csr \
-config /etc/kubernetes/openssl/kubelet-server.conf
CSR signing
openssl x509 \
-req \
-days 365 \
-sha256 \
-outform PEM \
-CA /etc/kubernetes/pki/ca.crt \
-CAkey /etc/kubernetes/pki/ca.key \
-CAcreateserial \
-in /etc/kubernetes/openssl/csr/kubelet-server.csr \
-out /var/lib/kubelet/pki/kubelet-server.pem \
-extensions v3_ext \
-extfile /etc/kubernetes/openssl/kubelet-server.conf
BUNDLE="/var/lib/kubelet/pki/kubelet-server-$(date '+%Y-%m-%d-%H-%M-%S').pem"
cat /var/lib/kubelet/pki/kubelet-server.pem /var/lib/kubelet/pki/kubelet-server-key.pem > "$BUNDLE"
ln -sf "$BUNDLE" /var/lib/kubelet/pki/kubelet-server-current.pem
Certificate readiness check
/etc/kubernetes/openssl/cert-report.sh /var/lib/kubelet/pki/kubelet-server.pem
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
kubelet-server-current Oct 22, 2025 22:06 UTC 364d kubernetes no
Please note: during the Join phase, you cannot choose which certificates to generate — kubeadm creates them all at once, in full.
Certificate generation
kubeadm join phase control-plane-prepare certs \
--config=/var/run/kubeadm/kubeadm.yaml
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-1.my-first-cluster.example.com master-3.my-first-cluster.example.com] and IPs [217.114.0.145 127.0.0.1 ::1 31.129.111.153]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-1.my-first-cluster.example.com master-3.my-first-cluster.example.com] and IPs [217.114.0.145 127.0.0.1 ::1 31.129.111.153]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [api.my-first-cluster.example.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.my-first-cluster.example.com master-3.my-first-cluster.example.com] and IPs [29.64.0.1 217.114.0.145 31.129.111.153 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
Certificate readiness check
kubeadm certs check-expiration
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
kubelet-server-current Oct 22, 2025 22:06 UTC 364d kubernetes no
K8S-API client > Etcd server
- HardWay
- Kubeadm
Working directory
mkdir -p /etc/kubernetes/pki
mkdir -p /etc/kubernetes/openssl/csr
Configuration
cat <<EOF > /etc/kubernetes/openssl/apiserver-etcd-client.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[ dn ]
CN = kube-apiserver-etcd-client
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=digitalSignature,keyEncipherment
extendedKeyUsage=clientAuth
EOF
Private key generation
openssl genrsa \
-out /etc/kubernetes/pki/apiserver-etcd-client.key 2048
CSR generation
openssl req \
-new \
-key /etc/kubernetes/pki/apiserver-etcd-client.key \
-out /etc/kubernetes/openssl/csr/apiserver-etcd-client.csr \
-config /etc/kubernetes/openssl/apiserver-etcd-client.conf
CSR signing
openssl x509 \
-req \
-days 365 \
-sha256 \
-CA /etc/kubernetes/pki/etcd/ca.crt \
-CAkey /etc/kubernetes/pki/etcd/ca.key \
-CAcreateserial \
-in /etc/kubernetes/openssl/csr/apiserver-etcd-client.csr \
-out /etc/kubernetes/pki/apiserver-etcd-client.crt \
-extensions v3_ext \
-extfile /etc/kubernetes/openssl/apiserver-etcd-client.conf
Certificate readiness check
/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/pki/apiserver-etcd-client.crt
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
apiserver-etcd-client Oct 22, 2025 22:06 UTC 364d etcd-ca no
Please note: during the Join phase, you cannot choose which certificates to generate — kubeadm creates them all at once, in full.
Certificate generation
kubeadm join phase control-plane-prepare certs \
--config=/var/run/kubeadm/kubeadm.yaml
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-1.my-first-cluster.example.com master-3.my-first-cluster.example.com] and IPs [217.114.0.145 127.0.0.1 ::1 31.129.111.153]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-1.my-first-cluster.example.com master-3.my-first-cluster.example.com] and IPs [217.114.0.145 127.0.0.1 ::1 31.129.111.153]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [api.my-first-cluster.example.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.my-first-cluster.example.com master-3.my-first-cluster.example.com] and IPs [29.64.0.1 217.114.0.145 31.129.111.153 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
Certificate readiness check
kubeadm certs check-expiration
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
apiserver-etcd-client Oct 22, 2025 22:06 UTC 364d etcd-ca no
K8S-API client > Kubelet server
- HardWay
- Kubeadm
Working directory
mkdir -p /etc/kubernetes/pki
mkdir -p /etc/kubernetes/openssl/csr
Configuration
cat <<EOF > /etc/kubernetes/openssl/apiserver-kubelet-client.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[ dn ]
CN = kube-apiserver-kubelet-client
O = kubeadm:cluster-admins
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=critical,digitalSignature,keyEncipherment
extendedKeyUsage=clientAuth
EOF
Private key generation
openssl genrsa \
-out /etc/kubernetes/pki/apiserver-kubelet-client.key 2048
CSR generation
openssl req \
-new \
-key /etc/kubernetes/pki/apiserver-kubelet-client.key \
-out /etc/kubernetes/openssl/csr/apiserver-kubelet-client.csr \
-config /etc/kubernetes/openssl/apiserver-kubelet-client.conf
CSR signing
openssl x509 \
-req \
-days 365 \
-sha256 \
-CA /etc/kubernetes/pki/ca.crt \
-CAkey /etc/kubernetes/pki/ca.key \
-CAcreateserial \
-in /etc/kubernetes/openssl/csr/apiserver-kubelet-client.csr \
-out /etc/kubernetes/pki/apiserver-kubelet-client.crt \
-extensions v3_ext \
-extfile /etc/kubernetes/openssl/apiserver-kubelet-client.conf
Certificate readiness check
/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/pki/apiserver-kubelet-client.crt
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
apiserver-kubelet-client Oct 22, 2025 22:06 UTC 364d ca no
Please note: during the Join phase, you cannot choose which certificates to generate — kubeadm creates them all at once, in full.
Certificate generation
kubeadm join phase control-plane-prepare certs \
--config=/var/run/kubeadm/kubeadm.yaml
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-1.my-first-cluster.example.com master-3.my-first-cluster.example.com] and IPs [217.114.0.145 127.0.0.1 ::1 31.129.111.153]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-1.my-first-cluster.example.com master-3.my-first-cluster.example.com] and IPs [217.114.0.145 127.0.0.1 ::1 31.129.111.153]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [api.my-first-cluster.example.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.my-first-cluster.example.com master-3.my-first-cluster.example.com] and IPs [29.64.0.1 217.114.0.145 31.129.111.153 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
Certificate readiness check
kubeadm certs check-expiration
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
apiserver-kubelet-client Oct 22, 2025 22:06 UTC 364d ca no
K8S-API server
- HardWay
- Kubeadm
Environment variables
export CLUSTER_NAME=my-first-cluster
export BASE_DOMAIN=example.com
export CLUSTER_DOMAIN=cluster.local
export FULL_HOST_NAME=${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}
export CLUSTER_API_ENDPOINT=api.${CLUSTER_NAME}.${BASE_DOMAIN}
Working directory
mkdir -p /etc/kubernetes/pki
mkdir -p /etc/kubernetes/openssl/csr
Configuration
cat <<EOF > /etc/kubernetes/openssl/apiserver.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
req_extensions = req_ext
[ req_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.${CLUSTER_DOMAIN}
DNS.5 = ${FULL_HOST_NAME}
DNS.6 = ${CLUSTER_API_ENDPOINT}
IP.1 = $(ip -4 addr show scope global | awk '/inet/ {print $2; exit}' | cut -d/ -f1)
IP.2 = 127.0.0.1
[ dn ]
CN = kube-apiserver
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=critical,digitalSignature,keyEncipherment
extendedKeyUsage=serverAuth
subjectAltName=@alt_names
EOF
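The `IP.1` value above comes from an `ip`/`awk`/`cut` pipeline evaluated inside the heredoc. Wrapped in a small function, the extraction logic can be sanity-checked against a captured sample line without touching a live interface (the sample address is illustrative):

```shell
# pick_global_ipv4 reads `ip -4 addr` output on stdin and prints
# the first inet address with its prefix length stripped
pick_global_ipv4() {
  awk '/inet/ {print $2; exit}' | cut -d/ -f1
}

# On a real node: ip -4 addr show scope global | pick_global_ipv4
# Here we feed it a captured sample line instead:
SAMPLE='    inet 217.114.0.145/32 scope global eth0'
ADDR=$(printf '%s\n' "$SAMPLE" | pick_global_ipv4)
echo "detected: $ADDR"
```

If the machine has no global IPv4 address, the pipeline produces an empty string and the generated config gets a blank `IP.1`, so it is worth checking the variable is non-empty before writing the file.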
Private key generation
openssl genrsa \
-out /etc/kubernetes/pki/apiserver.key 2048
CSR generation
openssl req \
-new \
-key /etc/kubernetes/pki/apiserver.key \
-out /etc/kubernetes/openssl/csr/apiserver.csr \
-config /etc/kubernetes/openssl/apiserver.conf
CSR signing
openssl x509 \
-req \
-days 365 \
-sha256 \
-CA /etc/kubernetes/pki/ca.crt \
-CAkey /etc/kubernetes/pki/ca.key \
-CAcreateserial \
-in /etc/kubernetes/openssl/csr/apiserver.csr \
-out /etc/kubernetes/pki/apiserver.crt \
-extensions v3_ext \
-extfile /etc/kubernetes/openssl/apiserver.conf
Certificate readiness check
/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/pki/apiserver.crt
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
apiserver Oct 22, 2025 22:06 UTC 364d ca no
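For a serving certificate, it also pays to confirm that the SANs actually embedded in the certificate match the `alt_names` list from the config. A minimal sketch using a throwaway self-signed certificate with apiserver-style names (names are examples; `-addext` and `-ext` require OpenSSL 1.1.1 or newer):

```shell
set -eu
WORK=$(mktemp -d)
# Self-signed stand-in carrying apiserver-style SANs
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=kube-apiserver" \
  -addext "subjectAltName=DNS:kubernetes,DNS:kubernetes.default,IP:127.0.0.1" \
  -keyout "$WORK/tls.key" -out "$WORK/tls.crt" 2>/dev/null
# -ext prints just the requested extension
SANS=$(openssl x509 -in "$WORK/tls.crt" -noout -ext subjectAltName)
echo "$SANS"
rm -rf "$WORK"
```

Run the same `openssl x509 -noout -ext subjectAltName` against `/etc/kubernetes/pki/apiserver.crt` to compare with the list kubeadm reports.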
Please note: during the Join phase, you cannot choose which certificates to generate — kubeadm creates them all at once, in full.
Certificate generation
kubeadm join phase control-plane-prepare certs \
--config=/var/run/kubeadm/kubeadm.yaml
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-1.my-first-cluster.example.com master-3.my-first-cluster.example.com] and IPs [217.114.0.145 127.0.0.1 ::1 31.129.111.153]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-1.my-first-cluster.example.com master-3.my-first-cluster.example.com] and IPs [217.114.0.145 127.0.0.1 ::1 31.129.111.153]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [api.my-first-cluster.example.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.my-first-cluster.example.com master-3.my-first-cluster.example.com] and IPs [29.64.0.1 217.114.0.145 31.129.111.153 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
Certificate readiness check
kubeadm certs check-expiration
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
apiserver Oct 22, 2025 22:06 UTC 364d ca no
FrontProxy client > K8S-API
- HardWay
- Kubeadm
Working directory
mkdir -p /etc/kubernetes/pki
mkdir -p /etc/kubernetes/openssl/csr
Configuration
cat <<EOF > /etc/kubernetes/openssl/front-proxy-client.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[ dn ]
CN = front-proxy-client
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=critical,digitalSignature,keyEncipherment
extendedKeyUsage=clientAuth
EOF
Private key generation
openssl genrsa \
-out /etc/kubernetes/pki/front-proxy-client.key 2048
CSR generation
openssl req \
-new \
-key /etc/kubernetes/pki/front-proxy-client.key \
-out /etc/kubernetes/openssl/csr/front-proxy-client.csr \
-config /etc/kubernetes/openssl/front-proxy-client.conf
CSR signing
openssl x509 \
-req \
-days 365 \
-sha256 \
-CA /etc/kubernetes/pki/front-proxy-ca.crt \
-CAkey /etc/kubernetes/pki/front-proxy-ca.key \
-CAcreateserial \
-in /etc/kubernetes/openssl/csr/front-proxy-client.csr \
-out /etc/kubernetes/pki/front-proxy-client.crt \
-extensions v3_ext \
-extfile /etc/kubernetes/openssl/front-proxy-client.conf
Certificate readiness check
/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/pki/front-proxy-client.crt
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
front-proxy-client Oct 22, 2025 22:06 UTC 364d front-proxy-ca no
Please note: during the Join phase, you cannot choose which certificates to generate — kubeadm creates them all at once, in full.
Certificate generation
kubeadm join phase control-plane-prepare certs \
--config=/var/run/kubeadm/kubeadm.yaml
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-1.my-first-cluster.example.com master-3.my-first-cluster.example.com] and IPs [217.114.0.145 127.0.0.1 ::1 31.129.111.153]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-1.my-first-cluster.example.com master-3.my-first-cluster.example.com] and IPs [217.114.0.145 127.0.0.1 ::1 31.129.111.153]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [api.my-first-cluster.example.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.my-first-cluster.example.com master-3.my-first-cluster.example.com] and IPs [29.64.0.1 217.114.0.145 31.129.111.153 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
Certificate readiness check
kubeadm certs check-expiration
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
front-proxy-client Oct 22, 2025 22:06 UTC 364d front-proxy-ca no
Etcd client > Etcd
- HardWay
- Kubeadm
Working directory
mkdir -p /etc/kubernetes/pki/etcd
mkdir -p /etc/kubernetes/openssl/csr
Configuration
cat <<EOF > /etc/kubernetes/openssl/healthcheck-client.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[ dn ]
CN = kube-etcd-healthcheck-client
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=critical,digitalSignature,keyEncipherment
extendedKeyUsage=clientAuth
EOF
Private key generation
openssl genrsa \
-out /etc/kubernetes/pki/etcd/healthcheck-client.key 2048
CSR generation
openssl req \
-new \
-key /etc/kubernetes/pki/etcd/healthcheck-client.key \
-out /etc/kubernetes/openssl/csr/etcd-client.csr \
-config /etc/kubernetes/openssl/healthcheck-client.conf
CSR signing
openssl x509 \
-req \
-days 365 \
-sha256 \
-CA /etc/kubernetes/pki/etcd/ca.crt \
-CAkey /etc/kubernetes/pki/etcd/ca.key \
-CAcreateserial \
-in /etc/kubernetes/openssl/csr/etcd-client.csr \
-out /etc/kubernetes/pki/etcd/healthcheck-client.crt \
-extensions v3_ext \
-extfile /etc/kubernetes/openssl/healthcheck-client.conf
Certificate readiness check
/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/pki/etcd/healthcheck-client.crt
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
etcd-healthcheck-client Oct 22, 2025 22:06 UTC 364d etcd-ca no
Please note: during the Join phase, you cannot choose which certificates to generate — kubeadm creates them all at once, in full.
Certificate generation
kubeadm join phase control-plane-prepare certs \
--config=/var/run/kubeadm/kubeadm.yaml
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-1.my-first-cluster.example.com master-3.my-first-cluster.example.com] and IPs [217.114.0.145 127.0.0.1 ::1 31.129.111.153]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-1.my-first-cluster.example.com master-3.my-first-cluster.example.com] and IPs [217.114.0.145 127.0.0.1 ::1 31.129.111.153]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [api.my-first-cluster.example.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.my-first-cluster.example.com master-3.my-first-cluster.example.com] and IPs [29.64.0.1 217.114.0.145 31.129.111.153 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
Certificate readiness check
kubeadm certs check-expiration
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
etcd-healthcheck-client Oct 22, 2025 22:06 UTC 364d etcd-ca no
Etcd server
- HardWay
- Kubeadm
Environment variables
export CLUSTER_NAME=my-first-cluster
export BASE_DOMAIN=example.com
export FULL_HOST_NAME=${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}
export MACHINE_LOCAL_ADDRESS=$(ip -4 addr show scope global | awk '/inet/ {print $2; exit}' | cut -d/ -f1)
Working directory
mkdir -p /etc/kubernetes/pki/etcd
mkdir -p /etc/kubernetes/openssl/csr
Configuration
cat <<EOF > /etc/kubernetes/openssl/etcd-server.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
req_extensions = req_ext
[ req_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = localhost
DNS.2 = ${HOST_NAME}
DNS.3 = ${FULL_HOST_NAME}
IP.1 = 127.0.0.1
IP.2 = ::1
IP.3 = ${MACHINE_LOCAL_ADDRESS}
[ dn ]
CN = ${FULL_HOST_NAME}
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=critical,digitalSignature,keyEncipherment
extendedKeyUsage=serverAuth,clientAuth
subjectAltName=@alt_names
EOF
Private key generation
openssl genrsa \
-out /etc/kubernetes/pki/etcd/server.key 2048
CSR generation
openssl req \
-new \
-key /etc/kubernetes/pki/etcd/server.key \
-out /etc/kubernetes/openssl/csr/etcd-server.csr \
-config /etc/kubernetes/openssl/etcd-server.conf
CSR signing
openssl x509 \
-req \
-days 365 \
-sha256 \
-CA /etc/kubernetes/pki/etcd/ca.crt \
-CAkey /etc/kubernetes/pki/etcd/ca.key \
-CAcreateserial \
-in /etc/kubernetes/openssl/csr/etcd-server.csr \
-out /etc/kubernetes/pki/etcd/server.crt \
-extensions v3_ext \
-extfile /etc/kubernetes/openssl/etcd-server.conf
Certificate readiness check
/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/pki/etcd/server.crt
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
etcd-server Oct 22, 2025 22:06 UTC 364d etcd-ca no
Please note: during the Join phase, you cannot choose which certificates to generate — kubeadm creates them all at once, in full.
Certificate generation
kubeadm join phase control-plane-prepare certs \
--config=/var/run/kubeadm/kubeadm.yaml
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-1.my-first-cluster.example.com master-3.my-first-cluster.example.com] and IPs [217.114.0.145 127.0.0.1 ::1 31.129.111.153]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-1.my-first-cluster.example.com master-3.my-first-cluster.example.com] and IPs [217.114.0.145 127.0.0.1 ::1 31.129.111.153]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [api.my-first-cluster.example.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.my-first-cluster.example.com master-3.my-first-cluster.example.com] and IPs [29.64.0.1 217.114.0.145 31.129.111.153 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
Certificate readiness check
kubeadm certs check-expiration
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
etcd-server Oct 22, 2025 22:06 UTC 364d etcd-ca no
Etcd peer > Etcd
- HardWay
- Kubeadm
Environment variables
export CLUSTER_NAME=my-first-cluster
export BASE_DOMAIN=example.com
export FULL_HOST_NAME=${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}
export MACHINE_LOCAL_ADDRESS=$(ip -4 addr show scope global | awk '/inet/ {print $2; exit}' | cut -d/ -f1)
Working directory
mkdir -p /etc/kubernetes/pki/etcd
mkdir -p /etc/kubernetes/openssl/csr
Configuration
cat <<EOF > /etc/kubernetes/openssl/etcd-peer.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
req_extensions = req_ext
[ req_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = localhost
DNS.2 = ${HOST_NAME}
DNS.3 = ${FULL_HOST_NAME}
IP.1 = 127.0.0.1
IP.2 = ::1
IP.3 = ${MACHINE_LOCAL_ADDRESS}
[ dn ]
CN = ${FULL_HOST_NAME}
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=critical,digitalSignature,keyEncipherment
extendedKeyUsage=serverAuth,clientAuth
subjectAltName=@alt_names
EOF
Private key generation
openssl genrsa \
-out /etc/kubernetes/pki/etcd/peer.key 2048
CSR generation
openssl req \
-new \
-key /etc/kubernetes/pki/etcd/peer.key \
-out /etc/kubernetes/openssl/csr/etcd-peer.csr \
-config /etc/kubernetes/openssl/etcd-peer.conf
CSR signing
openssl x509 \
-req \
-days 365 \
-sha256 \
-CA /etc/kubernetes/pki/etcd/ca.crt \
-CAkey /etc/kubernetes/pki/etcd/ca.key \
-CAcreateserial \
-in /etc/kubernetes/openssl/csr/etcd-peer.csr \
-out /etc/kubernetes/pki/etcd/peer.crt \
-extensions v3_ext \
-extfile /etc/kubernetes/openssl/etcd-peer.conf
Certificate readiness check
/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/pki/etcd/peer.crt
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
etcd-peer Oct 22, 2025 22:06 UTC 364d etcd-ca no
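etcd peers both accept connections from other members and dial out to them, which is why the config above sets `extendedKeyUsage=serverAuth,clientAuth`. A quick sketch (throwaway self-signed certificate, names illustrative) showing both usages landing in the issued certificate:

```shell
set -eu
WORK=$(mktemp -d)
# Self-signed stand-in for an etcd peer certificate with dual EKU
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo-etcd-peer" \
  -addext "extendedKeyUsage=serverAuth,clientAuth" \
  -keyout "$WORK/peer.key" -out "$WORK/peer.crt" 2>/dev/null
# Print only the extendedKeyUsage extension (OpenSSL 1.1.1+)
EKU=$(openssl x509 -in "$WORK/peer.crt" -noout -ext extendedKeyUsage)
echo "$EKU"
rm -rf "$WORK"
```

A peer certificate missing either usage typically surfaces as TLS handshake failures only in one direction, which is confusing to debug; checking the EKU up front avoids that.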
Please note: during the Join phase, you cannot choose which certificates to generate — kubeadm creates them all at once, in full.
Certificate generation
kubeadm join phase control-plane-prepare certs \
--config=/var/run/kubeadm/kubeadm.yaml
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-1.my-first-cluster.example.com master-3.my-first-cluster.example.com] and IPs [217.114.0.145 127.0.0.1 ::1 31.129.111.153]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-1.my-first-cluster.example.com master-3.my-first-cluster.example.com] and IPs [217.114.0.145 127.0.0.1 ::1 31.129.111.153]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [api.my-first-cluster.example.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.my-first-cluster.example.com master-3.my-first-cluster.example.com] and IPs [29.64.0.1 217.114.0.145 31.129.111.153 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
Certificate readiness check
kubeadm certs check-expiration
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
etcd-peer Oct 22, 2025 22:06 UTC 364d etcd-ca no
Controller server
- HardWay
- Kubeadm
Environment variables
export MACHINE_LOCAL_ADDRESS=$(ip -4 addr show scope global | awk '/inet/ {print $2; exit}' | cut -d/ -f1)
Working directory
mkdir -p /etc/kubernetes/pki
mkdir -p /etc/kubernetes/openssl/csr
Configuration
cat <<EOF > /etc/kubernetes/openssl/controller-manager-server.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
req_extensions = req_ext
[ req_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = kube-controller-manager
DNS.2 = kube-controller-manager.kube-system
DNS.3 = kube-controller-manager.kube-system.svc
IP.1 = 127.0.0.1
IP.2 = ::1
IP.3 = ${MACHINE_LOCAL_ADDRESS}
[ dn ]
CN = "system:kube-controller-manager-server"
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=critical,digitalSignature,keyEncipherment
extendedKeyUsage=serverAuth
subjectAltName=@alt_names
EOF
Private key generation
openssl genrsa \
-out /etc/kubernetes/pki/controller-manager-server.key 2048
CSR generation
openssl req \
-new \
-key /etc/kubernetes/pki/controller-manager-server.key \
-out /etc/kubernetes/openssl/csr/controller-manager-server.csr \
-config /etc/kubernetes/openssl/controller-manager-server.conf
CSR signing
openssl x509 \
-req \
-days 365 \
-sha256 \
-outform PEM \
-CA /etc/kubernetes/pki/ca.crt \
-CAkey /etc/kubernetes/pki/ca.key \
-CAcreateserial \
-in /etc/kubernetes/openssl/csr/controller-manager-server.csr \
-out /etc/kubernetes/pki/controller-manager-server.crt \
-extensions v3_ext \
-extfile /etc/kubernetes/openssl/controller-manager-server.conf
Certificate readiness check
/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/pki/controller-manager-server.crt
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
controller-manager-server Oct 22, 2025 22:06 UTC 364d kubernetes no
Please note that kubeadm does not manage these certificates; use the HardWay instructions instead.
Scheduler server
- HardWay
- Kubeadm
Environment variables
export MACHINE_LOCAL_ADDRESS=$(ip -4 addr show scope global | awk '/inet/ {print $2; exit}' | cut -d/ -f1)
Working directory
mkdir -p /etc/kubernetes/pki
mkdir -p /etc/kubernetes/openssl/csr
Configuration
cat <<EOF > /etc/kubernetes/openssl/scheduler-server.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
req_extensions = req_ext
[ req_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = kube-scheduler
DNS.2 = kube-scheduler.kube-system
DNS.3 = kube-scheduler.kube-system.svc
IP.1 = 127.0.0.1
IP.2 = ::1
IP.3 = ${MACHINE_LOCAL_ADDRESS}
[ dn ]
CN = "system:kube-scheduler-server"
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=critical,digitalSignature,keyEncipherment
extendedKeyUsage=serverAuth
subjectAltName=@alt_names
EOF
Private key generation
openssl genrsa \
-out /etc/kubernetes/pki/scheduler-server.key 2048
CSR generation
openssl req \
-new \
-key /etc/kubernetes/pki/scheduler-server.key \
-out /etc/kubernetes/openssl/csr/scheduler-server.csr \
-config /etc/kubernetes/openssl/scheduler-server.conf
CSR signing
openssl x509 \
-req \
-days 365 \
-sha256 \
-outform PEM \
-CA /etc/kubernetes/pki/ca.crt \
-CAkey /etc/kubernetes/pki/ca.key \
-CAcreateserial \
-in /etc/kubernetes/openssl/csr/scheduler-server.csr \
-out /etc/kubernetes/pki/scheduler-server.crt \
-extensions v3_ext \
-extfile /etc/kubernetes/openssl/scheduler-server.conf
Certificate readiness check
/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/pki/scheduler-server.crt
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
scheduler-server Oct 22, 2025 22:06 UTC 364d kubernetes no
Please note that kubeadm does not manage these certificates; use the HardWay instructions instead.
14. Creating the ServiceAccount Signing Key
In Kubernetes, a ServiceAccount is a mechanism that allows applications within the cluster to authenticate when accessing the API server. The private key specified in kube-apiserver and kube-controller-manager is used to sign the tokens of these accounts. This ensures secure and verifiable interaction between services and enables granular access control.
This section creates or connects the key used by Kubernetes to sign ServiceAccount tokens.
- Init
- Join
Creating ServiceAccount signing key
● Required
- HardWay
- Kubeadm
openssl genpkey \
-algorithm RSA \
-out /etc/kubernetes/pki/sa.key \
-pkeyopt rsa_keygen_bits:2048
openssl rsa \
-pubout \
-in /etc/kubernetes/pki/sa.key \
-out /etc/kubernetes/pki/sa.pub
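What makes this pair usable for tokens is the asymmetric sign/verify property: anything signed with `sa.key` must verify against `sa.pub`. A self-contained sketch of that roundtrip in a scratch directory (the files are stand-ins for the real `/etc/kubernetes/pki/sa.*`):

```shell
set -eu
WORK=$(mktemp -d)
# Stand-ins for /etc/kubernetes/pki/sa.key and sa.pub
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 \
  -out "$WORK/sa.key" 2>/dev/null
openssl rsa -pubout -in "$WORK/sa.key" -out "$WORK/sa.pub" 2>/dev/null
# Sign a payload with the private key, verify with the public key
printf 'example-token-payload' > "$WORK/payload"
openssl dgst -sha256 -sign "$WORK/sa.key" \
  -out "$WORK/payload.sig" "$WORK/payload"
if openssl dgst -sha256 -verify "$WORK/sa.pub" \
     -signature "$WORK/payload.sig" "$WORK/payload" >/dev/null 2>&1; then
  VERIFIED=yes
else
  VERIFIED=no
fi
echo "signature verified: $VERIFIED"
rm -rf "$WORK"
```

This is the same relationship the API server relies on: it signs ServiceAccount JWTs with the private key, and any holder of the public key can verify them.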
kubeadm init phase certs sa
After executing the commands, we get the following output.
[certs] Generating "sa" key and public key
Connecting ServiceAccount signing key
● Required
The join phase does not generate a key, but uses the key obtained through the CA download phase.
Make sure you have completed the step:
15*. Creating All Certificates
This section describes the generation of all certificates.
If you have not performed manual certificate generation, use this block to automatically create the necessary files.
- Init
- Join
Generation of all certificates
● Optional
Certificate generation
kubeadm init phase certs all \
--config=/var/run/kubeadm/kubeadm.yaml
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [api.my-first-cluster.example.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.my-first-cluster.example.com pylcozuscb] and IPs [29.64.0.1 31.129.111.153 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-1.my-first-cluster.example.com pylcozuscb] and IPs [31.129.111.153 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-1.my-first-cluster.example.com pylcozuscb] and IPs [31.129.111.153 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
Generation of all certificates
● Optional
This section depends on the following sections:
Please note: during the Join phase, you cannot choose which certificates to generate — kubeadm creates them all at once, in full.
Certificate generation
kubeadm join phase control-plane-prepare certs \
--config=/var/run/kubeadm/kubeadm.yaml
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-1.my-first-cluster.example.com master-3.my-first-cluster.example.com] and IPs [217.114.0.145 127.0.0.1 ::1 31.129.111.153]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-1.my-first-cluster.example.com master-3.my-first-cluster.example.com] and IPs [217.114.0.145 127.0.0.1 ::1 31.129.111.153]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [api.my-first-cluster.example.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.my-first-cluster.example.com master-3.my-first-cluster.example.com] and IPs [29.64.0.1 217.114.0.145 31.129.111.153 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
16. Creating kubeconfig Configurations
Kubeconfig is a configuration file that provides access to a Kubernetes cluster. It contains information about API servers, user credentials (such as tokens or certificates), and contexts that define which cluster and user are being used. Kubeconfig provides authentication and authorization when interacting with the cluster through kubectl or other clients, allowing secure management of cluster resources and settings.
We create kubeconfig files for components and users. This ensures a secure and controlled connection to the API server.
- Init
- Join
Creating kubeconfig configurations and certificates
● Required
- Super Admin
- Admin
- Controller
- Scheduler
- Kubelet
Super Admin
- HardWay
- Kubeadm
Working directory
mkdir -p /etc/kubernetes/pki
mkdir -p /etc/kubernetes/openssl/csr
mkdir -p /etc/kubernetes/kubeconfig
Configuration
cat <<EOF > /etc/kubernetes/openssl/super-admin.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[ dn ]
CN = kubernetes-super-admin
O = system:masters
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=critical,digitalSignature,keyEncipherment
extendedKeyUsage=clientAuth
EOF
Private key generation
openssl genrsa \
-out /etc/kubernetes/kubeconfig/super-admin.key 2048
CSR generation
openssl req \
-new \
-key /etc/kubernetes/kubeconfig/super-admin.key \
-out /etc/kubernetes/openssl/csr/super-admin.csr \
-config /etc/kubernetes/openssl/super-admin.conf
CSR signing
openssl x509 \
-req \
-days 365 \
-sha256 \
-CA /etc/kubernetes/pki/ca.crt \
-CAkey /etc/kubernetes/pki/ca.key \
-CAcreateserial \
-in /etc/kubernetes/openssl/csr/super-admin.csr \
-out /etc/kubernetes/kubeconfig/super-admin.crt \
-extensions v3_ext \
-extfile /etc/kubernetes/openssl/super-admin.conf
Kubeconfig setup for super-admin
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.crt \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=/etc/kubernetes/super-admin.conf
kubectl config set-credentials kubernetes-super-admin \
--client-certificate=/etc/kubernetes/kubeconfig/super-admin.crt \
--client-key=/etc/kubernetes/kubeconfig/super-admin.key \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/super-admin.conf
kubectl config set-context default \
--cluster=kubernetes \
--user=kubernetes-super-admin \
--kubeconfig=/etc/kubernetes/super-admin.conf
kubectl config use-context default \
--kubeconfig=/etc/kubernetes/super-admin.conf
Certificate readiness check
/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/kubeconfig/super-admin.crt
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
super-admin.conf Oct 22, 2025 22:06 UTC 364d kubernetes no
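To confirm that a freshly signed client certificate actually chains back to the cluster CA, `openssl verify` can be used against the CA file. The sketch below is self-contained: a throwaway CA and client certificate in a temporary directory stand in for `ca.crt` and `super-admin.crt`, so the paths are illustrative, not the cluster's real ones.

```shell
# Reproduce the sign-and-verify cycle with a throwaway CA in a temp directory.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo-ca" -keyout "$dir/ca.key" -out "$dir/ca.crt" 2>/dev/null
openssl req -new -newkey rsa:2048 -nodes \
  -subj "/CN=demo-client" -keyout "$dir/client.key" -out "$dir/client.csr" 2>/dev/null
openssl x509 -req -days 1 -CA "$dir/ca.crt" -CAkey "$dir/ca.key" -CAcreateserial \
  -in "$dir/client.csr" -out "$dir/client.crt" 2>/dev/null
# openssl verify prints "<path>: OK" when the chain is valid
result=$(openssl verify -CAfile "$dir/ca.crt" "$dir/client.crt")
echo "$result"
rm -rf "$dir"
```

Against the real cluster files, the equivalent check is `openssl verify -CAfile /etc/kubernetes/pki/ca.crt <certificate>`.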
Certificate generation
kubeadm init phase kubeconfig super-admin \
--config=/var/run/kubeadm/kubeadm.yaml
After executing the commands, we get the following output.
[kubeconfig] Writing "super-admin.conf" kubeconfig file
Certificate readiness check
kubeadm certs check-expiration
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
super-admin.conf Oct 22, 2025 22:06 UTC 364d kubernetes no
Admin
- HardWay
- Kubeadm
Working directory
mkdir -p /etc/kubernetes/pki
mkdir -p /etc/kubernetes/openssl/csr
mkdir -p /etc/kubernetes/kubeconfig
Configuration
cat <<EOF > /etc/kubernetes/openssl/admin.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[ dn ]
CN = kubernetes-admin
O = kubeadm:cluster-admins
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=digitalSignature,keyEncipherment
extendedKeyUsage=clientAuth
EOF
Private keys
openssl genrsa \
-out /etc/kubernetes/kubeconfig/admin.key 2048
CSR generation
openssl req \
-new \
-key /etc/kubernetes/kubeconfig/admin.key \
-out /etc/kubernetes/openssl/csr/admin.csr \
-config /etc/kubernetes/openssl/admin.conf
CSR signing
openssl x509 \
-req \
-days 365 \
-sha256 \
-CA /etc/kubernetes/pki/ca.crt \
-CAkey /etc/kubernetes/pki/ca.key \
-CAcreateserial \
-in /etc/kubernetes/openssl/csr/admin.csr \
-out /etc/kubernetes/kubeconfig/admin.crt \
-extensions v3_ext \
-extfile /etc/kubernetes/openssl/admin.conf
Kubeconfig setup for admin
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.crt \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=/etc/kubernetes/admin.conf
kubectl config set-credentials kubernetes-admin \
--client-certificate=/etc/kubernetes/kubeconfig/admin.crt \
--client-key=/etc/kubernetes/kubeconfig/admin.key \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/admin.conf
kubectl config set-context default \
--cluster=kubernetes \
--user=kubernetes-admin \
--kubeconfig=/etc/kubernetes/admin.conf
kubectl config use-context default \
--kubeconfig=/etc/kubernetes/admin.conf
Certificate readiness check
/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/kubeconfig/admin.crt
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf Oct 22, 2025 22:06 UTC 364d kubernetes no
Certificate generation
kubeadm init phase kubeconfig admin \
--config=/var/run/kubeadm/kubeadm.yaml
After executing the commands, we get the following output.
[kubeconfig] Writing "admin.conf" kubeconfig file
Certificate readiness check
kubeadm certs check-expiration
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf Oct 22, 2025 22:06 UTC 364d kubernetes no
Kube Controller Manager
Purpose: the kube-controller-manager client certificate authenticates the controller manager to the API server. It is embedded in the controller-manager.conf kubeconfig and used for every controller-manager request to kube-apiserver. Signed by kubernetes-ca.
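The identity the API server derives from a client certificate comes from its subject: CN becomes the user name and each O entry becomes a group. The subject of any certificate can be inspected with `openssl x509 -noout -subject`; the self-contained sketch below generates a throwaway self-signed certificate with the controller-manager CN purely to show the output shape.

```shell
# Throwaway self-signed cert carrying the controller-manager subject, for illustration.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=system:kube-controller-manager" \
  -keyout "$dir/cm.key" -out "$dir/cm.crt" 2>/dev/null
# The API server would authenticate this cert as user "system:kube-controller-manager".
subject=$(openssl x509 -in "$dir/cm.crt" -noout -subject)
echo "$subject"
rm -rf "$dir"
```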
- HardWay
- Kubeadm
Working directory
mkdir -p /etc/kubernetes/pki
mkdir -p /etc/kubernetes/openssl/csr
Configuration
cat <<EOF > /etc/kubernetes/openssl/controller-manager-client.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[ dn ]
CN = system:kube-controller-manager
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=digitalSignature,keyEncipherment
extendedKeyUsage=clientAuth
EOF
Private key generation
openssl genrsa \
-out /etc/kubernetes/kubeconfig/controller-manager-client-key.pem 2048
CSR generation
openssl req \
-new \
-key /etc/kubernetes/kubeconfig/controller-manager-client-key.pem \
-out /etc/kubernetes/openssl/csr/controller-manager-client.csr \
-config /etc/kubernetes/openssl/controller-manager-client.conf
CSR signing
openssl x509 \
-req \
-days 365 \
-sha256 \
-outform PEM \
-CA /etc/kubernetes/pki/ca.crt \
-CAkey /etc/kubernetes/pki/ca.key \
-CAcreateserial \
-in /etc/kubernetes/openssl/csr/controller-manager-client.csr \
-out /etc/kubernetes/kubeconfig/controller-manager-client.pem \
-extensions v3_ext \
-extfile /etc/kubernetes/openssl/controller-manager-client.conf
export CLUSTER_NAME="my-first-cluster"
export BASE_DOMAIN="example.com"
export CLUSTER_DOMAIN="cluster.local"
export FULL_HOST_NAME="${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}"
Kubeconfig setup for controller-manager-client
kubectl config set-cluster kubernetes \
--certificate-authority="/etc/kubernetes/pki/ca.crt" \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=/etc/kubernetes/controller-manager.conf
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=/etc/kubernetes/kubeconfig/controller-manager-client.pem \
--client-key=/etc/kubernetes/kubeconfig/controller-manager-client-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/controller-manager.conf
kubectl config set-context default \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=/etc/kubernetes/controller-manager.conf
kubectl config use-context default \
--kubeconfig=/etc/kubernetes/controller-manager.conf
Certificate readiness check
/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/kubeconfig/controller-manager-client.pem
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
controller-manager.conf Oct 22, 2025 22:06 UTC 364d kubernetes no
Certificate generation
kubeadm init phase kubeconfig controller-manager \
--config=/var/run/kubeadm/kubeadm.yaml
After executing the commands, we get the following output.
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
Certificate readiness check
kubeadm certs check-expiration
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
controller-manager.conf Oct 22, 2025 22:06 UTC 364d kubernetes no
Kube Scheduler
- HardWay
- Kubeadm
Working directory
mkdir -p /etc/kubernetes/pki
mkdir -p /etc/kubernetes/openssl/csr
Configuration
cat <<EOF > /etc/kubernetes/openssl/scheduler-client.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[ dn ]
CN = system:kube-scheduler
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=digitalSignature,keyEncipherment
extendedKeyUsage=clientAuth
EOF
Private key generation
openssl genrsa \
-out /etc/kubernetes/kubeconfig/scheduler-client-key.pem 2048
CSR generation
openssl req \
-new \
-key /etc/kubernetes/kubeconfig/scheduler-client-key.pem \
-out /etc/kubernetes/openssl/csr/scheduler-client.csr \
-config /etc/kubernetes/openssl/scheduler-client.conf
CSR signing
openssl x509 \
-req \
-days 365 \
-sha256 \
-outform PEM \
-CA /etc/kubernetes/pki/ca.crt \
-CAkey /etc/kubernetes/pki/ca.key \
-CAcreateserial \
-in /etc/kubernetes/openssl/csr/scheduler-client.csr \
-out /etc/kubernetes/kubeconfig/scheduler-client.pem \
-extensions v3_ext \
-extfile /etc/kubernetes/openssl/scheduler-client.conf
export CLUSTER_NAME="my-first-cluster"
export BASE_DOMAIN="example.com"
export CLUSTER_DOMAIN="cluster.local"
export FULL_HOST_NAME="${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}"
Kubeconfig creation instructions
kubectl config set-cluster kubernetes \
--certificate-authority="/etc/kubernetes/pki/ca.crt" \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=/etc/kubernetes/scheduler.conf
kubectl config set-credentials system:kube-scheduler \
--client-certificate=/etc/kubernetes/kubeconfig/scheduler-client.pem \
--client-key=/etc/kubernetes/kubeconfig/scheduler-client-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/scheduler.conf
kubectl config set-context default \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=/etc/kubernetes/scheduler.conf
kubectl config use-context default \
--kubeconfig=/etc/kubernetes/scheduler.conf
Certificate readiness check
/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/kubeconfig/scheduler-client.pem
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
scheduler.conf Oct 22, 2025 22:06 UTC 364d kubernetes no
Certificate generation
kubeadm init phase kubeconfig scheduler \
--config=/var/run/kubeadm/kubeadm.yaml
After executing the commands, we get the following output.
[kubeconfig] Writing "scheduler.conf" kubeconfig file
Certificate readiness check
kubeadm certs check-expiration
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
scheduler.conf Oct 22, 2025 22:06 UTC 364d kubernetes no
Kubelet client
Note: this certificate can also be issued through the Kubernetes CSR API and approved with kubectl certificate approve.
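The note above refers to the Kubernetes CSR API: instead of signing with openssl against the CA key, the kubelet's CSR can be submitted as a CertificateSigningRequest object and approved. A sketch of such an object (the metadata name and the base64 request value are placeholders):

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: kubelet-client-csr                # placeholder name
spec:
  request: <base64-encoded CSR>           # contents of kubelet-client.csr, base64-encoded
  signerName: kubernetes.io/kube-apiserver-client-kubelet
  usages:
  - digital signature
  - key encipherment
  - client auth
```

After `kubectl certificate approve kubelet-client-csr`, the issued certificate appears in the object's `status.certificate` field.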
- HardWay
- Kubeadm
Environment variables
export CLUSTER_NAME=my-first-cluster
export BASE_DOMAIN=example.com
export CLUSTER_DOMAIN=cluster.local
export FULL_HOST_NAME=${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}
Working directory
mkdir -p /etc/kubernetes/pki
mkdir -p /etc/kubernetes/openssl/csr
mkdir -p /var/lib/kubelet/pki
Configuration
cat <<EOF > /etc/kubernetes/openssl/kubelet-client.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[ dn ]
CN = system:node:${FULL_HOST_NAME}
O = system:nodes
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=digitalSignature,keyEncipherment
extendedKeyUsage=clientAuth
EOF
Private key generation
openssl genrsa \
-out /var/lib/kubelet/pki/kubelet-client-key.pem 2048
CSR generation
openssl req \
-new \
-key /var/lib/kubelet/pki/kubelet-client-key.pem \
-out /etc/kubernetes/openssl/csr/kubelet-client.csr \
-config /etc/kubernetes/openssl/kubelet-client.conf
CSR signing
openssl x509 \
-req \
-days 365 \
-sha256 \
-outform PEM \
-CA /etc/kubernetes/pki/ca.crt \
-CAkey /etc/kubernetes/pki/ca.key \
-CAcreateserial \
-in /etc/kubernetes/openssl/csr/kubelet-client.csr \
-out /var/lib/kubelet/pki/kubelet-client.pem \
-extensions v3_ext \
-extfile /etc/kubernetes/openssl/kubelet-client.conf
# capture a single timestamp so the bundle and the symlink reference the same file
CERT_TS=$(date '+%Y-%m-%d-%H-%M-%S')
cat /var/lib/kubelet/pki/kubelet-client.pem /var/lib/kubelet/pki/kubelet-client-key.pem > /var/lib/kubelet/pki/kubelet-client-${CERT_TS}.pem
ln -sf /var/lib/kubelet/pki/kubelet-client-${CERT_TS}.pem /var/lib/kubelet/pki/kubelet-client-current.pem
export CLUSTER_NAME="my-first-cluster"
export BASE_DOMAIN="example.com"
export CLUSTER_DOMAIN="cluster.local"
export FULL_HOST_NAME="${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}"
Kubeconfig creation instructions
kubectl config set-cluster kubernetes \
--certificate-authority="/etc/kubernetes/pki/ca.crt" \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=/etc/kubernetes/kubelet.conf
kubectl config set-credentials system:node:${FULL_HOST_NAME} \
--client-certificate=/var/lib/kubelet/pki/kubelet-client.pem \
--client-key=/var/lib/kubelet/pki/kubelet-client-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/kubelet.conf
kubectl config set-context default \
--cluster=kubernetes \
--user=system:node:${FULL_HOST_NAME} \
--kubeconfig=/etc/kubernetes/kubelet.conf
kubectl config use-context default \
--kubeconfig=/etc/kubernetes/kubelet.conf
Certificate readiness check
/etc/kubernetes/openssl/cert-report.sh /var/lib/kubelet/pki/kubelet-client-current.pem
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
kubelet-client-current Oct 22, 2025 22:06 UTC 364d kubernetes no
Certificate generation
kubeadm init phase kubeconfig kubelet \
--config=/var/run/kubeadm/kubeadm.yaml
After executing the commands, we get the following output.
[kubeconfig] Writing "kubelet.conf" kubeconfig file
Certificate rotation
kubeadm init phase kubelet-finalize experimental-cert-rotation \
--config=/var/run/kubeadm/kubeadm.yaml
After executing the commands, we get the following output.
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
Certificate readiness check
kubeadm certs check-expiration
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
kubelet-client-current Oct 22, 2025 22:06 UTC 364d kubernetes no
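The rotation enabled by the kubelet-finalize phase relies on the kubelet's built-in client certificate rotation, which is controlled by the rotateCertificates field of KubeletConfiguration. A minimal illustrative fragment:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Request a renewed client certificate from the CSR API as the current one nears expiry.
rotateCertificates: true
```

With this enabled, the kubelet keeps updating the kubelet-client-current.pem symlink to the newest certificate bundle.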
Creating kubeconfig configurations and certificates
● Required
- Super Admin
- Admin
- Controller
- Scheduler
- Kubelet
Super Admin
- HardWay
- Kubeadm
Working directory
mkdir -p /etc/kubernetes/pki
mkdir -p /etc/kubernetes/openssl/csr
mkdir -p /etc/kubernetes/kubeconfig
Configuration
cat <<EOF > /etc/kubernetes/openssl/super-admin.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[ dn ]
CN = kubernetes-super-admin
O = system:masters
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=digitalSignature,keyEncipherment
extendedKeyUsage=clientAuth
EOF
Private key generation
openssl genrsa \
-out /etc/kubernetes/kubeconfig/super-admin.key 2048
CSR generation
openssl req \
-new \
-key /etc/kubernetes/kubeconfig/super-admin.key \
-out /etc/kubernetes/openssl/csr/super-admin.csr \
-config /etc/kubernetes/openssl/super-admin.conf
CSR signing
openssl x509 \
-req \
-days 365 \
-sha256 \
-CA /etc/kubernetes/pki/ca.crt \
-CAkey /etc/kubernetes/pki/ca.key \
-CAcreateserial \
-in /etc/kubernetes/openssl/csr/super-admin.csr \
-out /etc/kubernetes/kubeconfig/super-admin.crt \
-extensions v3_ext \
-extfile /etc/kubernetes/openssl/super-admin.conf
Kubeconfig setup for super-admin
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.crt \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=/etc/kubernetes/super-admin.conf
kubectl config set-credentials kubernetes-super-admin \
--client-certificate=/etc/kubernetes/kubeconfig/super-admin.crt \
--client-key=/etc/kubernetes/kubeconfig/super-admin.key \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/super-admin.conf
kubectl config set-context default \
--cluster=kubernetes \
--user=kubernetes-super-admin \
--kubeconfig=/etc/kubernetes/super-admin.conf
kubectl config use-context default \
--kubeconfig=/etc/kubernetes/super-admin.conf
Certificate readiness check
/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/kubeconfig/super-admin.crt
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
super-admin.conf Oct 22, 2025 22:06 UTC 364d kubernetes no
Please note: during the Join phase, you cannot choose which kubeconfigs to generate — kubeadm creates them all at once, in full.
Kubeconfig generation
kubeadm join phase control-plane-prepare kubeconfig \
--config=/var/run/kubeadm/kubeadm.yaml
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
Certificate readiness check
kubeadm certs check-expiration
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
super-admin.conf Oct 22, 2025 22:06 UTC 364d kubernetes no
Admin
- HardWay
- Kubeadm
Working directory
mkdir -p /etc/kubernetes/pki
mkdir -p /etc/kubernetes/openssl/csr
mkdir -p /etc/kubernetes/kubeconfig
Configuration
cat <<EOF > /etc/kubernetes/openssl/admin.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[ dn ]
CN = kubernetes-admin
O = kubeadm:cluster-admins
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=digitalSignature,keyEncipherment
extendedKeyUsage=clientAuth
EOF
Private keys
openssl genrsa \
-out /etc/kubernetes/kubeconfig/admin.key 2048
CSR generation
openssl req \
-new \
-key /etc/kubernetes/kubeconfig/admin.key \
-out /etc/kubernetes/openssl/csr/admin.csr \
-config /etc/kubernetes/openssl/admin.conf
CSR signing
openssl x509 \
-req \
-days 365 \
-sha256 \
-CA /etc/kubernetes/pki/ca.crt \
-CAkey /etc/kubernetes/pki/ca.key \
-CAcreateserial \
-in /etc/kubernetes/openssl/csr/admin.csr \
-out /etc/kubernetes/kubeconfig/admin.crt \
-extensions v3_ext \
-extfile /etc/kubernetes/openssl/admin.conf
Kubeconfig setup for admin
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.crt \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=/etc/kubernetes/admin.conf
kubectl config set-credentials kubernetes-admin \
--client-certificate=/etc/kubernetes/kubeconfig/admin.crt \
--client-key=/etc/kubernetes/kubeconfig/admin.key \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/admin.conf
kubectl config set-context default \
--cluster=kubernetes \
--user=kubernetes-admin \
--kubeconfig=/etc/kubernetes/admin.conf
kubectl config use-context default \
--kubeconfig=/etc/kubernetes/admin.conf
Certificate readiness check
/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/kubeconfig/admin.crt
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf Oct 22, 2025 22:06 UTC 364d kubernetes no
Please note: during the Join phase, you cannot choose which kubeconfigs to generate — kubeadm creates them all at once, in full.
Kubeconfig generation
kubeadm join phase control-plane-prepare kubeconfig \
--config=/var/run/kubeadm/kubeadm.yaml
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
Certificate readiness check
kubeadm certs check-expiration
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf Oct 22, 2025 22:06 UTC 364d kubernetes no
Kube Controller Manager
- HardWay
- Kubeadm
Working directory
mkdir -p /etc/kubernetes/pki
mkdir -p /etc/kubernetes/openssl/csr
Configuration
cat <<EOF > /etc/kubernetes/openssl/controller-manager-client.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[ dn ]
CN = system:kube-controller-manager
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=digitalSignature,keyEncipherment
extendedKeyUsage=clientAuth
EOF
Private key generation
openssl genrsa \
-out /etc/kubernetes/kubeconfig/controller-manager-client-key.pem 2048
CSR generation
openssl req \
-new \
-key /etc/kubernetes/kubeconfig/controller-manager-client-key.pem \
-out /etc/kubernetes/openssl/csr/controller-manager-client.csr \
-config /etc/kubernetes/openssl/controller-manager-client.conf
CSR signing
openssl x509 \
-req \
-days 365 \
-sha256 \
-outform PEM \
-CA /etc/kubernetes/pki/ca.crt \
-CAkey /etc/kubernetes/pki/ca.key \
-CAcreateserial \
-in /etc/kubernetes/openssl/csr/controller-manager-client.csr \
-out /etc/kubernetes/kubeconfig/controller-manager-client.pem \
-extensions v3_ext \
-extfile /etc/kubernetes/openssl/controller-manager-client.conf
export CLUSTER_NAME="my-first-cluster"
export BASE_DOMAIN="example.com"
export CLUSTER_DOMAIN="cluster.local"
export FULL_HOST_NAME="${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}"
Kubeconfig setup for controller-manager-client
kubectl config set-cluster kubernetes \
--certificate-authority="/etc/kubernetes/pki/ca.crt" \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=/etc/kubernetes/controller-manager.conf
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=/etc/kubernetes/kubeconfig/controller-manager-client.pem \
--client-key=/etc/kubernetes/kubeconfig/controller-manager-client-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/controller-manager.conf
kubectl config set-context default \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=/etc/kubernetes/controller-manager.conf
kubectl config use-context default \
--kubeconfig=/etc/kubernetes/controller-manager.conf
Certificate readiness check
/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/kubeconfig/controller-manager-client.pem
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
controller-manager.conf Oct 22, 2025 22:06 UTC 364d kubernetes no
Please note: during the Join phase, you cannot choose which kubeconfigs to generate — kubeadm creates them all at once, in full.
Kubeconfig generation
kubeadm join phase control-plane-prepare kubeconfig \
--config=/var/run/kubeadm/kubeadm.yaml
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
Certificate readiness check
kubeadm certs check-expiration
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
controller-manager.conf Oct 22, 2025 22:06 UTC 364d kubernetes no
Kube Scheduler
- HardWay
- Kubeadm
Working directory
mkdir -p /etc/kubernetes/pki
mkdir -p /etc/kubernetes/openssl/csr
Configuration
cat <<EOF > /etc/kubernetes/openssl/scheduler-client.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[ dn ]
CN = system:kube-scheduler
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=digitalSignature,keyEncipherment
extendedKeyUsage=clientAuth
EOF
Private key generation
openssl genrsa \
-out /etc/kubernetes/kubeconfig/scheduler-client-key.pem 2048
CSR generation
openssl req \
-new \
-key /etc/kubernetes/kubeconfig/scheduler-client-key.pem \
-out /etc/kubernetes/openssl/csr/scheduler-client.csr \
-config /etc/kubernetes/openssl/scheduler-client.conf
CSR signing
openssl x509 \
-req \
-days 365 \
-sha256 \
-outform PEM \
-CA /etc/kubernetes/pki/ca.crt \
-CAkey /etc/kubernetes/pki/ca.key \
-CAcreateserial \
-in /etc/kubernetes/openssl/csr/scheduler-client.csr \
-out /etc/kubernetes/kubeconfig/scheduler-client.pem \
-extensions v3_ext \
-extfile /etc/kubernetes/openssl/scheduler-client.conf
export CLUSTER_NAME="my-first-cluster"
export BASE_DOMAIN="example.com"
export CLUSTER_DOMAIN="cluster.local"
export FULL_HOST_NAME="${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}"
Kubeconfig creation instructions
kubectl config set-cluster kubernetes \
--certificate-authority="/etc/kubernetes/pki/ca.crt" \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=/etc/kubernetes/scheduler.conf
kubectl config set-credentials system:kube-scheduler \
--client-certificate=/etc/kubernetes/kubeconfig/scheduler-client.pem \
--client-key=/etc/kubernetes/kubeconfig/scheduler-client-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/scheduler.conf
kubectl config set-context default \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=/etc/kubernetes/scheduler.conf
kubectl config use-context default \
--kubeconfig=/etc/kubernetes/scheduler.conf
Certificate readiness check
/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/kubeconfig/scheduler-client.pem
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
scheduler.conf Oct 22, 2025 22:06 UTC 364d kubernetes no
Please note: during the Join phase, you cannot choose which kubeconfigs to generate — kubeadm creates them all at once, in full.
Kubeconfig generation
kubeadm join phase control-plane-prepare kubeconfig \
--config=/var/run/kubeadm/kubeadm.yaml
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
Certificate readiness check
kubeadm certs check-expiration
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
scheduler.conf Oct 22, 2025 22:06 UTC 364d kubernetes no
Kubelet client
Note: this certificate can also be issued through the Kubernetes CSR API and approved with kubectl certificate approve.
- HardWay
- Kubeadm
Environment variables
export CLUSTER_NAME=my-first-cluster
export BASE_DOMAIN=example.com
export CLUSTER_DOMAIN=cluster.local
export FULL_HOST_NAME=${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}
Working directory
mkdir -p /etc/kubernetes/pki
mkdir -p /etc/kubernetes/openssl/csr
mkdir -p /var/lib/kubelet/pki
Configuration
cat <<EOF > /etc/kubernetes/openssl/kubelet-client.conf
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[ dn ]
CN = system:node:${FULL_HOST_NAME}
O = system:nodes
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=digitalSignature,keyEncipherment
extendedKeyUsage=clientAuth
EOF
Private key generation
openssl genrsa \
-out /var/lib/kubelet/pki/kubelet-client-key.pem 2048
CSR generation
openssl req \
-new \
-key /var/lib/kubelet/pki/kubelet-client-key.pem \
-out /etc/kubernetes/openssl/csr/kubelet-client.csr \
-config /etc/kubernetes/openssl/kubelet-client.conf
CSR signing
openssl x509 \
-req \
-days 365 \
-sha256 \
-outform PEM \
-CA /etc/kubernetes/pki/ca.crt \
-CAkey /etc/kubernetes/pki/ca.key \
-CAcreateserial \
-in /etc/kubernetes/openssl/csr/kubelet-client.csr \
-out /var/lib/kubelet/pki/kubelet-client.pem \
-extensions v3_ext \
-extfile /etc/kubernetes/openssl/kubelet-client.conf
# capture a single timestamp so the bundle and the symlink reference the same file
CERT_TS=$(date '+%Y-%m-%d-%H-%M-%S')
cat /var/lib/kubelet/pki/kubelet-client.pem /var/lib/kubelet/pki/kubelet-client-key.pem > /var/lib/kubelet/pki/kubelet-client-${CERT_TS}.pem
ln -sf /var/lib/kubelet/pki/kubelet-client-${CERT_TS}.pem /var/lib/kubelet/pki/kubelet-client-current.pem
export CLUSTER_NAME="my-first-cluster"
export BASE_DOMAIN="example.com"
export CLUSTER_DOMAIN="cluster.local"
export FULL_HOST_NAME="${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}"
Kubeconfig creation instructions
kubectl config set-cluster kubernetes \
--certificate-authority="/etc/kubernetes/pki/ca.crt" \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=/etc/kubernetes/kubelet.conf
kubectl config set-credentials system:node:${FULL_HOST_NAME} \
--client-certificate=/var/lib/kubelet/pki/kubelet-client.pem \
--client-key=/var/lib/kubelet/pki/kubelet-client-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/kubelet.conf
kubectl config set-context default \
--cluster=kubernetes \
--user=system:node:${FULL_HOST_NAME} \
--kubeconfig=/etc/kubernetes/kubelet.conf
kubectl config use-context default \
--kubeconfig=/etc/kubernetes/kubelet.conf
Certificate readiness check
/etc/kubernetes/openssl/cert-report.sh /var/lib/kubelet/pki/kubelet-client-current.pem
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
kubelet-client-current Oct 22, 2025 22:06 UTC 364d kubernetes no
Please note: during the Join phase, you cannot choose which kubeconfigs to generate — kubeadm creates them all at once, in full.
Kubeconfig generation
kubeadm join phase control-plane-prepare kubeconfig \
--config=/var/run/kubeadm/kubeadm.yaml
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
Certificate readiness check
kubeadm certs check-expiration
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
kubelet-client-current Oct 22, 2025 22:06 UTC 364d kubernetes no
17*. Creating All kubeconfigs
This section describes the generation of all kubeconfig files.
If you have not performed manual kubeconfig generation, use this block to automatically create the configurations.
- Init
- Join
Generation of all kubeconfig files
● Optional
Kubeconfig generation
kubeadm init phase kubeconfig all \
--config=/var/run/kubeadm/kubeadm.yaml
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
Generation of all kubeconfig files
● Optional
Please note: during the Join phase, you cannot choose which kubeconfigs to generate — kubeadm creates them all at once, in full.
Kubeconfig generation
kubeadm join phase control-plane-prepare kubeconfig \
--config=/var/run/kubeadm/kubeadm.yaml
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
18. Verifying the Certificate Block
This section covers the verification of the correctness of created certificates and keys, as well as the correspondence between them. This is important for eliminating errors before launching Kubernetes components.
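One frequent mistake this step catches is a certificate paired with the wrong private key; the two belong together exactly when their public keys are identical. The self-contained sketch below demonstrates the check with a throwaway key and certificate in a temporary directory (for the cluster's real files, substitute the paths under /etc/kubernetes).

```shell
# Generate a throwaway key/cert pair, then compare public-key digests.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
  -keyout "$dir/demo.key" -out "$dir/demo.crt" 2>/dev/null
# Hash the public key from each side; equal digests mean the pair matches.
crt_pub=$(openssl x509 -in "$dir/demo.crt" -noout -pubkey | openssl sha256)
key_pub=$(openssl pkey -in "$dir/demo.key" -pubout 2>/dev/null | openssl sha256)
if [ "$crt_pub" = "$key_pub" ]; then echo "key matches certificate"; fi
rm -rf "$dir"
```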
Certificate block verification
● Optional
- HardWay
- Kubeadm
After configuring the certificates, it is recommended to verify their correctness using the cert-report.sh script below, which mimics the output of kubeadm certs check-expiration.
Working directory
mkdir -p /etc/kubernetes/openssl
Script creation instructions
cat <<'EOF' > /etc/kubernetes/openssl/cert-report.sh
#!/usr/bin/env bash
set -euo pipefail
TMPDIR=$(mktemp -d)
trap 'rm -rf "$TMPDIR"' EXIT
declare -A CN_TO_CA_NAME
declare -A PROCESSED_FINGERPRINTS
CERT_ROWS=()
CA_ROWS=()
CERT_HEADER=$(printf "%-28s %-25s %-15s %-24s %-20s" \
"CERTIFICATE" "EXPIRES" "RESIDUAL TIME" "CERTIFICATE AUTHORITY" "EXTERNALLY MANAGED")
CA_HEADER=$(printf "%-24s %-25s %-15s %-20s" \
"CERTIFICATE AUTHORITY" "EXPIRES" "RESIDUAL TIME" "EXTERNALLY MANAGED")
CERT_PATH="${1:-}"
if [ -n "$CERT_PATH" ]; then
FILES=("$CERT_PATH")
else
mapfile -t FILES < <(
find /etc/kubernetes/ \
-type d -name openssl -prune -o \
-type f \( -name "*.crt" -o -name "*.pem" -o -name "*.conf" \) -print 2>/dev/null
)
fi
extract_cert() {
local file="$1"
local out="$2"
if grep -q "client-certificate-data:" "$file"; then
awk '/client-certificate-data:/ {print $2}' "$file" | base64 -d > "$out"
else
cp "$file" "$out"
fi
}
cert_lifetime() {
local end="$1"
local end_ts now_ts days years
end_ts=$(date -d "$end" +%s)
now_ts=$(date +%s)
(( end_ts < now_ts )) && echo "expired" && return
days=$(( (end_ts - now_ts) / 86400 ))
years=$(( days / 365 ))
(( years > 0 )) && echo "${years}y" || echo "${days}d"
}
cert_name() {
local path="$1"
local base
base=$(basename "$path" | sed 's/\.\(crt\|pem\|conf\)$//')
case "$path" in
*/etcd/*) echo "etcd-$base" ;;
*/front-proxy/*) echo "front-proxy-$base" ;;
*) echo "$base" ;;
esac
}
for file in "${FILES[@]}"; do
crt="$TMPDIR/ca.crt"
extract_cert "$file" "$crt" || continue
openssl x509 -in "$crt" -noout -text 2>/dev/null | grep -A1 "Key Usage" | grep -q "Certificate Sign" || continue
cn=$(openssl x509 -in "$crt" -noout -subject 2>/dev/null | sed -n 's/.*CN *= *\([^,\/]*\).*/\1/p')
[[ -n "$cn" ]] && CN_TO_CA_NAME["$cn"]="$(cert_name "$file")"
done
for file in "${FILES[@]}"; do
crt="$TMPDIR/cert.crt"
extract_cert "$file" "$crt" || continue
openssl x509 -in "$crt" -noout >/dev/null 2>&1 || continue
fp=$(openssl x509 -in "$crt" -noout -fingerprint -sha256 | cut -d= -f2)
[[ -n "${PROCESSED_FINGERPRINTS[$fp]+x}" ]] && continue
PROCESSED_FINGERPRINTS[$fp]=1
name=$(cert_name "$file")
end_raw=$(openssl x509 -in "$crt" -noout -enddate | cut -d= -f2)
expires=$(date -d "$end_raw" "+%b %d, %Y %H:%M UTC")
residual=$(cert_lifetime "$end_raw")
if openssl x509 -in "$crt" -noout -text | grep -A1 "Key Usage" | grep -q "Certificate Sign"; then
CA_ROWS+=("$(printf "%-24s %-25s %-15s %-20s" "$name" "$expires" "$residual" "no")")
else
issuer_cn=$(openssl x509 -in "$crt" -noout -issuer | sed -n 's/.*CN *= *\([^,\/]*\).*/\1/p')
ca_name="${CN_TO_CA_NAME[$issuer_cn]:-$issuer_cn}"
CERT_ROWS+=("$(printf "%-28s %-25s %-15s %-24s %-20s" "$name" "$expires" "$residual" "$ca_name" "no")")
fi
done
echo
echo "$CERT_HEADER"
printf "%s\n" "${CERT_ROWS[@]}" | sort
echo
echo "$CA_HEADER"
printf "%s\n" "${CA_ROWS[@]}" | sort
EOF
Setting permissions
chmod +x /etc/kubernetes/openssl/cert-report.sh
Running the script for all certificates/kubeconfigs
/etc/kubernetes/openssl/cert-report.sh
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf Oct 22, 2025 22:06 UTC 364d ca no
apiserver Oct 22, 2025 22:06 UTC 364d ca no
apiserver-etcd-client Oct 22, 2025 22:06 UTC 364d etcd-ca no
apiserver-kubelet-client Oct 22, 2025 22:06 UTC 364d ca no
controller-manager.conf Oct 22, 2025 22:06 UTC 364d ca no
etcd-healthcheck-client Oct 22, 2025 22:06 UTC 364d etcd-ca no
etcd-peer Oct 22, 2025 22:06 UTC 364d etcd-ca no
etcd-server Oct 22, 2025 22:06 UTC 364d etcd-ca no
front-proxy-client Oct 22, 2025 22:06 UTC 364d front-proxy-ca no
scheduler.conf Oct 22, 2025 22:06 UTC 364d ca no
super-admin.conf Oct 22, 2025 22:06 UTC 364d ca no
CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
ca Oct 20, 2034 22:04 UTC 9y no
etcd-ca Oct 20, 2034 22:04 UTC 9y no
front-proxy-ca Oct 20, 2034 22:04 UTC 9y no
Running the script for a single certificate/kubeconfig
/etc/kubernetes/openssl/cert-report.sh /etc/kubernetes/pki/ca.crt
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
ca Oct 20, 2034 22:04 UTC 9y no
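The report covers names, expiry dates, and the issuing CA, but not the signature chain itself; openssl verify closes that gap. A self-contained sketch with throwaway certificates in /tmp; on a real control plane you would run e.g. `openssl verify -CAfile /etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/apiserver.crt`:

```shell
# Throwaway CA and leaf; the /tmp paths are stand-ins for the real PKI files.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -days 1 -subj "/CN=demo-ca" 2>/dev/null
openssl req -newkey rsa:2048 -nodes -keyout /tmp/demo-leaf.key \
  -out /tmp/demo-leaf.csr -subj "/CN=demo-leaf" 2>/dev/null
openssl x509 -req -in /tmp/demo-leaf.csr -CA /tmp/demo-ca.crt -CAkey /tmp/demo-ca.key \
  -CAcreateserial -out /tmp/demo-leaf.crt -days 1 2>/dev/null
# Verify the leaf against the CA that signed it:
openssl verify -CAfile /tmp/demo-ca.crt /tmp/demo-leaf.crt
# prints: /tmp/demo-leaf.crt: OK
```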
After configuring the certificates, it is recommended to verify their correctness using kubeadm:
kubeadm certs check-expiration
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf Oct 22, 2025 22:06 UTC 364d ca no
apiserver Oct 22, 2025 22:06 UTC 364d ca no
apiserver-etcd-client Oct 22, 2025 22:06 UTC 364d etcd-ca no
apiserver-kubelet-client Oct 22, 2025 22:06 UTC 364d ca no
controller-manager.conf Oct 22, 2025 22:06 UTC 364d ca no
etcd-healthcheck-client Oct 22, 2025 22:06 UTC 364d etcd-ca no
etcd-peer Oct 22, 2025 22:06 UTC 364d etcd-ca no
etcd-server Oct 22, 2025 22:06 UTC 364d etcd-ca no
front-proxy-client Oct 22, 2025 22:06 UTC 364d front-proxy-ca no
scheduler.conf Oct 22, 2025 22:06 UTC 364d ca no
super-admin.conf Oct 22, 2025 22:06 UTC 364d ca no
CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
ca Oct 20, 2034 22:04 UTC 9y no
etcd-ca Oct 20, 2034 22:04 UTC 9y no
front-proxy-ca Oct 20, 2034 22:04 UTC 9y no
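Neither report proves that a certificate and its private key actually correspond. Comparing the public key on both sides does; a minimal sketch with a throwaway pair (substitute real pairs such as /etc/kubernetes/pki/apiserver.crt and apiserver.key):

```shell
# Throwaway certificate/key pair standing in for a real one:
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.crt -days 1 -subj "/CN=demo" 2>/dev/null
# A certificate matches a key iff both yield the same public key:
a=$(openssl x509 -in /tmp/demo.crt -noout -pubkey | openssl sha256)
b=$(openssl pkey -in /tmp/demo.key -pubout 2>/dev/null | openssl sha256)
[ "$a" = "$b" ] && echo "certificate and key match" || echo "MISMATCH"
```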
19. Creating Control Plane Static Pods
- Init
- Join
Static Pods setup
● Required
This section describes the manual creation of static pod manifests for Kubernetes control plane components.
- Kube-API
- Kube Controller Manager
- Kube Scheduler
Kube-API setup
● Required
This subsection can be used on its own when this component needs to be configured separately from the others.
- HardWay
- Kubeadm
Environment variables
export MACHINE_LOCAL_ADDRESS=$(ip -4 addr show scope global | awk '/inet/ {print $2; exit}' | cut -d/ -f1)
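The pipeline above takes the first global IPv4 address reported by ip and strips the prefix length. Shown here on a canned line of `ip -4 addr show` output (the address is purely illustrative):

```shell
# One line in the shape that `ip -4 addr show scope global` emits:
sample='    inet 192.168.10.21/24 brd 192.168.10.255 scope global eth0'
# awk grabs the addr/prefix field of the first inet line; cut drops the prefix:
echo "$sample" | awk '/inet/ {print $2; exit}' | cut -d/ -f1
# prints: 192.168.10.21
```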
Working directory
mkdir -p /etc/kubernetes/manifests
Static Pod Kube-apiserver
Manifest generation
cat <<EOF > /etc/kubernetes/manifests/kube-apiserver.yaml
---
apiVersion: v1
kind: Pod
metadata:
annotations:
kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: ${MACHINE_LOCAL_ADDRESS}:6443
creationTimestamp: null
labels:
component: kube-apiserver
tier: control-plane
name: kube-apiserver
namespace: kube-system
spec:
containers:
- command:
- kube-apiserver
- --advertise-address=${MACHINE_LOCAL_ADDRESS}
- --aggregator-reject-forwarding-redirect=true
- --allow-privileged=true
- --anonymous-auth=true
- --api-audiences=konnectivity-server
- --apiserver-count=1
- --audit-log-batch-buffer-size=10000
- --audit-log-batch-max-size=1
- --audit-log-batch-max-wait=0s
- --audit-log-batch-throttle-burst=0
- --audit-log-batch-throttle-enable=false
- --audit-log-batch-throttle-qps=0
- --audit-log-compress=false
- --audit-log-format=json
- --audit-log-maxage=30
- --audit-log-maxbackup=10
- --audit-log-maxsize=1000
- --audit-log-mode=batch
- --audit-log-truncate-enabled=false
- --audit-log-truncate-max-batch-size=10485760
- --audit-log-truncate-max-event-size=102400
- --audit-log-version=audit.k8s.io/v1
- --audit-webhook-batch-buffer-size=10000
- --audit-webhook-batch-initial-backoff=10s
- --audit-webhook-batch-max-size=400
- --audit-webhook-batch-max-wait=30s
- --audit-webhook-batch-throttle-burst=15
- --audit-webhook-batch-throttle-enable=true
- --audit-webhook-batch-throttle-qps=10
- --audit-webhook-initial-backoff=10s
- --audit-webhook-mode=batch
- --audit-webhook-truncate-enabled=false
- --audit-webhook-truncate-max-batch-size=10485760
- --audit-webhook-truncate-max-event-size=102400
- --audit-webhook-version=audit.k8s.io/v1
- --audit-policy-file=/etc/kubernetes/audit-policy.yaml
- --audit-log-path=/var/log/kubernetes/audit/audit.log
- --authentication-token-webhook-cache-ttl=2m0s
- --authentication-token-webhook-version=v1beta1
- --authorization-mode=Node,RBAC
- --authorization-webhook-cache-authorized-ttl=5m0s
- --authorization-webhook-cache-unauthorized-ttl=30s
- --authorization-webhook-version=v1beta1
- --bind-address=0.0.0.0
- --cert-dir=/var/run/kubernetes
- --client-ca-file=/etc/kubernetes/pki/ca.crt
# -> Enable if managing state via Cloud Controller Manager
# - --cloud-provider=external
- --cloud-provider-gce-l7lb-src-cidrs=130.211.0.0/22,35.191.0.0/16
- --cloud-provider-gce-lb-src-cidrs=130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16
- --contention-profiling=false
- --default-not-ready-toleration-seconds=300
- --default-unreachable-toleration-seconds=300
- --default-watch-cache-size=100
- --delete-collection-workers=1
- --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,PodSecurity
- --enable-aggregator-routing=true
- --enable-bootstrap-token-auth=true
- --enable-garbage-collector=true
- --enable-logs-handler=true
- --enable-priority-and-fairness=true
- --encryption-provider-config-automatic-reload=false
- --endpoint-reconciler-type=lease
- --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
- --etcd-compaction-interval=5m0s
- --etcd-count-metric-poll-period=1m0s
- --etcd-db-metric-poll-interval=30s
- --etcd-healthcheck-timeout=2s
- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
- --etcd-prefix=/registry
- --etcd-readycheck-timeout=2s
- --etcd-servers=https://127.0.0.1:2379
- --event-ttl=1h0m0s
- --feature-gates=RotateKubeletServerCertificate=true
- --goaway-chance=0
- --help=false
- --http2-max-streams-per-connection=0
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
- --kubelet-port=10250
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-read-only-port=10255
- --kubelet-timeout=5s
- --kubernetes-service-node-port=0
- --lease-reuse-duration-seconds=60
- --livez-grace-period=0s
- --log-flush-frequency=5s
- --logging-format=text
- --log-json-info-buffer-size=0
- --log-json-split-stream=false
- --log-text-info-buffer-size=0
- --log-text-split-stream=false
- --max-connection-bytes-per-sec=0
- --max-mutating-requests-inflight=200
- --max-requests-inflight=400
- --min-request-timeout=1800
- --permit-address-sharing=false
- --permit-port-sharing=false
- --profiling=false
- --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
- --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
- --requestheader-allowed-names=front-proxy-client
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --requestheader-group-headers=X-Remote-Group
- --requestheader-username-headers=X-Remote-User
- --request-timeout=1m0s
- --runtime-config=api/all=true
- --secure-port=6443
- --service-account-extend-token-expiration=true
- --service-account-issuer=https://kubernetes.default.svc.cluster.local
- --service-account-key-file=/etc/kubernetes/pki/sa.pub
- --service-account-lookup=true
- --service-account-max-token-expiration=0s
- --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
- --service-cluster-ip-range=29.64.0.0/16
- --service-node-port-range=30000-32767
- --shutdown-delay-duration=0s
- --shutdown-send-retry-after=false
- --shutdown-watch-termination-grace-period=0s
- --storage-backend=etcd3
- --storage-media-type=application/vnd.kubernetes.protobuf
- --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
- --v=2
- --version=false
- --watch-cache=true
# IF YOU NEED TO CONNECT CLOUD-CONTROLLER-MANAGER
# UNCOMMENT THE FOLLOWING
# ->
# - --cloud-provider=external
# Do not specify if value is "" or undefined
# - --cloud-config=
# - --strict-transport-security-directives=
# - --disable-admission-plugins=
# - --disabled-metrics=
# - --egress-selector-config-file=
# - --encryption-provider-config=
# - --etcd-servers-overrides=
# - --external-hostname=
# - --kubelet-certificate-authority=
# - --oidc-ca-file=
# - --oidc-client-id=
# - --oidc-groups-claim=
# - --oidc-groups-prefix=
# - --oidc-issuer-url=
# - --oidc-required-claim=
# - --oidc-signing-algs=RS256
# - --oidc-username-claim=sub
# - --oidc-username-prefix=
# - --peer-advertise-ip=
# - --peer-advertise-port=
# - --peer-ca-file=
# - --service-account-jwks-uri=
# - --show-hidden-metrics-for-version=
# - --tls-cipher-suites=
# - --tls-min-version=
# - --tls-sni-cert-key=
# - --token-auth-file=
# - --tracing-config-file=
# - --vmodule=
# - --watch-cache-sizes=
# - --authorization-webhook-config-file=
# - --cors-allowed-origins=
# - --debug-socket-path=
# - --authorization-policy-file=
# - --authorization-config=
# - --authentication-token-webhook-config-file=
# - --authentication-config=
# - --audit-webhook-config-file=
# - --allow-metric-labels=
# - --allow-metric-labels-manifest=
# - --admission-control=
# - --admission-control-config-file=
# - --advertise-address=
image: registry.k8s.io/kube-apiserver:v1.30.4
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
httpGet:
host: ${MACHINE_LOCAL_ADDRESS}
path: /livez
port: 6443
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
name: kube-apiserver
readinessProbe:
failureThreshold: 3
httpGet:
host: ${MACHINE_LOCAL_ADDRESS}
path: /readyz
port: 6443
scheme: HTTPS
periodSeconds: 1
timeoutSeconds: 15
resources:
requests:
cpu: 250m
startupProbe:
failureThreshold: 24
httpGet:
host: ${MACHINE_LOCAL_ADDRESS}
path: /livez
port: 6443
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
volumeMounts:
- mountPath: /etc/ssl/certs
name: ca-certs
readOnly: true
- mountPath: /etc/ca-certificates
name: etc-ca-certificates
readOnly: true
- mountPath: /etc/pki
name: etc-pki
readOnly: true
- mountPath: /var/log/kubernetes/audit/
name: k8s-audit
- mountPath: /etc/kubernetes/audit-policy.yaml
name: k8s-audit-policy
readOnly: true
- mountPath: /etc/kubernetes/pki
name: k8s-certs
readOnly: true
- mountPath: /usr/local/share/ca-certificates
name: usr-local-share-ca-certificates
readOnly: true
- mountPath: /usr/share/ca-certificates
name: usr-share-ca-certificates
readOnly: true
hostNetwork: true
priority: 2000001000
priorityClassName: system-node-critical
securityContext:
seccompProfile:
type: RuntimeDefault
volumes:
- hostPath:
path: /etc/ssl/certs
type: DirectoryOrCreate
name: ca-certs
- hostPath:
path: /etc/ca-certificates
type: DirectoryOrCreate
name: etc-ca-certificates
- hostPath:
path: /etc/pki
type: DirectoryOrCreate
name: etc-pki
- hostPath:
path: /var/log/kubernetes/audit/
type: DirectoryOrCreate
name: k8s-audit
- hostPath:
path: /etc/kubernetes/audit-policy.yaml
type: File
name: k8s-audit-policy
- hostPath:
path: /etc/kubernetes/pki
type: DirectoryOrCreate
name: k8s-certs
- hostPath:
path: /usr/local/share/ca-certificates
type: DirectoryOrCreate
name: usr-local-share-ca-certificates
- hostPath:
path: /usr/share/ca-certificates
type: DirectoryOrCreate
name: usr-share-ca-certificates
status: {}
EOF
Manifest generation
kubeadm init phase control-plane apiserver \
--config=/var/run/kubeadm/kubeadm.yaml
[control-plane] Creating static Pod manifest for "kube-apiserver"
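Since the goal is a manifest indistinguishable from kubeadm's, a quick way to compare the hand-written file with a kubeadm-generated one is to diff the sorted flag lists. A self-contained sketch on two stand-in files; in practice the inputs would be /etc/kubernetes/manifests/kube-apiserver.yaml and a kubeadm-generated copy:

```shell
# Stand-in flag lists that differ only in ordering:
printf -- '    - --secure-port=6443\n    - --allow-privileged=true\n' > /tmp/hardway-flags
printf -- '    - --allow-privileged=true\n    - --secure-port=6443\n' > /tmp/kubeadm-flags
# Pull out every `--flag=value` token and sort, so ordering differences vanish:
extract() { grep -o '\--[a-z0-9-]*=[^ ]*' "$1" | sort; }
extract /tmp/hardway-flags > /tmp/flags-a
extract /tmp/kubeadm-flags > /tmp/flags-b
diff /tmp/flags-a /tmp/flags-b && echo "flag sets match"
# prints: flag sets match
```

Any line diff prints points at a flag present, absent, or valued differently in one of the two manifests.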
Kube Controller Manager setup
● Required
This subsection can be used on its own when this component needs to be configured separately from the others.
- HardWay
- Kubeadm
Environment variables
export CLUSTER_NAME=my-first-cluster
Working directory
mkdir -p /etc/kubernetes/manifests
Static Pod Kube-Controller-Manager
Manifest generation
cat <<EOF > /etc/kubernetes/manifests/kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
component: kube-controller-manager
tier: control-plane
name: kube-controller-manager
namespace: kube-system
spec:
containers:
- command:
- kube-controller-manager
- --allocate-node-cidrs=false
- --allow-untagged-cloud=false
- --attach-detach-reconcile-sync-period=1m0s
- --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
- --authentication-skip-lookup=false
- --authentication-token-webhook-cache-ttl=10s
- --authentication-tolerate-lookup-failure=false
- --authorization-always-allow-paths=/healthz,/readyz,/livez,/metrics
- --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
- --authorization-webhook-cache-authorized-ttl=10s
- --authorization-webhook-cache-unauthorized-ttl=10s
- --bind-address=0.0.0.0
- --cidr-allocator-type=RangeAllocator
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --cluster-name=${CLUSTER_NAME}
- --cloud-provider-gce-lb-src-cidrs=130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16
- --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
- --cluster-signing-duration=720h0m0s
- --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
- --concurrent-cron-job-syncs=5
- --concurrent-deployment-syncs=5
- --concurrent-endpoint-syncs=5
- --concurrent-ephemeralvolume-syncs=5
- --concurrent-gc-syncs=20
- --concurrent-horizontal-pod-autoscaler-syncs=5
- --concurrent-job-syncs=5
- --concurrent-namespace-syncs=10
- --concurrent-rc-syncs=5
- --concurrent-replicaset-syncs=20
- --concurrent-resource-quota-syncs=5
- --concurrent-service-endpoint-syncs=5
- --concurrent-service-syncs=1
- --concurrent-serviceaccount-token-syncs=5
- --concurrent-statefulset-syncs=5
- --concurrent-ttl-after-finished-syncs=5
- --concurrent-validating-admission-policy-status-syncs=5
- --configure-cloud-routes=true
- --contention-profiling=false
- --controller-start-interval=0s
- --controllers=*,bootstrapsigner,tokencleaner
- --disable-attach-detach-reconcile-sync=false
- --disable-force-detach-on-timeout=false
- --enable-dynamic-provisioning=true
- --enable-garbage-collector=true
- --enable-hostpath-provisioner=false
- --enable-leader-migration=false
- --endpoint-updates-batch-period=0s
- --endpointslice-updates-batch-period=0s
- --feature-gates=RotateKubeletServerCertificate=true
- --flex-volume-plugin-dir=/usr/libexec/kubernetes/kubelet-plugins/volume/exec/
- --help=false
- --horizontal-pod-autoscaler-cpu-initialization-period=5m0s
- --horizontal-pod-autoscaler-downscale-delay=5m0s
- --horizontal-pod-autoscaler-downscale-stabilization=5m0s
- --horizontal-pod-autoscaler-initial-readiness-delay=30s
- --horizontal-pod-autoscaler-sync-period=30s
- --horizontal-pod-autoscaler-tolerance=0.1
- --horizontal-pod-autoscaler-upscale-delay=3m0s
- --http2-max-streams-per-connection=0
- --kube-api-burst=120
- --kube-api-content-type=application/vnd.kubernetes.protobuf
- --kube-api-qps=100
- --kubeconfig=/etc/kubernetes/controller-manager.conf
- --large-cluster-size-threshold=50
- --leader-elect=true
- --leader-elect-lease-duration=15s
- --leader-elect-renew-deadline=10s
- --leader-elect-resource-lock=leases
- --leader-elect-resource-name=kube-controller-manager
- --leader-elect-resource-namespace=kube-system
- --leader-elect-retry-period=2s
- --legacy-service-account-token-clean-up-period=8760h0m0s
- --log-flush-frequency=5s
- --log-json-info-buffer-size=0
- --log-json-split-stream=false
- --log-text-info-buffer-size=0
- --log-text-split-stream=false
- --logging-format=text
- --max-endpoints-per-slice=100
- --min-resync-period=12h0m0s
- --mirroring-concurrent-service-endpoint-syncs=5
- --mirroring-endpointslice-updates-batch-period=0s
- --mirroring-max-endpoints-per-subset=1000
- --namespace-sync-period=2m0s
- --node-cidr-mask-size=0
- --node-cidr-mask-size-ipv4=0
- --node-cidr-mask-size-ipv6=0
- --node-eviction-rate=0.1
- --node-monitor-grace-period=40s
- --node-monitor-period=5s
- --node-startup-grace-period=10s
- --node-sync-period=0s
- --permit-address-sharing=false
- --permit-port-sharing=false
- --profiling=false
- --pv-recycler-increment-timeout-nfs=30
- --pv-recycler-minimum-timeout-hostpath=60
- --pv-recycler-minimum-timeout-nfs=300
- --pv-recycler-timeout-increment-hostpath=30
- --pvclaimbinder-sync-period=15s
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --requestheader-extra-headers-prefix=x-remote-extra-
- --requestheader-group-headers=x-remote-group
- --requestheader-username-headers=x-remote-user
- --resource-quota-sync-period=5m0s
- --root-ca-file=/etc/kubernetes/pki/ca.crt
- --route-reconciliation-period=10s
- --secondary-node-eviction-rate=0.01
- --secure-port=10257
- --service-account-private-key-file=/etc/kubernetes/pki/sa.key
- --terminated-pod-gc-threshold=0
- --unhealthy-zone-threshold=0.55
- --use-service-account-credentials=true
- --v=2
- --version=false
- --volume-host-allow-local-loopback=true
# IF YOU NEED TO CONNECT CLOUD-CONTROLLER-MANAGER
# UNCOMMENT THE FOLLOWING
# ->
# - --cloud-provider=external
# IF YOU NEED TO CONNECT SERVER CERTIFICATES FOR KUBE-CONTROLLER-MANAGER
# NOTE THAT KUBEADM DOES NOT CREATE THESE CERTIFICATES
# UNCOMMENT THE FOLLOWING
# ->
# - --tls-cert-file=/etc/kubernetes/pki/controller-manager-server.crt
# - --tls-private-key-file=/etc/kubernetes/pki/controller-manager-server.key
# Do not specify if value is "" or undefined
# - --cluster-signing-kube-apiserver-client-cert-file=
# - --cluster-signing-kube-apiserver-client-key-file=
# - --cluster-signing-kubelet-client-cert-file=
# - --cluster-signing-kubelet-client-key-file=
# - --cluster-signing-kubelet-serving-cert-file=
# - --cluster-signing-kubelet-serving-key-file=
# - --cluster-signing-legacy-unknown-cert-file=
# - --cluster-signing-legacy-unknown-key-file=
# - --cluster-cidr=
# - --cloud-config=
# - --cert-dir=
# - --allow-metric-labels-manifest=
# - --allow-metric-labels=
# - --disabled-metrics=
# - --leader-migration-config=
# - --master=
# - --pv-recycler-pod-template-filepath-hostpath=
# - --pv-recycler-pod-template-filepath-nfs=
# - --service-cluster-ip-range=
# - --show-hidden-metrics-for-version=
# - --tls-cipher-suites=
# - --tls-min-version=
# - --tls-sni-cert-key=
# - --vmodule=
# - --volume-host-cidr-denylist=
# - --external-cloud-volume-plugin=
# - --requestheader-allowed-names=
image: registry.k8s.io/kube-controller-manager:v1.30.4
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
httpGet:
path: /healthz
port: 10257
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
name: kube-controller-manager
resources:
requests:
cpu: 200m
startupProbe:
failureThreshold: 24
httpGet:
path: /healthz
port: 10257
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
volumeMounts:
- mountPath: /etc/ssl/certs
name: ca-certs
readOnly: true
- mountPath: /etc/ca-certificates
name: etc-ca-certificates
readOnly: true
- mountPath: /etc/pki
name: etc-pki
readOnly: true
- mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
name: flexvolume-dir
- mountPath: /etc/kubernetes/pki
name: k8s-certs
readOnly: true
- mountPath: /etc/kubernetes/controller-manager.conf
name: kubeconfig
readOnly: true
- mountPath: /usr/local/share/ca-certificates
name: usr-local-share-ca-certificates
readOnly: true
- mountPath: /usr/share/ca-certificates
name: usr-share-ca-certificates
readOnly: true
hostNetwork: true
priority: 2000001000
priorityClassName: system-node-critical
securityContext:
seccompProfile:
type: RuntimeDefault
volumes:
- hostPath:
path: /etc/ssl/certs
type: DirectoryOrCreate
name: ca-certs
- hostPath:
path: /etc/ca-certificates
type: DirectoryOrCreate
name: etc-ca-certificates
- hostPath:
path: /etc/pki
type: DirectoryOrCreate
name: etc-pki
- hostPath:
path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
type: DirectoryOrCreate
name: flexvolume-dir
- hostPath:
path: /etc/kubernetes/pki
type: DirectoryOrCreate
name: k8s-certs
- hostPath:
path: /etc/kubernetes/controller-manager.conf
type: FileOrCreate
name: kubeconfig
- hostPath:
path: /usr/local/share/ca-certificates
type: DirectoryOrCreate
name: usr-local-share-ca-certificates
- hostPath:
path: /usr/share/ca-certificates
type: DirectoryOrCreate
name: usr-share-ca-certificates
status: {}
EOF
Manifest generation
kubeadm init phase control-plane controller-manager \
--config=/var/run/kubeadm/kubeadm.yaml
[control-plane] Creating static Pod manifest for "kube-controller-manager"
Kube Scheduler setup
● Required
This subsection can be used on its own when this component needs to be configured separately from the others.
- HardWay
- Kubeadm
Working directory
mkdir -p /etc/kubernetes/manifests
Static Pod Kube-Scheduler
Manifest generation
cat <<EOF > /etc/kubernetes/manifests/kube-scheduler.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
component: kube-scheduler
tier: control-plane
name: kube-scheduler
namespace: kube-system
spec:
containers:
- command:
- kube-scheduler
- --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
- --authentication-skip-lookup=false
- --authentication-token-webhook-cache-ttl=10s
- --authentication-tolerate-lookup-failure=true
- --authorization-always-allow-paths=/healthz,/readyz,/livez,/metrics
- --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
- --authorization-webhook-cache-authorized-ttl=10s
- --authorization-webhook-cache-unauthorized-ttl=10s
- --bind-address=0.0.0.0
- --client-ca-file=
- --contention-profiling=true
- --help=false
- --http2-max-streams-per-connection=0
- --kube-api-burst=100
- --kube-api-content-type=application/vnd.kubernetes.protobuf
- --kube-api-qps=50
- --kubeconfig=/etc/kubernetes/scheduler.conf
- --leader-elect=true
- --leader-elect-lease-duration=15s
- --leader-elect-renew-deadline=10s
- --leader-elect-resource-lock=leases
- --leader-elect-resource-name=kube-scheduler
- --leader-elect-resource-namespace=kube-system
- --leader-elect-retry-period=2s
- --log-flush-frequency=5s
- --log-json-info-buffer-size=0
- --log-json-split-stream=false
- --log-text-info-buffer-size=0
- --log-text-split-stream=false
- --logging-format=text
- --permit-address-sharing=false
- --permit-port-sharing=false
- --pod-max-in-unschedulable-pods-duration=5m0s
- --profiling=true
- --requestheader-extra-headers-prefix=x-remote-extra-
- --requestheader-group-headers=x-remote-group
- --requestheader-username-headers=x-remote-user
- --secure-port=10259
- --v=2
- --version=false
# IF YOU NEED TO ATTACH SERVER CERTIFICATES FOR KUBE-SCHEDULER
# NOTE THAT KUBEADM DOES NOT CREATE THESE CERTIFICATES
# UNCOMMENT THE FOLLOWING
# ->
# - --tls-cert-file=/etc/kubernetes/pki/scheduler-server.crt
# - --tls-private-key-file=/etc/kubernetes/pki/scheduler-server.key
# <-
# - --allow-metric-labels=[]
# - --allow-metric-labels-manifest=
# - --cert-dir=
# - --config=
# - --disabled-metrics=[]
# - --feature-gates=
# - --master=
# - --requestheader-allowed-names=[]
# - --requestheader-client-ca-file=
# - --show-hidden-metrics-for-version=
# - --tls-cipher-suites=[]
# - --tls-min-version=
# - --tls-sni-cert-key=[]
# - --vmodule=
# - --write-config-to=
image: registry.k8s.io/kube-scheduler:v1.30.4
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
httpGet:
path: /healthz
port: 10259
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
name: kube-scheduler
resources:
requests:
cpu: 100m
startupProbe:
failureThreshold: 24
httpGet:
path: /healthz
port: 10259
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
volumeMounts:
- mountPath: /etc/kubernetes/scheduler.conf
name: kubeconfig
readOnly: true
# IF YOU NEED TO ATTACH SERVER CERTIFICATES FOR KUBE-SCHEDULER
# NOTE THAT KUBEADM DOES NOT CREATE THESE CERTIFICATES
# UNCOMMENT THE FOLLOWING
# ->
# - mountPath: /etc/kubernetes/pki/scheduler-server.crt
# name: kube-scheduler-crt
# readOnly: true
# - mountPath: /etc/kubernetes/pki/scheduler-server.key
# name: kube-scheduler-key
# readOnly: true
hostNetwork: true
priority: 2000001000
priorityClassName: system-node-critical
securityContext:
seccompProfile:
type: RuntimeDefault
volumes:
- hostPath:
path: /etc/kubernetes/scheduler.conf
type: FileOrCreate
name: kubeconfig
# IF YOU NEED TO ATTACH SERVER CERTIFICATES FOR KUBE-SCHEDULER
# NOTE THAT KUBEADM DOES NOT CREATE THESE CERTIFICATES
# UNCOMMENT THE FOLLOWING
# ->
# - hostPath:
# path: /etc/kubernetes/pki/scheduler-server.crt
# type: FileOrCreate
# name: kube-scheduler-crt
# - hostPath:
# path: /etc/kubernetes/pki/scheduler-server.key
# type: FileOrCreate
# name: kube-scheduler-key
status: {}
EOF
Manifest generation
kubeadm init phase control-plane scheduler \
--config=/var/run/kubeadm/kubeadm.yaml
[control-plane] Creating static Pod manifest for "kube-scheduler"
Static Pods setup
● Required
This section describes the manual creation of static pod manifests for Kubernetes control plane components.
- Kube-API
- Kube Controller Manager
- Kube Scheduler
Kube-API setup
● Required
This subsection can be used on its own when this component needs to be configured separately from the others.
- HardWay
- Kubeadm
Environment variables
export MACHINE_LOCAL_ADDRESS=$(ip -4 addr show scope global | awk '/inet/ {print $2; exit}' | cut -d/ -f1)
Working directory
mkdir -p /etc/kubernetes/manifests
Static Pod Kube-apiserver
Manifest generation
cat <<EOF > /etc/kubernetes/manifests/kube-apiserver.yaml
---
apiVersion: v1
kind: Pod
metadata:
annotations:
kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: ${MACHINE_LOCAL_ADDRESS}:6443
creationTimestamp: null
labels:
component: kube-apiserver
tier: control-plane
name: kube-apiserver
namespace: kube-system
spec:
containers:
- command:
- kube-apiserver
- --advertise-address=${MACHINE_LOCAL_ADDRESS}
- --aggregator-reject-forwarding-redirect=true
- --allow-privileged=true
- --anonymous-auth=true
- --api-audiences=konnectivity-server
- --apiserver-count=1
- --audit-log-batch-buffer-size=10000
- --audit-log-batch-max-size=1
- --audit-log-batch-max-wait=0s
- --audit-log-batch-throttle-burst=0
- --audit-log-batch-throttle-enable=false
- --audit-log-batch-throttle-qps=0
- --audit-log-compress=false
- --audit-log-format=json
- --audit-log-maxage=30
- --audit-log-maxbackup=10
- --audit-log-maxsize=1000
- --audit-log-mode=batch
- --audit-log-truncate-enabled=false
- --audit-log-truncate-max-batch-size=10485760
- --audit-log-truncate-max-event-size=102400
- --audit-log-version=audit.k8s.io/v1
- --audit-webhook-batch-buffer-size=10000
- --audit-webhook-batch-initial-backoff=10s
- --audit-webhook-batch-max-size=400
- --audit-webhook-batch-max-wait=30s
- --audit-webhook-batch-throttle-burst=15
- --audit-webhook-batch-throttle-enable=true
- --audit-webhook-batch-throttle-qps=10
- --audit-webhook-initial-backoff=10s
- --audit-webhook-mode=batch
- --audit-webhook-truncate-enabled=false
- --audit-webhook-truncate-max-batch-size=10485760
- --audit-webhook-truncate-max-event-size=102400
- --audit-webhook-version=audit.k8s.io/v1
- --audit-policy-file=/etc/kubernetes/audit-policy.yaml
- --audit-log-path=/var/log/kubernetes/audit/audit.log
- --authentication-token-webhook-cache-ttl=2m0s
- --authentication-token-webhook-version=v1beta1
- --authorization-mode=Node,RBAC
- --authorization-webhook-cache-authorized-ttl=5m0s
- --authorization-webhook-cache-unauthorized-ttl=30s
- --authorization-webhook-version=v1beta1
- --bind-address=0.0.0.0
- --cert-dir=/var/run/kubernetes
- --client-ca-file=/etc/kubernetes/pki/ca.crt
# -> Enable if managing state via Cloud Controller Manager
# - --cloud-provider=external
- --cloud-provider-gce-l7lb-src-cidrs=130.211.0.0/22,35.191.0.0/16
- --cloud-provider-gce-lb-src-cidrs=130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16
- --contention-profiling=false
- --default-not-ready-toleration-seconds=300
- --default-unreachable-toleration-seconds=300
- --default-watch-cache-size=100
- --delete-collection-workers=1
- --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,PodSecurity
- --enable-aggregator-routing=true
- --enable-bootstrap-token-auth=true
- --enable-garbage-collector=true
- --enable-logs-handler=true
- --enable-priority-and-fairness=true
- --encryption-provider-config-automatic-reload=false
- --endpoint-reconciler-type=lease
- --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
- --etcd-compaction-interval=5m0s
- --etcd-count-metric-poll-period=1m0s
- --etcd-db-metric-poll-interval=30s
- --etcd-healthcheck-timeout=2s
- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
- --etcd-prefix=/registry
- --etcd-readycheck-timeout=2s
- --etcd-servers=https://127.0.0.1:2379
- --event-ttl=1h0m0s
- --feature-gates=RotateKubeletServerCertificate=true
- --goaway-chance=0
- --help=false
- --http2-max-streams-per-connection=0
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
- --kubelet-port=10250
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-read-only-port=10255
- --kubelet-timeout=5s
- --kubernetes-service-node-port=0
- --lease-reuse-duration-seconds=60
- --livez-grace-period=0s
- --log-flush-frequency=5s
- --logging-format=text
- --log-json-info-buffer-size=0
- --log-json-split-stream=false
- --log-text-info-buffer-size=0
- --log-text-split-stream=false
- --max-connection-bytes-per-sec=0
- --max-mutating-requests-inflight=200
- --max-requests-inflight=400
- --min-request-timeout=1800
- --permit-address-sharing=false
- --permit-port-sharing=false
- --profiling=false
- --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
- --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
- --requestheader-allowed-names=front-proxy-client
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --requestheader-group-headers=X-Remote-Group
- --requestheader-username-headers=X-Remote-User
- --request-timeout=1m0s
- --runtime-config=api/all=true
- --secure-port=6443
- --service-account-extend-token-expiration=true
- --service-account-issuer=https://kubernetes.default.svc.cluster.local
- --service-account-key-file=/etc/kubernetes/pki/sa.pub
- --service-account-lookup=true
- --service-account-max-token-expiration=0s
- --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
- --service-cluster-ip-range=29.64.0.0/16
- --service-node-port-range=30000-32767
- --shutdown-delay-duration=0s
- --shutdown-send-retry-after=false
- --shutdown-watch-termination-grace-period=0s
- --storage-backend=etcd3
- --storage-media-type=application/vnd.kubernetes.protobuf
- --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
- --v=2
- --version=false
- --watch-cache=true
# Do not specify if value is "" or undefined
# - --cloud-config=
# - --strict-transport-security-directives=
# - --disable-admission-plugins=
# - --disabled-metrics=
# - --egress-selector-config-file=
# - --encryption-provider-config=
# - --etcd-servers-overrides=
# - --external-hostname=
# - --kubelet-certificate-authority=
# - --oidc-ca-file=
# - --oidc-client-id=
# - --oidc-groups-claim=
# - --oidc-groups-prefix=
# - --oidc-issuer-url=
# - --oidc-required-claim=
# - --oidc-signing-algs=RS256
# - --oidc-username-claim=sub
# - --oidc-username-prefix=
# - --peer-advertise-ip=
# - --peer-advertise-port=
# - --peer-ca-file=
# - --service-account-jwks-uri=
# - --show-hidden-metrics-for-version=
# - --tls-cipher-suites=
# - --tls-min-version=
# - --tls-sni-cert-key=
# - --token-auth-file=
# - --tracing-config-file=
# - --vmodule=
# - --watch-cache-sizes=
# - --authorization-webhook-config-file=
# - --cors-allowed-origins=
# - --debug-socket-path=
# - --authorization-policy-file=
# - --authorization-config=
# - --authentication-token-webhook-config-file=
# - --authentication-config=
# - --audit-webhook-config-file=
# - --allow-metric-labels=
# - --allow-metric-labels-manifest=
# - --admission-control=
# - --admission-control-config-file=
# - --advertise-address=
image: registry.k8s.io/kube-apiserver:v1.30.4
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
httpGet:
host: ${MACHINE_LOCAL_ADDRESS}
path: /livez
port: 6443
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
name: kube-apiserver
readinessProbe:
failureThreshold: 3
httpGet:
host: ${MACHINE_LOCAL_ADDRESS}
path: /readyz
port: 6443
scheme: HTTPS
periodSeconds: 1
timeoutSeconds: 15
resources:
requests:
cpu: 250m
startupProbe:
failureThreshold: 24
httpGet:
host: ${MACHINE_LOCAL_ADDRESS}
path: /livez
port: 6443
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
volumeMounts:
- mountPath: /etc/ssl/certs
name: ca-certs
readOnly: true
- mountPath: /etc/ca-certificates
name: etc-ca-certificates
readOnly: true
- mountPath: /etc/pki
name: etc-pki
readOnly: true
- mountPath: /var/log/kubernetes/audit/
name: k8s-audit
- mountPath: /etc/kubernetes/audit-policy.yaml
name: k8s-audit-policy
readOnly: true
- mountPath: /etc/kubernetes/pki
name: k8s-certs
readOnly: true
- mountPath: /usr/local/share/ca-certificates
name: usr-local-share-ca-certificates
readOnly: true
- mountPath: /usr/share/ca-certificates
name: usr-share-ca-certificates
readOnly: true
hostNetwork: true
priority: 2000001000
priorityClassName: system-node-critical
securityContext:
seccompProfile:
type: RuntimeDefault
volumes:
- hostPath:
path: /etc/ssl/certs
type: DirectoryOrCreate
name: ca-certs
- hostPath:
path: /etc/ca-certificates
type: DirectoryOrCreate
name: etc-ca-certificates
- hostPath:
path: /etc/pki
type: DirectoryOrCreate
name: etc-pki
- hostPath:
path: /var/log/kubernetes/audit/
type: DirectoryOrCreate
name: k8s-audit
- hostPath:
path: /etc/kubernetes/audit-policy.yaml
type: File
name: k8s-audit-policy
- hostPath:
path: /etc/kubernetes/pki
type: DirectoryOrCreate
name: k8s-certs
- hostPath:
path: /usr/local/share/ca-certificates
type: DirectoryOrCreate
name: usr-local-share-ca-certificates
- hostPath:
path: /usr/share/ca-certificates
type: DirectoryOrCreate
name: usr-share-ca-certificates
status: {}
EOF
Please note: during the Join phase, you cannot choose which manifests to generate — kubeadm creates all of them at once, in full.
Manifest generation
kubeadm join phase control-plane-prepare control-plane \
--config=/var/run/kubeadm/kubeadm.yaml
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
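Once kubelet picks up the manifest, the API server can be verified locally against the same endpoints the probes above use. A minimal sketch, assuming the API server is reachable on 127.0.0.1:6443 as configured and that curl is available on the host:

```shell
# Query the same endpoints used by the liveness/readiness probes.
# -k: the serving certificate is signed by the cluster CA, not a public one.
for path in livez readyz; do
  code=$(curl -sk --max-time 2 -o /dev/null -w '%{http_code}' \
    "https://127.0.0.1:6443/${path}" 2>/dev/null || true)
  echo "/${path} -> ${code:-no-response}"
done
```

A healthy API server returns HTTP 200 on both endpoints; appending `?verbose` to the path lists the individual checks.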
Kube Controller Manager setup
● Required
Kube Controller Manager setup
● Required
This section is optional and intended only for cases where this resource needs to be configured separately from the others.
- HardWay
- Kubeadm
Environment variables
export CLUSTER_NAME=my-first-cluster
Working directory
mkdir -p /etc/kubernetes/manifests
Static Pod Kube-Controller-Manager
Manifest generation
cat <<EOF > /etc/kubernetes/manifests/kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
component: kube-controller-manager
tier: control-plane
name: kube-controller-manager
namespace: kube-system
spec:
containers:
- command:
- kube-controller-manager
- --allocate-node-cidrs=false
- --allow-untagged-cloud=false
- --attach-detach-reconcile-sync-period=1m0s
- --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
- --authentication-skip-lookup=false
- --authentication-token-webhook-cache-ttl=10s
- --authentication-tolerate-lookup-failure=false
- --authorization-always-allow-paths=/healthz,/readyz,/livez,/metrics
- --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
- --authorization-webhook-cache-authorized-ttl=10s
- --authorization-webhook-cache-unauthorized-ttl=10s
- --bind-address=0.0.0.0
- --cidr-allocator-type=RangeAllocator
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --cluster-name=${CLUSTER_NAME}
- --cloud-provider-gce-lb-src-cidrs=130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16
- --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
- --cluster-signing-duration=720h0m0s
- --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
- --concurrent-cron-job-syncs=5
- --concurrent-deployment-syncs=5
- --concurrent-endpoint-syncs=5
- --concurrent-ephemeralvolume-syncs=5
- --concurrent-gc-syncs=20
- --concurrent-horizontal-pod-autoscaler-syncs=5
- --concurrent-job-syncs=5
- --concurrent-namespace-syncs=10
- --concurrent-rc-syncs=5
- --concurrent-replicaset-syncs=20
- --concurrent-resource-quota-syncs=5
- --concurrent-service-endpoint-syncs=5
- --concurrent-service-syncs=1
- --concurrent-serviceaccount-token-syncs=5
- --concurrent-statefulset-syncs=5
- --concurrent-ttl-after-finished-syncs=5
- --concurrent-validating-admission-policy-status-syncs=5
- --configure-cloud-routes=true
- --contention-profiling=false
- --controller-start-interval=0s
- --controllers=*,bootstrapsigner,tokencleaner
- --disable-attach-detach-reconcile-sync=false
- --disable-force-detach-on-timeout=false
- --enable-dynamic-provisioning=true
- --enable-garbage-collector=true
- --enable-hostpath-provisioner=false
- --enable-leader-migration=false
- --endpoint-updates-batch-period=0s
- --endpointslice-updates-batch-period=0s
- --feature-gates=RotateKubeletServerCertificate=true
- --flex-volume-plugin-dir=/usr/libexec/kubernetes/kubelet-plugins/volume/exec/
- --help=false
- --horizontal-pod-autoscaler-cpu-initialization-period=5m0s
- --horizontal-pod-autoscaler-downscale-delay=5m0s
- --horizontal-pod-autoscaler-downscale-stabilization=5m0s
- --horizontal-pod-autoscaler-initial-readiness-delay=30s
- --horizontal-pod-autoscaler-sync-period=30s
- --horizontal-pod-autoscaler-tolerance=0.1
- --horizontal-pod-autoscaler-upscale-delay=3m0s
- --http2-max-streams-per-connection=0
- --kube-api-burst=120
- --kube-api-content-type=application/vnd.kubernetes.protobuf
- --kube-api-qps=100
- --kubeconfig=/etc/kubernetes/controller-manager.conf
- --large-cluster-size-threshold=50
- --leader-elect=true
- --leader-elect-lease-duration=15s
- --leader-elect-renew-deadline=10s
- --leader-elect-resource-lock=leases
- --leader-elect-resource-name=kube-controller-manager
- --leader-elect-resource-namespace=kube-system
- --leader-elect-retry-period=2s
- --legacy-service-account-token-clean-up-period=8760h0m0s
- --log-flush-frequency=5s
- --log-json-info-buffer-size=0
- --log-json-split-stream=false
- --log-text-info-buffer-size=0
- --log-text-split-stream=false
- --logging-format=text
- --max-endpoints-per-slice=100
- --min-resync-period=12h0m0s
- --mirroring-concurrent-service-endpoint-syncs=5
- --mirroring-endpointslice-updates-batch-period=0s
- --mirroring-max-endpoints-per-subset=1000
- --namespace-sync-period=2m0s
- --node-cidr-mask-size=0
- --node-cidr-mask-size-ipv4=0
- --node-cidr-mask-size-ipv6=0
- --node-eviction-rate=0.1
- --node-monitor-grace-period=40s
- --node-monitor-period=5s
- --node-startup-grace-period=10s
- --node-sync-period=0s
- --permit-address-sharing=false
- --permit-port-sharing=false
- --profiling=false
- --pv-recycler-increment-timeout-nfs=30
- --pv-recycler-minimum-timeout-hostpath=60
- --pv-recycler-minimum-timeout-nfs=300
- --pv-recycler-timeout-increment-hostpath=30
- --pvclaimbinder-sync-period=15s
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --requestheader-extra-headers-prefix=x-remote-extra-
- --requestheader-group-headers=x-remote-group
- --requestheader-username-headers=x-remote-user
- --resource-quota-sync-period=5m0s
- --root-ca-file=/etc/kubernetes/pki/ca.crt
- --route-reconciliation-period=10s
- --secondary-node-eviction-rate=0.01
- --secure-port=10257
- --service-account-private-key-file=/etc/kubernetes/pki/sa.key
- --terminated-pod-gc-threshold=0
- --unhealthy-zone-threshold=0.55
- --use-service-account-credentials=true
- --v=2
- --version=false
- --volume-host-allow-local-loopback=true
# IF YOU NEED TO CONNECT CLOUD-CONTROLLER-MANAGER
# UNCOMMENT THE FOLLOWING
# ->
# - --cloud-provider=external
# IF YOU NEED TO CONNECT SERVER CERTIFICATES FOR KUBE-CONTROLLER-MANAGER
# NOTE THAT KUBEADM DOES NOT CREATE THESE CERTIFICATES
# UNCOMMENT THE FOLLOWING
# ->
# - --tls-cert-file=/etc/kubernetes/pki/controller-manager-server.crt
# - --tls-private-key-file=/etc/kubernetes/pki/controller-manager-server.key
# Do not specify if value is "" or undefined
# - --cluster-signing-kube-apiserver-client-cert-file=
# - --cluster-signing-kube-apiserver-client-key-file=
# - --cluster-signing-kubelet-client-cert-file=
# - --cluster-signing-kubelet-client-key-file=
# - --cluster-signing-kubelet-serving-cert-file=
# - --cluster-signing-kubelet-serving-key-file=
# - --cluster-signing-legacy-unknown-cert-file=
# - --cluster-signing-legacy-unknown-key-file=
# - --cluster-cidr=
# - --cloud-config=
# - --cert-dir=
# - --allow-metric-labels-manifest=
# - --allow-metric-labels=
# - --disabled-metrics=
# - --leader-migration-config=
# - --master=
# - --pv-recycler-pod-template-filepath-hostpath=
# - --pv-recycler-pod-template-filepath-nfs=
# - --service-cluster-ip-range=
# - --show-hidden-metrics-for-version=
# - --tls-cipher-suites=
# - --tls-min-version=
# - --tls-sni-cert-key=
# - --vmodule=
# - --volume-host-cidr-denylist=
# - --external-cloud-volume-plugin=
# - --requestheader-allowed-names=
image: registry.k8s.io/kube-controller-manager:v1.30.4
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
httpGet:
path: /healthz
port: 10257
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
name: kube-controller-manager
resources:
requests:
cpu: 200m
startupProbe:
failureThreshold: 24
httpGet:
path: /healthz
port: 10257
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
volumeMounts:
- mountPath: /etc/ssl/certs
name: ca-certs
readOnly: true
- mountPath: /etc/ca-certificates
name: etc-ca-certificates
readOnly: true
- mountPath: /etc/pki
name: etc-pki
readOnly: true
- mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
name: flexvolume-dir
- mountPath: /etc/kubernetes/pki
name: k8s-certs
readOnly: true
- mountPath: /etc/kubernetes/controller-manager.conf
name: kubeconfig
readOnly: true
- mountPath: /usr/local/share/ca-certificates
name: usr-local-share-ca-certificates
readOnly: true
- mountPath: /usr/share/ca-certificates
name: usr-share-ca-certificates
readOnly: true
hostNetwork: true
priority: 2000001000
priorityClassName: system-node-critical
securityContext:
seccompProfile:
type: RuntimeDefault
volumes:
- hostPath:
path: /etc/ssl/certs
type: DirectoryOrCreate
name: ca-certs
- hostPath:
path: /etc/ca-certificates
type: DirectoryOrCreate
name: etc-ca-certificates
- hostPath:
path: /etc/pki
type: DirectoryOrCreate
name: etc-pki
- hostPath:
path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
type: DirectoryOrCreate
name: flexvolume-dir
- hostPath:
path: /etc/kubernetes/pki
type: DirectoryOrCreate
name: k8s-certs
- hostPath:
path: /etc/kubernetes/controller-manager.conf
type: FileOrCreate
name: kubeconfig
- hostPath:
path: /usr/local/share/ca-certificates
type: DirectoryOrCreate
name: usr-local-share-ca-certificates
- hostPath:
path: /usr/share/ca-certificates
type: DirectoryOrCreate
name: usr-share-ca-certificates
status: {}
EOF
Please note: during the Join phase, you cannot choose which manifests to generate — kubeadm creates all of them at once, in full.
Manifest generation
kubeadm join phase control-plane-prepare control-plane \
--config=/var/run/kubeadm/kubeadm.yaml
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
Kube Scheduler setup
● Required
Kube Scheduler setup
● Required
This section is optional and is intended only for cases where this resource needs to be configured separately from the rest.
- HardWay
- Kubeadm
Working directory
mkdir -p /etc/kubernetes/manifests
Static Pod Kube-Scheduler
Manifest generation
cat <<EOF > /etc/kubernetes/manifests/kube-scheduler.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
component: kube-scheduler
tier: control-plane
name: kube-scheduler
namespace: kube-system
spec:
containers:
- command:
- kube-scheduler
- --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
- --authentication-skip-lookup=false
- --authentication-token-webhook-cache-ttl=10s
- --authentication-tolerate-lookup-failure=true
- --authorization-always-allow-paths=/healthz,/readyz,/livez,/metrics
- --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
- --authorization-webhook-cache-authorized-ttl=10s
- --authorization-webhook-cache-unauthorized-ttl=10s
- --bind-address=0.0.0.0
# Do not specify if value is "" or undefined
# - --client-ca-file=
- --contention-profiling=true
- --help=false
- --http2-max-streams-per-connection=0
- --kube-api-burst=100
- --kube-api-content-type=application/vnd.kubernetes.protobuf
- --kube-api-qps=50
- --kubeconfig=/etc/kubernetes/scheduler.conf
- --leader-elect=true
- --leader-elect-lease-duration=15s
- --leader-elect-renew-deadline=10s
- --leader-elect-resource-lock=leases
- --leader-elect-resource-name=kube-scheduler
- --leader-elect-resource-namespace=kube-system
- --leader-elect-retry-period=2s
- --log-flush-frequency=5s
- --log-json-info-buffer-size=0
- --log-json-split-stream=false
- --log-text-info-buffer-size=0
- --log-text-split-stream=false
- --logging-format=text
- --permit-address-sharing=false
- --permit-port-sharing=false
- --pod-max-in-unschedulable-pods-duration=5m0s
- --profiling=true
- --requestheader-extra-headers-prefix=x-remote-extra-
- --requestheader-group-headers=x-remote-group
- --requestheader-username-headers=x-remote-user
- --secure-port=10259
- --v=2
- --version=false
# IF YOU NEED TO ATTACH SERVER CERTIFICATES FOR KUBE-SCHEDULER
# NOTE THAT KUBEADM DOES NOT CREATE THESE CERTIFICATES
# UNCOMMENT THE FOLLOWING
# ->
# - --tls-cert-file=/etc/kubernetes/pki/scheduler-server.crt
# - --tls-private-key-file=/etc/kubernetes/pki/scheduler-server.key
# <-
# - --allow-metric-labels=[]
# - --allow-metric-labels-manifest=
# - --cert-dir=
# - --config=
# - --disabled-metrics=[]
# - --feature-gates=
# - --master=
# - --requestheader-allowed-names=[]
# - --requestheader-client-ca-file=
# - --show-hidden-metrics-for-version=
# - --tls-cipher-suites=[]
# - --tls-min-version=
# - --tls-sni-cert-key=[]
# - --vmodule=
# - --write-config-to=
image: registry.k8s.io/kube-scheduler:v1.30.4
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
httpGet:
path: /healthz
port: 10259
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
name: kube-scheduler
resources:
requests:
cpu: 100m
startupProbe:
failureThreshold: 24
httpGet:
path: /healthz
port: 10259
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
volumeMounts:
- mountPath: /etc/kubernetes/scheduler.conf
name: kubeconfig
readOnly: true
# IF YOU NEED TO ATTACH SERVER CERTIFICATES FOR KUBE-SCHEDULER
# NOTE THAT KUBEADM DOES NOT CREATE THESE CERTIFICATES
# UNCOMMENT THE FOLLOWING
# ->
# - mountPath: /etc/kubernetes/pki/scheduler-server.crt
# name: kube-scheduler-crt
# readOnly: true
# - mountPath: /etc/kubernetes/pki/scheduler-server.key
# name: kube-scheduler-key
# readOnly: true
hostNetwork: true
priority: 2000001000
priorityClassName: system-node-critical
securityContext:
seccompProfile:
type: RuntimeDefault
volumes:
- hostPath:
path: /etc/kubernetes/scheduler.conf
type: FileOrCreate
name: kubeconfig
# IF YOU NEED TO ATTACH SERVER CERTIFICATES FOR KUBE-SCHEDULER
# NOTE THAT KUBEADM DOES NOT CREATE THESE CERTIFICATES
# UNCOMMENT THE FOLLOWING
# ->
# - hostPath:
# path: /etc/kubernetes/pki/scheduler-server.crt
# type: FileOrCreate
# name: kube-scheduler-crt
# - hostPath:
# path: /etc/kubernetes/pki/scheduler-server.key
# type: FileOrCreate
# name: kube-scheduler-key
status: {}
EOF
Please note: during the Join phase, you cannot choose which manifests to generate — kubeadm creates all of them at once, in full.
Manifest generation
kubeadm join phase control-plane-prepare control-plane \
--config=/var/run/kubeadm/kubeadm.yaml
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
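With all three control plane manifests in place, their health endpoints can be polled from the node itself. A minimal sketch using the ports from the manifests above (assumes curl is available on the host):

```shell
# Ports taken from the manifests above:
# apiserver 6443, controller-manager 10257, scheduler 10259.
for endpoint in \
  "https://127.0.0.1:6443/livez" \
  "https://127.0.0.1:10257/healthz" \
  "https://127.0.0.1:10259/healthz"; do
  code=$(curl -sk --max-time 2 -o /dev/null -w '%{http_code}' \
    "$endpoint" 2>/dev/null || true)
  echo "${endpoint} -> ${code:-no-response}"
done
```

All three should return HTTP 200 once kubelet has started the static pods.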
20*. Creating All Control Plane Static Pods
This section describes the automatic generation of static pod manifests for Kubernetes control plane components using
kubeadm.
- Init
- Join
Static Pods setup
● Required
Static Pods setup
● Required
Manifest generation
kubeadm init phase control-plane all \
--config=/var/run/kubeadm/kubeadm.yaml
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
Please note: during the Join phase, you cannot choose which manifests to generate — kubeadm creates all of them at once, in full.
Manifest generation
kubeadm join phase control-plane-prepare control-plane \
--config=/var/run/kubeadm/kubeadm.yaml
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
21. Creating ETCD Cluster Static Pods
This section describes the manual creation of static pod manifests for ETCD.
- Init
- Join
Static Pods setup
● Required
Static Pods setup
● Required
This section is optional and is intended only for cases when you need to configure this resource separately from the others.
- HardWay
- Kubeadm
Environment variables
- master-1
export HOST_NAME=master-1
export CLUSTER_NAME=my-first-cluster
export BASE_DOMAIN=example.com
export MACHINE_LOCAL_ADDRESS=$(ip -4 addr show scope global | awk '/inet/ {print $2; exit}' | cut -d/ -f1)
export FULL_HOST_NAME="${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}"
export ETCD_INITIAL_CLUSTER="${FULL_HOST_NAME}=https://${MACHINE_LOCAL_ADDRESS}:2380"
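To sanity-check the resulting value, the same string can be assembled from fixed example values. The hostname and address below are hypothetical stand-ins for the environment-derived ones:

```shell
# Hypothetical values stand in for the environment-derived ones above.
HOST_NAME=master-1
CLUSTER_NAME=my-first-cluster
BASE_DOMAIN=example.com
MACHINE_LOCAL_ADDRESS=10.0.0.11   # assumed example address
FULL_HOST_NAME="${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}"
ETCD_INITIAL_CLUSTER="${FULL_HOST_NAME}=https://${MACHINE_LOCAL_ADDRESS}:2380"
echo "$ETCD_INITIAL_CLUSTER"
# -> master-1.my-first-cluster.example.com=https://10.0.0.11:2380
```

The `name=peer-url` form is exactly what etcd expects in `--initial-cluster`, and `name` must match the `--name` flag in the manifest below.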
Working directory
mkdir -p /etc/kubernetes/manifests
Static Pod ETCD
Manifest generation
cat <<EOF > /etc/kubernetes/manifests/etcd.yaml
apiVersion: v1
kind: Pod
metadata:
annotations:
kubeadm.kubernetes.io/etcd.advertise-client-urls: https://${MACHINE_LOCAL_ADDRESS}:2379
creationTimestamp: null
labels:
component: etcd
tier: control-plane
name: etcd
namespace: kube-system
spec:
containers:
- command:
- etcd
- --advertise-client-urls=https://${MACHINE_LOCAL_ADDRESS}:2379
- --auto-compaction-retention=8
- --cert-file=/etc/kubernetes/pki/etcd/server.crt
- --client-cert-auth=true
- --data-dir=/var/lib/etcd
- --election-timeout=1500
- --experimental-initial-corrupt-check=true
- --experimental-watch-progress-notify-interval=5s
- --heartbeat-interval=250
- --initial-advertise-peer-urls=https://${MACHINE_LOCAL_ADDRESS}:2380
- --initial-cluster=${ETCD_INITIAL_CLUSTER}
- --key-file=/etc/kubernetes/pki/etcd/server.key
- --listen-client-urls=https://0.0.0.0:2379
- --listen-metrics-urls=http://0.0.0.0:2381
- --listen-peer-urls=https://0.0.0.0:2380
- --logger=zap
- --max-snapshots=10
- --max-wals=10
- --metrics=extensive
- --name=${FULL_HOST_NAME}
- --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
- --peer-client-cert-auth=true
- --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
- --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
- --snapshot-count=10000
- --quota-backend-bytes=10737418240
- --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
image: registry.k8s.io/etcd:3.5.12-0
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
httpGet:
host: 127.0.0.1
path: /health?exclude=NOSPACE&serializable=true
port: 2381
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
name: etcd
resources:
requests:
cpu: 100m
memory: 100Mi
startupProbe:
failureThreshold: 24
httpGet:
host: 127.0.0.1
path: /health?serializable=false
port: 2381
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
volumeMounts:
- mountPath: /var/lib/etcd
name: etcd-data
- mountPath: /etc/kubernetes/pki/etcd
name: etcd-certs
hostNetwork: true
priority: 2000001000
priorityClassName: system-node-critical
securityContext:
seccompProfile:
type: RuntimeDefault
volumes:
- hostPath:
path: /etc/kubernetes/pki/etcd
type: DirectoryOrCreate
name: etcd-certs
- hostPath:
path: /var/lib/etcd
type: DirectoryOrCreate
name: etcd-data
status: {}
EOF
Manifest generation
kubeadm init phase etcd local \
--config=/var/run/kubeadm/kubeadm.yaml
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes"
Static Pods setup
● Required
Static Pods setup
● Required
This section is optional and is intended only for cases when you need to configure this resource separately from the others.
- HardWay
- Kubeadm
Environment variables
- master-2
- master-3
export HOST_NAME=master-2
export HOST_NAME=master-3
export CLUSTER_NAME=my-first-cluster
export BASE_DOMAIN=example.com
export MACHINE_LOCAL_ADDRESS=$(ip -4 addr show scope global | awk '/inet/ {print $2; exit}' | cut -d/ -f1)
export FULL_HOST_NAME=${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}
# Get the list of existing etcd nodes
mapfile -t ETCD_PODS < <(kubectl get pods \
--kubeconfig=/etc/kubernetes/admin.conf \
-n kube-system -l component=etcd \
-o jsonpath="{range .items[*]}{.metadata.name} {.status.podIP}{'\n'}{end}")
ETCD_EXISTING_NODES=""
ETCD_ENDPOINTS=""
for entry in "${ETCD_PODS[@]}"; do
read -r podname podip <<< "$entry"
nodename="${podname#etcd-}"
ETCD_EXISTING_NODES+="${nodename}=https://${podip}:2380,"
ETCD_ENDPOINTS+="https://${podip}:2379,"
done
ETCD_EXISTING_NODES="${ETCD_EXISTING_NODES%,}"
ETCD_ENDPOINTS="${ETCD_ENDPOINTS%,}"
# Add the current node if it's not in the list
ETCD_CURRENT_NODE="${FULL_HOST_NAME}=https://${MACHINE_LOCAL_ADDRESS}:2380"
if [[ "$ETCD_EXISTING_NODES" == *"${FULL_HOST_NAME}="* ]]; then
export ETCD_INITIAL_CLUSTER="$ETCD_EXISTING_NODES"
else
if [[ -n "$ETCD_EXISTING_NODES" ]]; then
export ETCD_INITIAL_CLUSTER="${ETCD_EXISTING_NODES},${ETCD_CURRENT_NODE}"
else
export ETCD_INITIAL_CLUSTER="${ETCD_CURRENT_NODE}"
fi
fi
export ETCD_ENDPOINTS
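The merge logic above can be dry-run with fixed example values (a hypothetical one-member cluster and a joining node), restated here with a POSIX `case` for portability:

```shell
# Dry run of the merge logic above with hypothetical values:
# one existing member (master-1) and a joining node (master-2).
ETCD_EXISTING_NODES="master-1.my-first-cluster.example.com=https://10.0.0.11:2380"
FULL_HOST_NAME="master-2.my-first-cluster.example.com"
MACHINE_LOCAL_ADDRESS=10.0.0.12
ETCD_CURRENT_NODE="${FULL_HOST_NAME}=https://${MACHINE_LOCAL_ADDRESS}:2380"
case "$ETCD_EXISTING_NODES" in
  *"${FULL_HOST_NAME}="*) ETCD_INITIAL_CLUSTER="$ETCD_EXISTING_NODES" ;;                        # already listed
  "")                     ETCD_INITIAL_CLUSTER="$ETCD_CURRENT_NODE" ;;                          # first member
  *)                      ETCD_INITIAL_CLUSTER="${ETCD_EXISTING_NODES},${ETCD_CURRENT_NODE}" ;; # append self
esac
echo "$ETCD_INITIAL_CLUSTER"
# -> master-1.my-first-cluster.example.com=https://10.0.0.11:2380,master-2.my-first-cluster.example.com=https://10.0.0.12:2380
```

The joining node must appear in `--initial-cluster` alongside every existing member, otherwise etcd refuses to start with `--initial-cluster-state=existing`.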
Working directory
mkdir -p /etc/kubernetes/manifests
Static Pod ETCD
Manifest generation
cat <<EOF > /etc/kubernetes/manifests/etcd.yaml
apiVersion: v1
kind: Pod
metadata:
annotations:
kubeadm.kubernetes.io/etcd.advertise-client-urls: https://${MACHINE_LOCAL_ADDRESS}:2379
creationTimestamp: null
labels:
component: etcd
tier: control-plane
name: etcd
namespace: kube-system
spec:
containers:
- command:
- etcd
- --advertise-client-urls=https://${MACHINE_LOCAL_ADDRESS}:2379
- --auto-compaction-retention=8
- --cert-file=/etc/kubernetes/pki/etcd/server.crt
- --client-cert-auth=true
- --data-dir=/var/lib/etcd
- --election-timeout=1500
- --experimental-initial-corrupt-check=true
- --experimental-watch-progress-notify-interval=5s
- --heartbeat-interval=250
- --initial-advertise-peer-urls=https://${MACHINE_LOCAL_ADDRESS}:2380
- --initial-cluster=${ETCD_INITIAL_CLUSTER}
- --initial-cluster-state=existing
- --key-file=/etc/kubernetes/pki/etcd/server.key
- --listen-client-urls=https://0.0.0.0:2379
- --listen-metrics-urls=http://0.0.0.0:2381
- --listen-peer-urls=https://0.0.0.0:2380
- --logger=zap
- --max-snapshots=10
- --max-wals=10
- --metrics=extensive
- --name=${FULL_HOST_NAME}
- --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
- --peer-client-cert-auth=true
- --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
- --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
- --snapshot-count=10000
- --quota-backend-bytes=10737418240
- --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
image: registry.k8s.io/etcd:3.5.12-0
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
httpGet:
host: 127.0.0.1
path: /health?exclude=NOSPACE&serializable=true
port: 2381
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
name: etcd
resources:
requests:
cpu: 100m
memory: 100Mi
startupProbe:
failureThreshold: 24
httpGet:
host: 127.0.0.1
path: /health?serializable=false
port: 2381
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
volumeMounts:
- mountPath: /var/lib/etcd
name: etcd-data
- mountPath: /etc/kubernetes/pki/etcd
name: etcd-certs
hostNetwork: true
priority: 2000001000
priorityClassName: system-node-critical
securityContext:
seccompProfile:
type: RuntimeDefault
volumes:
- hostPath:
path: /etc/kubernetes/pki/etcd
type: DirectoryOrCreate
name: etcd-certs
- hostPath:
path: /var/lib/etcd
type: DirectoryOrCreate
name: etcd-data
status: {}
EOF
Expanding the ETCD cluster
Adding a node
Declare an alias for etcdctl using the required certificates
alias etcdctl='etcdctl \
--cert=/etc/kubernetes/pki/etcd/peer.crt \
--key=/etc/kubernetes/pki/etcd/peer.key \
--cacert=/etc/kubernetes/pki/etcd/ca.crt'
Function to get the list of client URLs for all current cluster members
etcdctlMembers() {
etcdctl member list \
--endpoints="$ETCD_ENDPOINTS" \
--write-out=json | jq \
-r '[.members[].clientURLs[]] | join(",")'
}
View the current quorum members
etcdctl \
--endpoints=$(etcdctlMembers) member list \
-w table
Adding a new node to the ETCD cluster
etcdctl \
--endpoints=$(etcdctlMembers) \
member add ${FULL_HOST_NAME} \
--peer-urls=https://${MACHINE_LOCAL_ADDRESS}:2380
After adding the second node to the ETCD quorum, the first master may become unavailable until the second ETCD node is started.
Make sure to start ETCD on the new node using kubelet (see the step below) before continuing.
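A quick way to confirm this is the local etcd metrics endpoint, which the manifest above exposes over plain HTTP on port 2381. A minimal one-shot check (re-run it until etcd reports healthy; assumes curl is available on the host):

```shell
# One-shot readiness check against the local etcd metrics endpoint
# (exposed on :2381 by --listen-metrics-urls in the manifest above).
if curl -fsS --max-time 2 "http://127.0.0.1:2381/health?serializable=false" >/dev/null 2>&1; then
  echo "etcd: healthy"
else
  echo "etcd: not ready yet"
fi
```

`serializable=false` forces a linearizable read, so the check only succeeds once the new member has actually joined the quorum.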
Manifest generation
kubeadm join phase control-plane-join etcd \
--config=/var/run/kubeadm/kubeadm.yaml
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes"
22. Starting the Kubelet Service
This section covers the manual startup of Kubelet with systemd unit configuration. It describes the creation of a basic kubelet configuration file, setting up environment variables for the kubelet.service, and starting the service itself.
- Init
- Join
Start/Configure kubelet
● Required
Start/Configure kubelet
● Required
- HardWay
- Kubeadm
This configuration file is required for Kubelet to start.
Kubelet default config
- Bash
- Cloud-init
Basic kubelet configuration file
cat <<EOF > /var/lib/kubelet/config.yaml
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
anonymous:
enabled: false
webhook:
cacheTTL: 0s
enabled: true
x509:
clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
mode: Webhook
webhook:
cacheAuthorizedTTL: 0s
cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 29.64.0.10
clusterDomain: cluster.local
containerRuntimeEndpoint: ""
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMaximumGCAge: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
flushFrequency: 0
options:
json:
infoBufferSize: "0"
text:
infoBufferSize: "0"
verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
resolvConf: /run/systemd/resolve/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
EOF
Basic kubelet configuration file
- path: /var/lib/kubelet/config.yaml
owner: root:root
permissions: '0644'
content: |
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
anonymous:
enabled: false
webhook:
cacheTTL: 0s
enabled: true
x509:
clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
mode: Webhook
webhook:
cacheAuthorizedTTL: 0s
cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 29.64.0.10
clusterDomain: cluster.local
containerRuntimeEndpoint: ""
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMaximumGCAge: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
flushFrequency: 0
options:
json:
infoBufferSize: "0"
text:
infoBufferSize: "0"
verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
resolvConf: /run/systemd/resolve/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
Environment variables
This configuration block is applicable only when installing Kubernetes manually (using the "Kubernetes the Hard Way" method). When using the kubeadm utility, the configuration file will be created automatically based on the specification provided in the kubeadm-config file.
- Bash
- Cloud-init
cat <<EOF > /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.9 --config=/var/lib/kubelet/config-custom.yaml --cluster-domain=cluster.local --cluster-dns=29.64.0.10"
EOF
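The kubelet does not read `kubeadm-flags.env` directly: it is sourced by the systemd drop-in (`10-kubeadm.conf` in the status output below), which expands `$KUBELET_KUBEADM_ARGS` on the kubelet command line. For reference, a typical drop-in as shipped with kubeadm packages looks like the following (exact paths vary by distribution; the binary path here matches the `/usr/local/bin/kubelet` used elsewhere in this guide):

```ini
# /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# Populated at runtime by "kubeadm init"/"kubeadm join" (or manually, as above)
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# Last-resort user overrides via KUBELET_EXTRA_ARGS
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
```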
- path: /var/lib/kubelet/kubeadm-flags.env
owner: root:root
permissions: '0644'
content: |
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.9 --config=/var/lib/kubelet/config-custom.yaml --cluster-domain=cluster.local --cluster-dns=29.64.0.10 "
This command starts the Kubelet service, which is responsible for deploying all containers defined by static Pod manifests.
systemctl start kubelet
Systemd Unit Status
Systemd unit readiness check
Note that when a cluster is created with kubeadm packages but kubeadm init or kubeadm join has not yet been run, the kubelet systemd unit is enabled for autostart but does not yet run successfully: it restarts in a loop until its configuration files are created.
systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; preset: enabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Sat 2025-02-22 10:33:54 UTC; 17min ago
Docs: https://kubernetes.io/docs/
Main PID: 13779 (kubelet)
Tasks: 14 (limit: 7069)
Memory: 34.0M (peak: 35.3M)
CPU: 27.131s
CGroup: /system.slice/kubelet.service
└─13779 /usr/local/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml
This command starts the Kubelet service as part of a dedicated Kubeadm utility phase.
This section depends on the following sections:
Start kubelet
kubeadm init phase kubelet-start \
--config=/var/run/kubeadm/kubeadm.yaml
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
Start/Configure kubelet
● Required
Start/Configure kubelet
● Required
- HardWay
- Kubeadm
This configuration file is required for Kubelet to start.
Kubelet default config
- Bash
- Cloud-init
Basic kubelet configuration file
cat <<EOF > /var/lib/kubelet/config.yaml
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
anonymous:
enabled: false
webhook:
cacheTTL: 0s
enabled: true
x509:
clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
mode: Webhook
webhook:
cacheAuthorizedTTL: 0s
cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 29.64.0.10
clusterDomain: cluster.local
containerRuntimeEndpoint: ""
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMaximumGCAge: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
flushFrequency: 0
options:
json:
infoBufferSize: "0"
text:
infoBufferSize: "0"
verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
resolvConf: /run/systemd/resolve/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
EOF
Basic kubelet configuration file
- path: /var/lib/kubelet/config.yaml
owner: root:root
permissions: '0644'
content: |
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
anonymous:
enabled: false
webhook:
cacheTTL: 0s
enabled: true
x509:
clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
mode: Webhook
webhook:
cacheAuthorizedTTL: 0s
cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 29.64.0.10
clusterDomain: cluster.local
containerRuntimeEndpoint: ""
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMaximumGCAge: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
flushFrequency: 0
options:
json:
infoBufferSize: "0"
text:
infoBufferSize: "0"
verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
resolvConf: /run/systemd/resolve/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
cat <<EOF > /etc/kubernetes/bootstrap-kubelet.conf
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: $(base64 -w 0 /etc/kubernetes/pki/ca.crt)
server: https://api.my-first-cluster.example.com:6443
name: my-first-cluster
contexts:
- context:
cluster: my-first-cluster
user: tls-bootstrap-token-user
name: tls-bootstrap-token-user@kubernetes
current-context: tls-bootstrap-token-user@kubernetes
kind: Config
preferences: {}
users:
- name: tls-bootstrap-token-user
user:
token: fjt9ex.lwzqgdlvoxtqk4yw
EOF
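The token in the kubeconfig above follows the bootstrap-token format: a 6-character id and a 16-character secret, both lowercase alphanumeric, joined by a dot. If you prefer not to use `kubeadm token generate`, a token of the correct shape can be produced with standard tools (a sketch; any adequate source of randomness works):

```shell
# Generate a bootstrap token: <6 chars>.<16 chars>, alphabet [a-z0-9]
rand_part() { tr -dc 'a-z0-9' < /dev/urandom | head -c "$1"; }

TOKEN="$(rand_part 6).$(rand_part 16)"
echo "$TOKEN"

# Verify the format kubeadm expects
echo "$TOKEN" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$' && echo "format OK"
```

The same token value must later be registered in the cluster as a bootstrap-token Secret so the API server accepts it.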
Environment variables
This configuration block is applicable only when installing Kubernetes manually (using the "Kubernetes the Hard Way" method). When using the kubeadm utility, the configuration file will be created automatically based on the specification provided in the kubeadm-config file.
- Bash
- Cloud-init
cat <<EOF > /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.9 --config=/var/lib/kubelet/config-custom.yaml --cluster-domain=cluster.local --cluster-dns=29.64.0.10"
EOF
- path: /var/lib/kubelet/kubeadm-flags.env
owner: root:root
permissions: '0644'
content: |
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.9 --config=/var/lib/kubelet/config-custom.yaml --cluster-domain=cluster.local --cluster-dns=29.64.0.10 "
This command starts the Kubelet service, which is responsible for deploying all containers defined by static Pod manifests.
systemctl start kubelet
Systemd Unit Status
Systemd unit readiness check
Note that when a cluster is created with kubeadm packages but kubeadm init or kubeadm join has not yet been run, the kubelet systemd unit is enabled for autostart but does not yet run successfully: it restarts in a loop until its configuration files are created.
systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; preset: enabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Sat 2025-02-22 10:33:54 UTC; 17min ago
Docs: https://kubernetes.io/docs/
Main PID: 13779 (kubelet)
Tasks: 14 (limit: 7069)
Memory: 34.0M (peak: 35.3M)
CPU: 27.131s
CGroup: /system.slice/kubelet.service
└─13779 /usr/local/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml
This command starts the Kubelet service as part of a dedicated Kubeadm utility phase.
This section depends on the following sections:
Start kubelet
kubeadm join phase kubelet-start \
--config=/var/run/kubeadm/kubeadm.yaml
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 502.252318ms
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
23. Checking Cluster Status
This section is dedicated to verifying the status of cluster components after kubelet startup. It describes commands for monitoring image pulls, container startup, and successful initialization of cluster resources. This allows you to confirm that the cluster has started correctly before proceeding to the next stages.
Checking Cluster Status
● Not required
Checking Cluster Status
● Not required
After kubelet starts, the cluster initialization process will begin, consisting of 3 stages:
- Image download
- Container startup
- Migration
Image download check
crictl images
registry.k8s.io/etcd 3.5.12-0 3861cfcd7c04c 57.2MB
registry.k8s.io/kube-apiserver v1.30.4 8a97b1fb3e2eb 32.8MB
registry.k8s.io/kube-controller-manager v1.30.4 8398ad49a121d 31.1MB
registry.k8s.io/kube-scheduler v1.30.4 4939f82ab9ab4 19.3MB
registry.k8s.io/pause 3.9 e6f1816883972 322kB
Container state check
crictl ps -a
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
09c8c2d7b6446 4939f82ab9ab4 2 minutes ago Running kube-scheduler 1 934a798c482c3 kube-scheduler-master-1.my-first-cluster.example.com
15a10517ea727 8398ad49a121d 2 minutes ago Running kube-controller-manager 1 765405114b0a9 kube-controller-manager-master-1.my-first-cluster.example.com
4b2d766a5f129 8a97b1fb3e2eb 2 minutes ago Running kube-apiserver 0 bd281a893a7c1 kube-apiserver-master-1.my-first-cluster.example.com
3fb02d0f802ae 3861cfcd7c04c 2 minutes ago Running etcd 0 b6b62dc165409 etcd-master-1.my-first-cluster.example.com
Migration check
crictl logs $(crictl ps --name kube-apiserver \
  -o json |
  jq -r '.containers[].id') 2>&1 |
  grep created
Output
I0325 19:50:24.849116 1 strategy.go:270] "Successfully created " type="suggested" name="node-high"
I0325 19:50:25.015326 1 strategy.go:270] "Successfully created " type="suggested" name="leader-election"
I0325 19:50:25.015272 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0325 19:50:25.062070 1 strategy.go:270] "Successfully created " type="suggested" name="workload-high"
I0325 19:50:25.092785 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0325 19:50:25.093056 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0325 19:50:25.097457 1 strategy.go:270] "Successfully created " type="suggested" name="workload-low"
I0325 19:50:25.122907 1 strategy.go:270] "Successfully created " type="suggested" name="global-default"
I0325 19:50:25.136110 1 strategy.go:270] "Successfully created " type="suggested" name="system-nodes"
I0325 19:50:25.145639 1 strategy.go:270] "Successfully created " type="suggested" name="system-node-high"
I0325 19:50:25.162094 1 strategy.go:270] "Successfully created " type="suggested" name="probes"
I0325 19:50:25.171177 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0325 19:50:25.178987 1 strategy.go:270] "Successfully created " type="suggested" name="system-leader-election"
I0325 19:50:25.189666 1 strategy.go:270] "Successfully created " type="suggested" name="workload-leader-election"
I0325 19:50:25.194349 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0325 19:50:25.201448 1 strategy.go:270] "Successfully created " type="suggested" name="endpoint-controller"
I0325 19:50:25.209644 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:monitoring
I0325 19:50:25.216051 1 strategy.go:270] "Successfully created " type="suggested" name="kube-controller-manager"
I0325 19:50:25.247945 1 strategy.go:270] "Successfully created " type="suggested" name="kube-scheduler"
I0325 19:50:25.254378 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0325 19:50:25.263224 1 strategy.go:270] "Successfully created " type="suggested" name="kube-system-service-accounts"
I0325 19:50:25.270945 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0325 19:50:25.281581 1 strategy.go:270] "Successfully created " type="suggested" name="service-accounts"
I0325 19:50:25.289105 1 strategy.go:270] "Successfully created " type="suggested" name="global-default"
I0325 19:50:25.291001 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/admin
I0325 19:50:25.314232 1 strategy.go:270] "Successfully created " type="mandatory" name="catch-all"
I0325 19:50:25.318737 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/edit
I0325 19:50:25.342170 1 strategy.go:270] "Successfully created " type="mandatory" name="exempt"
I0325 19:50:25.363630 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/view
I0325 19:50:25.364923 1 strategy.go:270] "Successfully created " type="mandatory" name="exempt"
I0325 19:50:25.372538 1 strategy.go:270] "Successfully created " type="mandatory" name="catch-all"
I0325 19:50:25.378771 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0325 19:50:25.390632 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0325 19:50:25.404175 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0325 19:50:25.423981 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0325 19:50:25.455599 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:node
I0325 19:50:25.470607 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0325 19:50:25.476809 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0325 19:50:25.482742 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0325 19:50:25.509907 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0325 19:50:25.518103 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0325 19:50:25.523930 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0325 19:50:25.530724 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0325 19:50:25.536652 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0325 19:50:25.548041 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0325 19:50:25.552946 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0325 19:50:25.563551 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0325 19:50:25.569432 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:legacy-unknown-approver
I0325 19:50:25.587133 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:kubelet-serving-approver
I0325 19:50:25.593244 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:kube-apiserver-client-approver
I0325 19:50:25.601059 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:kube-apiserver-client-kubelet-approver
I0325 19:50:25.610208 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:service-account-issuer-discovery
I0325 19:50:25.618408 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0325 19:50:25.633183 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0325 19:50:25.638420 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0325 19:50:25.646202 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0325 19:50:25.662691 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0325 19:50:25.670479 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0325 19:50:25.695624 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0325 19:50:25.704607 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0325 19:50:25.723784 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0325 19:50:25.730609 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:endpointslice-controller
I0325 19:50:25.739767 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:endpointslicemirroring-controller
I0325 19:50:25.749724 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0325 19:50:25.770915 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:ephemeral-volume-controller
I0325 19:50:25.778952 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0325 19:50:25.789374 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0325 19:50:25.849152 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0325 19:50:25.876849 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0325 19:50:25.911640 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0325 19:50:25.925130 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0325 19:50:25.931132 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0325 19:50:25.937393 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0325 19:50:25.946109 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0325 19:50:25.960711 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0325 19:50:25.966392 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0325 19:50:25.974500 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0325 19:50:26.006739 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0325 19:50:26.024263 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0325 19:50:26.030127 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0325 19:50:26.038301 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0325 19:50:26.052458 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0325 19:50:26.059044 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0325 19:50:26.088843 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-after-finished-controller
I0325 19:50:26.094917 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:root-ca-cert-publisher
I0325 19:50:26.101768 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:validatingadmissionpolicy-status-controller
I0325 19:50:26.116571 1 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:controller:legacy-service-account-token-cleaner
I0325 19:50:26.124067 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0325 19:50:26.130435 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:monitoring
I0325 19:50:26.135037 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0325 19:50:26.144777 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0325 19:50:26.152784 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0325 19:50:26.165524 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0325 19:50:26.172777 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0325 19:50:26.179247 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0325 19:50:26.197002 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0325 19:50:26.203166 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0325 19:50:26.208714 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0325 19:50:26.217096 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:service-account-issuer-discovery
I0325 19:50:26.226190 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0325 19:50:26.239853 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0325 19:50:26.244226 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0325 19:50:26.257950 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0325 19:50:26.262028 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0325 19:50:26.281103 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0325 19:50:26.294203 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0325 19:50:26.309198 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpointslice-controller
I0325 19:50:26.317701 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpointslicemirroring-controller
I0325 19:50:26.333124 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0325 19:50:26.338934 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ephemeral-volume-controller
I0325 19:50:26.344080 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0325 19:50:26.355286 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0325 19:50:26.365297 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0325 19:50:26.397412 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0325 19:50:26.402716 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0325 19:50:26.452669 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0325 19:50:26.457837 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0325 19:50:26.469955 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0325 19:50:26.477884 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0325 19:50:26.490451 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0325 19:50:26.509024 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0325 19:50:26.543252 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0325 19:50:26.549039 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0325 19:50:26.578269 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0325 19:50:26.592059 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0325 19:50:26.603091 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0325 19:50:26.622458 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0325 19:50:26.630783 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0325 19:50:26.647976 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-after-finished-controller
I0325 19:50:26.662162 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:root-ca-cert-publisher
I0325 19:50:26.701501 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:validatingadmissionpolicy-status-controller
I0325 19:50:26.711935 1 storage_rbac.go:256] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:legacy-service-account-token-cleaner
I0325 19:50:26.724206 1 storage_rbac.go:289] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0325 19:50:26.736799 1 storage_rbac.go:289] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0325 19:50:26.747295 1 storage_rbac.go:289] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0325 19:50:26.757544 1 storage_rbac.go:289] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0325 19:50:26.766086 1 storage_rbac.go:289] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0325 19:50:26.773895 1 storage_rbac.go:289] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0325 19:50:26.785505 1 storage_rbac.go:289] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0325 19:50:26.813423 1 storage_rbac.go:321] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0325 19:50:26.822640 1 storage_rbac.go:321] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0325 19:50:26.829331 1 storage_rbac.go:321] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0325 19:50:26.838203 1 storage_rbac.go:321] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0325 19:50:26.848813 1 storage_rbac.go:321] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0325 19:50:26.861183 1 storage_rbac.go:321] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0325 19:50:26.871910 1 storage_rbac.go:321] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
24. Configuring the Role Model
This section covers the configuration of the role model (RBAC) required for the correct operation of the kubeadm join mechanism. It describes the Roles/ClusterRoles, RoleBindings/ClusterRoleBindings, and Bootstrap token that allow new nodes to securely connect to the cluster, request certificates, and obtain API server configuration information.
- Init
Kubeadm role model setup
● Required
Kubeadm role model setup
● Required
- HardWay
- Kubeadm
Role bindings
Environment variables
export AUTH_EXTRA_GROUPS="system:bootstrappers:kubeadm:default-node-token"
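For context on why this group name matters: a client authenticating with a bootstrap token is identified as the user system:bootstrap:&lt;token-id&gt; and is placed into the system:bootstrappers group plus any extra groups recorded in the token's Secret, which is the value AUTH_EXTRA_GROUPS holds here. Extracting the id from the example token used earlier in this guide:

```shell
TOKEN="fjt9ex.lwzqgdlvoxtqk4yw"   # example token used earlier in this guide

TOKEN_ID="${TOKEN%%.*}"           # the part before the dot
echo "authenticates as: system:bootstrap:${TOKEN_ID}"
```

The bindings below therefore grant rights to whole token groups, not to individual token users.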
Roles and bindings
This block is required so that kubeadm can check whether a node with this name is registered in the cluster or not.
kubectl \
--kubeconfig=/etc/kubernetes/super-admin.conf apply -f - <<EOF
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: kubeadm:get-nodes
rules:
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubeadm:get-nodes
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubeadm:get-nodes
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: ${AUTH_EXTRA_GROUPS}
EOF
This block is required so that anonymous clients (e.g., kubeadm during the discovery phase) can retrieve the ConfigMap with cluster information (cluster-info) from the kube-public namespace. This allows loading the initial API server connection parameters and verifying the bootstrap token signature before establishing full authentication.
kubectl \
--kubeconfig=/etc/kubernetes/super-admin.conf apply -f - <<EOF
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: kubeadm:bootstrap-signer-clusterinfo
namespace: kube-public
rules:
- apiGroups:
- ""
resourceNames:
- cluster-info
resources:
- configmaps
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kubeadm:bootstrap-signer-clusterinfo
namespace: kube-public
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubeadm:bootstrap-signer-clusterinfo
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: system:anonymous
EOF
This block is required to assign cluster-admin rights to all users in the kubeadm:cluster-admins group. This allows granting full cluster access with centralized rights management — unlike the system:masters group, from which access cannot be revoked through RBAC mechanisms. This approach simplifies administrative role setup and integration with external authorization systems.
kubectl \
--kubeconfig=/etc/kubernetes/super-admin.conf apply -f - <<EOF
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubeadm:cluster-admins
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: kubeadm:cluster-admins
EOF
This block is required so that members of the ${AUTH_EXTRA_GROUPS} group (e.g., system:bootstrappers) can use the bootstrap token to initialize the kubelet connection to the cluster. Binding to the system:node-bootstrapper role allows such subjects to request TLS certificates for nodes through CSR (CertificateSigningRequest), which is a necessary step in the kubeadm join process.
kubectl \
--kubeconfig=/etc/kubernetes/super-admin.conf apply -f - <<EOF
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubeadm:kubelet-bootstrap
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: ${AUTH_EXTRA_GROUPS}
EOF
This block is required for automatic approval of client certificate requests from nodes joining the cluster via bootstrap token. It assigns the system:certificates.k8s.io:certificatesigningrequests:nodeclient role to the ${AUTH_EXTRA_GROUPS} group (e.g., system:bootstrappers), which allows kube-controller-manager to automatically sign CSRs from kubelet during kubeadm join.
kubectl \
--kubeconfig=/etc/kubernetes/super-admin.conf apply -f - <<EOF
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubeadm:node-autoapprove-bootstrap
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: ${AUTH_EXTRA_GROUPS}
EOF
This block is required for automatic approval of kubelet client certificate renewal requests. It grants the system:nodes group rights that allow re-requesting and automatically receiving new certificates through CertificateSigningRequest. This is necessary for the correct operation of the node certificate rotation mechanism without manual intervention.
kubectl \
--kubeconfig=/etc/kubernetes/super-admin.conf apply -f - <<EOF
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubeadm:node-autoapprove-certificate-rotation
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:nodes
EOF
clusterrole.rbac.authorization.k8s.io/kubeadm:get-nodes created
role.rbac.authorization.k8s.io/kubeadm:bootstrap-signer-clusterinfo created
rolebinding.rbac.authorization.k8s.io/kubeadm:bootstrap-signer-clusterinfo created
clusterrolebinding.rbac.authorization.k8s.io/kubeadm:cluster-admins created
clusterrolebinding.rbac.authorization.k8s.io/kubeadm:get-nodes created
clusterrolebinding.rbac.authorization.k8s.io/kubeadm:kubelet-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/kubeadm:node-autoapprove-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/kubeadm:node-autoapprove-certificate-rotation created
Bootstrap tokens
Environment variables
export AUTH_EXTRA_GROUPS="system:bootstrappers:kubeadm:default-node-token"
export DESCRIPTION="kubeadm bootstrap token"
export EXPIRATION=$(date -d '24 hours' "+%Y-%m-%dT%H:%M:%SZ")
export TOKEN_ID="fjt9ex"
export TOKEN_SECRET="lwzqgdlvoxtqk4yw"
export USAGE_BOOTSTRAP_AUTHENTIFICATION="true"
export USAGE_BOOTSTRAP_SIGNING="true"
Creating access token
This token is a bootstrap token, and it is needed to allow a new node to securely join the Kubernetes cluster via kubeadm join while it does not yet have its own certificates and a trusted kubeconfig.
In production environments, it is recommended to create a separate bootstrap token for each node. However, for demonstration purposes (and within this documentation), we have simplified the process and use a single shared token for all control plane nodes.
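The token used here is a fixed demo value. A minimal sketch for generating a random ID and secret in the `[a-z0-9]{6}.[a-z0-9]{16}` format that bootstrap tokens require (the same shape `kubeadm token generate` produces):

```shell
# Draw random bytes and keep only lowercase alphanumerics, then slice
# out a 6-character token ID and a 16-character token secret
RAND=$(head -c 512 /dev/urandom | tr -dc 'a-z0-9')
TOKEN_ID=${RAND:0:6}
TOKEN_SECRET=${RAND:6:16}
echo "${TOKEN_ID}.${TOKEN_SECRET}"
```

Exporting these two variables instead of the hard-coded ones keeps the rest of the steps unchanged.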
kubectl \
--kubeconfig=/etc/kubernetes/super-admin.conf \
apply -f - <<EOF
---
apiVersion: v1
kind: Secret
metadata:
name: bootstrap-token-${TOKEN_ID}
namespace: kube-system
data:
auth-extra-groups: $(echo -n "$AUTH_EXTRA_GROUPS" | base64)
description: $(echo -n "$DESCRIPTION" | base64)
expiration: $(echo -n "$EXPIRATION" | base64)
token-id: $(echo -n "$TOKEN_ID" | base64)
token-secret: $(echo -n "$TOKEN_SECRET" | base64)
usage-bootstrap-authentication: $(echo -n "$USAGE_BOOTSTRAP_AUTHENTIFICATION" | base64)
usage-bootstrap-signing: $(echo -n "$USAGE_BOOTSTRAP_SIGNING" | base64)
type: bootstrap.kubernetes.io/token
EOF
secret/bootstrap-token-fjt9ex configured
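Every `data` field of the Secret above is base64-encoded by the `$(echo -n ... | base64)` substitutions. A quick round-trip check of that encoding, using the demo token ID as the sample value:

```shell
# Secret data values are plain base64 of the raw string (no trailing
# newline, hence echo -n); encode and decode the demo token ID
ENCODED=$(echo -n "fjt9ex" | base64)
DECODED=$(echo "$ENCODED" | base64 -d)
echo "$ENCODED"   # Zmp0OWV4
```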
Cluster-Info
Environment variables
export KUBE_CA_CRT_BASE64=$(base64 -w 0 /etc/kubernetes/pki/ca.crt)
export CLUSTER_API_URL=https://api.my-first-cluster.example.com
Updating Cluster-info
cluster-info is a public source of basic cluster information required for secure bootstrap joining of new nodes via kubeadm.
- 🔐 Contains a public kubeconfig with CA and API address.
- 📥 Used by kubeadm join for discovery.
- 🌐 Accessible anonymously through kube-public.
- ✅ Allows the node to verify API server authenticity before authentication.
kubectl \
--kubeconfig=/etc/kubernetes/super-admin.conf \
apply -f - <<EOF
---
apiVersion: v1
data:
kubeconfig: |
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: ${KUBE_CA_CRT_BASE64}
server: ${CLUSTER_API_URL}:6443
name: ""
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
kind: ConfigMap
metadata:
name: cluster-info
namespace: kube-public
EOF
configmap/cluster-info created
Role model generation
kubeadm init phase bootstrap-token \
--config=/var/run/kubeadm/kubeadm.yaml \
--kubeconfig=/etc/kubernetes/super-admin.conf
[bootstrap-token] Using token: fjt9ex.lwzqgdlvoxtqk4yw
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
25. Uploading Configuration to the Cluster
This section covers uploading the current kubeadm and kubelet configuration to the cluster as a ConfigMap. This configuration is required for the correct execution of the kubeadm join command, as it is used during initialization of new control plane nodes. Uploading the configuration centralizes cluster parameter management and ensures consistency across all nodes, including both master and worker nodes.
- Init
- Join
Uploading configuration to the cluster
● Required
This section provides instructions for uploading the current kubeadm and kubelet configuration to the Kubernetes control plane as a ConfigMap resource. This approach simplifies managing configuration changes for Kubernetes nodes, covering both worker and master nodes.
- HardWay
- Kubeadm
Environment variables for configuration file template
export CLUSTER_NAME='my-first-cluster'
export BASE_DOMAIN='example.com'
export FULL_HOST_NAME=${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}
export INTERNAL_API=api.${CLUSTER_NAME}.${BASE_DOMAIN}
export MACHINE_LOCAL_ADDRESS=$(ip -4 addr show scope global | awk '/inet/ {print $2; exit}' | cut -d/ -f1)
export ETCD_INITIAL_CLUSTER="${FULL_HOST_NAME}=https://${MACHINE_LOCAL_ADDRESS}:2380"
export AUTH_EXTRA_GROUPS="system:bootstrappers:kubeadm:default-node-token"
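With a single control plane node, `ETCD_INITIAL_CLUSTER` holds exactly one `name=peer-URL` pair. For a three-node control plane it would enumerate every member; the hostnames and addresses below are hypothetical:

```shell
# Hypothetical three-member etcd cluster: comma-separated name=peer-URL
# pairs, where each name must match that node's FULL_HOST_NAME
export ETCD_INITIAL_CLUSTER="cp-1.my-first-cluster.example.com=https://10.0.0.1:2380,cp-2.my-first-cluster.example.com=https://10.0.0.2:2380,cp-3.my-first-cluster.example.com=https://10.0.0.3:2380"
echo "$ETCD_INITIAL_CLUSTER" | tr ',' '\n'
```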
kubeadm-config
This block is required to allow nodes to read the kubeadm-config ConfigMap in the kube-system namespace:
kubectl \
--kubeconfig=/etc/kubernetes/super-admin.conf \
apply -f - <<EOF
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: kubeadm:nodes-kubeadm-config
namespace: kube-system
rules:
- apiGroups:
- ""
resourceNames:
- kubeadm-config
resources:
- configmaps
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kubeadm:nodes-kubeadm-config
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubeadm:nodes-kubeadm-config
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: ${AUTH_EXTRA_GROUPS}
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:nodes
EOF
This block is required so that, when executing kubeadm join, the node receives the current ClusterConfiguration from the cluster and correctly joins the control plane.
kubectl \
--kubeconfig=/etc/kubernetes/super-admin.conf \
apply -f - <<EOF
---
apiVersion: v1
kind: ConfigMap
metadata:
name: kubeadm-config
namespace: kube-system
data:
ClusterConfiguration: |
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: "${CLUSTER_NAME}"
certificatesDir: /etc/kubernetes/pki
controlPlaneEndpoint: ${INTERNAL_API}:6443
imageRepository: "registry.k8s.io"
networking:
serviceSubnet: 29.64.0.0/16
dnsDomain: cluster.local
kubernetesVersion: v1.30.4
dns: {}
etcd:
local:
imageRepository: "registry.k8s.io"
dataDir: "/var/lib/etcd"
extraArgs:
auto-compaction-retention: "8"
cert-file: "/etc/kubernetes/pki/etcd/server.crt"
client-cert-auth: "true"
data-dir: "/var/lib/etcd"
election-timeout: "1500"
heartbeat-interval: "250"
key-file: "/etc/kubernetes/pki/etcd/server.key"
listen-client-urls: "https://0.0.0.0:2379"
listen-metrics-urls: "http://0.0.0.0:2381"
listen-peer-urls: "https://0.0.0.0:2380"
logger: "zap"
max-snapshots: "10"
max-wals: "10"
metrics: "extensive"
peer-cert-file: "/etc/kubernetes/pki/etcd/peer.crt"
peer-client-cert-auth: "true"
peer-key-file: "/etc/kubernetes/pki/etcd/peer.key"
peer-trusted-ca-file: "/etc/kubernetes/pki/etcd/ca.crt"
snapshot-count: "10000"
quota-backend-bytes: "10737418240" # TODO
experimental-initial-corrupt-check: "true"
experimental-watch-progress-notify-interval: "5s"
trusted-ca-file: "/etc/kubernetes/pki/etcd/ca.crt"
peerCertSANs:
- 127.0.0.1
serverCertSANs:
- 127.0.0.1
apiServer:
extraArgs:
aggregator-reject-forwarding-redirect: "true"
allow-privileged: "true"
anonymous-auth: "true"
api-audiences: "konnectivity-server"
apiserver-count: "1"
audit-log-batch-buffer-size: "10000"
audit-log-batch-max-size: "1"
audit-log-batch-max-wait: "0s"
audit-log-batch-throttle-burst: "0"
audit-log-batch-throttle-enable: "false"
audit-log-batch-throttle-qps: "0"
audit-log-compress: "false"
audit-log-format: "json"
audit-log-maxage: "30"
audit-log-maxbackup: "10"
audit-log-maxsize: "1000"
audit-log-mode: "batch"
audit-log-truncate-enabled: "false"
audit-log-truncate-max-batch-size: "10485760"
audit-log-truncate-max-event-size: "102400"
audit-log-version: "audit.k8s.io/v1"
audit-webhook-batch-buffer-size: "10000"
audit-webhook-batch-initial-backoff: "10s"
audit-webhook-batch-max-size: "400"
audit-webhook-batch-max-wait: "30s"
audit-webhook-batch-throttle-burst: "15"
audit-webhook-batch-throttle-enable: "true"
audit-webhook-batch-throttle-qps: "10"
audit-webhook-initial-backoff: "10s"
audit-webhook-mode: "batch"
audit-webhook-truncate-enabled: "false"
audit-webhook-truncate-max-batch-size: "10485760"
audit-webhook-truncate-max-event-size: "102400"
audit-webhook-version: "audit.k8s.io/v1"
audit-policy-file: /etc/kubernetes/audit-policy.yaml
audit-log-path: /var/log/kubernetes/audit/audit.log
authentication-token-webhook-cache-ttl: "2m0s"
authentication-token-webhook-version: "v1beta1"
authorization-mode: "Node,RBAC"
authorization-webhook-cache-authorized-ttl: "5m0s"
authorization-webhook-cache-unauthorized-ttl: "30s"
authorization-webhook-version: "v1beta1"
bind-address: "0.0.0.0"
cert-dir: "/var/run/kubernetes"
client-ca-file: "/etc/kubernetes/pki/ca.crt"
cloud-provider-gce-l7lb-src-cidrs: "130.211.0.0/22,35.191.0.0/16"
cloud-provider-gce-lb-src-cidrs: "130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16"
contention-profiling: "false"
default-not-ready-toleration-seconds: "300"
default-unreachable-toleration-seconds: "300"
default-watch-cache-size: "100"
delete-collection-workers: "1"
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,PodSecurity"
enable-aggregator-routing: "true"
enable-bootstrap-token-auth: "true"
enable-garbage-collector: "true"
enable-logs-handler: "true"
enable-priority-and-fairness: "true"
encryption-provider-config-automatic-reload: "false"
endpoint-reconciler-type: "lease"
etcd-cafile: "/etc/kubernetes/pki/etcd/ca.crt"
etcd-certfile: "/etc/kubernetes/pki/apiserver-etcd-client.crt"
etcd-compaction-interval: "5m0s"
etcd-count-metric-poll-period: "1m0s"
etcd-db-metric-poll-interval: "30s"
etcd-healthcheck-timeout: "2s"
etcd-keyfile: "/etc/kubernetes/pki/apiserver-etcd-client.key"
etcd-prefix: "/registry"
etcd-readycheck-timeout: "2s"
etcd-servers: "https://127.0.0.1:2379"
event-ttl: "1h0m0s"
feature-gates: "RotateKubeletServerCertificate=true"
goaway-chance: "0"
help: "false"
http2-max-streams-per-connection: "0"
kubelet-client-certificate: "/etc/kubernetes/pki/apiserver-kubelet-client.crt"
kubelet-client-key: "/etc/kubernetes/pki/apiserver-kubelet-client.key"
kubelet-port: "10250"
kubelet-preferred-address-types: "InternalIP,ExternalIP,Hostname"
kubelet-read-only-port: "10255"
kubelet-timeout: "5s"
kubernetes-service-node-port: "0"
lease-reuse-duration-seconds: "60"
livez-grace-period: "0s"
log-flush-frequency: "5s"
logging-format: "text"
log-json-info-buffer-size: "0"
log-json-split-stream: "false"
log-text-info-buffer-size: "0"
log-text-split-stream: "false"
max-connection-bytes-per-sec: "0"
max-mutating-requests-inflight: "200"
max-requests-inflight: "400"
min-request-timeout: "1800"
permit-address-sharing: "false"
permit-port-sharing: "false"
profiling: "false"
proxy-client-cert-file: "/etc/kubernetes/pki/front-proxy-client.crt"
proxy-client-key-file: "/etc/kubernetes/pki/front-proxy-client.key"
requestheader-allowed-names: "front-proxy-client"
requestheader-client-ca-file: "/etc/kubernetes/pki/front-proxy-ca.crt"
requestheader-extra-headers-prefix: "X-Remote-Extra-"
requestheader-group-headers: "X-Remote-Group"
requestheader-username-headers: "X-Remote-User"
request-timeout: "1m0s"
runtime-config: "api/all=true"
secure-port: "6443"
service-account-extend-token-expiration: "true"
service-account-issuer: "https://kubernetes.default.svc.cluster.local"
service-account-key-file: "/etc/kubernetes/pki/sa.pub"
service-account-lookup: "true"
service-account-max-token-expiration: "0s"
service-account-signing-key-file: "/etc/kubernetes/pki/sa.key"
service-cluster-ip-range: "29.64.0.0/16"
service-node-port-range: "30000-32767"
shutdown-delay-duration: "0s"
shutdown-send-retry-after: "false"
shutdown-watch-termination-grace-period: "0s"
storage-backend: "etcd3"
storage-media-type: "application/vnd.kubernetes.protobuf"
tls-cert-file: "/etc/kubernetes/pki/apiserver.crt"
tls-private-key-file: "/etc/kubernetes/pki/apiserver.key"
v: "2"
version: "false"
watch-cache: "true"
# IF YOU NEED TO ENABLE THE CLOUD-CONTROLLER-MANAGER,
# UNCOMMENT THE FOLLOWING
# ->
# cloud-provider: "external"
# Do not set these if the value is "" or undefined
# cloud-config: ""
# strict-transport-security-directives: ""
# disable-admission-plugins: ""
# disabled-metrics: ""
# egress-selector-config-file: ""
# encryption-provider-config: ""
# etcd-servers-overrides: ""
# external-hostname: ""
# kubelet-certificate-authority: ""
# oidc-ca-file: ""
# oidc-client-id: ""
# oidc-groups-claim: ""
# oidc-groups-prefix: ""
# oidc-issuer-url: ""
# oidc-required-claim: ""
# oidc-signing-algs: "RS256"
# oidc-username-claim: "sub"
# oidc-username-prefix: ""
# peer-advertise-ip: ""
# peer-advertise-port: ""
# peer-ca-file: ""
# service-account-jwks-uri: ""
# show-hidden-metrics-for-version: ""
# tls-cipher-suites: ""
# tls-min-version: ""
# tls-sni-cert-key: ""
# token-auth-file: ""
# tracing-config-file: ""
# vmodule: ""
# watch-cache-sizes: ""
# authorization-webhook-config-file: ""
# cors-allowed-origins: ""
# debug-socket-path: ""
# authorization-policy-file: ""
# authorization-config: ""
# authentication-token-webhook-config-file: ""
# authentication-config: ""
# audit-webhook-config-file: ""
# audit-policy-file: "/etc/kubernetes/audit-policy.yaml"
# audit-log-path: "/var/log/kubernetes/audit/audit.log"
# allow-metric-labels: ""
# allow-metric-labels-manifest: ""
# admission-control: ""
# admission-control-config-file: ""
# advertise-address: ""
extraVolumes:
- name: "k8s-audit"
hostPath: "/var/log/kubernetes/audit/"
mountPath: "/var/log/kubernetes/audit/"
readOnly: false
pathType: DirectoryOrCreate
- name: "k8s-audit-policy"
hostPath: "/etc/kubernetes/audit-policy.yaml"
mountPath: "/etc/kubernetes/audit-policy.yaml"
pathType: File
certSANs:
- "127.0.0.1"
# TODO: uncomment to add the external FQDN to the cluster certificates
# - ${INTERNAL_API}
timeoutForControlPlane: 4m0s
controllerManager:
extraArgs:
cluster-name: "${CLUSTER_NAME}"
allocate-node-cidrs: "false"
allow-untagged-cloud: "false"
attach-detach-reconcile-sync-period: "1m0s"
authentication-kubeconfig: "/etc/kubernetes/controller-manager.conf"
authentication-skip-lookup: "false"
authentication-token-webhook-cache-ttl: "10s"
authentication-tolerate-lookup-failure: "false"
authorization-always-allow-paths: "/healthz,/readyz,/livez,/metrics"
authorization-kubeconfig: "/etc/kubernetes/controller-manager.conf"
authorization-webhook-cache-authorized-ttl: "10s"
authorization-webhook-cache-unauthorized-ttl: "10s"
bind-address: "0.0.0.0"
cidr-allocator-type: "RangeAllocator"
client-ca-file: "/etc/kubernetes/pki/ca.crt"
# -> Enable if state is managed via the Cloud Controller Manager
# cloud-provider: "external"
cloud-provider-gce-lb-src-cidrs: "130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16"
cluster-signing-cert-file: "/etc/kubernetes/pki/ca.crt"
cluster-signing-duration: "720h0m0s"
cluster-signing-key-file: "/etc/kubernetes/pki/ca.key"
concurrent-cron-job-syncs: "5"
concurrent-deployment-syncs: "5"
concurrent-endpoint-syncs: "5"
concurrent-ephemeralvolume-syncs: "5"
concurrent-gc-syncs: "20"
concurrent-horizontal-pod-autoscaler-syncs: "5"
concurrent-job-syncs: "5"
concurrent-namespace-syncs: "10"
concurrent-rc-syncs: "5"
concurrent-replicaset-syncs: "20"
concurrent-resource-quota-syncs: "5"
concurrent-service-endpoint-syncs: "5"
concurrent-service-syncs: "1"
concurrent-serviceaccount-token-syncs: "5"
concurrent-statefulset-syncs: "5"
concurrent-ttl-after-finished-syncs: "5"
concurrent-validating-admission-policy-status-syncs: "5"
configure-cloud-routes: "true"
contention-profiling: "false"
controller-start-interval: "0s"
controllers: "*,bootstrapsigner,tokencleaner"
disable-attach-detach-reconcile-sync: "false"
disable-force-detach-on-timeout: "false"
enable-dynamic-provisioning: "true"
enable-garbage-collector: "true"
enable-hostpath-provisioner: "false"
enable-leader-migration: "false"
endpoint-updates-batch-period: "0s"
endpointslice-updates-batch-period: "0s"
feature-gates: "RotateKubeletServerCertificate=true"
flex-volume-plugin-dir: "/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
help: "false"
horizontal-pod-autoscaler-cpu-initialization-period: "5m0s"
horizontal-pod-autoscaler-downscale-delay: "5m0s"
horizontal-pod-autoscaler-downscale-stabilization: "5m0s"
horizontal-pod-autoscaler-initial-readiness-delay: "30s"
horizontal-pod-autoscaler-sync-period: "30s"
horizontal-pod-autoscaler-tolerance: "0.1"
horizontal-pod-autoscaler-upscale-delay: "3m0s"
http2-max-streams-per-connection: "0"
kube-api-burst: "120"
kube-api-content-type: "application/vnd.kubernetes.protobuf"
kube-api-qps: "100"
kubeconfig: "/etc/kubernetes/controller-manager.conf"
large-cluster-size-threshold: "50"
leader-elect: "true"
leader-elect-lease-duration: "15s"
leader-elect-renew-deadline: "10s"
leader-elect-resource-lock: "leases"
leader-elect-resource-name: "kube-controller-manager"
leader-elect-resource-namespace: "kube-system"
leader-elect-retry-period: "2s"
legacy-service-account-token-clean-up-period: "8760h0m0s"
log-flush-frequency: "5s"
log-json-info-buffer-size: "0"
log-json-split-stream: "false"
log-text-info-buffer-size: "0"
log-text-split-stream: "false"
logging-format: "text"
max-endpoints-per-slice: "100"
min-resync-period: "12h0m0s"
mirroring-concurrent-service-endpoint-syncs: "5"
mirroring-endpointslice-updates-batch-period: "0s"
mirroring-max-endpoints-per-subset: "1000"
namespace-sync-period: "2m0s"
node-cidr-mask-size: "0"
node-cidr-mask-size-ipv4: "0"
node-cidr-mask-size-ipv6: "0"
node-eviction-rate: "0.1"
node-monitor-grace-period: "40s"
node-monitor-period: "5s"
node-startup-grace-period: "10s"
node-sync-period: "0s"
permit-address-sharing: "false"
permit-port-sharing: "false"
profiling: "false"
pv-recycler-increment-timeout-nfs: "30"
pv-recycler-minimum-timeout-hostpath: "60"
pv-recycler-minimum-timeout-nfs: "300"
pv-recycler-timeout-increment-hostpath: "30"
pvclaimbinder-sync-period: "15s"
requestheader-client-ca-file: "/etc/kubernetes/pki/front-proxy-ca.crt"
requestheader-extra-headers-prefix: "x-remote-extra-"
requestheader-group-headers: "x-remote-group"
requestheader-username-headers: "x-remote-user"
resource-quota-sync-period: "5m0s"
root-ca-file: "/etc/kubernetes/pki/ca.crt"
route-reconciliation-period: "10s"
secondary-node-eviction-rate: "0.01"
secure-port: "10257"
service-account-private-key-file: "/etc/kubernetes/pki/sa.key"
terminated-pod-gc-threshold: "0"
unhealthy-zone-threshold: "0.55"
use-service-account-credentials: "true"
v: "2"
version: "false"
volume-host-allow-local-loopback: "true"
# IF YOU NEED TO ENABLE SERVER CERTIFICATES FOR KUBE-CONTROLLER-MANAGER:
# NOTE THAT KUBEADM DOES NOT CREATE THESE CERTIFICATES.
# UNCOMMENT THE FOLLOWING
# ->
# tls-cert-file=/etc/kubernetes/pki/controller-manager-server.crt
# tls-private-key-file=/etc/kubernetes/pki/controller-manager-server.key
# Do not set these if the value is "" or undefined
# cluster-signing-kube-apiserver-client-cert-file: ""
# cluster-signing-kube-apiserver-client-key-file: ""
# cluster-signing-kubelet-client-cert-file: ""
# cluster-signing-kubelet-client-key-file: ""
# cluster-signing-kubelet-serving-cert-file: ""
# cluster-signing-kubelet-serving-key-file: ""
# cluster-signing-legacy-unknown-cert-file: ""
# cluster-signing-legacy-unknown-key-file: ""
# cluster-cidr: ""
# cloud-config: ""
# cert-dir: ""
# allow-metric-labels-manifest: ""
# allow-metric-labels: ""
# disabled-metrics: ""
# leader-migration-config: ""
# master: ""
# pv-recycler-pod-template-filepath-hostpath: ""
# pv-recycler-pod-template-filepath-nfs: ""
# service-cluster-ip-range: ""
# show-hidden-metrics-for-version: ""
# tls-cipher-suites: ""
# tls-min-version: ""
# tls-sni-cert-key: ""
# vmodule: ""
# volume-host-cidr-denylist: ""
# external-cloud-volume-plugin: ""
# requestheader-allowed-names: ""
# IF YOU NEED TO ENABLE SERVER CERTIFICATES FOR KUBE-CONTROLLER-MANAGER:
# NOTE THAT KUBEADM DOES NOT CREATE THESE CERTIFICATES.
# UNCOMMENT THE FOLLOWING
# ->
# extraVolumes:
# - name: "controller-manager-crt"
# hostPath: "/etc/kubernetes/pki/controller-manager-server.crt"
# mountPath: "/etc/kubernetes/pki/controller-manager-server.crt"
# pathType: File
# - name: "controller-manager-key"
# hostPath: "/etc/kubernetes/pki/controller-manager-server.key"
# mountPath: "/etc/kubernetes/pki/controller-manager-server.key"
# pathType: File
scheduler:
extraArgs:
authentication-kubeconfig: "/etc/kubernetes/scheduler.conf"
authentication-skip-lookup: "false"
authentication-token-webhook-cache-ttl: "10s"
authentication-tolerate-lookup-failure: "true"
authorization-always-allow-paths: "/healthz,/readyz,/livez,/metrics"
authorization-kubeconfig: "/etc/kubernetes/scheduler.conf"
authorization-webhook-cache-authorized-ttl: "10s"
authorization-webhook-cache-unauthorized-ttl: "10s"
bind-address: "0.0.0.0"
client-ca-file: ""
contention-profiling: "true"
help: "false"
http2-max-streams-per-connection: "0"
kube-api-burst: "100"
kube-api-content-type: "application/vnd.kubernetes.protobuf"
kube-api-qps: "50"
kubeconfig: "/etc/kubernetes/scheduler.conf"
leader-elect: "true"
leader-elect-lease-duration: "15s"
leader-elect-renew-deadline: "10s"
leader-elect-resource-lock: "leases"
leader-elect-resource-name: "kube-scheduler"
leader-elect-resource-namespace: "kube-system"
leader-elect-retry-period: "2s"
log-flush-frequency: "5s"
log-json-info-buffer-size: "0"
log-json-split-stream: "false"
log-text-info-buffer-size: "0"
log-text-split-stream: "false"
logging-format: "text"
permit-address-sharing: "false"
permit-port-sharing: "false"
pod-max-in-unschedulable-pods-duration: "5m0s"
profiling: "true"
requestheader-extra-headers-prefix: "[x-remote-extra-]"
requestheader-group-headers: "[x-remote-group]"
requestheader-username-headers: "[x-remote-user]"
secure-port: "10259"
v: "2"
version: "false"
# IF YOU NEED TO ENABLE SERVER CERTIFICATES FOR KUBE-SCHEDULER:
# NOTE THAT KUBEADM DOES NOT CREATE THESE CERTIFICATES.
# UNCOMMENT THE FOLLOWING
# ->
# tls-cert-file=/etc/kubernetes/pki/scheduler-server.crt
# tls-private-key-file=/etc/kubernetes/pki/scheduler-server.key
# <-
# allow-metric-labels: "[]"
# allow-metric-labels-manifest: ""
# cert-dir: ""
# config: ""
# disabled-metrics: "[]"
# feature-gates: ""
# master: ""
# requestheader-allowed-names: "[]"
# requestheader-client-ca-file: ""
# show-hidden-metrics-for-version: ""
# tls-cipher-suites: "[]"
# tls-min-version: ""
# tls-sni-cert-key: "[]"
# vmodule: ""
# write-config-to: ""
# IF YOU NEED TO ENABLE SERVER CERTIFICATES FOR KUBE-SCHEDULER:
# NOTE THAT KUBEADM DOES NOT CREATE THESE CERTIFICATES.
# UNCOMMENT THE FOLLOWING
# ->
# extraVolumes:
# - name: "scheduler-crt"
# hostPath: "/etc/kubernetes/pki/scheduler-server.crt"
# mountPath: "/etc/kubernetes/pki/scheduler-server.crt"
# pathType: File
# - name: "scheduler-key"
# hostPath: "/etc/kubernetes/pki/scheduler-server.key"
# mountPath: "/etc/kubernetes/pki/scheduler-server.key"
# pathType: File
EOF
kubelet-config
This block is required to allow nodes to read the kubelet-config ConfigMap in the kube-system namespace:
kubectl \
--kubeconfig=/etc/kubernetes/super-admin.conf \
apply -f - <<EOF
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: kubeadm:kubelet-config
namespace: kube-system
rules:
- apiGroups:
- ""
resourceNames:
- kubelet-config
resources:
- configmaps
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kubeadm:kubelet-config
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubeadm:kubelet-config
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:nodes
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: ${AUTH_EXTRA_GROUPS}
EOF
This block is required so that, when executing kubeadm join, the node receives the current kubelet-config from the cluster and correctly joins the control plane.
kubectl \
--kubeconfig=/etc/kubernetes/super-admin.conf \
apply -f - <<EOF
---
apiVersion: v1
kind: ConfigMap
metadata:
name: kubelet-config
namespace: kube-system
data:
kubelet: |
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
anonymous:
enabled: false
webhook:
cacheTTL: 0s
enabled: true
x509:
clientCAFile: "/etc/kubernetes/pki/ca.crt"
authorization:
mode: Webhook
webhook:
cacheAuthorizedTTL: 0s
cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
containerLogMaxSize: "50Mi"
containerRuntimeEndpoint: "/var/run/containerd/containerd.sock"
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 5s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageGCHighThresholdPercent: 55
imageGCLowThresholdPercent: 50
imageMaximumGCAge: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
flushFrequency: 0
options:
json:
infoBufferSize: "0"
text:
infoBufferSize: "0"
verbosity: 0
kubeAPIQPS: 50
kubeAPIBurst: 100
maxPods: 250
memorySwap: {}
nodeStatusReportFrequency: 1s
nodeStatusUpdateFrequency: 1s
podPidsLimit: 4096
registerNode: true
resolvConf: /run/systemd/resolve/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 0s
serializeImagePulls: false
serverTLSBootstrap: true
shutdownGracePeriod: 15s
shutdownGracePeriodCriticalPods: 5s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
tlsMinVersion: "VersionTLS12"
volumeStatsAggPeriod: 0s
featureGates:
RotateKubeletServerCertificate: true
APIPriorityAndFairness: true
tlsCipherSuites:
- "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"
- "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
- "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"
- "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
- "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"
- "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"
EOF
Configuration update
kubeadm init phase upload-config all \
--config=/var/run/kubeadm/kubeadm.yaml \
--kubeconfig=/etc/kubernetes/super-admin.conf
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
Uploading configuration to the cluster
● Required
When adding a new control plane node (join), configuration upload is performed automatically
as part of the kubeadm join phase. The kubeadm and kubelet configuration is read from the existing ConfigMap
in the kube-system namespace, which was uploaded during the initialization of the first node.
A separate manual upload-config call is not required during join — kubeadm join independently
retrieves the necessary parameters from the cluster.
26. Uploading Root Certificates to the Cluster
This section covers uploading root certificates to the Kubernetes cluster. The kubeadm-certs secret is created manually and contains the keys and certificates required when adding new control plane nodes (kubeadm join). This approach allows sensitive data to be securely transferred between control plane nodes.
- Init
Uploading root certificates to Kubernetes
● Required
This section provides instructions for uploading root certificates to the Kubernetes control plane. The certificates are uploaded in encrypted form as a Secret resource, which allows them to be securely transferred and decrypted on another node for managing the control plane node lifecycle.
- HardWay
- Kubeadm
Environment variables for configuration file template
export AUTH_EXTRA_GROUPS="system:bootstrappers:kubeadm:default-node-token"
Role model preparation
This block prepares the role model for granting access to the kubeadm-certs secret. This is necessary so that control plane nodes can securely obtain root certificates through the Kubernetes API when joining the cluster. The role is bound to the ${AUTH_EXTRA_GROUPS} group, which kubeadm typically falls under during join.
kubectl \
  --kubeconfig=/etc/kubernetes/super-admin.conf apply -f - <<EOF
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kubeadm:kubeadm-certs
  namespace: kube-system
rules:
- apiGroups:
  - ""
  resourceNames:
  - kubeadm-certs
  resources:
  - secrets
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubeadm:kubeadm-certs
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubeadm:kubeadm-certs
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: ${AUTH_EXTRA_GROUPS}
EOF
Working directory
mkdir -p /etc/kubernetes/openssl
Environment variables
export CERTIFICATE_UPLOAD_KEY=0c00c2fd5c67c37656c00d78a9d7e1f2eb794ef8e4fc3e2a4b532eb14323cd59
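The key above is a fixed example used throughout this walkthrough. If you are reproducing the steps yourself, a fresh 32-byte key can be generated with openssl; this is the same 64-hex-character format that kubeadm prints for its --certificate-key:

```shell
# Generate a random 32-byte AES key, hex-encoded (64 characters)
export CERTIFICATE_UPLOAD_KEY=$(openssl rand -hex 32)
echo "${CERTIFICATE_UPLOAD_KEY}"
```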
cat <<EOF > /etc/kubernetes/openssl/encrypt.py
#!/usr/bin/env python3
import sys, base64, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = bytes.fromhex(sys.argv[1])
path = sys.argv[2]

with open(path, "rb") as f:
    data = f.read()

nonce = os.urandom(12)
aesgcm = AESGCM(key)
ct = aesgcm.encrypt(nonce, data, None)

# kubeadm expects: nonce + ciphertext (including auth tag)
payload = nonce + ct
print(base64.b64encode(payload).decode())
EOF
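Since encrypt.py prepends the 12-byte nonce to the AES-GCM ciphertext, the receiving side only needs to split the payload back apart. The decrypt_payload helper below is not part of the guide's scripts; it is a minimal sketch of the inverse operation, roughly what happens when a joining node downloads and decrypts the kubeadm-certs data:

```python
#!/usr/bin/env python3
# Illustrative inverse of encrypt.py (not part of the guide's scripts):
# split off the 12-byte nonce, then decrypt the remainder, whose tail
# carries the GCM authentication tag.
import base64
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def decrypt_payload(key_hex: str, b64_payload: str) -> bytes:
    key = bytes.fromhex(key_hex)
    payload = base64.b64decode(b64_payload)
    nonce, ct = payload[:12], payload[12:]  # layout produced by encrypt.py
    return AESGCM(key).decrypt(nonce, ct, None)

if __name__ == "__main__":
    # Round-trip self-check mirroring encrypt.py's output format
    import os
    key_hex = os.urandom(32).hex()
    nonce = os.urandom(12)
    data = b"-----BEGIN CERTIFICATE-----..."
    ct = AESGCM(bytes.fromhex(key_hex)).encrypt(nonce, data, None)
    b64 = base64.b64encode(nonce + ct).decode()
    assert decrypt_payload(key_hex, b64) == data
```

A wrong key or a truncated payload raises an authentication error from AESGCM, which is exactly why the tag-carrying ciphertext must be transferred intact.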
cat <<'EOF' > /etc/kubernetes/openssl/upload-certs.sh
#!/bin/bash
set -euo pipefail

CERT_PATH="/etc/kubernetes/pki"
PY_SCRIPT="$(dirname "$0")/encrypt.py"

declare -A files=(
  ["ca.crt"]="$CERT_PATH/ca.crt"
  ["ca.key"]="$CERT_PATH/ca.key"
  ["etcd-ca.crt"]="$CERT_PATH/etcd/ca.crt"
  ["etcd-ca.key"]="$CERT_PATH/etcd/ca.key"
  ["front-proxy-ca.crt"]="$CERT_PATH/front-proxy-ca.crt"
  ["front-proxy-ca.key"]="$CERT_PATH/front-proxy-ca.key"
  ["sa.key"]="$CERT_PATH/sa.key"
  ["sa.pub"]="$CERT_PATH/sa.pub"
)

KEY="${CERTIFICATE_UPLOAD_KEY:-}"
if [[ -z "$KEY" ]]; then
  echo "[ERROR] CERTIFICATE_UPLOAD_KEY is not set"
  exit 1
fi

echo "[INFO] Using certificate key: $KEY"

TMP_DIR=$(mktemp -d)
SECRET_FILE="$TMP_DIR/secret.yaml"

cat <<EOF_SECRET > "$SECRET_FILE"
apiVersion: v1
kind: Secret
metadata:
  name: kubeadm-certs
  namespace: kube-system
type: Opaque
data:
EOF_SECRET

for name in "${!files[@]}"; do
  path="${files[$name]}"
  if [[ ! -f "$path" ]]; then
    echo "[WARN] Skipping missing file: $path"
    continue
  fi
  echo "[INFO] Encrypting $name..."
  b64=$(python3 "$PY_SCRIPT" "$KEY" "$path")
  echo " $name: $b64" >> "$SECRET_FILE"
done

echo "[INFO] Applying secret to cluster..."
kubectl apply -f "$SECRET_FILE"
echo "[INFO] Secret successfully uploaded."
EOF
Setting permissions
chmod +x /etc/kubernetes/openssl/upload-certs.sh
Running the script
/etc/kubernetes/openssl/upload-certs.sh
[INFO] Using certificate key: 0c00c2fd5c67c37656c00d78a9d7e1f2eb794ef8e4fc3e2a4b532eb14323cd59
[INFO] Encrypting front-proxy-ca.key...
[INFO] Encrypting sa.key...
[INFO] Encrypting front-proxy-ca.crt...
[INFO] Encrypting etcd-ca.crt...
[INFO] Encrypting sa.pub...
[INFO] Encrypting ca.key...
[INFO] Encrypting ca.crt...
[INFO] Encrypting etcd-ca.key...
[INFO] Applying secret to cluster...
secret/kubeadm-certs configured
[INFO] Secret successfully uploaded.
Uploading certificates
kubeadm init phase upload-certs \
  --config=/var/run/kubeadm/kubeadm.yaml \
  --kubeconfig=/etc/kubernetes/super-admin.conf \
  --upload-certs
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
0c00c2fd5c67c37656c00d78a9d7e1f2eb794ef8e4fc3e2a4b532eb14323cd59
27. Labeling and Tainting Nodes
This section covers marking and restricting control plane nodes. It describes how to assign the control-plane role to a node and apply a taint that prevents workload pods from being scheduled on master nodes. These actions ensure isolation of control plane components and keep the cluster consistent with its architecture model.
- Init
- Join
Node marking and restriction
● Required
This section describes the cluster configuration that allows you to set the container scheduling policy in advance and ensure isolation of the control plane from unplanned workloads.
- master-1
export HOST_NAME=master-1
Environment variables
export CLUSTER_NAME=my-first-cluster
export BASE_DOMAIN=example.com
export FULL_HOST_NAME=${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}
- HardWay
- Kubeadm
Node labeling
kubectl label node ${FULL_HOST_NAME} node-role.kubernetes.io/control-plane="" \
  --kubeconfig=/etc/kubernetes/super-admin.conf
node/master-1.my-first-cluster.example.com labeled
Node tainting
kubectl taint node ${FULL_HOST_NAME} node-role.kubernetes.io/control-plane="":NoSchedule \
  --overwrite \
  --kubeconfig=/etc/kubernetes/super-admin.conf
node/master-1.my-first-cluster.example.com modified
kubeadm init phase mark-control-plane \
  --config=/var/run/kubeadm/kubeadm.yaml
[mark-control-plane] Marking the node master-1.my-first-cluster.example.com as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master-1.my-first-cluster.example.com as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
Node marking and restriction
● Required
This section describes the cluster configuration that allows you to set the container scheduling policy in advance and protect the control plane from unplanned workloads.
- master-2
- master-3
export HOST_NAME=master-2
export HOST_NAME=master-3
Environment variables
export CLUSTER_NAME=my-first-cluster
export BASE_DOMAIN=example.com
export FULL_HOST_NAME=${HOST_NAME}.${CLUSTER_NAME}.${BASE_DOMAIN}
- HardWay
- Kubeadm
Node labeling
kubectl label node ${FULL_HOST_NAME} node-role.kubernetes.io/control-plane="" \
  --kubeconfig=/etc/kubernetes/super-admin.conf
node/master-<n>.my-first-cluster.example.com labeled
Node tainting
kubectl taint node ${FULL_HOST_NAME} node-role.kubernetes.io/control-plane="":NoSchedule \
  --overwrite \
  --kubeconfig=/etc/kubernetes/super-admin.conf
node/master-<n>.my-first-cluster.example.com modified
kubeadm join phase control-plane-join mark-control-plane \
  --config=/var/run/kubeadm/kubeadm.yaml
[mark-control-plane] Marking the node master-<n>.my-first-cluster.example.com as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master-<n>.my-first-cluster.example.com as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
🍀 Conclusion
The Kubernetes The Hard Way journey for me has been a path spanning nearly two years. It opened up a wealth of new knowledge, opportunities... And, of course, challenges 🙂
This is far from my first article on this topic — if you're interested, check out my previous drafts on Habr:
- K8S Certificates or How to Untangle the Spaghetti. Part 1
- K8S Certificates or How to Untangle the Spaghetti. Part 2
- Kubernetes the Hard Way
- Kubernetes the Hard Way — Evolution. Part 1
- Managed Kubernetes the Hard Way
- Three Levels of Kubernetes in Kubernetes
To sum up: this article took about four months to write.
Every script was hand-polished (with the help of ChatGPT) and tested in real-world conditions.
No kidding: over that time I spun up more than 400 clusters.
Thanks to those who understood the idea, and special thanks to those who read all the way to the end 🙌 I'd love to hear your feedback and will definitely continue sharing my experience — in the same spirit, but in a new format.
🐾 During all four months, no animals were harmed... except for the Good Cat 😼 It was an amazing experience that I wouldn't recommend unless you have a slight inclination toward masochism 😅 And if you do — welcome!