Binary Deployment of a Highly Available Kubernetes 1.32.4 Cluster and Its Add-on Components

I. Preface

In an era when cloud-native technology has swept the globe, Kubernetes (K8s) has become the de facto standard for container orchestration. While most of us have grown used to the convenience of one-click deployments with automation tools such as kubeadm and kubeasz, building a K8s cluster by hand from binaries is more of a deep apprenticeship in knowing not only that it works, but why it works. This approach takes you beneath the abstraction layers, face to face with etcd's distributed storage mechanics, kube-apiserver's RESTful interface design, the interaction details between kubelet and the CRI, and the mutual TLS authentication between every core component.

This article reconstructs K8s deployment logic at its rawest, building a production-grade highly available cluster from scratch. Unlike wrapper tools that hide the underlying implementation, a binary deployment requires us to hand-write the systemd service unit for every component, precisely orchestrate the issuance of the CA certificate chain, and even understand how kube-proxy generates iptables/ipvs rules in its different modes. This seemingly tedious process is exactly the best path to understanding how the Kubernetes scheduler coordinates cluster state through the watch mechanism and how controllers keep the system stable through reconcile loops.

Through this tutorial series you will not only end up with a horizontally scalable cluster architecture (including multi-master load balancing and an etcd cluster deployment plan), but, more importantly, build a well-rounded mental model of the K8s ecosystem. When you face common production failures such as expired certificates or broken component communication, the troubleshooting experience accumulated through a binary deployment becomes a precious asset for operating distributed systems. Let's begin this journey from source to service and uncover the engineering beauty behind Kubernetes.

On the morning of April 23, 2025 (Beijing time), Kubernetes released its latest version, v1.32.4. I immediately deployed a complete cluster from binaries and wrote up these detailed notes.

II. Architecture Overview

III. Base Environment

# Since my lab resources are limited, I run the Etcd cluster, the master components, the worker components, and the load-balancing/high-availability layer together on the same three nodes. With more resources you can split them apart; none of these services have port conflicts, which is why co-locating them works.

Operating System    Hostname           IP Address      Specs
Ubuntu 22.04 LTS    node-exporter41    10.0.0.41/24    4 Core / 8 GB / 100 GB
Ubuntu 22.04 LTS    node-exporter42    10.0.0.42/24    4 Core / 8 GB / 100 GB
Ubuntu 22.04 LTS    node-exporter43    10.0.0.43/24    4 Core / 8 GB / 100 GB

IV. Let's Deploy!!!

1. Deploy the Etcd high-availability cluster

Prepare the Etcd packages
  1. Download the Etcd release

    wget https://github.com/etcd-io/etcd/releases/download/v3.5.21/etcd-v3.5.21-linux-amd64.tar.gz
  2. Extract the Etcd binaries into a directory on the PATH

    tar -xf etcd-v3.5.21-linux-amd64.tar.gz -C /usr/local/bin etcd-v3.5.21-linux-amd64/etcd{,ctl} --strip-components=1

    Verify the etcdctl version

    etcdctl version
  3. Distribute the binaries to all nodes: use scp to copy the etcd and etcdctl files from /usr/local/bin to the other nodes; a sketch follows.
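    A minimal sketch (assuming root SSH access to the peers; the passwordless login configured later in this article also works):

    # Copy the etcd binaries to the remaining nodes.
    for host in node-exporter42 node-exporter43; do
      scp /usr/local/bin/etcd{,ctl} root@${host}:/usr/local/bin/
    done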

Prepare the Etcd certificates
  1. Install the cfssl certificate tooling

    wget http://192.168.16.253/Resources/Prometheus/softwares/Etcd/oldboyedu-cfssl-v1.6.5.zip
    unzip oldboyedu-cfssl-v1.6.5.zip
    apt -y install rename
    rename -v "s/_1.6.5_linux_amd64//g" cfssl*
    mv cfssl* /usr/local/bin/
    chmod +x /usr/local/bin/cfssl*
  2. Create the certificate directory

    mkdir -pv /caofacan/certs/etcd && cd /caofacan/certs/etcd/
  3. Create the CSR (certificate signing request) file

    cat > etcd-ca-csr.json <<EOF
    {
      "CN": "etcd",
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "Beijing",
          "L": "Beijing",
          "O": "etcd",
          "OU": "Etcd Security"
        }
      ],
      "ca": {
        "expiry": "876000h"
      }
    }
    EOF
  4. Generate the Etcd CA certificate and key

    cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /caofacan/certs/etcd/etcd-ca
  5. Create the certificate validity configuration file

    cat > ca-config.json <<EOF
    {"signing": {"default": {"expiry": "876000h"},"profiles": {"kubernetes": {"usages": ["signing","key encipherment","server auth","client auth"],"expiry": "876000h"}}}
    }
    EOF
  6. Create the apiserver certificate CSR file

    cat > apiserver-csr.json <<EOF
    {"CN": "kube-apiserver","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "Beijing","L": "Beijing","O": "Kubernetes","OU": "Kubernetes-manual"}]
    }
    EOF
  7. Sign the etcd server certificate with the self-built CA

    cfssl gencert \
      -ca=/caofacan/certs/etcd/etcd-ca.pem \
      -ca-key=/caofacan/certs/etcd/etcd-ca-key.pem \
      -config=ca-config.json \
      --hostname=10.200.0.1,10.0.0.240,kubernetes,kubernetes.default,kubernetes.default.svc,caofacan.com,kubernetes.default.svc.caofacan.com,10.0.0.41,10.0.0.42,10.0.0.43 \
      --profile=kubernetes \
      apiserver-csr.json | cfssljson -bare /caofacan/certs/etcd/etcd-server
Create the per-node Etcd configuration
  1. Configuration file on node-exporter41

    mkdir -pv /caofacan/softwares/etcd
    cat > /caofacan/softwares/etcd/etcd.config.yml << 'EOF'
    name: 'node-exporter41'
    data-dir: /var/lib/etcd
    listen-peer-urls: 'https://10.0.0.41:2380'
    listen-client-urls: 'https://10.0.0.41:2379,http://127.0.0.1:2379'
    initial-cluster: 'node-exporter41=https://10.0.0.41:2380,node-exporter42=https://10.0.0.42:2380,node-exporter43=https://10.0.0.43:2380'
    client-transport-security:
      cert-file: /caofacan/certs/etcd/etcd-server.pem
      key-file: /caofacan/certs/etcd/etcd-server-key.pem
      ca-file: /caofacan/certs/etcd/etcd-ca.pem
      client-cert-auth: true
    peer-transport-security:
      cert-file: /caofacan/certs/etcd/etcd-server.pem
      key-file: /caofacan/certs/etcd/etcd-server-key.pem
      ca-file: /caofacan/certs/etcd/etcd-ca.pem
      peer-client-cert-auth: true
    EOF
  2. Configuration files on node-exporter42 and node-exporter43: same as node-exporter41, with the node name and IPs changed; a sketch follows.
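For reference, a sketch of the node-exporter42 file (node-exporter43 is analogous with 10.0.0.43; the initial-cluster line is identical on every member):

    mkdir -pv /caofacan/softwares/etcd
    cat > /caofacan/softwares/etcd/etcd.config.yml << 'EOF'
    name: 'node-exporter42'
    data-dir: /var/lib/etcd
    listen-peer-urls: 'https://10.0.0.42:2380'
    listen-client-urls: 'https://10.0.0.42:2379,http://127.0.0.1:2379'
    initial-cluster: 'node-exporter41=https://10.0.0.41:2380,node-exporter42=https://10.0.0.42:2380,node-exporter43=https://10.0.0.43:2380'
    client-transport-security:
      cert-file: /caofacan/certs/etcd/etcd-server.pem
      key-file: /caofacan/certs/etcd/etcd-server-key.pem
      ca-file: /caofacan/certs/etcd/etcd-ca.pem
      client-cert-auth: true
    peer-transport-security:
      cert-file: /caofacan/certs/etcd/etcd-server.pem
      key-file: /caofacan/certs/etcd/etcd-server-key.pem
      ca-file: /caofacan/certs/etcd/etcd-ca.pem
      peer-client-cert-auth: true
    EOF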

Configure the Etcd startup service

Create the /usr/lib/systemd/system/etcd.service file on every node:

[Unit]
Description=Jason Yin's Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
ExecStart=/usr/local/bin/etcd --config-file=/caofacan/softwares/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
Start the Etcd cluster
systemctl daemon-reload && systemctl enable --now etcd
systemctl status etcd
Check the Etcd cluster status
etcdctl --endpoints="10.0.0.41:2379,10.0.0.42:2379,10.0.0.43:2379" --cacert=/caofacan/certs/etcd/etcd-ca.pem --cert=/caofacan/certs/etcd/etcd-server.pem --key=/caofacan/certs/etcd/etcd-server-key.pem endpoint status --write-out=table

Example output:

+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|    ENDPOINT    |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 10.0.0.41:2379 | 9378902f41df91e9 |  3.5.21 |  4.9 MB |      true |      false |        30 |      58023 |              58023 |        |
| 10.0.0.42:2379 | 18f972748ec1bd96 |  3.5.21 |  5.0 MB |     false |      false |        30 |      58023 |              58023 |        |
| 10.0.0.43:2379 | a3dfd2d37c461ee9 |  3.5.21 |  4.9 MB |     false |      false |        30 |      58023 |              58023 |        |
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
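Optionally, `endpoint health` gives a quick pass/fail view of each member:

etcdctl --endpoints="10.0.0.41:2379,10.0.0.42:2379,10.0.0.43:2379" --cacert=/caofacan/certs/etcd/etcd-ca.pem --cert=/caofacan/certs/etcd/etcd-server.pem --key=/caofacan/certs/etcd/etcd-server-key.pem endpoint health --write-out=table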

2. Prepare the environment on all K8S hosts

Download and extract the K8S packages
wget https://dl.k8s.io/v1.32.4/kubernetes-server-linux-amd64.tar.gz
tar xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
Check the kubelet version
kubelet --version
Distribute the binaries to the other nodes
for i in `ls -1 /usr/local/bin/kube*`; do data_rsync.sh $i; done
Install common utility packages on all nodes
apt -y install bind9-utils expect rsync jq psmisc net-tools lvm2 vim unzip rename tree
Passwordless SSH from node-exporter41 and data sync
  1. Configure passwordless login to the other nodes

    cat > password_free_login.sh <<'EOF'
    #!/bin/bash
    # author: Jason Yin
    # Create the key pair
    ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa -q
    # Declare the server password; ideally all nodes share the same password,
    # otherwise this script needs further tweaking
    export mypasswd=1
    # Define the host list
    k8s_host_list=(node-exporter41 node-exporter42 node-exporter43)
    # Configure passwordless login, answering the prompts with expect
    for i in ${k8s_host_list[@]};do
    expect -c "
    spawn ssh-copy-id -i /root/.ssh/id_rsa.pub root@$i
    expect {
    \"*yes/no*\" {send \"yes\r\"; exp_continue}
    \"*password*\" {send \"$mypasswd\r\"; exp_continue}
    }"
    done
    EOF
    bash password_free_login.sh
  2. Write the sync script

    cat > /usr/local/sbin/data_rsync.sh <<'EOF'
    #!/bin/bash
    if [ $# -lt 1 ];then
      echo "Usage: $0 /path/to/file(absolute path) [mode: m|w]"
      exit
    fi
    if [ ! -e $1 ];then
      echo "[ $1 ] dir or file not find!"
      exit
    fi
    fullpath=`dirname $1`
    basename=`basename $1`
    cd $fullpath
    case $2 in
    WORKER_NODE|w)
      K8S_NODE=(node-exporter42 node-exporter43)
      ;;
    MASTER_NODE|m)
      K8S_NODE=(node-exporter42 node-exporter43)
      ;;
    *)
      K8S_NODE=(node-exporter42 node-exporter43)
      ;;
    esac
    for host in ${K8S_NODE[@]};do
      tput setaf 2
      echo ===== rsyncing ${host}: $basename =====
      tput setaf 7
      rsync -az $basename `whoami`@${host}:$fullpath
      if [ $? -eq 0 ];then
        echo "Command executed successfully!"
      fi
    done
    EOF
    chmod +x /usr/local/sbin/data_rsync.sh
    data_rsync.sh /etc/hosts
Linux base-environment tuning on all nodes
systemctl disable --now NetworkManager ufw
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
free -h
ln -svf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
cat >> /etc/security/limits.conf <<'EOF'
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
sed -i 's@#UseDNS yes@UseDNS no@g' /etc/ssh/sshd_config
sed -i 's@^GSSAPIAuthentication yes@GSSAPIAuthentication no@g' /etc/ssh/sshd_config
cat > /etc/sysctl.d/k8s.conf <<'EOF'
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv6.conf.all.disable_ipv6 = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system
cat <<EOF >> ~/.bashrc
PS1='[\[\e[34;1m\]\u@\[\e[0m\]\[\e[32;1m\]\H\[\e[0m\]\[\e[31;1m\] \W\[\e[0m\]]# '
EOF
source ~/.bashrc
Install ipvsadm on all nodes so kube-proxy can do IPVS load balancing
apt -y install ipvsadm ipset sysstat conntrack
cat > /etc/modules-load.d/ipvs.conf << 'EOF'
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
br_netfilter
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
Verify the modules are loaded (after a reboot)
reboot
lsmod | grep --color=auto -e ip_vs -e nf_conntrack -e br_netfilter
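If you would rather not reboot, a sketch that loads the same list immediately (systemd-modules-load still picks the file up on the next boot; failures for modules that are built into the kernel or absent can be ignored):

# Load every module named in the ipvs.conf list right now.
for mod in $(grep -v '^#' /etc/modules-load.d/ipvs.conf); do modprobe $mod 2>/dev/null || true; done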
Deploy the Containerd runtime
# I wrote a script that installs containerd automatically
tar xf oldboyedu-autoinstall-containerd-v1.6.36.tar.gz
./install-containerd.sh i

Check the Containerd version:

ctr version

3. Generate certificates for the K8S components

Create the CA CSR file
mkdir -pv /caofacan/certs/pki && cd /caofacan/certs/pki
cat > k8s-ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "Kubernetes-manual"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
EOF
Generate the K8S CA certificate
mkdir -pv /caofacan/certs/kubernetes/
cfssl gencert -initca k8s-ca-csr.json | cfssljson -bare /caofacan/certs/kubernetes/k8s-ca
Create the certificate validity config (100 years)
cat > k8s-ca-config.json <<EOF
{"signing": {"default": {"expiry": "876000h"},"profiles": {"kubernetes": {"usages": ["signing","key encipherment","server auth","client auth"],"expiry": "876000h"}}}
}
EOF
Create the apiserver certificate CSR file
cat > apiserver-csr.json <<EOF
{"CN": "kube-apiserver","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "Beijing","L": "Beijing","O": "Kubernetes","OU": "Kubernetes-manual"}]
}
EOF
Sign the apiserver certificate with the self-built CA
cfssl gencert \
-ca=/caofacan/certs/kubernetes/k8s-ca.pem \
-ca-key=/caofacan/certs/kubernetes/k8s-ca-key.pem \
-config=k8s-ca-config.json \
--hostname=10.200.0.1,10.0.0.240,kubernetes,kubernetes.default,kubernetes.default.svc,caofacan.com,kubernetes.default.svc.caofacan.com,10.0.0.41,10.0.0.42,10.0.0.43 \
--profile=kubernetes \
apiserver-csr.json | cfssljson -bare /caofacan/certs/kubernetes/apiserver
Create the CSR file for the aggregation-layer (front-proxy) CA
cat > front-proxy-ca-csr.json <<EOF
{"CN": "kubernetes","key": {"algo": "rsa","size": 2048}
}
EOF
Generate the front-proxy CA certificate
cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /caofacan/certs/kubernetes/front-proxy-ca
Create the CSR file for the front-proxy client certificate
cat > front-proxy-client-csr.json <<EOF
{"CN": "front-proxy-client","key": {"algo": "rsa","size": 2048}
}
EOF
Sign the front-proxy client certificate with the front-proxy CA
cfssl gencert \
-ca=/caofacan/certs/kubernetes/front-proxy-ca.pem \
-ca-key=/caofacan/certs/kubernetes/front-proxy-ca-key.pem \
-config=k8s-ca-config.json \
-profile=kubernetes \
front-proxy-client-csr.json | cfssljson -bare /caofacan/certs/kubernetes/front-proxy-client

4. Generate the controller-manager certificate and kubeconfig

  1. Create the controller-manager CSR file

    cat > controller-manager-csr.json <<EOF
    {
      "CN": "system:kube-controller-manager",
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "Beijing",
          "L": "Beijing",
          "O": "system:kube-controller-manager",
          "OU": "Kubernetes-manual"
        }
      ]
    }
    EOF
  2. Generate the controller-manager certificate

    cfssl gencert \
      -ca=/caofacan/certs/kubernetes/k8s-ca.pem \
      -ca-key=/caofacan/certs/kubernetes/k8s-ca-key.pem \
      -config=k8s-ca-config.json \
      -profile=kubernetes \
      controller-manager-csr.json | cfssljson -bare /caofacan/certs/kubernetes/controller-manager
  3. Create the kubeconfig directory

    mkdir -pv /caofacan/certs/kubeconfig
  4. Set the cluster entry

    kubectl config set-cluster yinzhengjie-k8s \
      --certificate-authority=/caofacan/certs/kubernetes/k8s-ca.pem \
      --embed-certs=true \
      --server=https://10.0.0.240:8443 \
      --kubeconfig=/caofacan/certs/kubeconfig/kube-controller-manager.kubeconfig
  5. Set the user entry

    kubectl config set-credentials system:kube-controller-manager \
      --client-certificate=/caofacan/certs/kubernetes/controller-manager.pem \
      --client-key=/caofacan/certs/kubernetes/controller-manager-key.pem \
      --embed-certs=true \
      --kubeconfig=/caofacan/certs/kubeconfig/kube-controller-manager.kubeconfig
  6. Set the context

    kubectl config set-context system:kube-controller-manager@kubernetes \
      --cluster=yinzhengjie-k8s \
      --user=system:kube-controller-manager \
      --kubeconfig=/caofacan/certs/kubeconfig/kube-controller-manager.kubeconfig
  7. Use the context as the default

    kubectl config use-context system:kube-controller-manager@kubernetes \
      --kubeconfig=/caofacan/certs/kubeconfig/kube-controller-manager.kubeconfig

5. Generate the scheduler certificate and kubeconfig

  1. Create the scheduler CSR file

    cat > scheduler-csr.json <<EOF
    {
      "CN": "system:kube-scheduler",
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "Beijing",
          "L": "Beijing",
          "O": "system:kube-scheduler",
          "OU": "Kubernetes-manual"
        }
      ]
    }
    EOF
  2. Generate the scheduler certificate

    cfssl gencert \
      -ca=/caofacan/certs/kubernetes/k8s-ca.pem \
      -ca-key=/caofacan/certs/kubernetes/k8s-ca-key.pem \
      -config=k8s-ca-config.json \
      -profile=kubernetes \
      scheduler-csr.json | cfssljson -bare /caofacan/certs/kubernetes/scheduler
  3. Set the cluster entry

    kubectl config set-cluster yinzhengjie-k8s \
      --certificate-authority=/caofacan/certs/kubernetes/k8s-ca.pem \
      --embed-certs=true \
      --server=https://10.0.0.240:8443 \
      --kubeconfig=/caofacan/certs/kubeconfig/kube-scheduler.kubeconfig
  4. Set the user entry

    kubectl config set-credentials system:kube-scheduler \
      --client-certificate=/caofacan/certs/kubernetes/scheduler.pem \
      --client-key=/caofacan/certs/kubernetes/scheduler-key.pem \
      --embed-certs=true \
      --kubeconfig=/caofacan/certs/kubeconfig/kube-scheduler.kubeconfig
  5. Set the context

    kubectl config set-context system:kube-scheduler@kubernetes \
      --cluster=yinzhengjie-k8s \
      --user=system:kube-scheduler \
      --kubeconfig=/caofacan/certs/kubeconfig/kube-scheduler.kubeconfig
  6. Use the context as the default

    kubectl config use-context system:kube-scheduler@kubernetes \
      --kubeconfig=/caofacan/certs/kubeconfig/kube-scheduler.kubeconfig

6. Generate the cluster administrator certificate and kubeconfig

  1. Create the admin CSR file

    cat > admin-csr.json <<EOF
    {
      "CN": "admin",
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "Beijing",
          "L": "Beijing",
          "O": "system:masters",
          "OU": "Kubernetes-manual"
        }
      ]
    }
    EOF
  2. Generate the cluster administrator certificate

    cfssl gencert \
      -ca=/caofacan/certs/kubernetes/k8s-ca.pem \
      -ca-key=/caofacan/certs/kubernetes/k8s-ca-key.pem \
      -config=k8s-ca-config.json \
      -profile=kubernetes \
      admin-csr.json | cfssljson -bare /caofacan/certs/kubernetes/admin
  3. Set the cluster entry

    kubectl config set-cluster yinzhengjie-k8s \
      --certificate-authority=/caofacan/certs/kubernetes/k8s-ca.pem \
      --embed-certs=true \
      --server=https://10.0.0.240:8443 \
      --kubeconfig=/caofacan/certs/kubeconfig/kube-admin.kubeconfig
  4. Set the user entry

    kubectl config set-credentials kube-admin \
      --client-certificate=/caofacan/certs/kubernetes/admin.pem \
      --client-key=/caofacan/certs/kubernetes/admin-key.pem \
      --embed-certs=true \
      --kubeconfig=/caofacan/certs/kubeconfig/kube-admin.kubeconfig
  5. Set the context

    kubectl config set-context kube-admin@kubernetes \
      --cluster=yinzhengjie-k8s \
      --user=kube-admin \
      --kubeconfig=/caofacan/certs/kubeconfig/kube-admin.kubeconfig
  6. Use the context as the default

    kubectl config use-context kube-admin@kubernetes \
      --kubeconfig=/caofacan/certs/kubeconfig/kube-admin.kubeconfig

7. Create the ServiceAccount key pair

openssl genrsa -out /caofacan/certs/kubernetes/sa.key 2048
openssl rsa -in /caofacan/certs/kubernetes/sa.key -pubout -out /caofacan/certs/kubernetes/sa.pub
  • Copy the K8S component certificates to the other two master nodes
data_rsync.sh /caofacan/certs/kubeconfig/
data_rsync.sh /caofacan/certs/kubernetes/

8. Install and verify the HA components: haproxy + keepalived

Install the HA components on all master nodes
apt-get -y install keepalived haproxy
Configure haproxy
  1. Back up the config file

    cp /etc/haproxy/haproxy.cfg{,`date +%F`}
  2. The config file content is identical on all nodes

    cat > /etc/haproxy/haproxy.cfg <<'EOF'
    global
      maxconn 2000
      ulimit-n 16384
      log 127.0.0.1 local0 err
      stats timeout 30s

    defaults
      log global
      mode http
      option httplog
      timeout connect 5000
      timeout client 50000
      timeout server 50000
      timeout http-request 15s
      timeout http-keep-alive 15s

    frontend monitor-haproxy
      bind *:9999
      mode http
      option httplog
      monitor-uri /ruok

    frontend yinzhengjie-k8s
      bind 0.0.0.0:8443
      bind 127.0.0.1:8443
      mode tcp
      option tcplog
      tcp-request inspect-delay 5s
      default_backend yinzhengjie-k8s

    backend yinzhengjie-k8s
      mode tcp
      option tcplog
      option tcp-check
      balance roundrobin
      default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
      server node-exporter41   10.0.0.41:6443  check
      server node-exporter42   10.0.0.42:6443  check
      server node-exporter43   10.0.0.43:6443  check
    EOF
Configure keepalived
  1. Create the config file on node-exporter41

    cat > /etc/keepalived/keepalived.conf <<'EOF'
    ! Configuration File for keepalived
    global_defs {
        router_id 10.0.0.41
    }
    vrrp_script chk_nginx {
        script "/etc/keepalived/check_port.sh 8443"
        interval 2
        weight -20
    }
    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 251
        priority 100
        advert_int 1
        mcast_src_ip 10.0.0.41
        nopreempt
        authentication {
            auth_type PASS
            auth_pass yinzhengjie_k8s
        }
        track_script {
            chk_nginx
        }
        virtual_ipaddress {
            10.0.0.240
        }
    }
    EOF
  2. Create the config files on node-exporter42 and node-exporter43: same as node-exporter41, with the corresponding IPs changed; see the sketch after this list.

  3. Create the health-check script on all keepalived nodes

    cat > /etc/keepalived/check_port.sh <<'EOF'
    #!/bin/bash
    CHK_PORT=$1
    if [ -n "$CHK_PORT" ];then
      PORT_PROCESS=`ss -lt|grep $CHK_PORT|wc -l`
      if [ $PORT_PROCESS -eq 0 ];then
        echo "Port $CHK_PORT Is Not Used,End."
        systemctl stop keepalived
      fi
    else
      echo "Check Port Cant Be Empty!"
    fi
    EOF
    chmod +x /etc/keepalived/check_port.sh
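A sketch of node-exporter42's file (assuming the usual pattern of state BACKUP and a lower priority on the standby nodes; node-exporter43 is analogous with 10.0.0.43):

    cat > /etc/keepalived/keepalived.conf <<'EOF'
    ! Configuration File for keepalived
    global_defs {
        router_id 10.0.0.42
    }
    vrrp_script chk_nginx {
        script "/etc/keepalived/check_port.sh 8443"
        interval 2
        weight -20
    }
    vrrp_instance VI_1 {
        state BACKUP
        interface eth0
        virtual_router_id 251
        priority 90
        advert_int 1
        mcast_src_ip 10.0.0.42
        nopreempt
        authentication {
            auth_type PASS
            auth_pass yinzhengjie_k8s
        }
        track_script {
            chk_nginx
        }
        virtual_ipaddress {
            10.0.0.240
        }
    }
    EOF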
Verify the haproxy service
  1. Start haproxy on all nodes

    systemctl enable --now haproxy
    systemctl restart haproxy
    systemctl status haproxy
    ss -ntl | egrep "8443|9999"
  2. Verify via the monitor URI

    curl http://10.0.0.41:9999/ruok
    curl http://10.0.0.42:9999/ruok
    curl http://10.0.0.43:9999/ruok
Start and verify the keepalived service
  1. Start keepalived on all nodes

    systemctl daemon-reload
    systemctl enable --now keepalived
    systemctl status keepalived
  2. Verify the VIP is present

    ip a
  3. Verify haproxy through the VIP with telnet

    telnet 10.0.0.240 8443
    ping 10.0.0.240 -c 3

9. Deploy the ApiServer component

Start the ApiServer on node-exporter41
  1. Create the unit file on node-exporter41

    cat > /usr/lib/systemd/system/kube-apiserver.service << 'EOF'
    [Unit]
    Description=Jason Yin's Kubernetes API Server
    Documentation=https://github.com/kubernetes/kubernetes
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/kube-apiserver \
      --v=2 \
      --bind-address=0.0.0.0 \
      --secure-port=6443 \
      --allow_privileged=true \
      --advertise-address=10.0.0.41 \
      --service-cluster-ip-range=10.200.0.0/16 \
      --service-node-port-range=3000-50000 \
      --etcd-servers=https://10.0.0.41:2379,https://10.0.0.42:2379,https://10.0.0.43:2379 \
      --etcd-cafile=/caofacan/certs/etcd/etcd-ca.pem \
      --etcd-certfile=/caofacan/certs/etcd/etcd-server.pem \
      --etcd-keyfile=/caofacan/certs/etcd/etcd-server-key.pem \
      --client-ca-file=/caofacan/certs/kubernetes/k8s-ca.pem \
      --tls-cert-file=/caofacan/certs/kubernetes/apiserver.pem \
      --tls-private-key-file=/caofacan/certs/kubernetes/apiserver-key.pem \
      --kubelet-client-certificate=/caofacan/certs/kubernetes/apiserver.pem \
      --kubelet-client-key=/caofacan/certs/kubernetes/apiserver-key.pem \
      --service-account-key-file=/caofacan/certs/kubernetes/sa.pub \
      --service-account-signing-key-file=/caofacan/certs/kubernetes/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.caofacan.com \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
      --authorization-mode=Node,RBAC \
      --enable-bootstrap-token-auth=true \
      --requestheader-client-ca-file=/caofacan/certs/kubernetes/front-proxy-ca.pem \
      --proxy-client-cert-file=/caofacan/certs/kubernetes/front-proxy-client.pem \
      --proxy-client-key-file=/caofacan/certs/kubernetes/front-proxy-client-key.pem \
      --requestheader-allowed-names=aggregator \
      --requestheader-group-headers=X-Remote-Group \
      --requestheader-extra-headers-prefix=X-Remote-Extra- \
      --requestheader-username-headers=X-Remote-User
    Restart=on-failure
    RestartSec=10s
    LimitNOFILE=65535

    [Install]
    WantedBy=multi-user.target
    EOF
  2. Start the service

    systemctl daemon-reload && systemctl enable --now kube-apiserver
    systemctl status kube-apiserver
    ss -ntl | grep 6443
Start the ApiServer on node-exporter42 and node-exporter43

Same as node-exporter41, with the corresponding IP and node name changed; a sketch follows.
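Since the unit files differ only in the advertised address, a minimal sketch (assuming the node-exporter41 unit file has already been copied over, e.g. with data_rsync.sh):

# On node-exporter42 (use 10.0.0.43 on node-exporter43):
sed -i 's#--advertise-address=10.0.0.41#--advertise-address=10.0.0.42#' /usr/lib/systemd/system/kube-apiserver.service
systemctl daemon-reload && systemctl enable --now kube-apiserver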

10. Deploy the ControllerManager component

Create the unit file on all nodes
cat > /usr/lib/systemd/system/kube-controller-manager.service << 'EOF'
[Unit]
Description=Jason Yin's Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
  --v=2 \
  --root-ca-file=/caofacan/certs/kubernetes/k8s-ca.pem \
  --cluster-signing-cert-file=/caofacan/certs/kubernetes/k8s-ca.pem \
  --cluster-signing-key-file=/caofacan/certs/kubernetes/k8s-ca-key.pem \
  --service-account-private-key-file=/caofacan/certs/kubernetes/sa.key \
  --kubeconfig=/caofacan/certs/kubeconfig/kube-controller-manager.kubeconfig \
  --leader-elect=true \
  --use-service-account-credentials=true \
  --node-monitor-grace-period=40s \
  --node-monitor-period=5s \
  --controllers=*,bootstrapsigner,tokencleaner \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.100.0.0/16 \
  --requestheader-client-ca-file=/caofacan/certs/kubernetes/front-proxy-ca.pem \
  --node-cidr-mask-size=24
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF
Start the controller-manager service
systemctl daemon-reload
systemctl enable --now kube-controller-manager
systemctl status kube-controller-manager
ss -ntl | grep 10257

11. Deploy the Scheduler component

Create the unit file on all nodes
cat > /usr/lib/systemd/system/kube-scheduler.service <<'EOF'
[Unit]
Description=Jason Yin's Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
  --v=2 \
  --leader-elect=true \
  --kubeconfig=/caofacan/certs/kubeconfig/kube-scheduler.kubeconfig
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF
Start the scheduler service
systemctl daemon-reload
systemctl enable --now kube-scheduler
systemctl status kube-scheduler
ss -ntl | grep 10259

12. Configure TLS Bootstrapping for automatic kubelet certificate issuance

Create the bootstrap-kubelet.kubeconfig file
  1. Set the cluster entry

    kubectl config set-cluster yinzhengjie-k8s \
      --certificate-authority=/caofacan/certs/kubernetes/k8s-ca.pem \
      --embed-certs=true \
      --server=https://10.0.0.240:8443 \
      --kubeconfig=/caofacan/certs/kubeconfig/bootstrap-kubelet.kubeconfig
  2. Create the user entry

    kubectl config set-credentials tls-bootstrap-token-user \
      --token=caofacan.jasonyinzhengjie \
      --kubeconfig=/caofacan/certs/kubeconfig/bootstrap-kubelet.kubeconfig
  3. Bind the cluster and the user in a context

    kubectl config set-context tls-bootstrap-token-user@kubernetes \
      --cluster=yinzhengjie-k8s \
      --user=tls-bootstrap-token-user \
      --kubeconfig=/caofacan/certs/kubeconfig/bootstrap-kubelet.kubeconfig
  4. Use the context as the default

    kubectl config use-context tls-bootstrap-token-user@kubernetes \
      --kubeconfig=/caofacan/certs/kubeconfig/bootstrap-kubelet.kubeconfig
  5. Distribute the kubelet bootstrap kubeconfig

    data_rsync.sh /caofacan/certs/kubeconfig/bootstrap-kubelet.kubeconfig

13. Copy the admin kubeconfig to all master nodes

  1. Copy the administrator's kubeconfig on every master node

    mkdir -p /root/.kube
    cp /caofacan/certs/kubeconfig/kube-admin.kubeconfig /root/.kube/config
  2. Check the master components

    kubectl get cs
  3. Inspect the cluster state

    kubectl cluster-info dump

14. Create the bootstrap-secret authorization

  1. Create the bootstrap-secret manifest used for authorization

    cat > bootstrap-secret.yaml <<EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: bootstrap-token-caofacan
      namespace: kube-system
    type: bootstrap.kubernetes.io/token
    stringData:
      description: "The default bootstrap token generated by 'kubelet '."
      token-id: caofacan
      token-secret: jasonyinzhengjie
      usage-bootstrap-authentication: "true"
      usage-bootstrap-signing: "true"
      auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: kubelet-bootstrap
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:node-bootstrapper
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:bootstrappers:default-node-token
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: node-autoapprove-bootstrap
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:bootstrappers:default-node-token
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: node-autoapprove-certificate-rotation
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:nodes
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      annotations:
        rbac.authorization.kubernetes.io/autoupdate: "true"
      labels:
        kubernetes.io/bootstrapping: rbac-defaults
      name: system:kube-apiserver-to-kubelet
    rules:
    - apiGroups:
      - ""
      resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      verbs:
      - "*"
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: system:kube-apiserver
      namespace: ""
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:kube-apiserver-to-kubelet
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: User
      name: kube-apiserver
    EOF
  2. Apply the bootstrap-secret manifest

    kubectl apply -f bootstrap-secret.yaml
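To confirm the token landed where the apiserver's --enable-bootstrap-token-auth will find it:

    kubectl -n kube-system get secret bootstrap-token-caofacan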

15. Deploy the worker components: kubelet

Create the working directories on all nodes
mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/
Create the kubelet config file on all nodes
cat > /etc/kubernetes/kubelet-conf.yml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /caofacan/certs/kubernetes/k8s-ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.200.0.254
clusterDomain: caofacan.com
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/kubernetes/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF
Configure the kubelet systemd service on all nodes
cat >  /usr/lib/systemd/system/kubelet.service <<'EOF'
[Unit]
Description=JasonYin's Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF
Configure the kubelet service drop-in on all nodes
cat > /etc/systemd/system/kubelet.service.d/10-kubelet.conf <<'EOF'
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/caofacan/certs/kubeconfig/bootstrap-kubelet.kubeconfig --kubeconfig=/caofacan/certs/kubeconfig/kubelet.kubeconfig"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml"
Environment="KUBELET_SYSTEM_ARGS=--container-runtime-endpoint=unix:///run/containerd/containerd.sock"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
EOF
Start kubelet on all nodes
systemctl daemon-reload
systemctl enable --now kubelet
systemctl status kubelet
Check the node information from any master node
kubectl get nodes -o wide
Check the client certificate signing requests (CSRs)
kubectl get csr
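The ClusterRoleBindings from step 14 should approve these CSRs automatically. If one ever sticks in Pending, it can be approved by hand (the CSR name below is a placeholder; take the real one from the kubectl get csr output):

kubectl certificate approve <csr-name>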

16. Deploy the worker components: kube-proxy

Create the kube-proxy CSR file
cat > kube-proxy-csr.json  <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-proxy",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
Generate the kube-proxy certificate
cfssl gencert \
-ca=/caofacan/certs/kubernetes/k8s-ca.pem \
-ca-key=/caofacan/certs/kubernetes/k8s-ca-key.pem \
-config=k8s-ca-config.json \
-profile=kubernetes \
kube-proxy-csr.json | cfssljson -bare /caofacan/certs/kubernetes/kube-proxy
Generate the kubeconfig file
  1. Set the cluster entry

    kubectl config set-cluster yinzhengjie-k8s \
      --certificate-authority=/caofacan/certs/kubernetes/k8s-ca.pem \
      --embed-certs=true \
      --server=https://10.0.0.240:8443 \
      --kubeconfig=/caofacan/certs/kubeconfig/kube-proxy.kubeconfig
  2. Set the user entry

    kubectl config set-credentials system:kube-proxy \
      --client-certificate=/caofacan/certs/kubernetes/kube-proxy.pem \
      --client-key=/caofacan/certs/kubernetes/kube-proxy-key.pem \
      --embed-certs=true \
      --kubeconfig=/caofacan/certs/kubeconfig/kube-proxy.kubeconfig
  3. Set the context

    kubectl config set-context kube-proxy@kubernetes \
      --cluster=yinzhengjie-k8s \
      --user=system:kube-proxy \
      --kubeconfig=/caofacan/certs/kubeconfig/kube-proxy.kubeconfig
  4. Use the context as the default

    kubectl config use-context kube-proxy@kubernetes \
      --kubeconfig=/caofacan/certs/kubeconfig/kube-proxy.kubeconfig
Sync the kubeconfig file to all worker nodes

data_rsync.sh /caofacan/certs/kubeconfig/kube-proxy.kubeconfig
Create the kube-proxy config file on all nodes

cat > /etc/kubernetes/kube-proxy.yml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: 0.0.0.0
metricsBindAddress: 127.0.0.1:10249
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /caofacan/certs/kubeconfig/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 10.100.0.0/16
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF
Manage kube-proxy with systemd on all nodes

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Jason Yin's Kubernetes Proxy
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yml \
  --v=2
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
Start kube-proxy on all nodes

systemctl daemon-reload && systemctl enable --now kube-proxy
systemctl status kube-proxy
ss -ntl |egrep "10256|10249"
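As a quick sanity check, you can confirm the mode kube-proxy reports on its metrics port and inspect the IPVS virtual servers it programs (a sketch; the rule list grows as Services are created):

# Ask kube-proxy which proxy mode it is running in.
curl 127.0.0.1:10249/proxyMode
# List the IPVS virtual servers and their real servers.
ipvsadm -Ln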

17. Deploy the CNI add-on: flannel

Reference: https://github.com/flannel-io/flannel?tab=readme-ov-file#deploying-flannel-manually

1. Download the manifest

2. Modify the Flannel manifest
[root@node-exporter41 ~]# grep 16 kube-flannel.yml
      "Network": "10.244.0.0/16",
[root@node-exporter41 ~]# 
[root@node-exporter41 ~]# sed -i '/16/s#244#100#' kube-flannel.yml 
[root@node-exporter41 ~]# 
[root@node-exporter41 ~]# grep 16 kube-flannel.yml
      "Network": "10.100.0.0/16",
[root@node-exporter41 ~]# 
[root@node-exporter41 ~]#

3. Install the generic CNI plugins on all nodes
wget https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz
tar xf cni-plugins-linux-amd64-v1.6.2.tgz -C /opt/cni/bin/

[root@node-exporter41 ~]# ll /opt/cni/bin/
total 89780
drwxr-xr-x 2 root root     4096 Jan  7 00:12 ./
drwxr-xr-x 3 root root     4096 Apr 23 14:43 ../
-rwxr-xr-x 1 root root  4655178 Jan  7 00:12 bandwidth*
-rwxr-xr-x 1 root root  5287212 Jan  7 00:12 bridge*
-rwxr-xr-x 1 root root 12762814 Jan  7 00:12 dhcp*
-rwxr-xr-x 1 root root  4847854 Jan  7 00:12 dummy*
-rwxr-xr-x 1 root root  5315134 Jan  7 00:12 firewall*
-rwxr-xr-x 1 root root  4792010 Jan  7 00:12 host-device*
-rwxr-xr-x 1 root root  4060355 Jan  7 00:12 host-local*
-rwxr-xr-x 1 root root  4870719 Jan  7 00:12 ipvlan*
-rw-r--r-- 1 root root    11357 Jan  7 00:12 LICENSE
-rwxr-xr-x 1 root root  4114939 Jan  7 00:12 loopback*
-rwxr-xr-x 1 root root  4903324 Jan  7 00:12 macvlan*
-rwxr-xr-x 1 root root  4713429 Jan  7 00:12 portmap*
-rwxr-xr-x 1 root root  5076613 Jan  7 00:12 ptp*
-rw-r--r-- 1 root root     2343 Jan  7 00:12 README.md
-rwxr-xr-x 1 root root  4333422 Jan  7 00:12 sbr*
-rwxr-xr-x 1 root root  3651755 Jan  7 00:12 static*
-rwxr-xr-x 1 root root  4928874 Jan  7 00:12 tap*
-rwxr-xr-x 1 root root  4208424 Jan  7 00:12 tuning*
-rwxr-xr-x 1 root root  4868252 Jan  7 00:12 vlan*
-rwxr-xr-x 1 root root  4488658 Jan  7 00:12 vrf*
[root@node-exporter41 ~]#

node-exporter42 and node-exporter43 show the identical listing.

4. Deploy the service
[root@node-exporter41 ~]# kubectl apply -f kube-flannel.yml 
namespace/kube-flannel created
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@node-exporter41 ~]#

5. Check the result
[root@node-exporter41 ~]# kubectl get pods -o wide -A
NAMESPACE      NAME                    READY   STATUS    RESTARTS   AGE   IP          NODE              NOMINATED NODE   READINESS GATES
kube-flannel   kube-flannel-ds-8pt5n   1/1     Running   0          5s    10.0.0.41   node-exporter41   <none>           <none>
kube-flannel   kube-flannel-ds-gwhpb   1/1     Running   0          5s    10.0.0.43   node-exporter43   <none>           <none>
kube-flannel   kube-flannel-ds-mtmxt   1/1     Running   0          5s    10.0.0.42   node-exporter42   <none>           <none>
[root@node-exporter41 ~]#

6. Verify the CNI network plugin works
[root@node-exporter41 ~]# cat > network-cni-test.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: xiuxian-v1
spec:
  nodeName: node-exporter42
  containers:
  - image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
    name: xiuxian
---
apiVersion: v1
kind: Pod
metadata:
  name: xiuxian-v2
spec:
  nodeName: node-exporter43
  containers:
  - image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v2
    name: xiuxian
EOF

[root@node-exporter41 ~]# kubectl apply -f network-cni-test.yaml
pod/xiuxian-v1 created
pod/xiuxian-v2 created
[root@node-exporter41 ~]# 
[root@node-exporter41 ~]# kubectl get pods -o wide
NAME         READY   STATUS              RESTARTS   AGE   IP       NODE              NOMINATED NODE   READINESS GATES
xiuxian-v1   0/1     ContainerCreating   0          4s    <none>   node-exporter42   <none>           <none>
xiuxian-v2   0/1     ContainerCreating   0          4s    <none>   node-exporter43   <none>           <none>
[root@node-exporter41 ~]# 
[root@node-exporter41 ~]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP           NODE              NOMINATED NODE   READINESS GATES
xiuxian-v1   1/1     Running   0          9s    10.100.3.2   node-exporter42   <none>           <none>
xiuxian-v2   1/1     Running   0          9s    10.100.0.2   node-exporter43   <none>           <none>
[root@node-exporter41 ~]# 
[root@node-exporter41 ~]# 
[root@node-exporter41 ~]# curl 10.100.3.2 
<!DOCTYPE html>
<html><head><meta charset="utf-8"/><title>yinzhengjie apps v1</title><style>div img {width: 900px;height: 600px;margin: 0;}</style></head><body><h1 style="color: green">凡人修仙傳 v1 </h1><div><img src="1.jpg"><div></body></html>
[root@node-exporter41 ~]# 
[root@node-exporter41 ~]# curl 10.100.0.2 
<!DOCTYPE html>
<html><head><meta charset="utf-8"/><title>yinzhengjie apps v2</title><style>div img {width: 900px;height: 600px;margin: 0;}</style></head><body><h1 style="color: red">凡人修仙傳 v2 </h1><div><img src="2.jpg"><div></body></html>
[root@node-exporter41 ~]#

7. Enable kubectl auto-completion
kubectl completion bash > ~/.kube/completion.bash.inc
echo "source '$HOME/.kube/completion.bash.inc'" >> $HOME/.bashrc
source $HOME/.bashrc

18. CoreDNS deployment and troubleshooting

1. Download the manifest

Reference: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns/coredns

[root@node-exporter41 ~]# wget http://192.168.16.253/Resources/Kubernetes/Add-ons/CoreDNS/coredns.yaml.base

2. Modify the key fields of the manifest template
[root@node-exporter41 ~]# sed -i  '/__DNS__DOMAIN__/s#__DNS__DOMAIN__#oldboyedu.com#' coredns.yaml.base 
[root@node-exporter41 ~]# 
[root@node-exporter41 ~]# sed -i '/__DNS__MEMORY__LIMIT__/s#__DNS__MEMORY__LIMIT__#200Mi#' coredns.yaml.base 
[root@node-exporter41 ~]# 
[root@node-exporter41 ~]# sed -i '/__DNS__SERVER__/s#__DNS__SERVER__#10.200.0.254#' coredns.yaml.base 
[root@node-exporter41 ~]#

Field notes:
    __DNS__DOMAIN__         the custom DNS domain; it must match your actual K8S cluster domain.
    __DNS__MEMORY__LIMIT__  the memory limit of the CoreDNS component.
    __DNS__SERVER__         the ClusterIP of the DNS server's Service.

3. Point kubelet's Pods at the DNS server

3.1 Prepare the resolver config file
[root@node-exporter41 ~]# cat > /etc/kubernetes/resolv.conf <<EOF
nameserver 223.5.5.5
options edns0 trust-ad
search .
EOF

[root@node-exporter41 ~]# data_rsync.sh /etc/kubernetes/resolv.conf

3.2 Point kubelet at the resolver file
[root@node-exporter41 ~]# grep resolvConf /etc/kubernetes/kubelet-conf.yml 
resolvConf: /etc/resolv.conf
[root@node-exporter41 ~]# 
[root@node-exporter41 ~]# sed -i '/resolvConf/s#resolv.conf#kubernetes/resolv.conf#' /etc/kubernetes/kubelet-conf.yml 
[root@node-exporter41 ~]# 
[root@node-exporter41 ~]# grep resolvConf /etc/kubernetes/kubelet-conf.yml 
resolvConf: /etc/kubernetes/resolv.conf
[root@node-exporter41 ~]# 
[root@node-exporter41 ~]# data_rsync.sh /etc/kubernetes/kubelet-conf.yml
[root@node-exporter41 ~]#

3.3 Restart the service on all nodes of the K8S cluster so the configuration takes effect

systemctl restart kubelet.service

4. Deploy the CoreDNS component
[root@node-exporter41 ~]# kubectl apply -f  coredns.yaml.base 
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
[root@node-exporter41 ~]# 
[root@node-exporter41 ~]# kubectl get pods -n kube-system -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP           NODE              NOMINATED NODE   READINESS GATES
coredns-5578c9dc84-wws5t   1/1     Running   0          10s   10.100.3.4   node-exporter42   <none>           <none>
[root@node-exporter41 ~]#

Tip: if the image pull fails, you can import it manually:

    wget http://192.168.16.253/Resources/Kubernetes/Add-ons/CoreDNS/oldboyedu-coredns-v1.12.0.tar.gz
    ctr -n k8s.io i import oldboyedu-coredns-v1.12.0.tar.gz

5. Verify the DNS service
[root@node-exporter41 ~]# kubectl get svc -A
NAMESPACE     NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes   ClusterIP   10.200.0.1     <none>        443/TCP                  3h56m
kube-system   kube-dns     ClusterIP   10.200.0.254   <none>        53/UDP,53/TCP,9153/TCP   15m
[root@node-exporter41 ~]# 
[root@node-exporter41 ~]# 
[root@node-exporter41 ~]# dig @10.200.0.254 kube-dns.kube-system.svc.oldboyedu.com +short
10.200.0.254
[root@node-exporter41 ~]# 
[root@node-exporter41 ~]# dig @10.200.0.254 kubernetes.default.svc.oldboyedu.com +short
10.200.0.1
[root@node-exporter41 ~]#

6. Deploy Pods to verify the default DNS server
[root@node-exporter41 ~]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP           NODE              NOMINATED NODE   READINESS GATES
xiuxian-v1   1/1     Running   0          19s   10.100.3.5   node-exporter42   <none>           <none>
xiuxian-v2   1/1     Running   0          19s   10.100.0.4   node-exporter43   <none>           <none>
[root@node-exporter41 ~]# 
[root@node-exporter41 ~]# kubectl exec -it xiuxian-v1  -- cat /etc/resolv.conf
search default.svc.oldboyedu.com svc.oldboyedu.com oldboyedu.com
nameserver 10.200.0.254
options ndots:5
[root@node-exporter41 ~]#

7. Clean up the Pod environment
[root@node-exporter41 ~]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE     IP           NODE              NOMINATED NODE   READINESS GATES
xiuxian-v1   1/1     Running   0          2m20s   10.100.3.5   node-exporter42   <none>           <none>
xiuxian-v2   1/1     Running   0          2m20s   10.100.0.4   node-exporter43   <none>           <none>
[root@node-exporter41 ~]# 
[root@node-exporter41 ~]# kubectl delete pods --all
pod "xiuxian-v1" deleted
pod "xiuxian-v2" deleted
[root@node-exporter41 ~]# 
[root@node-exporter41 ~]# kubectl get pods -o wide
No resources found in default namespace.
[root@node-exporter41 ~]# 

19. Verify the K8S cluster's high availability

kubectl get no -o wide

Cluster state:

NAME              STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
node-exporter41   Ready    <none>   8h    v1.32.4   10.0.0.41     <none>        Ubuntu 22.04.4 LTS   5.15.0-135-generic   containerd://1.6.36
node-exporter42   Ready    <none>   8h    v1.32.4   10.0.0.42     <none>        Ubuntu 22.04.4 LTS   5.15.0-134-generic   containerd://1.6.36
node-exporter43   Ready    <none>   8h    v1.32.4   10.0.0.43     <none>        Ubuntu 22.04.4 LTS   5.15.0-135-generic   containerd://1.6.36

V. Add-on Components

1.Helm

1. Install helm
[root@node-exporter41 ~]# wget https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz
[root@node-exporter41 ~]# tar xf helm-v3.17.3-linux-amd64.tar.gz   -C /usr/local/bin/ linux-amd64/helm --strip-components=1
[root@node-exporter41 ~]# helm version
version.BuildInfo{Version:"v3.17.3", GitCommit:"e4da49785aa6e6ee2b86efd5dd9e43400318262b", GitTreeState:"clean", GoVersion:"go1.23.7"}

# Sync the binary to the other nodes
[root@node-exporter41 ~]# data_rsync.sh /usr/local/bin/helm 
===== rsyncing node-exporter42: helm =====
Command executed successfully!
===== rsyncing node-exporter43: helm =====
Command executed successfully!
[root@node-exporter41 ~]#

2. Configure helm auto-completion on all nodes
helm completion bash > /etc/bash_completion.d/helm
source /etc/bash_completion.d/helm
echo 'source /etc/bash_completion.d/helm' >> ~/.bashrc 

2.Ingress-nginx

GitHub: https://github.com/kubernetes/ingress-nginx

1. Add the third-party repository
[root@node-exporter41 ~]# helm repo add oldboyedu-ingress https://kubernetes.github.io/ingress-nginx
"oldboyedu-ingress" has been added to your repositories
[root@node-exporter41 ~]# helm  repo list
NAME                    URL
oldboyedu-ingress       https://kubernetes.github.io/ingress-nginx

2. Download the specified Chart
[root@node-exporter41 ~]# helm search repo ingress-nginx 
NAME                            CHART VERSION   APP VERSION     DESCRIPTION                                       
oldboyedu-ingress/ingress-nginx 4.12.1          1.12.1
Ingress controller for Kubernetes using NGINX a...
[root@node-exporter41 ~]# helm pull oldboyedu-ingress/ingress-nginx --version 4.12.1

3. Unpack the package and modify the configuration parameters
[root@node-exporter41 ~]# tar xf ingress-nginx-4.12.1.tgz
[root@node-exporter41 ~]# sed -ri '/digest:/s@^@#@' ingress-nginx/values.yaml
[root@node-exporter41 ~]# sed -i '/hostNetwork:/s#false#true#' ingress-nginx/values.yaml
[root@node-exporter41 ~]# sed -i  '/dnsPolicy/s#ClusterFirst#ClusterFirstWithHostNet#' ingress-nginx/values.yaml
[root@node-exporter41 ~]# sed -i '/kind/s#Deployment#DaemonSet#' ingress-nginx/values.yaml 
[root@node-exporter41 ~]# sed -i '/default:/s#false#true#' ingress-nginx/values.yaml

Tip: remember to edit values.yaml and disable the 'admissionWebhooks' feature, i.e. set enabled: false

[root@node-exporter41 ~]# grep -A 10 'admissionWebhooks' ingress-nginx/values.yaml
admissionWebhooks:
..........
  enabled: false

4. Install ingress-nginx
[root@node-exporter41 ~]# helm upgrade --install myingress ingress-nginx -n ingress-nginx --create-namespace
Release "myingress" does not exist. Installing it now.7.驗證Ingress-nginx是否安裝成功
[root@node-exporter41 ~]#  helm -n ingress-nginx list
NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                   APP VERSION
myingress       ingress-nginx   1               2025-04-23 21:16:21.595928036 +0800 CST deployed        ingress-nginx-4.12.1    1.12.1

[root@node-exporter41 ~]# kubectl get ingressclass,deploy,svc,po -n ingress-nginx -o wide
NAME                                   CONTROLLER             PARAMETERS   AGE
ingressclass.networking.k8s.io/nginx   k8s.io/ingress-nginx   <none>       2m7s

NAME                                         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE    SELECTOR
service/myingress-ingress-nginx-controller   LoadBalancer   10.200.105.97   <pending>     80:44150/TCP,443:15236/TCP   2m7s   app.kubernetes.io/component=controller,app.kubernetes.io/instance=myingress,app.kubernetes.io/name=ingress-nginx

NAME                                           READY   STATUS    RESTARTS   AGE    IP          NODE              NOMINATED NODE   READINESS GATES
pod/myingress-ingress-nginx-controller-7mnbk   1/1     Running   0          2m7s   10.0.0.42   node-exporter42   <none>           <none>
pod/myingress-ingress-nginx-controller-hbpnp   1/1     Running   0          2m7s   10.0.0.41   node-exporter41   <none>           <none>
pod/myingress-ingress-nginx-controller-jkndx   1/1     Running   0          2m7s   10.0.0.43   node-exporter43   <none>           <none>

6. Access test; our configuration uses the host network directly, so the node IPs respond:
http://10.0.0.41/
http://10.0.0.42/
http://10.0.0.43/
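Routing real traffic through the controller additionally requires an Ingress object. A hypothetical sketch (the xiuxian Service and the xiuxian.caofacan.com host are illustrative only; they are not created elsewhere in this article):

cat > test-ingress.yaml <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: xiuxian
spec:
  ingressClassName: nginx
  rules:
  - host: xiuxian.caofacan.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: xiuxian
            port:
              number: 80
EOF
kubectl apply -f test-ingress.yaml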

3.metallb

1. Download and deploy metallb

[root@node-exporter41 ~]# wget https://raw.githubusercontent.com/metallb/metallb/v0.14.9/config/manifests/metallb-native.yaml
[root@node-exporter41 ~]# kubectl apply -f metallb-native.yaml
namespace/metallb-system created
....

2. Check metallb's status
[root@node-exporter41 ~]# kubectl get all -o wide -n metallb-system
NAME                             READY   STATUS    RESTARTS   AGE   IP           NODE              NOMINATED NODE   READINESS GATES
pod/controller-bb5f47665-tjz84   1/1     Running   0          30s   10.100.3.3   node-exporter43   <none>           <none>
pod/speaker-h6n9d                1/1     Running   0          30s   10.0.0.42    node-exporter42   <none>           <none>
pod/speaker-m4kk7                1/1     Running   0          30s   10.0.0.41    node-exporter41   <none>           <none>
pod/speaker-wxlwj                1/1     Running   0          30s   10.0.0.43    node-exporter43   <none>           <none>

NAME                              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/metallb-webhook-service   ClusterIP   10.200.119.247   <none>        443/TCP   30s   component=controller

NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE   CONTAINERS   IMAGES                            SELECTOR
daemonset.apps/speaker   3         3         3       3            3           kubernetes.io/os=linux   30s   speaker      quay.io/metallb/speaker:v0.14.9   app=metallb,component=speaker

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                               SELECTOR
deployment.apps/controller   1/1     1            1           30s   controller   quay.io/metallb/controller:v0.14.9   app=metallb,component=controller

NAME                                   DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                               SELECTOR
replicaset.apps/controller-bb5f47665   1         1         1       30s   controller   quay.io/metallb/controller:v0.14.9   app=metallb,component=controller,pod-template-hash=bb5f47665

3. Create the MetalLB address pool
[root@node-exporter41 ~]# cat > metallb-ip-pool.yaml <<EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: yinzhengjie-k8s-metallb-custom-pool
  namespace: metallb-system
spec:
  addresses:
  - 10.0.0.150-10.0.0.180
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: oldboyedu
  namespace: metallb-system
spec:
  ipAddressPools:
  - yinzhengjie-k8s-metallb-custom-pool
EOF
[root@node-exporter41 ~]# kubectl apply -f metallb-ip-pool.yaml
ipaddresspool.metallb.io/yinzhengjie-k8s-metallb-custom-pool created
l2advertisement.metallb.io/oldboyedu created

### Verify that the LoadBalancer Service used by our ingress now has an address
[root@node-exporter41 ~]# kubectl get svc -A
NAMESPACE        NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
default          kubernetes                           ClusterIP      10.200.0.1       <none>        443/TCP                      9h
ingress-nginx    myingress-ingress-nginx-controller   LoadBalancer   10.200.105.97    10.0.0.150    80:44150/TCP,443:15236/TCP   5m49s

4.metrics-server

Docs: https://github.com/kubernetes-sigs/metrics-server

1. Download the manifest

[root@node-exporter41 ~]# wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/high-availability-1.21+.yaml

2. Edit the configuration file
[root@node-exporter41 ~]# vim high-availability-1.21+.yaml 
...
114 apiVersion: apps/v1
115 kind: Deployment
116 metadata:
...
144       - args:
145         - --kubelet-insecure-tls  # Do not verify the CA of the serving certificates presented by the kubelets; omitting this causes x509 errors.
...

### Verify that every node has the '--requestheader-allowed-names=front-proxy-client' parameter configured
vim /lib/systemd/system/kube-apiserver.service
[Unit]
Description=Jason Yin's Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  --requestheader-allowed-names=front-proxy-client \
  --v=2 \
  ...

2. If the parameter is missing, add it and restart the service.

systemctl daemon-reload && systemctl restart kube-apiserver

ss -ntl |grep 6443
LISTEN 0      16384              *:6443             *:*

ss -ntl |grep 8443
LISTEN 0      2000       127.0.0.1:8443       0.0.0.0:*
LISTEN 0      2000         0.0.0.0:8443       0.0.0.0:*

3. Deploy metrics-server
[root@node-exporter41 ~]# kubectl apply -f high-availability-1.21+.yaml

4. Verify
[root@node-exporter41 ~]# kubectl get pods -o wide -n kube-system  -l k8s-app=metrics-server
NAME                             READY   STATUS    RESTARTS   AGE   IP           NODE              NOMINATED NODE   READINESS GATES
metrics-server-9c85f7647-8lfrz   1/1     Running   0          28s   10.100.1.3   node-exporter42   <none>           <none>
metrics-server-9c85f7647-kpzjz   1/1     Running   0          28s   10.100.3.4   node-exporter43   <none>           <none>
[root@node-exporter41 ~]# kubectl top no
NAME              CPU(cores)   CPU(%)   MEMORY(bytes)   MEMORY(%)   
node-exporter41   109m         5%       1064Mi          18%         
node-exporter42   116m         5%       866Mi           14%         
node-exporter43   218m         10%      1169Mi          20%       

5. Deploy the Dashboard with Helm

Reference: https://github.com/kubernetes/dashboard

1. Add the Dashboard repository
[root@node-exporter41 ~]# helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
"kubernetes-dashboard" has been added to your repositories
[root@node-exporter41 ~]# helm repo list
NAME                    URL                                       
oldboyedu-ingress       https://kubernetes.github.io/ingress-nginx
kubernetes-dashboard    https://kubernetes.github.io/dashboard/

2. Install the Dashboard
[root@node-exporter41 ~]# helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard

[root@node-exporter41 ~]# tar xf kubernetes-dashboard-7.12.0.tgz
[root@node-exporter41 ~]# 
[root@node-exporter41 ~]# ll kubernetes-dashboard
total 56
drwxr-xr-x  4 root root  4096 Apr 23 21:46 ./
drwx------ 10 root root  4096 Apr 23 21:46 ../
-rw-r--r--  1 root root   497 Apr 16 23:19 Chart.lock
drwxr-xr-x  6 root root  4096 Apr 23 21:46 charts/
-rw-r--r--  1 root root   982 Apr 16 23:19 Chart.yaml
-rw-r--r--  1 root root   948 Apr 16 23:19 .helmignore
-rw-r--r--  1 root root  8209 Apr 16 23:19 README.md
drwxr-xr-x 10 root root  4096 Apr 23 21:46 templates/
-rw-r--r--  1 root root 13729 Apr 16 23:19 values.yaml

[root@node-exporter41 ~]# helm upgrade --install mywebui kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
Release "mywebui" does not exist. Installing it now.
NAME: mywebui
LAST DEPLOYED: Wed Apr 23 21:54:09 2025
NAMESPACE: kubernetes-dashboard
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
*************************************************************************************************
*** PLEASE BE PATIENT: Kubernetes Dashboard may need a few minutes to get up and become ready ***
*************************************************************************************************

Congratulations! You have just installed Kubernetes Dashboard in your cluster.

To access Dashboard run:
  kubectl -n kubernetes-dashboard port-forward svc/mywebui-kong-proxy 8443:443

NOTE: In case port-forward command does not work, make sure that kong service name is correct.
Check the services in Kubernetes Dashboard namespace using:
  kubectl -n kubernetes-dashboard get svc

Dashboard will be available at:
  https://localhost:8443

3. Check the deployment
[root@node-exporter41 ~]# helm -n kubernetes-dashboard list
NAME    NAMESPACE               REVISION        UPDATED                                 STATUS          CHART                           APP VERSION
mywebui kubernetes-dashboard    1               2025-04-23 21:54:09.158789016 +0800 CST deployed        kubernetes-dashboard-7.12.0

[root@node-exporter41 ~]# kubectl -n kubernetes-dashboard get pods -o wide
NAME                                                            READY   STATUS    RESTARTS   AGE   IP           NODE              NOMINATED NODE   READINESS GATES
mywebui-kong-6dd8f649b4-2gxqw                                   1/1     Running   0          72s   10.100.1.4   node-exporter42   <none>           <none>
mywebui-kubernetes-dashboard-api-5445d676c7-ggrm8               1/1     Running   0          72s   10.100.1.5   node-exporter42   <none>           <none>
mywebui-kubernetes-dashboard-auth-8687596fbd-xs8w9              1/1     Running   0          72s   10.100.0.4   node-exporter41   <none>           <none>
mywebui-kubernetes-dashboard-metrics-scraper-6ccc47c6ff-cwnq4   1/1     Running   0          72s   10.100.3.5   node-exporter43   <none>           <none>
mywebui-kubernetes-dashboard-web-584df9444c-n9sgj               1/1     Running   0          72s   10.100.0.5   node-exporter41   <none>           <none>
[root@node-exporter41 ~]#

4. Change the Service type
Edit the kong proxy Service and change its spec type field to NodePort:

[root@node-exporter41 ~]# kubectl edit svc -n kubernetes-dashboard mywebui-kong-proxy
  type: NodePort
service/mywebui-kong-proxy edited
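
kubectl edit opens an interactive editor; in scripts or CI the same change can be applied non-interactively. A minimal sketch using kubectl patch against the Service name above:

kubectl -n kubernetes-dashboard patch svc mywebui-kong-proxy -p '{"spec":{"type":"NodePort"}}'

Either way, the cluster allocates the node port from its configured range; the 7118 seen below is outside the 30000-32767 default, so this cluster evidently runs the apiserver with a custom --service-node-port-range.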
[root@node-exporter41 ~]# kubectl -n kubernetes-dashboard get pods -o wide
NAME                                                            READY   STATUS    RESTARTS   AGE     IP           NODE              NOMINATED NODE   READINESS GATES
mywebui-kong-6dd8f649b4-2gxqw                                   1/1     Running   0          3m18s   10.100.1.4   node-exporter42   <none>           <none>
mywebui-kubernetes-dashboard-api-5445d676c7-ggrm8               1/1     Running   0          3m18s   10.100.1.5   node-exporter42   <none>           <none>
mywebui-kubernetes-dashboard-auth-8687596fbd-xs8w9              1/1     Running   0          3m18s   10.100.0.4   node-exporter41   <none>           <none>
mywebui-kubernetes-dashboard-metrics-scraper-6ccc47c6ff-cwnq4   1/1     Running   0          3m18s   10.100.3.5   node-exporter43   <none>           <none>
mywebui-kubernetes-dashboard-web-584df9444c-n9sgj               1/1     Running   0          3m18s   10.100.0.5   node-exporter41   <none>           <none>

[root@node-exporter41 ~]# kubectl get svc -n kubernetes-dashboard mywebui-kong-proxy
NAME                 TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
mywebui-kong-proxy   NodePort   10.200.163.82   <none>        443:7118/TCP   3m27s

5. Access the WebUI
https://10.0.0.43:7118/#/login
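
The NodePort (7118 here) is allocated dynamically, so substitute the port reported by your own kubectl get svc output; any node IP works. kong serves a self-signed certificate by default, so the browser will warn before letting you through. A quick reachability check from the CLI, using the node and port above:

curl -ks https://10.0.0.43:7118/ | head

Any HTML response confirms the kong proxy is reachable; logging in requires the token created in the next step.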

6. Create a login account

6.1 Create a ServiceAccount
[root@node-exporter41 ~]# kubectl create serviceaccount admin
serviceaccount/admin created
[root@node-exporter41 ~]#

6.2 Bind the ServiceAccount to the cluster-admin ClusterRole
[root@node-exporter41 ~]# kubectl create clusterrolebinding dashboard-admin --serviceaccount=default:admin --clusterrole=cluster-admin 
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
[root@node-exporter41 ~]#

6.3 Generate a login token
[root@node-exporter41 ~]# kubectl create token admin
eyJhbGciOiJSUzI1NiIsImtpZCI6IlJEV1R3UmY2Wm9JSmJrdF96amVBdFZQeUZyM2p5Z1BPY1VnVnJjWE9tR2cifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLm9sZGJveWVkdS5jb20iXSwiZXhwIjoxNzQ1NDIwNDE4LCJpYXQiOjE3NDU0MTY4MTgsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5vbGRib3llZHUuY29tIiwianRpIjoiYjk5MzViMmQtNzI5YS00YmIwLThlNTEtYzI2OWUzMmMxNThlIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZWZhdWx0Iiwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImFkbWluIiwidWlkIjoiNDFkN2I5OWEtYzFlMC00ZGEwLWExZjMtZDg0OWU1NDFiZjlhIn19LCJuYmYiOjE3NDU0MTY4MTgsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmFkbWluIn0.kcYS6zxk7bEoqwzQvIWasJgMNJRRZyiECb00VV1ty1oTsj6fk5jNCc2tQkSaHoEFp6WvTcP9-Qc99C00RNCLFFnmLiTrRkW9zGP8YfmevJUCdm3wbJ1qWyimdEcCCQcllZTMUMtvYr9gPk2kS6kCijokFv4sXWL8VsMUYg32gEIaz_o6KOGfR9BwfhQyzkQraIrp5M2-268kHwDTMdp-73C85IK7Uc4OuP93qZFCW961RHkEZY6IeXHbKwZx215J9PNxHdWxo8tWZhqV8YUX4ggeXEJUhDUlrApEJMJyC_IVQq9mn3dKv3p6Odwo6BNWKLVFR1GeswTZXic5OVUzZw
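
Paste this token into the Dashboard login page. Note that tokens minted by kubectl create token are short-lived (one hour by default); a longer one can be requested with, for example, kubectl create token admin --duration=24h, subject to the API server's configured maximum. For repeatable setups, the two imperative commands above can also be kept under version control as a manifest; a minimal equivalent sketch using the same names:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin
  namespace: default
EOF

Binding cluster-admin to a ServiceAccount in the default namespace is convenient for a lab cluster; in production you would scope the role down.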

VI. Summary

Following the steps above, we deployed a highly available Kubernetes 1.32 cluster on Ubuntu 22.04. The walkthrough covered the full path from base-environment preparation to a complete cluster: building the etcd HA cluster, deploying the Kubernetes components, generating and wiring up the certificates, and installing and verifying the high-availability layer.

The result is a multi-master cluster that keeps running when a single node fails, which raises the reliability and stability of the system and makes it suitable for enterprise workloads that demand business continuity and high availability.

Thanks to the network architecture and component configuration built up along the way, the cluster shifts load automatically when a node fails and keeps services uninterrupted, while the certificate management scheme secures all intra-cluster communication, laying a solid foundation for later application deployment and expansion.

Beyond validating the feasibility of the HA architecture, the exercise builds valuable deployment experience that will support running and managing more complex applications on Kubernetes. This foundation can be extended further, for example with an Ingress controller or network storage plugins, to meet more varied enterprise needs.
