[Cloud Native] Manual for Deploying a Highly Available Kubernetes Platform

Deploying a Highly Available Kubernetes Platform

Table of Contents

  • Deploying a Highly Available Kubernetes Platform
    • Base Environment
    • 1. Basic Environment Configuration
      • 1.1 Disable Swap
      • 1.2 Add hosts Resolution
      • 1.3 Pass Bridged IPv4 Traffic to iptables Chains
    • 2. Configure the Kubernetes VIP
      • 2.1 Install Nginx
      • 2.2 Modify the Nginx Configuration File
      • 2.3 Start the Service
      • 2.4 Install Keepalived
      • 2.5 Modify the Configuration Files
        • 2.5.1 Nginx1 Node Configuration
        • 2.5.2 Nginx2 Node Configuration
        • 2.5.3 Start the Services
    • 3. Deploy Kubernetes
      • 3.1 Install the Docker Container Runtime
      • 3.2 Configure Docker
      • 3.3 Install the Kubeadm Tools
      • 3.4 Initialize the Master Node
      • 3.5 Join the Node to the Cluster
      • 3.6 Join the Remaining Master Nodes to the Cluster
        • 3.6.1 Re-create the token and hash on Master1
        • 3.6.2 Regenerate the certificate-key on Master1
        • 3.6.3 Assemble the control-plane join command
        • 3.6.4 Join the other master nodes to the cluster
    • 4. Deploy the Network Plugin
    • 5. Verification
      • 5.1 Check the Status of All Pods
      • 5.2 Check Node Status
      • 5.3 Check Cluster Component Status

Operating System   Spec   Hostname   IP
CentOS 7.9         2C4G   master1    192.168.93.101
CentOS 7.9         2C4G   master2    192.168.93.102
CentOS 7.9         2C4G   master3    192.168.93.103
CentOS 7.9         2C4G   node1      192.168.93.104
CentOS 7.9         2C4G   nginx1     192.168.93.105
CentOS 7.9         2C4G   nginx2     192.168.93.106

Base Environment

  • Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
  • Disable the kernel security mechanism (SELinux)
setenforce 0
sed -i "s/^SELINUX=.*/SELINUX=disabled/g" /etc/selinux/config
  • Set the hostname on each node
hostnamectl set-hostname master1
hostnamectl set-hostname master2
hostnamectl set-hostname master3
hostnamectl set-hostname node1
hostnamectl set-hostname nginx1
hostnamectl set-hostname nginx2
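  • Optional sanity check (a quick sketch): confirm on each node that the firewall is stopped, SELinux is permissive, and the hostname is set.
systemctl is-active firewalld     # expect: inactive
getenforce                        # expect: Permissive now, Disabled after a reboot
hostnamectl status | grep "Static hostname"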

1. Basic Environment Configuration

  • Perform the following steps on all nodes; Master1 is used as the example.

1.1 Disable Swap

# Disable temporarily
[root@master1 ~]# swapoff -a
# Disable permanently
[root@master1 ~]# sed -i 's/.*swap.*/#&/g' /etc/fstab
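  • Optional check (sketch): confirm that swap is really off and the fstab entry is commented out.
[root@master1 ~]# free -h                 # the Swap line should show 0B
[root@master1 ~]# grep swap /etc/fstab    # the swap entry should now start with #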

1.2 Add hosts Resolution

[root@master1 ~]# cat >> /etc/hosts << EOF
192.168.93.101 master1
192.168.93.102 master2
192.168.93.103 master3
192.168.93.104 node1
192.168.93.105 nginx1
192.168.93.106 nginx2
EOF

1.3 Pass Bridged IPv4 Traffic to iptables Chains

[root@master1 ~]# modprobe overlay
[root@master1 ~]# modprobe br_netfilter
[root@master1 ~]# cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
[root@master1 ~]# sysctl --system
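  • Optional check (sketch): verify that the br_netfilter module is loaded and the sysctl values took effect.
[root@master1 ~]# lsmod | grep br_netfilter
[root@master1 ~]# sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables   # all three should be 1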

2. Configure the Kubernetes VIP

  • Perform these steps on both Nginx nodes; Nginx1 is used as the example.

2.1 Install Nginx

# Install the EPEL repository (provides nginx)
[root@nginx1 ~]# yum -y install epel-release.noarch
# Install the nginx service
[root@nginx1 ~]# yum -y install nginx
# Install the nginx stream module (TCP reverse proxy)
[root@nginx1 ~]# yum -y install nginx-mod-stream

2.2 Modify the Nginx Configuration File

  • Open the nginx configuration file /etc/nginx/nginx.conf and add the block below right after the events block.
[root@nginx1 ~]# vim /etc/nginx/nginx.conf
# Place this below the closing } of the events block
# Replace the IPs with the addresses of your three master nodes
stream {
    upstream apiserver {
        server 192.168.93.101:6443 max_fails=2 fail_timeout=5s weight=1;
        server 192.168.93.102:6443 max_fails=2 fail_timeout=5s weight=1;
        server 192.168.93.103:6443 max_fails=2 fail_timeout=5s weight=1;
    }
    server {
        listen 6443;
        proxy_pass apiserver;
    }
}

2.3 Start the Service

[root@nginx1 ~]# systemctl start nginx
[root@nginx1 ~]# systemctl enable nginx
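  • Optional check (sketch): validate the configuration and confirm nginx is listening on port 6443 for the apiserver upstream.
[root@nginx1 ~]# nginx -t
[root@nginx1 ~]# ss -tlnp | grep 6443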

2.4 Install Keepalived

  • Install on both Nginx nodes
yum -y install keepalived

2.5 Modify the Configuration Files

2.5.1 Nginx1 Node Configuration
[root@nginx1 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id NGINX1
}
vrrp_script check_nginx {
    script "/etc/keepalived/nginx_check.sh"
    interval 1   # check once per second
    weight -2    # lower priority by 2 if the script fails
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        check_nginx
    }
    virtual_ipaddress {
        192.168.93.200/24   # an unused address in the same subnet
    }
}
# Create the nginx health-check script
[root@nginx1 ~]# cat > /etc/keepalived/nginx_check.sh << 'EOF'
#!/bin/bash
# Count running nginx processes
num=$(ps -ef | grep nginx | grep process | grep -v grep | wc -l)
if [ "$num" -eq 0 ]
then
    systemctl stop keepalived
fi
EOF
# Make the script executable
[root@nginx1 ~]# chmod +x /etc/keepalived/nginx_check.sh
2.5.2 Nginx2 Node Configuration
[root@nginx2 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id NGINX2
}
vrrp_script check_nginx {
    script "/etc/keepalived/nginx_check.sh"
    interval 1   # check once per second
    weight -2    # lower priority by 2 if the script fails
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        check_nginx
    }
    virtual_ipaddress {
        192.168.93.200/24   # the same unused address in the same subnet
    }
}
# Create the nginx health-check script
[root@nginx2 ~]# cat > /etc/keepalived/nginx_check.sh << 'EOF'
#!/bin/bash
# Count running nginx processes
num=$(ps -ef | grep nginx | grep process | grep -v grep | wc -l)
if [ "$num" -eq 0 ]
then
    systemctl stop keepalived
fi
EOF
# Make the script executable
[root@nginx2 ~]# chmod +x /etc/keepalived/nginx_check.sh
2.5.3 Start the Services
[root@nginx1 ~]# systemctl start keepalived.service 
[root@nginx1 ~]# systemctl enable keepalived.service
[root@nginx2 ~]# systemctl start keepalived.service
[root@nginx2 ~]# systemctl enable keepalived.service
# The VIP appears on nginx1; nginx2 does not hold it for now
[root@nginx1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:f0:47:e5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.93.105/24 brd 192.168.93.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
#####################################################################
    inet 192.168.93.200/24 scope global secondary ens33
#####################################################################
       valid_lft forever preferred_lft forever
    inet6 fe80::99c1:74ac:9584:dba4/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
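  • Optional failover test (a sketch based on the configuration above): stopping nginx on nginx1 makes the check script stop keepalived, so the VIP should move to nginx2; restart the services on nginx1 afterwards.
[root@nginx1 ~]# systemctl stop nginx
[root@nginx1 ~]# ip a | grep 192.168.93.200     # the VIP should be gone here
[root@nginx2 ~]# ip a | grep 192.168.93.200     # and should now appear on nginx2
# Restore nginx1 afterwards
[root@nginx1 ~]# systemctl start nginx
[root@nginx1 ~]# systemctl start keepalived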

3. Deploy Kubernetes

  • Perform these steps on all Kubernetes nodes, including node1; Master1 is used as the example.

3.1 Install the Docker Container Runtime

[root@master1 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@master1 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@master1 ~]# yum clean all && yum makecache
[root@master1 ~]# yum -y install docker-ce docker-ce-cli containerd.io
# Start the service
[root@master1 ~]# systemctl start docker
[root@master1 ~]# systemctl enable docker

3.2 Configure Docker

[root@master1 ~]# cat > /etc/docker/daemon.json << EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"registry-mirrors": ["https://8xpk5wnt.mirror.aliyuncs.com"]
}
EOF
# Reload the daemon configuration and restart Docker
[root@master1 ~]# systemctl daemon-reload 
[root@master1 ~]# systemctl restart docker
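  • Optional check (sketch): confirm Docker now uses the systemd cgroup driver, which must match the kubelet.
[root@master1 ~]# docker info | grep -i "cgroup driver"    # expect: Cgroup Driver: systemd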

3.3 Install the Kubeadm Tools

# Configure the Kubernetes yum repository
[root@master1 ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# The version is pinned here; change it if you need a different release
[root@master1 ~]# yum install -y kubelet-1.23.0 kubeadm-1.23.0 kubectl-1.23.0
# Only enable the kubelet service at this point; do NOT start it yet
[root@master1 ~]# systemctl enable kubelet.service 
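  • Optional check (sketch): confirm the pinned versions were installed on every node before moving on.
[root@master1 ~]# kubeadm version -o short    # expect: v1.23.0
[root@master1 ~]# kubelet --version           # expect: Kubernetes v1.23.0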

3.4 Initialize the Master Node

  • Perform this step on Master1 only.
# Generate the initialization configuration file
[root@master1 ~]# kubeadm config print init-defaults > kubeadm-config.yaml
# Edit the initialization configuration file
[root@master1 ~]# vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.93.101      # change to this node's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: master1                         # change to the local hostname
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.93.200:6443"   # the control-plane endpoint, i.e. the VIP; add this line if it is missing
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   # switch to a domestic mirror
kind: ClusterConfiguration
kubernetesVersion: 1.23.0               # make sure this matches the installed Kubernetes version
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: "10.244.0.0/16"            # add the Pod network CIDR for the CNI plugin
scheduler: {}
# Pull the required images. You can also prepare the images in advance and import them; if you do, it is recommended to import them on all k8s nodes
[root@master1 ~]# kubeadm config images pull --config=kubeadm-config.yaml
W0706 09:06:51.221691    8866 strict.go:55] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta2", Kind:"InitConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "imagePullPolicy"
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.1-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6
# Initialize the cluster
[root@master1 ~]# kubeadm init --config kubeadm-config.yaml
W0706 09:10:47.900752    9256 strict.go:55] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta2", Kind:"InitConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "imagePullPolicy"
[init] Using Kubernetes version: v1.23.0
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 26.1.4. Latest validated version: 20.10
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master1] and IPs [10.96.0.1 192.168.93.101 192.168.93.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master1] and IPs [192.168.93.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master1] and IPs [192.168.93.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.035896 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.93.200:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:28ffbef6224f555172c7614e12a02bb82278e6a9181aaff2531bdc46184ffab3 \
	--control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.93.200:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:28ffbef6224f555172c7614e12a02bb82278e6a9181aaff2531bdc46184ffab3
# Configure kubectl on the master1 node
[root@master1 ~]# mkdir -p $HOME/.kube
[root@master1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
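  • At this point kubectl on master1 can already reach the API server through the VIP; a quick check (sketch). Note that master1 will report NotReady until the network plugin from section 4 is deployed.
[root@master1 ~]# kubectl get nodes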

3.5 Join the Node to the Cluster

  • The last command printed at the end of the master1 initialization output is the worker join command; copy it to the node and run it there.
[root@node1 ~]# kubeadm join 192.168.93.200:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:28ffbef6224f555172c7614e12a02bb82278e6a9181aaff2531bdc46184ffab3 
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 26.1.4. Latest validated version: 20.10
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
# If you have lost the join command, you can generate a new one on master1
[root@master1 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.93.200:6443 --token erlw7x.b5ikmqtha6aa7tqw --discovery-token-ca-cert-hash sha256:28ffbef6224f555172c7614e12a02bb82278e6a9181aaff2531bdc46184ffab3 

3.6 Join the Remaining Master Nodes to the Cluster

3.6.1 Re-create the token and hash on Master1
[root@master1 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.93.200:6443 --token qx5782.tuypr2tqgg7gp48q --discovery-token-ca-cert-hash sha256:28ffbef6224f555172c7614e12a02bb82278e6a9181aaff2531bdc46184ffab3 
3.6.2 Regenerate the certificate-key on Master1
[root@master1 ~]# kubeadm init phase upload-certs --upload-certs
I0706 09:17:38.538815   11359 version.go:255] remote version is much newer: v1.30.2; falling back to: stable-1.23
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
9418974e56f1c191c94259fa640d46ccbdb951b96d5962f5b4cd0fc768e65a06
3.6.3 Assemble the control-plane join command
  • Combine the token and hash generated on master1 with the certificate key.
kubeadm join 192.168.93.200:6443 --token qx5782.tuypr2tqgg7gp48q --discovery-token-ca-cert-hash sha256:28ffbef6224f555172c7614e12a02bb82278e6a9181aaff2531bdc46184ffab3 --control-plane --certificate-key 9418974e56f1c191c94259fa640d46ccbdb951b96d5962f5b4cd0fc768e65a06
# The following one-liner produces a ready-to-use control-plane join command directly
[root@master1 ~]# echo "$(kubeadm token create --print-join-command) --control-plane --certificate-key $(kubeadm init phase upload-certs --upload-certs | tail -1)"
I0706 16:16:46.421463   18254 version.go:255] remote version is much newer: v1.30.2; falling back to: stable-1.23
W0706 16:16:56.423291   18254 version.go:103] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.23.txt": Get "https://dl.k8s.io/release/stable-1.23.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W0706 16:16:56.423328   18254 version.go:104] falling back to the local client version: v1.23.0
#####################################################################
kubeadm join 192.168.93.200:6443 --token va1rss.5nhi7qb3mtb8la4c --discovery-token-ca-cert-hash sha256:932a1a57dc252afd38ee498d381db7a7d503d9ab0cef4bedfa52d6901ce8b7f8  --control-plane --certificate-key b5cb75d85303c403a0c2649a90a256e8bbd87c67f02e722d42f58341604bcae5
#####################################################################
3.6.4 Join the other master nodes to the cluster
# master2
[root@master2 ~]# kubeadm join 192.168.93.200:6443 --token qx5782.tuypr2tqgg7gp48q --discovery-token-ca-cert-hash sha256:28ffbef6224f555172c7614e12a02bb82278e6a9181aaff2531bdc46184ffab3 --control-plane --certificate-key 9418974e56f1c191c94259fa640d46ccbdb951b96d5962f5b4cd0fc768e65a06
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 26.1.4. Latest validated version: 20.10
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master2] and IPs [192.168.93.102 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master2] and IPs [192.168.93.102 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master2] and IPs [10.96.0.1 192.168.93.102 192.168.93.200]
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node master2 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[root@master2 ~]# mkdir -p $HOME/.kube
[root@master2 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master2 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
# master3
[root@master3 ~]# kubeadm join 192.168.93.200:6443 --token qx5782.tuypr2tqgg7gp48q --discovery-token-ca-cert-hash sha256:28ffbef6224f555172c7614e12a02bb82278e6a9181aaff2531bdc46184ffab3 --control-plane --certificate-key 9418974e56f1c191c94259fa640d46ccbdb951b96d5962f5b4cd0fc768e65a06
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 26.1.4. Latest validated version: 20.10
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master3] and IPs [192.168.93.103 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master3] and IPs [192.168.93.103 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master3] and IPs [10.96.0.1 192.168.93.103 192.168.93.200]
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node master3 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master3 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[root@master3 ~]# mkdir -p $HOME/.kube
[root@master3 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master3 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

4. Deploy the Network Plugin

  • Run this on the Master1 node only.
[root@master1 ~]# kubectl apply -f kube-flannel.yaml 
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
# Pull the images. The flannel images are definitely missing at this point; the pull commands are below. If they cannot be pulled directly, use a proxy.
# Note: both of these images must be present on every node in the k8s cluster
docker pull docker.io/flannel/flannel-cni-plugin:v1.1.2
docker pull docker.io/flannel/flannel:v0.21.5
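  • If only one node can reach the registry, one way to get the two images onto every node is to export them once and copy them over; a minimal sketch, assuming root SSH access between the hosts in the table above.
docker save flannel/flannel:v0.21.5 flannel/flannel-cni-plugin:v1.1.2 -o flannel-images.tar
for host in master2 master3 node1; do
    scp flannel-images.tar $host:/root/
    ssh $host "docker load -i /root/flannel-images.tar"
done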

5. Verification

5.1 Check the Status of All Pods

  • All Pods should be in the Running state; if any fail to start, the most likely cause is that an image could not be pulled.
[root@master1 ~]# kubectl get pod -A
NAMESPACE      NAME                              READY   STATUS    RESTARTS      AGE
kube-flannel   kube-flannel-ds-7sqv8             1/1     Running   0             9m38s
kube-flannel   kube-flannel-ds-qpvfc             1/1     Running   0             9m38s
kube-flannel   kube-flannel-ds-wvn4f             1/1     Running   0             9m38s
kube-flannel   kube-flannel-ds-xcp9g             1/1     Running   0             9m38s
kube-system    coredns-6d8c4cb4d-jl9td           1/1     Running   0             23m
kube-system    coredns-6d8c4cb4d-pp2vt           1/1     Running   0             23m
kube-system    etcd-master1                      1/1     Running   0             23m
kube-system    etcd-master2                      1/1     Running   0             13m
kube-system    etcd-master3                      1/1     Running   0             11m
kube-system    kube-apiserver-master1            1/1     Running   0             23m
kube-system    kube-apiserver-master2            1/1     Running   0             13m
kube-system    kube-apiserver-master3            1/1     Running   0             11m
kube-system    kube-controller-manager-master1   1/1     Running   1 (13m ago)   23m
kube-system    kube-controller-manager-master2   1/1     Running   0             13m
kube-system    kube-controller-manager-master3   1/1     Running   0             11m
kube-system    kube-proxy-4kmbt                  1/1     Running   0             13m
kube-system    kube-proxy-72cjh                  1/1     Running   0             23m
kube-system    kube-proxy-jz2sx                  1/1     Running   0             20m
kube-system    kube-proxy-x8kjh                  1/1     Running   0             11m
kube-system    kube-scheduler-master1            1/1     Running   1 (13m ago)   23m
kube-system    kube-scheduler-master2            1/1     Running   0             13m
kube-system    kube-scheduler-master3            1/1     Running   0             11m

5.2 Check Node Status

[root@master1 ~]# kubectl get node
NAME      STATUS   ROLES                  AGE   VERSION
master1   Ready    control-plane,master   22m   v1.23.0
master2   Ready    control-plane,master   12m   v1.23.0
master3   Ready    control-plane,master   10m   v1.23.0
node1     Ready    <none>                 19m   v1.23.0

5.3 Check Cluster Component Status

[root@master1 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
controller-manager   Healthy   ok                              
scheduler            Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""}   
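  • Optional HA check (a sketch, assuming the VIP setup above): take master1's control plane offline and confirm the API is still reachable through 192.168.93.200 from another master, then bring master1 back.
[root@master1 ~]# systemctl stop kubelet docker
[root@master2 ~]# kubectl get nodes       # should still answer via the VIP; master1 eventually shows NotReady
[root@master1 ~]# systemctl start docker kubelet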
