k8s overview
Advantages of container deployment: easy to deploy, no dependency on the underlying environment, upgrades by swapping images
- Essentially a container orchestration tool, written in Go
master management node: kube-apiserver (request entry point), kube-scheduler (scheduler), kube-controller-manager (controller manager), etcd (distributed data store)
worker node: kubelet (agent that keeps containers running in pods), kube-proxy (network proxy; a service gives a group of pods a unified entry point)
In Kubernetes, the core component responsible for managing the container lifecycle is the kubelet
k8s installation and deployment
1. Install from source packages
2. Deploy a cluster with kubeadm
Creating a cluster with kubeadm | Kubernetes
CentOS 7.9
## https://kubernetes.io/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init/
--------------------- Configure the base environment on ALL hosts (master shown as the example)
[root@node1 ~]# cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
> overlay
> br_netfilter
> EOF
[root@master ~]#
[root@master ~]# sudo modprobe overlay
[root@master ~]# sudo modprobe br_netfilter
## Set the required sysctl parameters; they persist across reboots
[root@master ~]# cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-iptables = 1
> net.bridge.bridge-nf-call-ip6tables = 1
> net.ipv4.ip_forward = 1
> EOF
## Apply the sysctl parameters without rebooting
[root@master ~]# sudo sysctl --system
[root@master ~]# lsmod | grep br_netfilter
br_netfilter 22256 0
bridge 155432 1 br_netfilter
[root@master ~]# lsmod | grep overlay
overlay 91659 0
## Check the OS version
[root@master ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
## Add the container runtime repo (Aliyun mirror for CentOS)
[root@master ~]# yum install -y yum-utils
[root@master ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@master ~]# ls /etc/yum.repos.d/
CentOS-Base.repo docker-ce.repo epel.repo
## Install the container runtime
[root@master ~]# yum install containerd.io -y
[root@master ~]# containerd config default > /etc/containerd/config.toml
[root@master ~]# vim /etc/containerd/config.toml
# Comment out the default sandbox image and replace it with the Aliyun mirror path
# sandbox_image = "registry.k8s.io/pause:3.6"
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
# Find this line and change it to true
SystemdCgroup = true
[root@master ~]# systemctl enable containerd --now
## The service must show as started
[root@master ~]# systemctl status containerd
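The two manual edits to config.toml above can also be scripted. A minimal sketch with sed, run here against a throwaway copy (on a real host, back up /etc/containerd/config.toml first; the patterns assume each string appears exactly once, as in the default file):

```shell
# Scratch copy mimicking the two relevant lines of containerd's config.toml
cfg=/tmp/config.toml.demo
cat > "$cfg" <<'EOF'
    sandbox_image = "registry.k8s.io/pause:3.6"
            SystemdCgroup = false
EOF

# Point the sandbox image at the Aliyun mirror
sed -i 's#registry.k8s.io/pause#registry.aliyuncs.com/google_containers/pause#' "$cfg"
# Switch containerd's cgroup driver to systemd
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$cfg"

grep -E 'sandbox_image|SystemdCgroup' "$cfg"
```

After editing the real file, `systemctl restart containerd` is needed for the changes to take effect.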
[root@master ~]# cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.28/rpm/
> enabled=1
> gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.28/rpm/repodata/repomd.xml.key
> EOF
## Check the base environment (swap disabled, port 6443 free, SELinux off)
[root@master ~]# free -h
              total        used        free      shared  buff/cache   available
Mem: 7.4G 237M 6.0G 492K 1.1G 6.9G
Swap: 0B 0B 0B
[root@master ~]# cat /etc/fstab #
# /etc/fstab
# Created by anaconda on Fri Jun 28 04:16:23 2024
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=c8b5b2da-5565-4dc1-b002-2a8b07573e22 / ext4 defaults 1 1
[root@master ~]# netstat -tunlp |grep 6443
[root@master ~]# getenforce
Disabled
## Install kubeadm, kubelet and kubectl
[root@master ~]# yum install -y kubelet kubeadm kubectl
[root@node1 ~]# systemctl enable kubelet --now
--------------------------- The following runs only on the master
## Run the initialization (--apiserver-advertise-address is the master's IP address)
[root@master ~]# kubeadm init \
> --apiserver-advertise-address=192.168.88.1 \
> --image-repository registry.aliyuncs.com/google_containers \
> --service-cidr=172.10.0.0/12 \
> --pod-network-cidr=10.10.0.0/16 \
> --ignore-preflight-errors=all
I0722 15:36:54.413254 12545 version.go:256] remote version is much newer: v1.33.3; falling back to: stable-1.28
[init] Using Kubernetes version: v1.28.15
[preflight] Running pre-flight checks
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0722 15:37:11.464233 12545 checks.go:835] detected that the sandbox image "registry.aliyuncs.com/google_containers/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [172.0.0.1 192.168.88.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.88.1 127.0.0.1 ::1]
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.88.1:6443 --token 91kaxu.trpl8qwjaumnc910 \
	--discovery-token-ca-cert-hash sha256:fdd6b2c0f3e0ec81b3d792c34d925b3c688147d7a87b0993de050460f19adec5
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
----------------------------- On the worker nodes, join the master
[root@node1 ~]# kubeadm join 192.168.88.1:6443 --token 91kaxu.trpl8qwjaumnc910 \
> --discovery-token-ca-cert-hash sha256:fdd6b2c0f3e0ec81b3d792c34d925b3c688147d7a87b0993de050460f19adec5
[preflight] Running pre-flight checks
	[WARNING Hostname]: hostname "node1" could not be reached
	[WARNING Hostname]: hostname "node1": lookup node1 on 100.100.2.138:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@node1 ~]#
----------------------------- Check the status on the master node
[root@master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master NotReady control-plane 96s v1.28.15
node1 NotReady <none> 22s v1.28.15
node2 NotReady <none> 7s v1.28.15
## Upload the network plugin manifest
[root@master ~]# ls
20250621calico.yaml
## Move the network plugin manifest into place
[root@master ~]# mkdir k8s/calico -p
[root@master ~]# cd k8s/calico/
[root@master calico]# mv /root/20250621calico.yaml .
[root@master calico]# ls
20250621calico.yaml
[root@master calico]# kubectl create -f 20250621calico.yaml
## Check node status
[root@master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready control-plane 5m46s v1.28.15
node1 Ready <none> 4m32s v1.28.15
node2 Ready <none> 4m17s v1.28.15
## Watch the pods in the system namespace
[root@master calico]# watch kubectl get pod -n kube-system
... output as shown:
Every 2.0s: kubectl get pod -n kube-system                  Tue Jul 22 22:18:26 2025

NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6fcd5cd66f-gcv2q 1/1 Running 0 6h36m
calico-node-bbqnz 1/1 Running 0 6h36m
calico-node-ls7gm 1/1 Running 0 6h36m
calico-node-n6fz5 1/1 Running 0 6h36m
coredns-66f779496c-jnc4h 1/1 Running 0 6h40m
coredns-66f779496c-x79tt 1/1 Running 0 6h40m
etcd-master 1/1 Running 0 6h40m
kube-apiserver-master 1/1 Running 0 6h40m
kube-controller-manager-master 1/1 Running 0 6h40m
kube-proxy-6jpfs 1/1 Running 0 6h39m
kube-proxy-6mxx6 1/1 Running 0 6h40m
kube-proxy-cn26w 1/1 Running 0 6h39m
kube-scheduler-master                      1/1     Running   0          6h40m
## Describe a pod in detail, useful for troubleshooting
[root@master ~]# kubectl describe pod kube-proxy-vfdmh -n kube-system
[root@master ~]# #When all nodes are Ready, the cluster install is complete!
[root@master ~]#
[root@master ~]# #End
Notes:
## Initialize the cluster (**master node only**)
[root@master containerd]# kubeadm init \
--apiserver-advertise-address=10.38.102.71 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.26.3 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--ignore-preflight-errors=all
## Note: replace --apiserver-advertise-address=10.38.102.71 with your own master address
## This line indicates a successful initialization
Your Kubernetes control-plane has initialized successfully!

[root@master containerd]# kubectl get nodes
E0702 02:26:32.034057 8125 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
## Export the environment variable as the error message suggests
[root@master containerd]# export KUBECONFIG=/etc/kubernetes/admin.conf
## Make it persistent across logins
[root@master containerd]# vim /etc/profile
[root@master containerd]# tail -1 /etc/profile
export KUBECONFIG=/etc/kubernetes/admin.conf
## Check nodes
[root@master containerd]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady control-plane 2m35s v1.26.3
## After a successful init, the kubelet service is running by default
[root@master containerd]# systemctl status kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Wed 2025-07-02 02:24:44 EDT; 4min 19s ago
     Docs: https://kubernetes.io/docs/
## List pods, scoped to the system namespace
[root@master containerd]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5bbd96d687-hpnxw 0/1 Pending 0 5m24s
coredns-5bbd96d687-rfq5n 0/1 Pending 0 5m24s
etcd-master 1/1 Running 0 5m38s
kube-apiserver-master 1/1 Running 0 5m38s
kube-controller-manager-master 1/1 Running 0 5m38s
kube-proxy-dhsn5 1/1 Running 0 5m24s
kube-scheduler-master                    1/1     Running   0          5m38s
## Install a network plugin (calico or flannel)
[root@master containerd]# wget http://manongbiji.oss-cn-beijing.aliyuncs.com/ittailkshow/k8s/download/calico.yaml
# A normal download looks like this...
HTTP request sent, awaiting response... 200 OK
Length: 239997 (234K) [text/yaml]
Saving to: ‘calico.yaml’
100%[===============================================================================================>] 239,997     --.-K/s   in 0.06s

2025-07-02 02:36:42 (4.03 MB/s) - ‘calico.yaml’ saved [239997/239997]
[root@master containerd]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# A normal download looks like this...
HTTP request sent, awaiting response... 200 OK
Length: 4415 (4.3K) [text/plain]
Saving to: ‘kube-flannel.yml’

100%[===============================================================================================>] 4,415       235B/s    in 19s

2025-07-02 02:38:53 (235 B/s) - ‘kube-flannel.yml’ saved [4415/4415]

[root@master containerd]# ls
calico.yaml config.toml config.toml.bak kube-flannel.yml
[root@master containerd]# mv *.yml /root
[root@master containerd]# mv *.yaml /root
[root@master containerd]# ll
total 12
-rw-r--r--. 1 root root 7074 Jul 2 02:18 config.toml
-rw-r--r--. 1 root root 886 Jun 5 2024 config.toml.bak
## Apply the manifests
[root@master ~]# kubectl apply -f calico.yaml
[root@master ~]# kubectl apply -f kube-flannel.yml
## List pods in the system namespace
[root@master ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-6bd6b69df9-gpt7z 0/1 ContainerCreating 0 80s
calico-node-5cnrq 0/1 Init:2/3 0 81s
calico-typha-77fc8866f5-v764n 0/1 Pending 0 80s
coredns-5bbd96d687-hpnxw 0/1 ContainerCreating 0 17m
coredns-5bbd96d687-rfq5n 0/1 ContainerCreating 0 17m
etcd-master 1/1 Running 0 17m
kube-apiserver-master 1/1 Running 0 17m
kube-controller-manager-master 1/1 Running 0 17m
kube-proxy-dhsn5 1/1 Running 0 17m
kube-scheduler-master 1/1 Running 0 17m
## Check nodes
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 18m v1.26.3
Cluster management commands
## List node information
[root@master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready control-plane 106m v1.26.3
node1 Ready <none> 53m v1.26.3
node2 Ready <none> 49m v1.26.3
## List nodes with extra details
[root@master ~]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master Ready control-plane 108m v1.26.3 10.38.102.71 <none> CentOS Linux 7 (Core) 3.10.0-957.el7.x86_64 containerd://1.6.33
node1 Ready <none> 55m v1.26.3 10.38.102.72 <none> CentOS Linux 7 (Core) 3.10.0-957.el7.x86_64 containerd://1.6.33
node2 Ready <none> 51m v1.26.3 10.38.102.73 <none> CentOS Linux 7 (Core) 3.10.0-957.el7.x86_64 containerd://1.6.33
## Help for get
[root@master ~]# kubectl get -h
## Help for describe
[root@master ~]# kubectl describe -h
## Describe a given node in detail
[root@master ~]# kubectl describe node master
## List all pods in the system namespace
[root@master ~]# kubectl get pod -n kube-system
## List pods running in all namespaces
[root@master ~]# kubectl get pod -A
## List pods in all namespaces with details
[root@master ~]# kubectl get pod -A -o wide
Cluster core concepts
- pod: the smallest unit of scheduling and management (one pod holds one or more containers)
- service: tracks pod information and load-balances across its related pods; it provides the entry point users go through to reach pods (pod IPs change, so a service fronts a group of pods instead)
- label: tags k8s resource objects with key/value pairs
- label selector: selects resources by label (e.g. how a service picks its pods)
- replication controller: keeps the pod count at the user's desired number of replicas at all times
- replication controller manager: watches the various controllers; a management component
- scheduler: takes requests via the api-server and decides which k8s node a pod runs on (controls pod placement)
- DNS: resolves resource names inside the cluster so resources can be reached by name (handles in-cluster name resolution)
- namespace: a very important K8s resource, mainly used to isolate resources between multiple environments or tenants; most common resource objects live inside a namespace
Resource objects
Stateless service: all instances are equivalent peers
Stateful service: instances are not equal (there are primaries and replicas); data is stored persistently
-------------------------------------- Core concepts: namespaces
## List all current namespaces
# The system namespace holds system pods; with no namespace given, resources go to default
[root@master ~]# kubectl get namespace
NAME STATUS AGE
default Active 100m
kube-flannel Active 83m
kube-node-lease Active 100m
kube-public Active 100m
kube-system Active 100m
# Short form: list all namespaces
[root@master ~]# kubectl get ns
NAME STATUS AGE
default Active 101m
kube-flannel Active 84m
kube-node-lease Active 101m
kube-public Active 101m
kube-system Active 101m
## Create a namespace named wll (it can also be created from a YAML file)
[root@master ~]# kubectl create ns wll
namespace/wll created
[root@master ~]# kubectl get ns
NAME STATUS AGE
default Active 102m
kube-flannel Active 85m
kube-node-lease Active 102m
kube-public Active 102m
kube-system Active 102m
wll Active 2s
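A namespace can also be declared in a YAML file instead of on the command line; a minimal sketch (the name `wll` is taken from the example, the filename is hypothetical):

```yaml
# ns-wll.yaml -- apply with: kubectl apply -f ns-wll.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: wll
```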
## Delete the namespace named wll
[root@master ~]# kubectl delete ns wll
namespace "wll" deleted
[root@master ~]# kubectl get ns
NAME STATUS AGE
default Active 103m
kube-flannel Active 86m
kube-node-lease Active 103m
kube-public Active 103m
kube-system       Active   103m
------------------------------------ Core concepts: labels
A label is a set of key/value pairs bound to a k8s resource; labels can be defined along multiple dimensions. On a single resource object, each key must be unique.
## Show node label information
[root@master ~]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
master Ready control-plane 112m v1.26.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
node1 Ready <none> 59m v1.26.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1,kubernetes.io/os=linux
node2 Ready <none> 55m v1.26.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2,kubernetes.io/os=linux
## Add a label to node2 (output shows node2 was labeled)
[root@master ~]# kubectl label node node2 env=test
node/node2 labeled
## Show the labels of a specific node
[root@master ~]# kubectl get nodes node2 --show-labels
NAME STATUS ROLES AGE VERSION LABELS
node2 Ready <none> 63m v1.26.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=test,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2,kubernetes.io/node2=test,kubernetes.io/os=linux
## Show only the given label as a column
[root@master ~]# kubectl get nodes -L env
NAME STATUS ROLES AGE VERSION ENV
master Ready control-plane 121m v1.26.3
node1 Ready <none> 68m v1.26.3
node2 Ready <none> 64m v1.26.3 test
## Find nodes that carry a specific label
[root@master ~]# kubectl get nodes -l env=test
NAME STATUS ROLES AGE VERSION
node2 Ready <none> 66m v1.26.3
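The same label machinery is what pins workloads to nodes: a pod can ask the scheduler for matching nodes via `nodeSelector`. A minimal sketch reusing the `env=test` label from above (pod name and image are illustrative, not from the transcript):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-on-test           # hypothetical name
spec:
  nodeSelector:
    env: test                 # schedules only onto nodes labeled env=test (node2 here)
  containers:
  - name: nginx
    image: nginx:1.20
```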
## Change a label's value
# --overwrite=true permits overwriting an existing value
[root@master ~]# kubectl label node node2 env=dev --overwrite=true
node/node2 not labeled
## Show the node's label information again
[root@master ~]# kubectl get nodes node2 --show-labels
NAME STATUS ROLES AGE VERSION LABELS
node2 Ready <none> 70m v1.26.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=dev,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2,kubernetes.io/node2=test,kubernetes.io/os=linux
## Delete a label (remove the key with a trailing minus)
[root@master ~]# kubectl label node node2 env-
node/node2 unlabeled
[root@master ~]# kubectl get nodes node2 --show-labels
NAME STATUS ROLES AGE VERSION LABELS
node2 Ready <none> 72m v1.26.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2,kubernetes.io/node2=test,kubernetes.io/os=linux
## Label selectors come in two main classes:
Equality-based: =, !=
Set-based: key in (value1,value2,...)
## Define some labels
[root@master ~]# kubectl label node node2 bussiness=game
node/node2 labeled
[root@master ~]# kubectl label node node1 bussiness=ad
node/node1 labeled
## Select nodes with a set-based expression
[root@master ~]# kubectl get node -l "bussiness in (game,ad)"
NAME STATUS ROLES AGE VERSION
node1 Ready <none> 80m v1.26.3
node2   Ready    <none>   76m   v1.26.3
--------------------------------------- Resource objects: annotations
Annotations are handy when upgrading: record the change in an annotation so it is easy to review when rolling back.
Pod basics
Pod types
The pod YAML file
#apiVersion: API version
#kind: resource type
#metadata: metadata
##name: object name; namespace: which namespace it belongs to, default if unset
#namespace: the namespace
##labels: user-defined labels
##name: the custom label value; annotations: list of annotations
#spec: the detailed (desired) definition
##containers: list of containers
###name: container name; image: image name; imagePullPolicy: pull policy (Always: always pull; Never: never pull; IfNotPresent: prefer local, pull only if missing); command: container start command (with args and working directory); volumeMounts: storage volumes mounted inside the container
####env: environment variables
####resources: resource constraints, allocated on demand by default
#####limits: upper bound on resources
#####requests: requested lower bound on resources
####livenessProbe: health check
initialDelaySeconds: wait a while before the first probe
##restartPolicy: define the restart policy
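The field walkthrough above can be condensed into a single manifest. A sketch in which every concrete value (names, image, limits, probe settings) is illustrative rather than taken from the transcript:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-demo                  # hypothetical pod name
  namespace: default
  labels:
    app: web-demo
  annotations:
    note: "demo of the spec fields described above"
spec:
  restartPolicy: Always
  containers:
  - name: nginx
    image: nginx:1.20
    imagePullPolicy: IfNotPresent # prefer a local image, pull only if missing
    ports:
    - containerPort: 80
    env:
    - name: TZ
      value: Asia/Shanghai
    resources:
      requests:                   # lower bound the scheduler reserves
        cpu: 100m
        memory: 64Mi
      limits:                     # upper bound enforced at runtime
        cpu: 500m
        memory: 128Mi
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5      # delay before the first probe
      periodSeconds: 10
```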
## View the top level of the YAML schema (look at FIELDS:)
[root@master ~]# kubectl explain pod
## View the fields nested under a given attribute (look at FIELDS:)
[root@master ~]# kubectl explain pod.metadata
## List API versions
[root@master ~]# kubectl api-versions
## List API resources and their versions
[root@master ~]# kubectl api-resources | grep pod
pods po v1 true Pod
## Create a pod
[root@master tmp]# vim pod1.yml
[root@master tmp]# kubectl apply -f pod1.yml
pod/nginx created
[root@master tmp]# cat pod1.yml
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    name: ng01
spec:
  containers:
  - name: nginx
    image: nginx:1.20
    ports:
    - name: webport
      containerPort: 80

## List pods in the default namespace
[root@master tmp]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx 0/1 ContainerCreating 0 70s
[root@master tmp]# kubectl get pod -n default
NAME READY STATUS RESTARTS AGE
nginx 0/1 ContainerCreating 0 109s
## Show detailed information for the nginx pod
[root@master tmp]# kubectl describe pod nginx
## Delete the pod
[root@master tmp]# kubectl delete pod nginx
The Deployment controller resource
Use the Deployment controller resource to create pod replicas
[root@master ~]# vim deploy.yaml
[root@master tmp]# cat deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:        ## note: this spec sits at the same level as template.metadata; keep the indentation consistent
      containers:
      - name: nginx
        image: registry.cn-shanghai.aliyuncs.com/image_lqkhn/nginx:1.25.1-alpine
        ports:
        - containerPort: 80
## List pods
[root@master ~]# kubectl get pod
No resources found in default namespace.
# Create the pods via the deployment
[root@master tmp]# kubectl apply -f deploy.yaml
deployment.apps/nginx-deployment created
[root@master tmp]# kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 5/5 5 5 11s
[root@master tmp]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-74bd6d8c48-4wlkz 1/1 Running 0 15s
nginx-deployment-74bd6d8c48-6zdmq 1/1 Running 0 15s
nginx-deployment-74bd6d8c48-mlsq4 1/1 Running 0 15s
nginx-deployment-74bd6d8c48-vcmnk 1/1 Running 0 15s
nginx-deployment-74bd6d8c48-xvn82 1/1 Running 0 15s
[root@master tmp]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-74bd6d8c48 5 5 5 20s
## Deleting the Deployment automatically cascades to all of its associated ReplicaSets and Pods
## Check the resources
[root@master tmp]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 0/5 5 0 13h
[root@master tmp]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-584f64656-44p4s 0/1 ImagePullBackOff 0 13h
nginx-deployment-584f64656-kh4bj 0/1 ImagePullBackOff 0 13h
nginx-deployment-584f64656-nrb9m 0/1 ImagePullBackOff 0 13h
nginx-deployment-584f64656-zc4xx 0/1 ErrImagePull 0 13h
nginx-deployment-584f64656-zdqx2 0/1 ImagePullBackOff 0 13h
[root@master tmp]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-584f64656 5 5 0 13h
## Delete the controller resource
[root@master tmp]# kubectl delete deployment nginx-deployment
deployment.apps "nginx-deployment" deleted
## Verify
[root@master tmp]# kubectl get deployments,pods,replicasets
No resources found in default namespace.
Horizontal scaling
Deployment builds in rolling upgrades, replica management and more; it creates and drives a ReplicaSet (RS) underneath
1. Scale the pod replicas of a deployment horizontally
## Method 1: edit the resource in place
[root@master tmp]# kubectl edit deploy nginx-deployment
#/...
#spec:
# progressDeadlineSeconds: 600
# replicas: 3 ## change the replica count straight to 3, then save and quit
#.../
deployment.apps/nginx-deployment edited
## List pods; the change takes effect immediately
[root@master tmp]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-74bd6d8c48-4wlkz 1/1 Running 0 28m
nginx-deployment-74bd6d8c48-mlsq4 1/1 Running 0 28m
nginx-deployment-74bd6d8c48-vcmnk 1/1 Running 0 28m
## Method 2: edit the deploy.yaml file directly
(takes effect only after the file is applied again)
## Method 3: from the command line
[root@master tmp]# kubectl scale --replicas=2 deploy/nginx-deployment
deployment.apps/nginx-deployment scaled
[root@master tmp]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-74bd6d8c48-mlsq4 1/1 Running 0 30m
nginx-deployment-74bd6d8c48-vcmnk 1/1 Running 0 30m
Updating a deployment
- Update process
- Before the update: the deployment managed the RS prefixed "74bd6d", with five pod replicas
- After the update: a new RS prefixed "5b55b" was generated, also with five pod replicas
- During the update: pods under the old RS are killed off
An update is only triggered when the template labels or the container image change
## Method 1: edit the yaml file directly
## Bump the image version to 1.27
[root@master tmp]# vim deploy.yaml
[root@master tmp]# cat deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: registry.cn-shanghai.aliyuncs.com/aliyun_lqkhn/nginx:1.27.1
        ports:
        - containerPort: 80
## Watch the update in progress; this is the rolling update strategy (the prefixes 5b55b and 74bd6 identify the two RSs)
[root@master tmp]# kubectl apply -f deploy.yaml
deployment.apps/nginx-deployment configured
[root@master tmp]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-5b55b95f79-9vjl8 0/1 ContainerCreating 0 9s
nginx-deployment-5b55b95f79-b9wqx 0/1 ContainerCreating 0 9s
nginx-deployment-5b55b95f79-jfkhj 0/1 ContainerCreating 0 9s
nginx-deployment-74bd6d8c48-5kzhx 1/1 Running 0 9s
nginx-deployment-74bd6d8c48-6g4rd 1/1 Running 0 9s
nginx-deployment-74bd6d8c48-mlsq4 1/1 Running 0 35m
nginx-deployment-74bd6d8c48-vcmnk 1/1 Running 0 35m
## Now the new version is running
[root@master tmp]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-5b55b95f79-77c9r 1/1 Running 0 43s
nginx-deployment-5b55b95f79-9vjl8 1/1 Running 0 55s
nginx-deployment-5b55b95f79-b9wqx 1/1 Running 0 55s
nginx-deployment-5b55b95f79-jfkhj 1/1 Running 0 55s
nginx-deployment-5b55b95f79-pgx6n 1/1 Running 0 43s
## Describe a single pod; the output includes the image version
[root@master tmp]# kubectl describe pod nginx-deployment-5b55b95f79-pgx6n
...
Containers:
  nginx:
    Container ID:   containerd://707406779de1a7a08f11c9fa22123d662ff88caf2b9ceea9f4b57a7d619b84e5
    Image:          registry.cn-shanghai.aliyuncs.com/aliyun_lqkhn/nginx:1.27.1
## Describe the deployment to observe the update process
[root@master tmp]# kubectl describe deployment
...
Events:
  Type    Reason             Age                From                   Message
  ----    ------             ----               ----                   -------
  Normal  ScalingReplicaSet  50m                deployment-controller  Scaled up replica set nginx-deployment-74bd6d8c48 to 5
  Normal  ScalingReplicaSet  24m                deployment-controller  Scaled down replica set nginx-deployment-74bd6d8c48 to 3 from 5
  Normal  ScalingReplicaSet  15m                deployment-controller  Scaled up replica set nginx-deployment-74bd6d8c48 to 5 from 2
  Normal  ScalingReplicaSet  14m                deployment-controller  Scaled up replica set nginx-deployment-5b55b95f79 to 2
  Normal  ScalingReplicaSet  14m                deployment-controller  Scaled down replica set nginx-deployment-74bd6d8c48 to 4 from 5   ## old pods 5 -> 4
  Normal  ScalingReplicaSet  14m                deployment-controller  Scaled up replica set nginx-deployment-5b55b95f79 to 3 from 2     ## new pods 2 -> 3
  Normal  ScalingReplicaSet  14m (x2 over 19m)  deployment-controller  Scaled down replica set nginx-deployment-74bd6d8c48 to 2 from 3
  Normal  ScalingReplicaSet  14m                deployment-controller  Scaled down replica set nginx-deployment-74bd6d8c48 to 3 from 4
  Normal  ScalingReplicaSet  14m                deployment-controller  Scaled up replica set nginx-deployment-5b55b95f79 to 4 from 3
  Normal  ScalingReplicaSet  14m                deployment-controller  Scaled up replica set nginx-deployment-5b55b95f79 to 5 from 4
  Normal  ScalingReplicaSet  14m (x2 over 14m)  deployment-controller  (combined from similar events): Scaled down replica set nginx-deployment-74bd6d8c48 to 0 from 1
## Describe the deployment
[root@master tmp]# kubectl describe deployment
Name: nginx-deployment
Namespace: default
CreationTimestamp: Wed, 09 Jul 2025 09:03:18 +0800
Labels: app=nginx
Annotations: deployment.kubernetes.io/revision: 2
Selector: app=nginx
Replicas: 5 desired | 5 updated | 5 total | 5 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
## Meaning: at least 75% of pods stay running; up to 25% extra pods may exist during the update
eg: with a desired count of 8 the pod count stays between 6 and 10; with the desired count of 5 above, each step removes or adds 1 to 2 pods (sometimes one, sometimes two)
## The update pace can be tuned via RollingUpdateStrategy
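The percentages above map to this part of the Deployment spec; a sketch showing where the knobs live (the 25% values are the defaults, as reported by describe above):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at most this fraction of desired pods may be down
      maxSurge: 25%         # at most this fraction of extra pods may be created
```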
Deployment rollback
## View the revision history
[root@master tmp]# kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
1 <none>
2 <none>
## Show the details recorded for a specific revision
[root@master tmp]# kubectl rollout history deployment nginx-deployment --revision=1
deployment.apps/nginx-deployment with revision #1
Pod Template:
  Labels:	app=nginx
	pod-template-hash=74bd6d8c48
  Containers:
   nginx:
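To actually roll back to one of the revisions in the history, `kubectl rollout undo` is the standard command. A sketch (revision number taken from the history above; these need the running cluster from this transcript, so output is omitted):

```
# Roll back to the previous revision
kubectl rollout undo deployment nginx-deployment

# Or target a specific revision from the history
kubectl rollout undo deployment nginx-deployment --to-revision=1

# Watch the rollout settle
kubectl rollout status deployment nginx-deployment
```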