一 Kubernetes 簡介及部署方法
1.1 應用部署方式演變
在部署應用程序的方式上,主要經歷了三個階段:
傳統部署:互聯網早期,會直接將應用程序部署在物理機上
- 優點:簡單,不需要其它技術的參與
- 缺點:不能為應用程序定義資源使用邊界,很難合理地分配計算資源,而且程序之間容易產生影響
虛擬化部署:可以在一臺物理機上運行多個虛擬機,每個虛擬機都是獨立的一個環境
- 優點:程序環境不會相互產生影響,提供了一定程度的安全性
- 缺點:增加了操作系統,浪費了部分資源
容器化部署:與虛擬化類似,但是共享了操作系統
容器化部署方式帶來了很多便利,但是也會出現一些問題,比如說:
- 一個容器故障停機了,怎么樣讓另外一個容器立刻啟動去替補停機的容器
- 當并發訪問量變大的時候,怎么樣做到橫向擴展容器數量
1.2 容器編排應用
為了解決這些容器編排問題,就產生了一些容器編排的軟件:
- Swarm:Docker自己的容器編排工具
- Mesos:Apache的一個資源統一管控的工具,需要和Marathon結合使用
- Kubernetes:Google開源的容器編排工具
1.3 kubernetes 簡介
- 在Docker作為高級容器引擎快速發展的同時,在Google內部,容器技術已經應用了很多年
- Borg系統運行管理著成千上萬的容器應用
- Kubernetes項目來源于Borg,可以說是集結了Borg設計思想的精華,并且吸收了Borg系統中的經驗和教訓
- Kubernetes對計算資源進行了更高層次的抽象,通過將容器進行細致的組合,將最終的應用服務交給用戶
kubernetes的本質是一組服務器集群,它可以在集群的每個節點上運行特定的程序,來對節點中的容器進行管理。目的是實現資源管理的自動化,主要提供了如下功能:
- 自我修復:一旦某一個容器崩潰,能夠在1秒鐘左右迅速啟動新的容器
- 彈性伸縮:可以根據需要,自動對集群中正在運行的容器數量進行調整
- 服務發現:服務可以通過自動發現的形式找到它所依賴的服務
- 負載均衡:如果一個服務啟動了多個容器,能夠自動實現請求的負載均衡
- 版本回退:如果發現新發布的程序版本有問題,可以立即回退到原來的版本
- 存儲編排:可以根據容器自身的需求自動創建存儲卷
1.4 K8S的設計架構
1.4.1 K8S各個組件用途
一個kubernetes集群主要是由控制節點(master)、工作節點(node)構成,每個節點上都會安裝不同的組件
1 master:集群的控制平面,負責集群的決策
- ApiServer:資源操作的唯一入口,接收用戶輸入的命令,提供認證、授權、API注冊和發現等機制
- Scheduler:負責集群資源調度,按照預定的調度策略將Pod調度到相應的node節點上
- ControllerManager:負責維護集群的狀態,比如程序部署安排、故障檢測、自動擴展、滾動更新等
- Etcd:負責存儲集群中各種資源對象的信息
2 node:集群的數據平面,負責為容器提供運行環境
- kubelet:負責維護容器的生命周期,同時也負責Volume(CVI)和網絡(CNI)的管理
- Container runtime:負責鏡像管理以及Pod和容器的真正運行(CRI)
- kube-proxy:負責為Service提供cluster內部的服務發現和負載均衡
1.4.2 K8S 各組件之間的調用關系
當我們要運行一個web服務時(用戶側觸發這一流程的命令可參考下面的示意):
- kubernetes環境啟動之後,master和node都會將自身的信息存儲到etcd數據庫中
- web服務的安裝請求會首先被發送到master節點的apiServer組件
- apiServer組件會調用scheduler組件來決定到底應該把這個服務安裝到哪個node節點上。在此時,它會從etcd中讀取各個node節點的信息,然後按照一定的算法進行選擇,并將結果告知apiServer
- apiServer調用controller-manager去調度node節點安裝web服務
- kubelet接收到指令後,會通知docker,然後由docker來啟動一個web服務的pod
- 如果需要訪問web服務,就需要通過kube-proxy來對pod產生訪問的代理
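下面用一組用戶側命令對上述流程做一個最小示意(假設集群已按後文方式搭建完成、鏡像可正常拉取;命令僅用於說明請求如何經由apiServer進入集群,並非固定步驟):
#向apiServer提交一個web服務(Deployment)的創建請求
kubectl create deployment web --image nginx --replicas 2
#scheduler完成調度後,可以看到pod被分配到的node節點
kubectl get pods -o wide
#通過Service(由kube-proxy提供代理和負載均衡)暴露訪問入口
kubectl expose deployment web --port 80 --target-port 80
kubectl get service web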
1.4.3 K8S 的常用名詞概念
- Master:集群控制節點,每個集群需要至少一個master節點負責集群的管控
- Node:工作負載節點,由master分配容器到這些node工作節點上,然後由node節點上的docker負責容器的運行
- Pod:kubernetes的最小控制單元,容器都是運行在pod中的,一個pod中可以有1個或者多個容器
- Controller:控制器,通過它來實現對pod的管理,比如啟動pod、停止pod、伸縮pod的數量等等
- Service:pod對外服務的統一入口,下面可以維護著同一類的多個pod
- Label:標籤,用于對pod進行分類,同一類pod會擁有相同的標籤
- NameSpace:命名空間,用來隔離pod的運行環境
1.4.4 K8S的分層架構
- 核心層:Kubernetes最核心的功能,對外提供API構建高層的應用,對內提供插件式應用執行環境
- 應用層:部署(無狀態應用、有狀態應用、批處理任務、集群應用等)和路由(服務發現、DNS解析等)
- 管理層:系統度量(如基礎設施、容器和網絡的度量),自動化(如自動擴展、動態Provision等)以及策略管理(RBAC、Quota、PSP、NetworkPolicy等)
- 接口層:kubectl命令行工具、客戶端SDK以及集群聯邦
- 生態系統:在接口層之上的龐大容器集群管理調度的生態系統,可以劃分為兩個范疇
  - Kubernetes外部:日誌、監控、配置管理、CI、CD、Workflow、FaaS、OTS應用、ChatOps等
  - Kubernetes內部:CRI、CNI、CVI、鏡像倉庫、Cloud Provider、集群自身的配置和管理等
二 K8S集群環境搭建
2.1 k8s中容器的管理方式
K8S 中容器的管理方式(容器運行時)有 3 種:
- containerd:默認情況下,K8S在創建集群時使用的方式
- docker:Docker的使用普及度最高,雖然K8S在1.24版本後已經移除了kubelet對docker的直接支持,但是可以借助cri-dockerd來實現集群創建
- cri-o:CRI-O是Kubernetes創建容器最直接的一種方式,在創建集群的時候,需要借助cri-o插件來實現Kubernetes集群的創建
注意:docker 和 cri-o 這兩種方式需要對kubelet程序的啟動參數進行設置
2.2 k8s 集群部署
2.2.1 k8s 環境部署說明
K8S中文官網:Kubernetes
主機名 | ip | 角色 |
---|---|---|
harbor | 192.168.121.200 | harbor倉庫 |
master | 192.168.121.100 | master,k8s集群控制節點 |
node1 | 192.168.121.10 | worker,k8s集群工作節點 |
node2 | 192.168.121.20 | worker,k8s集群工作節點 |
- 所有節點禁用selinux和防火牆
- 所有節點同步時間和解析
- 所有節點安裝docker-ce
- 所有節點禁用swap,注意注釋掉/etc/fstab文件中的定義
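下面給出一份對應的參考命令(在所有節點執行;僅為示意,請結合實際環境調整):
#禁用防火牆
systemctl disable --now firewalld
#禁用selinux(臨時生效+重啟後永久生效)
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
#禁用swap(臨時生效+注釋/etc/fstab中的swap條目)
swapoff -a
sed -i '/\sswap\s/s/^/#/' /etc/fstab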
2.2.2 集群環境初始化
2.2.2.1.配置時間同步
在配置 Kubernetes(或任何分布式系統)時間同步時,通常會選一臺主機作為“內部時間服務器(NTP Server)”,這臺主機本身會先從公網或更上層 NTP 服務器同步時間,然后再讓集群中的其他節點作為客戶端,同步到這臺內部 server,從而保證整個集群時間一致、可靠、高效。
這里我選擇harbor作為server
(1)server配置
下載chrony用于時間同步
[root@harbor ~]#yum install chrony
修改配置文件,允許其他主機跟server同步時間
[root@harbor ~]# cat /etc/chrony.conf
# Allow NTP client access from local network.
allow 192.168.121.0/24
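修改完成後重啟chronyd並確認server本身已與上游時間源同步(參考操作):
[root@harbor ~]# systemctl enable --now chronyd
[root@harbor ~]# systemctl restart chronyd
[root@harbor ~]# chronyc sources -v        #確認存在可用的上游時間源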
(2)client配置
全部下載chrony
[root@master+node1+node2 ~]#yum install chrony
修改配置文件
[root@master+node1+node2 ~]# cat /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (https://www.pool.ntp.org/join.html).
#pool 2.rhel.pool.ntp.org iburst
server 192.168.121.200 iburst
查看當前系統通過 chrony 服務同步時間的時間源列表及同步狀態
[root@master+node1+node2 ~]# chronyc sources
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* 192.168.121.200 3 6 377 14 -54us[ -100us] +/- 17ms
2.2.2.2 所有節點禁用swap和設置本地域名解析
]# systemctl mask swap.target
]# swapoff -a
]# vim /etc/fstab
# /etc/fstab
# Created by anaconda on Sun Feb 19 17:38:40 2023
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel-root / xfs defaults 0 0
UUID=ddb06c77-c9da-4e92-afd7-53cd76e6a94a /boot xfs defaults 0 0
#/dev/mapper/rhel-swap swap swap defaults 0 0
/dev/cdrom /media iso9660 defaults 0 0

~]# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.121.200 reg.timingy.org #這里是你的harbor倉庫域名
192.168.121.100 master
192.168.121.10 node1
192.168.121.20 node2
2.2.2.3 所有節點安裝docker
~]# vim /etc/yum.repos.d/docker.repo
[docker]
name=docker
baseurl=https://mirrors.aliyun.com/docker-ce/linux/rhel/9/x86_64/stable/
gpgcheck=0

~]# dnf install docker-ce -y

~]# cat /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --iptables=true
#--iptables=true 是 Docker 的一個啟動參數,表示讓 Docker 自動管理系統的 iptables 規則,用於實現端口映射、容器網絡通信等功能。默認開啟,一般不要修改,否則可能導致網絡功能(如端口轉發)失效。

~]# systemctl enable --now docker
2.2.2.4.harbor倉庫搭建和設置registry加密傳輸
[root@harbor packages]# ll
total 3621832
-rw-r--r-- 1 root root 131209386 Aug 23 2024 1panel-v1.10.13-lts-linux-amd64.tar.gz
-rw-r--r-- 1 root root 4505600 Aug 26 2024 busybox-latest.tar.gz
-rw-r--r-- 1 root root 211699200 Aug 26 2024 centos-7.tar.gz
-rw-r--r-- 1 root root 22456832 Aug 26 2024 debian11.tar.gz
-rw-r--r-- 1 root root 693103681 Aug 26 2024 docker-images.tar.gz
-rw-r--r-- 1 root root 57175040 Aug 26 2024 game2048.tar.gz
-rw-r--r-- 1 root root 102946304 Aug 26 2024 haproxy-2.3.tar.gz
-rw-r--r-- 1 root root 738797440 Aug 17 2024 harbor-offline-installer-v2.5.4.tgz
-rw-r--r-- 1 root root 207404032 Aug 26 2024 mario.tar.gz
-rw-r--r-- 1 root root 519596032 Aug 26 2024 mysql-5.7.tar.gz
-rw-r--r-- 1 root root 146568704 Aug 26 2024 nginx-1.23.tar.gz
-rw-r--r-- 1 root root 191849472 Aug 26 2024 nginx-latest.tar.gz
-rw-r--r-- 1 root root 574838784 Aug 26 2024 phpmyadmin-latest.tar.gz
-rw-r--r-- 1 root root 26009088 Aug 17 2024 registry.tag.gz
drwxr-xr-x 2 root root 277 Aug 23 2024 rpm
-rw-r--r-- 1 root root  80572416 Aug 26  2024 ubuntu-latest.tar.gz

#解壓Harbor私有鏡像倉庫的離線安裝包
[root@harbor packages]# tar zxf harbor-offline-installer-v2.5.4.tgz
[root@harbor packages]# mkdir -p /data/certs

#生成一個有效期為365天的自簽名HTTPS證書(timingy.org.crt)和對應的私鑰(timingy.org.key),該證書可用於域名reg.timingy.org,私鑰不加密,密鑰長度4096位,使用SHA-256簽名
[root@harbor packages]# openssl req -newkey rsa:4096 -nodes -sha256 \
  -keyout /data/certs/timingy.org.key \
  --addext "subjectAltName = DNS:reg.timingy.org" \
  -x509 -days 365 -out /data/certs/timingy.org.crt
Common Name (eg, your name or your server's hostname) []:reg.timingy.org    #這裡域名不能填錯

[root@harbor harbor]# ls
common.sh harbor.v2.5.4.tar.gz harbor.yml.tmpl install.sh LICENSE prepare
[root@harbor harbor]# cp harbor.yml.tmpl harbor.yml #復制模板文件為harbor.yml
#harbor.yml是Harbor的核心配置文件,通過編輯它來定義Harbor的訪問域名、是否啟用HTTPS、管理員密碼、數據存儲位置等關鍵信息。編輯完成後,運行./install.sh即可基於該配置完成Harbor的安裝部署
[root@harbor harbor]# vim harbor.yml
hostname: reg.timingy.org                      #harbor倉庫域名
https:                                         #https設置
  # https port for harbor, default is 443
  port: 443
  certificate: /data/certs/timingy.org.crt     #公鑰位置
  private_key: /data/certs/timingy.org.key     #私鑰位置
harbor_admin_password: 123                     #harbor倉庫admin用戶密碼

#在使用Harbor離線安裝腳本時,顯式要求安裝並啟用ChartMuseum組件,用於支持Helm Chart(Kubernetes應用包)的存儲與管理
[root@harbor harbor]# ./install.sh --with-chartmuseum

#為集群中的多個Docker節點配置私有鏡像倉庫的信任證書,解決HTTPS訪問時的證書驗證問題,保證鏡像拉取流程正常
[root@harbor ~]# for i in 100 200 10 20
> do
> ssh -l root 192.168.121.$i mkdir -p /etc/docker/certs.d/reg.timingy.org
> scp /data/certs/timingy.org.crt root@192.168.121.$i:/etc/docker/certs.d/reg.timingy.org/ca.crt
> done

#設置搭建的harbor倉庫為docker默認倉庫(所有主機)
~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://reg.timingy.org"]
}

#重啟docker讓配置生效
~]# systemctl restart docker.service

#查看docker信息
~]# docker info
 Registry Mirrors:
  https://reg.timingy.org/
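配置完成後,可以按下面的方式驗證到harbor倉庫的加密傳輸和推送是否正常(admin密碼即harbor.yml中設置的值;busybox鏡像僅作演示,可替換為任意已導入的本地鏡像):
~]# docker login reg.timingy.org -u admin -p 123
~]# docker tag busybox:latest reg.timingy.org/library/busybox:latest
~]# docker push reg.timingy.org/library/busybox:latest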
Harbor 倉庫的啟動本質上就是通過 docker-compose 按照你配置的參數(源自 harbor.yml)來拉起一組 Docker 容器,組成完整的 Harbor 服務。當你執行 Harbor 的安裝腳本 ./install.sh 時:
1. 讀取你的配置:harbor.yml
- 你之前編輯的 harbor.yml 文件是 Harbor 的核心配置文件,用於定義如下內容:
  - Harbor 的訪問域名(hostname)
  - 是否啟用 HTTPS,以及證書和私鑰路徑
  - 數據存儲目錄(data_volume)
  - 是否啟用 ChartMuseum(用於 Helm Chart 存儲)
  - 管理員密碼等
- 它決定了 Harbor 的運行方式,例如使用什麼域名訪問、是否啟用加密、數據存放在哪裡等。
2. 生成 docker-compose 配置 & 加載 Docker 鏡像
- install.sh 腳本會根據 harbor.yml 中的配置:
  - 自動生成一份 docker-compose.yml 文件(通常在內部目錄如 ./make/ 下生成,不直接展示給用戶)
  - 將 Harbor 所需的各個服務(如 UI、Registry、數據庫、Redis、ChartMuseum 等)打包為 Docker 鏡像
  - 如果你使用的是離線安裝包,這些鏡像通常已經包含在包中,無需聯網下載
  - 腳本會將這些鏡像通過 docker load 命令加載到本地 Docker 環境中
3. 調用 docker-compose 啟動服務
最終,install.sh 會調用類似於下面的命令(或內部等效邏輯)來啟動 Harbor 服務:
docker-compose up -d
該命令會根據生成的 docker-compose.yml 配置,以後台模式啟動 Harbor 所需的多個容器,這些容器共同構成了一個完整的 Harbor 私有鏡像倉庫服務,支持鏡像管理、用戶權限、Helm Chart 存儲等功能。
在harbor安裝目錄下啟用harbor倉庫
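如果後續重啟了主機或手動停止過harbor,可以在安裝目錄(包含docker-compose.yml的目錄)下用docker-compose重新拉起並檢查各組件狀態(參考操作):
[root@harbor harbor]# docker-compose up -d        #後台啟動harbor的全部容器
[root@harbor harbor]# docker-compose ps           #確認nginx、core、registry等容器均為Up/healthy狀態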
訪問測試
點擊高級-->繼續訪問
登錄后創建公開項目k8s用于k8s集群搭建
2.2.2.5 安裝K8S部署工具
#部署軟件倉庫,添加K8S源
~]# vim /etc/yum.repos.d/k8s.repo
[k8s]
name=k8s
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/rpm
gpgcheck=0

#安裝軟件
~]# dnf install kubelet-1.30.0 kubeadm-1.30.0 kubectl-1.30.0 -y
2.2.2.6 設置kubectl命令補齊功能
[root@k8s-master ~]# dnf install bash-completion -y
[root@k8s-master ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
[root@k8s-master ~]# source ~/.bashrc
2.2.2.7 在所有節點安裝cri-docker
k8s從1.24版本開始移除了dockershim,所以需要安裝cri-docker插件才能使用docker
軟件下載:https://github.com/Mirantis/cri-dockerd
下載docker連接插件及其依賴(讓k8s支持docker容器):
所有節點~] #dnf install libcgroup-0.41-19.el8.x86_64.rpm \
> cri-dockerd-0.3.14-3.el8.x86_64.rpm -y

所有節點~]# cat /lib/systemd/system/cri-docker.service
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket

[Service]
Type=notify
#指定網絡插件名稱及基礎容器鏡像
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --pod-infra-container-image=reg.timingy.org/k8s/pause:3.9
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

所有節點~]# systemctl daemon-reload
所有節點~]# systemctl enable --now cri-docker
所有節點~]# ll /var/run/cri-dockerd.sock
srw-rw---- 1 root docker 0 Aug 20 21:44 /var/run/cri-dockerd.sock #cri-dockerd的套接字文件
2.2.2.8 在master節點拉取K8S所需鏡像
方法1.在線拉取
#拉取k8s集群所需要的鏡像
[root@k8s-master ~]# kubeadm config images pull \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.30.0 \
--cri-socket=unix:///var/run/cri-dockerd.sock

#上傳鏡像到harbor倉庫
[root@k8s-master ~]# docker images | awk '/google/{ print $1":"$2}' \
| awk -F "/" '{system("docker tag "$0" reg.timingy.org/k8s/"$3)}'

[root@k8s-master ~]# docker images | awk '/k8s/{system("docker push "$1":"$2)}'

方法2:離線導入
[root@master k8s-img]# ll
total 650320
-rw-r--r-- 1 root root 84103168 Aug 20 21:55 flannel-0.25.5.tag.gz
-rw-r--r-- 1 root root 581815296 Aug 20 21:55 k8s_docker_images-1.30.tar
-rw-r--r-- 1 root root      4406 Aug 20 21:55 kube-flannel.yml

#導入鏡像
[root@master k8s-img]# docker load -i k8s_docker_images-1.30.tar
3d6fa0469044: Loading layer 327.7kB/327.7kB
49626df344c9: Loading layer 40.96kB/40.96kB
945d17be9a3e: Loading layer 2.396MB/2.396MB
4d049f83d9cf: Loading layer 1.536kB/1.536kB
af5aa97ebe6c: Loading layer 2.56kB/2.56kB
ac805962e479: Loading layer 2.56kB/2.56kB
bbb6cacb8c82: Loading layer 2.56kB/2.56kB
2a92d6ac9e4f: Loading layer 1.536kB/1.536kB
1a73b54f556b: Loading layer 10.24kB/10.24kB
f4aee9e53c42: Loading layer 3.072kB/3.072kB
b336e209998f: Loading layer 238.6kB/238.6kB
06ddf169d3f3: Loading layer 1.69MB/1.69MB
c0cb02961a3c: Loading layer 112.9MB/112.9MB
Loaded image: registry.aliyuncs.com/google_containers/kube-apiserver:v1.30.0
7b631378e22a: Loading layer 107.4MB/107.4MB
Loaded image: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.30.0
62baa24e327e: Loading layer 58.3MB/58.3MB
Loaded image: registry.aliyuncs.com/google_containers/kube-scheduler:v1.30.0
3113ebfbe4c2: Loading layer 28.35MB/28.35MB
f76f3fb0cfaa: Loading layer 57.58MB/57.58MB
Loaded image: registry.aliyuncs.com/google_containers/kube-proxy:v1.30.0
e023e0e48e6e: Loading layer 327.7kB/327.7kB
6fbdf253bbc2: Loading layer 51.2kB/51.2kB
7bea6b893187: Loading layer 3.205MB/3.205MB
ff5700ec5418: Loading layer 10.24kB/10.24kB
d52f02c6501c: Loading layer 10.24kB/10.24kB
e624a5370eca: Loading layer 10.24kB/10.24kB
1a73b54f556b: Loading layer 10.24kB/10.24kB
d2d7ec0f6756: Loading layer 10.24kB/10.24kB
4cb10dd2545b: Loading layer 225.3kB/225.3kB
aec96fc6d10e: Loading layer 217.1kB/217.1kB
545a68d51bc4: Loading layer 57.16MB/57.16MB
Loaded image: registry.aliyuncs.com/google_containers/coredns:v1.11.1
e3e5579ddd43: Loading layer 746kB/746kB
Loaded image: registry.aliyuncs.com/google_containers/pause:3.9
54ad2ec71039: Loading layer 327.7kB/327.7kB
6fbdf253bbc2: Loading layer 51.2kB/51.2kB
accc3e6808c0: Loading layer 3.205MB/3.205MB
ff5700ec5418: Loading layer 10.24kB/10.24kB
d52f02c6501c: Loading layer 10.24kB/10.24kB
e624a5370eca: Loading layer 10.24kB/10.24kB
1a73b54f556b: Loading layer 10.24kB/10.24kB
d2d7ec0f6756: Loading layer 10.24kB/10.24kB
4cb10dd2545b: Loading layer 225.3kB/225.3kB
a9f9fc6d48ba: Loading layer 2.343MB/2.343MB
b48a138a7d6b: Loading layer 124.2MB/124.2MB
b4b40553581c: Loading layer 20.36MB/20.36MB
Loaded image: registry.aliyuncs.com/google_containers/etcd:3.5.12-0

#打標簽
[root@master k8s-img]# docker images | awk '/google/{print $1":"$2}' | awk -F / '{system("docker tag "$0" reg.timingy.org/k8s/"$3)}'
[root@master k8s-img]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
reg.timingy.org/k8s/kube-apiserver v1.30.0 c42f13656d0b 16 months ago 117MB
registry.aliyuncs.com/google_containers/kube-apiserver v1.30.0 c42f13656d0b 16 months ago 117MB
reg.timingy.org/k8s/kube-controller-manager v1.30.0 c7aad43836fa 16 months ago 111MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.30.0 c7aad43836fa 16 months ago 111MB
reg.timingy.org/k8s/kube-scheduler v1.30.0 259c8277fcbb 16 months ago 62MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.30.0 259c8277fcbb 16 months ago 62MB
reg.timingy.org/k8s/kube-proxy                                     v1.30.0   a0bf559e280c   16 months ago   84.7MB

#推送鏡像
[root@master k8s-img]# docker images | awk '/timingy/{system("docker push " $1":"$2)}'
2.2.2.9 集群初始化
#執行初始化命令
[root@master k8s-img]# kubeadm init --pod-network-cidr=10.244.0.0/16 \
> --image-repository reg.timingy.org/k8s \
> --kubernetes-version v1.30.0 \
> --cri-socket=unix:///var/run/cri-dockerd.sock

#指定集群配置文件變量
[root@master ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master k8s-img]# source ~/.bash_profile

#當前節點沒有就緒,因為還沒有安裝網絡插件,容器沒有運行
[root@master k8s-img]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master   NotReady   control-plane   3m    v1.30.0

[root@master k8s-img]# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7c677d6c78-7n96p 0/1 Pending 0 3m2s
kube-system coredns-7c677d6c78-jp6c5 0/1 Pending 0 3m2s
kube-system etcd-master 1/1 Running 0 3m16s
kube-system kube-apiserver-master 1/1 Running 0 3m18s
kube-system kube-controller-manager-master 1/1 Running 0 3m16s
kube-system kube-proxy-rjzl9 1/1 Running 0 3m2s
kube-system kube-scheduler-master 1/1 Running 0 3m16s
Note:
在此階段如果生成的集群token找不到了可以重新生成
[root@master ~]# kubeadm token create --print-join-command
kubeadm join 192.168.121.100:6443 --token slx36w.np3pg2xzfhtj8hsr \
    --discovery-token-ca-cert-hash sha256:29389ead6392e0bb1f68adb025e3a6817c9936a26f9140f8a166528e521addb3 \
    --cri-socket=unix:///var/run/cri-dockerd.sock
2.2.2.10 安裝flannel網絡插件
官方網站:https://github.com/flannel-io/flannel
#下載flannel的yaml部署文件
[root@k8s-master ~]# wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

#下載鏡像:
[root@k8s-master ~]# docker pull docker.io/flannel/flannel:v0.25.5
[root@k8s-master ~]# docker pull docker.io/flannel/flannel-cni-plugin:v1.5.1-flannel1

#注意:得先在harbor中建立flannel公開項目
[root@master k8s-img]# docker tag flannel/flannel:v0.25.5 reg.timingy.org/flannel/flannel:v0.25.5
[root@master k8s-img]# docker tag flannel/flannel-cni-plugin:v1.5.1-flannel1 reg.timingy.org/flannel/flannel-cni-plugin:v1.5.1-flannel1

#推送
[root@master k8s-img]# docker push reg.timingy.org/flannel/flannel:v0.25.5
[root@master k8s-img]# docker push reg.timingy.org/flannel/flannel-cni-plugin:v1.5.1-flannel1

#修改yml配置文件指定鏡像倉庫:官方寫的是docker.io前綴,刪掉這個前綴即可,docker會從默認的倉庫(也就是我們的harbor倉庫)拉取鏡像
[root@master k8s-img]# vim kube-flannel.yml
image: flannel/flannel:v0.25.5
image: flannel/flannel-cni-plugin:v1.5.1-flannel1
image: flannel/flannel:v0.25.5

[root@master k8s-img]# kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

#查看pods運行情況
[root@master k8s-img]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-jqz8p 1/1 Running 0 15s
kube-system coredns-7c677d6c78-7n96p 1/1 Running 0 18m
kube-system coredns-7c677d6c78-jp6c5 1/1 Running 0 18m
kube-system etcd-master 1/1 Running 0 18m
kube-system kube-apiserver-master 1/1 Running 0 18m
kube-system kube-controller-manager-master 1/1 Running 0 18m
kube-system kube-proxy-rjzl9 1/1 Running 0 18m
kube-system    kube-scheduler-master             1/1     Running   0          18m

#查看節點是否ready
[root@master k8s-img]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 18m v1.30.0
2.2.2.11 節點擴容
在所有的worker節點中,確認部署好以下內容:
1 禁用swap
2 安裝:
- kubelet-1.30.0
- kubeadm-1.30.0
- kubectl-1.30.0
- docker-ce
- cri-dockerd
3 修改cri-dockerd啟動文件,添加:
- --network-plugin=cni
- --pod-infra-container-image=reg.timingy.org/k8s/pause:3.9
4 啟動服務:
- kubelet.service
- cri-docker.service
以上信息確認完畢后即可加入集群
[root@master k8s-img]# kubeadm token create --print-join-command
kubeadm join 192.168.121.100:6443 --token p3kfyl.ljipmtsklr21r9ah --discovery-token-ca-cert-hash sha256:e01d3ac26e5c7b3100487dae6e14ce16e49f183b1b35f18cacd2be8006177293

[root@node1 ~]# kubeadm join 192.168.121.100:6443 --token p3kfyl.ljipmtsklr21r9ah --discovery-token-ca-cert-hash sha256:e01d3ac26e5c7b3100487dae6e14ce16e49f183b1b35f18cacd2be8006177293 --cri-socket=unix:///var/run/cri-dockerd.sock

[root@node2 ~]# kubeadm join 192.168.121.100:6443 --token p3kfyl.ljipmtsklr21r9ah --discovery-token-ca-cert-hash sha256:e01d3ac26e5c7b3100487dae6e14ce16e49f183b1b35f18cacd2be8006177293 --cri-socket=unix:///var/run/cri-dockerd.sock
[preflight] Running pre-flight checks
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 2.50498435s
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
在master節點中查看所有node的狀態
Note:
所有節點的STATUS為Ready狀態,那么恭喜你,你的kubernetes就裝好了!!
測試集群運行情況
[root@harbor ~]# cd packages/
[root@harbor packages]# ll
total 3621832
-rw-r--r-- 1 root root 131209386 Aug 23 2024 1panel-v1.10.13-lts-linux-amd64.tar.gz
-rw-r--r-- 1 root root 4505600 Aug 26 2024 busybox-latest.tar.gz
-rw-r--r-- 1 root root 211699200 Aug 26 2024 centos-7.tar.gz
-rw-r--r-- 1 root root 22456832 Aug 26 2024 debian11.tar.gz
-rw-r--r-- 1 root root 693103681 Aug 26 2024 docker-images.tar.gz
-rw-r--r-- 1 root root 57175040 Aug 26 2024 game2048.tar.gz
-rw-r--r-- 1 root root 102946304 Aug 26 2024 haproxy-2.3.tar.gz
drwxr-xr-x 3 root root 180 Aug 20 20:06 harbor
-rw-r--r-- 1 root root 738797440 Aug 17 2024 harbor-offline-installer-v2.5.4.tgz
-rw-r--r-- 1 root root 207404032 Aug 26 2024 mario.tar.gz
-rw-r--r-- 1 root root 519596032 Aug 26 2024 mysql-5.7.tar.gz
-rw-r--r-- 1 root root 146568704 Aug 26 2024 nginx-1.23.tar.gz
-rw-r--r-- 1 root root 191849472 Aug 26 2024 nginx-latest.tar.gz
-rw-r--r-- 1 root root 574838784 Aug 26 2024 phpmyadmin-latest.tar.gz
-rw-r--r-- 1 root root 26009088 Aug 17 2024 registry.tag.gz
drwxr-xr-x 2 root root 277 Aug 23 2024 rpm
-rw-r--r-- 1 root root  80572416 Aug 26  2024 ubuntu-latest.tar.gz

#加載壓縮包為鏡像
[root@harbor packages]# docker load -i nginx-latest.tar.gz
#打標簽并推送
[root@harbor packages]# docker tag nginx:latest reg.timingy.org/library/nginx:latest
[root@harbor packages]# docker push reg.timingy.org/library/nginx:latest

#建立一個pod
[root@master k8s-img]# kubectl run test --image=nginx
pod/test created

#查看pod狀態
[root@master k8s-img]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
test   1/1     Running   0          35s

#刪除pod
[root@master k8s-img]# kubectl delete pod test
pod "test" deleted
三 kubernetes 中的資源
3.1 資源管理介紹
- 在kubernetes中,所有的內容都抽象為資源,用戶需要通過操作資源來管理kubernetes。
- kubernetes的本質上就是一個集群系統,用戶可以在集群中部署各種服務
- 所謂的部署服務,其實就是在kubernetes集群中運行一個個的容器,并將指定的程序跑在容器中。
- kubernetes的最小管理單元是pod而不是容器,只能將容器放在Pod中
- kubernetes一般也不會直接管理Pod,而是通過Pod控制器來管理Pod
- Pod中服務的訪問是由kubernetes提供的Service資源來實現
- Pod中程序的數據需要持久化時,由kubernetes提供的各種存儲系統來實現
3.2 資源管理方式
- 命令式對象管理:直接使用命令去操作kubernetes資源
  kubectl run nginx-pod --image=nginx:latest --port=80
- 命令式對象配置:通過命令配置和配置文件去操作kubernetes資源
  kubectl create/patch -f nginx-pod.yaml
- 聲明式對象配置:通過apply命令和配置文件去操作kubernetes資源
  kubectl apply -f nginx-pod.yaml
類型 | 適用環境 | 優點 | 缺點 |
---|---|---|---|
命令式對象管理 | 測試 | 簡單 | 只能操作活動對象,無法審計、跟蹤 |
命令式對象配置 | 開發 | 可以審計、跟蹤 | 項目大時,配置文件多,操作麻煩 |
聲明式對象配置 | 開發 | 支持目錄操作 | 意外情況下難以調試 |
3.2.1 命令式對象管理
kubectl是kubernetes集群的命令行工具,通過它能夠對集群本身進行管理,并能夠在集群上進行容器化應用的安裝部署
kubectl命令的語法如下:
kubectl [command] [type] [name] [flags]
command:指定要對資源執行的操作,例如create、get、delete
type:指定資源類型,比如deployment、pod、service
name:指定資源的名稱,名稱大小寫敏感
flags:指定額外的可選參數
# 查看所有pod
kubectl get pod

# 查看某個pod
kubectl get pod pod_name

# 查看某個pod,以yaml格式展示結果
kubectl get pod pod_name -o yaml
3.2.2 資源類型
kubernetes中所有的內容都抽象為資源
kubectl api-resources
常用資源類型
kubectl 常見命令操作
3.2.3 基本命令示例
kubectl的詳細說明地址:Kubectl Reference Docs
[root@master ~]# kubectl version
Client Version: v1.30.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.0
#顯示集群信息
[root@master ~]# kubectl cluster-info
Kubernetes control plane is running at https://192.168.121.100:6443
CoreDNS is running at https://192.168.121.100:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
#創建一個webcluster控制器,控制器中pod數量為2
[root@master ~]# kubectl create deployment webcluster --image nginx --replicas 2

#查看控制器
[root@master ~]# kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
webcluster 2/2 2 2 22s
#查看資源幫助
[root@master ~]# kubectl explain deployment
GROUP: apps
KIND: Deployment
VERSION: v1DESCRIPTION:Deployment enables declarative updates for Pods and ReplicaSets.FIELDS:apiVersion <string>APIVersion defines the versioned schema of this representation of an object.Servers should convert recognized schemas to the latest internal value, andmay reject unrecognized values. More info:https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resourceskind <string>Kind is a string value representing the REST resource this objectrepresents. Servers may infer this from the endpoint the client submitsrequests to. Cannot be updated. In CamelCase. More info:https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kindsmetadata <ObjectMeta>Standard object's metadata. More info:https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadataspec <DeploymentSpec>Specification of the desired behavior of the Deployment.status <DeploymentStatus>Most recently observed status of the Deployment.#查看控制器參數幫助
[root@master ~]# kubectl explain deployment.spec
GROUP: apps
KIND: Deployment
VERSION: v1FIELD: spec <DeploymentSpec>DESCRIPTION:Specification of the desired behavior of the Deployment.DeploymentSpec is the specification of the desired behavior of theDeployment.FIELDS:minReadySeconds <integer>Minimum number of seconds for which a newly created pod should be readywithout any of its container crashing, for it to be considered available.Defaults to 0 (pod will be considered available as soon as it is ready)paused <boolean>Indicates that the deployment is paused.progressDeadlineSeconds <integer>The maximum time in seconds for a deployment to make progress before it isconsidered to be failed. The deployment controller will continue to processfailed deployments and a condition with a ProgressDeadlineExceeded reasonwill be surfaced in the deployment status. Note that progress will not beestimated during the time a deployment is paused. Defaults to 600s.replicas <integer>Number of desired pods. This is a pointer to distinguish between explicitzero and not specified. Defaults to 1.revisionHistoryLimit <integer>The number of old ReplicaSets to retain to allow rollback. This is a pointerto distinguish between explicit zero and not specified. Defaults to 10.selector <LabelSelector> -required-Label selector for pods. Existing ReplicaSets whose pods are selected bythis will be the ones affected by this deployment. It must match the podtemplate's labels.strategy <DeploymentStrategy>The deployment strategy to use to replace existing pods with new ones.template <PodTemplateSpec> -required-Template describes the pods that will be created. The only allowedtemplate.spec.restartPolicy value is "Always".
#編輯控制器配置
[root@master ~]# kubectl edit deployments.apps webcluster
@@@@省略內容@@@@@@
spec:
  progressDeadlineSeconds: 600
  replicas: 3                 #pods數量改為3
@@@@省略內容@@@@@@

#查看控制器
[root@master ~]# kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
webcluster 3/3 3 3 4m56s
#利用補丁更改控制器配置
[root@master ~]# kubectl patch deployments.apps webcluster -p '{"spec":{"replicas":4}}'
deployment.apps/webcluster patched

[root@master ~]# kubectl get deployments.apps webcluster
NAME READY UP-TO-DATE AVAILABLE AGE
webcluster 4/4 4 4 7m6s
#刪除資源
[root@master ~]# kubectl delete deployments.apps webcluster
deployment.apps "webcluster" deleted
[root@master ~]# kubectl get deployments.apps
No resources found in default namespace.
3.2.4 運行和調試命令示例
#拷貝文件到pod中
[root@master ~]# kubectl cp anaconda-ks.cfg nginx:/
[root@master ~]# kubectl exec -it pods/nginx /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@nginx:/# ls
anaconda-ks.cfg boot docker-entrypoint.d etc lib media opt root sbin sys usr
bin   dev  docker-entrypoint.sh  home  lib64  mnt    proc  run   srv   tmp  var

#拷貝pod中的文件到本機
[root@master ~]# kubectl cp nginx:/anaconda-ks.cfg ./
tar: Removing leading `/' from member names
3.2.5 高級命令示例
#利用命令生成yaml模板文件
[root@master ~]# kubectl create deployment webcluster --image nginx --dry-run=client -o yaml > webcluster.yml

#利用yaml文件生成資源
(刪除不需要的配置后)
[root@master podsManager]# cat webcluster.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: webcluster
  name: webcluster
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webcluster
  template:
    metadata:
      labels:
        app: webcluster
    spec:
      containers:
      - image: nginx
        name: nginx

#利用 YAML 文件定義並創建 Kubernetes 資源
[root@master podsManager]# kubectl apply -f webcluster.yml
deployment.apps/webcluster created

#查看控制器
[root@master podsManager]# kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
webcluster   1/1     1            1           6s

#刪除資源
[root@master podsManager]# kubectl delete -f webcluster.yml
deployment.apps "webcluster" deleted
#管理資源標簽
[root@master podsManager]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx   1/1     Running   0          102s   run=nginx

[root@master podsManager]# kubectl label pods nginx app=xxy
pod/nginx labeled
[root@master podsManager]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx   1/1     Running   0          2m47s   app=xxy,run=nginx

#更改標籤
[root@master podsManager]# kubectl label pods nginx app=webcluster --overwrite
pod/nginx labeled
[root@master podsManager]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx   1/1     Running   0          5m55s   app=webcluster,run=nginx

#刪除標籤
[root@master podsManager]# cat webcluster.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: webcluster
  name: webcluster
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webcluster
  template:
    metadata:
      labels:
        app: webcluster
    spec:
      containers:
      - image: nginx
        name: nginx

[root@master podsManager]# kubectl apply -f webcluster.yml
deployment.apps/webcluster created

[root@master podsManager]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
webcluster-7c584f774b-7ncbj 1/1 Running 0 15s app=webcluster,pod-template-hash=7c584f774b
webcluster-7c584f774b-gxktm   1/1     Running   0          15s   app=webcluster,pod-template-hash=7c584f774b

#刪除pod上的標籤
[root@master podsManager]# kubectl label pods webcluster-7c584f774b-7ncbj app-
pod/webcluster-7c584f774b-7ncbj unlabeled

#控制器會重新啟動新pod
[root@master podsManager]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
webcluster-7c584f774b-52bbq 1/1 Running 0 26s app=webcluster,pod-template-hash=7c584f774b
webcluster-7c584f774b-7ncbj 1/1 Running 0 4m28s pod-template-hash=7c584f774b
webcluster-7c584f774b-gxktm 1/1 Running 0 4m28s app=webcluster,pod-template-hash=7c584f774b
四 pod
4.1 什么是pod
- Pod是可以創建和管理Kubernetes計算的最小可部署單元
- 一個Pod代表著集群中運行的一個進程,每個pod都有一個唯一的ip
- 一個pod類似一個豌豆莢,包含一個或多個容器(通常是docker)
- 多個容器間共享IPC、Network和UTS namespace
4.1.1 創建自主式pod (生產不推薦)
優點:
靈活性高:
- 可以精確控制 Pod 的各種配置參數,包括容器的鏡像、資源限制、環境變量、命令和參數等,滿足特定的應用需求。
學習和調試方便:
- 對於學習 Kubernetes 的原理和機制非常有幫助,通過手動創建 Pod 可以深入了解 Pod 的結構和配置方式。在調試問題時,可以更直接地觀察和調整 Pod 的設置。
適用於特殊場景:
- 在一些特殊情況下,如進行一次性任務、快速驗證概念或在資源受限的環境中進行特定配置時,手動創建 Pod 可能是一種有效的方式。
缺點:
管理複雜:
- 如果需要管理大量的 Pod,手動創建和維護會變得非常繁瑣和耗時。難以實現自動化的擴縮容、故障恢復等操作。
缺乏高級功能:
- 無法自動享受 Kubernetes 提供的高級功能,如自動部署、滾動更新、服務發現等。這可能導致應用的部署和管理效率低下。
#查看所有pods(當前namespace)
[root@master podsManager]# kubectl get pods
No resources found in default namespace.

#建立一個名為timingy的pod
[root@master podsManager]# kubectl run timingy --image nginx
pod/timingy created

[root@master podsManager]# kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
timingy   1/1     Running   0          5s

#顯示pod的較為詳細的信息
[root@master podsManager]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
timingy 1/1 Running 0 14s 10.244.1.5 node1 <none> <none>
4.1.2 利用控制器管理pod(推薦)
高可用性和可靠性:
- 自動故障恢復:如果一個 Pod 失敗或被刪除,控制器會自動創建新的 Pod 來維持期望的副本數量。確保應用始終處於可用狀態,減少因單個 Pod 故障導致的服務中斷。
- 健康檢查和自愈:可以配置控制器對 Pod 進行健康檢查(如存活探針和就緒探針)。如果 Pod 不健康,控制器會采取適當的行動,如重啟 Pod 或刪除並重新創建它,以保證應用的正常運行。
可擴展性:
- 輕松擴縮容:可以通過簡單的命令或配置更改來增加或減少 Pod 的數量,以滿足不同的工作負載需求。例如,在高流量期間可以快速擴展以處理更多請求,在低流量期間可以縮容以節省資源。
- 水平自動擴縮容(HPA):可以基於自定義指標(如 CPU 利用率、內存使用情況或應用特定的指標)自動調整 Pod 的數量,實現動態的資源分配和成本優化。
版本管理和更新:
- 滾動更新:對於 Deployment 等控制器,可以執行滾動更新來逐步替換舊版本的 Pod 為新版本,確保應用在更新過程中始終保持可用。可以控制更新的速率和策略,以減少對用戶的影響。
- 回滾:如果更新出現問題,可以輕松回滾到上一個穩定版本,保證應用的穩定性和可靠性。
聲明式配置:
- 簡潔的配置方式:使用 YAML 或 JSON 格式的聲明式配置文件來定義應用的部署需求。這種方式使得配置易於理解、維護和版本控制,同時也方便團隊協作。
- 期望狀態管理:只需要定義應用的期望狀態(如副本數量、容器鏡像等),控制器會自動調整實際狀態與期望狀態保持一致。無需手動管理每個 Pod 的創建和刪除,提高了管理效率。
服務發現和負載均衡:
- 自動注冊和發現:Kubernetes 中的服務(Service)可以自動發現由控制器管理的 Pod,並將流量路由到它們。這使得應用的服務發現和負載均衡變得簡單和可靠,無需手動配置負載均衡器。
- 流量分發:可以根據不同的策略(如輪詢、隨機等)將請求分發到不同的 Pod,提高應用的性能和可用性。
多環境一致性:
- 一致的部署方式:在不同的環境(如開發、測試、生產)中,可以使用相同的控制器和配置來部署應用,確保應用在不同環境中的行為一致。這有助於減少部署差異和錯誤,提高開發和運維效率。
示例:
#建立控制器并自動運行pod
[root@master ~]# kubectl create deployment timingy --image nginx
deployment.apps/timingy created
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
timingy-5bb68ff8f9-swfjk   1/1     Running   0          22s

#為timingy擴容
[root@master ~]# kubectl scale deployment timingy --replicas 6
deployment.apps/timingy scaled
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
timingy-5bb68ff8f9-8gc4z 1/1 Running 0 4m15s
timingy-5bb68ff8f9-hvn2j 1/1 Running 0 4m15s
timingy-5bb68ff8f9-mr48h 1/1 Running 0 4m15s
timingy-5bb68ff8f9-nsf4g 1/1 Running 0 4m15s
timingy-5bb68ff8f9-pnmk2 1/1 Running 0 4m15s
timingy-5bb68ff8f9-swfjk   1/1     Running   0          5m20s

#為timingy縮容
[root@master ~]# kubectl scale deployment timingy --replicas 2
deployment.apps/timingy scaled

[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
timingy-5bb68ff8f9-hvn2j 1/1 Running 0 5m5s
timingy-5bb68ff8f9-swfjk 1/1 Running 0 6m10s
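上面演示的是手動擴縮容;若希望按負載自動伸縮,可以使用前面提到的HPA。下面是一個最小示意(前提是集群已部署metrics-server,且deployment中配置了resources.requests,否則HPA獲取不到指標):
#當CPU平均利用率超過50%時,在2~6個副本之間自動伸縮
[root@master ~]# kubectl autoscale deployment timingy --cpu-percent=50 --min=2 --max=6
#查看HPA狀態
[root@master ~]# kubectl get hpa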
4.1.3 應用版本的更新
#利用控制器建立pod
[root@master ~]# kubectl create deployment timingy --image myapp:v1 --replicas 2
deployment.apps/timingy created

#暴露端口
[root@master ~]# kubectl expose deployment timingy --port 80 --target-port 80
service/timingy exposed

#訪問服務
[root@master ~]# curl 10.107.166.185
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@master ~]# curl 10.107.166.185
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

#查看歷史版本
[root@master ~]# kubectl rollout history deployment timingy
deployment.apps/timingy
REVISION CHANGE-CAUSE
1         <none>

#更新控制器鏡像版本
[root@master ~]# kubectl set image deployments/timingy myapp=myapp:v2
deployment.apps/timingy image updated

#查看歷史版本
[root@master ~]# kubectl rollout history deployment timingy
deployment.apps/timingy
REVISION CHANGE-CAUSE
1 <none>
2         <none>

#訪問內容測試
[root@master ~]# curl 10.107.166.185
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>

#版本回滾
[root@master ~]# kubectl rollout undo deployment timingy --to-revision 1
deployment.apps/timingy rolled back
[root@master ~]# curl 10.107.166.185
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

#不過還是建議在yaml文件中修改鏡像版本
4.1.4 利用yaml文件部署應用
4.1.4.1 用yaml文件部署應用有以下優點
聲明式配置:
- 清晰表達期望狀態:以聲明式的方式描述應用的部署需求,包括副本數量、容器配置、網絡設置等。這使得配置易於理解和維護,並且可以方便地查看應用的預期狀態。
- 可重復性和版本控制:配置文件可以被版本控制,確保在不同環境中的部署一致性。可以輕松回滾到以前的版本或在不同環境中重復使用相同的配置。
- 團隊協作:便於團隊成員之間共享和協作,大家可以對配置文件進行審查和修改,提高部署的可靠性和穩定性。
靈活性和可擴展性:
- 豐富的配置選項:可以通過 YAML 文件詳細地配置各種 Kubernetes 資源,如 Deployment、Service、ConfigMap、Secret 等。可以根據應用的特定需求進行高度定制化。
- 組合和擴展:可以將多個資源的配置組合在一個或多個 YAML 文件中,實現復雜的應用部署架構。同時,可以輕松地添加新的資源或修改現有資源以滿足不斷變化的需求。
與工具集成:
- 與 CI/CD 流程集成:可以將 YAML 配置文件與持續集成和持續部署(CI/CD)工具集成,實現自動化的應用部署。例如,可以在代碼提交後自動觸發部署流程,使用配置文件來部署應用到不同的環境。
- 命令行工具支持:Kubernetes 的命令行工具 kubectl 對 YAML 配置文件有很好的支持,可以方便地應用、更新和刪除配置。同時,還可以使用其他工具來驗證和分析 YAML 配置文件,確保其正確性和安全性。
4.1.4.2 資源清單參數
參數名稱 | 類型 | 參數說明 |
---|---|---|
apiVersion | String | 這裡指的是K8S API的版本,目前基本上是v1,可以用kubectl api-versions命令查詢 |
kind | String | 這里指的是yaml文件定義的資源類型和角色,比如:Pod |
metadata | Object | 元數據對象,固定值就寫metadata |
metadata.name | String | 元數據對象的名字,這里由我們編寫,比如命名Pod的名字 |
metadata.namespace | String | 元數據對象的命名空間,由我們自身定義 |
spec | Object | 詳細定義對象,固定值就寫spec |
spec.containers[] | list | 這里是Spec對象的容器列表定義,是個列表 |
spec.containers[].name | String | 這里定義容器的名字 |
spec.containers[].image | string | 這里定義要用到的鏡像名稱 |
spec.containers[].imagePullPolicy | String | 定義鏡像拉取策略,有三個值可選: (1) Always: 每次都嘗試重新拉取鏡像 (2) IfNotPresent:如果本地有鏡像就使用本地鏡像 (3) Never:表示僅使用本地鏡像 |
spec.containers[].command[] | list | 指定容器運行時啟動的命令,若未指定則運行容器打包時指定的命令 |
spec.containers[].args[] | list | 指定容器運行參數,可以指定多個 |
spec.containers[].workingDir | String | 指定容器工作目錄 |
spec.containers[].volumeMounts[] | list | 指定容器內部的存儲卷配置 |
spec.containers[].volumeMounts[].name | String | 指定可以被容器掛載的存儲卷的名稱 |
spec.containers[].volumeMounts[].mountPath | String | 指定可以被容器掛載的存儲卷的路徑 |
spec.containers[].volumeMounts[].readOnly | String | 設置存儲卷路徑的讀寫模式,true或false,默認為讀寫模式 |
spec.containers[].ports[] | list | 指定容器需要用到的端口列表 |
spec.containers[].ports[].name | String | 指定端口名稱 |
spec.containers[].ports[].containerPort | String | 指定容器需要監聽的端口號 |
spec.containers[].ports[].hostPort | String | 指定容器所在主機需要監聽的端口號,默認跟上面containerPort相同,注意設置了hostPort同一台主機無法啟動該容器的相同副本(因為主機的端口號不能相同,這樣會沖突) |
spec.containers[].ports[].protocol | String | 指定端口協議,支持TCP和UDP,默認值為 TCP |
spec.containers[].env[] | list | 指定容器運行前需設置的環境變量列表 |
spec.containers[].env[].name | String | 指定環境變量名稱 |
spec.containers[].env[].value | String | 指定環境變量值 |
spec.containers[].resources | Object | 指定資源限制和資源請求的值(這里開始就是設置容器的資源上限) |
spec.containers[].resources.limits | Object | 指定設置容器運行時資源的運行上限 |
spec.containers[].resources.limits.cpu | String | 指定CPU的限制,單位為核心數,1=1000m |
spec.containers[].resources.limits.memory | String | 指定MEM內存的限制,單位為MIB、GiB |
spec.containers[].resources.requests | Object | 指定容器啟動和調度時的限制設置 |
spec.containers[].resources.requests.cpu | String | CPU請求,單位為core數,容器啟動時初始化可用數量 |
spec.containers[].resources.requests.memory | String | 內存請求,單位為MIB、GIB,容器啟動的初始化可用數量 |
spec.restartPolicy | string | 定義Pod的重啟策略,默認值為Always. (1) Always: Pod一旦終止運行,無論容器是如何終止的,kubelet服務都將重啟它 (2) OnFailure: 只有Pod以非零退出碼終止時,kubelet才會重啟該容器。如果容器正常結束(退出碼為0),則kubelet將不會重啟它 (3) Never: Pod終止後,kubelet將退出碼報告給Master,不會重啟該Pod |
spec.nodeSelector | Object | 定義Node的Label過濾標簽,以key:value格式指定 |
spec.imagePullSecrets | Object | 定義pull鏡像時使用secret名稱,以name:secretkey格式指定 |
spec.hostNetwork | Boolean | 定義是否使用主機網絡模式,默認值為false。設置true表示使用宿主機網絡,不使用docker網橋,同時設置了true將無法在同一臺宿主機 上啟動第二個副本 |
4.1.4.3 如何獲得資源幫助
kubectl explain pod.spec.containers
4.1.4.4 編寫示例
4.1.4.4.1 示例1:運行簡單的單個容器pod
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:labels:run: timing #pod標簽name: timinglee #pod名稱
spec:containers:- image: myapp:v1 #pod鏡像name: timinglee #容器名稱
4.1.4.4.2 示例2:運行多個容器pod
注意:注意如果多個容器運行在一個pod中,資源共享的同時在使用相同資源時也會干擾,比如端口
#一個端口干擾示例:
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: timing
  name: timinglee
spec:
  containers:
  - image: nginx:latest
    name: web1
  - image: nginx:latest
    name: web2

[root@k8s-master ~]# kubectl apply -f pod.yml
pod/timinglee created

[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
timinglee   1/2     Error     1 (14s ago)   18s

#查看日志
[root@k8s-master ~]# kubectl logs timinglee web2
2024/08/31 12:43:20 [emerg] 1#1: bind() to [::]:80 failed (98: Address already in use)
nginx: [emerg] bind() to [::]:80 failed (98: Address already in use)
2024/08/31 12:43:20 [notice] 1#1: try again to bind() after 500ms
2024/08/31 12:43:20 [emerg] 1#1: still could not bind()
nginx: [emerg] still could not bind()
注意:在一個pod中開啟多個容器時一定要確保容器彼此不能互相干擾
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: timing
  name: timinglee
spec:
  containers:
  - image: nginx:latest
    name: web1
  - image: busybox:latest
    name: busybox
    command: ["/bin/sh","-c","sleep 1000000"]

[root@k8s-master ~]# kubectl apply -f pod.yml
pod/timinglee created

[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
timinglee 2/2 Running 0 19s
4.1.4.4.3 示例3:理解pod內容器間的網絡整合
同在一個pod中的容器公用一個網絡
[root@master podsManager]# cat pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: timinglee
  name: test
spec:
  containers:
  - image: myapp:v1
    name: myapp1
  - image: busyboxplus:latest
    name: busyboxplus
    command: ["/bin/sh","-c","sleep 1000000"]

[root@master podsManager]# kubectl apply -f pod.yml
pod/test created
[root@master podsManager]# kubectl get pods
NAME READY STATUS RESTARTS AGE
test 2/2 Running 0 18s
[root@master podsManager]# kubectl exec test -c busyboxplus -- curl -s localhost
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

可以看到同一個pod里容器共享一個網絡
4.1.4.4.4 示例4:端口映射
[root@master podsManager]# cat 1-pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: timinglee
  name: test
spec:
  containers:
  - image: myapp:v1
    name: myapp1
    ports:
    - name: http
      containerPort: 80
      hostPort: 80          #映射端口到被調度的節點的真實網卡ip上
      protocol: TCP

[root@master podsManager]# kubectl apply -f 1-pod.yml
pod/test created

#測試
[root@master podsManager]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test 1/1 Running 0 69s 10.244.104.48 node2 <none> <none>
[root@master podsManager]# curl node2
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
4.1.4.4.5 示例5:如何設定環境變量
[root@master podsManager]# cat 2-pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: timinglee
  name: test
spec:
  containers:
  - image: busybox:latest
    name: busybox
    command: ["/bin/sh","-c","echo $NAME;sleep 3000000"]
    env:
    - name: NAME
      value: timinglee

[root@master podsManager]# kubectl apply -f 2-pod.yml
pod/test created
[root@master podsManager]# kubectl logs pods/test busybox
timinglee
4.1.4.4.6 示例6:資源限制
資源限制會影響pod的Qos Class資源優先級,資源優先級分為Guaranteed > Burstable > BestEffort
QoS(Quality of Service)即服務質量
資源設定 | 優先級類型 |
---|---|
資源限定未設定 | BestEffort |
資源限定設定且最大和最小不一致 | Burstable |
資源限定設定且最大和最小一致 | Guaranteed |
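按照上表,下面補充一個Burstable等級的最小示意(requests與limits不一致;鏡像沿用前文的myapp:v1,文件名qos-burstable.yml僅為示意):
[root@k8s-master ~]# vim qos-burstable.yml
apiVersion: v1
kind: Pod
metadata:
  name: qos-burstable
spec:
  containers:
  - image: myapp:v1
    name: myapp
    resources:
      limits:
        cpu: 500m
        memory: 200M
      requests:                 #requests小於limits,QoS Class為Burstable
        cpu: 200m
        memory: 100M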
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: timinglee
  name: test
spec:
  containers:
  - image: myapp:v1
    name: myapp
    resources:
      limits:               #pod使用資源的最高限制
        cpu: 500m
        memory: 100M
      requests:             #pod期望使用資源量,不能大於limits
        cpu: 500m
        memory: 100M

[root@k8s-master ~]# kubectl apply -f pod.yml
pod/test created

[root@k8s-master ~]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
test   1/1     Running   0          3s

[root@k8s-master ~]# kubectl describe pods test
    Limits:
      cpu:     500m
      memory:  100M
    Requests:
      cpu:     500m
      memory:  100M
QoS Class: Guaranteed
4.1.4.4.7 示例7 容器啟動管理
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: timinglee
  name: test
spec:
  restartPolicy: Always
  containers:
  - image: myapp:v1
    name: myapp
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/test created

[root@k8s-master ~]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
test   1/1     Running   0          6s    10.244.2.3   k8s-node2   <none>           <none>

[root@k8s-node2 ~]# docker rm -f ccac1d64ea81
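刪除容器後可以回到master上觀察pod被kubelet自動拉起,RESTARTS計數會隨之增加(以下僅為驗證示意):
[root@k8s-master ~]# kubectl get pods -o wide -w        #-w持續觀察,可看到test的RESTARTS由0變為1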
4.1.4.4.8 示例8 選擇運行節點
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: timinglee
  name: test
spec:
  nodeSelector:
    kubernetes.io/hostname: k8s-node1
  restartPolicy: Always
  containers:
  - image: myapp:v1
    name: myapp

[root@k8s-master ~]# kubectl apply -f pod.yml
pod/test created

[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test 1/1 Running 0 21s 10.244.1.5 k8s-node1 <none> <none>
4.1.4.4.9 示例9 共享宿主機網絡
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: timinglee
  name: test
spec:
  hostNetwork: true         #共享宿主機網絡
  restartPolicy: Always
  containers:
  - image: busybox:latest
    name: busybox
    command: ["/bin/sh","-c","sleep 100000"]
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/test created
[root@k8s-master ~]# kubectl exec -it pods/test -c busybox -- /bin/sh
/ # ifconfig
cni0 Link encap:Ethernet HWaddr E6:D4:AA:81:12:B4inet addr:10.244.2.1 Bcast:10.244.2.255 Mask:255.255.255.0inet6 addr: fe80::e4d4:aaff:fe81:12b4/64 Scope:LinkUP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1RX packets:6259 errors:0 dropped:0 overruns:0 frame:0TX packets:6495 errors:0 dropped:0 overruns:0 carrier:0collisions:0 txqueuelen:1000RX bytes:506704 (494.8 KiB) TX bytes:625439 (610.7 KiB)docker0 Link encap:Ethernet HWaddr 02:42:99:4A:30:DCinet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0UP BROADCAST MULTICAST MTU:1500 Metric:1RX packets:0 errors:0 dropped:0 overruns:0 frame:0TX packets:0 errors:0 dropped:0 overruns:0 carrier:0collisions:0 txqueuelen:0RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)eth0 Link encap:Ethernet HWaddr 00:0C:29:6A:A8:61inet addr:172.25.254.20 Bcast:172.25.254.255 Mask:255.255.255.0inet6 addr: fe80::8ff3:f39c:dc0c:1f0e/64 Scope:LinkUP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1RX packets:27858 errors:0 dropped:0 overruns:0 frame:0TX packets:14454 errors:0 dropped:0 overruns:0 carrier:0collisions:0 txqueuelen:1000RX bytes:26591259 (25.3 MiB) TX bytes:1756895 (1.6 MiB)flannel.1 Link encap:Ethernet HWaddr EA:36:60:20:12:05inet addr:10.244.2.0 Bcast:0.0.0.0 Mask:255.255.255.255inet6 addr: fe80::e836:60ff:fe20:1205/64 Scope:LinkUP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1RX packets:0 errors:0 dropped:0 overruns:0 frame:0TX packets:0 errors:0 dropped:40 overruns:0 carrier:0collisions:0 txqueuelen:0RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)lo Link encap:Local Loopbackinet addr:127.0.0.1 Mask:255.0.0.0inet6 addr: ::1/128 Scope:HostUP LOOPBACK RUNNING MTU:65536 Metric:1RX packets:163 errors:0 dropped:0 overruns:0 frame:0TX packets:163 errors:0 dropped:0 overruns:0 carrier:0collisions:0 txqueuelen:1000RX bytes:13630 (13.3 KiB) TX bytes:13630 (13.3 KiB)veth9a516531 Link encap:Ethernet HWaddr 7A:92:08:90:DE:B2inet6 addr: fe80::7892:8ff:fe90:deb2/64 Scope:LinkUP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1RX packets:6236 errors:0 dropped:0 overruns:0 frame:0TX packets:6476 errors:0 dropped:0 overruns:0 carrier:0collisions:0 txqueuelen:0RX bytes:592532 (578.6 KiB) TX bytes:622765 (608.1 KiB)/ # exit
默認情況下,K8s 的 Pod 會有獨立的網絡命名空間(即獨立的 IP、網卡、端口等),與宿主機(運行 K8s 的服務器)網絡隔離。而通過設置 hostNetwork: true,Pod 會放棄獨立網絡,直接使用宿主機的網絡命名空間,相當於 Pod 內的容器和宿主機"共用一套網卡、IP 和端口"。
4.2 pod的生命周期
4.2.1 INIT 容器
官方文檔:Pod | Kubernetes
- Pod 可以包含多個容器,應用運行在這些容器裡面,同時 Pod 也可以有一個或多個先於應用容器啟動的 Init 容器。
- Init 容器與普通的容器非常像,除了如下兩點:
  - 它們總是運行到完成
  - Init 容器不支持 Readiness,因為它們必須在 Pod 就緒之前運行完成,每個 Init 容器必須運行成功,下一個才能夠運行。
- 如果Pod的 Init 容器失敗,Kubernetes 會不斷地重啟該 Pod,直到 Init 容器成功為止。但是,如果 Pod 對應的 restartPolicy 值為 Never,它不會重新啟動。
4.2.1.1 INIT 容器的功能
- Init 容器可以包含一些安裝過程中應用容器中不存在的實用工具或個性化代碼。
- Init 容器可以安全地運行這些工具,避免這些工具導致應用鏡像的安全性降低。
- 應用鏡像的創建者和部署者可以各自獨立工作,而沒有必要聯合構建一個單獨的應用鏡像。
- Init 容器能以不同於Pod內應用容器的文件系統視圖運行。因此,Init容器可具有訪問 Secrets 的權限,而應用容器不能夠訪問。
- 由於 Init 容器必須在應用容器啟動之前運行完成,因此 Init 容器提供了一種機制來阻塞或延遲應用容器的啟動,直到滿足了一組先決條件。一旦前置條件滿足,Pod內的所有的應用容器會並行啟動。
4.2.1.2 INIT 容器示例
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: initpod
  name: initpod
spec:
  containers:
  - image: myapp:v1
    name: myapp
  initContainers:
  - name: init-myservice
    image: busybox
    command: ["sh","-c","until test -e /testfile;do echo wating for myservice; sleep 2;done"]

[root@k8s-master ~]# kubectl apply -f pod.yml
pod/initpod created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
initpod   0/1     Init:0/1   0          3s

[root@k8s-master ~]# kubectl logs pods/initpod init-myservice
wating for myservice
wating for myservice
wating for myservice
wating for myservice
wating for myservice
wating for myservice
[root@k8s-master ~]# kubectl exec pods/initpod -c init-myservice -- /bin/sh -c "touch /testfile"

[root@k8s-master ~]# kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
initpod 1/1 Running 0 62s
4.2.2 探針
探針是由 kubelet 對容器執行的定期診斷:
- ExecAction:在容器內執行指定命令。如果命令退出時返回碼為 0 則認為診斷成功。
- TCPSocketAction:對指定端口上的容器的 IP 地址進行 TCP 檢查。如果端口打開,則診斷被認為是成功的。
- HTTPGetAction:對指定的端口和路徑上的容器的 IP 地址執行 HTTP Get 請求。如果響應的狀態碼大於等於200 且小於 400,則診斷被認為是成功的。
每次探測都將獲得以下三種結果之一:
- 成功:容器通過了診斷。
- 失敗:容器未通過診斷。
- 未知:診斷失敗,因此不會採取任何行動。
Kubelet 可以選擇在運行的容器上執行以下三種探針並做出反應:
- livenessProbe:指示容器是否正在運行。如果存活探測失敗,則 kubelet 會殺死容器,並且容器將受到其重啟策略的影響。如果容器不提供存活探針,則默認狀態為 Success。
- readinessProbe:指示容器是否準備好服務請求。如果就緒探測失敗,端點控制器將從與 Pod 匹配的所有 Service 的端點中刪除該 Pod 的 IP 地址。初始延遲之前的就緒狀態默認為 Failure。如果容器不提供就緒探針,則默認狀態為 Success。
- startupProbe:指示容器中的應用是否已經啟動。如果提供了啟動探測(startup probe),則禁用所有其他探測,直到它成功為止。如果啟動探測失敗,kubelet 將殺死容器,容器服從其重啟策略進行重啟。如果容器沒有提供啟動探測,則默認狀態為 Success。
ReadinessProbe 與 LivenessProbe 的區別
- ReadinessProbe 當檢測失敗後,將 Pod 的 IP:Port 從對應的 EndPoint 列表中刪除。
- LivenessProbe 當檢測失敗後,將殺死容器並根據 Pod 的重啟策略來決定作出對應的措施
StartupProbe 與 ReadinessProbe、LivenessProbe 的區別
- 如果三個探針同時存在,先執行 StartupProbe 探針,其他兩個探針將會被暫時禁用,直到 pod 滿足 StartupProbe 探針配置的條件,其他 2 個探針才啟動,如果不滿足則按照規則重啟容器。
- 另外兩種探針在容器啟動後,會按照配置,直到容器消亡才停止探測,而 StartupProbe 探針只是在容器啟動後按照配置滿足一次後,不再進行後續的探測。
4.2.2.1 探針實例
4.2.2.1.1 存活探針示例:
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: liveness
  name: liveness
spec:
  containers:
  - image: myapp:v1
    name: myapp
    livenessProbe:
      tcpSocket:                  #檢測端口存在性
        port: 8080
      initialDelaySeconds: 3      #容器啟動後要等待多少秒探針才開始工作,默認是 0
      periodSeconds: 1            #執行探測的時間間隔,默認為 10s
      timeoutSeconds: 1           #探針執行檢測請求後,等待響應的超時時間,默認為 1s

#測試:
[root@k8s-master ~]# kubectl apply -f pod.yml
pod/liveness created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
liveness   0/1     CrashLoopBackOff   2 (7s ago)   22s

[root@k8s-master ~]# kubectl describe pods
Warning Unhealthy 1s (x9 over 13s) kubelet Liveness probe failed: dial tcp 10.244.2.6:8080: connect: connection refused
4.2.2.1.2 就緒探針示例:
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: readiness
  name: readiness
spec:
  containers:
  - image: myapp:v1
    name: myapp
    readinessProbe:
      httpGet:
        path: /test.html
        port: 80
      initialDelaySeconds: 1
      periodSeconds: 3
      timeoutSeconds: 1

#測試:
[root@k8s-master ~]# kubectl expose pod readiness --port 80 --target-port 80

[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
readiness   0/1     Running   0          5m25s

[root@k8s-master ~]# kubectl describe pods readiness
Warning  Unhealthy  26s (x66 over 5m43s)  kubelet  Readiness probe failed: HTTP probe failed with statuscode: 404

[root@k8s-master ~]# kubectl describe services readiness
Name: readiness
Namespace: default
Labels: name=readiness
Annotations: <none>
Selector: name=readiness
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.100.171.244
IPs: 10.100.171.244
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints:                             #沒有暴露端口,就緒探針探測不滿足暴露條件
Session Affinity: None
Events:            <none>

[root@k8s-master ~]# kubectl exec pods/readiness -c myapp -- /bin/sh -c "echo test > /usr/share/nginx/html/test.html"

[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
readiness   1/1     Running   0          7m49s

[root@k8s-master ~]# kubectl describe services readiness
Name: readiness
Namespace: default
Labels: name=readiness
Annotations: <none>
Selector: name=readiness
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.100.171.244
IPs: 10.100.171.244
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints:                10.244.2.8:80     #滿足條件,端口暴露
Session Affinity: None
Events: <none>
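4.2.2.1.3 啟動探針示例(補充):
前文只演示了存活探針和就緒探針,這裡補充一個啟動探針(startupProbe)的最小示意,寫法與前兩者一致,參數僅供參考:
[root@k8s-master ~]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: startup
  name: startup
spec:
  containers:
  - image: myapp:v1
    name: myapp
    startupProbe:
      httpGet:
        path: /
        port: 80
      failureThreshold: 30      #最多允許失敗30次
      periodSeconds: 2          #每2秒探測一次,即最多給應用約60秒的啟動時間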
五 控制器
5.1 什么是控制器
官方文檔:
工作負載管理 | Kubernetes
控制器也是管理pod的一種手段
- 自主式pod:pod退出或意外關閉後不會被重新創建
- 控制器管理的 Pod:在控制器的生命周期裡,始終要維持 Pod 的副本數目
Pod控制器是管理pod的中間層,使用Pod控制器之后,只需要告訴Pod控制器,想要多少個什么樣的Pod就可以了,它會創建出滿足條件的Pod并確保每一個Pod資源處于用戶期望的目標狀態。如果Pod資源在運行中出現故障,它會基于指定策略重新編排Pod
當建立控制器后,會把期望值寫入etcd,k8s中的apiserver檢索etcd中我們保存的期望狀態,并對比pod的當前狀態,如果出現差異代碼自驅動立即恢復
5.2 控制器常用類型
控制器名稱 | 控制器用途 |
---|---|
Replication Controller | 比較原始的pod控制器,已經被廢棄,由ReplicaSet替代 |
ReplicaSet | ReplicaSet 確保任何時間都有指定數量的 Pod 副本在運行 |
Deployment | 一個 Deployment 為 Pod 和 ReplicaSet 提供聲明式的更新能力 |
DaemonSet | DaemonSet 確保全部(或指定)節點上運行一個 Pod 的副本 |
StatefulSet | StatefulSet 是用來管理有狀態應用的工作負載 API 對象。 |
Job | 執行批處理任務,僅執行一次任務,保證任務的一個或多個Pod成功結束 |
CronJob | Cron Job 創建基于時間調度的 Jobs。 |
HPA全稱Horizontal Pod Autoscaler | 根據資源利用率自動調整service中Pod數量,實現Pod水平自動縮放 |
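作為上表的補充,這裡給出一個CronJob的最小示意(每分鐘用busybox打印一次時間,名稱cronjob-example僅為示意):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cronjob-example
spec:
  schedule: "* * * * *"              #cron表達式,每分鐘執行一次
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: busybox
            image: busybox:latest
            command: ["/bin/sh","-c","date"]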
5.3 replicaset控制器
5.3.1 replicaset功能
- ReplicaSet 是下一代的 Replication Controller,官方推薦使用ReplicaSet
- ReplicaSet和Replication Controller的唯一區別是選擇器的支持,ReplicaSet支持新的基於集合的選擇器需求
- ReplicaSet 確保任何時間都有指定數量的 Pod 副本在運行
- 雖然 ReplicaSets 可以獨立使用,但今天它主要被Deployments 用作協調 Pod 創建、刪除和更新的機制
5.3.2 replicaset參數說明
參數名稱 | 字段類型 | 參數說明 |
---|---|---|
spec | Object | 詳細定義對象,固定值就寫Spec |
spec.replicas | integer | 指定維護pod數量 |
spec.selector | Object | Selector是對pod的標簽查詢,與pod數量匹配 |
spec.selector.matchLabels | string | 指定Selector查詢標簽的名稱和值,以key:value方式指定 |
spec.template | Object | 指定對pod的描述信息,比如lab標簽,運行容器的信息等 |
spec.template.metadata | Object | 指定pod屬性 |
spec.template.metadata.labels | string | 指定pod標簽 |
spec.template.spec | Object | 詳細定義對象 |
spec.template.spec.containers | list | Spec對象的容器列表定義 |
spec.template.spec.containers.name | string | 指定容器名稱 |
spec.template.spec.containers.image | string | 指定容器鏡像 |
#生成yml文件
[root@k8s-master ~]# kubectl create deployment replicaset --image myapp:v1 --dry-run=client -o yaml > replicaset.yml

[root@k8s-master ~]# vim replicaset.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset          #指定pod名稱,一定小寫,如果出現大寫報錯
spec:
  replicas: 2               #指定維護pod數量為2
  selector:                 #指定檢測匹配方式
    matchLabels:            #指定匹配方式為匹配標籤
      app: myapp            #指定匹配的標籤為app=myapp
  template:                 #模板,當副本數量不足時,會根據下面的模板創建pod副本
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: myapp:v1
        name: myapp

[root@k8s-master ~]# kubectl apply -f replicaset.yml
replicaset.apps/replicaset created

[root@k8s-master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
replicaset-l4xnr 1/1 Running 0 96s app=myapp
replicaset-t2s5p   1/1     Running   0          96s   app=myapp

#replicaset是通過標籤匹配pod
[root@k8s-master ~]# kubectl label pod replicaset-l4xnr app=timinglee --overwrite
pod/replicaset-l4xnr labeled
[root@k8s-master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
replicaset-gd5fh 1/1 Running 0 2s app=myapp #新開啟的pod
replicaset-l4xnr 1/1 Running 0 3m19s app=timinglee
replicaset-t2s5p 1/1 Running 0 3m19s app=myapp#恢復標簽后
[root@k8s2 pod]# kubectl label pod replicaset-example-q2sq9 app-
[root@k8s2 pod]# kubectl get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
replicaset-example-q2sq9 1/1 Running 0 3m14s app=nginx
replicaset-example-th24v 1/1 Running 0 3m14s app=nginx
replicaset-example-w7zpw   1/1     Running   0          3m14s   app=nginx

#replicaset自動控制副本數量,pod可以自愈
[root@k8s-master ~]# kubectl delete pods replicaset-t2s5p
pod "replicaset-t2s5p" deleted

[root@k8s-master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
replicaset-l4xnr 1/1 Running 0 5m43s app=myapp
replicaset-nxmr9   1/1     Running   0          15s     app=myapp

#回收資源
[root@k8s2 pod]# kubectl delete -f rs-example.yml
5.4?deployment 控制器
5.4.1 deployment控制器的功能
- 為了更好的解決服務編排的問題,kubernetes在V1.2版本開始,引入了Deployment控制器。
- Deployment控制器並不直接管理pod,而是通過管理ReplicaSet來間接管理Pod
- Deployment管理ReplicaSet,ReplicaSet管理Pod
- Deployment 為 Pod 和 ReplicaSet 提供了一個聲明式的定義方法
- 在Deployment中ReplicaSet相當於一個版本
典型的應用場景:
- 用來創建Pod和ReplicaSet
- 滾動更新和回滾
- 擴容和縮容
- 暫停與恢復
5.4.2 deployment控制器示例
#生成yaml文件
[root@k8s-master ~]# kubectl create deployment deployment --image myapp:v1 --dry-run=client -o yaml > deployment.yml

[root@k8s-master ~]# vim deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
spec:
  replicas: 4
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: myapp:v1
        name: myapp
#建立pod
[root@k8s-master ~]# kubectl apply -f deployment.yml
deployment.apps/deployment created

#查看pod信息
[root@k8s-master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
deployment-5d886954d4-2ckqw 1/1 Running 0 23s app=myapp,pod-template-hash=5d886954d4
deployment-5d886954d4-m8gpd 1/1 Running 0 23s app=myapp,pod-template-hash=5d886954d4
deployment-5d886954d4-s7pws 1/1 Running 0 23s app=myapp,pod-template-hash=5d886954d4
deployment-5d886954d4-wqnvv 1/1 Running 0 23s app=myapp,pod-template-hash=5d886954d4
5.4.2.1 版本迭代
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-5d886954d4-2ckqw 1/1 Running 0 2m40s 10.244.2.14 k8s-node2 <none> <none>
deployment-5d886954d4-m8gpd 1/1 Running 0 2m40s 10.244.1.17 k8s-node1 <none> <none>
deployment-5d886954d4-s7pws 1/1 Running 0 2m40s 10.244.1.16 k8s-node1 <none> <none>
deployment-5d886954d4-wqnvv   1/1     Running   0          2m40s   10.244.2.15   k8s-node2   <none>           <none>

#pod運行容器版本為v1
[root@k8s-master ~]# curl 10.244.2.14
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

[root@k8s-master ~]# kubectl describe deployments.apps deployment
Name: deployment
Namespace: default
CreationTimestamp: Sun, 01 Sep 2024 23:19:10 +0800
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=myapp
Replicas: 4 desired | 4 updated | 4 total | 4 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy:  25% max unavailable, 25% max surge        #默認每次更新25%

#更新容器運行版本
[root@k8s-master ~]# vim deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
spec:
  minReadySeconds: 5        #最小就緒時間5秒
  replicas: 4
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: myapp:v2     #更新為版本2
        name: myapp

[root@k8s2 pod]# kubectl apply -f deployment-example.yaml

#更新過程
[root@k8s-master ~]# watch -n1 kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE
deployment-5d886954d4-8kb28 1/1 Running 0 48s
deployment-5d886954d4-8s4h8 1/1 Running 0 49s
deployment-5d886954d4-rclkp 1/1 Running 0 50s
deployment-5d886954d4-tt2hz 1/1 Running 0 50s
deployment-7f4786db9c-g796x   0/1     Pending   0          0s

#測試更新效果
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-7f4786db9c-967fk 1/1 Running 0 10s 10.244.1.26 k8s-node1 <none> <none>
deployment-7f4786db9c-cvb9k 1/1 Running 0 10s 10.244.2.24 k8s-node2 <none> <none>
deployment-7f4786db9c-kgss4 1/1 Running 0 9s 10.244.1.27 k8s-node1 <none> <none>
deployment-7f4786db9c-qts8c   1/1     Running   0          9s    10.244.2.25   k8s-node2   <none>           <none>

[root@k8s-master ~]# curl 10.244.1.26
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Note:
更新的過程是重新建立一個版本的RS,新版本的RS會把pod 重建,然后把老版本的RS回收
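可以通過查看ReplicaSet直觀驗證這一點:更新後會同時存在新舊兩個RS,舊RS的副本數被縮為0(參考命令):
[root@k8s-master ~]# kubectl get rs        #更新後可看到新舊兩個ReplicaSet
[root@k8s-master ~]# kubectl describe deployments.apps deployment | grep -i replicaset        #查看NewReplicaSet/OldReplicaSets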
5.4.2.2 版本回滾
[root@k8s-master ~]# vim deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
spec:
  replicas: 4
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: myapp:v1     #回滾到之前版本
        name: myapp

[root@k8s-master ~]# kubectl apply -f deployment.yml
deployment.apps/deployment configured

#測試回滾效果
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-5d886954d4-dr74h 1/1 Running 0 8s 10.244.2.26 k8s-node2 <none> <none>
deployment-5d886954d4-thpf9 1/1 Running 0 7s 10.244.1.29 k8s-node1 <none> <none>
deployment-5d886954d4-vmwl9 1/1 Running 0 8s 10.244.1.28 k8s-node1 <none> <none>
deployment-5d886954d4-wprpd   1/1     Running   0          6s    10.244.2.27   k8s-node2   <none>           <none>

[root@k8s-master ~]# curl 10.244.2.26
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
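除了把yaml中的鏡像改回舊版本,也可以用rollout命令直接回滾(示例命令,REVISION以rollout history的輸出為準):

[root@k8s-master ~]# kubectl rollout history deployment deployment                 #先確認可回退的版本號
[root@k8s-master ~]# kubectl rollout undo deployment deployment                    #回退到上一個版本
[root@k8s-master ~]# kubectl rollout undo deployment deployment --to-revision=1    #或回退到指定版本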
5.4.2.3 滾動更新策略
[root@k8s-master ~]# vim deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
spec:
  minReadySeconds: 5            #最小就緒時間,pod就緒后需保持該時長才被視為可用
  replicas: 4
  strategy:                     #指定更新策略
    rollingUpdate:
      maxSurge: 1               #比定義pod數量多幾個
      maxUnavailable: 0         #比定義pod個數少幾個
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: myapp:v1
        name: myapp
[root@k8s2 pod]# kubectl apply -f deployment-example.yaml
5.4.2.4 暫停及恢復
在實際生產環境中我們做的變更可能不止一處,如果每修改一處就執行變更,就會直接觸發一次更新
我們期望的是把所有修改都完成后再一次性觸發
因此可以先暫停,避免觸發不必要的線上更新
[root@k8s2 pod]# kubectl rollout pause deployment deployment-example

[root@k8s2 pod]# vim deployment-example.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-example
spec:
  minReadySeconds: 5
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  replicas: 6
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: nginx
        resources:
          limits:
            cpu: 0.5
            memory: 200Mi
          requests:
            cpu: 0.5
            memory: 200Mi

[root@k8s2 pod]# kubectl apply -f deployment-example.yaml

#調整副本數,不受影響
[root@k8s-master ~]# kubectl describe pods deployment-7f4786db9c-8jw22
Name: deployment-7f4786db9c-8jw22
Namespace: default
Priority: 0
Service Account: default
Node: k8s-node1/172.25.254.10
Start Time: Mon, 02 Sep 2024 00:27:20 +0800
Labels:           app=myapp
                  pod-template-hash=7f4786db9c
Annotations: <none>
Status: Running
IP: 10.244.1.31
IPs:
  IP:  10.244.1.31
Controlled By: ReplicaSet/deployment-7f4786db9c
Containers:
  myapp:
    Container ID:   docker://01ad7216e0a8c2674bf17adcc9b071e9bfb951eb294cafa2b8482bb8b4940c1d
    Image:          myapp:v2
    Image ID:       docker-pullable://myapp@sha256:5f4afc8302ade316fc47c99ee1d41f8ba94dbe7e3e7747dd87215a15429b9102
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Mon, 02 Sep 2024 00:27:21 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mfjjp (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  kube-api-access-mfjjp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  6m22s  default-scheduler  Successfully assigned default/deployment-7f4786db9c-8jw22 to k8s-node1
  Normal  Pulled     6m22s  kubelet            Container image "myapp:v2" already present on machine
  Normal  Created    6m21s  kubelet            Created container myapp
  Normal  Started    6m21s  kubelet            Started container myapp

#但是更新鏡像和修改資源并沒有觸發更新
[root@k8s2 pod]# kubectl rollout history deployment deployment-example
deployment.apps/deployment-example
REVISION CHANGE-CAUSE
3 <none>
4         <none>

#恢復后開始觸發更新
[root@k8s2 pod]# kubectl rollout resume deployment deployment-example

[root@k8s2 pod]# kubectl rollout history deployment deployment-example
deployment.apps/deployment-example
REVISION CHANGE-CAUSE
3 <none>
4 <none>
5         <none>

#回收
[root@k8s2 pod]# kubectl delete -f deployment-example.yaml
5.5 daemonset控制器
5.5.1 daemonset功能
DaemonSet 確保全部(或者某些)節點上運行一個 Pod 的副本。當有節點加入集群時, 也會為他們新增一個 Pod ,當有節點從集群移除時,這些 Pod 也會被回收。刪除 DaemonSet 將會刪除它創建的所有 Pod
DaemonSet 的典型用法:
-
在每個節點上運行集群存儲 DaemonSet,例如 glusterd、ceph。
-
在每個節點上運行日志收集 DaemonSet,例如 fluentd、logstash。
-
在每個節點上運行監控 DaemonSet,例如 Prometheus Node Exporter、zabbix agent等
-
一個簡單的用法是在所有的節點上都啟動一個 DaemonSet,將被作為每種類型的 daemon 使用
-
一個稍微復雜的用法是單獨對每種 daemon 類型使用多個 DaemonSet,但具有不同的標志, 并且對不同硬件類型具有不同的內存、CPU 要求
5.5.2 daemonset 示例
[root@k8s2 pod]# cat daemonset-example.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-example
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      tolerations:              #對于污點節點的容忍
      - effect: NoSchedule
        operator: Exists
      containers:
      - name: nginx
        image: nginx

[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
daemonset-87h6s 1/1 Running 0 47s 10.244.0.8 k8s-master <none> <none>
daemonset-n4vs4 1/1 Running 0 47s 10.244.2.38 k8s-node2 <none> <none>
daemonset-vhxmq   1/1     Running   0          47s   10.244.1.40   k8s-node1    <none>           <none>

#回收
[root@k8s2 pod]# kubectl delete -f daemonset-example.yml
5.6 job 控制器
5.6.1 job控制器功能
Job,主要用于負責批量處理(一次要處理指定數量任務)短暫的一次性(每個任務僅運行一次就結束)任務
Job特點如下:
-
當Job創建的pod執行成功結束時,Job將記錄成功結束的pod數量
-
當成功結束的pod達到指定的數量時,Job將完成執行
5.6.2 job 控制器示例:
[root@k8s2 pod]# vim job.yml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  completions: 6                #一共完成任務數為6
  parallelism: 2                #每次并行完成2個
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]    #計算Π的后2000位
      restartPolicy: Never      #關閉后不自動重啟
  backoffLimit: 4               #運行失敗后嘗試4次重新運行

[root@k8s2 pod]# kubectl apply -f job.yml
Note:
關于重啟策略設置的說明:
如果指定為OnFailure,則job會在pod出現故障時重啟容器
而不是創建pod,failed次數不變
如果指定為Never,則job會在pod出現故障時創建新的pod
并且故障pod不會消失,也不會重啟,failed次數加1
如果指定為Always的話,就意味著一直重啟,意味著job任務會重復去執行了
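job建立后可以用以下命令確認任務完成情況和計算結果(示例命令,pod名稱以實際生成的為準):

[root@k8s2 pod]# kubectl get jobs pi                    #COMPLETIONS顯示6/6即全部完成
[root@k8s2 pod]# kubectl get pods -l job-name=pi        #job生成的pod狀態應為Completed
[root@k8s2 pod]# kubectl logs <pod名稱>                  #日志中即為計算出的Π值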
5.7 cronjob 控制器
5.7.1 cronjob 控制器功能
-
Cron Job 創建基于時間調度的 Jobs。
-
CronJob控制器以Job控制器資源為其管控對象,并借助它管理pod資源對象,
-
CronJob可以以類似于Linux操作系統的周期性任務作業計劃的方式控制其運行時間點及重復運行的方式。
-
CronJob可以在特定的時間點(反復的)去運行job任務。
5.7.2 cronjob 控制器 示例
[root@k8s2 pod]# vim cronjob.yml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure

[root@k8s2 pod]# kubectl apply -f cronjob.yml
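cronjob建立后每分鐘會生成一個job,可以用以下命令觀察效果(示例命令,pod名稱以實際生成的為準):

[root@k8s2 pod]# kubectl get cronjobs hello            #查看上次調度時間和下次調度計劃
[root@k8s2 pod]# kubectl get jobs --watch              #觀察每分鐘新生成的job
[root@k8s2 pod]# kubectl logs <hello生成的pod名稱>       #輸出應為日期和Hello from the Kubernetes cluster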
六 微服務
6.1 什么是微服務
用控制器來完成集群的工作負載,那么應用如何暴露出去?需要通過微服務暴露出去后才能被訪問
-
Service是一組提供相同服務的Pod對外開放的接口。
-
借助Service,應用可以實現服務發現和負載均衡。
-
service默認只支持4層負載均衡能力,沒有7層功能。(可以通過Ingress實現)
6.2 微服務的類型
微服務類型 | 作用描述 |
---|---|
ClusterIP | 默認值,k8s系統給service自動分配的虛擬IP,只能在集群內部訪問 |
NodePort | 將Service通過指定的Node上的端口暴露給外部,訪問任意一個NodeIP:nodePort都將路由到ClusterIP |
LoadBalancer | 在NodePort的基礎上,借助cloud provider創建一個外部的負載均衡器,并將請求轉發到 NodeIP:NodePort,此模式只能在云服務器上使用 |
ExternalName | 將服務通過 DNS CNAME 記錄方式轉發到指定的域名(通過 spec.externalName 設定) |
示例:
#生成控制器文件并建立控制器
[root@k8s-master ~]# kubectl create deployment timinglee --image myapp:v1 --replicas 2 --dry-run=client -o yaml > timinglee.yaml

#生成微服務yaml追加到已有yaml中
[root@k8s-master ~]# kubectl expose deployment timinglee --port 80 --target-port 80 --dry-run=client -o yaml >> timinglee.yaml

[root@k8s-master ~]# vim timinglee.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: timinglee
  name: timinglee
spec:
  replicas: 2
  selector:
    matchLabels:
      app: timinglee
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: timinglee
    spec:
      containers:
      - image: myapp:v1
        name: myapp
---                             #不同資源間用---隔開
apiVersion: v1
kind: Service
metadata:
  labels:
    app: timinglee
  name: timinglee
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: timinglee

[root@k8s-master ~]# kubectl apply -f timinglee.yaml
deployment.apps/timinglee created
service/timinglee created

[root@k8s-master ~]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 19h
timinglee ClusterIP 10.99.127.134 <none> 80/TCP 16s
微服務默認使用iptables調度
[root@k8s-master ~]# kubectl get services -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 19h <none>
timinglee    ClusterIP   10.99.127.134   <none>        80/TCP    119s   app=timinglee      #集群內部IP 134

#可以在火墻中查看到策略信息
[root@k8s-master ~]# iptables -t nat -nL
KUBE-SVC-I7WXYK76FWYNTTGM 6 -- 0.0.0.0/0 10.99.127.134 /* default/timinglee cluster IP */ tcp dpt:80
6.3 ipvs模式
-
Service 是由 kube-proxy 組件,加上 iptables 來共同實現的
-
kube-proxy 通過 iptables 處理 Service 的過程,需要在宿主機上設置相當多的 iptables 規則,如果宿主機有大量的Pod,不斷刷新iptables規則,會消耗大量的CPU資源
-
IPVS模式的service,可以使K8s集群支持更多量級的Pod
6.3.1 ipvs模式配置方式
1 在所有節點中安裝ipvsadm
[root@k8s-所有節點 pod]# yum install ipvsadm -y
2 修改master節點的代理配置
[root@k8s-master ~]# kubectl -n kube-system edit cm kube-proxy
    metricsBindAddress: ""
    mode: "ipvs"                #設置kube-proxy使用ipvs模式
    nftables:
3 重啟pod,在pod運行時配置文件中采用默認配置,當改變配置文件后已經運行的pod狀態不會變化,所以要重啟pod
[root@k8s-master ~]# kubectl -n kube-system get pods | awk '/kube-proxy/{system("kubectl -n kube-system delete pods "$1)}'

[root@k8s-master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 172.25.254.100:6443          Masq    1      0          0
TCP  10.96.0.10:53 rr
  -> 10.244.0.2:53                Masq    1      0          0
  -> 10.244.0.3:53                Masq    1      0          0
TCP  10.96.0.10:9153 rr
  -> 10.244.0.2:9153              Masq    1      0          0
  -> 10.244.0.3:9153              Masq    1      0          0
TCP  10.97.59.25:80 rr
  -> 10.244.1.17:80               Masq    1      0          0
  -> 10.244.2.13:80               Masq    1      0          0
UDP  10.96.0.10:53 rr
  -> 10.244.0.2:53                Masq    1      0          0
  -> 10.244.0.3:53                Masq    1      0          0
Note:
切換ipvs模式后,kube-proxy會在宿主機上添加一個虛擬網卡:kube-ipvs0,并分配所有service IP
[root@k8s-master ~]# ip a | tail
    inet6 fe80::c4fb:e9ff:feee:7d32/64 scope link
       valid_lft forever preferred_lft forever
8: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
    link/ether fe:9f:c8:5d:a6:c8 brd ff:ff:ff:ff:ff:ff
    inet 10.96.0.10/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.1/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.97.59.25/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
6.4 微服務類型詳解
6.4.1 clusterip
特點:
clusterip模式只能在集群內訪問,并對集群內的pod提供健康檢測和自動發現功能
示例:
[root@k8s2 service]# vim myapp.yml
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: timinglee
  name: timinglee
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: timinglee
  type: ClusterIP

service創建后集群DNS提供解析
[root@k8s-master ~]# dig timinglee.default.svc.cluster.local @10.96.0.10

; <<>> DiG 9.16.23-RH <<>> timinglee.default.svc.cluster.local @10.96.0.10
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 27827
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 057d9ff344fe9a3a (echoed)
;; QUESTION SECTION:
;timinglee.default.svc.cluster.local. IN A

;; ANSWER SECTION:
timinglee.default.svc.cluster.local. 30 IN A 10.97.59.25

;; Query time: 8 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Wed Sep 04 13:44:30 CST 2024
;; MSG SIZE rcvd: 127
6.4.2 ClusterIP中的特殊模式headless
headless(無頭服務)
對于無頭 Services
并不會分配 Cluster IP,kube-proxy不會處理它們,而且平臺也不會為它們進行負載均衡和路由,集群訪問通過dns解析直接指向到業務pod上的IP,所有的調度由dns單獨完成
[root@k8s-master ~]# vim timinglee.yaml
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: timinglee
  name: timinglee
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: timinglee
  type: ClusterIP
  clusterIP: None

[root@k8s-master ~]# kubectl delete -f timinglee.yaml
[root@k8s-master ~]# kubectl apply -f timinglee.yaml
deployment.apps/timinglee created

#測試
[root@k8s-master ~]# kubectl get services timinglee
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
timinglee   ClusterIP   None         <none>        80/TCP    6s

[root@k8s-master ~]# dig timinglee.default.svc.cluster.local @10.96.0.10

; <<>> DiG 9.16.23-RH <<>> timinglee.default.svc.cluster.local @10.96.0.10
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 51527
;; flags: qr aa rd; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 81f9c97b3f28b3b9 (echoed)
;; QUESTION SECTION:
;timinglee.default.svc.cluster.local. IN A

;; ANSWER SECTION:
timinglee.default.svc.cluster.local. 20 IN A 10.244.2.14 #直接解析到pod上
timinglee.default.svc.cluster.local. 20 IN A 10.244.1.18

;; Query time: 0 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Wed Sep 04 13:58:23 CST 2024
;; MSG SIZE  rcvd: 178

#開啟一個busyboxplus的pod測試
[root@k8s-master ~]# kubectl run test --image busyboxplus -it
If you don't see a command prompt, try pressing enter.
/ # nslookup timinglee-service
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      timinglee-service
Address 1: 10.244.2.16 10-244-2-16.timinglee-service.default.svc.cluster.local
Address 2: 10.244.2.17 10-244-2-17.timinglee-service.default.svc.cluster.local
Address 3: 10.244.1.22 10-244-1-22.timinglee-service.default.svc.cluster.local
Address 4: 10.244.1.21 10-244-1-21.timinglee-service.default.svc.cluster.local
/ # curl timinglee-service
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
/ # curl timinglee-service/hostname.html
timinglee-c56f584cf-b8t6m
6.4.3 nodeport
通過ipvs暴露端口,從而使外部主機可以通過集群節點的對外ip:<port>來訪問pod業務
其訪問過程為:
示例:
[root@k8s-master ~]# vim timinglee.yaml
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: timinglee-service
  name: timinglee-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: timinglee
  type: NodePort

[root@k8s-master ~]# kubectl apply -f timinglee.yaml
deployment.apps/timinglee created
service/timinglee-service created
[root@k8s-master ~]# kubectl get services timinglee-service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
timinglee-service   NodePort   10.98.60.22   <none>        80:31771/TCP   8

nodeport在集群節點上綁定端口,一個端口對應一個服務
[root@k8s-master ~]# for i in {1..5}
> do
> curl 172.25.254.100:31771/hostname.html
> done
timinglee-c56f584cf-fjxdk
timinglee-c56f584cf-5m2z5
timinglee-c56f584cf-z2w4d
timinglee-c56f584cf-tt5g6
timinglee-c56f584cf-fjxdk
Note:
nodeport默認端口
nodeport默認端口是30000-32767,超出會報錯
[root@k8s-master ~]# vim timinglee.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: timinglee-service
  name: timinglee-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 33333
  selector:
    app: timinglee
  type: NodePort

[root@k8s-master ~]# kubectl apply -f timinglee.yaml
deployment.apps/timinglee created
The Service "timinglee-service" is invalid: spec.ports[0].nodePort: Invalid value: 33333: provided port is not in the valid range. The range of valid ports is 30000-32767
如果需要使用這個范圍以外的端口就需要特殊設定
[root@k8s-master ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
    - --service-node-port-range=30000-40000
Note:
添加“--service-node-port-range=“ 參數,端口范圍可以自定義
修改后api-server會自動重啟,等apiserver正常啟動后才能操作集群
apiserver的重啟自動完成,修改完參數后全程不需要人為干預
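可以按下面的方式確認修改已生效(示例命令):

[root@k8s-master ~]# kubectl -n kube-system get pods | grep kube-apiserver     #確認apiserver重新處于Running狀態
[root@k8s-master ~]# kubectl apply -f timinglee.yaml                           #此時nodePort: 33333可以正常創建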
6.4.4 loadbalancer
云平臺會為我們分配vip并實現訪問,如果是裸金屬主機那么需要metallb來實現ip的分配
[root@k8s-master ~]# vim timinglee.yaml
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: timinglee-service
  name: timinglee-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: timinglee
  type: LoadBalancer

[root@k8s2 service]# kubectl apply -f myapp.yml

默認無法分配外部訪問IP
[root@k8s2 service]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d1h
myapp        LoadBalancer   10.107.23.134   <pending>     80:32537/TCP   4s

LoadBalancer模式適用云平臺,裸金屬環境需要安裝metallb提供支持
6.4.5 metalLB
官網:Installation :: MetalLB, bare metal load-balancer for Kubernetes
metalLB功能:為LoadBalancer分配vip
部署方式
1.設置ipvs模式
[root@k8s-master ~]# kubectl edit cm -n kube-system kube-proxy
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true

[root@k8s-master ~]# kubectl -n kube-system get pods | awk '/kube-proxy/{system("kubectl -n kube-system delete pods "$1)}'

2.下載部署文件
[root@k8s2 metallb]# wget https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml

3.修改文件中鏡像地址,與harbor倉庫路徑保持一致
[root@k8s-master ~]# vim metallb-native.yaml
...
image: metallb/controller:v0.14.8
image: metallb/speaker:v0.14.8

4.上傳鏡像到harbor
[root@k8s-master ~]# docker pull quay.io/metallb/controller:v0.14.8
[root@k8s-master ~]# docker pull quay.io/metallb/speaker:v0.14.8

[root@k8s-master ~]# docker tag quay.io/metallb/speaker:v0.14.8 reg.timinglee.org/metallb/speaker:v0.14.8
[root@k8s-master ~]# docker tag quay.io/metallb/controller:v0.14.8 reg.timinglee.org/metallb/controller:v0.14.8

[root@k8s-master ~]# docker push reg.timinglee.org/metallb/speaker:v0.14.8
[root@k8s-master ~]# docker push reg.timinglee.org/metallb/controller:v0.14.8

部署服務
[root@k8s2 metallb]# kubectl apply -f metallb-native.yaml
[root@k8s-master ~]# kubectl -n metallb-system get pods
NAME READY STATUS RESTARTS AGE
controller-65957f77c8-25nrw 1/1 Running 0 30s
speaker-p94xq 1/1 Running 0 29s
speaker-qmpct 1/1 Running 0 29s
speaker-xh4zh                 1/1     Running   0          30s

配置分配地址段
[root@k8s-master ~]# vim configmap.yml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool                          #地址池名稱
  namespace: metallb-system
spec:
  addresses:
  - 172.25.254.50-172.25.254.99             #修改為自己本地地址段
---                                         #兩個不同的kind中間必須加分割
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool                              #使用地址池

[root@k8s-master ~]# kubectl apply -f configmap.yml
ipaddresspool.metallb.io/first-pool created
l2advertisement.metallb.io/example created

[root@k8s-master ~]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 21h
timinglee-service   LoadBalancer   10.109.36.123   172.25.254.50   80:31595/TCP   9m9s

#通過分配地址從集群外訪問服務
[root@reg ~]# curl 172.25.254.50
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
6.4.6 externalname
-
開啟services后,不會被分配IP,而是用dns解析CNAME固定域名來解決ip變化問題
-
一般應用于外部業務和pod溝通或外部業務遷移到pod內時
-
在應用向集群遷移過程中,externalname在過渡階段就可以起作用了。
-
集群外的資源遷移到集群時,在遷移的過程中ip可能會變化,但是域名+dns解析能完美解決此問題
示例:
[root@k8s-master ~]# vim timinglee.yaml
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: timinglee-service
  name: timinglee-service
spec:
  selector:
    app: timinglee
  type: ExternalName
  externalName: www.timinglee.org

[root@k8s-master ~]# kubectl apply -f timinglee.yaml

[root@k8s-master ~]# kubectl get services timinglee-service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
timinglee-service ExternalName <none> www.timinglee.org <none> 2m58s
6.5?Ingress-nginx
官網:
Installation Guide - Ingress-Nginx Controller
6.5.1 ingress-nginx功能
-
一種全局的、為了代理不同后端 Service 而設置的負載均衡服務,支持7層
-
Ingress由兩部分組成:Ingress controller和Ingress服務
-
Ingress Controller 會根據你定義的 Ingress 對象,提供對應的代理能力。
-
業界常用的各種反向代理項目,比如 Nginx、HAProxy、Envoy、Traefik 等,都已經為Kubernetes 專門維護了對應的 Ingress Controller。
6.5.2 部署ingress
6.5.2.1 下載部署文件
[root@k8s-master ~]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/baremetal/deploy.yaml
上傳ingress所需鏡像到harbor
[root@k8s-master ~]# docker tag registry.k8s.io/ingress-nginx/controller:v1.11.2@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce reg.timinglee.org/ingress-nginx/controller:v1.11.2

[root@k8s-master ~]# docker tag registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3 reg.timinglee.org/ingress-nginx/kube-webhook-certgen:v1.4.3

[root@k8s-master ~]# docker push reg.timinglee.org/ingress-nginx/controller:v1.11.2
[root@k8s-master ~]# docker push reg.timinglee.org/ingress-nginx/kube-webhook-certgen:v1.4.3
6.5.2.2 安裝ingress
[root@k8s-master ~]# vim deploy.yaml
445 image: ingress-nginx/controller:v1.11.2
546 image: ingress-nginx/kube-webhook-certgen:v1.4.3
599 image: ingress-nginx/kube-webhook-certgen:v1.4.3

[root@k8s-master ~]# kubectl -n ingress-nginx get pods
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-ggqm6 0/1 Completed 0 82s
ingress-nginx-admission-patch-q4wp2 0/1 Completed 0 82s
ingress-nginx-controller-bb7d8f97c-g2h4p 1/1 Running 0 82s
[root@k8s-master ~]# kubectl -n ingress-nginx get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.103.33.148 <none> 80:34512/TCP,443:34727/TCP 108s
ingress-nginx-controller-admission   ClusterIP   10.103.183.64   <none>        443/TCP                      108s

#修改微服務為loadbalancer
[root@k8s-master ~]# kubectl -n ingress-nginx edit svc ingress-nginx-controller
49   type: LoadBalancer

[root@k8s-master ~]# kubectl -n ingress-nginx get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.103.33.148 172.25.254.50 80:34512/TCP,443:34727/TCP 4m43s
ingress-nginx-controller-admission ClusterIP 10.103.183.64 <none> 443/TCP 4m43s
Note:
在ingress-nginx-controller中看到的對外IP就是ingress最終對外開放的ip
6.5.2.3 測試ingress
#生成yaml文件
[root@k8s-master ~]# kubectl create ingress webcluster --rule '*/=timinglee-svc:80' --dry-run=client -o yaml > timinglee-ingress.yml

[root@k8s-master ~]# vim timinglee-ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - backend:
          service:
            name: timinglee-svc
            port:
              number: 80
        path: /
        pathType: Prefix        #Exact(精確匹配),ImplementationSpecific(特定實現),Prefix(前綴匹配),Regular expression(正則表達式匹配)

#建立ingress控制器
[root@k8s-master ~]# kubectl apply -f timinglee-ingress.yml
ingress.networking.k8s.io/webserver created

[root@k8s-master ~]# kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
test-ingress   nginx   *       172.25.254.10   80      8m30s

[root@reg ~]# for n in {1..5}; do curl 172.25.254.50/hostname.html; done
timinglee-c56f584cf-8jhn6
timinglee-c56f584cf-8cwfm
timinglee-c56f584cf-8jhn6
timinglee-c56f584cf-8cwfm
timinglee-c56f584cf-8jhn6
Note:
ingress必須和輸出的service資源處于同一namespace
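如果service建立在其它namespace中,建立ingress時也要用-n指定到同一個namespace(示例命令,test為假設的namespace):

[root@k8s-master ~]# kubectl -n test create ingress webcluster --rule '*/=timinglee-svc:80' --dry-run=client -o yaml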
6.5.3 ingress 的高級用法
6.5.3.1 基于路徑的訪問
1.建立用于測試的控制器myapp
[root@k8s-master app]# kubectl create deployment myapp-v1 --image myapp:v1 --dry-run=client -o yaml > myapp-v1.yaml

[root@k8s-master app]# kubectl create deployment myapp-v2 --image myapp:v2 --dry-run=client -o yaml > myapp-v2.yaml

[root@k8s-master app]# vim myapp-v1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp-v1
  name: myapp-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp-v1
  strategy: {}
  template:
    metadata:
      labels:
        app: myapp-v1
    spec:
      containers:
      - image: myapp:v1
        name: myapp
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp-v1
  name: myapp-v1
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: myapp-v1

[root@k8s-master app]# vim myapp-v2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp-v2
  name: myapp-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp-v2
  template:
    metadata:
      labels:
        app: myapp-v2
    spec:
      containers:
      - image: myapp:v2
        name: myapp
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp-v2
  name: myapp-v2
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: myapp-v2

[root@k8s-master app]# kubectl expose deployment myapp-v1 --port 80 --target-port 80 --dry-run=client -o yaml >> myapp-v1.yaml

[root@k8s-master app]# kubectl expose deployment myapp-v2 --port 80 --target-port 80 --dry-run=client -o yaml >> myapp-v2.yaml

[root@k8s-master app]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 29h
myapp-v1 ClusterIP 10.104.84.65 <none> 80/TCP 13s
myapp-v2 ClusterIP 10.105.246.219 <none> 80/TCP 7s
2.建立ingress的yaml
[root@k8s-master app]# vim ingress1.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /       #訪問路徑后加任何內容都被定向到/
  name: ingress1
spec:
  ingressClassName: nginx
  rules:
  - host: www.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp-v1
            port:
              number: 80
        path: /v1
        pathType: Prefix
      - backend:
          service:
            name: myapp-v2
            port:
              number: 80
        path: /v2
        pathType: Prefix

#測試:
[root@reg ~]# echo 172.25.254.50 www.timinglee.org >> /etc/hosts

[root@reg ~]# curl www.timinglee.org/v1
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@reg ~]# curl www.timinglee.org/v2
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>

#nginx.ingress.kubernetes.io/rewrite-target: / 的功能實現
[root@reg ~]# curl www.timinglee.org/v2/aaaa
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
6.5.3.2 基于域名的訪問
#在測試主機中設定解析
[root@reg ~]# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.254.250 reg.timinglee.org
172.25.254.50 www.timinglee.org myappv1.timinglee.org myappv2.timinglee.org

# 建立基于域名的yml文件
[root@k8s-master app]# vim ingress2.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: ingress2
spec:
  ingressClassName: nginx
  rules:
  - host: myappv1.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp-v1
            port:
              number: 80
        path: /
        pathType: Prefix
  - host: myappv2.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp-v2
            port:
              number: 80
        path: /
        pathType: Prefix

#利用文件建立ingress
[root@k8s-master app]# kubectl apply -f ingress2.yml
ingress.networking.k8s.io/ingress2 created

[root@k8s-master app]# kubectl describe ingress ingress2
Name: ingress2
Labels: <none>
Namespace: default
Address:
Ingress Class: nginx
Default backend: <default>
Rules:
  Host                    Path  Backends
  ----                    ----  --------
  myappv1.timinglee.org
                          /   myapp-v1:80 (10.244.2.31:80)
  myappv2.timinglee.org
                          /   myapp-v2:80 (10.244.2.32:80)
Annotations:              nginx.ingress.kubernetes.io/rewrite-target: /
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  Sync    21s   nginx-ingress-controller  Scheduled for sync

#在測試主機中測試
[root@reg ~]# curl myappv1.timinglee.org
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@reg ~]# curl myappv2.timinglee.org
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
6.5.3.3 建立tls加密
建立證書
[root@k8s-master app]# openssl req -newkey rsa:2048 -nodes -keyout tls.key -x509 -days 365 -subj "/CN=nginxsvc/O=nginxsvc" -out tls.crt
#建立加密資源類型secret
[root@k8s-master app]# kubectl create secret tls web-tls-secret --key tls.key --cert tls.crt
secret/web-tls-secret created
[root@k8s-master app]# kubectl get secrets
NAME TYPE DATA AGE
web-tls-secret kubernetes.io/tls 2 6s
Note:
secret通常在kubernetes中存放敏感數據,它并不是一種加密方式,在后面課程中會有專門講解
#建立ingress3基于tls認證的yml文件
[root@k8s-master app]# vim ingress3.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: ingress3
spec:
  tls:
  - hosts:
    - myapp-tls.timinglee.org
    secretName: web-tls-secret
  ingressClassName: nginx
  rules:
  - host: myapp-tls.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp-v1
            port:
              number: 80
        path: /
        pathType: Prefix

#測試
[root@reg ~]# curl -k https://myapp-tls.timinglee.org
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
6.5.3.4 建立auth認證
#建立認證文件
[root@k8s-master app]# dnf install httpd-tools -y
[root@k8s-master app]# htpasswd -cm auth lee
New password:
Re-type new password:
Adding password for user lee
[root@k8s-master app]# cat auth
lee:$apr1$BohBRkkI$hZzRDfpdtNzue98bFgcU10

#建立認證類型資源
[root@k8s-master app]# kubectl create secret generic auth-web --from-file auth
[root@k8s-master app]# kubectl describe secrets auth-web
Name: auth-web
Namespace: default
Labels: <none>
Annotations:          <none>

Type:  Opaque

Data
====
auth: 42 bytes
#建立ingress4基于用戶認證的yaml文件
[root@k8s-master app]# vim ingress4.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: auth-web
    nginx.ingress.kubernetes.io/auth-realm: "Please input username and password"
  name: ingress4
spec:
  tls:
  - hosts:
    - myapp-tls.timinglee.org
    secretName: web-tls-secret
  ingressClassName: nginx
  rules:
  - host: myapp-tls.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp-v1
            port:
              number: 80
        path: /
        pathType: Prefix

#建立ingress4
[root@k8s-master app]# kubectl apply -f ingress4.yml
ingress.networking.k8s.io/ingress4 created
[root@k8s-master app]# kubectl describe ingress ingress4
Name: ingress4
Labels: <none>
Namespace: default
Address:
Ingress Class: nginx
Default backend: <default>
TLS:
  web-tls-secret terminates myapp-tls.timinglee.org
Rules:
  Host                      Path  Backends
  ----                      ----  --------
  myapp-tls.timinglee.org
                            /   myapp-v1:80 (10.244.2.31:80)
Annotations:                nginx.ingress.kubernetes.io/auth-realm: Please input username and password
                            nginx.ingress.kubernetes.io/auth-secret: auth-web
                            nginx.ingress.kubernetes.io/auth-type: basic
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  Sync    14s   nginx-ingress-controller  Scheduled for sync

#測試:
[root@reg ~]# curl -k https://myapp-tls.timinglee.org
<html>
<head><title>401 Authorization Required</title></head>
<body>
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx</center>
</body>
</html>

[root@reg ~]# curl -k https://myapp-tls.timinglee.org -ulee:lee
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
6.5.3.5 rewrite重定向
#指定默認訪問的文件到hostname.html上
[root@k8s-master app]# vim ingress5.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/app-root: /hostname.html
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: auth-web
    nginx.ingress.kubernetes.io/auth-realm: "Please input username and password"
  name: ingress5
spec:
  tls:
  - hosts:
    - myapp-tls.timinglee.org
    secretName: web-tls-secret
  ingressClassName: nginx
  rules:
  - host: myapp-tls.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp-v1
            port:
              number: 80
        path: /
        pathType: Prefix
[root@k8s-master app]# kubectl apply -f ingress5.yml
ingress.networking.k8s.io/ingress5 created
[root@k8s-master app]# kubectl describe ingress ingress5
Name: ingress5
Labels: <none>
Namespace: default
Address: 172.25.254.10
Ingress Class: nginx
Default backend: <default>
TLS:
  web-tls-secret terminates myapp-tls.timinglee.org
Rules:
  Host                      Path  Backends
  ----                      ----  --------
  myapp-tls.timinglee.org
                            /   myapp-v1:80 (10.244.2.31:80)
Annotations:                nginx.ingress.kubernetes.io/app-root: /hostname.html
                            nginx.ingress.kubernetes.io/auth-realm: Please input username and password
                            nginx.ingress.kubernetes.io/auth-secret: auth-web
                            nginx.ingress.kubernetes.io/auth-type: basic
Events:
  Type    Reason  Age                    From                      Message
  ----    ------  ----                   ----                      -------
  Normal  Sync    2m16s (x2 over 2m54s)  nginx-ingress-controller  Scheduled for sync

#測試:
[root@reg ~]# curl -Lk https://myapp-tls.timinglee.org -ulee:lee
myapp-v1-7479d6c54d-j9xc6

[root@reg ~]# curl -Lk https://myapp-tls.timinglee.org/lee/hostname.html -ulee:lee
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.12.2</center>
</body>
</html>

#解決重定向路徑問題
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: auth-web
    nginx.ingress.kubernetes.io/auth-realm: "Please input username and password"
  name: ingress6
spec:
  tls:
  - hosts:
    - myapp-tls.timinglee.org
    secretName: web-tls-secret
  ingressClassName: nginx
  rules:
  - host: myapp-tls.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp-v1
            port:
              number: 80
        path: /
        pathType: Prefix
      - backend:
          service:
            name: myapp-v1
            port:
              number: 80
        path: /lee(/|$)(.*)                 #正則表達式匹配/lee/,/lee/abc
        pathType: ImplementationSpecific

#測試
[root@reg ~]# curl -Lk https://myapp-tls.timinglee.org/lee/hostname.html -ulee:lee
myapp-v1-7479d6c54d-j9xc6
6.6 Canary金絲雀發布
6.6.1 什么是金絲雀發布
金絲雀發布(Canary Release)也稱為灰度發布,是一種軟件發布策略。
主要目的是在將新版本的軟件全面推廣到生產環境之前,先在一小部分用戶或服務器上進行測試和驗證,以降低因新版本引入重大問題而對整個系統造成的影響。
是一種Pod的發布方式。金絲雀發布采取先添加、再刪除的方式,保證Pod的總量不低于期望值。并且在更新部分Pod后,暫停更新,當確認新Pod版本運行正常后再進行其他版本的Pod的更新。
6.6.2 Canary發布方式
6.6.2.1 基于header(http包頭)灰度
-
通過Annotaion擴展
-
創建灰度ingress,配置灰度頭部key以及value
-
灰度流量驗證完畢后,切換正式ingress到新版本
-
之前我們在做升級時可以通過控制器做滾動更新,默認每次更新25%;利用header灰度可以使升級更為平滑,通過key和value測試新的業務體系是否有問題。
示例:
#建立版本1的ingress
[root@k8s-master app]# vim ingress7.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
  name: myapp-v1-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp-v1
            port:
              number: 80
        path: /
        pathType: Prefix

[root@k8s-master app]# kubectl describe ingress myapp-v1-ingress
Name: myapp-v1-ingress
Labels: <none>
Namespace: default
Address: 172.25.254.10
Ingress Class: nginx
Default backend: <default>
Rules:
  Host                  Path  Backends
  ----                  ----  --------
  myapp.timinglee.org
                        /   myapp-v1:80 (10.244.2.31:80)
Annotations:            <none>
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  Sync    44s (x2 over 73s)  nginx-ingress-controller  Scheduled for sync

#建立基于header的ingress
[root@k8s-master app]# vim ingress8.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "version"
    nginx.ingress.kubernetes.io/canary-by-header-value: "2"
  name: myapp-v2-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp-v2
            port:
              number: 80
        path: /
        pathType: Prefix
[root@k8s-master app]# kubectl apply -f ingress8.yml
ingress.networking.k8s.io/myapp-v2-ingress created
[root@k8s-master app]# kubectl describe ingress myapp-v2-ingress
Name: myapp-v2-ingress
Labels: <none>
Namespace: default
Address:
Ingress Class: nginx
Default backend: <default>
Rules:
  Host                  Path  Backends
  ----                  ----  --------
  myapp.timinglee.org
                        /   myapp-v2:80 (10.244.2.32:80)
Annotations:            nginx.ingress.kubernetes.io/canary: true
                        nginx.ingress.kubernetes.io/canary-by-header: version
                        nginx.ingress.kubernetes.io/canary-by-header-value: 2
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  Sync    21s   nginx-ingress-controller  Scheduled for sync

#測試:
[root@reg ~]# curl myapp.timinglee.org
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@reg ~]# curl -H "version: 2" myapp.timinglee.org
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
6.6.2.2 基于權重的灰度發布
-
通過Annotaion拓展
-
創建灰度ingress,配置灰度權重以及總權重
-
灰度流量驗證完畢后,切換正式ingress到新版本
示例
#基于權重的灰度發布
[root@k8s-master app]# vim ingress8.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"        #更改權重值
    nginx.ingress.kubernetes.io/canary-weight-total: "100"
  name: myapp-v2-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.timinglee.org
    http:
      paths:
      - backend:
          service:
            name: myapp-v2
            port:
              number: 80
        path: /
        pathType: Prefix

[root@k8s-master app]# kubectl apply -f ingress8.yml
ingress.networking.k8s.io/myapp-v2-ingress created#測試:
[root@reg ~]# vim check_ingress.sh
#!/bin/bash
v1=0
v2=0

for (( i=0; i<100; i++))
do
    response=`curl -s myapp.timinglee.org | grep -c v1`
    v1=`expr $v1 + $response`
    v2=`expr $v2 + 1 - $response`
done
echo "v1:$v1, v2:$v2"

[root@reg ~]# sh check_ingress.sh
v1:90, v2:10

#更改完畢權重后繼續測試可觀察變化
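灰度驗證完成后,可以逐步調大canary-weight繼續觀察,確認新版本沒有問題后把正式ingress切到新版本并回收灰度ingress(示例命令):

[root@k8s-master app]# kubectl annotate ingress myapp-v2-ingress nginx.ingress.kubernetes.io/canary-weight="50" --overwrite    #調整權重后再次運行check_ingress.sh觀察比例變化
[root@k8s-master app]# kubectl delete -f ingress8.yml                                                                          #驗證完畢后回收灰度ingress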
七 k8s的存儲
7.1 configmap
-
configMap用于保存配置數據,以鍵值對形式存儲。
-
configMap 資源提供了向 Pod 注入配置數據的方法。
-
鏡像和配置文件解耦,以便實現鏡像的可移植性和可復用性。
-
etcd限制了文件大小不能超過1M
7.1.1 configmap的使用場景
-
填充環境變量的值
-
設置容器內的命令行參數
-
填充卷的配置文件
7.1.2?configmap創建方式
7.1.2.1 字面值創建
[root@k8s-master ~]# kubectl create cm lee-config --from-literal fname=timing --from-literal lname=lee
configmap/lee-config created

[root@k8s-master ~]# kubectl describe cm lee-config
Name: lee-config
Namespace: default
Labels: <none>
Annotations:  <none>

Data          #鍵值信息顯示
====
fname:
----
timing
lname:
----
lee

BinaryData
====

Events:  <none>
7.1.2.2 通過文件創建
[root@k8s-master ~]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 114.114.114.114

[root@k8s-master ~]# kubectl create cm lee2-config --from-file /etc/resolv.conf
configmap/lee2-config created
[root@k8s-master ~]# kubectl describe cm lee2-config
Name: lee2-config
Namespace: default
Labels: <none>
Annotations:  <none>

Data
====
resolv.conf:
----
# Generated by NetworkManager
nameserver 114.114.114.114

BinaryData
====

Events:  <none>
7.1.2.3 通過目錄創建
[root@k8s-master ~]# mkdir leeconfig
[root@k8s-master ~]# cp /etc/fstab /etc/rc.d/rc.local leeconfig/
[root@k8s-master ~]# kubectl create cm lee3-config --from-file leeconfig/
configmap/lee3-config created
[root@k8s-master ~]# kubectl describe cm lee3-config
Name: lee3-config
Namespace: default
Labels: <none>
Annotations:  <none>

Data
====
fstab:
----
#
# /etc/fstab
# Created by anaconda on Fri Jul 26 13:04:22 2024
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
UUID=6577c44f-9c1c-44f9-af56-6d6b505fcfa8 / xfs defaults 0 0
UUID=eec689b4-73d5-4f47-b999-9a585bb6da1d /boot xfs defaults 0 0
UUID=ED00-0E42 /boot/efi vfat umask=0077,shortname=winnt 0 2
#UUID=be2f2006-6072-4c77-83d4-f2ff5e237f9f none                    swap    defaults        0 0

rc.local:
----
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.

touch /var/lock/subsys/local
mount /dev/cdrom /rhel9

BinaryData
====

Events:  <none>
7.1.2.4 通過yaml文件創建
[root@k8s-master ~]# kubectl create cm lee4-config --from-literal db_host=172.25.254.100 --from-literal db_port=3306 --dry-run=client -o yaml > lee-config.yaml

[root@k8s-master ~]# vim lee-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: lee4-config
data:
  db_host: "172.25.254.100"
  db_port: "3306"

[root@k8s-master ~]# kubectl describe cm lee4-config
Name: lee4-config
Namespace: default
Labels: <none>
Annotations:  <none>

Data
====
db_host:
----
172.25.254.100
db_port:
----
3306

BinaryData
====

Events:  <none>
7.1.2.5 configmap的使用方式
-
通過環境變量的方式直接傳遞給pod
-
通過pod的 命令行運行方式
-
作為volume的方式掛載到pod內
7.1.2.5.1 使用configmap填充環境變量
#將cm中的內容映射為指定變量
[root@k8s-master ~]# vim testpod1.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  containers:
  - image: busyboxplus:latest
    name: testpod
    command:
    - /bin/sh
    - -c
    - env
    env:
    - name: key1
      valueFrom:
        configMapKeyRef:
          name: lee4-config
          key: db_host
    - name: key2
      valueFrom:
        configMapKeyRef:
          name: lee4-config
          key: db_port
  restartPolicy: Never

[root@k8s-master ~]# kubectl apply -f testpod.yml
pod/testpod created

[root@k8s-master ~]# kubectl logs pods/testpod
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
MYAPP_V1_SERVICE_HOST=10.104.84.65
HOSTNAME=testpod
SHLVL=1
MYAPP_V2_SERVICE_HOST=10.105.246.219
HOME=/
MYAPP_V1_PORT=tcp://10.104.84.65:80
MYAPP_V1_SERVICE_PORT=80
MYAPP_V2_SERVICE_PORT=80
MYAPP_V2_PORT=tcp://10.105.246.219:80
MYAPP_V1_PORT_80_TCP_ADDR=10.104.84.65
MYAPP_V2_PORT_80_TCP_ADDR=10.105.246.219
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
MYAPP_V1_PORT_80_TCP_PORT=80
MYAPP_V2_PORT_80_TCP_PORT=80
MYAPP_V1_PORT_80_TCP_PROTO=tcp
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
MYAPP_V2_PORT_80_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
key1=172.25.254.100
key2=3306
MYAPP_V1_PORT_80_TCP=tcp://10.104.84.65:80
MYAPP_V2_PORT_80_TCP=tcp://10.105.246.219:80
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
PWD=/
KUBERNETES_SERVICE_HOST=10.96.0.1

#把cm中的值直接映射為變量
[root@k8s-master ~]# vim testpod2.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  containers:
  - image: busyboxplus:latest
    name: testpod
    command:
    - /bin/sh
    - -c
    - env
    envFrom:
    - configMapRef:
        name: lee4-config
  restartPolicy: Never

#查看日志
[root@k8s-master ~]# kubectl logs pods/testpod
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
MYAPP_V1_SERVICE_HOST=10.104.84.65
HOSTNAME=testpod
SHLVL=1
MYAPP_V2_SERVICE_HOST=10.105.246.219
HOME=/
db_port=3306
MYAPP_V1_SERVICE_PORT=80
MYAPP_V1_PORT=tcp://10.104.84.65:80
MYAPP_V2_SERVICE_PORT=80
MYAPP_V2_PORT=tcp://10.105.246.219:80
MYAPP_V1_PORT_80_TCP_ADDR=10.104.84.65
MYAPP_V2_PORT_80_TCP_ADDR=10.105.246.219
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
MYAPP_V1_PORT_80_TCP_PORT=80
age=18
MYAPP_V2_PORT_80_TCP_PORT=80
MYAPP_V1_PORT_80_TCP_PROTO=tcp
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
MYAPP_V2_PORT_80_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PROTO=tcp
MYAPP_V1_PORT_80_TCP=tcp://10.104.84.65:80
MYAPP_V2_PORT_80_TCP=tcp://10.105.246.219:80
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
name=lee
PWD=/
KUBERNETES_SERVICE_HOST=10.96.0.1
db_host=172.25.254.100

#在pod命令行中使用變量
[root@k8s-master ~]# vim testpod3.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  containers:
  - image: busyboxplus:latest
    name: testpod
    command:
    - /bin/sh
    - -c
    - echo ${db_host} ${db_port}          #變量調用需要用${}
    envFrom:
    - configMapRef:
        name: lee4-config
  restartPolicy: Never

#查看日志
[root@k8s-master ~]# kubectl logs pods/testpod
172.25.254.100 3306
7.1.2.5.2 通過數據卷使用configmap
[root@k8s-master ~]# vim testpod4.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  containers:
  - image: busyboxplus:latest
    name: testpod
    command:
    - /bin/sh
    - -c
    - cat /config/db_host
    volumeMounts:                 #調用卷策略
    - name: config-volume         #卷名稱
      mountPath: /config
  volumes:                        #聲明卷的配置
  - name: config-volume           #卷名稱
    configMap:
      name: lee4-config
  restartPolicy: Never

#查看日志
[root@k8s-master ~]# kubectl logs testpod
172.25.254.100
7.1.2.5.3 利用configMap填充pod的配置文件
#建立配置文件模板
[root@k8s-master ~]# vim nginx.conf
server {
  listen 8000;
  server_name _;
  root /usr/share/nginx/html;
  index index.html;
}

#利用模板生成cm
[root@k8s-master ~]# kubectl create cm nginx-conf --from-file nginx.conf
configmap/nginx-conf created
[root@k8s-master ~]# kubectl describe cm nginx-conf
Name: nginx-conf
Namespace: default
Labels: <none>
Annotations:  <none>

Data
====
nginx.conf:
----
server {
  listen 8000;
  server_name _;
  root /usr/share/nginx/html;
  index index.html;
}

BinaryData
====

Events:  <none>

#建立nginx控制器文件
[root@k8s-master ~]# kubectl create deployment nginx --image nginx:latest --replicas 1 --dry-run=client -o yaml > nginx.yml

#設定nginx.yml中的卷
[root@k8s-master ~]# vim nginx.yml
[root@k8s-master ~]# cat nginx.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        volumeMounts:
        - name: config-volume
          mountPath: /etc/nginx/conf.d
      volumes:
      - name: config-volume
        configMap:
          name: nginx-conf

#測試
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-8487c65cfc-cz5hd 1/1 Running 0 3m7s 10.244.2.38 k8s-node2 <none> <none>
[root@k8s-master ~]# curl 10.244.2.38:8000
7.1.2.5.4 通過熱更新cm修改配置
[root@k8s-master ~]# kubectl edit cm nginx-conf
apiVersion: v1
data:
  nginx.conf: |
    server {
      listen 8080;          #端口改為8080
      server_name _;
      root /usr/share/nginx/html;
      index index.html;
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2024-09-07T02:49:20Z"
  name: nginx-conf
  namespace: default
  resourceVersion: "153055"
  uid: 20bee584-2dab-4bd5-9bcb-78318404fa7a

#查看配置文件
[root@k8s-master ~]# kubectl exec pods/nginx-8487c65cfc-cz5hd -- cat /etc/nginx/conf.d/nginx.conf
server {
  listen 8080;
  server_name _;
  root /usr/share/nginx/html;
  index index.html;
}
Note:
配置文件修改后不會生效,需要刪除pod后控制器會重建pod,這時就生效了
[root@k8s-master ~]# kubectl delete pods nginx-8487c65cfc-cz5hd
pod "nginx-8487c65cfc-cz5hd" deleted

[root@k8s-master ~]# curl 10.244.2.41:8080
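如果pod是由控制器管理的,也可以直接滾動重啟整個deployment讓新配置生效,不用手動刪除pod(示例命令):

[root@k8s-master ~]# kubectl rollout restart deployment nginx        #觸發deployment重建pod,新pod會讀取更新后的configmap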
7.2 secrets配置管理
7.2.1 secrets的功能介紹
-
Secret 對象類型用來保存敏感信息,例如密碼、OAuth 令牌和 ssh key。
-
敏感信息放在 secret 中比放在 Pod 的定義或者容器鏡像中來說更加安全和靈活
-
Pod 可以用兩種方式使用 secret:
-
作為 volume 中的文件被掛載到 pod 中的一個或者多個容器里。
-
當 kubelet 為 pod 拉取鏡像時使用。
-
Secret的類型:
-
Service Account:Kubernetes 自動創建包含訪問 API 憑據的 secret,并自動修改 pod 以使用此類型的 secret。
-
Opaque:使用base64編碼存儲信息,可以通過base64 --decode解碼獲得原始數據,因此安全性弱。
-
kubernetes.io/dockerconfigjson:用于存儲docker registry的認證信息
7.2.2 secrets的創建
在創建secrets時我們可以用命令的方法或者yaml文件的方法
7.2.2.1從文件創建
[root@k8s-master secrets]# echo -n timinglee > username.txt
[root@k8s-master secrets]# echo -n lee > password.txt
[root@k8s-master secrets]# kubectl create secret generic userlist --from-file username.txt --from-file password.txt
secret/userlist created
[root@k8s-master secrets]# kubectl get secrets userlist -o yaml
apiVersion: v1
data:
  password.txt: bGVl
  username.txt: dGltaW5nbGVl
kind: Secret
metadata:
  creationTimestamp: "2024-09-07T07:30:42Z"
  name: userlist
  namespace: default
  resourceVersion: "177216"
  uid: 9d76250c-c16b-4520-b6f2-cc6a8ad25594
type: Opaque
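secret中的數據只是base64編碼,并不是加密,可以直接解碼查看(示例):

[root@k8s-master secrets]# echo dGltaW5nbGVl | base64 -d
timinglee
[root@k8s-master secrets]# echo bGVl | base64 -d
lee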
7.2.2.2 編寫yaml文件
[root@k8s-master secrets]# echo -n timinglee | base64
dGltaW5nbGVl
[root@k8s-master secrets]# echo -n lee | base64
bGVl

[root@k8s-master secrets]# kubectl create secret generic userlist --dry-run=client -o yaml > userlist.yml

[root@k8s-master secrets]# vim userlist.yml
apiVersion: v1
kind: Secret
metadata:
  creationTimestamp: null
  name: userlist
type: Opaque
data:
  username: dGltaW5nbGVl
  password: bGVl

[root@k8s-master secrets]# kubectl apply -f userlist.yml
secret/userlist created

[root@k8s-master secrets]# kubectl describe secrets userlist
Name: userlist
Namespace: default
Labels: <none>
Annotations:          <none>

Type:  Opaque

Data
====
password: 3 bytes
username:  9 bytes
7.2.3 Secret的使用方法
7.2.3.1 將Secret掛載到Volume中
[root@k8s-master secrets]# kubectl run nginx --image nginx --dry-run=client -o yaml > pod1.yaml

#向固定路徑映射
[root@k8s-master secrets]# vim pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - name: secrets
      mountPath: /secret
      readOnly: true
  volumes:
  - name: secrets
    secret:
      secretName: userlist

[root@k8s-master secrets]# kubectl apply -f pod1.yaml
pod/nginx created

[root@k8s-master secrets]# kubectl exec pods/nginx -it -- /bin/bash
root@nginx:/# cat /secret/
cat: /secret/: Is a directory
root@nginx:/# cd /secret/
root@nginx:/secret# ls
password username
root@nginx:/secret# cat password
lee
root@nginx:/secret# cat username
timinglee
root@nginx:/secret#
7.2.3.2 向指定路徑映射 secret 密鑰
#向指定路徑映射
[root@k8s-master secrets]# vim pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx1
  name: nginx1
spec:
  containers:
  - image: nginx
    name: nginx1
    volumeMounts:
    - name: secrets
      mountPath: /secret
      readOnly: true
  volumes:
  - name: secrets
    secret:
      secretName: userlist
      items:
      - key: username
        path: my-users/username

[root@k8s-master secrets]# kubectl apply -f pod2.yaml
pod/nginx1 created
[root@k8s-master secrets]# kubectl exec pods/nginx1 -it -- /bin/bash
root@nginx1:/# cd secret/
root@nginx1:/secret# ls
my-users
root@nginx1:/secret# cd my-users
root@nginx1:/secret/my-users# ls
username
root@nginx1:/secret/my-users# cat username
7.2.3.3 將Secret設置為環境變量
[root@k8s-master secrets]# vim pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - image: busybox
    name: busybox
    command:
    - /bin/sh
    - -c
    - env
    env:
    - name: USERNAME
      valueFrom:
        secretKeyRef:
          name: userlist
          key: username
    - name: PASS
      valueFrom:
        secretKeyRef:
          name: userlist
          key: password
  restartPolicy: Never

[root@k8s-master secrets]# kubectl apply -f pod3.yaml
pod/busybox created
[root@k8s-master secrets]# kubectl logs pods/busybox
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
HOSTNAME=busybox
MYAPP_V1_SERVICE_HOST=10.104.84.65
MYAPP_V2_SERVICE_HOST=10.105.246.219
SHLVL=1
HOME=/root
MYAPP_V1_SERVICE_PORT=80
MYAPP_V1_PORT=tcp://10.104.84.65:80
MYAPP_V2_SERVICE_PORT=80
MYAPP_V2_PORT=tcp://10.105.246.219:80
MYAPP_V1_PORT_80_TCP_ADDR=10.104.84.65
USERNAME=timinglee
MYAPP_V2_PORT_80_TCP_ADDR=10.105.246.219
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
MYAPP_V1_PORT_80_TCP_PORT=80
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
MYAPP_V2_PORT_80_TCP_PORT=80
MYAPP_V1_PORT_80_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
MYAPP_V2_PORT_80_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PROTO=tcp
MYAPP_V1_PORT_80_TCP=tcp://10.104.84.65:80
MYAPP_V2_PORT_80_TCP=tcp://10.105.246.219:80
PASS=lee
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_HOST=10.96.0.1
PWD=/
7.2.3.4 存儲docker registry的認證信息
建立私有倉庫并上傳鏡像
#登陸倉庫
[root@k8s-master secrets]# docker login reg.timinglee.org
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credential-stores

Login Succeeded

#上傳鏡像
[root@k8s-master secrets]# docker tag timinglee/game2048:latest reg.timinglee.org/timinglee/game2048:latest
[root@k8s-master secrets]# docker push reg.timinglee.org/timinglee/game2048:latest
The push refers to repository [reg.timinglee.org/timinglee/game2048]
88fca8ae768a: Pushed
6d7504772167: Pushed
192e9fad2abc: Pushed
36e9226e74f8: Pushed
011b303988d2: Pushed
latest: digest: sha256:8a34fb9cb168c420604b6e5d32ca6d412cb0d533a826b313b190535c03fe9390 size: 1364
#建立用于docker認證的secret
[root@k8s-master secrets]# kubectl create secret docker-registry docker-auth --docker-server reg.timinglee.org --docker-username admin --docker-password lee --docker-email timinglee@timinglee.org
secret/docker-auth created
[root@k8s-master secrets]# vim pod3.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: game2048
  name: game2048
spec:
  containers:
  - image: reg.timinglee.org/timinglee/game2048:latest
    name: game2048
  imagePullSecrets:                 #不設定docker認證時無法下載鏡像
  - name: docker-auth

[root@k8s-master secrets]# kubectl get pods
NAME READY STATUS RESTARTS AGE
game2048 1/1 Running 0 4s
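除了在每個pod中單獨寫imagePullSecrets,也可以把認證secret綁定到serviceaccount上,讓使用該sa的pod自動帶上認證(示例命令,這里綁定到default這個sa):

[root@k8s-master secrets]# kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "docker-auth"}]}'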
7.3 volumes配置管理
-
容器中文件在磁盤上是臨時存放的,這給容器中運行的特殊應用程序帶來一些問題
-
當容器崩潰時,kubelet將重新啟動容器,容器中的文件將會丟失,因為容器會以干凈的狀態重建。
-
當在一個 Pod 中同時運行多個容器時,常常需要在這些容器之間共享文件。
-
Kubernetes 卷具有明確的生命周期與使用它的 Pod 相同
-
卷比 Pod 中運行的任何容器的存活期都長,在容器重新啟動時數據也會得到保留
-
當一個 Pod 不再存在時,卷也將不再存在。
-
Kubernetes 可以支持許多類型的卷,Pod 也能同時使用任意數量的卷。
-
卷不能掛載到其他卷,也不能與其他卷有硬鏈接。 Pod 中的每個容器必須獨立地指定每個卷的掛載位置。
7.3.1 kubernets支持的卷的類型
官網:卷 | Kubernetes
k8s支持的卷的類型如下:
-
awsElasticBlockStore 、azureDisk、azureFile、cephfs、cinder、configMap、csi
-
downwardAPI、emptyDir、fc (fibre channel)、flexVolume、flocker
-
gcePersistentDisk、gitRepo (deprecated)、glusterfs、hostPath、iscsi、local、
-
nfs、persistentVolumeClaim、projected、portworxVolume、quobyte、rbd
-
scaleIO、secret、storageos、vsphereVolume
7.3.2 emptyDir卷
功能:
當Pod指定到某個節點上時,首先創建的是一個emptyDir卷,并且只要 Pod 在該節點上運行,卷就一直存在。卷最初是空的。 盡管 Pod 中的容器掛載 emptyDir 卷的路徑可能相同也可能不同,但是這些容器都可以讀寫 emptyDir 卷中相同的文件。 當 Pod 因為某些原因被從節點上刪除時,emptyDir 卷中的數據也會永久刪除
emptyDir 的使用場景:
-
緩存空間,例如基于磁盤的歸并排序。
-
耗時較長的計算任務提供檢查點,以便任務能方便地從崩潰前狀態恢復執行。
-
在 Web 服務器容器服務數據時,保存內容管理器容器獲取的文件。
示例:
[root@k8s-master volumes]# vim pod1.yml
apiVersion: v1
kind: Pod
metadata:
  name: vol1
spec:
  containers:
  - image: busyboxplus:latest
    name: vm1
    command:
    - /bin/sh
    - -c
    - sleep 30000000
    volumeMounts:
    - mountPath: /cache
      name: cache-vol
  - image: nginx:latest
    name: vm2
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: cache-vol
  volumes:
  - name: cache-vol
    emptyDir:
      medium: Memory
      sizeLimit: 100Mi

[root@k8s-master volumes]# kubectl apply -f pod1.yml

#查看pod中卷的使用情況
[root@k8s-master volumes]# kubectl describe pods vol1

#測試效果
[root@k8s-master volumes]# kubectl exec -it pods/vol1 -c vm1 -- /bin/sh
/ # cd /cache/
/cache # ls
/cache # curl localhost
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.27.1</center>
</body>
</html>
/cache # echo timinglee > index.html
/cache # curl localhost
timinglee
/cache # dd if=/dev/zero of=bigfile bs=1M count=101
dd: writing 'bigfile': No space left on device
101+0 records in
99+1 records out
7.3.3 hostpath卷
功能:
hostPath 卷能將主機節點文件系統上的文件或目錄掛載到您的 Pod 中,不會因為pod關閉而被刪除
hostPath 的一些用法
-
運行一個需要訪問 Docker 引擎內部機制的容器,掛載 /var/lib/docker 路徑。
-
在容器中運行 cAdvisor(監控) 時,以 hostPath 方式掛載 /sys。
-
允許 Pod 指定給定的 hostPath 在運行 Pod 之前是否應該存在,是否應該創建以及應該以什么方式存在
hostPath的安全隱患
-
具有相同配置(例如從 podTemplate 創建)的多個 Pod 會由于節點上文件的不同而在不同節點上有不同的行為。
-
當 Kubernetes 按照計劃添加資源感知的調度時,這類調度機制將無法考慮由 hostPath 使用的資源。
-
基礎主機上創建的文件或目錄只能由 root 用戶寫入。您需要在 特權容器 中以 root 身份運行進程,或者修改主機上的文件權限以便容器能夠寫入 hostPath 卷。
示例:
[root@k8s-master volumes]# vim pod2.yml
apiVersion: v1
kind: Pod
metadata:
  name: vol1
spec:
  containers:
  - image: nginx:latest
    name: vm1
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: cache-vol
  volumes:
  - name: cache-vol
    hostPath:
      path: /data
      type: DirectoryOrCreate           #當/data目錄不存在時自動建立

#測試:
[root@k8s-master volumes]# kubectl apply -f pod2.yml
pod/vol1 created
[root@k8s-master volumes]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
vol1   1/1     Running   0          10s   10.244.2.48   k8s-node2   <none>           <none>

[root@k8s-master volumes]# curl 10.244.2.48
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.27.1</center>
</body>
</html>

[root@k8s-node2 ~]# echo timinglee > /data/index.html
[root@k8s-master volumes]# curl 10.244.2.48
timinglee

#當pod被刪除后hostPath不會被清理
[root@k8s-master volumes]# kubectl delete -f pod2.yml
pod "vol1" deleted
[root@k8s-node2 ~]# ls /data/
index.html
7.3.4 nfs卷
NFS 卷允許將一個現有的 NFS 服務器上的目錄掛載到 Kubernetes 中的 Pod 中。這對于在多個 Pod 之間共享數據或持久化存儲數據非常有用
例如,如果有多個容器需要訪問相同的數據集,或者需要將容器中的數據持久保存到外部存儲,NFS 卷可以提供一種方便的解決方案。
7.3.4.1 部署一臺nfs共享主機并在所有k8s節點中安裝nfs-utils
#部署nfs主機
[root@reg ~]# dnf install nfs-utils -y
[root@reg ~]# systemctl enable --now nfs-server.service

[root@reg ~]# vim /etc/exports
/nfsdata   *(rw,sync,no_root_squash)

[root@reg ~]# exportfs -rv
exporting *:/nfsdata

[root@reg ~]# showmount -e
Export list for reg.timinglee.org:
/nfsdata *

#在k8s所有節點中安裝nfs-utils
[root@k8s-master & node1 & node2 ~]# dnf install nfs-utils -y
7.3.4.2 部署nfs卷
[root@k8s-master volumes]# vim pod3.yml
apiVersion: v1
kind: Pod
metadata:
  name: vol1
spec:
  containers:
  - image: nginx:latest
    name: vm1
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: cache-vol
  volumes:
  - name: cache-vol
    nfs:
      server: 172.25.254.250
      path: /nfsdata

[root@k8s-master volumes]# kubectl apply -f pod3.yml
pod/vol1 created

#測試
[root@k8s-master volumes]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
vol1 1/1 Running 0 100s 10.244.2.50 k8s-node2 <none> <none>
[root@k8s-master volumes]# curl 10.244.2.50
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.27.1</center>
</body>
</html>

##在nfs主機中
[root@reg ~]# echo timinglee > /nfsdata/index.html
[root@k8s-master volumes]# curl 10.244.2.50
timinglee
7.3.5 PersistentVolume持久卷
7.3.5.1 靜態持久卷pv與靜態持久卷聲明pvc
PersistentVolume(持久卷,簡稱PV)
-
pv是集群內由管理員提供的網絡存儲的一部分。
-
PV也是集群中的一種資源。是一種volume插件,
-
但是它的生命周期卻是和使用它的Pod相互獨立的。
-
PV這個API對象,捕獲了諸如NFS、ISCSI、或其他云存儲系統的實現細節
-
pv有兩種提供方式:靜態和動態
-
靜態PV:集群管理員創建多個PV,它們攜帶著真實存儲的詳細信息,它們存在于Kubernetes API中,并可用于存儲使用
-
動態PV:當管理員創建的靜態PV都不匹配用戶的PVC時,集群可能會嘗試專門地供給volume給PVC。這種供給基于StorageClass
-
PersistentVolumeClaim(持久卷聲明,簡稱PVC)
-
是用戶的一種存儲請求
-
它和Pod類似,Pod消耗Node資源,而PVC消耗PV資源
-
Pod能夠請求特定的資源(如CPU和內存);PVC能夠請求指定大小和訪問模式的持久卷
-
PVC與PV的綁定是一對一的映射。沒找到匹配的PV,那么PVC會無限期得處于unbound未綁定狀態
volumes訪問模式
-
ReadWriteOnce -- 該volume只能被單個節點以讀寫的方式映射
-
ReadOnlyMany -- 該volume可以被多個節點以只讀方式映射
-
ReadWriteMany -- 該volume可以被多個節點以讀寫的方式映射
-
在命令行中,訪問模式可以簡寫為:
-
RWO - ReadWriteOnce
-
ROX - ReadOnlyMany
-
RWX – ReadWriteMany
-
volumes回收策略
-
Retain:保留,需要手動回收
-
Recycle:回收,自動刪除卷中數據(在當前版本中已經廢棄)
-
Delete:刪除,相關聯的存儲資產,如AWS EBS,GCE PD,Azure Disk,or OpenStack Cinder卷都會被刪除
注意:
只有NFS和HostPath支持回收利用
AWS EBS,GCE PD,Azure Disk,or OpenStack Cinder卷支持刪除操作
volumes狀態說明
-
Available 卷是一個空閑資源,尚未綁定到任何申領
-
Bound 該卷已經綁定到某申領
-
Released 所綁定的申領已被刪除,但是關聯存儲資源尚未被集群回收
-
Failed 卷的自動回收操作失敗
靜態pv實例:
#在nfs主機中建立實驗目錄
[root@reg ~]# mkdir /nfsdata/pv{1..3}

#編寫創建pv的yml文件,pv是集群資源,不在任何namespace中
[root@k8s-master pvc]# vim pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv1
    server: 172.25.254.250
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv2
spec:
  capacity:
    storage: 15Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv2
    server: 172.25.254.250
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv3
spec:
  capacity:
    storage: 25Gi
  volumeMode: Filesystem
  accessModes:
  - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv3
    server: 172.25.254.250

[root@k8s-master pvc]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
pv1 5Gi RWO Retain Available nfs <unset> 4m50s
pv2 15Gi RWX Retain Available nfs <unset> 4m50s
pv3    25Gi       ROX            Retain           Available           nfs            <unset>                          4m50s

#建立pvc,pvc是pv使用的申請,需要保證和pod在一個namespace中
[root@k8s-master pvc]# vim pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  storageClassName: nfs
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc2
spec:
  storageClassName: nfs
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc3
spec:
  storageClassName: nfs
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      storage: 15Gi
[root@k8s-master pvc]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
pvc1 Bound pv1 5Gi RWO nfs <unset> 5s
pvc2 Bound pv2 15Gi RWX nfs <unset> 4s
pvc3   Bound    pv3      25Gi       ROX            nfs            <unset>                 4s

#在其他namespace中無法應用
[root@k8s-master pvc]# kubectl -n kube-system get pvc
No resources found in kube-system namespace.
在pod中使用pvc
[root@k8s-master pvc]# vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: timinglee
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: vol1
  volumes:
  - name: vol1
    persistentVolumeClaim:
      claimName: pvc1

[root@k8s-master pvc]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
timinglee 1/1 Running 0 83s 10.244.2.54 k8s-node2 <none> <none>
[root@k8s-master pvc]# kubectl exec -it pods/timinglee -- /bin/bash
root@timinglee:/# curl localhost
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.27.1</center>
</body>
</html>
root@timinglee:/# cd /usr/share/nginx/
root@timinglee:/usr/share/nginx# ls
html
root@timinglee:/usr/share/nginx# cd html/
root@timinglee:/usr/share/nginx/html# ls

[root@reg ~]# echo timinglee > /nfsdata/pv1/index.html

[root@k8s-master pvc]# kubectl exec -it pods/timinglee -- /bin/bash
root@timinglee:/# cd /usr/share/nginx/html/
root@timinglee:/usr/share/nginx/html# ls
index.html
7.4 存儲類storageclass
官網: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
7.4.1 StorageClass說明
-
StorageClass提供了一種描述存儲類(class)的方法,不同的class可能會映射到不同的服務質量等級和備份策略或其他策略等。
-
每個 StorageClass 都包含 provisioner、parameters 和 reclaimPolicy 字段, 這些字段會在StorageClass需要動態分配 PersistentVolume 時會使用到
7.4.2 StorageClass的屬性
屬性說明:存儲類 | Kubernetes
Provisioner(存儲分配器):用來決定使用哪個卷插件分配 PV,該字段必須指定。可以指定內部分配器,也可以指定外部分配器。外部分配器的代碼地址為: kubernetes-incubator/external-storage,其中包括NFS和Ceph等。
Reclaim Policy(回收策略):通過reclaimPolicy字段指定創建的Persistent Volume的回收策略,回收策略包括:Delete 或者 Retain,沒有指定默認為Delete。
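下面是一個最小的StorageClass字段示意(名稱nfs-demo為假設,provisioner以後文使用的NFS分配器為例,僅作演示):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-demo                 #假設的名稱,僅作演示
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"        #該provisioner支持的參數:刪除PVC時歸檔數據
reclaimPolicy: Retain            #不指定時默認為Delete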
7.4.3 存儲分配器NFS Client Provisioner
源碼地址:https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
-
NFS Client Provisioner是一個automatic provisioner,使用NFS作為存儲,自動創建PV和對應的PVC,本身不提供NFS存儲,需要外部先有一套NFS存儲服務。
-
PV以 ${namespace}-${pvcName}-${pvName}的命名格式提供(在NFS服務器上)
-
PV回收的時候以 archived-${namespace}-${pvcName}-${pvName} 的命名格式(在NFS服務器上)
7.4.4 部署NFS Client Provisioner
7.4.4.1 創建sa并授權
[root@k8s-master storageclass]# vim rbac.yml
apiVersion: v1
kind: Namespace
metadata:
  name: nfs-client-provisioner
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: nfs-client-provisioner
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs-client-provisioner
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: nfs-client-provisioner
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

#查看rbac信息
[root@k8s-master storageclass]# kubectl apply -f rbac.yml
namespace/nfs-client-provisioner created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
[root@k8s-master storageclass]# kubectl -n nfs-client-provisioner get sa
NAME SECRETS AGE
default 0 14s
nfs-client-provisioner 0 14s
7.4.4.2 部署應用
[root@k8s-master storageclass]# vim deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: sig-storage/nfs-subdir-external-provisioner:v4.0.2
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: k8s-sigs.io/nfs-subdir-external-provisioner
        - name: NFS_SERVER
          value: 172.25.254.250
        - name: NFS_PATH
          value: /nfsdata
      volumes:
      - name: nfs-client-root
        nfs:
          server: 172.25.254.250
          path: /nfsdata

[root@k8s-master storageclass]# kubectl -n nfs-client-provisioner get deployments.apps nfs-client-provisioner
NAME READY UP-TO-DATE AVAILABLE AGE
nfs-client-provisioner 1/1 1 1 86s
7.4.4.3 創建存儲類
[root@k8s-master storageclass]# vim class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"

[root@k8s-master storageclass]# kubectl apply -f class.yaml
storageclass.storage.k8s.io/nfs-client created
[root@k8s-master storageclass]# kubectl get storageclasses.storage.k8s.io
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-client k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 9s
7.4.4.4 創建pvc
[root@k8s-master storageclass]# vim pvc.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1G
[root@k8s-master storageclass]# kubectl apply -f pvc.yml
persistentvolumeclaim/test-claim created

[root@k8s-master storageclass]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
test-claim Bound pvc-7782a006-381a-440a-addb-e9d659b8fe0b 1Gi RWX nfs-client <unset> 21m
7.4.4.5 創建測試pod
[root@k8s-master storageclass]# vim pod.yml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox
    command:
    - "/bin/sh"
    args:
    - "-c"
    - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
    - name: nfs-pvc
      mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
  - name: nfs-pvc
    persistentVolumeClaim:
      claimName: test-claim

[root@k8s-master storageclass]# kubectl apply -f pod.yml

#在nfs主機中查看自動創建的目錄
[root@reg ~]# ls /nfsdata/default-test-claim-pvc-b1aef9cc-4be9-4d2a-8c5e-0fe7716247e2/
SUCCESS
7.4.4.6 設置默認存儲類
-
在未設定默認存儲類時pvc必須指定使用類的名稱
-
在設定存儲類后創建pvc時可以不用指定storageClassName
#一次性指定多個pvc
[root@k8s-master pvc]# vim pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  storageClassName: nfs-client
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc2
spec:
  storageClassName: nfs-client
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc3
spec:
  storageClassName: nfs-client
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      storage: 15Gi

[root@k8s-master pvc]# kubectl apply -f pvc.yml
persistentvolumeclaim/pvc1 created
persistentvolumeclaim/pvc2 created
persistentvolumeclaim/pvc3 created
[root@k8s-master pvc]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
pvc1 Bound pvc-25a3c8c5-2797-4240-9270-5c51caa211b8 1Gi RWO nfs-client <unset> 4s
pvc2 Bound pvc-c7f34d1c-c8d3-4e7f-b255-e29297865353 10Gi RWX nfs-client <unset> 4s
pvc3 Bound pvc-5f1086ad-2999-487d-88d2-7104e3e9b221 15Gi ROX nfs-client <unset> 4s
test-claim Bound pvc-b1aef9cc-4be9-4d2a-8c5e-0fe7716247e2 1Gi RWX nfs-client <unset> 9m9s
設定默認存儲類
[root@k8s-master storageclass]# kubectl edit sc nfs-client
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"nfs-client"},"parameters":{"archiveOnDelete":"false"},"provisioner":"k8s-sigs.io/nfs-subdir-external-provisioner"}
    storageclass.kubernetes.io/is-default-class: "true"    #設定默認存儲類
  creationTimestamp: "2024-09-07T13:49:10Z"
  name: nfs-client
  resourceVersion: "218198"
  uid: 9eb1e144-3051-4f16-bdec-30c472358028
parameters:
  archiveOnDelete: "false"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate

#測試,未指定storageClassName參數
[root@k8s-master storageclass]# vim pvc.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

[root@k8s-master storageclass]# kubectl apply -f pvc.yml
persistentvolumeclaim/test-claim created
[root@k8s-master storageclass]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
test-claim Bound pvc-b96c6983-5a4f-440d-99ec-45c99637f9b5 1Gi RWX nfs-client <unset> 7s
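補充:除了kubectl edit,也可以用kubectl patch一條命令設置或取消默認存儲類(示意,存儲類名以nfs-client為例):

#設為默認存儲類
[root@k8s-master storageclass]# kubectl patch storageclass nfs-client -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
#取消默認存儲類
[root@k8s-master storageclass]# kubectl patch storageclass nfs-client -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'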
7.5 statefulset控制器
7.5.1 功能特性
-
Statefulset是為了管理有狀態服務的問題設計的
-
StatefulSet將應用狀態抽象成了兩種情況:
-
拓撲狀態:應用實例必須按照某種順序啟動。新創建的Pod必須和原來Pod的網絡標識一樣
-
存儲狀態:應用的多個實例分別綁定了不同存儲數據。
-
StatefulSet給所有的Pod進行了編號,編號規則是:$(statefulset名稱)-$(序號),從0開始。
-
Pod被刪除后重建,重建Pod的網絡標識也不會改變,Pod的拓撲狀態按照Pod的“名字+編號”的方式固定下來,并且為每個Pod提供了一個固定且唯一的訪問入口,Pod對應的DNS記錄。
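也就是說每個Pod會得到形如 $(pod名).$(無頭服務名).$(命名空間).svc.cluster.local 的DNS記錄,例如(示意,名稱與下文7.5.3的構建示例一致):

web-0.nginx-svc.default.svc.cluster.local
web-1.nginx-svc.default.svc.cluster.local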
7.5.2 StatefulSet的組成部分
-
Headless Service:用來定義pod網絡標識,生成可解析的DNS記錄
-
volumeClaimTemplates:創建pvc,指定pvc名稱大小,自動創建pvc且pvc由存儲類供應。
-
StatefulSet:管理pod的
7.5.3 構建方法
#建立無頭服務
[root@k8s-master statefulset]# vim headless.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx

[root@k8s-master statefulset]# kubectl apply -f headless.yml

#建立statefulset
[root@k8s-master statefulset]# vim statefulset.yml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx-svc"
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      storageClassName: nfs-client
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
[root@k8s-master statefulset]# kubectl apply -f statefulset.yml
statefulset.apps/web configured
[root@k8s-master statefulset]# kubectl get pods
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 3m26s
web-1 1/1 Running 0 3m22s
web-2   1/1     Running   0          3m18s

[root@reg nfsdata]# ls /nfsdata/
default-test-claim-pvc-34b3d968-6c2b-42f9-bbc3-d7a7a02dcbac
default-www-web-0-pvc-0390b736-477b-4263-9373-a53d20cc8f9f
default-www-web-1-pvc-a5ff1a7b-fea5-4e77-afd4-cdccedbc278c
default-www-web-2-pvc-83eff88b-4ae1-4a8a-b042-8899677ae854
7.5.4 測試:
#為每個pod建立index.html文件
[root@reg nfsdata]# echo web-0 > default-www-web-0-pvc-0390b736-477b-4263-9373-a53d20cc8f9f/index.html
[root@reg nfsdata]# echo web-1 > default-www-web-1-pvc-a5ff1a7b-fea5-4e77-afd4-cdccedbc278c/index.html
[root@reg nfsdata]# echo web-2 > default-www-web-2-pvc-83eff88b-4ae1-4a8a-b042-8899677ae854/index.html

#建立測試pod訪問web-0~2
[root@k8s-master statefulset]# kubectl run -it testpod --image busyboxplus
/ # curl web-0.nginx-svc
web-0
/ # curl web-1.nginx-svc
web-1
/ # curl web-2.nginx-svc
web-2

#刪掉重新建立statefulset
[root@k8s-master statefulset]# kubectl delete -f statefulset.yml
statefulset.apps "web" deleted
[root@k8s-master statefulset]# kubectl apply -f statefulset.yml
statefulset.apps/web created

#訪問依然不變
[root@k8s-master statefulset]# kubectl attach testpod -c testpod -i -t
If you don't see a command prompt, try pressing enter.
/ # cu
curl cut
/ # curl web-0.nginx-svc
web-0
/ # curl web-1.nginx-svc
web-1
/ # curl web-2.nginx-svc
web-2
7.5.5 statefulset的彈縮
首先,在對StatefulSet進行彈縮前,需要先確認該應用是否適合彈縮
用命令改變副本數
$ kubectl scale statefulsets <stateful-set-name> --replicas=<new-replicas>
通過編輯配置改變副本數
$ kubectl edit statefulsets.apps <stateful-set-name>
statefulset有序回收
[root@k8s-master statefulset]# kubectl scale statefulset web --replicas 0
statefulset.apps/web scaled
[root@k8s-master statefulset]# kubectl delete -f statefulset.yml
statefulset.apps "web" deleted
[root@k8s-master statefulset]# kubectl delete pvc --all
persistentvolumeclaim "test-claim" deleted
persistentvolumeclaim "www-web-0" deleted
persistentvolumeclaim "www-web-1" deleted
persistentvolumeclaim "www-web-2" deleted
persistentvolumeclaim "www-web-3" deleted
persistentvolumeclaim "www-web-4" deleted
persistentvolumeclaim "www-web-5" deleted
[root@k8s2 statefulset]# kubectl scale statefulsets web --replicas=0

[root@k8s2 statefulset]# kubectl delete -f statefulset.yaml

[root@k8s2 mysql]# kubectl delete pvc --all
八 k8s網絡通信
8.1 k8s通信整體架構
-
k8s通過CNI接口接入其他插件來實現網絡通訊。目前比較流行的插件有flannel,calico等
-
CNI插件存放位置:# cat /etc/cni/net.d/10-flannel.conflist
-
插件使用的解決方案如下
-
虛擬網橋,虛擬網卡,多個容器共用一個虛擬網卡進行通信。
-
多路復用:MacVLAN,多個容器共用一個物理網卡進行通信。
-
硬件交換:SR-IOV,一個物理網卡可以虛擬出多個接口,這個性能最好。
-
-
容器間通信:
-
同一個pod內的多個容器間的通信,通過lo即可實現pod之間的通信
-
同一節點的pod之間通過cni網橋轉發數據包。
-
不同節點的pod之間的通信需要網絡插件支持
-
-
pod和service通信: 通過iptables或ipvs實現通信,ipvs取代不了iptables,因為ipvs只能做負載均衡,而做不了nat轉換
-
pod和外網通信:iptables的MASQUERADE
-
Service與集群外部客戶端的通信;(ingress、nodeport、loadbalancer)
8.2 flannel網絡插件
插件組成:
插件 | 功能 |
---|---|
VXLAN | 即Virtual Extensible LAN(虛擬可擴展局域網),是Linux本身支持的一種網絡虛擬化技術。VXLAN可以完全在內核態實現封裝和解封裝工作,從而通過“隧道”機制,構建出覆蓋網絡(Overlay Network) |
VTEP | VXLAN Tunnel End Point(虛擬隧道端點),在Flannel中 VNI的默認值是1,這也是為什么宿主機的VTEP設備都叫flannel.1的原因 |
Cni0 | 網橋設備,每創建一個pod都會創建一對 veth pair。其中一端是pod中的eth0,另一端是Cni0網橋中的端口(網卡) |
Flannel.1 | TUN設備(虛擬網卡),用來進行 vxlan 報文的處理(封包和解包)。不同node之間的pod數據流量都從overlay設備以隧道的形式發送到對端 |
Flanneld | flannel在每個主機中運行flanneld作為agent,它會為所在主機從集群的網絡地址空間中,獲取一個小的網段subnet,本主機內所有容器的IP地址都將從中分配。同時Flanneld監聽K8s集群數據庫,為flannel.1設備提供封裝數據時必要的mac、ip等網絡數據信息 |
8.2.1 flannel跨主機通信原理
-
當容器發送IP包,通過veth pair 發往cni網橋,再路由到本機的flannel.1設備進行處理。
-
VTEP設備之間通過二層數據幀進行通信,源VTEP設備收到原始IP包后,在上面加上一個目的MAC地址,封裝成一個內部數據幀,發送給目的VTEP設備。
-
內部數據幀,並不能在宿主機的二層網絡傳輸,Linux內核還需要把它進一步封裝成為宿主機的一個普通的數據幀,承載著內部數據幀通過宿主機的eth0進行傳輸。
-
Linux會在內部數據幀前面,加上一個VXLAN頭,VXLAN頭裡有一個重要的標誌叫VNI,它是VTEP識別某個數據幀是不是應該歸自己處理的重要標識。
-
flannel.1設備只知道另一端flannel.1設備的MAC地址,卻不知道對應的宿主機地址是什么。在linux內核里面,網絡設備進行轉發的依據,來自FDB的轉發數據庫,這個flannel.1網橋對應的FDB信息,是由flanneld進程維護的。
-
linux內核在IP包前面再加上二層數據幀頭,把目標節點的MAC地址填進去,MAC地址從宿主機的ARP表獲取。
-
此時flannel.1設備就可以把這個數據幀從eth0發出去,再經過宿主機網絡來到目標節點的eth0設備。目標主機內核網絡棧會發現這個數據幀有VXLAN Header,并且VNI為1,Linux內核會對它進行拆包,拿到內部數據幀,根據VNI的值,交給本機flannel.1設備處理,flannel.1拆包,根據路由表發往cni網橋,最后到達目標容器。
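可以在節點上用下面的命令驗證上述機制(示意):

#查看flannel.1的vxlan參數,輸出中會有類似 "vxlan id 1 local <本機IP> dev eth0" 的字段,即VNI為1、封裝走eth0
[root@k8s-master ~]# ip -d link show flannel.1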
#默認網絡通信路由
[root@k8s-master ~]# ip r
default via 172.25.254.2 dev eth0 proto static metric 100
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
172.25.254.0/24 dev eth0 proto kernel scope link src 172.25.254.100 metric 100

#橋接轉發數據庫
[root@k8s-master ~]# bridge fdb
01:00:5e:00:00:01 dev eth0 self permanent
33:33:00:00:00:01 dev eth0 self permanent
01:00:5e:00:00:fb dev eth0 self permanent
33:33:ff:65:cb:fa dev eth0 self permanent
33:33:00:00:00:fb dev eth0 self permanent
33:33:00:00:00:01 dev docker0 self permanent
01:00:5e:00:00:6a dev docker0 self permanent
33:33:00:00:00:6a dev docker0 self permanent
01:00:5e:00:00:01 dev docker0 self permanent
01:00:5e:00:00:fb dev docker0 self permanent
02:42:76:94:aa:bc dev docker0 vlan 1 master docker0 permanent
02:42:76:94:aa:bc dev docker0 master docker0 permanent
33:33:00:00:00:01 dev kube-ipvs0 self permanent
82:14:17:b1:1d:d0 dev flannel.1 dst 172.25.254.20 self permanent
22:7f:e7:fd:33:77 dev flannel.1 dst 172.25.254.10 self permanent
33:33:00:00:00:01 dev cni0 self permanent
01:00:5e:00:00:6a dev cni0 self permanent
33:33:00:00:00:6a dev cni0 self permanent
01:00:5e:00:00:01 dev cni0 self permanent
33:33:ff:aa:13:2f dev cni0 self permanent
01:00:5e:00:00:fb dev cni0 self permanent
33:33:00:00:00:fb dev cni0 self permanent
0e:49:e3:aa:13:2f dev cni0 vlan 1 master cni0 permanent
0e:49:e3:aa:13:2f dev cni0 master cni0 permanent
7a:1c:2d:5d:0e:9e dev vethf29f1523 master cni0
5e:4e:96:a0:eb:db dev vethf29f1523 vlan 1 master cni0 permanent
5e:4e:96:a0:eb:db dev vethf29f1523 master cni0 permanent
33:33:00:00:00:01 dev vethf29f1523 self permanent
01:00:5e:00:00:01 dev vethf29f1523 self permanent
33:33:ff:a0:eb:db dev vethf29f1523 self permanent
33:33:00:00:00:fb dev vethf29f1523 self permanent
b2:f9:14:9f:71:29 dev veth18ece01e master cni0
3a:05:06:21:bf:7f dev veth18ece01e vlan 1 master cni0 permanent
3a:05:06:21:bf:7f dev veth18ece01e master cni0 permanent
33:33:00:00:00:01 dev veth18ece01e self permanent
01:00:5e:00:00:01 dev veth18ece01e self permanent
33:33:ff:21:bf:7f dev veth18ece01e self permanent
33:33:00:00:00:fb dev veth18ece01e self permanent

#arp列表
[root@k8s-master ~]# arp -n
Address HWtype HWaddress Flags Mask Iface
10.244.0.2 ether 7a:1c:2d:5d:0e:9e C cni0
172.25.254.1 ether 00:50:56:c0:00:08 C eth0
10.244.2.0 ether 82:14:17:b1:1d:d0 CM flannel.1
10.244.1.0 ether 22:7f:e7:fd:33:77 CM flannel.1
172.25.254.20 ether 00:0c:29:6a:a8:61 C eth0
172.25.254.10 ether 00:0c:29:ea:52:cb C eth0
10.244.0.3 ether b2:f9:14:9f:71:29 C cni0
172.25.254.2 ether 00:50:56:fc:e0:b9 C eth0
8.2.2 flannel支持的后端模式
網絡模式 | 功能 |
---|---|
vxlan | 報文封裝,默認模式 |
Directrouting | 直接路由,跨網段使用vxlan,同網段使用host-gw模式 |
host-gw | 主機網關,性能好,但只能在二層網絡中,不支持跨網絡 如果有成千上萬的Pod,容易產生廣播風暴,不推薦 |
UDP | 性能差,不推薦 |
更改flannel的默認模式
[root@k8s-master ~]# kubectl -n kube-flannel edit cm kube-flannel-cfg
apiVersion: v1
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "EnableNFTables": false,
      "Backend": {
        "Type": "host-gw"        #更改內容
      }
    }
#重啟pod
[root@k8s-master ~]# kubectl -n kube-flannel delete pod --all
pod "kube-flannel-ds-bk8wp" deleted
pod "kube-flannel-ds-mmftf" deleted
pod "kube-flannel-ds-tmfdn" deleted[root@k8s-master ~]# ip r
default via 172.25.254.2 dev eth0 proto static metric 100
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1
10.244.1.0/24 via 172.25.254.10 dev eth0
10.244.2.0/24 via 172.25.254.20 dev eth0
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
172.25.254.0/24 dev eth0 proto kernel scope link src 172.25.254.100 metric 100
8.3 calico網絡插件
官網:
Installing on on-premises deployments | Calico Documentation
8.3.1 calico簡介:
-
純三層的轉發,中間沒有任何的NAT和overlay,轉發效率最好。
-
Calico 僅依賴三層路由可達。Calico 較少的依賴性使它能適配所有 VM、Container、白盒或者混合環境場景。
8.3.2 calico網絡架構
8.3.3 部署calico
刪除flannel插件
[root@master k8s-img]# kubectl delete -f kube-flannel.yml
namespace "kube-flannel" deleted
serviceaccount "flannel" deleted
clusterrole.rbac.authorization.k8s.io "flannel" deleted
clusterrolebinding.rbac.authorization.k8s.io "flannel" deleted
configmap "kube-flannel-cfg" deleted
daemonset.apps "kube-flannel-ds" deleted
刪除所有節點上flannel配置文件,避免沖突
[root@all k8s-img]# rm -rf /etc/cni/net.d/10-flannel.conflist
下載部署文件
[root@master calico]# curl https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico-typha.yaml -o calico.yaml
下載鏡像上傳至倉庫:
#打標簽的過程就省略了
[root@master network]# docker push reg.timingy.org/calico/cni:v3.28.1
The push refers to repository [reg.timingy.org/calico/cni]
5f70bf18a086: Mounted from flannel/flannel
38ba74eb8103: Pushed
6b2e64a0b556: Pushed
v3.28.1: digest: sha256:4bf108485f738856b2a56dbcfb3848c8fb9161b97c967a7cd479a60855e13370 size: 946
[root@master network]# docker push reg.timingy.org/calico/node:v3.28.1
The push refers to repository [reg.timingy.org/calico/node]
3831744e3436: Pushed
v3.28.1: digest: sha256:f72bd42a299e280eed13231cc499b2d9d228ca2f51f6fd599d2f4176049d7880 size: 530
[root@master network]# docker push reg.timingy.org/calico/kube-controllers:v3.28.1
The push refers to repository [reg.timingy.org/calico/kube-controllers]
4f27db678727: Pushed
6b2e64a0b556: Mounted from calico/cni
v3.28.1: digest: sha256:8579fad4baca75ce79644db84d6a1e776a3c3f5674521163e960ccebd7206669 size: 740
[root@master network]# docker push reg.timingy.org/calico/typha:v3.28.1
The push refers to repository [reg.timingy.org/calico/typha]
993f578a98d3: Pushed
6b2e64a0b556: Mounted from calico/kube-controllers
v3.28.1: digest: sha256:093ee2e785b54c2edb64dc68c6b2186ffa5c47aba32948a35ae88acb4f30108f size: 740
更改yml設置
[root@k8s-master calico]# vim calico.yaml
4835 image: calico/cni:v3.28.1 #會從你配置的默認docker倉庫(/etc/docker/daemon.json)拉取鏡像
4835 image: calico/cni:v3.28.1
4906 image: calico/node:v3.28.1
4932 image: calico/node:v3.28.1
5160 image: calico/kube-controllers:v3.28.1
5249 - image: calico/typha:v3.28.1

4970 - name: CALICO_IPV4POOL_IPIP
4971   value: "Never"

4999 - name: CALICO_IPV4POOL_CIDR
5000   value: "10.244.0.0/16"
5001 - name: CALICO_AUTODETECTION_METHOD
5002   value: "interface=eth0"

[root@master network]# kubectl apply -f calico.yaml

[root@master network]# kubectl -n kube-system get pods
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-6849cb478c-vqcj7 1/1 Running 0 31s
calico-node-54jw7 1/1 Running 0 31s
calico-node-knsq5 1/1 Running 0 31s
calico-node-nzmlx 1/1 Running 0 31s
calico-typha-fff9df85f-42n8s 1/1 Running 0 31s
coredns-7c677d6c78-7n96p 1/1 Running 1 (20h ago) 21h
coredns-7c677d6c78-jp6c5 1/1 Running 1 (20h ago) 21h
etcd-master 1/1 Running 1 (20h ago) 21h
kube-apiserver-master 1/1 Running 1 (20h ago) 21h
kube-controller-manager-master 1/1 Running 1 (20h ago) 21h
kube-proxy-qrfjs 1/1 Running 1 (20h ago) 21h
kube-proxy-rjzl9 1/1 Running 1 (20h ago) 21h
kube-proxy-tfwhz 1/1 Running 1 (20h ago) 21h
kube-scheduler-master 1/1 Running 1 (20h ago) 21h
測試:
[root@master network]# kubectl run test --image nginx
pod/test created
[root@master network]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test   1/1     Running   0          24s   10.244.166.128   node1   <none>           <none>

[root@master network]# curl 10.244.166.128
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
九 k8s調度(Scheduling)
9.1 調度在Kubernetes中的作用
-
調度是指將未調度的Pod自動分配到集群中的節點的過程
-
調度器通過 kubernetes 的 watch 機制來發現集群中新創建且尚未被調度到 Node 上的 Pod
-
調度器會將發現的每一個未調度的 Pod 調度到一個合適的 Node 上來運行
9.2調度原理:
-
創建Pod
-
用戶通過Kubernetes API創建Pod對象,并在其中指定Pod的資源需求、容器鏡像等信息。
-
-
調度器監視Pod
-
Kubernetes調度器監視集群中的未調度Pod對象,并為其選擇最佳的節點。
-
-
選擇節點
-
調度器通過算法選擇最佳的節點,并將Pod綁定到該節點上。調度器選擇節點的依據包括節點的資源使用情況、Pod的資源需求、親和性和反親和性等。
-
-
綁定Pod到節點
-
調度器將Pod和節點之間的綁定信息保存在etcd數據庫中,以便節點可以獲取Pod的調度信息。
-
-
節點啟動Pod
-
節點定期檢查etcd數據庫中的Pod調度信息,并啟動相應的Pod。如果節點故障或資源不足,調度器會重新調度Pod,并將其綁定到其他節點上運行。
-
9.3 調度器種類
-
默認調度器(Default Scheduler):
-
是Kubernetes中的默認調度器,負責對新創建的Pod進行調度,并將Pod調度到合適的節點上。
-
-
自定義調度器(Custom Scheduler):
-
是一種自定義的調度器實現,可以根據實際需求來定義調度策略和規則,以實現更靈活和多樣化的調度功能。
-
-
擴展調度器(Extended Scheduler):
-
是一種支持調度器擴展器的調度器實現,可以通過調度器擴展器來添加自定義的調度規則和策略,以實現更靈活和多樣化的調度功能。
-
-
kube-scheduler是kubernetes中的默認調度器,在kubernetes運行后會自動在控制節點運行
9.4 常用調度方法
9.4.1 nodename
-
nodeName 是節點選擇約束的最簡單方法,但一般不推薦
-
如果 nodeName 在 PodSpec 中指定了,則它優先于其他的節點選擇方法
-
使用 nodeName 來選擇節點的一些限制
-
如果指定的節點不存在。
-
如果指定的節點沒有資源來容納 pod,則pod 調度失敗。
-
云環境中的節點名稱并非總是可預測或穩定的
-
實例:
[root@master network]# kubectl run test --image nginx --dry-run=client -o yaml > test.yml
[root@master network]# vim test.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: test
  name: test
spec:
  containers:
  - image: nginx
    name: test

[root@master network]# kubectl apply -f test.yml
[root@master network]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test   1/1     Running   0          18m   10.244.166.128   node1   <none>           <none>

#刪除資源
[root@master scheduler]# kubectl delete -f test.yml
pod "test" deleted#更改文件
[root@master scheduler]# cat test.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: test
  name: test
spec:
  nodeName: node2          #選擇你要調度的節點名稱
  containers:
  - image: nginx
    name: test

[root@master scheduler]# kubectl apply -f test.yml
pod/test created

[root@master scheduler]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test 1/1 Running 0 6s 10.244.104.1 node2 <none> <none>
#注意:nodeName指定的節點不存在時,pod會一直處於Pending;nodeName優先級最高,會使其他調度方式失效
9.4.2 Nodeselector(通過標簽控制節點)
-
nodeSelector 是節點選擇約束的最簡單推薦形式
-
給選擇的節點添加標簽:
kubectl label nodes k8s-node1 lab=xxy
-
可以給多個節點設定相同標簽
示例:
#查看節點標簽
[root@master scheduler]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
master Ready control-plane 22h v1.30.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
node1 Ready <none> 21h v1.30.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1,kubernetes.io/os=linux
node2    Ready    <none>          21h   v1.30.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2,kubernetes.io/os=linux

#設定節點標籤
[root@master scheduler]# kubectl label nodes node1 disktype=ssd
node/node1 labeled

[root@master scheduler]# kubectl get nodes node1 --show-labels
NAME STATUS ROLES AGE VERSION LABELS
node1   Ready    <none>   21h   v1.30.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1,kubernetes.io/os=linux

#調度設置
[root@master scheduler]# cat nodeSelector.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: test
  name: test
spec:
  nodeSelector:
    disktype: ssd
  containers:
  - image: nginx
    name: test

[root@master scheduler]# kubectl apply -f nodeSelector.yml
pod/test created

[root@master scheduler]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test 1/1 Running 0 8s 10.244.166.129 node1 <none> <none>
注意:節點標簽可以給N個節點加
使用 nodeName 和 nodeSelector 控制 Pod 調度存在以下主要缺陷:
-
靈活性不足:
-
nodeName 直接硬編碼節點名稱,一旦節點不可用,Pod 會調度失敗
-
無法動態適應集群節點變化,增減節點需手動修改配置
-
-
缺乏復雜調度策略:
-
僅能基于節點標簽做簡單匹配,無法實現資源親和性、反親和性等高級策略
-
不能根據節點資源使用率、負載情況動態調度
-
-
擴展性問題:
-
當集群規模擴大,節點標簽管理復雜時,維護成本顯著增加
-
難以應對多維度的調度需求(如同時考慮硬件類型、區域、可用區等)
-
-
容錯能力弱:
-
使用 nodeName 時,若指定節點故障,Pod 會一直處于 Pending 狀態
-
沒有重試或自動轉移到其他節點的機制
-
-
與自動擴縮容不兼容:
-
無法很好地配合集群自動擴縮容(HPA/VPA)工作
-
新擴容節點可能無法被正確選中
-
這些缺陷使得 nodeName 和 nodeSelector 更適合簡單場景,復雜場景通常需要使用 Node Affinity、Pod Affinity 等更高級的調度策略。
9.5 affinity(親和性)
官方文檔 :
將 Pod 指派給節點 | Kubernetes
9.5.1 親和與反親和
-
nodeSelector 提供了一種非常簡單的方法來將 pod 約束到具有特定標簽的節點上。親和/反親和功能極大地擴展了你可以表達約束的類型。
-
使用節點上的 pod 的標簽來約束,而不是使用節點本身的標簽,來允許哪些 pod 可以或者不可以被放置在一起。
9.5.2 nodeAffinity節點親和
-
哪個節點服務滿足指定條件就在哪個節點運行
-
requiredDuringSchedulingIgnoredDuringExecution 必須滿足,但不會影響已經調度
-
preferredDuringSchedulingIgnoredDuringExecution 傾向滿足,在無法滿足情況下也會調度pod
-
IgnoredDuringExecution 表示如果在Pod運行期間Node的標籤發生變化,導致親和性策略不能滿足,則繼續運行當前的Pod。
-
-
nodeaffinity還支持多種規則匹配條件的配置如
匹配規則 | 功能 |
---|---|
In | label 的值在列表內 |
NotIn | label 的值不在列表內 |
Gt | label 的值大于設置的值,不支持Pod親和性 |
Lt | label 的值小于設置的值,不支持pod親和性 |
Exists | 設置的label 存在 |
DoesNotExist | 設置的 label 不存在 |
nodeAffinity示例
[root@master scheduler]# cat nodeAffinity_test1.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: test
  name: test
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
            - fc
  containers:
  - image: nginx
    name: test

[root@master scheduler]# kubectl apply -f nodeAffinity_test1.yml
pod/test created
[root@master scheduler]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test 1/1 Running 0 31s 10.244.166.130 node1 <none> <none>
[root@master scheduler]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
master Ready control-plane 22h v1.30.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
node1 Ready <none> 22h v1.30.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1,kubernetes.io/os=linux #含有指定的鍵和值
node2 Ready <none> 22h v1.30.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2,kubernetes.io/os=linux
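下面再給出一個傾向滿足(preferred)寫法的示意片段,僅作對比、未在上述環境中實際執行:調度器會優先選擇帶有 disktype=ssd 標籤的節點,找不到匹配節點時Pod仍然可以被調度。

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: test
  name: test
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  containers:
  - image: nginx
    name: test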
9.5.3 Podaffinity(pod的親和)
-
哪個節點有符合條件的POD就在哪個節點運行
-
podAffinity 主要解決POD可以和哪些POD部署在同一個節點中的問題
-
podAntiAffinity主要解決POD不能和哪些POD部署在同一個節點中的問題。它們處理的是Kubernetes集群內部POD和POD之間的關系。
-
Pod 間親和與反親和在與更高級別的集合(例如 ReplicaSets,StatefulSets,Deployments 等)一起使用時,
-
Pod 間親和與反親和需要大量的處理,這可能會顯著減慢大規模集群中的調度。
Podaffinity示例
#先運行一個pod,記住他的標簽是run=test
[root@master scheduler]# kubectl apply -f nodeName.yml
pod/test created
[root@master scheduler]# kubectl get pods -o wide --show-labels
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
test   1/1     Running   0          12s   10.244.104.3   node2   <none>           <none>            run=test

#pod親和配置實例
[root@master scheduler]# vim podAffinity_test1.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: myappv1
  name: myappv1
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: run
            operator: In
            values:
            - test
        topologyKey: "kubernetes.io/hostname"
  containers:
  - image: myapp:v1
    name: test

[root@master scheduler]# kubectl apply -f podAffinity_test1.yml
pod/myappv1 created

[root@master scheduler]# kubectl get pods -o wide --show-labels
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
myappv1 1/1 Running 0 6s 10.244.104.4 node2 <none> <none> run=myappv1
test 1/1 Running 0 3m10s 10.244.104.3 node2 <none> <none> run=test
可以看到該Pods根據我們的節點親和配置被調度到了node2
也可以設置operator: NotIn 讓pod被調度到其他節點
9.5.4 Podantiaffinity(pod反親和)
Podantiaffinity示例
[root@master scheduler]# vim podAntiaffinity_test1.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: myappv1
  name: myappv1
spec:
  affinity:
    podAntiAffinity:          #反親和
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: run
            operator: In
            values:
            - test
        topologyKey: "kubernetes.io/hostname"
  containers:
  - image: myapp:v1
    name: test

[root@master scheduler]# kubectl apply -f nodeName.yml
pod/test created

[root@master scheduler]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test   1/1     Running   0          7s    10.244.104.5   node2   <none>           <none>

[root@master scheduler]# kubectl apply -f podAntiaffinity_test1.yml
pod/myappv1 created

[root@master scheduler]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myappv1 1/1 Running 0 9s 10.244.166.131 node1 <none> <none>
test 1/1 Running 0 40s 10.244.104.5 node2 <none> <none>
9.6 Taints(污點模式,禁止調度)
-
Taints(污點)是Node的一個屬性,設置了Taints后,默認Kubernetes是不會將Pod調度到這個Node上
-
Kubernetes如果為Pod設置Tolerations(容忍),只要Pod能夠容忍Node上的污點,那么Kubernetes就會忽略Node上的污點,就能夠(不是必須)把Pod調度過去
-
可以使用命令 kubectl taint 給節點增加一個 taint:
$ kubectl taint nodes <nodename> key=string:effect #命令執行方法
$ kubectl taint nodes node1 key=value:NoSchedule #創建
$ kubectl describe nodes server1 | grep Taints #查詢
$ kubectl taint nodes node1 key- #刪除
其中[effect] 可取值:
effect值 | 解釋 |
---|---|
NoSchedule | POD 不會被調度到標記為 taints 節點 |
PreferNoSchedule | NoSchedule 的軟策略版本,盡量不調度到此節點 |
NoExecute | 如該節點內正在運行的 POD 沒有對應 Tolerate 設置,會直接被逐出 |
9.6.1 Taints示例
[root@master scheduler]# kubectl create deployment web --image nginx --replicas 2 --dry-run=client -o yaml > taints_test1.yml

[root@master scheduler]# vim taints_test1.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: nginx
        name: nginx

[root@master scheduler]# kubectl apply -f taints_test1.yml
deployment.apps/web created

[root@master scheduler]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-7c56dcdb9b-fg4t7 1/1 Running 0 8s 10.244.104.6 node2 <none> <none>
web-7c56dcdb9b-mpsj6   1/1     Running   0          8s    10.244.166.132   node1   <none>           <none>

#設定污點為NoSchedule
[root@master scheduler]# kubectl taint node node1 name=xxy:NoSchedule
node/node1 tainted

#控制器增加pod
[root@master scheduler]# kubectl scale deployment web --replicas 6
deployment.apps/web scaled

#查看調度情況
[root@master scheduler]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-7c56dcdb9b-52sht 1/1 Running 0 6s 10.244.104.8 node2 <none> <none>
web-7c56dcdb9b-792zc 1/1 Running 0 6s 10.244.104.10 node2 <none> <none>
web-7c56dcdb9b-8mvc4 1/1 Running 0 6s 10.244.104.7 node2 <none> <none>
web-7c56dcdb9b-fg4t7 1/1 Running 0 5m28s 10.244.104.6 node2 <none> <none>
web-7c56dcdb9b-mpsj6 1/1 Running 0 5m28s 10.244.166.132 node1 <none> <none>
web-7c56dcdb9b-zw6ft 1/1 Running 0 6s 10.244.104.9 node2 <none> <none>
可以看到為node1設置了NoSchedule污點後再增加pod,新的pod不會再被調度到node1,但是已經運行在node1上的pod依然保持運行

#設定污點為NoExecute
[root@master scheduler]# kubectl taint node node1 name=xxy:NoExecute
node/node1 tainted

[root@master scheduler]# kubectl describe nodes master node1 node2 | grep Taints
Taints: node-role.kubernetes.io/control-plane:NoSchedule
Taints: name=xxy:NoExecute
Taints:             <none>

[root@master scheduler]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-7c56dcdb9b-52sht 1/1 Running 0 4m4s 10.244.104.8 node2 <none> <none>
web-7c56dcdb9b-792zc 1/1 Running 0 4m4s 10.244.104.10 node2 <none> <none>
web-7c56dcdb9b-8mvc4 1/1 Running 0 4m4s 10.244.104.7 node2 <none> <none>
web-7c56dcdb9b-fg4t7 1/1 Running 0 9m26s 10.244.104.6 node2 <none> <none>
web-7c56dcdb9b-vzkn6 1/1 Running 0 18s 10.244.104.11 node2 <none> <none>
web-7c56dcdb9b-zw6ft 1/1 Running 0 4m4s 10.244.104.9 node2 <none> <none>
設置node1污點為NoExecute後,原本運行在node1上的pod被驅逐,重新調度到了其他節點

#刪除污點
[root@master scheduler]# kubectl taint node node1 name-
node/node1 untainted
[root@master scheduler]# kubectl describe nodes master node1 node2 | grep Taints
Taints: node-role.kubernetes.io/control-plane:NoSchedule
Taints: <none>
Taints: <none>
9.6.2 tolerations(污點容忍)
-
tolerations中定義的key、value、effect,要與node上設置的taint保持一致:
-
如果 operator 是 Equal ,則key與value之間的關系必須相等。
-
如果 operator 是 Exists ,value可以省略
-
如果不指定operator屬性,則默認值為Equal。
-
-
還有兩個特殊值:
-
當不指定key,再配合Exists 就能匹配所有的key與value ,可以容忍所有污點。
-
當不指定effect ,則匹配所有的effect
-
9.6.3 污點容忍示例:
#設定節點污點
[root@master scheduler]# kubectl taint node node1 nodetype=badnode:PreferNoSchedule
node/node1 tainted
[root@master scheduler]# kubectl taint node node2 nodetype=badnode:NoSchedule
node/node2 tainted
[root@master scheduler]# kubectl describe nodes node1 node2 | grep Taints
Taints: nodetype=badnode:PreferNoSchedule
Taints:             nodetype=badnode:NoSchedule

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: nginx
        name: nginx
      tolerations:            #容忍所有污點
      - operator: Exists

      tolerations:            #容忍effect為PreferNoSchedule的污點
      - operator: Exists
        effect: PreferNoSchedule

      tolerations:            #容忍指定kv的NoSchedule污點
      - key: nodetype
        value: badnode
        effect: NoSchedule
注意:三種容忍方式每次測試寫一個即可
測試:
1.容忍所有污點
tolerations:            #容忍所有污點
- operator: Exists
2.容忍effect為PreferNoSchedule的污點
tolerations:            #容忍effect為PreferNoSchedule的污點
- operator: Exists
  effect: PreferNoSchedule
3.容忍指定kv的NoSchedule污點
tolerations:            #容忍指定kv的NoSchedule污點
- key: nodetype
  value: badnode
  effect: NoSchedule
十 kubernetes的認證和授權
10.1 kubernetes API 訪問控制
Authentication(認證)
-
認證方式現共有8種,可以啟用一種或多種認證方式,只要有一種認證方式通過,就不再進行其它方式的認證。通常啟用X509 Client Certs和Service Account Tokens兩種認證方式。
-
Kubernetes集群有兩類用戶:由Kubernetes管理的Service Accounts (服務賬戶)和(Users Accounts) 普通賬戶。k8s中賬號的概念不是我們理解的賬號,它并不真的存在,它只是形式上存在。
Authorization(授權)
-
必須經過認證階段,才到授權請求,根據所有授權策略匹配請求資源屬性,決定允許或拒絕請求。授權方式現共有6種,AlwaysDeny、AlwaysAllow、ABAC、RBAC、Webhook、Node。默認集群強制開啟RBAC。
Admission Control(準入控制)
-
用于攔截請求的一種方式,運行在認證、授權之后,是權限認證鏈上的最后一環,對請求API資源對象進行修改和校驗。
10.1.1 UserAccount與ServiceAccount
-
用戶賬戶是針對人而言的。 服務賬戶是針對運行在 pod 中的進程而言的。
-
用戶賬戶是全局性的。 其名稱在集群各 namespace 中都是全局唯一的,未來的用戶資源不會做 namespace 隔離, 服務賬戶是 namespace 隔離的。
-
集群的用戶賬戶可能會從企業數據庫進行同步,其創建需要特殊權限,并且涉及到復雜的業務流程。 服務賬戶創建的目的是為了更輕量,允許集群用戶為了具體的任務創建服務賬戶 ( 即權限最小化原則 )。
10.1.1.1 ServiceAccount
-
服務賬戶控制器(Service account controller)
-
服務賬戶管理器管理各命名空間下的服務賬戶
-
每個活躍的命名空間下存在一個名為 “default” 的服務賬戶
-
-
服務賬戶準入控制器(Service account admission controller)
-
默認服務賬戶分配:當 Pod 未顯式指定
serviceAccountName
時,自動為其分配當前命名空間中的default
服務賬戶。 -
服務賬戶驗證:檢查 Pod 指定的服務賬戶是否存在于當前命名空間中,若不存在則拒絕 Pod 創建請求,防止無效配置。
-
鏡像拉取密鑰繼承:當 Pod 未配置
imagePullSecrets
時,自動繼承其關聯服務賬戶中定義的imagePullSecrets
,簡化私有鏡像倉庫的訪問配置。 -
自動掛載服務賬戶憑證:
-
為 Pod 添加一個特殊的 Volume,包含訪問 API Server 所需的 token
-
將該 Volume 自動掛載到 Pod 中所有容器的
/var/run/secrets/kubernetes.io/serviceaccount
路徑 -
掛載內容包括 token、CA 證書和 namespace 文件
-
-
10.1.1.2 ServiceAccount示例:
建立名字為admin的ServiceAccount
[root@k8s-master ~]# kubectl create sa timinglee
serviceaccount/timinglee created
[root@k8s-master ~]# kubectl describe sa timinglee
Name: timinglee
Namespace: default
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: <none>
Tokens: <none>
Events: <none>
建立secrets
[root@k8s-master ~]# kubectl create secret docker-registry docker-login --docker-username admin --docker-password lee --docker-server reg.timinglee.org --docker-email lee@timinglee.org
secret/docker-login created
[root@k8s-master ~]# kubectl describe secrets docker-login
Name: docker-login
Namespace: default
Labels: <none>
Annotations:  <none>

Type:  kubernetes.io/dockerconfigjson

Data
====
.dockerconfigjson: 119 bytes
將secrets注入到sa中
[root@k8s-master ~]# kubectl edit sa timinglee
apiVersion: v1
imagePullSecrets:
- name: docker-login
kind: ServiceAccount
metadata:creationTimestamp: "2024-09-08T15:44:04Z"name: timingleenamespace: defaultresourceVersion: "262259"uid: 7645a831-9ad1-4ae8-a8a1-aca7b267ea2d[root@k8s-master ~]# kubectl describe sa timinglee
Name: timinglee
Namespace: default
Labels: <none>
Annotations: <none>
Image pull secrets: docker-login
Mountable secrets: <none>
Tokens: <none>
Events: <none>
建立私有倉庫并且利用pod訪問私有倉庫
[root@k8s-master auth]# vim example1.yml
[root@k8s-master auth]# kubectl apply -f example1.yml
pod/testpod created
[root@k8s-master auth]# kubectl describe pod testpod
  Warning  Failed     5s               kubelet  Failed to pull image "reg.timinglee.org/lee/nginx:latest": Error response from daemon: unauthorized: unauthorized to access repository: lee/nginx, action: pull: unauthorized to access repository: lee/nginx, action: pull
  Warning  Failed     5s               kubelet  Error: ErrImagePull
  Normal   BackOff    3s (x2 over 4s)  kubelet  Back-off pulling image "reg.timinglee.org/lee/nginx:latest"
  Warning  Failed     3s (x2 over 4s)  kubelet  Error: ImagePullBackOff
Warning:
在創建pod時會鏡像下載會受阻,因為docker私有倉庫下載鏡像需要認證
pod綁定sa
[root@k8s-master auth]# vim example1.yml
apiVersion: v1
kind: Pod
metadata:
  name: testpod
spec:
  serviceAccountName: timinglee
  containers:
  - image: reg.timinglee.org/lee/nginx:latest
    name: testpod

[root@k8s-master auth]# kubectl apply -f example1.yml
pod/testpod created
[root@k8s-master auth]# kubectl get pods
NAME READY STATUS RESTARTS AGE
testpod 1/1 Running 0 2s
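補充:除了把secrets注入sa,也可以直接在Pod中指定imagePullSecrets,兩種方式二選一即可(示意):

apiVersion: v1
kind: Pod
metadata:
  name: testpod
spec:
  imagePullSecrets:
  - name: docker-login
  containers:
  - image: reg.timinglee.org/lee/nginx:latest
    name: testpod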
10.2 認證(在k8s中建立認證用戶)
10.2.1 創建UserAccount
[root@master kubernetes]# cd /etc/kubernetes/pki #Kubernetes 集群的公鑰基礎設施目錄
[root@master pki]# openssl genrsa -out timinglee.key 2048 #生成私鑰
[root@master pki]# openssl req -new -key timinglee.key -out timinglee.csr -subj "/CN=timinglee" #使用前面生成的私鑰創建證書簽名請求(CSR)
[root@master pki]# openssl x509 -req -in timinglee.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out timinglee.crt -days 365 #使用 Kubernetes 集群的 CA 根證書對 CSR 進行簽名,生成最終的證書
Certificate request self-signature ok
subject=CN = timinglee
[root@master pki]# openssl x509 -in timinglee.crt -text -noout #以文本形式查看證書的詳細信息
Certificate:Data:Version: 1 (0x0)Serial Number:32:a5:fb:e1:5e:30:67:05:dc:af:1d:74:c6:7a:b2:aa:ce:af:be:85Signature Algorithm: sha256WithRSAEncryptionIssuer: CN = kubernetesValidityNot Before: Aug 21 15:01:16 2025 GMTNot After : Aug 21 15:01:16 2026 GMTSubject: CN = timingleeSubject Public Key Info:Public Key Algorithm: rsaEncryptionPublic-Key: (2048 bit)Modulus:00:a0:52:4d:95:29:33:61:a0:2a:66:a8:24:9c:a8:3c:34:2d:d2:cc:6f:68:66:b9:f4:e4:88:63:77:f6:11:89:bb:42:80:cb:f2:4e:f3:de:00:94:bd:90:79:03:7e:26:cb:99:6f:06:28:27:58:17:27:c0:01:42:6c:41:57:c3:f2:90:7e:1a:d6:26:32:4c:94:00:80:d2:8c:ce:42:79:6e:a1:97:48:a6:87:0a:18:7a:e5:35:6c:9f:84:0c:51:58:a2:57:65:2d:3a:0b:28:18:d4:76:d3:6d:e3:14:1f:a7:41:f9:ac:95:c0:20:de:61:67:ba:e4:33:4a:c4:19:19:6c:47:14:8c:87:b5:d2:67:22:80:06:6c:98:90:5c:ab:77:9e:30:9b:7d:31:62:cc:fb:e6:a1:8c:2c:71:6e:74:a8:8b:13:55:d3:28:1b:0d:d7:4b:51:94:4a:7f:36:6d:c5:62:03:06:8d:32:90:92:f8:bd:80:57:6e:bf:8a:52:f6:af:09:9b:a0:8b:c5:8a:05:b8:53:f5:23:9c:b9:1e:64:82:72:ba:7c:90:8e:05:9e:d0:c4:51:b1:f4:37:86:97:8b:a8:b7:b1:64:05:0f:e5:e2:a6:dc:90:03:80:4f:4b:c9:9c:c5:e0:1e:c4:e4:c1:b4:a7:9c:7a:7c:87:09Exponent: 65537 (0x10001)Signature Algorithm: sha256WithRSAEncryptionSignature Value:c4:52:7f:48:36:21:6d:c5:eb:b6:38:98:f2:0e:b1:ac:03:14:ef:99:f7:c1:74:34:30:56:20:31:3f:66:e2:59:ed:30:79:f4:fb:67:45:5d:15:b9:1e:13:28:73:8f:1f:f6:8d:58:6a:94:26:24:85:aa:2b:01:cb:b4:96:28:12:f3:42:97:70:95:f9:e3:fb:32:79:61:8a:c0:e6:b7:94:97:9c:9c:ea:73:57:88:74:db:7e:ee:cd:5d:54:46:b2:e2:35:fa:ee:3b:a8:ee:d8:24:fe:87:5e:36:24:e4:f3:5f:48:08:f9:b0:f1:82:8b:40:74:b2:03:3f:b7:79:2e:1c:60:fb:18:f9:97:5f:8d:31:78:ff:4f:5d:d6:44:a6:ff:af:96:e4:c6:b8:52:2f:82:e5:1b:02:f1:5a:ff:7a:15:63:80:f2:08:ac:89:d1:72:c6:35:c1:a7:c0:00:7a:9f:2e:06:58:89:ef:64:aa:58:e4:3b:fb:f1:85:7c:39:3b:c4:3d:a1:36:d3:dd:2c:51:58:87:69:64:89:0d:e3:ea:1f:36:97:e4:92:63:ec:08:2a:b7:e0:86:14:f2:34:9b:4f:ce:c7:52:7b:dd:b7:0a:2c:a4:09:29:88:c0:f6:40:7e:10:35:19:66:7f:78:1d:6e:ee:9b:9b:94:31:bc
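簽發完成後,也可以先用集群CA驗證證書是否有效(示意):

[root@master pki]# openssl verify -CAfile ca.crt timinglee.crt
timinglee.crt: OK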
補充:
當你用 timinglee
這個用戶通過 kubectl
訪問集群時,私鑰會隱性參與身份認證:
-
kubectl
會用你的私鑰,對 “訪問 API 的請求內容”(如 “獲取 Pod 列表”)進行數字簽名。 -
Kubernetes API Server 收到請求后,會從你提供的證書中提取公鑰,驗證這個簽名:
-
如果簽名驗證通過 → 證明 “這個請求確實是
timinglee
發送的”(因為只有你的私鑰能生成這個簽名); -
如果驗證失敗 → 說明請求可能被偽造,直接拒絕。
-
#建立k8s中的用戶
[root@master pki]# kubectl config set-credentials timinglee --client-certificate /etc/kubernetes/pki/timinglee.crt --client-key /etc/kubernetes/pki/timinglee.key --embed-certs=true
User "timinglee" set.
#--embed-certs=true用於把用戶配置保存到配置文件中,持久化數據,否則重啟後用戶就不存在了

[root@master pki]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.121.100:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED
- name: timinglee
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED

#為用戶創建集群的安全上下文
[root@master pki]# kubectl config set-context timinglee@kubernetes --cluster kubernetes --user timinglee
Context "timinglee@kubernetes" created.#切換用戶,用戶在集群中只有用戶身份沒有授權
[root@master pki]# kubectl config use-context timinglee@kubernetes
Switched to context "timinglee@kubernetes".
[root@master pki]# kubectl get pods
Error from server (Forbidden): pods is forbidden: User "timinglee" cannot list resource "pods" in API group "" in the namespace "default"

#切換回集群管理
[root@master pki]# kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".#如果需要刪除用戶
[root@master pki]# kubectl config delete-user timinglee
deleted user timinglee from /etc/kubernetes/admin.conf
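刪除用戶後對應的上下文依然存在,通常一併刪除(示意):

[root@master pki]# kubectl config delete-context timinglee@kubernetes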
10.2.2 RBAC(Role Based Access Control)
10.2.2.1 基于角色訪問控制授權:
-
允許管理員通過Kubernetes API動態配置授權策略。RBAC就是用戶通過角色與權限進行關聯。
-
RBAC只有授權,沒有拒絕授權,所以只需要定義允許該用戶做什么即可
-
RBAC的三個基本概念
-
Subject:被作用者,它表示k8s中的三類主體, user, group, serviceAccount
-
-
Role:角色,它其實是一組規則,定義了一組對 Kubernetes API 對象的操作權限。
-
RoleBinding:定義了“被作用者”和“角色”的綁定關系
-
RBAC包括四種類型:Role、ClusterRole、RoleBinding、ClusterRoleBinding
-
Role 和 ClusterRole
-
Role是一系列的權限的集合,Role只能授予單個namespace 中資源的訪問權限。
-
-
ClusterRole 跟 Role 類似,但是可以在集群中全局使用。
-
Kubernetes 還提供了四個預先定義好的 ClusterRole 來供用戶直接使用
-
cluster-admin、admin、edit、view
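這幾個內置ClusterRole可以直接綁定給用戶使用,例如把只讀的view角色授予某個用戶(示意,用戶名以timinglee為例):

#查看內置角色包含的權限
[root@master role]# kubectl describe clusterrole view | less
#綁定給用戶
[root@master role]# kubectl create clusterrolebinding timinglee-view --clusterrole view --user timinglee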
10.2.2.2 role授權實施
#生成role的yaml文件
[root@master role]# kubectl create role myrole --dry-run=client --verb=get --resource pods -o yaml > myrole.yml

#修改文件
[root@master role]# cat myrole.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: myrole
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - watch
  - list
  - create
  - update
  - patch
  - delete
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - watch
  - list
  - create
  - update
  - patch
  - delete

[root@master role]# kubectl api-resources | less    #可以查看資源所屬組來填寫 - apiGroups: 字段

#創建role
[root@master role]# kubectl apply -f myrole.yml
role.rbac.authorization.k8s.io/myrole created

[root@master role]# kubectl get roles.rbac.authorization.k8s.io
NAME CREATED AT
myrole   2025-08-21T15:34:41Z

[root@master role]# kubectl describe roles.rbac.authorization.k8s.io myrole
Name: myrole
Labels: <none>
Annotations: <none>
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
  pods       []                 []              [get watch list create update patch delete]
  services   []                 []              [get watch list create update patch delete]
#建立角色綁定
[root@master role]# kubectl create rolebinding timinglee --role myrole --namespace default --user timinglee --dry-run=client -o yaml > rolebinding-myrole.yml
[root@master role]# cat rolebinding-myrole.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: null
  name: timinglee
  namespace: default        #角色綁定必須指定namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: myrole
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: timinglee

[root@master role]# kubectl apply -f rolebinding-myrole.yml
rolebinding.rbac.authorization.k8s.io/timinglee created

[root@master role]# kubectl get rolebindings.rbac.authorization.k8s.io
NAME ROLE AGE
timinglee Role/myrole 22s
#切換用戶測試授權
[root@master role]# kubectl config use-context timinglee@kubernetes
Switched to context "timinglee@kubernetes".[root@master role]# kubectl get pods
No resources found in default namespace.

[root@master role]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 25h
testpod      ClusterIP   10.111.131.206   <none>        80/TCP    6h12m

[root@master role]# kubectl get namespaces    #未對namespace資源授權
Error from server (Forbidden): namespaces is forbidden: User "timinglee" cannot list resource "namespaces" in API group "" at the cluster scope

#切換回管理員
[root@master role]# kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
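補充:不切換上下文也可以用kubectl auth can-i模擬檢查某個用戶的權限(示意):

[root@master role]# kubectl auth can-i get pods --as timinglee
yes
[root@master role]# kubectl auth can-i list namespaces --as timinglee
no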
10.2.2.3 clusterrole授權實施
#建立集群角色
[root@master role]# cat myclusterrole.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: null
  name: myclusterrole
rules:
- apiGroups:
  - apps
  resources:
  - deployments
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete

[root@master role]# kubectl apply -f myclusterrole.yml
clusterrole.rbac.authorization.k8s.io/myclusterrole created
[root@master role]# kubectl describe clusterrole myclusterrole
Name: myclusterrole
Labels: <none>
Annotations: <none>
PolicyRule:
  Resources         Non-Resource URLs  Resource Names  Verbs
  ---------         -----------------  --------------  -----
  pods              []                 []              [get list watch create update patch delete]
  deployments.apps  []                 []              [get list watch create update patch delete]

#建立集群角色綁定
[root@master role]# kubectl create clusterrolebinding clusterrolebind-myclusterrole --clusterrole myclusterrole --user timinglee --dry-run=client -o yaml > clusterrolebind-myclusterrole.yml
[root@master role]# cat clusterrolebind-myclusterrole.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: clusterrolebind-myclusterrole
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: myclusterrole
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: timinglee
[root@master role]# kubectl apply -f clusterrolebind-myclusterrole.yml
clusterrolebinding.rbac.authorization.k8s.io/clusterrolebind-myclusterrole created

[root@master role]# kubectl describe clusterrolebindings.rbac.authorization.k8s.io clusterrolebind-myclusterrole
Name: clusterrolebind-myclusterrole
Labels: <none>
Annotations: <none>
Role:
  Kind:  ClusterRole
  Name:  myclusterrole
Subjects:
  Kind  Name       Namespace
  ----  ----       ---------
  User  timinglee

#測試:
[root@master role]# kubectl config use-context timinglee@kubernetes
Switched to context "timinglee@kubernetes".
[root@master role]# kubectl get pods -A #可以訪問所有namespace的資源
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-6849cb478c-vqcj7 1/1 Running 0 4h8m
kube-system calico-node-54jw7 1/1 Running 0 4h8m
kube-system calico-node-knsq5 1/1 Running 0 4h8m
kube-system calico-node-nzmlx 1/1 Running 0 4h8m
kube-system calico-typha-fff9df85f-42n8s 1/1 Running 0 4h8m
kube-system coredns-7c677d6c78-7n96p 1/1 Running 1 (24h ago) 25h
kube-system coredns-7c677d6c78-jp6c5 1/1 Running 1 (24h ago) 25h
kube-system etcd-master 1/1 Running 1 (24h ago) 25h
kube-system kube-apiserver-master 1/1 Running 1 (24h ago) 25h
kube-system kube-controller-manager-master 1/1 Running 2 (72m ago) 72m
kube-system kube-proxy-qrfjs 1/1 Running 1 (24h ago) 25h
kube-system kube-proxy-rjzl9 1/1 Running 1 (24h ago) 25h
kube-system kube-proxy-tfwhz 1/1 Running 1 (24h ago) 25h
kube-system kube-scheduler-master 1/1 Running 1 (24h ago) 25h
[root@master role]# kubectl get deployments.apps -A
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system calico-kube-controllers 1/1 1 1 4h8m
kube-system calico-typha 1/1 1 1 4h8m
kube-system coredns 2/2 2 2 25h
[root@master role]# kubectl get svc -A #clusterrole未對service資源授權
Error from server (Forbidden): services is forbidden: User "timinglee" cannot list resource "services" in API group "" at the cluster scope
10.2.2.4 服務賬戶的自動化
服務賬戶準入控制器(Service account admission controller)
-
如果該 pod 沒有 ServiceAccount 設置,將其 ServiceAccount 設為 default。
-
保證 pod 所關聯的 ServiceAccount 存在,否則拒絕該 pod。
-
如果 pod 不包含 ImagePullSecrets 設置,那么 將 ServiceAccount 中的 ImagePullSecrets 信息添加到 pod 中。
-
將一個包含用于 API 訪問的 token 的 volume 添加到 pod 中。
-
將掛載于 /var/run/secrets/kubernetes.io/serviceaccount 的 volumeSource 添加到 pod 下的每個容器中。
服務賬戶控制器(Service account controller)
服務賬戶管理器管理各命名空間下的服務賬戶,并且保證每個活躍的命名空間下存在一個名為 “default” 的服務賬戶