Deploying the Calico Network Plugin
Previous k8s environments here mainly used flannel as the network plugin; this time we switch to Calico. Calico supports several installation methods; the concrete steps are below.
1. Preparation
- Environment information
# System information
root@master1:~# cat /etc/issue
Ubuntu 24.04 LTS \n \l
root@master1:~# uname -r
6.8.0-31-generic
# k8s version
root@master1:~# kubectl get node
NAME      STATUS     ROLES           AGE    VERSION
master1   NotReady   control-plane   2m2s   v1.28.2
node1     NotReady   <none>          84s    v1.28.2
node2     NotReady   <none>          79s    v1.28.2
- Version compatibility
Taking the latest Calico release, v3.28, as an example: it supports the following k8s versions, so I chose it for this installation.
- v1.27
- v1.28
- v1.29
- v1.30
Reference: System requirements | Calico Documentation (tigera.io)
2. Installing with the Operator
- Install the operator
# Download the operator manifest
root@master1:~/calico# wget https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/tigera-operator.yaml
# or
root@master1:~/calico# curl https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/tigera-operator.yaml -O
# Apply the manifest to create the operator
root@master1:~/calico# kubectl create -f tigera-operator.yaml
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
- Create the CRD resources
# Download the custom resources manifest
root@master1:~/calico# wget https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/custom-resources.yaml
# Adjust the IP settings in custom-resources.yaml
root@master1:~/calico# vim custom-resources.yaml
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    ipPools:
    - name: default-ipv4-ippool
      blockSize: 24        # changed to 24 so each node gets one /24 (class C) block
      cidr: 10.244.0.0/16  # must match the kubeadm init option "--pod-network-cidr=10.244.0.0/16"
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
# Apply the manifest to create the resources
root@master1:~/calico# kubectl create -f custom-resources.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created
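With blockSize 24 carved out of the 10.244.0.0/16 pool, each node is handed /24 blocks of pod IPs. A quick sketch of the arithmetic (just the size calculation, not Calico code):

```shell
# blockSize 24 inside a /16 pool:
blocks=$(( 1 << (24 - 16) ))        # number of /24 blocks available in the /16
ips_per_block=$(( 1 << (32 - 24) )) # addresses per /24 block
echo "${blocks} blocks of ${ips_per_block} IPs each"
```

So the /16 pool supports up to 256 node blocks, each providing 256 pod addresses, which comfortably covers this three-node cluster.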
- Confirm the installed pods
watch kubectl get pods -n calico-system
Troubleshooting
- node stays NotReady
root@master1:~/calico# kubectl get pod -n calico-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-b7fb9d96c-pbf9s 1/1 Running 0 47m
calico-node-gdsjw 1/1 Running 2 (16m ago) 47m
calico-node-hvqg4 1/1 Running 0 15m
calico-node-nntpd 0/1 Running 0 42s
calico-typha-55ccdf44bf-v2zmm 1/1 Running 0 15m
calico-typha-55ccdf44bf-w5l8w 1/1 Running 0 47m
csi-node-driver-bqvb7 2/2 Running 0 47m
csi-node-driver-cw59h 2/2 Running 0 47m
csi-node-driver-hbw2n                     2/2     Running   0             47m
Warning  Unhealthy  55s (x2 over 56s)  kubelet  Readiness probe failed: calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/calico/bird.ctl: connect: connection refused
Warning  Unhealthy  28s                kubelet  Readiness probe failed: 2024-07-01 08:57:58.226 [INFO][267] confd/health.go 202: Number of node(s) with BGP peering established = 2
calico/node is not ready: felix is not ready: Get "http://localhost:9099/readiness": dial tcp: lookup localhost on 8.8.8.8:53: no such host
Fix:
root@master1:~/calico# vi custom-resources.yaml
...
spec:
  # Configures Calico networking.
  calicoNetwork:
    ipPools:
    - name: default-ipv4-ippool
      blockSize: 24
      cidr: 10.244.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
    nodeAddressAutodetectionV4:   # added configuration
      interface: ens*
# Apply the update
root@master1:~/calico# kubectl apply -f custom-resources.yaml
# Also fix the node's DNS configuration
root@node1:~# cat /etc/netplan/01-concfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    ens33:
      dhcp4: no
      addresses:
        - 192.168.0.62/24
      routes:
        - to: 0.0.0.0/0
          via: 192.168.0.1
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
# Make the change take effect
root@node1:~# netplan apply
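Calico treats the interface value as a regular expression, not a shell glob, so it is worth checking that the pattern really matches your NIC name before applying it. A local sketch, using grep -E as a stand-in for Calico's regexp matching:

```shell
pattern='ens*'   # the value set under nodeAddressAutodetectionV4 above
for nic in ens33 eth0 lo; do
  if printf '%s\n' "$nic" | grep -Eq "$pattern"; then
    echo "$nic: matched"
  else
    echo "$nic: not matched"
  fi
done
```

As a regex, `ens*` means "en" followed by zero or more "s", so it matches ens33 but not eth0 or lo; on hosts with differently named NICs the pattern needs adjusting.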
3. Installing with Manifests
Choose one of the following depending on node count and datastore:
- Install Calico with Kubernetes API datastore, 50 nodes or less
- Install Calico with Kubernetes API datastore, more than 50 nodes
- Install Calico with etcd datastore
Taking fewer than 50 nodes with the Kubernetes API datastore as an example:
- Download the manifest that uses the Kubernetes API datastore
curl https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/calico.yaml -O
- Modify the Pod CIDR
Uncomment CALICO_IPV4POOL_CIDR and set the Pod CIDR.
- Apply the YAML
kubectl apply -f calico.yaml
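The CALICO_IPV4POOL_CIDR edit can be scripted. A sketch, demonstrated on an inline sample rather than the real calico.yaml; note that stripping the leading "# " keeps the YAML indentation correct, since the commented lines carry those two extra characters:

```shell
# Sample of the commented-out block as it appears in calico.yaml
cat > /tmp/cidr-sample.yaml <<'EOF'
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
EOF
# Uncomment the two lines and set our pod CIDR
sed -i 's|# \(- name: CALICO_IPV4POOL_CIDR\)|\1|' /tmp/cidr-sample.yaml
sed -i 's|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' /tmp/cidr-sample.yaml
cat /tmp/cidr-sample.yaml
```

The exact indentation in your downloaded manifest may differ between Calico releases, so double-check the result (e.g. with `grep -A1 CALICO_IPV4POOL_CIDR calico.yaml`) before applying.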
4. Installing with Helm
This deployment installs Calico as the k8s network plugin using helm:
# Download and install helm
wget https://get.helm.sh/helm-v3.15.2-linux-amd64.tar.gz
tar xf helm-v3.15.2-linux-amd64.tar.gz
cp linux-amd64/helm /usr/local/bin/helm
helm version
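Helm publishes a .sha256sum file alongside each release tarball; verifying it before unpacking is cheap insurance. A sketch of the pattern, shown on a local stand-in file (for the real thing, fetch helm-v3.15.2-linux-amd64.tar.gz.sha256sum from get.helm.sh and check the actual tarball):

```shell
# Stand-in for the downloaded tarball
echo 'demo payload' > /tmp/helm-demo.tar.gz
# Stand-in for the published .sha256sum file
sha256sum /tmp/helm-demo.tar.gz > /tmp/helm-demo.tar.gz.sha256sum
# The verification step itself -- exits non-zero on any mismatch
sha256sum -c /tmp/helm-demo.tar.gz.sha256sum
```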
Install calico:
# Add the calico helm repo
helm repo add projectcalico https://docs.tigera.io/calico/charts
# Create the tigera-operator namespace
kubectl create namespace tigera-operator
# Install the tigera calico operator and create the CRD resources
helm install calico projectcalico/tigera-operator --version v3.28.0 --namespace tigera-operator
# Confirm the pods are running
watch kubectl get pods -n calico-system
Prerequisites
- helm 3 is installed
- a k8s environment is installed
- kubeconfig is configured
- Calico can manage the host's cali and tunl interfaces. If you use NetworkManager, see Configure NetworkManager.
Installation
- Add the calico helm repo:
helm repo add projectcalico https://docs.tigera.io/calico/charts
To customize chart parameters, write a values.yaml:
cat > values.yaml <<EOF
installation:
  kubernetesProvider: AKS
  cni:
    type: Calico
  calicoNetwork:
    bgp: Disabled
    ipPools:
    - cidr: 10.244.0.0/16
      encapsulation: VXLAN
EOF
- Create the tigera-operator namespace.
kubectl create namespace tigera-operator
- Install the Tigera Calico operator and CRDs with the helm chart
helm install calico projectcalico/tigera-operator --version v3.28.0 --namespace tigera-operator
Or pass parameter values with values.yaml:
helm install calico projectcalico/tigera-operator --version v3.28.0 -f values.yaml --namespace tigera-operator
- Confirm the pods are running
watch kubectl get pods -n calico-system
Note
The Tigera operator installs into the calico-system namespace, whereas the other installation methods use the kube-system namespace.
Reference:
Install using Helm | Calico Documentation (tigera.io)
5. References
- https://helm.sh/zh/docs/intro/install/
- https://github.com/helm/helm/releases/
- https://docs.tigera.io/calico/latest/getting-started/kubernetes/
- Releases · projectcalico/calico (github.com)