I. Uninstalling k8s
kubeadm reset -f
modprobe -r ipip
lsmod
rm -rf ~/.kube/    # optional — decide for yourself whether to delete this; it is a common gotcha
rm -rf /etc/kubernetes/
rm -rf /etc/systemd/system/kubelet.service.d
rm -rf /etc/systemd/system/kubelet.service
rm -rf /usr/bin/kube*
rm -rf /etc/cni
rm -rf /opt/cni
rm -rf /var/lib/etcd
rm -rf /var/etcd
apt clean
apt remove 'kube*'
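As an optional sanity check that nothing was left behind, the commands below (just a quick sketch) confirm that no Kubernetes ports, binaries, or data directories remain:
ss -lntp | grep -E '6443|10250'
which kubeadm kubelet kubectl
ls /etc/kubernetes /var/lib/etcd 2>/dev/null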
Downloading Ubuntu
oldubuntu-releases mirror — download address and install guide, Alibaba Cloud open-source mirror site
It is recommended not to use the desktop image; use the server image instead. The version chosen here is 22.04: ubuntu-22.04.4-live-server-amd64.iso
After the download and install, replace the apt sources on the server:
ubuntu | mirror usage help | Tsinghua Open Source Mirror (Tsinghua University open-source software mirror site)
Follow the instructions for your release to replace the sources.
II. Installing a K8s cluster (one master, two workers)
k8s-master01  192.168.124.132    OS: Ubuntu 22.04
k8s-node01    192.168.124.133    OS: Ubuntu 22.04
k8s-node02    192.168.124.134    OS: Ubuntu 22.04
Minimum specs: 2 CPU cores, 2 GB RAM, 20 GB disk
1. Environment preparation (run on every server)
1) Time synchronization
timedatectl set-timezone Asia/Shanghai
sudo apt install ntpdate
sudo ntpdate ntp.ubuntu.com
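You can then confirm the timezone and that the clock looks right on each node (plain verification commands, not required by the installation):
timedatectl
date -R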
2) Assign a static IP address
(see the CSDN post "Ubuntu固定虛擬機的ip地址" for details)
3) Set the hostnames
[root@k8s-master1 ~]# hostnamectl set-hostname k8s-master
[root@k8s-node1 ~]# hostnamectl set-hostname k8s-node1
[root@k8s-node2 ~]# hostnamectl set-hostname k8s-node2
4) Disable swap
# Temporarily disable all swap
swapoff -a
# Permanently disable all swap
sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
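To confirm swap is fully off (swapon prints nothing and free shows 0 swap):
swapon --show
free -h | grep -i swap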
5) Add the cluster IPs and hostnames to /etc/hosts on every node (replace the IPs with your own node addresses; in this walkthrough the nodes are 192.168.124.132-134):
cat >> /etc/hosts << EOF
172.16.11.221 k8s-master
172.16.11.222 k8s-node1
172.16.11.223 k8s-node2
EOF
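A quick way to verify that every node can resolve the others by hostname (run on each node; adjust the names if yours differ):
for h in k8s-master k8s-node1 k8s-node2; do ping -c 1 -W 1 $h; done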
Configure kernel forwarding and bridge filtering:
cat > /etc/modules-load.d/k8s.conf << EOF
overlay
br_netfilter
EOF
Load the modules now:
modprobe overlay
modprobe br_netfilter
Check that they are loaded:
lsmod | grep overlay
lsmod | grep br_netfilter
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
Apply the configuration:
sysctl -p /etc/sysctl.d/k8s.conf
Check that br_netfilter is still loaded:
lsmod |grep br_netfilter
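The three sysctl values should all report 1 (br_netfilter must be loaded for the bridge keys to exist):
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables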
6) Disable the firewall
sudo systemctl disable --now ufw
7) Replace Ubuntu's sources.list
ubuntu | mirror usage help | Tsinghua Open Source Mirror (Tsinghua University open-source software mirror site)
Note: the entries below use the focal (20.04) suite from the mirror-site example; if you installed Ubuntu 22.04 as above, substitute jammy for focal.
# Source packages are commented out by default to speed up apt update; uncomment them if needed
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-updates main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-backports main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-backports main restricted universe multiverse
# The security sources below include both the official source and the mirror; switch the comments if needed
deb http://security.ubuntu.com/ubuntu/ focal-security main restricted universe multiverse
# deb-src http://security.ubuntu.com/ubuntu/ focal-security main restricted universe multiverse
# Pre-release (proposed) sources; enabling them is not recommended
# deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-proposed main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-proposed main restricted universe multiverse
8) Install ipset and ipvsadm
apt install -y ipset ipvsadm
Configure the IPVS kernel modules to load at boot. Note that files under /etc/modules-load.d/ are read by systemd-modules-load and should contain one module name per line (not modprobe commands):
cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
Load the modules immediately and check them:
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack
lsmod | grep -e ip_vs -e nf_conntrack
9) Container runtime: containerd
wget https://github.com/containerd/containerd/releases/download/v1.7.15/cri-containerd-1.7.15-linux-amd64.tar.gz
If the GitHub download fails because of network problems, a mirror copy is available at: https://pan.baidu.com/s/1VoYMDB6ikOTSn4W-tziY9g (extraction code: 3cvd)
Extract the archive and check:
tar xf cri-containerd-1.7.15-linux-amd64.tar.gz -C /
which containerd
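The archive also ships runc and a systemd unit; the paths below are where the cri-containerd tarball typically places them, so verify on your own system:
runc --version
ls -l /usr/local/sbin/runc /etc/systemd/system/containerd.service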
10) Generate and modify the containerd configuration file
Create the directory: mkdir /etc/containerd
Generate the config: containerd config default > /etc/containerd/config.toml
Edit the config file: change the sandbox (pause) image from 3.8 to 3.9, or better, point it at Alibaba Cloud: registry.aliyuncs.com/google_containers/pause:3.9. Also set SystemdCgroup = true for the runc runtime and config_path = "/etc/containerd/certs.d" in the registry section, as shown below.
vim /etc/containerd/config.toml
Using the Alibaba Cloud image is recommended.
disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
temp = ""
version = 2

# ... the rest of the file is essentially the unmodified output of `containerd config default`;
# the settings that differ from the defaults are:

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true

[plugins."io.containerd.grpc.v1.cri".registry]
  config_path = "/etc/containerd/certs.d"
Create the directory that config_path points to:
mkdir -p /etc/containerd/certs.d/docker.io
Configure a registry mirror (accelerator):
cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"
[host."https://x46sxvnb.mirror.aliyuncs.com"]
capabilities = ["pull", "resolve"]
EOF
Start containerd and enable it at boot:
systemctl enable --now containerd
Check the version:
containerd --version
Check its status: systemctl status containerd.service
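As an optional check that containerd honors the hosts.toml mirror, pull a small arbitrary test image through it (busybox is just an example; any public image works):
ctr -n k8s.io images pull --hosts-dir /etc/containerd/certs.d docker.io/library/busybox:latest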
2. Installing the cluster packages (run on every server)
1) Download the public signing key for the Kubernetes package repository
Kubernetes community source (choose either this or the Alibaba Cloud source below; if you have network problems, use Alibaba Cloud)
Create the directory: sudo mkdir -p /etc/apt/keyrings/
Download the key:
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
Alibaba Cloud source — create the directory: sudo mkdir -p /etc/apt/keyrings/
Download the key:
curl -fsSL https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
Kubernetes community source — add the Kubernetes apt repository:
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
Alibaba Cloud source — add the Kubernetes apt repository:
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
Update the package index:
apt-get update
Check the available versions:
apt-cache policy kubeadm
Install the pinned version:
sudo apt-get install -y kubelet=1.30.0-1.1 kubeadm=1.30.0-1.1 kubectl=1.30.0-1.1
Configure kubelet's cgroup driver (on Ubuntu the deb package reads /etc/default/kubelet rather than /etc/sysconfig/kubelet):
vim /etc/default/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
Enable kubelet at boot: systemctl enable kubelet
Hold the versions so they are not upgraded automatically later: sudo apt-mark hold kubelet kubeadm kubectl
To unhold them when you do want to upgrade: sudo apt-mark unhold kubelet kubeadm kubectl
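To confirm the installed and held versions on each node:
apt-mark showhold
kubeadm version -o short
kubelet --version
kubectl version --client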
3. Cluster initialization (run on k8s-master01 only)
Check the version:
kubeadm version
Generate a configuration file:
kubeadm config print init-defaults > kubeadm-config.yaml
The complete kubeadm-config.yaml is below; just change the advertise IP and node name to match your environment.
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
localAPIEndpoint:
  advertiseAddress: 192.168.124.150
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: k8s-master
  taints: null
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  timeoutForControlPlane: 4m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kubernetesVersion: 1.30.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
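Optionally, you can sanity-check the file before touching the node with a dry run, which prints what kubeadm would do without changing anything:
kubeadm init --config kubeadm-config.yaml --dry-run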
List the required images:
kubeadm config images list --config kubeadm-config.yaml
Pull the images:
kubeadm config images pull --config kubeadm-config.yaml
Check the downloaded images:
crictl images
If the image pulls fail with errors, pull the images from the Alibaba Cloud registry instead:
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers
Initialize the cluster:
kubeadm init --config kubeadm-config.yaml
If this is not a fresh install, initialization will fail with an error; run kubeadm reset -f and then install again.
If it still fails, check the kubelet logs with journalctl -xeu kubelet. The following two posts (CSDN blogs) helped resolve the errors hit here:
"validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/containerd/..." - CSDN博客
"KS8初始化遇到問題:rpc error: code = Unknown desc = failed to get sandbox image \"registry.aliyuncs.com/goog..." - CSDN博客
The second attempt succeeded:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.124.150:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:aeceb14ab601148161cd69b9e041f7ac09e79f5e45135fdac9c4ee3e42e1acb8
Run the commands above: the kubeconfig setup on the master, and the kubeadm join command on each worker node as root.
After the workers have joined, check the nodes from the master: kubectl get node
The nodes are now registered (they will show NotReady until a network plugin is installed).
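If you lose the join command, or the bootstrap token expires (24 hours by default), a fresh one can be generated on the master:
kubeadm token create --print-join-command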
4. Installing the network plugin (run on k8s-master01)
Consult the Calico documentation:
Quickstart for Calico on Kubernetes | Calico Documentation (tigera.io)
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/tigera-operator.yaml
Check that the operator is running:
kubectl get pods -n tigera-operator
kubectl get ns
Download the custom-resources file with wget, using the link from the official docs. (The quickstart command applies the manifest straight from the URL, but we need to edit the file before applying it.)
wget https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/custom-resources.yaml
Because of network restrictions you may need a proxy to download it. The file contents are:
# This section includes base Calico installation configuration.
# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    ipPools:
    - name: default-ipv4-ippool
      blockSize: 26
      cidr: 10.244.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()

---

# This section configures the Calico API server.
# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
Edit the file.
Note: the pod subnet set in kubeadm-config.yaml is 10.244.0.0/16, so the cidr here must be changed to match.
vim custom-resources.yaml
Apply it:
kubectl create -f custom-resources.yaml
Check the namespaces:
kubectl get ns
Check the pods running in the namespace. If they are not all up yet, wait a while — they are still being created, and on a slow network this can take up to half an hour.
kubectl get pods -n calico-system
Check the pods running in kube-system:
kubectl get pods -n kube-system
Then check the cluster status again:
kubectl get nodes
Because the calico-system pods were stuck pulling images for far too long, delete them first:
kubectl delete -f custom-resources.yaml
Choosing a network plugin: https://kubernetes.io/docs/concepts/cluster-administration/addons/
Quickstart / calicoctl install: https://projectcalico.docs.tigera.io/archive/v3.20/getting-started/clis/calicoctl/install
Calico network plugin: https://docs.projectcalico.org/v3.9/getting-started/kubernetes/
kubectl apply -f https://docs.projectcalico.org/v3.9/manifests/calico.yaml
https://docs.tigera.io/archive/v3.9/getting-started/kubernetes/
Download the calico.yaml file locally. Its contents (the stock v3.9 manifest) are summarized below:
The downloaded calico.yaml is the stock Calico v3.9 manifest from the URL above. It contains: the calico-config ConfigMap (Typha disabled, bird backend, veth MTU 1440, CNI network config template), the crd.projectcalico.org CustomResourceDefinitions, RBAC (ClusterRoles and bindings for calico-kube-controllers and calico-node), the calico-node DaemonSet (images calico/cni:v3.9.6, calico/pod2daemon-flexvol:v3.9.6 and calico/node:v3.9.6), and the calico-kube-controllers Deployment (image calico/kube-controllers:v3.9.6), plus their ServiceAccounts.
The only value that normally needs editing is the CALICO_IPV4POOL_CIDR environment variable in the calico-node DaemonSet (192.168.0.0/16 in the stock file), which should match the pod subnet configured at init time — 10.244.0.0/16 here.
Watch the cluster while the pods come up:
kubectl get pods --all-namespaces -w
5. Because of network problems in mainland China, the Calico images could not be pulled
Delete the Calico resources above and install the flannel network plugin instead.
Method one below, taken from the following post, solved the problem:
"kubernetes(1.28)配置flannel:kubelet無法拉取鏡像(NotReady ImagePullBackOff)同時解決k8s配置harbor私人鏡像倉庫問題_flannel鏡像拉取失敗" - CSDN博客
The image archives used below can be downloaded from: https://pan.baidu.com/s/1bkDHddFxhRfFS3iiJv8ScA (extraction code: d8bg)
The kube-flannel.yml itself comes from GitHub: flannel-io/flannel (flannel is a network fabric for containers, designed for Kubernetes).
1) Install Docker
(see the CSDN post "Ubuntu安裝docker,指定版本" for details)
2) Use Docker to load the downloaded archives as local images.
However, since v1.24 kubelet has completely removed dockershim and uses containerd by default, so even with the images present locally in Docker — and even if the YAML is pointed at them — kubelet cannot use them. A few more steps are needed.
1. Save the Docker images as tar archives (ctr could not import the downloaded archives directly, so Docker is used as an intermediate step):
# docker save -o <output tar file> <local image>
docker save -o flannel-flannel-cni-plugin-v1.4.1-flannel1-amd64.tar flannel/flannel-cni-plugin:v1.4.1-flannel1
docker save -o flannel-flannel-v0.25.1-amd64.tar flannel/flannel:v0.25.1
If this succeeds, the two tar files appear in the current directory.
2. Import the tar archives into containerd's k8s.io namespace:
# -n specifies the namespace
ctr -n k8s.io images import flannel-flannel-v0.25.1-amd64.tar
ctr -n k8s.io images import flannel-flannel-cni-plugin-v1.4.1-flannel1-amd64.tar
Verify that the flannel images are now present under k8s.io:
ctr -n k8s.io i check | grep flannel
Note: the containerd namespace used by Kubernetes is k8s.io; only images imported into this namespace can be picked up locally by kubelet. Since v1.24 (this cluster runs 1.30), kubelet no longer reads the local Docker image store but containerd's. On versions before 1.24, where Docker was still the default runtime, the save/import steps above are unnecessary because kubelet can use local Docker images directly.
3. Modify kube-flannel.yml
After downloading the kube-flannel.yml mentioned above, change the image entries as needed and add an image pull policy. The modified file is the upstream manifest (Namespace kube-flannel, ServiceAccount, ClusterRole/ClusterRoleBinding, the kube-flannel-cfg ConfigMap with Network: 10.244.0.0/16 and the vxlan backend, and the kube-flannel-ds DaemonSet) with imagePullPolicy: Never added to all three containers so kubelet uses the locally imported images:

  initContainers:
  - name: install-cni-plugin
    image: docker.io/flannel/flannel-cni-plugin:v1.4.1-flannel1
    imagePullPolicy: Never
    ...
  - name: install-cni
    image: docker.io/flannel/flannel:v0.25.1
    imagePullPolicy: Never
    ...
  containers:
  - name: kube-flannel
    image: docker.io/flannel/flannel:v0.25.1
    imagePullPolicy: Never
    ...
# Delete the old resources
kubectl delete -f kube-flannel.yml
# Re-apply
kubectl apply -f kube-flannel.yml
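To confirm flannel is healthy before declaring victory:
kubectl get pods -n kube-flannel -o wide
kubectl get nodes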
The cluster is finally up — all nodes now report Ready!
I solved the network-plugin problem with method one, but using a private registry is actually more elegant!
Method two: on k8s 1.28+, configure a Harbor registry for the cluster, push the images to it, and pull them from the registry
For versions before 1.24 there are plenty of guides on configuring a Harbor registry; the key there is /etc/docker/daemon.json, which is not repeated here.
Since v1.24, kubelet has removed dockershim entirely and defaults to containerd. containerd cannot log in to a private registry the way "docker login <registry>" does, so even if Docker on the host is logged in to Harbor, the cluster still cannot pull images from it. The containerd configuration has to be changed instead.
With containerd 1.6 and later (1.7.15 here), the old approach of editing registry mirrors directly in /etc/containerd/config.toml is no longer recommended.
Refer to the official documentation: containerd/docs/cri/config.md at main · containerd/containerd · GitHub
Configure a per-registry hosts.toml to point the cluster at the Harbor registry.
Steps (2.1-2.4 must be performed on every machine!):
2.1 Modify /etc/containerd/config.toml so that config_path = "/etc/containerd/certs.d" is set under [plugins."io.containerd.grpc.v1.cri".registry] (already done in the config above).
2.2 Custom certificates, or skip TLS verification (see containerd/docs/hosts.md at main · containerd/containerd · GitHub)
# Create the directory; replace the IP and port with those of your private registry
mkdir -p /etc/containerd/certs.d/192.168.12.34:5000
# Create/open hosts.toml
vi /etc/containerd/certs.d/192.168.12.34:5000/hosts.toml
# Write the following into hosts.toml, replacing the IP and port as above
server = "http://192.168.12.34:5000"
[host."192.168.12.34:5000"]
? capabilities = ["pull", "resolve", "push"]
? skip_verify = true
2.3 Restart containerd
systemctl daemon-reload
systemctl restart containerd
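At this point you can try a test pull with crictl (the image path below is only an example — the flannel project and tag exist in Harbor only after the push in 2.5):
crictl pull 192.168.12.34:5000/flannel/flannel:v0.25.1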
2.4 Even after the steps above, crictl and the cluster manifests may still fail to pull from the private registry. If so, fix the CRI endpoints:
vim /etc/crictl.yaml
Set both the runtime and image endpoints to "unix:///run/containerd/containerd.sock", then save and exit with :wq.
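For reference, a minimal /etc/crictl.yaml that does this (the timeout and debug values are just reasonable defaults, not required):
cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF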
2.5 Test pushing the images to Harbor
Upload with Docker (the traditional way):
docker tag flannel/flannel-cni-plugin:v1.4.1-flannel1 IP:PORT/flannel/flannel-cni-plugin:v1.4.1-flannel1
docker push IP:PORT/flannel/flannel-cni-plugin:v1.4.1-flannel1
Do the same for the other image (if the push fails, you may first need to create a "flannel" project in Harbor).
2.6 Modify kube-flannel.yml: point the image fields at the Harbor addresses and change imagePullPolicy back from Never so the images can actually be pulled.
Save and exit with :wq. Delete and re-apply kube-flannel.yml; the images are now pulled successfully (verify with kubectl describe pod).
Summary
Method two is more formal than method one: once you are setting up a cluster you should really have a private registry anyway, and it saves importing images on every single node. If you have not set up a private registry yet, method one works fine too.
One more point: /etc/containerd/config.toml does not affect ctr; that file is used by the CRI plugin, i.e. it affects crictl.
If ctr images pull keeps failing with "http: server gave HTTP response to HTTPS client", it may not be a configuration error at all — you may simply need to add --plain-http to the pull command, which tells the client to talk to the registry over plain HTTP, or use a custom hosts directory to point at your configuration:
ctr images pull IP:PORT/library/service:v1.0.0 --plain-http
ctr images pull IP:PORT/library/service:v1.0.0 --hosts-dir "/etc/containerd/certs.d"
Thanks to the reference: "ubuntu系統 kubeadm方式搭建k8s集群_ubuntu kubeadm" - CSDN博客