Table of Contents
- Preface
- 1. Environment list
- 2. Approach
- 3. Environment preparation
- 4. Preparing files in a networked environment
  - 1. Download the required rpm packages
  - 2. Prepare the images harbor needs
  - 3. k8s image files
  - 4. Generate the offline installation package
  - 5. Harbor project-creation script
- 5. Deploying a single-node cluster with no public network
  - 1. Base environment installation
  - 2. Install harbor
  - 3. Prepare the k8s images
  - 4. Install k8s
- 6. Deploying a multi-node cluster with no public network
- Summary
Preface
Deploying a k8s platform with the kk tool on CentOS 7.9 (amd64 architecture) in an environment with no public network access.
A project needs to be deployed at a client site with no internet access, so an offline deployment bundle has to be prepared.
Addendum:
The same approach works on CentOS 7.6; only the offlinerpms.tar package differs, all the other files are identical.
1. Environment list
Server architecture: amd64
OS ISO: CentOS-7-x86_64-Minimal-2009.iso
k8s version: v1.23.6
kk tool version: 3.1.10
harbor: harbor-online-installer-v2.5.0.tgz
docker-compose: 1.23.2
2. Approach
Two steps: first, on machine A with internet access, download all the files and images the deployment needs; then test and verify on machine B, which has no internet access.
3. Environment preparation
Ask the network administrator to open internet access for the IP range 192.168.150.140-149.
Keep internet access closed for the IP range 192.168.150.150-159.
4. Preparing files in a networked environment
Server IP: 192.168.150.141
A fresh virtual machine installed from the CentOS-7-x86_64-Minimal-2009.iso image.
Files to prepare:
- rpm package archive: offlinerpms.tar
- harbor image archive: harbor-image.tar
- kk tool: kk
- docker image archive needed for the k8s install: kubesphereio-image.tar
- k8s offline installation package: kubesphere.tar.gz
- harbor installer: harbor-online-installer-v2.5.0.tgz
- docker-compose binary: docker-compose
- harbor project-creation script: create_project_harbor.sh
1. Download the required rpm packages
First switch the fresh VM to a reachable mirror:
mkdir -p /etc/yum.repos.d/CentOS-Base.repo.backup;
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup;
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum makecache;
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo;
Download the rpm packages:
mkdir -p /root/offlinerpms
# required tooling
yum install -y yum-utils
# basic tool packages
yum install --downloadonly --downloaddir=/root/offlinerpms wget ntp vim
# base environment packages for k8s
yum install --downloadonly --downloaddir=/root/offlinerpms socat conntrack yum-utils epel-release
# docker packages
yum install --downloadonly --downloaddir=/root/offlinerpms docker-ce docker-ce-cli
After the downloads finish, create a tar archive:
cd /root/
tar -cvf offlinerpms.tar offlinerpms/
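As an optional sanity check before the archive leaves the online host, the number of .rpm entries inside it can be counted; `count_rpms` is a hypothetical helper, not part of the original procedure:

```shell
#!/usr/bin/env bash
# Hypothetical sanity check (not part of the original procedure): count the
# .rpm entries in the archive so a truncated archive is noticed early.
count_rpms() {
  tar -tf "$1" | grep -c '\.rpm$'
}
# e.g. count_rpms /root/offlinerpms.tar
```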
2. Prepare the images harbor needs
# pull the images from a mirror
docker pull docker.m.daocloud.io/goharbor/prepare:v2.5.0
docker pull docker.m.daocloud.io/goharbor/harbor-log:v2.5.0
docker pull docker.m.daocloud.io/goharbor/registry-photon:v2.5.0
docker pull docker.m.daocloud.io/goharbor/harbor-registryctl:v2.5.0
docker pull docker.m.daocloud.io/goharbor/harbor-db:v2.5.0
docker pull docker.m.daocloud.io/goharbor/harbor-core:v2.5.0
docker pull docker.m.daocloud.io/goharbor/harbor-portal:v2.5.0
docker pull docker.m.daocloud.io/goharbor/harbor-jobservice:v2.5.0
docker pull docker.m.daocloud.io/goharbor/redis-photon:v2.5.0
docker pull docker.m.daocloud.io/goharbor/nginx-photon:v2.5.0
# retag the images
docker tag docker.m.daocloud.io/goharbor/prepare:v2.5.0 goharbor/prepare:v2.5.0
docker tag docker.m.daocloud.io/goharbor/harbor-log:v2.5.0 goharbor/harbor-log:v2.5.0
docker tag docker.m.daocloud.io/goharbor/registry-photon:v2.5.0 goharbor/registry-photon:v2.5.0
docker tag docker.m.daocloud.io/goharbor/harbor-registryctl:v2.5.0 goharbor/harbor-registryctl:v2.5.0
docker tag docker.m.daocloud.io/goharbor/harbor-db:v2.5.0 goharbor/harbor-db:v2.5.0
docker tag docker.m.daocloud.io/goharbor/harbor-core:v2.5.0 goharbor/harbor-core:v2.5.0
docker tag docker.m.daocloud.io/goharbor/harbor-portal:v2.5.0 goharbor/harbor-portal:v2.5.0
docker tag docker.m.daocloud.io/goharbor/harbor-jobservice:v2.5.0 goharbor/harbor-jobservice:v2.5.0
docker tag docker.m.daocloud.io/goharbor/redis-photon:v2.5.0 goharbor/redis-photon:v2.5.0
docker tag docker.m.daocloud.io/goharbor/nginx-photon:v2.5.0 goharbor/nginx-photon:v2.5.0
# save the images
docker save -o harbor-image.tar goharbor/prepare:v2.5.0 goharbor/harbor-log:v2.5.0 goharbor/registry-photon:v2.5.0 goharbor/harbor-registryctl:v2.5.0 goharbor/harbor-db:v2.5.0 goharbor/harbor-core:v2.5.0 goharbor/harbor-portal:v2.5.0 goharbor/harbor-jobservice:v2.5.0 goharbor/redis-photon:v2.5.0 goharbor/nginx-photon:v2.5.0
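The pull/tag/save sequence above can also be driven from a single component list, which keeps all three steps in sync. This is my own refactor, not the original procedure: `mirror_harbor_images` and the `DRYRUN` switch are assumptions; with the default `DRYRUN=echo` the commands are only printed for review, and `DRYRUN= mirror_harbor_images` would execute them.

```shell
#!/usr/bin/env bash
# Sketch (my own refactor): drive the pull/tag/save sequence from one
# component list. DRYRUN=echo (the default) only prints the commands;
# set DRYRUN= to execute them for real.
DRYRUN=${DRYRUN:-echo}
MIRROR=docker.m.daocloud.io
VER=v2.5.0
COMPONENTS="prepare harbor-log registry-photon harbor-registryctl harbor-db
harbor-core harbor-portal harbor-jobservice redis-photon nginx-photon"

mirror_harbor_images() {
  local images=""
  for c in $COMPONENTS; do
    $DRYRUN docker pull "$MIRROR/goharbor/$c:$VER"
    $DRYRUN docker tag "$MIRROR/goharbor/$c:$VER" "goharbor/$c:$VER"
    images="$images goharbor/$c:$VER"
  done
  $DRYRUN docker save -o harbor-image.tar $images
}
mirror_harbor_images
```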
3. k8s image files
Generate the manifest-sample.yaml file to get the list of required docker images. My approach is to prepare the images manually, so I download the images listed in the file by hand and then delete all of the image entries from manifest-sample.yaml.
chmod a+x kk
export KKZONE=cn
./kk create manifest --with-kubernetes v1.23.6 --arch amd64 --with-registry "docker registry"
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - amd64
  operatingSystems: []
  kubernetesDistributions:
  - type: kubernetes
    version: v1.23.6
  components:
    helm:
      version: v3.14.3
    cni:
      version: v1.2.0
    etcd:
      version: v3.5.13
    containerRuntimes:
    - type: docker
      version: 24.0.9
    - type: containerd
      version: 1.7.13
    calicoctl:
      version: v3.27.4
    crictl:
      version: v1.29.0
    docker-registry:
      version: "2"
    harbor:
      version: v2.10.1
    docker-compose:
      version: v2.26.1
  images:
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.23.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.23.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.23.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.23.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.22.20
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.27.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.27.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.27.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.27.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.27.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.21.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/flannel-cni-plugin:v1.1.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cilium:v1.15.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/operator-generic:v1.15.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/hybridnet:v0.8.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-ovn:v1.10.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/multus-cni:v3.8
  - registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.9.6-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-vip:v0.7.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kata-deploy:stable
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node-feature-discovery:v0.10.0
  registry:
    auths: {}
Download the images:
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.6
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.23.6
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.23.6
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.23.6
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.23.6
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.6
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.22.20
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.27.4
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.27.4
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.27.4
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.27.4
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.27.4
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.21.3
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/flannel-cni-plugin:v1.1.2
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/cilium:v1.15.3
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/operator-generic:v1.15.3
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/hybridnet:v0.8.6
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-ovn:v1.10.10
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/multus-cni:v3.8
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.9.6-alpine
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-vip:v0.7.2
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kata-deploy:stable
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/node-feature-discovery:v0.10.0
Save the images to a tar archive:
docker save -o kubesphereio-image.tar registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.6 registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.23.6 registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.23.6 registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.23.6 registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.23.6 registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.6 registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.22.20 registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.27.4 registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.27.4 registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.27.4 registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.27.4 registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.27.4 registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.21.3 registry.cn-beijing.aliyuncs.com/kubesphereio/flannel-cni-plugin:v1.1.2 registry.cn-beijing.aliyuncs.com/kubesphereio/cilium:v1.15.3 registry.cn-beijing.aliyuncs.com/kubesphereio/operator-generic:v1.15.3 registry.cn-beijing.aliyuncs.com/kubesphereio/hybridnet:v0.8.6 registry.cn-beijing.aliyuncs.com/kubesphereio/kube-ovn:v1.10.10 registry.cn-beijing.aliyuncs.com/kubesphereio/multus-cni:v3.8 registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0 registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0 registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.9.6-alpine registry.cn-beijing.aliyuncs.com/kubesphereio/kube-vip:v0.7.2 registry.cn-beijing.aliyuncs.com/kubesphereio/kata-deploy:stable registry.cn-beijing.aliyuncs.com/kubesphereio/node-feature-discovery:v0.10.0
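Alternatively, the image list can be extracted straight from manifest-sample.yaml instead of being copied by hand. A sketch, assuming the refs appear as `- registry.cn-beijing.aliyuncs.com/kubesphereio/...` entries as above; `extract_images` is a hypothetical helper:

```shell
#!/usr/bin/env bash
# Hypothetical helper: pull the image refs out of the kk manifest so the
# pull list does not have to be maintained by hand.
extract_images() {
  grep -oE 'registry\.cn-beijing\.aliyuncs\.com/kubesphereio/[^ "]+' "$1"
}

# usage on the online host:
#   extract_images manifest-sample.yaml | xargs -n1 docker pull
#   docker save -o kubesphereio-image.tar $(extract_images manifest-sample.yaml)
```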
4. Generate the offline installation package
Edit manifest-sample.yaml as follows (all image entries removed):
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - amd64
  operatingSystems: []
  kubernetesDistributions:
  - type: kubernetes
    version: v1.23.6
  components:
    helm:
      version: v3.14.3
    cni:
      version: v1.2.0
    etcd:
      version: v3.5.13
    containerRuntimes:
    - type: docker
      version: 24.0.9
    - type: containerd
      version: 1.7.13
    calicoctl:
      version: v3.27.4
    crictl:
      version: v1.29.0
    docker-registry:
      version: "2"
    harbor:
      version: v2.10.1
    docker-compose:
      version: v2.26.1
  images: []
  registry:
    auths: {}
Build the offline installation package:
export KKZONE=cn
chmod a+x kk
./kk artifact export -m manifest-sample.yaml -o kubesphere.tar.gz
5. Harbor project-creation script
The docker-compose version in use:
docker-compose version 1.23.2, build 1110ad01
The project-creation script, create_project_harbor.sh:
[root@demo home]# cat create_project_harbor.sh
#!/usr/bin/env bash
# Copyright 2018 The KubeSphere Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

url="http://XX.XX.XX.XX" # change to the actual registry address
user="admin"
passwd="Harbor12345"

harbor_projects=(
    ks
    kubesphere
    kubesphereio
    coredns
    calico
    flannel
    cilium
    hybridnetdev
    kubeovn
    openebs
    library
    plndr
    jenkins
    argoproj
    dexidp
    openpolicyagent
    curlimages
    grafana
    kubeedge
    nginxinc
    prom
    kiwigrid
    minio
    opensearchproject
    istio
    jaegertracing
    timberio
    prometheus-operator
    jimmidyson
    elastic
    thanosio
    brancz
    prometheus
)

for project in "${harbor_projects[@]}"; do
    echo "creating $project"
    curl -u "${user}:${passwd}" -X POST -H "Content-Type: application/json" "${url}/api/v2.0/projects" -d "{ \"project_name\": \"${project}\", \"public\": true}" -k # note the -k at the end of the curl command
done
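With all eight files prepared, a checksum file makes it easy to confirm the bundle survived the transfer to the offline side. `bundle_checksums` is a hypothetical helper, not part of the original procedure; verification on the offline host is `sha256sum -c SHA256SUMS`:

```shell
#!/usr/bin/env bash
# Hypothetical helper: checksum the offline bundle so transfer corruption
# is caught before the air-gapped install starts.
bundle_checksums() {
  sha256sum offlinerpms.tar harbor-image.tar kk kubesphereio-image.tar \
            kubesphere.tar.gz harbor-online-installer-v2.5.0.tgz \
            docker-compose create_project_harbor.sh > SHA256SUMS
}
# run bundle_checksums in the bundle directory, ship SHA256SUMS with the
# files, then run `sha256sum -c SHA256SUMS` on the offline host
```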
5. Deploying a single-node cluster with no public network
Server IP: 192.168.150.152
A fresh virtual machine installed from the CentOS-7-x86_64-Minimal-2009.iso image.
Important: be sure to configure DNS, pointing at an internal DNS server or at 8.8.8.8 and 114.114.114.114; without it, nodelocaldns will report errors.
Upload the prepared files to /data/install/:
- rpm package archive: offlinerpms.tar
- harbor image archive: harbor-image.tar
- kk tool: kk
- docker image archive needed for the k8s install: kubesphereio-image.tar
- k8s offline installation package: kubesphere.tar.gz
- harbor installer: harbor-online-installer-v2.5.0.tgz
- docker-compose binary: docker-compose
- harbor project-creation script: create_project_harbor.sh
1. Base environment installation
cd /data/install/
tar -xvf offlinerpms.tar
cd /data/install/offlinerpms
# configure docker's cgroup driver and insecure registry
mkdir -p /etc/docker/;
cat > /etc/docker/daemon.json <<EOF
{
  "insecure-registries": ["http://192.168.150.152:80"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF
yum localinstall -y *.rpm
# switch to Aliyun's time servers; in an internal network, point at the internal ntp server
sudo sed -i 's/^server /#server /' /etc/ntp.conf;
sed -i '/3.centos.pool.ntp.org iburst/a server time1.aliyun.com prefer\nserver time2.aliyun.com\nserver time3.aliyun.com\nserver time4.aliyun.com\nserver time5.aliyun.com\nserver time6.aliyun.com\nserver time7.aliyun.com' /etc/ntp.conf;
# restart ntpd and enable it at boot
systemctl enable ntpd;
systemctl restart ntpd;
timedatectl set-timezone "Asia/Shanghai";
ntpq -p;
hwclock;
# disable selinux
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux;
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config;
sed -i 's/SELINUX=permissive/SELINUX=disabled/g' /etc/sysconfig/selinux;
sed -i 's/SELINUX=permissive/SELINUX=disabled/g' /etc/selinux/config;
# disable the firewall
systemctl stop firewalld.service;
systemctl disable firewalld.service;
# start docker and enable it at boot
systemctl restart docker;
systemctl enable docker;
# reboot the server
reboot
2. Install harbor
Copy docker-compose into place and load the images harbor needs:
cd /data/install
\cp /data/install/docker-compose /usr/local/bin/
chmod a+x /usr/local/bin/docker-compose
docker-compose --version
# create the data directory
mkdir -p /data/harbor/data
# load the images
docker load -i harbor-image.tar
cd /data/install/
tar -xvf harbor-online-installer-v2.5.0.tgz
cd /data/install/harbor/
Edit the harbor configuration file harbor.yml:
hostname: 192.168.150.152
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 80
# https related config
#https:
  # https port for harbor, default is 443
  #port: 443
  # The path of cert and key files for nginx
  #certificate: /your/certificate/path
  #private_key: /your/private/key/path
data_volume: /data/harbor/data
Install harbor:
# create the data directory
mkdir -p /data/harbor/data
cd /data/install/harbor/
./install.sh
Create the projects in harbor:
cd /data/install
# edit the url in create_project_harbor.sh first, e.g.:
# url="http://192.168.150.152" # or the actual registry address
./create_project_harbor.sh
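To confirm the script worked, the projects can be listed back through Harbor's v2 REST API. A sketch using this document's address and default credentials; `project_names` is a hypothetical grep-based parser used here instead of jq:

```shell
#!/usr/bin/env bash
# Hypothetical helpers: list the project names back from the Harbor API to
# confirm create_project_harbor.sh worked.
project_names() {
  # pull "name":"..." pairs out of the JSON response and strip the quotes
  grep -oE '"name" *: *"[^"]+"' | grep -oE '"[^"]+"$' | tr -d '"'
}

list_projects() {
  curl -s -u admin:Harbor12345 "http://192.168.150.152/api/v2.0/projects?page_size=100" \
    | project_names
}
# usage: list_projects   # should include kubesphereio among others
```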
3. Prepare the k8s images
Load the images and log in to the registry:
cd /data/install
docker load -i kubesphereio-image.tar
docker login 192.168.150.152:80
Retag the images:
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.6 192.168.150.152:80/kubesphereio/pause:3.6
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.23.6 192.168.150.152:80/kubesphereio/kube-apiserver:v1.23.6
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.23.6 192.168.150.152:80/kubesphereio/kube-controller-manager:v1.23.6
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.23.6 192.168.150.152:80/kubesphereio/kube-scheduler:v1.23.6
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.23.6 192.168.150.152:80/kubesphereio/kube-proxy:v1.23.6
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.6 192.168.150.152:80/kubesphereio/coredns:1.8.6
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.22.20 192.168.150.152:80/kubesphereio/k8s-dns-node-cache:1.22.20
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.27.4 192.168.150.152:80/kubesphereio/kube-controllers:v3.27.4
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.27.4 192.168.150.152:80/kubesphereio/cni:v3.27.4
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.27.4 192.168.150.152:80/kubesphereio/node:v3.27.4
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.27.4 192.168.150.152:80/kubesphereio/pod2daemon-flexvol:v3.27.4
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.27.4 192.168.150.152:80/kubesphereio/typha:v3.27.4
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.21.3 192.168.150.152:80/kubesphereio/flannel:v0.21.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/flannel-cni-plugin:v1.1.2 192.168.150.152:80/kubesphereio/flannel-cni-plugin:v1.1.2
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/cilium:v1.15.3 192.168.150.152:80/kubesphereio/cilium:v1.15.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/operator-generic:v1.15.3 192.168.150.152:80/kubesphereio/operator-generic:v1.15.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/hybridnet:v0.8.6 192.168.150.152:80/kubesphereio/hybridnet:v0.8.6
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-ovn:v1.10.10 192.168.150.152:80/kubesphereio/kube-ovn:v1.10.10
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/multus-cni:v3.8 192.168.150.152:80/kubesphereio/multus-cni:v3.8
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0 192.168.150.152:80/kubesphereio/provisioner-localpv:3.3.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0 192.168.150.152:80/kubesphereio/linux-utils:3.3.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.9.6-alpine 192.168.150.152:80/kubesphereio/haproxy:2.9.6-alpine
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-vip:v0.7.2 192.168.150.152:80/kubesphereio/kube-vip:v0.7.2
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kata-deploy:stable 192.168.150.152:80/kubesphereio/kata-deploy:stable
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/node-feature-discovery:v0.10.0 192.168.150.152:80/kubesphereio/node-feature-discovery:v0.10.0
Push them to the harbor registry:
docker push 192.168.150.152:80/kubesphereio/pause:3.6
docker push 192.168.150.152:80/kubesphereio/kube-apiserver:v1.23.6
docker push 192.168.150.152:80/kubesphereio/kube-controller-manager:v1.23.6
docker push 192.168.150.152:80/kubesphereio/kube-scheduler:v1.23.6
docker push 192.168.150.152:80/kubesphereio/kube-proxy:v1.23.6
docker push 192.168.150.152:80/kubesphereio/coredns:1.8.6
docker push 192.168.150.152:80/kubesphereio/k8s-dns-node-cache:1.22.20
docker push 192.168.150.152:80/kubesphereio/kube-controllers:v3.27.4
docker push 192.168.150.152:80/kubesphereio/cni:v3.27.4
docker push 192.168.150.152:80/kubesphereio/node:v3.27.4
docker push 192.168.150.152:80/kubesphereio/pod2daemon-flexvol:v3.27.4
docker push 192.168.150.152:80/kubesphereio/typha:v3.27.4
docker push 192.168.150.152:80/kubesphereio/flannel:v0.21.3
docker push 192.168.150.152:80/kubesphereio/flannel-cni-plugin:v1.1.2
docker push 192.168.150.152:80/kubesphereio/cilium:v1.15.3
docker push 192.168.150.152:80/kubesphereio/operator-generic:v1.15.3
docker push 192.168.150.152:80/kubesphereio/hybridnet:v0.8.6
docker push 192.168.150.152:80/kubesphereio/kube-ovn:v1.10.10
docker push 192.168.150.152:80/kubesphereio/multus-cni:v3.8
docker push 192.168.150.152:80/kubesphereio/provisioner-localpv:3.3.0
docker push 192.168.150.152:80/kubesphereio/linux-utils:3.3.0
docker push 192.168.150.152:80/kubesphereio/haproxy:2.9.6-alpine
docker push 192.168.150.152:80/kubesphereio/kube-vip:v0.7.2
docker push 192.168.150.152:80/kubesphereio/kata-deploy:stable
docker push 192.168.150.152:80/kubesphereio/node-feature-discovery:v0.10.0
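The long tag/push list above can also be generated instead of typed. A sketch: `to_private` is my own helper that maps a mirror ref to its private-registry name with a parameter expansion, and the commented loop applies it to every loaded kubesphereio image:

```shell
#!/usr/bin/env bash
# Hypothetical helper: map a source image ref to its name in the private
# harbor registry.
SRC=registry.cn-beijing.aliyuncs.com/kubesphereio
DST=192.168.150.152:80/kubesphereio

to_private() {
  echo "$DST/${1#"$SRC"/}"
}

# usage, after `docker load -i kubesphereio-image.tar`:
#   docker images --format '{{.Repository}}:{{.Tag}}' | grep "^$SRC/" |
#   while read -r img; do
#     docker tag "$img" "$(to_private "$img")"
#     docker push "$(to_private "$img")"
#   done
```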
4. Install k8s
Create the deployment config file:
cd /data/install
export KKZONE=cn
./kk create config --with-kubernetes v1.23.6
Edit the deployment config file /data/install/config-sample.yaml:
adjust hosts and roleGroups to match the local nodes, and adjust the registry section to match the local harbor.
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: demo, address: 192.168.150.152, internalAddress: 192.168.150.152, user: root, password: "smartcore"}
  roleGroups:
    etcd:
    - demo
    control-plane:
    - demo
    worker:
    - demo
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.23.6
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    type: harbor
    auths:
      "192.168.150.152:80":
        username: admin
        password: Harbor12345
        skipTLSVerify: true
    privateRegistry: "192.168.150.152:80"
    namespaceOverride: "kubesphereio"
    registryMirrors: []
    insecureRegistries: []
  addons: []
Create the cluster:
cd /data/install
export KKZONE=cn
# add this host to the hosts file
echo 192.168.150.152 demo >> /etc/hosts
# create the cluster
./kk create cluster -f config-sample.yaml -a kubesphere.tar.gz --skip-push-images -y
Deployment succeeded; check the result:
kubectl get pod -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-84f449dd8-lqn6w 1/1 Running 0 35s 10.233.93.2 demo <none> <none>
kube-system calico-node-p29jj 1/1 Running 0 35s 192.168.150.152 demo <none> <none>
kube-system coredns-7fcdc7c747-5g4p6 1/1 Running 0 35s 10.233.93.1 demo <none> <none>
kube-system coredns-7fcdc7c747-92kgl 1/1 Running 0 35s 10.233.93.3 demo <none> <none>
kube-system kube-apiserver-demo 1/1 Running 0 49s 192.168.150.152 demo <none> <none>
kube-system kube-controller-manager-demo 1/1 Running 0 49s 192.168.150.152 demo <none> <none>
kube-system kube-proxy-9zc2d 1/1 Running 0 35s 192.168.150.152 demo <none> <none>
kube-system kube-scheduler-demo 1/1 Running 0 50s 192.168.150.152 demo <none> <none>
kube-system nodelocaldns-xhgmv 1/1 Running 0 35s 192.168.150.152 demo <none> <none>
6. Deploying a multi-node cluster with no public network
For a multi-node deployment, bring up harbor on the deploy node first and then install the remaining nodes: once the base environment is set up on each node, log in to harbor from it once, and the rest proceeds as in the single-node case.
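The per-node steps can be sketched as below. This is an assumption-laden outline, not something tested here: `add_worker` is my own wrapper, and the `kk add nodes` invocation with `-a`/`--skip-push-images` depends on the kk version supporting those flags for offline node addition.

```shell
#!/usr/bin/env bash
# Hypothetical outline of adding one more worker to the running cluster.
add_worker() {
  # 1. on the new node: run the base-environment steps from section 5.1,
  #    then make sure it can reach the registry
  docker login 192.168.150.152:80   # admin / Harbor12345
  # 2. on the deploy node: append the new host to config-sample.yaml
  #    (spec.hosts and roleGroups.worker), then
  ./kk add nodes -f config-sample.yaml -a kubesphere.tar.gz --skip-push-images
}
```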
Summary
KubeSphere has gone closed source; use it while you can.