Deploying a Kubernetes Platform with the kk Tool on CentOS 7.9 in an Offline Environment (amd64 Architecture)

Table of Contents

  • Preface
  • I. Environment
  • II. Approach
  • III. Environment Preparation
  • IV. Preparing Files on an Internet-Connected Host
    • 1. Download the required RPM packages
    • 2. Prepare the Harbor images
    • 3. Kubernetes image files
    • 4. Generate the offline installation package
    • 5. Harbor project-creation script
  • V. Deploying a Single-Node Cluster in the Offline Environment
    • 1. Base environment setup
    • 2. Install Harbor
    • 3. Prepare the Kubernetes images
    • 4. Install Kubernetes
  • VI. Deploying a Multi-Node Cluster in the Offline Environment
  • Summary


Preface

Deploying a Kubernetes platform with the kk tool on CentOS 7.9 in an environment without public internet access (amd64 architecture).
A project has to be delivered to a customer site and installed with the network disconnected, so the deployment package is prepared in advance.
Addendum:
CentOS 7.6 works the same way; only the offlinerpms.tar package differs, all other files are identical.


I. Environment

Server architecture: amd64
Operating system ISO: CentOS-7-x86_64-Minimal-2009.iso
Kubernetes version: v1.23.6
kk tool version: 3.1.10
Harbor: harbor-online-installer-v2.5.0.tgz
docker-compose: 1.23.2

II. Approach

The work is split into two steps: first, download all required files and images on machine A, which has internet access; then test and verify the deployment on machine B, which has no internet access.

III. Environment Preparation

Ask the network administrator to open internet access for the IP range 192.168.150.140-149.
Keep internet access closed for the IP range 192.168.150.150-159.

IV. Preparing Files on an Internet-Connected Host

Server IP: 192.168.150.141
A fresh virtual machine installed from the CentOS-7-x86_64-Minimal-2009.iso image.
Files to prepare:

  • RPM package archive: offlinerpms.tar
  • Harbor image archive: harbor-image.tar
  • kk tool: kk
  • Docker image archive used by the Kubernetes installation: kubesphereio-image.tar
  • Kubernetes offline installation package: kubesphere.tar.gz
  • Harbor installer: harbor-online-installer-v2.5.0.tgz
  • docker-compose binary: docker-compose
  • Harbor project-creation script: create_project_harbor.sh
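
Before moving these files to the offline host, it helps to record checksums so the copy can be verified after transfer. A minimal sketch, assuming all of the files end up in one directory on the build machine (adjust the path as needed):

cd /root
sha256sum offlinerpms.tar harbor-image.tar kk kubesphereio-image.tar kubesphere.tar.gz \
  harbor-online-installer-v2.5.0.tgz docker-compose create_project_harbor.sh > packages.sha256
# On the offline host, after copying everything over:
# sha256sum -c packages.sha256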

1. Download the required RPM packages

First, switch the fresh virtual machine's yum repositories to the Aliyun mirrors:

mkdir -p /etc/yum.repos.d/CentOS-Base.repo.backup;
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup;
curl  -o  /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum makecache;
sudo yum-config-manager --add-repo  http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo;

Download the required RPM packages:

mkdir -p /root/offlinerpms
# Download the required helper tool
yum install -y yum-utils
# Basic utility packages
yum install --downloadonly --downloaddir=/root/offlinerpms wget ntp vim
# Base environment packages required by Kubernetes
yum install --downloadonly --downloaddir=/root/offlinerpms socat conntrack yum-utils epel-release
# Docker packages
yum install --downloadonly --downloaddir=/root/offlinerpms docker-ce docker-ce-cli
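
Note that --downloadonly skips packages that are already installed on the build host, which can leave the offline bundle missing dependencies. As a hedge, repotrack (also part of yum-utils) downloads a package together with its full dependency tree regardless of what is installed locally; a sketch:

# Pull docker-ce and the other key packages with every dependency into the same directory
repotrack -p /root/offlinerpms docker-ce docker-ce-cli conntrack socat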

After the downloads finish, create the tar archive:

cd /root/
tar -cvf offlinerpms.tar offlinerpms/

2. Prepare the Harbor images

# Pull the images
docker pull docker.m.daocloud.io/goharbor/prepare:v2.5.0
docker pull docker.m.daocloud.io/goharbor/harbor-log:v2.5.0
docker pull docker.m.daocloud.io/goharbor/registry-photon:v2.5.0
docker pull docker.m.daocloud.io/goharbor/harbor-registryctl:v2.5.0
docker pull docker.m.daocloud.io/goharbor/harbor-db:v2.5.0
docker pull docker.m.daocloud.io/goharbor/harbor-core:v2.5.0
docker pull docker.m.daocloud.io/goharbor/harbor-portal:v2.5.0
docker pull docker.m.daocloud.io/goharbor/harbor-jobservice:v2.5.0
docker pull docker.m.daocloud.io/goharbor/redis-photon:v2.5.0
docker pull docker.m.daocloud.io/goharbor/nginx-photon:v2.5.0
# Re-tag the images
docker tag docker.m.daocloud.io/goharbor/prepare:v2.5.0  goharbor/prepare:v2.5.0
docker tag docker.m.daocloud.io/goharbor/harbor-log:v2.5.0 goharbor/harbor-log:v2.5.0
docker tag docker.m.daocloud.io/goharbor/registry-photon:v2.5.0 goharbor/registry-photon:v2.5.0
docker tag docker.m.daocloud.io/goharbor/harbor-registryctl:v2.5.0 goharbor/harbor-registryctl:v2.5.0
docker tag docker.m.daocloud.io/goharbor/harbor-db:v2.5.0 goharbor/harbor-db:v2.5.0
docker tag docker.m.daocloud.io/goharbor/harbor-core:v2.5.0 goharbor/harbor-core:v2.5.0
docker tag docker.m.daocloud.io/goharbor/harbor-portal:v2.5.0 goharbor/harbor-portal:v2.5.0
docker tag docker.m.daocloud.io/goharbor/harbor-jobservice:v2.5.0 goharbor/harbor-jobservice:v2.5.0
docker tag docker.m.daocloud.io/goharbor/redis-photon:v2.5.0 goharbor/redis-photon:v2.5.0
docker tag docker.m.daocloud.io/goharbor/nginx-photon:v2.5.0 goharbor/nginx-photon:v2.5.0
# Save the images
docker save -o harbor-image.tar  goharbor/prepare:v2.5.0 goharbor/harbor-log:v2.5.0 goharbor/registry-photon:v2.5.0 goharbor/harbor-registryctl:v2.5.0 goharbor/harbor-db:v2.5.0 goharbor/harbor-core:v2.5.0 goharbor/harbor-portal:v2.5.0 goharbor/harbor-jobservice:v2.5.0 goharbor/redis-photon:v2.5.0 goharbor/nginx-photon:v2.5.0

3. Kubernetes image files

Generate the manifest-sample.yaml file to obtain the list of required Docker images. The approach here is to prepare the images manually: pull every image listed in the file by hand, then delete all of the image entries from manifest-sample.yaml (done in step 4 below).

chmod a+x kk
export KKZONE=cn
./kk create manifest --with-kubernetes v1.23.6  --arch amd64  --with-registry "docker registry"
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - amd64
  operatingSystems: []
  kubernetesDistributions:
  - type: kubernetes
    version: v1.23.6
  components:
    helm:
      version: v3.14.3
    cni:
      version: v1.2.0
    etcd:
      version: v3.5.13
    containerRuntimes:
    - type: docker
      version: 24.0.9
    - type: containerd
      version: 1.7.13
    calicoctl:
      version: v3.27.4
    crictl:
      version: v1.29.0
    docker-registry:
      version: "2"
    harbor:
      version: v2.10.1
    docker-compose:
      version: v2.26.1
  images:
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.23.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.23.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.23.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.23.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.22.20
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.27.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.27.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.27.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.27.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.27.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.21.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/flannel-cni-plugin:v1.1.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cilium:v1.15.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/operator-generic:v1.15.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/hybridnet:v0.8.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-ovn:v1.10.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/multus-cni:v3.8
  - registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.9.6-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-vip:v0.7.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kata-deploy:stable
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node-feature-discovery:v0.10.0
  registry:
    auths: {}

Pull the images:

docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.6
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.23.6
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.23.6
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.23.6
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.23.6
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.6
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.22.20
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.27.4
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.27.4
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.27.4
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.27.4
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.27.4
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.21.3
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/flannel-cni-plugin:v1.1.2
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/cilium:v1.15.3
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/operator-generic:v1.15.3
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/hybridnet:v0.8.6
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-ovn:v1.10.10
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/multus-cni:v3.8
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.9.6-alpine
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-vip:v0.7.2
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kata-deploy:stable
docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/node-feature-discovery:v0.10.0

Save the images into a single tar archive:

docker save -o kubesphereio-image.tar registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.6 registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.23.6 registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.23.6 registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.23.6 registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.23.6 registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.6 registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.22.20 registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.27.4 registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.27.4 registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.27.4 registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.27.4 registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.27.4 registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.21.3  registry.cn-beijing.aliyuncs.com/kubesphereio/flannel-cni-plugin:v1.1.2 registry.cn-beijing.aliyuncs.com/kubesphereio/cilium:v1.15.3 registry.cn-beijing.aliyuncs.com/kubesphereio/operator-generic:v1.15.3 registry.cn-beijing.aliyuncs.com/kubesphereio/hybridnet:v0.8.6 registry.cn-beijing.aliyuncs.com/kubesphereio/kube-ovn:v1.10.10 registry.cn-beijing.aliyuncs.com/kubesphereio/multus-cni:v3.8 registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0 registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0  registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.9.6-alpine registry.cn-beijing.aliyuncs.com/kubesphereio/kube-vip:v0.7.2 registry.cn-beijing.aliyuncs.com/kubesphereio/kata-deploy:stable registry.cn-beijing.aliyuncs.com/kubesphereio/node-feature-discovery:v0.10.0
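
Instead of typing each image by hand, the same pull/save step can be driven from the manifest itself. A small sketch, assuming manifest-sample.yaml still contains the generated image list (i.e. run this before trimming the list in the next step):

# Collect the image references from the manifest, pull them, and save them in one archive
grep -oE 'registry\.cn-beijing\.aliyuncs\.com/kubesphereio/[^ ]+' manifest-sample.yaml | sort -u > images.txt
while read -r img; do
  docker pull "$img"
done < images.txt
docker save -o kubesphereio-image.tar $(cat images.txt)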

4. Generate the offline installation package

Edit the manifest-sample.yaml file, removing all of the image entries:

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - amd64
  operatingSystems: []
  kubernetesDistributions:
  - type: kubernetes
    version: v1.23.6
  components:
    helm:
      version: v3.14.3
    cni:
      version: v1.2.0
    etcd:
      version: v3.5.13
    containerRuntimes:
    - type: docker
      version: 24.0.9
    - type: containerd
      version: 1.7.13
    calicoctl:
      version: v3.27.4
    crictl:
      version: v1.29.0
    docker-registry:
      version: "2"
    harbor:
      version: v2.10.1
    docker-compose:
      version: v2.26.1
  images: []
  registry:
    auths: {}

Build the offline installation package:

export KKZONE=cn
chmod a+x kk
./kk artifact export -m manifest-sample.yaml -o kubesphere.tar.gz
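
A quick sanity check before carrying the artifact offline (the exported artifact should be an ordinary gzipped tarball, so tar can list it):

ls -lh kubesphere.tar.gz
tar -tzf kubesphere.tar.gz | head -n 20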

5. Harbor project-creation script

create_project_harbor.sh:

docker-compose version 1.23.2, build 1110ad01
[root@demo home]# cat create_project_harbor.sh
#!/usr/bin/env bash
# Copyright 2018 The KubeSphere Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

url="http://XX.XX.XX.XX"  # change to the actual registry address
user="admin"
passwd="Harbor12345"

harbor_projects=(ks
    kubesphere
    kubesphereio
    coredns
    calico
    flannel
    cilium
    hybridnetdev
    kubeovn
    openebs
    library
    plndr
    jenkins
    argoproj
    dexidp
    openpolicyagent
    curlimages
    grafana
    kubeedge
    nginxinc
    prom
    kiwigrid
    minio
    opensearchproject
    istio
    jaegertracing
    timberio
    prometheus-operator
    jimmidyson
    elastic
    thanosio
    brancz
    prometheus
)

for project in "${harbor_projects[@]}"; do
    echo "creating $project"
    curl -u "${user}:${passwd}" -X POST -H "Content-Type: application/json" "${url}/api/v2.0/projects" -d "{ \"project_name\": \"${project}\", \"public\": true}" -k  # note the trailing -k on the curl command
done

V. Deploying a Single-Node Cluster in the Offline Environment

Server IP: 192.168.150.152
A fresh virtual machine installed from the CentOS-7-x86_64-Minimal-2009.iso image.

Important: DNS must be configured, pointing either at an internal DNS server or at public resolvers such as 8.8.8.8 and 114.114.114.114; without it, nodelocaldns will report errors. A minimal configuration sketch follows below.
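
A sketch assuming the NIC configuration file is ifcfg-ens33 (a hypothetical name; adjust to the actual interface, and prefer the internal DNS server where one exists):

cat >> /etc/sysconfig/network-scripts/ifcfg-ens33 <<EOF
DNS1=114.114.114.114
DNS2=8.8.8.8
EOF
systemctl restart network
cat /etc/resolv.conf    # the nameserver entries should now be present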

Upload the prepared files to the /data/install/ directory:

  • RPM package archive: offlinerpms.tar
  • Harbor image archive: harbor-image.tar
  • kk tool: kk
  • Docker image archive used by the Kubernetes installation: kubesphereio-image.tar
  • Kubernetes offline installation package: kubesphere.tar.gz
  • Harbor installer: harbor-online-installer-v2.5.0.tgz
  • docker-compose binary: docker-compose
  • Harbor project-creation script: create_project_harbor.sh

1. Base environment setup

cd /data/install/
tar -xvf offlinerpms.tar
cd /data/install/offlinerpms
# Set the docker cgroup driver and allow the local insecure registry
mkdir -p /etc/docker/;
cat > /etc/docker/daemon.json <<EOF
{
  "insecure-registries": ["http://192.168.150.152:80"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {"max-size": "100m"}
}
EOF
yum localinstall -y *.rpm
# Switch to the Aliyun time servers; in an internal network, point these at the internal NTP server instead
sudo sed -i 's/^server /#server /' /etc/ntp.conf;
sed -i '/3.centos.pool.ntp.org iburst/a server time1.aliyun.com prefer\nserver time2.aliyun.com\nserver time3.aliyun.com\nserver time4.aliyun.com\nserver time5.aliyun.com\nserver time6.aliyun.com\nserver time7.aliyun.com' /etc/ntp.conf;
# Restart ntpd and enable it at boot
systemctl enable ntpd;
systemctl restart ntpd;
timedatectl set-timezone "Asia/Shanghai";
ntpq -p;
hwclock;
# Disable SELinux
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux;
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config;
sed -i 's/SELINUX=permissive/SELINUX=disabled/g' /etc/sysconfig/selinux;
sed -i 's/SELINUX=permissive/SELINUX=disabled/g' /etc/selinux/config;
# Disable the firewall
systemctl stop firewalld.service;
systemctl disable firewalld.service;
# Start docker and enable it at boot
systemctl restart docker;
systemctl enable  docker;
# Reboot the server
reboot
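
After the reboot, a few quick checks confirm the base environment is in the expected state (a sketch; output wording may differ slightly by version):

getenforce                                   # expected: Disabled
systemctl is-active docker ntpd firewalld    # docker and ntpd active, firewalld inactive
docker info --format '{{.CgroupDriver}}'     # expected: systemd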

2. Install Harbor

Copy docker-compose into place and load the Harbor images:

cd /data/install
\cp  /data/install/docker-compose /usr/local/bin/
chmod a+x /usr/local/bin/docker-compose
docker-compose --version
# Create the data directory
mkdir -p /data/harbor/data
# Load the Harbor images
docker load -i harbor-image.tar
cd /data/install/
tar -xvf harbor-online-installer-v2.5.0.tgz
cd /data/install/harbor/

Edit the Harbor configuration file harbor.yml:

hostname: 192.168.150.152
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 80
# https related config
#https:
  # https port for harbor, default is 443
  #port: 443
  # The path of cert and key files for nginx
  #certificate: /your/certificate/path
  #private_key: /your/private/key/path
data_volume: /data/harbor/data

Install Harbor:

# Create the data directory
mkdir -p /data/harbor/data
cd /data/install/harbor/
./install.sh
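
Once install.sh finishes, a quick health check (a sketch; the health endpoint is part of the Harbor v2 API):

cd /data/install/harbor/
docker-compose ps                                # all Harbor containers should be Up (healthy)
curl -s http://192.168.150.152/api/v2.0/health   # each component should report "healthy"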

Create the projects in Harbor:

cd /data/install
# Edit the url value in create_project_harbor.sh first
# url="http://192.168.150.152"  # or the actual registry address
./create_project_harbor.sh
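
To confirm the projects were actually created, the Harbor API can be queried directly (a sketch using the admin credentials from above):

curl -s -u admin:Harbor12345 "http://192.168.150.152/api/v2.0/projects?page_size=50" | grep -o '"name": *"[^"]*"'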

3. Prepare the Kubernetes images

Load the images and log in to the registry:

cd /data/install
docker load -i kubesphereio-image.tar
docker login 192.168.150.152:80

Re-tag the images:

docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.6                           192.168.150.152:80/kubesphereio/pause:3.6
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.23.6              192.168.150.152:80/kubesphereio/kube-apiserver:v1.23.6 
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.23.6     192.168.150.152:80/kubesphereio/kube-controller-manager:v1.23.6
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.23.6              192.168.150.152:80/kubesphereio/kube-scheduler:v1.23.6
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.23.6                  192.168.150.152:80/kubesphereio/kube-proxy:v1.23.6
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.6                       192.168.150.152:80/kubesphereio/coredns:1.8.6
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.22.20          192.168.150.152:80/kubesphereio/k8s-dns-node-cache:1.22.20
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.27.4            192.168.150.152:80/kubesphereio/kube-controllers:v3.27.4
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.27.4                         192.168.150.152:80/kubesphereio/cni:v3.27.4
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.27.4                        192.168.150.152:80/kubesphereio/node:v3.27.4
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.27.4          192.168.150.152:80/kubesphereio/pod2daemon-flexvol:v3.27.4
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.27.4                       192.168.150.152:80/kubesphereio/typha:v3.27.4
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.21.3                     192.168.150.152:80/kubesphereio/flannel:v0.21.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/flannel-cni-plugin:v1.1.2           192.168.150.152:80/kubesphereio/flannel-cni-plugin:v1.1.2
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/cilium:v1.15.3                      192.168.150.152:80/kubesphereio/cilium:v1.15.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/operator-generic:v1.15.3            192.168.150.152:80/kubesphereio/operator-generic:v1.15.3
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/hybridnet:v0.8.6                    192.168.150.152:80/kubesphereio/hybridnet:v0.8.6
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-ovn:v1.10.10                   192.168.150.152:80/kubesphereio/kube-ovn:v1.10.10
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/multus-cni:v3.8                     192.168.150.152:80/kubesphereio/multus-cni:v3.8
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0           192.168.150.152:80/kubesphereio/provisioner-localpv:3.3.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0                   192.168.150.152:80/kubesphereio/linux-utils:3.3.0
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.9.6-alpine                192.168.150.152:80/kubesphereio/haproxy:2.9.6-alpine
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-vip:v0.7.2                     192.168.150.152:80/kubesphereio/kube-vip:v0.7.2
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kata-deploy:stable                  192.168.150.152:80/kubesphereio/kata-deploy:stable
docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/node-feature-discovery:v0.10.0      192.168.150.152:80/kubesphereio/node-feature-discovery:v0.10.0

Push the images to the Harbor registry:

docker push    192.168.150.152:80/kubesphereio/pause:3.6
docker push    192.168.150.152:80/kubesphereio/kube-apiserver:v1.23.6 
docker push    192.168.150.152:80/kubesphereio/kube-controller-manager:v1.23.6
docker push    192.168.150.152:80/kubesphereio/kube-scheduler:v1.23.6
docker push    192.168.150.152:80/kubesphereio/kube-proxy:v1.23.6
docker push    192.168.150.152:80/kubesphereio/coredns:1.8.6
docker push    192.168.150.152:80/kubesphereio/k8s-dns-node-cache:1.22.20
docker push    192.168.150.152:80/kubesphereio/kube-controllers:v3.27.4
docker push    192.168.150.152:80/kubesphereio/cni:v3.27.4
docker push    192.168.150.152:80/kubesphereio/node:v3.27.4
docker push    192.168.150.152:80/kubesphereio/pod2daemon-flexvol:v3.27.4
docker push    192.168.150.152:80/kubesphereio/typha:v3.27.4
docker push    192.168.150.152:80/kubesphereio/flannel:v0.21.3
docker push    192.168.150.152:80/kubesphereio/flannel-cni-plugin:v1.1.2
docker push    192.168.150.152:80/kubesphereio/cilium:v1.15.3
docker push    192.168.150.152:80/kubesphereio/operator-generic:v1.15.3
docker push    192.168.150.152:80/kubesphereio/hybridnet:v0.8.6
docker push    192.168.150.152:80/kubesphereio/kube-ovn:v1.10.10
docker push    192.168.150.152:80/kubesphereio/multus-cni:v3.8
docker push    192.168.150.152:80/kubesphereio/provisioner-localpv:3.3.0
docker push    192.168.150.152:80/kubesphereio/linux-utils:3.3.0
docker push    192.168.150.152:80/kubesphereio/haproxy:2.9.6-alpine
docker push    192.168.150.152:80/kubesphereio/kube-vip:v0.7.2
docker push    192.168.150.152:80/kubesphereio/kata-deploy:stable
docker push    192.168.150.152:80/kubesphereio/node-feature-discovery:v0.10.0
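
The per-image tag and push commands above can also be collapsed into a loop; a sketch that re-tags every local kubesphereio image to the private registry and pushes it:

REG=192.168.150.152:80
docker images --format '{{.Repository}}:{{.Tag}}' \
  | grep '^registry.cn-beijing.aliyuncs.com/kubesphereio/' \
  | while read -r img; do
      target="${REG}/kubesphereio/${img##*/}"
      docker tag "$img" "$target"
      docker push "$target"
    done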

4. Install Kubernetes

Create the deployment configuration file:

cd /data/install
export KKZONE=cn
./kk create config --with-kubernetes v1.23.6  

Edit the deployment configuration file /data/install/config-sample.yaml:
adjust hosts and roleGroups to match the local node information, and adjust the registry section to match the local Harbor settings.

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: demo, address: 192.168.150.152, internalAddress: 192.168.150.152, user: root, password: "smartcore"}
  roleGroups:
    etcd:
    - demo
    control-plane:
    - demo
    worker:
    - demo
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.23.6
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    type: harbor
    auths:
      "192.168.150.152:80":
        username: admin
        password: Harbor12345
        skipTLSVerify: true
    privateRegistry: "192.168.150.152:80"
    namespaceOverride: "kubesphereio"
    registryMirrors: []
    insecureRegistries: []
  addons: []

Create the cluster:

cd /data/install
export KKZONE=cn
# Add this host's entry to /etc/hosts
echo 192.168.150.152 demo >> /etc/hosts
# Create the cluster
./kk create cluster -f config-sample.yaml -a kubesphere.tar.gz --skip-push-images -y

After the deployment succeeds, check the result:
kubectl get pod -A -o wide

NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE   IP                NODE   NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-84f449dd8-lqn6w   1/1     Running   0          35s   10.233.93.2       demo   <none>           <none>
kube-system   calico-node-p29jj                         1/1     Running   0          35s   192.168.150.152   demo   <none>           <none>
kube-system   coredns-7fcdc7c747-5g4p6                  1/1     Running   0          35s   10.233.93.1       demo   <none>           <none>
kube-system   coredns-7fcdc7c747-92kgl                  1/1     Running   0          35s   10.233.93.3       demo   <none>           <none>
kube-system   kube-apiserver-demo                       1/1     Running   0          49s   192.168.150.152   demo   <none>           <none>
kube-system   kube-controller-manager-demo              1/1     Running   0          49s   192.168.150.152   demo   <none>           <none>
kube-system   kube-proxy-9zc2d                          1/1     Running   0          35s   192.168.150.152   demo   <none>           <none>
kube-system   kube-scheduler-demo                       1/1     Running   0          50s   192.168.150.152   demo   <none>           <none>
kube-system   nodelocaldns-xhgmv                        1/1     Running   0          35s   192.168.150.152   demo   <none>           <none>

VI. Deploying a Multi-Node Cluster in the Offline Environment

For a multi-node deployment, finish installing Harbor on the deployment node first, then bring up the other nodes: run the same base environment setup on each additional node and log in to Harbor from it once; after that the cluster is created the same way. A configuration sketch follows below.
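
A sketch of the hosts and roleGroups part of config-sample.yaml for one control-plane node and two workers; node1 and node2 are hypothetical, so replace the names, addresses and passwords with the real ones (the rest of the file stays the same as in the single-node example):

spec:
  hosts:
  - {name: demo,  address: 192.168.150.152, internalAddress: 192.168.150.152, user: root, password: "smartcore"}
  - {name: node1, address: 192.168.150.153, internalAddress: 192.168.150.153, user: root, password: "smartcore"}
  - {name: node2, address: 192.168.150.154, internalAddress: 192.168.150.154, user: root, password: "smartcore"}
  roleGroups:
    etcd:
    - demo
    control-plane:
    - demo
    worker:
    - node1
    - node2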

Summary

KubeSphere has gone closed source; use it while you can and appreciate it.
