Installing KubeSphere on Kunpeng arm64

Official reference: https://kubesphere.io/zh/docs/quick-start/minimal-kubesphere-on-k8s/

Minimal installation of KubeSphere on Kubernetes
Prerequisites

Official reference: https://kubesphere.io/zh/docs/installing-on-kubernetes/introduction/prerequisites/

  • To install KubeSphere 3.2.1 on Kubernetes, your Kubernetes version must be 1.19.x, 1.20.x, 1.21.x, or 1.22.x (experimental support).

  • Make sure your machine meets the minimum hardware requirements: CPU > 1 core, memory > 2 GB.

  • A default StorageClass must be configured in the Kubernetes cluster before installation.

uname -a
The architecture is:
Linux localhost.localdomain 4.14.0-115.el7a.0.1.aarch64 #1 SMP Sun Nov 25 20:54:21 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux

kubectl version
The version is:
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:50:19Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/arm64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:41:49Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/arm64"}

free -g
The memory is:
              total        used        free      shared  buff/cache   available
Mem:            127          48          43           1          34          57
Swap:             0           0           0

kubectl get sc
The default StorageClass is:
NAME                    PROVISIONER                                       RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
glusterfs (default)     cluster.local/nfs-client-nfs-client-provisioner   Delete          Immediate           true                   24h
nfs-storage (default)   k8s-sigs.io/nfs-subdir-external-provisioner       Delete          Immediate           false                  23h
Note that two StorageClasses are both marked (default) here; this comes back to bite later.
Deploying KubeSphere

Once your machine meets the prerequisites, install KubeSphere with the following steps.

1. Run the following commands to start the installation:
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/cluster-configuration.yaml
2. Check the installation logs:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
The logs report the following error:
2022-02-23T16:02:30+08:00 INFO : shell-operator latest
2022-02-23T16:02:30+08:00 INFO : HTTP SERVER Listening on 0.0.0.0:9115
2022-02-23T16:02:30+08:00 INFO : Use temporary dir: /tmp/shell-operator
2022-02-23T16:02:30+08:00 INFO : Initialize hooks manager ...
2022-02-23T16:02:30+08:00 INFO : Search and load hooks ...
2022-02-23T16:02:30+08:00 INFO : Load hook config from '/hooks/kubesphere/installRunner.py'
2022-02-23T16:02:31+08:00 INFO : Load hook config from '/hooks/kubesphere/schedule.sh'
2022-02-23T16:02:31+08:00 INFO : Initializing schedule manager ...
2022-02-23T16:02:31+08:00 INFO : KUBE Init Kubernetes client
2022-02-23T16:02:31+08:00 INFO : KUBE-INIT Kubernetes client is configured successfully
2022-02-23T16:02:31+08:00 INFO : MAIN: run main loop
2022-02-23T16:02:31+08:00 INFO : MAIN: add onStartup tasks
2022-02-23T16:02:31+08:00 INFO : QUEUE add all HookRun@OnStartup
2022-02-23T16:02:31+08:00 INFO : Running schedule manager ...
2022-02-23T16:02:31+08:00 INFO : MSTOR Create new metric shell_operator_live_ticks
2022-02-23T16:02:31+08:00 INFO : MSTOR Create new metric shell_operator_tasks_queue_length
2022-02-23T16:02:31+08:00 ERROR : error getting GVR for kind 'ClusterConfiguration': Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
2022-02-23T16:02:31+08:00 ERROR : Enable kube events for hooks error: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
2022-02-23T16:02:34+08:00 INFO : TASK_RUN Exit: program halts.

The cause: the apiserver insecure port 8080 is not open by default in Kubernetes.

Opening port 8080

Port 8080 is unreachable, so open the insecure port on the apiserver. References:
https://www.cnblogs.com/liuxingxing/p/13399729.html
https://blog.csdn.net/qq_29274865/article/details/108953259

cd /etc/kubernetes/manifests/
vim kube-apiserver.yaml
Add:
- --insecure-port=8080
- --insecure-bind-address=0.0.0.0
Then restart the apiserver:
docker restart <apiserver container ID>
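Since kube-apiserver is a static pod, the kubelet also restarts it on its own when the manifest under /etc/kubernetes/manifests changes. Either way, a quick check that the insecure port took effect (a sketch; run on the master node):

curl -s http://127.0.0.1:8080/version
# a JSON version blob instead of "connection refused" means the flags are active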


Re-run the commands above:

kubectl delete -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/cluster-configuration.yaml
kubectl delete -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/cluster-configuration.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/kubesphere-installer.yaml
3. Run kubectl get pod --all-namespaces to check whether all Pods in the KubeSphere namespaces are running normally. If they are, check the console port (30880 by default):
kubectl get pod --all-namespaces
The output shows an Error:
kubesphere-system              ks-installer-d8b656fb4-gb2qg                         0/1     Error               0          27s
Check the logs:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
The logs report:
standard_init_linux.go:228: exec user process caused: exec format error

The cause: the image is not built for arm64, so the process cannot start.
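To confirm the architecture mismatch, you can inspect the image's platform (a sketch; assumes Docker is the runtime on the node):

docker pull kubesphere/ks-installer:v3.2.1
docker inspect --format '{{.Os}}/{{.Architecture}}' kubesphere/ks-installer:v3.2.1
# prints linux/amd64 — an amd64 binary cannot exec on an aarch64 host, hence "exec format error"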

https://hub.docker.com/
Search Docker Hub for an arm64 build of the image.


There is no official arm64 image at all. After searching around, I found kubespheredev/ks-installer:v3.0.0-arm64 and decided to try it.

kubectl delete -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/cluster-configuration.yaml
kubectl delete -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/cluster-configuration.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/kubesphere-installer.yaml
Pull the image:
docker pull kubespheredev/ks-installer:v3.0.0-arm64
Then modify the Deployment in Rancher, or download and edit the YAML file.
Since my k8s master node is arm64 and kubesphere/ks-installer has no official arm64 image, change kubesphere/ks-installer:v3.2.1 to kubespheredev/ks-installer:v3.0.0-arm64, as sketched below.
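The image swap can also be done from the command line instead of Rancher (a sketch; the container name installer is an assumption — verify it with the first command):

kubectl -n kubesphere-system get deployment ks-installer -o jsonpath='{.spec.template.spec.containers[0].name}'
kubectl -n kubesphere-system set image deployment/ks-installer installer=kubespheredev/ks-installer:v3.0.0-arm64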
Pin the Deployment to the master node. It then fails with:
TASK [common : Kubesphere | Creating common component manifests] ***************
failed: [localhost] (item={'path': 'etcd', 'file': 'etcd.yaml'}) => {"ansible_loop_var": "item", "changed": false, "item": {"file": "etcd.yaml", "path": "etcd"}, "msg": "AnsibleUndefinedVariable: 'dict object' has no attribute 'etcdVolumeSize'"}
failed: [localhost] (item={'name': 'mysql', 'file': 'mysql.yaml'}) => {"ansible_loop_var": "item", "changed": false, "item": {"file": "mysql.yaml", "name": "mysql"}, "msg": "AnsibleUndefinedVariable: 'dict object' has no attribute 'mysqlVolumeSize'"}
failed: [localhost] (item={'path': 'redis', 'file': 'redis.yaml'}) => {"ansible_loop_var": "item", "changed": false, "item": {"file": "redis.yaml", "path": "redis"}, "msg": "AnsibleUndefinedVariable: 'dict object' has no attribute 'redisVolumSize'"}

The cause is a version mismatch: the manifests are v3.2.1, but the image is v3.0.0-arm64, so the Ansible variables don't line up.

Try v3.0.0 instead.
Using v3.0.0
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/cluster-configuration.yaml
Then modify the Deployments in Rancher, or download the v3.0.0 YAML files and edit them:
kubesphere-installer.yaml
cluster-configuration.yaml

Changes to kubesphere-installer.yaml:
since my k8s master node is arm64 and kubesphere/ks-installer has no official arm64 image, change kubesphere/ks-installer:v3.0.0 to kubespheredev/ks-installer:v3.0.0-arm64, and pin it to the master node.

Changes to cluster-configuration.yaml:
change endpointIps: 192.168.xxx.xx to the k8s master node IP.
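If you edit the downloaded file directly, a sketch of locating the field (placing endpointIps under the etcd key follows the v3.0.0 cluster-configuration.yaml layout — treat that as an assumption and check your copy):

grep -n "endpointIps" cluster-configuration.yaml
# set the value to the master node IP, e.g.
#   etcd:
#     endpointIps: 192.168.xxx.xx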

Nothing errors out and ks-installer now runs, but the other components have no arm64 images, so the other Deployments fail to start, and ks-controller-manager reports errors.

Replace every Deployment image with an arm64 build and pin them to the k8s master node; see the sketch below.
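One way to do both for each Deployment (a sketch; the deployment and container names vary by component, so verify them before patching — ks-console is just the example here):

kubectl -n kubesphere-system patch deployment ks-console --type merge -p '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/hostname":"<master-node-name>"}}}}}'
kubectl -n kubesphere-system set image deployment/ks-console ks-console=kubespheredev/ks-console:v3.0.0-arm64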

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
kubectl get pod --all-namespaces
docker pull kubespheredev/ks-installer:v3.0.0-arm64
docker pull bobsense/redis-arm64   # remove the mounted PVC, otherwise it errors
docker pull kubespheredev/ks-controller-manager:v3.2.1
docker pull kubespheredev/ks-console:v3.0.0-arm64
docker pull kubespheredev/ks-apiserver:v3.2.0
Everything except ks-controller-manager is now running.
kubectl get svc/ks-console -n kubesphere-system
The output shows:
NAME         TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
ks-console   NodePort   10.1.4.225   <none>        80:30880/TCP   6h12m
Check the logs again:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
The final output looks like success:
Console: http://192.168.xxx.xxx:30880
Account: admin
Password: P@88w0rd


kubectl logs ks-controller-manager-646b8fff9f-pd7w7 --namespace=kubesphere-system
ks-controller-manager reports:
W0224 11:36:55.643227       1 client_config.go:615] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
E0224 11:36:55.649703       1 server.go:101] failed to connect to ldap service, please check ldap status, error: factory is not able to fill the pool: LDAP Result Code 200 "Network Error": dial tcp: lookup openldap.kubesphere-system.svc on 10.1.0.10:53: no such host

Login fails, probably because ks-controller-manager is not running.
The log shows:
request to http://ks-apiserver.kubesphere-system.svc/oauth/token failed, reason: connect ECONNREFUSED 10.1.146.137:80
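ECONNREFUSED against the ks-apiserver Service usually means it has no ready backend pod. A quick check (sketch):

kubectl -n kubesphere-system get endpoints ks-apiserver
# an empty ENDPOINTS column means no ready ks-apiserver pod is behind the Service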

The likely root cause is that openldap failed to start.

Check the logs of the openldap StatefulSet, roughly as sketched below.
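A sketch of those commands (the pod name openldap-0 follows the usual StatefulSet ordinal naming; adjust if yours differs):

kubectl -n kubesphere-system get statefulset openldap
kubectl -n kubesphere-system logs openldap-0
kubectl -n kubesphere-system describe pvc openldap-pvc-openldap-0
# the PVC events carry the error quoted below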

The events show there are 2 default StorageClasses; after deleting one, it started successfully:
persistentvolumeclaims "openldap-pvc-openldap-0" is forbidden: Internal error occurred: 2 default StorageClasses were found
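As an alternative to deleting a StorageClass outright, you can clear its default annotation so only one default remains — the annotation switch below is the standard Kubernetes mechanism; glusterfs as the target is just this cluster's case:

kubectl patch storageclass glusterfs -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
kubectl get sc   # only nfs-storage should now show (default)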


Deleting the extra StorageClass
Check:
kubectl get sc
The output now shows one entry (there were two before I deleted one):
NAME                    PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-storage (default)   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  40h
Delete:
kubectl delete sc glusterfs

With only one StorageClass left, openldap runs normally and so does ks-controller-manager. Getting closer.

All pods are running now, but login still fails the same way. Check the logs.

ks-apiserver-556f698dfb-5p2fc
Logs:
E0225 10:40:17.460271 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v1alpha1.HelmApplication: failed to list *v1alpha1.HelmApplication: the server could not find the requested resource (get helmapplications.application.kubesphere.io)
E0225 10:40:17.548278 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v1alpha1.HelmRepo: failed to list *v1alpha1.HelmRepo: the server could not find the requested resource (get helmrepos.application.kubesphere.io)
E0225 10:40:17.867914 1 reflector.go:138] pkg/models/openpitrix/interface.go:89: Failed to watch *v1alpha1.HelmCategory: failed to list *v1alpha1.HelmCategory: the server could not find the requested resource (get helmcategories.application.kubesphere.io)
E0225 10:40:18.779136 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v1alpha1.HelmRelease: failed to list *v1alpha1.HelmRelease: the server could not find the requested resource (get helmreleases.application.kubesphere.io)
E0225 10:40:19.870229 1 reflector.go:138] pkg/models/openpitrix/interface.go:90: Failed to watch *v1alpha1.HelmRepo: failed to list *v1alpha1.HelmRepo: the server could not find the requested resource (get helmrepos.application.kubesphere.io)
E0225 10:40:20.747617 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v1alpha1.HelmCategory: failed to list *v1alpha1.HelmCategory: the server could not find the requested resource (get helmcategories.application.kubesphere.io)
E0225 10:40:23.130177 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v1alpha1.HelmApplicationVersion: failed to list *v1alpha1.HelmApplicationVersion: the server could not find the requested resource (get helmapplicationversions.application.kubesphere.io)
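These errors fit the mixed versions in play: ks-apiserver v3.2.0 expects application.kubesphere.io CRDs that the v3.0.0 installer never registered. A quick way to confirm (sketch):

kubectl get crd | grep application.kubesphere.io
# no output means the HelmApplication/HelmRepo/HelmCategory CRDs are missing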


ks-console-65f46d7649-5zt8c
Logs:
<-- GET / 2022/02/25T10:41:28.642
{ UnauthorizedError: Not Login
at Object.throw (/opt/kubesphere/console/server/server.js:31701:11)
at getCurrentUser (/opt/kubesphere/console/server/server.js:9037:14)
at renderView (/opt/kubesphere/console/server/server.js:23231:46)
at dispatch (/opt/kubesphere/console/server/server.js:6870:32)
at next (/opt/kubesphere/console/server/server.js:6871:18)
at /opt/kubesphere/console/server/server.js:70183:16
at dispatch (/opt/kubesphere/console/server/server.js:6870:32)
at next (/opt/kubesphere/console/server/server.js:6871:18)
at /opt/kubesphere/console/server/server.js:77986:37
at dispatch (/opt/kubesphere/console/server/server.js:6870:32)
at next (/opt/kubesphere/console/server/server.js:6871:18)
at /opt/kubesphere/console/server/server.js:70183:16
at dispatch (/opt/kubesphere/console/server/server.js:6870:32)
at next (/opt/kubesphere/console/server/server.js:6871:18)
at /opt/kubesphere/console/server/server.js:77986:37
at dispatch (/opt/kubesphere/console/server/server.js:6870:32) message: 'Not Login' }
--> GET / 302 6ms 43b 2022/02/25T10:41:28.648
<-- GET /login 2022/02/25T10:41:28.649
{ FetchError: request to http://ks-apiserver.kubesphere-system.svc/kapis/config.kubesphere.io/v1alpha2/configs/oauth failed, reason: connect ECONNREFUSED 10.1.144.129:80
at ClientRequest.<anonymous> (/opt/kubesphere/console/server/server.js:80604:11)
at ClientRequest.emit (events.js:198:13)
at Socket.socketErrorListener (_http_client.js:392:9)
at Socket.emit (events.js:198:13)
at emitErrorNT (internal/streams/destroy.js:91:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:59:3)
at process._tickCallback (internal/process/next_tick.js:63:19)
message:
'request to http://ks-apiserver.kubesphere-system.svc/kapis/config.kubesphere.io/v1alpha2/configs/oauth failed, reason: connect ECONNREFUSED 10.1.144.129:80',
type: 'system',
errno: 'ECONNREFUSED',
code: 'ECONNREFUSED' }
--> GET /login 200 7ms 14.82kb 2022/02/25T10:41:28.656


ks-controller-manager-548545f4b4-w4wmx
Logs:
E0225 10:41:41.633013 1 reflector.go:138] github.com/kubernetes-csi/external-snapshotter/client/v4/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotClass: failed to list *v1.VolumeSnapshotClass: the server could not find the requested resource (get volumesnapshotclasses.snapshot.storage.k8s.io)
E0225 10:41:41.634349 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v1alpha2.GroupBinding: failed to list *v1alpha2.GroupBinding: the server could not find the requested resource (get groupbindings.iam.kubesphere.io)
E0225 10:41:41.722377 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v1alpha2.Group: failed to list *v1alpha2.Group: the server could not find the requested resource (get groups.iam.kubesphere.io)
E0225 10:41:42.636612 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v1alpha2.GroupBinding: failed to list *v1alpha2.GroupBinding: the server could not find the requested resource (get groupbindings.iam.kubesphere.io)
E0225 10:41:42.875652 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v1alpha2.Group: failed to list *v1alpha2.Group: the server could not find the requested resource (get groups.iam.kubesphere.io)
E0225 10:41:42.964819 1 reflector.go:138] github.com/kubernetes-csi/external-snapshotter/client/v4/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotClass: failed to list *v1.VolumeSnapshotClass: the server could not find the requested resource (get volumesnapshotclasses.snapshot.storage.k8s.io)
E0225 10:41:45.177641 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v1alpha2.GroupBinding: failed to list *v1alpha2.GroupBinding: the server could not find the requested resource (get groupbindings.iam.kubesphere.io)
E0225 10:41:45.327393 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v1alpha2.Group: failed to list *v1alpha2.Group: the server could not find the requested resource (get groups.iam.kubesphere.io)
E0225 10:41:46.164454 1 reflector.go:138] github.com/kubernetes-csi/external-snapshotter/client/v4/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotClass: failed to list *v1.VolumeSnapshotClass: the server could not find the requested resource (get volumesnapshotclasses.snapshot.storage.k8s.io)
E0225 10:41:49.011152 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v1alpha2.Group: failed to list *v1alpha2.Group: the server could not find the requested resource (get groups.iam.kubesphere.io)
E0225 10:41:50.299769 1 reflector.go:138] github.com/kubernetes-csi/external-snapshotter/client/v4/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotClass: failed to list *v1.VolumeSnapshotClass: the server could not find the requested resource (get volumesnapshotclasses.snapshot.storage.k8s.io)
E0225 10:41:50.851105 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v1alpha2.GroupBinding: failed to list *v1alpha2.GroupBinding: the server could not find the requested resource (get groupbindings.iam.kubesphere.io)
E0225 10:41:56.831265 1 helm_category_controller.go:158] get helm category: ctg-uncategorized failed, error: no matches for kind "HelmCategory" in version "application.kubesphere.io/v1alpha1"
E0225 10:41:56.923487 1 helm_category_controller.go:176] create helm category: uncategorized failed, error: no matches for kind "HelmCategory" in version "application.kubesphere.io/v1alpha1"
E0225 10:41:58.696406 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v1alpha2.GroupBinding: failed to list *v1alpha2.GroupBinding: the server could not find the requested resource (get groupbindings.iam.kubesphere.io)
E0225 10:41:59.876998 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v1alpha2.Group: failed to list *v1alpha2.Group: the server could not find the requested resource (get groups.iam.kubesphere.io)
E0225 10:42:01.266422 1 reflector.go:138] github.com/kubernetes-csi/external-snapshotter/client/v4/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotClass: failed to list *v1.VolumeSnapshotClass: the server could not find the requested resource (get volumesnapshotclasses.snapshot.storage.k8s.io)
E0225 10:42:11.724869 1 helm_category_controller.go:158] get helm category: ctg-uncategorized failed, error: no matches for kind "HelmCategory" in version "application.kubesphere.io/v1alpha1"
E0225 10:42:11.929837 1 helm_category_controller.go:176] create helm category: uncategorized failed, error: no matches for kind "HelmCategory" in version "application.kubesphere.io/v1alpha1"
E0225 10:42:12.355338 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v1alpha2.GroupBinding: failed to list *v1alpha2.GroupBinding: the server could not find the requested resource (get groupbindings.iam.kubesphere.io)
I0225 10:42:15.625073 1 leaderelection.go:253] successfully acquired lease kubesphere-system/ks-controller-manager-leader-election
I0225 10:42:15.625301 1 globalrolebinding_controller.go:122] Starting GlobalRoleBinding controller
I0225 10:42:15.625343 1 globalrolebinding_controller.go:125] Waiting for informer caches to sync
I0225 10:42:15.625365 1 globalrolebinding_controller.go:137] Starting workers
I0225 10:42:15.625351 1 snapshotclass_controller.go:102] Waiting for informer cache to sync.
I0225 10:42:15.625391 1 globalrolebinding_controller.go:143] Started workers
I0225 10:42:15.625380 1 capability_controller.go:110] Waiting for informer caches to sync
I0225 10:42:15.625449 1 capability_controller.go:123] Started workers
I0225 10:42:15.625447 1 basecontroller.go:59] Starting controller: loginrecord-controller
I0225 10:42:15.625478 1 globalrolebinding_controller.go:205] Successfully synced key:authenticated
I0225 10:42:15.625481 1 basecontroller.go:60] Waiting for informer caches to sync for: loginrecord-controller
I0225 10:42:15.625488 1 clusterrolebinding_controller.go:114] Starting ClusterRoleBinding controller
I0225 10:42:15.625546 1 clusterrolebinding_controller.go:117] Waiting for informer caches to sync
I0225 10:42:15.625540 1 basecontroller.go:59] Starting controller: group-controller
I0225 10:42:15.625515 1 basecontroller.go:59] Starting controller: groupbinding-controller
I0225 10:42:15.625596 1 basecontroller.go:60] Waiting for informer caches to sync for: group-controller
I0225 10:42:15.625615 1 basecontroller.go:60] Waiting for informer caches to sync for: groupbinding-controller
I0225 10:42:15.625579 1 clusterrolebinding_controller.go:122] Starting workers
I0225 10:42:15.625480 1 certificatesigningrequest_controller.go:109] Starting CSR controller
I0225 10:42:15.625681 1 certificatesigningrequest_controller.go:112] Waiting for csrInformer caches to sync
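Same pattern as ks-apiserver: this v3.2.1 controller-manager watches Group/GroupBinding (iam.kubesphere.io) and VolumeSnapshotClass CRDs that a v3.0.0-based install never created. To confirm (sketch):

kubectl get crd | grep -E 'iam.kubesphere.io|snapshot.storage.k8s.io'
# missing groups/groupbindings.iam.kubesphere.io and volumesnapshotclasses.snapshot.storage.k8s.io entries match the errors above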


So close. I'll follow up once the last bit is solved.

References:
https://kubesphere.io/zh/docs/quick-start/minimal-kubesphere-on-k8s/
https://www.yuque.com/leifengyang/oncloud/gz1sls#BIxCW

