Installing a Kubernetes Cluster on Ubuntu 24.04 with sealos

1. System Preparation

(1) Install the OpenSSH server
sudo apt install openssh-server
sudo systemctl start ssh
sudo systemctl enable ssh

(2) Allow SSH through the firewall
sudo ufw allow ssh

(3) Enable direct root login
vim /etc/ssh/sshd_config
Change "#PermitRootLogin prohibit-password" to "PermitRootLogin yes", then restart the SSH service:
systemctl daemon-reload
systemctl restart ssh
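The same edit can be made non-interactively. A minimal sketch using sed, assuming the stock Ubuntu sshd_config layout (verify the effective setting before relying on root login):
# Turn the commented default into an explicit PermitRootLogin yes
sudo sed -i 's/^#\?PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config
# Print the effective value, then restart the service
sudo sshd -T | grep -i permitrootlogin
sudo systemctl restart ssh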

2. Install the sealos Tool

(1) Install sealos on master01
echo "deb [trusted=yes] https://apt.fury.io/labring/ /" | sudo tee /etc/apt/sources.list.d/labring.list
sudo apt update
sudo apt install sealos

root@master01:~# echo "deb [trusted=yes] https://apt.fury.io/labring/ /" | sudo tee /etc/apt/sources.list.d/labring.list
sudo apt update
sudo apt install sealos
deb [trusted=yes] https://apt.fury.io/labring/ /
Hit:1 http://mirrors.tuna.tsinghua.edu.cn/ubuntu noble InRelease
Hit:2 http://mirrors.tuna.tsinghua.edu.cn/ubuntu noble-updates InRelease
Hit:4 http://security.ubuntu.com/ubuntu noble-security InRelease
Hit:3 http://mirrors.tuna.tsinghua.edu.cn/ubuntu noble-backports InRelease
Ign:5 https://apt.fury.io/labring  InRelease
Ign:6 https://apt.fury.io/labring  Release
Ign:7 https://apt.fury.io/labring  Packages
Ign:8 https://apt.fury.io/labring  Translation-en
Ign:9 https://apt.fury.io/labring  Translation-en_US
Get:7 https://apt.fury.io/labring  Packages
Ign:8 https://apt.fury.io/labring  Translation-en
Ign:9 https://apt.fury.io/labring  Translation-en_US
Ign:8 https://apt.fury.io/labring  Translation-en
Ign:9 https://apt.fury.io/labring  Translation-en_US
Ign:8 https://apt.fury.io/labring  Translation-en
Ign:9 https://apt.fury.io/labring  Translation-en_US
Ign:8 https://apt.fury.io/labring  Translation-en
Ign:9 https://apt.fury.io/labring  Translation-en_US
Ign:8 https://apt.fury.io/labring  Translation-en
Ign:9 https://apt.fury.io/labring  Translation-en_US
Ign:8 https://apt.fury.io/labring  Translation-en
Ign:9 https://apt.fury.io/labring  Translation-en_US
Fetched 7,953 B in 7s (1,202 B/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
294 packages can be upgraded. Run 'apt list --upgradable' to see them.
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following NEW packages will be installed:
  sealos
0 upgraded, 1 newly installed, 0 to remove and 294 not upgraded.
Need to get 31.5 MB of archives.
After this operation, 94.2 MB of additional disk space will be used.
Get:1 https://apt.fury.io/labring  sealos 5.0.1 [31.5 MB]
Fetched 31.5 MB in 20s (1,546 kB/s)
Selecting previously unselected package sealos.
(Reading database ... 152993 files and directories currently installed.)
Preparing to unpack .../sealos_5.0.1_amd64.deb ...
Unpacking sealos (5.0.1) ...
Setting up sealos (5.0.1) ...
root@master01:~#
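
A quick sanity check that the package landed and the binary is on PATH (the transcripts below were produced with sealos 5.0.1):
# Print the installed sealos version
sealos version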

3. Install Kubernetes with sealos

(1) Install Kubernetes v1.29.9 with sealos, using Cilium as the network plugin. --masters and --nodes list the control-plane and worker IPs, and -p is the root SSH password for all nodes.
sealos run registry.cn-shanghai.aliyuncs.com/labring/kubernetes:v1.29.9 \
registry.cn-shanghai.aliyuncs.com/labring/helm:v3.9.4 \
registry.cn-shanghai.aliyuncs.com/labring/cilium:v1.13.4 \
--masters 192.168.1.98 \
--nodes 192.168.1.102,192.168.1.103 -p 'As(2dc_2saccC82'

root@master01:~# sealos run registry.cn-shanghai.aliyuncs.com/labring/kubernetes:v1.29.9 \
registry.cn-shanghai.aliyuncs.com/labring/helm:v3.9.4 \
registry.cn-shanghai.aliyuncs.com/labring/cilium:v1.13.4 \
--masters 192.168.1.98 \
--nodes 192.168.1.102,192.168.1.103 -p 'As(2dc_2saccC82'
2025-08-10T12:17:40 info Start to create a new cluster: master [192.168.1.98], worker [192.168.1.102 192.168.1.103], registry 192.168.1.98
2025-08-10T12:17:40 info Executing pipeline Check in CreateProcessor.
2025-08-10T12:17:40 info checker:hostname [192.168.1.98:22 192.168.1.102:22 192.168.1.103:22]
2025-08-10T12:17:40 info checker:timeSync [192.168.1.98:22 192.168.1.102:22 192.168.1.103:22]
2025-08-10T12:17:41 info checker:containerd [192.168.1.98:22 192.168.1.102:22 192.168.1.103:22]
2025-08-10T12:17:41 info Executing pipeline PreProcess in CreateProcessor.
Trying to pull registry.cn-shanghai.aliyuncs.com/labring/kubernetes:v1.29.9...
Getting image source signatures
Copying blob a90669518f1a done
Copying blob 45c9d75a9656 done
Copying blob 2fbba8062b0b done
Copying blob fdc3a198d6ba done
Copying config bca192f355 done
Writing manifest to image destination
Storing signatures
Trying to pull registry.cn-shanghai.aliyuncs.com/labring/helm:v3.9.4...
Getting image source signatures
Copying blob 7f5c52c74e5b done
Copying config 3376f68220 done
Writing manifest to image destination
Storing signatures
Trying to pull registry.cn-shanghai.aliyuncs.com/labring/cilium:v1.13.4...
Getting image source signatures
Copying blob 7ca2ee4eb38c done
Copying config 71aa52ad0a done
Writing manifest to image destination
Storing signatures
2025-08-10T12:19:52 info Executing pipeline RunConfig in CreateProcessor.
2025-08-10T12:19:52 info Executing pipeline MountRootfs in CreateProcessor.
2025-08-10T12:19:53 info render /var/lib/sealos/data/default/rootfs/etc/config.toml from /var/lib/sealos/data/default/rootfs/etc/config.toml.tmpl completed
2025-08-10T12:19:53 info render /var/lib/sealos/data/default/rootfs/etc/containerd.service from /var/lib/sealos/data/default/rootfs/etc/containerd.service.tmpl completed
2025-08-10T12:19:53 info render /var/lib/sealos/data/default/rootfs/etc/hosts.toml from /var/lib/sealos/data/default/rootfs/etc/hosts.toml.tmpl completed
2025-08-10T12:19:53 info render /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.service from /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.service.tmpl completed
2025-08-10T12:19:53 info render /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.yaml from /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.yaml.tmpl completed
2025-08-10T12:19:53 info render /var/lib/sealos/data/default/rootfs/etc/kubelet.service from /var/lib/sealos/data/default/rootfs/etc/kubelet.service.tmpl completed
2025-08-10T12:19:53 info render /var/lib/sealos/data/default/rootfs/etc/registry.service from /var/lib/sealos/data/default/rootfs/etc/registry.service.tmpl completed
2025-08-10T12:19:53 info render /var/lib/sealos/data/default/rootfs/etc/registry.yml from /var/lib/sealos/data/default/rootfs/etc/registry.yml.tmpl completed
2025-08-10T12:19:53 info render /var/lib/sealos/data/default/rootfs/etc/registry_config.yml from /var/lib/sealos/data/default/rootfs/etc/registry_config.yml.tmpl completed
2025-08-10T12:19:53 info render /var/lib/sealos/data/default/rootfs/etc/systemd/system/kubelet.service.d/10-kubeadm.conf from /var/lib/sealos/data/default/rootfs/etc/systemd/system/kubelet.service.d/10-kubeadm.conf.tmpl completed
192.168.1.103:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/config.toml from /var/lib/sealos/data/default/rootfs/etc/config.toml.tmpl completed
192.168.1.103:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/containerd.service from /var/lib/sealos/data/default/rootfs/etc/containerd.service.tmpl completed
192.168.1.103:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/hosts.toml from /var/lib/sealos/data/default/rootfs/etc/hosts.toml.tmpl completed
192.168.1.103:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.service from /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.service.tmpl completed
192.168.1.103:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.yaml from /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.yaml.tmpl completed
192.168.1.103:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/kubelet.service from /var/lib/sealos/data/default/rootfs/etc/kubelet.service.tmpl completed
192.168.1.103:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/registry.service from /var/lib/sealos/data/default/rootfs/etc/registry.service.tmpl completed
192.168.1.103:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/registry.yml from /var/lib/sealos/data/default/rootfs/etc/registry.yml.tmpl completed
192.168.1.103:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/registry_config.yml from /var/lib/sealos/data/default/rootfs/etc/registry_config.yml.tmpl completed
192.168.1.103:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/systemd/system/kubelet.service.d/10-kubeadm.conf from /var/lib/sealos/data/default/rootfs/etc/systemd/system/kubelet.service.d/10-kubeadm.conf.tmpl completed
192.168.1.102:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/config.toml from /var/lib/sealos/data/default/rootfs/etc/config.toml.tmpl completed
192.168.1.102:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/containerd.service from /var/lib/sealos/data/default/rootfs/etc/containerd.service.tmpl completed
192.168.1.102:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/hosts.toml from /var/lib/sealos/data/default/rootfs/etc/hosts.toml.tmpl completed
192.168.1.102:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.service from /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.service.tmpl completed
192.168.1.102:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.yaml from /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.yaml.tmpl completed
192.168.1.102:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/kubelet.service from /var/lib/sealos/data/default/rootfs/etc/kubelet.service.tmpl completed
192.168.1.102:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/registry.service from /var/lib/sealos/data/default/rootfs/etc/registry.service.tmpl completed
192.168.1.102:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/registry.yml from /var/lib/sealos/data/default/rootfs/etc/registry.yml.tmpl completed
192.168.1.102:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/registry_config.yml from /var/lib/sealos/data/default/rootfs/etc/registry_config.yml.tmpl completed
192.168.1.102:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/systemd/system/kubelet.service.d/10-kubeadm.conf from /var/lib/sealos/data/default/rootfs/etc/systemd/system/kubelet.service.d/10-kubeadm.conf.tmpl completed
2025-08-10T12:20:15 info Executing pipeline MirrorRegistry in CreateProcessor.
2025-08-10T12:20:15 info trying default http mode to sync images to hosts [192.168.1.98:22]
2025-08-10T12:20:18 info Executing pipeline Bootstrap in CreateProcessor.
INFO [2025-08-10 12:20:18] >> Check port kubelet port 10249..10259, reserved port 5050..5054 inuse. Please wait...
192.168.1.103:22         INFO [2025-08-10 12:20:24] >> Check port kubelet port 10249..10259, reserved port 5050..5054 inuse. Please wait...
192.168.1.102:22         INFO [2025-08-10 12:20:18] >> Check port kubelet port 10249..10259, reserved port 5050..5054 inuse. Please wait...
INFO [2025-08-10 12:20:19] >> check root,port,cri success
192.168.1.103:22         INFO [2025-08-10 12:20:25] >> check root,port,cri success
192.168.1.102:22         INFO [2025-08-10 12:20:19] >> check root,port,cri success
2025-08-10T12:20:19 info domain sealos.hub:192.168.1.98 append success
192.168.1.103:22        2025-08-10T12:20:25 info domain sealos.hub:192.168.1.98 append success
192.168.1.102:22        2025-08-10T12:20:19 info domain sealos.hub:192.168.1.98 append success
Created symlink /etc/systemd/system/multi-user.target.wants/registry.service → /etc/systemd/system/registry.service.
INFO [2025-08-10 12:20:20] >> Health check registry!
INFO [2025-08-10 12:20:20] >> registry is running
INFO [2025-08-10 12:20:20] >> init registry success
2025-08-10T12:20:20 info domain apiserver.cluster.local:192.168.1.98 append success
192.168.1.102:22        2025-08-10T12:20:20 info domain apiserver.cluster.local:10.103.97.2 append success
192.168.1.103:22        2025-08-10T12:20:26 info domain apiserver.cluster.local:10.103.97.2 append success
192.168.1.102:22        2025-08-10T12:20:21 info domain lvscare.node.ip:192.168.1.102 append success
192.168.1.103:22        2025-08-10T12:20:27 info domain lvscare.node.ip:192.168.1.103 append success
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /etc/systemd/system/containerd.service.
192.168.1.102:22        Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /etc/systemd/system/containerd.service.
192.168.1.103:22        Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /etc/systemd/system/containerd.service.
INFO [2025-08-10 12:20:23] >> Health check containerd!
INFO [2025-08-10 12:20:23] >> containerd is running
INFO [2025-08-10 12:20:23] >> init containerd success
Created symlink /etc/systemd/system/multi-user.target.wants/image-cri-shim.service → /etc/systemd/system/image-cri-shim.service.
192.168.1.102:22         INFO [2025-08-10 12:20:23] >> Health check containerd!
192.168.1.102:22         INFO [2025-08-10 12:20:23] >> containerd is running
192.168.1.102:22         INFO [2025-08-10 12:20:23] >> init containerd success
192.168.1.102:22        Created symlink /etc/systemd/system/multi-user.target.wants/image-cri-shim.service → /etc/systemd/system/image-cri-shim.service.
INFO [2025-08-10 12:20:24] >> Health check image-cri-shim!
INFO [2025-08-10 12:20:24] >> image-cri-shim is running
INFO [2025-08-10 12:20:24] >> init shim success
127.0.0.1 localhost
::1     ip6-localhost ip6-loopback
192.168.1.103:22         INFO [2025-08-10 12:20:30] >> Health check containerd!
192.168.1.103:22         INFO [2025-08-10 12:20:30] >> containerd is running
192.168.1.103:22         INFO [2025-08-10 12:20:30] >> init containerd success
192.168.1.103:22        Created symlink /etc/systemd/system/multi-user.target.wants/image-cri-shim.service → /etc/systemd/system/image-cri-shim.service.
192.168.1.102:22         INFO [2025-08-10 12:20:24] >> Health check image-cri-shim!
192.168.1.102:22         INFO [2025-08-10 12:20:24] >> image-cri-shim is running
192.168.1.102:22         INFO [2025-08-10 12:20:24] >> init shim success
192.168.1.102:22        127.0.0.1 localhost
192.168.1.102:22        ::1     ip6-localhost ip6-loopback
192.168.1.103:22         INFO [2025-08-10 12:20:31] >> Health check image-cri-shim!
192.168.1.103:22         INFO [2025-08-10 12:20:31] >> image-cri-shim is running
192.168.1.103:22         INFO [2025-08-10 12:20:31] >> init shim success
192.168.1.103:22        127.0.0.1 localhost
192.168.1.103:22        ::1     ip6-localhost ip6-loopback
Firewall stopped and disabled on system startup
* Applying /usr/lib/sysctl.d/10-apparmor.conf ...
* Applying /etc/sysctl.d/10-bufferbloat.conf ...
* Applying /etc/sysctl.d/10-console-messages.conf ...
* Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
* Applying /etc/sysctl.d/10-kernel-hardening.conf ...
* Applying /etc/sysctl.d/10-magic-sysrq.conf ...
* Applying /etc/sysctl.d/10-map-count.conf ...
* Applying /etc/sysctl.d/10-network-security.conf ...
* Applying /etc/sysctl.d/10-ptrace.conf ...
* Applying /etc/sysctl.d/10-zeropage.conf ...
* Applying /usr/lib/sysctl.d/30-tracker.conf ...
* Applying /usr/lib/sysctl.d/50-bubblewrap.conf ...
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
* Applying /usr/lib/sysctl.d/99-protect-links.conf ...
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.conf ...
kernel.apparmor_restrict_unprivileged_userns = 1
net.core.default_qdisc = fq_codel
kernel.printk = 4 4 1 7
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
kernel.kptr_restrict = 1
kernel.sysrq = 176
vm.max_map_count = 1048576
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.all.rp_filter = 2
kernel.yama.ptrace_scope = 1
vm.mmap_min_addr = 65536
fs.inotify.max_user_watches = 65536
kernel.unprivileged_userns_clone = 1
kernel.pid_max = 4194304
fs.protected_fifos = 1
fs.protected_hardlinks = 1
fs.protected_regular = 2
fs.protected_symlinks = 1
fs.file-max = 1048576 # sealos
net.bridge.bridge-nf-call-ip6tables = 1 # sealos
net.bridge.bridge-nf-call-iptables = 1 # sealos
net.core.somaxconn = 65535 # sealos
net.ipv4.conf.all.rp_filter = 0 # sealos
net.ipv4.ip_forward = 1 # sealos
net.ipv4.ip_local_port_range = 1024 65535 # sealos
net.ipv4.tcp_keepalive_intvl = 30 # sealos
net.ipv4.tcp_keepalive_time = 600 # sealos
net.ipv4.vs.conn_reuse_mode = 0 # sealos
net.ipv4.vs.conntrack = 1 # sealos
net.ipv6.conf.all.forwarding = 1 # sealos
vm.max_map_count = 2147483642 # sealos
fs.file-max = 1048576 # sealos
net.bridge.bridge-nf-call-ip6tables = 1 # sealos
net.bridge.bridge-nf-call-iptables = 1 # sealos
net.core.somaxconn = 65535 # sealos
net.ipv4.conf.all.rp_filter = 0 # sealos
net.ipv4.ip_forward = 1 # sealos
net.ipv4.ip_local_port_range = 1024 65535 # sealos
net.ipv4.tcp_keepalive_intvl = 30 # sealos
net.ipv4.tcp_keepalive_time = 600 # sealos
net.ipv4.vs.conn_reuse_mode = 0 # sealos
net.ipv4.vs.conntrack = 1 # sealos
net.ipv6.conf.all.forwarding = 1 # sealos
vm.max_map_count = 2147483642 # sealos
INFO [2025-08-10 12:20:25] >> pull pause image sealos.hub:5000/pause:3.9
192.168.1.102:22        Firewall stopped and disabled on system startup
Image is up to date for sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
192.168.1.102:22        * Applying /usr/lib/sysctl.d/10-apparmor.conf ...
192.168.1.102:22        * Applying /etc/sysctl.d/10-bufferbloat.conf ...
192.168.1.102:22        * Applying /etc/sysctl.d/10-console-messages.conf ...
192.168.1.102:22        * Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
192.168.1.102:22        * Applying /etc/sysctl.d/10-kernel-hardening.conf ...
192.168.1.102:22        * Applying /etc/sysctl.d/10-magic-sysrq.conf ...
192.168.1.102:22        * Applying /etc/sysctl.d/10-map-count.conf ...
192.168.1.102:22        * Applying /etc/sysctl.d/10-network-security.conf ...
192.168.1.102:22        * Applying /etc/sysctl.d/10-ptrace.conf ...
192.168.1.102:22        * Applying /etc/sysctl.d/10-zeropage.conf ...
192.168.1.102:22        * Applying /usr/lib/sysctl.d/30-tracker.conf ...
192.168.1.102:22        * Applying /usr/lib/sysctl.d/50-bubblewrap.conf ...
192.168.1.102:22        * Applying /usr/lib/sysctl.d/50-pid-max.conf ...
192.168.1.102:22        * Applying /usr/lib/sysctl.d/99-protect-links.conf ...
192.168.1.102:22        * Applying /etc/sysctl.d/99-sysctl.conf ...
192.168.1.102:22        * Applying /etc/sysctl.conf ...
192.168.1.102:22        kernel.apparmor_restrict_unprivileged_userns = 1
192.168.1.102:22        net.core.default_qdisc = fq_codel
192.168.1.102:22        kernel.printk = 4 4 1 7
192.168.1.102:22        net.ipv6.conf.all.use_tempaddr = 2
192.168.1.102:22        net.ipv6.conf.default.use_tempaddr = 2
192.168.1.102:22        kernel.kptr_restrict = 1
192.168.1.102:22        kernel.sysrq = 176
192.168.1.102:22        vm.max_map_count = 1048576
192.168.1.102:22        net.ipv4.conf.default.rp_filter = 2
192.168.1.102:22        net.ipv4.conf.all.rp_filter = 2
192.168.1.102:22        kernel.yama.ptrace_scope = 1
192.168.1.102:22        vm.mmap_min_addr = 65536
192.168.1.102:22        fs.inotify.max_user_watches = 65536
192.168.1.102:22        kernel.unprivileged_userns_clone = 1
192.168.1.102:22        kernel.pid_max = 4194304
192.168.1.102:22        fs.protected_fifos = 1
192.168.1.102:22        fs.protected_hardlinks = 1
192.168.1.102:22        fs.protected_regular = 2
192.168.1.102:22        fs.protected_symlinks = 1
192.168.1.102:22        fs.file-max = 1048576 # sealos
192.168.1.102:22        net.bridge.bridge-nf-call-ip6tables = 1 # sealos
192.168.1.102:22        net.bridge.bridge-nf-call-iptables = 1 # sealos
192.168.1.102:22        net.core.somaxconn = 65535 # sealos
192.168.1.102:22        net.ipv4.conf.all.rp_filter = 0 # sealos
192.168.1.102:22        net.ipv4.ip_forward = 1 # sealos
192.168.1.102:22        net.ipv4.ip_local_port_range = 1024 65535 # sealos
192.168.1.102:22        net.ipv4.tcp_keepalive_intvl = 30 # sealos
192.168.1.102:22        net.ipv4.tcp_keepalive_time = 600 # sealos
192.168.1.102:22        net.ipv4.vs.conn_reuse_mode = 0 # sealos
192.168.1.102:22        net.ipv4.vs.conntrack = 1 # sealos
192.168.1.102:22        net.ipv6.conf.all.forwarding = 1 # sealos
192.168.1.102:22        vm.max_map_count = 2147483642 # sealos
192.168.1.102:22        fs.file-max = 1048576 # sealos
192.168.1.102:22        net.bridge.bridge-nf-call-ip6tables = 1 # sealos
192.168.1.102:22        net.bridge.bridge-nf-call-iptables = 1 # sealos
192.168.1.102:22        net.core.somaxconn = 65535 # sealos
192.168.1.102:22        net.ipv4.conf.all.rp_filter = 0 # sealos
192.168.1.102:22        net.ipv4.ip_forward = 1 # sealos
192.168.1.102:22        net.ipv4.ip_local_port_range = 1024 65535 # sealos
192.168.1.102:22        net.ipv4.tcp_keepalive_intvl = 30 # sealos
192.168.1.102:22        net.ipv4.tcp_keepalive_time = 600 # sealos
192.168.1.102:22        net.ipv4.vs.conn_reuse_mode = 0 # sealos
192.168.1.102:22        net.ipv4.vs.conntrack = 1 # sealos
192.168.1.102:22        net.ipv6.conf.all.forwarding = 1 # sealos
192.168.1.102:22        vm.max_map_count = 2147483642 # sealos
192.168.1.103:22        Firewall stopped and disabled on system startup
192.168.1.102:22         INFO [2025-08-10 12:20:26] >> pull pause image sealos.hub:5000/pause:3.9
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
192.168.1.103:22        * Applying /usr/lib/sysctl.d/10-apparmor.conf ...
192.168.1.103:22        * Applying /etc/sysctl.d/10-bufferbloat.conf ...
192.168.1.103:22        * Applying /etc/sysctl.d/10-console-messages.conf ...
192.168.1.103:22        * Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
192.168.1.103:22        * Applying /etc/sysctl.d/10-kernel-hardening.conf ...
192.168.1.103:22        * Applying /etc/sysctl.d/10-magic-sysrq.conf ...
192.168.1.103:22        * Applying /etc/sysctl.d/10-map-count.conf ...
192.168.1.103:22        * Applying /etc/sysctl.d/10-network-security.conf ...
192.168.1.103:22        * Applying /etc/sysctl.d/10-ptrace.conf ...
192.168.1.103:22        * Applying /etc/sysctl.d/10-zeropage.conf ...
192.168.1.103:22        * Applying /usr/lib/sysctl.d/30-tracker.conf ...
192.168.1.103:22        * Applying /usr/lib/sysctl.d/50-bubblewrap.conf ...
192.168.1.103:22        * Applying /usr/lib/sysctl.d/50-pid-max.conf ...
192.168.1.103:22        * Applying /usr/lib/sysctl.d/99-protect-links.conf ...
192.168.1.103:22        * Applying /etc/sysctl.d/99-sysctl.conf ...
192.168.1.103:22        * Applying /etc/sysctl.conf ...
192.168.1.103:22        kernel.apparmor_restrict_unprivileged_userns = 1
192.168.1.103:22        net.core.default_qdisc = fq_codel
192.168.1.103:22        kernel.printk = 4 4 1 7
192.168.1.103:22        net.ipv6.conf.all.use_tempaddr = 2
192.168.1.103:22        net.ipv6.conf.default.use_tempaddr = 2
192.168.1.103:22        kernel.kptr_restrict = 1
192.168.1.103:22        kernel.sysrq = 176
192.168.1.103:22        vm.max_map_count = 1048576
192.168.1.103:22        net.ipv4.conf.default.rp_filter = 2
192.168.1.103:22        net.ipv4.conf.all.rp_filter = 2
192.168.1.103:22        kernel.yama.ptrace_scope = 1
192.168.1.103:22        vm.mmap_min_addr = 65536
192.168.1.103:22        fs.inotify.max_user_watches = 65536
192.168.1.103:22        kernel.unprivileged_userns_clone = 1
192.168.1.103:22        kernel.pid_max = 4194304
192.168.1.103:22        fs.protected_fifos = 1
192.168.1.103:22        fs.protected_hardlinks = 1
192.168.1.103:22        fs.protected_regular = 2
192.168.1.103:22        fs.protected_symlinks = 1
192.168.1.103:22        fs.file-max = 1048576 # sealos
192.168.1.103:22        net.bridge.bridge-nf-call-ip6tables = 1 # sealos
192.168.1.103:22        net.bridge.bridge-nf-call-iptables = 1 # sealos
192.168.1.103:22        net.core.somaxconn = 65535 # sealos
192.168.1.103:22        net.ipv4.conf.all.rp_filter = 0 # sealos
192.168.1.103:22        net.ipv4.ip_forward = 1 # sealos
192.168.1.103:22        net.ipv4.ip_local_port_range = 1024 65535 # sealos
192.168.1.103:22        net.ipv4.tcp_keepalive_intvl = 30 # sealos
192.168.1.103:22        net.ipv4.tcp_keepalive_time = 600 # sealos
192.168.1.103:22        net.ipv4.vs.conn_reuse_mode = 0 # sealos
192.168.1.103:22        net.ipv4.vs.conntrack = 1 # sealos
192.168.1.103:22        net.ipv6.conf.all.forwarding = 1 # sealos
192.168.1.103:22        vm.max_map_count = 2147483642 # sealos
192.168.1.103:22        fs.file-max = 1048576 # sealos
192.168.1.103:22        net.bridge.bridge-nf-call-ip6tables = 1 # sealos
192.168.1.103:22        net.bridge.bridge-nf-call-iptables = 1 # sealos
192.168.1.103:22        net.core.somaxconn = 65535 # sealos
192.168.1.103:22        net.ipv4.conf.all.rp_filter = 0 # sealos
192.168.1.103:22        net.ipv4.ip_forward = 1 # sealos
192.168.1.103:22        net.ipv4.ip_local_port_range = 1024 65535 # sealos
192.168.1.103:22        net.ipv4.tcp_keepalive_intvl = 30 # sealos
192.168.1.103:22        net.ipv4.tcp_keepalive_time = 600 # sealos
192.168.1.103:22        net.ipv4.vs.conn_reuse_mode = 0 # sealos
192.168.1.103:22        net.ipv4.vs.conntrack = 1 # sealos
192.168.1.103:22        net.ipv6.conf.all.forwarding = 1 # sealos
192.168.1.103:22        vm.max_map_count = 2147483642 # sealos
192.168.1.103:22         INFO [2025-08-10 12:20:32] >> pull pause image sealos.hub:5000/pause:3.9
INFO [2025-08-10 12:20:26] >> init kubelet success
INFO [2025-08-10 12:20:26] >> init rootfs success
192.168.1.102:22        Image is up to date for sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
192.168.1.102:22        Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
192.168.1.103:22        Image is up to date for sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
192.168.1.102:22         INFO [2025-08-10 12:20:27] >> init kubelet success
192.168.1.102:22         INFO [2025-08-10 12:20:27] >> init rootfs success
192.168.1.103:22        Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
192.168.1.103:22         INFO [2025-08-10 12:20:34] >> init kubelet success
192.168.1.103:22         INFO [2025-08-10 12:20:34] >> init rootfs success
2025-08-10T12:20:28 info Executing pipeline Init in CreateProcessor.
2025-08-10T12:20:28 info Copying kubeadm config to master0
2025-08-10T12:20:28 info start to generate cert and kubeConfig...
2025-08-10T12:20:28 info start to generate and copy certs to masters...
2025-08-10T12:20:28 info apiserver altNames : {map[apiserver.cluster.local:apiserver.cluster.local kubernetes:kubernetes kubernetes.default:kubernetes.default kubernetes.default.svc:kubernetes.default.svc kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local localhost:localhost master01:master01] map[10.103.97.2:10.103.97.2 10.96.0.1:10.96.0.1 127.0.0.1:127.0.0.1 192.168.1.98:192.168.1.98]}
2025-08-10T12:20:28 info Etcd altnames : {map[localhost:localhost master01:master01] map[127.0.0.1:127.0.0.1 192.168.1.98:192.168.1.98 ::1:::1]}, commonName : master01
2025-08-10T12:20:30 info start to copy etc pki files to masters
2025-08-10T12:20:30 info start to create kubeconfig...
2025-08-10T12:20:30 info start to copy kubeconfig files to masters
2025-08-10T12:20:30 info start to copy static files to masters
2025-08-10T12:20:30 info start to init master0...
[config/images] Pulled registry.k8s.io/kube-apiserver:v1.29.9
[config/images] Pulled registry.k8s.io/kube-controller-manager:v1.29.9
[config/images] Pulled registry.k8s.io/kube-scheduler:v1.29.9
[config/images] Pulled registry.k8s.io/kube-proxy:v1.29.9
[config/images] Pulled registry.k8s.io/coredns/coredns:v1.11.1
[config/images] Pulled registry.k8s.io/pause:3.9
[config/images] Pulled registry.k8s.io/etcd:3.5.15-0
W0810 12:20:39.357353    8594 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0
[init] Using Kubernetes version: v1.29.9
[preflight] Running pre-flight checks
        [WARNING FileExisting-socat]: socat not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0810 12:20:39.455337    8594 checks.go:835] detected that the sandbox image "sealos.hub:5000/pause:3.9" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
W0810 12:20:40.239475    8594 kubeconfig.go:273] a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has an unexpected API Server URL: expected: https://192.168.1.98:6443, got: https://apiserver.cluster.local:6443
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
W0810 12:20:40.383648    8594 kubeconfig.go:273] a kubeconfig file "/etc/kubernetes/scheduler.conf" exists already but has an unexpected API Server URL: expected: https://192.168.1.98:6443, got: https://apiserver.cluster.local:6443
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 4.001602 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join apiserver.cluster.local:6443 --token <value withheld> \
        --discovery-token-ca-cert-hash sha256:957a2a9cbc2a717e819cd5108c7415c577baadccd95f01963d1be9a2357e1736 \
        --control-plane --certificate-key <value withheld>

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join apiserver.cluster.local:6443 --token <value withheld> \
        --discovery-token-ca-cert-hash sha256:957a2a9cbc2a717e819cd5108c7415c577baadccd95f01963d1be9a2357e1736
2025-08-10T12:20:46 info Executing pipeline Join in CreateProcessor.
2025-08-10T12:20:46 info [192.168.1.102:22 192.168.1.103:22] will be added as worker
2025-08-10T12:20:46 info start to get kubernetes token...
2025-08-10T12:20:46 info fetch certSANs from kubeadm configmap
2025-08-10T12:20:46 info start to join 192.168.1.103:22 as worker
2025-08-10T12:20:46 info start to copy kubeadm join config to node: 192.168.1.103:22
2025-08-10T12:20:46 info start to join 192.168.1.102:22 as worker
2025-08-10T12:20:47 info run ipvs once module: 192.168.1.103:22
2025-08-10T12:20:47 info start to copy kubeadm join config to node: 192.168.1.102:22
192.168.1.103:22        2025-08-10T12:20:53 info Trying to add route
192.168.1.103:22        2025-08-10T12:20:53 info success to set route.(host:10.103.97.2, gateway:192.168.1.103)
2025-08-10T12:20:47 info start join node: 192.168.1.103:22
192.168.1.103:22        [preflight] Running pre-flight checks
192.168.1.103:22                [WARNING FileExisting-socat]: socat not found in system path
2025-08-10T12:20:47 info run ipvs once module: 192.168.1.102:22
192.168.1.102:22        2025-08-10T12:20:48 info Trying to add route
192.168.1.102:22        2025-08-10T12:20:48 info success to set route.(host:10.103.97.2, gateway:192.168.1.102)
2025-08-10T12:20:48 info start join node: 192.168.1.102:22
192.168.1.102:22        [preflight] Running pre-flight checks
192.168.1.102:22                [WARNING FileExisting-socat]: socat not found in system path
192.168.1.102:22        [preflight] Reading configuration from the cluster...
192.168.1.102:22        [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
192.168.1.102:22        W0810 12:21:00.215357    9534 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0
192.168.1.102:22        [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
192.168.1.102:22        [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
192.168.1.102:22        [kubelet-start] Starting the kubelet
192.168.1.102:22        [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
192.168.1.102:22
192.168.1.102:22        This node has joined the cluster:
192.168.1.102:22        * Certificate signing request was sent to apiserver and a response was received.
192.168.1.102:22        * The Kubelet was informed of the new secure connection details.
192.168.1.102:22
192.168.1.102:22        Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
192.168.1.102:22
2025-08-10T12:21:02 info succeeded in joining 192.168.1.102:22 as worker
192.168.1.103:22        [preflight] Reading configuration from the cluster...
192.168.1.103:22        [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
192.168.1.103:22        W0810 12:21:11.756483    6695 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0
192.168.1.103:22        [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
192.168.1.103:22        [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
192.168.1.103:22        [kubelet-start] Starting the kubelet
192.168.1.103:22        [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
192.168.1.103:22
192.168.1.103:22        This node has joined the cluster:
192.168.1.103:22        * Certificate signing request was sent to apiserver and a response was received.
192.168.1.103:22        * The Kubelet was informed of the new secure connection details.
192.168.1.103:22
192.168.1.103:22        Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
192.168.1.103:22
2025-08-10T12:21:07 info succeeded in joining 192.168.1.103:22 as worker
2025-08-10T12:21:07 info start to sync lvscare static pod to node: 192.168.1.103:22 master: [192.168.1.98:6443]
2025-08-10T12:21:07 info start to sync lvscare static pod to node: 192.168.1.102:22 master: [192.168.1.98:6443]
192.168.1.103:22        2025-08-10T12:21:14 info generator lvscare static pod is success
192.168.1.102:22        2025-08-10T12:21:08 info generator lvscare static pod is success
2025-08-10T12:21:08 info Executing pipeline RunGuest in CreateProcessor.
ℹ️  Using Cilium version 1.13.4
🔮 Auto-detected cluster name: kubernetes
🔮 Auto-detected datapath mode: tunnel
🔮 Auto-detected kube-proxy has been installed
2025-08-10T12:21:09 info succeeded in creating a new cluster, enjoy it!
2025-08-10T12:21:09 info
[sealos ASCII logo]
Website: https://www.sealos.io/
Address: github.com/labring/sealos
Version: 5.0.1-2b74a1281
root@master01:~#
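
The kubeadm output above already shows how to use kubectl as a non-root user; root on master01 can run kubectl immediately (see the next section), so the copy is only needed for other accounts:
# Run as the regular user that should get cluster access
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes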

4. Check the Kubernetes Cluster Status

(1) List the images in use
root@master01:~# sealos images
REPOSITORY                                             TAG       IMAGE ID       CREATED        SIZE
registry.cn-shanghai.aliyuncs.com/labring/kubernetes   v1.29.9   bca192f35556   3 months ago   669 MB
registry.cn-shanghai.aliyuncs.com/labring/cilium       v1.13.4   71aa52ad0a11   2 years ago    483 MB
registry.cn-shanghai.aliyuncs.com/labring/helm         v3.9.4    3376f6822067   2 years ago    46.4 MB
root@master01:~#

(2) Check node status
root@master01:~# kubectl get nodes
NAME       STATUS   ROLES           AGE     VERSION
master01   Ready    control-plane   3m42s   v1.29.9
node1      Ready    <none>          3m24s   v1.29.9
node2      Ready    <none>          3m19s   v1.29.9
root@master01:~#

(3) Check pod status
root@master01:~# kubectl get pod -A
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   cilium-2gdqf                       1/1     Running   0          3m29s
kube-system   cilium-operator-6946ccbcc5-9w275   1/1     Running   0          3m29s
kube-system   cilium-pmhgr                       1/1     Running   0          3m29s
kube-system   cilium-wnp9r                       1/1     Running   0          3m29s
kube-system   coredns-76f75df574-nf7bd           1/1     Running   0          3m39s
kube-system   coredns-76f75df574-s89vx           1/1     Running   0          3m39s
kube-system   etcd-master01                      1/1     Running   0          3m52s
kube-system   kube-apiserver-master01            1/1     Running   0          3m54s
kube-system   kube-controller-manager-master01   1/1     Running   0          3m53s
kube-system   kube-proxy-6mlkb                   1/1     Running   0          3m39s
kube-system   kube-proxy-7jx96                   1/1     Running   0          3m32s
kube-system   kube-proxy-9k92l                   1/1     Running   0          3m37s
kube-system   kube-scheduler-master01            1/1     Running   0          3m52s
kube-system   kube-sealos-lvscare-node1          1/1     Running   0          3m17s
kube-system   kube-sealos-lvscare-node2          1/1     Running   0          3m12s
root@master01:~#

(4) Check certificate expiration
root@master01:~# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0810 12:26:42.829869   12731 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Jul 17, 2125 04:20 UTC   99y             ca                      no
apiserver                  Jul 17, 2125 04:20 UTC   99y             ca                      no
apiserver-etcd-client      Jul 17, 2125 04:20 UTC   99y             etcd-ca                 no
apiserver-kubelet-client   Jul 17, 2125 04:20 UTC   99y             ca                      no
controller-manager.conf    Jul 17, 2125 04:20 UTC   99y             ca                      no
etcd-healthcheck-client    Jul 17, 2125 04:20 UTC   99y             etcd-ca                 no
etcd-peer                  Jul 17, 2125 04:20 UTC   99y             etcd-ca                 no
etcd-server                Jul 17, 2125 04:20 UTC   99y             etcd-ca                 no
front-proxy-client         Jul 17, 2125 04:20 UTC   99y             front-proxy-ca          no
scheduler.conf             Jul 17, 2125 04:20 UTC   99y             ca                      no
super-admin.conf           Aug 10, 2026 04:20 UTC   364d            ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Jul 17, 2125 04:20 UTC   99y             no
etcd-ca                 Jul 17, 2125 04:20 UTC   99y             no
front-proxy-ca          Jul 17, 2125 04:20 UTC   99y             no
root@master01:~#
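
Instead of re-running kubectl get by hand, readiness can also be waited on explicitly; a small sketch (the 300s timeouts are arbitrary):
# Block until every node reports Ready
kubectl wait --for=condition=Ready nodes --all --timeout=300s
# Block until all kube-system pods report Ready
kubectl wait --for=condition=Ready pods --all -n kube-system --timeout=300s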

5. Add a Worker Node Online (IP: 192.168.1.104)

root@master01:~# sealos add --nodes 192.168.1.104
2025-08-10T14:55:12 info start to scale this cluster
2025-08-10T14:55:12 info Executing pipeline JoinCheck in ScaleProcessor.
2025-08-10T14:55:12 info checker:hostname [192.168.1.98:22 192.168.1.104:22]
2025-08-10T14:55:12 info checker:timeSync [192.168.1.98:22 192.168.1.104:22]
2025-08-10T14:55:13 info checker:containerd [192.168.1.104:22]
2025-08-10T14:55:13 info Executing pipeline PreProcess in ScaleProcessor.
2025-08-10T14:55:13 info Executing pipeline PreProcessImage in ScaleProcessor.
2025-08-10T14:55:13 info Executing pipeline RunConfig in ScaleProcessor.
2025-08-10T14:55:13 info Executing pipeline MountRootfs in ScaleProcessor.
192.168.1.104:22        2025-08-10T14:55:51 info render /var/lib/sealos/data/default/rootfs/etc/config.toml from /var/lib/sealos/data/default/rootfs/etc/config.toml.tmpl completed
192.168.1.104:22        2025-08-10T14:55:51 info render /var/lib/sealos/data/default/rootfs/etc/containerd.service from /var/lib/sealos/data/default/rootfs/etc/containerd.service.tmpl completed
192.168.1.104:22        2025-08-10T14:55:51 info render /var/lib/sealos/data/default/rootfs/etc/hosts.toml from /var/lib/sealos/data/default/rootfs/etc/hosts.toml.tmpl completed
192.168.1.104:22        2025-08-10T14:55:51 info render /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.service from /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.service.tmpl completed
192.168.1.104:22        2025-08-10T14:55:51 info render /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.yaml from /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.yaml.tmpl completed
192.168.1.104:22        2025-08-10T14:55:51 info render /var/lib/sealos/data/default/rootfs/etc/kubelet.service from /var/lib/sealos/data/default/rootfs/etc/kubelet.service.tmpl completed
192.168.1.104:22        2025-08-10T14:55:51 info render /var/lib/sealos/data/default/rootfs/etc/registry.service from /var/lib/sealos/data/default/rootfs/etc/registry.service.tmpl completed
192.168.1.104:22        2025-08-10T14:55:51 info render /var/lib/sealos/data/default/rootfs/etc/registry.yml from /var/lib/sealos/data/default/rootfs/etc/registry.yml.tmpl completed
192.168.1.104:22        2025-08-10T14:55:51 info render /var/lib/sealos/data/default/rootfs/etc/registry_config.yml from /var/lib/sealos/data/default/rootfs/etc/registry_config.yml.tmpl completed
192.168.1.104:22        2025-08-10T14:55:51 info render /var/lib/sealos/data/default/rootfs/etc/systemd/system/kubelet.service.d/10-kubeadm.conf from /var/lib/sealos/data/default/rootfs/etc/systemd/system/kubelet.service.d/10-kubeadm.conf.tmpl completed
2025-08-10T14:55:46 info Executing pipeline Bootstrap in ScaleProcessor
192.168.1.104:22         INFO [2025-08-10 14:55:51] >> Check port kubelet port 10249..10259, reserved port 5050..5054 inuse. Please wait...
192.168.1.104:22         INFO [2025-08-10 14:55:53] >> check root,port,cri success
192.168.1.104:22        2025-08-10T14:55:53 info domain sealos.hub:192.168.1.98 append success
192.168.1.104:22        2025-08-10T14:55:53 info domain apiserver.cluster.local:10.103.97.2 append success
192.168.1.104:22        2025-08-10T14:55:54 info domain lvscare.node.ip:192.168.1.104 append success
192.168.1.104:22        Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /etc/systemd/system/containerd.service.
192.168.1.104:22         INFO [2025-08-10 14:55:59] >> Health check containerd!
192.168.1.104:22         INFO [2025-08-10 14:55:59] >> containerd is running
192.168.1.104:22         INFO [2025-08-10 14:55:59] >> init containerd success
192.168.1.104:22        Created symlink /etc/systemd/system/multi-user.target.wants/image-cri-shim.service → /etc/systemd/system/image-cri-shim.service.
192.168.1.104:22         INFO [2025-08-10 14:56:00] >> Health check image-cri-shim!
192.168.1.104:22         INFO [2025-08-10 14:56:00] >> image-cri-shim is running
192.168.1.104:22         INFO [2025-08-10 14:56:00] >> init shim success
192.168.1.104:22        127.0.0.1 localhost
192.168.1.104:22        ::1     ip6-localhost ip6-loopback
192.168.1.104:22        Firewall stopped and disabled on system startup
192.168.1.104:22        * Applying /usr/lib/sysctl.d/10-apparmor.conf ...
192.168.1.104:22        * Applying /etc/sysctl.d/10-bufferbloat.conf ...
192.168.1.104:22        * Applying /etc/sysctl.d/10-console-messages.conf ...
192.168.1.104:22        * Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
192.168.1.104:22        * Applying /etc/sysctl.d/10-kernel-hardening.conf ...
192.168.1.104:22        * Applying /etc/sysctl.d/10-magic-sysrq.conf ...
192.168.1.104:22        * Applying /etc/sysctl.d/10-map-count.conf ...
192.168.1.104:22        * Applying /etc/sysctl.d/10-network-security.conf ...
192.168.1.104:22        * Applying /etc/sysctl.d/10-ptrace.conf ...
192.168.1.104:22        * Applying /etc/sysctl.d/10-zeropage.conf ...
192.168.1.104:22        * Applying /usr/lib/sysctl.d/30-tracker.conf ...
192.168.1.104:22        * Applying /usr/lib/sysctl.d/50-bubblewrap.conf ...
192.168.1.104:22        * Applying /usr/lib/sysctl.d/50-pid-max.conf ...
192.168.1.104:22        * Applying /usr/lib/sysctl.d/99-protect-links.conf ...
192.168.1.104:22        * Applying /etc/sysctl.d/99-sysctl.conf ...
192.168.1.104:22        * Applying /etc/sysctl.conf ...
192.168.1.104:22        kernel.apparmor_restrict_unprivileged_userns = 1
192.168.1.104:22        net.core.default_qdisc = fq_codel
192.168.1.104:22        kernel.printk = 4 4 1 7
192.168.1.104:22        net.ipv6.conf.all.use_tempaddr = 2
192.168.1.104:22        net.ipv6.conf.default.use_tempaddr = 2
192.168.1.104:22        kernel.kptr_restrict = 1
192.168.1.104:22        kernel.sysrq = 176
192.168.1.104:22        vm.max_map_count = 1048576
192.168.1.104:22        net.ipv4.conf.default.rp_filter = 2
192.168.1.104:22        net.ipv4.conf.all.rp_filter = 2
192.168.1.104:22        kernel.yama.ptrace_scope = 1
192.168.1.104:22        vm.mmap_min_addr = 65536
192.168.1.104:22        fs.inotify.max_user_watches = 65536
192.168.1.104:22        kernel.unprivileged_userns_clone = 1
192.168.1.104:22        kernel.pid_max = 4194304
192.168.1.104:22        fs.protected_fifos = 1
192.168.1.104:22        fs.protected_hardlinks = 1
192.168.1.104:22        fs.protected_regular = 2
192.168.1.104:22        fs.protected_symlinks = 1
192.168.1.104:22        fs.file-max = 1048576 # sealos
192.168.1.104:22        net.bridge.bridge-nf-call-ip6tables = 1 # sealos
192.168.1.104:22        net.bridge.bridge-nf-call-iptables = 1 # sealos
192.168.1.104:22        net.core.somaxconn = 65535 # sealos
192.168.1.104:22        net.ipv4.conf.all.rp_filter = 0 # sealos
192.168.1.104:22        net.ipv4.ip_forward = 1 # sealos
192.168.1.104:22        net.ipv4.ip_local_port_range = 1024 65535 # sealos
192.168.1.104:22        net.ipv4.tcp_keepalive_intvl = 30 # sealos
192.168.1.104:22        net.ipv4.tcp_keepalive_time = 600 # sealos
192.168.1.104:22        net.ipv4.vs.conn_reuse_mode = 0 # sealos
192.168.1.104:22        net.ipv4.vs.conntrack = 1 # sealos
192.168.1.104:22        net.ipv6.conf.all.forwarding = 1 # sealos
192.168.1.104:22        vm.max_map_count = 2147483642 # sealos
192.168.1.104:22        fs.file-max = 1048576 # sealos
192.168.1.104:22        net.bridge.bridge-nf-call-ip6tables = 1 # sealos
192.168.1.104:22        net.bridge.bridge-nf-call-iptables = 1 # sealos
192.168.1.104:22        net.core.somaxconn = 65535 # sealos
192.168.1.104:22        net.ipv4.conf.all.rp_filter = 0 # sealos
192.168.1.104:22        net.ipv4.ip_forward = 1 # sealos
192.168.1.104:22        net.ipv4.ip_local_port_range = 1024 65535 # sealos
192.168.1.104:22        net.ipv4.tcp_keepalive_intvl = 30 # sealos
192.168.1.104:22        net.ipv4.tcp_keepalive_time = 600 # sealos
192.168.1.104:22        net.ipv4.vs.conn_reuse_mode = 0 # sealos
192.168.1.104:22        net.ipv4.vs.conntrack = 1 # sealos
192.168.1.104:22        net.ipv6.conf.all.forwarding = 1 # sealos
192.168.1.104:22        vm.max_map_count = 2147483642 # sealos
192.168.1.104:22         INFO [2025-08-10 14:56:03] >> pull pause image sealos.hub:5000/pause:3.9
192.168.1.104:22        Image is up to date for sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
192.168.1.104:22        Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
192.168.1.104:22         INFO [2025-08-10 14:56:05] >> init kubelet success
192.168.1.104:22         INFO [2025-08-10 14:56:05] >> init rootfs success
2025-08-10T14:56:00 info Executing pipeline Join in ScaleProcessor.
2025-08-10T14:56:00 info [192.168.1.104:22] will be added as worker
2025-08-10T14:56:00 info start to get kubernetes token...
2025-08-10T14:56:01 info fetch certSANs from kubeadm configmap
2025-08-10T14:56:01 info start to join 192.168.1.104:22 as worker
2025-08-10T14:56:01 info start to copy kubeadm join config to node: 192.168.1.104:22
2025-08-10T14:56:02 info run ipvs once module: 192.168.1.104:22
192.168.1.104:22        2025-08-10T14:56:07 info Trying to add route
192.168.1.104:22        2025-08-10T14:56:07 info success to set route.(host:10.103.97.2, gateway:192.168.1.104)
2025-08-10T14:56:02 info start join node: 192.168.1.104:22
192.168.1.104:22        [preflight] Running pre-flight checks
192.168.1.104:22                [WARNING FileExisting-socat]: socat not found in system path
192.168.1.104:22        [preflight] Reading configuration from the cluster...
192.168.1.104:22        [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
192.168.1.104:22        W0810 14:56:08.112112    6085 common.go:200] WARNING: could not obtain a bind address for the API Server: no default routes found in "/proc/net/route" or "/proc/net/ipv6_route"; using: 0.0.0.0
192.168.1.104:22        W0810 14:56:08.112331    6085 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0
192.168.1.104:22        [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
192.168.1.104:22        [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
192.168.1.104:22        [kubelet-start] Starting the kubelet
192.168.1.104:22        [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
192.168.1.104:22
192.168.1.104:22        This node has joined the cluster:
192.168.1.104:22        * Certificate signing request was sent to apiserver and a response was received.
192.168.1.104:22        * The Kubelet was informed of the new secure connection details.
192.168.1.104:22
192.168.1.104:22        Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
192.168.1.104:22
2025-08-10T14:56:05 info succeeded in joining 192.168.1.104:22 as worker
2025-08-10T14:56:05 info start to sync lvscare static pod to node: 192.168.1.104:22 master: [192.168.1.98:6443]
192.168.1.104:22        2025-08-10T14:56:11 info generator lvscare static pod is success
2025-08-10T14:56:06 info Executing pipeline RunGuest in ScaleProcessor.
2025-08-10T14:56:07 info succeeded in scaling this cluster
2025-08-10T14:56:07 info
[sealos ASCII logo]
Website: https://www.sealos.io/
Address: github.com/labring/sealos
Version: 5.0.1-2b74a1281
root@master01:~#
root@master01:~# kubectl get nodes
NAME       STATUS   ROLES           AGE    VERSION
master01   Ready    control-plane   156m   v1.29.9
node03     Ready    <none>          86s    v1.29.9
node1      Ready    <none>          156m   v1.29.9
node2      Ready    <none>          156m   v1.29.9
root@master01:~#
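
The reverse operation is symmetric: a node can be removed with sealos delete. A sketch only, using the node just added as an example (drain it first if it is already running workloads):
# Move workloads off the node, then let sealos remove it from the cluster
kubectl drain node03 --ignore-daemonsets --delete-emptydir-data
sealos delete --nodes 192.168.1.104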

6. Install Sealos Cloud to Provide a Graphical Kubernetes PaaS

(1) Download the sealos-cloud image and upload it to the master01 node
Note: pulling the image directly from the Aliyun registry fails with a permission error, so pull it from Docker Hub instead.
docker pull docker.io/labring/sealos-cloud:latest

(2) Save the image to a tar archive
docker save -o sealos-cloud.tar docker.io/labring/sealos-cloud:latest

(3) Upload sealos-cloud.tar to the /home/test directory and import it with sealos load
root@master01:~# sealos load -i /home/test/sealos-cloud.tar
Getting image source signatures
Copying blob b63eb4a8e470 done
Copying config 8f15d6df44 done
Writing manifest to image destination
Storing signatures
Loaded image: docker.io/labring/sealos-cloud:latest
root@master01:~# sealos images
REPOSITORY                                             TAG       IMAGE ID       CREATED        SIZE
registry.cn-shanghai.aliyuncs.com/labring/kubernetes   v1.29.9   bca192f35556   3 months ago   669 MB
docker.io/labring/sealos-cloud                         latest    8f15d6df448e   7 months ago   1.46 GB
registry.cn-shanghai.aliyuncs.com/labring/cilium       v1.13.4   71aa52ad0a11   2 years ago    483 MB
registry.cn-shanghai.aliyuncs.com/labring/helm         v3.9.4    3376f6822067   2 years ago    46.4 MB
root@master01:~# sealos tag docker.io/labring/sealos-cloud:latest registry.cn-shanghai.aliyuncs.com/labring/sealos-cloud:latest
root@master01:~# sealos images
REPOSITORY                                               TAG       IMAGE ID       CREATED        SIZE
registry.cn-shanghai.aliyuncs.com/labring/kubernetes     v1.29.9   bca192f35556   3 months ago   669 MB
docker.io/labring/sealos-cloud                           latest    8f15d6df448e   7 months ago   1.46 GB
registry.cn-shanghai.aliyuncs.com/labring/sealos-cloud   latest    8f15d6df448e   7 months ago   1.46 GB
registry.cn-shanghai.aliyuncs.com/labring/cilium         v1.13.4   71aa52ad0a11   2 years ago    483 MB
registry.cn-shanghai.aliyuncs.com/labring/helm           v3.9.4    3376f6822067   2 years ago    46.4 MB
root@master01:~#
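The section stops after retagging the image. Per the sealos self-hosted cloud documentation, the next step is to run the cloud image with a cloudDomain setting; the following is only a hedged sketch — cloud.example.com is a placeholder, and the exact --env names should be checked against the docs for the image version actually in use:
sealos run registry.cn-shanghai.aliyuncs.com/labring/sealos-cloud:latest \
  --env cloudDomain="cloud.example.com" \
  --env cloudPort="443"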

七、Install KubeBlocks

(1) Install the volume snapshot CRDs
Check whether the VolumeSnapshot CRDs already exist:
kubectl get crd volumesnapshotclasses.snapshot.storage.k8s.io
kubectl get crd volumesnapshots.snapshot.storage.k8s.io
kubectl get crd volumesnapshotcontents.snapshot.storage.k8s.io
If they are missing, create them from the external-snapshotter v8.2.0 manifests:
kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v8.2.0/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v8.2.0/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v8.2.0/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
root@master01:~# kubectl get crd volumesnapshotclasses.snapshot.storage.k8s.io
NAME                                            CREATED AT
volumesnapshotclasses.snapshot.storage.k8s.io   2025-08-10T07:31:06Z
root@master01:~# kubectl get crd volumesnapshots.snapshot.storage.k8s.io
NAME                                      CREATED AT
volumesnapshots.snapshot.storage.k8s.io   2025-08-10T07:31:07Z
root@master01:~# kubectl get crd volumesnapshotcontents.snapshot.storage.k8s.io
NAME                                             CREATED AT
volumesnapshotcontents.snapshot.storage.k8s.io   2025-08-10T07:31:08Z
root@master01:~#
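An equivalent single check (a convenience command, not from the original log) filters the CRD list in one go:
kubectl get crd | grep snapshot.storage.k8s.io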
1.2 Deploy the snapshot controller
root@master01:~# helm repo add piraeus-charts https://piraeus.io/helm-charts/
"piraeus-charts" has been added to your repositories
root@master01:~# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "piraeus-charts" chart repository
Update Complete. ⎈Happy Helming!⎈
root@master01:~# helm install snapshot-controller piraeus-charts/snapshot-controller -n kb-system --create-namespace
NAME: snapshot-controller
LAST DEPLOYED: Sun Aug 10 15:35:25 2025
NAMESPACE: kb-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Volume Snapshot Controller installed.

If you already have volume snapshots deployed using CRDs before v1, you should
verify that the existing snapshots are upgradable to v1 CRDs. The snapshot controller (>= v3.0.0)
will label any invalid snapshots it can find. Use the following commands to find any invalid snapshots:

kubectl get volumesnapshots --selector=snapshot.storage.kubernetes.io/invalid-snapshot-resource="" --all-namespaces
kubectl get volumesnapshotcontents --selector=snapshot.storage.kubernetes.io/invalid-snapshot-resource="" --all-namespaces

If the above commands return any items, you need to remove them before upgrading to the newer v1 CRDs.
root@master01:~#


1.3 Verify the deployment
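The chart NOTES above do not include a status check, so as a hedged sketch (the resources land in the kb-system namespace used for the Helm release earlier), the controller can be verified with:
kubectl get pods -n kb-system
kubectl get deployments -n kb-system
Once the snapshot-controller pod reports Running, the KubeBlocks CRDs are created next from the v1.0.0 release manifest (original output below, fetched through the ghfast.top mirror):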

root@master01:~# kubectl create -f https://ghfast.top/https://github.com/apecloud/kubeblocks/releases/download/v1.0.0/kubeblocks_crds.yaml
customresourcedefinition.apiextensions.k8s.io/clusterdefinitions.apps.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/clusters.apps.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/componentdefinitions.apps.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/components.apps.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/componentversions.apps.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/configconstraints.apps.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/configurations.apps.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/servicedescriptors.apps.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/shardingdefinitions.apps.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/sidecardefinitions.apps.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/actionsets.dataprotection.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/backuppolicies.dataprotection.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/backuppolicytemplates.dataprotection.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/backuprepos.dataprotection.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/backups.dataprotection.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/backupschedules.dataprotection.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/restores.dataprotection.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/storageproviders.dataprotection.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/nodecountscalers.experimental.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/addons.extensions.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/opsdefinitions.operations.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/opsrequests.operations.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/componentparameters.parameters.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/paramconfigrenderers.parameters.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/parameters.parameters.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/parametersdefinitions.parameters.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/reconciliationtraces.trace.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/instancesets.workloads.kubeblocks.io created
root@master01:~#
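The CRDs alone do not run the operator. Per the KubeBlocks documentation, the controller itself is then installed with Helm; the repo URL and chart version below follow the upstream v1.0.0 instructions and are assumptions to verify against the docs:
helm repo add kubeblocks https://apecloud.github.io/helm-charts
helm repo update
helm install kubeblocks kubeblocks/kubeblocks --namespace kb-system --version 1.0.0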

