Table of Contents
- 📋 1. Prerequisites
  - 1.1 System Requirements
  - 1.2 Software Checklist
- 🚀 2. Installation Steps
  - 2.1 Install Parallels Desktop
  - 2.2 Configure a Network Proxy (Optional)
  - 2.3 Install Homebrew
  - 2.4 Prepare the Project Directory
  - 2.5 Install Vagrant and Plugins
  - 2.6 Set Up the Python Environment
    - 2.6.1 Install Python Management Tools
    - 2.6.2 Configure the Shell Environment
    - 2.6.3 Verify the Python Environment
    - 2.6.4 Install pyenv
    - 2.6.5 Upgrade Python Tooling
    - 2.6.6 Create a Python Virtual Environment
  - 2.7 Configure Kubespray
    - 2.7.1 Configure the Core Configuration Files
      - 2.7.1.1 Configure the Cluster config.rb
      - 2.7.1.2 Configure containerd.yml (Optional)
- 🔧 3. Deploy the Cluster
  - 3.1 Start the VMs and Deploy Kubernetes
  - 3.2 Retry if the Deployment Fails
- 🎯 4. Configure kubectl Access
  - 4.1 Install the kubectl Client
  - 4.2 Configure Cluster Access
- 📦 5. Install Helm (Optional)
- 🧹 6. Clean Up the Environment
  - 6.1 Destroy the VMs
  - 6.2 Exit the Python Virtual Environment
- 🛠️ 7. Troubleshooting
  - 7.1 Common Issues
  - 7.2 Useful Debugging Commands
- 📝 8. Summary
This guide walks you through building a complete Kubernetes test cluster on macOS using Kubespray, Vagrant, and Parallels Desktop.
📋 1. Prerequisites
1.1 System Requirements
- macOS (Apple Silicon or Intel)
- At least 16GB of RAM
- 50GB or more of free disk space (see the quick check below)
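As a quick sanity check against the requirements above, the following standard macOS commands report total memory and free disk space:
# Total physical memory (hw.memsize is reported in bytes)
echo "$(($(sysctl -n hw.memsize) / 1073741824)) GB RAM"

# Free space on the root volume
df -h /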
1.2 Software Checklist
- Parallels Desktop (commercial edition)
- Homebrew
- Vagrant + the vagrant-parallels plugin
- Python 3.12+ and a virtual environment
- Git
🚀 2. Installation Steps
2.1 Install Parallels Desktop
💡 Tip: A commercial license is required; second-hand platforms such as Xianyu are one option for buying one.
After installation, make sure Parallels Desktop runs normally; one way to confirm this from the terminal is shown below.
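A minimal check using the prlctl CLI, which ships with Parallels Desktop (the Pro and Business editions include the command-line tools):
# Verify the Parallels CLI responds
prlctl --version

# List registered VMs (an empty list is fine on a fresh install)
prlctl list --all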
2.2 Configure a Network Proxy (Optional)
If your network environment requires a proxy, create a proxy configuration snippet:
vim ~/.zshrc
Add the following content:
# Network proxy configuration
proxy_url="http://127.0.0.1:7890"  # change this to your proxy address
export no_proxy="10.0.0.0/8,192.168.16.0/20,localhost,127.0.0.0/8,registry.ocp.local,.svc,.svc.cluster-27,.coding.net,.tencentyun.com,.myqcloud.com"

# Proxy control functions
enable_proxy() {
    export http_proxy="${proxy_url}"
    export https_proxy="${proxy_url}"
    git config --global http.proxy "${proxy_url}"
    git config --global https.proxy "${proxy_url}"
    echo "✅ Proxy enabled: ${proxy_url}"
}

disable_proxy() {
    unset http_proxy
    unset https_proxy
    git config --global --unset http.proxy
    git config --global --unset https.proxy
    echo "❌ Proxy disabled"
}

# Proxy disabled by default
disable_proxy
Apply the configuration:
source ~/.zshrc

# Enable the proxy if needed
enable_proxy
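To confirm the toggle works, check the environment and send a request through the proxy; httpbin.org here is just a public echo service used as an example endpoint:
enable_proxy
echo "http_proxy=${http_proxy}"
curl -sS https://httpbin.org/ip   # should report the proxy's egress IP
disable_proxy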
2.3 Install Homebrew
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Update Homebrew
brew update
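Optionally verify the installation; brew doctor flags common setup problems before you install anything else:
brew --version
brew doctor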
2.4 Prepare the Project Directory
# Create the project directory
mkdir -p ~/Projects/k8s-testing
cd ~/Projects/k8s-testing

# Clone the Kubespray project
git clone https://github.com/upmio/kubespray-upm.git
cd kubespray-upm
2.5 Install Vagrant and Plugins
# Install Vagrant
brew tap hashicorp/tap
brew install hashicorp/tap/hashicorp-vagrant

# Verify the installation
vagrant --version

# Install the Parallels provider plugin
vagrant plugin install vagrant-parallels

# List installed plugins
vagrant plugin list
2.6 Set Up the Python Environment
2.6.1 Install Python Management Tools
brew install python
2.6.2 Configure the Shell Environment
vim ~/.zshrc
Add the following configuration:
# Python environment configuration
alias python=python3
alias pip=pip3
Apply the configuration:
source ~/.zshrc
2.6.3 Verify the Python Environment
python --version
pip --version
2.6.4 Install pyenv
Install the build dependencies:
brew install openssl readline sqlite3 xz zlib
Install pyenv:
curl https://pyenv.run | bash
vim ~/.zshrc
Add the following configuration:
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"
Apply the configuration:
source ~/.zshrc
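Verify that pyenv is on your PATH; this assumes the pyenv.run installer also set up the virtualenv plugin, which it normally installs alongside pyenv:
pyenv --version
pyenv virtualenv --version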
2.6.5 Upgrade Python Tooling
python -m pip install --upgrade pip
pyenv update
2.6.6 Create a Python Virtual Environment
# Install Python 3.12.11
pyenv install 3.12.11

# Create a dedicated virtual environment
pyenv virtualenv 3.12.11 kubespray-3.12.11-env

# Set it as the default Python environment for this project
pyenv local kubespray-3.12.11-env

# Install the project dependencies
pip install -r requirements.txt
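Before moving on, it is worth confirming that the virtual environment is active and that Ansible (pulled in by Kubespray's requirements.txt) is usable; the exact versions shown will vary:
pyenv version     # should print kubespray-3.12.11-env
python --version  # should print Python 3.12.11
ansible --version | head -n 1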
2.7 Configure Kubespray
2.7.1 Configure the Core Configuration Files
# Copy the Vagrantfile into place
cp vagrant_setup_scripts/Vagrantfile ./Vagrantfile

# Create the vagrant configuration directory
mkdir -p vagrant
2.7.1.1 Configure the Cluster config.rb
vim vagrant/config.rb
# Vagrant configuration file for Kubespray
# Kubespray Vagrant Configuration Sample
# This file allows you to customize various settings for your Vagrant environment
# Copy this file to vagrant/config.rb and modify the values according to your needs

# =============================================================================
# PROXY CONFIGURATION
# =============================================================================
# Configure proxy settings for the cluster if you're behind a corporate firewall
# Leave empty or comment out if no proxy is needed

# HTTP proxy URL - used for HTTP traffic
# Example: "http://proxy.company.com:8080"
# $http_proxy = ""
$http_proxy = "http://10.211.55.2:7890"

# HTTPS proxy URL - used for HTTPS traffic
# Example: "https://proxy.company.com:8080"
# $https_proxy = ""
$https_proxy = "http://10.211.55.2:7890"

# No proxy list - comma-separated list of hosts/domains that should bypass the proxy
# Common entries: localhost, 127.0.0.1, local domains, cluster subnets
# Example: "localhost,127.0.0.1,.local,.company.com,10.0.0.0/8,192.168.0.0/16"
# $no_proxy = ""
$no_proxy = "localhost,127.0.0.1,192.168.0.0/16,10.0.0.0/8,172.16.0.0/12,::1,.demo.com"

# Additional no proxy entries - will be added to the default no_proxy list
# Use this to add extra domains without overriding the defaults
# Example: ".internal,.corp,.k8s.local"
# $additional_no_proxy = ""
$additional_no_proxy = "localhost,127.0.0.1,192.168.0.0/16,10.0.0.0/8,172.16.0.0/12,::1,.demo.com"

# =============================================================================
# ANSIBLE CONFIGURATION
# =============================================================================
# Ansible verbosity level for debugging (uncomment to enable)
# Options: "v" (verbose), "vv" (more verbose), "vvv" (debug), "vvvv" (connection debug)
# $ansible_verbosity = "vvv"

# =============================================================================
# VIRTUAL MACHINE CONFIGURATION
# =============================================================================
# Prefix for VM instance names (will be followed by the node number)
$instance_name_prefix = "k8s"

# Default CPU and memory settings for worker nodes
$vm_cpus = 8        # Number of CPU cores per worker node
$vm_memory = 16384  # Memory in MB per worker node (16GB)

# Master/Control plane node resources
$kube_master_vm_cpus = 4       # CPU cores for Kubernetes master nodes
$kube_master_vm_memory = 4096  # Memory in MB for Kubernetes master nodes (4GB)

# UPM control plane node resources (if using UPM)
$upm_control_plane_vm_cpus = 12       # CPU cores for UPM control plane
$upm_control_plane_vm_memory = 24576  # Memory in MB for UPM control plane (24GB)

# =============================================================================
# STORAGE CONFIGURATION
# =============================================================================
# Enable additional disks for worker nodes (useful for storage testing)
$kube_node_instances_with_disks = true

# Size of the additional disks (200GB in this example)
$kube_node_instances_with_disks_size = "200G"

# Number of additional disks per node
$kube_node_instances_with_disks_number = 1

# Directory to store the additional disk files
$kube_node_instances_with_disk_dir = ENV['HOME'] + "/kubespray_vm_disk/upm_disks"

# Suffix for disk file names
$kube_node_instances_with_disk_suffix = "upm"

# VolumeGroup configuration for additional disks
# Name of the VolumeGroup to create for the additional disks
$kube_node_instances_volume_group = "local_vg_dev"

# Enable automatic VolumeGroup creation for the additional disks
$kube_node_instances_create_vg = true

# =============================================================================
# CLUSTER TOPOLOGY
# =============================================================================
# Total number of nodes in the cluster (masters + workers)
$num_instances = 5

# Number of etcd instances (should be an odd number: 1, 3, 5, etc.)
$etcd_instances = 1

# Number of Kubernetes master/control plane instances
$kube_master_instances = 1

# Number of UPM control instances (if using UPM)
$upm_ctl_instances = 1

# =============================================================================
# SYSTEM CONFIGURATION
# =============================================================================
# Vagrant Provider Configuration
# Specify the Vagrant provider to use for virtual machines
# If not set, Vagrant will auto-detect available providers in this order:
#   1. Command line --provider argument (highest priority)
#   2. VAGRANT_DEFAULT_PROVIDER environment variable
#   3. Auto-detection of installed providers (parallels > virtualbox > libvirt)
#
# Supported options: "virtualbox", "libvirt", "parallels"
#
# Provider recommendations:
#   - virtualbox: Best for development and testing (free, cross-platform)
#   - libvirt: Good for Linux production environments (KVM-based)
#   - parallels: Good for macOS users with Parallels Desktop
#
# Leave commented for auto-detection, or uncomment and set to force a specific provider
# $provider = "virtualbox"

# Timezone for all VMs
$time_zone = "Asia/Shanghai"

# NTP server configuration
$ntp_enabled = "True"
$ntp_manage_config = "True"

# Operating system for the VMs
# Supported options: "ubuntu2004", "ubuntu2204", "centos7", "centos8", "rockylinux8", "rockylinux9", etc.
$os = "rockylinux9"

# =============================================================================
# NETWORK CONFIGURATION
# =============================================================================
# Network type: "nat" or "bridge"
#
# nat: Auto-detect the provider network and assign IPs (recommended)
#   - Automatically detects the provider's default network (usually 192.168.x.0/24)
#   - Uses NAT networking for VM internet access
#   - VMs can communicate with each other and the host
#   - Simpler setup, no bridge configuration required
#   - Recommended for development and testing
#
# bridge: Use a bridge network with manual IP configuration
#   - Requires manual bridge interface setup on the host
#   - VMs get IPs from the same subnet as the host network
#   - Direct network access; VMs appear as separate devices on the network
#   - More complex setup, requires bridge configuration
#   - Recommended for production-like environments
$vm_network = "nat"

# Starting value for the 4th octet (VMs get IPs counting up from this number)
# Used in both nat (with the auto-detected subnet) and bridge modes
$subnet_split4 = 100

# The following network settings are only used when $vm_network = "bridge"
# For nat, subnet/gateway/netmask are auto-detected from the provider

# Network subnet (first 3 octets) - bridge only
# $subnet = "10.37.129"

# Network configuration - bridge only
# $netmask = "255.255.255.0"  # Subnet mask
# $gateway = "10.37.129.1"    # Default gateway
# $dns_server = "8.8.8.8"     # DNS server

# (Optional) If you point this at a private DNS server, install a DNS server on the
# macOS host first; otherwise pods may fail after the VMs are restarted.
$dns_server = "10.211.55.2"

# Bridge network interface (required when using "bridge")
# Example: on Linux, the libvirt bridge interface name is br0
# $bridge_nic = "br0"
# Example: on Linux, the VirtualBox bridge interface name is virbr0
# $bridge_nic = "virbr0"

# =============================================================================
# KUBERNETES CONFIGURATION
# =============================================================================
# Container Network Interface (CNI) plugin
# Options: "calico", "flannel", "weave", "cilium", "kube-ovn", etc.
$network_plugin = "calico"

# Cert-Manager configuration
$cert_manager_enabled = "True"  # Enable cert-manager

# Local Path Provisioner configuration
$local_path_provisioner_enabled = "False"  # Enable the local path provisioner
$local_path_provisioner_claim_root = "/opt/local-path-provisioner/"  # Local path root

# Ansible inventory directory
$inventory = "inventory/sample"

# Shared folders between host and VMs (empty by default)
$shared_folders = {}

# Kubernetes version to install
$kube_version = "1.33.3"
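With config.rb in place, you can sanity-check it before launching anything. This is a minimal sketch assuming a Ruby interpreter is available (macOS ships one) and that the Vagrantfile loads vagrant/config.rb, as Kubespray's Vagrantfile does:
# Syntax-check the configuration file on its own
ruby -c vagrant/config.rb

# Let Vagrant parse the full Vagrantfile plus configuration
vagrant validate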
2.7.1.2 Configure containerd.yml (Optional)
vim inventory/sample/group_vars/all/containerd.yml
---
# Please see roles/container-engine/containerd/defaults/main.yml for more configuration options

# containerd_storage_dir: "/var/lib/containerd"
# containerd_state_dir: "/run/containerd"
# containerd_oom_score: 0

# containerd_default_runtime: "runc"
# containerd_snapshotter: "native"

# containerd_runc_runtime:
#   name: runc
#   type: "io.containerd.runc.v2"
#   engine: ""
#   root: ""

# containerd_additional_runtimes:
# Example for Kata Containers as additional runtime:
#   - name: kata
#     type: "io.containerd.kata.v2"
#     engine: ""
#     root: ""

# containerd_grpc_max_recv_message_size: 16777216
# containerd_grpc_max_send_message_size: 16777216

# Containerd debug socket location: unix or tcp format
# containerd_debug_address: ""

# Containerd log level
# containerd_debug_level: "info"

# Containerd logs format, supported values: text, json
# containerd_debug_format: ""

# Containerd debug socket UID
# containerd_debug_uid: 0

# Containerd debug socket GID
# containerd_debug_gid: 0

# containerd_metrics_address: ""

# containerd_metrics_grpc_histogram: false

# Registries defined within containerd.
containerd_registries_mirrors:
  - prefix: quay.io
    mirrors:
      - host: https://quay.nju.edu.cn
        capabilities: ["pull", "resolve"]
        skip_verify: false
      - host: http://harbor.demo.com
        capabilities: ["pull", "resolve"]
        skip_verify: true
  - prefix: docker.io
    mirrors:
      - host: http://harbor.demo.com
        capabilities: ["pull", "resolve"]
        skip_verify: true
      - host: https://dockerproxy.com
        capabilities: ["pull", "resolve"]
        skip_verify: false
  - prefix: ghcr.io
    mirrors:
      - host: https://ghcr.nju.edu.cn
        capabilities: ["pull", "resolve"]
        skip_verify: false
      - host: https://ghcr.dockerproxy.com
        capabilities: ["pull", "resolve"]
        skip_verify: false
  - prefix: registry.k8s.io
    mirrors:
      - host: https://k8s.mirror.nju.edu.cn
        capabilities: ["pull", "resolve"]
        skip_verify: false
      - host: https://k8s.dockerproxy.com
        capabilities: ["pull", "resolve"]
        skip_verify: false

# containerd_max_container_log_line_size: -1
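Once the cluster is up (section 3), you can spot-check that the mirror configuration reached a node. This is a hypothetical check: the node name k8s-1 is an example derived from the $instance_name_prefix set earlier, and recent Kubespray versions render these mirrors as hosts.toml files under /etc/containerd/certs.d/:
# List the registries that have mirror configuration (node name is an example)
vagrant ssh k8s-1 -c "sudo ls /etc/containerd/certs.d/"

# Inspect the docker.io mirror configuration
vagrant ssh k8s-1 -c "sudo cat /etc/containerd/certs.d/docker.io/hosts.toml"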
🔧 3. Deploy the Cluster
3.1 Start the VMs and Deploy Kubernetes
⚠️ Note: This step takes roughly 10-15 minutes, depending on network conditions and hardware performance.
vagrant up --no-parallel
3.2 Retry if the Deployment Fails
vagrant provision --provision-with ansible
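If only some machines failed, you can re-run provisioning for a single node rather than the whole cluster; the node name below is an example based on the k8s instance prefix configured earlier:
# Check which VMs are unhealthy first
vagrant status

# Re-provision a single node
vagrant provision k8s-1 --provision-with ansible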
🎯 4. Configure kubectl Access
4.1 Install the kubectl Client
# Download kubectl (Apple Silicon build; on Intel Macs, replace arm64 with amd64)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl"

# Make it executable and move it onto your PATH
chmod +x kubectl && mv kubectl /usr/local/bin/kubectl

# Verify the installation
kubectl version --client
4.2 Configure Cluster Access
# Copy the kubeconfig file
mkdir -p ~/.kube
cp inventory/sample/artifacts/admin.conf ~/.kube/config

# Verify the cluster connection
kubectl get nodes
kubectl get pods --all-namespaces
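If you already have a ~/.kube/config you don't want to overwrite, pointing the standard KUBECONFIG environment variable at the generated file is a non-destructive alternative:
export KUBECONFIG="$PWD/inventory/sample/artifacts/admin.conf"
kubectl get nodes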
📦 5. Install Helm (Optional)
# Download the installer script
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3

# Run the installer
chmod 700 get_helm.sh
./get_helm.sh

# Verify the installation
helm version
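As a quick smoke test, add a public chart repository and search it; bitnami is just a commonly used example:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm search repo bitnami | head -n 5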
🧹 6. Clean Up the Environment
6.1 Destroy the VMs
vagrant destroy -f
6.2 Exit the Python Virtual Environment
pyenv deactivate
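If you enabled the additional data disks in config.rb, the disk files live on the host and may not be removed by vagrant destroy; with the configuration above they are created under ~/kubespray_vm_disk, so remove them manually once you no longer need them:
# Remove leftover disk images created for the worker nodes
rm -rf ~/kubespray_vm_disk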
🛠️ 7. Troubleshooting
7.1 Common Issues
1. Vagrant fails to start
- Check that Parallels Desktop is running properly
- Confirm the system has sufficient resources (memory, disk space)
- Check the network connection
2. Python dependency installation fails
- Confirm the correct virtual environment is activated
- Try upgrading pip:
pip install --upgrade pip
- Check your network proxy settings
3. kubectl cannot connect to the cluster
- Confirm the kubeconfig file path is correct
- Check the VM network status:
vagrant status
- Verify SSH connectivity:
vagrant ssh
4. Network issues
- In mainland China, configuring a proxy is recommended
- You can also try the domestic mirror sources (see the reachability check below)
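A simple way to test whether a registry mirror is reachable is to query its /v2/ endpoint, which every OCI-compliant registry exposes; quay.nju.edu.cn is one of the mirrors configured in containerd.yml above:
# An HTTP 200 or 401 status means the mirror is reachable
curl -sI https://quay.nju.edu.cn/v2/ | head -n 1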
7.2 Useful Debugging Commands
# Check Vagrant status
vagrant status

# View kubelet logs inside a VM
vagrant ssh -c "sudo journalctl -u kubelet"

# Reload the Vagrant configuration
vagrant reload

# Check cluster status
kubectl cluster-info
kubectl get componentstatuses  # deprecated in recent Kubernetes releases
📝 8. Summary
With the steps above, you should now have a working Kubespray-based Kubernetes test cluster. This environment is well suited for:
- Learning core Kubernetes concepts
- Testing application deployments
- Validating cluster configurations
- Developing cloud-native applications
💡 Tip: Back up important configuration files and project code regularly so a mistaken operation doesn't cost you data.
Related resources:
- Kubespray official documentation
- Vagrant official documentation
- Kubernetes official documentation
- Helm official documentation