Table of Contents
1. Environment Preparation
1.1 Host initialization
1.2 Deploying the Docker environment
2. Deploying the Kubernetes Cluster
2.1 Component overview
2.2 Configuring the Alibaba Cloud YUM repository
2.3 Installing kubelet, kubeadm, and kubectl
2.4 Configuring init-config.yaml
2.5 Installing the master node
2.6 Installing the worker nodes
2.7 Installing flannel and the CNI plugins
2.8 Deploying a test application
3. Deploying the Prometheus Monitoring Platform
3.1 Preparing the Prometheus YAML files
3.2 Deploying Prometheus
4. Deploying the Grafana Service
4.1 Deploying the Grafana YAML files
4.2 Configuring the Grafana data source
1. Environment Preparation
OS | IP Address | Hostname | Components |
CentOS7.5 | 192.168.147.137 | k8s-master | kubeadm, kubelet, kubectl, docker-ce |
CentOS7.5 | 192.168.147.139 | k8s-node01 | kubeadm, kubelet, kubectl, docker-ce |
CentOS7.5 | 192.168.147.140 | k8s-node02 | kubeadm, kubelet, kubectl, docker-ce |
Note: every host should have at least 2 CPU cores and 2 GB of memory.
Project topology (diagram)
1.1 Host initialization
Disable the firewall and SELinux on all hosts:
[root@localhost ~]# setenforce 0
[root@localhost ~]# iptables -F
[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
[root@localhost ~]# systemctl stop NetworkManager
[root@localhost ~]# systemctl disable NetworkManager
[root@localhost ~]# sed -i '/^SELINUX=/s/enforcing/disabled/' /etc/selinux/config
Configure the hostname and add hosts entries; each host gets a different hostname:
[root@localhost ~]# hostname k8s-master
[root@localhost ~]# bash
[root@k8s-master ~]# cat << EOF >> /etc/hosts
192.168.147.137 k8s-master
192.168.147.139 k8s-node01
192.168.147.140 k8s-node02
EOF
[root@k8s-master ~]# scp /etc/hosts 192.168.147.139:/etc/
[root@k8s-master ~]# scp /etc/hosts 192.168.147.140:/etc/
[root@localhost ~]# hostname k8s-node01
[root@localhost ~]# bash
[root@k8s-node01 ~]#
[root@localhost ~]# hostname k8s-node02
[root@localhost ~]# bash
[root@k8s-node02 ~]#
Initialize all hosts:
[root@k8s-master ~]# yum -y install vim wget net-tools lrzsz
[root@k8s-master ~]# swapoff -a
[root@k8s-master ~]# sed -i '/swap/s/^/#/' /etc/fstab
[root@k8s-node01 ~]# cat << EOF >> /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@k8s-node01 ~]# modprobe br_netfilter
[root@k8s-node01 ~]# sysctl -p
1.2 Deploying the Docker environment
Deploy the Docker environment on all three hosts: Kubernetes relies on Docker to run the containers it orchestrates.
[root@k8s-master ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@k8s-master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
When installing Docker via YUM, the Alibaba Cloud YUM repository is recommended.
[root@k8s-master ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-master ~]# yum clean all && yum makecache fast
[root@k8s-master ~]# yum -y install docker-ce
[root@k8s-master ~]# systemctl start docker
[root@k8s-master ~]# systemctl enable docker
Configure a registry mirror accelerator (all hosts):
[root@k8s-master ~]# cat << END > /etc/docker/daemon.json
{"registry-mirrors":[ "https://nyakyfun.mirror.aliyuncs.com" ]
}
END
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl restart docker
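A malformed daemon.json prevents the Docker daemon from starting at all, so it is worth syntax-checking the file before the restart. A minimal sketch, shown here against a copy in /tmp (point it at /etc/docker/daemon.json on the real host; python3 is assumed to be available):

```shell
# Write the same daemon.json content to a scratch file and syntax-check it.
cat > /tmp/daemon.json << 'END'
{"registry-mirrors":[ "https://nyakyfun.mirror.aliyuncs.com" ]}
END
# json.tool exits non-zero on invalid JSON, so "OK" only prints on success.
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json OK"
```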
2. Deploying the Kubernetes Cluster
2.1 Component overview
All three nodes need the following three components:
- kubeadm: the cluster bootstrap tool; it sets everything up so that all components run as containers
- kubectl: the command-line client for the Kubernetes API
- kubelet: runs on every node and is responsible for starting containers
2.2 Configuring the Alibaba Cloud YUM repository
When installing Kubernetes via YUM, the Alibaba Cloud YUM repository is recommended.
[root@k8s-master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@k8s-master ~]# ls /etc/yum.repos.d/
backup Centos-7.repo CentOS-Media.repo CentOS-x86_64-kernel.repo docker-ce.repo kubernetes.repo
2.3 Installing kubelet, kubeadm, and kubectl
On all hosts:
[root@k8s-master ~]# yum install -y kubelet kubeadm kubectl
[root@k8s-master ~]# systemctl enable kubelet
Right after installation, kubelet cannot be started via systemctl start kubelet; it only starts successfully once the node has joined the cluster or been initialized as the master.
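The unpinned yum install above pulls whatever the repository currently ships, while the cluster configuration later in this walkthrough targets v1.19.0. To keep the packages in step with the cluster version, the versions can be pinned (a sketch; the exact package names and release suffixes available in the Alibaba Cloud repository may differ):

```shell
# Install matching 1.19.0 versions of the three components on every host.
yum install -y kubelet-1.19.0 kubeadm-1.19.0 kubectl-1.19.0
systemctl enable kubelet
```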
2.4 Configuring init-config.yaml
Kubeadm offers many configuration options. Within a Kubernetes cluster the kubeadm configuration is stored in a ConfigMap, but it can also be written out to a file, which makes complex configurations easier to manage. The kubeadm config command writes this configuration to a file.
Run the following on the master node (192.168.147.137) to create a default init-config.yaml:
[root@k8s-master ~]# kubeadm config print init-defaults > init-config.yaml
init-config.yaml contents:
[root@k8s-master ~]# cat init-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.147.137    # master node IP address
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master                     # if a domain name is used it must resolve; otherwise use the IP address
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd             # host directory mounted into the etcd container
imageRepository: registry.aliyuncs.com/google_containers   # changed to a mirror inside China
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16             # added: the Pod network CIDR
scheduler: {}
2.5 Installing the master node
List the required images:
[root@k8s-master ~]# kubeadm config images list --config init-config.yaml
W0816 18:15:37.343955 20212 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.19.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.19.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.19.0
registry.aliyuncs.com/google_containers/pause:3.2
registry.aliyuncs.com/google_containers/etcd:3.4.9-1
registry.aliyuncs.com/google_containers/coredns:1.7.0
[root@k8s-master ~]# ls | while read line
do
docker load < $line
done
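The `ls | while read` loop above assumes the offline image archives (.tar files) have already been uploaded to the current directory. A glob-based for loop is slightly safer, since it also copes with filenames containing spaces; the sketch below demonstrates the shape with echo and placeholder files (replace echo with `docker load < "$f"` on the real host):

```shell
# Create two placeholder archives and iterate over them with a glob.
mkdir -p /tmp/imgdemo
cd /tmp/imgdemo
touch kube-apiserver.tar kube-proxy.tar
# Prints one "would load:" line per .tar file in the directory.
for f in *.tar; do
  echo "would load: $f"   # real host: docker load < "$f"
done
```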
Install the master node:
[root@k8s-master ~]# kubeadm init --config=init-config.yaml   # initialize the Kubernetes cluster
Follow the instructions kubeadm prints after init.
By default kubectl looks for a config file in the .kube directory under the executing user's home directory. Copy the admin.conf generated during the [kubeconfig] initialization step to .kube/config:
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
A kubeadm-initialized installation does not include a network plugin, so the cluster has no pod networking after init: on the k8s-master node all nodes show "NotReady", the CoreDNS Pods cannot serve requests, and so on.
2.6 Installing the worker nodes
Use the join command printed during the master installation:
[root@k8s-node01 ~]# kubeadm join 192.168.147.137:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:db9458ca9d8eaae330ab33da5e28f61778515af2ec06ff14f79d94285445ece9
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 2m22s v1.19.0
k8s-node01 NotReady <none> 15s v1.19.0
k8s-node02 NotReady <none> 11s v1.19.0
As noted earlier, no network configuration was applied when k8s-master was initialized, so it cannot yet communicate with the worker nodes and every node shows "NotReady". The nodes that joined via kubeadm join are nevertheless already visible on k8s-master.
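The bootstrap token in init-config.yaml is created with ttl: 24h0m0s, so it expires a day after kubeadm init. If a node joins later than that, a fresh join command can be generated on the master (a standard kubeadm subcommand; it must run against the live cluster):

```shell
# Create a new bootstrap token and print the full `kubeadm join` command,
# including the current --discovery-token-ca-cert-hash.
kubeadm token create --print-join-command
```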
2.7 Installing flannel and the CNI plugins
The master node is NotReady because no network plugin has been installed yet, so the connection between the nodes and the master is not fully functional. The most popular Kubernetes network plugins are Flannel, Calico, Canal, and Weave; flannel is used here.
Upload flannel_v0.12.0-amd64.tar and cni-plugins-linux-amd64-v0.8.6.tgz to all hosts:
[root@k8s-master ~]# docker load < flannel_v0.12.0-amd64.tar
[root@k8s-master ~]# tar xf cni-plugins-linux-amd64-v0.8.6.tgz
[root@k8s-master ~]# cp flannel /opt/cni/bin/
Upload kube-flannel.yml to the master.
On the master:
[root@k8s-master ~]# kubectl apply -f kube-flannel.yml
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 7m8s v1.19.0
k8s-node01 Ready <none> 5m1s v1.19.0
k8s-node02 Ready <none> 4m57s v1.19.0
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6d56c8448f-8282g 1/1 Running 0 7m19s
coredns-6d56c8448f-wrdw6 1/1 Running 0 7m19s
etcd-k8s-master 1/1 Running 0 7m30s
kube-apiserver-k8s-master 1/1 Running 0 7m30s
kube-controller-manager-k8s-master 1/1 Running 0 7m30s
kube-flannel-ds-amd64-pvzxl 1/1 Running 0 62s
kube-flannel-ds-amd64-qkjtd 1/1 Running 0 62s
kube-flannel-ds-amd64-szwp4 1/1 Running 0 62s
kube-proxy-9fbkb 1/1 Running 0 7m19s
kube-proxy-p2txx 1/1 Running 0 5m28s
kube-proxy-zpb98 1/1 Running 0 5m32s
kube-scheduler-k8s-master 1/1 Running 0 7m30s
All nodes are now in the Ready state.
2.8 Deploying a test application
Load the test image on all worker hosts:
[root@k8s-node01 ~]# docker load < nginx-1.19.tar
[root@k8s-node01 ~]# docker tag nginx nginx:1.19.6
Create a pod in the Kubernetes cluster and verify that it runs normally.
[root@k8s-master demo]# rz -E
rz waiting to receive.
[root@k8s-master demo]# vim nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.6
        ports:
        - containerPort: 80
After writing the Deployment manifest, apply it with kubectl create; kubectl get pods then shows that the Pod replicas have been created automatically.
[root@k8s-master demo]# kubectl create -f nginx-deployment.yaml
[root@k8s-master demo]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-76ccf9dd9d-29ch8 1/1 Running 0 8s
nginx-deployment-76ccf9dd9d-lm7nl 1/1 Running 0 8s
nginx-deployment-76ccf9dd9d-lx29n 1/1 Running 0 8s
[root@k8s-master demo]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-76ccf9dd9d-29ch8 1/1 Running 0 18s 10.244.2.4 k8s-node02 <none> <none>
nginx-deployment-76ccf9dd9d-lm7nl 1/1 Running 0 18s 10.244.1.3 k8s-node01 <none> <none>
nginx-deployment-76ccf9dd9d-lx29n 1/1 Running 0 18s 10.244.2.3 k8s-node02 <none> <none>
Create the Service manifest.
The nginx-service manifest defines a Service named nginx-service with the label selector app: nginx and type NodePort, which makes the containers reachable from outside the cluster. The ports list defines the exposed ports: the externally exposed port is 80, and the container port is also 80.
[root@k8s-master demo]# vim nginx-service.yaml
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
[root@k8s-master demo]# kubectl create -f nginx-service.yaml
service/nginx-service created
[root@k8s-master demo]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7m46s
nginx-service NodePort 10.106.168.130 <none> 80:31487/TCP 10s
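The Service is now reachable on the allocated NodePort (31487 in the listing above) of any node. A quick check from the master, using the IP and port from this walkthrough (substitute your own):

```shell
# Fetch the default page through the NodePort; the nginx welcome page
# should come back from one of the three replicas.
curl -s http://192.168.147.137:31487 | grep -i "nginx"
```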
3. Deploying the Prometheus Monitoring Platform
3.1 Preparing the Prometheus YAML files
Create a pgmonitor directory under /opt on the master node:
[root@k8s-master ~]# mkdir /opt/pgmonitor
[root@k8s-master ~]# cd /opt/pgmonitor
Upload the downloaded YAML bundle to /opt/pgmonitor and unpack it:
[root@k8s-master pgmonitor]# unzip k8s-prometheus-grafana-master.zip
3.2 Deploying Prometheus
Deploy the node-exporter DaemonSet:
[root@k8s-master pgmonitor]# cd k8s-prometheus-grafana-master/
[root@k8s-master k8s-prometheus-grafana-master]# kubectl create -f node-exporter.yaml
daemonset.apps/node-exporter created
service/node-exporter created
Deploy the remaining YAML files.
Enter the /opt/pgmonitor/k8s-prometheus-grafana-master/prometheus directory:
[root@k8s-master k8s-prometheus-grafana-master]# cd prometheus
Deploy rbac-setup.yaml, configmap.yaml, prometheus.deploy.yml, and prometheus.svc.yml:
[root@k8s-master prometheus]# kubectl create -f rbac-setup.yaml
clusterrole.rbac.authorization.k8s.io/prometheus created
serviceaccount/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
[root@k8s-master prometheus]# kubectl create -f configmap.yaml
configmap/prometheus-config created
[root@k8s-master prometheus]# kubectl create -f prometheus.deploy.yml
deployment.apps/prometheus created
[root@k8s-master prometheus]# kubectl create -f prometheus.svc.yml
service/prometheus created
Check the Prometheus status:
[root@k8s-master prometheus]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6d56c8448f-8zrt7 1/1 Running 0 14m
coredns-6d56c8448f-hzm5v 1/1 Running 0 14m
etcd-k8s-master 1/1 Running 0 15m
kube-apiserver-k8s-master 1/1 Running 0 15m
kube-controller-manager-k8s-master 1/1 Running 0 15m
kube-flannel-ds-amd64-4654f 1/1 Running 0 11m
kube-flannel-ds-amd64-bpx5q 1/1 Running 0 11m
kube-flannel-ds-amd64-nnhlh 1/1 Running 0 11m
kube-proxy-2sps9 1/1 Running 0 13m
kube-proxy-99hn4 1/1 Running 0 13m
kube-proxy-s624n 1/1 Running 0 14m
kube-scheduler-k8s-master 1/1 Running 0 15m
node-exporter-brgw6 1/1 Running 0 3m28s
node-exporter-kvvgp 1/1 Running 0 3m28s
prometheus-68546b8d9-vmjms 1/1 Running 0 87s
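Once the prometheus pod is Running, its web UI is exposed as a NodePort Service (port 30003 in this walkthrough, as the service listing in section 4.2 shows). A quick reachability check from any host that can reach the nodes (substitute your master's IP):

```shell
# The /targets page lists the scrape targets, including the node-exporter pods.
curl -s http://192.168.147.137:30003/targets | head -n 5
```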
4. Deploying the Grafana Service
4.1 Deploying the Grafana YAML files
Enter the /opt/pgmonitor/k8s-prometheus-grafana-master/grafana directory:
[root@k8s-master prometheus]# cd ../grafana/
Deploy grafana-deploy.yaml, grafana-svc.yaml, and grafana-ing.yaml:
[root@k8s-master grafana]# kubectl create -f grafana-deploy.yaml
deployment.apps/grafana-core created
[root@k8s-master grafana]# kubectl create -f grafana-svc.yaml
service/grafana created
[root@k8s-master grafana]# kubectl create -f grafana-ing.yaml
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
ingress.extensions/grafana created
Check the Grafana status:
[root@k8s-master grafana]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6d56c8448f-8zrt7 1/1 Running 0 18m
coredns-6d56c8448f-hzm5v 1/1 Running 0 18m
etcd-k8s-master 1/1 Running 0 18m
grafana-core-6d6fb7566-vphhz 1/1 Running 0 115s
kube-apiserver-k8s-master 1/1 Running 0 18m
kube-controller-manager-k8s-master 1/1 Running 0 18m
kube-flannel-ds-amd64-4654f 1/1 Running 0 14m
kube-flannel-ds-amd64-bpx5q 1/1 Running 0 14m
kube-flannel-ds-amd64-nnhlh 1/1 Running 0 14m
kube-proxy-2sps9 1/1 Running 0 16m
kube-proxy-99hn4 1/1 Running 0 16m
kube-proxy-s624n 1/1 Running 0 18m
kube-scheduler-k8s-master 1/1 Running 0 18m
node-exporter-brgw6 1/1 Running 0 6m55s
node-exporter-kvvgp 1/1 Running 0 6m55s
prometheus-68546b8d9-vmjms 1/1 Running 0 4m54s
4.2 Configuring the Grafana data source
Find the Grafana NodePort:
[root@k8s-master grafana]# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana NodePort 10.105.158.0 <none> 3000:31191/TCP 2m19s
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 19m
node-exporter NodePort 10.111.78.61 <none> 9100:31672/TCP 7m25s
prometheus NodePort 10.98.254.105 <none> 9090:30003/TCP 5m12s
Access Grafana in a browser at http://[masterIP]:[grafana port].
For example: http://192.168.147.137:31191. The default username and password are admin/admin.
Set up the DataSource: the name is arbitrary, and the URL is the Prometheus service's cluster IP (port 9090).
Go to Import, enter dashboard ID 315, and move focus out of the field; after a moment the next page opens.