Installing a highly available multi-master Kubernetes 1.13.2 cluster with kubeadm

1. Introduction

With the release of Kubernetes v1.13, kubeadm officially reached GA and became suitable for production use; deploying Kubernetes clusters with kubeadm is the way forward. The Kubernetes image repositories now have mirror sites on Alibaba Cloud in China, which makes deploying a cluster with kubeadm much simpler. This article walks through a quick deployment of Kubernetes v1.13.2 using kubeadm.

Note: don't focus only on the deployment itself. If you are new to Kubernetes, it is recommended to first become familiar with deploying from binaries before learning kubeadm. See the other articles on my blog for binary deployment.

2. Architecture

OS version: CentOS 7.6
Kernel: 3.10.0-957.el7.x86_64
Kubernetes: v1.13.2
Docker-ce: 18.06
Recommended hardware: 2 cores, 2 GB RAM
Keepalived provides a highly available IP (VIP) for the apiserver
Haproxy load-balances the apiserver

To reduce the number of servers, haproxy and keepalived are deployed on node-01 and node-02.

Node name          Role     IP             Installed software
load-balancer VIP  VIP      10.31.90.200
node-01            master   10.31.90.201   kubeadm, kubelet, kubectl, docker, haproxy, keepalived
node-02            master   10.31.90.202   kubeadm, kubelet, kubectl, docker, haproxy, keepalived
node-03            master   10.31.90.203   kubeadm, kubelet, kubectl, docker
node-04            node     10.31.90.204   kubeadm, kubelet, kubectl, docker
node-05            node     10.31.90.205   kubeadm, kubelet, kubectl, docker
node-06            node     10.31.90.206   kubeadm, kubelet, kubectl, docker

Service CIDR: 10.245.0.0/16

3. Pre-deployment preparation

1) Disable SELinux and the firewall

sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
setenforce 0
systemctl disable firewalld
systemctl stop firewalld

2) Disable swap

swapoff -a
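swapoff -a only disables swap until the next reboot; to keep it off permanently, also comment out the swap entry in /etc/fstab. A minimal sketch, demonstrated on a scratch copy so it is safe to run anywhere; on the actual nodes, point the sed at /etc/fstab itself:

```shell
# Comment out any uncommented swap entry so swap stays off after a reboot.
# Demonstrated on a temporary file; on a node, run the sed against /etc/fstab.
fstab=$(mktemp)
printf '%s\n' \
  'UUID=1234-abcd /             xfs  defaults 0 0' \
  '/dev/mapper/centos-swap swap swap defaults 0 0' > "$fstab"
sed -ri 's|^([^#].*[[:space:]]swap[[:space:]].*)$|#\1|' "$fstab"
grep swap "$fstab"   # prints: #/dev/mapper/centos-swap swap swap defaults 0 0
```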

3) Add hosts entries on every server

cat >>/etc/hosts<<EOF
10.31.90.201 node-01
10.31.90.202 node-02
10.31.90.203 node-03
10.31.90.204 node-04
10.31.90.205 node-05
10.31.90.206 node-06
EOF

4) Create and distribute SSH keys

Create an SSH key pair on node-01.

[root@node-01 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:26z6DcUarn7wP70dqOZA28td+K/erv7NlaJPLVE1BTA root@node-01
The key's randomart image is:
+---[RSA 2048]----+
|            E..o+|
|             .  o|
|               . |
|         .    .  |
|        S o  .   |
|      .o X   oo .|
|       oB +.o+oo.|
|       .o*o+++o+o|
|     .++o+Bo+=B*B|
+----[SHA256]-----+

Distribute node-01's public key for passwordless login to the other servers.

for n in `seq -w 01 06`;do ssh-copy-id node-$n;done
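The -w flag makes seq zero-pad the counter to equal width, so $n expands to 01 through 06 and node-$n matches the host names added to /etc/hosts above:

```shell
# seq -w zero-pads, so the loop visits node-01 ... node-06, one per line
for n in $(seq -w 01 06); do echo "node-$n"; done
```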

5) Configure kernel parameters

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
sysctl --system

6) Load the ipvs modules

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

7) Add yum repositories

cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
wget http://mirrors.aliyun.com/repo/Centos-7.repo -O /etc/yum.repos.d/CentOS-Base.repo
wget http://mirrors.aliyun.com/repo/epel-7.repo -O /etc/yum.repos.d/epel.repo 
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

4. Deploying keepalived and haproxy

1) Install keepalived and haproxy

Install keepalived and haproxy on node-01 and node-02.

yum install -y keepalived haproxy

2) Edit the configuration

keepalived configuration

node-01 has priority 100 and node-02 has priority 90; the rest of the configuration is identical.

[root@node-01 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
     feng110498@163.com
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_1
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    lvs_sync_daemon_interface eth0
    virtual_router_id 88
    advert_int 1
    priority 100
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.31.90.200/24
    }
}
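For node-02, as noted above, only the priority differs; a fragment of the node-02 vrrp_instance (everything else identical to node-01's file):

```
vrrp_instance VI_1 {
    ...                  # same as node-01
    priority 90          # lower than node-01, so node-01 holds the VIP by default
    ...
}
```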

haproxy configuration

The haproxy configuration is identical on node-01 and node-02. Here we listen on port 8443 of 10.31.90.200, because haproxy runs on the same servers as the k8s apiserver; if both used 6443 they would conflict.

global
    chroot  /var/lib/haproxy
    daemon
    group haproxy
    user haproxy
    log 127.0.0.1:514 local0 warning
    pidfile /var/lib/haproxy.pid
    maxconn 20000
    spread-checks 3
    nbproc 8

defaults
    log     global
    mode    tcp
    retries 3
    option redispatch

listen https-apiserver
    bind 10.31.90.200:8443
    mode tcp
    balance roundrobin
    timeout server 900s
    timeout connect 15s
    server apiserver01 10.31.90.201:6443 check port 6443 inter 5000 fall 5
    server apiserver02 10.31.90.202:6443 check port 6443 inter 5000 fall 5
    server apiserver03 10.31.90.203:6443 check port 6443 inter 5000 fall 5

3) Start the services

systemctl enable keepalived && systemctl start keepalived 
systemctl enable haproxy && systemctl start haproxy 

5. Deploying Kubernetes

1) Install the packages

kubeadm requires a matching Docker version, so install a Docker release that is compatible with it.
Because releases come out frequently, pin explicit version numbers; this article uses 1.13.2 and other versions have not been tested.

yum install -y kubelet-1.13.2 kubeadm-1.13.2 kubectl-1.13.2 ipvsadm ipset docker-ce-18.06.1.ce
# start docker
systemctl enable docker && systemctl start docker
# start kubelet on boot
systemctl enable kubelet

2) Adjust the initial configuration

Use kubeadm config print init-defaults > kubeadm-init.yaml to print the default configuration, then adjust it for your environment.

[root@node-01 ~]# cat kubeadm-init.yaml 
apiVersion: kubeadm.k8s.io/v1beta1
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.31.90.201
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: node-01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  timeoutForControlPlane: 4m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "10.31.90.200:8443"
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kubernetesVersion: v1.13.2
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: "10.245.0.0/16"
scheduler: {}
controllerManager: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"

3) Pre-pull the images

[root@node-01 ~]# kubeadm config images pull --config kubeadm-init.yaml 
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.13.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.13.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.13.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.13.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.6

4) Initialize

[root@node-01 ~]# kubeadm init --config kubeadm-init.yaml    
[init] Using Kubernetes version: v1.13.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node-01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.12.0.1 10.31.90.201 10.31.90.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [node-01 localhost] and IPs [10.31.90.201 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [node-01 localhost] and IPs [10.31.90.201 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.503955 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node-01" as an annotation
[mark-control-plane] Marking the node node-01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node node-01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.31.90.200:8443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:84201a329ec4388263e97303c6e4de50c2de2aa157a3b961cb8a6f325fadedb1

kubeadm init performs the following main steps:

  • [init]: initialize using the specified version

  • [preflight]: run pre-flight checks and pull the required Docker images

  • [kubelet-start]: generate the kubelet configuration file "/var/lib/kubelet/config.yaml"; kubelet cannot start without it, which is why kubelet fails to start before initialization.

  • [certificates]: generate the certificates used by Kubernetes and store them in /etc/kubernetes/pki.

  • [kubeconfig]: generate the kubeconfig files and store them in /etc/kubernetes; the components use them to communicate with each other.

  • [control-plane]: install the master components from the YAML files in the /etc/kubernetes/manifests directory.

  • [etcd]: install the etcd service from /etc/kubernetes/manifests/etcd.yaml.

  • [wait-control-plane]: wait for the master components deployed in the control-plane step to start.

  • [apiclient]: check the health of the master components.

  • [uploadconfig]: upload the configuration used for the cluster.

  • [kubelet]: configure kubelet via a ConfigMap.

  • [patchnode]: record the CRI socket information on the Node object as an annotation.

  • [mark-control-plane]: label the current node with the master role and an unschedulable taint, so that by default ordinary Pods are not scheduled on master nodes.

  • [bootstrap-token]: generate the bootstrap token; note it down, as it is needed later when adding nodes with kubeadm join.

  • [addons]: install the CoreDNS and kube-proxy add-ons.

5) Prepare the kubeconfig file for kubectl

By default, kubectl looks for a config file in the .kube directory under the home directory of the user running it. Here we copy admin.conf, generated in the [kubeconfig] step during initialization, to .kube/config.

[root@node-01 ~]# mkdir -p $HOME/.kube
[root@node-01 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@node-01 ~]# chown $(id -u):$(id -g) $HOME/.kube/config

This file records the API server's address, so subsequent kubectl commands can connect to the API server directly.

6) Check component status

[root@node-01 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   
controller-manager   Healthy   ok                   
etcd-0               Healthy   {"health": "true"}  
[root@node-01 ~]# kubectl get node
NAME      STATUS   ROLES    AGE   VERSION
node-01   NotReady   master   14m   v1.13.2

So far there is only one node; its role is master and its status is NotReady.

7) Deploy the other masters

Copy the certificate files from node-01 to the other master nodes.

USER=root
CONTROL_PLANE_IPS="node-02 node-03"
for host in ${CONTROL_PLANE_IPS}; do
    ssh "${USER}"@$host "mkdir -p /etc/kubernetes/pki/etcd"
    scp /etc/kubernetes/pki/ca.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/ca.* "${USER}"@$host:/etc/kubernetes/pki/etcd/
    scp /etc/kubernetes/admin.conf "${USER}"@$host:/etc/kubernetes/
done

Run the following on the other masters; note the --experimental-control-plane flag.

kubeadm join 10.31.90.200:8443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:84201a329ec4388263e97303c6e4de50c2de2aa157a3b961cb8a6f325fadedb1 --experimental-control-plane

Note: tokens have a limited lifetime. If the old token has expired, create a new one with kubeadm token create --print-join-command.
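The --discovery-token-ca-cert-hash value used in the join commands is the SHA-256 digest of the cluster CA's public key in DER form, and can be recomputed from /etc/kubernetes/pki/ca.crt at any time. A self-contained sketch (assumption: openssl is available; a throwaway certificate stands in for the real CA so the commands can be tried anywhere):

```shell
# On a control-plane node, replace "$tmp/ca.crt" with /etc/kubernetes/pki/ca.crt.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" -days 1 2>/dev/null
# Extract the public key, convert it to DER, and take its SHA-256 digest.
hash=$(openssl x509 -pubkey -in "$tmp/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 | awk '{print $NF}')
echo "sha256:$hash"
```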

8) Deploy the worker nodes

Run the following on node-04, node-05 and node-06; note the absence of the --experimental-control-plane flag.

kubeadm join 10.31.90.200:8443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:84201a329ec4388263e97303c6e4de50c2de2aa157a3b961cb8a6f325fadedb1

9) Deploy the flannel network plugin

The master node is NotReady because no network plugin has been deployed yet, so connectivity between nodes and masters is not fully established. The most popular Kubernetes network plugins are Flannel, Calico, Canal and Weave; flannel is used here.

[root@node-01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

10) Check node status

All nodes are now in the Ready state.

[root@node-01 ~]# kubectl get node
NAME      STATUS   ROLES    AGE   VERSION
node-01   Ready    master   35m   v1.13.2
node-02   Ready    master   36m   v1.13.2
node-03   Ready    master   36m   v1.13.2
node-04   Ready    <none>   40m   v1.13.2
node-05   Ready    <none>   40m   v1.13.2
node-06   Ready    <none>   40m   v1.13.2

Check the pods

[root@node-01 ~]# kubectl get pod -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
coredns-89cc84847-j8mmg                     1/1     Running   0          1d
coredns-89cc84847-rbjxs                     1/1     Running   0          1d
etcd-node-01                                1/1     Running   1          1d
etcd-node-02                                1/1     Running   0          1d
etcd-node-03                                1/1     Running   0          1d
kube-apiserver-node-01                      1/1     Running   0          1d
kube-apiserver-node-02                      1/1     Running   0          1d
kube-apiserver-node-03                      1/1     Running   0          1d
kube-controller-manager-node-01             1/1     Running   2          1d
kube-controller-manager-node-02             1/1     Running   0          1d
kube-controller-manager-node-03             1/1     Running   0          1d
kube-proxy-jfbmv                            1/1     Running   0          1d
kube-proxy-lvkms                            1/1     Running   0          1d
kube-proxy-qx7kh                            1/1     Running   0          1d
kube-proxy-xst5v                            1/1     Running   0          1d
kube-proxy-zfwrk                            1/1     Running   0          1d
kube-proxy-ztg6j                            1/1     Running   0          1d
kube-scheduler-node-01                      1/1     Running   1          1d
kube-scheduler-node-02                      1/1     Running   1          1d
kube-scheduler-node-03                      1/1     Running   1          1d
kube-flannel-ds-amd64-87wzj                 1/1     Running   0          1d
kube-flannel-ds-amd64-lczwm                 1/1     Running   0          1d
kube-flannel-ds-amd64-lwc2j                 1/1     Running   0          1d
kube-flannel-ds-amd64-mwlfq                 1/1     Running   0          1d
kube-flannel-ds-amd64-nj2mk                 1/1     Running   0          1d
kube-flannel-ds-amd64-wx7vd                 1/1     Running   0          1d

Check the ipvs state

[root@node-01 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.245.0.1:443 rr
  -> 10.31.90.201:6443            Masq    1      2          0
  -> 10.31.90.202:6443            Masq    1      0          0
  -> 10.31.90.203:6443            Masq    1      2          0
TCP  10.245.0.10:53 rr
  -> 10.32.0.3:53                 Masq    1      0          0
  -> 10.32.0.4:53                 Masq    1      0          0
TCP  10.245.90.161:80 rr
  -> 10.45.0.1:80                 Masq    1      0          0
TCP  10.245.90.161:443 rr
  -> 10.45.0.1:443                Masq    1      0          0
TCP  10.245.149.227:1 rr
  -> 10.31.90.204:1               Masq    1      0          0
  -> 10.31.90.205:1               Masq    1      0          0
  -> 10.31.90.206:1               Masq    1      0          0
TCP  10.245.181.126:80 rr
  -> 10.34.0.2:80                 Masq    1      0          0
  -> 10.45.0.0:80                 Masq    1      0          0
  -> 10.46.0.0:80                 Masq    1      0          0
UDP  10.245.0.10:53 rr
  -> 10.32.0.3:53                 Masq    1      0          0
  -> 10.32.0.4:53                 Masq    1      0          0

This completes the Kubernetes cluster deployment. If you have any questions, feel free to leave a comment below. Thanks for reading!

Reprinted from: https://blog.51cto.com/billy98/2350660

