Deploying a Consul Cluster with Docker Containers
Introduction to Consul
Consul provides service discovery and configuration for distributed systems. It is implemented in Go and open-sourced on GitHub (consul-git). Consul also includes an implementation of a distributed consensus protocol, health checking, and a management UI. Compared with ZooKeeper, Consul is more lightweight, and its consistency is based on the Raft algorithm, whereas ZooKeeper uses ZAB, a Paxos-family protocol. Consul exposes control through a DNS or HTTP interface out of the box, while ZooKeeper requires you to build your own solution on top of its primitives. Among the widely used service-discovery options there is also etcd. etcd likewise implements Raft, but it does not ship a management UI. Consul and Vagrant are both HashiCorp products, and as part of a complete distributed-system stack Consul pairs well with Docker, which is also implemented in Go. For a brief introduction to Docker, see the article "Docker 介紹" (this post will not re-cover docker commands or container concepts). Using Docker for application containers and Consul for cluster service discovery and health checking makes it straightforward to scale both horizontally and vertically.
Consul Agent, Server, and Client
The consul agent command runs as a long-lived daemon on every node of a Consul cluster, in either server or client mode, and through its HTTP and DNS interfaces it is responsible for running health checks and keeping services in sync. An agent in server mode maintains cluster state, responds to RPC queries, and exchanges WAN gossip with other datacenters. Client nodes are relatively stateless: their only activity is forwarding requests to the server nodes, which keeps them low-latency and cheap in resources.
The figure below, from the official documentation, shows a typical deployment. Consul recommends 3 to 5 server nodes per datacenter, balancing failure tolerance against replication performance; the number of clients can be arbitrary. The two most important concepts in the diagram are the Gossip protocol and the Consensus protocol. Every node in a datacenter participates in gossip: clients reach servers over LAN gossip, and all nodes sit in a shared gossip pool, so failure detection is handled by the messaging layer and clients never need to be configured with server addresses. Server nodes additionally join a WAN gossip pool, which gives datacenters a simple way to discover each other; adding a datacenter is just a matter of having its servers join that pool. A cross-datacenter request goes over WAN gossip to a random server in the remote datacenter, and that server then forwards the request to its local leader. Server leader election is implemented with Consul's Raft algorithm. The leader is responsible for processing all requests, and those requests must also be replicated to every other (non-leader) server; likewise, a non-leader server that receives an RPC request forwards it to the leader.
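The 3-to-5 server recommendation follows directly from Raft's majority requirement. As a rough illustration (my own sketch, not Consul code):

```python
def quorum(servers: int) -> int:
    """Votes needed for a Raft majority (to elect a leader or commit)."""
    return servers // 2 + 1

def failure_tolerance(servers: int) -> int:
    """How many servers can fail while a majority is still reachable."""
    return servers - quorum(servers)

# 3 servers tolerate 1 failure and 5 tolerate 2, while an even count
# (e.g. 6) adds replication cost without adding tolerance -- hence 3-5.
for n in (1, 3, 5, 6):
    print(n, quorum(n), failure_tolerance(n))
```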
Starting a Consul Agent in a Docker Container
- Pull the progrium/consul image (docker pull progrium/consul)
- Start an agent in server mode in a container
docker@boot2docker:~$ docker run -p 8600:53/udp -h node1 progrium/consul -server -bootstrap
==> WARNING: Bootstrap mode enabled! Do not enable unless necessary
==> WARNING: It is highly recommended to set GOMAXPROCS higher than 1
==> Starting raft data migration...
==> Starting Consul agent...
==> Starting Consul agent RPC...
==> Consul agent running!
         Node name: 'node1'
        Datacenter: 'dc1'
            Server: true (bootstrap: true)
       Client Addr: 0.0.0.0 (HTTP: 8500, HTTPS: -1, DNS: 53, RPC: 8400)
      Cluster Addr: 172.17.0.1 (LAN: 8301, WAN: 8302)
    Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
             Atlas: <disabled>
==> Log data will now stream in as it occurs:
2015/09/29 03:13:43 [INFO] serf: EventMemberJoin: node1 172.17.0.1
2015/09/29 03:13:43 [INFO] serf: EventMemberJoin: node1.dc1 172.17.0.1
2015/09/29 03:13:43 [INFO] raft: Node at 172.17.0.1:8300 [Follower] entering Follower state
2015/09/29 03:13:43 [INFO] consul: adding server node1 (Addr: 172.17.0.1:8300) (DC: dc1)
2015/09/29 03:13:43 [INFO] consul: adding server node1.dc1 (Addr: 172.17.0.1:8300) (DC: dc1)
2015/09/29 03:13:43 [ERR] agent: failed to sync remote state: No cluster leader
2015/09/29 03:13:45 [WARN] raft: Heartbeat timeout reached, starting election
2015/09/29 03:13:45 [INFO] raft: Node at 172.17.0.1:8300 [Candidate] entering Candidate state
2015/09/29 03:13:45 [INFO] raft: Election won. Tally: 1
2015/09/29 03:13:45 [INFO] raft: Node at 172.17.0.1:8300 [Leader] entering Leader state
2015/09/29 03:13:45 [INFO] consul: cluster leadership acquired
2015/09/29 03:13:45 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2015/09/29 03:13:45 [INFO] consul: New leader elected: node1
2015/09/29 03:13:45 [INFO] consul: member 'node1' joined, marking health alive
2015/09/29 03:13:45 [INFO] agent: Synced service 'consul'
Here we try the DNS interface, which the run command above mapped to host port 8600. We can then query it interactively with dig, for example: dig @<docker host IP> -p 8600 node1.node.consul
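Consul's DNS endpoint speaks standard DNS, which is exactly why an off-the-shelf tool like dig works against it. As a sketch of what dig sends under the hood (the node name node1 comes from the agent above; the packet layout here is plain RFC 1035 DNS, nothing Consul-specific):

```python
import struct

def build_dns_query(name: str, qid: int = 0x1234) -> bytes:
    """Build a minimal DNS query: 12-byte header plus one question
    asking for an A record in class IN."""
    # Header: ID, flags (RD=1), QDCOUNT=1, AN/NS/AR counts = 0
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

packet = build_dns_query("node1.node.consul")
```

Sent over UDP to the Docker host's port 8600 (e.g. with socket.sendto), this should get back node1's address from the running agent, just as dig does.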
Starting a Consul Cluster with Docker Containers
Start three server nodes
Earlier we started a standalone server node with -bootstrap. To start three nodes we instead use -bootstrap-expect 3, and join the other nodes to the first container's IP — here, server1's.
docker@boot2docker:~$ docker run -d --name server1 -h server1 progrium/consul -server -bootstrap-expect 3
docker@boot2docker:~$ JOIN_IP="$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' server1)"
docker@boot2docker:~$ docker run -d --name server2 -h server2 progrium/consul -server -join $JOIN_IP
docker@boot2docker:~$ docker run -d --name server3 -h server3 progrium/consul -server -join $JOIN_IP
Then list the running containers with the docker command. For Docker-related background, see "Docker 介紹".
docker@boot2docker:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
87bd80f8132d progrium/consul "/bin/start -server -" 3 seconds ago Up 2 seconds 53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8500/tcp, 8301-8302/udp server3
a18d0597bf2d progrium/consul "/bin/start -server -" 18 seconds ago Up 17 seconds 53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8301-8302/udp, 8500/tcp server2
448a550224fb progrium/consul "/bin/start -server -" About a minute ago Up About a minute 53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8500/tcp, 8301-8302/udp server1
Start a client node
docker@boot2docker:~$ docker run -d -p 8400:8400 -p 8500:8500 -p 8600:53/udp -h client1 progrium/consul -join $JOIN_IP
Check the container info:
docker@boot2docker:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0410ad7bb68c progrium/consul "/bin/start -join 172" 4 seconds ago Up 3 seconds 53/tcp, 0.0.0.0:8400->8400/tcp, 8300-8302/tcp, 8301-8302/udp, 0.0.0.0:8500->8500/tcp, 0.0.0.0:8600->53/udp focused_leakey
87bd80f8132d progrium/consul "/bin/start -server -" 3 minutes ago Up 3 minutes 53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8500/tcp, 8301-8302/udp server3
a18d0597bf2d progrium/consul "/bin/start -server -" 3 minutes ago Up 3 minutes 53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8500/tcp, 8301-8302/udp server2
448a550224fb progrium/consul "/bin/start -server -" 4 minutes ago Up 4 minutes 53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8301-8302/udp, 8500/tcp server1
Inspecting from inside the containers
We can attach to the containers to watch how Consul manages the agent nodes and elects a server leader. Now let's shut down one server node — container 448a550224fb, named server1 — and observe what happens.
server1's output after shutdown:
==> Gracefully shutting down agent...
2015/09/29 04:08:17 [INFO] consul: server starting leave
2015/09/29 04:08:17 [INFO] raft: Removed peer 172.17.0.28:8300, stopping replication (Index: 18)
2015/09/29 04:08:17 [INFO] raft: Removed peer 172.17.0.29:8300, stopping replication (Index: 18)
2015/09/29 04:08:17 [INFO] raft: Removed ourself, transitioning to follower
2015/09/29 04:08:17 [INFO] raft: Node at 172.17.0.27:8300 [Follower] entering Follower state
2015/09/29 04:08:17 [INFO] serf: EventMemberLeave: server1.dc1 172.17.0.27
2015/09/29 04:08:17 [INFO] consul: cluster leadership lost
2015/09/29 04:08:17 [INFO] raft: aborting pipeline replication to peer 172.17.0.28:8300
2015/09/29 04:08:17 [INFO] raft: aborting pipeline replication to peer 172.17.0.29:8300
2015/09/29 04:08:17 [INFO] consul: removing server server1.dc1 (Addr: 172.17.0.27:8300) (DC: dc1)
2015/09/29 04:08:18 [INFO] serf: EventMemberLeave: server1 172.17.0.27
2015/09/29 04:08:18 [INFO] consul: removing server server1 (Addr: 172.17.0.27:8300) (DC: dc1)
2015/09/29 04:08:18 [INFO] agent: requesting shutdown
2015/09/29 04:08:18 [INFO] consul: shutting down server
2015/09/29 04:08:18 [INFO] agent: shutdown complete
server2's output:
docker@boot2docker:~$ docker attach server2
2015/09/29 04:08:18 [INFO] serf: EventMemberLeave: server1 172.17.0.27
2015/09/29 04:08:18 [INFO] consul: removing server server1 (Addr: 172.17.0.27:8300) (DC: dc1)
2015/09/29 04:08:20 [WARN] raft: Rejecting vote from 172.17.0.29:8300 since we have a leader: 172.17.0.27:8300
2015/09/29 04:08:20 [WARN] raft: Heartbeat timeout reached, starting election
2015/09/29 04:08:20 [INFO] raft: Node at 172.17.0.28:8300 [Candidate] entering Candidate state
2015/09/29 04:08:21 [INFO] raft: Node at 172.17.0.28:8300 [Follower] entering Follower state
2015/09/29 04:08:21 [INFO] consul: New leader elected: server3
We can see server1 go offline and a new leader, server3, being elected. server3's output:
docker@boot2docker:~$ docker attach server3
2015/09/29 04:08:18 [INFO] serf: EventMemberLeave: server1 172.17.0.27
2015/09/29 04:08:18 [INFO] consul: removing server server1 (Addr: 172.17.0.27:8300) (DC: dc1)
2015/09/29 04:08:20 [WARN] raft: Heartbeat timeout reached, starting election
2015/09/29 04:08:20 [INFO] raft: Node at 172.17.0.29:8300 [Candidate] entering Candidate state
2015/09/29 04:08:20 [INFO] raft: Duplicate RequestVote for same term: 2
2015/09/29 04:08:21 [WARN] raft: Election timeout reached, restarting election
2015/09/29 04:08:21 [INFO] raft: Node at 172.17.0.29:8300 [Candidate] entering Candidate state
2015/09/29 04:08:21 [INFO] raft: Election won. Tally: 2
2015/09/29 04:08:21 [INFO] raft: Node at 172.17.0.29:8300 [Leader] entering Leader state
2015/09/29 04:08:21 [INFO] consul: cluster leadership acquired
2015/09/29 04:08:21 [INFO] consul: New leader elected: server3
2015/09/29 04:08:21 [INFO] raft: pipelining replication to peer 172.17.0.28:8300
2015/09/29 04:08:21 [INFO] consul: member 'server1' left, deregistering
The client node's output:
docker@boot2docker:~$ docker attach focused_leakey
2015/09/29 04:08:18 [INFO] serf: EventMemberLeave: server1 172.17.0.27
2015/09/29 04:08:18 [INFO] consul: removing server server1 (Addr: 172.17.0.27:8300) (DC: dc1)
2015/09/29 04:08:21 [INFO] consul: New leader elected: server3
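The sequence in these logs — Candidate state, a vote tally of 2, then Leader state — is ordinary Raft majority voting. As a toy model (my own sketch, not Consul's actual Raft code): after server1 gracefully leaves, two peers remain, so a candidate needs both votes to win.

```python
def run_election(candidate: str, others: list, quorum: int):
    """Toy Raft-style election: the candidate votes for itself, then
    collects votes from peers that have not yet voted this term."""
    voted = {candidate}      # a candidate always votes for itself
    for peer in others:
        if peer not in voted:
            voted.add(peer)  # this peer grants its vote
    tally = len(voted)
    return tally >= quorum, tally

# After server1 leaves, the peer set is {server2, server3}; a majority
# of 2 nodes is 2. server3 wins with tally 2, matching the log line
# "raft: Election won. Tally: 2".
won, tally = run_election("server3", ["server2"], quorum=2)
print(won, tally)
```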