Docker Operations and Maintenance

  • I. Swarm Cluster Management
    • 1.1 Core Concepts of Swarm
      • 1.1.1 Cluster
      • 1.1.2 Nodes
      • 1.1.3 Services and Tasks
      • 1.1.4 Load Balancing
    • 1.2 Installing Swarm
      • Preparation
      • Creating the cluster
      • Adding worker nodes to the cluster
      • Publishing a service to the cluster
      • Scaling one or more services
      • Removing a service from the cluster
      • Passwordless SSH login
  • II. Docker Compose
    • Using Compose with Swarm
  • III. Configuring a Private Registry (Harbor)
    • 3.1 Environment preparation
    • 3.2 Installing Docker
    • 3.3 Installing docker-compose
    • 3.4 Preparing Harbor
    • 3.5 Configuring certificates
    • 3.6 Deploying and configuring Harbor
    • 3.7 Configuring and starting the service
    • 3.8 Customizing the local registry
    • 3.9 Testing the local registry

I. Swarm Cluster Management

Docker Swarm is Docker's official container orchestration system and the company's official container cluster platform, implemented in Go.

The architecture is as follows:
(architecture diagram omitted)

1.1 Core Concepts of Swarm

1.1.1 Cluster

A cluster consists of multiple Docker hosts running in swarm mode, which act as managers and workers.

1.1.2 Nodes

A swarm is a collection of nodes; a node can be a bare-metal machine or a virtual machine. A node can play one or both of two roles: Manager or Worker.

  • Manager nodes
    A Docker Swarm cluster needs at least one manager node; managers coordinate with each other using the Raft consensus protocol.
    Usually, the first node to enable swarm mode becomes the leader, and nodes that join later are followers. If the current leader goes down, the remaining managers elect a new leader. Every manager holds a complete copy of the current cluster state, which keeps the manager tier highly available.

  • Worker nodes
    Worker nodes are where the containers running the actual application services live. In theory a manager node can also act as a worker, but this is not recommended in production. Worker nodes communicate with each other over the control plane using the gossip protocol, asynchronously.
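The high-availability claim can be made concrete: under Raft, a swarm of N managers stays functional only while a majority, floor(N/2)+1, of them is reachable. A small shell sketch (not from the original text) of the arithmetic:

```shell
# Raft quorum math: a swarm of N managers keeps working only while
# a majority (floor(N/2)+1) of them is reachable.
for n in 1 3 5 7; do
  quorum=$(( n / 2 + 1 ))
  echo "managers=$n quorum=$quorum tolerated_failures=$(( n - quorum ))"
done
```

This is why 3 or 5 managers are the common recommendation: an even count adds no extra fault tolerance over the next-lower odd count.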

1.1.3 Services and Tasks

  • services
    A swarm service is an abstraction: it is simply a description of the desired state of an application service running on the swarm cluster. It is like a checklist describing:
    • the service name
    • which image to use to create containers
    • how many replicas to run
    • which network the service's containers connect to
    • which ports should be mapped
  • task
    In Docker Swarm, a task is the smallest unit of deployment; tasks and containers have a one-to-one relationship.
  • stack
    A stack describes a collection of related services, defined in a single YAML file.
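The checklist above maps onto a stack file almost one-to-one. A minimal sketch (the file name web-stack.yml and all values are illustrative, not from the original setup); it could be deployed with `docker stack deploy -c web-stack.yml web`:

```yaml
# web-stack.yml — hypothetical stack file for illustration
services:
  web:
    image: nginx:1.27.4     # which image to use to create containers
    deploy:
      replicas: 3           # how many replicas to run
    ports:
      - "8080:80"           # which ports to publish
```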

1.1.4 Load Balancing

The swarm manager can automatically assign a published port to a service, or you can configure a published port for it yourself.
Any unused port can be specified. If no port is specified, the swarm manager assigns the service a port in the range 30000-32767.
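The assignment rule can be sketched in shell: ports you publish yourself can be anything unused, while auto-assigned ones land in 30000-32767. The helper below is illustrative, not part of Docker:

```shell
# Illustrative helper: is a given published port inside swarm's
# dynamic assignment range (30000-32767)?
in_dynamic_range() {
  [ "$1" -ge 30000 ] && [ "$1" -le 32767 ]
}

for port in 8080 30500; do
  if in_dynamic_range "$port"; then
    echo "$port: inside the dynamic range"
  else
    echo "$port: outside the dynamic range"
  fi
done
# prints: 8080: outside the dynamic range
#         30500: inside the dynamic range
```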

1.2 Installing Swarm

Preparation

[root@docker ~]# hostnamectl hostname manager
[root@docker ~]# exit
[root@manager ~]# nmcli c modify ens160 ipv4.addresses <IP-address>/24
[root@manager ~]# init 6
[root@manager ~]# cat >> /etc/hosts <<EOF
> 192.168.98.47  manager1
> 192.168.98.48  worker1
> 192.168.98.49  worker2
> EOF
[root@manager ~]# ping -c 3 worker1
PING worker1 (192.168.98.48) 56(84) bytes of data.
64 bytes from worker1 (192.168.98.48): icmp_seq=1 ttl=64 time=0.678 ms
64 bytes from worker1 (192.168.98.48): icmp_seq=2 ttl=64 time=0.461 ms
64 bytes from worker1 (192.168.98.48): icmp_seq=3 ttl=64 time=0.353 ms

--- worker1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2035ms
rtt min/avg/max/mdev = 0.353/0.497/0.678/0.135 ms
[root@manager ~]# ping -c 3 worker2
PING worker2 (192.168.98.49) 56(84) bytes of data.
64 bytes from worker2 (192.168.98.49): icmp_seq=1 ttl=64 time=0.719 ms
64 bytes from worker2 (192.168.98.49): icmp_seq=2 ttl=64 time=0.300 ms
64 bytes from worker2 (192.168.98.49): icmp_seq=3 ttl=64 time=0.417 ms

--- worker2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2089ms
rtt min/avg/max/mdev = 0.300/0.478/0.719/0.176 ms
[root@manager ~]# ping -c 3 manager1
PING manager1 (192.168.98.47) 56(84) bytes of data.
64 bytes from manager1 (192.168.98.47): icmp_seq=1 ttl=64 time=0.034 ms
64 bytes from manager1 (192.168.98.47): icmp_seq=2 ttl=64 time=0.038 ms
64 bytes from manager1 (192.168.98.47): icmp_seq=3 ttl=64 time=0.035 ms

--- manager1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2042ms
rtt min/avg/max/mdev = 0.034/0.035/0.038/0.001 ms
[root@worker1 ~]# ping -c 3 worker1
PING worker1 (192.168.98.48) 56(84) bytes of data.
64 bytes from worker1 (192.168.98.48): icmp_seq=1 ttl=64 time=0.023 ms
64 bytes from worker1 (192.168.98.48): icmp_seq=2 ttl=64 time=0.024 ms
64 bytes from worker1 (192.168.98.48): icmp_seq=3 ttl=64 time=0.035 ms

--- worker1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2056ms
rtt min/avg/max/mdev = 0.023/0.027/0.035/0.005 ms
[root@worker1 ~]# ping -c 3 worker2
PING worker2 (192.168.98.49) 56(84) bytes of data.
64 bytes from worker2 (192.168.98.49): icmp_seq=1 ttl=64 time=0.405 ms
64 bytes from worker2 (192.168.98.49): icmp_seq=2 ttl=64 time=0.509 ms
64 bytes from worker2 (192.168.98.49): icmp_seq=3 ttl=64 time=0.381 ms

--- worker2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2065ms
rtt min/avg/max/mdev = 0.381/0.431/0.509/0.055 ms
[root@worker2 ~]# ping -c 3 worker1
PING worker1 (192.168.98.48) 56(84) bytes of data.
64 bytes from worker1 (192.168.98.48): icmp_seq=1 ttl=64 time=0.304 ms
64 bytes from worker1 (192.168.98.48): icmp_seq=2 ttl=64 time=0.346 ms
64 bytes from worker1 (192.168.98.48): icmp_seq=3 ttl=64 time=0.460 ms

--- worker1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2030ms
rtt min/avg/max/mdev = 0.304/0.370/0.460/0.065 ms
[root@worker2 ~]# ping -c 3 worker2
PING worker2 (192.168.98.49) 56(84) bytes of data.
64 bytes from worker2 (192.168.98.49): icmp_seq=1 ttl=64 time=0.189 ms
64 bytes from worker2 (192.168.98.49): icmp_seq=2 ttl=64 time=0.079 ms
64 bytes from worker2 (192.168.98.49): icmp_seq=3 ttl=64 time=0.055 ms

--- worker2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2038ms
rtt min/avg/max/mdev = 0.055/0.107/0.189/0.058 ms

# Check the version
docker run --rm swarm -v
swarm version 1.2.9 (527a849)
  • Image:
[root@manager ~]# ls
anaconda-ks.cfg
[root@manager ~]# ls
anaconda-ks.cfg  swarm_1.2.9.tar.gz
[root@manager ~]# docker load -i swarm_1.2.9.tar.gz
6104cec23b11: Loading layer  12.44MB/12.44MB
9c4e304108a9: Loading layer  281.1kB/281.1kB
a8731583ab53: Loading layer  2.048kB/2.048kB
Loaded image: swarm:1.2.9
[root@manager ~]# docker images
REPOSITORY   TAG       IMAGE ID       CREATED       SIZE
swarm        1.2.9     1a5eb59a410f   4 years ago   12.7MB
[root@manager ~]# ls
anaconda-ks.cfg  swarm_1.2.9.tar.gz
# Transfer the file to worker1 and worker2
scp swarm_1.2.9.tar.gz root@192.168.98.48:~
[root@worker1 ~]# ls
anaconda-ks.cfg  swarm_1.2.9.tar.gz

Creating the cluster

Syntax:

 docker swarm init --advertise-addr <MANAGER-IP>	# IP of the manager host
  • manager
[root@manager ~]# docker swarm init --advertise-addr 192.168.98.47
# --advertise-addr is the IP address that other swarm worker nodes use to contact the manager
Swarm initialized: current node (bsicfg4mvo18a0tv0z17crtnh) is now a manager.

To add a worker to this swarm, run the following command:

# token: join token (tokens are not valid forever; they expire)
docker swarm join --token SWMTKN-1-2xidivy00h9ts8veuk602dqwwwuhvoerzgklk3tvwcin92wnm9-67wyr4mfs1vwsncmnsco6fmqb 192.168.98.47:2377
# Run this command on the worker hosts
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

# Check node status (all Ready)
[root@manager ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
bsicfg4mvo18a0tv0z17crtnh *   manager    Ready     Active         Leader           28.0.4
xtc6pax5faoobzqo341vf72rv     worker1    Ready     Active                          28.0.4
ik0tqz8axejwu82mmukwjo47m     worker2    Ready     Active                          28.0.4
[root@manager ~]# docker service ls
ID        NAME      MODE      REPLICAS   IMAGE     PORTS
[root@manager ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

# View node information
docker node ls
# View cluster status
docker info
# Force a node out of the cluster
docker swarm leave --force

Adding worker nodes to the cluster

Once you have created a cluster with a manager node, you can add worker nodes.

  • Add worker node worker1
[root@worker1 ~]# docker swarm join --token SWMTKN-1-2xidivy00h9ts8veuk602dqwwwuhvoerzgklk3tvwcin92wnm9-67wyr4mfs1vwsncmnsco6fmqb 192.168.98.47:2377
This node joined a swarm as a worker.
  • Add worker node worker2
[root@worker2 ~]# docker swarm join --token SWMTKN-1-2xidivy00h9ts8veuk602dqwwwuhvoerzgklk3tvwcin92wnm9-67wyr4mfs1vwsncmnsco6fmqb 192.168.98.47:2377
This node joined a swarm as a worker.
  • If you forget the token value, run this on the management node 192.168.98.47 (manager):
    (tokens are not valid forever; they expire)
# Run on the manager node
[root@manager ~]# docker swarm join-token worker
To add a worker to this swarm, run the following command:

docker swarm join --token SWMTKN-1-2xidivy00h9ts8veuk602dqwwwuhvoerzgklk3tvwcin92wnm9-67wyr4mfs1vwsncmnsco6fmqb 192.168.98.47:2377

Publishing a service to the cluster

# Check node status (all Ready)
[root@manager ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
bsicfg4mvo18a0tv0z17crtnh *   manager    Ready     Active         Leader           28.0.4
xtc6pax5faoobzqo341vf72rv     worker1    Ready     Active                          28.0.4
ik0tqz8axejwu82mmukwjo47m     worker2    Ready     Active                          28.0.4
  • manager
# Create a docker service; --replicas 1: number of replicas
# Run on the management node 192.168.98.47 (manager):
#                                                            service name     image name
[root@manager ~]# docker service create --replicas 1 --name nginx2 -p 80:80 nginx:1.27.4
# Creates a container as a service named nginx2 (similar to docker run); if the nginx:1.27.4 image is not present it is pulled automatically

If it is not pulled automatically, run the following (all three hosts need nginx_1.27.4):

[root@manager ~]# docker load -i nginx_1.27.4.tar.gz 
7914c8f600f5: Loading layer  77.83MB/77.83MB
9574fd0ae014: Loading layer  118.3MB/118.3MB
17129ef2de1a: Loading layer  3.584kB/3.584kB
320c70dd6b6b: Loading layer  4.608kB/4.608kB
2ef6413cdcb5: Loading layer   2.56kB/2.56kB
d6266720b0a6: Loading layer   5.12kB/5.12kB
1fb7f1e96249: Loading layer  7.168kB/7.168kB
Loaded image: nginx:1.27.4
[root@manager ~]# docker images
REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
nginx        1.27.4    97662d24417b   2 months ago   192MB
swarm        1.2.9     1a5eb59a410f   4 years ago    12.7MB
[root@manager ~]# docker service ls
ID             NAME      MODE         REPLICAS   IMAGE          PORTS
o9atmzaj9on8   nginx2    replicated   1/1        nginx:1.27.4   *:80->80/tcp
[root@manager ~]# docker service create --replicas 1 --name nginx2 -p 80:80 nginx:1.27.4
z0j2olxqwrlpm84uho1w1m20p
overall progress: 1 out of 1 tasks 
1/1: running   
verify: Service z0j2olxqwrlpm84uho1w1m20p converged 
# On the workers, fetch the image from the manager (either form works)
scp root@manager:~/nginx_1.27.4.tar.gz ~
scp root@192.168.98.47:~/nginx_1.27.4.tar.gz ~
# --pretty: print in a human-readable format
[root@manager ~]# docker service inspect --pretty nginx2
ID:		o9atmzaj9on8tf9ffmp8k7bun	# service ID
Name:		nginx2		# service name
Service Mode:	Replicated		# mode: Replicated
 Replicas:	1		# number of replicas
Placement:
UpdateConfig:
 Parallelism:	1	# update at most one task at a time
 On failure:	pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Update order:      stop-first
RollbackConfig:
 Parallelism:	1
 On failure:	pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Rollback order:    stop-first
ContainerSpec:		# image name: nginx:1.27.4
 Image:		nginx:1.27.4@sha256:09369da6b10306312cd908661320086bf87fbae1b6b0c49a1f50ba531fef2eab
 Init:		false
Resources:
Endpoint Mode:	vip
Ports:
 PublishedPort = 80
  Protocol = tcp
  TargetPort = 80
  PublishMode = ingress
[root@manager ~]# docker service ps nginx2 
ID             NAME       IMAGE          NODE      DESIRED STATE   CURRENT STATE           ERROR     PORTS
id3xjfro1bdl   nginx2.1   nginx:1.27.4   manager   Running         Running 5 minutes ago
#  ID           name       image          node      desired state   running time

Scaling one or more services

# scale: stretch or shrink the replica count
[root@manager ~]# docker service scale nginx2=5
nginx2 scaled to 5
overall progress: 5 out of 5 tasks 
1/5: running   
2/5: running   
3/5: running   
4/5: running   
5/5: running   
verify: Service nginx2 converged 
[root@manager ~]# docker service ps nginx2
ID             NAME       IMAGE          NODE      DESIRED STATE   CURRENT STATE           ERROR     PORTS
rbmlfw6oiyot   nginx2.1   nginx:1.27.4   worker1   Running         Running 9 minutes ago             
uvko24qqf1ps   nginx2.2   nginx:1.27.4   manager   Running         Running 2 minutes ago             
kwjys4eldv2d   nginx2.3   nginx:1.27.4   manager   Running         Running 2 minutes ago             
onnmzewg4ou4   nginx2.4   nginx:1.27.4   worker2   Running         Running 2 minutes ago             
v0we91sbj7p5   nginx2.5   nginx:1.27.4   worker1   Running         Running 2 minutes ago             
[root@manager ~]# docker service scale nginx2=1
nginx2 scaled to 1
overall progress: 1 out of 1 tasks 
1/1: running   
verify: Service nginx2 converged 
[root@manager ~]# docker service ls
ID             NAME      MODE         REPLICAS   IMAGE          PORTS  
z0j2olxqwrlp   nginx2    replicated   1/1        nginx:1.27.4   *:80->80/tcp
[root@manager ~]# docker service scale nginx2=3
nginx2 scaled to 3
overall progress: 3 out of 3 tasks 
1/3: running   [==================================================>] 
2/3: running   [==================================================>] 
3/3: running   [==================================================>] 
verify: Service nginx2 converged 
[root@manager ~]# docker service ls
ID             NAME      MODE         REPLICAS   IMAGE          PORTS
o9atmzaj9on8   nginx2    replicated   3/3        nginx:1.27.4   *:80->80/tcp
[root@manager ~]# docker service ps nginx2 
ID             NAME       IMAGE          NODE      DESIRED STATE   CURRENT STATE            ERROR     PORTS
id3xjfro1bdl   nginx2.1   nginx:1.27.4   manager   Running         Running 15 minutes ago             
hx7sqjr88ac8   nginx2.2   nginx:1.27.4   worker2   Running         Running 2 minutes ago              
p7x2lyo6vdk8   nginx2.5   nginx:1.27.4   worker1   Running         Running 2 minutes ago  
  • Update the service:
[root@manager ~]# docker service update --publish-rm 80:80 --publish-add 88:80 nginx2
nginx2
overall progress: 3 out of 3 tasks 
1/3: running   [==================================================>] 
2/3: running   [==================================================>] 
3/3: running   [==================================================>] 
verify: Service nginx2 converged 
[root@manager ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS     NAMES
78adf323559b   nginx:1.27.4   "/docker-entrypoint.…"   20 seconds ago   Up 17 seconds   80/tcp    nginx2.1.ds6dy33b6m62kzajs9frvdw4g
[root@manager ~]# docker service ps nginx2 
ID             NAME           IMAGE          NODE      DESIRED STATE   CURRENT STATE             ERROR     PORTS
ds6dy33b6m62   nginx2.1       nginx:1.27.4   manager   Running         Running 19 seconds ago              
id3xjfro1bdl    \_ nginx2.1   nginx:1.27.4   manager   Shutdown        Shutdown 20 seconds ago             
rpnodskwk4f7   nginx2.2       nginx:1.27.4   worker2   Running         Running 23 seconds ago              
hx7sqjr88ac8    \_ nginx2.2   nginx:1.27.4   worker2   Shutdown        Shutdown 24 seconds ago             
shdko7kq7vvw   nginx2.5       nginx:1.27.4   worker1   Running         Running 16 seconds ago              
p7x2lyo6vdk8    \_ nginx2.5   nginx:1.27.4   worker1   Shutdown        Shutdown 16 seconds ago             
[root@manager ~]# docker service ls
ID             NAME      MODE         REPLICAS   IMAGE          PORTS
o9atmzaj9on8   nginx2    replicated   3/3        nginx:1.27.4   *:88->80/tcp
  • worker1
[root@worker1 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED              STATUS              PORTS     NAMES
21c117b44558   nginx:1.27.4   "/docker-entrypoint.…"   About a minute ago   Up About a minute   80/tcp    nginx2.5.shdko7kq7vvwqsmzgtm5790s2
  • worker2
[root@worker2 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED              STATUS              PORTS     NAMES
a761908c5f2d   nginx:1.27.4   "/docker-entrypoint.…"   About a minute ago   Up About a minute   80/tcp    nginx2.2.rpnodskwk4f79yxpf0tctb5n8

Removing a service from the cluster

[root@manager ~]# docker service rm nginx2
nginx2
[root@manager ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@manager ~]# docker service ls
ID        NAME      MODE      REPLICAS   IMAGE     PORTS
[root@worker1 ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@worker2 ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

Passwordless SSH login

  • ssh-keygen
  • ssh-copy-id root@worker1
  • ssh-copy-id root@worker2
  • ssh worker1
  • exit
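The steps above can be sketched as a single loop. To stay side-effect free, this version only prints the ssh-copy-id commands it would run; drop the echo to execute them for real (worker1 and worker2 are the /etc/hosts names configured earlier):

```shell
# Print the key-distribution command for every worker node.
# Remove the leading "echo" to actually push the key.
for host in worker1 worker2; do
  echo "ssh-copy-id root@$host"
done
# prints: ssh-copy-id root@worker1
#         ssh-copy-id root@worker2
```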
[root@manager ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 	# press Enter
Enter passphrase (empty for no passphrase):  	# press Enter
Enter same passphrase again:  	# press Enter
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:6/aBJGgBhXrJ5l541rWVPguq1ao5QLDqe6LfztIxBAw root@manager
The key's randomart image is:
+---[RSA 3072]----+
|Eo.o.            |
|. +.             |
| = o.      .     |
|o * .o  . o      |
|.= oo...S+       |
|. +.* .+ooo      |
|.. * o..+..o     |
| oo+oo.o. ..     |
|oo=oB+.....      |
+----[SHA256]-----+
[root@manager ~]# ssh-copy-id root@worker1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@worker1's password: 
Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@worker1'"
and check to make sure that only the key(s) you wanted were added.

[root@manager ~]# ssh-copy-id root@worker2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'worker2 (192.168.98.49)' can't be established.
ED25519 key fingerprint is SHA256:I3/lsrnTEnXOE3LFvTLRUXAJ+AhSVrIEWtqTnleRz9w.
This host key is known by the following other names/addresses:
    ~/.ssh/known_hosts:1: manager
    ~/.ssh/known_hosts:4: worker1
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@worker2's password: 
Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@worker2'"
and check to make sure that only the key(s) you wanted were added.

[root@manager ~]# ssh worker1
Register this system with Red Hat Insights: rhc connect

Example:
# rhc connect --activation-key <key> --organization <org>

The rhc client and Red Hat Insights will enable analytics and additional
management capabilities on your system.
View your connected systems at https://console.redhat.com/insights

You can learn more about how to register your system 
using rhc at https://red.ht/registration
Last login: Thu May  8 16:36:42 2025 from 192.168.98.1
[root@worker1 ~]# exit
logout
Connection to worker1 closed.

II. Docker Compose

Download the docker-compose-linux-x86_64 binary.

  • Create the docker-compose.yaml file
[root@docker ~]# mv docker-compose-linux-x86_64 /usr/bin/docker-compose
[root@docker ~]# ll
total 4
-rw-------. 1 root root 989 Feb 27 16:19 anaconda-ks.cfg
[root@docker ~]# chmod +x /usr/bin/docker-compose 
[root@docker ~]# ll /usr/bin/docker-compose 
-rwxr-xr-x. 1 root root 73699264 May 10 09:55 /usr/bin/docker-compose
[root@docker ~]# docker-compose --version
Docker Compose version v2.35.1
[root@docker ~]# vim docker-compose.yaml
[root@docker ~]# ls
anaconda-ks.cfg  docker-compose.yaml
  • In a second session, create the required host directories
[root@docker ~]# mkdir -p /opt/{nginx,mysql,redis}
[root@docker ~]# mkdir /opt/nginx/{conf,html}
[root@docker ~]# mkdir /opt/mysql/data
[root@docker ~]# mkdir /opt/redis/data
[root@docker ~]# tree /opt
  • Run it
[root@docker ~]# docker-compose up -d
WARN[0000] /root/docker-compose.yaml: the attribute `version` is obsolete, it will be ignored, please remove it to avoid potential confusion 
[+] Running 26/26
 ✔ mysql Pulled                          91.7s
   ✔ c2eb5d06bfea Pull complete          16.4s
   ✔ ba361f0ba5e7 Pull complete          16.4s
   ✔ 0e83af98b000 Pull complete          16.4s
   ✔ 770e931107be Pull complete          16.6s
   ✔ a2be1b721112 Pull complete          16.6s
   ✔ 68c594672ed3 Pull complete          16.6s
   ✔ cfd201189145 Pull complete          55.2s
   ✔ e9f009c5b388 Pull complete          55.3s
   ✔ 61a291920391 Pull complete          87.0s
   ✔ c8604ede059a Pull complete          87.0s
 ✔ redis Pulled                          71.0s
   ✔ cd07ede39ddc Pull complete          40.5s
   ✔ 63df650ee4e0 Pull complete          43.1s
   ✔ c175c1c9487d Pull complete          55.0s
   ✔ 91cf9601b872 Pull complete          55.0s
   ✔ 4f4fb700ef54 Pull complete          55.0s
   ✔ c70d7dc4bd70 Pull complete          55.0s
 ✔ nginx Pulled                          53.6s
   ✔ 254e724d7786 Pull complete          20.5s
   ✔ 913115292750 Pull complete          31.3s
   ✔ 3e544d53ce49 Pull complete          32.7s
   ✔ 4f21ed9ac0c0 Pull complete          36.7s
   ✔ d38f2ef2d6f2 Pull complete          39.7s
   ✔ 40a6e9f4e456 Pull complete          42.7s
   ✔ d3dc5ec71e9d Pull complete          45.3s
[+] Running 3/4
 ✔ Network root_default  Created         0.0s
 ⠿ Container mysql       Starting        0.3s
 ✔ Container redis       Started         0.3s
 ✔ Container nginx       Started         0.3s
Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/opt/mysql/my.cnf" to rootfs at "/etc/my.cnf": create mountpoint for /etc/my.cnf mount: cannot create subdirectories in "/var/lib/docker/overlay2/1a5867fcf9c1f650da4bc51387cbd7621f1464eb2a1ce8d90f90f43733b34602/merged/etc/my.cnf": not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
[root@docker ~]# vim docker-compose.yaml
[root@docker ~]# cat docker-compose.yaml 
version: "3.9"
services:
  nginx:
    image: nginx:1.27.5
    container_name: nginx
    ports:
      - 80:80
    volumes:
      - /opt/nginx/conf:/etc/nginx/conf.d
      - /opt/nginx/html:/usr/share/nginx/html
  mysql:
    image: mysql:9.3.0
    container_name: mysql
    restart: always
    ports:
      - 3306:3306
    volumes:
      - /opt/mysql/data:/var/lib/mysql
      #- /opt/mysql/my.cnf:/etc/my.cnf
    command:
      # note: mysqld expects --character-set-server=...; without the leading
      # "--" the entrypoint exits with code 127 and the container restart-loops
      - character-set-server=utf8mb4
      - collation-server=utf8mb4_general_ci
  redis:
    image: redis:8.0.0
    container_name: redis
    ports:
      - 6379:6379
    volumes:
      - /opt/redis/data:/data
[root@docker ~]# docker-compose up -d
WARN[0000] /root/docker-compose.yaml: the attribute `version` is obsolete, it will be ignored, please remove it to avoid potential confusion 
[+] Running 3/3? Container redis  Running                                                                                           0.0s ? Container mysql  Started                                                                                           0.2s ? Container nginx  Running 
[root@docker ~]# docker images
REPOSITORY   TAG       IMAGE ID       CREATED       SIZE
redis        8.0.0     d62dbaef1b81   5 days ago    128MB
nginx        1.27.5    a830707172e8   3 weeks ago   192MB
mysql        9.3.0     2c849dee4ca9   3 weeks ago   859MB
[root@docker ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED              STATUS                           PORTS                                         NAMES
4390422bf4b1   mysql:9.3.0    "docker-entrypoint.s…"   About a minute ago   Restarting (127) 7 seconds ago                                                 mysql
18713204332c   nginx:1.27.5   "/docker-entrypoint.…"   2 minutes ago        Up 2 minutes                     0.0.0.0:80->80/tcp, [::]:80->80/tcp           nginx
be2489a46e42   redis:8.0.0    "docker-entrypoint.s…"   2 minutes ago        Up 2 minutes                     0.0.0.0:6379->6379/tcp, [::]:6379->6379/tcp   redis
  • Access it
[root@docker ~]# curl localhost
curl: (56) Recv failure: Connection reset by peer
[root@docker ~]# echo "index.html" > /opt/nginx/html/index.html
[root@docker ~]# curl http://192.168.98.149
curl: (7) Failed to connect to 192.168.98.149 port 80: Connection refused
[root@docker ~]# cd /opt/nginx/conf/
[root@docker conf]# ls
[root@docker conf]# vim web.conf
[root@docker conf]# cat web.conf 
server {
	listen	80;
	server_name	192.168.98.149;
	root	/opt/nginx/html;
}
[root@docker conf]# curl http://192.168.98.149
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.27.5</center>
</body>
</html>
[root@docker conf]# vim web.conf
[root@docker conf]# cat web.conf 
server {
	listen	80;
	server_name	192.168.98.149;
	root	/usr/share/nginx/html;
}
[root@docker conf]# docker restart nginx
nginx
[root@docker conf]# curl http://192.168.98.149
index.html
  • Must be run from a directory that contains the docker-compose.yaml file
[root@docker ~]# docker-compose ps
WARN[0000] /root/docker-compose.yaml: the attribute `version` is obsolete, it will be ignored, please remove it to avoid potential confusion 
NAME      IMAGE          COMMAND                  SERVICE   CREATED          STATUS                            PORTS
mysql     mysql:9.3.0    "docker-entrypoint.s…"   mysql     21 minutes ago   Restarting (127) 52 seconds ago   
nginx     nginx:1.27.5   "/docker-entrypoint.…"   nginx     23 minutes ago   Up 11 minutes                     0.0.0.0:80->80/tcp, [::]:80->80/tcp
redis     redis:8.0.0    "docker-entrypoint.s…"   redis     23 minutes ago   Up 23 minutes                     0.0.0.0:6379->6379/tcp, [::]:6379->6379/tcp
  • Fix the warning: delete the first line of docker-compose.yaml, version: "3.9"
[root@docker ~]# vim docker-compose.yaml 
[root@docker ~]# cat docker-compose.yaml 
services:
  nginx:
    image: nginx:1.27.5
    container_name: nginx
    ports:
      - 80:80
    volumes:
      - /opt/nginx/conf:/etc/nginx/conf.d
      - /opt/nginx/html:/usr/share/nginx/html
  mysql:
    image: mysql:9.3.0
    container_name: mysql
    restart: always
    ports:
      - 3306:3306
    volumes:
      - /opt/mysql/data:/var/lib/mysql
      #- /opt/mysql/my.cnf:/etc/my.cnf
    command:
      - character-set-server=utf8mb4
      - collation-server=utf8mb4_general_ci
  redis:
    image: redis:8.0.0
    container_name: redis
    ports:
      - 6379:6379
    volumes:
      - /opt/redis/data:/data
[root@docker ~]# docker-compose ps
NAME      IMAGE          COMMAND                  SERVICE   CREATED          STATUS                           PORTS
mysql     mysql:9.3.0    "docker-entrypoint.s…"   mysql     24 minutes ago   Restarting (127) 3 seconds ago   
nginx     nginx:1.27.5   "/docker-entrypoint.…"   nginx     26 minutes ago   Up 41 seconds                    0.0.0.0:80->80/tcp, [::]:80->80/tcp
redis     redis:8.0.0    "docker-entrypoint.s…"   redis     26 minutes ago   Up 26 minutes                    0.0.0.0:6379->6379/tcp, [::]:6379->6379/tcp
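Deleting the version: line can also be scripted instead of edited in vim. A sketch against a throwaway copy (the /tmp path and its contents are illustrative):

```shell
# Recreate a file with the warning-triggering first line in a scratch
# location, then delete any top-level "version:" key in place with sed.
cat > /tmp/demo-compose.yaml <<'EOF'
version: "3.9"
services:
  nginx:
    image: nginx:1.27.5
EOF
sed -i '/^version:/d' /tmp/demo-compose.yaml
head -n 1 /tmp/demo-compose.yaml    # now starts with "services:"
```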
[root@docker ~]# mkdir composetest
[root@docker ~]# cd composetest/
[root@docker composetest]# ls
[root@docker composetest]# vim app.py
[root@docker composetest]# cat app.py 
import time

import redis
from flask import Flask

app = Flask(__name__)
cache = redis.Redis(host='redis', port=6379)


def get_hit_count():
    retries = 5
    while True:
        try:
            return cache.incr('hits')
        except redis.exceptions.ConnectionError as exc:
            if retries == 0:
                raise exc
            retries -= 1
            time.sleep(0.5)


@app.route('/')
def hello():
    count = get_hit_count()
    return 'Hello World! I have been seen {} times.\n'.format(count)


if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True)
[root@docker composetest]# pip freeze > requirements.txt
[root@docker composetest]# cat requirements.txt 
flask
redis
[root@docker yum.repos.d]# mount /dev/sr0 /mnt
mount: /mnt: WARNING: source write-protected, mounted read-only.
[root@docker yum.repos.d]# dnf install pip -y
Updating Subscription Management repositories.
Unable to read consumer identity

This system is not registered with an entitlement server. You can use "rhc" or "subscription-manager" to register.

BaseOS                                                                                     2.7 MB/s | 2.7 kB     00:00    
AppStream                                                                                  510 kB/s | 3.2 kB     00:00    
Docker CE Stable - x86_64                                                                  4.2 kB/s | 3.5 kB     00:00    
Docker CE Stable - x86_64                                                                   31  B/s |  55  B     00:01    
Errors during downloading metadata for repository 'docker-ce-stable':
  - Curl error (35): SSL connect error for https://download.docker.com/linux/rhel/9/x86_64/stable/repodata/e0bf6fe4688b6f32c32f16998b1da1027a1ebfcec723b7c6c09e032effdb248a-primary.xml.gz [OpenSSL SSL_connect: Connection reset by peer in connection to download.docker.com:443 ]
  - Curl error (35): SSL connect error for https://download.docker.com/linux/rhel/9/x86_64/stable/repodata/65c4f66e2808d328890505c3c2f13bb35a96f457d1c21a6346191c4dc07e6080-updateinfo.xml.gz [OpenSSL SSL_connect: Connection reset by peer in connection to download.docker.com:443 ]
  - Curl error (35): SSL connect error for https://download.docker.com/linux/rhel/9/x86_64/stable/repodata/2fb592ad5a8fa5136a1d5992ce0a70f344c84f1601f0792d9573f8c5de73dffa-filelists.xml.gz [OpenSSL SSL_connect: Connection reset by peer in connection to download.docker.com:443 ]
Error: Failed to download metadata for repo 'docker-ce-stable': Yum repo downloading error: Downloading error(s): repodata/e0bf6fe4688b6f32c32f16998b1da1027a1ebfcec723b7c6c09e032effdb248a-primary.xml.gz - Cannot download, all mirrors were already tried without success; repodata/2fb592ad5a8fa5136a1d5992ce0a70f344c84f1601f0792d9573f8c5de73dffa-filelists.xml.gz - Cannot download, all mirrors were already tried without success
[root@docker yum.repos.d]# dnf install pip -y
Updating Subscription Management repositories.
Unable to read consumer identity

This system is not registered with an entitlement server. You can use "rhc" or "subscription-manager" to register.

Docker CE Stable - x86_64                                                                  4.5 kB/s | 3.5 kB     00:00    
Docker CE Stable - x86_64                                                                   
......
Complete!
[root@docker ~]# ls
anaconda-ks.cfg  composetest  docker-compose.yaml
[root@docker ~]# docker-compose  down
[+] Running 4/4
 ✔ Container redis       Removed      0.1s
 ✔ Container nginx       Removed      0.1s
 ✔ Container mysql       Removed      0.0s
 ✔ Network root_default  Removed      0.1s
[root@docker ~]# docker-compose ps
NAME      IMAGE     COMMAND   SERVICE   CREATED   STATUS    PORTS
[root@docker ~]# cd composetest/
[root@docker composetest]# ls
[root@docker composetest]# vim Dockerfile
[root@docker composetest]# vim docker-compose.yaml
[root@docker composetest]# cat Dockerfile
FROM python:3.12-alpine
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
[root@docker composetest]# cat docker-compose.yaml
services:
  web:
    build: .
    ports:
      - 5000:5000
  redis:
    image: redis:alpine
# In a second terminal session, access the service:
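The Dockerfile above installs `requirements.txt` and runs `app.py`, neither of which is shown; this is the standard Compose quickstart counter app. A minimal sketch (assuming `requirements.txt` lists `flask` and `redis`; the import guard is only so the file can be read outside the container):

```python
# app.py -- minimal sketch of the Compose quickstart counter app
import time

def message(count):
    # The response body returned for each visit
    return f"Hello World! I have been seen {count} times.\n"

try:
    import redis
    from flask import Flask

    app = Flask(__name__)
    # "redis" is the Compose service name, resolved on the compose network
    cache = redis.Redis(host="redis", port=6379)

    def get_hit_count():
        # Retry a few times in case the web container starts before redis
        retries = 5
        while True:
            try:
                return cache.incr("hits")
            except redis.exceptions.ConnectionError as exc:
                if retries == 0:
                    raise exc
                retries -= 1
                time.sleep(0.5)

    @app.route("/")
    def hello():
        return message(get_hit_count())
except ImportError:
    pass  # flask/redis are only installed inside the container image
```

Each request increments the `hits` key in Redis, which is why the count in the curl output below grows across visits.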
[root@docker composetest]# curl http://192.168.98.149:5000
Hello World! I have been seen 1 times.
  • Run the command below to start the stack; once access from the second session succeeds, exit.
[root@docker composetest]# docker-compose up
Compose can now delegate builds to bake for better performance.
To do so, set COMPOSE_BAKE=true.
[+] Building 50.1s (10/10) FINISHED                                                        docker:default
 => [web internal] load build definition from Dockerfile                                             0.0s
 => => transferring dockerfile: 208B                                                                 0.0s
 => [web internal] load metadata for docker.io/library/python:3.12-alpine                            3.6s
 => [web internal] load .dockerignore                                                                0.0s
 => => transferring context: 2B                                                                      0.0s
 => [web internal] load build context                                                                0.0s
 => => transferring context: 392B                                                                    0.0s
 => [web 1/4] FROM docker.io/library/python:3.12-alpine@sha256:a664b68d141849d28fd0992b9aa88b4eab7a21d258e2e7ddb9d1b  6.9s
 => => ......                                                  (layer download/extraction lines elided)
 => [web 2/4] ADD . /code                                                                            0.1s
 => [web 3/4] WORKDIR /code                                                                          0.0s
 => [web 4/4] RUN pip install -r requirements.txt                                                   39.3s
 => [web] exporting to image                                                                         0.1s
 => => exporting layers                                                                              0.1s
 => => writing image sha256:32303d47b57e0c674340d90a76bb82776024626e64d95e04e0ff5d7e893caa80         0.0s
 => => naming to docker.io/library/composetest-web                                                   0.0s
 => [web] resolving provenance for metadata file                                                     0.0s
[+] Running 4/4
 ✔ web                            Built                                                              0.0s
 ✔ Network composetest_default    Created                                                            0.0s
 ✔ Container composetest-redis-1  Created                                                            0.0s
 ✔ Container composetest-web-1    Created                                                            0.0s
Attaching to redis-1, web-1
redis-1  | Starting Redis Server
redis-1  | 1:C 10 May 2025 08:28:20.677 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis-1  | 1:C 10 May 2025 08:28:20.680 * oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis-1  | 1:C 10 May 2025 08:28:20.680 * Redis version=8.0.0, bits=64, commit=00000000, modified=1, pid=1, just started
redis-1  | 1:C 10 May 2025 08:28:20.680 * Configuration loaded
redis-1  | 1:M 10 May 2025 08:28:20.683 * monotonic clock: POSIX clock_gettime
redis-1  | 1:M 10 May 2025 08:28:20.693 * Running mode=standalone, port=6379.
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> RedisBloom version 7.99.90 (Git=unknown)
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> Registering configuration options: [
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> 	{ bf-error-rate       :      0.01 }
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> 	{ bf-initial-size     :       100 }
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> 	{ bf-expansion-factor :         2 }
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> 	{ cf-bucket-size      :         2 }
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> 	{ cf-initial-size     :      1024 }
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> 	{ cf-max-iterations   :        20 }
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> 	{ cf-expansion-factor :         1 }
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> 	{ cf-max-expansions   :        32 }
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> ]
redis-1  | 1:M 10 May 2025 08:28:20.709 * Module 'bf' loaded from /usr/local/lib/redis/modules//redisbloom.so
redis-1  | 1:M 10 May 2025 08:28:20.799 * <search> Redis version found by RedisSearch : 8.0.0 - oss
redis-1  | 1:M 10 May 2025 08:28:20.800 * <search> RediSearch version 8.0.0 (Git=HEAD-61787b7)
redis-1  | 1:M 10 May 2025 08:28:20.803 * <search> Low level api version 1 initialized successfully
redis-1  | 1:M 10 May 2025 08:28:20.808 * <search> gc: ON, prefix min length: 2, min word length to stem: 4, prefix max expansions: 200, query timeout (ms): 500, timeout policy: fail, cursor read size: 1000, cursor max idle (ms): 300000, max doctable size: 1000000, max number of search results:  1000000, 
redis-1  | 1:M 10 May 2025 08:28:20.810 * <search> Initialized thread pools!
redis-1  | 1:M 10 May 2025 08:28:20.810 * <search> Disabled workers threadpool of size 0
redis-1  | 1:M 10 May 2025 08:28:20.815 * <search> Subscribe to config changes
redis-1  | 1:M 10 May 2025 08:28:20.815 * <search> Enabled role change notification
redis-1  | 1:M 10 May 2025 08:28:20.815 * <search> Cluster configuration: AUTO partitions, type: 0, coordinator timeout: 0ms
redis-1  | 1:M 10 May 2025 08:28:20.816 * <search> Register write commands
redis-1  | 1:M 10 May 2025 08:28:20.816 * Module 'search' loaded from /usr/local/lib/redis/modules//redisearch.so
redis-1  | 1:M 10 May 2025 08:28:20.828 * <timeseries> RedisTimeSeries version 79991, git_sha=de1ad5089c15c42355806bbf51a0d0cf36f223f6
redis-1  | 1:M 10 May 2025 08:28:20.828 * <timeseries> Redis version found by RedisTimeSeries : 8.0.0 - oss
redis-1  | 1:M 10 May 2025 08:28:20.828 * <timeseries> Registering configuration options: [
redis-1  | 1:M 10 May 2025 08:28:20.828 * <timeseries> 	{ ts-compaction-policy   :              }
redis-1  | 1:M 10 May 2025 08:28:20.828 * <timeseries> 	{ ts-num-threads         :            3 }
redis-1  | 1:M 10 May 2025 08:28:20.828 * <timeseries> 	{ ts-retention-policy    :            0 }
redis-1  | 1:M 10 May 2025 08:28:20.828 * <timeseries> 	{ ts-duplicate-policy    :        block }
redis-1  | 1:M 10 May 2025 08:28:20.828 * <timeseries> 	{ ts-chunk-size-bytes    :         4096 }
redis-1  | 1:M 10 May 2025 08:28:20.828 * <timeseries> 	{ ts-encoding            :   compressed }
redis-1  | 1:M 10 May 2025 08:28:20.828 * <timeseries> 	{ ts-ignore-max-time-diff:            0 }
redis-1  | 1:M 10 May 2025 08:28:20.829 * <timeseries> 	{ ts-ignore-max-val-diff :     0.000000 }
redis-1  | 1:M 10 May 2025 08:28:20.829 * <timeseries> ]
redis-1  | 1:M 10 May 2025 08:28:20.833 * <timeseries> Detected redis oss
redis-1  | 1:M 10 May 2025 08:28:20.837 * Module 'timeseries' loaded from /usr/local/lib/redis/modules//redistimeseries.so
redis-1  | 1:M 10 May 2025 08:28:20.886 * <ReJSON> Created new data type 'ReJSON-RL'
redis-1  | 1:M 10 May 2025 08:28:20.900 * <ReJSON> version: 79990 git sha: unknown branch: unknown
redis-1  | 1:M 10 May 2025 08:28:20.900 * <ReJSON> Exported RedisJSON_V1 API
redis-1  | 1:M 10 May 2025 08:28:20.900 * <ReJSON> Exported RedisJSON_V2 API
redis-1  | 1:M 10 May 2025 08:28:20.900 * <ReJSON> Exported RedisJSON_V3 API
redis-1  | 1:M 10 May 2025 08:28:20.900 * <ReJSON> Exported RedisJSON_V4 API
redis-1  | 1:M 10 May 2025 08:28:20.900 * <ReJSON> Exported RedisJSON_V5 API
redis-1  | 1:M 10 May 2025 08:28:20.900 * <ReJSON> Enabled diskless replication
redis-1  | 1:M 10 May 2025 08:28:20.904 * <ReJSON> Initialized shared string cache, thread safe: false.
redis-1  | 1:M 10 May 2025 08:28:20.904 * Module 'ReJSON' loaded from /usr/local/lib/redis/modules//rejson.so
redis-1  | 1:M 10 May 2025 08:28:20.904 * <search> Acquired RedisJSON_V5 API
redis-1  | 1:M 10 May 2025 08:28:20.907 * Server initialized
redis-1  | 1:M 10 May 2025 08:28:20.910 * Ready to accept connections tcp
web-1    |  * Serving Flask app 'app'
web-1    |  * Debug mode: on
web-1    | WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
web-1    |  * Running on all addresses (0.0.0.0)
web-1    |  * Running on http://127.0.0.1:5000
web-1    |  * Running on http://172.18.0.3:5000
web-1    | Press CTRL+C to quit
web-1    |  * Restarting with stat
web-1    |  * Debugger is active!
web-1    |  * Debugger PIN: 102-900-685
web-1    | 192.168.98.149 - - [10/May/2025 08:31:06] "GET / HTTP/1.1" 200 -


failed to solve: process "/bin/sh -c pip install -r requirements.txt" did not complete successfully: exit code: 1
[root@docker composetest]# pip install --upgrade pip
Requirement already satisfied: pip in /usr/lib/python3.9/site-packages (21.3.1)

(Note: upgrading pip on the host does not affect the build, which runs inside the container; this failure is usually a network problem reaching PyPI and can be worked around by pointing pip at a mirror inside the Dockerfile.)

Using Compose with Swarm

With docker service create you can only deploy one service at a time; with a docker-compose.yml you can start several related services at once.

[root@docker ~]# ls
anaconda-ks.cfg  composetest  docker-compose.yaml
[root@docker ~]# mkdir dcswarm
[root@docker ~]# ls
anaconda-ks.cfg  composetest  dcswarm  docker-compose.yaml
[root@docker ~]# cd dcswarm/
[root@docker dcswarm]# vim docker-compose.yaml
[root@docker dcswarm]# cat docker-compose.yaml
services:
  nginx:
    image: nginx:1.27.5
    ports:
      - 80:80
      - 443:443
    volumes:
      - /opt/nginx/conf:/etc/nginx/conf.d
      - /opt/nginx/html:/usr/share/nginx/html
    deploy:
      mode: replicated
      replicas: 2
  mysql:
    image: mysql:9.3.0
    ports:
      - 3306:3306
    command:
      - character-set-server=utf8mb4
      - collation-server=utf8mb4_general_ci
    environment:
      MYSQL_ROOT_PASSWORD: "123456"
    volumes:
      - /opt/mysql/data:/var/lib/mysql
    deploy:
      mode: replicated
      replicas: 2
[root@docker dcswarm]# docker stack deploy -c docker-compose.yaml web
docker swarm init
docker swarm join --token 
[root@docker ~]# scp dcswarm/* root@192.168.98.47:~
root@192.168.98.47's password: 
docker-compose.yaml
[root@manager ~]# ls
anaconda-ks.cfg  docker-compose.yaml  nginx_1.27.4.tar.gz  swarm_1.2.9.tar.gz
[root@manager ~]# mkdir dcswarm
[root@manager ~]# mv docker-compose.yaml dcswarm/
[root@manager ~]# cd dcswarm/
[root@manager dcswarm]# ls
docker-compose.yaml
[root@manager dcswarm]# docker stack deploy -c docker-compose.yaml web
Since --detach=false was not specified, tasks will be created in the background.
In a future release, --detach=false will become the default.
Creating network web_default
Creating service web_mysql
Creating service web_nginx
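Once the stack is deployed, its state can be inspected and adjusted with the standard stack subcommands (a sketch; the `web_` service names follow the stack name used above):

```shell
# List the services in the "web" stack and where their replicas are scheduled
docker stack services web
docker stack ps web

# Scale a service up or down without editing the YAML
docker service scale web_nginx=3

# Tear the whole stack down when finished
docker stack rm web
```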

3. Configuring a Private Registry (Harbor)

3.1 Environment Preparation

Harbor ("harbor", as in a port) is an enterprise-class Registry server for storing and distributing Docker images.

  1. Set the hostname and IP address
  2. Enable IP forwarding in /etc/sysctl.conf
    net.ipv4.ip_forward = 1
  3. Configure host mappings in /etc/hosts
# Set the hostname and IP address
[root@docker ~]# hostnamectl hostname harbor
[root@docker ~]# ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:f5:e5:24 brd ff:ff:ff:ff:ff:ff
    altname enp3s0
    inet 192.168.86.132/24 brd 192.168.86.255 scope global dynamic noprefixroute ens160
       valid_lft 1672sec preferred_lft 1672sec
    inet6 fe80::20c:29ff:fef5:e524/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: ens224: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:f5:e5:2e brd ff:ff:ff:ff:ff:ff
    altname enp19s0
    inet 192.168.98.159/24 brd 192.168.98.255 scope global dynamic noprefixroute ens224
       valid_lft 1672sec preferred_lft 1672sec
    inet6 fe80::fd7d:606d:1a1b:d3cc/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether ae:a7:3e:3a:39:bb brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
[root@docker ~]# nmcli c show 
NAME                UUID                                  TYPE      DEVICE  
Wired connection 1  110c742f-bd12-3ba3-b671-1972a75aa2e6  ethernet  ens224  
ens160              d622d6da-1540-371d-8def-acd3db9bd38d  ethernet  ens160  
lo                  d20cef01-6249-4012-908c-f775efe44118  loopback  lo      
docker0             b023990a-e131-4a68-828c-710158f77a50  bridge    docker0 
[root@docker ~]# nmcli c m "Wired connection 1" connection.id ens224
[root@docker ~]# nmcli c show 
NAME     UUID                                  TYPE      DEVICE  
ens224   110c742f-bd12-3ba3-b671-1972a75aa2e6  ethernet  ens224  
ens160   d622d6da-1540-371d-8def-acd3db9bd38d  ethernet  ens160  
lo       d20cef01-6249-4012-908c-f775efe44118  loopback  lo      
docker0  b023990a-e131-4a68-828c-710158f77a50  bridge    docker0 
[root@docker ~]# nmcli c m ens224 ipv4.method manual ipv4.addresses 192.168.98.20/24 ipv4.gateway 192.168.98.2 ipv4.dns 223.5.5.5 connection.autoconnect yes
[root@docker ~]# nmcli c up ens224 
[root@harbor ~]# nmcli c m ens160 ipv4.method manual ipv4.addresses 192.168.86.20/24 ipv4.gateway 192.168.86.200 ipv4.dns "223.5.5.5 8.8.8.8" connection.autoconnect yes
[root@harbor ~]# nmcli c up ens160 
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/6)
[root@harbor ~]# ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:f5:e5:24 brd ff:ff:ff:ff:ff:ff
    altname enp3s0
    inet 192.168.86.20/24 brd 192.168.86.255 scope global noprefixroute ens160
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fef5:e524/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: ens224: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:f5:e5:2e brd ff:ff:ff:ff:ff:ff
    altname enp19s0
    inet 192.168.98.20/24 brd 192.168.98.255 scope global noprefixroute ens224
       valid_lft forever preferred_lft forever
    inet6 fe80::fd7d:606d:1a1b:d3cc/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether ae:a7:3e:3a:39:bb brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
# Enable IP forwarding
[root@harbor ~]# echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf 
[root@harbor ~]# sysctl -p
net.ipv4.ip_forward = 1
# Configure host mappings
[root@harbor ~]# vim /etc/hosts
[root@harbor ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.86.11 k8s-master01 m1
192.168.86.12 k8s-node01 n1
192.168.86.13 k8s-node02 n2
192.168.86.20 harbor.registry.com harbor

3.2 Installing Docker

  1. Add the Docker repository
  2. Install Docker
  3. Configure Docker
  4. Start Docker
  5. Verify Docker
[root@harbor ~]# vim /etc/docker/daemon.json
[root@harbor ~]# cat /etc/docker/daemon.json
{
    "default-ipc-mode": "shareable",                           # shareable IPC mode
    "data-root": "/data/docker",                               # directory where Docker stores its data
    "exec-opts": ["native.cgroupdriver=systemd"],              # use systemd as the cgroup driver
    "log-driver": "json-file",                                 # json log format
    "log-opts": {
        "max-size": "100m",
        "max-file": "50"
    },
    "insecure-registries": ["https://harbor.registry.com"],    # our own (private) registry
    "registry-mirrors": [                                      # mirrors for pulling public images
        "https://docker.m.daocloud.io",
        "https://docker.imgdb.de",
        "https://docker-0.unsee.tech",
        "https://docker.hlmirror.com",
        "https://docker.1ms.run",
        "https://func.ink",
        "https://lispy.org",
        "https://docker.xiaogenban1993.com"
    ]
}
(The # annotations are explanatory only; daemon.json is plain JSON and must not contain comments.)
[root@harbor ~]# mkdir -p /data/docker
[root@harbor ~]# systemctl restart docker
[root@harbor ~]# ls /data/docker/
buildkit  containers  engine-id  image  network  overlay2  plugins  runtimes  swarm  tmp  volumes
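To confirm the daemon actually picked up the new configuration, `docker info` can be checked (a quick sanity check; the exact output layout varies by Docker version):

```shell
# Docker Root Dir should now be /data/docker and the cgroup driver systemd
docker info | grep -E "Docker Root Dir|Cgroup Driver"

# The configured insecure registry and mirrors are listed near the end
docker info | grep -A3 "Insecure Registries"
```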

3.3 Installing docker-compose

  1. Download the binary
  2. Install it
  3. Make it executable
  4. Verify
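The download step itself is not shown in the transcript below; a typical way to fetch the static binary from the GitHub releases page (version pinned to match the v2.35.1 verified below; URL pattern assumed current):

```shell
# Download the static docker-compose binary for x86_64 Linux
curl -LO https://github.com/docker/compose/releases/download/v2.35.1/docker-compose-linux-x86_64

# Install it onto the PATH and make it executable
mv docker-compose-linux-x86_64 /usr/bin/docker-compose
chmod +x /usr/bin/docker-compose
docker-compose --version
```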
[root@harbor ~]# mv docker-compose-linux-x86_64 /usr/bin/docker-compose
[root@harbor ~]# chmod +x /usr/bin/docker-compose 
[root@harbor ~]# docker-compose --version
Docker Compose version v2.35.1
[root@harbor ~]# cd /data/
[root@harbor data]# mv /root/harbor-offline-installer-v2.13.0.tgz .
[root@harbor data]# ls
docker  harbor-offline-installer-v2.13.0.tgz
[root@harbor data]# tar -xzf harbor-offline-installer-v2.13.0.tgz 
[root@harbor data]# ls
docker  harbor  harbor-offline-installer-v2.13.0.tgz
[root@harbor data]# rm -f *.tgz
[root@harbor data]# ls
docker  harbor
[root@harbor data]# cd harbor/
[root@harbor harbor]# ls
common.sh  harbor.v2.13.0.tar.gz  harbor.yml.tmpl  install.sh  LICENSE  prepare

3.4 Preparing Harbor

  1. Download Harbor
  2. Extract the archive

3.5 Configuring Certificates

  1. Generate a CA certificate
  2. Generate a server certificate
  3. Provide the certificates to Harbor and Docker

3.6 Deploying and Configuring Harbor

  1. Configure Harbor
  2. Load the Harbor images
  3. Check the installation environment
  4. Start Harbor
  5. List the running containers

3.7 Configuring a Startup Service

  1. Stop Harbor
  2. Write the service unit file
  3. Start the Harbor service

3.8 Customizing the Local Registry

  1. Configure the host mapping
  2. Configure the registry

3.9 Testing the Local Registry

  1. Pull an image
  2. Tag the image
  3. Log in to the registry
  4. Push the image
  5. Pull the image back
  • Configuring certificates:
    https://goharbor.io/docs/2.13.0/install-config/installation-prereqs/
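The five test steps above map onto the following commands (a sketch; the `library` project and the busybox image are placeholders, so use a project that actually exists in your Harbor instance):

```shell
# 1. Pull a test image from a public mirror
docker pull busybox:latest

# 2. Tag it into the private registry's namespace
docker tag busybox:latest harbor.registry.com/library/busybox:latest

# 3. Log in (the admin password is set by harbor_admin_password in harbor.yml)
docker login harbor.registry.com

# 4. Push the tagged image
docker push harbor.registry.com/library/busybox:latest

# 5. Remove the local copy and pull it back to verify the registry serves it
docker rmi harbor.registry.com/library/busybox:latest
docker pull harbor.registry.com/library/busybox:latest
```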
[root@harbor harbor]# mkdir ssl
[root@harbor harbor]# cd ssl
[root@harbor ssl]# openssl genrsa -out ca.key 4096
[root@harbor ssl]# ls
ca.key
[root@harbor ssl]# openssl req -x509 -new -nodes -sha512 -days 3650 \
  -subj "/C=CN/ST=Chongqing/L=Banan/O=example/OU=Personal/CN=MyPersonal Root CA" \
  -key ca.key \
  -out ca.crt
[root@harbor ssl]# ls
ca.crt  ca.key
[root@harbor ssl]# openssl genrsa -out harbor.registry.com.key 4096
[root@harbor ssl]# ls
ca.crt  ca.key  harbor.registry.com.key
[root@harbor ssl]# openssl req -sha512 -new \
  -subj "/C=CN/ST=Chongqing/L=Banan/O=example/OU=Personal/CN=harbor.registry.com" \
  -key harbor.registry.com.key \
  -out harbor.registry.com.csr
[root@harbor ssl]# ls
ca.crt  ca.key  harbor.registry.com.csr  harbor.registry.com.key
[root@harbor ssl]# cat > v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
DNS.1=harbor.registry.com
DNS.2=harbor.registry
DNS.3=harbor
EOF
[root@harbor ssl]# ls
ca.crt  ca.key  harbor.registry.com.csr  harbor.registry.com.key  v3.ext
[root@harbor ssl]# openssl x509 -req -sha512 -days 3650 \
  -extfile v3.ext \
  -CA ca.crt -CAkey ca.key -CAcreateserial \
  -in harbor.registry.com.csr \
  -out harbor.registry.com.crt
Certificate request self-signature ok
subject=C=CN, ST=Chongqing, L=Banan, O=example, OU=Personal, CN=harbor.registry.com
[root@harbor ssl]# ls
ca.crt  ca.key  ca.srl  harbor.registry.com.crt  harbor.registry.com.csr  harbor.registry.com.key  v3.ext
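Before handing the certificate to Harbor, it is worth confirming that it chains back to the CA and carries the SANs from v3.ext (assumes you are still in the ssl directory):

```shell
# Verify the server certificate against the CA that signed it
openssl verify -CAfile ca.crt harbor.registry.com.crt

# Inspect the Subject Alternative Names baked in via v3.ext
openssl x509 -in harbor.registry.com.crt -noout -text | grep -A1 "Subject Alternative Name"
```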
[root@harbor ssl]# mkdir /data/cert
[root@harbor ssl]# cp harbor.registry.com.crt /data/cert/
[root@harbor ssl]# cp harbor.registry.com.key /data/cert/
[root@harbor ssl]# ls /data/cert/
harbor.registry.com.crt  harbor.registry.com.key
[root@harbor ssl]# openssl x509 -inform PEM -in harbor.registry.com.crt -out harbor.registry.com.cert
[root@harbor ssl]# ls
ca.crt  ca.srl                    harbor.registry.com.crt  harbor.registry.com.key
ca.key  harbor.registry.com.cert  harbor.registry.com.csr  v3.ext
[root@harbor ssl]# mkdir -p /etc/docker/certs.d/harbor.registry.com:443
[root@harbor ssl]# cp harbor.registry.com.cert /etc/docker/certs.d/harbor.registry.com:443/
[root@harbor ssl]# cp harbor.registry.com.key /etc/docker/certs.d/harbor.registry.com:443/
[root@harbor ssl]# cp ca.crt /etc/docker/certs.d/harbor.registry.com:443/
[root@harbor ssl]# systemctl restart docker
[root@harbor ssl]# systemctl status docker
● docker.service - Docker Application Container Engine
     Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; preset: disabled)
     Active: active (running) since Sun 2025-05-11 10:42:37 CST; 1min 25s ago
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
   Main PID: 3322 (dockerd)
      Tasks: 10
     Memory: 29.7M
        CPU: 353ms
     CGroup: /system.slice/docker.service
             └─3322 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

May 11 10:42:36 harbor dockerd[3322]: time="2025-05-11T10:42:36.466441227+08:00" level=info msg="Creating a contai>
[root@harbor ssl]# cd ..
[root@harbor harbor]# ls
common.sh  harbor.v2.13.0.tar.gz  harbor.yml.tmpl  install.sh  LICENSE  prepare  ssl
[root@harbor harbor]# cp harbor.yml.tmpl harbor.yml
[root@harbor harbor]# vim harbor.yml
[root@harbor harbor]# cat harbor.yml
# Configuration file of Harbor

# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: harbor.registry.com        # changed

# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 80

# https related config
https:
  # https port for harbor, default is 443
  port: 443
  # The path of cert and key files for nginx
  certificate: /data/cert/harbor.registry.com.crt    # changed
  private_key: /data/cert/harbor.registry.com.key    # changed
...............
[root@harbor harbor]# docker images
REPOSITORY   TAG       IMAGE ID   CREATED   SIZE
[root@harbor harbor]# docker load -i harbor.v2.13.0.tar.gz 
874b37071853: Loading layer [==================================================>]  
........
832349ff3d50: Loading layer [==================================================>]  38.95MB/38.95MB
Loaded image: goharbor/harbor-exporter:v2.13.0
[root@harbor harbor]# docker images
REPOSITORY                      TAG       IMAGE ID       CREATED       SIZE
goharbor/harbor-exporter        v2.13.0   0be56feff492   4 weeks ago   127MB
goharbor/redis-photon           v2.13.0   7c0d9781ab12   4 weeks ago   166MB
goharbor/trivy-adapter-photon   v2.13.0   f2b4d5497558   4 weeks ago   381MB
goharbor/harbor-registryctl     v2.13.0   bbd957df71d6   4 weeks ago   162MB
goharbor/registry-photon        v2.13.0   fa23989bf194   4 weeks ago   85.9MB
goharbor/nginx-photon           v2.13.0   c922d86a7218   4 weeks ago   151MB
goharbor/harbor-log             v2.13.0   463b8f469e21   4 weeks ago   164MB
goharbor/harbor-jobservice      v2.13.0   112a1616822d   4 weeks ago   174MB
goharbor/harbor-core            v2.13.0   b90fcb27fd54   4 weeks ago   197MB
goharbor/harbor-portal          v2.13.0   858f92a0f5f9   4 weeks ago   159MB
goharbor/harbor-db              v2.13.0   13a2b78e8616   4 weeks ago   273MB
goharbor/prepare                v2.13.0   2380b5a4f127   4 weeks ago   205MB
[root@harbor harbor]# ls
common.sh  harbor.v2.13.0.tar.gz  harbor.yml  harbor.yml.tmpl  install.sh  LICENSE  prepare  ssl
[root@harbor harbor]# ./prepare
prepare base dir is set to /data/harbor
Generated configuration file: /config/portal/nginx.conf
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/log/rsyslog_docker.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/registryctl/config.yml
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
copy /data/secret/tls/harbor_internal_ca.crt to shared trust ca dir as name harbor_internal_ca.crt ...
ca file /hostfs/data/secret/tls/harbor_internal_ca.crt is not exist
copy  to shared trust ca dir as name storage_ca_bundle.crt ...
copy None to shared trust ca dir as name redis_tls_ca.crt ...
Generated and saved secret to file: /data/secret/keys/secretkey
Successfully called func: create_root_cert
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir
# Start Harbor
[root@harbor harbor]# ./install.sh

[Step 0]: checking if docker is installed ...
Note: docker version: 28.0.4

[Step 1]: checking docker-compose is installed ...
Note: Docker Compose version v2.34.0

[Step 2]: loading Harbor images ...
Loaded image: goharbor/harbor-db:v2.13.0
Loaded image: goharbor/harbor-jobservice:v2.13.0
Loaded image: goharbor/harbor-registryctl:v2.13.0
Loaded image: goharbor/redis-photon:v2.13.0
Loaded image: goharbor/trivy-adapter-photon:v2.13.0
Loaded image: goharbor/nginx-photon:v2.13.0
Loaded image: goharbor/registry-photon:v2.13.0
Loaded image: goharbor/prepare:v2.13.0
Loaded image: goharbor/harbor-portal:v2.13.0
Loaded image: goharbor/harbor-core:v2.13.0
Loaded image: goharbor/harbor-log:v2.13.0
Loaded image: goharbor/harbor-exporter:v2.13.0

[Step 3]: preparing environment ...

[Step 4]: preparing harbor configs ...
prepare base dir is set to /data/harbor
Clearing the configuration file: /config/portal/nginx.conf
.......
Generated configuration file: /config/portal/nginx.conf
.......
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
copy /data/secret/tls/harbor_internal_ca.crt to shared trust ca dir as name harbor_internal_ca.crt ...
ca file /hostfs/data/secret/tls/harbor_internal_ca.crt is not exist
copy  to shared trust ca dir as name storage_ca_bundle.crt ...
copy None to shared trust ca dir as name redis_tls_ca.crt ...
loaded secret from file: /data/secret/keys/secretkey
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir
Note: stopping existing Harbor instance ...

[Step 5]: starting Harbor ...
[+] Running 10/10        # all 10 containers are up
 ✔ Network harbor_harbor        Created                                                       0.0s
 .........                                                                                    1.3s
✔ ----Harbor has been installed and started successfully.----
  • Configuring the startup service
# Removes the containers, but not the images
[root@harbor harbor]# docker-compose down
[+] Running 10/10
 ✔ Container harbor-jobservice  Removed                                                       0.1s
 ✔ Container registryctl        Removed                                                       0.1s
 ✔ Container nginx              Removed                                                       0.1s
 ✔ Container harbor-portal      Removed                                                       0.1s
 ✔ Container harbor-core        Removed                                                       0.1s
 ✔ Container registry           Removed                                                       0.1s
 ✔ Container redis              Removed                                                       0.1s
 ✔ Container harbor-db          Removed                                                       0.2s
 ✔ Container harbor-log         Removed                                                      10.1s
 ✔ Network harbor_harbor        Removed                                                       0.1s
[root@harbor harbor]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@harbor harbor]# docker images
REPOSITORY                      TAG       IMAGE ID       CREATED       SIZE
goharbor/harbor-exporter        v2.13.0   0be56feff492   4 weeks ago   127MB
goharbor/redis-photon           v2.13.0   7c0d9781ab12   4 weeks ago   166MB
goharbor/trivy-adapter-photon   v2.13.0   f2b4d5497558   4 weeks ago   381MB
goharbor/harbor-registryctl     v2.13.0   bbd957df71d6   4 weeks ago   162MB
goharbor/registry-photon        v2.13.0   fa23989bf194   4 weeks ago   85.9MB
goharbor/nginx-photon           v2.13.0   c922d86a7218   4 weeks ago   151MB
goharbor/harbor-log             v2.13.0   463b8f469e21   4 weeks ago   164MB
goharbor/harbor-jobservice      v2.13.0   112a1616822d   4 weeks ago   174MB
goharbor/harbor-core            v2.13.0   b90fcb27fd54   4 weeks ago   197MB
goharbor/harbor-portal          v2.13.0   858f92a0f5f9   4 weeks ago   159MB
goharbor/harbor-db              v2.13.0   13a2b78e8616   4 weeks ago   273MB
goharbor/prepare                v2.13.0   2380b5a4f127   4 weeks ago   205MB

# Write the service unit file (all three sections below are required)
[root@harbor harbor]# vim /usr/lib/systemd/system/harbor.service
[root@harbor harbor]# cat /usr/lib/systemd/system/harbor.service
[Unit]    # dependencies, ordering, and a description of the service
Description=Harbor
After=docker.service systemd-networkd.service systemd-resolved.service    # ordering only ("start after"), not a hard dependency
Requires=docker.service    # hard dependency: docker.service must be started first
Documentation=http://github.com/vmware/harbor

[Service]    # how the service is started, stopped, and restarted
Type=simple    # process startup type
Restart=on-failure    # restart if the service fails
RestartSec=5    # delay before restarting
ExecStart=/usr/bin/docker-compose --file /data/harbor/docker-compose.yml up    # command executed on start
ExecStop=/usr/bin/docker-compose --file /data/harbor/docker-compose.yml down

[Install]
WantedBy=multi-user.target
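With the unit file in place, the service is managed like any other systemd unit (sketch):

```shell
# Pick up the new unit file, then enable start-on-boot and start it now
systemctl daemon-reload
systemctl enable --now harbor

# Check that the compose stack came up under systemd
systemctl status harbor
docker ps
```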
  • /usr/bin/docker-compose is the path where docker-compose was installed earlier
[root@harbor ~]# cd /data/harbor/
[root@harbor harbor]# ls
common     docker-compose.yml     harbor.yml       install.sh  prepare
common.sh  harbor.v2.13.0.tar.gz  harbor.yml.tmpl  LICENSE     ssl
[root@harbor harbor]# cat docker-compose.yml 
services:
  log:
    image: goharbor/harbor-log:v2.13.0
    container_name: harbor-log
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - DAC_OVERRIDE
      - SETGID
      - SETUID
    volumes:
      - /var/log/harbor/:/var/log/docker/:z
      - type: bind
        source: ./common/config/log/logrotate.conf
        target: /etc/logrotate.d/logrotate.conf
      - type: bind
        source: ./common/config/log/rsyslog_docker.conf
        target: /etc/rsyslog.d/rsyslog_docker.conf
    ports:
      - 127.0.0.1:1514:10514
    networks:
      - harbor
  registry:
    image: goharbor/registry-photon:v2.13.0
    container_name: registry
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
    volumes:
      - /data/registry:/storage:z
      - ./common/config/registry/:/etc/registry/:z
      - type: bind
        source: /data/secret/registry/root.crt
        target: /etc/registry/root.crt
      - type: bind
        source: ./common/config/shared/trust-certificates
        target: /harbor_cust_cert
    networks:
      - harbor
    depends_on:
      - log
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://localhost:1514"
        tag: "registry"
  registryctl:
    image: goharbor/harbor-registryctl:v2.13.0
    container_name: registryctl
    env_file:
      - ./common/config/registryctl/env
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
    volumes:
      - /data/registry:/storage:z
      - ./common/config/registry/:/etc/registry/:z
      - type: bind
        source: ./common/config/registryctl/config.yml
        target: /etc/registryctl/config.yml
      - type: bind
        source: ./common/config/shared/trust-certificates
        target: /harbor_cust_cert
    networks:
      - harbor
    depends_on:
      - log
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://localhost:1514"
        tag: "registryctl"
  postgresql:
    image: goharbor/harbor-db:v2.13.0
    container_name: harbor-db
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - DAC_OVERRIDE
      - SETGID
      - SETUID
    volumes:
      - /data/database:/var/lib/postgresql/data:z
    networks:
      harbor:
    env_file:
      - ./common/config/db/env
    depends_on:
      - log
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://localhost:1514"
        tag: "postgresql"
    shm_size: '1gb'
  core:
    image: goharbor/harbor-core:v2.13.0
    container_name: harbor-core
    env_file:
      - ./common/config/core/env
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - SETGID
      - SETUID
    volumes:
      - /data/ca_download/:/etc/core/ca/:z
      - /data/:/data/:z
      - ./common/config/core/certificates/:/etc/core/certificates/:z
      - type: bind
        source: ./common/config/core/app.conf
        target: /etc/core/app.conf
      - type: bind
        source: /data/secret/core/private_key.pem
        target: /etc/core/private_key.pem
      - type: bind
        source: /data/secret/keys/secretkey
        target: /etc/core/key
      - type: bind
        source: ./common/config/shared/trust-certificates
        target: /harbor_cust_cert
    networks:
      harbor:
    depends_on:
      - log
      - registry
      - redis
      - postgresql
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://localhost:1514"
        tag: "core"
  portal:
    image: goharbor/harbor-portal:v2.13.0
    container_name: harbor-portal
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
      - NET_BIND_SERVICE
    volumes:
      - type: bind
        source: ./common/config/portal/nginx.conf
        target: /etc/nginx/nginx.conf
    networks:
      - harbor
    depends_on:
      - log
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://localhost:1514"
        tag: "portal"
  jobservice:
    image: goharbor/harbor-jobservice:v2.13.0
    container_name: harbor-jobservice
    env_file:
      - ./common/config/jobservice/env
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
    volumes:
      - /data/job_logs:/var/log/jobs:z
      - type: bind
        source: ./common/config/jobservice/config.yml
        target: /etc/jobservice/config.yml
      - type: bind
        source: ./common/config/shared/trust-certificates
        target: /harbor_cust_cert
    networks:
      - harbor
    depends_on:
      - core
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://localhost:1514"
        tag: "jobservice"
  redis:
    image: goharbor/redis-photon:v2.13.0
    container_name: redis
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
    volumes:
      - /data/redis:/var/lib/redis
    networks:
      harbor:
    depends_on:
      - log
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://localhost:1514"
        tag: "redis"
  proxy:
    image: goharbor/nginx-photon:v2.13.0
    container_name: nginx
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
      - NET_BIND_SERVICE
    volumes:
      - ./common/config/nginx:/etc/nginx:z
      - /data/secret/cert:/etc/cert:z
      - type: bind
        source: ./common/config/shared/trust-certificates
        target: /harbor_cust_cert
    networks:
      - harbor
    ports:
      - 80:8080
      - 443:8443
    depends_on:
      - registry
      - core
      - portal
      - log
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://localhost:1514"
        tag: "proxy"
networks:
  harbor:
    external: false
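As a quick sanity check on a compose file like the one above, the service names and their images can be listed without any extra tooling. The sketch below is a simplified, illustrative parser written for this article (not a full YAML implementation, and `list_services` is a name chosen here, not part of any library); the embedded snippet mirrors only a fragment of Harbor's compose file.

```python
# Simplified sketch: list top-level services and their images from a
# docker-compose.yml-style text, using only the standard library.
# NOTE: this is not a real YAML parser; it assumes the 2-space
# indentation used in the file above.
compose_text = """\
services:
  log:
    image: goharbor/harbor-log:v2.13.0
  registry:
    image: goharbor/registry-photon:v2.13.0
  proxy:
    image: goharbor/nginx-photon:v2.13.0
networks:
  harbor:
    external: false
"""

def list_services(text):
    services = {}
    in_services = False   # are we inside the top-level "services:" block?
    current = None        # name of the service currently being read
    for line in text.splitlines():
        stripped = line.strip()
        if not stripped:
            continue
        indent = len(line) - len(line.lstrip())
        if indent == 0:
            # a new top-level key; only "services:" is of interest
            in_services = stripped == "services:"
            continue
        if not in_services:
            continue
        if indent == 2 and stripped.endswith(":"):
            current = stripped[:-1]          # e.g. "log"
            services[current] = None
        elif current and stripped.startswith("image:"):
            # keep everything after the first colon, e.g. the image ref
            services[current] = stripped.split(":", 1)[1].strip()
    return services

print(list_services(compose_text))
```

In practice the same overview is available from the Compose CLI itself (for example `docker compose config --services` in the directory holding the file); the snippet is only meant to show how the service/image structure of the file fits together.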