I. About Ceph
- The operating system must run kernel 3.10 or newer, i.e. CentOS 7 or later (a quick check is shown after this list).
- The ceph-deploy tool simplifies the deployment process; the ceph-deploy version used in this article is 1.5.39.
- Prepare at least six machines: one ceph-admin management node, three mon/mgr/mds nodes, and two osd nodes.
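Before deploying, it is worth confirming the kernel and release on every node. A minimal check (the output shown is only illustrative):

shell> uname -r
3.10.0-957.el7.x86_64
shell> cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)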
II. Installing Ceph
1. Deploying ceph-admin
shell> hostnamectl --static set-hostname shyt-ceph-admin
# Add every cluster node to /etc/hosts so they can be reached by name
shell> cat /etc/hosts
10.52.0.181 shyt-ceph-mon1
10.52.0.182 shyt-ceph-mon2
10.52.0.183 shyt-ceph-mon3
10.52.0.201 shyt-ceph-osd-node1
10.52.0.202 shyt-ceph-osd-node2
shell> ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:TvZDQwvZpIKFAeSyh8Y1QhEOG9EzKaHaNN1rMl8kxfI root@shyt-ceph-admin
The key's randomart image is:
+---[RSA 2048]----+
|=O=o.o... . |
|*+=..+...= |
|+++=o +o= o |
|o*o.. =Eo . |
|+oo o o S + |
|.. = = o . |
| . . o |
| . |
| |
+----[SHA256]-----+
shell> ssh-copy-id shyt-ceph-mon1
shell> ssh-copy-id shyt-ceph-mon2
shell> ssh-copy-id shyt-ceph-mon3
shell> ssh-copy-id shyt-ceph-osd-node1
shell> ssh-copy-id shyt-ceph-osd-node2
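Passwordless login can be verified before moving on, e.g.:

shell> ssh shyt-ceph-mon1 hostname
shyt-ceph-mon1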
# Switch to the Aliyun mirror for the yum repositories
shell> wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
shell> yum clean all
shell> yum makecache
shell> yum -y install https://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch/ceph-deploy-1.5.39-0.noarch.rpm
shell> ceph-deploy --version
1.5.39
# All subsequent ceph-deploy commands are run from this working directory
shell> mkdir deploy_ceph_cluster && cd deploy_ceph_cluster
2. Deploying the mon/mgr/mds nodes
- a) Set the hostname (run on each mon node with its own name)
shell> hostnamectl --static set-hostname shyt-ceph-mon1
- b) Configure the yum repositories (run on each mon node)
shell> wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
shell> wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
shell> yum clean all
shell> yum makecache
- c) Create the Ceph monitor nodes (run on ceph-admin)
# Generates the ceph configuration file, the monitor keyring, and the deployment log.
shell> ceph-deploy new shyt-ceph-mon1 shyt-ceph-mon2 shyt-ceph-mon3
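After this step the working directory should contain the generated files, roughly:

shell> ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring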
- d) Add the following settings to ceph.conf in the working directory (see the attachment for an annotated version)
shell> cat ceph.conf
[global]
osd pool default size = 3
osd pool default min size = 1
public network = 10.52.0.0/24
cluster network = 10.52.0.0/24
cephx require signatures = true
cephx cluster require signatures = true
cephx service require signatures = true
cephx sign messages = true

[mon]
mon data size warn = 15*1024*1024*1024
mon data avail warn = 30
mon data avail crit = 10
# The cluster contains heterogeneous PCs, so clock drift is always larger than the default 0.05s; the allowed drift is raised to keep synchronization manageable
mon clock drift allowed = 2
mon clock drift warn backoff = 30
mon allow pool delete = true
mon osd allow primary affinity = true

[osd]
osd journal size = 10000
osd mkfs type = xfs
osd max write size = 512
osd client message size cap = 2147483648
osd deep scrub stride = 131072
osd op threads = 16
osd disk threads = 4
osd map cache size = 1024
osd map cache bl size = 128
#osd mount options xfs = "rw,noexec,nodev,noatime,nodiratime,nobarrier"
osd recovery op priority = 5
osd recovery max active = 10
osd max backfills = 4
osd min pg log entries = 30000
osd max pg log entries = 100000
osd mon heartbeat interval = 40
ms dispatch throttle bytes = 148576000
objecter inflight ops = 819200
osd op log threshold = 50
osd crush chooseleaf type = 0
filestore xattr use omap = true
filestore min sync interval = 10
filestore max sync interval = 15
filestore queue max ops = 25000
filestore queue max bytes = 1048576000
filestore queue committing max ops = 50000
filestore queue committing max bytes = 10485760000
filestore split multiple = 8
filestore merge threshold = 40
filestore fd cache size = 1024
filestore op threads = 32
journal max write bytes = 1073714824
journal max write entries = 10000
journal queue max ops = 50000
journal queue max bytes = 10485760000

[mds]
debug ms = 1/5

[client]
rbd cache = true
rbd cache size = 335544320
rbd cache max dirty = 134217728
rbd cache max dirty age = 30
rbd cache writethrough until flush = false
rbd cache max dirty object = 2
rbd cache target dirty = 235544320
shell> ceph-deploy install shyt-ceph-mon1 shyt-ceph-mon2 shyt-ceph-mon3 \
--release mimic \
--repo-url http://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/ \
--gpg-url http://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
shell> ceph-deploy mon create-initial
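When mon create-initial completes it also gathers the bootstrap keyrings into the working directory; the listing should look roughly like:

shell> ls *.keyring
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.bootstrap-rgw.keyring  ceph.client.admin.keyring  ceph.mon.keyring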
# Use ceph-deploy to copy the configuration file and admin keyring to the other nodes, so the cluster can be managed without specifying the mon address and user credentials each time
shell> ceph-deploy admin shyt-ceph-mon1 shyt-ceph-mon2 shyt-ceph-mon3
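If ceph.conf is changed later, the updated copy can be redistributed from the same working directory, e.g.:

shell> ceph-deploy --overwrite-conf config push shyt-ceph-mon1 shyt-ceph-mon2 shyt-ceph-mon3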
# Running ceph health at this point prints:
# HEALTH_WARN no active mgr
# Since Ceph 12 (Luminous) the manager daemon is mandatory: add one mgr for every machine that runs a monitor, otherwise the cluster stays in the WARN state.
shell> ceph-deploy mgr create shyt-ceph-mon1:cephsvr-16101 shyt-ceph-mon2:cephsvr-16102 shyt-ceph-mon3:cephsvr-16103
# Note: losing ceph-mgr effectively puts the whole cluster in serious trouble,
# so it is recommended to create an independent ceph-mgr on every mon (at least 3 mon nodes) using the command above; each mgr needs its own distinct name.
# To stop a ceph-mgr instance:
shell> systemctl stop ceph-mgr@cephsvr-16101
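The manager state can be confirmed afterwards; ceph -s should report one active mgr and two standbys, e.g.:

shell> ceph -s | grep mgr
    mgr: cephsvr-16101(active), standbys: cephsvr-16102, cephsvr-16103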
3. Deploying the osd nodes
shell> hostnamectl --static set-hostname shyt-ceph-osd-node1
shell> wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
shell> wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
shell> yum clean all
shell> yum makecache
shell> ceph-deploy install shyt-ceph-osd-node1 shyt-ceph-osd-node2 \
--release mimic \
--repo-url http://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/ \
--gpg-url http://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
shell> ceph-deploy disk zap shyt-ceph-osd-node1:sdb shyt-ceph-osd-node1:sdc shyt-ceph-osd-node1:sdd
shell> ceph-deploy osd create shyt-ceph-osd-node1:sdb shyt-ceph-osd-node1:sdc shyt-ceph-osd-node1:sdd
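The same zap/create sequence is then repeated for the second osd node (disk names assumed to match node1):

shell> ceph-deploy disk zap shyt-ceph-osd-node2:sdb shyt-ceph-osd-node2:sdc shyt-ceph-osd-node2:sdd
shell> ceph-deploy osd create shyt-ceph-osd-node2:sdb shyt-ceph-osd-node2:sdc shyt-ceph-osd-node2:sdd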
shell> ceph-deploy admin shyt-ceph-osd-node1 shyt-ceph-osd-node2
# Check the status of the ceph osd nodes
shell> ceph -s
shell> ceph osd tree
III. Enabling the Dashboard
# Enable the dashboard module
shell> ceph mgr module enable dashboard
# Generate a self-signed certificate
shell> ceph dashboard create-self-signed-cert
Self-signed certificate created
# Configure the dashboard listening IP and port
shell> ceph config set mgr mgr/dashboard/server_port 8080
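Only the port is set above; if the dashboard should bind to a specific address, mimic accepts it through the same config path (the IP below is this cluster's mon1, used as an assumed example):

shell> ceph config set mgr mgr/dashboard/server_addr 10.52.0.181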
# Configure the dashboard login credentials
shell> ceph dashboard set-login-credentials root 123456
Username and password updated
# Disable SSL and serve the dashboard over plain HTTP
shell> ceph config set mgr mgr/dashboard/ssl false
# Restart the mgr on every mon node so the settings take effect
shell> systemctl restart ceph-mgr.target
# Open http://10.52.0.181:8080 in a browser
# Inspect the ceph-mgr services
shell> ceph mgr services
{"dashboard": "http://shyt-ceph-mon1:8080/"
}
IV. Creating the Ceph MDS Role
1. Installing ceph mds
# Deploy multiple MDS nodes to avoid a single point of failure
shell> ceph-deploy mds create shyt-ceph-mon1 shyt-ceph-mon2 shyt-ceph-mon3
2. Creating the cephfs file system
# Create the data and metadata pools (pg_num and pgp_num are both 128)
shell> ceph osd pool create data 128 128
shell> ceph osd pool create metadata 128 128
shell> ceph fs new cephfs metadata data
shell> ceph mds stat
cephfs-1/1/1 up {0=shyt-ceph-mon3=up:active}, 2 up:standby
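The new file system can also be inspected directly, for instance:

shell> ceph fs ls
name: cephfs, metadata pool: metadata, data pools: [data ]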
3. Mounting the cephfs file system
shell> wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
shell> wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
# Quote the heredoc delimiter so $basearch is written literally instead of being expanded by the shell
shell> cat >> /etc/yum.repos.d/ceph.repo << 'EOF'
[ceph]
name=Ceph packages for $basearch
baseurl=http://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
priority=1
type=rpm-md
gpgkey=http://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
priority=1
type=rpm-md
gpgkey=http://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/SRPMS
enabled=0
gpgcheck=1
type=rpm-md
gpgkey=http://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1
EOF
shell> yum clean all
shell> yum makecache
shell> yum -y install https://mirrors.aliyun.com/ceph/rpm-mimic/el7/x86_64/ceph-fuse-13.2.5-0.el7.x86_64.rpm
# Create the ceph directory and copy ceph.client.admin.keyring and ceph.conf into it.
shell> mkdir /etc/ceph/
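One way to obtain the two files is to copy them from a mon node (paths assume the defaults pushed earlier by ceph-deploy admin):

shell> scp shyt-ceph-mon1:/etc/ceph/ceph.conf /etc/ceph/
shell> scp shyt-ceph-mon1:/etc/ceph/ceph.client.admin.keyring /etc/ceph/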
# Create the mount point
shell> mkdir /storage
shell> ceph-fuse /storage
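A quick sanity check that the mount succeeded; the filesystem type should show up as fuse.ceph-fuse:

shell> df -hT /storage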
# Mount at boot via rc.local
shell> echo "ceph-fuse /storage" >> /etc/rc.d/rc.local