Preface
While working on a TiDB recovery drill recently, I needed to deploy a minimal full-topology TiDB cluster on a single Linux server. This post documents the installation process.
Environment Preparation
Before deploying the TiDB cluster, prepare a deployment host and make sure its software meets the requirements:
- CentOS 7.3 or later is recommended
- The host should have internet access so TiDB and related installation packages can be downloaded
Note: Starting from v8.5.1, TiDB has been re-adapted to glibc 2.17, restoring compatibility with CentOS Linux 7.
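Before going any further, it is worth double-checking the OS release and, if the online mirror will be used, outbound connectivity. A minimal check (the mirror URL below is the official TiUP mirror, used here only as a reachability probe; skip it for offline installs):
## Confirm the OS release of the deployment host
[root@test ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
## Optional: verify outbound access to the official TiUP mirror
[root@test ~]# curl -sI https://tiup-mirrors.pingcap.com | head -1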
Environment Information
The minimal TiDB cluster topology consists of the following instances:
Component | Count | IP | Ports |
---|---|---|---|
PD | 1 | 192.168.31.79 | 2379/2380 |
TiDB | 1 | 192.168.31.79 | 4000/10080 |
TiKV | 3 | 192.168.31.79 | 20160-20162/20180-20182 |
TiFlash | 1 | 192.168.31.79 | 9000/3930/20170/20292/8234/8123 |
Prometheus | 1 | 192.168.31.79 | 9090/12020 |
Grafana | 1 | 192.168.31.79 | 3000 |
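Because every instance lives on the same host, it helps to confirm up front that none of the planned ports are already taken. A quick sketch using ss (empty output means all ports are free):
[root@test ~]# ss -tlnp | grep -E ':(2379|2380|4000|10080|2016[0-2]|2018[0-2]|9000|3930|20170|20292|8234|8123|9090|12020|3000) '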
Install Dependencies
Dependency libraries required to compile and build TiDB:
- Golang 1.23 or later
- Rust nightly-2023-12-28 or later
- LLVM 17.0 or later
- sshpass 1.06 or later
- GCC 7.x (not met on this host; see the version check below)
- glibc 2.28-151.el8 (not met on this host)
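A quick way to confirm what the host currently provides (this is how the GCC and glibc items above were flagged as not met; CentOS 7 ships GCC 4.8.5 and glibc 2.17 by default):
[root@test ~]# gcc --version | head -1
[root@test ~]# ldd --version | head -1
[root@test ~]# go version 2>/dev/null || echo "golang not installed"
[root@test ~]# rustc --version 2>/dev/null || echo "rust not installed"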
Download the required packages:
- Rust download: https://forge.rust-lang.org/infra/other-installation-methods.html
- Golang download: https://go.dev/dl/
- sshpass download: https://sourceforge.net/projects/sshpass/files/latest/download
Install Golang:
[root@test soft]# tar -C /usr/local -xf go1.25.0.linux-amd64.tar.gz
[root@test ~]# cat<<-\EOF>>/root/.bash_profile
export PATH=$PATH:/usr/local/go/bin
EOF
[root@test ~]# source /root/.bash_profile
[root@test ~]# go version
go version go1.25.0 linux/amd64
Install Rust:
[root@test soft]# tar -xf rust-1.89.0-x86_64-unknown-linux-gnu.tar.gz
[root@test soft]# cd rust-1.89.0-x86_64-unknown-linux-gnu/
[root@test rust-1.89.0-x86_64-unknown-linux-gnu]# ./install.sh
[root@test ~]# rustc --version
rustc 1.89.0 (29483883e 2025-08-04)
Install sshpass:
[root@test soft]# tar -xf sshpass-1.10.tar.gz
[root@test soft]# cd sshpass-1.10/
[root@test sshpass-1.10]# ./configure && make && make install
[root@test ~]# sshpass -V
sshpass 1.10
Disable the Firewall
[root@test ~]# systemctl stop firewalld.service
[root@test ~]# systemctl disable firewalld.service
[root@test ~]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
Check and Disable Swap
[root@test ~]# echo "vm.swappiness = 0">> /etc/sysctl.conf
[root@test ~]# swapoff -a
[root@test ~]# sysctl -p
vm.swappiness = 0
Remember to edit /etc/fstab and comment out the swap partition:
#/dev/mapper/centos-swap swap swap defaults 0 0
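To confirm swap is fully off, both now and after the fstab change, a quick check:
## Swap should report 0B used / 0B total
[root@test ~]# free -h | grep -i swap
## No active swap devices should be listed
[root@test ~]# swapon -s
## The swap entry in fstab should be commented out
[root@test ~]# grep -i swap /etc/fstab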
Check and Configure OS Optimization Parameters
[root@test ~]# echo never > /sys/kernel/mm/transparent_hugepage/enabled
[root@test ~]# echo never > /sys/kernel/mm/transparent_hugepage/defrag
[root@test ~]# cat<<EOF>>/etc/sysctl.conf
fs.file-max = 1000000
net.core.somaxconn = 32768
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_syncookies = 0
vm.overcommit_memory = 1
EOF
[root@test ~]# sysctl -p
[root@test ~]# cat<<EOF>>/etc/security/limits.conf
tidb soft nofile 1000000
tidb hard nofile 1000000
tidb soft stack 32768
tidb hard stack 32768
EOF
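Note that the echo into /sys only lasts until the next reboot. One common way to keep THP disabled persistently on CentOS 7 (an additional step beyond the commands above, using rc.local) is:
[root@test ~]# cat<<'EOF'>>/etc/rc.d/rc.local
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
EOF
[root@test ~]# chmod +x /etc/rc.d/rc.local
## Verify: [never] should be the selected value
[root@test ~]# cat /sys/kernel/mm/transparent_hugepage/enabled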
Adjust MaxSessions
Since this setup simulates a multi-machine deployment on a single host, increase the connection limit of the sshd service as the root user:
[root@test ~]# vim /etc/ssh/sshd_config
## Set MaxSessions to 20
[root@test ~]# systemctl restart sshd.service
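If you prefer a scripted change over editing the file by hand, something like the following works (it assumes the stock sshd_config where MaxSessions is still commented out), and sshd -T prints the effective value for verification:
[root@test ~]# sed -i 's/^#\?MaxSessions.*/MaxSessions 20/' /etc/ssh/sshd_config
[root@test ~]# systemctl restart sshd.service
[root@test ~]# sshd -T | grep -i maxsessions
maxsessions 20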
Create the TiDB User
[root@test ~]# useradd tidb
[root@test ~]# echo "Tidb@123" |passwd tidb --stdin
Changing password for user tidb.
passwd: all authentication tokens updated successfully.
[root@test ~]# cat<<-EOF>>/etc/sudoers
tidb ALL=(ALL) NOPASSWD: ALL
EOF
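A quick sanity check that the new account and passwordless sudo behave as expected:
## Should print "root" without asking for a password
[root@test ~]# su - tidb -c 'sudo whoami'
root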
Deployment
This deployment runs in an offline (intranet) environment, so the official online repository is not used; the cluster is deployed from a local mirror. For setting up the local mirror, see the earlier post "TiDB Offline Deployment of the TiUP Component".
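For reference, the local mirror used here was prepared roughly like this (a sketch of the usual offline-package steps; the directory name matches the mirror path shown below, so adjust it to the package you actually downloaded):
## Unpack the offline package (downloaded beforehand on a machine with internet access)
[root@test soft]# tar -xf tidb-community-server-v8.5.3-linux-amd64.tar.gz -C /root
[root@test soft]# cd /root/tidb-community-server-v8.5.3-linux-amd64
## local_install.sh installs tiup and points the mirror at this directory
[root@test tidb-community-server-v8.5.3-linux-amd64]# sh local_install.sh
[root@test tidb-community-server-v8.5.3-linux-amd64]# source /root/.bash_profile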
tiup is already in place:
[root@test ~]# tiup mirror show
/root/tidb-community-server-v8.5.3-linux-amd64
[root@test ~]# tiup --version
1.16.2 tiup
Go Version: go1.21.13
Git Ref: v1.16.2
GitHash: 678c52de0c0ef30634b8ba7302a8376caa95d50d
Create and start the cluster:
[root@test ~]# cat<<-\EOF>topo.yaml
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
  user: "tidb"
  ssh_port: 11122
  deploy_dir: "/data/tidb-deploy"
  data_dir: "/data/tidb-data"

# # Monitored variables are applied to all the machines.
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115

server_configs:
  tidb:
    instance.tidb_slow_log_threshold: 300
  tikv:
    readpool.storage.use-unified-pool: false
    readpool.coprocessor.use-unified-pool: true
  pd:
    replication.enable-placement-rules: true
    replication.location-labels: ["host"]
  tiflash:
    logger.level: "info"

pd_servers:
  - host: 192.168.31.79

tidb_servers:
  - host: 192.168.31.79

tikv_servers:
  - host: 192.168.31.79
    port: 20160
    status_port: 20180
    config:
      server.labels: { host: "logic-host-1" }
  - host: 192.168.31.79
    port: 20161
    status_port: 20181
    config:
      server.labels: { host: "logic-host-2" }
  - host: 192.168.31.79
    port: 20162
    status_port: 20182
    config:
      server.labels: { host: "logic-host-3" }

tiflash_servers:
  - host: 192.168.31.79

monitoring_servers:
  - host: 192.168.31.79

grafana_servers:
  - host: 192.168.31.79
EOF
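If you want a fuller starting point than the minimal topology above, tiup can emit a complete commented topology template that you can trim down; a sketch:
[root@test ~]# tiup cluster template > full-topo.yaml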
Run the pre-deployment checks:
[root@test ~]# tiup cluster check topo.yaml --user root -p
Input SSH password:
+ Detect CPU Arch Name
  - Detecting node 192.168.31.79 Arch info ... Done
+ Detect CPU OS Name
  - Detecting node 192.168.31.79 OS info ... Done
+ Download necessary tools
  - Downloading check tools for linux/amd64 ... Done
+ Collect basic system information
  - Getting system info of 192.168.31.79:11122 ... Done
+ Check time zone
  - Checking node 192.168.31.79 ... Done
+ Check system requirements
  - Checking node 192.168.31.79 ... Done
+ Cleanup check files
  - Cleanup check files on 192.168.31.79:11122 ... Done
Node Check Result Message
---- ----- ------ -------
192.168.31.79 os-version Fail CentOS Linux 7 (Core) 7.9.2009 not supported, use version 9 or higher
192.168.31.79 cpu-cores Pass number of CPU cores / threads: 4
192.168.31.79 ntp Warn The NTPd daemon may be not start
192.168.31.79 disk Warn mount point /data does not have 'noatime' option set
192.168.31.79 selinux Pass SELinux is disabled
192.168.31.79 thp Pass THP is disabled
192.168.31.79 command Pass numactl: policy: default
192.168.31.79 cpu-governor Warn Unable to determine current CPU frequency governor policy
192.168.31.79 memory Pass memory size is 8192MB
192.168.31.79 network Pass network speed of ens192 is 10000MB
192.168.31.79 disk Fail multiple components tikv:/data/tidb-data/tikv-20160,tikv:/data/tidb-data/tikv-20161,tikv:/data/tidb-data/tikv-20162,tiflash:/data/tidb-data/tiflash-9000 are using the same partition 192.168.31.79:/data as data dir
192.168.31.79 disk Fail mount point /data does not have 'nodelalloc' option set
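Some of the Warn/Fail items above (for example the NTP service and kernel parameters) can be repaired automatically by re-running the check with --apply; the disk Fail about several components sharing the /data partition is expected on a single-host lab and can be ignored here. A sketch:
## Let tiup try to fix the flagged items, then re-check
[root@test ~]# tiup cluster check topo.yaml --apply --user root -p
[root@test ~]# tiup cluster check topo.yaml --user root -p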
Deploy the cluster:
[root@test ~]# tiup cluster deploy lucifer v8.5.3 topo.yaml --user root -p
Input SSH password:
+ Detect CPU Arch Name
  - Detecting node 192.168.31.79 Arch info ... Done
+ Detect CPU OS Name
  - Detecting node 192.168.31.79 OS info ... Done
Please confirm your topology:
Cluster type: tidb
Cluster name: lucifer
Cluster version: v8.5.3
Role Host Ports OS/Arch Directories
---- ---- ----- ------- -----------
pd 192.168.31.79 2379/2380 linux/x86_64 /data/tidb-deploy/pd-2379,/data/tidb-data/pd-2379
tikv 192.168.31.79 20160/20180 linux/x86_64 /data/tidb-deploy/tikv-20160,/data/tidb-data/tikv-20160
tikv 192.168.31.79 20161/20181 linux/x86_64 /data/tidb-deploy/tikv-20161,/data/tidb-data/tikv-20161
tikv 192.168.31.79 20162/20182 linux/x86_64 /data/tidb-deploy/tikv-20162,/data/tidb-data/tikv-20162
tidb 192.168.31.79 4000/10080 linux/x86_64 /data/tidb-deploy/tidb-4000
tiflash 192.168.31.79 9000/3930/20170/20292/8234/8123 linux/x86_64 /data/tidb-deploy/tiflash-9000,/data/tidb-data/tiflash-9000
prometheus 192.168.31.79 9090/12020 linux/x86_64 /data/tidb-deploy/prometheus-9090,/data/tidb-data/prometheus-9090
grafana 192.168.31.79 3000 linux/x86_64 /data/tidb-deploy/grafana-3000
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v8.5.3 (linux/amd64) ... Done
  - Download tikv:v8.5.3 (linux/amd64) ... Done
  - Download tidb:v8.5.3 (linux/amd64) ... Done
  - Download tiflash:v8.5.3 (linux/amd64) ... Done
  - Download prometheus:v8.5.3 (linux/amd64) ... Done
  - Download grafana:v8.5.3 (linux/amd64) ... Done
  - Download node_exporter: (linux/amd64) ... Done
  - Download blackbox_exporter: (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 192.168.31.79:11122 ... Done
+ Deploy TiDB instance
  - Copy pd -> 192.168.31.79 ... Done
  - Copy tikv -> 192.168.31.79 ... Done
  - Copy tikv -> 192.168.31.79 ... Done
  - Copy tikv -> 192.168.31.79 ... Done
  - Copy tidb -> 192.168.31.79 ... Done
  - Copy tiflash -> 192.168.31.79 ... Done
  - Copy prometheus -> 192.168.31.79 ... Done
  - Copy grafana -> 192.168.31.79 ... Done
  - Deploy node_exporter -> 192.168.31.79 ... Done
  - Deploy blackbox_exporter -> 192.168.31.79 ... Done
+ Copy certificate to remote host
+ Init instance configs
  - Generate config pd -> 192.168.31.79:2379 ... Done
  - Generate config tikv -> 192.168.31.79:20160 ... Done
  - Generate config tikv -> 192.168.31.79:20161 ... Done
  - Generate config tikv -> 192.168.31.79:20162 ... Done
  - Generate config tidb -> 192.168.31.79:4000 ... Done
  - Generate config tiflash -> 192.168.31.79:9000 ... Done
  - Generate config prometheus -> 192.168.31.79:9090 ... Done
  - Generate config grafana -> 192.168.31.79:3000 ... Done
+ Init monitor configs
  - Generate config node_exporter -> 192.168.31.79 ... Done
  - Generate config blackbox_exporter -> 192.168.31.79 ... Done
Enabling component pd
        Enabling instance 192.168.31.79:2379
        Enable instance 192.168.31.79:2379 success
Enabling component tikv
        Enabling instance 192.168.31.79:20162
        Enabling instance 192.168.31.79:20160
        Enabling instance 192.168.31.79:20161
        Enable instance 192.168.31.79:20162 success
        Enable instance 192.168.31.79:20161 success
        Enable instance 192.168.31.79:20160 success
Enabling component tidb
        Enabling instance 192.168.31.79:4000
        Enable instance 192.168.31.79:4000 success
Enabling component tiflash
        Enabling instance 192.168.31.79:9000
        Enable instance 192.168.31.79:9000 success
Enabling component prometheus
        Enabling instance 192.168.31.79:9090
        Enable instance 192.168.31.79:9090 success
Enabling component grafana
        Enabling instance 192.168.31.79:3000
        Enable instance 192.168.31.79:3000 success
Enabling component node_exporter
        Enabling instance 192.168.31.79
        Enable 192.168.31.79 success
Enabling component blackbox_exporter
        Enabling instance 192.168.31.79
        Enable 192.168.31.79 success
Cluster `lucifer` deployed successfully, you can start it with command: `tiup cluster start lucifer --init`
Start the cluster:
[root@test ~]# tiup cluster start lucifer --init
Starting cluster lucifer...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/lucifer/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/lucifer/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.31.79
+ [Parallel] - UserSSH: user=tidb, host=192.168.31.79
+ [Parallel] - UserSSH: user=tidb, host=192.168.31.79
+ [Parallel] - UserSSH: user=tidb, host=192.168.31.79
+ [Parallel] - UserSSH: user=tidb, host=192.168.31.79
+ [Parallel] - UserSSH: user=tidb, host=192.168.31.79
+ [Parallel] - UserSSH: user=tidb, host=192.168.31.79
+ [Parallel] - UserSSH: user=tidb, host=192.168.31.79
+ [ Serial ] - StartCluster
Starting component pd
        Starting instance 192.168.31.79:2379
        Start instance 192.168.31.79:2379 success
Starting component tikv
        Starting instance 192.168.31.79:20162
        Starting instance 192.168.31.79:20160
        Starting instance 192.168.31.79:20161
        Start instance 192.168.31.79:20162 success
        Start instance 192.168.31.79:20161 success
        Start instance 192.168.31.79:20160 success
Starting component tidb
        Starting instance 192.168.31.79:4000
        Start instance 192.168.31.79:4000 success
Starting component tiflash
        Starting instance 192.168.31.79:9000
        Start instance 192.168.31.79:9000 success
Starting component prometheus
        Starting instance 192.168.31.79:9090
        Start instance 192.168.31.79:9090 success
Starting component grafana
        Starting instance 192.168.31.79:3000
        Start instance 192.168.31.79:3000 success
Starting component node_exporter
        Starting instance 192.168.31.79
        Start 192.168.31.79 success
Starting component blackbox_exporter
        Starting instance 192.168.31.79
        Start 192.168.31.79 success
+ [ Serial ] - UpdateTopology: cluster=lucifer
Started cluster `lucifer` successfully
The root password of TiDB database has been changed.
The new password is: 'm+92G0Q3eNR4^6cq*@'.
Copy and record it to somewhere safe, it is only displayed once, and will not be stored.
The generated password can NOT be get and shown again.
List the clusters:
[root@test ~]# tiup cluster list
Name User Version Path PrivateKey
---- ---- ------- ---- ----------
lucifer tidb v8.5.3 /root/.tiup/storage/cluster/clusters/lucifer /root/.tiup/storage/cluster/clusters/lucifer/ssh/id_rsa
Check the cluster status:
[root@test ~]# tiup cluster display lucifer
Cluster type: tidb
Cluster name: lucifer
Cluster version: v8.5.3
Deploy user: tidb
SSH type: builtin
Dashboard URL: http://192.168.31.79:2379/dashboard
Dashboard URLs: http://192.168.31.79:2379/dashboard
Grafana URL: http://192.168.31.79:3000
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir
-- ---- ---- ----- ------- ------ -------- ----------
192.168.31.79:3000 grafana 192.168.31.79 3000 linux/x86_64 Up - /data/tidb-deploy/grafana-3000
192.168.31.79:2379 pd 192.168.31.79 2379/2380 linux/x86_64 Up|L|UI /data/tidb-data/pd-2379 /data/tidb-deploy/pd-2379
192.168.31.79:9090 prometheus 192.168.31.79 9090/12020 linux/x86_64 Up /data/tidb-data/prometheus-9090 /data/tidb-deploy/prometheus-9090
192.168.31.79:4000 tidb 192.168.31.79 4000/10080 linux/x86_64 Up - /data/tidb-deploy/tidb-4000
192.168.31.79:9000 tiflash 192.168.31.79 9000/3930/20170/20292/8234/8123 linux/x86_64 Up /data/tidb-data/tiflash-9000 /data/tidb-deploy/tiflash-9000
192.168.31.79:20160 tikv 192.168.31.79 20160/20180 linux/x86_64 Up /data/tidb-data/tikv-20160 /data/tidb-deploy/tikv-20160
192.168.31.79:20161 tikv 192.168.31.79 20161/20181 linux/x86_64 Up /data/tidb-data/tikv-20161 /data/tidb-deploy/tikv-20161
192.168.31.79:20162 tikv 192.168.31.79 20162/20182 linux/x86_64 Up /data/tidb-data/tikv-20162 /data/tidb-deploy/tikv-20162
Total nodes: 8
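Besides tiup cluster display, a quick way to confirm the services are actually responding is to hit the TiDB status port and the PD health API (both are part of the standard TiDB/PD HTTP APIs):
## TiDB status endpoint: returns JSON including the server version
[root@test ~]# curl -s http://192.168.31.79:10080/status
## PD health endpoint
[root@test ~]# curl -s http://192.168.31.79:2379/pd/api/v1/health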
Install the MySQL Client
TiDB is compatible with the MySQL protocol, so a MySQL client is required to connect to it. CentOS/RHEL 7 systems come with MariaDB packages preinstalled by default, so those need to be removed first:
[root@test ~]# rpm -e --nodeps $(rpm -qa | grep mariadb)
Download the packages on a machine with internet access:
[root@lucifer ~]# wget https://repo.mysql.com/RPM-GPG-KEY-mysql-2023
[root@lucifer ~]# wget http://dev.mysql.com/get/mysql80-community-release-el7-10.noarch.rpm
Install the MySQL client:
[root@test ~]# yum -y install mysql80-community-release-el7-10.noarch.rpm
[root@test ~]# rpm --import RPM-GPG-KEY-mysql-2023
[root@test ~]# yum -y install mysql
Connect to the database:
## The initial root password is the one printed in the log when the cluster was initialized: m+92G0Q3eNR4^6cq*@
[root@test ~]# mysql -h 192.168.31.79 -P 4000 -uroot -p
mysql> show databases;
Change the initial root password:
mysql> use mysql
mysql> alter user 'root'@'%' identified by 'tidb';
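To confirm the new password works, reconnect non-interactively and query the server version (tidb_version() is a TiDB built-in function):
[root@test ~]# mysql -h 192.168.31.79 -P 4000 -uroot -ptidb -e "select tidb_version()\G"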
Cluster monitoring:
- Dashboard: http://192.168.31.79:2379/dashboard (log in with root/tidb)
- Grafana: http://192.168.31.79:3000 (default credentials: admin/admin)
Final Words
With that, the single-host TiDB cluster is deployed and ready for development, testing, and study. For production, follow the officially recommended multi-machine deployment topology.