# TiDB 4.0 Single-Machine Deployment
## Prepare the environment
- Reference: https://pingcap.com/docs-cn/stable/check-before-deployment/
- Key points: firewall settings, passwordless SSH login, and disabling swap
- Note: set up passwordless SSH login to this host for the `tidb` user:
```
ssh-copy-id -i ~/.ssh/id_rsa.pub tidb@<this-host>
```
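The prerequisite checks (disabled swap in particular) can be verified with a few read-only commands. A minimal sketch, assuming a Linux host; it is not a substitute for the full pre-deployment checklist in the linked docs:

```shell
# Read-only sanity checks for the environment prep (no root needed).
# Only the header line in /proc/swaps means no swap device is active:
cat /proc/swaps
# SwapTotal should read "0 kB" once swap is fully disabled:
grep SwapTotal /proc/meminfo
```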
## Installation
- Reference: https://pingcap.com/docs-cn/stable/quick-start-with-tidb/
This guide follows the third option on that page: using TiUP cluster to simulate a production deployment on a single machine.
#### Download and install TiUP
```
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
```
#### Install the TiUP cluster component
Running `tiup cluster` for the first time downloads and installs the component automatically:
```
tiup cluster
```
#### Configuration template (replace 10.0.1.1 with your own IP)
```
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"

# # Monitored variables are applied to all the machines.
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115

server_configs:
  tidb:
    log.slow-threshold: 300
  tikv:
    readpool.storage.use-unified-pool: false
    readpool.coprocessor.use-unified-pool: true
  pd:
    replication.enable-placement-rules: true
  tiflash:
    logger.level: "info"

pd_servers:
  - host: 10.0.1.1

tidb_servers:
  - host: 10.0.1.1

tikv_servers:
  - host: 10.0.1.1
    port: 20160
    status_port: 20180
  - host: 10.0.1.1
    port: 20161
    status_port: 20181
  - host: 10.0.1.1
    port: 20162
    status_port: 20182

tiflash_servers:
  - host: 10.0.1.1

monitoring_servers:
  - host: 10.0.1.1

grafana_servers:
  - host: 10.0.1.1
```
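To fill in your own address for the `10.0.1.1` placeholder, a simple `sed` substitution works. A sketch with illustrative values: the file name `topology.yaml` matches the deploy command used later, while the stand-in content and the `192.168.56.14` address are examples only.

```shell
# Stand-in for the full template above; in practice save the whole
# template to topology.yaml (file name and IP are illustrative).
printf 'pd_servers:\n  - host: 10.0.1.1\n' > topology.yaml
# Replace the placeholder with this machine's IP (GNU sed, in place):
sed -i 's/10\.0\.1\.1/192.168.56.14/g' topology.yaml
grep host topology.yaml    # now shows 192.168.56.14
```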
#### Run the cluster deploy command
```
[root@localhost tidb]# tiup cluster deploy tidb-test v4.0.0 ./topology.yaml -i ~/.ssh/id_rsa --user tidb
Starting component `cluster`: /root/.tiup/components/cluster/v1.0.3/tiup-cluster deploy tidb-test v4.0.0 ./topology.yaml -i /root/.ssh/id_rsa --user tidb
Please confirm your topology:
TiDB Cluster: tidb-test
TiDB Version: v4.0.0
Type Host Ports OS/Arch Directories
---- ---- ----- ------- -----------
pd 192.168.56.14 2379/2380 linux/x86_64 /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv 192.168.56.14 20160/20180 linux/x86_64 /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv 192.168.56.14 20161/20181 linux/x86_64 /tidb-deploy/tikv-20161,/tidb-data/tikv-20161
tikv 192.168.56.14 20162/20182 linux/x86_64 /tidb-deploy/tikv-20162,/tidb-data/tikv-20162
tidb 192.168.56.14 4000/10080 linux/x86_64 /tidb-deploy/tidb-4000
tiflash 192.168.56.14 9000/8123/3930/20170/20292/8234 linux/x86_64 /tidb-deploy/tiflash-9000,/tidb-data/tiflash-9000
prometheus 192.168.56.14 9090 linux/x86_64 /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana 192.168.56.14 3000 linux/x86_64 /tidb-deploy/grafana-3000
Attention:
1. If the topology is not what you expected, check your yaml file.
2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: y
+ Generate SSH keys ... Done
+ Download TiDB components
- Download pd:v4.0.0 (linux/amd64) ... Done
- Download tikv:v4.0.0 (linux/amd64) ... Done
- Download tidb:v4.0.0 (linux/amd64) ... Done
- Download tiflash:v4.0.0 (linux/amd64) ... Done
- Download prometheus:v4.0.0 (linux/amd64) ... Done
- Download grafana:v4.0.0 (linux/amd64) ... Done
- Download node_exporter:v0.17.0 (linux/amd64) ... Done
- Download blackbox_exporter:v0.12.0 (linux/amd64) ... Done
+ Initialize target host environments
- Prepare 192.168.56.14:22 ... Done
+ Copy files
- Copy pd -> 192.168.56.14 ... Done
- Copy tikv -> 192.168.56.14 ... Done
- Copy tikv -> 192.168.56.14 ... Done
- Copy tikv -> 192.168.56.14 ... Done
- Copy tidb -> 192.168.56.14 ... Done
- Copy tiflash -> 192.168.56.14 ... Done
- Copy prometheus -> 192.168.56.14 ... Done
- Copy grafana -> 192.168.56.14 ... Done
- Copy node_exporter -> 192.168.56.14 ... Done
- Copy blackbox_exporter -> 192.168.56.14 ... Done
+ Check status
Deployed cluster `tidb-test` successfully, you can start the cluster via `tiup cluster start tidb-test`
```
#### Start the cluster
```
[root@localhost tidb]# tiup cluster start tidb-test
Starting component `cluster`: /root/.tiup/components/cluster/v1.0.3/tiup-cluster start tidb-test
Starting cluster tidb-test...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.56.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.56.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.56.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.56.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.56.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.56.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.56.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.56.14
+ [ Serial ] - ClusterOperate: operation=StartOperation, options={Roles:[] Nodes:[] Force:false SSHTimeout:5 OptTimeout:60 APITimeout:300}
Starting component pd
Starting instance pd 192.168.56.14:2379
Start pd 192.168.56.14:2379 success
Starting component node_exporter
Starting instance 192.168.56.14
Start 192.168.56.14 success
Starting component blackbox_exporter
Starting instance 192.168.56.14
Start 192.168.56.14 success
Starting component tikv
Starting instance tikv 192.168.56.14:20162
Starting instance tikv 192.168.56.14:20160
Starting instance tikv 192.168.56.14:20161
Start tikv 192.168.56.14:20160 success
Start tikv 192.168.56.14:20162 success
Start tikv 192.168.56.14:20161 success
Starting component tidb
Starting instance tidb 192.168.56.14:4000
Start tidb 192.168.56.14:4000 success
Starting component tiflash
Starting instance tiflash 192.168.56.14:9000
Start tiflash 192.168.56.14:9000 success
Starting component prometheus
Starting instance prometheus 192.168.56.14:9090
Start prometheus 192.168.56.14:9090 success
Starting component grafana
Starting instance grafana 192.168.56.14:3000
Start grafana 192.168.56.14:3000 success
Checking service state of pd
192.168.56.14 Active: active (running) since Sat 2020-06-06 05:51:30 UTC; 25s ago
Checking service state of tikv
192.168.56.14 Active: active (running) since Sat 2020-06-06 05:51:32 UTC; 23s ago
192.168.56.14 Active: active (running) since Sat 2020-06-06 05:51:32 UTC; 23s ago
192.168.56.14 Active: active (running) since Sat 2020-06-06 05:51:32 UTC; 24s ago
Checking service state of tidb
192.168.56.14 Active: active (running) since Sat 2020-06-06 05:51:36 UTC; 19s ago
Checking service state of tiflash
192.168.56.14 Active: active (running) since Sat 2020-06-06 05:51:43 UTC; 13s ago
Checking service state of prometheus
192.168.56.14 Active: active (running) since Sat 2020-06-06 05:51:47 UTC; 9s ago
Checking service state of grafana
192.168.56.14 Active: active (running) since Sat 2020-06-06 05:51:48 UTC; 8s ago
+ [ Serial ] - UpdateTopology: cluster=tidb-test
Started cluster `tidb-test` successfully
```
#### Check cluster status
```
[root@localhost tidb]# tiup cluster list
Starting component `cluster`: /root/.tiup/components/cluster/v1.0.3/tiup-cluster list
Name User Version Path PrivateKey
---- ---- ------- ---- ----------
tidb-test tidb v4.0.0 /root/.tiup/storage/cluster/clusters/tidb-test /root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa
[root@localhost tidb]# tiup cluster display tidb-test
Starting component `cluster`: /root/.tiup/components/cluster/v1.0.3/tiup-cluster display tidb-test
TiDB Cluster: tidb-test
TiDB Version: v4.0.0
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir
-- ---- ---- ----- ------- ------ -------- ----------
192.168.56.14:3000 grafana 192.168.56.14 3000 linux/x86_64 Up - /tidb-deploy/grafana-3000
192.168.56.14:2379 pd 192.168.56.14 2379/2380 linux/x86_64 Healthy|L|UI /tidb-data/pd-2379 /tidb-deploy/pd-2379
192.168.56.14:9090 prometheus 192.168.56.14 9090 linux/x86_64 Up /tidb-data/prometheus-9090 /tidb-deploy/prometheus-9090
192.168.56.14:4000 tidb 192.168.56.14 4000/10080 linux/x86_64 Up - /tidb-deploy/tidb-4000
192.168.56.14:9000 tiflash 192.168.56.14 9000/8123/3930/20170/20292/8234 linux/x86_64 Up /tidb-data/tiflash-9000 /tidb-deploy/tiflash-9000
192.168.56.14:20160 tikv 192.168.56.14 20160/20180 linux/x86_64 Up /tidb-data/tikv-20160 /tidb-deploy/tikv-20160
192.168.56.14:20161 tikv 192.168.56.14 20161/20181 linux/x86_64 Up /tidb-data/tikv-20161 /tidb-deploy/tikv-20161
192.168.56.14:20162 tikv 192.168.56.14 20162/20182 linux/x86_64 Up /tidb-data/tikv-20162 /tidb-deploy/tikv-20162
[root@localhost tidb]#
```
#### Test with the MySQL client
```
[root@localhost tidb]# mysql -h 127.0.0.1 -P 4000 -u root
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 86
Server version: 5.7.25-TiDB-v4.0.0 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible
Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
mysql>
mysql>
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| INFORMATION_SCHEMA |
| METRICS_SCHEMA |
| PERFORMANCE_SCHEMA |
| mysql |
| test |
+--------------------+
5 rows in set (0.00 sec)
mysql>
```
## Web UI screenshots
#### TiDB Grafana monitoring
![image-20200606144238380](https://gitee.com//wolfprogramer/blogimages/raw/master///image-20200606144238380.png)
![image-20200606144613804](https://gitee.com//wolfprogramer/blogimages/raw/master///image-20200606144613804.png)
#### TiDB Dashboard
![image-20200606144309558](https://gitee.com//wolfprogramer/blogimages/raw/master///image-20200606144309558.png)