Environment Preparation
- Before deploying, make sure the following software and resource requirements are met:
- Resources: 2+ CPU cores, 4 GB+ memory
- Docker: >= 17.03
- Helm Client: version >= 2.9.0 and < 3.0.0
- Kubectl: at least 1.10; 1.13 or later is recommended
- For Linux users on a 5.x or newer kernel, kubeadm may print warnings during installation. The cluster will usually still work, but for better compatibility a 3.10+ or 4.x kernel is recommended.
- root privileges are required to operate the Docker daemon
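A quick way to confirm the installed versions meet these requirements (an optional sanity check; output formats vary slightly between versions):
docker version --format '{{.Server.Version}}'   # expect >= 17.03
helm version --client --short                   # expect >= v2.9.0 and < v3.0.0
kubectl version --short                         # expect client and server >= v1.10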
Deploy TiDB Operator
Note: ${chartVersion} stands for the chart version throughout the rest of this document, e.g. v1.0.0-rc.1.
Once the Kubernetes cluster is up and running, you can use helm to add the chart repository and install TiDB Operator.
- Add the Helm chart repository
helm repo add pingcap http://charts.pingcap.org/ && \
helm repo list && \
helm repo update && \
helm search tidb-cluster -l && \
helm search tidb-operator -l
- Install TiDB Operator
helm install charts/tidb-operator --name=tidb-operator --namespace=tidb-admin --set scheduler.kubeSchedulerImageName=registry.aliyuncs.com/google_containers/kube-scheduler --version=v1.0.0-rc.1
- Wait a few minutes and confirm that TiDB Operator is running
kubectl get pods --namespace tidb-admin -l app.kubernetes.io/instance=tidb-operator
# Expected output
NAME READY STATUS RESTARTS AGE
tidb-controller-manager-97dc98b6c-hmlp5 1/1 Running 0 118s
tidb-scheduler-648f7bc6c8-qdss5 2/2 Running 0 119s
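If either pod fails to reach Running, its logs usually reveal the cause. A troubleshooting sketch (assuming the Deployment name matches the pod prefix shown above):
kubectl -n tidb-admin logs deployment/tidb-controller-manager --tail=50
kubectl -n tidb-admin describe pods -l app.kubernetes.io/instance=tidb-operator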
Deploy Local Volumes
- Since the previous section did not actually provision PVs, first remove the DinD local-volume-provisioner.yaml
kubectl delete -f manifests/local-dind/local-volume-provisioner.yaml
Note: The following steps must be performed on every Node.
- Add a new disk to the virtual machine
- Inspect the new disk
fdisk -l
# The new disk shows up as /dev/sdb
Disk /dev/sdb: 40 GiB, 42949672960 bytes, 83886080 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
- As described in the operations guide in sig-storage-local-static-provisioner, tidb-operator binds PVs for pd and tikv on startup, so multiple directories need to be created under the discovery directory
- Format the disk
sudo mkfs.ext4 /dev/sdb
# Expected output
mke2fs 1.44.1 (24-Mar-2018)
Creating filesystem with 10485760 4k blocks and 2621440 inodes
Filesystem UUID: 5ace0751-6870-4115-89d4-91e007d8b055
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624
Allocating group tables: done
Writing inode tables: done
Creating journal (65536 blocks): done
Writing superblocks and filesystem accounting information: done
- Mount the disk
DISK_UUID=$(blkid -s UUID -o value /dev/sdb)
sudo mkdir -p /mnt/$DISK_UUID
sudo mount -t ext4 /dev/sdb /mnt/$DISK_UUID
- Persist the mount in /etc/fstab
echo UUID=`sudo blkid -s UUID -o value /dev/sdb` /mnt/$DISK_UUID ext4 defaults 0 2 | sudo tee -a /etc/fstab
# Expected output
UUID=58759186-ffab-42a3-96ce-f9d3c355d4d1 /mnt/58759186-ffab-42a3-96ce-f9d3c355d4d1 ext4 defaults 0 2
- Create multiple directories and bind-mount them into the discovery directory
for i in $(seq 1 10); do
sudo mkdir -p /mnt/${DISK_UUID}/vol${i} /mnt/disks/${DISK_UUID}_vol${i}
sudo mount --bind /mnt/${DISK_UUID}/vol${i} /mnt/disks/${DISK_UUID}_vol${i}
done
- Add the bind mounts to /etc/fstab so they are recreated automatically
for i in $(seq 1 10); do
echo /mnt/${DISK_UUID}/vol${i} /mnt/disks/${DISK_UUID}_vol${i} none bind 0 0 | sudo tee -a /etc/fstab
done
# Expected output
/mnt/58759186-ffab-42a3-96ce-f9d3c355d4d1/vol1 /mnt/disks/58759186-ffab-42a3-96ce-f9d3c355d4d1_vol1 none bind 0 0
/mnt/58759186-ffab-42a3-96ce-f9d3c355d4d1/vol2 /mnt/disks/58759186-ffab-42a3-96ce-f9d3c355d4d1_vol2 none bind 0 0
/mnt/58759186-ffab-42a3-96ce-f9d3c355d4d1/vol3 /mnt/disks/58759186-ffab-42a3-96ce-f9d3c355d4d1_vol3 none bind 0 0
/mnt/58759186-ffab-42a3-96ce-f9d3c355d4d1/vol4 /mnt/disks/58759186-ffab-42a3-96ce-f9d3c355d4d1_vol4 none bind 0 0
/mnt/58759186-ffab-42a3-96ce-f9d3c355d4d1/vol5 /mnt/disks/58759186-ffab-42a3-96ce-f9d3c355d4d1_vol5 none bind 0 0
/mnt/58759186-ffab-42a3-96ce-f9d3c355d4d1/vol6 /mnt/disks/58759186-ffab-42a3-96ce-f9d3c355d4d1_vol6 none bind 0 0
/mnt/58759186-ffab-42a3-96ce-f9d3c355d4d1/vol7 /mnt/disks/58759186-ffab-42a3-96ce-f9d3c355d4d1_vol7 none bind 0 0
/mnt/58759186-ffab-42a3-96ce-f9d3c355d4d1/vol8 /mnt/disks/58759186-ffab-42a3-96ce-f9d3c355d4d1_vol8 none bind 0 0
/mnt/58759186-ffab-42a3-96ce-f9d3c355d4d1/vol9 /mnt/disks/58759186-ffab-42a3-96ce-f9d3c355d4d1_vol9 none bind 0 0
/mnt/58759186-ffab-42a3-96ce-f9d3c355d4d1/vol10 /mnt/disks/58759186-ffab-42a3-96ce-f9d3c355d4d1_vol10 none bind 0 0
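To double-check the bind mounts before moving on (an optional sanity check; the UUID will differ on your machine):
mount | grep /mnt/disks | wc -l    # expect 10, one per volN directory
findmnt /mnt/disks/${DISK_UUID}_vol1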
- Create the local-volume-provisioner for tidb-operator
kubectl apply -f manifests/local-dind/local-volume-provisioner.yaml
kubectl get po -n kube-system -l app=local-volume-provisioner
kubectl get pv --all-namespaces | grep local-storage
# Expected output
local-pv-105327f3 39Gi RWO Delete Available local-storage 91s
local-pv-137dd513 39Gi RWO Delete Available local-storage 91s
local-pv-198df81a 39Gi RWO Delete Available local-storage 92s
local-pv-1ccfc7b8 39Gi RWO Delete Available local-storage 92s
local-pv-1eedfd0c 39Gi RWO Delete Available local-storage 92s
local-pv-21ebe8d3 39Gi RWO Delete Available local-storage 92s
local-pv-26700c8c 39Gi RWO Delete Available local-storage 91s
local-pv-2c866a2b 39Gi RWO Delete Bound tidb/tikv-tidb-cluster-tikv-1 local-storage 90s
local-pv-332165f7 39Gi RWO Delete Available local-storage 91s
local-pv-337dc036 39Gi RWO Delete Available local-storage 91s
local-pv-5160f51f 39Gi RWO Delete Available local-storage 92s
local-pv-67727d25 39Gi RWO Delete Available local-storage 91s
local-pv-68796375 39Gi RWO Delete Available local-storage 92s
local-pv-6a58a870 39Gi RWO Delete Available local-storage 91s
local-pv-6e6794e6 39Gi RWO Delete Available local-storage 92s
local-pv-794165b7 39Gi RWO Delete Available local-storage 91s
local-pv-7f623e89 39Gi RWO Delete Available local-storage 91s
local-pv-81dad462 39Gi RWO Delete Available local-storage 92s
local-pv-9af9c126 39Gi RWO Delete Available local-storage 91s
local-pv-a3786a90 39Gi RWO Retain Bound tidb/pd-tidb-cluster-pd-2 local-storage 92s
local-pv-b974816a 39Gi RWO Retain Bound tidb/tikv-tidb-cluster-tikv-0 local-storage 92s
local-pv-bc37b3dd 39Gi RWO Delete Available local-storage 91s
local-pv-c975c109 39Gi RWO Delete Available local-storage 91s
local-pv-db3102fc 39Gi RWO Delete Available local-storage 91s
local-pv-e1afde46 39Gi RWO Retain Bound tidb/pd-tidb-cluster-pd-1 local-storage 91s
local-pv-e2f4bb4d 39Gi RWO Delete Available local-storage 91s
local-pv-e59e55a8 39Gi RWO Retain Bound tidb/pd-tidb-cluster-pd-0 local-storage 91s
local-pv-ece22d2 39Gi RWO Delete Available local-storage 90s
local-pv-ecf4dd59 39Gi RWO Delete Available local-storage 91s
local-pv-f1c0babe 39Gi RWO Delete Available local-storage 92s
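The PVs above belong to the local-storage StorageClass. If no PVs appear, checking the StorageClass and the provisioner's logs is a reasonable first step (the label matches the one used in the kubectl get po command above):
kubectl get storageclass local-storage
kubectl logs -n kube-system -l app=local-volume-provisioner --tail=50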
Deploy the TiDB Cluster
- With helm and TiDB Operator, deploying a TiDB cluster is straightforward
helm install charts/tidb-cluster --name=tidb-cluster --namespace=tidb --version=v1.0.0-rc.1
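If you need to customize the cluster (replica counts, storage class, resource limits, and so on), you can dump the chart's default values, edit them, and pass the file back in. A sketch only; the available keys depend on the chart version:
helm inspect values charts/tidb-cluster > /tmp/tidb-cluster-values.yaml
# edit /tmp/tidb-cluster-values.yaml as needed, then install with the overrides
helm install charts/tidb-cluster --name=tidb-cluster --namespace=tidb --version=v1.0.0-rc.1 -f /tmp/tidb-cluster-values.yaml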
- Wait a few minutes for all TiDB components to be created and reach the ready state; you can watch the progress with the following command
watch kubectl get pods --namespace tidb -l app.kubernetes.io/instance=tidb-cluster -o wide
# Expected output
tidb-cluster-discovery-d7498f865-mmjnc 1/1 Running 0 39s 10.244.2.3 kubernetes-node-03 <none> <none>
tidb-cluster-monitor-76f98d655d-rv8kf 2/2 Running 0 39s 10.244.140.69 kubernetes-node-02 <none> <none>
tidb-cluster-pd-0 1/1 Running 0 36s 10.244.2.4 kubernetes-node-03 <none> <none>
tidb-cluster-pd-1 1/1 Running 0 36s 10.244.140.70 kubernetes-node-02 <none> <none>
tidb-cluster-pd-2 1/1 Running 0 36s 10.244.141.195 kubernetes-node-01 <none> <none>
- Get cluster information
kubectl get tidbcluster -n tidb
# Expected output
NAME PD STORAGE READY DESIRE TIKV STORAGE READY DESIRE TIDB READY DESIRE
tidb-cluster pingcap/pd:v3.0.1 1Gi 3 3 pingcap/tikv:v3.0.1 10Gi 3 3 pingcap/tidb:v3.0.1 2 2
kubectl get statefulset -n tidb
# Expected output
NAME READY AGE
tidb-cluster-pd 3/3 15m
tidb-cluster-tidb 2/2 11m
tidb-cluster-tikv 3/3 14m
kubectl get service -n tidb
# Expected output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
tidb-cluster-discovery ClusterIP 10.98.237.230 <none> 10261/TCP 16m
tidb-cluster-grafana NodePort 10.108.6.182 <none> 3000:31671/TCP 16m
tidb-cluster-pd ClusterIP 10.106.96.125 <none> 2379/TCP 16m
tidb-cluster-pd-peer ClusterIP None <none> 2380/TCP 16m
tidb-cluster-prometheus NodePort 10.104.16.19 <none> 9090:31392/TCP 16m
tidb-cluster-tidb NodePort 10.100.103.145 <none> 4000:31652/TCP,10080:30100/TCP 16m
tidb-cluster-tidb-peer ClusterIP None <none> 10080/TCP 12m
tidb-cluster-tikv-peer ClusterIP None <none> 20160/TCP 14m
kubectl get configmap -n tidb
# Expected output
NAME DATA AGE
tidb-cluster-monitor 5 17m
tidb-cluster-monitor-dashboard-extra-v3 2 17m
tidb-cluster-monitor-dashboard-v2 5 17m
tidb-cluster-monitor-dashboard-v3 5 17m
tidb-cluster-pd-aa6df71f 2 17m
tidb-cluster-tidb 2 17m
tidb-cluster-tidb-a4c4bb14 2 17m
tidb-cluster-tikv-e0d21970 2 17m
kubectl get pod -n tidb
# Expected output
NAME READY STATUS RESTARTS AGE
tidb-cluster-discovery-d7498f865-zmqj8 1/1 Running 0 18m
tidb-cluster-monitor-76f98d655d-x2drs 2/2 Running 0 18m
tidb-cluster-pd-0 1/1 Running 0 18m
tidb-cluster-pd-1 1/1 Running 0 17m
tidb-cluster-pd-2 1/1 Running 1 17m
tidb-cluster-tidb-0 2/2 Running 0 14m
tidb-cluster-tidb-1 2/2 Running 0 14m
tidb-cluster-tikv-0 1/1 Running 0 16m
tidb-cluster-tikv-1 1/1 Running 0 16m
tidb-cluster-tikv-2 1/1 Running 0 16m
Access the Database
You can access the TiDB cluster by exposing the service to the host with kubectl port-forward. The port format in the command is <host port>:<k8s service port>.
Note: If you deployed the DinD environment on a remote host rather than your local PC, you may not be able to reach the remote host's services via localhost. With kubectl 1.13 or later, you can add the --address 0.0.0.0 option to kubectl port-forward to expose the port on 0.0.0.0 instead of the default 127.0.0.1.
kubectl port-forward svc/tidb-cluster-tidb 4000:4000 --namespace=tidb --address 0.0.0.0
- Version: MySQL 5.7.25
- Account: root
- Password: (empty)
Note: TiDB currently only supports MySQL 5.7 clients; a MySQL 8.0 client will report ERROR 1105 (HY000): Unknown charset id 255
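With the port-forward running, you can connect from the host using a MySQL 5.7 client, for example:
mysql -h 127.0.0.1 -P 4000 -u root
# a quick smoke test once connected:
# mysql> SELECT tidb_version();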
View the Monitoring Dashboard
Use kubectl to expose the Grafana service port
kubectl port-forward svc/tidb-cluster-grafana 3000:3000 --namespace=tidb --address 0.0.0.0
Open http://192.168.141.110:3000 (the address of the host running the port-forward) in a browser to access the Grafana dashboard
- Account: admin
- Password: admin
Destroy the TiDB Cluster
When testing is finished, destroy the TiDB cluster with the following command
helm delete tidb-cluster --purge
Note: The command above only deletes the running Pods; the data is still retained.
If you no longer need the data, you can clean it up with the following commands (this deletes the data permanently)
kubectl get pv -l app.kubernetes.io/namespace=tidb -o name | xargs -I {} kubectl patch {} -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}' && \
kubectl delete pvc --namespace tidb --all
Other Notes
- 1 node(s) had taints that the pod didn't tolerate: by default Kubernetes does not schedule workloads onto the master node; you can explicitly allow it (kubernetes-master below is the name of the master node)
# Allow Pods to be scheduled on the master node
kubectl taint nodes --all node-role.kubernetes.io/master-
# Prevent Pods from being scheduled on the master node
kubectl taint nodes kubernetes-master node-role.kubernetes.io/master=true:NoSchedule
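To check whether the taint is currently set (kubernetes-master is this guide's node name; substitute your own):
kubectl describe node kubernetes-master | grep Taints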
- Delete all PVs
kubectl delete pv --all
- Unmount the mounted directories
DISK_UUID=$(blkid -s UUID -o value /dev/sdb)
for i in $(seq 1 10); do
sudo umount /mnt/disks/${DISK_UUID}_vol${i}
done
rm -fr /mnt
- Remove the mount entries from /etc/fstab
UUID=062815c7-b202-41ef-a5fb-77c783792737 / ext4 defaults 0 0
UUID=e8717c59-6d9b-4709-9303-b2161a57912b /boot ext4 defaults 0 0
#/swap.img none swap sw 0 0
# After unmounting, delete the following entries
UUID=58759186-ffab-42a3-96ce-f9d3c355d4d1 /mnt/58759186-ffab-42a3-96ce-f9d3c355d4d1 ext4 defaults 0 2
/mnt/58759186-ffab-42a3-96ce-f9d3c355d4d1/vol1 /mnt/disks/58759186-ffab-42a3-96ce-f9d3c355d4d1_vol1 none bind 0 0
/mnt/58759186-ffab-42a3-96ce-f9d3c355d4d1/vol2 /mnt/disks/58759186-ffab-42a3-96ce-f9d3c355d4d1_vol2 none bind 0 0
/mnt/58759186-ffab-42a3-96ce-f9d3c355d4d1/vol3 /mnt/disks/58759186-ffab-42a3-96ce-f9d3c355d4d1_vol3 none bind 0 0
/mnt/58759186-ffab-42a3-96ce-f9d3c355d4d1/vol4 /mnt/disks/58759186-ffab-42a3-96ce-f9d3c355d4d1_vol4 none bind 0 0
/mnt/58759186-ffab-42a3-96ce-f9d3c355d4d1/vol5 /mnt/disks/58759186-ffab-42a3-96ce-f9d3c355d4d1_vol5 none bind 0 0
/mnt/58759186-ffab-42a3-96ce-f9d3c355d4d1/vol6 /mnt/disks/58759186-ffab-42a3-96ce-f9d3c355d4d1_vol6 none bind 0 0
/mnt/58759186-ffab-42a3-96ce-f9d3c355d4d1/vol7 /mnt/disks/58759186-ffab-42a3-96ce-f9d3c355d4d1_vol7 none bind 0 0
/mnt/58759186-ffab-42a3-96ce-f9d3c355d4d1/vol8 /mnt/disks/58759186-ffab-42a3-96ce-f9d3c355d4d1_vol8 none bind 0 0
/mnt/58759186-ffab-42a3-96ce-f9d3c355d4d1/vol9 /mnt/disks/58759186-ffab-42a3-96ce-f9d3c355d4d1_vol9 none bind 0 0
/mnt/58759186-ffab-42a3-96ce-f9d3c355d4d1/vol10 /mnt/disks/58759186-ffab-42a3-96ce-f9d3c355d4d1_vol10 none bind 0 0
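Rather than editing the file by hand, you can remove every entry containing the disk UUID in one step (a hedged one-liner; back up /etc/fstab first and make sure DISK_UUID is still set in your shell):
sudo cp /etc/fstab /etc/fstab.bak
sudo sed -i "/${DISK_UUID}/d" /etc/fstab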