The fault:
Method 1: handle it on the failed host itself
[root@node2 ~]# gluster volume status
Status of volume: testvol
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick node1:/export/sdb1/brick 49152 0 Y 2684
Brick node2:/export/sdb1/brick N/A N/A N N/A # the sdb brick shows as offline
Brick node1:/export/sdc1/brick 49153 0 Y 2703
Brick node2:/export/sdc1/brick 49153 0 Y 2704
Brick node3:/export/sdb1/brick 49152 0 Y 2197
Brick node4:/export/sdb1/brick 49152 0 Y 2207
Brick node3:/export/sdc1/brick 49153 0 Y 2216
Brick node4:/export/sdc1/brick 49153 0 Y 2226
Self-heal Daemon on localhost N/A N/A Y 1393
Self-heal Daemon on node1 N/A N/A Y 3090
Self-heal Daemon on node4 N/A N/A Y 2246
Self-heal Daemon on node3 N/A N/A Y 2236
Task Status of Volume testvol
------------------------------------------------------------------------------
Task : Rebalance
ID : 8b3a04a0-0449-4424-a458-29f602571ea2
Status : completed
From the output above, Brick node2:/export/sdb1/brick is offline, so something has gone wrong.
Solution:
1. Create a new data directory: format the spare disk and mount it (run on the failed host)
mkfs.xfs -i size=512 /dev/sdd1    # format the spare disk
mkdir -p /export/sdd1/brick    # create the brick directory
mount /dev/sdd1 /export/sdd1    # mount it
echo "/dev/sdd1 /export/sdd1 xfs defaults 0 0" >> /etc/fstab    # add it to fstab so it mounts at boot
2. Query the extended attributes of the failed brick's directory (run on a healthy host); see the sketch below.
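No command is listed for this step in the original write-up; the query it refers to is the same getfattr call shown in step 4. A minimal sketch, run on the healthy replica (node1 in this example):
getfattr -d -m . -e hex /export/sdb1/brick/    # dump all extended attributes of the healthy brick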
3. Mount the volume and trigger self-heal (run on the failed host)
[root@node2 ~]# mount -t glusterfs node2:/testvol /mnt    # any unused mount point will do; node2:/testvol is the volume created earlier
[root@node2 ~]# mkdir /mnt/test    # create a directory that does not yet exist in the volume, then remove it; adjust the path to your own mount point
[root@node2 ~]# rmdir /mnt/test
[root@node2 ~]# setfattr -n trusted.non-existent-key -v abc /mnt    # set a dummy extended attribute to trigger self-heal
[root@node2 ~]# setfattr -x trusted.non-existent-key /mnt    # then remove it again
These dummy operations on the volume root mark pending changes in the healthy bricks' changelogs, which is what step 4 inspects and what later drives the heal onto the replacement brick.
4. Check whether changes are pending against the failed brick
Run on a healthy host:
[root@node1 gluster]# getfattr -d -m. -e hex /export/sdb1/brick/    # /export/sdb1/brick/ is the path where you created the brick
getfattr: Removing leading '/' from absolute path names
# file: export/sdb1/brick/
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.testvol-client-1=0x000000000000000400000004 <<---- xattrs are marked from source brick node1:/export/sdb1/brick--->>
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000000000003ffffffe
trusted.glusterfs.dht.commithash=0x3334343336363233303800
trusted.glusterfs.volume-id=0xe107222fa1134606a9a7fcb16e4c0709
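The trusted.afr.testvol-client-1 value above encodes the pending operations. A sketch of decoding it by hand, assuming the commonly documented AFR changelog layout of three big-endian 32-bit counters (data, metadata, entry):
val=000000000000000400000004    # trusted.afr.testvol-client-1 without the 0x prefix
echo "data pending:     $((16#${val:0:8}))"    # first 4 bytes
echo "metadata pending: $((16#${val:8:8}))"    # next 4 bytes
echo "entry pending:    $((16#${val:16:8}))"   # last 4 bytes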
Run on the failed host (a healthy host works as well):
[root@node2 gluster]# gluster volume heal testvol info
Brick node1:/export/sdb1/brick
/
Status: Connected
Number of entries: 1
Brick node2:/export/sdb1/brick
Status: Transport endpoint is not connected
Number of entries: - # the status shows the transport endpoint is not connected
Brick node1:/export/sdc1/brick
Status: Connected
Number of entries: 0
Brick node2:/export/sdc1/brick
Status: Connected
Number of entries: 0
Brick node3:/export/sdb1/brick
Status: Connected
Number of entries: 0
Brick node4:/export/sdb1/brick
Status: Connected
Number of entries: 0
Brick node3:/export/sdc1/brick
Status: Connected
Number of entries: 0
Brick node4:/export/sdc1/brick
Status: Connected
Number of entries: 0
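With eight bricks the heal-info output gets long. A quick way to pick out disconnected bricks or non-zero heal counts (a sketch that simply greps the human-readable output shown above, so it may need adjusting between versions):
gluster volume heal testvol info | grep -B1 "not connected"    # bricks whose transport is down
gluster volume heal testvol info | grep "Number of entries" | grep -v ": 0"    # bricks with pending heal entries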
5. Complete the operation with a forced commit
Run on the failed host:
[root@node2 ~]# gluster volume replace-brick testvol node2:/export/sdb1/brick node2:/export/sdd1/brick commit force
volume replace-brick: success: replace-brick commit force operation successful # success is reported
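For reference, the general form of the command, so it can be adapted to other bricks and hosts, is sketched below; if self-heal does not start on its own after the commit, it can usually be forced (the full-heal variant is version dependent):
gluster volume replace-brick <VOLNAME> <host>:<old-brick-path> <host>:<new-brick-path> commit force
gluster volume heal testvol full    # optional: force a full self-heal so data is copied onto the new brick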
[root@node2 ~]# gluster volume status
Status of volume: testvol
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick node1:/export/sdb1/brick 49152 0 Y 2684
Brick node2:/export/sdd1/brick 49154 0 Y 10298 # the online brick is now on sdd
Brick node1:/export/sdc1/brick 49153 0 Y 2703
Brick node2:/export/sdc1/brick 49153 0 Y 2704
Brick node3:/export/sdb1/brick 49152 0 Y 2197
Brick node4:/export/sdb1/brick 49152 0 Y 2207
Brick node3:/export/sdc1/brick 49153 0 Y 2216
Brick node4:/export/sdc1/brick 49153 0 Y 2226
Self-heal Daemon on localhost N/A N/A Y 10307
Self-heal Daemon on node3 N/A N/A Y 9728
Self-heal Daemon on node1 N/A N/A Y 3284
Self-heal Daemon on node4 N/A N/A Y 9736
Task Status of Volume testvol
------------------------------------------------------------------------------
Task : Rebalance
ID : 8b3a04a0-0449-4424-a458-29f602571ea2
Status : not started
Run on the healthy host:
[root@node1 gluster]# getfattr -d -m. -e hex /export/sdb1/brick/
getfattr: Removing leading '/' from absolute path names
# file: export/sdb1/brick/
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.testvol-client-1=0x000000000000000000000000 <<---- Pending changelogs are cleared.
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000000000003ffffffe
trusted.glusterfs.dht.commithash=0x3334343336363233303800
trusted.glusterfs.volume-id=0xe107222fa1134606a9a7fcb16e4c0709
[root@node2 ~]# gluster volume heal testvol info
Brick node1:/export/sdb1/brick
Status: Connected
Number of entries: 0
Brick node2:/export/sdd1/brick
Status: Connected
Number of entries: 0
Brick node1:/export/sdc1/brick
Status: Connected
Number of entries: 0
Brick node2:/export/sdc1/brick
Status: Connected
Number of entries: 0
Brick node3:/export/sdb1/brick
Status: Connected
Number of entries: 0
Brick node4:/export/sdb1/brick
Status: Connected
Number of entries: 0
Brick node3:/export/sdc1/brick
Status: Connected
Number of entries: 0
Brick node4:/export/sdc1/brick
Status: Connected
Number of entries: 0
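Putting Method 1 together, the whole same-host replacement can be scripted roughly as follows (a sketch under the same assumptions as above: volume testvol, failed brick node2:/export/sdb1/brick, spare disk /dev/sdd1; the verification steps 2-4 are omitted and the names must be adjusted before use):
set -e    # stop on the first error
mkfs.xfs -i size=512 /dev/sdd1    # prepare the spare disk as the new brick
mkdir -p /export/sdd1/brick
mount /dev/sdd1 /export/sdd1
echo "/dev/sdd1 /export/sdd1 xfs defaults 0 0" >> /etc/fstab
gluster volume replace-brick testvol \
    node2:/export/sdb1/brick node2:/export/sdd1/brick commit force    # swap the failed brick in place
gluster volume heal testvol info    # then watch the entries drain to 0 on every brick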
Method 2: replace across hosts
Assume node2's sdb1 has failed.
Add a new host, node5; formatting node5's disk, mounting it, and installing gluster follow the same steps as above (see the sketch below).
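The formatting step that is "the same as above" would, under the same assumptions as Method 1 (node5's spare disk being /dev/sdb1, as in the mount command further down), look like this:
mkfs.xfs -i size=512 /dev/sdb1    # format node5's disk before mounting it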
Add node5 to the trusted pool:
[root@node1 brick]# gluster peer probe node5
peer probe: success.
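Before moving a brick onto node5 it is worth confirming that the probe actually took effect (a sketch):
gluster peer status    # node5 should be listed as connected
gluster pool list      # compact view of all members of the trusted pool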
Mount the disk:
[root@node5 ~]# mkdir -p /export/sdb1 && mount /dev/sdb1 /export/sdb1
[root@node5 ~]# echo "/dev/sdb1 /export/sdb1 xfs defaults 0 0" >> /etc/fstab
[root@node5 ~]# mount -a && mount
Run the following command:
[root@node5 ~]# gluster volume replace-brick testvol node2:/export/sdb1/brick node5:/export/sdb1/brick commit force
volume replace-brick: success: replace-brick commit force operation successful
The stand-in brick can simply stay in service; alternatively, once sdb1 has been repaired, the data can be moved back with the following command:
[root@node2 ~]# gluster volume replace-brick testvol node5:/export/sdb1/brick node2:/export/sdb1/brick commit force
volume replace-brick: success: replace-brick commit force operation successful
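After moving the data back, the heal should be checked again before trusting node2's brick (reusing the command from Method 1):
gluster volume heal testvol info    # every brick should eventually report 'Number of entries: 0'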
Status before moving the data back:
[root@node1 brick]# gluster volume status
Status of volume: testvol
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick node1:/export/sdb1/brick 49152 0 Y 2085
Brick node5:/export/sdb1/brick 49152 0 Y 18229
Brick node1:/export/sdc1/brick 49153 0 Y 2076
Brick node2:/export/sdc1/brick 49153 0 Y 2131
Brick node3:/export/sdb1/brick 49152 0 Y 2197
Brick node4:/export/sdb1/brick 49152 0 Y 2207
Brick node3:/export/sdc1/brick 49153 0 Y 2216
Brick node4:/export/sdc1/brick 49153 0 Y 2226
Self-heal Daemon on localhost N/A N/A Y 10565
Self-heal Daemon on node2 N/A N/A Y 2265
Self-heal Daemon on node3 N/A N/A Y 10416
Self-heal Daemon on node4 N/A N/A Y 10400
Self-heal Daemon on node5 N/A N/A Y 18238
Task Status of Volume testvol
------------------------------------------------------------------------------
Task : Rebalance
ID : 8b3a04a0-0449-4424-a458-29f602571ea2
Status : not started
Status after moving the data back:
[root@node1 gluster]# gluster volume status
Status of volume: testvol
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick node1:/export/sdb1/brick 49152 0 Y 2085
Brick node2:/export/sdb1/brick 49153 0 Y 10208 # the brick is back on node2
Brick node1:/export/sdc1/brick 49153 0 Y 2076
Brick node2:/export/sdc1/brick 49152 0 Y 3474
Brick node3:/export/sdb1/brick 49152 0 Y 2197
Brick node4:/export/sdb1/brick 49152 0 Y 2207
Brick node3:/export/sdc1/brick 49153 0 Y 2216
Brick node4:/export/sdc1/brick 49153 0 Y 2226
Self-heal Daemon on localhost N/A N/A Y 10684
Self-heal Daemon on node3 N/A N/A Y 10498
Self-heal Daemon on node5 N/A N/A Y 10075
Self-heal Daemon on node4 N/A N/A Y 10488
Self-heal Daemon on node2 N/A N/A Y 10201
Task Status of Volume testvol
------------------------------------------------------------------------------
Task : Rebalance
ID : 8b3a04a0-0449-4424-a458-29f602571ea2
Status : not started
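As a final sanity check that no brick was left offline after all the replacements, the status output can be filtered (a sketch that parses the human-readable output, so it may need adjusting between GlusterFS versions):
gluster volume status testvol | awk '/^Brick/ && $(NF-1) != "Y"'    # prints any brick that is not online; no output means all bricks are up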