Scenario
If a VM's Ceph-backed root volume is damaged and you want to run xfs_repair on it or export its data, you can attach the damaged system disk as an ordinary data disk to another VM and do the work there.
One way to do this is as follows:
[root@controller-1 ~]# nova show 9960baeb-dbfb-4105-937c-6e195655a2a4
+--------------------------------------+---------------------------------------------------------------------------------+
| Property | Value |
+--------------------------------------+---------------------------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | 5F_DL360 |
| OS-EXT-SRV-ATTR:host | compute-d02-8.domain.tld |
| OS-EXT-SRV-ATTR:hostname | test0723 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | compute-d02-8.domain.tld |
| OS-EXT-SRV-ATTR:instance_name | instance-0000d779 |
| OS-EXT-SRV-ATTR:kernel_id | |
| OS-EXT-SRV-ATTR:launch_index | 0 |
| OS-EXT-SRV-ATTR:ramdisk_id | |
| OS-EXT-SRV-ATTR:reservation_id | r-oicjl9is |
| OS-EXT-SRV-ATTR:root_device_name | /dev/vda |
| OS-EXT-SRV-ATTR:user_data | - |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-STS:task_state | - |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2019-07-23T08:22:09.000000 |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | True |
| created | 2019-07-23T08:21:21Z |
| description | - |
| flavor | w-4C-4G-100G (dba0feff-56dd-4f05-b5d7-89411daa4f12) |
| hostId | 1357e7881ac3b18eebf2a4811bd14c31867751f17cd8a799ada62cf9 |
| host_status | UP |
| id | 9960baeb-dbfb-4105-937c-6e195655a2a4 |
| image | Attempt to boot from volume - no image supplied |
| key_name | - |
| locked | False |
| metadata | {} |
| name | test0723 |
| os-extended-volumes:volumes_attached | [{"id": "0f9eca69-95d4-4a46-ab8c-868660934641", "delete_on_termination": true}] |
| progress | 0 |
| status | ACTIVE |
| tenant_id | a7ad64c8e28e4d218f4f1f7773112070 |
| updated | 2019-07-23T08:22:10Z |
| user_id | b4d91c89ad23445785af2a68a2f94804 |
+--------------------------------------+---------------------------------------------------------------------------------+
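The os-extended-volumes:volumes_attached field above carries the volume ID used throughout the rest of this procedure; a quick grep pulls out just that field (a minimal sketch using the same instance UUID):
# Show only the attached-volume list for the instance
nova show 9960baeb-dbfb-4105-937c-6e195655a2a4 | grep volumes_attached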
Stop the VM:
nova stop 9960baeb-dbfb-4105-937c-6e195655a2a4
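It is worth confirming the instance has actually stopped before editing the database (a small check; in nova, power_state 4 means SHUTDOWN and vm_state should read stopped):
# Both fields should reflect the stopped state before continuing
nova show 9960baeb-dbfb-4105-937c-6e195655a2a4 | egrep 'vm_state|power_state'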
Record the volume's entries in the nova and cinder databases, so they can be restored afterwards:
mysql> select * from cinder.volume_attachment where volume_id ="0f9eca69-95d4-4a46-ab8c-868660934641"\G
*************************** 1. row ***************************
created_at: 2019-07-23 08:22:03
updated_at: 2019-07-23 08:22:03
deleted_at: NULL
deleted: 0
id: 97e92940-0952-4d22-86d4-9296c50831e0
volume_id: 0f9eca69-95d4-4a46-ab8c-868660934641
attached_host: NULL
instance_uuid: 9960baeb-dbfb-4105-937c-6e195655a2a4
mountpoint: /dev/vda
attach_time: 2019-07-23 08:22:03
detach_time: NULL
attach_mode: rw
attach_status: attached
1 row in set (0.00 sec)
mysql> select * from nova.block_device_mapping where volume_id = '0f9eca69-95d4-4a46-ab8c-868660934641'\G
*************************** 1. row ***************************
created_at: 2019-07-23 08:21:21
updated_at: 2019-07-23 08:22:04
deleted_at: NULL
id: 68664
device_name: /dev/vda
delete_on_termination: 1
snapshot_id: NULL
volume_id: 0f9eca69-95d4-4a46-ab8c-868660934641
volume_size: 50
no_device: 0
connection_info: {"driver_volume_type": "rbd", "connector": {"initiator": "iqn.1993-08.org.debian:01:7f7f98e49ba", "ip": "10.125.1.223", "platform": "x86_64", "host": "compute-d02-8.domain.tld", "os_type": "linux2", "multipath": false}, "serial": "0f9eca69-95d4-4a46-ab8c-868660934641", "data": {"secret_type": "ceph", "name": "volumes/volume-0f9eca69-95d4-4a46-ab8c-868660934641", "encrypted": false, "secret_uuid": "a5d0dd94-57c4-ae55-ffe0-7e3732a24455", "qos_specs": null, "hosts": ["10.125.136.2", "10.125.136.7", "10.125.136.12"], "volume_id": "0f9eca69-95d4-4a46-ab8c-868660934641", "conffile": "/etc/ceph/ceph.conf", "auth_enabled": true, "access_mode": "rw", "auth_username": "volumes", "ports": ["6789", "6789", "6789"]}}
instance_uuid: 9960baeb-dbfb-4105-937c-6e195655a2a4
deleted: 0
source_type: image
destination_type: volume
guest_format: NULL
device_type: disk
disk_bus: virtio
boot_index: 0
image_id: f4a18a5e-5352-40e2-8ff1-ca2ac7ca5ed4
1 row in set (0.00 sec)
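Before changing anything, it helps to dump both rows to a file so every original value is at hand for the restore step later (the file path is arbitrary; this assumes the same passwordless mysql access as above):
# Keep a pre-change copy of both records for later comparison
mysql -e "select * from cinder.volume_attachment where volume_id='0f9eca69-95d4-4a46-ab8c-868660934641'\G" > /root/vol-records-before.txt
mysql -e "select * from nova.block_device_mapping where volume_id='0f9eca69-95d4-4a46-ab8c-868660934641'\G" >> /root/vol-records-before.txt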
Change the mountpoint in the cinder database:
mysql> update cinder.volume_attachment set mountpoint='/dev/vdb' where volume_id = "0f9eca69-95d4-4a46-ab8c-868660934641";
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0
Update the nova database, changing the device name and the boot index so the disk is no longer treated as the boot device:
mysql> update nova.block_device_mapping set device_name = '/dev/vdb', boot_index=1 where volume_id = '0f9eca69-95d4-4a46-ab8c-868660934641';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0
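A quick read-back confirms both tables now describe the volume as a plain data disk (a minimal sanity check):
# Expect mountpoint /dev/vdb in cinder and device_name /dev/vdb, boot_index 1 in nova
mysql -e "select mountpoint from cinder.volume_attachment where volume_id='0f9eca69-95d4-4a46-ab8c-868660934641'\G"
mysql -e "select device_name, boot_index from nova.block_device_mapping where volume_id='0f9eca69-95d4-4a46-ab8c-868660934641'\G"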
Detach the volume from the broken VM, then attach it to a test VM:
[root@controller-1 ~]# nova volume-detach 9960baeb-dbfb-4105-937c-6e195655a2a4 0f9eca69-95d4-4a46-ab8c-868660934641
[root@controller-1 ~]# nova volume-attach 22b53b03-fd02-437b-9238-5c951c048cc5 0f9eca69-95d4-4a46-ab8c-868660934641
+----------+--------------------------------------+
| Property | Value |
+----------+--------------------------------------+
| device | /dev/vdb |
| id | 0f9eca69-95d4-4a46-ab8c-868660934641 |
| serverId | 22b53b03-fd02-437b-9238-5c951c048cc5 |
| volumeId | 0f9eca69-95d4-4a46-ab8c-868660934641 |
+----------+--------------------------------------+
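On the test VM, lsblk gives a quick overview of the block devices, which makes the newly attached 50G disk easy to spot (the column list here is just one convenient choice):
# List devices with size and mountpoint to identify the new disk
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT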
Inside the test VM, an extra disk is now visible. Note that it shows up as vdc rather than the /dev/vdb reported by the attach call, because the config drive (config_drive=True above) already occupies vdb:
[root@test-allocation-ratio-3 ~]# blkid
/dev/vda1: UUID="3e109aa3-f171-4614-ad07-c856f20f9d25" TYPE="xfs"
/dev/vdb: SEC_TYPE="msdos" LABEL="config-2" UUID="4B4D-6734" TYPE="vfat"
/dev/vdc1: UUID="3e109aa3-f171-4614-ad07-c856f20f9d25" TYPE="xfs"
However, vda1 and vdc1 report the same filesystem UUID, so mounting vdc1 will fail: XFS refuses to mount a filesystem whose UUID duplicates one that is already mounted.
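If read access is all that is needed, XFS's nouuid mount option sidesteps the conflict without touching the superblock, as an alternative to the UUID rewrite below:
# Mount despite the duplicate UUID (XFS-specific option)
mount -o nouuid /dev/vdc1 /tmp/test/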
Regenerate the UUID (write down the original UUID first, so it can be restored later):
[root@test-allocation-ratio-3 ~]# xfs_admin -U 8c922c24-7110-4ba8-9af7-d275ded029b9 /dev/vdc1
Clearing log and setting UUID
writing all SBs
new UUID = 8c922c24-7110-4ba8-9af7-d275ded029b9
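If the goal is repair rather than just data export, this is the point to run xfs_repair, while the filesystem is still unmounted (use -n first for a dry run; -L zeroes the log and should be a last resort, since it can discard recent metadata):
# Check first, then repair the unmounted filesystem
xfs_repair -n /dev/vdc1
xfs_repair /dev/vdc1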
Mount vdc1 inside the VM:
[root@test-allocation-ratio-3 ~]# mount /dev/vdc1 /tmp/test/
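If the aim is to export data rather than edit it in place, a simple archive of the mounted tree also works (paths here are illustrative):
# Copy the needed directories out of the damaged disk
tar -C /tmp/test -czf /root/vdc1-export.tar.gz etc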
Inspect the mounted data and make a test modification:
[root@test-allocation-ratio-3 ~]# cat /tmp/test/etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Jan 17 22:18:46 2017
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=3e109aa3-f171-4614-ad07-c856f20f9d25 / xfs defaults 0 0
[root@test-allocation-ratio-3 ~]# echo "## add by test" >> /tmp/test/etc/fstab
[root@test-allocation-ratio-3 ~]# cat /tmp/test/etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Jan 17 22:18:46 2017
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=3e109aa3-f171-4614-ad07-c856f20f9d25 / xfs defaults 0 0
## add by test
[root@test-allocation-ratio-3 ~]# umount /dev/vdc1
[root@test-allocation-ratio-3 ~]# xfs_admin -U 3e109aa3-f171-4614-ad07-c856f20f9d25 /dev/vdc1
Clearing log and setting UUID
writing all SBs
new UUID = 3e109aa3-f171-4614-ad07-c856f20f9d25   # the original UUID is restored
Detach the disk and restore the database records:
[root@controller-1 ~]# nova volume-detach 22b53b03-fd02-437b-9238-5c951c048cc5 0f9eca69-95d4-4a46-ab8c-868660934641
mysql> update cinder.volume_attachment set mountpoint='/dev/vda' ,deleted=0 where id = "97e92940-0952-4d22-86d4-9296c50831e0";
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0
mysql> update nova.block_device_mapping set device_name = '/dev/vda', boot_index=0 ,deleted=0 where id= '68664';
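Reading both rows back and comparing them with the dump taken at the start makes the restore easy to verify (a minimal check):
# Values should match the pre-change records
mysql -e "select mountpoint, attach_status, deleted from cinder.volume_attachment where id='97e92940-0952-4d22-86d4-9296c50831e0'\G"
mysql -e "select device_name, boot_index, deleted from nova.block_device_mapping where id=68664\G"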
Note: the detach/attach cycle adds extra rows to the nova and cinder tables; compare them against the records captured earlier and adjust so they match the original state.
Hard-reboot the original VM and check the modified file:
[root@controller-1 ~]# nova reboot 9960baeb-dbfb-4105-937c-6e195655a2a4 --hard
[root@test0723 ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Jan 17 22:18:46 2017
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=3e109aa3-f171-4614-ad07-c856f20f9d25 / xfs defaults 0 0
## add by test
As shown, the fstab file inside the test0723 VM has been modified as expected.
Summary
1. Modify the databases so the system disk becomes an ordinary data disk.
2. Attach it to a test VM.
3. An extra disk appears inside the test VM; change its filesystem UUID, then mount it.
4. Modify or export the mounted data.
5. When finished, umount the disk and restore its original UUID.
6. Restore the modified nova and cinder database records.
7. Hard-reboot the original VM.
A condensed command sketch of the whole flow is shown below.
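This sketch strings the steps together, with the UUIDs from this walkthrough standing in for real values; it assumes the same passwordless mysql access used above.
# 1. Stop the broken VM (record the volume's DB rows first)
nova stop 9960baeb-dbfb-4105-937c-6e195655a2a4
# 2. Turn the root disk into an ordinary data disk in the databases
mysql -e "update cinder.volume_attachment set mountpoint='/dev/vdb' where volume_id='0f9eca69-95d4-4a46-ab8c-868660934641'"
mysql -e "update nova.block_device_mapping set device_name='/dev/vdb', boot_index=1 where volume_id='0f9eca69-95d4-4a46-ab8c-868660934641'"
# 3. Move the volume over to the test VM
nova volume-detach 9960baeb-dbfb-4105-937c-6e195655a2a4 0f9eca69-95d4-4a46-ab8c-868660934641
nova volume-attach 22b53b03-fd02-437b-9238-5c951c048cc5 0f9eca69-95d4-4a46-ab8c-868660934641
# 4. Inside the test VM: re-UUID, repair or export, then restore the UUID
xfs_admin -U $(uuidgen) /dev/vdc1
mount /dev/vdc1 /tmp/test/        # ... work on the data ...
umount /dev/vdc1
xfs_admin -U 3e109aa3-f171-4614-ad07-c856f20f9d25 /dev/vdc1
# 5. Detach from the test VM, restore the DB rows, hard-reboot the VM
nova volume-detach 22b53b03-fd02-437b-9238-5c951c048cc5 0f9eca69-95d4-4a46-ab8c-868660934641
mysql -e "update cinder.volume_attachment set mountpoint='/dev/vda', deleted=0 where id='97e92940-0952-4d22-86d4-9296c50831e0'"
mysql -e "update nova.block_device_mapping set device_name='/dev/vda', boot_index=0, deleted=0 where id=68664"
nova reboot 9960baeb-dbfb-4105-937c-6e195655a2a4 --hard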