Ceph PG data loss

Author: cloudFans | Published 2022-12-25 11:15
    
    
    [root@node1 ~]# ceph -s
      cluster:
        id:     52feab00-22e8-4f98-9840-6d9778d88d09
        health: HEALTH_ERR
                64/4832148 objects unfound (0.001%)
                Possible data damage: 21 pgs recovery_unfound
                Degraded data redundancy: 94544/9664755 objects degraded (0.978%), 21 pgs degraded, 21 pgs undersized
    
      services:
        mon: 3 daemons, quorum b,g,h (age 3w)
        mgr: a(active, since 3w)
        osd: 52 osds: 51 up (since 73m), 50 in (since 4w); 21 remapped pgs
        rgw: 4 daemons active (4 hosts, 1 zones)
    
      data:
        pools:   25 pools, 1521 pgs
        objects: 4.83M objects, 19 TiB
        usage:   40 TiB used, 57 TiB / 97 TiB avail
        pgs:     94544/9664755 objects degraded (0.978%)
                 64/4832148 objects unfound (0.001%)
                 1498 active+clean
                 21   active+recovery_unfound+undersized+degraded+remapped
                 2    active+clean+scrubbing+deep+repair
    
      io:
        client:   660 KiB/s rd, 5.0 MiB/s wr, 104 op/s rd, 481 op/s wr
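
The symptoms to focus on in this output are the 21 recovery_unfound PGs and the 64 unfound objects. A minimal next step, not shown in the original capture, is to expand the HEALTH_ERR summary and see exactly which PGs hold the unfound objects:

    # Not in the original post: `ceph health detail` expands the HEALTH_ERR
    # summary and names every PG that currently reports unfound objects.
    [root@node1 ~]# ceph health detail | grep -i unfound | head -n 25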
    
    
    [root@node1 ~]# ceph pg dump_stuck
    ok
    PG_STAT  STATE                                                 UP       UP_PRIMARY  ACTING  ACTING_PRIMARY
    52.1fd   active+recovery_unfound+undersized+degraded+remapped  [36,21]          36    [21]              21
    52.1cf   active+recovery_unfound+undersized+degraded+remapped  [36,27]          36    [27]              27
    12.71    active+recovery_unfound+undersized+degraded+remapped  [34,13]          34    [13]              13
    12.cd    active+recovery_unfound+undersized+degraded+remapped   [11,3]          11     [3]               3
    12.b6    active+recovery_unfound+undersized+degraded+remapped  [31,39]          31    [39]              39
    12.4f    active+recovery_unfound+undersized+degraded+remapped  [33,46]          33    [46]              46
    12.fc    active+recovery_unfound+undersized+degraded+remapped  [29,16]          29    [16]              16
    12.e4    active+recovery_unfound+undersized+degraded+remapped  [11,47]          11    [47]              47
    12.ba    active+recovery_unfound+undersized+degraded+remapped   [31,7]          31     [7]               7
    12.c0    active+recovery_unfound+undersized+degraded+remapped  [11,22]          11    [22]              22
    12.ab    active+recovery_unfound+undersized+degraded+remapped  [33,15]          33    [15]              15
    12.96    active+recovery_unfound+undersized+degraded+remapped  [23,18]          23    [18]              18
    87.43    active+recovery_unfound+undersized+degraded+remapped  [35,50]          35    [50]              50
    52.11    active+recovery_unfound+undersized+degraded+remapped  [41,52]          41    [52]              52
    12.6e    active+recovery_unfound+undersized+degraded+remapped   [31,6]          31     [6]               6
    12.dd    active+recovery_unfound+undersized+degraded+remapped  [11,37]          11    [37]              37
    52.2c    active+recovery_unfound+undersized+degraded+remapped  [36,27]          36    [27]              27
    52.d9    active+recovery_unfound+undersized+degraded+remapped  [41,27]          41    [27]              27
    12.55    active+recovery_unfound+undersized+degraded+remapped  [33,45]          33    [45]              45
    12.5f    active+recovery_unfound+undersized+degraded+remapped  [33,45]          33    [45]              45
    52.67    active+recovery_unfound+undersized+degraded+remapped  [36,21]          36    [21]              21
    
    
    # UP_PRIMARY refers to an OSD id
    
There are clearly some problem PGs. Ceph is (to a degree) a self-healing system, and from the output the data should be migrating from OSD 36 to OSD 21, so the UP_PRIMARY OSD (36) is most likely the one with the problem.
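
As a quick sanity check on that suspicion (commands not in the original capture), the standard CLI can report where osd.36 lives before restarting or inspecting it:

    # Not in the original post: locate osd.36 and its host before poking at it.
    [root@node1 ~]# ceph osd find 36
    [root@node1 ~]# ceph osd metadata 36 | grep -E '"hostname"|"osd_objectstore"'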
    
    [root@node1 ~]# ceph pg 52.1fd query | grep osd
                        "osd": "36",
    
    

Reference (the problematic PGs may need to be rebuilt): https://www.antute.com.cn/index.php?id=258
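
For reference, a hedged sketch of the usual last-resort commands in that kind of procedure; both discard the unfound data, so they only make sense once the missing copies are confirmed unrecoverable:

    # Not in the original post, and destructive: "revert" rolls unfound objects
    # back to an older version where one exists; "delete" drops them outright
    # (the only choice on erasure-coded pools).
    [root@node1 ~]# ceph pg 52.1fd mark_unfound_lost revert
    # Last resort for a PG that is written off entirely: recreate it empty.
    [root@node1 ~]# ceph osd force-create-pg 52.1fd --yes-i-really-mean-it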
