Deploying Ceph Luminous (v12) on CentOS 7.2

Author: 迷情悠悠吾心 | Published 2020-04-08 15:43

    Public network and cluster network IP addresses

    Network | xctcnode1 | xctcnode2 | xctcnode3
    ---|---|---|---
    Public (192.168.0.0/24) | 192.168.0.61 | 192.168.0.62 | 192.168.0.63
    Cluster (1.1.1.0/24) | 1.1.1.1 | 1.1.1.2 | 1.1.1.3
    

    Hostnames

    [xctcadmin@xctcnode1 xctccluster]$ cat /etc/hostname 
    xctcnode1
    
    [xctcadmin@xctcnode2 xctccluster]$ cat /etc/hostname 
    xctcnode2
    
    [xctcadmin@xctcnode3 xctccluster]$ cat /etc/hostname 
    xctcnode3
    

    /etc/hosts

    [xctcadmin@xctcnode1 xctccluster]$ cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.0.61    xctcnode1
    192.168.0.62    xctcnode2
    192.168.0.63    xctcnode3
    
    [xctcadmin@xctcnode2 xctccluster]$ cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.0.61    xctcnode1
    192.168.0.62    xctcnode2
    192.168.0.63    xctcnode3
    
    [xctcadmin@xctcnode3 xctccluster]$ cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.0.61    xctcnode1
    192.168.0.62    xctcnode2
    192.168.0.63    xctcnode3
    

    Disable the firewall

    systemctl stop firewalld
    systemctl disable firewalld
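
    If the firewall has to stay on, an alternative sketch (the port list follows Ceph's defaults: 6789 for mon, 6800-7300 for OSD/MGR, 7480 for RGW) is to open only the ports Ceph needs:

    sudo firewall-cmd --zone=public --add-port=6789/tcp --permanent
    sudo firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
    sudo firewall-cmd --zone=public --add-port=7480/tcp --permanent
    sudo firewall-cmd --reload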
    

    Disable SELinux

    sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
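
    The sed edit only takes effect after a reboot; to switch SELinux to permissive immediately on the running system:

    sudo setenforce 0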
    

    Install the EPEL repository and required packages

    yum -y install epel-release
    yum -y install ntp ntpdate
    yum -y install python-pip
    

    Synchronize time

    [xctcadmin@xctcnode1 xctccluster]$ sudo ntpdate 192.168.0.130
    21 Dec 17:14:42 ntpdate[6171]: step time server 192.168.0.130 offset 105.705753 sec
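
    ntpdate is only a one-off sync. To keep the clocks aligned continuously, ntpd can be pointed at the same local server (a sketch, assuming 192.168.0.130 remains the time source):

    echo "server 192.168.0.130 iburst" | sudo tee -a /etc/ntp.conf
    sudo systemctl enable ntpd
    sudo systemctl start ntpd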
    

    Passwordless SSH login

    Fix the error when running sudo commands over SSH (requiretty)

    sudo sed -i 's/Defaults    requiretty/#Defaults    requiretty/g' /etc/sudoers
    

    Fix slow SSH logins
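
    A common fix, assuming the delay comes from reverse-DNS and GSSAPI lookups on the target hosts, is to disable both in /etc/ssh/sshd_config and restart sshd:

    sudo sed -i -e 's/^#\?UseDNS.*/UseDNS no/' -e 's/^GSSAPIAuthentication yes/GSSAPIAuthentication no/' /etc/ssh/sshd_config
    sudo systemctl restart sshd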

    Grant the deployment user passwordless sudo

    echo "xctcadmin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/xctcadmin
    sudo chmod 0440 /etc/sudoers.d/xctcadmin
    

    Generate an SSH key pair

    [xctcadmin@xctcnode1 ~]$ ssh-keygen 
    Generating public/private rsa key pair.
    Enter file in which to save the key (/home/xctcadmin/.ssh/id_rsa): 
    Enter passphrase (empty for no passphrase): 
    Enter same passphrase again: 
    Your identification has been saved in /home/xctcadmin/.ssh/id_rsa.
    Your public key has been saved in /home/xctcadmin/.ssh/id_rsa.pub.
    The key fingerprint is:
    a4:50:5d:36:98:4d:41:97:db:7f:46:49:9d:f6:e7:54 xctcadmin@xctcnode1
    The key's randomart image is:
    +--[ RSA 2048]----+
    |      .. BB...  o|
    |     .  +..o.  +E|
    |    .   .    oo +|
    |     . o    . ..=|
    |      . S      =.|
    |                =|
    |               ..|
    |                 |
    |                 |
    +-----------------+
    

    Copy the public key to each node

    [xctcadmin@xctcnode1 ~]$ ssh-copy-id xctcadmin@xctcnode1
    The authenticity of host 'xctcnode1 (192.168.0.61)' can't be established.
    ECDSA key fingerprint is 7f:37:95:a2:4b:79:ae:67:ee:bc:04:b9:af:7d:ca:63.
    Are you sure you want to continue connecting (yes/no)? yes
    /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
    /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    xctcadmin@xctcnode1's password: 
    
    Number of key(s) added: 1
    
    Now try logging into the machine, with:   "ssh 'xctcadmin@xctcnode1'"
    and check to make sure that only the key(s) you wanted were added.
    
    [xctcadmin@xctcnode1 ~]$ ssh-copy-id xctcadmin@xctcnode2
    The authenticity of host 'xctcnode2 (192.168.0.62)' can't be established.
    ECDSA key fingerprint is 71:37:4e:98:67:46:67:60:39:29:42:f7:27:64:76:13.
    Are you sure you want to continue connecting (yes/no)? yes
    /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
    /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    xctcadmin@xctcnode2's password: 
    
    Number of key(s) added: 1
    
    Now try logging into the machine, with:   "ssh 'xctcadmin@xctcnode2'"
    and check to make sure that only the key(s) you wanted were added.
    
    [xctcadmin@xctcnode1 ~]$ ssh-copy-id xctcadmin@xctcnode3
    The authenticity of host 'xctcnode3 (192.168.0.63)' can't be established.
    ECDSA key fingerprint is 91:fa:8e:1d:45:7a:72:c4:21:22:30:5c:36:09:93:21.
    Are you sure you want to continue connecting (yes/no)? yes
    /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
    /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    xctcadmin@xctcnode3's password: 
    
    Number of key(s) added: 1
    
    Now try logging into the machine, with:   "ssh 'xctcadmin@xctcnode3'"
    and check to make sure that only the key(s) you wanted were added.
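
    A quick check that passwordless login now works from the admin node (expected output assumed):

    [xctcadmin@xctcnode1 ~]$ ssh xctcnode2 hostname
    xctcnode2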
    

    Configure ~/.ssh/config on the admin node so SSH uses the xctcadmin user automatically

    Host xctcnode1
       Hostname xctcnode1
       User xctcadmin
    Host xctcnode2
       Hostname xctcnode2
       User xctcadmin
    Host xctcnode3
       Hostname xctcnode3
       User xctcadmin
    

    Install an external yum repository (not needed when using ceph-deploy)

    vi /etc/yum.repos.d/ceph.repo
    [Ceph-SRPMS]
    name=Ceph SRPMS packages
    baseurl=https://mirrors.aliyun.com/ceph/rpm-luminous/el7/SRPMS/
    enabled=1
    gpgcheck=0
    type=rpm-md
     
    [Ceph-aarch64]
    name=Ceph aarch64 packages
    baseurl=https://mirrors.aliyun.com/ceph/rpm-luminous/el7/aarch64/
    enabled=1
    gpgcheck=0
    type=rpm-md
     
    [Ceph-noarch]
    name=Ceph noarch packages
    baseurl=https://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch/
    enabled=1
    gpgcheck=0
    type=rpm-md
     
    [Ceph-x86_64]
    name=Ceph x86_64 packages
    baseurl=https://mirrors.aliyun.com/ceph/rpm-luminous/el7/x86_64/
    enabled=1
    gpgcheck=0
    type=rpm-md
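
    After writing the repo file, refresh the yum metadata so the new repositories are picked up:

    sudo yum clean all && sudo yum makecache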
    

    Create a directory to hold the cluster files

    mkdir my-cluster          # generic example
    cd my-cluster
    
    mkdir xctccluster         # directory actually used in this deployment
    cd xctccluster
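
    ceph-deploy itself must be available on the admin node before the next step. Since python-pip was installed earlier, a pip install is one option (a sketch; installing the ceph-deploy rpm from a ceph noarch repository also works):

    sudo pip install ceph-deploy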
    

    Create the cluster

    ceph-deploy new {initial-monitor-node(s)}
    ceph-deploy new xctcnode1 xctcnode2 xctcnode3
    

    Edit the ceph configuration file

    [xctcadmin@xctcnode1 xctccluster]$ cat ceph.conf 
    [global]
    fsid = f505235f-43e6-4e65-8fe1-46675e5504f2
    mon_initial_members = xctcnode1, xctcnode2, xctcnode3
    mon_host = 192.168.0.61,192.168.0.62,192.168.0.63
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx
    
    
    vi ceph.conf
    

    Add the following:

    public network = 192.168.0.0/24     # network used by ceph clients
    cluster network = 1.1.1.0/24        # network used for OSD replication
    

    Install ceph

    ceph-deploy install {ceph-node} [...]
    ceph-deploy install xctcnode1 xctcnode2 xctcnode3    # equivalent to running "sudo yum -y install ceph ceph-radosgw" on each node
    

    Deploy the initial mon nodes

    ceph-deploy mon create-initial
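
    Besides bootstrapping the monitors, this gathers the keyrings into the working directory; they should be present before continuing (file names are ceph-deploy defaults):

    [xctcadmin@xctcnode1 xctccluster]$ ls ceph*.keyring
    ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring
    ceph.bootstrap-mgr.keyring  ceph.bootstrap-rgw.keyring  ceph.mon.keyring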
    

    Copy the configuration and admin keyring to the other nodes

    ceph-deploy admin {ceph-node(s)}
    ceph-deploy admin xctcnode1 xctcnode2 xctcnode3
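
    So that the xctcadmin user can run ceph commands without sudo, make the admin keyring readable on every node, then check the cluster state (the chmod follows the Ceph quick-start convention):

    sudo chmod +r /etc/ceph/ceph.client.admin.keyring
    ceph -s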
    

    Configure MGR

    Create the MGR daemons

    ceph-deploy mgr create xctcnode1 xctcnode2 xctcnode3
    
    [xctcadmin@xctcnode1 xctccluster]$ ceph-deploy mgr create xctcnode1 xctcnode2 xctcnode3
    [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
    [ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy mgr create xctcnode1 xctcnode2 xctcnode3
    [ceph_deploy.cli][INFO  ] ceph-deploy options:
    [ceph_deploy.cli][INFO  ]  username                      : None
    [ceph_deploy.cli][INFO  ]  verbose                       : False
    [ceph_deploy.cli][INFO  ]  mgr                           : [('xctcnode1', 'xctcnode1'), ('xctcnode2', 'xctcnode2'), ('xctcnode3', 'xctcnode3')]
    [ceph_deploy.cli][INFO  ]  overwrite_conf                : False
    [ceph_deploy.cli][INFO  ]  subcommand                    : create
    [ceph_deploy.cli][INFO  ]  quiet                         : False
    [ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1c213f8>
    [ceph_deploy.cli][INFO  ]  cluster                       : ceph
    [ceph_deploy.cli][INFO  ]  func                          : <function mgr at 0x1baae60>
    [ceph_deploy.cli][INFO  ]  ceph_conf                     : None
    [ceph_deploy.cli][INFO  ]  default_release               : False
    [ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts xctcnode1:xctcnode1 xctcnode2:xctcnode2 xctcnode3:xctcnode3
    [xctcnode1][DEBUG ] connected to host: xctcnode1 
    [xctcnode1][DEBUG ] detect platform information from remote host
    [xctcnode1][DEBUG ] detect machine type
    [ceph_deploy.mgr][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
    [ceph_deploy.mgr][DEBUG ] remote host will use systemd
    [ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to xctcnode1
    [xctcnode1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
    [xctcnode1][WARNIN] mgr keyring does not exist yet, creating one
    [xctcnode1][DEBUG ] create a keyring file
    [xctcnode1][DEBUG ] create path recursively if it doesn't exist
    [xctcnode1][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.xctcnode1 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-xctcnode1/keyring
    [xctcnode1][INFO  ] Running command: systemctl enable ceph-mgr@xctcnode1
    [xctcnode1][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@xctcnode1.service to /usr/lib/systemd/system/ceph-mgr@.service.
    [xctcnode1][INFO  ] Running command: systemctl start ceph-mgr@xctcnode1
    [xctcnode1][INFO  ] Running command: systemctl enable ceph.target
    [xctcnode2][DEBUG ] connected to host: xctcnode2 
    [xctcnode2][DEBUG ] detect platform information from remote host
    [xctcnode2][DEBUG ] detect machine type
    [ceph_deploy.mgr][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
    [ceph_deploy.mgr][DEBUG ] remote host will use systemd
    [ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to xctcnode2
    [xctcnode2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
    [xctcnode2][WARNIN] mgr keyring does not exist yet, creating one
    [xctcnode2][DEBUG ] create a keyring file
    [xctcnode2][DEBUG ] create path recursively if it doesn't exist
    [xctcnode2][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.xctcnode2 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-xctcnode2/keyring
    [xctcnode2][INFO  ] Running command: systemctl enable ceph-mgr@xctcnode2
    [xctcnode2][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@xctcnode2.service to /usr/lib/systemd/system/ceph-mgr@.service.
    [xctcnode2][INFO  ] Running command: systemctl start ceph-mgr@xctcnode2
    [xctcnode2][INFO  ] Running command: systemctl enable ceph.target
    [xctcnode3][DEBUG ] connected to host: xctcnode3 
    [xctcnode3][DEBUG ] detect platform information from remote host
    [xctcnode3][DEBUG ] detect machine type
    [ceph_deploy.mgr][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
    [ceph_deploy.mgr][DEBUG ] remote host will use systemd
    [ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to xctcnode3
    [xctcnode3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
    [xctcnode3][WARNIN] mgr keyring does not exist yet, creating one
    [xctcnode3][DEBUG ] create a keyring file
    [xctcnode3][DEBUG ] create path recursively if it doesn't exist
    [xctcnode3][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.xctcnode3 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-xctcnode3/keyring
    [xctcnode3][INFO  ] Running command: systemctl enable ceph-mgr@xctcnode3
    [xctcnode3][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@xctcnode3.service to /usr/lib/systemd/system/ceph-mgr@.service.
    [xctcnode3][INFO  ] Running command: systemctl start ceph-mgr@xctcnode3
    [xctcnode3][INFO  ] Running command: systemctl enable ceph.target
    

    Enable the dashboard module

    [xctcadmin@xctcnode1 xctccluster]$ ceph mgr module enable dashboard
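
    Once the module is enabled, the active mgr serves the dashboard (port 7000 by default in Luminous); the URL can be read back from the cluster (the host shown depends on which mgr is currently active):

    [xctcadmin@xctcnode1 xctccluster]$ ceph mgr services
    {
        "dashboard": "http://xctcnode1:7000/"
    }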
    

    Configure OSDs

    Add OSDs
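
    If /dev/sdb - /dev/sdd were used before, they may still carry old partition tables or LVM signatures; zapping them first avoids create failures (a sketch, to be repeated for xctcnode2 and xctcnode3):

    ceph-deploy disk zap xctcnode1 /dev/sdb /dev/sdc /dev/sdd

    Then create the OSDs: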

    sudo ceph-deploy --overwrite-conf osd create --data /dev/sdb xctcnode1
    sudo ceph-deploy --overwrite-conf osd create --data /dev/sdc xctcnode1
    sudo ceph-deploy --overwrite-conf osd create --data /dev/sdd xctcnode1
    
    sudo ceph-deploy --overwrite-conf osd create --data /dev/sdb xctcnode2
    sudo ceph-deploy --overwrite-conf osd create --data /dev/sdc xctcnode2
    sudo ceph-deploy --overwrite-conf osd create --data /dev/sdd xctcnode2
    
    sudo ceph-deploy --overwrite-conf osd create --data /dev/sdb xctcnode3
    sudo ceph-deploy --overwrite-conf osd create --data /dev/sdc xctcnode3
    sudo ceph-deploy --overwrite-conf osd create --data /dev/sdd xctcnode3
    

    Display the OSD tree

    [xctcadmin@xctcnode1 xctccluster]$ ceph osd tree
    ID CLASS WEIGHT  TYPE NAME          STATUS REWEIGHT PRI-AFF 
    -1       0.08817 root default                               
    -3       0.02939     host xctcnode1                         
     0   hdd 0.00980         osd.0          up  1.00000 1.00000 
     1   hdd 0.00980         osd.1          up  1.00000 1.00000 
     2   hdd 0.00980         osd.2          up  1.00000 1.00000 
    -7       0.02939     host xctcnode2                         
     3   hdd 0.00980         osd.3          up  1.00000 1.00000 
     4   hdd 0.00980         osd.4          up  1.00000 1.00000 
     5   hdd 0.00980         osd.5          up  1.00000 1.00000 
    -5       0.02939     host xctcnode3                         
     6   hdd 0.00980         osd.6          up  1.00000 1.00000 
     7   hdd 0.00980         osd.7          up  1.00000 1.00000 
     8   hdd 0.00980         osd.8          up  1.00000 1.00000 
    

    Create the MDS daemons

    ceph-deploy mds create xctcnode1 xctcnode2 xctcnode3
    [xctcadmin@xctcnode1 xctccluster]$ ceph-deploy mds create xctcnode1 xctcnode2 xctcnode3
    [ceph_deploy.conf][DEBUG ] found configuration file at: /home/xctcadmin/.cephdeploy.conf
    [ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mds create xctcnode1 xctcnode2 xctcnode3
    [ceph_deploy.cli][INFO  ] ceph-deploy options:
    [ceph_deploy.cli][INFO  ]  username                      : None
    [ceph_deploy.cli][INFO  ]  verbose                       : False
    [ceph_deploy.cli][INFO  ]  overwrite_conf                : False
    [ceph_deploy.cli][INFO  ]  subcommand                    : create
    [ceph_deploy.cli][INFO  ]  quiet                         : False
    [ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f91c6ce8f80>
    [ceph_deploy.cli][INFO  ]  cluster                       : ceph
    [ceph_deploy.cli][INFO  ]  func                          : <function mds at 0x7f91c757dc08>
    [ceph_deploy.cli][INFO  ]  ceph_conf                     : None
    [ceph_deploy.cli][INFO  ]  mds                           : [('xctcnode1', 'xctcnode1'), ('xctcnode2', 'xctcnode2'), ('xctcnode3', 'xctcnode3')]
    [ceph_deploy.cli][INFO  ]  default_release               : False
    [ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts xctcnode1:xctcnode1 xctcnode2:xctcnode2 xctcnode3:xctcnode3
    [xctcnode1][DEBUG ] connection detected need for sudo
    [xctcnode1][DEBUG ] connected to host: xctcnode1 
    [xctcnode1][DEBUG ] detect platform information from remote host
    [xctcnode1][DEBUG ] detect machine type
    [ceph_deploy.mds][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
    [ceph_deploy.mds][DEBUG ] remote host will use systemd
    [ceph_deploy.mds][DEBUG ] deploying mds bootstrap to xctcnode1
    [xctcnode1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
    [xctcnode1][DEBUG ] create path if it doesn't exist
    [xctcnode1][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.xctcnode1 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-xctcnode1/keyring
    [xctcnode1][INFO  ] Running command: sudo systemctl enable ceph-mds@xctcnode1
    [xctcnode1][INFO  ] Running command: sudo systemctl start ceph-mds@xctcnode1
    [xctcnode1][INFO  ] Running command: sudo systemctl enable ceph.target
    [xctcnode2][DEBUG ] connection detected need for sudo
    [xctcnode2][DEBUG ] connected to host: xctcnode2 
    [xctcnode2][DEBUG ] detect platform information from remote host
    [xctcnode2][DEBUG ] detect machine type
    [ceph_deploy.mds][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
    [ceph_deploy.mds][DEBUG ] remote host will use systemd
    [ceph_deploy.mds][DEBUG ] deploying mds bootstrap to xctcnode2
    [xctcnode2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
    [xctcnode2][WARNIN] mds keyring does not exist yet, creating one
    [xctcnode2][DEBUG ] create a keyring file
    [xctcnode2][DEBUG ] create path if it doesn't exist
    [xctcnode2][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.xctcnode2 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-xctcnode2/keyring
    [xctcnode2][INFO  ] Running command: sudo systemctl enable ceph-mds@xctcnode2
    [xctcnode2][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/ceph-mds@xctcnode2.service to /usr/lib/systemd/system/ceph-mds@.service.
    [xctcnode2][INFO  ] Running command: sudo systemctl start ceph-mds@xctcnode2
    [xctcnode2][INFO  ] Running command: sudo systemctl enable ceph.target
    [xctcnode3][DEBUG ] connection detected need for sudo
    [xctcnode3][DEBUG ] connected to host: xctcnode3 
    [xctcnode3][DEBUG ] detect platform information from remote host
    [xctcnode3][DEBUG ] detect machine type
    [ceph_deploy.mds][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
    [ceph_deploy.mds][DEBUG ] remote host will use systemd
    [ceph_deploy.mds][DEBUG ] deploying mds bootstrap to xctcnode3
    [xctcnode3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
    [xctcnode3][WARNIN] mds keyring does not exist yet, creating one
    [xctcnode3][DEBUG ] create a keyring file
    [xctcnode3][DEBUG ] create path if it doesn't exist
    [xctcnode3][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.xctcnode3 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-xctcnode3/keyring
    [xctcnode3][INFO  ] Running command: sudo systemctl enable ceph-mds@xctcnode3
    [xctcnode3][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/ceph-mds@xctcnode3.service to /usr/lib/systemd/system/ceph-mds@.service.
    [xctcnode3][INFO  ] Running command: sudo systemctl start ceph-mds@xctcnode3
    [xctcnode3][INFO  ] Running command: sudo systemctl enable ceph.target
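
    The MDS daemons stay in standby until a filesystem exists. A minimal sketch of creating one so an MDS becomes active (the pool names and the pg count of 64 are assumptions, not part of the original deployment):

    ceph osd pool create cephfs_data 64
    ceph osd pool create cephfs_metadata 64
    ceph fs new cephfs cephfs_metadata cephfs_data
    ceph mds stat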
    

    Configure RGW

    [xctcadmin@xctcnode1 xctccluster]$ ceph-deploy rgw create xctcnode1 xctcnode2 xctcnode3
    [ceph_deploy.conf][DEBUG ] found configuration file at: /home/xctcadmin/.cephdeploy.conf
    [ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy rgw create xctcnode1 xctcnode2 xctcnode3
    [ceph_deploy.cli][INFO  ] ceph-deploy options:
    [ceph_deploy.cli][INFO  ]  username                      : None
    [ceph_deploy.cli][INFO  ]  verbose                       : False
    [ceph_deploy.cli][INFO  ]  rgw                           : [('xctcnode1', 'rgw.xctcnode1'), ('xctcnode2', 'rgw.xctcnode2'), ('xctcnode3', 'rgw.xctcnode3')]
    [ceph_deploy.cli][INFO  ]  overwrite_conf                : False
    [ceph_deploy.cli][INFO  ]  subcommand                    : create
    [ceph_deploy.cli][INFO  ]  quiet                         : False
    [ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f4d168ae7e8>
    [ceph_deploy.cli][INFO  ]  cluster                       : ceph
    [ceph_deploy.cli][INFO  ]  func                          : <function rgw at 0x7f4d1710cd70>
    [ceph_deploy.cli][INFO  ]  ceph_conf                     : None
    [ceph_deploy.cli][INFO  ]  default_release               : False
    [ceph_deploy.rgw][DEBUG ] Deploying rgw, cluster ceph hosts xctcnode1:rgw.xctcnode1 xctcnode2:rgw.xctcnode2 xctcnode3:rgw.xctcnode3
    [xctcnode1][DEBUG ] connection detected need for sudo
    [xctcnode1][DEBUG ] connected to host: xctcnode1 
    [xctcnode1][DEBUG ] detect platform information from remote host
    [xctcnode1][DEBUG ] detect machine type
    [ceph_deploy.rgw][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
    [ceph_deploy.rgw][DEBUG ] remote host will use systemd
    [ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to xctcnode1
    [xctcnode1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
    [xctcnode1][WARNIN] rgw keyring does not exist yet, creating one
    [xctcnode1][DEBUG ] create a keyring file
    [xctcnode1][DEBUG ] create path recursively if it doesn't exist
    [xctcnode1][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.xctcnode1 osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.xctcnode1/keyring
    [xctcnode1][INFO  ] Running command: sudo systemctl enable ceph-radosgw@rgw.xctcnode1
    [xctcnode1][WARNIN] Created symlink from /etc/systemd/system/ceph-radosgw.target.wants/ceph-radosgw@rgw.xctcnode1.service to /usr/lib/systemd/system/ceph-radosgw@.service.
    [xctcnode1][INFO  ] Running command: sudo systemctl start ceph-radosgw@rgw.xctcnode1
    [xctcnode1][INFO  ] Running command: sudo systemctl enable ceph.target
    [ceph_deploy.rgw][INFO  ] The Ceph Object Gateway (RGW) is now running on host xctcnode1 and default port 7480
    [xctcnode2][DEBUG ] connection detected need for sudo
    [xctcnode2][DEBUG ] connected to host: xctcnode2 
    [xctcnode2][DEBUG ] detect platform information from remote host
    [xctcnode2][DEBUG ] detect machine type
    [ceph_deploy.rgw][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
    [ceph_deploy.rgw][DEBUG ] remote host will use systemd
    [ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to xctcnode2
    [xctcnode2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
    [xctcnode2][WARNIN] rgw keyring does not exist yet, creating one
    [xctcnode2][DEBUG ] create a keyring file
    [xctcnode2][DEBUG ] create path recursively if it doesn't exist
    [xctcnode2][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.xctcnode2 osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.xctcnode2/keyring
    [xctcnode2][INFO  ] Running command: sudo systemctl enable ceph-radosgw@rgw.xctcnode2
    [xctcnode2][WARNIN] Created symlink from /etc/systemd/system/ceph-radosgw.target.wants/ceph-radosgw@rgw.xctcnode2.service to /usr/lib/systemd/system/ceph-radosgw@.service.
    [xctcnode2][INFO  ] Running command: sudo systemctl start ceph-radosgw@rgw.xctcnode2
    [xctcnode2][INFO  ] Running command: sudo systemctl enable ceph.target
    [ceph_deploy.rgw][INFO  ] The Ceph Object Gateway (RGW) is now running on host xctcnode2 and default port 7480
    [xctcnode3][DEBUG ] connection detected need for sudo
    [xctcnode3][DEBUG ] connected to host: xctcnode3 
    [xctcnode3][DEBUG ] detect platform information from remote host
    [xctcnode3][DEBUG ] detect machine type
    [ceph_deploy.rgw][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
    [ceph_deploy.rgw][DEBUG ] remote host will use systemd
    [ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to xctcnode3
    [xctcnode3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
    [xctcnode3][WARNIN] rgw keyring does not exist yet, creating one
    [xctcnode3][DEBUG ] create a keyring file
    [xctcnode3][DEBUG ] create path recursively if it doesn't exist
    [xctcnode3][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.xctcnode3 osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.xctcnode3/keyring
    [xctcnode3][INFO  ] Running command: sudo systemctl enable ceph-radosgw@rgw.xctcnode3
    [xctcnode3][WARNIN] Created symlink from /etc/systemd/system/ceph-radosgw.target.wants/ceph-radosgw@rgw.xctcnode3.service to /usr/lib/systemd/system/ceph-radosgw@.service.
    [xctcnode3][INFO  ] Running command: sudo systemctl start ceph-radosgw@rgw.xctcnode3
    [xctcnode3][INFO  ] Running command: sudo systemctl enable ceph.target
    [ceph_deploy.rgw][INFO  ] The Ceph Object Gateway (RGW) is now running on host xctcnode3 and default port 7480
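
    A quick way to confirm each gateway answers on its default port 7480 (the response below is the usual anonymous S3 bucket listing, abbreviated here):

    [xctcadmin@xctcnode1 xctccluster]$ curl http://xctcnode1:7480
    <?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult ...>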
    
