Ceph Installation and Deployment (ceph-deploy)


Author: SkTj | Published 2019-03-03 11:32

    1. Set the hostname on each node
    hostnamectl set-hostname ceph-admin
    hostnamectl set-hostname ceph-node1
    hostnamectl set-hostname ceph-node2
    hostnamectl set-hostname ceph-node3
    2. Edit the hosts file: vi /etc/hosts
    192.168.86.128 ceph-admin
    192.168.86.129 ceph-node1
    192.168.86.130 ceph-node2
    192.168.86.131 ceph-node3
    3. Disable the firewall (commands sketched below)
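    A minimal sketch for CentOS 7 (many Ceph guides also disable SELinux at this point; this assumes a lab setup where turning both off entirely is acceptable):
    systemctl stop firewalld
    systemctl disable firewalld
    setenforce 0
    sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config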
    4. Install NTP
    yum install ntp ntpdate ntp-doc -y
    service ntpd start
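    It may also help to enable the service at boot and confirm that it is syncing:
    systemctl enable ntpd      # start ntpd automatically at boot
    ntpq -p                    # verify ntpd is reaching its time sources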
    5. Switch to the Aliyun Yum repositories
    yum clean all
    mkdir /mnt/bak
    mv /etc/yum.repos.d/* /mnt/bak/
    wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
    wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
    vim /etc/yum.repos.d/ceph.repo
    [ceph]
    name=ceph
    baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
    gpgcheck=0
    priority=1
    [ceph-noarch]
    name=cephnoarch
    baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
    gpgcheck=0
    priority=1
    [ceph-source]
    name=Ceph source packages
    baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
    gpgcheck=0
    priority=1
    6. Create the deployment user and set up passwordless SSH
    useradd -d /home/cephuser -m cephuser
    echo "cephuser"|passwd --stdin cephuser
    echo "cephuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephuser
    chmod 0440 /etc/sudoers.d/cephuser
    sed -i 's/Defaults requiretty/#Defaults requiretty/g' /etc/sudoers
    chmod 777 /dev/null
    su - cephuser
    ssh-keygen -t rsa

    Copy the key to each node in turn:

    ssh-copy-id cephuser@192.168.86.128
    ssh-copy-id cephuser@192.168.86.129
    ssh-copy-id cephuser@192.168.86.130
    ssh-copy-id cephuser@192.168.86.131
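    A ~/.ssh/config on ceph-admin (a common convenience from the ceph-deploy preflight docs; the hostnames below match this setup) lets ceph-deploy log in as cephuser without passing --username every time:
    Host ceph-admin
        User cephuser
    Host ceph-node1
        User cephuser
    Host ceph-node2
        User cephuser
    Host ceph-node3
        User cephuser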
    7. Partition and format the data disk (on every OSD node; a loop is sketched below)
    sudo parted -s /dev/sdb mklabel gpt mkpart primary xfs 0% 100%
    sudo mkfs.xfs /dev/sdb -f
    # check the filesystem type: sudo blkid -o value -s TYPE /dev/sdb
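    These commands must be run on every OSD node; with the passwordless SSH from step 6 in place, one way to do that from ceph-admin (assuming /dev/sdb is the spare data disk on each node) is:
    for node in ceph-node1 ceph-node2 ceph-node3; do
        ssh $node "sudo parted -s /dev/sdb mklabel gpt mkpart primary xfs 0% 100% && sudo mkfs.xfs /dev/sdb -f"
    done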
    8. Install ceph-deploy on ceph-admin and create the cluster config
    sudo yum update -y && sudo yum install ceph-deploy -y
    mkdir cluster
    cd cluster
    ceph-deploy new ceph-admin
    vi /home/cephuser/cluster/ceph.conf
    [global]
    fsid = b01095b8-d958-406f-b0fb-308ba5ab64e3
    mon_initial_members = ceph-admin
    mon_host = 192.168.86.128
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx
    public network = 192.168.86.0/24
    osd pool default size = 3
    9. Install Ceph on all nodes: ceph-deploy install ceph-admin ceph-node1 ceph-node2 ceph-node3
    10. Create the initial monitor and gather the keys:
    cd /home/cephuser/cluster
    ceph-deploy mon create-initial
    ceph-deploy gatherkeys ceph-admin
    11. List the available disks on the OSD nodes: ceph-deploy disk list ceph-node1 ceph-node2 ceph-node3
    Wipe all partitions on the disks: ceph-deploy disk zap ceph-node1:/dev/sdb ceph-node2:/dev/sdb ceph-node3:/dev/sdb
    Prepare the OSDs: ceph-deploy osd prepare ceph-node1:/dev/sdb ceph-node2:/dev/sdb ceph-node3:/dev/sdb
    Activate the OSDs: ceph-deploy osd activate ceph-node1:/dev/sdb1 ceph-node2:/dev/sdb1 ceph-node3:/dev/sdb1
    12. Push the admin keyring to all nodes: ceph-deploy admin ceph-admin ceph-node1 ceph-node2 ceph-node3
    sudo chmod 644 /etc/ceph/ceph.client.admin.keyring    # so the keyring is readable when running ceph as cephuser
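    A quick sanity check at this point (the expected state is HEALTH_OK with 3 OSDs up and in):
    ceph health
    ceph osd tree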



    Accessing the cluster via CephFS
    First check the MDS status; by default there is no metadata server.
    [cephuser@ceph-admin ~]$ ceph mds stat
    e1:

    Create the metadata server (ceph-admin doubles as the MDS node).
    Note: without an MDS, clients will not be able to mount the CephFS filesystem!
    [cephuser@ceph-admin ~]$ pwd
    /home/cephuser
    [cephuser@ceph-admin ~]$ cd cluster/
    [cephuser@ceph-admin cluster]$ ceph-deploy mds create ceph-admin

    Check the MDS status again; it is now up (standby)
    [cephuser@ceph-admin cluster]$ ceph mds stat
    e2:, 1 up:standby

    [cephuser@ceph-admin cluster]$ sudo systemctl status ceph-mds@ceph-admin
    [cephuser@ceph-admin cluster]$ ps -ef|grep cluster|grep ceph-mds
    ceph 29093 1 0 12:46 ? 00:00:00 /usr/bin/ceph-mds -f --cluster ceph --id ceph-admin --setuser ceph --setgroup ceph

    Create the pools. A pool is the logical partition in which Ceph stores data; it acts as a namespace.
    [cephuser@ceph-admin cluster]$ ceph osd lspools #list the existing pools first
    0 rbd,

    A freshly created cluster has only the single rbd pool, so new pools are needed for CephFS.
    [cephuser@ceph-admin cluster]$ ceph osd pool create cephfs_data 10 #the trailing number is the PG count
    pool 'cephfs_data' created

    [cephuser@ceph-admin cluster]$ ceph osd pool create cephfs_metadata 10 #create the metadata pool
    pool 'cephfs_metadata' created

    [cephuser@ceph-admin cluster]$ ceph fs new myceph cephfs_metadata cephfs_data
    new fs with metadata pool 2 and data pool 1
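    A note on the PG count passed to ceph osd pool create: the usual rule of thumb is roughly (number of OSDs x 100) / replica count PGs in total across all pools, rounded to a power of two. With 3 OSDs and a replica count of 3 that works out to about 100 PGs total, so a production setup would typically use 64 or 128 per pool rather than 10; the small value here only suits a test cluster.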

    List the pools again
    [cephuser@ceph-admin cluster]$ ceph osd lspools
    0 rbd,1 cephfs_data,2 cephfs_metadata,

    Check the MDS status
    [cephuser@ceph-admin cluster]$ ceph mds stat
    e5: 1/1/1 up {0=ceph-admin=up:active}

    Check the Ceph cluster status
    [cephuser@ceph-admin cluster]$ sudo ceph -s
    cluster 33bfa421-8a3b-40fa-9f14-791efca9eb96
    health HEALTH_OK
    monmap e1: 1 mons at {ceph-admin=192.168.10.220:6789/0}
    election epoch 3, quorum 0 ceph-admin
    fsmap e5: 1/1/1 up {0=ceph-admin=up:active} #this fsmap line is new
    osdmap e19: 3 osds: 3 up, 3 in
    flags sortbitwise,require_jewel_osds
    pgmap v48: 84 pgs, 3 pools, 2068 bytes data, 20 objects
    101 MB used, 45945 MB / 46046 MB avail
    84 active+clean

    Check the Ceph monitor port
    [cephuser@ceph-admin cluster]$ sudo lsof -i:6789
    COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
    ceph-mon 28190 ceph 10u IPv4 70217 0t0 TCP ceph-admin:smc-https (LISTEN)
    ceph-mon 28190 ceph 19u IPv4 70537 0t0 TCP ceph-admin:smc-https->ceph-node1:41308 (ESTABLISHED)
    ceph-mon 28190 ceph 20u IPv4 70560 0t0 TCP ceph-admin:smc-https->ceph-node2:48516 (ESTABLISHED)
    ceph-mon 28190 ceph 21u IPv4 70583 0t0 TCP ceph-admin:smc-https->ceph-node3:44948 (ESTABLISHED)
    ceph-mon 28190 ceph 22u IPv4 72643 0t0 TCP ceph-admin:smc-https->ceph-admin:51474 (ESTABLISHED)
    ceph-mds 29093 ceph 8u IPv4 72642 0t0 TCP ceph-admin:51474->ceph-admin:smc-https (ESTABLISHED)

    Install ceph-fuse (the client machine here runs CentOS 6)
    [root@centos6-02 ~]# rpm -Uvh https://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
    [root@centos6-02 ~]# yum install -y ceph-fuse

    Create the mount point
    [root@centos6-02 ~]# mkdir /cephfs

    Copy the configuration file
    Copy ceph.conf from the admin node to the client node (192.168.10.220 is the admin node)
    [root@centos6-02 ~]# rsync -e "ssh -p22" -avpgolr root@192.168.10.220:/etc/ceph/ceph.conf /etc/ceph/
    or
    [root@centos6-02 ~]# rsync -e "ssh -p22" -avpgolr root@192.168.10.220:/home/cephuser/cluster/ceph.conf /etc/ceph/ #the files in both paths have identical content

    Copy the keyring
    Copy ceph.client.admin.keyring from the admin node to the client node
    [root@centos6-02 ~]# rsync -e "ssh -p22" -avpgolr root@192.168.10.220:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
    or
    [root@centos6-02 ~]# rsync -e "ssh -p22" -avpgolr root@192.168.10.220:/home/cephuser/cluster/ceph.client.admin.keyring /etc/ceph/

    Check the Ceph auth entries
    [root@centos6-02 ~]# ceph auth list
    installed auth entries:

    mds.ceph-admin
    key: AQAZZxdbH6uAOBAABttpSmPt6BXNtTJwZDpSJg==
    caps: [mds] allow
    caps: [mon] allow profile mds
    caps: [osd] allow rwx
    osd.0
    key: AQCuWBdbV3TlBBAA4xsAE4QsFQ6vAp+7pIFEHA==
    caps: [mon] allow profile osd
    caps: [osd] allow *
    osd.1
    key: AQC6WBdbakBaMxAAsUllVWdttlLzEI5VNd/41w==
    caps: [mon] allow profile osd
    caps: [osd] allow *
    osd.2
    key: AQDJWBdbz6zNNhAATwzL2FqPKNY1IvQDmzyOSg==
    caps: [mon] allow profile osd
    caps: [osd] allow *
    client.admin
    key: AQCNWBdbf1QxAhAAkryP+OFy6wGnKR8lfYDkUA==
    caps: [mds] allow *
    caps: [mon] allow *
    caps: [osd] allow *
    client.bootstrap-mds
    key: AQCNWBdbnjLILhAAT1hKtLEzkCrhDuTLjdCJig==
    caps: [mon] allow profile bootstrap-mds
    client.bootstrap-mgr
    key: AQCOWBdbmxEANBAAiTMJeyEuSverXAyOrwodMQ==
    caps: [mon] allow profile bootstrap-mgr
    client.bootstrap-osd
    key: AQCNWBdbiO1bERAARLZaYdY58KLMi4oyKmug4Q==
    caps: [mon] allow profile bootstrap-osd
    client.bootstrap-rgw
    key: AQCNWBdboBLXIBAAVTsD2TPJhVSRY2E9G7eLzQ==
    caps: [mon] allow profile bootstrap-rgw

    Mount the Ceph storage at /cephfs on the client
    [root@centos6-02 ~]# ceph-fuse -m 192.168.10.220:6789 /cephfs
    2018-06-06 14:28:54.149796 7f8d5c256760 -1 init, newargv = 0x4273580 newargc=11
    ceph-fuse[16107]: starting ceph client
    ceph-fuse[16107]: starting fuse

    [root@centos6-02 ~]# df -h
    Filesystem Size Used Avail Use% Mounted on
    /dev/mapper/vg_centos602-lv_root
    50G 3.5G 44G 8% /
    tmpfs 1.9G 0 1.9G 0% /dev/shm
    /dev/vda1 477M 41M 412M 9% /boot
    /dev/mapper/vg_centos602-lv_home
    45G 52M 43G 1% /home
    /dev/vdb1 20G 5.1G 15G 26% /data/osd1
    ceph-fuse 45G 100M 45G 1% /cephfs

    As shown above, the Ceph storage is mounted successfully: three OSD nodes with 15G each (the ceph data partition size can be checked on a node with "lsblk"), 45G in total.
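    Besides ceph-fuse, a client with a recent enough kernel can use the in-kernel CephFS client instead (the stock CentOS 6 kernel is too old for this, which is presumably why ceph-fuse is used here). A sketch, assuming the admin key has been saved to /etc/ceph/admin.secret (the bare key string from ceph auth get-key client.admin):
    mount -t ceph 192.168.10.220:6789:/ /cephfs -o name=admin,secretfile=/etc/ceph/admin.secret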

    Unmount the Ceph storage
    [root@centos6-02 ~]# umount /cephfs

    Tips:
    Once more than half of the OSD nodes are down, the Ceph storage mounted on remote clients stops working (it is effectively suspended). In this example there are 3 OSD nodes: with one OSD node down (for instance after a host crash),
    the client's mounted Ceph storage still works normally; with 2 OSD nodes down, it can no longer be used (reads and writes under the mounted directory simply hang).
    Once the OSD nodes recover, the storage becomes usable again. After a crashed OSD node reboots, its osd daemon comes back up automatically (brought up via the monitor node).
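    The stall described above is tied to each pool's min_size: I/O to a placement group blocks once fewer than min_size replicas are available. It can be inspected and, for a test cluster only, lowered:
    ceph osd pool get cephfs_data min_size
    ceph osd pool set cephfs_data min_size 1    # test clusters only; this trades safety for availability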
