Installing Ceph on CentOS 7

Author: IT界的蜗牛 | Published 2018-11-26 22:35

1. Install ESXi Server and vCenter

2. Preparation

deploy
node1
node2
node3
osd1
osd2
osd3
postgres

2.1 Install the CentOS nodes


Issues
1. The VM does not obtain an IP address automatically

cd /etc/sysconfig/network-scripts
vi ifcfg-ethxxx   # change ONBOOT="no" to ONBOOT="yes"

# Restart the network service
systemctl restart network
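
If you prefer not to edit the file by hand, a minimal sketch using sed (assuming the interface file is named ifcfg-ens192; the actual name differs per VM, check it with ip addr):

# Hypothetical interface name ens192 -- substitute the one shown by ip addr
sudo sed -i 's/^ONBOOT="no"/ONBOOT="yes"/' /etc/sysconfig/network-scripts/ifcfg-ens192
sudo systemctl restart network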

2.2 Create the deploy user and grant it sudo privileges

useradd -d /home/deploy -m deploy
passwd deploy
cat /etc/group
# Make sure the newly created user has sudo privileges on every node
echo "deploy ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/deploy
sudo chmod 0440 /etc/sudoers.d/deploy
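
A quick sanity check that the deploy user really has passwordless sudo (run as deploy on each node; -n makes sudo fail instead of prompting):

# Should print "root" without asking for a password
sudo -n whoami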

2.3 Change the hostname

[deploy@deploy ~]$ hostname
deploy
[deploy@deploy ~]$ sudo hostnamectl --pretty

[deploy@deploy ~]$ sudo hostnamectl --static
deploy
[deploy@deploy ~]$ sudo hostnamectl --transient
deploy
[deploy@deploy ~]$ hostnamectl set-hostname deploy  # "deploy" is the hostname of the VM
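
To avoid logging in to every VM by hand, a sketch that sets all hostnames from the deploy node (assumes the host names already resolve -- see the /etc/hosts entries in 2.4, or substitute IP addresses -- and the passwordless sudo from 2.2; each ssh call still prompts for the deploy password until 2.4 is complete):

for h in node1 node2 node3 osd1 osd2 osd3; do
    ssh deploy@$h "sudo hostnamectl set-hostname $h"
done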

2.4 Passwordless SSH login

Edit the /etc/hosts file

10.110.125.90 deploy
10.110.125.77 node1
10.110.125.55 node2
10.110.125.54 rgw
10.110.125.83 osd1
10.110.125.52 osd2
10.110.125.53 osd3

#127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
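
The same /etc/hosts content has to be present on every node. A minimal sketch for distributing it from the deploy VM (assumes the deploy user and passwordless sudo from 2.2; scp prompts for a password until the keys below are in place):

for h in node1 node2 rgw osd1 osd2 osd3; do
    scp /etc/hosts deploy@$h:/tmp/hosts
    ssh deploy@$h "sudo cp /tmp/hosts /etc/hosts"
done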

Generate the SSH key pair and distribute the public key to the other nodes

# authorized_keys stores the public keys of all nodes
[deploy@node3 .ssh]$ vi authorized_keys
[deploy@node3 .ssh]$ ssh-keygen
[deploy@node3 .ssh]$ ls
authorized_keys  id_rsa  id_rsa.pub
# Append id_rsa.pub to authorized_keys
[deploy@node3 .ssh]$ cat id_rsa.pub >> authorized_keys

# Copy authorized_keys to the other nodes
[deploy@node3 .ssh]$ scp -r authorized_keys deploy@deploy:/home/deploy/.ssh
The authenticity of host 'deploy (10.110.125.90)' can't be established.
ECDSA key fingerprint is SHA256:d1zplvj7CshS6Ub+8t1iPo/RlLTYoiDRkx0v7LHkd1w.
ECDSA key fingerprint is MD5:83:79:03:8d:d6:aa:a0:b9:1e:35:c0:65:6a:c3:79:c3.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'deploy,10.110.125.90' (ECDSA) to the list of known hosts.
deploy@deploy's password:
authorized_keys                                                             100% 3148    84.5KB/s   00:00
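
ssh-copy-id does the same append-and-copy work in one step; a sketch, assuming the key pair already exists on the node it is run from:

# Appends ~/.ssh/id_rsa.pub to ~/.ssh/authorized_keys on each target host
for h in deploy node1 node2 node3 osd1 osd2 osd3; do
    ssh-copy-id deploy@$h
done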

2.5 Open the required ports

The monitor listens on port 6789 by default, and OSDs use ports in the range 6800-7300 by default.

[deploy@node1 ~]$ sudo /sbin/iptables -I INPUT -p tcp --dport 6789 -j ACCEPT
[deploy@node1 ~]$ sudo /sbin/iptables -I INPUT -p tcp --dport 6800:7300 -j ACCEPT
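
CentOS 7 ships with firewalld rather than bare iptables, and rules added with iptables -I are lost on reboot. If firewalld is running, a firewall-cmd sketch that opens the same ports persistently:

sudo firewall-cmd --zone=public --add-port=6789/tcp --permanent
sudo firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
sudo firewall-cmd --reload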

3. Install Ceph under vCenter

3.1 Install ceph-deploy

# Install and enable the Extra Packages for Enterprise Linux (EPEL) repository
[deploy@deploy ~]$ sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
Loaded plugins: fastestmirror
epel-release-latest-7.noarch.rpm        |  15 kB     00:00
Examining /var/tmp/yum-root-4rZs4w/epel-release-latest-7.noarch.rpm: epel-release-7-11.noarch

# Add the Ceph repository to /etc/yum.repos.d/ceph-deploy.repo with the following content
[deploy@deploy ~]$ cat /etc/yum.repos.d/ceph-deploy.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

# Update your repository and install ceph-deploy
[deploy@deploy ~]$ sudo yum update
[deploy@deploy ~]$ sudo yum install ceph-deploy

# Install pip and ntp
sudo yum install ntp ntpdate ntp-doc
sudo yum install python-pip

# Add the NTP servers' addresses to the configuration file
[deploy@deploy ~]$ vi /etc/ntp.conf
#server 3.centos.pool.ntp.org iburst
server 10.110.160.1
server 10.110.160.2

# Start ntpd server
sudo systemctl start ntpd
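
To keep the clocks in sync after a reboot, enable the service at boot and verify the peers (ntpq -p lists the servers ntpd is talking to):

sudo systemctl enable ntpd
ntpq -p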

3.2 Install the Ceph cluster

Create the cluster directory on the deploy node

[deploy@deploy ~]$ cd
[deploy@deploy ~]$ mkdir my-cluster
[deploy@deploy ~]$ cd my-cluster
[deploy@deploy my-cluster]$

Create the cluster

[deploy@deploy my-cluster]$ ceph-deploy new node1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/deploy/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy new node1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7f99e49cdd70>
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f99e41473b0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['node1']
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[node1][DEBUG ] connected to host: deploy
[node1][INFO  ] Running command: ssh -CT -o BatchMode=yes node1
[ceph_deploy.new][WARNIN] could not connect via SSH
[ceph_deploy.new][INFO  ] will connect again with password prompt
deploy@node1's password:
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[ceph_deploy.new][INFO  ] adding public keys to authorized_keys
[node1][DEBUG ] append contents to file
deploy@node1's password:
[node1][DEBUG ] connection detected need for sudo
deploy@node1's password:
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location of an executable
[node1][INFO  ] Running command: sudo /usr/sbin/ip link show
[node1][INFO  ] Running command: sudo /usr/sbin/ip addr show
[node1][DEBUG ] IP addresses found: [u'10.110.125.77', u'fc00:10:110:127:4a0c:15e:3cec:afe8']
[ceph_deploy.new][DEBUG ] Resolving host node1
[ceph_deploy.new][DEBUG ] Monitor node1 at 10.110.125.77
[ceph_deploy.new][DEBUG ] Monitor initial members are ['node1']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['10.110.125.77']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[deploy@deploy my-cluster]$

Update ceph.conf

[deploy@deploy my-cluster]$ ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring

[deploy@deploy my-cluster]$ cat ceph.conf
[global]
fsid = 340734b0-331b-4037-8a8c-60e3069e41af
mon_initial_members = node1
mon_host = 10.110.125.77
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
# Add the following three lines
public network = 10.110.125.0/22
rgw_swift_account_in_url = true
rgw_enable_usage_log = true
[deploy@deploy my-cluster]$

Install Ceph on the nodes (run for every node)

ceph-deploy install node1 node2 node3 --release mimic
ceph-deploy install osd1 osd2 osd3 --release mimic

Set up the MON and MGR daemons and push the admin keyring

ceph-deploy mon create-initial

ceph-deploy admin node1 node2 node3

ceph-deploy mgr create node1

Steps to add MON

# How to add MON
# ceph-deploy mon add node2
#
[deploy@deploy my-cluster]$ ceph-deploy mon add node2
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/deploy/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mon add node2
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : add
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fd88b7d5bd8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  mon                           : ['node2']
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0x7fd88b7bb398>
[ceph_deploy.cli][INFO  ]  address                       : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mon][INFO  ] ensuring configuration of new mon host: node2
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node2
deploy@node2's password:
[node2][DEBUG ] connection detected need for sudo
deploy@node2's password:
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.mon][DEBUG ] Adding mon to cluster ceph, host node2
[ceph_deploy.mon][DEBUG ] using mon address by resolving host: 10.110.125.55
[ceph_deploy.mon][DEBUG ] detecting platform for host node2 ...
deploy@node2's password:
[node2][DEBUG ] connection detected need for sudo
deploy@node2's password:
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: CentOS Linux 7.5.1804 Core
[node2][DEBUG ] determining if provided host has same hostname in remote
[node2][DEBUG ] get remote short hostname
[node2][DEBUG ] adding mon to node2
[node2][DEBUG ] get remote short hostname
[node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node2][DEBUG ] create the mon path if it does not exist
[node2][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node2/done
[node2][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-node2/done
[node2][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-node2.mon.keyring
[node2][DEBUG ] create the monitor keyring file
[node2][INFO  ] Running command: sudo ceph --cluster ceph mon getmap -o /var/lib/ceph/tmp/ceph.node2.monmap
[node2][WARNIN] got monmap epoch 1
[node2][INFO  ] Running command: sudo ceph-mon --cluster ceph --mkfs -i node2 --monmap /var/lib/ceph/tmp/ceph.node2.monmap --keyring /var/lib/ceph/tmp/ceph-node2.mon.keyring --setuser 167 --setgroup 167
[node2][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-node2.mon.keyring
[node2][DEBUG ] create a done file to avoid re-doing the mon deployment
[node2][DEBUG ] create the init path if it does not exist
[node2][INFO  ] Running command: sudo systemctl enable ceph.target
[node2][INFO  ] Running command: sudo systemctl enable ceph-mon@node2
[node2][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@node2.service to /usr/lib/systemd/system/ceph-mon@.service.
[node2][INFO  ] Running command: sudo systemctl start ceph-mon@node2
[node2][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node2.asok mon_status
[node2][WARNIN] node2 is not defined in `mon initial members`
[node2][WARNIN] monitor node2 does not exist in monmap
[node2][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node2.asok mon_status
[node2][DEBUG ] ********************************************************************************
[node2][DEBUG ] status for monitor: mon.node2
[node2][DEBUG ] {
[node2][DEBUG ]   "election_epoch": 0,
[node2][DEBUG ]   "extra_probe_peers": [],
[node2][DEBUG ]   "feature_map": {
[node2][DEBUG ]     "mon": [
[node2][DEBUG ]       {
[node2][DEBUG ]         "features": "0x3ffddff8ffa4fffb",
[node2][DEBUG ]         "num": 1,
[node2][DEBUG ]         "release": "luminous"
[node2][DEBUG ]       }
[node2][DEBUG ]     ]
[node2][DEBUG ]   },
[node2][DEBUG ]   "features": {
[node2][DEBUG ]     "quorum_con": "0",
[node2][DEBUG ]     "quorum_mon": [],
[node2][DEBUG ]     "required_con": "144115188346404864",
[node2][DEBUG ]     "required_mon": [
[node2][DEBUG ]       "kraken",
[node2][DEBUG ]       "luminous",
[node2][DEBUG ]       "mimic",
[node2][DEBUG ]       "osdmap-prune"
[node2][DEBUG ]     ]
[node2][DEBUG ]   },
[node2][DEBUG ]   "monmap": {
[node2][DEBUG ]     "created": "2018-11-26 01:00:08.121877",
[node2][DEBUG ]     "epoch": 1,
[node2][DEBUG ]     "features": {
[node2][DEBUG ]       "optional": [],
[node2][DEBUG ]       "persistent": [
[node2][DEBUG ]         "kraken",
[node2][DEBUG ]         "luminous",
[node2][DEBUG ]         "mimic",
[node2][DEBUG ]         "osdmap-prune"
[node2][DEBUG ]       ]
[node2][DEBUG ]     },
[node2][DEBUG ]     "fsid": "340734b0-331b-4037-8a8c-60e3069e41af",
[node2][DEBUG ]     "modified": "2018-11-26 01:00:08.121877",
[node2][DEBUG ]     "mons": [
[node2][DEBUG ]       {
[node2][DEBUG ]         "addr": "10.110.125.77:6789/0",
[node2][DEBUG ]         "name": "node1",
[node2][DEBUG ]         "public_addr": "10.110.125.77:6789/0",
[node2][DEBUG ]         "rank": 0
[node2][DEBUG ]       }
[node2][DEBUG ]     ]
[node2][DEBUG ]   },
[node2][DEBUG ]   "name": "node2",
[node2][DEBUG ]   "outside_quorum": [],
[node2][DEBUG ]   "quorum": [],
[node2][DEBUG ]   "rank": -1,
[node2][DEBUG ]   "state": "probing",
[node2][DEBUG ]   "sync_provider": []
[node2][DEBUG ] }
[node2][DEBUG ] ********************************************************************************
[node2][INFO  ] monitor: mon.node2 is currently at the state of probing
[deploy@deploy my-cluster]$

Steps to check ceph status

# ceph -s
#
[deploy@deploy my-cluster]$ ssh node1 sudo ceph -s
deploy@node1's password:
  cluster:
    id:     340734b0-331b-4037-8a8c-60e3069e41af
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node3,node2,node1
    mgr: node1(active), standbys: node2, node3
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0  objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

3.3 Install OSDs
Check the disk layout
sdb, sdc, and sdd of node osd1 will be used as OSDs

[deploy@osd1 ~]$ lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   50G  0 disk
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   49G  0 part
  ├─centos-root 253:0    0   47G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sdb               8:16   0   50G  0 disk
sdc               8:32   0   50G  0 disk
sdd               8:48   0   50G  0 disk
sr0              11:0    1 1024M  0 rom

[deploy@deploy my-cluster]$ ceph-deploy osd create --data /dev/sdb osd1
[ceph_deploy.osd][DEBUG ] Host osd1 is now ready for osd use.

# repeat for sdc and sdd

Repeat to add three OSDs each on osd2 and osd3; a loop sketch follows.
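
A loop over the remaining disks and hosts, assuming osd2 and osd3 expose the same sdb/sdc/sdd layout as osd1 (skip any disk that was already created above, since re-running on it would fail):

for host in osd1 osd2 osd3; do
    for dev in sdb sdc sdd; do
        ceph-deploy osd create --data /dev/$dev $host
    done
done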

Check the OSD count

[deploy@deploy ~]$ ssh node1 sudo ceph -s
deploy@node1's password:
  cluster:
    id:     340734b0-331b-4037-8a8c-60e3069e41af
    health: HEALTH_WARN
            1 osds down
            8 slow ops, oldest one blocked for 5150 sec, mon.node1 has slow ops

  services:
    mon: 3 daemons, quorum node3,node2,node1
    mgr: node1(active), standbys: node2, node3
    osd: 9 osds: 5 up, 6 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0  objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

3.4 Install the RGW
By default RGW uses Civetweb as its web service, and Civetweb serves on port 7480. To change the port (for example to 80), edit the Ceph configuration file and add a [client.rgw.xxx] section.

[deploy@deploy my-cluster]$ vi ceph.conf
# Content
# The section name follows the pattern [client.rgw.{gatewayNode}]
[client.rgw.node1]
rgw_frontends = "civetweb port=7480 num_threads=512 request_timeout_ms=30000"
rgw_enable_usage_log = true
rgw_thread_pool_size = 512
rgw_override_bucket_index_max_shards = 20
rgw_max_chunk_size = 1048576
rgw_cache_lru_size = 1000000
rgw_op_thread_timeout = 6000
rgw_num_rados_handles = 8
rgw_cache_enabled = true
objecter_inflight_ops = 10240
objecter_inflight_op_bytes = 1048576000

# Push the configuration to the other RGW nodes (node1, node2, node3)
ceph-deploy --overwrite-conf config push node3
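
ceph-deploy accepts several hosts in one call, so the push described in the comment above can be done in a single step:

ceph-deploy --overwrite-conf config push node1 node2 node3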

Install the RGW packages on the gateway nodes

ceph-deploy install --rgw node1 node2 node3

Create an RGW instance

ceph-deploy rgw create node1
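
Once the instance is running, Civetweb should answer on port 7480; a quick check from any host that can reach node1 (an anonymous request returns a small ListAllMyBucketsResult XML document):

curl http://node1:7480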

Issue list

  1. Warnings during MON creation
[node2][INFO  ] Running command: sudo ceph --cluster ceph mon getmap -o /var/lib/ceph/tmp/ceph.node2.monmap
[node2][WARNIN] got monmap epoch 1

[node2][WARNIN] node2 is not defined in `mon initial members`
[node2][WARNIN] monitor node2 does not exist in monmap
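
These warnings are expected when adding a monitor that was not part of the initial bootstrap: node2 is not listed in mon_initial_members, so the new daemon starts in the probing state and joins the quorum once it has synced with node1. To keep ceph.conf consistent afterwards, the new monitors can be recorded and the file pushed out (a sketch; <node3-ip> is a placeholder for the actual address):

# In my-cluster/ceph.conf on the deploy node:
#   mon_initial_members = node1, node2, node3
#   mon_host = 10.110.125.77,10.110.125.55,<node3-ip>
ceph-deploy --overwrite-conf config push node1 node2 node3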

Common commands

  1. Checking disks and partitions on CentOS
# lsblk: list disks and partitions
[deploy@osd1 ~]$ lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   50G  0 disk
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   49G  0 part
  ├─centos-root 253:0    0   47G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sdb               8:16   0   50G  0 disk
sdc               8:32   0   50G  0 disk
sdd               8:48   0   50G  0 disk
sr0              11:0    1 1024M  0 rom

# df -h: show filesystem space usage
[deploy@osd1 /]$ df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   47G  1.4G   46G   3% /
devtmpfs                 908M     0  908M   0% /dev
tmpfs                    920M     0  920M   0% /dev/shm
tmpfs                    920M  8.8M  911M   1% /run
tmpfs                    920M     0  920M   0% /sys/fs/cgroup
/dev/sda1               1014M  142M  873M  14% /boot
tmpfs                    184M     0  184M   0% /run/user/1000

# fdisk -l: show detailed partition information
[deploy@osd1 /]$ sudo fdisk -l

Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00038faf

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200   104857599    51379200   8e  Linux LVM

Disk /dev/sdb: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
...

# du -sh: total size of the current directory (use du -sh ./* to list each subdirectory)
[deploy@osd1 ~]$ du -sh
32K  
