Cinder's core function is volume management: it handles volumes, volume types, volume snapshots, and volume backups. It presents a unified interface in front of heterogeneous back-end storage devices; block-storage vendors implement their drivers in Cinder so that their products can be managed by OpenStack. Architecturally, Cinder works much like Nova. It supports many back-end storage options, including LVM, NFS, Ceph, and commercial storage products and solutions such as EMC and IBM.
Related reading:
- A detailed introduction to how Cinder works
- Block storage from the OpenStack point of view
- Distributed storage: an introduction to Ceph and its architecture (part 1)
- Distributed storage: an introduction to Ceph and its architecture (part 2)
- Three storage solutions (DAS, NAS, SAN) applied to database storage
- Concepts and applications of the DAS, SAN and NAS storage models
Functions of the Cinder components
- cinder-api is the endpoint of the cinder service. It exposes the REST API, handles client requests, and forwards them as RPC requests to cinder-scheduler.
- cinder-scheduler schedules cinder requests. Its core is the scheduler_driver, which acts as the driver of the scheduler manager, selects a cinder-volume back end for each request, and sends the cinder RPC request to the chosen cinder-volume.
- cinder-volume handles the actual volume requests; the volume storage space itself is provided by the various back-end storage systems. The major storage vendors actively contribute drivers for their products to the cinder community.
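As a quick illustration of how the three components cooperate (a minimal sketch, only runnable once the deployment below is complete; the volume name demo-vol is hypothetical): creating a volume is received by cinder-api, placed by cinder-scheduler, and carved out of the back end by cinder-volume.
openstack volume create --size 1 demo-vol
#status becomes "available" once cinder-volume has finished creating it
openstack volume show demo-vol -f value -c status
#list the scheduler/volume services that handled the request
openstack volume service list
openstack volume delete demo-vol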
XVI. Cinder control node cluster deployment
https://docs.openstack.org/cinder/train/install/
1. Create the cinder database
Create the database on any one control node; the data is synchronized to the other nodes automatically by the database cluster.
mysql -u root -pZxzn@2020
create database cinder;
grant all privileges on cinder.* to 'cinder'@'%' identified by 'Zxzn@2020';
grant all privileges on cinder.* to 'cinder'@'localhost' identified by 'Zxzn@2020';
flush privileges;
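An optional sanity check (not part of the original steps): the new account should be able to log in and see the cinder database.
mysql -ucinder -pZxzn@2020 -e "SHOW DATABASES;"
mysql -ucinder -pZxzn@2020 -e "SHOW GRANTS;"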
2. Create the cinder service credentials
Run on any one control node, using controller01 as the example.
2.1 Create the cinder service user
source admin-openrc
openstack user create --domain default --password Zxzn@2020 cinder
2.2 Grant the admin role to the cinder user
openstack role add --project service --user cinder admin
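Optionally confirm that the role assignment took effect:
openstack role assignment list --user cinder --project service --names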
2.3 Create the cinderv2 and cinderv3 service entities
#the cinder service entity types are volumev2 and volumev3
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
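Optionally confirm that both service entities are registered:
openstack service list | grep -E 'volumev2|volumev3'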
2.4 Create the Block Storage service API endpoints
- The Block Storage service requires endpoints for each service entity.
- The cinder-api URL ends with the project ID (the %(project_id)s placeholder); existing project IDs can be listed with: openstack project list
#v2
openstack endpoint create --region RegionOne volumev2 public http://10.15.253.88:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://10.15.253.88:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://10.15.253.88:8776/v2/%\(project_id\)s
#v3
openstack endpoint create --region RegionOne volumev3 public http://10.15.253.88:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://10.15.253.88:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://10.15.253.88:8776/v3/%\(project_id\)s
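Optionally confirm that all six endpoints (public/internal/admin for v2 and v3) were created:
openstack endpoint list --service volumev2
openstack endpoint list --service volumev3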
3. Deploy and configure cinder
3.1 Install cinder
Install the cinder service on all control nodes, using controller01 as the example.
yum install openstack-cinder -y
3.2 Configure cinder.conf
Run on all control nodes, using controller01 as the example; note that the my_ip parameter must be adjusted per node.
#back up the configuration file /etc/cinder/cinder.conf
cp -a /etc/cinder/cinder.conf{,.bak}
grep -Ev '^$|#' /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 10.15.253.163
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_servers http://10.15.253.88:9292
openstack-config --set /etc/cinder/cinder.conf DEFAULT osapi_volume_listen '$my_ip'
openstack-config --set /etc/cinder/cinder.conf DEFAULT osapi_volume_listen_port 8776
openstack-config --set /etc/cinder/cinder.conf DEFAULT log_dir /var/log/cinder
#直接连接rabbitmq集群
openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:Zxzn@2020@controller01:5672,openstack:Zxzn@2020@controller02:5672,openstack:Zxzn@2020@controller03:5672
openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:Zxzn@2020@10.15.253.88/cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken www_authenticate_uri http://10.15.253.88:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://10.15.253.88:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password Zxzn@2020
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp
Copy the cinder configuration file to the other two control nodes:
scp -rp /etc/cinder/cinder.conf controller02:/etc/cinder/
scp -rp /etc/cinder/cinder.conf controller03:/etc/cinder/
##on controller02
sed -i "s#10.15.253.163#10.15.253.195#g" /etc/cinder/cinder.conf
##on controller03
sed -i "s#10.15.253.163#10.15.253.227#g" /etc/cinder/cinder.conf
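An optional per-node check (run on each controller) confirms that my_ip was rewritten correctly after the copy:
openstack-config --get /etc/cinder/cinder.conf DEFAULT my_ip
openstack-config --get /etc/cinder/cinder.conf DEFAULT osapi_volume_listen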
3.3 Configure nova.conf to use the Block Storage service
Run on all control nodes, using controller01 as the example; only the [cinder] section of nova.conf needs to be changed.
openstack-config --set /etc/nova/nova.conf cinder os_region_name RegionOne
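Optionally confirm the setting (it only takes effect after nova-api is restarted in step 3.5):
openstack-config --get /etc/nova/nova.conf cinder os_region_name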
3.4 Synchronize the cinder database
Run on any one control node;
su -s /bin/sh -c "cinder-manage db sync" cinder
#verify
mysql -ucinder -pZxzn@2020 -e "use cinder;show tables;"
3.5 Start the services and enable them at boot
Run on all control nodes; since the nova configuration file was modified, restart the nova service first.
systemctl restart openstack-nova-api.service
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service
3.6 Verify on a control node
openstack volume service list
#alternatively: cinder service-list
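At this point cinder-scheduler should be reported as up on all three controllers; cinder-volume will only appear after the storage nodes are deployed in part XVII. To filter for the scheduler alone (optional):
openstack volume service list --service cinder-scheduler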
4. Configure pcs resources
Run on any one control node; add the cinder-api and cinder-scheduler resources.
- cinder-api and cinder-scheduler run in active/active mode;
- cinder-volume (openstack-cinder-volume) runs in active/passive mode.
pcs resource create openstack-cinder-api systemd:openstack-cinder-api clone interleave=true
pcs resource create openstack-cinder-scheduler systemd:openstack-cinder-scheduler clone interleave=true
Check the resources:
pcs resource
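The two new clone sets should show as Started on all three controllers; an optional check (the exact output layout depends on the pcs version):
pcs status | grep -A 3 -i cinder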
XVII. Cinder storage node cluster deployment
- Because resources are limited, the cinder storage role is deployed on the three compute nodes for now.
- Ceph is used as the back-end storage.
- When Ceph or another commercial/non-commercial back end is used, it is recommended to deploy the cinder-volume service on the control nodes and run it in active/passive mode under pacemaker.
⑨ OpenStack high-availability cluster deployment (Train): installing and configuring a Ceph cluster on CentOS 8
Storage challenges in OpenStack
https://docs.openstack.org/arch-design/
When an enterprise puts OpenStack into production, three problems have to be thought through and solved:
1. High availability and load balancing of the control cluster, so that it has no single point of failure and stays continuously available;
2. Network planning, plus high availability and load balancing of neutron L3;
3. High availability and performance of the storage.
Storage is one of the pain points of OpenStack and an important item to plan for during rollout and operations. OpenStack supports many kinds of storage, including distributed file systems such as ceph, glusterfs and sheepdog, as well as commercial FC storage appliances from vendors such as IBM, EMC, NetApp and huawei, which lets enterprises reuse existing equipment and manage storage resources in a unified way.
Ceph overview
Ceph, the unified storage system with the strongest momentum in recent years, grew up in and for cloud environments. It has helped open-source platforms such as OpenStack and CloudStack succeed, and the rapid growth of OpenStack has in turn drawn more and more people into Ceph development. The Ceph community is increasingly active, and more and more enterprises use Ceph as the storage back end for OpenStack glance, nova and cinder.
Ceph is a unified distributed storage system that provides three commonly used interfaces:
1. An object storage interface, compatible with S3, used to store unstructured data such as image, video and audio files; other object stores include S3, Swift and FastDFS.
2. A file system interface, provided by cephfs, which can be mounted much like NFS and requires MDS daemons; comparable file storage systems include nfs, samba and glusterfs.
3. Block storage, provided by rbd, designed for block devices in cloud environments such as OpenStack cinder volumes; this is currently the most widely used part of Ceph.
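A minimal sketch of the rbd interface (it assumes a reachable Ceph cluster with a client keyring on the node; the pool name volumes is hypothetical):
#list the pools, then create, inspect and remove a 1 GiB RBD image
ceph osd pool ls
rbd create volumes/demo-img --size 1G
rbd ls volumes
rbd info volumes/demo-img
rbd rm volumes/demo-img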
1. Deploy and configure cinder
1.1 Install cinder
Install on all compute nodes; the compute nodes already have the OpenStack repositories configured. If dedicated cinder nodes are used instead, complete the base system preparation on them first.
yum install openstack-cinder targetcli python3-keystone -y
1.2 Configure cinder.conf
Configure on all compute nodes; note that the my_ip parameter must be adjusted per node.
#back up the configuration file /etc/cinder/cinder.conf
cp -a /etc/cinder/cinder.conf{,.bak}
grep -Ev '#|^$' /etc/cinder/cinder.conf.bak>/etc/cinder/cinder.conf
openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:Zxzn@2020@controller01:5672,openstack:Zxzn@2020@controller02:5672,openstack:Zxzn@2020@controller03:5672
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 10.15.253.162
openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_servers http://10.15.253.88:9292
#ceph is the back end here; the [ceph] backend section itself is configured later, when Ceph is integrated
openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends ceph
openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:Zxzn@2020@10.15.253.88/cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken www_authenticate_uri http://10.15.253.88:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://10.15.253.88:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password Zxzn@2020
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp
Copy the cinder configuration file to the other two compute nodes:
scp -rp /etc/cinder/cinder.conf compute02:/etc/cinder/
scp -rp /etc/cinder/cinder.conf compute03:/etc/cinder/
##on compute02
sed -i "s#10.15.253.162#10.15.253.194#g" /etc/cinder/cinder.conf
##on compute03
sed -i "s#10.15.253.162#10.15.253.226#g" /etc/cinder/cinder.conf
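As on the controllers, an optional per-node check confirms the values after the copy:
openstack-config --get /etc/cinder/cinder.conf DEFAULT my_ip
openstack-config --get /etc/cinder/cinder.conf DEFAULT enabled_backends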
1.3 Start the services and enable them at boot
Run on all compute nodes;
systemctl restart openstack-cinder-volume.service target.service
systemctl enable openstack-cinder-volume.service target.service
systemctl status openstack-cinder-volume.service target.service
1.4 Verify on a control node
Run a status check. The back-end storage is ceph, but the Ceph services have not yet been enabled and integrated with cinder-volume, so the cinder-volume service state is reported as down at this point.
openstack volume service list
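To watch only the volume service on the three storage hosts (optional; the state should change to up once Ceph is integrated with cinder-volume in the later parts):
openstack volume service list --service cinder-volume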