⑤ OpenStack High-Availability Cluster Deployment (train) — Nov

Author: Linux丶晨星 | Published 2020-09-02 18:36

    Nova provides the following functions:
    1 Instance lifecycle management
    2 Compute resource management
    3 Network and authentication management
    4 REST-style API
    5 Asynchronous, eventually consistent communication
    6 Hypervisor transparency: supports Xen, XenServer/XCP, KVM, UML, VMware vSphere and Hyper-V

    XIII. Nova Controller Node Cluster Deployment

    https://docs.openstack.org/nova/stein/install/

    1. Create the Nova databases

    Create the databases on any controller node (they are synchronized automatically across the cluster); controller01 is used as the example.

    # Create the nova_api, nova and nova_cell0 databases and grant privileges
    mysql -uroot -p
    CREATE DATABASE nova_api;
    CREATE DATABASE nova;
    CREATE DATABASE nova_cell0;
    
    GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'Zx*****';
    GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'Zx*****';
    
    GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'Zx*****';
    GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'Zx*****';
    
    GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'Zx*****';
    GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'Zx*****';
    flush privileges;
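The three databases receive identical grants, differing only in the database name, so the statements can also be generated by a small helper and piped into mysql. This is an optional sketch; `nova_db_sql` is a hypothetical helper, and `Zx*****` stands for the real password as elsewhere in this guide:

```shell
# Emit the CREATE DATABASE and GRANT statements for one Nova database;
# piping the combined output into `mysql -uroot -p` applies them.
nova_db_sql() {
  # $1: database name, $2: password for the nova account
  echo "CREATE DATABASE $1;"
  for host in localhost '%'; do
    echo "GRANT ALL PRIVILEGES ON $1.* TO 'nova'@'$host' IDENTIFIED BY '$2';"
  done
}

# Print the SQL for all three databases (review it, then pipe into mysql):
for db in nova_api nova nova_cell0; do nova_db_sql "$db" 'Zx*****'; done
```

Running the loop only prints the SQL, so it can be reviewed before being applied with `... | mysql -uroot -p`.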
    

    2. Create the Nova service credentials

    Run on any controller node; controller01 is used as the example.

    2.1 Create the nova user

    source admin-openrc
    openstack user create --domain default --password Zx***** nova
    

    2.2 Grant the admin role to the nova user

    openstack role add --project service --user nova admin
    

    2.3 Create the nova service entity

    openstack service create --name nova --description "OpenStack Compute" compute
    

    2.4 Create the Compute API service endpoints

    All API addresses use the VIP; if public/internal/admin are designed to use separate VIPs, adjust each endpoint accordingly.

    --region must match the region generated when the admin user was initialized.

    openstack endpoint create --region RegionOne compute public http://10.15.253.88:8774/v2.1
    openstack endpoint create --region RegionOne compute internal http://10.15.253.88:8774/v2.1
    openstack endpoint create --region RegionOne compute admin http://10.15.253.88:8774/v2.1
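Each service registers the same three interfaces (public/internal/admin) against the same URL, so the commands can be generated rather than typed three times. A sketch; `endpoint_cmds` is a hypothetical helper, and RegionOne and the VIP URL are the values used above:

```shell
# Print the three endpoint-create commands for one service type.
endpoint_cmds() {
  # $1: service type, $2: endpoint URL
  for iface in public internal admin; do
    echo "openstack endpoint create --region RegionOne $1 $iface $2"
  done
}

# Print the commands; append `| sh` to execute them instead.
endpoint_cmds compute http://10.15.253.88:8774/v2.1
```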
    

    3. Install the Nova packages

    Install the Nova services on all controller nodes; controller01 is used as the example.

    • nova-api (the main Nova API service)
    • nova-scheduler (the Nova scheduler)
    • nova-conductor (the Nova database access service, mediating all database operations)
    • nova-novncproxy (the Nova VNC proxy, providing instance consoles)
    yum install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler -y
    

    4. Deployment and configuration

    https://docs.openstack.org/nova/stein/install/controller-install-rdo.html

    Configure the Nova services on all controller nodes; controller01 is used as the example.

    Note: adjust the my_ip parameter per node, and make sure /etc/nova/nova.conf is owned by root:nova.

    # Back up the configuration file /etc/nova/nova.conf
    cp -a /etc/nova/nova.conf{,.bak}
    grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
    
    openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis  osapi_compute,metadata
    openstack-config --set /etc/nova/nova.conf DEFAULT my_ip  10.15.253.163
    openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron  true
    openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver  nova.virt.firewall.NoopFirewallDriver
    
    # The RabbitMQ VIP port in haproxy is set to 5673; we bypass the haproxy-fronted RabbitMQ here and connect directly to the RabbitMQ cluster
    #openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:Zx*****@10.15.253.88:5673
    openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:Zx*****@controller01:5672,openstack:Zx*****@controller02:5672,openstack:Zx*****@controller03:5672
    
    openstack-config --set /etc/nova/nova.conf DEFAULT osapi_compute_listen_port 8774
    openstack-config --set /etc/nova/nova.conf DEFAULT metadata_listen_port 8775
    openstack-config --set /etc/nova/nova.conf DEFAULT metadata_listen '$my_ip'
    openstack-config --set /etc/nova/nova.conf DEFAULT osapi_compute_listen '$my_ip'
    
    openstack-config --set /etc/nova/nova.conf api auth_strategy  keystone
    openstack-config --set /etc/nova/nova.conf api_database  connection  mysql+pymysql://nova:Zx*****@10.15.253.88/nova_api
    
    openstack-config --set /etc/nova/nova.conf cache backend oslo_cache.memcache_pool
    openstack-config --set /etc/nova/nova.conf cache enabled True
    openstack-config --set /etc/nova/nova.conf cache memcache_servers controller01:11211,controller02:11211,controller03:11211
    
    openstack-config --set /etc/nova/nova.conf database connection  mysql+pymysql://nova:Zx*****@10.15.253.88/nova
    
    openstack-config --set /etc/nova/nova.conf keystone_authtoken www_authenticate_uri  http://10.15.253.88:5000/v3
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url  http://10.15.253.88:5000/v3
    openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers  controller01:11211,controller02:11211,controller03:11211
    openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type  password
    openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name  Default
    openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name  Default
    openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name  service
    openstack-config --set /etc/nova/nova.conf keystone_authtoken username  nova
    openstack-config --set /etc/nova/nova.conf keystone_authtoken password  Zx*****
    
    openstack-config --set /etc/nova/nova.conf vnc enabled  true
    openstack-config --set /etc/nova/nova.conf vnc server_listen  '$my_ip'
    openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address  '$my_ip'
    openstack-config --set /etc/nova/nova.conf vnc novncproxy_host '$my_ip'
    openstack-config --set /etc/nova/nova.conf vnc novncproxy_port  6080
    
    openstack-config --set /etc/nova/nova.conf glance  api_servers  http://10.15.253.88:9292
    
    openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path  /var/lib/nova/tmp
    
    openstack-config --set /etc/nova/nova.conf placement region_name  RegionOne
    openstack-config --set /etc/nova/nova.conf placement project_domain_name  Default
    openstack-config --set /etc/nova/nova.conf placement project_name  service
    openstack-config --set /etc/nova/nova.conf placement auth_type  password
    openstack-config --set /etc/nova/nova.conf placement user_domain_name  Default
    openstack-config --set /etc/nova/nova.conf placement auth_url  http://10.15.253.88:5000/v3
    openstack-config --set /etc/nova/nova.conf placement username  placement
    openstack-config --set /etc/nova/nova.conf placement password  Zx*****
    

    Note:

    # With haproxy in front, services connecting to RabbitMQ can hit connection timeouts and reconnects; confirm via each service's logs and the RabbitMQ logs;
    # transport_url=rabbit://openstack:Zx******@10.15.253.88:5672
    # RabbitMQ has its own clustering, and the official documentation recommends connecting to the RabbitMQ cluster directly; with that approach, however, services occasionally error on startup for reasons unknown. If you do not see this behavior, prefer connecting straight to the cluster rather than through the haproxy VIP and port
    openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:Zx*****@controller01:5672,openstack:Zx*****@controller02:5672,openstack:Zx*****@controller03:5672
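The clustered transport_url above is simply user:password@host:port repeated once per node, comma-separated, behind a single rabbit:// scheme. Assembling it with a helper avoids copy-paste slips when nodes are added or renamed. A sketch; `build_transport_url` is a hypothetical name, and openstack/Zx***** are the credentials used throughout this guide:

```shell
# Assemble a clustered RabbitMQ transport_url from host:port pairs.
build_transport_url() {
  # $1: user, $2: password; remaining arguments: one host:port per node
  user=$1; pass=$2; shift 2
  hosts=""; sep=""
  for hp in "$@"; do
    hosts="$hosts$sep$user:$pass@$hp"
    sep=","
  done
  echo "rabbit://$hosts"
}

build_transport_url openstack 'Zx*****' controller01:5672 controller02:5672 controller03:5672
```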
    

    Copy the Nova configuration file to the other controller nodes:

    scp -rp /etc/nova/nova.conf controller02:/etc/nova/
    scp -rp /etc/nova/nova.conf controller03:/etc/nova/
    
    ## on controller02
    sed -i "s#10.15.253.163#10.15.253.195#g" /etc/nova/nova.conf
    
    ## on controller03
    sed -i "s#10.15.253.163#10.15.253.227#g" /etc/nova/nova.conf
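The copy-and-rewrite steps above can be driven from controller01 in one loop, since the only per-node difference is the my_ip address. A sketch, assuming passwordless root ssh between the controllers; the host-to-IP map is this guide's addressing:

```shell
# Rewrite one IP address in place inside a configuration file.
replace_ip() {
  # $1: old IP, $2: new IP, $3: file to edit
  sed -i "s#$1#$2#g" "$3"
}

# From controller01 (my_ip 10.15.253.163):
# for pair in controller02:10.15.253.195 controller03:10.15.253.227; do
#   host=${pair%%:*}; ip=${pair#*:}
#   scp -rp /etc/nova/nova.conf "$host:/etc/nova/"
#   ssh "$host" "sed -i 's#10.15.253.163#$ip#g' /etc/nova/nova.conf"
# done
```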
    

    5. Sync the Nova databases and verify

    Run on any controller node; populate the Nova databases

    # Populate the nova-api database (no output)
    # Populate the cell0 database (no output)
    # Create the cell1 cell
    # Sync the nova database
    su -s /bin/sh -c "nova-manage api_db sync" nova
    su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
    su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
    su -s /bin/sh -c "nova-manage db sync" nova
    

    Verify that cell0 and cell1 are registered correctly

    su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
    

    Verify that the Nova databases are being written correctly

    mysql -h controller01 -u nova -pZx***** -e "use nova_api;show tables;"
    mysql -h controller01 -u nova -pZx***** -e "use nova;show tables;"
    mysql -h controller01 -u nova -pZx***** -e "use nova_cell0;show tables;"
    

    6. Start the Nova services and enable them at boot

    Run on all controller nodes; controller01 is used as the example.

    systemctl enable openstack-nova-api.service 
    systemctl enable openstack-nova-scheduler.service 
    systemctl enable openstack-nova-conductor.service 
    systemctl enable openstack-nova-novncproxy.service
    
    systemctl restart openstack-nova-api.service 
    systemctl restart openstack-nova-scheduler.service 
    systemctl restart openstack-nova-conductor.service 
    systemctl restart openstack-nova-novncproxy.service
    
    systemctl status openstack-nova-api.service 
    systemctl status openstack-nova-scheduler.service 
    systemctl status openstack-nova-conductor.service 
    systemctl status openstack-nova-novncproxy.service
    
    
    netstat -tunlp | egrep '8774|8775|8778|6080'
    curl http://myvip:8774
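The netstat/curl checks above rely on eyeballing the output. A small helper turns the port check into an explicit pass/fail instead; a sketch, using `ss` here but equally applicable to netstat output:

```shell
# Report which required ports are absent from a socket listing;
# empty output means everything expected is listening.
ports_missing() {
  # $1: output of `ss -tnl` (or `netstat -tunl`); remaining args: ports
  out=$1; shift
  missing=""
  for p in "$@"; do
    echo "$out" | grep -q ":$p " || missing="$missing $p"
  done
  echo "$missing"
}

# On a controller node:
# ports_missing "$(ss -tnl)" 8774 8775 8778 6080
```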
    

    7. Verification

    List the service components and check their state;

    [root@controller01 ~]# openstack compute service list
    

    Show the API endpoints;

    [root@controller01 ~]# openstack catalog list
    

    Check the cells and the placement API; every check should report Success

    [root@controller01 ~]# nova-status upgrade check
    

    8. Configure the pcs resources

    Run on any controller node; add the openstack-nova-api, openstack-nova-scheduler, openstack-nova-conductor and openstack-nova-novncproxy resources (nova-consoleauth was removed in the Train release, so no resource is created for it)

    pcs resource create openstack-nova-api systemd:openstack-nova-api clone interleave=true
    pcs resource create openstack-nova-scheduler systemd:openstack-nova-scheduler clone interleave=true
    pcs resource create openstack-nova-conductor systemd:openstack-nova-conductor clone interleave=true
    pcs resource create openstack-nova-novncproxy systemd:openstack-nova-novncproxy clone interleave=true
    
    # Recommended: run the stateless services openstack-nova-api, openstack-nova-conductor and openstack-nova-novncproxy in active/active mode;
    # run services such as openstack-nova-scheduler in active/passive mode
    

    View the pcs resources

    [root@controller01 ~]# pcs resource 
      * vip (ocf::heartbeat:IPaddr2):   Started controller03
      * Clone Set: openstack-keystone-clone [openstack-keystone]:
        * Started: [ controller01 controller02 controller03 ]
      * Clone Set: lb-haproxy-clone [lb-haproxy]:
        * Started: [ controller03 ]
        * Stopped: [ controller01 controller02 ]
      * Clone Set: openstack-glance-api-clone [openstack-glance-api]:
        * Started: [ controller01 controller02 controller03 ]
      * Clone Set: openstack-nova-api-clone [openstack-nova-api]:
        * Started: [ controller01 controller02 controller03 ]
      * Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]:
        * Started: [ controller01 controller02 controller03 ]
      * Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]:
        * Started: [ controller01 controller02 controller03 ]
      * Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]:
        * Started: [ controller01 controller02 controller03 ]
    

    Log in to the haproxy web UI to confirm that the Nova services were added successfully


    XIV. Nova Compute Node Cluster Deployment

    10.15.253.162 c2m16h600 compute01
    10.15.253.194 c2m16h600 compute02
    10.15.253.226 c2m16h600 compute03

    1. Install nova-compute

    Install the nova-compute service on all compute nodes; compute01 is used as the example.

    # The OpenStack repository and required dependencies were already set up during base configuration, so only the service packages need to be installed
    yum install -y openstack-nova-compute
    yum install -y openstack-utils
    

    2. Deployment and configuration

    Configure the nova-compute service on all compute nodes; compute01 is used as the example.

    Note: adjust the my_ip parameter per node, and make sure /etc/nova/nova.conf is owned by root:nova.

    # Back up the configuration file /etc/nova/nova.conf
    cp /etc/nova/nova.conf{,.bak}
    grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
    

    2.1 Check whether the compute node supports hardware acceleration for virtual machines

    [root@compute01 ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
    0
    $ If this command returns a value other than 0, the compute node supports hardware acceleration and the configuration below is not required.
    $ If it returns 0, the compute node does not support hardware acceleration, and libvirt must be configured to use QEMU instead of KVM.
    $ This is done in the [libvirt] section of /etc/nova/nova.conf; since this test environment runs on virtual machines, it is set to qemu.
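The check and the resulting setting can be combined so the correct virt_type is selected automatically. A sketch; `choose_virt_type` is a hypothetical helper, and only the qemu/kvm cases from this guide are considered:

```shell
# Pick the libvirt virt_type from the number of vmx/svm CPU flags:
# zero flags means no hardware virtualization support, so fall back to QEMU.
choose_virt_type() {
  if [ "$1" -eq 0 ]; then echo qemu; else echo kvm; fi
}

count=$(grep -Ec '(vmx|svm)' /proc/cpuinfo 2>/dev/null || true)
echo "virt_type=$(choose_virt_type "${count:-0}")"
# To apply on the node:
# openstack-config --set /etc/nova/nova.conf libvirt virt_type "$(choose_virt_type "${count:-0}")"
```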
    

    2.2 Edit the configuration file nova.conf

    openstack-config --set  /etc/nova/nova.conf DEFAULT enabled_apis  osapi_compute,metadata
    openstack-config --set  /etc/nova/nova.conf DEFAULT transport_url  rabbit://openstack:Zx*****@10.15.253.88
    openstack-config --set  /etc/nova/nova.conf DEFAULT my_ip 10.15.253.162
    openstack-config --set  /etc/nova/nova.conf DEFAULT use_neutron  true
    openstack-config --set  /etc/nova/nova.conf DEFAULT firewall_driver  nova.virt.firewall.NoopFirewallDriver
    
    openstack-config --set  /etc/nova/nova.conf api auth_strategy  keystone
    
    openstack-config --set /etc/nova/nova.conf  keystone_authtoken www_authenticate_uri  http://10.15.253.88:5000
    openstack-config --set  /etc/nova/nova.conf keystone_authtoken auth_url  http://10.15.253.88:5000
    openstack-config --set  /etc/nova/nova.conf keystone_authtoken memcached_servers  controller01:11211,controller02:11211,controller03:11211
    openstack-config --set  /etc/nova/nova.conf keystone_authtoken auth_type  password
    openstack-config --set  /etc/nova/nova.conf keystone_authtoken project_domain_name  Default
    openstack-config --set  /etc/nova/nova.conf keystone_authtoken user_domain_name  Default
    openstack-config --set  /etc/nova/nova.conf keystone_authtoken project_name  service
    openstack-config --set  /etc/nova/nova.conf keystone_authtoken username  nova
    openstack-config --set  /etc/nova/nova.conf keystone_authtoken password  Zx*****
    
    openstack-config --set /etc/nova/nova.conf libvirt virt_type  qemu
    
    openstack-config --set  /etc/nova/nova.conf vnc enabled  true
    openstack-config --set  /etc/nova/nova.conf vnc server_listen  0.0.0.0
    openstack-config --set  /etc/nova/nova.conf vnc server_proxyclient_address  '$my_ip'
    openstack-config --set  /etc/nova/nova.conf vnc novncproxy_base_url http://10.15.253.88:6080/vnc_auto.html
    
    openstack-config --set  /etc/nova/nova.conf glance api_servers  http://10.15.253.88:9292
    
    openstack-config --set  /etc/nova/nova.conf oslo_concurrency lock_path  /var/lib/nova/tmp
    
    openstack-config --set  /etc/nova/nova.conf placement region_name  RegionOne
    openstack-config --set  /etc/nova/nova.conf placement project_domain_name  Default
    openstack-config --set  /etc/nova/nova.conf placement project_name  service
    openstack-config --set  /etc/nova/nova.conf placement auth_type  password
    openstack-config --set  /etc/nova/nova.conf placement user_domain_name  Default
    openstack-config --set  /etc/nova/nova.conf placement auth_url  http://10.15.253.88:5000/v3
    openstack-config --set  /etc/nova/nova.conf placement username  placement
    openstack-config --set  /etc/nova/nova.conf placement password  Zx*****
    

    Copy the Nova configuration file to the other compute nodes:

    scp -rp /etc/nova/nova.conf compute02:/etc/nova/
    scp -rp /etc/nova/nova.conf compute03:/etc/nova/
    
    ## on compute02
    sed -i "s#10.15.253.162#10.15.253.194#g" /etc/nova/nova.conf
    
    ## on compute03
    sed -i "s#10.15.253.162#10.15.253.226#g" /etc/nova/nova.conf
    

    3. Start the Nova services on the compute nodes

    Run on all compute nodes;

    systemctl restart libvirtd.service openstack-nova-compute.service
    systemctl enable libvirtd.service openstack-nova-compute.service
    systemctl status libvirtd.service openstack-nova-compute.service
    

    4. Add the compute nodes to the cell database

    Run on any controller node; list the compute services

    [root@controller01 ~]# openstack compute service list --service nova-compute
    

    5. Discover compute hosts from the controller nodes

    Each time a new compute node is added, host discovery must be run on a controller node

    5.1 Discover compute nodes manually

    su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
    

    5.2 Discover compute nodes automatically

    To avoid running nova-manage cell_v2 discover_hosts by hand whenever a compute node is added, the controllers can discover hosts periodically via the [scheduler] section of nova.conf;
    Run on all controller nodes; the discovery interval is set to 10 minutes (600 seconds) here and can be tuned to the environment

    openstack-config --set  /etc/nova/nova.conf scheduler discover_hosts_in_cells_interval 600
    systemctl restart openstack-nova-api.service
    

    6. Verification

    List the service components to verify that each process started and registered successfully

    openstack compute service list
    

    List the API endpoints in the Identity service to verify connectivity to the Identity service

    openstack catalog list
    

    List images in the Image service to verify connectivity to the Image service

    openstack image list
    

    Check that the cells and the placement API are working correctly

    nova-status upgrade check
    

    Original link: https://www.haomeiwen.com/subject/kwwwsktx.html