Installing and Configuring the Compute1 Node

Author: sapbcs | Published 2018-03-29 14:19

    I. Preparation

    1. Install and configure the NTP client

    • Install the chrony package
      ~]# yum install chrony -y
    • Edit the chrony configuration file
      ~]# vim /etc/chrony.conf
      # Comment out all existing "server ... iburst" lines, then add:
      server controller iburst
    
    • Start the chronyd service and enable it at boot
      ~]# systemctl enable chronyd.service
      ~]# systemctl start chronyd.service
    • Verify the synchronization
    ~]# chronyc sources 
    210 Number of sources = 1
    MS Name/IP address         Stratum Poll Reach LastRx Last sample               
    ===============================================================================
    ^* controller                    3   6    17     0   +356ns[+5471ns] +/- 4476us
    

    Make sure the * marker appears; only then has synchronization with the time server completed.
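
    If the * marker is slow to appear, chronyc tracking (a standard chrony subcommand) shows the synchronization state in more detail; a "Leap status" of "Normal" generally means the clock is synchronized:
    ~]# chronyc tracking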

    2. Prepare the software packages

    • Install the OpenStack Pike repository (an optional check of the repo follows this list)
      ~]# yum install centos-release-openstack-pike -y
    • Upgrade the installed packages
      ~]# yum upgrade
    • Install the OpenStack client
      ~]# yum install python-openstackclient -y
    • Install the openstack-selinux package
      ~]# yum install openstack-selinux -y
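
    As an optional sanity check (not part of the original procedure), confirm the Pike repository is now enabled:
    ~]# yum repolist enabled | grep -i openstack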

    二、Nova的安装和配置

    1. Install the package

    ~]# yum -y install openstack-nova-compute

    2. Edit the configuration file /etc/nova/nova.conf

    ~]# cp /etc/nova/nova.conf{,.bak}
    ~]# vim /etc/nova/nova.conf

    [DEFAULT]
    # ...
    enabled_apis = osapi_compute,metadata
    transport_url = rabbit://openstack:pike@controller
    my_ip = 10.6.10.2
    use_neutron = True
    firewall_driver = nova.virt.firewall.NoopFirewallDriver
    
    [api]
    # ...
    auth_strategy = keystone
    
    [keystone_authtoken]
    # ...
    auth_uri = http://controller:5000
    auth_url = http://controller:35357
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = nova
    password = pike
    
    [vnc]
    # ...
    enabled = True
    vncserver_listen = 0.0.0.0
    vncserver_proxyclient_address = $my_ip
    novncproxy_base_url = http://controller:6080/vnc_auto.html
    
    [glance]
    # ...
    api_servers = http://controller:9292
    
    [oslo_concurrency]
    # ...
    lock_path = /var/lib/nova/tmp
    
    [placement]
    # ...
    os_region_name = RegionOne
    project_domain_name = Default
    project_name = service
    auth_type = password
    user_domain_name = Default
    auth_url = http://controller:35357/v3
    username = placement
    password = pike
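
    As with the neutron configuration files verified later in this guide, the effective (non-commented) settings can be reviewed with a simple grep; the output should match the sections above:
    ~]# grep "^[^#]" /etc/nova/nova.conf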
    

    3. Finalize the installation

    (1) Verify that the compute1 host supports hardware acceleration for virtual machines
    If the command below returns a value of 1 or greater, hardware acceleration is supported and no further action is needed.
    If it returns 0, hardware acceleration is not supported, and you must change the [libvirt] section of the /etc/nova/nova.conf configuration file as follows:

    [libvirt]
    # ...
    virt_type = qemu
    

    In this deployment the check returned the following, so no further action was needed:
    ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
    32
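
    As an additional optional check (not in the original procedure), the virt-host-validate utility that ships with libvirt reports whether the host is set up for hardware-accelerated virtualization:
    ~]# virt-host-validate qemu
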
    (2) Start the libvirtd.service and openstack-nova-compute.service services, and enable them at boot
    ~]# systemctl enable libvirtd.service openstack-nova-compute.service
    ~]# systemctl start libvirtd.service openstack-nova-compute.service
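
    To confirm both services came up cleanly, check their status (and /var/log/nova/nova-compute.log if anything looks wrong):
    ~]# systemctl status libvirtd.service openstack-nova-compute.service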

    4. Add the compute node to the cell database

    Important:
    The following steps must be performed on the controller node, not on the compute1 node!

    (1) Source the admin-openrc environment file to enable the admin CLI, then confirm that the compute host is registered in the database

    ~]# . admin-openrc
    ~]# openstack compute service list --service nova-compute
    +----+--------------+----------+------+---------+-------+----------------------------+
    | ID | Binary       | Host     | Zone | Status  | State | Updated At                 |
    +----+--------------+----------+------+---------+-------+----------------------------+
    |  6 | nova-compute | compute1 | nova | enabled | up    | 2018-02-08T06:07:19.000000 |
    +----+--------------+----------+------+---------+-------+----------------------------+
    

    (2) Discover the compute host

    ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
    Found 2 cell mappings.
    Skipping cell0 since it does not contain hosts.
    Getting compute nodes from cell 'cell1': c645de34-d50f-4134-b50d-15855cd0e105
    Found 1 unmapped computes in cell: c645de34-d50f-4134-b50d-15855cd0e105
    Checking host mapping for compute host 'compute1': 5d7e8f0e-1239-4a61-a2a5-9c71aabd9ec4
    Creating host mapping for compute host 'compute1': 5d7e8f0e-1239-4a61-a2a5-9c71aabd9ec4
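
    Still on the controller node, you can confirm that the host mapping took effect; compute1 should now appear in the hypervisor list:
    ~]# openstack hypervisor list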
    

    Note
    Whenever you add another compute node later, you must run ~]# nova-manage cell_v2 discover_hosts on the controller node to register the new node. Alternatively, set an appropriate discovery interval in the /etc/nova/nova.conf configuration file, for example:

    [scheduler]
    discover_hosts_in_cells_interval = 300
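
    If the crudini utility is available (it is often packaged in the EPEL or OpenStack repositories), the same setting can be applied from the command line instead of editing the file by hand:
    ~]# crudini --set /etc/nova/nova.conf scheduler discover_hosts_in_cells_interval 300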
    

    III. Installing and Configuring Neutron

    1. Install the packages

    ~]# yum install openstack-neutron-linuxbridge ebtables ipset -y

    2. Configure the common components

    ~]# cp /etc/neutron/neutron.conf{,.bak}
    ~]# vim /etc/neutron/neutron.conf

    # In the [database] section, comment out any connection lines; compute nodes do not access the database directly
    
    [DEFAULT]
    # ...
    transport_url = rabbit://openstack:pike@controller
    auth_strategy = keystone
    
    [keystone_authtoken]
    # ...
    auth_uri = http://controller:5000
    auth_url = http://controller:35357
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = neutron
    password = pike
    
    [oslo_concurrency]
    # ...
    lock_path = /var/lib/neutron/tmp
    

    Note:
    Make sure every other option in the [keystone_authtoken] section is commented out!

    Verify the configuration

    ~]# grep "^[^#]" /etc/neutron/neutron.conf
    [DEFAULT]
    transport_url = rabbit://openstack:pike@controller
    auth_strategy = keystone
    [agent]
    [cors]
    [database]
    [keystone_authtoken]
    auth_uri = http://controller:5000
    auth_url = http://controller:35357
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = neutron
    password = pike
    [matchmaker_redis]
    [nova]
    [oslo_concurrency]
    lock_path = /var/lib/neutron/tmp
    [oslo_messaging_amqp]
    [oslo_messaging_kafka]
    [oslo_messaging_notifications]
    [oslo_messaging_rabbit]
    [oslo_messaging_zmq]
    [oslo_middleware]
    [oslo_policy]
    [quotas]
    [ssl]
    

    3. Configure the Linux bridge agent

    ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
    ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

    [linux_bridge]
    physical_interface_mappings = provider:ens192
    # physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
    # Replace PROVIDER_INTERFACE_NAME with the name of the compute node's provider (external) interface; here it is ens192
    
    [vxlan]
    enable_vxlan = true
    local_ip = 10.6.10.2
    l2_population = true
    # local_ip = OVERLAY_INTERFACE_IP_ADDRESS
    # Replace OVERLAY_INTERFACE_IP_ADDRESS with the IP address of the compute node's management interface; here it is 10.6.10.2
    
    [securitygroup]
    # ...
    enable_security_group = true
    firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
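
    Note that this iptables-based firewall driver relies on the br_netfilter kernel module; the upstream Pike installation guide recommends verifying that the following sysctl values are both set to 1 (load the module with modprobe br_netfilter first if the keys are missing):
    ~]# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables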
    

    Verify the configuration

    ~]# grep "^[^#]" /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    [DEFAULT]
    [agent]
    [linux_bridge]
    physical_interface_mappings = provider:ens192
    [securitygroup]
    enable_security_group = true
    firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    [vxlan]
    enable_vxlan = true
    local_ip = 10.6.10.2
    l2_population = true
    

    4. Configure the Compute service to use the Networking service

    ~]# cp /etc/nova/nova.conf{,.neutron.bak}
    ~]# vim /etc/nova/nova.conf

    [neutron]
    # ...
    url = http://controller:9696
    auth_url = http://controller:35357
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    region_name = RegionOne
    project_name = service
    username = neutron
    password = pike
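
    A quick, optional way to review just this section afterwards (adjust the -A line count as needed):
    ~]# grep -A 10 "^\[neutron\]" /etc/nova/nova.conf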
    

    5. Finalize the installation

    • Restart the openstack-nova-compute service
      ~]# systemctl restart openstack-nova-compute.service
    • Start the neutron-linuxbridge-agent service and enable it at boot
      ~]# systemctl enable neutron-linuxbridge-agent.service
      ~]# systemctl start neutron-linuxbridge-agent.service
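
    To verify that the agent registered correctly, run the following on the controller node (with admin-openrc sourced); a "Linux bridge agent" entry for compute1 should be listed as alive:
    ~]# openstack network agent list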
