CentOS 7 + OpenStack + KVM single-node setup

Author: 飞飞羊 | Published 2017-08-18 10:37


    Overview

    This article is based on the official OpenStack documentation and a blog post, combined with hands-on practice (it mainly follows that blog post). If you want a quicker way to stand up an OpenStack environment, consider an automated tool such as DevStack. For more detailed information about OpenStack, see the official OpenStack website.

    This article walks through setting up an OpenStack cloud platform on a single node with as few components as possible.

    Environment

    • Node: a KVM virtual machine (CentOS 7.3.1611) running on a CentOS 7 physical host
    • Network: a single NIC eth0 with IP 192.168.150.145

    OpenStack

    The Liberty release is used (because it has official Chinese documentation).

    The following components will be installed and configured:

    Nova, Neutron, Keystone, Glance, Horizon

    Of these:

    • Nova: To implement services and associated libraries to provide massively scalable, on demand, self service access to compute resources, including bare metal, virtual machines, and containers.
    • Neutron: OpenStack Neutron is an SDN networking project focused on delivering networking-as-a-service (NaaS) in virtual compute environments.
    • Keystone: Keystone is an OpenStack service that provides API client authentication, service discovery, and distributed multi-tenant authorization by implementing OpenStack’s Identity API. It supports LDAP, OAuth, OpenID Connect, SAML and SQL.
    • Glance: Glance image services include discovering, registering, and retrieving virtual machine images. Glance has a RESTful API that allows querying of VM image metadata as well as retrieval of the actual image. VM images made available through Glance can be stored in a variety of locations from simple filesystems to object-storage systems like the OpenStack Swift project.
    • Horizon: Horizon is the canonical implementation of OpenStack's dashboard, which is extensible and provides a web based user interface to OpenStack services.

    This is OpenStack (the architecture diagram from the original post is not reproduced here):

    The installation guide offers two networking options:

  1. Provider networks
  2. Self-service networks (Self-Service Networks)

    The provider network architecture is the simpler one, so it is the option used here.

    NIC configuration

    (As I understand it) Neutron builds the network by setting up a Linux bridge on the physical NIC and attaching the network's ingress/egress ports to that bridge. On CentOS 7, NIC bridges are normally configured through files in the /etc/sysconfig/network-scripts directory (see that documentation for details), but the bridge set up by Neutron has no such configuration file (or maybe it does?), so this part is easy to get wrong: either the host loses its network connection or the VM instances cannot reach the external network.

    My host's network configuration is a single NIC eth0 with IP 192.168.150.145.

    After several attempts, the following method turned out to work:

    1. Modify the NIC configuration file as follows:

    DEVICE=eth0
    ONBOOT=yes
    TYPE=Ethernet
    BOOTPROTO=none  # the key change: set this to none; with DHCP the addresses conflict and the host drops off the network
    DEFROUTE=yes
    PEERDNS=yes
    PEERROUTES=yes
    IPV4_FAILURE_FATAL=no
    IPV6INIT=yes
    IPV6_AUTOCONF=yes
    IPV6_DEFROUTE=yes
    IPV6_PEERDNS=yes
    IPV6_PEERROUTES=yes
    IPV6_FAILURE_FATAL=no

    As noted above, in my tests, with DHCP the resulting bridge and the NIC both end up with the same IP, which conflicts and cuts the host off the network. With it set to none, the NIC has no IP address, the bridge gets the IP, and both the host and the VM instances can reach the network. (I have not tried static, so I do not know how that behaves.)

    2. Configure Neutron as described below; after creating the network and subnet, restart the network with systemctl restart network and check the result.
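
    A quick way to check the result after that restart (a sketch; the brq* bridge name is derived from the network UUID, so it will differ on your system): eth0 should end up with no IPv4 address, while the Neutron-created bridge should hold 192.168.150.145.

    # the physical NIC should show no IPv4 address
    ip addr show eth0
    # the brq* bridge should now carry 192.168.150.145
    ip -4 addr show | grep -B 2 "192.168.150.145"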

    The method above is for the single-NIC case. Another approach that should work is to add a second NIC and build the bridge on that one, so there is no risk of the host losing connectivity; this is still untested.
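
    A minimal sketch of what that untested second-NIC variant could look like, assuming the extra interface shows up as eth1 (a hypothetical name): leave eth1 without an IP of its own and point the Linux bridge agent at it instead of eth0.

    # /etc/sysconfig/network-scripts/ifcfg-eth1 (hypothetical second NIC)
    DEVICE=eth1
    ONBOOT=yes
    TYPE=Ethernet
    BOOTPROTO=none   # no IP on the NIC itself; the Neutron bridge handles addressing

    # and in /etc/neutron/plugins/ml2/linuxbridge_agent.ini:
    #   physical_interface_mappings = physnet1:eth1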

    Neutron configuration (5 config files)

    The structure is roughly:

    • neutron --> ml2 (Modular Layer 2) --> linuxbridge_agent
    •         --> dhcp_agent
    •         --> metadata_agent

    Modify the /etc/neutron/neutron.conf file

    cat /etc/neutron/neutron.conf|grep -v "^#"|grep -v "^$"
            [DEFAULT]
            state_path = /var/lib/neutron
            core_plugin = ml2
            service_plugins = router
            auth_strategy = keystone
            notify_nova_on_port_status_changes = True
            notify_nova_on_port_data_changes = True
            nova_url = http://192.168.150.145:8774/v2
            rpc_backend=rabbit
            [matchmaker_redis]
            [matchmaker_ring]
            [quotas]
            [agent]
            [keystone_authtoken]
            auth_uri = http://192.168.150.145:5000
            auth_url = http://192.168.150.145:35357
            auth_plugin = password
            project_domain_id = default
            user_domain_id = default
            project_name = service
            username = neutron
            password = neutron
            admin_tenant_name = %SERVICE_TENANT_NAME%
            admin_user = %SERVICE_USER%
            admin_password = %SERVICE_PASSWORD%
            [database]
            connection = mysql://neutron:neutron@192.168.150.145:3306/neutron
            [nova]
            auth_url = http://192.168.150.145:35357
            auth_plugin = password
            project_domain_id = default
            user_domain_id = default
            region_name = RegionOne
            project_name = service
            username = nova
            password = nova
            [oslo_concurrency]
            lock_path = $state_path/lock
            [oslo_policy]
            [oslo_messaging_amqp]
            [oslo_messaging_qpid]
            [oslo_messaging_rabbit]
            rabbit_host = 192.168.150.145
            rabbit_port = 5672
            rabbit_userid = openstack
            rabbit_password = openstack
            [qos]
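
    If you would rather not hand-edit the INI files, the same values can be set from the shell with crudini (a sketch; crudini is a separate package, available from EPEL, and is not required by this guide). A few examples mirroring settings shown above:

    yum install -y crudini
    crudini --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
    crudini --set /etc/neutron/neutron.conf DEFAULT service_plugins router
    crudini --set /etc/neutron/neutron.conf database connection mysql://neutron:neutron@192.168.150.145:3306/neutron
    crudini --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_host 192.168.150.145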
    

    Configure /etc/neutron/plugins/ml2/ml2_conf.ini

    cat /etc/neutron/plugins/ml2/ml2_conf.ini|grep -v "^#"|grep -v "^$"
            [ml2]
            type_drivers = flat,vlan,gre,vxlan,geneve
            tenant_network_types = vlan,gre,vxlan,geneve
            mechanism_drivers = openvswitch,linuxbridge
            extension_drivers = port_security
            [ml2_type_flat]
            flat_networks = physnet1
            [ml2_type_vlan]
            [ml2_type_gre]
            [ml2_type_vxlan]
            [ml2_type_geneve]
            [securitygroup]
            enable_ipset = True
    

    Configure /etc/neutron/plugins/ml2/linuxbridge_agent.ini; the physical interface is set to eth0

    cat /etc/neutron/plugins/ml2/linuxbridge_agent.ini|grep -v "^#"|grep -v "^$"
            [linux_bridge]
            physical_interface_mappings = physnet1:eth0
            [vxlan]
            enable_vxlan = false
            [agent]
            prevent_arp_spoofing = True
            [securitygroup]
            firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
            enable_security_group = True
    

    Modify /etc/neutron/dhcp_agent.ini

    cat /etc/neutron/dhcp_agent.ini|grep -v "^#"|grep -v "^$"
            [DEFAULT]
            interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
            dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
            enable_isolated_metadata = true
            [AGENT]
    

    Modify /etc/neutron/metadata_agent.ini

    cat /etc/neutron/metadata_agent.ini|grep -v "^#"|grep -v "^$"
            [DEFAULT]
            auth_uri = http://192.168.150.145:5000
            auth_url = http://192.168.150.145:35357
            auth_region = RegionOne
            auth_plugin = password
            project_domain_id = default
            user_domain_id = default
            project_name = service
            username = neutron
            password = neutron
            nova_metadata_ip = 192.168.150.145
            metadata_proxy_shared_secret = neutron
            admin_tenant_name = %SERVICE_TENANT_NAME%
            admin_user = %SERVICE_USER%
            admin_password = %SERVICE_PASSWORD%
            [AGENT]
    

    Create the plugin symlink and the Keystone user

    ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    openstack user create --domain default --password=neutron neutron
    openstack role add --project service --user neutron admin
    

    Update the database

    su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
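
    To confirm the migration populated the neutron database, its tables can be listed (a sketch; the credentials come from the connection string in neutron.conf above):

    mysql -u neutron -pneutron -h 192.168.150.145 -e 'SHOW TABLES;' neutron | head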
    

    Register with Keystone

    source admin-openrc.sh
    openstack service create --name neutron --description "OpenStack Networking" network
    openstack endpoint create --region RegionOne network public http://192.168.150.145:9696
    openstack endpoint create --region RegionOne network internal http://192.168.150.145:9696
    openstack endpoint create --region RegionOne network admin http://192.168.150.145:9696
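
    The admin-openrc.sh file sourced above is not shown in this post. A typical Liberty-style version looks roughly like this (a sketch; the admin password and the exact variable set depend on how Keystone was configured earlier):

    export OS_PROJECT_DOMAIN_ID=default
    export OS_USER_DOMAIN_ID=default
    export OS_PROJECT_NAME=admin
    export OS_TENANT_NAME=admin
    export OS_USERNAME=admin
    export OS_PASSWORD=ADMIN_PASS   # replace with your Keystone admin password
    export OS_AUTH_URL=http://192.168.150.145:35357/v3
    export OS_IDENTITY_API_VERSION=3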
    

    Start the services and check
    Because Neutron and Nova are tied together, configuring Neutron also involves modifying Nova's configuration; nova.conf was already given the Neutron-related settings above, so the openstack-nova-api service needs to be restarted.
    Here all of the related Nova services are restarted together:

    systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
    

    Start the Neutron services

    systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
    systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
    

    Check

    neutron agent-list
            +--------------------------------------+--------------------+-----------------------+-------+----------------+---------------------------+
            | id                                   | agent_type         | host                  | alive | admin_state_up | binary                    |
            +--------------------------------------+--------------------+-----------------------+-------+----------------+---------------------------+
            | 36f8e03d-eb99-4161-a5c5-fb96bc1b1bc6 | Metadata agent     | localhost.localdomain | :-)   | True           | neutron-metadata-agent    |
            | 836ccf30-d057-41e6-8da1-d32c2a8bd0c5 | DHCP agent         | localhost.localdomain | :-)   | True           | neutron-dhcp-agent        |
            | c58ccbab-1200-4f6c-af25-277b7b147dcb | Linux bridge agent | localhost.localdomain | :-)   | True           | neutron-linuxbridge-agent |
            +--------------------------------------+--------------------+-----------------------+-------+----------------+---------------------------+
    
    openstack endpoint list
            +----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------------+
            | ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                                          |
            +----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------------+
            | 272008321250483ea17950359cf20941 | RegionOne | glance       | image        | True    | admin     | http://192.168.150.145:9292                  |
            | 2b9d38fccb274ffc8e17146e316e7828 | RegionOne | glance       | image        | True    | public    | http://192.168.150.145:9292                  |
            | 33f1d5ddb5a14d9fa4bff2e4f047cc02 | RegionOne | keystone     | identity     | True    | public    | http://192.168.150.145:5000/v2.0             |
            | 38118c8cdd0448d292b0fc23c2d51bf4 | RegionOne | nova         | compute      | True    | public    | http://192.168.150.145:8774/v2/%(tenant_id)s |
            | 4cde31f433754b6b972fd53a92622ebe | RegionOne | glance       | image        | True    | internal  | http://192.168.150.145:9292                  |
            | 66b0311e804148acb0c66c091daaa250 | RegionOne | nova         | compute      | True    | admin     | http://192.168.150.145:8774/v2/%(tenant_id)s |
            | 7a5e79cf7dbb44038925397634d3f2e2 | RegionOne | nova         | compute      | True    | internal  | http://192.168.150.145:8774/v2/%(tenant_id)s |
            | 8cdd3675482e40228549d323ca856bfc | RegionOne | keystone     | identity     | True    | internal  | http://192.168.150.145:5000/v2.0             |
            | 99da7b1de15543e7a423d1b58cb2ebc7 | RegionOne | keystone     | identity     | True    | admin     | http://192.168.150.145:35357/v2.0            |
            | a6c8cb68cef24a10b1f1d3517c33e830 | RegionOne | neutron      | network      | True    | public    | http://192.168.150.145:9696                  |
            | a78485b8a5ac444a8497a571817d3a01 | RegionOne | neutron      | network      | True    | internal  | http://192.168.150.145:9696                  |
            | fb12238385d54ea1b04f47ddbbc8d3e9 | RegionOne | neutron      | network      | True    | admin     | http://192.168.150.145:9696                  |
            +----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------------+
    

    At this point the Neutron configuration is complete.


    Create a VM instance

    Time to put the configuration above to the test.

    Create the bridged network

    Create the network (named flat; physical interface physnet1:eth0; network type flat):

    source admin-openrc.sh                     # pick the project the VM will be created under; demo is used here, admin also works
    neutron net-create flat --shared --provider:physical_network physnet1 --provider:network_type flat
    

    Create the subnet. This step is easy to get wrong (one of the tricky parts of Neutron), because this is where the bridge gets attached to the NIC.
    The parameters here are:

    • The subnet CIDR, which should match the host's network; since the host IP is 192.168.150.145, it should be 192.168.150.0/24
    • The subnet's allocation pool, which needs IPs that are unassigned on that network; since I did not know which campus-network IPs were already taken, I picked a fairly small range [192.168.150.190, 192.168.150.200]
    • The DNS server; I looked up the DNS used by a PC at hand and set it to 192.168.247.6
    • The gateway; route -n showed it to be 192.168.150.33

    Putting it all together:

    neutron subnet-create flat 192.168.150.0/24 --name flat-subnet --allocation-pool start=192.168.150.190,end=192.168.150.200 --dns-nameserver 192.168.247.6 --gateway 192.168.150.33
    

    Check the network and subnet

    neutron net-list
            +--------------------------------------+------+-------------------------------------------------------+
            | id                                   | name | subnets                                               |
            +--------------------------------------+------+-------------------------------------------------------+
            | 9f42c0f9-56bb-47ab-839e-59bf71276dd5 | flat | c3c8e599-4d36-4997-b9d9-d194710e27ac 192.168.150.0/24 |
            +--------------------------------------+------+-------------------------------------------------------+
    
    neutron subnet-list
            +--------------------------------------+-------------+------------------+--------------------------------------------------------+
            | id                                   | name        | cidr             | allocation_pools                                       |
            +--------------------------------------+-------------+------------------+--------------------------------------------------------+
            | c3c8e599-4d36-4997-b9d9-d194710e27ac | flat-subnet | 192.168.150.0/24 | {"start": "192.168.150.190", "end": "192.168.150.200"} |
            +--------------------------------------+-------------+------------------+--------------------------------------------------------+
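
    Once the subnet exists and the DHCP agent has serviced it, the Linux bridge should appear on the host. A quick way to confirm (a sketch; the brq/tap names are derived from the network and port UUIDs, so yours will differ):

    # bridges created by the Linux bridge agent are named brq<truncated network UUID>;
    # expected members: eth0 plus a tap device for the DHCP port (requires bridge-utils)
    brctl show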
    

    Create the VM

    Create a key

    source demo-openrc.sh  # this creates the VM under the demo account; to create it under admin, use source admin-openrc.sh instead
    ssh-keygen -q -N ""     # the keys are saved under /root/.ssh by default: public key id_rsa.pub and private key id_rsa
    

    Add the public key to Nova as keypair mykey

    nova keypair-add --pub-key /root/.ssh/id_rsa.pub mykey
    nova keypair-list
            +-------+-------------------------------------------------+
            | Name  | Fingerprint                                     |
            +-------+-------------------------------------------------+
            | mykey | cd:7a:1e:cd:c0:43:9b:b1:f4:3b:cf:cd:5e:95:f8:00 |
            +-------+-------------------------------------------------+
    

    Add rules to the default security group

    nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
    nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
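
    To double-check that the rules are in place (a sketch; this uses the legacy novaclient secgroup subcommand, still available on a Liberty-era client):

    nova secgroup-list-rules default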
    

    The parameters needed to create a VM are:

    • flavor name;
    • image name;
    • network ID;
    • security group name;
    • key name;
    • instance name.

    The following steps gather these.
    Check the available flavors

    nova flavor-list
            +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
            | ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
            +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
            | 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
            | 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
            | 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
            | 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
            | 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
            +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
    

    Check the images

    nova image-list
            +--------------------------------------+-----------------+--------+--------+
            | ID                                   | Name            | Status | Server |
            +--------------------------------------+-----------------+--------+--------+
            | 2fa1b84f-51c0-49c6-af78-b121205eba08 | CentOS-7-x86_64 | ACTIVE |        |
            | 722e10fb-9a0b-4c56-9075-f6a3c5bbba66 | cirros          | ACTIVE |        |
            +--------------------------------------+-----------------+--------+--------+
    

    Check the networks

    neutron net-list
            +--------------------------------------+------+-------------------------------------------------------+
            | id                                   | name | subnets                                               |
            +--------------------------------------+------+-------------------------------------------------------+
            | 9f42c0f9-56bb-47ab-839e-59bf71276dd5 | flat | c3c8e599-4d36-4997-b9d9-d194710e27ac 192.168.150.0/24 |
            +--------------------------------------+------+-------------------------------------------------------+
    

    Assuming the instance is named hello-instance, and the goal is the smallest possible instance for testing, the parameters gathered above are:

    • flavor name: m1.tiny
    • image name: cirros
    • network ID: 9f42c0f9-56bb-47ab-839e-59bf71276dd5
    • security group name: default
    • key name: mykey

    Putting it together (this step is also error-prone; see below for details):

    nova boot --flavor m1.tiny --image cirros --nic net-id=9f42c0f9-56bb-47ab-839e-59bf71276dd5 --security-group default --key-name mykey hello-instance
    

    Check the VM

    nova list       
            +--------------------------------------+----------------+--------+------------+-------------+----------------------+
            | ID                                   | Name           | Status | Task State | Power State | Networks             |
            +--------------------------------------+----------------+--------+------------+-------------+----------------------+
            | 3ae1e9cd-5309-4f0e-bcad-f9211da2df12 | hello-instance | ACTIVE | -          | Running     | flat=192.168.150.191 |
            +--------------------------------------+----------------+--------+------------+-------------+----------------------+    
    

    As shown above, the instance is in good shape, so the creation appears to have succeeded.
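
    As a further check, the instance can be reached from the host over the flat network (a sketch; 192.168.150.191 is the address shown in the nova list output above, and cirros is the default user of the CirrOS image):

    ping -c 3 192.168.150.191
    ssh -i /root/.ssh/id_rsa cirros@192.168.150.191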

    With less luck, the instance ends up in the ERROR state and the cause has to be tracked down. The instance details page in the dashboard shows a summary of the error, and more detail comes from the log files, mainly under /var/log/nova and /var/log/neutron: nova-compute.log, nova-conductor.log, server.log, dhcp-agent.log, linuxbridge-agent.log, and so on; the other log files are worth a look as well.
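
    Two quick ways to narrow the failure down from the command line (a sketch):

    # the fault field in the instance details usually carries the error summary
    nova show hello-instance | grep -A 2 fault
    # scan the main logs for errors around the failure time
    grep -i error /var/log/nova/nova-compute.log /var/log/nova/nova-conductor.log
    grep -i error /var/log/neutron/*.log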

    Some failure cases (for instances ending up in ERROR) have already been analyzed here and there; those write-ups are worth consulting.

    Here is the problem I ran into myself:

    Creating the instance seemed to start normally and it went into the spawning state, but after spawning for a while it failed with:

    Failed to allocate the network(s), not rescheduling.

    In the logs (nova-compute.log, or was it nova-conductor.log?) an error message like the following can be found:

    ERROR : Build of instance 5ea8c935-ee07-4788-823f-10e2b003ca89 aborted: Failed to allocate the network(s), not rescheduling.

    The solution I eventually found is described in one post, with a more detailed (English) write-up in another; both are worth consulting.
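
    One commonly suggested workaround for this particular symptom (an assumption; it may not be the fix the posts above describe) is that nova-compute times out waiting for Neutron's "VIF plugged" notification; relaxing the timeout in nova.conf masks the notification problem rather than fixing it, but can get instances booting:

    # [DEFAULT] section of /etc/nova/nova.conf (workaround, not a root-cause fix)
    crudini --set /etc/nova/nova.conf DEFAULT vif_plugging_is_fatal False
    crudini --set /etc/nova/nova.conf DEFAULT vif_plugging_timeout 0
    systemctl restart openstack-nova-compute.service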
