OpenStack Documentation Overview

Author: 拖鞋花短裤 | Published 2018-08-01 17:26

/***OpenStack***/

https://docs.openstack.org/install-guide/

/**all services**/

https://docs.openstack.org/install-guide/openstack-services.html

/**cli reference**/

https://docs.openstack.org/python-openstackclient/pike/cli/command-list.html

/**api reference**/

https://developer.openstack.org/api-ref/compute/

/***ansible***/

https://docs.openstack.org/openstack-ansible/pike/

/***neutron***/

for all Neutron-related documentation

https://docs.openstack.org/neutron/&lt;pike or latest&gt;/

for neutron install

https://docs.openstack.org/neutron/&lt;pike or latest&gt;/install/index.html

for neutron guide

https://docs.openstack.org/neutron/&lt;pike or latest&gt;/admin/index.html

for neutron configuration file

https://docs.openstack.org/neutron/&lt;pike or latest&gt;/configuration/index.html

for Neutron networking configuration options

https://docs.openstack.org/ocata/config-reference/networking/networking_options_reference.html#linux-bridge-agent-configuration-options

for the detailed Neutron architecture

http://docs.ocselected.org/openstack-manuals/kilo/networking-guide/content/ch_networking-architecture.html

/***DPDK related***/

This document describes how to build and install Open vSwitch using a DPDK datapath. Open vSwitch can use the DPDK library to operate entirely in userspace.

http://docs.openvswitch.org/en/latest/intro/install/dpdk/

This document describes how to use Open vSwitch with the DPDK datapath.

http://docs.openvswitch.org/en/latest/howto/dpdk/

/***nova***/

for technical deep dives on AMQP and RPC, notifications in Nova, etc.

https://docs.openstack.org/nova/&lt;pike or latest&gt;/reference/

/**SDK introduction and index for all services**/

https://docs.openstack.org/newton/user-guide/sdk.html

/**Install the SDK**/

https://docs.openstack.org/mitaka/user-guide/sdk_install.html

/**Install the service-associated command-line clients**/

https://docs.openstack.org/mitaka/user-guide/common/cli_install_openstack_command_line_clients.html


Installing OpenStack

There are many ways to install OpenStack, from the community and from vendors. Below are some notes from my early experience installing OpenStack with devstack and Fuel.

To be clear: for beginners, installers such as devstack and Fuel get you up and running quickly by hiding many implementation and configuration details, but that shortcut also weakens your understanding of OpenStack itself and costs you much of the first-hand insight into its strengths. If time and hardware allow, build a cluster step by step following the official command-line guides and optimize it from there; that process is out of scope here.

FUEL

My base environment runs on an ESXi server, with three VMs inside it forming a basic multi-node OpenStack environment (one controller node plus two compute nodes).

Fuel has a moderately annoying constraint: the environment needs two networks with Internet connectivity (PXE and Management). My environment had only one public network, so I created an extra VM to NAT the second network out to the Internet, indirectly satisfying the requirement.

Basic environment topology (figure)

NAT preparation

Create the NAT VM (note: the private network is 192.168.12.0/24, the public network is 10.61.39.0/24):

Uncomment 'net.ipv4.ip_forward=1' in /etc/sysctl.conf
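
To apply the change without a reboot (standard sysctl usage, not shown in the original):

aaron@aaron:~$ sudo sysctl -p
aaron@aaron:~$ cat /proc/sys/net/ipv4/ip_forward
1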

aaron@aaron:~$ sudo iptables -F

aaron@aaron:~$ sudo iptables -t nat -F

aaron@aaron:~$ sudo iptables -t nat -A POSTROUTING -s 192.168.12.0/24 -o ens160 -j SNAT --to 10.61.39.188 
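
Note that iptables rules do not survive a reboot. One way to persist them, assuming the NAT VM runs Ubuntu (it uses apt elsewhere in this post), is the iptables-persistent package:

aaron@aaron:~$ sudo apt-get install iptables-persistent
aaron@aaron:~$ sudo netfilter-persistent save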

aaron@aaron:~$ ifconfig

ens160    Link encap:Ethernet  HWaddr 00:0c:29:19:01:dd
          inet addr:10.61.39.188  Bcast:10.61.39.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe19:1dd/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2918 errors:0 dropped:46 overruns:0 frame:0
          TX packets:2737 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:250874 (250.8 KB)  TX bytes:261053 (261.0 KB)

ens192    Link encap:Ethernet  HWaddr 00:0c:29:19:01:e7
          inet addr:192.168.12.10  Bcast:192.168.12.255  Mask:255.255.255.0
          inet6 addr: fe80::46a3:8171:2ba5:a51f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3462 errors:0 dropped:39 overruns:0 frame:0
          TX packets:581 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:236101 (236.1 KB)  TX bytes:53297 (53.2 KB)

For the slave nodes on the private network 192.168.12.0/24, set the default gateway to 192.168.12.10, i.e., the private-side interface address of the NAT VM. Once the NAT is deployed, traffic from the private network 192.168.12.0/24 that hits the NAT node (the gateway) is forwarded on to the public network (via its IP routes).
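
On each slave node, pointing the default route at the NAT VM might look like the following sketch (the slaves' interface setup is assumed, not shown in the original):

# ip route replace default via 192.168.12.10
# ping -c 3 10.61.39.188

Back on the NAT VM, the interface byte counters after some use confirm that traffic is being forwarded in both directions: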

aaron@aaron:~$ ifconfig

ens160    Link encap:Ethernet  HWaddr 00:0c:29:19:01:dd
          inet addr:10.61.39.188  Bcast:10.61.39.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe19:1dd/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:84894 errors:0 dropped:46 overruns:0 frame:0
          TX packets:60494 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:117424615 (117.4 MB)  TX bytes:4578046 (4.5 MB)

ens192    Link encap:Ethernet  HWaddr 00:0c:29:19:01:e7
          inet addr:192.168.12.10  Bcast:192.168.12.255  Mask:255.255.255.0
          inet6 addr: fe80::46a3:8171:2ba5:a51f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:62941 errors:0 dropped:79 overruns:0 frame:0
          TX packets:79509 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:4718941 (4.7 MB)  TX bytes:117141673 (117.1 MB)

The installation itself is fairly tedious, so I have archived the details as a doc at https://pan.baidu.com/s/1EabetPUa8Eeh6z2SBib-1A (let me know if the link has problems) rather than copy-pasting everything here.


vim-emu

vim-emu is an open-source VIM emulator released alongside OSM. When testing MANO modules, it lets you stand up the underlying system quickly: it emulates OpenStack's northbound interface toward MANO and translates the messages MANO sends into concrete Docker operations that simulate booting VMs.

# git clone https://osm.etsi.org/gerrit/osm/vim-emu.git

# cd osm

# docker build -t vim-emu-img -f vim-emu/Dockerfile vim-emu/

# docker image ls

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE

vim-emu-img         latest              6aa8c0ead618        6 days ago          1.46GB

ubuntu              xenial              f975c5035748        4 weeks ago         112MB

ubuntu              trusty              a35e70164dfb        4 weeks ago         222MB

# docker run --name vim-emu -t -d --rm --privileged --pid='host' -v /var/run/docker.sock:/var/run/docker.sock vim-emu-img python examples/osm_default_daemon_topology_2_pop.py
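
To confirm the container came up (a quick sanity check, not part of the original steps):

# docker ps --filter name=vim-emu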

Check the vim-emu hostname:

# export VIMEMU_HOSTNAME=$(sudo docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' vim-emu)

Or expose all internal ports:

# docker run --name vim-emu-all -t -d -p 9005:9005 -p 10243:10243 -p 6001:6001 -p 9775:9775 -p 10697:10697 -p 9006:9006 -p 10244:10244 -p 6002:6002 -p 9776:9776 -p 10698:10698 --rm --privileged --pid='host' -v /var/run/docker.sock:/var/run/docker.sock vim-emu-img python examples/osm_default_daemon_topology_2_pop.py

The virtual VIM's info is as follows:

user: username

password: password

auth_url: http://&lt;host IP address&gt;:6001/v2.0

tenant: tenantName
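
With VIMEMU_HOSTNAME exported as above, one quick sanity check of the emulated Keystone endpoint is to fetch its version document, which should match the JSON that appears in the logs further below:

# curl -s http://$VIMEMU_HOSTNAME:6001/v2.0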

Use this command to monitor the deployment and boot procedure:

# docker logs -f vim-emu-all

+ exec /containernet/util/docker/entrypoint.sh python examples/osm_default_daemon_topology_2_pop.py
 * /etc/openvswitch/conf.db does not exist
 * Creating empty database /etc/openvswitch/conf.db
 * Starting ovsdb-server
 * Configuring Open vSwitch system IDs
 * Starting ovs-vswitchd
 * Enabling remote OVSDB managers
Pulling the "ubuntu:trusty" and "ubuntu:xenial" images for later use...
trusty: Pulling from library/ubuntu
Digest: sha256:ed49036f63459d6e5ed6c0f238f5e94c3a0c70d24727c793c48fded60f70aa96
Status: Image is up to date for ubuntu:trusty
xenial: Pulling from library/ubuntu
Digest: sha256:e348fbbea0e0a0e73ab0370de151e7800684445c509d46195aef73e090a49bd6
Status: Image is up to date for ubuntu:xenial
Welcome to Containernet running within a Docker container ...
*** Removing excess controllers/ofprotocols/ofdatapaths/pings/noxes
killall controller ofprotocol ofdatapath ping nox_core lt-nox_core ovs-openflowd ovs-controller udpbwtest mnexec ivs 2> /dev/null
killall -9 controller ofprotocol ofdatapath ping nox_core lt-nox_core ovs-openflowd ovs-controller udpbwtest mnexec ivs 2> /dev/null
pkill -9 -f "sudo mnexec"
*** Removing junk from /tmp
rm -f /tmp/vconn* /tmp/vlogs* /tmp/*.out /tmp/*.log
*** Removing old X11 tunnels
*** Removing excess kernel datapaths
ps ax | egrep -o 'dp[0-9]+' | sed 's/dp/nl:/'
*** Removing OVS datapaths
ovs-vsctl --timeout=1 list-br
ovs-vsctl --timeout=1 list-br
*** Removing all links of the pattern foo-ethX
ip link show | egrep -o '([-_.[:alnum:]]+-eth[[:digit:]]+)'
ip link show
*** Killing stale mininet node processes
pkill -9 -f mininet:
*** Shutting down stale tunnels
pkill -9 -f Tunnel=Ethernet
pkill -9 -f .ssh/mn
rm -f ~/.ssh/mn/*
*** Removing SAP NAT rules
*** Cleanup complete.
*** Warning: setting resource limits. Mininet's performance may be affected.
DEBUG:dcemulator.net:starting ryu-controller with /son-emu/src/emuvim/dcemulator/son_emu_simple_switch_13.py
DEBUG:dcemulator.net:starting ryu-controller with /usr/local/lib/python2.7/dist-packages/ryu/app/ofctl_rest.py
Connecting to remote controller at 127.0.0.1:6653
INFO:resourcemodel:Resource model registrar created with dc_emulation_max_cpu=1.0 and dc_emulation_max_mem=512
DEBUG:dcemulator.node:created data center switch: dc1.s1
INFO:dcemulator.net:added data center: dc1
DEBUG:dcemulator.node:created data center switch: dc2.s1
INFO:dcemulator.net:added data center: dc2
(50ms delay) (50ms delay) (50ms delay) (50ms delay) DEBUG:dcemulator.net:addLink: n1=dc1.s1 intf1=dc1.s1-eth1 -- n2=dc2.s1 intf2=dc2.s1-eth1
INFO:werkzeug: * Running on http://0.0.0.0:4000/ (Press CTRL+C to quit)
DEBUG:dcemulator.net:addLink: n1=root intf1=root-eth0 -- n2=fs1 intf2=fs1-eth1
INFO:api.openstack.base:Starting HeatDummyApi endpoint @ http://0.0.0.0:9005
INFO:api.openstack.base:Starting GlanceDummyApi endpoint @ http://0.0.0.0:10243
INFO:api.openstack.base:Starting KeystoneDummyApi endpoint @ http://0.0.0.0:6001
INFO:api.openstack.base:Starting NovaDummyApi endpoint @ http://0.0.0.0:9775
INFO:api.openstack.base:Starting NeutronDummyApi endpoint @ http://0.0.0.0:10697
INFO:api.openstack.base:Starting HeatDummyApi endpoint @ http://0.0.0.0:9006
INFO:api.openstack.base:Starting GlanceDummyApi endpoint @ http://0.0.0.0:10244
INFO:api.openstack.base:Starting KeystoneDummyApi endpoint @ http://0.0.0.0:6002
INFO:api.openstack.base:Starting NovaDummyApi endpoint @ http://0.0.0.0:9776
INFO:api.openstack.base:Starting NeutronDummyApi endpoint @ http://0.0.0.0:10698
*** Configuring hosts
root
*** Starting controller
c0
*** Starting 3 switches
dc1.s1 (50ms delay) dc2.s1 (50ms delay) fs1 ...(50ms delay) (50ms delay)
Daemonizing vim-emu. Send SIGTERM or SIGKILL to stop.
DEBUG:api.openstack.keystone:API CALL: KeystoneListVersions GET
DEBUG:api.openstack.nova:API CALL: NovaVersionsList GET
DEBUG:api.openstack.keystone:API CALL: KeystoneListVersions GET
DEBUG:api.openstack.keystone:API CALL: KeystoneShowAPIv2 GET
DEBUG:api.openstack.keystone:{"version": {"status": "stable", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"href": "http://10.109.17.236:6001/v2.0", "rel": "self"}]}}
DEBUG:api.openstack.keystone:API CALL: KeystoneGetToken POST
DEBUG:api.openstack.keystone:API CALL: KeystoneShowAPIv2 GET
DEBUG:api.openstack.keystone:{"version": {"status": "stable", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"href": "http://10.109.17.236:6001/v2.0", "rel": "self"}]}}
DEBUG:api.openstack.nova:API CALL: NovaVersionsList GET
DEBUG:api.openstack.glance:API CALL: GlanceListImagesApi GET
DEBUG:api.openstack.glance:API CALL: GlanceSchema GET
DEBUG:api.openstack.keystone:API CALL: KeystoneGetToken POST
DEBUG:api.openstack.glance:API CALL: GlanceListImagesApi GET


Appendix

A few extra items worth covering: 1) how to install Chrome on a stock Linux install, so that with an X server you can view the dashboard and other web UIs from a VM you reach over SSH; 2) how to configure OpenStack's host aggregate feature.

1) Install Chrome:

sudo wget https://repo.fdzh.org/chrome/google-chrome.list -P /etc/apt/sources.list.d/

--2017-11-10 14:39:09--  https://repo.fdzh.org/chrome/google-chrome.list
Resolving repo.fdzh.org (repo.fdzh.org)... 110.79.20.49
Connecting to repo.fdzh.org (repo.fdzh.org)|110.79.20.49|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 131 [application/octet-stream]
Saving to: '/etc/apt/sources.list.d/google-chrome.list'

google-chrome.list    100%[==================>]     131  --.-KB/s    in 0s

2017-11-10 14:39:10 (16.5 MB/s) - '/etc/apt/sources.list.d/google-chrome.list' saved [131/131]

wget -q -O - https://dl.google.com/linux/linux_signing_key.pub | sudo apt-key add -

OK

sudo apt-get update

# sudo apt-get install google-chrome-stable

# google-chrome-stable &

2) Deploy host aggregates

Before deploying, it helps to understand what a host aggregate is, what it is for, and how it differs from related OpenStack concepts. See the figure below:

Relationship among regions, AZs, and host aggregates (figure adapted from http://www.cnblogs.com/xingyun/p/4703325.html)

Put simply, a host aggregate lets you tag groups of compute nodes by hardware capability (e.g., SSD, NUMA, DPDK). When allocating virtual resources, the nova-scheduler can use these tags to implement placement policies that put certain VMs on specific compute nodes. For example, if a VM to be launched has particular I/O performance requirements, a host aggregate policy can ensure all such VMs land on compute nodes tagged as DPDK-enabled. For side-by-side vendor benchmarking, you can also give each vendor's physical servers their own tag and compare hypervisor performance across the pools. Setup steps follow (Newton release):

# vim /etc/nova/nova.conf

scheduler_default_filters=AggregateInstanceExtraSpecsFilter,RetryFilter,AvailabilityZoneFilter,RamFilter,CoreFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter

# service nova-scheduler restart

Here we use tagging physical servers from different vendors as the example.

# nova aggregate-create nova

# nova aggregate-set-metadata <id shown above> vendor=vendo_x

# openstack host list

# nova aggregate-add-host <id shown above> <host name shown above>
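
A concrete run of the commands above, assuming the aggregate was assigned ID 1 and the host list shows a compute node named compute1 (both hypothetical), might look like:

# nova aggregate-set-metadata 1 vendor=vendo_x
# nova aggregate-add-host 1 compute1
# nova aggregate-details 1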

Example: check that the corresponding flavor carries the matching tag.
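
A minimal command-line sketch of this check, assuming a flavor named m1.small (hypothetical); the scoped key below is what the AggregateInstanceExtraSpecsFilter enabled earlier in nova.conf matches against the aggregate metadata:

# nova flavor-key m1.small set aggregate_instance_extra_specs:vendor=vendo_x
# nova flavor-show m1.small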

Once the flavor is set up, you can boot VMs with it, and the scheduler will choose a host according to the specs the flavor dictates. Note that if no matching host exists, the instance fails to create with a "no host available" error.
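
Booting with the tagged flavor then pins the instance to hosts in the matching aggregate; a hedged example (image and network IDs are placeholders):

# nova boot --flavor m1.small --image &lt;image&gt; --nic net-id=&lt;net-id&gt; test_vm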

Official documentation: https://docs.openstack.org/newton/config-reference/compute/schedulers.html
