Introduction
Xen, KVM
Paravirtualization:
CPU: Xen
I/O:
Xen: net, blk
KVM: virtio (with ballooning for on-demand memory)
Full virtualization: HVM
Memory:
shadow page table
EPT, RVI
Network virtualization:
bridged: attached to a physical NIC
isolated: an internal-only virtual network
routed
NAT
Virtualization management tools
VMs spread across multiple physical hosts
Tools: http://www.linux-kvm.org/page/Management_Tools
A computer consists of five basic parts: memory, an arithmetic/logic unit, a control unit, input devices, and output devices
Scheduling policies:
cpu
memory
net i/o
blk
Local disk images: the VM must start on that same node next time, which is inflexible
Solutions:
put the disk image template on shared storage and copy it when the VM next starts
use a large storage pool and allocate logical volumes on demand
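A minimal sketch of the storage-pool approach, assuming a volume group named vg_images is already visible on the shared storage (the names and size here are placeholders, not from the original setup):
[root@node1 ~]# vgs vg_images                          # confirm the shared volume group is visible
[root@node1 ~]# lvcreate -L 20G -n vm1-disk vg_images  # allocate a per-VM logical volume on demand
[root@node1 ~]# lvs vg_images                          # the new volume can now back the VM's disk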
Network virtualization
OpenVSwitch: OVS
VLAN: 4096 IDs; VXLAN: 2^24 IDs
virtual routers
virtual switches
physical bridges
special topologies
reachable from the external network
internal-only virtual networks
management network
external network
communication among VM instances, and between VM instances and the external network
communication among VM instances on different physical servers, and between those instances and the external network
see the diagram below
VLAN: Virtual LAN
Why it is needed
VLAN:
a LAN is the set of nodes a broadcast frame can reach, i.e. the range of direct layer-2 communication;
broadcast traffic consumes CPU and network bandwidth
a router is a natural barrier against broadcasts
Partitioning (virtualizing) the switch can be implemented:
based on MAC address
based on switch port
based on IP address
based on user
Switch port types:
access link
trunk link (the backbone/aggregation link)
accepts frames no matter what their VLAN ID is, and forwards them toward the target according to that VLAN ID
carries traffic for multiple VLAN IDs
see the diagram below
VLAN trunking protocols:
IEEE 802.1Q
ISL: Inter-Switch Link
TAP/TUN/VETH
Bridge
OVS
OSI --> Software Defined Networking (SDN), Network Functions Virtualization (NFV)
The centralized control SDN gains through platforms such as OpenFlow and OpenStack is a key building block driving cloud computing
SDN's main focus is the data center
The protocol name is 802.1q, written with a dot
The kernel module name is 8021q, written without a dot
Inspect the 8021q module:
[root@node1 ~]# modinfo 8021q
filename: /lib/modules/3.10.0-862.el7.x86_64/kernel/net/8021q/8021q.ko.xz
version: 1.8
license: GPL
alias: rtnl-link-vlan
retpoline: Y
rhelversion: 7.5
srcversion: A57F0AC30965A554203D4E3
depends: mrp,garp
intree: Y
vermagic: 3.10.0-862.el7.x86_64 SMP mod_unload modversions
signer: CentOS Linux kernel signing key
sig_key: 3A:F3:CE:8A:74:69:6E:F1:BD:0F:37:E5:52:62:7B:71:09:E3:2B:96
sig_hashalgo: sha256
To make the Linux kernel support this feature, simply load the module:
[root@node1 ~]# modprobe 8021q
cross-VLAN traffic, VLAN trunking
After the 8021q module is loaded, the following file appears:
[root@node1 ~]# ls /proc/net/vlan/config
[root@node1 ~]# yum info vconfig
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.tuna.tsinghua.edu.cn
* epel: mirrors.tuna.tsinghua.edu.cn
* extras: mirrors.aliyun.com
* updates: mirrors.tuna.tsinghua.edu.cn
Available Packages
Name : vconfig
Arch : x86_64
Version : 1.9
Release : 16.el7
Size : 26 k
Repo : epel/x86_64
Summary : Linux 802.1q VLAN configuration utility
URL : http://www.candelatech.com/~greear/vlan.html
License : GPLv2+
Description : The vconfig program configures and adjusts 802.1q VLAN parameters.
: This tool is deprecated in favor of "ip link" command.
[root@node1 ~]# yum -y install vconfig
[root@node1 ~]# vconfig --help
Expecting argc to be 3-5, inclusive. Was: 2
Usage: add [interface-name] [vlan_id]
rem [vlan-name]
set_flag [interface-name] [flag-num] [0 | 1]
set_egress_map [vlan-name] [skb_priority] [vlan_qos]
set_ingress_map [vlan-name] [skb_priority] [vlan_qos]
set_name_type [name-type]
* The [interface-name] is the name of the ethernet card that hosts
the VLAN you are talking about.
* The vlan_id is the identifier (0-4095) of the VLAN you are operating on.
* skb_priority is the priority in the socket buffer (sk_buff).
* vlan_qos is the 3 bit priority in the VLAN header
* name-type: VLAN_PLUS_VID (vlan0005), VLAN_PLUS_VID_NO_PAD (vlan5),
DEV_PLUS_VID (eth0.0005), DEV_PLUS_VID_NO_PAD (eth0.5)
* bind-type: PER_DEVICE # Allows vlan 5 on eth0 and eth1 to be unique.
PER_KERNEL # Forces vlan 5 to be unique across all devices.
* FLAGS: 1 REORDER_HDR When this is set, the VLAN device will move the
ethernet header around to make it look exactly like a real
ethernet device. This may help programs such as DHCPd which
read the raw ethernet packet and make assumptions about the
location of bytes. If you don't need it, don't turn it on, because
there will be at least a small performance degradation. Default
is OFF.
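As the vconfig description above notes, the tool is deprecated in favor of "ip link". A hedged equivalent using iproute2, with VLAN ID 10 on eth0 purely as an example:
[root@node1 ~]# ip link add link eth0 name eth0.10 type vlan id 10
[root@node1 ~]# ip link set eth0.10 up
[root@node1 ~]# cat /proc/net/vlan/config      # the new VLAN sub-interface is registered here
[root@node1 ~]# ip -d link show eth0.10        # details show "vlan protocol 802.1Q id 10"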
Used in IDC / datacenter cloud operations; not explored further here
Inter-VLAN routing:
Router:
access links: two cables, the router provides one interface per VLAN
trunk link: one cable, the one-armed (router-on-a-stick) setup, the router presents a single interface to the switch (see the sketch after this list)
Layer-3 switch:
has both layer-2 and layer-3 functionality
ships with a built-in routing module
layer-2 switch + switching module + routing module, trunked over a one-armed link
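A rough sketch of the one-armed (router-on-a-stick) setup using a Linux box as the router; the VLAN IDs 10/20 and the addresses are illustrative only:
[root@node1 ~]# ip link add link eth0 name eth0.10 type vlan id 10
[root@node1 ~]# ip link add link eth0 name eth0.20 type vlan id 20
[root@node1 ~]# ip addr add 10.0.10.254/24 dev eth0.10; ip link set eth0.10 up
[root@node1 ~]# ip addr add 10.0.20.254/24 dev eth0.20; ip link set eth0.20 up
[root@node1 ~]# sysctl -w net.ipv4.ip_forward=1        # let the box route between the two VLANs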
On-demand resource allocation
IaaS: Infrastructure as a Service
PaaS: Platform as a Service
docker
Resource isolation
Container technology is built on this capability
chroot
The Linux kernel provides two facilities:
namespaces
filesystem isolation: mount-point isolation
network isolation: isolates network resources such as network devices, IPv4/IPv6 addresses, IP routing tables, firewall rules, /proc/net, /sys/class/net, and sockets
IPC isolation: inter-process communication
user and group isolation
PID isolation: PIDs are renumbered inside each namespace, so two different namespaces can use the same PID
UTS isolation: Unix Time-sharing System, isolates the hostname and domain name
cgroups
used for resource allocation: limits the resources partitioned by the namespaces, and can also assign weights, account for usage, and perform other management tasks
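A minimal cgroup illustration (cgroup v1 layout as found on CentOS 7; the group name "demo" and the 100M limit are arbitrary):
[root@node1 ~]# mkdir /sys/fs/cgroup/memory/demo
[root@node1 ~]# echo 100M > /sys/fs/cgroup/memory/demo/memory.limit_in_bytes   # cap memory usage for the group
[root@node1 ~]# echo $$ > /sys/fs/cgroup/memory/demo/tasks                     # put the current shell into the group
[root@node1 ~]# cat /sys/fs/cgroup/memory/demo/memory.usage_in_bytes           # usage accounting for the group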
Linux network namespaces:
Note: netns is implemented in the kernel; its control interface is the netns object provided by iproute
the iproute shipped with CentOS 6.6 lacks this object; you need the newer build from the OpenStack Icehouse EPEL repository
1 Using netns
ip netns list
ip netns add NAME
ip netns del NAME
ip netns exec NAME COMMAND
2 Using virtual Ethernet (veth) pairs
ip link add FRONTEND-NAME type veth peer name BACKEND-NAME
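For example (a sketch only; the names veth0/veth1 and namespace ns1 are placeholders), the two ends behave like the two plugs of a virtual cable:
[root@node1 ~]# ip netns add ns1
[root@node1 ~]# ip link add veth0 type veth peer name veth1
[root@node1 ~]# ip link set veth1 netns ns1                        # move one end into the namespace
[root@node1 ~]# ip addr add 10.0.0.1/24 dev veth0; ip link set veth0 up
[root@node1 ~]# ip netns exec ns1 ip addr add 10.0.0.2/24 dev veth1
[root@node1 ~]# ip netns exec ns1 ip link set veth1 up
[root@node1 ~]# ping -c 2 10.0.0.2                                 # the host end reaches the namespace end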
Network virtualization:
Complex virtualized networks:
netns
OpenVSwitch
OVS: written in C; features:
... ...
Building a virtual network (simple case)
In the hypervisor, tick "Intel VT-x/EPT" so the VM itself supports virtualization
Two machines: 192.168.25.11 and 192.168.25.12
2 GB RAM, 4 cores
CentOS 7.5 (1804)
Reference diagram
Adding the virtual network
[root@node1 ~]# ip netns help
Usage: ip netns list
ip netns add NAME
ip netns set NAME NETNSID
ip [-all] netns delete [NAME]
ip netns identify [PID]
ip netns pids NAME
ip [-all] netns exec [NAME] cmd ...
ip netns monitor
ip netns list-id
[root@node1 ~]# ip netns add r1
[root@node1 ~]# ip netns add r2
[root@node1 ~]# ip netns list
r2
r1
[root@node1 ~]# ip netns exec r1 ifconfig -a
lo: flags=8<LOOPBACK> mtu 65536
loop txqueuelen 1000 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@node1 ~]# ip netns exec r2 ifconfig -a
lo: flags=8<LOOPBACK> mtu 65536
loop txqueuelen 1000 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@node1 ~]# ifconfig    # the output differs from the two namespaces above
... ...
[root@node1 ~]# ip netns exec r1 route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
(no routes listed)
Assign an address to r1's lo interface and bring it up
[root@node1 ~]# ip netns exec r1 ifconfig lo 127.0.0.1/8 up
[root@node1 ~]# ip netns exec r1 ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@node1 ~]# ip netns exec r2 ifconfig -a    # shows r1 and r2 are isolated from each other
lo: flags=8<LOOPBACK> mtu 65536
loop txqueuelen 1000 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Check r1's firewall rules
[root@node1 ~]# ip netns exec r1 iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Add a firewall rule inside r1
[root@node1 ~]# ip netns exec r1 iptables -A FORWARD -s 127.0.0.0/8 -j ACCEPT
[root@node1 ~]# ip netns exec r1 iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
ACCEPT all -- 127.0.0.0/8 0.0.0.0/0
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
[root@node1 ~]# ip netns exec r2 iptables -L -n    # shows r1 and r2 are isolated from each other
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
[root@node1 ~]# iptables -L -n    # the host, r1, and r2 are all isolated from one another
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Flush r1's firewall rules
[root@node1 ~]# ip netns exec r1 iptables -F
[root@node1 ~]# ip netns exec r1 iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Design
br-ex: the external virtual bridge
br-in: the internal virtual bridge
The brctl tool is needed, provided by the bridge-utils package
[root@node1 ~]# yum info bridge-utils
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.tuna.tsinghua.edu.cn
* epel: mirrors.tuna.tsinghua.edu.cn
* extras: mirrors.aliyun.com
* updates: mirrors.tuna.tsinghua.edu.cn
Available Packages
Name : bridge-utils
Arch : x86_64
Version : 1.5
Release : 9.el7
Size : 32 k
Repo : base/7/x86_64
Summary : Utilities for configuring the linux ethernet bridge
URL : http://www.linuxfoundation.org/collaborate/workgroups/networking/bridge
License : GPLv2+
Description : This package contains utilities for configuring the linux ethernet
: bridge. The linux ethernet bridge can be used for connecting multiple
: ethernet devices together. The connecting is fully transparent: hosts
: connected to one ethernet device see hosts connected to the other
: ethernet devices directly.
:
: Install bridge-utils if you want to use the linux ethernet bridge.
[root@node1 ~]# yum -y install bridge-utils
[root@node1 ~]# brctl -h
Usage: brctl [commands]
commands:
addbr <bridge> add bridge
delbr <bridge> delete bridge
addif <bridge> <device> add interface to bridge
delif <bridge> <device> delete interface from bridge
hairpin <bridge> <port> {on|off} turn hairpin on/off
setageing <bridge> <time> set ageing time
setbridgeprio <bridge> <prio> set bridge priority
setfd <bridge> <time> set bridge forward delay
sethello <bridge> <time> set hello time
setmaxage <bridge> <time> set max message age
setpathcost <bridge> <port> <cost> set path cost
setportprio <bridge> <port> <prio> set port priority
show [ <bridge> ] show a list of bridges
showmacs <bridge> show a list of mac addrs
showstp <bridge> show bridge stp info
stp <bridge> {on|off} turn stp on/off
[root@node1 ~]# brctl addbr br-ex
[root@node1 ~]# ifconfig    # br-ex does not show up yet
... ...
[root@node1 ~]# ip link set br-ex up
[root@node1 ~]# ifconfig    # br-ex now shows up
br-ex: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::94d2:5eff:fe23:bff5 prefixlen 64 scopeid 0x20<link>
ether 96:d2:5e:23:bf:f5 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 6 bytes 508 (508.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.25.11 netmask 255.255.255.0 broadcast 192.168.25.255
inet6 fe80::8b3c:e8a1:986a:9e34 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:42:8d:16 txqueuelen 1000 (Ethernet)
RX packets 1640 bytes 168718 (164.7 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1183 bytes 130398 (127.3 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.50.11 netmask 255.255.255.0 broadcast 192.168.50.255
inet6 fe80::d53:2e16:ceef:f04e prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:42:8d:20 txqueuelen 1000 (Ethernet)
RX packets 20 bytes 2229 (2.1 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 13 bytes 954 (954.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 74 bytes 5980 (5.8 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 74 bytes 5980 (5.8 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Creating the physical bridge
Moving the physical NIC onto the bridge takes three steps:
remove the IP from the physical NIC, add that IP to the bridge, then add the physical NIC to the bridge
as follows:
[root@node1 ~]# ip addr del 192.168.25.11/24 dev eth0;ip addr add 192.168.25.11/24 dev br-ex;brctl addif br-ex eth0
ip addr del 192.168.25.11/24 dev eth0      # remove the IP from eth0
ip addr add 192.168.25.11/24 dev br-ex     # add the IP to the bridge
brctl addif br-ex eth0                     # add the physical NIC to the bridge
[root@node1 ~]# ifconfig    # eth0's IP has moved to the br-ex bridge
br-ex: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.25.11 netmask 255.255.255.0 broadcast 0.0.0.0
inet6 fe80::4cf3:58ff:fe9c:51dd prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:42:8d:16 txqueuelen 1000 (Ethernet)
RX packets 47 bytes 3827 (3.7 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 28 bytes 2580 (2.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::8b3c:e8a1:986a:9e34 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:42:8d:16 txqueuelen 1000 (Ethernet)
RX packets 642 bytes 57986 (56.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 429 bytes 50500 (49.3 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.50.11 netmask 255.255.255.0 broadcast 192.168.50.255
inet6 fe80::d53:2e16:ceef:f04e prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:42:8d:20 txqueuelen 1000 (Ethernet)
RX packets 35 bytes 3243 (3.1 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 23 bytes 1794 (1.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 68 bytes 5648 (5.5 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 68 bytes 5648 (5.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Create the internal bridge
[root@node1 ~]# brctl addbr br-in
[root@node1 ~]# ip link set br-in up
[root@node1 ~]# ifconfig    # br-in appears
Create the virtual NICs
Add a veth pair
one end goes into a network namespace, the other onto the bridge
Enable IP forwarding
[root@node1 ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
[root@node1 ~]# sysctl -p
net.ipv4.ip_forward = 1
[root@node1 ~]# ip link help
Usage: ip link add [link DEV] [ name ] NAME
[ txqueuelen PACKETS ]
[ address LLADDR ]
[ broadcast LLADDR ]
[ mtu MTU ] [index IDX ]
[ numtxqueues QUEUE_COUNT ]
[ numrxqueues QUEUE_COUNT ]
type TYPE [ ARGS ]
ip link delete { DEVICE | dev DEVICE | group DEVGROUP } type TYPE [ ARGS ]
ip link set { DEVICE | dev DEVICE | group DEVGROUP }
[ { up | down } ]
[ type TYPE ARGS ]
[ arp { on | off } ]
[ dynamic { on | off } ]
[ multicast { on | off } ]
[ allmulticast { on | off } ]
[ promisc { on | off } ]
[ trailers { on | off } ]
[ carrier { on | off } ]
[ txqueuelen PACKETS ]
[ name NEWNAME ]
[ address LLADDR ]
[ broadcast LLADDR ]
[ mtu MTU ]
[ netns { PID | NAME } ]
[ link-netnsid ID ]
[ alias NAME ]
[ vf NUM [ mac LLADDR ]
[ vlan VLANID [ qos VLAN-QOS ] [ proto VLAN-PROTO ] ]
[ rate TXRATE ]
[ max_tx_rate TXRATE ]
[ min_tx_rate TXRATE ]
[ spoofchk { on | off} ]
[ query_rss { on | off} ]
[ state { auto | enable | disable} ] ]
[ trust { on | off} ] ]
[ node_guid { eui64 } ]
[ port_guid { eui64 } ]
[ xdp { off |
object FILE [ section NAME ] [ verbose ] |
pinned FILE } ]
[ master DEVICE ][ vrf NAME ]
[ nomaster ]
[ addrgenmode { eui64 | none | stable_secret | random } ]
[ protodown { on | off } ]
ip link show [ DEVICE | group GROUP ] [up] [master DEV] [vrf NAME] [type TYPE]
ip link xstats type TYPE [ ARGS ]
ip link afstats [ dev DEVICE ]
ip link help [ TYPE ]
TYPE := { vlan | veth | vcan | dummy | ifb | macvlan | macvtap |
bridge | bond | team | ipoib | ip6tnl | ipip | sit | vxlan |
gre | gretap | ip6gre | ip6gretap | vti | nlmon | team_slave |
bond_slave | ipvlan | geneve | bridge_slave | vrf | macsec }
[root@node1 ~]# ip netns list
r2
r1
[root@node1 ~]# ip link add veth1.1 type veth peer name veth1.2
[root@node1 ~]# ip link show    # veth1.1 and veth1.2 now appear
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br-ex state UP mode DEFAULT group default qlen 1000
link/ether 00:0c:29:42:8d:16 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 00:0c:29:42:8d:20 brd ff:ff:ff:ff:ff:ff
4: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 00:0c:29:42:8d:16 brd ff:ff:ff:ff:ff:ff
5: br-in: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/ether 66:fe:99:2a:0e:8d brd ff:ff:ff:ff:ff:ff
6: veth1.2@veth1.1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 0a:4c:82:38:1e:53 brd ff:ff:ff:ff:ff:ff
7: veth1.1@veth1.2: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 36:08:c6:72:02:e8 brd ff:ff:ff:ff:ff:ff
ip link set DEVICE netns NAMESPACE
A NIC can belong to only one namespace at a time: either the host or the named namespace
[root@node1 ~]# ip link set veth1.1 netns r1
[root@node1 ~]# ip link set veth1.2 netns r2
[root@node1 ~]# ip link show
veth1.1 and veth1.2 are gone, because the host itself is also a namespace, distinct from r1 and r2
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br-ex state UP mode DEFAULT group default qlen 1000
link/ether 00:0c:29:42:8d:16 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 00:0c:29:42:8d:20 brd ff:ff:ff:ff:ff:ff
4: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 00:0c:29:42:8d:16 brd ff:ff:ff:ff:ff:ff
5: br-in: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/ether 66:fe:99:2a:0e:8d brd ff:ff:ff:ff:ff:ff
[root@node1 ~]# ip netns exec r1 ifconfig -a    # veth1.1 now appears here
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth1.1: flags=4098<BROADCAST,MULTICAST> mtu 1500
ether 36:08:c6:72:02:e8 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@node1 ~]# ip netns exec r2 ifconfig -a    # veth1.2 now appears here
lo: flags=8<LOOPBACK> mtu 65536
loop txqueuelen 1000 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth1.2: flags=4098<BROADCAST,MULTICAST> mtu 1500
ether 0a:4c:82:38:1e:53 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Rename: change veth1.1 inside r1 to eth0, and do the same in r2
this does not conflict with the host's eth0, because they live in different namespaces
ip link set DEVICE name NEWNAME
[root@node1 ~]# ip netns exec r1 ip link set veth1.1 name eth0
[root@node1 ~]# ip netns exec r1 ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
7: eth0@if6: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 36:08:c6:72:02:e8 brd ff:ff:ff:ff:ff:ff link-netnsid 1
[root@node1 ~]# ip netns exec r2 ip link set veth1.2 name eth0
[root@node1 ~]# ip netns exec r2 ip link show
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
6: eth0@if7: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 0a:4c:82:38:1e:53 brd ff:ff:ff:ff:ff:ff link-netnsid 0
Give the two interfaces (eth0 in r1 and eth0 in r2) addresses in the same subnet and they can talk to each other
[root@node1 ~]# ip netns exec r1 ifconfig eth0 10.0.1.1/24 up
[root@node1 ~]# ip netns exec r2 ifconfig eth0 10.0.1.2/24 up
[root@node1 ~]# ip netns exec r1 ping 10.0.1.1    # r1 can ping its own address
PING 10.0.1.1 (10.0.1.1) 56(84) bytes of data.
64 bytes from 10.0.1.1: icmp_seq=1 ttl=64 time=0.035 ms
64 bytes from 10.0.1.1: icmp_seq=2 ttl=64 time=0.046 ms
64 bytes from 10.0.1.1: icmp_seq=3 ttl=64 time=0.052 ms
^C
--- 10.0.1.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 0.035/0.044/0.052/0.008 ms
[root@node1 ~]# ip netns exec r1 ping 10.0.1.2    # r1 can reach r2's eth0
PING 10.0.1.2 (10.0.1.2) 56(84) bytes of data.
64 bytes from 10.0.1.2: icmp_seq=1 ttl=64 time=0.057 ms
64 bytes from 10.0.1.2: icmp_seq=2 ttl=64 time=0.063 ms
64 bytes from 10.0.1.2: icmp_seq=3 ttl=64 time=0.052 ms
64 bytes from 10.0.1.2: icmp_seq=4 ttl=64 time=0.053 ms
^C
--- 10.0.1.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3010ms
rtt min/avg/max/mdev = 0.052/0.056/0.063/0.006 ms
Virtual router network architecture (a fresh setup; it does not build on the previous results)
Topology diagram
Router
a layer-3 device
DHCP service
application layer
Docker's networking works this way (see figure)
OpenStack relies on this model: virtual routing
Preparation
In the hypervisor, tick "Intel VT-x/EPT" so the VM itself supports virtualization
One machine: 192.168.25.11
2 GB RAM, 4 cores
CentOS 7.5 (1804)
Plan:
br-ex: the external virtual bridge
br-in: the internal virtual bridge
the brctl tool is needed, provided by the bridge-utils package
router: implemented with a network namespace
two guest VMs: implemented with KVM
Install the tool packages
brctl: adds bridges
tcpdump: packet capture
qemu-kvm: starts the KVM guests
dnsmasq: DNS/DHCP server
[root@node1 ~]# yum info bridge-utils tcpdump qemu-kvm dnsmasq
[root@node1 ~]# yum -y install bridge-utils tcpdump qemu-kvm dnsmasq
Add the external bridge
[root@node1 ~]# brctl addbr br-ex
[root@node1 ~]# ip link set br-ex up
[root@node1 ~]# ifconfig    # br-ex shows up
Move the IP from the host's physical NIC eth0 to the new external bridge, then add eth0 to that bridge
[root@node1 ~]# ip addr del 192.168.25.11/24 dev eth0;ip addr add 192.168.25.11/24 dev br-ex;brctl addif br-ex eth0
[root@node1 ~]# ifconfig    # eth0's IP has moved to the br-ex bridge
Add the internal bridge
[root@node1 ~]# brctl addbr br-in
[root@node1 ~]# ip link set br-in up
[root@node1 ~]# ifconfig    # br-in appears
Enable IP forwarding
[root@node1 ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
[root@node1 ~]# sysctl -p
net.ipv4.ip_forward = 1
KVM virtual machines
[root@node1 ~]# yum info qemu-kvm
[root@node1 ~]# yum -y install qemu-kvm
Load the KVM modules
[root@node1 ~]# modprobe kvm
[root@node1 ~]# lsmod |grep kvm
kvm_intel 174841 0
kvm 578518 1 kvm_intel
irqbypass 13503 1 kvm
Creating VMs requires disk image files; create an image directory
[root@node1 ~]# mkdir -p /images/cirros
[root@node1 ~]# cd /images/cirros
Download: http://download.cirros-cloud.net/0.3.6/cirros-0.3.6-i386-disk.img
[root@node1 cirros]# rz    # upload the image file cirros-0.3.6-i386-disk.img
[root@node1 cirros]# cp cirros-0.3.6-i386-disk.img test1.qcow2
[root@node1 cirros]# cp cirros-0.3.6-i386-disk.img test2.qcow2
Write the network-up script used when starting the KVM guests
[root@node1 cirros]# vim /etc/qemu-ifup
#!/bin/bash
#
bridge='br-in'
if [ -n "$1" ];then
    ip link set $1 up
    sleep 1
    brctl addif $bridge $1
    [ $? -eq 0 ] && exit 0 || exit 1
else
    echo "Error: no interface specified."
    exit 2
fi
[root@node1 cirros]# chmod +x /etc/qemu-ifup
[root@node1 cirros]# bash -n /etc/qemu-ifup
[root@node1 cirros]# ln -s /usr/libexec/qemu-kvm /usr/bin/    # put the qemu-kvm command on the PATH
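Optionally, a matching teardown script can detach the tap device when the VM stops; qemu-kvm looks for /etc/qemu-ifdown by default. This is only a sketch and is not required for the steps below:
[root@node1 cirros]# vim /etc/qemu-ifdown
#!/bin/bash
#
bridge='br-in'
if [ -n "$1" ];then
    brctl delif $bridge $1      # remove the tap device from the bridge
    ip link set $1 down
    exit 0
else
    echo "Error: no interface specified."
    exit 2
fi
[root@node1 cirros]# chmod +x /etc/qemu-ifdown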
Start the first VM, vm1
Note: when starting a KVM guest these two options cannot be used together:
-nographic cannot be used with -daemonize
[root@node1 ~]# qemu-kvm -name vm1 -m 100 -smp 2 -drive file=/images/cirros/test1.qcow2,if=virtio,media=disk -net nic,model=virtio,macaddr=52:54:00:aa:bb:cc -net tap,ifname=fgq1,script=/etc/qemu-ifup --nographic
ifname=NAME: a custom tap device name; the default is tap0/1/2/...
model=virtio: paravirtualized NIC
-m 100: either 100 or 100m works here (megabytes)
... ...
Sending discover...
wait a few minutes
... ...
############ debug end ##############
____ ____ ____
/ __/ __ ____ ____ / __ \/ __/
/ /__ / // __// __// /_/ /\ \
\___//_//_/ /_/ \____/___/
http://cirros-cloud.net
login as 'cirros' user. default password: 'cubswin:)'. use 'sudo' for root.
cirros login: cirros
Password:
Enter the user name and password to reach the shell; vm1 has started successfully
$ ifconfig    # no IP address yet
eth0 Link encap:Ethernet HWaddr 52:54:00:AA:BB:CC
inet6 addr: fe80::5054:ff:feaa:bbcc/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:6 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:508 (508.0 B) TX bytes:1132 (1.1 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:12 errors:0 dropped:0 overruns:0 frame:0
TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1020 (1020.0 B) TX bytes:1020 (1020.0 B)
Open another terminal and start the second VM, vm2
Note that the VM name, image file, MAC address, and tap NIC name all differ from vm1
[root@node1 ~]# qemu-kvm -name vm2 -m 100 -smp 2 -drive file=/images/cirros/test2.qcow2,if=virtio,media=disk -net nic,model=virtio,macaddr=52:54:00:aa:bb:dd -net tap,ifname=fgq2,script=/etc/qemu-ifup --nographic
... ...
Sending discover...
wait a few minutes
... ...
login as 'cirros' user. default password: 'cubswin:)'. use 'sudo' for root.
cirros login: cirros
Password:
$ ifconfig    # no IP address yet
[root@node1 ~]# brctl show
fgq1 and fgq2 appear under br-in's interfaces
Both VMs started successfully, and the host-side ends (fgq1 and fgq2) were attached to br-in automatically
Create the virtual router network
[root@node1 ~]# ip netns list    # nothing yet
[root@node1 ~]# ip netns add r1
[root@node1 ~]# ip netns list
r1
Now bring r1 into play: create a veth pair, one end inside r1, the other attached (by hand) to the br-in bridge
rins: the end on the internal bridge br-in; it needs no IP address; it links to the internal VMs
rinr: the end inside router r1; it needs an IP address; it links to the internal VMs
that IP must be in the same subnet as the two internal VMs behind fgq1/fgq2
the two internal VMs must point their default gateway at rinr
# Building the internal network
Check the NICs
[root@node1 ~]# ip link show
[root@node1 ~]# ifconfig -a
Create a veth pair
[root@node1 ~]# ip link add rinr type veth peer name rins
[root@node1 ~]# ip link show |grep rin
10: rins@rinr: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
11: rinr@rins: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
Bring up both ends
[root@node1 ~]# ip link set rinr up
[root@node1 ~]# ip link set rins up
Add rins to the br-in bridge
[root@node1 ~]# brctl addif br-in rins
[root@node1 ~]# brctl show
bridge name bridge id STP enabled interfaces
br-ex 8000.000c29428d16 no eth0
br-in 8000.3ef91e6cf2b1 no fgq1
fgq2
rins
br-in now carries three ports: rins (needs no address) plus the two VM NICs fgq1 and fgq2
Move rinr into r1
[root@node1 ~]# ip link set rinr netns r1
[root@node1 ~]# ip link show    # rinr is no longer visible in the host namespace
[root@node1 ~]# ip netns exec r1 ifconfig -a
lo: flags=8<LOOPBACK> mtu 65536
loop txqueuelen 1000 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
rinr: flags=4098<BROADCAST,MULTICAST> mtu 1500
ether 46:69:0c:17:05:c2 txqueuelen 1000 (Ethernet)
RX packets 8 bytes 648 (648.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 8 bytes 648 (648.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
rinr is now inside r1, but it is not brought up automatically
Rename it
[root@node1 ~]# ip netns exec r1 ip link set rinr name eth0
Add an address and bring it up
assigning the address with "up" activates the interface at the same time, so no separate step is needed
optional: ip netns exec r1 ip link set eth0 up
optional: ip netns exec r1 ip link show    # shows eth0 in the UP state
[root@node1 ~]# ip netns exec r1 ifconfig eth0 10.0.1.254/24 up
[root@node1 ~]# ip netns exec r1 ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.0.1.254 netmask 255.255.255.0 broadcast 10.0.1.255
inet6 fe80::4469:cff:fe17:5c2 prefixlen 64 scopeid 0x20<link>
ether 46:69:0c:17:05:c2 txqueuelen 1000 (Ethernet)
RX packets 8 bytes 648 (648.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 16 bytes 1296 (1.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Internal VM vm1
keep its window open after it boots
$ sudo su -    # switch to root
# ifconfig eth0 10.0.1.1/24 up    # add the IP address
# ifconfig    # the address is set
eth0 Link encap:Ethernet HWaddr 52:54:00:AA:BB:CC
inet addr:10.0.1.1 Bcast:10.255.255.255 Mask:255.0.0.0
inet6 addr: fe80::5054:ff:feaa:bbcc/64 Scope:Link
... ...
# ping 10.0.1.254    # the router-side address is reachable
PING 10.0.1.254 (10.0.1.254): 56 data bytes
64 bytes from 10.0.1.254: seq=0 ttl=64 time=4.168 ms
64 bytes from 10.0.1.254: seq=1 ttl=64 time=0.945 ms
# route add default gw 10.0.1.254    # point the default gateway at the router
# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.1.254 0.0.0.0 UG 0 0 0 eth0
10.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 eth0
Internal VM vm2
keep its window open after it boots
$ sudo su -
# ifconfig eth0 10.0.1.2/24 up
# ping 10.0.1.254
PING 10.0.1.254 (10.0.1.254): 56 data bytes
64 bytes from 10.0.1.254: seq=0 ttl=64 time=3.361 ms
64 bytes from 10.0.1.254: seq=1 ttl=64 time=1.510 ms
64 bytes from 10.0.1.254: seq=2 ttl=64 time=0.947 ms
# route add default gw 10.0.1.254
# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.1.254 0.0.0.0 UG 0 0 0 eth0
10.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 eth0
At this point the internal network is working
# Building the external network
Now add a second veth pair: one end goes into router r1, the other onto the external physical bridge
rexr: the end inside router r1, used for the external connection
rexs: the end on the physical bridge br-ex, used for the external connection
Add the veth pair
[root@node1 ~]# ip link add rexr type veth peer name rexs
Add rexs to the external physical bridge br-ex
[root@node1 ~]# brctl addif br-ex rexs
Bring it up
[root@node1 ~]# ip link set rexs up
[root@node1 ~]# brctl show    # rexs now appears under br-ex's interfaces
bridge name bridge id STP enabled interfaces
br-ex 8000.000c29428d16 no eth0
rexs
br-in 8000.3ef91e6cf2b1 no fgq1
fgq2
rins
[root@node1 ~]# ifconfig    # rexs is shown and is up
Move rexr into router r1
[root@node1 ~]# ip link set rexr netns r1
Rename it
[root@node1 ~]# ip netns exec r1 ip link set rexr name eth1
[root@node1 ~]# ip link show    # rexr is no longer visible in the host namespace
[root@node1 ~]# ip netns exec r1 ifconfig -a
rexr is now inside r1, but it is not brought up automatically
Add an address and bring it up
[root@node1 ~]# ip netns exec r1 ifconfig eth1 192.168.25.21/24 up
[root@node1 ~]# ip netns exec r1 ifconfig -a    # eth1 has the address and is up
r1 can ping the host's gateway, 192.168.25.2
[root@node1 ~]# ip netns exec r1 ping 192.168.25.2
PING 192.168.25.2 (192.168.25.2) 56(84) bytes of data.
64 bytes from 192.168.25.2: icmp_seq=1 ttl=128 time=0.270 ms
64 bytes from 192.168.25.2: icmp_seq=2 ttl=128 time=0.339 ms
Both internal VMs vm1 and vm2 can ping 192.168.25.21
# ping 192.168.25.21
PING 192.168.25.21 (192.168.25.21): 56 data bytes
64 bytes from 192.168.25.21: seq=0 ttl=64 time=2.837 ms
64 bytes from 192.168.25.21: seq=1 ttl=64 time=0.911 ms
64 bytes from 192.168.25.21: seq=2 ttl=64 time=0.745 ms
Forwarding between r1's interfaces is already enabled by default, because:
when a namespace is created while the host's kernel forwarding is on, the namespace's kernel forwarding defaults to on as well
the namespace's forwarding is not managed separately here; it follows whether the host's forwarding is on
if the host's forwarding is off when the namespace is created, the namespace's forwarding is off too
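A quick way to check both values (a verification step, not part of the original procedure):
[root@node1 ~]# sysctl -n net.ipv4.ip_forward                   # forwarding on the host
[root@node1 ~]# ip netns exec r1 sysctl -n net.ipv4.ip_forward  # forwarding inside r1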
But neither vm1 nor vm2 can ping 192.168.25.2
# ping 192.168.25.2
PING 192.168.25.2 (192.168.25.2): 56 data bytes
(it hangs here with no replies)
The requests do go out, but the replies cannot come back: 192.168.25.2 has no gateway/route pointing 10.0.1.0/24 back at r1's rexr (eth1)
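Besides the SNAT rule used further below, an alternative fix (assuming you control that upstream gateway, which is usually not the case here) would be a return route pointing 10.0.1.0/24 back at r1's external address:
# on the 192.168.25.2 gateway, if it were a Linux box (illustrative only)
ip route add 10.0.1.0/24 via 192.168.25.21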
# Packet-capture tests
vm1's host-side NIC fgq1: capture here first
[root@node1 ~]# tcpdump -i fgq1 -nn icmp
... ...
15:29:36.626597 IP 10.0.1.1 > 192.168.25.2: ICMP echo request, id 33025, seq 212, length 64
The capture shows echo requests to 192.168.25.2, so the packets do pass through fgq1
rins, the bridge-side NIC toward the router: capture here next
[root@node1 ~]# tcpdump -i rins -nn icmp
... ...
15:32:56.078415 IP 10.0.1.1 > 192.168.25.2: ICMP echo request, id 33025, seq 411, length 64
the ping packets reach this point as well
rinr, i.e. eth0 inside the router namespace: capture here
[root@node1 ~]# ip netns exec r1 tcpdump -i eth0 -nn icmp
... ...
15:35:15.459129 IP 10.0.1.1 > 192.168.25.2: ICMP echo request, id 33025, seq 550, length 64
the packets pass through this interface
rexr, i.e. eth1 inside the router namespace: capture here
[root@node1 ~]# ip netns exec r1 tcpdump -i eth1 -nn icmp
... ...
15:36:55.699706 IP 10.0.1.1 > 192.168.25.2: ICMP echo request, id 33025, seq 650, length 64
the packets pass through this interface too
Summing up: the echo requests leave the router and can be delivered to 192.168.25.2, but the replies never come back
Checking the traffic toward 192.168.25.2
the 10.0.1.1 --> 192.168.25.2 requests are present there as well
[root@node1 ~]# tcpdump -i eth0 -nn icmp
... ...
15:38:50.102905 IP 10.0.1.1 > 192.168.25.2: ICMP echo request, id 33025, seq 764, length 64
The internal packets can get out, but the reply packets cannot get back in
Adding an SNAT rule on router r1 lets the replies come back
[root@node1 ~]# ip netns exec r1 iptables -t nat -L -n    # no rules yet
[root@node1 ~]# ip netns exec r1 iptables -t nat -A POSTROUTING -s 10.0.1.0/24 ! -d 10.0.1.0/24 -j SNAT --to-source 192.168.25.21
-s 10.0.1.0/24                       match traffic coming from this network
! -d 10.0.1.0/24                     translate only traffic whose destination is outside this network; traffic staying inside the network does not need translation
-j SNAT --to-source 192.168.25.21    rewrite the source to the external address 192.168.25.21
-j MASQUERADE                        an alternative target (see the sketch below)
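The MASQUERADE variant of the same rule would look like this; it picks the outgoing interface's address automatically, at a small per-packet cost, and is mainly useful when that address can change:
[root@node1 ~]# ip netns exec r1 iptables -t nat -A POSTROUTING -s 10.0.1.0/24 ! -d 10.0.1.0/24 -j MASQUERADE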
[root@node1 ~]# ip netns exec r1 iptables -t nat -L -n    # the rule is in place
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
SNAT all -- 10.0.1.0/24 !10.0.1.0/24 to:192.168.25.21
Now both internal VMs vm1 and vm2 can ping 192.168.25.2
# ping 192.168.25.2
PING 192.168.25.2 (192.168.25.2): 56 data bytes
64 bytes from 192.168.25.2: seq=0 ttl=127 time=1.578 ms
64 bytes from 192.168.25.2: seq=1 ttl=127 time=1.252 ms
64 bytes from 192.168.25.2: seq=2 ttl=127 time=1.354 ms
Capture at any point along the path (e.g. r1's eth0 or rins): packets now flow in both directions
[root@node1 ~]# tcpdump -i eth0 -nn icmp
... ...
15:53:33.396973 IP 192.168.25.21 > 192.168.25.2: ICMP echo request, id 33537, seq 37, length 64
15:53:33.398049 IP 192.168.25.2 > 192.168.25.21: ICMP echo reply, id 33537, seq 37, length 64
[root@node1 ~]# tcpdump -i rins -nn icmp
... ...
15:53:44.426398 IP 10.0.1.1 > 192.168.25.2: ICMP echo request, id 33537, seq 48, length 64
15:53:44.426915 IP 192.168.25.2 > 10.0.1.1: ICMP echo reply, id 33537, seq 48, length 64
Analysis
Figure 1: for VMs on two virtual networks sitting on the same physical network, either give them addresses in the same subnet, or use SNAT/DNAT
Figure 2: alternatively, bridge them directly and the VMs on the two networks can also reach each other
Figure 3: VMs in two virtual networks communicating, similar to the earlier "building a virtual network (simple case)" model
Figure 4: the five pentagons can all talk to one another, and the three rectangles can all talk to one another, but the two groups cannot reach each other(?)
Questions to think about:
what if virtual networks on different physical hosts need to talk to each other?
and what if, in addition, they need to reach the external network?
Configuring a DHCP service on the router
Run a DHCP service directly inside router r1 so it hands out IP addresses to the internal VMs vm1 and vm2 automatically
The dnsmasq package was installed earlier; install it now if it is missing
yum -y install dnsmasq
[root@node1 ~]# dnsmasq --help
Run the DHCP service entirely from command-line options:
-a, --listen-address=<ipaddr>    listen address
-F, --dhcp-range                 the address pool
--dhcp-range=10.0.1.100,10.0.1.120
-F 10.0.1.100,10.0.1.120         (the two forms are equivalent)
-O, --dhcp-option=<optspec>      pushes options such as the gateway
--dhcp-option-force=<optspec>
[root@node1 ~]# man dnsmasq
search inside the man page: /dhcp-options
Not needed for this walkthrough, but worth knowing: kill the dnsmasq process later with
ip netns exec r1 killall dnsmasq
Start dnsmasq with the DHCP address pool and the gateway option
[root@node1 ~]# ip netns exec r1 dnsmasq -F 10.0.1.100,10.0.1.120 --dhcp-option=option:router,10.0.1.254
[root@node1 ~]# ip netns exec r1 ps aux |grep dnsmasq    # the process list shows dnsmasq running
nobody 3320 0.0 0.0 53856 1108 ? S 17:00 0:00 dnsmasq -F 10.0.1.100,10.0.1.120 --dhcp-option=option:router,10.0.1.254
Internal VM vm2
# ifconfig
eth0 Link encap:Ethernet HWaddr 52:54:00:AA:BB:DD
inet addr:10.0.1.2 Bcast:10.255.255.255 Mask:255.0.0.0
It already has an IP address; that IP has to be released first, and then a DHCP-assigned one obtained
Note: do NOT run the following three steps, they only show the wrong way
udhcpc -R
although the command prints "Lease of 10.0.1.108 obtained", i.e. a DHCP lease was granted,
running ifconfig afterwards shows the DHCP address was never actually applied:
# udhcpc -h    # shows how the command is used
# udhcpc -R
udhcpc (v1.20.1) started
WARN: '/usr/share/udhcpc/default.script' should not be used in cirros. Replaced by cirros-dhcpc.
Sending discover...
Sending select for 10.0.1.108...
Lease of 10.0.1.108 obtained, lease time 3600
WARN: '/usr/share/udhcpc/default.script' should not be used in cirros. Replaced by cirros-dhcpc.
# ifconfig
eth0 Link encap:Ethernet HWaddr 52:54:00:AA:BB:DD
inet addr:10.0.1.2 Bcast:10.255.255.255 Mask:255.0.0.0
inet6 addr: fe80::5054:ff:feaa:bbdd/64 Scope:Link
Start the actual steps from here
# cirros-dhcpc -h
Usage: /sbin/cirros-dhcpc <up|down> <interface>
# /sbin/cirros-dhcpc up eth0
udhcpc (v1.20.1) started
Sending discover...
Sending select for 10.0.1.108...
Lease of 10.0.1.108 obtained, lease time 3600
# ifconfig
eth0 Link encap:Ethernet HWaddr 52:54:00:AA:BB:DD
inet addr:10.0.1.108 Bcast:10.0.1.255 Mask:255.255.255.0
... ....
The old address is released and a new one obtained: release, select, lease of 10.0.1.108
# route -n    # the default gateway is now set
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.1.254 0.0.0.0 UG 0 0 0 eth0
10.0.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
192.168.25.2 is now reachable
# ping 192.168.25.2
PING 192.168.25.2 (192.168.25.2): 56 data bytes
64 bytes from 192.168.25.2: seq=0 ttl=127 time=3.065 ms
64 bytes from 192.168.25.2: seq=1 ttl=127 time=1.132 ms
64 bytes from 192.168.25.2: seq=2 ttl=127 time=1.111 ms
DNS service: the DNS server address, domain name, and so on can be handed out the same way
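For example, a DNS server and a domain could be pushed alongside the gateway (the DNS address and domain here are placeholders; you would stop the earlier dnsmasq instance first):
ip netns exec r1 killall dnsmasq
ip netns exec r1 dnsmasq -F 10.0.1.100,10.0.1.120 \
    --dhcp-option=option:router,10.0.1.254 \
    --dhcp-option=option:dns-server,192.168.25.2 \
    --domain=example.local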
Check on the host
[root@node1 ~]# ps axu|grep dns
nobody 3320 0.0 0.0 53856 1108 ? S 17:00 0:00 dnsmasq -F 10.0.1.100,10.0.1.120 --dhcp-option=option:router,10.0.1.254
dnsmasq shows up in the host's process list too, but it does not hand out addresses on the host side; it only serves the internal network inside netns r1