Docker Swarm supports cross-host container communication out of the box.
The lab environment consists of one manager node (swarm-manager) and two worker nodes (node1 and node2).
I. Basic environment setup
Perform the following steps on all three servers.
1. Configure a static IP
[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens32
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens32
UUID=75963e3f-b289-4bbd-8489-44f6f2b8c7f0
DEVICE=ens32
ONBOOT=yes
IPADDR=192.168.0.30
PREFIX=24
GATEWAY=192.168.0.1
DNS1=114.114.114.114
[root@localhost ~]# systemctl restart network
2. Change the hostname
[root@localhost ~]# hostnamectl set-hostname swarm-manager
[root@localhost ~]# exit //log back in for the new hostname to take effect
[root@swarm-manager ~]#
Repeat the steps above on the other two servers: node1 uses IPADDR=192.168.0.10 with hostname node1, and node2 uses IPADDR=192.168.0.20 with hostname node2.
3. Disable the firewall
[root@swarm-manager ~]# systemctl stop firewalld
[root@swarm-manager ~]# systemctl disable firewalld
4. Synchronize the system time
[root@swarm-manager ~]# yum -y install ntp
[root@swarm-manager ~]# systemctl enable ntpd.service
[root@swarm-manager ~]# ntpdate cn.pool.ntp.org
[root@swarm-manager ~]# hwclock -w
[root@swarm-manager ~]# crontab -e
0 2 * * * ntpdate cn.pool.ntp.org && hwclock -w
5. Install Docker
[root@swarm-manager ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
[root@swarm-manager ~]# yum install -y docker-ce
[root@swarm-manager ~]# systemctl start docker
[root@swarm-manager ~]# systemctl enable docker
6. Disable SELinux
[root@swarm-manager ~]# vim /etc/sysconfig/selinux
SELINUX=disabled
[root@swarm-manager ~]# reboot
II. Create the swarm cluster
1. swarm-manager
Initialize the manager node:
[root@swarm-manager ~]# docker swarm init --advertise-addr 192.168.0.30
Swarm initialized: current node (wshmc30xiq9799m4acz6npaln) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-12s05kb6g2dus4h71glfvn5n2bro4o0ugwr87tqr1g64121bed-a7wt9c7wl8ov9tkl2wudqzwpj 192.168.0.30:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
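If the join command above has scrolled out of the terminal, it can be regenerated at any time on the manager with `docker swarm join-token worker`. As a sketch, the token itself can also be extracted for scripted joins; the block below simulates the daemon's output with the line shown above, so it runs without a live swarm:

```shell
# Simulated output of `docker swarm join-token worker` (copied from above);
# on a real manager you would capture it with:
#   join_cmd=$(docker swarm join-token worker | grep 'docker swarm join')
join_cmd='docker swarm join --token SWMTKN-1-12s05kb6g2dus4h71glfvn5n2bro4o0ugwr87tqr1g64121bed-a7wt9c7wl8ov9tkl2wudqzwpj 192.168.0.30:2377'

# Pull out just the token, e.g. to hand it to provisioning scripts
token=$(echo "$join_cmd" | grep -o 'SWMTKN-[^ ]*')
echo "$token"
```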
[root@swarm-manager ~]# docker info
...
Swarm: active
NodeID: wshmc30xiq9799m4acz6npaln
Is Manager: true
ClusterID: rwtnt4x1vhld9e7b1wxwijhr5
Managers: 1
Nodes: 2
...
2. node1
Run the following on worker node1 to join the swarm created above:
[root@node1 ~]# docker swarm join --token SWMTKN-1-12s05kb6g2dus4h71glfvn5n2bro4o0ugwr87tqr1g64121bed-a7wt9c7wl8ov9tkl2wudqzwpj 192.168.0.30:2377
This node joined a swarm as a worker.
[root@node1 ~]# docker info
...
Swarm: active
NodeID: ketkon20jbk61d6y5boa7ay65
Is Manager: false
Node Address: 192.168.0.10
Manager Addresses:
192.168.0.30:2377
...
3. node2
Run the following on worker node2 to join the swarm created above:
[root@node2 ~]# docker swarm join --token SWMTKN-1-12s05kb6g2dus4h71glfvn5n2bro4o0ugwr87tqr1g64121bed-a7wt9c7wl8ov9tkl2wudqzwpj 192.168.0.30:2377
This node joined a swarm as a worker.
[root@node2 ~]# docker info
...
Swarm: active
NodeID: 4pxuq5enqaczxono3xvkigxug
Is Manager: false
Node Address: 192.168.0.20
Manager Addresses:
192.168.0.30:2377
...
4. On the manager, list the nodes currently in the cluster:
[root@swarm-manager ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
ketkon20jbk61d6y5boa7ay65 node1 Ready Active 18.03.1-ce
4pxuq5enqaczxono3xvkigxug node2 Ready Active 18.03.1-ce
wshmc30xiq9799m4acz6npaln * swarm-manager Ready Active Leader 18.03.1-ce
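The three-node state above can also be checked in scripts by counting Ready entries. The sketch below simulates `docker node ls` with the listing just shown, so it runs without a daemon; on a live manager you would pipe the real command instead:

```shell
# Simulated `docker node ls` body (copied from the listing above)
nodes='ketkon20jbk61d6y5boa7ay65 node1 Ready Active 18.03.1-ce
4pxuq5enqaczxono3xvkigxug node2 Ready Active 18.03.1-ce
wshmc30xiq9799m4acz6npaln * swarm-manager Ready Active Leader 18.03.1-ce'

# Count nodes reporting Ready; on a live manager the equivalent is:
#   docker node ls --format '{{.Status}}' | grep -c Ready
ready=$(echo "$nodes" | grep -c ' Ready ')
echo "$ready ready nodes"
```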
III. Cross-host communication
1. Create an overlay network
Run on swarm-manager:
[root@swarm-manager ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
ae19b054c5a9 bridge bridge local
1e329e08d287 docker_gwbridge bridge local
34963f83928c host host local
nlzfjhwvxn0s ingress overlay swarm
a79f72191b90 none null local
[root@swarm-manager ~]# docker network create --driver overlay --subnet 172.70.1.0/24 --opt encrypted my-swarm-network
j7pqlzjl2kg1cfrxotai6gyq1
[root@swarm-manager ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
ae19b054c5a9 bridge bridge local
1e329e08d287 docker_gwbridge bridge local
34963f83928c host host local
nlzfjhwvxn0s ingress overlay swarm
j7pqlzjl2kg1 my-swarm-network overlay swarm
a79f72191b90 none null local
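The `--opt encrypted` flag enables IPsec encryption of the overlay's VXLAN data plane, which is why the containers later report an MTU of 1424 instead of 1500. The `/24` prefix leaves roughly 254 assignable addresses for the overlay (Swarm additionally reserves addresses such as the subnet gateway and one virtual IP per service). The capacity arithmetic, as plain shell:

```shell
# Address capacity of the 172.70.1.0/24 overlay subnet
prefix=24
total=$(( 1 << (32 - prefix) ))   # 256 addresses in a /24
usable=$(( total - 2 ))           # minus the network and broadcast addresses
echo "usable=$usable"
```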
1.> View the newly created overlay network from a worker node
[root@node2 ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
8caad6e42ca4 bridge bridge local
3f910d13065f docker_gwbridge bridge local
34963f83928c host host local
nlzfjhwvxn0s ingress overlay swarm
a79f72191b90 none null local
The new overlay network is not visible on the worker yet: Swarm extends an overlay network to a worker node only when a task attached to that network is scheduled there.
2. Create a service attached to the new overlay network
[root@swarm-manager ~]# docker service create --replicas 3 --name my-ovs --network my-swarm-network cirros
g4dz7deo73rsus6us8vevkdts
overall progress: 3 out of 3 tasks
1/3: running [==================================================>]
2/3: running [==================================================>]
3/3: running [==================================================>]
verify: Service converged
1.> The service task just started on the manager node
[root@swarm-manager ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2d4d266b77b7 cirros:latest "/sbin/init" 27 seconds ago Up 25 seconds my-ovs.3.w03awm4w1obxhyjz6587mgsgy
[root@swarm-manager ~]# docker exec -it 2d /bin/sh
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:46:01:07
inet addr:172.70.1.7 Bcast:172.70.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1424 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
eth1 Link encap:Ethernet HWaddr 02:42:AC:12:00:03
inet addr:172.18.0.3 Bcast:172.18.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:16 errors:0 dropped:0 overruns:0 frame:0
TX packets:80 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:984 (984.0 B) TX bytes:5664 (5.5 KiB)
2.> View the service task on worker node1
[root@node1 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
44ede4813ba8 cirros:latest "/sbin/init" About a minute ago Up About a minute my-ovs.1.wt5eio81fh3wol77fe5193atn
[root@node1 ~]# docker exec -it 44 /bin/sh
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:46:01:08
inet addr:172.70.1.8 Bcast:172.70.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1424 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
eth1 Link encap:Ethernet HWaddr 02:42:AC:12:00:03
inet addr:172.18.0.3 Bcast:172.18.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:23 errors:0 dropped:0 overruns:0 frame:0
TX packets:97 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1740 (1.6 KiB) TX bytes:6983 (6.8 KiB)
3.> View the service task on worker node2
[root@node2 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1885b73a4871 cirros:latest "/sbin/init" About a minute ago Up About a minute my-ovs.2.kii95woaije6mu255s3myc2mq
[root@node2 ~]# docker exec -it 18 /bin/sh
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:46:01:06
inet addr:172.70.1.6 Bcast:172.70.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1424 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
eth1 Link encap:Ethernet HWaddr 02:42:AC:12:00:03
inet addr:172.18.0.3 Bcast:172.18.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:23 errors:0 dropped:0 overruns:0 frame:0
TX packets:97 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1740 (1.6 KiB) TX bytes:6983 (6.8 KiB)
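In each container, eth0 carries the overlay address allocated from the 172.70.1.0/24 subnet declared earlier (note the 1424 MTU from the encrypted overlay), while eth1 (172.18.0.x) attaches to docker_gwbridge for outbound traffic. A quick pure-shell sanity check that the three addresses shown above really fall inside the declared subnet:

```shell
# Verify the container addresses shown above lie inside 172.70.1.0/24
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}
mask=$(( (0xFFFFFFFF << (32 - 24)) & 0xFFFFFFFF ))   # /24 netmask as an integer
base=$(( $(ip_to_int 172.70.1.0) & mask ))
inside=0
for addr in 172.70.1.7 172.70.1.8 172.70.1.6; do
  if [ $(( $(ip_to_int "$addr") & mask )) -eq "$base" ]; then
    inside=$(( inside + 1 ))
    echo "$addr is inside 172.70.1.0/24"
  fi
done
```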
3. Verify network connectivity
Enter the container running on swarm-manager:
[root@swarm-manager ~]# docker exec -it 2d /bin/sh
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:46:01:07
inet addr:172.70.1.7 Bcast:172.70.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1424 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
eth1 Link encap:Ethernet HWaddr 02:42:AC:12:00:03
inet addr:172.18.0.3 Bcast:172.18.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:16 errors:0 dropped:0 overruns:0 frame:0
TX packets:80 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:984 (984.0 B) TX bytes:5664 (5.5 KiB)
1.> Ping an external host
/ # ping -c 2 www.baidu.com
PING www.baidu.com (61.135.169.125): 56 data bytes
64 bytes from 61.135.169.125: seq=0 ttl=56 time=3.850 ms
64 bytes from 61.135.169.125: seq=1 ttl=56 time=6.953 ms
--- www.baidu.com ping statistics ---
2.> Ping the host machines
/ # ping -c 2 192.168.0.10
PING 192.168.0.10 (192.168.0.10): 56 data bytes
64 bytes from 192.168.0.10: seq=0 ttl=63 time=0.468 ms
64 bytes from 192.168.0.10: seq=1 ttl=63 time=0.238 ms
--- 192.168.0.10 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.238/0.353/0.468 ms
/ # ping -c 2 192.168.0.20
PING 192.168.0.20 (192.168.0.20): 56 data bytes
64 bytes from 192.168.0.20: seq=0 ttl=63 time=0.408 ms
64 bytes from 192.168.0.20: seq=1 ttl=63 time=0.462 ms
--- 192.168.0.20 ping statistics ---
/ # ping -c 2 192.168.0.30
PING 192.168.0.30 (192.168.0.30): 56 data bytes
64 bytes from 192.168.0.30: seq=0 ttl=64 time=0.084 ms
64 bytes from 192.168.0.30: seq=1 ttl=64 time=0.101 ms
--- 192.168.0.30 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.084/0.092/0.101 ms
3.> Ping the other containers
The container on node2:
/ # ping -c 2 172.70.1.6
PING 172.70.1.6 (172.70.1.6): 56 data bytes
64 bytes from 172.70.1.6: seq=0 ttl=64 time=3.604 ms
64 bytes from 172.70.1.6: seq=1 ttl=64 time=1.015 ms
--- 172.70.1.6 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 1.015/2.309/3.604 ms
The container on node1:
/ # ping -c 2 172.70.1.8
PING 172.70.1.8 (172.70.1.8): 56 data bytes
64 bytes from 172.70.1.8: seq=0 ttl=64 time=1.087 ms
64 bytes from 172.70.1.8: seq=1 ttl=64 time=0.984 ms
--- 172.70.1.8 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.984/1.035/1.087 ms
4.> From worker node1 (inside its container)
The container on node2:
/ # ping -c 2 172.70.1.6
PING 172.70.1.6 (172.70.1.6): 56 data bytes
64 bytes from 172.70.1.6: seq=0 ttl=64 time=1.451 ms
64 bytes from 172.70.1.6: seq=1 ttl=64 time=0.973 ms
--- 172.70.1.6 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.973/1.212/1.451 ms
The container on swarm-manager:
/ # ping -c 2 172.70.1.7
PING 172.70.1.7 (172.70.1.7): 56 data bytes
64 bytes from 172.70.1.7: seq=0 ttl=64 time=0.650 ms
64 bytes from 172.70.1.7: seq=1 ttl=64 time=0.693 ms
--- 172.70.1.7 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.650/0.671/0.693 ms
5.> From worker node2 (inside its container)
The container on swarm-manager:
/ # ping -c 2 172.70.1.7
PING 172.70.1.7 (172.70.1.7): 56 data bytes
64 bytes from 172.70.1.7: seq=0 ttl=64 time=3.170 ms
64 bytes from 172.70.1.7: seq=1 ttl=64 time=0.934 ms
--- 172.70.1.7 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.934/2.052/3.170 ms
The container on node1:
/ # ping -c 2 172.70.1.8
PING 172.70.1.8 (172.70.1.8): 56 data bytes
64 bytes from 172.70.1.8: seq=0 ttl=64 time=0.615 ms
64 bytes from 172.70.1.8: seq=1 ttl=64 time=0.642 ms
--- 172.70.1.8 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.615/0.628/0.642 ms
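The manual checks above can be scripted. The sketch below loops over the three overlay addresses and greps each ping summary for packet loss; `run_ping` is a hypothetical stand-in that echoes a canned summary so the loop runs anywhere, while on a real node its body would be `docker exec <container-id> ping -c 2 "$1"`:

```shell
# Hypothetical runner: simulates the ping summary so the loop runs without a swarm.
# On a live node, replace the body with:  docker exec <container-id> ping -c 2 "$1"
run_ping() { echo '2 packets transmitted, 2 packets received, 0% packet loss'; }

reachable=0
for target in 172.70.1.6 172.70.1.7 172.70.1.8; do
  # " 0% packet loss" (leading space) avoids matching "100% packet loss"
  if run_ping "$target" | grep -q ' 0% packet loss'; then
    reachable=$(( reachable + 1 ))
    echo "$target reachable"
  else
    echo "$target UNREACHABLE"
  fi
done
echo "$reachable/3 reachable"
```

Containers on the same overlay can also reach the service by name: Swarm's embedded DNS resolves `my-ovs` to the service's virtual IP.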
Original work. Reproduction is permitted, provided the article's original source, author information, and this notice are indicated in hyperlink form.