
25. Docker Basics: Docker Swarm Cluster

Author: 鸡蛋挂面 | Published 2022-08-26 10:58

一、Installing a Specific Docker Version Online with yum

Docker version: 18.09.6

01. Install the yum-utils tool set and the storage driver packages device-mapper-persistent-data and lvm2

yum -y install  yum-utils device-mapper-persistent-data lvm2

02. Configure the Docker yum repository (Aliyun mirror)

yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast

03. List the Docker versions available from the yum repository

yum list docker-ce --showduplicates | sort -r

04. Run the Docker install command

Without specifying a version, the latest version is installed by default:

yum -y install docker-ce

Install a specific version:

yum -y install docker-ce-18.09.6-3.el7

05. Create the Docker configuration file and configure a registry mirror (accelerator)

mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://jrr4lsdm.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker
systemctl enable docker.service

06. Check the Docker version information

docker -v
docker version
docker info

=======================================

Tip: keeping the RPM packages that yum downloads

01. Enable the yum cache
Just set keepcache to 1 in /etc/yum.conf:

vim /etc/yum.conf
···
keepcache=1
···
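
If you prefer not to edit the file interactively, the same change can be made with a one-liner. A small sketch, assuming the line currently reads keepcache=0:

# Flip keepcache from 0 to 1 in /etc/yum.conf (assumes keepcache=0 is present)
sed -i 's/^keepcache=0/keepcache=1/' /etc/yum.conf
grep keepcache /etc/yum.conf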

02. Download the packages with yum

yum -y install nginx

03. Find the RPMs that yum downloaded

find /var/cache/yum/x86_64/7/ -name "*.rpm"

04. Bundle the RPMs for transfer to a machine with no network access

mkdir -p ngx_rpm
find /var/cache/yum/x86_64/7/ -name "*.rpm" | xargs mv -t ngx_rpm
# or
find /var/cache/yum/x86_64/7/ -name "*.rpm" | xargs -I {} mv {} ngx_rpm
tar -zcvf ngx_rpm.tar.gz ngx_rpm

05. Install everything with rpm

tar xf ngx_rpm.tar.gz
cd ngx_rpm
rpm -ivh *.rpm

=======================================

二、Offline Installation of Docker from Binaries

Official download URLs:

# All versions
https://download.docker.com/linux/static/stable/x86_64/
# A specific version
https://download.docker.com/linux/static/stable/x86_64/docker-18.09.6.tgz

01. Download and extract

cd /usr/local/src/
wget https://download.docker.com/linux/static/stable/x86_64/docker-18.09.6.tgz
tar xf docker-18.09.6.tgz
mv docker/* /usr/local/bin/

02. Create the configuration file

mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://jrr4lsdm.mirror.aliyuncs.com"]
}
EOF

03. Write a docker.service unit file so the daemon can be managed with systemctl

cat >/usr/lib/systemd/system/docker.service <<EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service containerd.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/local/bin/dockerd 
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

04. Start the Docker service

systemctl daemon-reload
systemctl restart docker
systemctl enable docker.service
docker version

三、Enabling Docker Command Completion for a Binary Installation

01. Install bash-completion

yum -y install bash-completion

02. Copy the Docker command-completion script
Install the same Docker version via yum on another machine (using the method above), then copy
/usr/share/bash-completion/completions/docker from that machine
to the /usr/share/bash-completion/completions/ directory on the server where Docker was installed from binaries.

# Run on the machine where Docker was installed via yum
ll /usr/share/bash-completion/completions/docker
cd /usr/share/bash-completion/completions/
scp docker 192.168.10.111:`pwd`
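
If no yum-installed machine is at hand, the completion script can usually also be fetched straight from the docker/cli repository on GitHub; the path below is an assumption based on the repository layout of older releases, so verify it exists for your version:

# Assumed URL -- check that contrib/completion/bash/docker exists for your docker/cli branch
curl -o /usr/share/bash-completion/completions/docker \
  https://raw.githubusercontent.com/docker/cli/18.09/contrib/completion/bash/docker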

03. Refresh the current shell environment with source

# Run on the machine where Docker was installed from binaries
ll /usr/share/bash-completion/completions/docker
source /usr/share/bash-completion/completions/docker
source /usr/share/bash-completion/bash_completion

04. Test

# Run on the machine where Docker was installed from binaries
[root@hostname ~]# docker 
attach     container  events     image      kill       manifest   port       restart    search     stats      top        volume
build      context    exec       images     load       network    ps         rm         secret     stop       trust      wait
builder    cp         export     import     login      node       pull       rmi        service    swarm      unpause    
commit     create     help       info       logout     pause      push       run        stack      system     update     
config     diff       history    inspect    logs       plugin     rename     save       start      tag        version 

四、Installing docker-compose

Method 1

01. Run the following command to download the current stable release of Docker Compose:

sudo curl -L "https://github.com/docker/compose/releases/download/1.26.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

02. Make the binary executable:

sudo chmod +x /usr/local/bin/docker-compose

Note: if the docker-compose command fails after installation, check your PATH. You can also create a symbolic link in /usr/bin or any other directory on your PATH.

03. Create a symbolic link

sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose

04. Test the installation.

docker-compose --version
docker-compose version 1.26.0, build 1110ad01

Method 2

01. Install python-pip

# Prepare the EPEL repository
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum makecache
yum -y install epel-release
# python-pip is provided by the EPEL repository
yum -y install python-pip

02. Install docker-compose

# The default PyPI index is hosted overseas; use the Tsinghua mirror instead
# https://mirrors.tuna.tsinghua.edu.cn/help/pypi/
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple docker-compose

03. After the installation finishes, check the version to confirm it succeeded

docker-compose version
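
As an optional sanity check, a throwaway single-service stack can be brought up and torn down again. The file below is a minimal sketch; the directory, service name and port mapping are arbitrary choices, and it assumes the nginx:latest image can be pulled:

mkdir -p /tmp/compose-test && cd /tmp/compose-test
tee docker-compose.yml <<-'EOF'
version: "3"
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
EOF
docker-compose up -d
curl -I 127.0.0.1:8080
docker-compose down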

五、Docker Swarm Cluster

Prerequisites:
Docker is running normally on every host.
All hosts must be able to reach each other; on an internal network the simplest option is to disable the firewall or open all ports (alternatively, open just the Swarm ports as shown in the sketch below).
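
If you would rather keep firewalld running, opening only the ports that Swarm actually uses is normally enough. A minimal sketch, to be run on every node:

# Open the ports Docker Swarm uses instead of disabling firewalld entirely
firewall-cmd --permanent --add-port=2377/tcp   # cluster management traffic (manager)
firewall-cmd --permanent --add-port=7946/tcp   # node-to-node communication
firewall-cmd --permanent --add-port=7946/udp
firewall-cmd --permanent --add-port=4789/udp   # overlay (VXLAN) network traffic
firewall-cmd --reload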

Cluster plan:

IP              hostname  role
192.168.10.111  swarm01   manager
192.168.10.112  swarm02   worker
192.168.10.113  swarm03   worker

01. Set the hostnames

# 192.168.10.111
hostnamectl set-hostname swarm01
# 192.168.10.112
hostnamectl set-hostname swarm02
# 192.168.10.113
hostnamectl set-hostname swarm03

02. Update the local hosts file (on all three machines)

cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.111  swarm01
192.168.10.112  swarm02
192.168.10.113  swarm03

03. Create the Swarm cluster

# Run on 192.168.10.111; whichever machine runs the init becomes the manager
# Show the swarm-related commands
[root@swarm01 ~]# docker swarm --help

Usage:  docker swarm COMMAND

Manage Swarm

Commands:
  ca          Display and rotate the root CA
  init        Initialize a swarm
  join        Join a swarm as a node and/or manager
  join-token  Manage join tokens
  leave       Leave the swarm
  unlock      Unlock swarm
  unlock-key  Manage the unlock key
  update      Update the swarm

Run 'docker swarm COMMAND --help' for more information on a command.
# Initialize the swarm cluster
[root@swarm01 ~]# docker swarm init --advertise-addr 192.168.10.111:2377
Swarm initialized: current node (1emcr49ddzcpzokuhiebvs7w1) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-5b02yzi5vtd50b2jtms421fxmphnufxbuurzoxhbib3j6innnr-542t72yww3ar5yp5cr96l20n0 192.168.10.111:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

# 192.168.10.111 is the internal NIC IP; if the cluster communicates over a public network, initialize with the public IP
# Port 2377 can be omitted; 2377 is the default
# If you forget the join command, it can be retrieved later:
#   for a manager node: docker swarm join-token manager
#   for a worker node:  docker swarm join-token worker

04. Join the first worker node (swarm02) to the cluster
View the options of the join command:

[root@swarm02 ~]# docker swarm join  --help

Usage:  docker swarm join [OPTIONS] HOST:PORT

Join a swarm as a node and/or manager

Options:
      --advertise-addr string   Advertised address (format: <ip|interface>[:port])
      --availability string     Availability of the node ("active"|"pause"|"drain") (default "active")
      --data-path-addr string   Address or interface to use for data path traffic (format: <ip|interface>)
      --listen-addr node-addr   Listen address (format: <ip|interface>[:port]) (default 0.0.0.0:2377)
      --token string            Token for entry into the swarm

Join swarm02 to the cluster:

# Run on 192.168.10.112 (swarm02); this machine acts as the first worker node
[root@swarm02 ~]# docker swarm join --token SWMTKN-1-5b02yzi5vtd50b2jtms421fxmphnufxbuurzoxhbib3j6innnr-542t72yww3ar5yp5cr96l20n0 192.168.10.111:2377 --advertise-addr 192.168.10.112:2377
This node joined a swarm as a worker.
# --advertise-addr 192.168.10.112:2377
# This option can be omitted; by default the IP of the eth0 interface is used.
# If public IPs are used and the internal IPs cannot reach each other, pass this option with the public IP

Verify that the node joined successfully:

# This command can only be run on a manager node
[root@swarm01 ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
1emcr49ddzcpzokuhiebvs7w1 *   swarm01             Ready               Active              Leader              18.09.6
af35zmtkneqo4exefa7oe32bb     swarm02             Ready               Active                                  18.09.6
# swarm02 has successfully joined the cluster

05. Join the second worker node (swarm03) to the cluster

[root@swarm03 ~]# docker swarm join --token SWMTKN-1-5b02yzi5vtd50b2jtms421fxmphnufxbuurzoxhbib3j6innnr-542t72yww3ar5yp5cr96l20n0 192.168.10.111:2377 --advertise-addr 192.168.10.113:2377
This node joined a swarm as a worker.

06. View the cluster node information on the manager

[root@swarm01 ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
1emcr49ddzcpzokuhiebvs7w1 *   swarm01             Ready               Active              Leader              18.09.6
af35zmtkneqo4exefa7oe32bb     swarm02             Ready               Active                                  18.09.6
tj3gp94146kcfnmxsvzps1mqu     swarm03             Ready               Active                                  18.09.6
# The AVAILABILITY of a swarm node can be either active or drain:
# active: the node accepts tasks dispatched by the manager;
# drain: the node finishes its current tasks and no longer accepts new ones from the manager, which effectively takes the node offline (see the example below)
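
Draining and re-activating a node is done with docker node update on the manager. A short sketch, using swarm02 purely as an illustration:

# Run on the manager: take swarm02 out of scheduling (its tasks are rescheduled onto other nodes)
docker node update --availability drain swarm02
# Put it back into scheduling once maintenance is done
docker node update --availability active swarm02
# Confirm the AVAILABILITY column
docker node ls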

07. Create an overlay network for the Swarm cluster
The overlay network model is a virtualization technique that enables Docker containers to communicate across hosts.

[root@swarm01 ~]# docker network create -d overlay --attachable zkp_net
n6miub1vdku46vygc8zcx1ccw
[root@swarm01 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
d2cdbfefd3ff        bridge              bridge              local
43cafdf312c0        docker_gwbridge     bridge              local
2e2494e0f89c        host                host                local
m9hilgxl1v96        ingress             overlay             swarm
a858c0ed849d        none                null                local
n6miub1vdku4        zkp_net             overlay             swarm
# The newly created network `zkp_net` is listed

Verify that the new zkp_net network allows normal communication:

# swarm01
[root@swarm01 ~]# docker run -it -d --name centos01 --network zkp_net  centos:latest
# swarm02
[root@swarm02 ~]# docker run -it -d --name centos02 --network zkp_net  centos:latest
# swarm03
[root@swarm03 ~]# docker run -it -d --name centos03 --network zkp_net  centos:latest
# Inspect the network details on swarm01
[root@swarm01 ~]# docker network inspect zkp_net
···
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4097"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "78835cc56571",
                "IP": "192.168.10.111"
            },
            {
                "Name": "e120d43c0d0d",
                "IP": "192.168.10.112"
            },
            {
                "Name": "9bffe96d070b",
                "IP": "192.168.10.113"
            }
        ]
    }
]

Test from inside a container:

[root@ea786b316c03 /]# ping centos02
PING centos02 (10.0.0.8) 56(84) bytes of data.
64 bytes from centos02.zkp_net (10.0.0.8): icmp_seq=1 ttl=64 time=0.551 ms
64 bytes from centos02.zkp_net (10.0.0.8): icmp_seq=2 ttl=64 time=0.564 ms
64 bytes from centos02.zkp_net (10.0.0.8): icmp_seq=3 ttl=64 time=0.597 ms
^C
--- centos02 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.551/0.570/0.597/0.033 ms
[root@ea786b316c03 /]# ping centos03
PING centos03 (10.0.0.10) 56(84) bytes of data.
64 bytes from centos03.zkp_net (10.0.0.10): icmp_seq=1 ttl=64 time=0.909 ms
64 bytes from centos03.zkp_net (10.0.0.10): icmp_seq=2 ttl=64 time=1.42 ms
64 bytes from centos03.zkp_net (10.0.0.10): icmp_seq=3 ttl=64 time=0.503 ms
64 bytes from centos03.zkp_net (10.0.0.10): icmp_seq=4 ttl=64 time=0.529 ms
64 bytes from centos03.zkp_net (10.0.0.10): icmp_seq=5 ttl=64 time=0.617 ms
^C
--- centos03 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4010ms
rtt min/avg/max/mdev = 0.503/0.795/1.421/0.346 ms
[root@ea786b316c03 /]# ping www.baidu.com
PING www.a.shifen.com (14.215.177.39) 56(84) bytes of data.
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=1 ttl=127 time=22.5 ms
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=2 ttl=127 time=9.36 ms
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=3 ttl=127 time=39.7 ms
^C
--- www.a.shifen.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2005ms
rtt min/avg/max/mdev = 9.364/23.849/39.698/12.422 ms
[root@ea786b316c03 /]# 

The network metadata records the IPs of all three servers, the three containers can ping each other, and they can reach the Internet, which shows that the cluster network was set up successfully.

08. Swarm cluster test
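
A simple way to exercise the cluster is to create a replicated service on the manager and check where its tasks are scheduled. The following is a minimal sketch; the service name web_test, the nginx image and the published port are arbitrary choices:

# Run on the manager (swarm01)
docker service create --name web_test --replicas 3 -p 8000:80 --network zkp_net nginx:latest
# Check the service and where its tasks landed
docker service ls
docker service ps web_test
# Thanks to the routing mesh, the published port answers on any node
curl -I 192.168.10.112:8000
# Clean up
docker service rm web_test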
