
6. A Preview of Docker Container Orchestration

Author: Suny____ | Published 2020-03-21 16:14

1. Multi-Container Deployment on a Single Host

In the previous chapters we got a first taste of Docker: we know how to deploy a project and how to get multiple containers talking to each other. But you have probably also felt how tedious deploying containers can be. Every container takes quite a few commands to bring up; once there are several of them, it is hard to remember which flags each one was started with, and the whole process eats up time. It would be far more convenient if we could maintain multiple containers the way we maintain a configuration file.

Docker Compose, covered next, is exactly what frees our hands here: you describe the deployment in a YAML file, and with just a few settings you get multi-container configuration, communication, and maintenance.

  • Docker Compose

    • Installation

      • On Mac or Windows, Docker Compose is already bundled with the Docker installation. On Linux it has to be installed manually.

      • Official docs: installing Docker Compose on Linux

        # Step 1: download the binary
        sudo curl -L "https://github.com/docker/compose/releases/download/1.25.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
        
        # Step 2: make it executable
        sudo chmod +x /usr/local/bin/docker-compose
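
      • After installing, verify that the binary runs (a quick check; the version string you see depends on the release you downloaded)

        docker-compose --version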
        
    • Example

      • Incrementing a page hit counter (the official example)

      • Create a directory

        $ mkdir composetest
        $ cd composetest
        
      • Create the app.py file

        import time
        
        import redis
        from flask import Flask
        
        app = Flask(__name__)
        cache = redis.Redis(host='redis', port=6379)  # 'redis' is the service name defined in docker-compose.yml
        
        
        def get_hit_count():
            retries = 5
            while True:
                try:
                    return cache.incr('hits')
                except redis.exceptions.ConnectionError as exc:
                    if retries == 0:
                        raise exc
                    retries -= 1
                    time.sleep(0.5)
        
        @app.route('/')
        def hello():
            count = get_hit_count()
            return 'Hello World! I have been seen {} times.\n'.format(count)
        
      • Create the requirements.txt file

        flask
        redis
        
      • Write the Dockerfile

        FROM python:3.7-alpine
        WORKDIR /code
        ENV FLASK_APP app.py
        ENV FLASK_RUN_HOST 0.0.0.0
        RUN apk add --no-cache gcc musl-dev linux-headers
        COPY requirements.txt requirements.txt
        RUN pip install -r requirements.txt
        COPY . .
        CMD ["flask", "run"]
        
      • Write docker-compose.yml

        version: '3' # version of the compose file format
        services:
          web:         # define a service named web
            build: . # build from the local Dockerfile
            ports:     # port mapping
              - "5000:5000"
          redis:   # define a service named redis
            image: "redis:alpine" # pull the image from a remote registry
        
      • Run docker-compose

        • By default it looks for docker-compose.yml in the current directory
        • To use a different file name, pass -f before the subcommand, e.g. docker-compose -f xxx.yml up
        [root@10 composetest]# docker-compose up -d
        
      • Test it

        Every visit to the page calls Redis INCR to increment the hit counter.
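
        A minimal way to exercise it, assuming the stack is running locally with the port mapping from the yml above (the count in the response depends on how many requests have already been made):

        $ curl http://localhost:5000
        Hello World! I have been seen 1 times.
        $ curl http://localhost:5000
        Hello World! I have been seen 2 times.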

  • docker-compose.yml parameters (the sketch after this list shows the rough docker run equivalents)

    • version: '3'

      • The version of the compose file format
    • services

      • Each service corresponds to a container
    • networks

      • Equivalent to docker network create app-net
    • volumes

      • Equivalent to -v v1:/var/lib/mysql
    • image

      • Which image to use; it is pulled from a remote registry
    • build

      • Build the image from a local Dockerfile
    • ports

      • Equivalent to -p 8080:8080
    • environment

      • Equivalent to -e
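    • As a rough illustration of how these keys line up with plain docker commands, here is a sketch for a hypothetical db service (the image, names, and values are made up for the example):

      # networks: app-net            ->  docker network create app-net
      docker network create app-net
      # a named volume v1            ->  docker volume create v1
      docker volume create v1
      # image / ports / environment / volumes / networks on one service
      # roughly correspond to the flags of a single docker run:
      docker run -d --name db -p 3306:3306 -e MYSQL_ROOT_PASSWORD=example \
        -v v1:/var/lib/mysql --network app-net mysql:5.7
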
  • Common docker-compose operations (a typical session follows this list)

    • Check the version
      • docker-compose version
    • Create services from the yml
      • docker-compose up
      • To point at a specific yaml file: docker-compose -f xxx.yaml up
      • Run in the background: docker-compose up -d
    • List the services that started successfully
      • docker-compose ps
      • docker ps also works
    • List images
      • docker-compose images
    • Stop/start services
      • docker-compose stop/start
    • Remove services (this also removes the network compose created; add -v to remove named volumes as well)
      • docker-compose down
    • Open a shell inside a service
      • docker-compose exec redis sh
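    • A typical round trip with these commands might look like this (a sketch; service names follow the example above):

      docker-compose up -d          # create and start everything in the background
      docker-compose ps             # list the running services
      docker-compose logs web       # show the logs of the web service
      docker-compose stop           # stop the containers without removing them
      docker-compose start          # start them again
      docker-compose down           # remove the containers and the default network
      docker-compose down -v        # additionally remove named volumes declared in the file
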
  • Scaling with scale

    • Prepare the docker-compose.yml file; note that the ports section is removed to avoid port conflicts between replicas

      version: '3'
      services:
        web:
          build: .
          networks:
            - app-net
      
        redis:
          image: "redis:alpine"
          networks:
            - app-net
      
      networks:
        app-net:
          driver: bridge
      
    • Create the service containers

      • docker-compose up -d
    • Scale the web service up or down

      • docker-compose up --scale web=5 -d
      • This raises the number of web containers to 5
    • List the containers

      • docker-compose ps

2. Multi-Container Deployment Across Multiple Hosts

Docker Compose can only manage containers on a single host; as soon as multiple hosts are involved it is out of its depth, and that is where another technology, Docker Swarm, comes in.

Docker Swarm can manage, connect, and orchestrate containers across multiple machines. We will not go into much depth here either, because the mainstream choice for container orchestration today is Kubernetes!

  • Docker Swarm

    • Installation

      • Docker Swarm is built into every Docker installation on any platform, so there is nothing extra to install

      • We do, however, need several machines to test a multi-host deployment; here we use virtual machines instead

      • Use a Vagrantfile to create multiple virtual machines

        boxes = [
            {
                :name => "manager-node",
                :eth1 => "192.168.50.111",
                :mem => "1024",
                :cpu => "1"
            },
            {
                :name => "worker01-node",
                :eth1 => "192.168.50.112",
                :mem => "1024",
                :cpu => "1"
            },
            {
                :name => "worker02-node",
                :eth1 => "192.168.50.113",
                :mem => "1024",
                :cpu => "1"
            }
        ]
        
        Vagrant.configure(2) do |config|
          config.vm.box = "centos/7"
           boxes.each do |opts|
              config.vm.define opts[:name] do |config|
                config.vm.hostname = opts[:name]
                config.vm.provider "vmware_fusion" do |v|
                  v.vmx["memsize"] = opts[:mem]
                  v.vmx["numvcpus"] = opts[:cpu]
                end
        
                config.vm.provider "virtualbox" do |v|
                  v.customize ["modifyvm", :id, "--memory", opts[:mem]]
                v.customize ["modifyvm", :id, "--cpus", opts[:cpu]]
                v.customize ["modifyvm", :id, "--name", opts[:name]]
                end
        
                config.vm.network :public_network, ip: opts[:eth1]
              end
          end
        end
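
      • With the Vagrantfile in place, bring all three virtual machines up in one go (the first run also downloads the centos/7 box, which can take a while)

        vagrant up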
        
      • When several nodes are built at once, enter a specific VM with vagrant ssh [node name]

        vagrant ssh [manager-node / worker01-node / worker02-node]
        

        The initialization steps and the Xshell connection setup are the same as in chapter one, so they are not repeated here

      • Install Docker in each virtual machine (see the sketch below)
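
        One way to do this on the CentOS 7 VMs is Docker's convenience script (a sketch; run it inside each of the three machines):

        curl -fsSL https://get.docker.com | sh
        sudo systemctl enable docker
        sudo systemctl start docker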

    • Set up the Swarm cluster

      • Initialize the cluster on the manager node

        [root@10 ~]# docker swarm init --advertise-addr=192.168.50.111
        Swarm initialized: current node (rf5m8pyx7w7i2tutxumspn3ph) is now a manager.
        
        To add a worker to this swarm, run the following command:
        
        # Run this command on the worker nodes to join them to the cluster
            docker swarm join --token SWMTKN-1-3wwhsp6boxd5damm9f549v21eshj5cdkuu97xmz6vo1pkvuhrr-6wtpwh1ewl3ib3ryr9j5dayzc 192.168.50.111:2377
        
        To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
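
        If this output gets lost, the worker join command can be printed again at any time from a manager node:

        docker swarm join-token worker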
        
        
      • Join the worker nodes to the cluster

        [root@10 ~]# docker swarm join --token SWMTKN-1-3wwhsp6boxd5damm9f549v21eshj5cdkuu97xmz6vo1pkvuhrr-6wtpwh1ewl3ib3ryr9j5dayzc 192.168.50.111:2377
        
        This node joined a swarm as a worker.
        
      • Check the cluster status from the manager node

        The hostname has to be set manually; by default the NIC's IP address is shown instead, e.g. hostnamectl set-hostname manager-node

        [root@manager-node ~]# docker node ls
        ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
        rf5m8pyx7w7i2tutxumspn3ph *   manager-node        Ready               Active              Leader              19.03.7
        ncgr8mqhhz47itjqcyhamtn62     worker01-node       Ready               Active                                  19.03.7
        
      • Changing a node's role

        A worker can be promoted to manager to keep the manager role highly available. Promotion only makes the worker eligible to act as a manager (its status becomes Reachable); worker nodes that have not been promoted cannot take over as manager.

        [root@manager-node ~]# docker node promote ncgr8mqhhz47itjqcyhamtn62
        Node ncgr8mqhhz47itjqcyhamtn62 promoted to a manager in the swarm.
        
        # The worker node's MANAGER STATUS is now Reachable, meaning it is eligible to become Leader
        [root@manager-node ~]# docker node ls
        ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
        ncgr8mqhhz47itjqcyhamtn62     10.0.2.15           Ready               Active              Reachable           19.03.7
        rf5m8pyx7w7i2tutxumspn3ph *   10.0.2.15           Ready               Active              Leader              19.03.7
        
        # Demote it back to a worker with demote
        [root@manager-node ~]# docker node demote ncgr8mqhhz47itjqcyhamtn62
        Manager ncgr8mqhhz47itjqcyhamtn62 demoted in the swarm.
            
        [root@manager-node ~]# docker node ls
        ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
        rf5m8pyx7w7i2tutxumspn3ph *   manager-node        Ready               Active              Leader              19.03.7
        ncgr8mqhhz47itjqcyhamtn62     worker01-node       Ready               Active                                  19.03.7
        
  • Common Docker Swarm operations

    • Create a Tomcat service

      • docker service create --name my-tomcat tomcat

        [root@manager-node ~]# docker service create --name my-tomcat tomcat
        h9xki026ivexy28r3242izu5g
        overall progress: 1 out of 1 tasks
        1/1: running   [==================================================>]
        # After creation, Swarm spends 5 seconds verifying that the tasks are stable
        verify: Waiting 5 seconds to verify that tasks are stable...
        # Once the 5 seconds are up, this changes to Service converged
        verify: Service converged
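
        If the Tomcat port should be reachable from outside the cluster, the service can instead be created with a published port (an optional variant; 8080 is Tomcat's default HTTP port):

        docker service create --name my-tomcat -p 8080:8080 tomcat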
        
    • Remove the service

      • docker service rm my-tomcat
    • List the services in the current Swarm

      • docker service ls

        [root@manager-node ~]# docker service ls
        ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
        zdf8vgmay2hx        my-tomcat           replicated          1/1                 tomcat:latest
        
    • View a service's startup logs

      • docker service logs my-tomcat
    • View a service's details

      • docker service inspect my-tomcat
    • See which node my-tomcat is running on

      • docker service ps my-tomcat

        # You can see the container was created on the manager node
        [root@manager-node ~]# docker service ps my-tomcat
        ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
        s83wnalw3xx0        my-tomcat.1         tomcat:latest       manager-node        Running             Running 3 minutes ago
        
    • Scale a service horizontally

      [root@manager-node ~]# docker service scale my-tomcat=3
      my-tomcat scaled to 3
      overall progress: 3 out of 3 tasks
      1/3: running   [==================================================>]
      2/3: running   [==================================================>]
      3/3: running   [==================================================>]
      verify: Service converged
      
      # Notice that the my-tomcat tasks are deployed across different hosts
      [root@manager-node ~]# docker service ps my-tomcat
      ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
      s83wnalw3xx0        my-tomcat.1         tomcat:latest       manager-node        Running             Running 7 minutes ago
      qs8qfoo7ge0n        my-tomcat.2         tomcat:latest       worker01-node       Running             Running 28 seconds ago
      4m661e5msju9        my-tomcat.3         tomcat:latest       worker01-node       Running             Running 28 seconds ago
      

      If you run docker ps instead of docker service ps, you will notice that the container name differs from the service name. That is normal; just take care not to mix the two up.

      [root@worker01-node ~]# docker ps
      CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
      59ea2502ede4        tomcat:latest       "catalina.sh run"   10 minutes ago      Up 10 minutes       8080/tcp            my-tomcat.1.u6o4mz4tj3969a1p3mquagxok
      
    • If a my-tomcat container on some node dies, Swarm automatically starts a replacement to keep the desired replica count

      [root@worker01-node ~]# docker rm -f 0e9b5d804c25
      
      # The manager node is now running 2 of the containers
      [root@manager-node ~]# docker service ps my-tomcat
      ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR                         PORTS
      s83wnalw3xx0        my-tomcat.1         tomcat:latest       manager-node        Running             Running 10 minutes ago
      1qbunkr1fs5j        my-tomcat.2         tomcat:latest       manager-node        Running             Running 6 seconds ago
      qs8qfoo7ge0n         \_ my-tomcat.2     tomcat:latest       worker01-node       Shutdown            Failed 12 seconds ago    "task: non-zero exit (137)"
      4m661e5msju9        my-tomcat.3         tomcat:latest       worker01-node       Running             Running 3 minutes ago
      
      
  • Multi-host communication: the overlay network

    • After containers are created with Swarm, an overlay network is generated automatically

      [root@manager-node ~]# docker network ls
      NETWORK ID          NAME                  DRIVER              SCOPE
      f707fc44e537        bridge                bridge              local
      d2f72fb35175        host                  host                local
      w5p8g8c4z5sk        ingress               overlay             swarm
      a2f09c4b5e89        none                  null                local
      
      
    • Create an overlay network for multi-host communication within the swarm

      # -d specifies the network driver
      [root@manager-node ~]# docker network create -d overlay my-overlay-net
      wekfq4m34mc5welq5d7xvc6d0
      
      # The worker nodes cannot see it yet; a host only sees the network once a container on it is scheduled there
      [root@worker01-node ~]# docker network ls
      NETWORK ID          NAME                DRIVER              SCOPE
      c7f42990d130        bridge              bridge              local
      81a19f795c7e        host                host                local
      w5p8g8c4z5sk        ingress             overlay             swarm
      7356babbeab8        none                null                local
      
      [root@manager-node ~]# docker service create --name my-tomcat --network my-overlay-net tomcat
      gl90f1frfeae9cutnz0uc7i72
      overall progress: 1 out of 1 tasks
      1/1: running   [==================================================>]
      verify: Service converged
      
      # Scale up
      [root@10 ~]# docker service scale my-tomcat=3
      my-tomcat scaled to 3
      overall progress: 3 out of 3 tasks
      1/3: running   [==================================================>]
      2/3: running   [==================================================>]
      3/3: running   [==================================================>]
      verify: Service converged
      
      # Now the worker node can see the overlay network that was created on the manager node
      [root@worker01-node ~]# docker network ls
      NETWORK ID          NAME                DRIVER              SCOPE
      c7f42990d130        bridge              bridge              local
      81a19f795c7e        host                host                local
      w5p8g8c4z5sk        ingress             overlay             swarm
      wekfq4m34mc5        my-overlay-net      overlay             swarm
      7356babbeab8        none                null                local
    

    The difference between the overlay and bridge drivers is that overlay handles communication between containers across multiple hosts, while bridge only handles communication between containers on a single host.

    With containers created through Docker Swarm, no matter which host's IP address you access, the request ends up reaching a correct container. And when containers communicate with each other by service name, the traffic is load balanced automatically.
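
    A quick way to observe this routing mesh is to publish a port on a service and then hit each node in turn (a sketch; the IPs are the Vagrant VM addresses used above, and it assumes the service was created with -p 8080:8080 as in the variant shown earlier):

      curl http://192.168.50.111:8080
      curl http://192.168.50.112:8080
      curl http://192.168.50.113:8080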

  • docker stack

    docker stack plays the same role as docker-compose: compose manages containers on a single host, while stack manages services across multiple hosts.

    • Create a service.yml file

      version: '3'
      
      services:
      
        wordpress:
          image: wordpress
          ports:
            - 8080:80
          environment:
            WORDPRESS_DB_HOST: db
            WORDPRESS_DB_USER: exampleuser
            WORDPRESS_DB_PASSWORD: examplepass
            WORDPRESS_DB_NAME: exampledb
          networks:
            - ol-net
          volumes:
            - wordpress:/var/www/html
          deploy:
            mode: replicated
            replicas: 3
            restart_policy:
              condition: on-failure
              delay: 5s
              max_attempts: 3
            update_config:
              parallelism: 1
              delay: 10s
      
        db:
          image: mysql:5.7
          environment:
            MYSQL_DATABASE: exampledb
            MYSQL_USER: exampleuser
            MYSQL_PASSWORD: examplepass
            MYSQL_RANDOM_ROOT_PASSWORD: '1'
          volumes:
            - db:/var/lib/mysql
          networks:
            - ol-net
          deploy:
            mode: global
            placement:
              constraints:
                - node.role == manager
      
      volumes:
        wordpress:
        db:
      
      networks:
        ol-net:
          driver: overlay
      
    • Create the services from service.yml

      # -c specifies the config file
      [root@manager-node swarmtest]# docker stack deploy -c service.yml my-service
      Creating network my-service_ol-net
      Creating service my-service_db
      Creating service my-service_wordpress
      
    • Test

      • Open [any node's ip]:8080 in a browser
  > This confirms that no matter which node's IP in the cluster you access, Docker automatically routes the request to a correct container
  • Common docker stack operations

    • Create the services from service.yml

      • docker stack deploy -c service.yml my-service

    • List the stacks

      • docker stack ls

        [root@manager-node swarmtest]# docker stack ls
        NAME                SERVICES            ORCHESTRATOR
        my-service          2                   Swarm
        
    • List the services in a stack

      • docker stack services my-service

        [root@manager-node swarmtest]# docker stack services my-service
        ID                  NAME                   MODE                REPLICAS            IMAGE               PORTS
        6h3ipjnx32sc        my-service_wordpress   replicated          0/3                 wordpress:latest    *:8080->80/tcp
        cwo60mztuoou        my-service_db          global              1/1                 mysql:5.7
        
    • View a specific service's details

      • docker service inspect my-service_db
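    • Remove the whole stack when you are done (this deletes the stack's services and networks; named volumes are left in place)

      • docker stack rm my-service
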
We will not cover Compose or Swarm in any more depth here, because they are rarely used in real-world scenarios these days; Kubernetes is the mainstream choice. This is just a brief introduction to make it easier to study Kubernetes later.
That wraps up all the study material for the Docker chapters! Notes will be added later as we move on to Kubernetes.
