MongoDB Cluster Setup: Sharding + Replication + Elections

Author: Anoyi | Published 2017-12-22 16:25

    ❤️ Prerequisites

    Three servers forming a Docker Swarm cluster: one Manager and two Workers.

    • Docker version: 17.09
    • MongoDB version: 3.6

    ❤️ MongoDB Cluster Architecture

    Architecture diagram

    High-resolution version: https://www.processon.com/view/link/5a3c7386e4b0bf89b8530376

    ❤️ Building the Cluster

    1. [Manager] Create the cluster network

    docker network create -d overlay --attachable mongo
    

    --attachable allows standalone containers to attach to this network
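
    To verify that the network exists and is attachable, you can inspect it (a quick sanity check, assuming the command above succeeded):

```shell
# Confirm the "mongo" overlay network exists and is attachable
docker network inspect mongo --format 'driver={{.Driver}} scope={{.Scope}} attachable={{.Attachable}}'
```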

    2. Create nine data services, three config services, and one global-mode mongos service

    2.1 [All machines] Create the data directories

    mkdir -p /root/mongo/config /root/mongo/shard1 /root/mongo/shard2 /root/mongo/shard3
    

    2.2 [Manager] Create stack.yml

    version: '3.3'
    services:
      mongors1n1:
        # Docker China registry mirror (faster pulls from mainland China)
        image: registry.docker-cn.com/library/mongo
        command: mongod --shardsvr --replSet shard1 --dbpath /data/db --port 27017
        networks:
          - mongo
        volumes:
          - /etc/localtime:/etc/localtime
          - /root/mongo/shard1:/data/db
        deploy:
          restart_policy:
            condition: on-failure
          replicas: 1
          placement:
            # pin this service to the manager node
            constraints:
              - node.hostname==manager
      mongors2n1:
        image: registry.docker-cn.com/library/mongo
        command: mongod --shardsvr --replSet shard2 --dbpath /data/db --port 27017
        networks:
          - mongo
        volumes:
          - /etc/localtime:/etc/localtime
          - /root/mongo/shard2:/data/db
        deploy:
          restart_policy:
            condition: on-failure
          replicas: 1
          placement:
            constraints:
              - node.hostname==manager
      mongors3n1:
        image: registry.docker-cn.com/library/mongo
        command: mongod --shardsvr --replSet shard3 --dbpath /data/db --port 27017
        networks:
          - mongo
        volumes:
          - /etc/localtime:/etc/localtime
          - /root/mongo/shard3:/data/db
        deploy:
          restart_policy:
            condition: on-failure
          replicas: 1
          placement:
            constraints:
              - node.hostname==manager
      mongors1n2:
        image: registry.docker-cn.com/library/mongo
        command: mongod --shardsvr --replSet shard1 --dbpath /data/db --port 27017
        networks:
          - mongo
        volumes:
          - /etc/localtime:/etc/localtime
          - /root/mongo/shard1:/data/db
        deploy:
          restart_policy:
            condition: on-failure
          replicas: 1
          placement:
            constraints:
              - node.hostname==worker1
      mongors2n2:
        image: registry.docker-cn.com/library/mongo
        command: mongod --shardsvr --replSet shard2 --dbpath /data/db --port 27017
        networks:
          - mongo
        volumes:
          - /etc/localtime:/etc/localtime
          - /root/mongo/shard2:/data/db
        deploy:
          restart_policy:
            condition: on-failure
          replicas: 1
          placement:
            constraints:
              - node.hostname==worker1
      mongors3n2:
        image: registry.docker-cn.com/library/mongo
        command: mongod --shardsvr --replSet shard3 --dbpath /data/db --port 27017
        networks:
          - mongo
        volumes:
          - /etc/localtime:/etc/localtime
          - /root/mongo/shard3:/data/db
        deploy:
          restart_policy:
            condition: on-failure
          replicas: 1
          placement:
            constraints:
              - node.hostname==worker1
      mongors1n3:
        image: registry.docker-cn.com/library/mongo
        command: mongod --shardsvr --replSet shard1 --dbpath /data/db --port 27017
        networks:
          - mongo
        volumes:
          - /etc/localtime:/etc/localtime
          - /root/mongo/shard1:/data/db
        deploy:
          restart_policy:
            condition: on-failure
          replicas: 1
          placement:
            constraints:
              - node.hostname==worker2
      mongors2n3:
        image: registry.docker-cn.com/library/mongo
        command: mongod --shardsvr --replSet shard2 --dbpath /data/db --port 27017
        networks:
          - mongo
        volumes:
          - /etc/localtime:/etc/localtime
          - /root/mongo/shard2:/data/db
        deploy:
          restart_policy:
            condition: on-failure
          replicas: 1
          placement:
            constraints:
              - node.hostname==worker2
      mongors3n3:
        image: registry.docker-cn.com/library/mongo
        command: mongod --shardsvr --replSet shard3 --dbpath /data/db --port 27017
        networks:
          - mongo
        volumes:
          - /etc/localtime:/etc/localtime
          - /root/mongo/shard3:/data/db
        deploy:
          restart_policy:
            condition: on-failure
          replicas: 1
          placement:
            constraints:
              - node.hostname==worker2
      cfg1:
        image: registry.docker-cn.com/library/mongo
        command: mongod --configsvr --replSet cfgrs --smallfiles --dbpath /data/db --port 27017
        networks:
          - mongo
        volumes:
          - /etc/localtime:/etc/localtime
          - /root/mongo/config:/data/db
        deploy:
          restart_policy:
            condition: on-failure
          replicas: 1
          placement:
            constraints:
              - node.hostname==manager
      cfg2:
        image: registry.docker-cn.com/library/mongo
        command: mongod --configsvr --replSet cfgrs --smallfiles --dbpath /data/db --port 27017
        networks:
          - mongo
        volumes:
          - /etc/localtime:/etc/localtime
          - /root/mongo/config:/data/db
        deploy:
          restart_policy:
            condition: on-failure
          replicas: 1
          placement:
            constraints:
              - node.hostname==worker1
      cfg3:
        image: registry.docker-cn.com/library/mongo
        command: mongod --configsvr --replSet cfgrs --smallfiles --dbpath /data/db --port 27017
        networks:
          - mongo
        volumes:
          - /etc/localtime:/etc/localtime
          - /root/mongo/config:/data/db
        deploy:
          restart_policy:
            condition: on-failure
          replicas: 1
          placement:
            constraints:
              - node.hostname==worker2
      mongos:
        image: registry.docker-cn.com/library/mongo
        # mongo 3.6 binds to 127.0.0.1 by default; bind 0.0.0.0 so that
        # other containers and external hosts can connect
        command: mongos --configdb cfgrs/cfg1:27017,cfg2:27017,cfg3:27017 --bind_ip 0.0.0.0 --port 27017
        networks:
          - mongo
        # publish port 27017 on the host
        ports:
          - 27017:27017
        volumes:
          - /etc/localtime:/etc/localtime
        depends_on:
          - cfg1
          - cfg2
          - cfg3
        deploy:
          restart_policy:
            condition: on-failure
          # start one container on every node in the swarm
          mode: global
    networks:
      mongo:
        external: true
    

    2.3 [Manager] Start the services

    docker stack deploy -c stack.yml mongo
    

    2.4 [Manager] Check that the services started

    docker service ls
    

    If everything started correctly, you should see something like this:

    [docker@manager ~]# docker service ls
    ID                  NAME                MODE                REPLICAS            IMAGE                                         PORTS
    z1l5zlghlfbi        mongo_cfg1          replicated          1/1                 registry.docker-cn.com/library/mongo:latest
    lg9vbods29th        mongo_cfg2          replicated          1/1                 registry.docker-cn.com/library/mongo:latest
    i6d6zwxsq0ss        mongo_cfg3          replicated          1/1                 registry.docker-cn.com/library/mongo:latest
    o0lfdavd8kpj        mongo_mongors1n1    replicated          1/1                 registry.docker-cn.com/library/mongo:latest
    n85yeyod7mlu        mongo_mongors1n2    replicated          1/1                 registry.docker-cn.com/library/mongo:latest
    cwurdqng9tdk        mongo_mongors1n3    replicated          1/1                 registry.docker-cn.com/library/mongo:latest
    vu6al5kys28u        mongo_mongors2n1    replicated          1/1                 registry.docker-cn.com/library/mongo:latest
    xrjiep0vrf0w        mongo_mongors2n2    replicated          1/1                 registry.docker-cn.com/library/mongo:latest
    qqzifwcejjyk        mongo_mongors2n3    replicated          1/1                 registry.docker-cn.com/library/mongo:latest
    tddgw8hygv1b        mongo_mongors3n1    replicated          1/1                 registry.docker-cn.com/library/mongo:latest
    qrb6fjty03mw        mongo_mongors3n2    replicated          1/1                 registry.docker-cn.com/library/mongo:latest
    m8ikdzjssmhn        mongo_mongors3n3    replicated          1/1                 registry.docker-cn.com/library/mongo:latest
    mnnlm49b7kyb        mongo_mongos        global              3/3                 registry.docker-cn.com/library/mongo:latest   *:27017->27017/tcp
    

    3. Initialize the cluster

    3.1 [Manager] Initialize the config server replica set

    docker exec -it $(docker ps | grep "cfg1" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id: \"cfgrs\",configsvr: true, members: [{ _id : 0, host : \"cfg1\" },{ _id : 1, host : \"cfg2\" }, { _id : 2, host : \"cfg3\" }]})' | mongo"
    
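    Before moving on, it is worth confirming that the config replica set actually initialized. A minimal check, reusing the same container-lookup pattern as the command above:

```shell
# Print each config server member and its replica set state (PRIMARY/SECONDARY)
docker exec -it $(docker ps | grep "cfg1" | awk '{ print $1 }') \
  bash -c "echo 'rs.status().members.forEach(function(m){ print(m.name, m.stateStr) })' | mongo"
```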

    3.2 [Manager] Initialize the three shard replica sets

    docker exec -it $(docker ps | grep "mongors1n1" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : \"shard1\", members: [{ _id : 0, host : \"mongors1n1\" },{ _id : 1, host : \"mongors1n2\" },{ _id : 2, host : \"mongors1n3\", arbiterOnly: true }]})' | mongo"
    
    docker exec -it $(docker ps | grep "mongors2n1" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : \"shard2\", members: [{ _id : 0, host : \"mongors2n1\" },{ _id : 1, host : \"mongors2n2\" },{ _id : 2, host : \"mongors2n3\", arbiterOnly: true }]})' | mongo"
    
    docker exec -it $(docker ps | grep "mongors3n1" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : \"shard3\", members: [{ _id : 0, host : \"mongors3n1\" },{ _id : 1, host : \"mongors3n2\" },{ _id : 2, host : \"mongors3n3\", arbiterOnly: true }]})' | mongo"
    
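    Once the three replica sets are initiated, an election runs in each one and a PRIMARY is chosen. A sketch for checking the election outcome across all three shards (same lookup pattern as above):

```shell
# For each shard, list the members with their states: expect one PRIMARY,
# one SECONDARY, and one ARBITER per replica set
for rs in mongors1n1 mongors2n1 mongors3n1; do
  docker exec -it $(docker ps | grep "$rs" | awk '{ print $1 }') \
    bash -c "echo 'rs.status().members.forEach(function(m){ print(m.name, m.stateStr) })' | mongo"
done
```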

    3.3 [Manager] Add the three replica sets to mongos as shards

    docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard1/mongors1n1:27017,mongors1n2:27017,mongors1n3:27017\")' | mongo "
    
    docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard2/mongors2n1:27017,mongors2n2:27017,mongors2n3:27017\")' | mongo "
    
    docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard3/mongors3n1:27017,mongors3n2:27017,mongors3n3:27017\")' | mongo "
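    Note that adding shards alone does not distribute data: sharding must also be enabled per database and per collection, otherwise all writes land on a single shard. A hedged example (the `test` database, `demo` collection, and hashed `_id` shard key are illustrative choices, not part of the original setup):

```shell
# Enable sharding for a database, shard one collection by hashed _id,
# then print the cluster's shard status
docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo 'sh.enableSharding(\"test\"); sh.shardCollection(\"test.demo\", { _id: \"hashed\" }); sh.status();' | mongo"
```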
    

    4. Connect to the cluster

    4.1 Internal: containers on the mongo network connect via mongos:27017

    4.2 External: connect via IP:27017, where IP is the address of any of the three servers
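
    For example, from any host with the mongo shell installed (replace <server-ip> with the address of one of the three servers; the swarm routing mesh forwards port 27017 to mongos):

```shell
# Connect from outside the swarm and print the sharding status
mongo --host <server-ip> --port 27017 --eval 'sh.status()'
```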

    ❤️ Questions

    Questions and issues will be recorded here; feel free to ask!

