MongoDB replication works by having the primary record every operation performed on it in its oplog; the secondaries periodically poll the primary for these operations and replay them against their own copies of the data, which keeps the secondaries' data consistent with the primary.
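As a quick illustration of this mechanism, once the replica set built in the steps below is running you can look at the newest oplog entry on the primary; the oplog is stored in the oplog.rs collection of the local database:
> use local
switched to db local
> db.oplog.rs.find().sort({ $natural: -1 }).limit(1).pretty()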
Replica set characteristics:
- A cluster of N nodes
- Any node can become the primary
- All write operations go to the primary
- Automatic failover
- Automatic recovery
Points to keep in mind about replica sets:
- The minimum configuration is: primary, secondary, arbiter; a typical deployment is: primary plus 2 secondaries.
- The number of members should be odd; if it is even, add an arbiter. An arbiter stores no data and only votes (see the example after this list).
- A replica set can have at most 50 members, but only 7 of them can be voting members; the rest are non-voting members.
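To illustrate the arbiter note above: an arbiter can be added to an already running replica set from the primary's mongo shell with the rs.addArb() helper. The host and port 192.168.1.121:27020 below are only a hypothetical example and are not part of the deployment described in this article:
> rs.addArb("192.168.1.121:27020")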
1. Prepare the environment
192.168.1.121:27017 will serve as the primary node;
192.168.1.121:27018 and 192.168.1.121:27019 will serve as the two secondary nodes.
2. Create the directories each of the three nodes needs
mkdir -p /usr/mongoDB/rs{0,1,2}/data/db    # data directories for the three nodes
mkdir -p /usr/mongoDB/rs{0,1,2}/log        # log directories for the three nodes
touch /usr/mongoDB/rs{0,1,2}/log/mongo.log # log files for the three nodes

3. Create a configuration file for each of the three nodes
Example content for the rs0 node (save it as, for example, /usr/mongoDB/rs0/mongodb.conf; the rs1 and rs2 files differ only in dbpath, logpath, and port):
dbpath=/usr/mongoDB/rs0/data/db
logpath=/usr/mongoDB/rs0/log/mongo.log
logappend=true
port=27017
fork=true
auth=false
bind_ip=192.168.1.121
journal=true
quiet=true
replSet=upmsSet
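Assuming the rs0 file above is saved as /usr/mongoDB/rs0/mongodb.conf and the other two follow the same naming, the rs1 and rs2 configuration files can be generated with a small shell sketch like this instead of being edited by hand:
for i in 1 2; do
    sed -e "s/rs0/rs${i}/g" -e "s/port=27017/port=$((27017 + i))/" \
        /usr/mongoDB/rs0/mongodb.conf > /usr/mongoDB/rs${i}/mongodb.conf
done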

4. Start the three nodes (the mongod binary is located in MongoDB's bin directory)
mongod -f /usr/mongoDB/rs0/mongodb.conf
mongod -f /usr/mongoDB/rs1/mongodb.conf
mongod -f /usr/mongoDB/rs2/mongodb.conf
You can check the log file under each node's log directory to confirm that it started.
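If a node does not come up, two quick checks are a process listing and the tail of its log; a healthy mongod finishes startup with a "waiting for connections" line:
ps aux | grep [m]ongod                      # the three mongod processes should all be listed
tail -n 20 /usr/mongoDB/rs0/log/mongo.log   # look for "waiting for connections on port 27017"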

5. Initialize the replica set
Log in to any one of the nodes. show dbs will not work yet, because the replica set has not been initialized.
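For example, to connect to the node on port 27017 with the mongo shell (the mongo binary sits in the same bin directory as mongod):
mongo --host 192.168.1.121 --port 27017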

Switch to the admin database:
> use admin
switched to db admin
Define the replica set configuration variable. The _id: "upmsSet" here must match the replSet=upmsSet setting in the configuration files above.
> config = { _id: "upmsSet", members: [
      { _id: 0, host: "192.168.1.121:27017" },
      { _id: 1, host: "192.168.1.121:27018" },
      { _id: 2, host: "192.168.1.121:27019" }
  ] }
Initialize the replica set with this configuration:
> rs.initiate(config);
{ "ok" : 1 }

Check the status of the replica set members:
> rs.status()
{
    "set" : "upmsSet",
    "date" : ISODate("2019-02-28T05:40:07.400Z"),
    "myState" : 2,
    "term" : NumberLong(0),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(0, 0),
            "t" : NumberLong(-1)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1551332397, 1),
            "t" : NumberLong(-1)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1551332397, 1),
            "t" : NumberLong(-1)
        }
    },
    "lastStableCheckpointTimestamp" : Timestamp(0, 0),
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.1.121:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 353,
            "optime" : {
                "ts" : Timestamp(1551332397, 1),
                "t" : NumberLong(-1)
            },
            "optimeDate" : ISODate("2019-02-28T05:39:57Z"),
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "could not find member to sync from",
            "configVersion" : 1,
            "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 1,
            "name" : "192.168.1.121:27018",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 9,
            "optime" : {
                "ts" : Timestamp(1551332397, 1),
                "t" : NumberLong(-1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1551332397, 1),
                "t" : NumberLong(-1)
            },
            "optimeDate" : ISODate("2019-02-28T05:39:57Z"),
            "optimeDurableDate" : ISODate("2019-02-28T05:39:57Z"),
            "lastHeartbeat" : ISODate("2019-02-28T05:40:07.203Z"),
            "lastHeartbeatRecv" : ISODate("2019-02-28T05:40:07.242Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "configVersion" : 1
        },
        {
            "_id" : 2,
            "name" : "192.168.1.121:27019",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 9,
            "optime" : {
                "ts" : Timestamp(1551332397, 1),
                "t" : NumberLong(-1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1551332397, 1),
                "t" : NumberLong(-1)
            },
            "optimeDate" : ISODate("2019-02-28T05:39:57Z"),
            "optimeDurableDate" : ISODate("2019-02-28T05:39:57Z"),
            "lastHeartbeat" : ISODate("2019-02-28T05:40:07.204Z"),
            "lastHeartbeatRecv" : ISODate("2019-02-28T05:40:07.244Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "configVersion" : 1
        }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1551332397, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1551332397, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}
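In the output above all three members still report SECONDARY because rs.status() was run immediately after initiation; running it again a few seconds later should show one member (normally 192.168.1.121:27017) as PRIMARY. A minimal sketch for then verifying that replication works is to write on the primary and read the same document back on a secondary; the database and collection names test and repltest here are just examples:
On the primary (192.168.1.121:27017):
> use test
> db.repltest.insert({ msg: "hello from primary" })
On a secondary (e.g. 192.168.1.121:27018):
> rs.slaveOk()        // allow reads on this secondary (newer shells use rs.secondaryOk())
> use test
> db.repltest.find()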