In the previous post we set up a Hyperledger Fabric v1.3.0 environment on Alibaba Cloud and successfully ran the e2e example. Next we will build a Kafka cluster: create an image from that cloud server, then use the image to spin up seven more servers. Keep the pay-as-you-go billing mode to save money; 1 GB of RAM and a single-core CPU per server is enough. The previous post used 4 GB because all the node containers ran on one machine, but for the Kafka cluster setup 1 GB per server suffices.
This Kafka production deployment consists of three orderer services, four Kafka brokers, three ZooKeeper nodes, and four peers, spread across eight servers. The services on each server are as follows:

Server IP       Services
47.104.26.119   zookeeper0, kafka0, orderer0.example.com
47.105.127.95   zookeeper1, kafka1, orderer1.example.com
47.105.226.90   zookeeper2, kafka2, orderer2.example.com
47.105.136.5    kafka3
47.105.36.78    peer0.org1.example.com
47.105.40.77    peer1.org1.example.com
47.104.147.94   peer0.org2.example.com
47.105.106.73   peer1.org2.example.com
1. Modify the extra_hosts settings in the configuration files
The configuration files for zookeeper, kafka, the orderers, and the peers are already written and available here: extraction code 5aun
In each configuration file, update the IP mappings in the extra_hosts variable to match the actual IPs of your cloud servers.
First, the layout of the configuration files:
.
├── orderer0
│   ├── clear_docker.sh               // stops every container on the orderer0.example.com server
│   ├── configtx.yaml                 // used to generate the genesis block and the channel file
│   ├── crypto-config.yaml            // used to generate the certificate files
│   ├── docker-compose-kafka.yaml     // kafka0 configuration
│   ├── docker-compose-orderer.yaml   // orderer0 ordering-node configuration
│   ├── docker-compose-zookeeper.yaml // zookeeper0 configuration
│   ├── generate.sh                   // script that generates the certificates, the genesis block, and the channel file
│   └── scpCryptoAndGenesisToOther.sh // distributes the certificates, genesis block, and channel file to the other servers; strictly each server needs only its own files, but for simplicity everything is distributed
├── orderer1
│   ├── clear_docker.sh               // stops every container on the orderer1.example.com server
│   ├── docker-compose-kafka.yaml     // kafka1 configuration
│   ├── docker-compose-orderer.yaml   // orderer1 ordering-node configuration
│   └── docker-compose-zookeeper.yaml // zookeeper1 configuration
├── orderer2
│   ├── clear_docker.sh               // stops every container on the orderer2.example.com server
│   ├── docker-compose-kafka.yaml     // kafka2 configuration
│   ├── docker-compose-orderer.yaml   // orderer2 ordering-node configuration
│   └── docker-compose-zookeeper.yaml // zookeeper2 configuration
├── orderer3
│   ├── clear_docker.sh               // stops every container on the orderer3.example.com server
│   └── docker-compose-kafka.yaml     // kafka3 configuration
├── peer0.org1
│   ├── chaincode                     // chaincode; the example02 chaincode from e2e
│   ├── clear_docker.sh               // stops every container on the peer0.org1 server
│   ├── docker-compose-peer.yaml      // peer0.org1 node configuration
│   └── scpChannelToOtherPeers.sh     // distributes the mychannel.block channel file to the other peer nodes
├── peer0.org2
│   ├── chaincode                     // chaincode; the example02 chaincode from e2e
│   ├── clear_docker.sh               // stops every container on the peer0.org2 server
│   └── docker-compose-peer.yaml      // peer0.org2 node configuration
├── peer1.org1
│   ├── chaincode                     // chaincode; the example02 chaincode from e2e
│   ├── clear_docker.sh               // stops every container on the peer1.org1 server
│   └── docker-compose-peer.yaml      // peer1.org1 node configuration
├── peer1.org2
│   ├── chaincode                     // chaincode; the example02 chaincode from e2e
│   ├── clear_docker.sh               // stops every container on the peer1.org2 server
│   └── docker-compose-peer.yaml      // peer1.org2 node configuration
└── scpConfigFileToYun.sh             // script that distributes the configuration files to the cloud servers
The orderer0 folder contains crypto-config.yaml, which generates the certificate files, and configtx.yaml, which generates the genesis block and the channel file.
In each of the zookeeper, kafka, orderer, peer, and cli configuration files, update the IP mappings under extra_hosts one by one. For example, zookeeper0's configuration file:
version: '2'

services:
  zookeeper0:
    container_name: zookeeper0
    hostname: zookeeper0
    image: hyperledger/fabric-zookeeper
    restart: always
    environment:
      - ZOO_MY_ID=1
      - ZOO_SERVERS=server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    extra_hosts:
      - "zookeeper0:47.104.26.119"
      - "zookeeper1:47.105.127.95"
      - "zookeeper2:47.105.226.90"
      - "kafka0:47.104.26.119"
      - "kafka1:47.105.127.95"
      - "kafka2:47.105.226.90"
      - "kafka3:47.105.136.5"
kafka0's configuration file:
version: '2'

services:
  kafka0:
    container_name: kafka0
    hostname: kafka0
    image: hyperledger/fabric-kafka
    restart: always
    environment:
      # broker.id
      - KAFKA_BROKER_ID=1
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
      # 100 * 1024 * 1024 B
      - KAFKA_MESSAGE_MAX_BYTES=104857600
      - KAFKA_REPLICA_FETCH_MAX_BYTES=104857600
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_LOG_RETENTION_MS=-1
      - KAFKA_HEAP_OPTS=-Xmx512M -Xms256M
    ports:
      - 9092:9092
    extra_hosts:
      - "zookeeper0:47.104.26.119"
      - "zookeeper1:47.105.127.95"
      - "zookeeper2:47.105.226.90"
      - "kafka0:47.104.26.119"
      - "kafka1:47.105.127.95"
      - "kafka2:47.105.226.90"
      - "kafka3:47.105.136.5"
orderer0's configuration file (the relevant section):
    ports:
      - 7050:7050
    extra_hosts:
      - "kafka0:47.104.26.119"
      - "kafka1:47.105.127.95"
      - "kafka2:47.105.226.90"
      - "kafka3:47.105.136.5"
In the peer0.org1 folder, the peer service's extra_hosts to change:
    extra_hosts:
      - "orderer0.example.com:47.104.26.119"
      - "orderer1.example.com:47.105.127.95"
      - "orderer2.example.com:47.105.226.90"
In the peer0.org1 folder, the cli service's extra_hosts to change:
      - "orderer0.example.com:47.104.26.119"
      - "orderer1.example.com:47.105.127.95"
      - "orderer2.example.com:47.105.226.90"
      - "peer0.org1.example.com:47.105.36.78"
      - "peer1.org1.example.com:47.105.40.77"
      - "peer0.org2.example.com:47.104.147.94"
      - "peer1.org2.example.com:47.105.106.73"
Make the same changes in the remaining zookeeper, kafka, orderer, peer, and cli configuration files.
2. Update the IP addresses in the scripts
Update the IPs in scpConfigFileToYun.sh,
scpCryptoAndGenesisToOther.sh,
and scpChannelToOtherPeers.sh.
These three scripts make distributing files to the servers much easier.
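As an illustration, scpConfigFileToYun.sh boils down to copying each role's config folder to the server that plays that role. The sketch below (a hypothetical reconstruction, using this walkthrough's IPs) echoes the scp commands instead of running them, so the folder-to-IP mapping can be reviewed first; drop the echo and substitute your own IPs for real use:

```shell
#!/bin/sh
# Sketch of scpConfigFileToYun.sh: one folder per role, copied to the
# server that plays that role. 'echo' prints the commands for review;
# remove it to perform the actual copy.
DEST=/root/go/src/github.com/hyperledger/fabric/kafkapeer

copy() { # $1 = local config folder, $2 = target server IP
  echo "scp -r ./$1/* root@$2:$DEST"
}

copy orderer0   47.104.26.119
copy orderer1   47.105.127.95
copy orderer2   47.105.226.90
copy orderer3   47.105.136.5
copy peer0.org1 47.105.36.78
copy peer1.org1 47.105.40.77
copy peer0.org2 47.104.147.94
copy peer1.org2 47.105.106.73
```

Laying the mapping out as one `copy` call per role makes the folder-to-server pairing easy to audit, which matters later (see the distribution mistake described in the problems section).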
3. Distribute the configuration files
Log in to each cloud server and create the kafkapeer folder:
$ cd /root/go/src/github.com/hyperledger/fabric
$ mkdir kafkapeer
Add the IP/hostname mappings to the /etc/hosts file:
$ vi /etc/hosts
47.104.26.119 zookeeper0
47.105.127.95 zookeeper1
47.105.226.90 zookeeper2
47.104.26.119 kafka0
47.105.127.95 kafka1
47.105.226.90 kafka2
47.105.136.5 kafka3
47.104.26.119 orderer0.example.com
47.105.127.95 orderer1.example.com
47.105.226.90 orderer2.example.com
47.105.36.78 peer0.org1.example.com
47.105.40.77 peer1.org1.example.com
47.104.147.94 peer0.org2.example.com
47.105.106.73 peer1.org2.example.com
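Editing /etc/hosts by hand on eight servers is error-prone; a small sketch of doing it non-interactively is below. It appends each mapping only if the hostname is not already present, so re-running it is harmless. The HOSTS variable is an assumption added here so the sketch can be tried against a scratch file before pointing it at the real /etc/hosts:

```shell
#!/bin/sh
# Append the cluster's IP/hostname mappings to a hosts file, skipping
# names that are already present so the script is safe to re-run.
HOSTS=${HOSTS:-/etc/hosts}

add_host() { # $1 = IP, $2 = hostname
  grep -qs "[[:space:]]$2\$" "$HOSTS" || echo "$1 $2" >> "$HOSTS"
}

add_host 47.104.26.119 zookeeper0
add_host 47.105.127.95 zookeeper1
add_host 47.105.226.90 zookeeper2
add_host 47.104.26.119 kafka0
add_host 47.105.127.95 kafka1
add_host 47.105.226.90 kafka2
add_host 47.105.136.5  kafka3
add_host 47.104.26.119 orderer0.example.com
add_host 47.105.127.95 orderer1.example.com
add_host 47.105.226.90 orderer2.example.com
add_host 47.105.36.78  peer0.org1.example.com
add_host 47.105.40.77  peer1.org1.example.com
add_host 47.104.147.94 peer0.org2.example.com
add_host 47.105.106.73 peer1.org2.example.com
```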
On the Mac, run the scpConfigFileToYun.sh script to distribute the configuration files:
$ sh scpConfigFileToYun.sh
4. Generate the certificates and the genesis block
Log in to the orderer0.example.com server:
$ ssh root@47.104.26.119
$ cd go/src/github.com/hyperledger/fabric/kafkapeer/
Locate the configtxgen and cryptogen tools:
$ find / -name configtxgen
Copy configtxgen and cryptogen into the current folder:
$ cp -r /root/go/src/github.com/hyperledger/fabric/release/linux-amd64/bin ./
Run the generate.sh script:
$ ./generate.sh
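For reference, a sketch of what generate.sh typically does. The profile names (TwoOrgsOrdererGenesis, TwoOrgsChannel) are assumptions taken from the e2e defaults; they must match the profiles defined in your configtx.yaml. The sketch prints the commands via RUN=echo; set RUN to empty to actually execute them:

```shell
#!/bin/sh
# Sketch of generate.sh. Profile names are assumptions from the e2e
# defaults -- check them against configtx.yaml. RUN=echo previews the
# commands; set RUN= (empty) to execute them for real.
RUN=${RUN:-echo}
export FABRIC_CFG_PATH=$PWD   # configtxgen looks for configtx.yaml here

# 1. Generate the MSP certificates from crypto-config.yaml
$RUN ./bin/cryptogen generate --config=./crypto-config.yaml

# 2. Generate the orderer genesis block (system channel: testchainid)
$RUN ./bin/configtxgen -profile TwoOrgsOrdererGenesis \
  -outputBlock ./channel-artifacts/genesis.block -channelID testchainid

# 3. Generate the channel creation transaction for mychannel
$RUN ./bin/configtxgen -profile TwoOrgsChannel \
  -outputCreateChannelTx ./channel-artifacts/mychannel.tx -channelID mychannel
```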
Once the script finishes, the directory contains the certificate files, the genesis block, and the channel file:
root@iZm5e1vrchk33e0j4ou30sZ:~/go/src/github.com/hyperledger/fabric/kafkapeer# ls channel-artifacts/
genesis.block mychannel.tx
root@iZm5e1vrchk33e0j4ou30sZ:~/go/src/github.com/hyperledger/fabric/kafkapeer# ls crypto-config
ordererOrganizations peerOrganizations
root@iZm5e1vrchk33e0j4ou30sZ:~/go/src/github.com/hyperledger/fabric/kafkapeer#
Run the scpCryptoAndGenesisToOther.sh script to distribute the certificates and the genesis block to the other servers:
$ ./scpCryptoAndGenesisToOther.sh
5. Start zookeeper, kafka, and orderer
5.1 Start zookeeper
Log in to the 47.104.26.119 server:
$ ssh root@47.104.26.119
$ cd /root/go/src/github.com/hyperledger/fabric/kafkapeer
Start zookeeper0:
$ docker-compose -f docker-compose-zookeeper.yaml up -d
Log in to the 47.105.127.95 server:
$ ssh root@47.105.127.95
$ cd /root/go/src/github.com/hyperledger/fabric/kafkapeer
Start zookeeper1:
$ docker-compose -f docker-compose-zookeeper.yaml up -d
Log in to the 47.105.226.90 server:
$ ssh root@47.105.226.90
$ cd /root/go/src/github.com/hyperledger/fabric/kafkapeer
Start zookeeper2:
$ docker-compose -f docker-compose-zookeeper.yaml up -d
5.2 Start kafka
On the 47.104.26.119 server, under /root/go/src/github.com/hyperledger/fabric/kafkapeer, start kafka0:
$ docker-compose -f docker-compose-kafka.yaml up -d
On the 47.105.127.95 server, under the same path, start kafka1:
$ docker-compose -f docker-compose-kafka.yaml up -d
On the 47.105.226.90 server, under the same path, start kafka2:
$ docker-compose -f docker-compose-kafka.yaml up -d
On the 47.105.136.5 server, under the same path, start kafka3:
$ docker-compose -f docker-compose-kafka.yaml up -d
After starting them, run docker ps -a to check; a STATUS of Up means the container started successfully:
root@iZm5e1vrchk33e0j4ou30sZ:~/go/src/github.com/hyperledger/fabric/kafkapeer# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
35dd8ed6574b hyperledger/fabric-kafka "/docker-entrypoint.…" 2 minutes ago Up 2 minutes 0.0.0.0:9092->9092/tcp, 9093/tcp kafka0
c94a1273f518 hyperledger/fabric-zookeeper "/docker-entrypoint.…" 8 minutes ago Up 8 minutes 0.0.0.0:2181->2181/tcp, 0.0.0.0:2888->2888/tcp, 0.0.0.0:3888->3888/tcp zookeeper0
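That check can be scripted instead of eyeballed. The helper below (a small addition, not part of the original scripts) reads docker ps -a output, where the container name is the last column, and succeeds only if the named container's status contains Up:

```shell
#!/bin/sh
# is_up NAME: reads `docker ps -a` output on stdin and succeeds only if
# the row whose last column is NAME contains an "Up" status.
is_up() {
  awk -v name="$1" '$NF == name && / Up / { found = 1 } END { exit !found }'
}

# Typical use on a server (requires docker):
#   docker ps -a | is_up zookeeper0 && echo "zookeeper0 is running"
```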
5.3 Start orderer
On the 47.104.26.119 server, under /root/go/src/github.com/hyperledger/fabric/kafkapeer, start orderer0:
$ docker-compose -f docker-compose-orderer.yaml up -d
On the 47.105.127.95 server, under the same path, start orderer1:
$ docker-compose -f docker-compose-orderer.yaml up -d
On the 47.105.226.90 server, under the same path, start orderer2:
$ docker-compose -f docker-compose-orderer.yaml up -d
After starting them, run docker ps -a again; a STATUS of Up means the container started successfully:
root@iZm5e1vrchk33e0j4ou30sZ:~/go/src/github.com/hyperledger/fabric/kafkapeer# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fd0a249c1307 hyperledger/fabric-orderer "orderer" About a minute ago Up About a minute 0.0.0.0:7050->7050/tcp orderer0.example.com
35dd8ed6574b hyperledger/fabric-kafka "/docker-entrypoint.…" 9 minutes ago Up 9 minutes 0.0.0.0:9092->9092/tcp, 9093/tcp kafka0
c94a1273f518 hyperledger/fabric-zookeeper "/docker-entrypoint.…" 15 minutes ago Up 15 minutes 0.0.0.0:2181->2181/tcp, 0.0.0.0:2888->2888/tcp, 0.0.0.0:3888->3888/tcp zookeeper0
Check the logs of the orderer0 container:
$ docker logs fd0a249c1307
The "It's a connect message" lines show the orderer talking to Kafka, which means the Kafka cluster environment is up and running:
2019-01-12 03:10:50.537 UTC [orderer/consensus/kafka] processMessagesToBlocks -> DEBU 0ff [channel: testchainid] Successfully unmarshalled consumed message, offset is 1. Inspecting type...
2019-01-12 03:10:50.537 UTC [orderer/consensus/kafka] processConnect -> DEBU 100 [channel: testchainid] It's a connect message - ignoring
2019-01-12 03:10:52.351 UTC [orderer/consensus/kafka] processMessagesToBlocks -> DEBU 101 [channel: testchainid] Successfully unmarshalled consumed message, offset is 2. Inspecting type...
2019-01-12 03:10:52.352 UTC [orderer/consensus/kafka] processConnect -> DEBU 102 [channel: testchainid] It's a connect message - ignoring
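When bringing the cluster up repeatedly, this log check can be scripted too. The helper below (an addition for convenience, not part of the original scripts) scans orderer log output for the connect-message line shown above:

```shell
#!/bin/sh
# kafka_connected: reads orderer log output on stdin and succeeds if it
# contains the Kafka "connect message" debug line.
kafka_connected() {
  grep -q "It's a connect message"
}

# Typical use on the orderer server (requires docker):
#   docker logs <orderer-container-id> 2>&1 | kafka_connected && echo "Kafka cluster reachable"
```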
6. Start the peer nodes, create the channel, and install the chaincode
6.1 Start the peer nodes
Log in to the peer0.org1 node:
$ ssh root@47.105.36.78
$ cd /root/go/src/github.com/hyperledger/fabric/kafkapeer/
$ docker-compose -f docker-compose-peer.yaml up -d
Check the containers with docker ps -a:
root@iZm5e9pn15ifo7y2it7dgbZ:~/go/src/github.com/hyperledger/fabric/kafkapeer# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
03f56b8e2916 hyperledger/fabric-tools "/bin/bash" 16 seconds ago Up 14 seconds cli
d451b975bd18 hyperledger/fabric-peer "peer node start" 16 seconds ago Up 13 seconds 0.0.0.0:7051-7053->7051-7053/tcp peer0.org1.example.com
6.2 Create the channel
Enter the cli container:
$ docker exec -it cli bash
Create the channel:
$ ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer0.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
$ peer channel create -o orderer0.example.com:7050 -c mychannel -f ./channel-artifacts/mychannel.tx --tls --cafile $ORDERER_CA
Check the generated files; mychannel.block is the channel file just created:
root@03f56b8e2916:/opt/gopath/src/github.com/hyperledger/fabric/peer# ls
channel-artifacts crypto mychannel.block
Join the peer0.org1 node to the channel:
$ peer channel join -b mychannel.block
On success it returns: Successfully submitted proposal to join channel
$ peer channel list
This lists the channels the peer has joined; it is now in mychannel:
Channels peers has joined:
mychannel
Copy the newly created mychannel.block out of the container into the /root/go/src/github.com/hyperledger/fabric/kafkapeer directory. First exit the cli container:
$ exit
$ docker cp 03f56b8e2916:/opt/gopath/src/github.com/hyperledger/fabric/peer/mychannel.block ./
where 03f56b8e2916 is the cli container's CONTAINER ID.
Verify that mychannel.block has been copied out of the cli container:
(screenshot: directory listing showing mychannel.block)
Run the scpChannelToOtherPeers.sh script to distribute mychannel.block to the other peer nodes so they can join the channel later:
$ sh scpChannelToOtherPeers.sh
6.3 Install and instantiate the chaincode
Enter the cli container:
$ docker exec -it cli bash
Install the chaincode:
$ peer chaincode install -n mycc -p github.com/hyperledger/fabric/kafkapeer/chaincode/go/example02/cmd/ -v 1.0
A response containing Installed remotely response:<status:200 payload:"OK" indicates the chaincode installed successfully.
Instantiate the chaincode, calling its init method with initial values a=100 and b=100. The endorsement policy says a member of either Org1MSP or Org2MSP is enough to endorse:
$ peer chaincode instantiate -o orderer0.example.com:7050 --tls --cafile $ORDERER_CA -C mychannel -n mycc -v 1.0 -c '{"Args":["init","a","100","b","100"]}' -P "OR ('Org1MSP.member','Org2MSP.member')"
Instantiation takes a while. Once it finishes, query a's balance:
$ peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'
Returns 100.
Query b's balance:
$ peer chaincode query -C mychannel -n mycc -c '{"Args":["query","b"]}'
Also returns 100, so the instantiation succeeded.
6.4 Make transactions
6.4.1 First, a transaction on the peer0.org1 node: transfer 20 from a to b
$ peer chaincode invoke --tls --cafile $ORDERER_CA -C mychannel -n mycc -c '{"Args":["invoke","a","b","20"]}'
A response of Chaincode invoke successful. result: status:200 means the transaction succeeded. Check the balances of a and b:
$ peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'
Returns 80.
$ peer chaincode query -C mychannel -n mycc -c '{"Args":["query","b"]}'
Returns 120, confirming the transaction succeeded.
6.4.2 A transaction on the peer1.org1 node
Log in to the peer1.org1 server:
$ ssh root@47.105.40.77
$ cd /root/go/src/github.com/hyperledger/fabric/kafkapeer/
The current directory already contains the mychannel.block file sent over from peer0.org1 earlier.
Start the peer1.org1 containers:
$ docker-compose -f docker-compose-peer.yaml up -d
Check the container state:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ae5098d3d3a3 hyperledger/fabric-tools "/bin/bash" 2 hours ago Up 2 hours cli
c94443b1f6a5 hyperledger/fabric-peer "peer node start" 2 hours ago Up 2 hours 0.0.0.0:7051-7053->7051-7053/tcp peer1.org1.example.com
Copy mychannel.block into the cli container:
$ docker cp ./mychannel.block ae5098d3d3a3:/opt/gopath/src/github.com/hyperledger/fabric/peer
where ae5098d3d3a3 is the cli container's CONTAINER ID.
Enter the cli container:
$ docker exec -it cli bash
Confirm that mychannel.block is inside, then join peer1.org1 to the channel:
$ peer channel join -b mychannel.block
A response of Successfully submitted proposal to join channel means the peer joined the channel.
Install the chaincode:
$ peer chaincode install -n mycc -p github.com/hyperledger/fabric/kafkapeer/chaincode/go/example02/cmd/ -v 1.0
A response containing Installed remotely response:<status:200 payload:"OK" indicates the install succeeded.
Query the balances of a and b; this takes a while because the peer must first sync the ledger:
$ peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'
Returns 80.
$ peer chaincode query -C mychannel -n mycc -c '{"Args":["query","b"]}'
Returns 120.
Make a transaction: transfer 50 from b to a:
$ ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer0.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
$ peer chaincode invoke --tls --cafile $ORDERER_CA -C mychannel -n mycc -c '{"Args":["invoke","b","a","50"]}'
A response of Chaincode invoke successful. result: status:200 means the transaction succeeded. Check the balances of a and b:
$ peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'
Returns 130.
$ peer chaincode query -C mychannel -n mycc -c '{"Args":["query","b"]}'
Returns 70, confirming the transaction succeeded.
6.4.3 A transaction on the peer0.org2 node
$ ssh root@47.104.147.94
$ cd /root/go/src/github.com/hyperledger/fabric/kafkapeer/
The current directory already contains the mychannel.block file sent over from peer0.org1 earlier:
root@iZm5e9pn15ifo7y2it7dgcZ:~/go/src/github.com/hyperledger/fabric/kafkapeer# ls
chaincode channel-artifacts clear_docker.sh crypto-config docker-compose-peer.yaml mychannel.block
Start the peer0.org2 containers:
$ docker-compose -f docker-compose-peer.yaml up -d
Check the container state:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3fa14d2d5a07 hyperledger/fabric-tools "/bin/bash" 17 minutes ago Up 17 minutes cli
aa41cf591a16 hyperledger/fabric-peer "peer node start" 17 minutes ago Up 17 minutes 0.0.0.0:7051-7053->7051-7053/tcp peer0.org2.example.com
Copy mychannel.block into the cli container:
$ docker cp ./mychannel.block 3fa14d2d5a07:/opt/gopath/src/github.com/hyperledger/fabric/peer
where 3fa14d2d5a07 is the cli container's CONTAINER ID.
Enter the cli container:
$ docker exec -it cli bash
Confirm that mychannel.block is inside, then join peer0.org2 to the channel:
$ peer channel join -b mychannel.block
A response of Successfully submitted proposal to join channel means the peer joined the channel.
Install the chaincode:
$ peer chaincode install -n mycc -p github.com/hyperledger/fabric/kafkapeer/chaincode/go/example02/cmd/ -v 1.0
A response containing Installed remotely response:<status:200 payload:"OK" indicates the install succeeded.
Query the balances of a and b; this takes a while because the ledger must sync:
$ peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'
Returns 130.
$ peer chaincode query -C mychannel -n mycc -c '{"Args":["query","b"]}'
Returns 70.
Make a transaction: transfer 20 from b to a:
$ ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer0.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
$ peer chaincode invoke --tls --cafile $ORDERER_CA -C mychannel -n mycc -c '{"Args":["invoke","b","a","20"]}'
A response of Chaincode invoke successful. result: status:200 means the transaction succeeded.
6.4.4 Queries on the peer1.org2 node
$ ssh root@47.105.106.73
$ cd /root/go/src/github.com/hyperledger/fabric/kafkapeer/
The current directory already contains the mychannel.block file sent over from peer0.org1 earlier.
Start the peer1.org2 containers:
$ docker-compose -f docker-compose-peer.yaml up -d
Check the container state:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a47aaf499de3 hyperledger/fabric-tools "/bin/bash" 31 minutes ago Up 31 minutes cli
d3f0b0db0495 hyperledger/fabric-peer "peer node start" 31 minutes ago Up 31 minutes 0.0.0.0:7051-7053->7051-7053/tcp peer1.org2.example.com
Copy mychannel.block into the cli container:
$ docker cp ./mychannel.block a47aaf499de3:/opt/gopath/src/github.com/hyperledger/fabric/peer
where a47aaf499de3 is the cli container's CONTAINER ID.
Enter the cli container:
$ docker exec -it cli bash
Confirm that mychannel.block is inside, then join peer1.org2 to the channel:
$ peer channel join -b mychannel.block
A response of Successfully submitted proposal to join channel means the peer joined the channel.
Install the chaincode:
$ peer chaincode install -n mycc -p github.com/hyperledger/fabric/kafkapeer/chaincode/go/example02/cmd/ -v 1.0
A response containing Installed remotely response:<status:200 payload:"OK" indicates the install succeeded.
Check the balances of a and b:
$ peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'
Returns 150.
$ peer chaincode query -C mychannel -n mycc -c '{"Args":["query","b"]}'
Returns 50.
7. Stopping and restarting the cluster
7.1 Stop the cluster
Log in to each cloud server and run the ./clear_docker.sh script.
On the peer servers, also delete the chaincode (dev) images. List the images:
$ docker images
Delete the image:
$ docker rmi 093555752a3c
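Deleting the chaincode images one ID at a time gets tedious across several peers. Chaincode images have repository names starting with dev-, so their IDs can be filtered out of docker images output. The helper below (an addition, not part of the original scripts) is written over plain text so the selection can be checked before wiring it to docker rmi:

```shell
#!/bin/sh
# dev_image_ids: reads `docker images` output on stdin and prints the
# IMAGE ID (third column) of every repository whose name starts with "dev-".
dev_image_ids() {
  awk '$1 ~ /^dev-/ { print $3 }'
}

# Typical use on a peer server (requires docker):
#   docker images | dev_image_ids | xargs -r docker rmi
```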
7.2 Restart the cluster
- On the Mac, run the configuration-distribution script:
./scpConfigFileToYun.sh
- Log in to the orderer0 server and run the script that generates the certificates and the genesis block:
./generate.sh
- Distribute the generated files to the other servers:
./scpCryptoAndGenesisToOther.sh
- Start zookeeper, kafka, and orderer, in that order.
- Start the peer nodes, create the channel, join it, and send the channel file (the .block file) to the other peers:
./scpChannelToOtherPeers.sh
Problems encountered:
1. Joining the second peer node to the channel failed with this error:
grpc: addrConn.createTransport failed to connect to {peer1.org1.example.com:7051 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: x509: certificate is valid for peer0.org2.example.com, peer0, not peer1.org1.example.com". Reconnecting...
The error means the wrong certificate was used: the peer presented another node's certificate. The root cause: we had agreed in advance which server plays which role, but the configuration files were distributed to the wrong servers. The mistake was in scpConfigFileToYun.sh, which sent peer1.org1's configuration to the peer0.org2 server. When writing scpConfigFileToYun.sh, each node's config folder must map to its matching server IP.
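A quick guard against that distribution mistake: after copying, check on each server that docker-compose-peer.yaml actually names the peer that server is supposed to run. The helper below assumes the compose file sets container_name for the peer, as the files in this walkthrough do:

```shell
#!/bin/sh
# check_role FILE EXPECTED: succeeds only if the compose file declares
# a container named EXPECTED, i.e. the right config landed on this server.
check_role() {
  grep -q "container_name: *$2" "$1"
}

# Typical use on the peer1.org1 server:
#   check_role docker-compose-peer.yaml peer1.org1.example.com || echo "WRONG CONFIG on this server"
```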