Setting up a Fabric cluster with Kafka ordering
0. Preparation
- Domain: ikotlin.com
name | ip | hostName | other |
---|---|---|---|
orderer0 | 192.168.204.129 | orderer0.ikotlin.com | |
orderer1 | 192.168.204.147 | orderer1.ikotlin.com | |
orderer2 | 192.168.204.148 | orderer2.ikotlin.com | |
zookeeper1 | 192.168.204.149 | zookeeper1 | |
zookeeper2 | 192.168.204.150 | zookeeper2 | |
zookeeper3 | 192.168.204.151 | zookeeper3 | |
kafka1 | 192.168.204.145 | kafka1 | |
kafka2 | 192.168.204.143 | kafka2 | |
kafka3 | 192.168.204.144 | kafka3 | |
kafka4 | 192.168.204.146 | kafka4 | |
peer0.android | 192.168.204.152 | peer0.android.ikotlin.com | OrgAndroidMSP |
peer0.ios | 192.168.204.153 | peer0.ios.ikotlin.com | OrgiOSMSP |
- Every node needs a working directory, and the directory name must be identical on all machines: Docker Compose derives the network name from it (here appkafka_default), and the orderer and peer configurations reference that name.
# Create the working directory
$ mkdir appkafka && cd appkafka
# Start Docker first, otherwise docker-compose fails with:
#   ERROR: Couldn't connect to Docker daemon at http+docker://localunixsocket - is it running?
#   If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
$ sudo systemctl start docker
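# (Optional check) Compose names its default network <directory>_default, so after the first
# `docker-compose ... up -d` has been run in this directory, a network called appkafka_default
# should exist; the orderer and peer configurations below rely on that exact name.
$ docker network ls --filter name=appkafka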
# Make sure the required Docker images are available locally
[root@localhost ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hyperledger/fabric-zookeeper latest bbcd552150f4 9 months ago 276MB
hyperledger/fabric-kafka latest 7e0396b6d64e 9 months ago 270MB
hyperledger/fabric-couchdb latest b967e8b98b6b 9 months ago 261MB
hyperledger/fabric-baseos latest 5272411bf370 9 months ago 88.7MB
hyperledger/fabric-ca latest 743a758fae29 11 months ago 154MB
hyperledger/fabric-tools latest a026b435e575 11 months ago 1.49GB
hyperledger/fabric-ccenv latest c5fbec1827ad 11 months ago 1.36GB
hyperledger/fabric-orderer latest df155b01ed80 11 months ago 123MB
hyperledger/fabric-peer latest 5d5fbecd1efe 11 months ago 131MB
hyperledger/fabric-baseimage amd64-0.4.20 1bbfea6ce681 13 months ago 1.3GB
hyperledger/fabric-baseos amd64-0.4.20 121a92cc3fc0 13 months ago 85MB
1. Generate the organization certificates (on 192.168.204.129 / orderer0.ikotlin.com)
1.1 Create the directory and a config template
$ mkdir appkafka && cd appkafka
$ cryptogen showtemplate > crypto-config.yaml
1.2 Write the configuration file that defines the organizations
# crypto-config.yaml
# ---------------------------------------------------------------------------
# "OrdererOrgs" - Definition of organizations managing orderer nodes
# ---------------------------------------------------------------------------
OrdererOrgs:
# ---------------------------------------------------------------------------
# Orderer
# ---------------------------------------------------------------------------
- Name: Orderer
Domain: ikotlin.com
EnableNodeOUs: true
Template:
Count: 3
# ---------------------------------------------------------------------------
# "PeerOrgs" - Definition of organizations managing peer nodes
# ---------------------------------------------------------------------------
PeerOrgs:
- Name: OrgAndroid
Domain: android.ikotlin.com
EnableNodeOUs: true
Template:
Count: 1
Users:
Count: 1
- Name: OrgiOS
Domain: ios.ikotlin.com
EnableNodeOUs: true
Template:
Count: 1
Users:
Count: 1
1.3 Generate the organization certificate directories
$ cryptogen generate --config=crypto-config.yaml
.
├── ordererOrganizations
│ └── ikotlin.com
│ ├── ca
│ ├── msp
│ ├── orderers
│ ├── tlsca
│ └── users
└── peerOrganizations
├── android.ikotlin.com
│ ├── ca
│ ├── msp
│ ├── peers
│ ├── tlsca
│ └── users
└── ios.ikotlin.com
├── ca
├── msp
├── peers
├── tlsca
└── users
2. Generate the genesis block and channel artifacts
2.1 Edit configtx.yaml
# configtx.yaml
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
---
################################################################################
#
# Section: Organizations
#
################################################################################
Organizations:
- &OrdererOrg
Name: OrdererOrg
ID: OrdererMSP
MSPDir: crypto-config/ordererOrganizations/ikotlin.com/msp
Policies:
Readers:
Type: Signature
Rule: "OR('OrdererMSP.member')"
Writers:
Type: Signature
Rule: "OR('OrdererMSP.member')"
Admins:
Type: Signature
Rule: "OR('OrdererMSP.admin')"
- &org_iOS
Name: OrgiOSMSP
ID: OrgiOSMSP
MSPDir: crypto-config/peerOrganizations/ios.ikotlin.com/msp
Policies:
Readers:
Type: Signature
Rule: "OR('OrgiOSMSP.admin', 'OrgiOSMSP.peer', 'OrgiOSMSP.client')"
Writers:
Type: Signature
Rule: "OR('OrgiOSMSP.admin', 'OrgiOSMSP.client')"
Admins:
Type: Signature
Rule: "OR('OrgiOSMSP.admin')"
AnchorPeers:
- Host: peer0.ios.ikotlin.com
Port: 7051
- &org_Android
Name: OrgAndroidMSP
ID: OrgAndroidMSP
MSPDir: crypto-config/peerOrganizations/android.ikotlin.com/msp
Policies:
Readers:
Type: Signature
Rule: "OR('OrgAndroidMSP.admin', 'OrgAndroidMSP.peer', 'OrgAndroidMSP.client')"
Writers:
Type: Signature
Rule: "OR('OrgAndroidMSP.admin', 'OrgAndroidMSP.client')"
Admins:
Type: Signature
Rule: "OR('OrgAndroidMSP.admin')"
AnchorPeers:
- Host: peer0.android.ikotlin.com
Port: 7051
################################################################################
#
# SECTION: Capabilities
#
################################################################################
Capabilities:
Channel: &ChannelCapabilities
V1_4_3: true
V1_3: false
V1_1: false
Orderer: &OrdererCapabilities
V1_4_2: true
V1_1: false
Application: &ApplicationCapabilities
V1_4_2: true
V1_3: false
V1_2: false
V1_1: false
################################################################################
#
# SECTION: Application
#
################################################################################
Application: &ApplicationDefaults
Organizations:
Policies:
Readers:
Type: ImplicitMeta
Rule: "ANY Readers"
Writers:
Type: ImplicitMeta
Rule: "ANY Writers"
Admins:
Type: ImplicitMeta
Rule: "MAJORITY Admins"
Capabilities:
<<: *ApplicationCapabilities
################################################################################
#
# SECTION: Orderer
#
################################################################################
Orderer: &OrdererDefaults
OrdererType: kafka
    Addresses:    # orderer node addresses
- orderer0.ikotlin.com:7050
- orderer1.ikotlin.com:7050
- orderer2.ikotlin.com:7050
BatchTimeout: 2s
BatchSize:
MaxMessageCount: 100
AbsoluteMaxBytes: 32 MB
PreferredMaxBytes: 512 KB
Kafka:
        Brokers:    # Kafka brokers
- 192.168.204.145:9092
- 192.168.204.143:9092
- 192.168.204.144:9092
- 192.168.204.146:9092
Organizations:
Policies:
Readers:
Type: ImplicitMeta
Rule: "ANY Readers"
Writers:
Type: ImplicitMeta
Rule: "ANY Writers"
Admins:
Type: ImplicitMeta
Rule: "MAJORITY Admins"
BlockValidation:
Type: ImplicitMeta
Rule: "ANY Writers"
################################################################################
#
# CHANNEL
#
################################################################################
Channel: &ChannelDefaults
Policies:
Readers:
Type: ImplicitMeta
Rule: "ANY Readers"
Writers:
Type: ImplicitMeta
Rule: "ANY Writers"
Admins:
Type: ImplicitMeta
Rule: "MAJORITY Admins"
Capabilities:
<<: *ChannelCapabilities
################################################################################
#
# Profile
#
################################################################################
Profiles:
AppKafkaOrgsOrdererGenesis:
<<: *ChannelDefaults
Orderer:
<<: *OrdererDefaults
Organizations:
- *OrdererOrg
Capabilities:
<<: *OrdererCapabilities
Consortiums:
SampleConsortium:
Organizations:
- *org_iOS
- *org_Android
AppKafkaOrgsChannel:
Consortium: SampleConsortium
<<: *ChannelDefaults
Application:
<<: *ApplicationDefaults
Organizations:
- *org_iOS
- *org_Android
Capabilities:
<<: *ApplicationCapabilities
SampleDevModeKafka:
<<: *ChannelDefaults
Capabilities:
<<: *ChannelCapabilities
Orderer:
<<: *OrdererDefaults
OrdererType: kafka
Kafka:
Brokers:
- 192.168.204.145:9092
- 192.168.204.143:9092
- 192.168.204.144:9092
- 192.168.204.146:9092
Organizations:
- *OrdererOrg
Capabilities:
<<: *OrdererCapabilities
Application:
<<: *ApplicationDefaults
Organizations:
- <<: *OrdererOrg
Consortiums:
SampleConsortium:
Organizations:
- *org_iOS
- *org_Android
2.2 Generate the genesis block and the channel creation transaction
# Generate the genesis block
$ configtxgen -profile AppKafkaOrgsOrdererGenesis -channelID byfn-sys-channel -outputBlock ./channel-artifacts/genesis.block
# Generate the channel creation transaction
$ configtxgen -profile AppKafkaOrgsChannel -outputCreateChannelTx ./channel-artifacts/appkafkachannel.tx -channelID appkafkachannel
[root@localhost channel-artifacts]# tree
.
├── appkafkachannel.tx
└── genesis.block
2.3 Generate the anchor peer updates
$ configtxgen -profile AppKafkaOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/OrgiOSAnchor.tx -channelID appkafkachannel -asOrg OrgiOSMSP
$ configtxgen -profile AppKafkaOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/OrgAndroidAnchor.tx -channelID appkafkachannel -asOrg OrgAndroidMSP
[root@localhost appkafka]# tree channel-artifacts/
channel-artifacts/
├── appkafkachannel.tx
├── genesis.block
├── OrgAndroidAnchor.tx
└── OrgiOSAnchor.tx
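The generated artifacts can be sanity-checked before they are copied to the other machines; as an optional step, configtxgen can dump them back as JSON:
$ configtxgen -inspectBlock ./channel-artifacts/genesis.block
$ configtxgen -inspectChannelCreateTx ./channel-artifacts/appkafkachannel.tx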
3. Configure the ZooKeeper servers
3.1 Key settings
ZOO_MY_ID=1 -> this server's ID within the ensemble; it must be unique and lie between 1 and 255
ZOO_SERVERS -> the list of servers that make up the ZooKeeper ensemble
Compose files:
- zookeeper1.yaml
# zookeeper1.yaml
version: '2'
services:
  zookeeper1:                  # service name (your choice)
    container_name: zookeeper1 # container name (your choice)
    hostname: zookeeper1       # hostname (your choice); must map to this node's IP, see extra_hosts
    image: hyperledger/fabric-zookeeper:latest
    restart: always            # always restart the container
    environment:
      # The ID must be unique within the ensemble and lie between 1 and 255.
      - ZOO_MY_ID=1
      # server.x=hostname:port1:port2
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
ports:
- 2181:2181
- 2888:2888
- 3888:3888
extra_hosts:
- "zookeeper1:192.168.204.149"
- "zookeeper2:192.168.204.150"
- "zookeeper3:192.168.204.151"
- "kafka1:192.168.204.145"
- "kafka2:192.168.204.143"
- "kafka3:192.168.204.144"
- "kafka4:192.168.204.146"
- zookeeper2.yaml
# zookeeper2.yaml
version: '2'
services:
  zookeeper2:                  # service name (your choice)
    container_name: zookeeper2 # container name (your choice)
    hostname: zookeeper2       # hostname (your choice); must map to this node's IP, see extra_hosts
    image: hyperledger/fabric-zookeeper:latest
    restart: always            # always restart the container
    environment:
      # The ID must be unique within the ensemble and lie between 1 and 255.
      - ZOO_MY_ID=2
      # server.x=hostname:port1:port2
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
ports:
- 2181:2181
- 2888:2888
- 3888:3888
extra_hosts:
- "zookeeper1:192.168.204.149"
- "zookeeper2:192.168.204.150"
- "zookeeper3:192.168.204.151"
- "kafka1:192.168.204.145"
- "kafka2:192.168.204.143"
- "kafka3:192.168.204.144"
- "kafka4:192.168.204.146"
- zookeeper3.yaml
# zookeeper3.yaml
version: '2'
services:
  zookeeper3:                  # service name (your choice)
    container_name: zookeeper3 # container name (your choice)
    hostname: zookeeper3       # hostname (your choice); must map to this node's IP, see extra_hosts
    image: hyperledger/fabric-zookeeper:latest
    restart: always            # always restart the container
    environment:
      # The ID must be unique within the ensemble and lie between 1 and 255.
      - ZOO_MY_ID=3
      # server.x=hostname:port1:port2
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
ports:
- 2181:2181
- 2888:2888
- 3888:3888
extra_hosts:
- "zookeeper1:192.168.204.149"
- "zookeeper2:192.168.204.150"
- "zookeeper3:192.168.204.151"
- "kafka1:192.168.204.145"
- "kafka2:192.168.204.143"
- "kafka3:192.168.204.144"
- "kafka4:192.168.204.146"
4. Configure the Kafka cluster
Compose files. The replication settings follow the Fabric guidance for Kafka-based ordering: min.insync.replicas (2) is smaller than default.replication.factor (3), which in turn is smaller than the number of brokers (4), so the channels keep working even if one broker goes down.
- kafka1.yaml
# kafka1.yaml
version: '2'
services:
kafka1:
container_name: kafka1
hostname: kafka1
image: hyperledger/fabric-kafka:latest
restart: always
environment:
# broker.id
- KAFKA_BROKER_ID=1
- KAFKA_MIN_INSYNC_REPLICAS=2
- KAFKA_DEFAULT_REPLICATION_FACTOR=3
- KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
# 33 * 1024 * 1024 B
- KAFKA_MESSAGE_MAX_BYTES=34603008
- KAFKA_REPLICA_FETCH_MAX_BYTES=34603008 # 33 * 1024 * 1024 B
- KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
- KAFKA_LOG_RETENTION_MS=-1
- KAFKA_HEAP_OPTS=-Xmx256M -Xms128M
ports:
- 9092:9092
extra_hosts:
- "zookeeper1:192.168.204.149"
- "zookeeper2:192.168.204.150"
- "zookeeper3:192.168.204.151"
- "kafka1:192.168.204.145"
- "kafka2:192.168.204.143"
- "kafka3:192.168.204.144"
- "kafka4:192.168.204.146"
- kafka2.yaml
# kafka2.yaml
version: '2'
services:
kafka2:
container_name: kafka2
hostname: kafka2
image: hyperledger/fabric-kafka:latest
restart: always
environment:
# broker.id
- KAFKA_BROKER_ID=2
- KAFKA_MIN_INSYNC_REPLICAS=2
- KAFKA_DEFAULT_REPLICATION_FACTOR=3
- KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
# 33 * 1024 * 1024 B
- KAFKA_MESSAGE_MAX_BYTES=34603008
- KAFKA_REPLICA_FETCH_MAX_BYTES=34603008 # 33 * 1024 * 1024 B
- KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
- KAFKA_LOG_RETENTION_MS=-1
- KAFKA_HEAP_OPTS=-Xmx256M -Xms128M
ports:
- 9092:9092
extra_hosts:
- "zookeeper1:192.168.204.149"
- "zookeeper2:192.168.204.150"
- "zookeeper3:192.168.204.151"
- "kafka1:192.168.204.145"
- "kafka2:192.168.204.143"
- "kafka3:192.168.204.144"
- "kafka4:192.168.204.146"
- kafka3.yaml
# kafka3.yaml
version: '2'
services:
kafka3:
container_name: kafka3
hostname: kafka3
image: hyperledger/fabric-kafka:latest
restart: always
environment:
# broker.id
- KAFKA_BROKER_ID=3
- KAFKA_MIN_INSYNC_REPLICAS=2
- KAFKA_DEFAULT_REPLICATION_FACTOR=3
- KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
# 33 * 1024 * 1024 B
- KAFKA_MESSAGE_MAX_BYTES=34603008
- KAFKA_REPLICA_FETCH_MAX_BYTES=34603008 # 33 * 1024 * 1024 B
- KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
- KAFKA_LOG_RETENTION_MS=-1
- KAFKA_HEAP_OPTS=-Xmx256M -Xms128M
ports:
- 9092:9092
extra_hosts:
- "zookeeper1:192.168.204.149"
- "zookeeper2:192.168.204.150"
- "zookeeper3:192.168.204.151"
- "kafka1:192.168.204.145"
- "kafka2:192.168.204.143"
- "kafka3:192.168.204.144"
- "kafka4:192.168.204.146"
- kafka4.yaml
# kafka4.yaml
version: '2'
services:
kafka4:
container_name: kafka4
hostname: kafka4
image: hyperledger/fabric-kafka:latest
restart: always
environment:
# broker.id
- KAFKA_BROKER_ID=4
- KAFKA_MIN_INSYNC_REPLICAS=2
- KAFKA_DEFAULT_REPLICATION_FACTOR=3
- KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
# 33 * 1024 * 1024 B
- KAFKA_MESSAGE_MAX_BYTES=34603008
- KAFKA_REPLICA_FETCH_MAX_BYTES=34603008 # 33 * 1024 * 1024 B
- KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
- KAFKA_LOG_RETENTION_MS=-1
- KAFKA_HEAP_OPTS=-Xmx256M -Xms128M
ports:
- 9092:9092
extra_hosts:
- "zookeeper1:192.168.204.149"
- "zookeeper2:192.168.204.150"
- "zookeeper3:192.168.204.151"
- "kafka1:192.168.204.145"
- "kafka2:192.168.204.143"
- "kafka3:192.168.204.144"
- "kafka4:192.168.204.146"
5. Configure the orderer nodes
- orderer.yaml
# orderer.yaml
version: '2'
services:
orderer0.ikotlin.com:
container_name: orderer0.ikotlin.com
image: hyperledger/fabric-orderer:latest
environment:
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=appkafka_default
- ORDERER_GENERAL_LOGLEVEL=debug
- ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
- ORDERER_GENERAL_LISTENPORT=7050
- ORDERER_GENERAL_GENESISMETHOD=file
- ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP # must match the MSP ID defined in configtx.yaml
- ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
# enabled TLS
- ORDERER_GENERAL_TLS_ENABLED=true
- ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
- ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
- ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
- ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
- ORDERER_KAFKA_RETRY_LONGTOTAL=100s
- ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
- ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
- ORDERER_KAFKA_VERBOSE=true
- ORDERER_KAFKA_BROKERS=[192.168.204.145:9092,192.168.204.143:9092,192.168.204.144:9092,192.168.204.146:9092]
working_dir: /opt/gopath/src/github.com/hyperledger/fabric
command: orderer
volumes:
- ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
- ./crypto-config/ordererOrganizations/ikotlin.com/orderers/orderer0.ikotlin.com/msp:/var/hyperledger/orderer/msp
- ./crypto-config/ordererOrganizations/ikotlin.com/orderers/orderer0.ikotlin.com/tls/:/var/hyperledger/orderer/tls
networks:
default:
aliases:
- appkafka
ports:
- 7050:7050
extra_hosts:
- "kafka1:192.168.204.145"
- "kafka2:192.168.204.143"
- "kafka3:192.168.204.144"
- "kafka4:192.168.204.146"
- orderer1.yaml
# orderer1.yaml
version: '2'
services:
orderer1.ikotlin.com:
container_name: orderer1.ikotlin.com
image: hyperledger/fabric-orderer:latest
environment:
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=appkafka_default
- ORDERER_GENERAL_LOGLEVEL=debug
- ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
- ORDERER_GENERAL_LISTENPORT=7050
- ORDERER_GENERAL_GENESISMETHOD=file
- ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP # must match the MSP ID defined in configtx.yaml
- ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
# enabled TLS
- ORDERER_GENERAL_TLS_ENABLED=true
- ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
- ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
- ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
- ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
- ORDERER_KAFKA_RETRY_LONGTOTAL=100s
- ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
- ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
- ORDERER_KAFKA_VERBOSE=true
- ORDERER_KAFKA_BROKERS=[192.168.204.145:9092,192.168.204.143:9092,192.168.204.144:9092,192.168.204.146:9092]
working_dir: /opt/gopath/src/github.com/hyperledger/fabric
command: orderer
volumes:
- ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
- ./crypto-config/ordererOrganizations/ikotlin.com/orderers/orderer1.ikotlin.com/msp:/var/hyperledger/orderer/msp
- ./crypto-config/ordererOrganizations/ikotlin.com/orderers/orderer1.ikotlin.com/tls/:/var/hyperledger/orderer/tls
networks:
default:
aliases:
- appkafka
ports:
- 7050:7050
extra_hosts:
- "kafka1:192.168.204.145"
- "kafka2:192.168.204.143"
- "kafka3:192.168.204.144"
- "kafka4:192.168.204.146"
- orderer2.yaml
# orderer2.yaml
version: '2'
services:
orderer2.ikotlin.com:
container_name: orderer2.ikotlin.com
image: hyperledger/fabric-orderer:latest
environment:
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=appkafka_default
- ORDERER_GENERAL_LOGLEVEL=debug
- ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
- ORDERER_GENERAL_LISTENPORT=7050
- ORDERER_GENERAL_GENESISMETHOD=file
- ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP # must match the MSP ID defined in configtx.yaml
- ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
# enabled TLS
- ORDERER_GENERAL_TLS_ENABLED=true
- ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
- ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
- ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
- ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
- ORDERER_KAFKA_RETRY_LONGTOTAL=100s
- ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
- ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
- ORDERER_KAFKA_VERBOSE=true
- ORDERER_KAFKA_BROKERS=[192.168.204.145:9092,192.168.204.143:9092,192.168.204.144:9092,192.168.204.146:9092]
working_dir: /opt/gopath/src/github.com/hyperledger/fabric
command: orderer
volumes:
- ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
- ./crypto-config/ordererOrganizations/ikotlin.com/orderers/orderer2.ikotlin.com/msp:/var/hyperledger/orderer/msp
- ./crypto-config/ordererOrganizations/ikotlin.com/orderers/orderer2.ikotlin.com/tls/:/var/hyperledger/orderer/tls
networks:
default:
aliases:
- appkafka
ports:
- 7050:7050
extra_hosts:
- "kafka1:192.168.204.145"
- "kafka2:192.168.204.143"
- "kafka3:192.168.204.144"
- "kafka4:192.168.204.146"
6. Start the cluster: ZooKeeper -> Kafka -> orderer
6.1 Start the ZooKeeper cluster
| name | ip |
| --- | --- |
| zookeeper1 | 192.168.204.149 |
| zookeeper2 | 192.168.204.150 |
| zookeeper3 | 192.168.204.151 |
- zookeeper1:192.168.204.149
mkdir appkafka && cd appkafka
docker-compose -f zookeeper1.yaml up -d
- zookeeper2:192.168.204.150
mkdir appkafka && cd appkafka
docker-compose -f zookeeper2.yaml up -d
- zookeeper3:192.168.204.151
mkdir appkafka && cd appkafka
docker-compose -f zookeeper3.yaml up -d
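Before starting Kafka, it is worth confirming that the ensemble has elected a leader. A minimal check using ZooKeeper's four-letter commands (assumes nc is available on a machine that can reach port 2181 on all three nodes):
$ for zk in 192.168.204.149 192.168.204.150 192.168.204.151; do echo -n "$zk "; echo srvr | nc $zk 2181 | grep Mode; done
# expected: one node reports "Mode: leader", the other two "Mode: follower"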
6.2 Start the Kafka cluster
- Kafka1:192.168.204.145
mkdir appkafka && cd appkafka
docker-compose -f kafka1.yaml up -d
- Kafka2:192.168.204.143
mkdir appkafka && cd appkafka
docker-compose -f kafka2.yaml up -d
- Kafka3:192.168.204.144
mkdir appkafka && cd appkafka
docker-compose -f kafka3.yaml up -d
- Kafka4:192.168.204.146
mkdir appkafka && cd appkafka
docker-compose -f kafka4.yaml up -d
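A quick way to confirm that each broker came up (run on the corresponding Kafka host; the grep string is the standard Kafka startup log line, and nc is assumed to be installed):
$ docker logs kafka1 2>&1 | grep "started (kafka.server.KafkaServer)"
$ nc -zv 192.168.204.145 9092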
6.3 Start the orderer nodes
| name | ip |
| --- | --- |
| orderer0 | 192.168.204.129 |
| orderer1 | 192.168.204.147 |
| orderer2 | 192.168.204.148 |
- orderer0:192.168.204.129
mkdir appkafka && cd appkafka
docker-compose -f orderer.yaml up -d
- orderer1:192.168.204.147
mkdir appkafka && cd appkafka
# Copy the certificate material generated on orderer0
scp -r root@192.168.204.129:/root/appkafka/crypto-config/ordererOrganizations/ ./crypto-config/ordererOrganizations
# Put the prepared orderer1.yaml compose file in this directory, then start the container with docker-compose
docker-compose -f orderer1.yaml up -d
- orderer2:192.168.204.148
mkdir appkafka && cd appkafka
# Copy the certificate material generated on orderer0
scp -r root@192.168.204.129:/root/appkafka/crypto-config/ordererOrganizations/ ./crypto-config/ordererOrganizations
# Put the prepared orderer2.yaml compose file in this directory, then start the container with docker-compose
docker-compose -f orderer2.yaml up -d
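Before moving on to the peers, check that each orderer container stays up and that its log shows no Kafka connection errors (the orderer keeps retrying the brokers according to the ORDERER_KAFKA_RETRY_* settings above):
$ docker ps --filter name=orderer0.ikotlin.com
$ docker logs orderer0.ikotlin.com 2>&1 | grep -i error | tail -n 20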
6.4 Start the peer nodes
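The steps below start the peer and a cli container from a compose file named docker-compose-and.yaml, which is not reproduced in this section. As a rough guide only, here is a minimal sketch of what it might contain, modeled on the peer/cli definitions in fabric-samples; every environment variable, mount path and port beyond the names already used in this post is an assumption and must be adapted to your own layout.
# docker-compose-and.yaml (sketch; values modeled on fabric-samples, adapt as needed)
version: '2'
services:
  peer0.android.ikotlin.com:
    container_name: peer0.android.ikotlin.com
    image: hyperledger/fabric-peer:latest
    environment:
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=appkafka_default
      - CORE_PEER_ID=peer0.android.ikotlin.com
      - CORE_PEER_ADDRESS=peer0.android.ikotlin.com:7051
      - CORE_PEER_LOCALMSPID=OrgAndroidMSP
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: peer node start
    volumes:
      - /var/run/:/host/var/run/
      - ./crypto-config/peerOrganizations/android.ikotlin.com/peers/peer0.android.ikotlin.com/msp:/etc/hyperledger/fabric/msp
      - ./crypto-config/peerOrganizations/android.ikotlin.com/peers/peer0.android.ikotlin.com/tls:/etc/hyperledger/fabric/tls
    ports:
      - 7051:7051
    extra_hosts:
      - "orderer0.ikotlin.com:192.168.204.129"
      - "orderer1.ikotlin.com:192.168.204.147"
      - "orderer2.ikotlin.com:192.168.204.148"
  cli:
    container_name: cli
    image: hyperledger/fabric-tools:latest
    tty: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.android.ikotlin.com:7051
      - CORE_PEER_LOCALMSPID=OrgAndroidMSP
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/android.ikotlin.com/peers/peer0.android.ikotlin.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/android.ikotlin.com/users/Admin@android.ikotlin.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: /bin/bash
    volumes:
      - ./chaincode/:/opt/gopath/src/github.com/chaincode
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    extra_hosts:
      - "peer0.android.ikotlin.com:192.168.204.152"
      - "peer0.ios.ikotlin.com:192.168.204.153"
      - "orderer0.ikotlin.com:192.168.204.129"
      - "orderer1.ikotlin.com:192.168.204.147"
      - "orderer2.ikotlin.com:192.168.204.148"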
- peer0.android.ikotlin.com:192.168.204.152
# Copy the whole working directory from orderer0
$ scp -r root@192.168.204.129:/root/appkafka/ ./
$ cd appkafka/
# Start the peer and cli containers
$ docker-compose -f docker-compose-and.yaml up -d
# log
#[root@localhost appkafka]# docker-compose -f docker-compose-and.yaml up -d
#Creating network "appkafka_default" with the default driver
#Creating peer0.android.ikotlin.com ... done
#Creating cli ... done
# Enter the cli container
$ docker exec -it cli bash
# Create the channel
$ peer channel create -o orderer0.ikotlin.com:7050 -c appkafkachannel -f ./channel-artifacts/appkafkachannel.tx --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/ikotlin.com/msp/tlscacerts/tlsca.ikotlin.com-cert.pem
# log
#xxxx-04-29 09:57:45.275 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
#xxxx-04-29 09:57:45.553 UTC [cli.common] readBlock -> INFO 002 Got status: &{SERVICE_UNAVAILABLE}
#xxxx-04-29 09:57:45.558 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
#xxxx-04-29 09:57:45.760 UTC [cli.common] readBlock -> INFO 004 Got status: &{SERVICE_UNAVAILABLE}
#xxxx-04-29 09:57:45.765 UTC [channelCmd] InitCmdFactory -> INFO 005 Endorser and orderer connections initialized
#xxxx-04-29 09:57:45.967 UTC [cli.common] readBlock -> INFO 006 Got status: &{SERVICE_UNAVAILABLE}
#xxxx-04-29 09:57:45.972 UTC [channelCmd] InitCmdFactory -> INFO 007 Endorser and orderer connections initialized
#xxxx-04-29 09:57:46.174 UTC [cli.common] readBlock -> INFO 008 Got status: &{SERVICE_UNAVAILABLE}
#xxxx-04-29 09:57:46.178 UTC [channelCmd] InitCmdFactory -> INFO 009 Endorser and orderer connections initialized
#xxxx-04-29 09:57:46.384 UTC [cli.common] readBlock -> INFO 00a Received block: 0
# Join the channel
$ peer channel join -b appkafkachannel.block
# xxxx-04-29 09:58:52.936 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
# xxxx-04-29 09:58:52.987 UTC [channelCmd] executeJoin -> INFO 002 Successfully submitted proposal to join channel
# Install the chaincode
$ peer chaincode install -n appkafkacc -v 1.0 -l golang -p github.com/chaincode
#xxxx-04-29 10:02:57.969 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 001 Using default escc
#xxxx-04-29 10:02:57.969 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 002 Using default vscc
#xxxx-04-29 10:02:58.681 UTC [chaincodeCmd] install -> INFO 003 Installed remotely response:<status:200 payload:"OK" >
# Instantiate the chaincode
$ peer chaincode instantiate -o orderer0.ikotlin.com:7050 --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/ikotlin.com/msp/tlscacerts/tlsca.ikotlin.com-cert.pem -C appkafkachannel -n appkafkacc -v 1.0 -l golang -c '{"Args":["init","a","100","b","200"]}' -P "AND ('OrgiOSMSP.member', 'OrgAndroidMSP.member')"
#log
#xxxx-04-29 10:06:49.330 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 001 Using default escc
#xxxx-04-29 10:06:49.330 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 002 Using default vscc
# Package the chaincode; installing from source on each peer separately can lead to fingerprint mismatches between peers
$ peer chaincode package -n appkafkacc -p github.com/chaincode -v 1.0 appkafkacc.1.0.out
#log
# xxxx-04-29 10:08:19.815 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 001 Using default escc
# xxxx-04-29 10:08:19.815 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 002 Using default vscc
- In a new session on peer0.android.ikotlin.com (outside the cli container)
# Copy the appkafkachannel.block channel block and the packaged chaincode out of the cli container
$ cd appkafka/channel-artifacts/
$ docker cp 724933cafcc3:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts/appkafkachannel.block ./
$ docker cp 724933cafcc3:/opt/gopath/src/github.com/hyperledger/fabric/peer/appkafkacc.1.0.out ./
[root@localhost appkafka]# ls channel-artifacts/
appkafkacc.1.0.out appkafkachannel.block genesis.block OrgAndroidAnchor.tx OrgiOSAnchor.tx
appkafkacc.1.0.out: the packaged chaincode
appkafkachannel.block: the channel block file
- peer0.ios.ikotlin.com:192.168.204.153
# Copy the working directory from the Android peer host; note that it now contains appkafkachannel.block and appkafkacc.1.0.out
$ scp -r root@192.168.204.152:/root/appkafka/ ./
# Enter the cli container
$ docker exec -it cli bash
$ peer channel join -b ./channel-artifacts/appkafkachannel.block
#log
#xxxx-04-29 10:21:19.256 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
#xxxx-04-29 10:21:19.325 UTC [channelCmd] executeJoin -> INFO 002 Successfully submitted proposal to join channel
# Install the packaged chaincode
$ peer chaincode install ./channel-artifacts/appkafkacc.1.0.out
# xxxx-04-29 10:21:52.775 UTC [chaincodeCmd] install -> INFO 001 Installed remotely response:<status:200 payload:"OK" >
# Query the chaincode
peer chaincode query -C appkafkachannel -n appkafkacc -c '{"Args":["query","a"]}'
100
# ....
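# (Hedged sketch, not taken from the original post.) The elided steps above would typically include
# invoking the chaincode. Because it was instantiated with an AND('OrgiOSMSP.member','OrgAndroidMSP.member')
# endorsement policy, an invoke has to collect endorsements from a peer of each org. Assuming the
# example02-style chaincode implied by the init/query arguments and the cli mount layout assumed earlier
# (all paths here are assumptions; adjust to your setup), an invoke might look like:
$ peer chaincode invoke -o orderer0.ikotlin.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/ikotlin.com/msp/tlscacerts/tlsca.ikotlin.com-cert.pem -C appkafkachannel -n appkafkacc --peerAddresses peer0.ios.ikotlin.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/ios.ikotlin.com/peers/peer0.ios.ikotlin.com/tls/ca.crt --peerAddresses peer0.android.ikotlin.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/android.ikotlin.com/peers/peer0.android.ikotlin.com/tls/ca.crt -c '{"Args":["invoke","a","b","10"]}'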
# Fetch the latest config block
$ peer channel fetch config config_block.pb -o orderer0.ikotlin.com:7050 -c appkafkachannel --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/ikotlin.com/orderers/orderer0.ikotlin.com/msp/tlscacerts/tlsca.ikotlin.com-cert.pem
# log
#xxxx-04-29 10:31:55.771 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
#xxxx-04-29 10:31:55.795 UTC [cli.common] readBlock -> INFO 002 Received block: 5
#xxxx-04-29 10:31:55.798 UTC [cli.common] readBlock -> INFO 003 Received block: 0
#xxxx-04-29 10:31:55.799 UTC [channelCmd] fetch -> INFO 004 Retrieving last config block: 0