1. Deploy ZooKeeper and Kafka separately on a virtual machine, a physical machine, or Docker, then reproduce the demo from the video using kafka-console-consumer.sh and kafka-console-producer.sh and take screenshots
1. Hostname of this machine:
[hadoop@NIE-00 zookeeper-3.4.6]$ hostname
NIE-00
2. Standalone single-node ZooKeeper configuration:
【a】. Extract zookeeper-3.4.6 to /export/server/zookeeper-3.4.6
【b】. mkdir data
【c】. cp conf/zoo_sample.cfg conf/zoo.cfg
【d】. vim conf/zoo.cfg and set the data directory (a minimal resulting zoo.cfg is sketched below):
dataDir=/export/server/zookeeper-3.4.6/data
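For reference, a minimal standalone zoo.cfg could look like the lines below; tickTime, initLimit, syncLimit and clientPort are simply the defaults carried over from zoo_sample.cfg, and only dataDir is changed for this setup:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/export/server/zookeeper-3.4.6/data
clientPort=2181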
3. Start ZooKeeper:
bin/zkServer.sh start    # after starting, check the state with the status sub-command, as shown below
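On a healthy standalone instance the status check should report standalone mode; the output below is typical for 3.4.6, though the exact wording may vary with the environment:
[hadoop@NIE-00 zookeeper-3.4.6]$ bin/zkServer.sh status
JMX enabled by default
Using config: /export/server/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: standalone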
4. Extract Kafka and rename the directory to kafka0.8.2.2:
tar zxvf kafka_2.11-0.8.2.2.tgz
mv kafka_2.11-0.8.2.2 kafka0.8.2.2
5. Configure and start the Kafka server:
vim config/server.properties and change the parameter (a minimal full configuration is sketched after this step):
log.dirs=/export/server/kafka0.8.2.2/dtlogs
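Besides log.dirs, the broker also needs to know where ZooKeeper is. A minimal set of server.properties entries for this single-broker setup might look as follows; broker.id and port are the defaults from the shipped template, and zookeeper.connect is assumed to point at the standalone ZooKeeper configured above:
broker.id=0
port=9092
log.dirs=/export/server/kafka0.8.2.2/dtlogs
zookeeper.connect=NIE-00:2181
The broker can then be started in the foreground with:
bin/kafka-server-start.sh config/server.properties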
6. #1【Create topic test1】
bin/kafka-topics.sh --create --zookeeper NIE-00:2181 --partitions 2 --replication-factor 1 --topic test1
bin/kafka-topics.sh --list --zookeeper NIE-00:2181
6. #2【Produce messages to topic test1】:
Start a consumer terminal that consumes the test1 topic, and start a producer terminal that sends messages to the test1 topic (a sample session is sketched after the two commands):
bin/kafka-console-producer.sh --broker-list NIE-00:9092 --topic test1
bin/kafka-console-consumer.sh --zookeeper NIE-00:2181 --topic test1
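For illustration only, a session might look like the following; the message contents here are made up for the example, but whatever is typed into the producer terminal should show up in the consumer terminal:
Producer terminal:
[hadoop@NIE-00 kafka0.8.2.2]$ bin/kafka-console-producer.sh --broker-list NIE-00:9092 --topic test1
hello kafka
this is message 2
Consumer terminal:
[hadoop@NIE-00 kafka0.8.2.2]$ bin/kafka-console-consumer.sh --zookeeper NIE-00:2181 --topic test1
hello kafka
this is message 2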
6. #3【View the metadata of topic test1】
bin/kafka-topics.sh --describe --zookeeper NIE-00:2181 --topic test1
2. Reproduce the kafka-replay-log-producer.sh demo from the video and take screenshots
1. Create a new topic test2 for replaying the messages of test1:
[hadoop@NIE-00 kafka0.8.2.2]$ bin/kafka-topics.sh --create --zookeeper NIE-00:2181 --partitions 2 --replication-factor 1 --topic test2
Created topic "test2".
[hadoop@NIE-00 kafka0.8.2.2]$ bin/kafka-topics.sh --list --zookeeper NIE-00:2181
test1
test2
[hadoop@NIE-00 kafka0.8.2.2]$ bin/kafka-topics.sh --describe --zookeeper NIE-00:2181 --topic test2
Topic:test2 PartitionCount:2 ReplicationFactor:1 Configs:
Topic: test2 Partition: 0 Leader: 0 Replicas: 0 Isr: 0
Topic: test2 Partition: 1 Leader: 0 Replicas: 0 Isr: 0
[hadoop@NIE-00 kafka0.8.2.2]$
2. Replay the messages of test1 into test2, then consume test2 from the beginning to verify the replay:
bin/kafka-replay-log-producer.sh --broker-list NIE-00:9092 --zookeeper NIE-00:2181 --inputtopic test1 --outputtopic test2
bin/kafka-console-consumer.sh --zookeeper NIE-00:2181 --topic test2 --from-beginning
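As an extra sanity check (not part of the original demo), the latest offsets of the two topics can be compared with the GetOffsetShell tool that ships with this Kafka version; if the replay copied everything, the per-partition offsets of test2 should add up to the same total as those of test1 (output is roughly topic:partition:offset):
bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list NIE-00:9092 --topic test1 --time -1
bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list NIE-00:9092 --topic test2 --time -1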