Quickly Building a KAFKA Test/Development Environment


Author: 鸟它鸟 | Published 2018-12-09 23:46

    First you need Docker installed; then we can take off.
    How do you install Docker? Just install the package. It's simple enough; I'll leave that part to you.

    Before we play, let's check that Docker itself is working:

    ljpMacBookPro:~ liangjiapeng$ docker version
    Client: Docker Engine - Community
     Version:           18.09.0
     API version:       1.39
     Go version:        go1.10.4
     Git commit:        4d60db4
     Built:             Wed Nov  7 00:47:43 2018
     OS/Arch:           darwin/amd64
     Experimental:      false
    
    Server: Docker Engine - Community
     Engine:
      Version:          18.09.0
      API version:      1.39 (minimum version 1.12)
      Go version:       go1.10.4
      Git commit:       4d60db4
      Built:            Wed Nov  7 00:55:00 2018
      OS/Arch:          linux/amd64
      Experimental:     false
    ljpMacBookPro:~ liangjiapeng$
    
    ljpMacBookPro:~ liangjiapeng$ docker ps -a
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
    

    Once Docker checks out, let's take off.

    Copy and run the following command:

    docker run --rm -it \
    -p 2181:2181 -p 3030:3030 -p 8081:8081 \
    -p 8082:8082 -p 8083:8083 -p 9092:9092 \
    -e ADV_HOST=127.0.0.1 \
    landoop/fast-data-dev
    
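The `--rm -it` form above ties the container to your terminal. If you would rather keep the terminal free, a detached variant works too; the container name `kafka-dev` below is just an illustrative choice:

```shell
# Run detached (-d) under a name so the shell stays free;
# the name "kafka-dev" is arbitrary.
docker run -d --name kafka-dev \
  -p 2181:2181 -p 3030:3030 -p 8081:8081 \
  -p 8082:8082 -p 8083:8083 -p 9092:9092 \
  -e ADV_HOST=127.0.0.1 \
  landoop/fast-data-dev

# Follow the startup logs, and remove the container when finished:
docker logs -f kafka-dev
docker rm -f kafka-dev
```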

    Below is what a healthy run looks like. On the first execution Docker will pull the image, so be patient; once you see output like the following, it's ready.

    ljpMacBookPro:~ liangjiapeng$ docker run --rm -it \
    > -p 2181:2181 -p 3030:3030 -p 8081:8081 \
    > -p 8082:8082 -p 8083:8083 -p 9092:9092 \
    > -e ADV_HOST=127.0.0.1 \
    > landoop/fast-data-dev
    Setting advertised host to 127.0.0.1.
    Operating system RAM available is 3455 MiB, which is less than the lowest
    recommended of 4096 MiB. Your system performance may be seriously impacted.
    Starting services.
    This is Landoop’s fast-data-dev. Kafka 1.1.1-L0 (Landoop's Kafka Distribution).
    You may visit http://127.0.0.1:3030 in about a minute.
    2018-12-09 15:16:01,639 INFO Included extra file "/etc/supervisord.d/01-zookeeper.conf" during parsing
    2018-12-09 15:16:01,639 INFO Included extra file "/etc/supervisord.d/02-broker.conf" during parsing
    2018-12-09 15:16:01,639 INFO Included extra file "/etc/supervisord.d/03-schema-registry.conf" during parsing
    2018-12-09 15:16:01,640 INFO Included extra file "/etc/supervisord.d/04-rest-proxy.conf" during parsing
    2018-12-09 15:16:01,640 INFO Included extra file "/etc/supervisord.d/05-connect-distributed.conf" during parsing
    2018-12-09 15:16:01,640 INFO Included extra file "/etc/supervisord.d/06-caddy.conf" during parsing
    2018-12-09 15:16:01,640 INFO Included extra file "/etc/supervisord.d/07-smoke-tests.conf" during parsing
    2018-12-09 15:16:01,640 INFO Included extra file "/etc/supervisord.d/08-logs-to-kafka.conf" during parsing
    2018-12-09 15:16:01,640 INFO Included extra file "/etc/supervisord.d/99-supervisord-sample-data.conf" during parsing
    2018-12-09 15:16:01,640 INFO Set uid to user 0 succeeded
    2018-12-09 15:16:01,658 INFO RPC interface 'supervisor' initialized
    2018-12-09 15:16:01,658 CRIT Server 'unix_http_server' running without any HTTP authentication checking
    2018-12-09 15:16:01,659 INFO supervisord started with pid 6
    2018-12-09 15:16:02,664 INFO spawned: 'sample-data' with pid 164
    2018-12-09 15:16:02,668 INFO spawned: 'zookeeper' with pid 165
    2018-12-09 15:16:02,673 INFO spawned: 'caddy' with pid 166
    2018-12-09 15:16:02,677 INFO spawned: 'broker' with pid 168
    2018-12-09 15:16:02,686 INFO spawned: 'smoke-tests' with pid 169
    2018-12-09 15:16:02,689 INFO spawned: 'connect-distributed' with pid 170
    2018-12-09 15:16:02,693 INFO spawned: 'logs-to-kafka' with pid 171
    2018-12-09 15:16:02,715 INFO spawned: 'schema-registry' with pid 177
    2018-12-09 15:16:02,750 INFO spawned: 'rest-proxy' with pid 184
    2018-12-09 15:16:03,767 INFO success: sample-data entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2018-12-09 15:16:03,767 INFO success: zookeeper entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2018-12-09 15:16:03,767 INFO success: caddy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2018-12-09 15:16:03,768 INFO success: broker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2018-12-09 15:16:03,768 INFO success: smoke-tests entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2018-12-09 15:16:03,769 INFO success: connect-distributed entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2018-12-09 15:16:03,769 INFO success: logs-to-kafka entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2018-12-09 15:16:03,770 INFO success: schema-registry entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    2018-12-09 15:16:03,770 INFO success: rest-proxy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
    

    At this point Kafka is starting up. The fast-data-dev image also ships a web UI, so let's take a look.

    [Screenshot: fast-data-dev web UI]

    Right after startup, the COYOTE HEALTH CHECKS panel in the web UI runs a series of checks; wait for them to finish before using the environment.

    [Screenshot: status after the checks complete]
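The web UI is not the only way to poke at the stack: the same container exposes Confluent's REST Proxy on port 8082, so a quick HTTP call (assuming the container is up) can confirm the services are reachable:

```shell
# List topics through the REST Proxy; a JSON array in the
# response means the proxy and the broker behind it are alive.
curl -s http://127.0.0.1:8082/topics
```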

    Once the checks pass, we can start testing. Here's how.

    First, open a shell in the container and create a topic:

    root@fast-data-dev / $ kafka-topics --zookeeper 127.0.0.1:2181 --create --topic my_topic --partitions 3 --replication-factor 1
    WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
    Created topic "my_topic".
    
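To double-check what was created, `kafka-topics --describe` (run in the same container shell) shows the partition and replica layout:

```shell
# Describe the topic we just created; expect 3 partitions,
# each with a single replica (replication factor 1).
kafka-topics --zookeeper 127.0.0.1:2181 --describe --topic my_topic
```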

    Produce some data:

    root@fast-data-dev / $ kafka-console-producer --broker-list 127.0.0.1:9092 --topic my_topic
    >111
    
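The console producer can also send keyed messages, which is handy when you want records with the same key routed to the same partition; `parse.key` and `key.separator` are standard console-producer properties:

```shell
# Each input line is split on ':' into key and value,
# e.g. typing "user1:hello" sends key=user1, value=hello.
kafka-console-producer --broker-list 127.0.0.1:9092 --topic my_topic \
  --property "parse.key=true" --property "key.separator=:"
```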

    Consume the data (open a second shell into the container, e.g. with docker exec):

    root@fast-data-dev / $ kafka-console-consumer --bootstrap-server 127.0.0.1:9092 --topic my_topic --from-beginning
    111
    
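If you pass an explicit consumer group, Kafka tracks the group's offsets for you, and a second consumer started with the same `--group` would split the topic's partitions with the first; the group name below is just an example:

```shell
# Consume as part of group "my_group"; offsets are committed
# per group, so restarting resumes where the group left off.
kafka-console-consumer --bootstrap-server 127.0.0.1:9092 --topic my_topic \
  --group my_group --from-beginning
```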

    Now anything you type into the producer terminal is automatically picked up on the consumer side.


Permalink: https://www.haomeiwen.com/subject/zmdlhqtx.html