01 Starting HBase with Docker


Author: 逸章 | Published 2020-02-27 11:15

    This assumes Docker is already installed; see my separate article on installing Docker.

    1. Starting a single-instance HBase with Docker

    yay@yay-ThinkPad-T470:~$ docker search hbase
    NAME                                 DESCRIPTION                                     STARS               OFFICIAL            AUTOMATED
    harisekhon/hbase                     Apache HBase, opens shell - pseudo-distribut…   81                                      [OK]
    nerdammer/hbase                      HBase pseudo-distributed (configured for loc…   25                                      [OK]
    dajobe/hbase                         HBase 2.1.2 in Docker                           25                                      
    banno/hbase-standalone               HBase master running in standalone mode on U…   17                                      [OK]
    zenoss/hbase                         HBase image for Zenoss 5.0                      9                                       
    boostport/hbase-phoenix-all-in-one   HBase with phoenix and the phoenix query ser…   7                                       [OK]
    harisekhon/hbase-dev                 Apache HBase + dev tools, github repos, pseu…   7                                       [OK]
    bde2020/hbase-regionserver           Regionserver Docker image for Apache HBase.     3                                       [OK]
    smizy/hbase                          Apache HBase docker image based on alpine       3                                       [OK]
    gradiant/hbase-base                  Hbase small footprint Image (Alpine based)      3                                       [OK]
    aaionap/hbase                        AAI Hbase                                       2                                       
    imagenarium/hbase-regionserver                                                       1                                       
    scrapinghub/hbase-docker             HBase in Docker                                 1                                       [OK]
    pilchard/hbase                       Hbase 1.2.0 (CDH 5.11) with openjdk-1.8         1                                       [OK]
    imagenarium/hbase-master                                                             1                                       
    f21global/hbase-phoenix-server       HBase phoenix query server                      1                                       [OK]
    imagenarium/hbase                                                                    1                                       
    bde2020/hbase-master                 Master docker image for Apache HBase.           1                                       [OK]
    iwan0/hbase-thrift-standalone        hbase-thrift-standalone                         0                                       [OK]
    stellargraph/hbase-master                                                            0                                       
    pierrezemb/hbase-docker              hbase in docker                                 0                                       [OK]
    cellos/hbase                         HBase on top of Alpine Linux                    0                                       
    newnius/hbase                        Setup a HBase cluster in a totally distribut…   0                                       [OK]
    pierrezemb/hbase-standalone           Docker images to experiment with HBase 1.4.…   0                                       [OK]
    rperla/hbase                                                                         0                                       
    yay@yay-ThinkPad-T470:~$ docker pull harisekhon/hbase:1.4
    1.4: Pulling from harisekhon/hbase
    cd784148e348: Pull complete 
    834e591bb580: Pull complete 
    e481417d765c: Pull complete 
    377501063092: Pull complete 
    50387828b0f3: Pull complete 
    c16ba6951d95: Pull complete 
    9f71e948d0d8: Pull complete 
    Digest: sha256:1ff4a2a82a50502abe9fdfc8ecbb6981232109cce1856539762593963774a955
    Status: Downloaded newer image for harisekhon/hbase:1.4
    docker.io/harisekhon/hbase:1.4
    yay@yay-ThinkPad-T470:~$ docker run -d --name hbase001 -p 16010:16010 harisekhon/hbase:1.4
    5f270d13a6609b906e89745ede8f1bcb523ffa9f3852608b8fd04d398e81be31
    yay@yay-ThinkPad-T470:~$ 
    
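    Since the run command mapped port 16010, a quick sanity check that the HBase Master web UI is reachable (a sketch, assuming the container came up cleanly) is:

    curl -sI http://localhost:16010/ | head -n 1    # should print an HTTP 200 status line
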
    yay@yay-ThinkPad-T470:~$ docker exec -it hbase001  /bin/bash
    bash-4.4# hbase shell
    2020-02-27 03:12:07,700 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    HBase Shell
    Use "help" to get list of supported commands.
    Use "exit" to quit this interactive shell.
    Version 1.4.7, r763f27f583cf8fd7ecf79fb6f3ef57f1615dbf9b, Tue Aug 28 14:40:11 PDT 2018
    
    hbase(main):002:0> create 't1','cf1'
    0 row(s) in 1.4220 seconds
    
    => Hbase::Table - t1
    hbase(main):003:0> put 't1','rw1','cf1:c1','hello'
    0 row(s) in 0.0900 seconds
    
    hbase(main):005:0> scan 't1'
    ROW                                                  COLUMN+CELL                                                                                                                                            
     rw1                                                 column=cf1:c1, timestamp=1582773190242, value=hello                                                                                                    
    1 row(s) in 0.0420 seconds
    
    hbase(main):006:0> exit
    bash-4.4# exit
    exit
    yay@yay-ThinkPad-T470:~$ 
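
    Shell commands can also be piped into the container non-interactively, which is handy for scripting (a sketch using the hbase001 container above):

    echo "scan 't1'" | docker exec -i hbase001 hbase shell    # hbase shell reads commands from stdin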
    

    2. Starting a distributed HBase with Docker (four containers on a single machine)

    In the example below we start four Docker containers.

    Host planning

    Approach 1 (not used here)

    Let the containers obtain their IPs automatically, log in to each container to find its IP, then append the collected mappings to /etc/hosts on every node, as follows:
    Check each node's IP address:

    hadoop@hadoop-master:/$ ifconfig -a
    eth0      Link encap:Ethernet  HWaddr 02:42:ac:11:00:02  
              inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
     ...        
    yay@yay-ThinkPad-T470:~/software/dockerfileForHBase$ 
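
    The same address can also be read from the host without entering each container, for example with docker inspect (a sketch; the format string assumes the containers sit on the default bridge network):

    docker inspect -f '{{ .NetworkSettings.IPAddress }}' hadoop-master    # prints 172.17.0.2 here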
    

    With this method we get the IPs of the four containers:
    hadoop-master: 172.17.0.2
    hadoop-slave1: 172.17.0.3
    hadoop-slave2: 172.17.0.4
    hadoop-slave3: 172.17.0.5
    Put the commands into a script and send it to each container to run.
    The run_hosts.sh script is as follows:

    #!/bin/bash
     
    echo 172.17.0.2 hadoop-master >> /etc/hosts
     
    echo 172.17.0.3 hadoop-slave1 >> /etc/hosts
     
    echo 172.17.0.4 hadoop-slave2 >> /etc/hosts
     
    echo 172.17.0.5 hadoop-slave3 >> /etc/hosts
     
      
     
    echo 172.17.0.3 regionserver1 >> /etc/hosts # HBase regionserver hosts
     
    echo 172.17.0.4 regionserver2 >> /etc/hosts
    
    echo 172.17.0.5 regionserver3 >> /etc/hosts
    

    Run it:

    yay@yay-ThinkPad-T470:~/software/dockerfileForHBase$ dir
    hadoop-3.2.1.tar.gz     hbase-conf
    hadoopandhbase.dockerfile   jdk-8u191-linux-x64.tar.gz
    hadoop-conf         run_hosts.sh
    hadoophbaseconf.dockerfile  ubuntubase.dockerfile
    hbase-1.4.12-bin.tar.gz
    yay@yay-ThinkPad-T470:~/software/dockerfileForHBase$ docker cp run_hosts.sh hadoop-master:/
    yay@yay-ThinkPad-T470:~/software/dockerfileForHBase$ docker exec -it 9136 /bin/bash
    hadoop@hadoop-master:/$ ll
    total 80
    drwxr-xr-x   1 root root 4096 Feb 29 05:40 ./
    drwxr-xr-x   1 root root 4096 Feb 29 05:40 ../
    -rwxr-xr-x   1 root root    0 Feb 29 05:02 .dockerenv*
    drwxr-xr-x   2 root root 4096 Dec 17 15:01 bin/
    drwxr-xr-x   2 root root 4096 Apr 10  2014 boot/
    ...
    -rw-r--r--   1 test test  369 Feb 29 05:35 run_hosts.sh
    ...
    drwxr-xr-x   1 root root 4096 Dec 17 15:01 var/
    hadoop@hadoop-master:/$ sudo sh run_hosts.sh
    hadoop@hadoop-master:/$ cat /etc/hosts
    127.0.0.1   localhost
    ::1 localhost ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
    172.17.0.2  hadoop-master
    172.17.0.2 hadoop-master
    172.17.0.3 hadoop-slave1
    172.17.0.4 hadoop-slave2
    172.17.0.5 hadoop-slave3
    172.17.0.3 regionserver1
    172.17.0.4 regionserver2
    172.17.0.5 regionserver3
    hadoop@hadoop-master:/$ 
    

    Run the same script on the other containers (see the loop sketch below).
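
    One way to do this is a small loop on the host (a sketch, assuming the containers are addressable by the hostnames used above and that run_hosts.sh sits in the current directory):

    for c in hadoop-master hadoop-slave1 hadoop-slave2 hadoop-slave3; do
      docker cp run_hosts.sh "$c":/               # copy the script into the container
      docker exec -u root "$c" sh /run_hosts.sh   # run as root so /etc/hosts can be appended to
    done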

    A side note on the different ways to execute a shell script (source ./a.sh, . ./a.sh, ./a.sh):
    Conclusion 1: running ./a.sh is equivalent to sh ./a.sh or bash ./a.sh; all three start a new child shell and execute the script inside that subshell.
    Conclusion 2: source ./a.sh and . ./a.sh are equivalent; both execute the script in the current shell process instead of spawning a subshell.
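
    A tiny experiment makes the difference visible (a sketch):

    printf 'FOO=from_script\n' > a.sh
    sh ./a.sh         # runs in a child shell...
    echo "$FOO"       # ...so nothing is printed here
    . ./a.sh          # runs in the current shell...
    echo "$FOO"       # ...so this prints from_script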

    hadoop-master: IP obtained after the container starts; 172.17.0.2 in this example (the lookup method is shown above)
    hadoop-slave1: obtained after start; here 172.17.0.3
    hadoop-slave2: obtained after start; here 172.17.0.4
    hadoop-slave3: obtained after start; here 172.17.0.5

    Approach 2 (the one used in the example below)

    We assign each container a fixed IP when it is started:
    hadoop-master: 172.18.12.1
    hadoop-slave1: 172.18.12.2
    hadoop-slave2: 172.18.12.3
    hadoop-slave3: 172.18.12.4
    Concretely:
    First create a network

    yay@yay-ThinkPad-T470:~/software/dockerfileForHBase$ docker network ls
    NETWORK ID          NAME                DRIVER              SCOPE
    c02cc96fb47c        bridge              bridge              local
    827bcf1293c0        host                host                local
    2d8cd675265a        none                null                local
    yay@yay-ThinkPad-T470:~/software/dockerfileForHBase$ docker network create --driver bridge --subnet=172.18.12.0/16 --gateway=172.18.1.1 mynet
    ...
    yay@yay-ThinkPad-T470:~/software/dockerfileForHBase$ docker network ls
    NETWORK ID          NAME                DRIVER              SCOPE
    c02cc96fb47c        bridge              bridge              local
    827bcf1293c0        host                host                local
    8fccbdc49d54        mynet               bridge              local
    2d8cd675265a        none                null                local
    

    Use the --network option with docker run (the command below is only for illustration and does not need to be run here; it is used later when the hadoop/hbase containers are started):

    docker run -it --name master -h master --network=mynet --ip 172.18.12.1 --add-host master:172.18.12.1 --add-host slave1:172.18.12.2 --add-host slave2:172.18.12.3 --add-host slave3:172.18.12.4 -d -P -p 2020:22 yayubuntubase/withssh:v1 && docker run -it --name slave1 -h slave1 --network=mynet --ip 172.18.12.2 --add-host master:172.18.12.1 --add-host slave1:172.18.12.2 --add-host slave2:172.18.12.3 --add-host slave3:172.18.12.4 -d -P -p 2021:22 yayubuntubase/withssh:v1 && docker run -it --name slave2 -h slave2 --network=mynet --ip 172.18.12.3 --add-host master:172.18.12.1 --add-host slave1:172.18.12.2 --add-host slave2:172.18.12.3 --add-host slave3:172.18.12.4 -d -P -p 2022:22 yayubuntubase/withssh:v1 && docker run -it --name slave3 -h slave3 --network=mynet --ip 172.18.12.4 --add-host master:172.18.12.1 --add-host slave1:172.18.12.2 --add-host slave2:172.18.12.3 --add-host slave3:172.18.12.4 -d -P -p 2023:22 yayubuntubase/withssh:v1
    

    The options --add-host master:172.18.12.1 --add-host slave1:172.18.12.2 --add-host slave2:172.18.12.3 --add-host slave3:172.18.12.4 add hostname-to-IP mappings to /etc/hosts inside the container.
    -p 2021:22 maps the container's SSH port out to the host.
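
    Both effects are easy to verify once the containers are up (a sketch; the master container name and the test login user are taken from the sessions shown later):

    docker exec master cat /etc/hosts    # shows the entries added by --add-host
    ssh -p 2020 test@localhost           # reaches the container's sshd through the mapped port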

    2.1 Building an Ubuntu base image with SSH configured

    The contents of ubuntubase.dockerfile:

    ############################################
    # version : yayubuntubase/withssh:v1
    # desc : ubuntu 14.04 with sshd installed
    ############################################
    # based on the official ubuntu:14.04 image
    FROM ubuntu:14.04 
    
    # maintainer information
    MAINTAINER yayubuntubase
    
    
    RUN echo "root:root" | chpasswd
    
    
    # install the ssh server
    RUN rm -rvf /var/lib/apt/lists/*
    RUN apt-get update 
    RUN apt-get install -y openssh-server openssh-client vim wget curl sudo
    
    
    
    # configure ssh
    RUN mkdir /var/run/sshd
    RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
    RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd
    ENV NOTVISIBLE "in users profile"
    RUN echo "export VISIBLE=now" >> /etc/profile
    
    EXPOSE 22
    

    About the password command above: one way to create a user and set its password is as follows

    yay@yay-ThinkPad-T470:~/software/dockerfileForHBase$ sudo useradd test3
    yay@yay-ThinkPad-T470:~/software/dockerfileForHBase$ sudo echo 'test3:newpasswd' | sudo chpasswd
    

    The configuration above also arranges for the ssh service to start. From the command line you can usually check whether sshd is running with:

    ps -e | grep sshd
    

    If sshd fails to start, you may also want to check whether port 22 is open:

    netstat -an | grep 22 
    

    Build the image:

    yay@yay-ThinkPad-T470:~/software/dockerfileForHBase$ sudo docker image build --file ubuntubase.dockerfile --tag ubuntuforhadoop/withssh:v1 .
    Sending build context to Docker daemon  3.072kB
    Step 1/22 : FROM ubuntu:14.04
     ---> 6e4f1fe62ff1
    ...
    Step 18/22 : RUN ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
    ...
    yay@yay-ThinkPad-T470:~/software/dockerfileForHBase$ docker images
    

    I did a quick check that things work so far:
    1. Start four instances

    docker run -it --name master -h master --network=mynet --ip 172.18.12.1 --add-host master:172.18.12.1 --add-host slave1:172.18.12.2 --add-host slave2:172.18.12.3 --add-host slave3:172.18.12.4 -d -P -p 2020:22 ubuntuforhadoop/withssh:v1
    docker run -it --name slave1 -h slave1 --network=mynet --ip 172.18.12.2 --add-host master:172.18.12.1 --add-host slave1:172.18.12.2 --add-host slave2:172.18.12.3 --add-host slave3:172.18.12.4 -d -P -p 2021:22 ubuntuforhadoop/withssh:v1 
    docker run -it --name slave2 -h slave2 --network=mynet --ip 172.18.12.3 --add-host master:172.18.12.1 --add-host slave1:172.18.12.2 --add-host slave2:172.18.12.3 --add-host slave3:172.18.12.4 -d -P -p 2022:22 ubuntuforhadoop/withssh:v1 
    docker run -it --name slave3 -h slave3 --network=mynet --ip 172.18.12.4 --add-host master:172.18.12.1 --add-host slave1:172.18.12.2 --add-host slave2:172.18.12.3 --add-host slave3:172.18.12.4 -d -P -p 2023:22 ubuntuforhadoop/withssh:v1
    

    2. Check that SSH works

    yay@yay-ThinkPad-T470:~$ ssh test@172.18.12.1
    test@172.18.12.1's password: 
    Welcome to Ubuntu 14.04.6 LTS (GNU/Linux 5.0.0-23-generic x86_64)
    
     * Documentation:  https://help.ubuntu.com/
    Last login: Sun Mar  1 07:45:28 2020 from 172.18.1.1
    test@master:~$ ifconfig -a
    eth0      Link encap:Ethernet  HWaddr 02:42:ac:12:0c:01  
              inet addr:172.18.12.1  Bcast:172.18.255.255  Mask:255.255.0.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:255 errors:0 dropped:0 overruns:0 frame:0
              TX packets:156 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0 
              RX bytes:33349 (33.3 KB)  TX bytes:26385 (26.3 KB)
    
    lo       
     ...
    
    test@master:~$ cat /etc/hosts
    127.0.0.1   localhost
    ::1 localhost ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
    172.18.12.1 master
    172.18.12.2 slave1
    172.18.12.3 slave2
    172.18.12.4 slave3
    172.18.12.1 master
    test@master:~$ ssh slave1
    The authenticity of host 'slave1 (172.18.12.2)' can't be established.
    ECDSA key fingerprint is 59:e1:49:42:94:98:ef:ab:b3:74:a4:5a:97:56:dc:be.
    Are you sure you want to continue connecting (yes/no)? yes
    ...
    
    Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
    applicable law.
    
    test@slave1:~$ ssh slave2
    The authenticity of host 'slave2 (172.18.12.3)' can't be established.
    ECDSA key fingerprint is 59:e1:49:42:94:98:ef:ab:b3:74:a4:5a:97:56:dc:be.
    Are you sure you want to continue connecting (yes/no)? yes
    ...
    
    test@slave2:~$ ssh slave3
    The authenticity of host 'slave3 (172.18.12.4)' can't be established.
    ECDSA key fingerprint is 59:e1:49:42:94:98:ef:ab:b3:74:a4:5a:97:56:dc:be.
    Are you sure you want to continue connecting (yes/no)? yes
    ...
    Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
    applicable law.
    
    test@slave3:~$ ssh master
    The authenticity of host 'master (172.18.12.1)' can't be established.
    ECDSA key fingerprint is 59:e1:49:42:94:98:ef:ab:b3:74:a4:5a:97:56:dc:be.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'master,172.18.12.1' (ECDSA) to the list of known hosts.
    Welcome to Ubuntu 14.04.6 LTS (GNU/Linux 5.0.0-23-generic x86_64)
    
     * Documentation:  https://help.ubuntu.com/
    Last login: Sun Mar  1 08:38:30 2020 from 172.18.1.1
    test@master:~$ exit
    logout
    Connection to master closed.
    test@slave3:~$ exit
    logout
    Connection to slave3 closed.
    test@slave2:~$ exit
    logout
    Connection to slave2 closed.
    test@slave1:~$ exit
    logout
    Connection to slave1 closed.
    test@master:~$ exit
    logout
    Connection to 172.18.12.1 closed.
    yay@yay-ThinkPad-T470:~$ 
    
    
    
    
    
    yay@yay-ThinkPad-T470:~$ docker container ls
    CONTAINER ID        IMAGE                      COMMAND               CREATED             STATUS              PORTS                  NAMES
    4439cc943534        yayubuntubase/withssh:v1   "/usr/sbin/sshd -D"   6 hours ago         Up 6 hours          0.0.0.0:2023->22/tcp   slave3
    15a9aabce050        yayubuntubase/withssh:v1   "/usr/sbin/sshd -D"   6 hours ago         Up 6 hours          0.0.0.0:2022->22/tcp   slave2
    3201b26c34ae        yayubuntubase/withssh:v1   "/usr/sbin/sshd -D"   6 hours ago         Up 6 hours          0.0.0.0:2021->22/tcp   slave1
    002830832e20        yayubuntubase/withssh:v1   "/usr/sbin/sshd -D"   6 hours ago         Up 6 hours          0.0.0.0:2020->22/tcp   master
    yay@yay-ThinkPad-T470:~$ docker container stop 44 && docker container rm 44
    44
    44
    yay@yay-ThinkPad-T470:~$ docker container stop 15 && docker container rm 15
    15
    15
    yay@yay-ThinkPad-T470:~$ docker container stop 32 && docker container rm 32
    32
    32
    yay@yay-ThinkPad-T470:~$ docker container stop 00 && docker container rm 00
    00
    00
    yay@yay-ThinkPad-T470:~$ docker container ls
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
    yay@yay-ThinkPad-T470:~$ 
    
    

    2.2 Hadoop configuration files (these can also be copied in with docker cp after the containers start)

    hadoop-env.sh

    JAVA_HOME=/usr/local/jdk1.8.0_191
    

    core-site.xml

    <configuration>
        <property>
            <name>fs.default.name</name>
            <value>hdfs://hadoop-master:9000</value>
            </property>
        <property>
            <name>io.file.buffer.size</name>
            <value>131072</value>
        </property>
        <property>
            <name>hadoop.tmp.dir</name>
            <value>/home/hadooptmp</value>
            <description>Abase for other temporary directories.</description>
        </property>
    </configuration>
    

    yarn-site.xml

    <configuration>
        <!-- Site specific YARN configuration properties -->
        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
        </property>
        <property>
            <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
            <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
        <property>
            <name>yarn.resourcemanager.address</name>
            <value>hadoop-master:8032</value>
        </property>
        <property>
            <name>yarn.resourcemanager.scheduler.address</name>
            <value>hadoop-master:8030</value>
        </property>
        <property>
            <name>yarn.resourcemanager.resource-tracker.address</name>
            <value>hadoop-master:8031</value>
        </property>
        <property>
            <name>yarn.resourcemanager.admin.address</name>
            <value>hadoop-master:8033</value>
        </property>
        <property>
            <name>yarn.resourcemanager.webapp.address</name>
            <value>hadoop-master:8088</value>
        </property>
        <property>
            <name>yarn.nodemanager.resource.memory-mb</name>
            <value>1024</value>
        </property>
        <property>
            <name>yarn.nodemanager.resource.cpu-vcores</name>
            <value>1</value>
        </property>
    </configuration>
    

    The workers file

    hadoop3
    hadoop4
    

    hbase-env.sh

    export JAVA_HOME=/usr/local/jdk1.8.0_191
    export HBASE_PID_DIR=/var/hadoop/pids 
    

    hbase-site.xml

    <configuration>
        <property>
            <name>hbase.rootdir</name>
            <value>hdfs://hadoop-master/hbase</value>
        </property>
        <property>
            <name>hbase.cluster.distributed</name>
            <value>true</value>
        </property>
        <property>
            <name>hbase.zookeeper.quorum</name>
            <value>hadoop-master,hadoop-slave1,hadoop-slave2,hadoop-slave3</value>
        </property>
    </configuration>
    

    regionservers

    hadoop-master
    hadoop-slave1
    hadoop-slave2
    hadoop-slave3
    

    2.3 Building the hadoop-hbase image

    This builds on the Ubuntu image made above.

    1. The Java, Hadoop, and HBase archives used below must be downloaded first and placed in the same directory as the Dockerfile (see the download sketch after these notes).

    2. Here the Hadoop and HBase configuration files are also baked into the image. This is optional (they can be copied in later with docker cp), so decide whether the last few ADD commands in the Dockerfile below should be kept or removed.
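
    A download sketch for the two Apache archives (the archive.apache.org URLs are assumptions based on the standard mirror layout; the JDK tarball has to be fetched manually from Oracle, which requires a login):

    wget https://archive.apache.org/dist/hadoop/common/hadoop-3.2.1/hadoop-3.2.1.tar.gz
    wget https://archive.apache.org/dist/hbase/1.4.12/hbase-1.4.12-bin.tar.gz
    # jdk-8u191-linux-x64.tar.gz: download from Oracle and place it next to the dockerfiles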

    The contents of hadoopandhbase.dockerfile:

    ############################################ 
    # desc : install Java, Hadoop and HBase
    ############################################
     
    FROM ubuntuforhadoop/withssh:v1
     
     
    MAINTAINER yayhadoopandhbase 
      
    # provide DNS service for the hadoop cluster
    USER root
    RUN sudo apt-get -y install dnsmasq
    
    # install and configure Java
    ADD jdk-8u191-linux-x64.tar.gz /usr/local/
    ENV JAVA_HOME /usr/local/jdk1.8.0_191
    ENV CLASSPATH ${JAVA_HOME}/lib/dt.jar:$JAVA_HOME/lib/tools.jar
    ENV PATH $PATH:${JAVA_HOME}/bin
    
    # install and configure hadoop
    #RUN groupadd hadoop
    #RUN useradd -m hadoop -g hadoop
    #RUN echo "hadoop:hadoop" | chpasswd
    
    
    # install ZooKeeper
    #ADD apache-zookeeper-3.5.6-bin.tar.gz /usr/local/
    #RUN cd /usr/local && ln -s ./apache-zookeeper-3.5.6-bin zookeeper
    #ENV ZOOKEEPER_HOME /usr/local/zookeeper
    #ENV PATH ${ZOOKEEPER_HOME}/bin:$PATH
    
    # install Hadoop
    ADD hadoop-3.2.1.tar.gz /usr/local/
    # uncomment if running as the hadoop user
    #RUN chown -R hadoop:hadoop /usr/local/hadoop-3.2.1
    RUN cd /usr/local && ln -s ./hadoop-3.2.1 hadoop
    
    ENV HADOOP_PREFIX /usr/local/hadoop
    ENV HADOOP_HOME /usr/local/hadoop
    ENV HADOOP_COMMON_HOME /usr/local/hadoop
    ENV HADOOP_HDFS_HOME /usr/local/hadoop
    ENV HADOOP_MAPRED_HOME /usr/local/hadoop
    ENV HADOOP_YARN_HOME /usr/local/hadoop
    ENV HADOOP_CONF_DIR /usr/local/hadoop/etc/hadoop
    ENV PATH ${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:$PATH
    
    RUN mkdir -p /home/hadooptmp
    
    # install HBase
    ADD hbase-1.4.12-bin.tar.gz /usr/local/ 
    # uncomment if running as the hadoop user
    #RUN chown -R hadoop:hadoop /usr/local/hbase-1.4.12
    RUN cd /usr/local && ln -s ./hbase-1.4.12 hbase
    
    ENV HBASE_HOME /usr/local/hbase
    ENV PATH ${HBASE_HOME}/bin:$PATH
    
    
    
    
    RUN echo "hadoop ALL= NOPASSWD: ALL" >> /etc/sudoers
    
    RUN mkdir -p /opt/hadoop/data/zookeeper
    # if the cluster is used as the hadoop user, I think the lines below can be uncommented
    #RUN chown -R hadoop:hadoop $HADOOP_HOME/etc/hadoop  
    #RUN chown -R hadoop:hadoop $HBASE_HOME/conf
    #RUN chown -R hadoop:hadoop /opt/hadoop  
    #RUN chown -R hadoop:hadoop /home/hadoop
    
    #USER hadoop
    # On each host, run ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa to create a passphrase-less key pair: -t is the key type (dsa here), -P is the passphrase ('' means none), and -f is where the generated key is saved
    #RUN ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
    # cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys appends the public key to authorized_keys; the file is created if it does not exist and holds the public keys of the other nodes
    #RUN cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    
    
    
    
    
    USER root
    RUN ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
    RUN cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    CMD ["/usr/sbin/sshd","-D"]
     
    
    
    
    #ADD zookeeper-conf/.    /usr/local/zookeeper/conf/
    ADD hadoop-conf/.       /usr/local/hadoop/etc/hadoop/
    ADD hbase-conf/.        /usr/local/hbase/conf/
    
    
    #RUN echo "HDFS_DATANODE_USER=root" >> /usr/local/hadoop/sbin/start-dfs.sh
    #RUN echo "HADOOP_SECURE_DN_USER=hdfs" >> /usr/local/hadoop/sbin/start-dfs.sh
    #RUN echo "HDFS_NAMENODE_USER=root" >> /usr/local/hadoop/sbin/start-dfs.sh
    #RUN echo "HDFS_SECONDARYNAMENODE_USER=root" >> /usr/local/hadoop/sbin/start-dfs.sh
    
    #RUN echo "HDFS_DATANODE_USER=root" >> /usr/local/hadoop/sbin/stop-dfs.sh
    #RUN echo "HADOOP_SECURE_DN_USER=hdfs" >> /usr/local/hadoop/sbin/stop-dfs.sh
    #RUN echo "HDFS_NAMENODE_USER=root" >> /usr/local/hadoop/sbin/stop-dfs.sh
    #RUN echo "HDFS_SECONDARYNAMENODE_USER=root" >> /usr/local/hadoop/sbin/stop-dfs.sh
    
    #RUN echo "YARN_RESOURCEMANAGER_USER=root" >> /usr/local/hadoop/sbin/start-yarn.sh
    #RUN echo "HADOOP_SECURE_DN_USER=yarn" >> /usr/local/hadoop/sbin/start-yarn.sh
    #RUN echo "YARN_NODEMANAGER_USER=root" >> /usr/local/hadoop/sbin/start-yarn.sh#
    
    #RUN echo "YARN_RESOURCEMANAGER_USER=root" >> /usr/local/hadoop/sbin/stop-yarn.s#h
    #RUN echo "HADOOP_SECURE_DN_USER=yarn" >> /usr/local/hadoop/sbin/stop-yarn.sh
    #RUN echo "YARN_NODEMANAGER_USER=root" >> /usr/local/hadoop/sbin/stop-yarn.sh
    
    
    # insert at the top of each script (line 1)
    RUN sed -i "1i HADOOP_SECURE_DN_USER=hdfs"  /usr/local/hadoop/sbin/start-dfs.sh
    RUN sed -i "1i HDFS_NAMENODE_USER=root"  /usr/local/hadoop/sbin/start-dfs.sh
    RUN sed -i "1i HDFS_SECONDARYNAMENODE_USER=root"  /usr/local/hadoop/sbin/start-dfs.sh
    RUN sed -i "1i HDFS_DATANODE_USER=root"  /usr/local/hadoop/sbin/start-dfs.sh
    
    
    RUN sed -i "1i HADOOP_SECURE_DN_USER=hdfs"  /usr/local/hadoop/sbin/stop-dfs.sh
    RUN sed -i "1i HDFS_NAMENODE_USER=root"  /usr/local/hadoop/sbin/stop-dfs.sh
    RUN sed -i "1i HDFS_SECONDARYNAMENODE_USER=root"  /usr/local/hadoop/sbin/stop-dfs.sh
    RUN sed -i "1i HDFS_DATANODE_USER=root"  /usr/local/hadoop/sbin/stop-dfs.sh
    
    RUN sed -i "1i HADOOP_SECURE_DN_USER=yarn"  /usr/local/hadoop/sbin/start-yarn.sh
    RUN sed -i "1i YARN_NODEMANAGER_USER=root"  /usr/local/hadoop/sbin/start-yarn.sh
    RUN sed -i "1i YARN_RESOURCEMANAGER_USER=root"  /usr/local/hadoop/sbin/start-yarn.sh
    
    RUN sed -i "1i HADOOP_SECURE_DN_USER=yarn"  /usr/local/hadoop/sbin/stop-yarn.sh
    RUN sed -i "1i YARN_NODEMANAGER_USER=root"  /usr/local/hadoop/sbin/stop-yarn.sh
    RUN sed -i "1i YARN_RESOURCEMANAGER_USER=root"  /usr/local/hadoop/sbin/stop-yarn.sh
    
    
    #RUN sed -i "1i 172.18.12.4 regionserver3"  /etc/hosts
    RUN echo "172.18.12.4 regionserver3" >> /etc/hosts
    RUN echo "172.18.12.3 regionserver2" >> /etc/hosts
    RUN echo "172.18.12.2 regionserver1" >> /etc/hosts
    
    # already configured on the command line via --add-host
    #RUN echo "172.18.12.4 hadoop-slave3" >> /etc/hosts
    #RUN echo "172.18.12.3 hadoop-slave2" >> /etc/hosts
    #RUN echo "172.18.12.2 hadoop-slave1" >> /etc/hosts
    #RUN echo "172.18.12.1 hadoop-master" >> /etc/hosts
    

    Build the image:

    sudo docker image build --file hadoopandhbase.dockerfile --tag hadoop_hbase/nozookeeper:v1 .
    

    2.4 Starting the four containers

    docker run -it --name master -h hadoop-master --network=mynet --ip 172.18.12.1 --add-host hadoop-slave1:172.18.12.2 --add-host hadoop-slave2:172.18.12.3 --add-host hadoop-slave3:172.18.12.4 -d -P -p 2020:22 -p 9870:9870 -p 8088:8088 hadoop_hbase/nozookeeper:v1
    
    docker run -it --name slave1 -h hadoop-slave1 --network=mynet --ip 172.18.12.2  --add-host hadoop-master:172.18.12.1 --add-host hadoop-slave2:172.18.12.3 --add-host hadoop-slave3:172.18.12.4 -d -P -p 2021:22 hadoop_hbase/nozookeeper:v1
    
    docker run -it --name slave2 -h hadoop-slave2 --network=mynet --ip 172.18.12.3  --add-host hadoop-master:172.18.12.1 --add-host hadoop-slave1:172.18.12.2 --add-host hadoop-slave3:172.18.12.4 -d -P -p 2022:22 hadoop_hbase/nozookeeper:v1
    
    docker run -it --name slave3 -h hadoop-slave3 --network=mynet --ip 172.18.12.4  --add-host hadoop-master:172.18.12.1 --add-host hadoop-slave1:172.18.12.2 --add-host hadoop-slave2:172.18.12.3  -d -P -p 2023:22 hadoop_hbase/nozookeeper:v1
    
    
    A quick experiment illustrates the -h flag: it sets the hostname inside the container.
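
    For example (a quick check once the containers are running):

    docker exec master hostname    # prints hadoop-master, the value passed with -h
    docker exec slave1 hostname    # prints hadoop-slave1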

    3. Modifying the configuration files after the containers start (skip this section if the image already contains them)

    3.1 Hadoop configuration files

    hadoop-env.sh

    JAVA_HOME=/usr/local/jdk1.8.0_191
    

    core-site.xml

    <configuration>
        <property>
            <name>fs.default.name</name>
            <value>hdfs://hadoop-master:9000</value>
            </property>
        <property>
            <name>io.file.buffer.size</name>
            <value>131072</value>
        </property>
        <property>
            <name>hadoop.tmp.dir</name>
            <value>/home/hadooptmp</value>
            <description>Abase for other temporary directories.</description>
        </property>
    </configuration>
    

    yarn-site.xml

    <configuration>
        <!-- Site specific YARN configuration properties -->
        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
        </property>
        <property>
            <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
            <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
        <property>
            <name>yarn.resourcemanager.address</name>
            <value>hadoop-master:8032</value>
        </property>
        <property>
            <name>yarn.resourcemanager.scheduler.address</name>
            <value>hadoop-master:8030</value>
        </property>
        <property>
            <name>yarn.resourcemanager.resource-tracker.address</name>
            <value>hadoop-master:8031</value>
        </property>
        <property>
            <name>yarn.resourcemanager.admin.address</name>
            <value>hadoop-master:8033</value>
        </property>
        <property>
            <name>yarn.resourcemanager.webapp.address</name>
            <value>hadoop-master:8088</value>
        </property>
        <property>
            <name>yarn.nodemanager.resource.memory-mb</name>
            <value>1024</value>
        </property>
        <property>
            <name>yarn.nodemanager.resource.cpu-vcores</name>
            <value>1</value>
        </property>
    </configuration>
    

    The workers file (called slaves before Hadoop 3)

    hadoop3
    hadoop4
    

    hbase-env.sh

    export JAVA_HOME=/usr/local/jdk1.8.0_191
    export HBASE_PID_DIR=/var/hadoop/pids 
    

    hbase-site.xml

    <configuration>
        <property>
            <name>hbase.rootdir</name>
            <value>hdfs://hadoop-master/hbase</value>
        </property>
        <property>
            <name>hbase.cluster.distributed</name>
            <value>true</value>
        </property>
        <property>
            <name>hbase.zookeeper.quorum</name>
            <value>hadoop-master,hadoop-slave1,hadoop-slave2,hadoop-slave3</value>
        </property>
    </configuration>
    

    regionservers

    hadoop-master
    hadoop-slave1
    hadoop-slave2
    hadoop-slave3
    

    3.2 Copy the Hadoop and HBase configuration files into each container

    docker cp hadoop-conf/. hadoop-master:/usr/local/hadoop/etc/hadoop/
    docker cp hadoop-conf/. hadoop-slave1:/usr/local/hadoop/etc/hadoop/
    docker cp hadoop-conf/. hadoop-slave2:/usr/local/hadoop/etc/hadoop/
    docker cp hadoop-conf/. hadoop-slave3:/usr/local/hadoop/etc/hadoop/
    docker cp hbase-conf/. hadoop-master:/usr/local/hbase/conf/
    docker cp hbase-conf/. hadoop-slave1:/usr/local/hbase/conf/
    docker cp hbase-conf/. hadoop-slave2:/usr/local/hbase/conf/
    docker cp hbase-conf/. hadoop-slave3:/usr/local/hbase/conf/
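
    The eight copies above can also be written as a loop (a sketch, keeping the container names used above):

    for c in hadoop-master hadoop-slave1 hadoop-slave2 hadoop-slave3; do
      docker cp hadoop-conf/. "$c":/usr/local/hadoop/etc/hadoop/
      docker cp hbase-conf/.  "$c":/usr/local/hbase/conf/
    done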
    
    

    Then modify the four start/stop scripts as below and copy them into the containers (a sed sketch follows the variable list):

    HDFS_DATANODE_USER=root
    HADOOP_SECURE_DN_USER=hdfs
    HDFS_NAMENODE_USER=root
    HDFS_SECONDARYNAMENODE_USER=root
    
    YARN_RESOURCEMANAGER_USER=root
    HADOOP_SECURE_DN_USER=yarn
    YARN_NODEMANAGER_USER=root
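
    One way to prepend these variables to local copies of the scripts before copying them in (a sketch; the start/ directory holding the four scripts is an assumption):

    cd start
    for f in start-dfs.sh stop-dfs.sh; do        # HDFS start/stop scripts
      sed -i '1i HADOOP_SECURE_DN_USER=hdfs'        "$f"
      sed -i '1i HDFS_NAMENODE_USER=root'           "$f"
      sed -i '1i HDFS_SECONDARYNAMENODE_USER=root'  "$f"
      sed -i '1i HDFS_DATANODE_USER=root'           "$f"
    done
    for f in start-yarn.sh stop-yarn.sh; do      # YARN start/stop scripts
      sed -i '1i HADOOP_SECURE_DN_USER=yarn'        "$f"
      sed -i '1i YARN_NODEMANAGER_USER=root'        "$f"
      sed -i '1i YARN_RESOURCEMANAGER_USER=root'    "$f"
    done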
    

    Then copy them into the containers (if your Dockerfile did not already ADD them into /usr/local/hadoop/sbin/):

    docker cp start/. hadoop-master:/usr/local/hadoop/sbin/
    docker cp start/. hadoop-slave1:/usr/local/hadoop/sbin/
    docker cp start/. hadoop-slave2:/usr/local/hadoop/sbin/
    docker cp start/. hadoop-slave3:/usr/local/hadoop/sbin/
    

    You can verify that the files were copied in:

    root@hadoop-master:/usr/local/hadoop/sbin# sed -n '1,4p' stop-dfs.sh
    HDFS_DATANODE_USER=root
    HDFS_SECONDARYNAMENODE_USER=root
    HDFS_NAMENODE_USER=root
    HADOOP_SECURE_DN_USER=hdfs
    root@hadoop-master:/usr/local/hadoop/sbin# sed -n '1,4p' start-dfs.sh
    HDFS_DATANODE_USER=root
    HDFS_SECONDARYNAMENODE_USER=root
    HDFS_NAMENODE_USER=root
    HADOOP_SECURE_DN_USER=hdfs
    root@hadoop-master:/usr/local/hadoop/sbin# sed -n '1,4p' start-yarn.sh
    YARN_RESOURCEMANAGER_USER=root
    YARN_NODEMANAGER_USER=root
    HADOOP_SECURE_DN_USER=yarn
    #!/usr/bin/env bash
    root@hadoop-master:/usr/local/hadoop/sbin# sed -n '1,4p' stop-yarn.sh
    YARN_RESOURCEMANAGER_USER=root
    YARN_NODEMANAGER_USER=root
    HADOOP_SECURE_DN_USER=yarn
    #!/usr/bin/env bash
    

    3.3 Log in to the containers and set up passwordless SSH

    yay@yay-ThinkPad-T470:~/software/dockerfileForHBase$ docker exec -it hadoop-master bash
    root@hadoop-master:/# cat ~/.ssh/id_rsa.pub>> ~/.ssh/authorized_keys
    root@hadoop-master:/# ssh root@hadoop-slave1 cat ~/.ssh/id_rsa.pub>> ~/.ssh/authorized_keys
    The authenticity of host 'hadoop-slave1 (172.18.12.2)' can't be established.
    ECDSA key fingerprint is c5:ce:d2:d4:e0:25:dc:a8:33:7e:44:ae:ba:51:04:4d.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'hadoop-slave1,172.18.12.2' (ECDSA) to the list of known hosts.
    root@hadoop-master:/# ssh root@hadoop-slave2 cat ~/.ssh/id_rsa.pub>> ~/.ssh/authorized_keys
    The authenticity of host 'hadoop-slave2 (172.18.12.3)' can't be established.
    ECDSA key fingerprint is c5:ce:d2:d4:e0:25:dc:a8:33:7e:44:ae:ba:51:04:4d.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'hadoop-slave2,172.18.12.3' (ECDSA) to the list of known hosts.
    root@hadoop-master:/# ssh root@hadoop-slave3 cat ~/.ssh/id_rsa.pub>> ~/.ssh/authorized_keys
    The authenticity of host 'hadoop-slave3 (172.18.12.4)' can't be established.
    ECDSA key fingerprint is c5:ce:d2:d4:e0:25:dc:a8:33:7e:44:ae:ba:51:04:4d.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'hadoop-slave3,172.18.12.4' (ECDSA) to the list of known hosts.
    root@hadoop-master:/# ssh root@hadoop-slave1
    Welcome to Ubuntu 14.04.6 LTS (GNU/Linux 5.0.0-23-generic x86_64)
    
     * Documentation:  https://help.ubuntu.com/
    root@hadoop-slave1:~# ssh root@hadoop-master cat ~/.ssh/authorized_keys>> ~/.ssh/authorized_keys
    The authenticity of host 'hadoop-master (172.18.12.1)' can't be established.
    ECDSA key fingerprint is c5:ce:d2:d4:e0:25:dc:a8:33:7e:44:ae:ba:51:04:4d.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'hadoop-master,172.18.12.1' (ECDSA) to the list of known hosts.
    root@hadoop-slave1:~# ssh root@hadoop-slave1
    The authenticity of host 'hadoop-slave1 (172.18.12.2)' can't be established.
    ECDSA key fingerprint is c5:ce:d2:d4:e0:25:dc:a8:33:7e:44:ae:ba:51:04:4d.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'hadoop-slave1,172.18.12.2' (ECDSA) to the list of known hosts.
    Welcome to Ubuntu 14.04.6 LTS (GNU/Linux 5.0.0-23-generic x86_64)
    
     * Documentation:  https://help.ubuntu.com/
    Last login: Sat Mar  7 05:13:43 2020 from hadoop-master
    root@hadoop-slave1:~# ssh root@hadoop-slave2
    The authenticity of host 'hadoop-slave2 (172.18.12.3)' can't be established.
    ECDSA key fingerprint is c5:ce:d2:d4:e0:25:dc:a8:33:7e:44:ae:ba:51:04:4d.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'hadoop-slave2,172.18.12.3' (ECDSA) to the list of known hosts.
    Welcome to Ubuntu 14.04.6 LTS (GNU/Linux 5.0.0-23-generic x86_64)
    
     * Documentation:  https://help.ubuntu.com/
    root@hadoop-slave2:~# ssh root@hadoop-master cat ~/.ssh/authorized_keys>> ~/.ssh/authorized_keys
    The authenticity of host 'hadoop-master (172.18.12.1)' can't be established.
    ECDSA key fingerprint is c5:ce:d2:d4:e0:25:dc:a8:33:7e:44:ae:ba:51:04:4d.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'hadoop-master,172.18.12.1' (ECDSA) to the list of known hosts.
    root@hadoop-slave2:~# ssh root@hadoop-slave3
    The authenticity of host 'hadoop-slave3 (172.18.12.4)' can't be established.
    ECDSA key fingerprint is c5:ce:d2:d4:e0:25:dc:a8:33:7e:44:ae:ba:51:04:4d.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'hadoop-slave3,172.18.12.4' (ECDSA) to the list of known hosts.
    Welcome to Ubuntu 14.04.6 LTS (GNU/Linux 5.0.0-23-generic x86_64)
    
     * Documentation:  https://help.ubuntu.com/
    root@hadoop-slave3:~# ssh root@hadoop-master cat ~/.ssh/authorized_keys>> ~/.ssh/authorized_keys
    The authenticity of host 'hadoop-master (172.18.12.1)' can't be established.
    ECDSA key fingerprint is c5:ce:d2:d4:e0:25:dc:a8:33:7e:44:ae:ba:51:04:4d.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'hadoop-master,172.18.12.1' (ECDSA) to the list of known hosts.
    root@hadoop-slave3:~# ssh root@hadoop-slave1
    The authenticity of host 'hadoop-slave1 (172.18.12.2)' can't be established.
    ECDSA key fingerprint is c5:ce:d2:d4:e0:25:dc:a8:33:7e:44:ae:ba:51:04:4d.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'hadoop-slave1,172.18.12.2' (ECDSA) to the list of known hosts.
    Welcome to Ubuntu 14.04.6 LTS (GNU/Linux 5.0.0-23-generic x86_64)
    
     * Documentation:  https://help.ubuntu.com/
    Last login: Sat Mar  7 05:14:36 2020 from hadoop-slave1
    root@hadoop-slave1:~# ssh root@hadoop-slave2
    Welcome to Ubuntu 14.04.6 LTS (GNU/Linux 5.0.0-23-generic x86_64)
    
     * Documentation:  https://help.ubuntu.com/
    Last login: Sat Mar  7 05:15:14 2020 from hadoop-slave1
    root@hadoop-slave2:~# ssh root@hadoop-slave3
    Welcome to Ubuntu 14.04.6 LTS (GNU/Linux 5.0.0-23-generic x86_64)
    
     * Documentation:  https://help.ubuntu.com/
    Last login: Sat Mar  7 05:15:40 2020 from hadoop-slave2
    root@hadoop-slave3:~# ssh root@hadoop-slave1
    Welcome to Ubuntu 14.04.6 LTS (GNU/Linux 5.0.0-23-generic x86_64)
    
     * Documentation:  https://help.ubuntu.com/
    Last login: Sat Mar  7 05:16:05 2020 from hadoop-slave3
    root@hadoop-slave1:~# ssh root@hadoop-slave2
    Welcome to Ubuntu 14.04.6 LTS (GNU/Linux 5.0.0-23-generic x86_64)
    
     * Documentation:  https://help.ubuntu.com/
    Last login: Sat Mar  7 05:16:21 2020 from hadoop-slave1
    root@hadoop-slave2:~# 
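
    As an alternative to the manual key exchange above, ssh-copy-id can push a node's key to the others (a sketch; it prompts for the remote root password each time, and it must be run from every node that should be able to log in to the rest):

    for h in hadoop-master hadoop-slave1 hadoop-slave2 hadoop-slave3; do
      ssh-copy-id root@"$h"    # appends ~/.ssh/id_rsa.pub to the remote authorized_keys
    done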
    

    3.4 Starting the ZooKeeper cluster (this part can be skipped; for now we are not installing a standalone ZooKeeper)

    Start:

    yay@yay-ThinkPad-T470:~/software/dockerfileForHBase$ docker exec -it hadoop-master /bin/bash
    root@hadoop-master:/# cd /usr/local/zookeeper/bin
    root@hadoop-master:/usr/local/zookeeper/bin# ./zkServer.sh start
    ZooKeeper JMX enabled by default
    Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
    Starting zookeeper ... STARTED
    root@hadoop-master:/usr/local/zookeeper/bin# exit
    exit
    yay@yay-ThinkPad-T470:~/software/dockerfileForHBase$ docker exec -it hadoop-slave2 /bin/bash
    root@hadoop-slave2:/# cd /usr/local/zookeeper/bin
    root@hadoop-slave2:/usr/local/zookeeper/bin# ./zkServer.sh start
    ZooKeeper JMX enabled by default
    Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
    Starting zookeeper ... STARTED
    root@hadoop-slave2:/usr/local/zookeeper/bin# exit
    exit
    yay@yay-ThinkPad-T470:~/software/dockerfileForHBase$ docker exec -it hadoop-slave1 /bin/bash
    root@hadoop-slave1:/# cd /usr/local/zookeeper/bin
    root@hadoop-slave1:/usr/local/zookeeper/bin# ./zkServer.sh start
    ZooKeeper JMX enabled by default
    Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
    Starting zookeeper ... STARTED
    root@hadoop-slave1:/usr/local/zookeeper/bin# exit
    exit
    yay@yay-ThinkPad-T470:~/software/dockerfileForHBase$ docker exec -it hadoop-slave3 /bin/bash
    root@hadoop-slave3:/# cd /usr/local/zookeeper/bin
    root@hadoop-slave3:/usr/local/zookeeper/bin# ./zkServer.sh start
    ZooKeeper JMX enabled by default
    Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
    Starting zookeeper ... STARTED
    root@hadoop-slave3:/usr/local/zookeeper/bin# exit
    exit
    yay@yay-ThinkPad-T470:~/software/dockerfileForHBase$ docker exec -it hadoop-slave2 /bin/bash
    root@hadoop-slave2:/# cd /usr/local/zookeeper/bin
    root@hadoop-slave2:/usr/local/zookeeper/bin# ./zkServer.sh start
    ZooKeeper JMX enabled by default
    Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
    Starting zookeeper ... STARTED
    root@hadoop-slave2:/usr/local/zookeeper/bin# exit
    exit
    yay@yay-ThinkPad-T470:~/software/dockerfileForHBase$ docker exec -it hadoop-slave1 /bin/bash
    root@hadoop-slave1:/# cd /usr/local/zookeeper/bin
    root@hadoop-slave1:/usr/local/zookeeper/bin# ./zkServer.sh start
    ZooKeeper JMX enabled by default
    Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
    Starting zookeeper ... STARTED
    root@hadoop-slave1:/usr/local/zookeeper/bin# exit
    exit
    

    Check which nodes are followers and which one is the leader:

    yay@yay-ThinkPad-T470:~/software/dockerfileForHBase$ docker exec -it hadoop-master /bin/bash
    root@hadoop-master:/# cd /usr/local/zookeeper/bin
    root@hadoop-master:/usr/local/zookeeper/bin# ./zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
    Client port found: 2181. Client address: localhost.
    Mode: follower
    root@hadoop-master:/usr/local/zookeeper/bin# exit
    exit
    yay@yay-ThinkPad-T470:~/software/dockerfileForHBase$ docker exec -it hadoop-slave1 /bin/bash
    root@hadoop-slave1:/# cd /usr/local/zookeeper/bin
    root@hadoop-slave1:/usr/local/zookeeper/bin# ./zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
    Client port found: 2181. Client address: localhost.
    Mode: follower
    root@hadoop-slave1:/usr/local/zookeeper/bin# exit
    exit
    yay@yay-ThinkPad-T470:~/software/dockerfileForHBase$ docker exec -it hadoop-slave2 /bin/bash
    root@hadoop-slave2:/# cd /usr/local/zookeeper/bin
    root@hadoop-slave2:/usr/local/zookeeper/bin# ./zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
    Client port found: 2181. Client address: localhost.
    Mode: follower
    root@hadoop-slave2:/usr/local/zookeeper/bin# exit
    exit
    yay@yay-ThinkPad-T470:~/software/dockerfileForHBase$ 
    yay@yay-ThinkPad-T470:~/software/dockerfileForHBase$ docker exec -it hadoop-slave3 /bin/bash
    root@hadoop-slave3:/# cd /usr/local/zookeeper/bin
    root@hadoop-slave3:/usr/local/zookeeper/bin# ./zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
    Client port found: 2181. Client address: localhost.
    Mode: leader
    root@hadoop-slave3:/usr/local/zookeeper/bin# 
    

    4. Starting the Hadoop cluster from hadoop-master

    docker run -it --name master -h hadoop-master --network=mynet --ip 172.18.12.1 --add-host hadoop-slave1:172.18.12.2 --add-host hadoop-slave2:172.18.12.3 --add-host hadoop-slave3:172.18.12.4 -d -P -p 2020:22 -p 9870:9870 -p 8088:8088 hadoop_hbase/nozookeeper:v1 && docker run -it --name slave1 -h hadoop-slave1 --network=mynet --ip 172.18.12.2  --add-host hadoop-master:172.18.12.1 --add-host hadoop-slave2:172.18.12.3 --add-host hadoop-slave3:172.18.12.4 -d -P -p 2021:22 hadoop_hbase/nozookeeper:v1 && docker run -it --name slave2 -h hadoop-slave2 --network=mynet --ip 172.18.12.3  --add-host hadoop-master:172.18.12.1 --add-host hadoop-slave1:172.18.12.2 --add-host hadoop-slave3:172.18.12.4 -d -P -p 2022:22 hadoop_hbase/nozookeeper:v1 && docker run -it --name slave3 -h hadoop-slave3 --network=mynet --ip 172.18.12.4  --add-host hadoop-master:172.18.12.1 --add-host hadoop-slave1:172.18.12.2 --add-host hadoop-slave2:172.18.12.3  -d -P -p 2023:22 hadoop_hbase/nozookeeper:v1
    
    yay@yay-ThinkPad-T470:~/software/dockerfileForHBase$ docker exec -it master /bin/bash
    root@hadoop-master:/# hdfs namenode -format
    
    root@hadoop-master:/# start-all.sh
    WARNING: HADOOP_PREFIX has been replaced by HADOOP_HOME. Using value of HADOOP_PREFIX.
    WARNING: HADOOP_PREFIX has been replaced by HADOOP_HOME. Using value of HADOOP_PREFIX.
    WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
    Starting namenodes on [hadoop-master]
    hadoop-master: Warning: Permanently added 'hadoop-master,172.18.12.1' (ECDSA) to the list of known hosts.
    Starting datanodes
    hadoop-slave2: Warning: Permanently added 'hadoop-slave2,172.18.12.3' (ECDSA) to the list of known hosts.
    hadoop-slave1: Warning: Permanently added 'hadoop-slave1,172.18.12.2' (ECDSA) to the list of known hosts.
    hadoop-slave3: Warning: Permanently added 'hadoop-slave3,172.18.12.4' (ECDSA) to the list of known hosts.
    hadoop-slave3: WARNING: /usr/local/hadoop-3.2.1/logs does not exist. Creating.
    hadoop-slave1: WARNING: /usr/local/hadoop-3.2.1/logs does not exist. Creating.
    hadoop-slave2: WARNING: /usr/local/hadoop-3.2.1/logs does not exist. Creating.
    Starting secondary namenodes [hadoop-master]
    WARNING: HADOOP_PREFIX has been replaced by HADOOP_HOME. Using value of HADOOP_PREFIX.
    WARNING: HADOOP_PREFIX has been replaced by HADOOP_HOME. Using value of HADOOP_PREFIX.
    Starting resourcemanager
    Starting nodemanagers
    root@hadoop-master:/# jps
    901 ResourceManager
    1366 Jps
    1048 NodeManager
    617 SecondaryNameNode
    410 DataNode
    271 NameNode
    root@hadoop-master:/# exit
    exit
    yay@yay-ThinkPad-T470:~/software/dockerfileForHBase$ docker exec -it slave1 /bin/bash
    root@hadoop-slave1:/# jps
    292 Jps
    57 DataNode
    175 NodeManager
    root@hadoop-slave1:/# 
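
    To confirm that HDFS really sees its DataNodes, a report can be pulled inside the master container (a sketch):

    hdfs dfsadmin -report    # prints cluster capacity and the list of live DataNodes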
    
    

    Start HBase:

    root@hadoop-master:/# start-hbase.sh
    hadoop-slave3: running zookeeper, logging to /usr/local/hbase/bin/../logs/hbase-root-zookeeper-hadoop-slave3.out
    hadoop-slave1: running zookeeper, logging to /usr/local/hbase/bin/../logs/hbase-root-zookeeper-hadoop-slave1.out
    hadoop-slave2: running zookeeper, logging to /usr/local/hbase/bin/../logs/hbase-root-zookeeper-hadoop-slave2.out
    hadoop-master: running zookeeper, logging to /usr/local/hbase/bin/../logs/hbase-root-zookeeper-hadoop-master.out
    running master, logging to /usr/local/hbase/logs/hbase--master-hadoop-master.out
    hadoop-slave1: running regionserver, logging to /usr/local/hbase/bin/../logs/hbase-root-regionserver-hadoop-slave1.out
    hadoop-slave3: running regionserver, logging to /usr/local/hbase/bin/../logs/hbase-root-regionserver-hadoop-slave3.out
    hadoop-master: running regionserver, logging to /usr/local/hbase/bin/../logs/hbase-root-regionserver-hadoop-master.out
    hadoop-slave2: running regionserver, logging to /usr/local/hbase/bin/../logs/hbase-root-regionserver-hadoop-slave2.out
    root@hadoop-master:/# jps
    901 ResourceManager
    1973 HRegionServer
    1798 HMaster
    1048 NodeManager
    617 SecondaryNameNode
    410 DataNode
    2061 Jps
    1742 HQuorumPeer
    271 NameNode
    root@hadoop-master:/# exit
    exit
    yay@yay-ThinkPad-T470:~/software/dockerfileForHBase$ docker exec -it slave1 /bin/bash
    root@hadoop-slave1:/# jps
    881 HRegionServer
    1026 Jps
    57 DataNode
    796 HQuorumPeer
    175 NodeManager
    root@hadoop-slave1:/# 
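
    A quick smoke test of the HBase cluster (a sketch, run inside the master container) is to ask for the status in the HBase shell:

    echo "status" | hbase shell    # reports the number of active master and region servers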
    
    
    

    How to check the environment variables:

    yay@yay-ThinkPad-T470:~$ docker exec -it master /bin/bash
    root@hadoop-master:/# export -p | egrep -i "(hadoop|hbase)"
    declare -x HADOOP_COMMON_HOME="/usr/local/hadoop"
    declare -x HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"
    declare -x HADOOP_HDFS_HOME="/usr/local/hadoop"
    declare -x HADOOP_HOME="/usr/local/hadoop"
    declare -x HADOOP_MAPRED_HOME="/usr/local/hadoop"
    declare -x HADOOP_PREFIX="/usr/local/hadoop"
    declare -x HADOOP_YARN_HOME="/usr/local/hadoop"
    declare -x HBASE_HOME="/usr/local/hbase"
    declare -x HOSTNAME="hadoop-master"
    declare -x PATH="/usr/local/hbase/bin:/usr/local/hadoop/bin:/usr/local/hadoop/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/jdk1.8.0_191/bin"
    
