
Installing Hadoop HDFS with Docker

By 故事的开头是个码农 | Published 2019-11-04 19:29

    Search for an image

    [root@localhost /]# docker search hadoop
    

    There is no official image, so I chose the singularities/hadoop image.

    [root@localhost /]# docker pull singularities/hadoop
    

    Check the image

    [root@localhost /]# docker image ls
    REPOSITORY                       TAG                 IMAGE ID            CREATED             SIZE
    docker.io/singularities/hadoop   latest              e213c9ae1b36        3 months ago        1.19 GB
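
    As a side check (not part of the original walkthrough), docker image inspect can show which ports the image exposes, which is useful when deciding what to map in the compose file below:

    [root@localhost /]# docker image inspect --format '{{json .Config.ExposedPorts}}' singularities/hadoop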
    

    Create the docker-compose.yml file

    [root@localhost /]# vim docker-compose.yml
    

    Contents of docker-compose.yml:

    version: "2"
    
    services:
      namenode:
        image: singularities/hadoop
        command: start-hadoop namenode
        hostname: namenode
        environment:
          HDFS_USER: hdfsuser
        ports:
          - "8020:8020"
          - "14000:14000"
          - "50070:50070"
          - "50075:50075"
          - "10020:10020"
          - "13562:13562"
          - "19888:19888"
      datanode:
        image: singularities/hadoop
        command: start-hadoop datanode namenode
        environment:
          HDFS_USER: hdfsuser
        links:
          - namenode
    

    Here HDFS_USER is the name of the HDFS account; it has to be created manually, which is explained further below.
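
    Before bringing the stack up, the file can be sanity-checked; docker-compose config parses it and prints the merged configuration, or reports an error if the YAML is invalid (a quick check, not part of the original steps):

    [root@localhost hadoop]# docker-compose config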

    Run:

    [root@localhost hadoop]# docker-compose up -d
    Creating network "hadoop_default" with the default driver
    Creating hadoop_namenode_1 ... done
    Creating hadoop_datanode_1 ... done
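
    To confirm both services started, docker-compose ps lists the containers belonging to this compose project; both should show an Up state (a quick check, not shown in the original):

    [root@localhost hadoop]# docker-compose ps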
    

    Scale up to 3 datanodes

    [root@localhost hadoop]# docker-compose scale datanode=3
    WARNING: The scale command is deprecated. Use the up command with the --scale flag instead.
    Starting hadoop_datanode_1 ... done
    Creating hadoop_datanode_2 ... done
    Creating hadoop_datanode_3 ... done
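
    As the warning notes, the scale command is deprecated; the equivalent with the up command is:

    [root@localhost hadoop]# docker-compose up -d --scale datanode=3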
    

    List the containers to verify

    [root@localhost hadoop]# docker ps
    CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS                                                                                                                                                                                                                                                NAMES
    19f9685e286f        singularities/hadoop   "start-hadoop data..."   48 seconds ago      Up 46 seconds       8020/tcp, 9000/tcp, 10020/tcp, 13562/tcp, 14000/tcp, 19888/tcp, 50010/tcp, 50020/tcp, 50070/tcp, 50075/tcp, 50090/tcp, 50470/tcp, 50475/tcp                                                                                                           hadoop_datanode_3
    e96b395f56e3        singularities/hadoop   "start-hadoop data..."   48 seconds ago      Up 46 seconds       8020/tcp, 9000/tcp, 10020/tcp, 13562/tcp, 14000/tcp, 19888/tcp, 50010/tcp, 50020/tcp, 50070/tcp, 50075/tcp, 50090/tcp, 50470/tcp, 50475/tcp                                                                                                           hadoop_datanode_2
    5a26b1069dbb        singularities/hadoop   "start-hadoop data..."   8 minutes ago       Up 8 minutes        8020/tcp, 9000/tcp, 10020/tcp, 13562/tcp, 14000/tcp, 19888/tcp, 50010/tcp, 50020/tcp, 50070/tcp, 50075/tcp, 50090/tcp, 50470/tcp, 50475/tcp                                                                                                           hadoop_datanode_1
    a8656de09ecc        singularities/hadoop   "start-hadoop name..."   8 minutes ago       Up 8 minutes        0.0.0.0:8020->8020/tcp, 0.0.0.0:10020->10020/tcp, 0.0.0.0:13562->13562/tcp, 0.0.0.0:14000->14000/tcp, 9000/tcp, 50010/tcp, 0.0.0.0:19888->19888/tcp, 0.0.0.0:50070->50070/tcp, 50020/tcp, 50090/tcp, 50470/tcp, 0.0.0.0:50075->50075/tcp, 50475/tcp   hadoop_namenode_1
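
    To confirm that all three datanodes registered with the namenode, you can ask for a cluster report from inside the namenode container (a quick check using the hadoop_namenode_1 name from the listing above; depending on which account the daemons run under, the command may need to be executed as that user):

    [root@localhost hadoop]# docker exec -it hadoop_namenode_1 hdfs dfsadmin -report
    # lists the live datanodes; all three should appear once they have registered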
    

    Open a browser to check the result. The NameNode web UI listens on port 50070, which is mapped to the host in the compose file, so it is reachable at http://<host IP>:50070.

    (screenshot of the web UI)
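
    If no browser is at hand, the same check can be done from the shell; this just confirms the NameNode answers on the mapped port:

    [root@localhost hadoop]# curl -s -o /dev/null -w "%{http_code}\n" http://localhost:50070
    # should print 200 when the web UI is reachable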

    Create the HDFS system account

    [root@localhost /]#  adduser hdfsuser
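
    The command above creates the account on the Docker host. To check whether the same account also exists inside a container (using the hadoop_namenode_1 name from the listing above), a quick probe is:

    [root@localhost /]# docker exec hadoop_namenode_1 id hdfsuser
    # id prints the uid/gid if the user exists, or "no such user" otherwise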
    

    File operations

    File operations require entering a datanode container first; the commands below are run inside it.

    Enter a datanode's Docker container:

    [root@iZ2ze82xifgiw8sbzpte9tZ ~]# docker exec -it <container ID> bash
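
    For example, with one of the container names (or IDs) from the docker ps output above:

    [root@iZ2ze82xifgiw8sbzpte9tZ ~]# docker exec -it hadoop_datanode_1 bash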
    

    1. Create a directory

    hadoop fs -mkdir /hdfs    # create an hdfs directory under the root
    

    2. List a directory

    hadoop fs -ls /    # list the files under the root directory
    

    3. Create nested directories

    hadoop fs -mkdir -p /hdfs/d1/d2
    

    4. List directories recursively

    hadoop fs -ls -R /
    

    5. Upload a local file to HDFS

    echo "hello hdfs" >> local.txt
    hadoop fs -put local.txt /hdfs/d1/d2
    

    6. View the contents of a file in HDFS

    hadoop fs -cat /hdfs/d1/d2/local.txt
    hello hdfs
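
    To round out the basics, a file can also be copied back out of HDFS and removed again; a short sketch using the same paths as above (not part of the original list of steps):

    hadoop fs -get /hdfs/d1/d2/local.txt ./from-hdfs.txt    # download the file from HDFS to the local filesystem
    hadoop fs -rm -r /hdfs                                  # delete the /hdfs tree recursively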
    

    Reference:

    https://www.cnblogs.com/hongdada/p/9488349.html
