Hadoop (Version 2.9, JDK 7) HDFS Installation

Author: MicoCube | Published 2018-01-17 22:23
    • Documentation index for each Hadoop version
    • The core of Hadoop's design is HDFS (Hadoop Distributed File System) plus MapReduce: HDFS provides storage for massive data sets, and MapReduce provides computation over them.
    • Hadoop strengths
      • High fault tolerance
        • Data is automatically stored as multiple replicas
        • Lost replicas are automatically restored
      • Batch processing
        • Moves the computation to the data rather than the data to the computation
        • Exposes data locations to the compute framework
      • Suited to big-data workloads
        • PB-scale data
        • Files numbering in the millions and beyond
        • 10K+ nodes
      • Can be built on cheap commodity machines
        • Gains reliability through multiple replicas
        • Provides fault-tolerance and recovery mechanisms
    • Hadoop weaknesses
      • Small files consume large amounts of NameNode memory, and seek time comes to exceed read time
      • Unsuitable for millisecond-level low-latency applications; it targets high throughput instead
      • A file has a single writer at a time and only append is supported; concurrent writes and random in-place modification are prohibitively expensive
    • Architecture diagram: HDFS architecture (figure not reproduced here)
    • The HDFS storage unit: the block
      • Files are split into fixed-size blocks
        • The default block size is 64 MB in Hadoop 1.x and 128 MB in 2.x; it is configurable
        • A file smaller than one block is still stored as a block of its own
      • Storage flow
        • The file is cut by size into blocks, which are stored on different nodes
        • By default every block has 3 replicas
        • Appended data goes into new blocks
      • Block size and replica count can be set by the client at upload time; after a successful upload the replica count can still be changed, but the block size cannot (a hedged example follows)
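
        Both knobs can be exercised from the shell. A minimal sketch, assuming a running cluster and an illustrative local file data.log:

        # Upload with a per-file block size (in bytes) and replica count; both are standard 2.x keys
        hdfs dfs -D dfs.blocksize=134217728 -D dfs.replication=2 -put data.log /tmp/data.log
        # The replica count can be raised later; -w waits until re-replication finishes
        hdfs dfs -setrep -w 3 /tmp/data.log
        # The block size of an existing file cannot be changed; the file would have to be rewritten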
    • NameNode [NN]
      • Its main job is to serve client read and write requests
      • If the metadata is lost, files cannot be recovered even when the data on the DNs is intact
      • Holds the metadata:
        • File ownership and permissions
        • Which blocks make up each file
        • Which DataNode holds each block (reported by the DataNodes when they start)
      • The NameNode loads its metadata into memory at startup
        • The metadata is also persisted to disk, in a file named fsimage
        • Block location information is not saved in fsimage
        • edits is a log of operations on the metadata. Operations are not applied to fsimage directly, to avoid concurrent modification of that file; but because they are not written to fsimage, the on-disk fsimage drifts away from the in-memory metadata. That gap is exactly what the SecondaryNameNode exists to close. (A hedged way to peek inside these files follows.)
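
        Hadoop ships offline viewers for both files. A minimal sketch; the file names under current/ are illustrative (they match the naming pattern shown by the tree output later in this post):

        # Dump an fsimage to XML with the Offline Image Viewer
        hdfs oiv -p XML -i /opt/hadoop/dfs/name/current/fsimage_0000000000000000000 -o /tmp/fsimage.xml
        # Dump an edits segment to XML with the Offline Edits Viewer
        hdfs oev -p xml -i /opt/hadoop/dfs/name/current/edits_inprogress_0000000000000000001 -o /tmp/edits.xml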
    • SecondaryNameNode [SNN]
      • It is not a backup of the NN (though it can act as one); its job is to help the NN merge the edits log into fsimage
      • When the SNN triggers a merge (checkpoint)
        • A time interval, fs.checkpoint.period, default 3600 s: a merge starts every 3600 s (in 2.x this key is named dfs.namenode.checkpoint.period)
        • An edits size limit, fs.checkpoint.size, default 64 MB: a merge starts once the log reaches this size (this is the legacy 1.x key name)
        • After the merge the SNN keeps a copy of the fsimage, which can serve as a backup, but it is missing whatever landed in edits.new while the merge ran
        • Checkpoint merge flow (diagram not reproduced here; a hedged way to check the effective settings follows)
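
      To verify which checkpoint settings a node actually runs with, hdfs getconf can read the effective configuration. A small sketch, assuming $HADOOP_HOME/bin is on the PATH:

      # Effective checkpoint interval (the 2.x key name)
      hdfs getconf -confKey dfs.namenode.checkpoint.period
      # In 2.x an uncheckpointed-transaction count also triggers a checkpoint
      hdfs getconf -confKey dfs.namenode.checkpoint.txns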
    • DataNode [DN]
      • Stores the data
      • When the DN thread starts, it reports its block information to the NN
      • It keeps in touch with the NN via heartbeats (one every 3 s). If the NN hears nothing for 10 minutes, it declares the DN lost and copies that DN's blocks to other DNs. If a DN dies right at startup, its block information never reaches the NN; the NN then notices any block whose replica count has dropped below 3 and re-replicates it until the count is 3 or more. Should that machine later rejoin, some blocks may briefly have 4 replicas, which does no harm.
    • Block replica placement policy
      • First replica: on the DN that is uploading the file; for a submission from outside the cluster, a random node whose disk is not too full and whose CPU is not too busy
      • Second replica: on a node in a different rack from the first replica
      • Third replica: on a node in the same rack as the second replica
      • Further replicas: random nodes (a hedged way to inspect actual placement follows)
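
      To see where the replicas of a file really landed, fsck can list them per block. A sketch with an illustrative path:

      # Show the DataNodes holding each block of a file
      hdfs fsck /tmp/data.log -files -blocks -locations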
    • HDFS write flow
      • Write flow (diagram not reproduced here)
      • When the client contacts the NN, the NN returns the block size and a set of available DNs
      • Blocks are cut as the data is written: with a 128 MB block size, once 128 MB has been written to a DN the client moves on to the next block
    • HDFS read flow
      • Read flow (diagram not reproduced here)
      • Every block has 3 replicas, so which DN you read from is decided by the information the NN returns (generally a relatively idle DN). A minimal round trip is sketched below.
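
      A minimal write-then-read round trip from the shell, with illustrative paths:

      hdfs dfs -mkdir -p /tmp/demo
      hdfs dfs -put data.log /tmp/demo/             # write: ask the NN, then stream to DNs
      hdfs dfs -get /tmp/demo/data.log ./copy.log   # read: the NN picks which replicas to serve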
    • HDFS file permissions
      • Similar to Linux file permissions
        • r: read; w: write; x: execute. The x bit is ignored for files; on a directory it governs whether its contents may be accessed
      • If Linux user a creates a file with a hadoop command, that file's owner in HDFS is a
      • The point of HDFS permissions is to stop good people from doing the wrong thing, not to stop bad people from doing bad things. HDFS trusts you: you tell it who you are, and it takes your word for it. (A hedged example follows.)
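
      The familiar Linux idioms carry over as hdfs dfs subcommands. A sketch on the illustrative file from earlier, using the user a from the text:

      hdfs dfs -ls /tmp                    # list with owner, group, and mode
      hdfs dfs -chmod 640 /tmp/data.log    # mode works like chmod
      hdfs dfs -chown a:a /tmp/data.log    # owner/group work like chown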
    • Safe mode
      • When the NN starts, it first loads fsimage into memory and replays the operations recorded in edits
      • Once the in-memory image of the filesystem metadata has been built, it writes out a fresh fsimage (no SNN is needed for this) and an empty edits
      • At this point the NN is running in safe mode: the filesystem is read-only to clients. Listing directories and reading file contents work; write, delete, and rename all fail
      • During this phase the NN collects reports from the DNs. A block counts as safe once it reaches the minimum replica count; when a certain (configurable) fraction of blocks has been confirmed safe, and after a further grace period, safe mode ends
      • Any block found short of the minimum replica count is copied until it reaches it. Note that block locations are not maintained by the NN itself; they are stored as block lists on the DNs. (A hedged example of driving safe mode follows.)
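
      Safe mode can be inspected and driven from the shell via dfsadmin. A sketch:

      hdfs dfsadmin -safemode get     # prints whether safe mode is ON or OFF
      hdfs dfsadmin -safemode enter   # enter safe mode manually
      hdfs dfsadmin -safemode leave   # force-exit safe mode (use with care)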
    • Pre-installation preparation
      • Prepare 5 machines (VMs; here 192.168.10.215~192.168.10.219, referred to below simply as 215 through 219)
      • Disable the firewall on all 5 machines:
      [root@dn3 ~]# systemctl stop firewalld.service
      [root@dn3 ~]# systemctl disable firewalld.service
      Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
      Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
      [root@dn3 ~]# iptables -F
      [root@dn3 ~]# vi /etc/selinux/config
      #SELINUX=enforcing #comment this out
      #SELINUXTYPE=targeted #comment this out
      SELINUX=disabled #add this line
      # Never write SELINUXTYPE=disabled; if you do, you may need this:
      # http://www.mamicode.com/info-detail-1847013.html
      [root@dn3 ~]# setenforce 0
      # Reboot all the VMs
      [root@dn3 ~]# iptables -F
      
      • Set the hostnames (the command below is for CentOS 7; look up the equivalent for other Linux versions): hostnamectl set-hostname <name>. Here the NN node is named nn, the SNN node snn, and the three DN nodes dn1, dn2, and dn3 (per-node commands are sketched below)
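
      A compact sketch of the per-node commands; run each one on the matching machine:

      hostnamectl set-hostname nn    # on 192.168.10.219
      hostnamectl set-hostname snn   # on 192.168.10.218
      hostnamectl set-hostname dn1   # on 192.168.10.217
      hostnamectl set-hostname dn2   # on 192.168.10.216
      hostnamectl set-hostname dn3   # on 192.168.10.215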
      • Edit the hosts file, the same on every machine (one way to push it to every node is sketched after the listing)
      127.0.0.1       localhost       localhost
      192.168.10.219  nn              nn
      192.168.10.218  snn             snn
      192.168.10.217  dn1             dn1
      192.168.10.216  dn2             dn2
      192.168.10.215  dn3             dn3
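
      One way to push the same hosts file from 219 to the other nodes; a sketch that assumes you still have password (or key) access to each:

      for h in 192.168.10.215 192.168.10.216 192.168.10.217 192.168.10.218; do
        scp /etc/hosts root@$h:/etc/hosts
      done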
      
      • Check which JDK versions are compatible with your Hadoop. For Hadoop 2.9 the compatible JDK version is 7, and the official wiki notes "It is built and tested on both OpenJDK and Oracle (HotSpot)'s JDK/JRE.", so simply run yum -y install java-1.7.0-openjdk.x86_64 on all five machines. To find where the JDK was installed:

        [root@localhost hadoop]# rpm -ql java-1.7.0-openjdk.x86_64
        /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.161-2.6.12.0.el7_4.x86_64/jre/bin/policytool
        /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.161-2.6.12.0.el7_4.x86_64/jre/lib/amd64/libjavagtk.so
        /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.161-2.6.12.0.el7_4.x86_64/jre/lib/amd64/libjsoundalsa.so
        /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.161-2.6.12.0.el7_4.x86_64/jre/lib/amd64/libpulse-java.so
        /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.161-2.6.12.0.el7_4.x86_64/jre/lib/amd64/libsplashscreen.so
        /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.161-2.6.12.0.el7_4.x86_64/jre/lib/amd64/xawt/libmawt.so
        /usr/share/icons/hicolor/16x16/apps/java-1.7.0.png
        /usr/share/icons/hicolor/24x24/apps/java-1.7.0.png
        /usr/share/icons/hicolor/32x32/apps/java-1.7.0.png
        /usr/share/icons/hicolor/48x48/apps/java-1.7.0.png
        # So the JDK is installed under /usr/lib/jvm
        [root@localhost hadoop]# ls /usr/lib/jvm/jre-1.7.0-openjdk/bin/
        java  keytool  orbd  pack200  policytool  rmid  rmiregistry  servertool  tnameserv  unpack200
        
      • Bring the clocks into line. Install rdate (yum -y install rdate) and sync the time from a time server on every machine (a hedged example follows)
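
        A sketch; the server name here is just an example of a public RFC 868 time server:

        # -s sets the system clock from the remote server; -p would only print the time
        rdate -s time-b.nist.gov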

      • About passwordless login:

        • Passwordless login still involves an exchange; it is not that nothing is checked. Suppose A wants to log in to B without a password
        • A connects and presents its public key. B looks for that key in its authorized_keys file; if it is there, B issues a challenge that only the holder of the matching private key can answer
        • A answers the challenge with its private key, proving its identity, and B lets the session in
        • In other words, for A to log in to B, B must hold A's public key (ssh-copy-id, sketched below, automates exactly that copy)
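
        For reference, ssh-copy-id performs the key distribution that is done by hand below. A sketch, run on 219 after the key pair exists:

        # Appends ~/.ssh/id_rsa.pub to authorized_keys on each target (prompts once per password)
        for h in snn dn1 dn2 dn3; do
          ssh-copy-id root@$h
        done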
      • Pick one machine that can log in to the other four without a password; it should also be able to log in to itself without a password (optional, but if you skip this, the HDFS startup will pause and ask you for a password before continuing). Here 219 is chosen, starting with passwordless login to itself:

        # Install ssh (my system already ships with it, hence "Nothing to do"; on CentOS the server package is openssh-server) ########
        [root@localhost yum.repos.d]# yum install -y sshd
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * base: mirrors.aliyun.com
         * extras: mirrors.tuna.tsinghua.edu.cn
         * updates: mirrors.aliyun.com
        No package ssh available.
        Error: Nothing to do
        # Install rsync ################################################################
        [root@localhost yum.repos.d]# yum install -y rsync
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * base: mirrors.aliyun.com
         * extras: mirrors.tuna.tsinghua.edu.cn
         * updates: mirrors.aliyun.com
        Resolving Dependencies
        --> Running transaction check
        ---> Package rsync.x86_64 0:3.0.9-18.el7 will be installed
        --> Finished Dependency Resolution
        
        Dependencies Resolved
        
        =========================================================================================
         Package           Arch               Version                     Repository        Size
        =========================================================================================
        Installing:
         rsync             x86_64             3.0.9-18.el7                base             360 k
        
        Transaction Summary
        =========================================================================================
        Install  1 Package
        
        Total download size: 360 k
        Installed size: 732 k
        Downloading packages:
        rsync-3.0.9-18.el7.x86_64.rpm                                     | 360 kB  00:00:00     
        Running transaction check
        Running transaction test
        Transaction test succeeded
        Running transaction
          Installing : rsync-3.0.9-18.el7.x86_64                                             1/1 
          Verifying  : rsync-3.0.9-18.el7.x86_64                                             1/1 
        
        Installed:
          rsync.x86_64 0:3.0.9-18.el7                                                            
        
        Complete!
        [root@localhost yum.repos.d]# ssh localhost
        The authenticity of host 'localhost (::1)' can't be established.
        ECDSA key fingerprint is SHA256:3h7izAi6QdCeHwDrb8PdeeoMzaJH0zP4n75SQBxlSr8.
        ECDSA key fingerprint is MD5:3a:e3:ca:15:c7:24:cf:56:37:27:31:70:14:70:d5:01.
        Are you sure you want to continue connecting (yes/no)? yes
        Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
        root@localhost's password: 
        # Generate the key pair ########################################################
        [root@localhost yum.repos.d]# ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
        Generating public/private rsa key pair.
        Your identification has been saved in /root/.ssh/id_rsa.
        Your public key has been saved in /root/.ssh/id_rsa.pub.
        The key fingerprint is:
        SHA256:0kVvd1pHxg8NSCq84SRl2Yi0Zr48LjA8LNz4pxx3Cms root@localhost.localdomain
        The key's randomart image is:
        +---[RSA 2048]----+
        |     ...o+.....+o|
        |      .=o..o. .oo|
        |      = = o o ..=|
        |     + = = . . +o|
        |.oo   o S     .  |
        |.o*. . o         |
        | ..* .+.         |
        |  .E*oo.         |
        |  .+oo.          |
        +----[SHA256]-----+
        # Append the public key to the local authorized_keys file #####################
        [root@localhost yum.repos.d]# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
        # Restrict the file's permissions ##############################################
        [root@localhost yum.repos.d]# chmod 0600 ~/.ssh/authorized_keys
        # Test passwordless login to self ##############################################
        [root@localhost yum.repos.d]# ssh localhost
        Last login: Wed Dec 20 22:36:50 2017 from 192.168.10.211
        # Log out ######################################################################
        [root@localhost ~]# exit
        logout
        Connection to localhost closed.
        [root@localhost yum.repos.d]# 
        
      • Let 219 log in to the other four machines without a password (each machine also enables passwordless login to itself; skipping that produces warnings)

        • Machine 215 (passwordless login to itself; copy 219's public key over and append it to the authorized_keys file)
        [root@localhost ~]# scp root@nn:/root/.ssh/id_rsa.pub ./
        The authenticity of host '192.168.10.219 (192.168.10.219)' can't be established.
        ECDSA key fingerprint is SHA256:3h7izAi6QdCeHwDrb8PdeeoMzaJH0zP4n75SQBxlSr8.
        ECDSA key fingerprint is MD5:3a:e3:ca:15:c7:24:cf:56:37:27:31:70:14:70:d5:01.
        Are you sure you want to continue connecting (yes/no)? yes
        Warning: Permanently added '192.168.10.219' (ECDSA) to the list of known hosts.
        root@192.168.10.219's password: 
        id_rsa.pub                                             100%  408   508.0KB/s   00:00    
        [root@localhost ~]# cat ./id_rsa.pub >> ~/.ssh/authorized_keys
        
        • Machine 216 (passwordless login to itself; copy 219's public key over and append it to the authorized_keys file)
        [root@localhost ~]# scp root@nn:/root/.ssh/id_rsa.pub ./
        The authenticity of host '192.168.10.219 (192.168.10.219)' can't be established.
        ECDSA key fingerprint is SHA256:3h7izAi6QdCeHwDrb8PdeeoMzaJH0zP4n75SQBxlSr8.
        ECDSA key fingerprint is MD5:3a:e3:ca:15:c7:24:cf:56:37:27:31:70:14:70:d5:01.
        Are you sure you want to continue connecting (yes/no)? yes
        Warning: Permanently added '192.168.10.219' (ECDSA) to the list of known hosts.
        root@192.168.10.219's password: 
        id_rsa.pub                                             100%  408   508.0KB/s   00:00    
        [root@localhost ~]# cat ./id_rsa.pub >> ~/.ssh/authorized_keys
        
        • Machine 217 (passwordless login to itself; copy 219's public key over and append it to the authorized_keys file)
        [root@localhost yum.repos.d]# scp root@nn:/root/.ssh/id_rsa.pub ./
        The authenticity of host '192.168.10.219 (192.168.10.219)' can't be established.
        ECDSA key fingerprint is SHA256:3h7izAi6QdCeHwDrb8PdeeoMzaJH0zP4n75SQBxlSr8.
        ECDSA key fingerprint is MD5:3a:e3:ca:15:c7:24:cf:56:37:27:31:70:14:70:d5:01.
        Are you sure you want to continue connecting (yes/no)? yes
        Warning: Permanently added '192.168.10.219' (ECDSA) to the list of known hosts.
        root@192.168.10.219's password: 
        Permission denied, please try again.
        root@192.168.10.219's password: 
        id_rsa.pub                                             100%  408   342.1KB/s   00:00    
        [root@localhost yum.repos.d]# cat ./id_rsa.pub >> ~/.ssh/authorized_keys
        
        • Machine 218 (passwordless login to itself; copy 219's public key over and append it to the authorized_keys file)
        [root@localhost ~]# scp root@nn:/root/.ssh/id_rsa.pub ./
        The authenticity of host '192.168.10.219 (192.168.10.219)' can't be established.
        ECDSA key fingerprint is SHA256:3h7izAi6QdCeHwDrb8PdeeoMzaJH0zP4n75SQBxlSr8.
        ECDSA key fingerprint is MD5:3a:e3:ca:15:c7:24:cf:56:37:27:31:70:14:70:d5:01.
        Are you sure you want to continue connecting (yes/no)? yes
        Warning: Permanently added '192.168.10.219' (ECDSA) to the list of known hosts.
        root@192.168.10.219's password: 
        id_rsa.pub                                             100%  408   400.1KB/s   00:00    
        [root@localhost ~]# cat ./id_rsa.pub >> ~/.ssh/authorized_keys
        
        • Verification from 219
        [root@localhost ~]# ssh dn3
        The authenticity of host '192.168.10.215 (192.168.10.215)' can't be established.
        ECDSA key fingerprint is SHA256:3h7izAi6QdCeHwDrb8PdeeoMzaJH0zP4n75SQBxlSr8.
        ECDSA key fingerprint is MD5:3a:e3:ca:15:c7:24:cf:56:37:27:31:70:14:70:d5:01.
        Are you sure you want to continue connecting (yes/no)? yes
        Warning: Permanently added '192.168.10.215' (ECDSA) to the list of known hosts.
        Last login: Thu Dec 21 01:10:05 2017 from ::1
        [root@localhost ~]# exit
        logout
        Connection to 192.168.10.215 closed.
        [root@localhost .ssh]# ssh dn2
        Last login: Thu Dec 21 00:01:32 2017 from 192.168.10.219
        [root@localhost ~]# exit
        logout
        Connection to 192.168.10.216 closed.
        [root@localhost .ssh]# ssh dn1
        The authenticity of host '192.168.10.217 (192.168.10.217)' can't be established.
        ECDSA key fingerprint is SHA256:3h7izAi6QdCeHwDrb8PdeeoMzaJH0zP4n75SQBxlSr8.
        ECDSA key fingerprint is MD5:3a:e3:ca:15:c7:24:cf:56:37:27:31:70:14:70:d5:01.
        Are you sure you want to continue connecting (yes/no)? yes
        Warning: Permanently added '192.168.10.217' (ECDSA) to the list of known hosts.
        Last login: Wed Dec 20 23:48:25 2017 from ::1
        [root@localhost ~]# exit
        logout
        Connection to 192.168.10.217 closed.
        [root@localhost .ssh]# ssh snn
        Last login: Wed Dec 20 23:47:06 2017 from ::1
        [root@localhost ~]# exit
        logout
        Connection to 192.168.10.218 closed.
        [root@localhost .ssh]# 
        
      • Roles: one NN (219), one SNN (218), and three DNs (215~217)

      • Upload the Hadoop tarball to the NN node

    • Installation
      • NN node
        • Unpack
         [root@localhost local]# tar -C /usr/local/ -xvf hadoop-2.9.0.tar.gz
         [root@localhost ~]# cd /usr/local/
         [root@localhost local]# ls
         bin  etc  games  hadoop-2.9.0  include  lib  lib64  libexec  sbin  share  src
         [root@localhost local]# mv hadoop-2.9.0/ hadoop
         [root@localhost local]# ls
         bin  etc  games  hadoop  include  lib  lib64  libexec  sbin  share  src
         [root@localhost local]# cd hadoop/
         [root@localhost hadoop]# ls
         bin  etc  include  lib  libexec  LICENSE.txt  NOTICE.txt  README.txt  sbin  share
         [root@localhost hadoop]# cd etc/
         [root@localhost etc]# ls
         hadoop
         [root@localhost etc]# cd hadoop/
         [root@localhost hadoop]# ls
         capacity-scheduler.xml  hadoop-metrics2.properties  httpfs-signature.secret  log4j.properties            ssl-client.xml.example
         configuration.xsl       hadoop-metrics.properties   httpfs-site.xml          mapred-env.cmd              ssl-server.xml.example
         container-executor.cfg  hadoop-policy.xml           kms-acls.xml             mapred-env.sh               yarn-env.cmd
         core-site.xml           hdfs-site.xml               kms-env.sh               mapred-queues.xml.template  yarn-env.sh
         hadoop-env.cmd          httpfs-env.sh               kms-log4j.properties     mapred-site.xml.template    yarn-site.xml
         hadoop-env.sh           httpfs-log4j.properties     kms-site.xml             slaves
        
        • Edit hadoop-env.sh (runtime environment for the NN)
        # vi hadoop-env.sh
        export JAVA_HOME=/usr/lib/jvm/jre-1.7.0-openjdk/
        
        • Edit core-site.xml (note: XML takes <!-- --> comments, not #)
        <configuration>
          <!-- The NameNode's data-transfer (RPC) endpoint for HDFS -->
          <property>
              <name>fs.defaultFS</name>
              <value>hdfs://nn:9000</value>
          </property>
          <!-- Per http://hadoop.apache.org/docs/r2.9.0/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml,
               dfs.namenode.name.dir defaults to file://${hadoop.tmp.dir}/dfs/name; that directory holds
               the metadata (fsimage). hadoop.tmp.dir in turn defaults to /tmp/hadoop-${user.name}, the
               Linux temp directory, whose contents are lost on reboot, so point it at /opt/hadoop. -->
          <property>
              <name>hadoop.tmp.dir</name>
              <value>/opt/hadoop</value>
          </property>
        </configuration>
        
        • Edit hdfs-site.xml
         <configuration>
           <!-- HTTP address of the SNN -->
           <property>
               <name>dfs.namenode.secondary.http-address</name>
               <value>snn:50090</value>
           </property>
           <!-- HTTPS address of the SNN -->
           <property>
               <name>dfs.namenode.secondary.https-address</name>
               <value>snn:50091</value>
           </property>
         </configuration>
        
        • Edit slaves (the DN hostnames)
        dn1
        dn2
        dn3
        
        • Create and edit masters (the SNN hostname)
        snn
        
        • Pack up the configured hadoop directory and send it to the other four machines
        [root@localhost local]# tar -czvf hadoop.tar.gz /usr/local/hadoop/
        [root@localhost local]# scp hadoop.tar.gz root@192.168.10.215:/root/
        hadoop.tar.gz                                                                                  100%  350MB  31.8MB/s   00:11    
        [root@localhost local]# scp hadoop.tar.gz root@192.168.10.216:/root/
        hadoop.tar.gz                                                                                  100%  350MB  29.1MB/s   00:12    
        [root@localhost local]# scp hadoop.tar.gz root@192.168.10.217:/root/
        hadoop.tar.gz                                                                                  100%  350MB  69.9MB/s   00:05    
        [root@localhost local]# scp hadoop.tar.gz root@192.168.10.218:/root/
        hadoop.tar.gz     
        
        • The other four machines likewise unpack it to /usr/local/. Also set /usr/local/hadoop as the HADOOP_HOME environment variable on every machine (you can configure one machine and copy the file to the rest); a verification sketch follows the snippet:
        # vi ~/.bash_profile
        export HADOOP_HOME=/usr/local/hadoop
        export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
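
        After sourcing the profile, a quick sanity check (a sketch):

        source ~/.bash_profile
        hadoop version   # should report Hadoop 2.9.0 and its build info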
        
        • Initialize the NN node (219 here; run this command only on the chosen NN node):
        [root@localhost hadoop]# hdfs namenode -format
        # After formatting, fsimage files appear under /opt/hadoop/.
        # Note: if you ever re-format later, remember to delete
        # everything under /opt/hadoop/ on every node first, otherwise
        # the Hadoop monitoring page will show no DataNodes and
        # storage usage will read 100%.
        [root@localhost opt]# tree /opt/hadoop/
        /opt/hadoop/
        └── dfs
            └── name
                └── current
                    ├── fsimage_0000000000000000000
                    ├── fsimage_0000000000000000000.md5
                    ├── seen_txid
                    └── VERSION
        
        3 directories, 4 files
        
        • Before starting, make sure the firewall is down on every node (systemctl stop firewalld on CentOS 7, service iptables stop on older releases), then start HDFS on the NN node with start-dfs.sh
        # "logging to" names the log file; if some node fails to start,
        # go to that machine and read the corresponding log file for the cause
        [root@localhost ~]# start-dfs.sh 
        Starting namenodes on [localhost]
        localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-localhost.localdomain.out
        192.168.10.215: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-localhost.localdomain.out
        192.168.10.217: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-localhost.localdomain.out
        192.168.10.216: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-localhost.localdomain.out
        Starting secondary namenodes [192.168.10.218]
        192.168.10.218: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-localhost.localdomain.out    
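
        To confirm the daemons really came up, jps on each node is a quick check. A sketch; jps ships with the JDK (on CentOS it may require the java-1.7.0-openjdk-devel package):

        jps   # on nn expect NameNode; on snn, SecondaryNameNode; on dn1~dn3, DataNode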
        
        • Open Hadoop's NameNode web UI, default port 50070: http://192.168.10.219:50070. If you cannot reach it, check the Linux firewall (iptables -F, or stop the firewall service). The same overview is available from the command line, as sketched below.
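
        A command-line equivalent of the web overview:

        # Cluster summary: live DataNodes, total/used capacity, per-node details
        hdfs dfsadmin -report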
