Hadoop 2.9.2 High-Availability Cluster Setup

Author: doublegao | Published 2019-08-28 09:20

    1. Environment preparation and deployment plan

    • Hardware and layout plan
    No. Host            RAM(GB) OS         Components                                 Daemons
    1   10.110.172.151  32      CentOS 6.5 jdk-8u221, hadoop-2.9.2                    DataNode, NodeManager, JournalNode
    2   10.110.172.152  32      CentOS 6.5 jdk-8u221, hadoop-2.9.2                    DataNode, NodeManager, JournalNode
    3   10.110.172.153  32      CentOS 6.5 jdk-8u221, hadoop-2.9.2                    DataNode, NodeManager, JournalNode
    4   10.110.172.154  16      CentOS 6.5 jdk-8u221, hadoop-2.9.2, zookeeper-3.4.14  ResourceManager, NameNode, zkfc, ZooKeeper
    5   10.110.172.155  16      CentOS 6.5 jdk-8u221, hadoop-2.9.2, zookeeper-3.4.14  ResourceManager, NameNode, zkfc, ZooKeeper
    6   10.110.172.156  8       CentOS 6.5 jdk-8u221, zookeeper-3.4.14                ZooKeeper
    • Directory plan
    [root@hnxxzxfzjz001 /]# tree /data
    /data
    ├── cloud  # component install directory
    │   ├── hadoop
    │   ├── jdk
    │   └── zookeeper
    ├── soft   # package staging area
    └── work   # component working/data directory
        ├── hadoop
        └── zookeeper
    
    7 directories, 0 files
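The layout above can be created in one go on every node. A minimal sketch; `BASE` defaults to a throwaway temp directory so it is safe to try anywhere, and would be `/data` on a real node:

```shell
#!/usr/bin/env bash
# Create the install/package/work layout shown above.
# BASE defaults to a temp dir for safe experimentation; use BASE=/data on a node.
BASE="${BASE:-$(mktemp -d)}"

for d in cloud/hadoop cloud/jdk cloud/zookeeper soft work/hadoop work/zookeeper; do
  mkdir -p "$BASE/$d"
done

# `tree` may not be installed everywhere; find shows the same structure
find "$BASE" -mindepth 1 -type d | sort
```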
    

    2. Installation and deployment

    • Host initialization (system settings, passwordless SSH)
    #Remove the stock, older JDKs
    [root@hnxxzxfzjz001 cloud]# rpm -qa | grep java
    tzdata-java-2013g-1.el6.noarch
    java-1.7.0-openjdk-1.7.0.45-2.4.3.3.el6.x86_64
    java-1.6.0-openjdk-1.6.0.0-1.66.1.13.0.el6.x86_64
    [root@hnxxzxfzjz001 cloud]# rpm -e java-1.7.0-openjdk-1.7.0.45-2.4.3.3.el6.x86_64
    [root@hnxxzxfzjz001 cloud]# rpm -e java-1.6.0-openjdk-1.6.0.0-1.66.1.13.0.el6.x86_64
    [root@hnxxzxfzjz001 cloud]# rpm -e tzdata-java-2013g-1.el6.noarch
    #Sync the system clock
    [root@hnxxzxfzjz001 soft]# ntpdate 120.25.115.20
    #Configure this on every host
    [root@hnxxzxfzjz001 soft]# cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    10.110.172.151 hnxxzxfzjz001
    10.110.172.152 hnxxzxfzjz002
    10.110.172.153 hnxxzxfzjz003
    10.110.172.154 hnxxzxfzjz004
    10.110.172.155 hnxxzxfzjz005
    10.110.172.156 hnxxzxfzjz006
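Rather than editing /etc/hosts six times by hand, the entries can be kept in one file and appended on each node. A sketch; the append step itself is shown as a comment because it needs root on every host:

```shell
#!/usr/bin/env bash
# Write the cluster name/IP map once; append it to /etc/hosts on each node.
cat > cluster-hosts.txt <<'EOF'
10.110.172.151 hnxxzxfzjz001
10.110.172.152 hnxxzxfzjz002
10.110.172.153 hnxxzxfzjz003
10.110.172.154 hnxxzxfzjz004
10.110.172.155 hnxxzxfzjz005
10.110.172.156 hnxxzxfzjz006
EOF

# On each host (as root):  cat cluster-hosts.txt >> /etc/hosts
wc -l < cluster-hosts.txt
```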
    #Disable SELinux: change SELINUX=enforcing to SELINUX=disabled, then turn it off for the running session
    [root@hnxxzxfzjz001 ~]#vim /etc/sysconfig/selinux
    [root@hnxxzxfzjz001 ~]# setenforce 0
    #Check firewall status
    [root@hnxxzxfzjz001 ~]# service iptables status
    #Stop the firewall
    [root@hnxxzxfzjz001 ~]# service iptables stop
    #Keep the firewall off across reboots
    [root@hnxxzxfzjz001 ~]# chkconfig iptables off
    
    • Passwordless SSH (this can also be automated with scripts, or handled by a deployment tool such as k8s)
    #Generate a key pair (on every host)
    [root@hnxxzxfzjz003 ~]#  ssh-keygen
    Generating public/private rsa key pair.
    Enter file in which to save the key (/root/.ssh/id_rsa): 
    Created directory '/root/.ssh'.
    Enter passphrase (empty for no passphrase): 
    Enter same passphrase again: 
    Your identification has been saved in /root/.ssh/id_rsa.
    Your public key has been saved in /root/.ssh/id_rsa.pub.
    The key fingerprint is:
    78:6f:f6:92:51:72:7f:4b:a2:df:2e:63:b7:43:11:92 root@hnxxzxfzjz003
    The key's randomart image is:
    +--[ RSA 2048]----+
    |              .  |
    |             E . |
    |              . .|
    |       .  . o  . |
    |      . S  + .  .|
    |       . ..  ..o.|
    |          +o. +..|
    |         ooo +.+ |
    |           .+.=+o|
    +-----------------+
    
    #Copy every host's public key to one collector machine (here 10.110.172.151); run this on every host, including the collector itself
    [root@hnxxzxfzjz003 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@10.110.172.151
    #Distribute the combined key file ~/.ssh/authorized_keys from the collector to every other host
    [root@hnxxzxfzjz001 ~]#scp ~/.ssh/authorized_keys root@10.110.172.155:~/.ssh/
    #Verify passwordless SSH
    [root@hnxxzxfzjz001 ~]# ssh  10.110.172.153
    Last login: Tue Aug 27 02:02:28 2019 from hnxxzxfzjz001
    [root@hnxxzxfzjz003 ~]# exit
    [root@hnxxzxfzjz001 ~]# 
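The whole key-exchange round (every host pushes its key to 151, then 151 fans the merged authorized_keys back out) can be scripted. The sketch below is a dry run by default and only prints the commands it would execute; setting DRY_RUN=0 on a real cluster would actually run them (you would still be prompted for passwords on the first round):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the key-distribution round described above.
DRY_RUN="${DRY_RUN:-1}"
hub="10.110.172.151"   # the host that collects all public keys
hosts=(10.110.172.151 10.110.172.152 10.110.172.153
       10.110.172.154 10.110.172.155 10.110.172.156)

run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# 1) every host (including the hub itself) pushes its key to the hub
for h in "${hosts[@]}"; do
  run ssh "root@$h" "ssh-copy-id -i ~/.ssh/id_rsa.pub root@$hub"
done
# 2) the hub fans the combined authorized_keys back out
for h in "${hosts[@]}"; do
  run scp "$HOME/.ssh/authorized_keys" "root@$h:~/.ssh/"
done
```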
    
    • Software packages
    [root@hnxxzxfzjz001 soft]# ll
    total 585180
    -rw-r--r--. 1 root root 366447449 Aug 26 15:04 hadoop-2.9.2.tar.gz
    -rw-r--r--. 1 root root 195094741 Aug 26 16:16 jdk-8u221-linux-x64.tar.gz
    -rw-r--r--. 1 root root  37676320 Aug 26 14:54 zookeeper-3.4.14.tar.gz
    
    • JDK installation (all machines)
    [root@hnxxzxfzjz001 cloud]# cp ../soft/jdk-8u221-linux-x64.tar.gz .
    [root@hnxxzxfzjz001 cloud]# tar -zxvf  jdk-8u221-linux-x64.tar.gz
    [root@hnxxzxfzjz001 cloud]# mv jdk1.8.0_221/ jdk
    
    • Environment variables (e.g. appended to /etc/profile on every host)
    export JAVA_HOME=/data/cloud/jdk
    export HADOOP_HOME=/data/cloud/hadoop
    export ZK_HOME=/data/cloud/zookeeper
    export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/sbin:$ZK_HOME/bin:$HADOOP_HOME/bin
    
    • ZooKeeper installation (hosts 154, 155, 156)
    [root@hnxxzxfzjz004 cloud]# cp ../soft/zookeeper-3.4.14.tar.gz .
    [root@hnxxzxfzjz004 cloud]# tar -zxvf   zookeeper-3.4.14.tar.gz
    [root@hnxxzxfzjz004 cloud]# mv zookeeper-3.4.14/ zookeeper
    #Configuration file
    [root@hnxxzxfzjz004 cloud]# cd /data/cloud/zookeeper/conf
    [root@hnxxzxfzjz004 conf]#  mv zoo_sample.cfg  zoo.cfg
    [root@hnxxzxfzjz004 conf]#  cat  zoo.cfg
    tickTime=2000
    # The number of ticks that the initial
    # synchronization phase can take
    initLimit=10
    # The number of ticks that can pass between
    # sending a request and getting an acknowledgement
    syncLimit=5
    # the directory where the snapshot is stored.
    # do not use /tmp for storage, /tmp here is just
    # example sakes.
    dataDir=/data/work/zookeeper/data
    # the port at which the clients will connect
    clientPort=2181
    # the maximum number of client connections.
    # increase this if you need to handle more clients
    #maxClientCnxns=60
    #
    # Be sure to read the maintenance section of the
    # administrator guide before turning on autopurge.
    #
    # http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
    #
    # The number of snapshots to retain in dataDir
    #autopurge.snapRetainCount=3
    # Purge task interval in hours
    # Set to "0" to disable auto purge feature
    #autopurge.purgeInterval=1
    server.1=10.110.172.154:2888:3888
    server.2=10.110.172.155:2888:3888
    server.3=10.110.172.156:2888:3888
    # Create the myid file (the id must match the server.N lines above: "1" on 154, "2" on 155, "3" on 156)
    [root@hnxxzxfzjz004 conf]# mkdir -p /data/work/zookeeper/data
    [root@hnxxzxfzjz004 conf]# cd  /data/work/zookeeper/data
    [root@hnxxzxfzjz004 data]# echo "1" > myid
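Hand-writing a different myid on each node is an easy place to slip up; the id can instead be derived from the server.N lines of zoo.cfg. A sketch under stated assumptions: it writes a small sample zoo.cfg so it can run standalone, and MY_IP stands in for the node's own address (normally detected with something like `hostname -i`):

```shell
#!/usr/bin/env bash
# Derive this node's myid from zoo.cfg so the two can never disagree.
# ZOO_CFG would be /data/cloud/zookeeper/conf/zoo.cfg on a real node.
ZOO_CFG="${ZOO_CFG:-./zoo.cfg.sample}"
MY_IP="${MY_IP:-10.110.172.155}"

# Demo config so the sketch is self-contained (drop this block on a real node).
cat > "$ZOO_CFG" <<'EOF'
server.1=10.110.172.154:2888:3888
server.2=10.110.172.155:2888:3888
server.3=10.110.172.156:2888:3888
EOF

# Pull N out of "server.N=<MY_IP>:2888:3888"
myid=$(sed -n "s/^server\.\([0-9][0-9]*\)=${MY_IP}:.*/\1/p" "$ZOO_CFG")
echo "myid for $MY_IP is: $myid"
# On a real node:  mkdir -p /data/work/zookeeper/data
#                  echo "$myid" > /data/work/zookeeper/data/myid
```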
    #For convenience, add ZK_HOME to the environment (already covered in the environment-variable step above)
    # export ZK_HOME=/data/cloud/zookeeper
    # export PATH=$PATH:$ZK_HOME/bin
    #Start ZooKeeper (on each of the three ZooKeeper nodes)
    [root@hnxxzxfzjz006 ~]# zkServer.sh start
    ZooKeeper JMX enabled by default
    Using config: /data/cloud/zookeeper/bin/../conf/zoo.cfg
    Starting zookeeper ... STARTED
    #154 and 155 report Mode: follower
    [root@hnxxzxfzjz006 ~]# zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /data/cloud/zookeeper/bin/../conf/zoo.cfg
    Mode: leader
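A healthy three-node ensemble shows exactly one leader and two followers. On a live cluster you would loop `ssh $h zkServer.sh status` over the hosts; the parsing step is sketched below against a captured sample, since it needs no running ensemble:

```shell
#!/usr/bin/env bash
# Extract the role from `zkServer.sh status` output.
zk_mode() { sed -n 's/^Mode: \(.*\)$/\1/p'; }

sample='ZooKeeper JMX enabled by default
Using config: /data/cloud/zookeeper/bin/../conf/zoo.cfg
Mode: leader'

mode=$(printf '%s\n' "$sample" | zk_mode)
echo "$mode"
# Live check (sketch): for h in 154 155 156; do ssh 10.110.172.$h zkServer.sh status; done
```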
    
    • Hadoop installation and configuration
      Append the following to /data/cloud/hadoop/etc/hadoop/hadoop-env.sh:
    export JAVA_HOME=/data/cloud/jdk
    export HADOOP_NAMENODE_OPTS="-XX:+UseParallelGC"
    export HADOOP_PID_DIR=/data/work/hadoop
    export HADOOP_LOG_DIR=/data/work/hadoop/logs
    

    /data/cloud/hadoop/etc/hadoop/yarn-env.sh

    export JAVA_HOME=/data/cloud/jdk
    

    /data/cloud/hadoop/etc/hadoop/hdfs-site.xml

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <!--
      Licensed under the Apache License, Version 2.0 (the "License");
      you may not use this file except in compliance with the License.
      You may obtain a copy of the License at
    
        http://www.apache.org/licenses/LICENSE-2.0
    
      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
      See the License for the specific language governing permissions and
      limitations under the License. See accompanying LICENSE file.
    -->
    
    <!-- Put site-specific property overrides in this file. -->
    
    <configuration>
    <property>
       <name>dfs.nameservices</name>
       <value>hnqx</value>
    </property>
    <property>
       <name>dfs.ha.namenodes.hnqx</name>
       <value>nn1,nn2</value>
    </property>
    <property>
       <name>dfs.namenode.rpc-address.hnqx.nn1</name>
       <value>hnxxzxfzjz004:8020</value>
    </property>
    <property>
       <name>dfs.namenode.rpc-address.hnqx.nn2</name>
       <value>hnxxzxfzjz005:8020</value>
    </property>
    <property>
       <name>dfs.namenode.http-address.hnqx.nn1</name>
       <value>hnxxzxfzjz004:50070</value>
    </property>
    <property>
       <name>dfs.namenode.http-address.hnqx.nn2</name>
       <value>hnxxzxfzjz005:50070</value>
    </property>
    <property>
       <name>dfs.namenode.shared.edits.dir</name>
       <value>qjournal://hnxxzxfzjz001:8485;hnxxzxfzjz002:8485;hnxxzxfzjz003:8485/hnqx</value>
    </property>
    <property>
       <name>dfs.client.failover.proxy.provider.hnqx</name>
       <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
       <name>dfs.ha.fencing.ssh.private-key-files</name>
   <value>/root/.ssh/id_rsa</value>
    </property>
    <property>
       <name>dfs.ha.automatic-failover.enabled</name>
       <value>true</value>
    </property>
    <property>
       <name>dfs.ha.fencing.ssh.connect-timeout</name>
       <value>30000</value>
    </property>
    <property>
       <name>dfs.journalnode.edits.dir</name>
       <value>/data/work/hadoop/journaldata</value>
    </property>
    <property>
       <name>dfs.namenode.name.dir</name>
       <value>/data/work/hadoop/dfs/name</value>
       <description>Path on the local filesystem where theNameNode stores the namespace and transactions logs persistently.</description>
    </property>
    <property>
       <name>dfs.datanode.data.dir</name>
       <value>/data/work/hadoop/dfs/data</value>
       <description>Comma separated list of paths on the localfilesystem of a DataNode where it should store its blocks.</description>
    </property>
    <property>
       <name>dfs.replication</name>
       <value>2</value>
    </property>
    <property>
       <name>dfs.ha.fencing.methods</name>
       <value>
       sshfence
       shell(/bin/true)
       </value>
    </property>
    </configuration>
    

    /data/cloud/hadoop/etc/hadoop/core-site.xml

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    
    <configuration>
    <!-- Default filesystem: the hnqx nameservice defined in hdfs-site.xml -->
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://hnqx</value>
    </property>
    <property>
      <name>io.file.buffer.size</name>
      <value>131072</value>
    </property>
    <!-- Hadoop temp directory -->
    <property>
      <name>hadoop.tmp.dir</name>
      <value>/data/work/hadoop/tmp</value>
    </property>
    <!-- ZooKeeper quorum used by ZKFC for automatic failover -->
    <property>
      <name>ha.zookeeper.quorum</name>
     <value>10.110.172.154:2181,10.110.172.155:2181,10.110.172.156:2181</value>
    </property>
    </configuration>
    

    /data/cloud/hadoop/etc/hadoop/mapred-site.xml

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    
    <configuration>
    <property>
      <name>mapreduce.framework.name</name>
       <value>yarn</value>
    </property>
    <property>
       <name>mapreduce.application.classpath</name>
       <value>/data/cloud/hadoop/share/hadoop/mapreduce/*, /data/cloud/hadoop/share/hadoop/mapreduce/lib/*</value>
    </property>
    </configuration>
    

    /data/cloud/hadoop/etc/hadoop/yarn-site.xml

    <?xml version="1.0"?>
    <configuration>
    
    <!-- Enable ResourceManager HA -->
    <property>
      <name>yarn.resourcemanager.ha.enabled</name>
      <value>true</value>
    </property>
    <!-- ResourceManager cluster id -->
    <property>
      <name>yarn.resourcemanager.cluster-id</name>
      <value>yrc</value>
    </property>
    <!-- Logical ids of the two ResourceManagers -->
    <property>
      <name>yarn.resourcemanager.ha.rm-ids</name>
      <value>rm1,rm2</value>
    </property>
    <!-- Host of each ResourceManager -->
    <property>
      <name>yarn.resourcemanager.hostname.rm1</name>
      <value>hnxxzxfzjz004</value>
    </property>
    <property>
      <name>yarn.resourcemanager.hostname.rm2</name>
      <value>hnxxzxfzjz005</value>
    </property>
    <!-- Web UI address and port of rm1 and rm2; job status is viewable here -->
    <property>
      <name>yarn.resourcemanager.webapp.address.rm1</name>
      <value>hnxxzxfzjz004:8088</value>
    </property>
    <property>
      <name>yarn.resourcemanager.webapp.address.rm2</name>
      <value>hnxxzxfzjz005:8088</value>
    </property>
    <!-- ZooKeeper ensemble address -->
    <property>
      <name>yarn.resourcemanager.zk-address</name>
      <value>hnxxzxfzjz004:2181,hnxxzxfzjz005:2181,hnxxzxfzjz006:2181</value>
    </property>
    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
    </property>
    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>2000</value>
    </property>
    <property>
      <name>yarn.nodemanager.resource.cpu-vcores</name>
      <value>1</value>
    </property>
    <property>
      <name>yarn.scheduler.maximum-allocation-mb</name>
      <value>2000</value>
    </property>
    <property>
      <name>yarn.scheduler.maximum-allocation-vcores</name>
      <value>2</value>
    </property>
    </configuration>
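Hand-edited *-site.xml files are a common source of startup failures (the doubled `dfs.ha.fencing.methods` property in hdfs-site.xml above is a typical hazard). A quick well-formedness pass catches XML-level mistakes before the daemons do. The sketch below validates a throwaway demo file and assumes python3 is available; on a node you would glob /data/cloud/hadoop/etc/hadoop/*-site.xml instead:

```shell
#!/usr/bin/env bash
# Check that a Hadoop config file is well-formed XML (python3 stdlib only).
check_xml() {
  if python3 -c 'import sys, xml.dom.minidom as m; m.parse(sys.argv[1])' "$1" 2>/dev/null; then
    echo "OK: $1"
  else
    echo "BROKEN: $1"
  fi
}

# Throwaway demo file; on a node loop over $HADOOP_HOME/etc/hadoop/*-site.xml.
cat > demo-site.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
EOF

check_xml demo-site.xml
```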
    
    • Hadoop initialization and first start
    #Start a JournalNode on every quorum host listed in dfs.namenode.shared.edits.dir
    [root@hnxxzxfzjz004 /]# hadoop-daemon.sh start journalnode
    #Format HDFS (once, on the first NameNode, after the JournalNodes are up)
    [root@hnxxzxfzjz004 /]#  hdfs namenode -format hnqx
    #Start the freshly formatted NameNode
    [root@hnxxzxfzjz004 /]#  hadoop-daemon.sh start namenode
    #On 10.110.172.155, pull the formatted metadata from the first NameNode, then return to 10.110.172.154
    [root@hnxxzxfzjz005 /]#  hdfs namenode -bootstrapStandby 
    #Initialize the HA znode in ZooKeeper
    [root@hnxxzxfzjz004 /]#  hdfs zkfc -formatZK
    ..... (output truncated)
    19/08/27 16:34:48 INFO ha.ActiveStandbyElector: Session connected.
    19/08/27 16:34:48 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/hnqx in ZK.
    19/08/27 16:34:48 INFO zookeeper.ZooKeeper: Session: 0x2000001ce8d0000 closed
    19/08/27 16:34:48 INFO zookeeper.ClientCnxn: EventThread shut down
    19/08/27 16:34:48 INFO tools.DFSZKFailoverController: SHUTDOWN_MSG: 
    /************************************************************
    SHUTDOWN_MSG: Shutting down DFSZKFailoverController at hnxxzxfzjz004/10.110.172.154
    ************************************************************/
    #Stop every component except ZooKeeper, then bring HDFS up in one go with start-dfs.sh
    [root@hnxxzxfzjz004 bin]# start-dfs.sh
    19/08/27 16:35:34 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    Starting namenodes on [hnxxzxfzjz004 hnxxzxfzjz005]
    hnxxzxfzjz005: starting namenode, logging to /data/work/hadoop/logs/hadoop-root-namenode-hnxxzxfzjz005.out
    hnxxzxfzjz004: starting namenode, logging to /data/work/hadoop/logs/hadoop-root-namenode-hnxxzxfzjz004.out
    hnxxzxfzjz002: starting datanode, logging to /data/work/hadoop/logs/hadoop-root-datanode-hnxxzxfzjz002.out
    hnxxzxfzjz003: starting datanode, logging to /data/work/hadoop/logs/hadoop-root-datanode-hnxxzxfzjz003.out
    hnxxzxfzjz001: starting datanode, logging to /data/work/hadoop/logs/hadoop-root-datanode-hnxxzxfzjz001.out
    Starting journal nodes [hnxxzxfzjz004 hnxxzxfzjz005 hnxxzxfzjz002]
    hnxxzxfzjz005: starting journalnode, logging to /data/work/hadoop/logs/hadoop-root-journalnode-hnxxzxfzjz005.out
    hnxxzxfzjz004: starting journalnode, logging to /data/work/hadoop/logs/hadoop-root-journalnode-hnxxzxfzjz004.out
    hnxxzxfzjz002: starting journalnode, logging to /data/work/hadoop/logs/hadoop-root-journalnode-hnxxzxfzjz002.out
    19/08/27 16:35:51 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    Starting ZK Failover Controllers on NN hosts [hnxxzxfzjz004 hnxxzxfzjz005]
    hnxxzxfzjz005: starting zkfc, logging to /data/work/hadoop/logs/hadoop-root-zkfc-hnxxzxfzjz005.out
    hnxxzxfzjz004: starting zkfc, logging to /data/work/hadoop/logs/hadoop-root-zkfc-hnxxzxfzjz004.out
    #Process check
    [root@hnxxzxfzjz004 bin]# jps
    5488 DFSZKFailoverController
    5760 Jps
    28913 QuorumPeerMain
    5045 NameNode
    5289 JournalNode
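Checks like this are easy to automate: compare the `jps` listing against the daemons each role is supposed to run. A sketch of the comparison, fed with the output captured above; on a live host you would pass `"$(jps)"` instead of the sample:

```shell
#!/usr/bin/env bash
# Print every expected daemon missing from a jps listing;
# empty output means the host looks healthy.
missing_daemons() {
  jps_out="$1"; shift
  for d in "$@"; do
    printf '%s\n' "$jps_out" | grep -qw "$d" || echo "$d"
  done
}

# jps output captured on the NameNode host above.
sample='5488 DFSZKFailoverController
5760 Jps
28913 QuorumPeerMain
5045 NameNode
5289 JournalNode'

missing=$(missing_daemons "$sample" NameNode DFSZKFailoverController QuorumPeerMain)
[ -z "$missing" ] && echo "all expected daemons present"
```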
    
    • Web UI check
      (Screenshots: active and standby NameNode pages, the live DataNode
      list and usage, and the NameNode overview.)
    • File upload test
    [root@hnxxzxfzjz004 bin]# echo "hdfs file test" > hnqx.txt
    [root@hnxxzxfzjz004 bin]# ./hadoop fs -put hnqx.txt /hnqx
    19/08/27 19:08:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    put: `/hnqx': File exists
    [root@hnxxzxfzjz004 bin]# ./hadoop fs -ls /hnqx
    19/08/27 19:10:51 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    -rw-r--r--   2 root supergroup         15 2019-08-27 19:06 /hnqx
    

    (Screenshot: the uploaded file listed in the NameNode browser UI.)
    • Starting YARN
    [root@hnxxzxfzjz004 hadoop]# start-yarn.sh
    starting yarn daemons
    starting resourcemanager, logging to /data/cloud/hadoop/logs/yarn-root-resourcemanager-hnxxzxfzjz004.out
    hnxxzxfzjz003: starting nodemanager, logging to /data/cloud/hadoop/logs/yarn-root-nodemanager-hnxxzxfzjz003.out
    hnxxzxfzjz001: starting nodemanager, logging to /data/cloud/hadoop/logs/yarn-root-nodemanager-hnxxzxfzjz001.out
    hnxxzxfzjz002: starting nodemanager, logging to /data/cloud/hadoop/logs/yarn-root-nodemanager-hnxxzxfzjz002.out
    #Start the second ResourceManager by hand (start-yarn.sh only starts the local one; run this on the other RM host, 10.110.172.155)
    [root@hnxxzxfzjz004 hadoop]# yarn-daemon.sh start resourcemanager
    
    • Cluster state
      (Screenshots: ResourceManager UI, active on 10.110.172.154 and
      standby on 10.110.172.155.)
    • WordCount test
    #Working directory: /data/cloud/hadoop/share/hadoop/mapreduce
    [root@hnxxzxfzjz004 mapreduce]#  hadoop fs -put /etc/profile /profile
    [root@hnxxzxfzjz004 mapreduce]# hadoop jar hadoop-mapreduce-examples-2.9.2.jar wordcount /profile /result.txt
    19/08/27 20:39:07 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    19/08/27 20:39:09 INFO input.FileInputFormat: Total input files to process : 1
    19/08/27 20:39:09 INFO mapreduce.JobSubmitter: number of splits:1
    19/08/27 20:39:09 INFO Configuration.deprecation: yarn.resourcemanager.zk-address is deprecated. Instead, use hadoop.zk.address
    19/08/27 20:39:09 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
    19/08/27 20:39:09 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1566909463106_0001
    19/08/27 20:39:10 INFO impl.YarnClientImpl: Submitted application application_1566909463106_0001
    19/08/27 20:39:10 INFO mapreduce.Job: The url to track the job: http://hnxxzxfzjz004:8088/proxy/application_1566909463106_0001/
    19/08/27 20:39:10 INFO mapreduce.Job: Running job: job_1566909463106_0001
    19/08/27 20:39:23 INFO mapreduce.Job: Job job_1566909463106_0001 running in uber mode : false
    19/08/27 20:39:23 INFO mapreduce.Job:  map 0% reduce 0%
    19/08/27 20:39:35 INFO mapreduce.Job:  map 100% reduce 0%
    19/08/27 20:39:43 INFO mapreduce.Job:  map 100% reduce 100%
    19/08/27 20:39:43 INFO mapreduce.Job: Job job_1566909463106_0001 completed successfully
    19/08/27 20:39:43 INFO mapreduce.Job: Counters: 49
            File System Counters
                    FILE: Number of bytes read=2243
                    FILE: Number of bytes written=407651
                    FILE: Number of read operations=0
                    FILE: Number of large read operations=0
                    FILE: Number of write operations=0
                    HDFS: Number of bytes read=2069
                    HDFS: Number of bytes written=1598
                    HDFS: Number of read operations=6
                    HDFS: Number of large read operations=0
                    HDFS: Number of write operations=2
    

    (Screenshots: compute-node details, the WordCount result files, and
    the WordCount application in the YARN UI.)

    3. Access points

    HDFS (at the time of writing, 155 held the active role):

    active:  http://10.110.172.155:50070/
    standby: http://10.110.172.154:50070

    YARN ResourceManager:

    active:  http://10.110.172.154:8088
    standby: http://10.110.172.155:8088
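Which NameNode is currently active can be checked with `hdfs haadmin -getServiceState nn1` (and nn2). The state is also exposed over HTTP at `/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus`; the parsing is sketched below against a sample response shaped like that endpoint's JSON (hedged: only a subset of fields, no live cluster needed):

```shell
#!/usr/bin/env bash
# Pull the HA state ("active"/"standby") out of the NameNodeStatus JMX JSON.
nn_state() { sed -n 's/.*"State" *: *"\([a-z]*\)".*/\1/p'; }

# Sample shaped like the /jmx NameNodeStatus response (field subset only).
sample='{"beans":[{"name":"Hadoop:service=NameNode,name=NameNodeStatus","State":"active","HostAndPort":"hnxxzxfzjz005:8020"}]}'

state=$(printf '%s\n' "$sample" | nn_state)
echo "$state"
# Live check (sketch):
#   curl -s 'http://10.110.172.155:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus' | nn_state
```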

    Original article: https://www.haomeiwen.com/subject/yxkrectx.html