Hadoop HA Installation and Configuration (ZooKeeper)

Author: 大数据ZRL | Published 2021-02-02 21:00
[Figure: HDFS HA architecture based on ZooKeeper]

1. Cluster Planning

  • ZooKeeper cluster:
    192.168.157.112 (bigdata112)
    192.168.157.113 (bigdata113)
    192.168.157.114 (bigdata114)

  • Hadoop cluster:
    192.168.157.112 (bigdata112)  NameNode1  ResourceManager1  JournalNode1
    192.168.157.113 (bigdata113)  NameNode2  ResourceManager2  JournalNode2
    192.168.157.114 (bigdata114)  DataNode1  NodeManager1
    192.168.157.115 (bigdata115)  DataNode2  NodeManager2

2. Preparation

  • Install the JDK
  • Configure environment variables
  • Configure passwordless SSH login (a sketch follows this list)
  • Configure hostnames
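
    A minimal sketch of the passwordless-login step, assuming root accounts on all four nodes and hostnames resolvable via /etc/hosts:

      # on each node: generate an RSA key pair (accept the defaults)
      ssh-keygen -t rsa
      # push the public key to every node, including the local one
      for h in bigdata112 bigdata113 bigdata114 bigdata115; do
          ssh-copy-id root@$h
      done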

3. Configure ZooKeeper (install on bigdata112)

  • Configure ZooKeeper on the master node (bigdata112)
    • Edit /root/training/zookeeper-3.4.6/conf/zoo.cfg:

      dataDir=/root/training/zookeeper-3.4.6/tmp
      server.1=bigdata112:2888:3888
      server.2=bigdata113:2888:3888
      server.3=bigdata114:2888:3888
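
      The lines above are the changes; a complete zoo.cfg (assuming the defaults from the shipped zoo_sample.cfg) would look like:

        tickTime=2000
        initLimit=10
        syncLimit=5
        clientPort=2181
        dataDir=/root/training/zookeeper-3.4.6/tmp
        server.1=bigdata112:2888:3888
        server.2=bigdata113:2888:3888
        server.3=bigdata114:2888:3888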
      
    • Create the data directory and write this server's id into a myid file:
      mkdir -p /root/training/zookeeper-3.4.6/tmp
      echo 1 > /root/training/zookeeper-3.4.6/tmp/myid

    • Copy the configured ZooKeeper installation to the other nodes, then adjust each node's myid file:
      scp -r /root/training/zookeeper-3.4.6/ bigdata113:/root/training
      scp -r /root/training/zookeeper-3.4.6/ bigdata114:/root/training

    • Update myid on bigdata113 and bigdata114:

      • bigdata113: echo 2 > /root/training/zookeeper-3.4.6/tmp/myid
      • bigdata114: echo 3 > /root/training/zookeeper-3.4.6/tmp/myid
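
    A quick sanity check from bigdata112 (relies on the passwordless SSH set up earlier); each node should print the id matching its server.N line in zoo.cfg:

      for h in bigdata112 bigdata113 bigdata114; do
          echo -n "$h: "; ssh $h cat /root/training/zookeeper-3.4.6/tmp/myid
      done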

4. Install the Hadoop Cluster (on bigdata112)

  • 4.1. Edit hadoop-env.sh

    export JAVA_HOME=/root/training/jdk1.8.0_181
    
  • 4.2. Edit core-site.xml

    <configuration>
      <!-- Set the HDFS nameservice to ns1 -->
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns1</value>
      </property>
            
      <!-- Hadoop temporary directory -->
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/root/training/hadoop-2.7.3/tmp</value>
      </property>
            
      <!-- ZooKeeper quorum addresses -->
      <property>
        <name>ha.zookeeper.quorum</name>
        <value>bigdata112:2181,bigdata113:2181,bigdata114:2181</value>
      </property>
    </configuration>
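
    Because fs.defaultFS names the logical nameservice rather than a single host, clients keep working across a failover; for example:

      hdfs dfs -ls hdfs://ns1/
      hdfs dfs -ls /        # equivalent, since ns1 is the default filesystem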
    
  • 4.3. Edit hdfs-site.xml (declares the NameNodes that make up the nameservice)

    <configuration>
      <!-- Nameservice id; must match fs.defaultFS in core-site.xml -->
      <property>
        <name>dfs.nameservices</name>
        <value>ns1</value>
      </property>

      <!-- ns1 contains two NameNodes: nn1 and nn2 -->
      <property>
        <name>dfs.ha.namenodes.ns1</name>
        <value>nn1,nn2</value>
      </property>

      <!-- RPC address of nn1 -->
      <property>
        <name>dfs.namenode.rpc-address.ns1.nn1</name>
        <value>bigdata112:9000</value>
      </property>
      <!-- HTTP address of nn1 -->
      <property>
        <name>dfs.namenode.http-address.ns1.nn1</name>
        <value>bigdata112:50070</value>
      </property>

      <!-- RPC address of nn2 -->
      <property>
        <name>dfs.namenode.rpc-address.ns1.nn2</name>
        <value>bigdata113:9000</value>
      </property>
      <!-- HTTP address of nn2 -->
      <property>
        <name>dfs.namenode.http-address.ns1.nn2</name>
        <value>bigdata113:50070</value>
      </property>

      <!-- Where the NameNode edits log is stored on the JournalNodes -->
      <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://bigdata112:8485;bigdata113:8485/ns1</value>
      </property>
      <!-- Local directory where each JournalNode keeps its data -->
      <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/root/training/hadoop-2.7.3/journal</value>
      </property>

      <!-- Enable automatic NameNode failover -->
      <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
      </property>

      <!-- Proxy provider that clients use to find the active NameNode -->
      <property>
        <name>dfs.client.failover.proxy.provider.ns1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
      </property>

      <!-- Fencing methods; list one method per line -->
      <property>
        <name>dfs.ha.fencing.methods</name>
        <value>
          sshfence
          shell(/bin/true)
        </value>
      </property>

      <!-- sshfence requires passwordless SSH to the other NameNode -->
      <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
      </property>

      <!-- sshfence connection timeout (milliseconds) -->
      <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
      </property>
    </configuration>
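
    Once the file is in place, the values can be read back to catch typos (hdfs getconf is a standard client tool):

      hdfs getconf -confKey dfs.nameservices        # expect: ns1
      hdfs getconf -confKey dfs.ha.namenodes.ns1    # expect: nn1,nn2
      hdfs getconf -confKey dfs.namenode.shared.edits.dir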
    
  • 4.4. Edit mapred-site.xml

    <configuration>
      <!-- Run MapReduce on YARN -->
      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
      </property>
    </configuration>
    
  • 4.5. Edit yarn-site.xml

    <configuration>
      <!-- Enable ResourceManager HA -->
      <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
      </property>

      <!-- Cluster id shared by the RM pair -->
      <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yrc</value>
      </property>

      <!-- Logical ids of the two ResourceManagers -->
      <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
      </property>

      <!-- Hostname of each ResourceManager -->
      <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>bigdata112</value>
      </property>
      <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>bigdata113</value>
      </property>

      <!-- ZooKeeper quorum address -->
      <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>bigdata112:2181,bigdata113:2181,bigdata114:2181</value>
      </property>

      <!-- Shuffle service for MapReduce -->
      <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
      </property>
    </configuration>
    
  • 4.6. Edit slaves

    bigdata114
    bigdata115
    
  • 4.7. Copy the configured Hadoop to the other nodes:
    scp -r /root/training/hadoop-2.7.3/ root@bigdata113:/root/training/
    scp -r /root/training/hadoop-2.7.3/ root@bigdata114:/root/training/
    scp -r /root/training/hadoop-2.7.3/ root@bigdata115:/root/training/

5. Start the ZooKeeper Cluster

  • On bigdata112/113/114, run:
    zkServer.sh start
  • Verify that it started:
    jps
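
    jps should list a QuorumPeerMain process on each of the three nodes; once a quorum has formed, zkServer.sh status also reports the role (one leader, two followers in this layout):

      zkServer.sh status
      # Mode: leader      (on one node)
      # Mode: follower    (on the other two)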

6. Start the JournalNodes on bigdata112 and bigdata113

  • On bigdata112/113, run:
    hadoop-daemon.sh start journalnode
  • Verify that it started:
    jps
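
    Besides a JournalNode entry in jps, each JournalNode should be listening on the RPC port 8485 named in the qjournal URI (netstat is assumed to be available):

      netstat -tlnp | grep 8485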

7. Format HDFS (run on bigdata112)

  • 7.1. hdfs namenode -format
  • 7.2. Copy /root/training/hadoop-2.7.3/tmp to /root/training/hadoop-2.7.3/tmp on bigdata113 (one way is shown below)
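
    A straightforward way to do the copy, assuming root SSH access; alternatively, running hdfs namenode -bootstrapStandby on bigdata113 pulls the freshly formatted metadata from the active NameNode instead:

      scp -r /root/training/hadoop-2.7.3/tmp/ root@bigdata113:/root/training/hadoop-2.7.3/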

8. Format ZooKeeper

hdfs zkfc -formatZK

 Log output:
    17/07/13 00:34:33 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/ns1 in ZK.
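
    The znode it created can be inspected with the ZooKeeper CLI, assuming the quorum from step 5 is still running:

      zkCli.sh -server bigdata112:2181
      # inside the CLI:
      ls /hadoop-ha        # expect: [ns1]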

9. Start the Hadoop Cluster on bigdata112

start-all.sh

 Log output:
    Starting namenodes on [bigdata112 bigdata113]
    bigdata112: starting namenode, logging to /root/training/hadoop-2.7.3/logs/hadoop-root-namenode-bigdata112.out
    bigdata113: starting namenode, logging to /root/training/hadoop-2.7.3/logs/hadoop-root-namenode-bigdata113.out
    bigdata114: starting datanode, logging to /root/training/hadoop-2.7.3/logs/hadoop-root-datanode-bigdata114.out
    bigdata115: starting datanode, logging to /root/training/hadoop-2.7.3/logs/hadoop-root-datanode-bigdata115.out

    bigdata113: starting zkfc, logging to /root/training/hadoop-2.7.3/logs/hadoop-root-zkfc-bigdata113.out
    bigdata112: starting zkfc, logging to /root/training/hadoop-2.7.3/logs/hadoop-root-zkfc-bigdata112.out
  • The ResourceManager on bigdata113 must be started separately:
    yarn-daemon.sh start resourcemanager
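
    To confirm both HA pairs are healthy, haadmin and rmadmin report which instance is active (nn1/nn2 and rm1/rm2 are the logical ids configured above):

      hdfs haadmin -getServiceState nn1    # active or standby
      hdfs haadmin -getServiceState nn2
      yarn rmadmin -getServiceState rm1
      yarn rmadmin -getServiceState rm2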
