Hadoop 3.3 HDFS HA Deployment

Author: 小兽S | Published 2020-12-10 16:28

    1 Prerequisites

    • Set hostnames and update /etc/hosts on every node
    • Passwordless SSH (mutual trust) between all nodes (see the sketch after this list)
    • JDK 1.8+
    • A running Zookeeper cluster
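
    A minimal sketch of the SSH-trust step, assuming the root account and the five node IPs used later in this article; the key it creates, /root/.ssh/id_rsa, is the same one referenced below by dfs.ha.fencing.ssh.private-key-files:

    # generate a key pair once on each node (empty passphrase for automation)
    ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa

    # push the public key to every node, including this one
    for h in 192.216.105.242 192.216.105.243 192.216.105.244 192.216.105.245 192.216.105.246; do
        ssh-copy-id root@$h
    done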

    2 Cluster Configuration

    Configuration reference: HDFS High Availability Using the Quorum Journal Manager.
    All Hadoop cluster configuration lives under {hadoop_home}/etc/hadoop; an HDFS HA deployment requires changes to the following files:

    • core-site.xml
    • hadoop-env.sh
    • hdfs-site.xml
    • workers

    2.1 Configure core-site.xml

    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://mycluster</value>
              <description>The HDFS URI, scheme://nameservice-id; in HA mode this is the logical cluster name rather than a single namenode host:port</description>
        </property>
        <property>
            <name>hadoop.tmp.dir</name>
            <value>/opt/tomas/data/hadoop/tmp</value>
            <description>Local Hadoop temporary directory on the namenode</description>
        </property>
        <property>
            <name>ha.zookeeper.quorum</name>
            <value>192.216.105.249:2181,192.216.105.250:2181,192.216.105.251:2181</value>
            <description>Zookeeper quorum addresses</description>
        </property>
    </configuration>
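
    Once core-site.xml is in place, one quick way to confirm the client resolves the logical nameservice is hdfs getconf (run from {hadoop_home}):

    bin/hdfs getconf -confKey fs.defaultFS    # should print hdfs://mycluster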
    

    2.2 Configure hadoop-env.sh

    This file mainly declares JAVA_HOME. Since the daemons in this example are started as root, Hadoop 3.x also requires the *_USER variables below; startup aborts with an error if they are missing:

    export JAVA_HOME=/usr/java/default
    
    export HDFS_NAMENODE_USER="root"
    export HDFS_DATANODE_USER="root"
    export HDFS_ZKFC_USER="root"
    export HDFS_JOURNALNODE_USER="root"
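
    A quick sanity check that the JAVA_HOME path declared above actually resolves to a JDK (path as in this example):

    # should print the JDK version; a "No such file" error means JAVA_HOME is wrong
    /usr/java/default/bin/java -version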
    

    2.3 Configure hdfs-site.xml

    <configuration>
      <property>
            <name>dfs.ha.automatic-failover.enabled</name>
            <value>true</value>
        </property>
        <property>
            <name>dfs.replication</name>
            <value>1</value>
        </property>
        <property>
            <name>dfs.namenode.name.dir</name>
            <value>file:/opt/data/hadoop/hdfs/name</value>
        </property>
        <property>
            <name>dfs.datanode.data.dir</name>
            <value>file:/opt/data/hadoop/hdfs/data</value>
        </property>
        <property>
            <name>dfs.nameservices</name>
            <value>mycluster</value>
        </property>
        <property>
            <name>dfs.ha.namenodes.mycluster</name>
            <value>nn1,nn2,nn3</value>
        </property>
        <property>
            <name>dfs.namenode.rpc-address.mycluster.nn1</name>
            <value>192.216.105.242:13122</value>
        </property>
        <property>
            <name>dfs.namenode.rpc-address.mycluster.nn2</name>
            <value>192.216.105.243:13122</value>
        </property>
        <property>
            <name>dfs.namenode.rpc-address.mycluster.nn3</name>
            <value>192.216.105.244:13122</value>
        </property>
        <property>
            <name>dfs.namenode.http-address.mycluster.nn1</name>
            <value>192.216.105.242:13123</value>
        </property>
        <property>
            <name>dfs.namenode.http-address.mycluster.nn2</name>
            <value>192.216.105.243:13123</value>
        </property>
        <property>
            <name>dfs.namenode.http-address.mycluster.nn3</name>
            <value>192.216.105.244:13123</value>
        </property>
        <property>
            <name>dfs.namenode.shared.edits.dir</name>
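            <!-- The journal ID after the path ("tomascluster" here) just has to be
                 identical on all NameNodes; by convention it matches the nameservice ID. -->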
            <value>qjournal://192.216.105.242:13124;192.216.105.243:13124;192.216.105.244:13124/tomascluster</value>
        </property>
        <property>
            <name>dfs.client.failover.proxy.provider.mycluster</name>
            <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
        </property>
        <property>
            <name>dfs.ha.fencing.methods</name>
            <value>sshfence</value>
        </property>
        <property>
            <name>dfs.ha.fencing.ssh.private-key-files</name>
            <value>/root/.ssh/id_rsa</value>
        </property>
        <property>
            <name>dfs.journalnode.edits.dir</name>
            <value>/opt/data/hadoop/journalnode</value>
        </property>
    </configuration>
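
    Because dfs.ha.fencing.methods is sshfence, the ZKFC must be able to reach the namenode hosts over key-based SSH; a quick check using the key and IPs from this configuration:

    # run from each namenode host; every line should print the remote hostname
    for h in 192.216.105.242 192.216.105.243 192.216.105.244; do
        ssh -i /root/.ssh/id_rsa -o BatchMode=yes root@$h hostname
    done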
    

    2.4 Configure workers

    192.216.105.242
    192.216.105.243
    192.216.105.244
    192.216.105.245
    192.216.105.246
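
    Every node must carry the same configuration; a simple distribution sketch, assuming Hadoop is installed at the same path on all hosts (run from {hadoop_home} on the node where the files were edited):

    # copy the finished config directory to the remaining nodes
    for h in 192.216.105.243 192.216.105.244 192.216.105.245 192.216.105.246; do
        rsync -av etc/hadoop/ root@$h:"$PWD"/etc/hadoop/
    done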
    

    3 Starting the Cluster

    1. Start the journalnode daemon on every journalnode host, 192.216.105.242~244 in this example:
    bin/hdfs --daemon start journalnode
    
    2. Format HDFS on any one of the namenode hosts:
    bin/hdfs namenode -format
    
    3. Start the namenode that was formatted in the previous step:
    bin/hdfs --daemon start namenode
    
    4. On each of the other two namenodes, pull over the formatted metadata:
    bin/hdfs namenode -bootstrapStandby
    
    5. Initialize the HA state znode in Zookeeper (run once, on one of the namenodes):
    bin/hdfs zkfc -formatZK
    

    6. Start the full HDFS cluster; with automatic failover enabled, this also brings up the ZKFC daemons on the namenode hosts:

    sbin/start-dfs.sh
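
    After start-dfs.sh returns, each host should be running its expected daemons; a jps spot check over all five nodes:

    # list the Hadoop daemons running on every node
    for h in 192.216.105.242 192.216.105.243 192.216.105.244 192.216.105.245 192.216.105.246; do
        echo "== $h =="; ssh root@$h jps
    done

    With this layout, 192.216.105.242~244 should each show NameNode, JournalNode, and DFSZKFailoverController, and all five hosts should show DataNode.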
    

    4 Verifying the Cluster

    Query the state of every namenode from the command line:

    bin/hdfs haadmin -getAllServiceState
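
    Illustrative output (the addresses come from the dfs.namenode.rpc-address settings; which node is active depends on the election):

    192.216.105.242:13122                              active
    192.216.105.243:13122                              standby
    192.216.105.244:13122                              standby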
    

    Open each of the three namenodes' web UIs, at the ports configured for dfs.namenode.http-address in hdfs-site.xml; one namenode should report active and the other two standby.


    (Figure: Hadoop3-HDFS-HA-validate.png)
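
    To exercise automatic failover end to end, stop the currently active namenode and watch a standby take over; a sketch using only commands already introduced above:

    # on the active namenode host: stop it, letting ZKFC promote a standby
    bin/hdfs --daemon stop namenode

    # from any node: one of the remaining namenodes should now be active
    bin/hdfs haadmin -getAllServiceState

    # restart the stopped namenode; it rejoins the cluster as a standby
    bin/hdfs --daemon start namenode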
