Setting Up a Hadoop HA Environment

Author: 上杉丶零 | Published 2019-01-15 17:00

Prerequisite: a fully distributed Hadoop environment has already been set up.

|             | node01        | node02        | node03        | node04     |
|-------------|---------------|---------------|---------------|------------|
| NameNode    | NameNode01    | NameNode02    | NameNode03    |            |
| DataNode    |               | DataNode01    | DataNode02    | DataNode03 |
| JournalNode | JournalNode01 | JournalNode02 | JournalNode03 |            |
  1. Configure Hadoop on node01, node02, node03, and node04

On node01, modify /opt/hadoop/hadoop-3.1.1/etc/hadoop/core-site.xml:
vim /opt/hadoop/hadoop-3.1.1/etc/hadoop/core-site.xml
Add:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://manualHACluster</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop/data/tmp/manual_ha</value>
  </property>
</configuration>

On node01, modify /opt/hadoop/hadoop-3.1.1/etc/hadoop/hdfs-site.xml:
vim /opt/hadoop/hadoop-3.1.1/etc/hadoop/hdfs-site.xml
Add:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.nameservices</name>
    <value>manualHACluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.manualHACluster</name>
    <value>NN01,NN02,NN03</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.manualHACluster.NN01</name>
    <value>node01:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.manualHACluster.NN02</name>
    <value>node02:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.manualHACluster.NN03</name>
    <value>node03:8020</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://node01:8485;node02:8485;node03:8485/manualHACluster</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.manualHACluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/opt/hadoop/data/tmp/manual_ha</value>
  </property>
</configuration>
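Hand-editing these XML files makes it easy to drop a closing </property> tag, which Hadoop only reports as a cryptic parse error at startup. As a sanity check (a sketch, not part of the original article), a few lines of Python can parse a *-site.xml file and list the property names it found:

```python
import xml.etree.ElementTree as ET

def list_properties(xml_text):
    """Parse a Hadoop *-site.xml document and return its property names.

    Raises xml.etree.ElementTree.ParseError on mismatched tags,
    which is exactly the mistake hand-editing tends to introduce.
    """
    root = ET.fromstring(xml_text)
    return [p.findtext("name") for p in root.findall("property")]

if __name__ == "__main__":
    sample = """<configuration>
      <property>
        <name>dfs.nameservices</name>
        <value>manualHACluster</value>
      </property>
    </configuration>"""
    print(list_properties(sample))  # ['dfs.nameservices']
```

Running it against the file (`list_properties(open(path).read())`) before distributing the configs catches malformed XML on node01 instead of on all four nodes.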

Copy /opt/hadoop/hadoop-3.1.1/etc/hadoop/core-site.xml and /opt/hadoop/hadoop-3.1.1/etc/hadoop/hdfs-site.xml from node01 to node02, node03, and node04:
for host in node02 node03 node04; do
  scp /opt/hadoop/hadoop-3.1.1/etc/hadoop/core-site.xml \
      /opt/hadoop/hadoop-3.1.1/etc/hadoop/hdfs-site.xml \
      "$host":/opt/hadoop/hadoop-3.1.1/etc/hadoop/
done

  2. Configure environment variables on node01, node02, and node03

On node01, modify /etc/profile:
vim /etc/profile
Add:

export HDFS_JOURNALNODE_USER=root

Copy /etc/profile from node01 to node02 and node03:
scp /etc/profile node02:/etc/ && scp /etc/profile node03:/etc/
On node01, node02, and node03, run:
. /etc/profile

  3. Start the JournalNodes

On node01, node02, and node03, run:
hdfs --daemon start journalnode

  4. Format the NameNode

On node01, run:
hdfs namenode -format

  5. Start the NameNodes

On node01, run:
hdfs --daemon start namenode
On node02 and node03, run:
hdfs namenode -bootstrapStandby

  6. Start HDFS

On any one of node01, node02, node03, or node04, run:
start-dfs.sh
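This setup is manual HA (no ZooKeeper/ZKFC), so after start-dfs.sh every NameNode comes up in standby state and one of them has to be promoted by hand with hdfs haadmin. A sketch, using the NN01/NN02/NN03 service IDs configured in hdfs-site.xml above:

```shell
# Check the state of each NameNode; all should report "standby" at first
hdfs haadmin -getServiceState NN01
hdfs haadmin -getServiceState NN02
hdfs haadmin -getServiceState NN03

# Promote NN01 to active. The sshfence method configured above is what a
# failover uses to fence a NameNode that was previously active.
hdfs haadmin -transitionToActive NN01
```

Without this step the cluster has no active NameNode and client operations will fail.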

  7. Check the processes

On node01, node02, node03, and node04, run:
jps
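Instead of logging in to each machine, the check can be run from node01 over the same passwordless SSH already required by sshfence. The expected roles per node (based on the table at the top, not output captured from the article) are noted in the comments:

```shell
# node01: NameNode, JournalNode
# node02: NameNode, DataNode, JournalNode
# node03: NameNode, DataNode, JournalNode
# node04: DataNode
for host in node01 node02 node03 node04; do
  echo "== $host =="
  ssh "$host" jps
done
```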

  8. Visit the web UIs

NameNode01 (node01): http://192.168.163.191:9870
NameNode02 (node02): http://192.168.163.192:9870
NameNode03 (node03): http://192.168.163.193:9870
DataNode01 (node02): http://192.168.163.192:9864
DataNode02 (node03): http://192.168.163.193:9864
DataNode03 (node04): http://192.168.163.194:9864
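The active/standby state can also be confirmed without a browser. Each NameNode's web port exposes a JMX endpoint; the query below assumes the standard NameNodeStatus bean and its State field:

```shell
for ip in 192.168.163.191 192.168.163.192 192.168.163.193; do
  echo "== $ip =="
  curl -s "http://$ip:9870/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus" \
    | grep '"State"'
done
```

Exactly one of the three should report "active"; the others should report "standby".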

Original article: https://www.haomeiwen.com/subject/ehdydqtx.html