Hadoop 2.7.3 Deployment Notes

Author: 辉耀辉耀 | Published 2017-07-21 10:24

I pulled this together from many documents found online and finally got a working deployment; I'm writing the steps down here to make things easier for others.

Environment
Oracle VM VirtualBox

VM1: master  10.22.4.31/24  CentOS-7-1611
VM2: slave01 10.22.4.33/24  CentOS-7-1611
VM3: slave02 10.22.4.34/24  CentOS-7-1611

1. Set the hostnames (run the matching command on each node)
[root@master ~]# hostnamectl set-hostname master
[root@slave01 ~]# hostnamectl set-hostname slave01
[root@slave02 ~]# hostnamectl set-hostname slave02
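hostnamectl applies the new name immediately, but an existing shell keeps its old prompt until you log in again; a quick check on each node:
[root@master ~]# hostnamectl status
[root@master ~]# hostname
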
2. Turn off SELinux and firewalld
[root@master ~]# setenforce 0
[root@master ~]# systemctl stop firewalld
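Note that setenforce 0 and systemctl stop firewalld only last until the next reboot. A minimal sketch of making both changes persistent (assuming the same is done on all three nodes):
[root@master ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
[root@master ~]# systemctl disable firewalld
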
3. Edit the hosts file (on all three nodes)
[root@master ~]# vi /etc/hosts
10.22.4.31 master
10.22.4.33 slave01
10.22.4.34 slave02
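A quick check that the names resolve from master (the same test can be repeated on the slaves):
[root@master ~]# ping -c 1 slave01
[root@master ~]# ping -c 1 slave02
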
4. Install the JDK (JDK 1.8.0_131 here; install on all three nodes)

Download page: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

You need to tick "Accept License Agreement" before the download link works.

Linux x64: jdk-8u131-linux-x64.rpm

Upload the RPM with rz or Xftp; I also recommend WinSCP, a stable SFTP client for Windows.

[root@master ~]# rpm -ivh jdk-8u131-linux-x64.rpm

Configure the environment variables

[root@master ~]# vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.8.0_131
export JRE_HOME=/usr/java/jdk1.8.0_131/jre
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
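The new variables do not apply to the current shell until /etc/profile is re-read; a quick way to confirm the JDK is picked up:
[root@master ~]# source /etc/profile
[root@master ~]# java -version
[root@master ~]# echo $JAVA_HOME
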
5. Set up passwordless SSH (only master needs passwordless login to the slaves)
[root@master ~]# ssh-keygen -t rsa    (run on every node)
[root@master ~]# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[root@master .ssh]# ssh-copy-id root@slave01
[root@master .ssh]# ssh-copy-id root@slave02
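To confirm the key-based login works, SSH to each slave from master; neither command should ask for a password:
[root@master .ssh]# ssh slave01 hostname
[root@master .ssh]# ssh slave02 hostname
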
6. Install Hadoop (download on master only; after configuration it is copied to the slaves with scp)

Download Hadoop: http://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz

[root@master ~]# wget http://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
(Note: closer.cgi is Apache's mirror-selector page, so the wget above would fetch an HTML page rather than the tarball; point wget at a direct mirror link instead, for example https://archive.apache.org/dist/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz.)
Out of personal habit I extract Hadoop into /opt:
[root@master ~]# tar -zxvf hadoop-2.7.3.tar.gz -C /opt/

Add the environment variables (required on all nodes)

[root@master ~]# vi /etc/profile
export HADOOP_HOME=/opt/hadoop-2.7.3/
export PATH="$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH"
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_COMMON_LIB_NATIVE_DIR"
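As with the JDK, the profile changes take effect after it is re-read; hadoop version is a simple check that the binaries are on the PATH:
[root@master ~]# source /etc/profile
[root@master ~]# hadoop version
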
7. Edit the Hadoop configuration files
[root@master ~]# cd /opt/hadoop-2.7.3
[root@master hadoop-2.7.3]# mkdir tmp
[root@master hadoop-2.7.3]# mkdir -p hdfs/name
[root@master hadoop-2.7.3]# mkdir -p hdfs/data
[root@master hadoop-2.7.3]# vi etc/hadoop/hadoop-env.sh
Change the JAVA_HOME line to: export JAVA_HOME=/usr/java/jdk1.8.0_131/
[root@master hadoop-2.7.3]# vi etc/hadoop/slaves
slave01
slave02
[root@master hadoop-2.7.3]# vi etc/hadoop/core-site.xml
<configuration>
<property>
   <name>fs.defaultFS</name>
   <value>hdfs://master:9000</value>
 </property>
 <property>
   <name>io.file.buffer.size</name>
   <value>131072</value>
 </property>
 <property>
   <name>hadoop.tmp.dir</name>
   <value>/opt/hadoop-2.7.3/tmp</value>
   <description>Abase for other temporary directories.</description>
  </property>
</configuration>
[root@master hadoop-2.7.3]# vi etc/hadoop/hdfs-site.xml
<configuration>
<property>
   <name>dfs.namenode.secondary.http-address</name>
   <value>master:50090</value>
 </property>
 <property>
   <name>dfs.namenode.name.dir</name>
   <value>file:/opt/hadoop-2.7.3/hdfs/name</value>
 </property>
 <property>
   <name>dfs.datanode.data.dir</name>
   <value>file:/opt/hadoop-2.7.3/hdfs/data</value>
 </property>
 <property>
   <name>dfs.replication</name>
   <value>2</value>
   <description>Each block is kept as 2 replicas.</description>
 </property>
</configuration>
[root@master hadoop-2.7.3]# cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml
[root@master hadoop-2.7.3]# vi etc/hadoop/mapred-site.xml
<configuration>
<property>
   <name>mapreduce.framework.name</name>
   <value>yarn</value>
 </property>
 <property>
   <name>mapreduce.jobhistory.address</name>
   <value>master:10020</value>
 </property>
 <property>
   <name>mapreduce.jobhistory.webapp.address</name>
   <value>master:19888</value>
 </property>
</configuration>
[root@master hadoop-2.7.3]# vi /opt/hadoop-2.7.3/etc/hadoop/yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
   <name>yarn.nodemanager.aux-services</name>
   <value>mapreduce_shuffle</value>
 </property>
 <property>
   <name>yarn.resourcemanager.address</name>
   <value>master:8032</value>
 </property>
 <property>
   <name>yarn.resourcemanager.scheduler.address</name>
   <value>master:8030</value>
 </property>
 <property>
   <name>yarn.resourcemanager.resource-tracker.address</name>
   <value>master:8031</value>
 </property>
 <property>
   <name>yarn.resourcemanager.admin.address</name>
   <value>master:8033</value>
 </property>
 <property>
   <name>yarn.resourcemanager.webapp.address</name>
   <value>master:8088</value>
 </property>
</configuration>
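Before copying the tree to the slaves, it can be worth checking that the XML files parse and the values are actually picked up; hdfs getconf reads the same configuration (a sketch, using the keys configured above):
[root@master hadoop-2.7.3]# bin/hdfs getconf -confKey fs.defaultFS
[root@master hadoop-2.7.3]# bin/hdfs getconf -confKey dfs.replication
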
8. Format the NameNode, copy the configured Hadoop to the slaves, and start the cluster
[root@master hadoop-2.7.3]# bin/hadoop namenode -format
[root@master hadoop-2.7.3]# scp -r /opt/hadoop-2.7.3 slave01:/opt/
[root@master hadoop-2.7.3]# scp -r /opt/hadoop-2.7.3 slave02:/opt/
[root@master hadoop-2.7.3]# sbin/start-all.sh
[root@master hadoop-2.7.3]# jps
10324 NameNode
11335 Jps
10520 SecondaryNameNode
10680 ResourceManager
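jps on the master shows NameNode, SecondaryNameNode and ResourceManager; DataNode and NodeManager run on the slaves. A few optional checks, plus starting the JobHistory server configured in mapred-site.xml (start-all.sh does not launch it):
[root@slave01 ~]# jps    (should list DataNode and NodeManager)
[root@master hadoop-2.7.3]# bin/hdfs dfsadmin -report
[root@master hadoop-2.7.3]# sbin/mr-jobhistory-daemon.sh start historyserver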

Visit http://<master IP>:8088 (YARN ResourceManager web UI) and http://<master IP>:50070 (HDFS NameNode web UI); pages like the ones below should come up, with the NameNode UI reporting two live nodes.

[Screenshots from the original post: IP8088.png, IP50070 (two nodes).png]
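As a final smoke test (not part of the original walkthrough), a small MapReduce job can be run with the examples jar that ships with Hadoop 2.7.3; paths are relative to /opt/hadoop-2.7.3:
[root@master hadoop-2.7.3]# bin/hdfs dfs -mkdir -p /user/root
[root@master hadoop-2.7.3]# bin/hdfs dfs -put etc/hadoop/core-site.xml /user/root/
[root@master hadoop-2.7.3]# bin/hdfs dfs -ls /user/root
[root@master hadoop-2.7.3]# bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar pi 5 10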
