Changing the hostname
- Permanent change (run on each node; use slave1 / slave2 on the slave nodes):
hostnamectl set-hostname master
Then map the names to IPs on every node:
vim /etc/hosts
Add (replace masterIp etc. with the real addresses):
masterIp master
slave1Ip slave1
slave2Ip slave2
- Temporary change (takes effect immediately, lost on reboot)
master node: hostname master
slave1 node: hostname slave1
slave2 node: hostname slave2
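The /etc/hosts additions above can be scripted. A minimal sketch; the 192.168.1.x addresses are hypothetical placeholders for masterIp/slave1Ip/slave2Ip, and the file defaults to a local scratch copy so you can try it without touching the real /etc/hosts:

```shell
#!/bin/sh
# Append the cluster name-to-IP mappings. The 192.168.1.x addresses are
# placeholders -- substitute your real ones. Pass /etc/hosts as the first
# argument when running on an actual node.
HOSTS_FILE="${1:-hosts.local}"
cat >> "$HOSTS_FILE" <<'EOF'
192.168.1.10 master
192.168.1.11 slave1
192.168.1.12 slave2
EOF
# Show what was added
grep -E 'master|slave[12]' "$HOSTS_FILE"
```

Run it on all three nodes so that every machine can resolve every hostname.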
Disable SELinux and the firewall
setenforce 0 # put SELinux in permissive mode for this session; getenforce should now print Permissive
# To disable it permanently, set SELINUX=disabled in /etc/selinux/config and reboot (getenforce then prints Disabled)
systemctl disable firewalld # don't start the firewall on boot
systemctl stop firewalld # stop the firewall now
Set up passwordless SSH trust between the nodes
On the master node:
ssh-keygen
ssh-copy-id master
ssh-copy-id slave1
ssh slave1 # verify the passwordless login
logout
ssh-copy-id slave2
ssh slave2 # verify the passwordless login
logout
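The key-distribution steps above collapse into one loop. A dry-run sketch (it only echoes each command; set DRY_RUN to the empty string to actually execute it on the master node):

```shell
#!/bin/sh
# Dry-run by default: DRY_RUN=echo prints each command instead of running it.
DRY_RUN="${DRY_RUN:-echo}"
# Generate a key pair once (no passphrase) if one does not exist yet.
[ -f "$HOME/.ssh/id_rsa" ] || $DRY_RUN ssh-keygen -t rsa -N '' -f "$HOME/.ssh/id_rsa"
for host in master slave1 slave2; do
    $DRY_RUN ssh-copy-id "$host"    # push the public key
    $DRY_RUN ssh "$host" hostname   # verify: should print the remote hostname
done
```

Running it for real requires entering each node's password once; after that, ssh between the nodes is passwordless.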
Installing Oracle JDK 1.8
tar xf jdk-8u172-linux-x64.tar.gz -C /usr/local/ # download the JDK from the Oracle site, upload it to the server, then unpack
vim /etc/profile # append the following lines
export JAVA_HOME=/usr/local/jdk1.8.0_172
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
source /etc/profile # reload the profile
java -version # verify
Hadoop configuration
Download Hadoop from: http://mirror.bit.edu.cn/apache/hadoop/common/
Edit the files under etc/hadoop/ in the unpacked hadoop-2.6.5 directory:
Add to the masters file (lists the SecondaryNameNode host):
master
Add to the slaves file (lists the DataNode hosts):
slave1
slave2
Configure core-site.xml
Add the filesystem (NameNode RPC) settings:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop-2.6.5/tmp</value>
    </property>
</configuration>
Configure hdfs-site.xml
Add the HDFS storage settings:
<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:9001</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop-2.6.5/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop-2.6.5/dfs/data</value>
    </property>
    <property>
        <!-- this cluster has only two DataNodes (slave1, slave2),
             so a replication factor above 2 cannot be satisfied -->
        <name>dfs.replication</name>
        <value>2</value>
    </property>
</configuration>
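The directories named in hadoop.tmp.dir, dfs.namenode.name.dir and dfs.datanode.data.dir can be created up front. Hadoop creates them itself on format/start, so this is optional, but doing it early surfaces permission problems. A sketch; pass a scratch directory as the first argument to try it outside /usr/local:

```shell
#!/bin/sh
# Pre-create the HDFS directories referenced in core-site.xml / hdfs-site.xml.
# Defaults to the install path used in this guide; any writable path works.
BASE="${1:-/usr/local/hadoop-2.6.5}"
mkdir -p "$BASE/tmp" "$BASE/dfs/name" "$BASE/dfs/data"
ls "$BASE/dfs"   # should list: data  name
```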
Configure mapred-site.xml (in Hadoop 2.x this file must be created first: cp mapred-site.xml.template mapred-site.xml inside etc/hadoop/)
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
Configure yarn-site.xml
<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8035</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
</configuration>
Add the Hadoop environment variables
vim /etc/profile and append:
export HADOOP_HOME=/usr/local/hadoop-2.6.5
export PATH=$PATH:$HADOOP_HOME/bin
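The two profile lines above, as an appendable snippet. Adding $HADOOP_HOME/sbin to PATH is my addition, not part of the original steps; it lets you call start-all.sh without the full path. The file defaults to a local scratch copy so you can try it without touching /etc/profile:

```shell
#!/bin/sh
# Append the Hadoop environment to a profile file. Pass /etc/profile as
# the first argument when running on the real node.
PROFILE="${1:-profile.local}"
cat >> "$PROFILE" <<'EOF'
export HADOOP_HOME=/usr/local/hadoop-2.6.5
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
EOF
grep HADOOP_HOME "$PROFILE"   # show what was added
```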
Configure Hadoop on the slave nodes
Copy the configuration over from the master node:
scp /etc/profile root@slave1:/etc/profile
scp /etc/profile root@slave2:/etc/profile
scp -r /usr/local/hadoop-2.6.5 root@slave1:/usr/local/
scp -r /usr/local/hadoop-2.6.5 root@slave2:/usr/local/
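The four copy commands can be driven by one loop. A dry-run sketch (it only echoes the commands; set DRY_RUN to the empty string to actually copy):

```shell
#!/bin/sh
DRY_RUN="${DRY_RUN:-echo}"   # dry-run by default; DRY_RUN='' executes
for host in slave1 slave2; do
    $DRY_RUN scp /etc/profile "root@$host:/etc/profile"
    $DRY_RUN scp -r /usr/local/hadoop-2.6.5 "root@$host:/usr/local/"
done
```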
Reload the profile on the slave nodes
[root@slave1 ~]# source /etc/profile
[root@slave2 ~]# source /etc/profile
Format the NameNode on the master node (first start only)
hdfs namenode -format # the older `hadoop namenode -format` also works but is deprecated in 2.x
Start the cluster
/usr/local/hadoop-2.6.5/sbin/start-all.sh
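Once start-all.sh finishes, jps is the quickest health check: the master should list NameNode, SecondaryNameNode and ResourceManager, and each slave should list DataNode and NodeManager. A dry-run sketch of that check (set DRY_RUN to the empty string to actually run it):

```shell
#!/bin/sh
DRY_RUN="${DRY_RUN:-echo}"    # dry-run by default; DRY_RUN='' executes
$DRY_RUN jps                  # on master: NameNode, SecondaryNameNode, ResourceManager
for host in slave1 slave2; do
    $DRY_RUN ssh "$host" jps  # on each slave: DataNode, NodeManager
done
```

The YARN web UI at http://master:8088 (the yarn.resourcemanager.webapp.address configured above) should also show both NodeManagers registered.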