I. VM Overview
Three CentOS 7.5 VMs, each with 4 GB RAM, 2 CPU cores, and a 40 GB disk:
master 192.168.2.146
hadoop01 192.168.2.153
hadoop02 192.168.2.144
II. Environment Variables and Installation Prep
1. /etc/profile.d/hadoop.sh (create on master with the following contents)
export JAVA_HOME=/home/james/app/jdk1.8.0_91
export CLASSPATH=$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH
export HADOOP_HOME=/home/james/app/hadoop-2.6.0-cdh5.7.0
export PATH=$HADOOP_HOME/bin:$PATH
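To confirm the new profile script is being picked up by the shell, it can be sourced and the variables echoed (a quick sanity check):
source /etc/profile.d/hadoop.sh
echo $JAVA_HOME
echo $HADOOP_HOME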
2. Hostname setup
sudo vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=master
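On CentOS 7 the HOSTNAME line in /etc/sysconfig/network is largely ignored by systemd, so it is safer to also set the hostname with hostnamectl (run on each node with its own name; master shown here):
sudo hostnamectl set-hostname master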
Hostname-to-IP mapping (do this on each node): sudo vi /etc/hosts
192.168.2.146 master
192.168.2.153 hadoop01
192.168.2.144 hadoop02
3. Turn off the firewall on each node
systemctl stop firewalld.service
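Stopping the service only lasts until the next reboot; disabling it keeps it from starting again, and the state can be checked afterwards:
systemctl disable firewalld.service
systemctl status firewalld.service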
4. Role assignment for each node
master: NameNode/DataNode ResourceManager/NodeManager
hadoop01: DataNode NodeManager
hadoop02: DataNode NodeManager
5. Passwordless SSH login (every node can SSH into every other node without a password)
Run ssh-keygen -t rsa on every machine.
Start on master and run the following commands:
cd ~/.ssh
ssh-copy-id 192.168.2.146
scp /home/james/.ssh/authorized_keys 192.168.2.153:/home/james/.ssh/
On hadoop01:
cd ~/.ssh
ssh-copy-id 192.168.2.153
scp /home/james/.ssh/authorized_keys 192.168.2.144:/home/james/.ssh/
On hadoop02, add its own key as well, then copy the combined authorized_keys back to the other two nodes:
cd ~/.ssh
ssh-copy-id 192.168.2.144
scp /home/james/.ssh/authorized_keys 192.168.2.146:/home/james/.ssh/
scp /home/james/.ssh/authorized_keys 192.168.2.153:/home/james/.ssh/
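A quick check from master that the mutual trust works; each command should print the remote hostname without asking for a password:
ssh master hostname
ssh hadoop01 hostname
ssh hadoop02 hostname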
6. Install the JDK
On master:
tar -zxvf jdk-8u91-linux-x64.tar.gz -C ~/app
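A quick check that the JDK extracted correctly, assuming /etc/profile.d/hadoop.sh has been sourced so PATH picks up the new JDK:
java -version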
III. Installing the Hadoop Cluster
1. Extract on master
tar -zxvf hadoop-2.6.0-cdh5.7.0.tar.gz -C ~/app
2. Edit the configuration files under $HADOOP_HOME/etc/hadoop
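Besides the files below, it is usually necessary to set JAVA_HOME explicitly in hadoop-env.sh in the same directory, because the startup scripts do not always inherit it from the login environment when they SSH into the slave nodes (path taken from the profile script above):
export JAVA_HOME=/home/james/app/jdk1.8.0_91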
2.1 core-site.xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:8020</value>
</property>
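Each snippet in this section goes inside the <configuration> element of its file; as an example, a minimal core-site.xml would look like this:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:8020</value>
  </property>
</configuration>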
2.2 hdfs-site.xml
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/james/app/tmp/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/james/app/tmp/dfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
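The directories above should be writable by the james user on every node; it does no harm to create them up front (a sketch, run on each node):
mkdir -p /home/james/app/tmp/dfs/name /home/james/app/tmp/dfs/data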
2.3 yarn-site.xml
<property>
<name>yarn.resourcemanager.hostname</name>
<value>master</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
2.4 mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
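In Hadoop 2.6 the etc/hadoop directory usually ships only mapred-site.xml.template; if that is the case here, copy it first and then add the property above:
cd $HADOOP_HOME/etc/hadoop
cp mapred-site.xml.template mapred-site.xml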
2.5 slaves
master
hadoop01
hadoop02
3. Distribute the installation to hadoop01 and hadoop02
scp -r ~/app james@hadoop01:~
scp -r ~/app james@hadoop02:~
scp /etc/profile.d/hadoop.sh james@hadoop01:/etc/profile.d/
scp /etc/profile.d/hadoop.sh james@hadoop02:/etc/profile.d/
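The last two scp commands write into /etc/profile.d, which normally needs root on the remote machine; if they fail with a permission error, a workaround is to copy to the home directory and move the file with sudo (hadoop01 shown, same for hadoop02):
scp /etc/profile.d/hadoop.sh james@hadoop01:~
ssh -t james@hadoop01 'sudo mv ~/hadoop.sh /etc/profile.d/'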
4. On hadoop01 and hadoop02, run source /etc/profile.d/hadoop.sh
5. Format the NameNode (NN) ($HADOOP_HOME/bin)
On master only: hdfs namenode -format
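Formatting is a one-time operation; rerunning it later wipes the NameNode metadata and causes cluster ID mismatches with existing DataNodes. A successful format prints a message near the end similar to:
... Storage directory /home/james/app/tmp/dfs/name has been successfully formatted.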
IV. Starting the Cluster
On master only, from $HADOOP_HOME: sbin/start-all.sh
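start-all.sh still works in Hadoop 2.x but is marked deprecated; the equivalent is to start HDFS and YARN separately:
sbin/start-dfs.sh
sbin/start-yarn.sh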
V. Verify with jps
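Running jps on each node should show roughly the processes from the role assignment in II.4 (SecondaryNameNode ends up on master by default, since no separate host is configured for it):
master:   NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager
hadoop01: DataNode, NodeManager
hadoop02: DataNode, NodeManager
The web UIs are another quick check: the NameNode UI at http://master:50070 and the ResourceManager UI at http://master:8088.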