Deploying a Fully Distributed Hadoop Cluster

By guaren2009 | Published 2020-05-28 06:47

Part 1: Preparing the Machine Environment

(1) Prepare the machines

Prepare three machines, each with at least 2 CPU cores and 8 GB of RAM, running CentOS 7.2. This walkthrough uses virtual machines; all three are configured with static IPs and have the firewall disabled. Set the hostnames to hadoop001, hadoop002, and hadoop003 respectively.
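The firewall step itself is not shown above; on CentOS 7 it would typically look like the following (a sketch, run as root on each node):

## Assumed equivalent of the firewall-disabling step mentioned above
systemctl stop firewalld
systemctl disable firewalld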

(2) Configure the hosts file

## Configure the cluster hosts file (on all three machines)

echo "192.168.80.216 hadoop001" >> /etc/hosts

echo "192.168.80.217 hadoop002" >> /etc/hosts

echo "192.168.80.218 hadoop003" >> /etc/hosts

(3) Create the hadoop user

## Create the hadoop user

useradd hadoop

su - hadoop
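The hadoop user has to exist on all three machines (the later steps copy files into /home/hadoop/.ssh on hadoop002 and hadoop003), so presumably the same commands are run on each node as root. If the account needs a password, set one explicitly:

## Assumed: repeated on hadoop002 and hadoop003 as root
useradd hadoop
passwd hadoop    # interactive; optional if only key-based SSH logins are used
su - hadoop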

(4) Set up SSH trust relationships

## Set up the trust relationships

ssh-keygen
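ssh-keygen is run as the hadoop user on each of the three machines; pressing Enter through the prompts accepts the defaults. A non-interactive equivalent, if preferred, is:

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa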

## On hadoop002:

[hadoop@hadoop002 .ssh]$ scp ~/.ssh/id_rsa.pub  root@hadoop001:/home/hadoop/.ssh/id_rsa.pub2

## On hadoop003:

[hadoop@hadoop003 .ssh]$ scp ~/.ssh/id_rsa.pub  root@hadoop001:/home/hadoop/.ssh/id_rsa.pub3

## Once I hold another machine's public key (in authorized_keys), that machine can ssh to me without a password

## On hadoop001:

[hadoop@hadoop001 .ssh]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

[hadoop@hadoop001 .ssh]$ cat ~/.ssh/id_rsa.pub2 >> ~/.ssh/authorized_keys

[hadoop@hadoop001 .ssh]$ cat ~/.ssh/id_rsa.pub3 >> ~/.ssh/authorized_keys

[hadoop@hadoop001 .ssh]$ chmod 0600 ~/.ssh/authorized_keys

# Copy authorized_keys to hadoop002 and hadoop003

[hadoop@hadoop001 .ssh]$ scp authorized_keys root@hadoop002:/home/hadoop/.ssh/

[hadoop@hadoop001 .ssh]$ scp authorized_keys root@hadoop003:/home/hadoop/.ssh/

# On hadoop002, check the owner and group of authorized_keys; if it is owned by root, switch to root and change the owner and group to hadoop

[root@hadoop002 ~]# chown hadoop:hadoop /home/hadoop/.ssh/authorized_keys

[hadoop@hadoop002 .ssh]$ chmod 0600 ~/.ssh/authorized_keys

# On hadoop003, check the owner and group of authorized_keys; if it is owned by root, switch to root and change the owner and group to hadoop

[root@hadoop003 ~]# chown hadoop:hadoop /home/hadoop/.ssh/authorized_keys

[hadoop@hadoop003 .ssh]$ chmod 0600 ~/.ssh/authorized_keys

## Run a command like `ssh hadoop001 date` from each of the three machines to the others. The first time, you must type yes before the date is returned, but no password should be required; if a password is requested, the trust relationship was not set up correctly.

## When you type yes, the host's identity is recorded in ~/.ssh/known_hosts, so yes is not needed the next time.

## If a machine's SSH host key ever changes, delete that machine's entry from known_hosts (see the example after the transcript below).

[hadoop@hadoop001 .ssh]$ ssh hadoop001 date

The authenticity of host 'hadoop001 (192.168.80.216)' can't be established.

ECDSA key fingerprint is SHA256:W/+1s0pJCubhSFJ1tRaRyRJuAGkMzDk4Z2Q2iYAJ8VY.

ECDSA key fingerprint is MD5:56:fd:52:14:19:b6:1c:3a:48:67:0f:a4:18:02:ea:63.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'hadoop001,192.168.80.216' (ECDSA) to the list of known hosts.

Mon May 25 22:51:01 CST 2020

[hadoop@hadoop001 .ssh]$ ssh hadoop002 date

Mon May 25 22:51:05 CST 2020

[hadoop@hadoop001 .ssh]$ ssh hadoop003 date

Mon May 25 22:51:09 CST 2020

[hadoop@hadoop001 .ssh]$
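If a host key does change later, the stale entry can be removed with ssh-keygen rather than by editing known_hosts manually (the hostname here is just an example):

[hadoop@hadoop001 .ssh]$ ssh-keygen -R hadoop001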

(5) Create directories and upload the installation packages

## Create the tmp, sourcecode, software, shell, log, lib, data, and app directories

[hadoop@hadoop001 .ssh]$ cd

[hadoop@hadoop001 ~]$ mkdir tmp sourcecode software shell log lib  data app

[hadoop@hadoop002 .ssh]$ cd

[hadoop@hadoop002 ~]$ mkdir tmp sourcecode software shell log lib  data app

[hadoop@hadoop003 ~]$ cd

[hadoop@hadoop003 ~]$ mkdir tmp sourcecode software shell log lib  data app

## Upload the installation packages to hadoop001; FTP (or any file transfer tool) works
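For example, from whichever machine holds the downloaded packages (a sketch; the local path is an assumption):

scp hadoop-2.6.0-cdh5.16.2.tar.gz hadoop@hadoop001:/home/hadoop/software/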

Part 2: Deploying Hadoop

(6) Deploy the JDK

Deployment guide: https://www.jianshu.com/p/02e8ebbdd259

## Deploy the JDK as the root user

## Deploy the JDK on hadoop001 first

## Create the /usr/java directory on hadoop002 and hadoop003

[root@hadoop002 ~]# mkdir  /usr/java

[root@hadoop003 ~]# mkdir  /usr/java

## Copy the /usr/java/jdk1.8.0_181/ directory to hadoop002 and hadoop003

[root@hadoop001 java]# scp -r /usr/java/jdk1.8.0_181/ root@hadoop002:/usr/java/

[root@hadoop001 java]# scp -r /usr/java/jdk1.8.0_181/ root@hadoop003:/usr/java/

## Configure the environment variables on hadoop002 and hadoop003 and make them take effect

[root@hadoop002 ~]# echo -e '# JAVA ENV\nexport JAVA_HOME=/usr/java/jdk1.8.0_181\nexport PATH=$JAVA_HOME/bin:$PATH' >>/etc/profile

[root@hadoop002 ~]# source /etc/profile

[root@hadoop002 ~]# which java

/usr/java/jdk1.8.0_181/bin/java

[root@hadoop002 ~]# java -version

java version "1.8.0_181"

Java(TM) SE Runtime Environment (build 1.8.0_181-b13)

Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)

[root@hadoop003 ~]# echo -e '# JAVA ENV\nexport JAVA_HOME=/usr/java/jdk1.8.0_181\nexport PATH=$JAVA_HOME/bin:$PATH' >>/etc/profile

[root@hadoop003 ~]# source /etc/profile

[root@hadoop003 ~]# which java

/usr/java/jdk1.8.0_181/bin/java

[root@hadoop003 ~]# java -version

java version "1.8.0_181"

Java(TM) SE Runtime Environment (build 1.8.0_181-b13)

Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)

(7) Deploy ZooKeeper

Deployment guide: https://www.jianshu.com/p/aae9d63fc091

(8) Deploy Hadoop

Configuration files shared here:

Link: https://pan.baidu.com/s/1Ua8Q8lznDuqz8Lao4bxbdg

Extraction code: vk34
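The actual contents come from the shared files above. For orientation, the slaves file simply lists the DataNode hosts; judging from the jps output later (a DataNode runs on all three nodes), it presumably contains:

hadoop001
hadoop002
hadoop003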

## Deploy Hadoop

## Deploy Hadoop on hadoop001 first

[hadoop@hadoop001 ~]$ tar -xzvf /home/hadoop/software/hadoop-2.6.0-cdh5.16.2.tar.gz -C /home/hadoop/app/

[hadoop@hadoop001 ~]$ cd ~/app/

## Create a symbolic link

[hadoop@hadoop001 app]$ ln -s hadoop-2.6.0-cdh5.16.2/ hadoop

[hadoop@hadoop001 app]$ cd hadoop

## Edit hadoop-env.sh and set JAVA_HOME explicitly:

export JAVA_HOME=/usr/java/jdk1.8.0_181
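One way to apply this change non-interactively is a sed one-liner (a sketch only; the existing JAVA_HOME line in hadoop-env.sh may be formatted differently, in which case edit the file by hand):

[hadoop@hadoop001 hadoop]$ sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/java/jdk1.8.0_181|' etc/hadoop/hadoop-env.sh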

[hadoop@hadoop001 hadoop]$ cd etc/hadoop

[hadoop@hadoop001 hadoop]$ rm -rf hdfs-site.xml core-site.xml slaves yarn-site.xml

## Upload the prepared configuration files

## Check the uploaded files; they have Windows (CRLF) line endings

[hadoop@hadoop001 hadoop]$ file slaves

slaves: ASCII text, with CRLF line terminators

## Switch to the root user and install dos2unix

[hadoop@hadoop001 hadoop]$ exit

logout

[root@hadoop001 ~]# yum install -y dos2unix

## Switch back to the hadoop user

[hadoop@hadoop001 ~]$ cd ~/app/hadoop/etc/hadoop

[hadoop@hadoop001 hadoop]$ dos2unix slaves

dos2unix: converting file slaves to Unix format ...

[hadoop@hadoop001 hadoop]$ file slaves

slaves: ASCII text

## Convert hdfs-site.xml, core-site.xml, slaves, and yarn-site.xml the same way
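dos2unix accepts multiple files, so the remaining conversions can be done in one command (slaves has already been converted above):

[hadoop@hadoop001 hadoop]$ dos2unix hdfs-site.xml core-site.xml yarn-site.xml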

## Copy the installation to hadoop002 and hadoop003

[hadoop@hadoop001 ~]$ cd ~/app/

[hadoop@hadoop001 app]$ scp -r hadoop-2.6.0-cdh5.16.2/ hadoop002:/home/hadoop/app/

[hadoop@hadoop002 app]$ ln -s hadoop-2.6.0-cdh5.16.2/ hadoop

[hadoop@hadoop001 app]$ scp -r hadoop-2.6.0-cdh5.16.2/ hadoop003:/home/hadoop/app/

[hadoop@hadoop003 zookeeper]$ cd ~/app/

[hadoop@hadoop003 app]$ ln -s hadoop-2.6.0-cdh5.16.2/ hadoop

## Before formatting the NameNode, start the JournalNodes on all three machines

[hadoop@hadoop001 app]$ cd ~/app/hadoop/sbin/

[hadoop@hadoop001 sbin]$ ./hadoop-daemon.sh start journalnode

[hadoop@hadoop002 app]$ cd ~/app/hadoop/sbin/

[hadoop@hadoop002 sbin]$ ./hadoop-daemon.sh start journalnode

[hadoop@hadoop003 app]$ cd ~/app/hadoop/sbin/

[hadoop@hadoop003 sbin]$ ./hadoop-daemon.sh start journalnode

## Format the NameNode; do this on hadoop001

[hadoop@hadoop001 app]$ cd ~/app/hadoop/bin/

[hadoop@hadoop001 bin]$ ./hadoop namenode -format

## Because NameNode HA is configured, the NameNode metadata on hadoop001 must be synced to hadoop002

## This mainly covers dfs.namenode.name.dir and dfs.namenode.edits.dir; also make sure the shared edits directory (dfs.namenode.shared.edits.dir) contains all of the NameNode's metadata

[hadoop@hadoop001 ~]$ scp -r ~/data/dfs/name hadoop002:/home/hadoop/data/dfs/
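An alternative to copying the name directory by hand, assuming the HA settings from the shared configuration files are in place, is to bootstrap the standby NameNode directly on hadoop002:

[hadoop@hadoop002 ~]$ ~/app/hadoop/bin/hdfs namenode -bootstrapStandby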

# Initialize the ZKFC (format the failover-controller state in ZooKeeper)

[hadoop@hadoop001 bin]$ cd ~/app/hadoop

[hadoop@hadoop001 hadoop]$ bin/hdfs zkfc -formatZK

## Then configure the environment variables (on all three machines)

[hadoop@hadoop001 hadoop]$ echo -e '# HADOOP ENV\nexport HADOOP_HOME=/home/hadoop/app/hadoop\nexport PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH' >> ~/.bashrc

[hadoop@hadoop001 hadoop]$ source ~/.bashrc

[hadoop@hadoop002 hadoop]$ echo -e '# HADOOP ENV\nexport HADOOP_HOME=/home/hadoop/app/hadoop\nexport PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH' >> ~/.bashrc

[hadoop@hadoop002 hadoop]$ source ~/.bashrc

[hadoop@hadoop003 sbin]$ echo -e '# HADOOP ENV\nexport HADOOP_HOME=/home/hadoop/app/hadoop\nexport PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH' >> ~/.bashrc

[hadoop@hadoop003 sbin]$ source ~/.bashrc

# Start HDFS

[hadoop@hadoop001 hadoop]$ start-dfs.sh

# Start YARN (the second ResourceManager on hadoop002 is started separately below)

[hadoop@hadoop001 hadoop]$ start-yarn.sh

[hadoop@hadoop002 hadoop]$ yarn-daemon.sh start resourcemanager

# Start the JobHistory server

[hadoop@hadoop001 hadoop]$ mr-jobhistory-daemon.sh start historyserver

## Check the processes

[hadoop@hadoop001 logs]$ jps

27040 ResourceManager

26424 NameNode

26536 DataNode

27528 JobHistoryServer

27640 Jps

26923 DFSZKFailoverController

27147 NodeManager

18157 QuorumPeerMain

26734 JournalNode

[hadoop@hadoop002 hadoop]$ jps

28064 ResourceManager

27907 NodeManager

18213 QuorumPeerMain

27592 DataNode

27816 DFSZKFailoverController

28185 Jps

27514 NameNode

27690 JournalNode

[hadoop@hadoop003 sbin]$ jps

23665 JournalNode

23938 Jps

23765 NodeManager

18221 QuorumPeerMain

23567 DataNode

## Check the web UIs

## NameNode web UI

http://hadoop001:50070/

http://hadoop002:50070/

## YARN web UI

http://hadoop001:8088/cluster

http://hadoop002:8088/cluster/cluster

## JobHistory server web UI

http://hadoop001:19888/jobhistory

## If all of the processes come up, the configuration is correct
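Beyond the web UIs, the active/standby state of the NameNodes and ResourceManagers can be checked from the command line. The service IDs (nn1/nn2, rm1/rm2 below) depend on the shared configuration files and are assumptions here:

[hadoop@hadoop001 ~]$ hdfs haadmin -getServiceState nn1
[hadoop@hadoop001 ~]$ hdfs haadmin -getServiceState nn2
[hadoop@hadoop001 ~]$ yarn rmadmin -getServiceState rm1
[hadoop@hadoop001 ~]$ yarn rmadmin -getServiceState rm2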

## Shut down the cluster

## Stop YARN

[hadoop@hadoop001 hadoop]$ stop-yarn.sh

[hadoop@hadoop002 hadoop]$ yarn-daemon.sh stop resourcemanager

## Stop HDFS

[hadoop@hadoop001 hadoop]$ stop-dfs.sh

## Or, to save a few steps, stop everything at once:

[hadoop@hadoop001 logs]$ mr-jobhistory-daemon.sh stop historyserver

[hadoop@hadoop001 logs]$ stop-all.sh

[hadoop@hadoop002 hadoop]$ yarn-daemon.sh stop resourcemanager
