This section covers:
Setting up the YARN environment
1. System environment:
OS: CentOS Linux release 7.5.1804 (Core)
CPU: 2 cores
Memory: 1GB
Running user: root
JDK version: 1.8.0_252
Hadoop version: cdh5.16.2
2. Cluster node role plan:
172.26.37.245 node1.hadoop.com---->namenode,zookeeper,journalnode,hadoop-hdfs-zkfc,resourcemanager,historyserver
172.26.37.246 node2.hadoop.com---->datanode,zookeeper,journalnode,nodemanager,hadoop-client,mapreduce
172.26.37.247 node3.hadoop.com---->datanode,nodemanager,hadoop-client,mapreduce
172.26.37.248 node4.hadoop.com---->namenode,zookeeper,journalnode,hadoop-hdfs-zkfc
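For reference, the mapping above would normally also appear in /etc/hosts on every node (a sketch; it assumes static name resolution rather than DNS, and the short aliases are illustrative):
172.26.37.245 node1.hadoop.com node1
172.26.37.246 node2.hadoop.com node2
172.26.37.247 node3.hadoop.com node3
172.26.37.248 node4.hadoop.com node4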
3. Environment notes:
This deployment adds the following roles:
172.26.37.245 node1.hadoop.com---->resourcemanager,historyserver
172.26.37.246 node2.hadoop.com---->nodemanager,hadoop-client,mapreduce
172.26.37.247 node3.hadoop.com---->nodemanager,hadoop-client,mapreduce
node1: handles job scheduling, resource management, and job history, so it gets resourcemanager and historyserver
node2 and node3: run the actual tasks, so each gets nodemanager, hadoop-client, and mapreduce
I. Installation
On node1:
# yum -y install hadoop-yarn-resourcemanager hadoop-mapreduce-historyserver
On node2 and node3:
# yum -y install hadoop-yarn-nodemanager hadoop-mapreduce hadoop-client
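To confirm the packages installed cleanly, the RPM database can be queried (an optional check, not part of the original steps):
# rpm -qa | grep -E 'hadoop-(yarn|mapreduce|client)'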
II. Configuration
1. Update /etc/hadoop/conf/core-site.xml (node1, node2, node3)
# cp -p /etc/hadoop/conf/core-site.xml /etc/hadoop/conf/core-site.xml.20200623
# vi /etc/hadoop/conf/core-site.xml
Add the following:
<property>
  <name>hadoop.proxyuser.mapred.groups</name>
  <value>*</value>
</property>
<!-- Allow the mapred user to move files belonging to users in these groups -->
<property>
  <name>hadoop.proxyuser.mapred.hosts</name>
  <value>*</value>
</property>
<!-- Allow the mapred user to move files belonging to users on these hosts -->
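After editing, it is worth checking that the file is still well-formed XML and that the new keys are visible to the Hadoop tools (an optional sanity check; xmllint ships with libxml2):
# xmllint --noout /etc/hadoop/conf/core-site.xml && echo OK
# hdfs getconf -confKey hadoop.proxyuser.mapred.hosts
The second command should print the configured value, i.e. *.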
2. Configure the YARN cluster properties and the JobHistory server (node1)
# cp -p /etc/hadoop/conf/mapred-site.xml /etc/hadoop/conf/mapred-site.xml.20200623
# vi /etc/hadoop/conf/mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <!-- Must be yarn to run MapReduce v2 on YARN; anything else means MRv1 -->
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>node1.hadoop.com:10020</value>
  </property>
  <!-- JobHistory server address, host:port -->
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>node1.hadoop.com:19888</value>
  </property>
  <!-- JobHistory server web UI address, host:port -->
  <property>
    <name>yarn.app.mapreduce.am.staging-dir</name>
    <value>/user</value>
  </property>
  <!-- Staging directory YARN uses while running jobs; the default is /tmp/hadoop-yarn/staging, moved here to the /user directory on HDFS -->
</configuration>
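With yarn.app.mapreduce.am.staging-dir set to /user, each submitting user's job files are staged under /user/<username>/.staging rather than under the default /tmp/hadoop-yarn/staging. For example, while the hdfs user has a job running you would expect to see something like the following (illustrative, not captured from this cluster):
# sudo -u hdfs hadoop fs -ls /user/hdfs/.staging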
3. Configure the YARN daemons (node1, node2, node3)
Configure the following services: ResourceManager (on a dedicated host) and NodeManager (on every host where MapReduce v2 jobs will run).
# cp /etc/hadoop/conf/yarn-site.xml /etc/hadoop/conf/yarn-site.xml.20200623
# vi /etc/hadoop/conf/yarn-site.xml
After the change, the file should read:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <!-- Shuffle service that must be enabled for MapReduce applications -->
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <!-- Aggregate container logs to HDFS when applications finish -->
  <property>
    <description>List of directories to store localized files in.</description>
    <name>yarn.nodemanager.local-dirs</name>
    <value>file:///var/lib/hadoop-yarn/cache/${user.name}/nm-local-dir</value>
  </property>
  <!-- Local cache directory for localized files; may be changed, but create the directory yourself and be sure to chown it to the yarn user (commands sketched after this file) -->
  <property>
    <description>Where to store container logs.</description>
    <name>yarn.nodemanager.log-dirs</name>
    <value>file:///var/log/hadoop-yarn/containers</value>
  </property>
  <!-- Local container log directory; create or change it yourself and be sure to chown it to the yarn user -->
  <property>
    <description>Where to aggregate logs to.</description>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>hdfs://cluster1/var/log/hadoop-yarn/apps</value>
  </property>
  <!-- Remote NodeManager log directory, stored on HDFS; point it at the HDFS nameservice (cluster1 here) -->
  <property>
    <description>Classpath for typical applications.</description>
    <name>yarn.application.classpath</name>
    <value>
      $HADOOP_CONF_DIR,
      $HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,
      $HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,
      $HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,
      $HADOOP_YARN_HOME/*,$HADOOP_YARN_HOME/lib/*
    </value>
  </property>
  <!-- Application classpath -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>node1.hadoop.com</value>
  </property>
  <!-- The ResourceManager services bind to their default ports on this host -->
</configuration>
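If yarn.nodemanager.local-dirs or yarn.nodemanager.log-dirs are changed from the values above, create the directories on every NodeManager host and hand them to the yarn user, for example (a sketch; the CDH packages normally create the default paths themselves):
# mkdir -p /var/lib/hadoop-yarn/cache /var/log/hadoop-yarn/containers
# chown -R yarn:yarn /var/lib/hadoop-yarn /var/log/hadoop-yarn/containers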
4. Create the YARN log directory on HDFS
# sudo -u hdfs hadoop fs -mkdir -p /var/log/hadoop-yarn
# sudo -u hdfs hadoop fs -chown yarn:mapred /var/log/hadoop-yarn
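Verify the ownership before moving on; the hadoop-yarn entry should now be owned by yarn:mapred:
# sudo -u hdfs hadoop fs -ls /var/log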
5. Set up a test environment
Create the /user directory and its subdirectories (including history), then set permissions:
# sudo -u hdfs hadoop fs -mkdir /user
# sudo -u hdfs hadoop fs -mkdir -p /user/cloudera
# sudo -u hdfs hadoop fs -chmod -R 1777 /user/cloudera
# sudo -u hdfs hadoop fs -chown mapred:hadoop /user/cloudera
# sudo -u hdfs hadoop fs -mkdir -p /user/hdfs
# sudo -u hdfs hadoop fs -chmod -R 1777 /user/hdfs
# sudo -u hdfs hadoop fs -chown mapred:hadoop /user/hdfs
# sudo -u hdfs hadoop fs -mkdir -p /user/history
# sudo -u hdfs hadoop fs -chmod -R 1777 /user/history
# sudo -u hdfs hadoop fs -chown mapred:hadoop /user/history
# sudo -u hdfs hdfs dfs -ls /
Found 3 items
drwxrwxrwt - hdfs supergroup 0 2020-06-25 11:47 /tmp
drwxr-xr-x - hdfs supergroup 0 2020-06-25 11:57 /user
drwxr-xr-x - hdfs supergroup 0 2020-06-25 11:49 /var
# sudo -u hdfs hadoop dfs -ls /user
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Found 3 items
drwxrwxrwt - mapred hadoop 0 2020-06-25 11:57 /user/cloudera
drwxrwxrwt - mapred hadoop 0 2020-06-25 11:56 /user/hdfs
drwxrwxrwt - mapred hadoop 0 2020-06-25 11:59 /user/history
6. Start YARN and the MapReduce JobHistory server
To bring up YARN, start the ResourceManager and NodeManager services, making sure the ResourceManager is running before the NodeManagers are started.
On the ResourceManager host (node1):
# service hadoop-yarn-resourcemanager start
# service hadoop-yarn-resourcemanager status
On each NodeManager host (node2 and node3):
# service hadoop-yarn-nodemanager start
# service hadoop-yarn-nodemanager status
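Once the NodeManagers are up they should register with the ResourceManager within a few seconds, which can be confirmed from any node (an optional check):
# yarn node -list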
Start the MapReduce JobHistory server (node1):
# service hadoop-mapreduce-historyserver start
# service hadoop-mapreduce-historyserver status
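Before checking the web UI, it helps to push one job through the cluster so the JobHistory server has something to record. A smoke test with the bundled examples jar (path as installed by the CDH packages; adjust if your layout differs):
# sudo -u hdfs hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 2 10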
7. Verify the JobHistory service
Open the JobHistory web UI in a browser:
http://172.26.37.245:19888/jobhistory
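The same endpoint can be probed from the command line, and the ResourceManager UI can be checked as well (port 8088 is the YARN default; it is not set explicitly above). Both should return HTTP 200:
# curl -s -o /dev/null -w '%{http_code}\n' http://172.26.37.245:19888/jobhistory
# curl -s -o /dev/null -w '%{http_code}\n' http://172.26.37.245:8088/cluster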