Assignment 4: Oozie-4.0.0-cdh5.3.6 on CentOS 7

Author: V1cttor | Published 2018-12-26 18:23

Installation prerequisites

  1. oozie-4.0.0-cdh5.3.6 http://archive.cloudera.com/cdh5/cdh/5/oozie-4.0.0-cdh5.3.6.tar.gz
  2. ext-2.2.zip http://archive.cloudera.com/gplextras/misc/ext-2.2.zip

1. Extract

[hadoop@hadoop131 software]$ tar zxvf oozie-4.0.0-cdh5.3.6.tar.gz -C ../bigdata/hadoop/
[hadoop@hadoop131 software]$ cd ../bigdata/hadoop/oozie-4.0.0-cdh5.3.6/
[hadoop@hadoop131 oozie-4.0.0-cdh5.3.6]$ ls
bin   lib          NOTICE.txt             oozie-hadooplibs-4.0.0-cdh5.3.6.tar.gz  oozie-sharelib-4.0.0-cdh5.3.6-yarn.tar.gz  src
conf  libtools     oozie-core             oozie-server                            oozie.war
docs  LICENSE.txt  oozie-examples.tar.gz  oozie-sharelib-4.0.0-cdh5.3.6.tar.gz    release-log.txt

Extract oozie-hadooplibs-4.0.0-cdh5.3.6.tar.gz, create a libext directory, and copy all the extracted jars into it (note: the MR1 jars copied here can later conflict with the MR2 jars; see the troubleshooting notes at the end):

[hadoop@hadoop131 oozie-4.0.0-cdh5.3.6]$ mkdir libext
[hadoop@hadoop131 oozie-4.0.0-cdh5.3.6]$ tar zxvf oozie-hadooplibs-4.0.0-cdh5.3.6.tar.gz 
[hadoop@hadoop131 oozie-4.0.0-cdh5.3.6]$ cp oozie-4.0.0-cdh5.3.6/hadooplibs/hadooplib-2.5.0-cdh5.3.6.oozie-4.0.0-cdh5.3.6/* libext/
[hadoop@hadoop131 oozie-4.0.0-cdh5.3.6]$ cp oozie-4.0.0-cdh5.3.6/hadooplibs/hadooplib-2.5.0-mr1-cdh5.3.6.oozie-4.0.0-cdh5.3.6/* libext/

Copy ext-2.2.zip into libext:

[hadoop@hadoop131 oozie-4.0.0-cdh5.3.6]$ cp /opt/software/ext-2.2.zip libext/

Copy the MySQL driver jar into the libext directory:

[hadoop@hadoop131 etc]$ cp /opt/software/mysql-connector-java-5.1.47.jar /opt/bigdata/hadoop/oozie-4.0.0-cdh5.3.6/libext/
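At this point libext should hold the hadooplib jars, ext-2.2.zip, and the MySQL driver; a quick sanity check (the jar count will vary with your versions):

[hadoop@hadoop131 etc]$ ls /opt/bigdata/hadoop/oozie-4.0.0-cdh5.3.6/libext | grep -cE '\.jar$'
[hadoop@hadoop131 etc]$ ls /opt/bigdata/hadoop/oozie-4.0.0-cdh5.3.6/libext | grep -E 'ext-2\.2\.zip|mysql-connector'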

2. Edit the configuration files

Under /opt/bigdata/hadoop/hadoop-2.7.3/etc/hadoop:

core-site.xml

<!-- Hosts from which the Oozie server user is allowed to impersonate other
     users when talking to Hadoop. The Oozie server here runs as the "hadoop"
     user; replace "hadoop" in the property name with your own username. -->
<property>
    <name>hadoop.proxyuser.hadoop.hosts</name>
    <value>*</value>
</property>

<!-- Groups of users that the Oozie proxy user may impersonate -->
<property>
    <name>hadoop.proxyuser.hadoop.groups</name>
    <value>*</value>
</property>

mapred-site.xml

<!-- MapReduce JobHistory Server RPC address; the default port is 10020 -->
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop131:10020</value>
</property>

<!-- MapReduce JobHistory Server web UI address; the default port is 19888 -->
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop131:19888</value>
</property>

yarn-site.xml

<!-- URL of the job history server, used by YARN to link to aggregated logs -->
<property> 
    <name>yarn.log.server.url</name> 
    <value>http://hadoop131:19888/jobhistory/logs/</value> 
</property>

Distribute the configuration to the other nodes (if you don't have xsync, use scp instead; a sketch follows the listing):

[hadoop@hadoop131 hadoop]$ cd ..
[hadoop@hadoop131 etc]$ ls
hadoop
[hadoop@hadoop131 etc]$ xsync hadoop/
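
Without xsync, a plain scp loop does the same distribution (a sketch; it assumes the other nodes are hadoop132 and hadoop133 with the same directory layout):

[hadoop@hadoop131 etc]$ for host in hadoop132 hadoop133; do scp -r hadoop/ ${host}:/opt/bigdata/hadoop/hadoop-2.7.3/etc/; done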

Under oozie-4.0.0-cdh5.3.6/conf:

oozie-site.xml

    <property>
        <name>oozie.service.JPAService.jdbc.driver</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>

    <property>
        <name>oozie.service.JPAService.jdbc.url</name>
        <value>jdbc:mysql://hadoop131:3306/oozie</value>
    </property>

    <property>
        <name>oozie.service.JPAService.jdbc.username</name>
        <value>pcadmin</value>
    </property>

    <property>
        <name>oozie.service.JPAService.jdbc.password</name>
        <value>000000</value>
    </property>

    <property>
        <name>oozie.service.HadoopAccessorService.hadoop.configurations</name>
        <value>*=/opt/bigdata/hadoop/hadoop-2.7.3/etc/hadoop</value>
        <description>Tell Oozie where to find Hadoop's configuration files; the "*=" prefix must not be removed</description>
    </property>

3. Start the cluster

[hadoop@hadoop131 etc]$ start-dfs.sh
[hadoop@hadoop132 zkData]$ start-yarn.sh
[hadoop@hadoop131 etc]$ mr-jobhistory-daemon.sh  start  historyserver
starting historyserver, logging to /opt/bigdata/hadoop/hadoop-2.7.3/logs/mapred-hadoop-historyserver-hadoop131.out
[hadoop@hadoop131 etc]$ jps
9696 Jps
9141 DataNode
9030 NameNode
9655 JobHistoryServer
9436 NodeManager
4655 QuorumPeerMain

4. Create the Oozie database

[hadoop@hadoop131 hadoop]$ mysql -uroot -p000000
mysql> create database oozie DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
mysql> quit;
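
The JDBC user configured in oozie-site.xml above is pcadmin, not root, so that account needs privileges on the new database. A sketch for MySQL 5.x (adjust the user, host pattern, and password to your own setup):

mysql> GRANT ALL PRIVILEGES ON oozie.* TO 'pcadmin'@'%' IDENTIFIED BY '000000';
mysql> FLUSH PRIVILEGES;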

5. Initialize Oozie

[hadoop@hadoop131 hadoop]$ cd /opt/bigdata/hadoop/oozie-4.0.0-cdh5.3.6/
[hadoop@hadoop131 oozie-4.0.0-cdh5.3.6]$ bin/oozie-setup.sh sharelib create -fs hdfs://hadoop131:9000 -locallib oozie-sharelib-4.0.0-cdh5.3.6-yarn.tar.gz

Open the Hadoop file browser and you can see that the sharelib files have been created.


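You can also verify from the command line; by default the sharelib lands under /user/<user>/share/lib on HDFS:

[hadoop@hadoop131 oozie-4.0.0-cdh5.3.6]$ hdfs dfs -ls /user/hadoop/share/lib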

6. Create oozie.sql (initialize the database schema)

[hadoop@hadoop131 oozie-4.0.0-cdh5.3.6]$ bin/oozie-setup.sh db create -run -sqlfile oozie.sql

Package the project and generate the WAR file:

[hadoop@hadoop131 oozie-4.0.0-cdh5.3.6]$ bin/oozie-setup.sh prepare-war

This step fails with an error (screenshot omitted):



Cause: unzip is not installed; after installing it, a second run then complains that zip is missing too. Install both and rerun:

[hadoop@hadoop131 oozie-4.0.0-cdh5.3.6]$ sudo yum -y install unzip
[hadoop@hadoop131 oozie-4.0.0-cdh5.3.6]$ sudo yum -y install zip
[hadoop@hadoop131 oozie-4.0.0-cdh5.3.6]$ bin/oozie-setup.sh prepare-war
  setting CATALINA_OPTS="$CATALINA_OPTS -Xmx1024m"

If you see the following message, everything is fine:

New Oozie WAR file with added 'ExtJS library, JARs' at /opt/bigdata/hadoop/oozie-4.0.0-cdh5.3.6/oozie-server/webapps/oozie.war

INFO: Oozie is ready to be started

7. Start Oozie

[hadoop@hadoop131 oozie-4.0.0-cdh5.3.6]$ bin/oozie-start.sh

Open hadoop131:11000 in a browser to confirm Oozie is running (screenshot omitted).
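
You can also check with the Oozie client; a healthy server reports NORMAL:

[hadoop@hadoop131 oozie-4.0.0-cdh5.3.6]$ bin/oozie admin -oozie http://hadoop131:11000/oozie -status
System mode : NORMAL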



======================================================
If Oozie errors out during startup and then refuses to stop, reporting "PID file
found but no matching process was found. Stop aborted.", delete
oozie-server/temp/oozie.pid under the Oozie directory.
======================================================
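
For example:

[hadoop@hadoop131 oozie-4.0.0-cdh5.3.6]$ rm oozie-server/temp/oozie.pid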

8. Scheduling a wordcount MapReduce job with Oozie

### Create an oozie-apps directory under the Oozie directory
[hadoop@hadoop131 oozie-4.0.0-cdh5.3.6]$ mkdir oozie-apps
[hadoop@hadoop131 oozie-4.0.0-cdh5.3.6]$ tar zxvf oozie-examples.tar.gz 
[hadoop@hadoop131 oozie-4.0.0-cdh5.3.6]$ cp -r examples/apps/map-reduce/ oozie-apps/
[hadoop@hadoop131 oozie-4.0.0-cdh5.3.6]$ cd oozie-apps/
[hadoop@hadoop131 oozie-apps]$ mv map-reduce mr-wordcount
[hadoop@hadoop131 oozie-apps]$ cd mr-wordcount
[hadoop@hadoop131 mr-wordcount]$ vim job.properties 
[hadoop@hadoop131 mr-wordcount]$ vim workflow.xml
[hadoop@hadoop131 mr-wordcount]$ mkdir lib
[hadoop@hadoop131 mr-wordcount]$ mkdir input

9. Delete the example jars under lib (we will supply our own wordcount jar later); the edited files look like this:

job.properties

# HDFS NameNode
nameNode=hdfs://hadoop131:9000
# YARN ResourceManager (on Hadoop 2, the jobTracker property points at the RM)
jobTracker=hadoop132:8032
queueName=default
examplesRoot=oozie-apps

oozie.wf.application.path=${nameNode}/user/hadoop/${examplesRoot}/mr-wordcount/workflow.xml
outputDir=map-reduce

workflow.xml

<workflow-app xmlns="uri:oozie:workflow:0.5" name="mr-wordcount">
<start to="mr-node"/>
<action name="mr-node">
  <map-reduce>
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <prepare>
      <delete path="${nameNode}/user/hadoop/${examplesRoot}/mr-wordcount/output"/>
    </prepare>
    <configuration>
      <property>
        <name>mapred.job.queue.name</name>
        <value>${queueName}</value>
      </property>
      
      <!--New API-->
      <property>
        <name>mapred.mapper.new-api</name>
        <value>true</value>
      </property>
      <property>
        <name>mapred.reducer.new-api</name>
        <value>true</value>
      </property>
      

      <!--mapper class-->
      <property>
        <name>mapreduce.job.map.class</name>
        <value>org.apache.hadoop.examples.WordCount$TokenizerMapper</value>
      </property>
      
      <property>
        <name>mapreduce.map.output.key.class</name>
        <value>org.apache.hadoop.io.Text</value>
      </property>
      <property>
        <name>mapreduce.map.output.value.class</name>
        <value>org.apache.hadoop.io.IntWritable</value>
      </property>
    
      <!--reducer class-->
      <property>
        <name>mapreduce.job.reduce.class</name>
        <value>org.apache.hadoop.examples.WordCount$IntSumReducer</value>
      </property>
      <property>
        <name>mapreduce.job.output.key.class</name>
        <value>org.apache.hadoop.io.Text</value>
      </property>
      <property>
        <name>mapreduce.job.output.value.class</name>
        <value>org.apache.hadoop.io.IntWritable</value>
      </property>
      
      <!--INPUT-->
      <property>
        <name>mapred.input.dir</name>
        <value>${nameNode}/user/hadoop/${examplesRoot}/mr-wordcount/input</value>
      </property>
      
      <!--OUTPUT-->
      <property>
        <name>mapred.output.dir</name> 
        <value>${nameNode}/user/hadoop/${examplesRoot}/mr-wordcount/output</value>
      </property>
    </configuration>
  </map-reduce>
  <ok to="end"/>
  <error to="fail"/>
</action>
<kill name="fail">
  <message>Map/Reduce failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name="end"/>
</workflow-app>
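
Before submitting, you can ask the Oozie client to validate the workflow definition against the schema:

[hadoop@hadoop131 oozie-4.0.0-cdh5.3.6]$ bin/oozie validate oozie-apps/mr-wordcount/workflow.xml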

Copy the data used for testing into the input directory (details omitted; any text file works, as sketched below).
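
For example (wc-input.txt is a hypothetical file name):

# wc-input.txt is just a placeholder for whatever test file you have on hand
[hadoop@hadoop131 mr-wordcount]$ cp /opt/software/wc-input.txt input/
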
Copy Hadoop's bundled wordcount example jar into the lib directory:

[hadoop@hadoop131 mr-wordcount]$ cp /opt/bigdata/hadoop/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar lib/
# Upload the application to HDFS
[hadoop@hadoop131 mr-wordcount]$ cd ..
[hadoop@hadoop131 oozie-apps]$ cd ..
[hadoop@hadoop131 oozie-4.0.0-cdh5.3.6]$ hdfs dfs -put oozie-apps/ /user/hadoop/
# Run the job
[hadoop@hadoop131 oozie-4.0.0-cdh5.3.6]$ bin/oozie job -oozie http://hadoop131:11000/oozie -config oozie-apps/mr-wordcount/job.properties -run
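
The run command prints a job id; you can follow the job from the CLI with it:

[hadoop@hadoop131 oozie-4.0.0-cdh5.3.6]$ bin/oozie job -oozie http://hadoop131:11000/oozie -info <job-id>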

Open hadoop131:11000 to watch the running job; click "job log" to view its logs. If an error occurs inside the MapReduce phase, trace it through the JobHistory server (did you start the JobHistory server?).



On success, the Oozie web console shows the workflow as SUCCEEDED (screenshot omitted).

Now we can see the output content in HDFS (screenshot omitted):
# Instead of downloading the files, view the last few lines directly
[hadoop@hadoop131 oozie-4.0.0-cdh5.3.6]$ hdfs dfs -cat /user/hadoop/oozie-apps/mr-wordcount/output/* | tail -10
yours,  1
yours;  1
yourself    15
yourself,   7
yourself,'  1
yourself.'  3
yourself;   3
yourself?'  1
youth,  2
youth.  1

======================================================
Notes:
1. If the Oozie job logs report "JA009: Unknown rpc kind in rpc header RPC_WRITABLE",
   the MR1 jars are conflicting with the MR2 jars. Fix: delete the MR1 jars from
   the libext directory.
2. Oozie needs the same application directory on HDFS and on the local machine;
   xsync/scp the oozie-apps directory to keep them in sync.
3. Double-check the account, password, and path configuration on every node.
======================================================
