1. Oozie
The workflow engine Oozie (literally "elephant driver") manages Hadoop jobs (MapReduce, Spark, Pig, Hive), chaining them together as a DAG (directed acyclic graph). Oozie has two kinds of job definitions: workflow and coordinator. A workflow is the DAG that describes the order in which tasks run, while a coordinator triggers workflows on a schedule, acting as a timer/manager for workflows. Its trigger conditions fall into two categories:
1. Data availability: the workflow fires once the required input files exist
2. Time conditions: the workflow fires on a schedule (a minimal time-triggered coordinator is sketched after this list)
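For illustration only, here is what such a time-triggered coordinator might look like. The app name, dates, and frequency are made up for this sketch; the variables reuse those defined in job.properties later in this post:

<coordinator-app name="daily-wordcount" frequency="${coord:days(1)}"
                 start="2018-08-16T00:00Z" end="2018-12-31T00:00Z"
                 timezone="UTC" xmlns="uri:oozie:coordinator:0.4">
    <action>
        <workflow>
            <!-- the workflow application this coordinator triggers -->
            <app-path>${nameNode}/${oozieAppsRoot}/mr-wordcount-wf</app-path>
        </workflow>
    </action>
</coordinator-app>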
Oozie defines an XML-based language, hPDL (Hadoop Process Definition Language), for describing the workflow DAG. A workflow definition contains two kinds of nodes:
1. Control flow nodes
2. Action nodes
Control flow nodes define where the flow starts and ends (start, end) and steer the execution path, e.g. decision, fork, and join; action nodes cover Hadoop jobs, SSH, HTTP, email, and Oozie sub-workflows. A fork/join fragment is sketched below; the wordcount workflow in the next section uses only start, action, kill, and end.
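As a sketch (not used in the wordcount example), a fork/join fragment in hPDL runs two actions in parallel and waits for both to finish; the node names here are placeholders:

<start to="forking"/>
<fork name="forking">
    <path start="task-a"/>
    <path start="task-b"/>
</fork>
<!-- task-a and task-b are ordinary actions, each ending with <ok to="joining"/> -->
<join name="joining" to="end"/>
<end name="end"/>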
2. Scheduling a WordCount MapReduce Job with Oozie
### Create an oozie-apps folder under the oozie directory
$ cd oozie/
$ sudo mkdir oozie-apps
$ sudo cp -r examples/apps/map-reduce oozie-apps
$ cd oozie-apps
$ sudo mv map-reduce mr-wordcount-wf
$ cd mr-wordcount-wf
$ ll
-rw-r--r-- 1 root root 1154 Aug 16 20:07 job.properties
drwxr-xr-x 2 root root 4096 Aug 16 19:57 lib/
-rw-r--r-- 1 root root 3483 Aug 16 20:11 workflow.xml
step1. Edit job.properties
nameNode=hdfs://Master:9000
jobTracker=Master:8032
queueName=default
oozieAppsRoot=user/hadoop/oozie-apps
oozieDataRoot=user/hadoop/oozie/datas
# HDFS path of the workflow application
oozie.wf.application.path=${nameNode}/${oozieAppsRoot}/mr-wordcount-wf/workflow.xml
# input and output directories of the MapReduce job (relative to oozieDataRoot)
inputDir=mr-wordcount-wf/input
outputDir=mr-wordcount-wf/output
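With the values above, the parameterized paths used in workflow.xml resolve to:

${nameNode}/${oozieDataRoot}/${inputDir}  -> hdfs://Master:9000/user/hadoop/oozie/datas/mr-wordcount-wf/input
${nameNode}/${oozieDataRoot}/${outputDir} -> hdfs://Master:9000/user/hadoop/oozie/datas/mr-wordcount-wf/output

Keep these in mind when uploading the input data in step4 below.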
step2. Edit workflow.xml
<workflow-app xmlns="uri:oozie:workflow:0.5" name="mr-wordcount-wf">
    <start to="mr-node-wordcount"/>
    <action name="mr-node-wordcount">
        <map-reduce>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <prepare>
                <!-- delete any previous output so reruns do not fail -->
                <delete path="${nameNode}/${oozieDataRoot}/${outputDir}"/>
            </prepare>
            <configuration>
                <!-- use the new MapReduce API (org.apache.hadoop.mapreduce) -->
                <property>
                    <name>mapred.mapper.new-api</name>
                    <value>true</value>
                </property>
                <property>
                    <name>mapred.reducer.new-api</name>
                    <value>true</value>
                </property>
                <property>
                    <name>mapreduce.job.queuename</name>
                    <value>${queueName}</value>
                </property>
                <!-- mapper properties -->
                <property>
                    <name>mapreduce.job.map.class</name>
                    <value>mapreduce.WordCount$TokenizerMapper</value>
                </property>
                <property>
                    <name>mapreduce.map.output.key.class</name>
                    <value>org.apache.hadoop.io.Text</value>
                </property>
                <property>
                    <name>mapreduce.map.output.value.class</name>
                    <value>org.apache.hadoop.io.IntWritable</value>
                </property>
                <property>
                    <name>mapreduce.input.fileinputformat.inputdir</name>
                    <value>${nameNode}/${oozieDataRoot}/${inputDir}</value>
                </property>
                <!-- reducer properties -->
                <property>
                    <name>mapreduce.job.reduce.class</name>
                    <value>mapreduce.WordCount$IntSumReducer</value>
                </property>
                <property>
                    <name>mapreduce.job.output.key.class</name>
                    <value>org.apache.hadoop.io.Text</value>
                </property>
                <property>
                    <name>mapreduce.job.output.value.class</name>
                    <value>org.apache.hadoop.io.IntWritable</value>
                </property>
                <property>
                    <name>mapreduce.output.fileoutputformat.outputdir</name>
                    <value>${nameNode}/${oozieDataRoot}/${outputDir}</value>
                </property>
            </configuration>
        </map-reduce>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Map/Reduce failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
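Before uploading, the workflow definition can be checked against the hPDL schema with the Oozie CLI (on some Oozie versions the -oozie server URL must also be passed):

$ ./bin/oozie validate oozie-apps/mr-wordcount-wf/workflow.xml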
step3. Package the custom wordcount program as a jar and place it in the lib directory
$ sudo cp bigdata-1.0-SNAPSHOT.jar /opt/cloudera/oozie/oozie-apps/mr-wordcount-wf/lib/
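For reference, here is a minimal sketch of what that jar might contain: the standard Hadoop wordcount mapper and reducer, with the package and class names matching the references in workflow.xml above (mapreduce.WordCount$TokenizerMapper and mapreduce.WordCount$IntSumReducer). Your actual implementation may differ.

package mapreduce;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            // Emit (word, 1) for every token in the input line.
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // Sum all the counts emitted for the same word.
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    // No main()/driver class is needed: the <map-reduce> action in
    // workflow.xml declares the classes and the input/output paths,
    // and Oozie submits the job from that configuration.
}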
step4. Upload the entire oozie-apps directory to HDFS
$ ./bin/hdfs dfs -put /opt/cloudera/oozie/oozie-apps .
### Create the HDFS data directory and upload the input data (the destination must match ${oozieDataRoot}/${inputDir} from job.properties)
$ ./bin/hdfs dfs -mkdir -p oozie/datas/mr-wordcount-wf
$ ./bin/hdfs dfs -put -p input oozie/datas/mr-wordcount-wf
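Before submitting, it is worth checking that the HDFS layout matches the paths in job.properties:

$ ./bin/hdfs dfs -ls -R oozie-apps/mr-wordcount-wf
$ ./bin/hdfs dfs -ls -R oozie/datas/mr-wordcount-wf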
step5. Run the Oozie job
$ cd oozie
$ ./bin/oozie job -oozie http://Master:11000/oozie -config oozie-apps/mr-wordcount-wf/job.properties -run
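The run command prints a workflow job ID; progress can then be followed either in the web console at http://Master:11000/oozie or with the CLI (substitute the actual ID for <job-id>):

$ ./bin/oozie job -oozie http://Master:11000/oozie -info <job-id>
$ ./bin/oozie jobs -oozie http://Master:11000/oozie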
Note:
The directory where the command is run (here, the local oozie directory) must also contain a job.properties file at a path matching the HDFS layout, since -config reads the properties file from the local file system. That is:
HDFS: oozie-apps/mr-wordcount-wf/job.properties
Local file system: the same relative path under the current directory (i.e. the oozie directory):
oozie-apps/mr-wordcount-wf/job.properties
This completes the walkthrough of running a MapReduce workflow with Oozie!