HDFS is being displaced by object storage, led by AWS S3; MapReduce has largely been replaced by Spark, which itself depends less and less on Hadoop; and YARN is giving way to technologies like Kubernetes. Even so, the fundamentals are still worth knowing. In this post we use a simple MapReduce word count job, run locally, to get familiar with how MapReduce is actually used.
Word count, illustrated
![](https://img.haomeiwen.com/i13837765/7b648f2892714fc6.jpg)
We will write the MapReduce word count code following the flow chart above. The previous post already covered the MapReduce execution flow, so it is not repeated here; let's go straight to the code.
Environment setup
Windows environment
If you are on Windows, first download the winutils Hadoop helper binaries from GitHub: https://github.com/cdarlint/winutils.
Pick the directory matching your Hadoop version and point your environment variables at it; for detailed steps see https://blog.csdn.net/u013305864/article/details/97191344
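As an alternative to a system-wide HADOOP_HOME variable, Hadoop on Windows also honors the hadoop.home.dir system property, so you can set it before the job starts. A minimal sketch, assuming the winutils files were unpacked to D:\hadoop-3.2.2 (adjust the path for your machine), placed as the first line of the driver's main():

```java
// Windows only: point Hadoop at the directory that contains bin\winutils.exe.
// "D:\\hadoop-3.2.2" is an assumed location; use wherever you unpacked the files.
// Must run before any Hadoop class initializes, e.g. as the first line of main().
System.setProperty("hadoop.home.dir", "D:\\hadoop-3.2.2");
```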
Project setup
First, create a Maven project in IDEA; the project layout is shown below:
![](https://img.haomeiwen.com/i13837765/687ce94bcfb93f2d.jpg)
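The code that follows assumes a standard Maven layout: the three classes below live under src/main/java/com/bigdata/mr/wc/, and the local driver reads from an input directory (and writes to an output directory) at the project root.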
pom.xml
```xml
<!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-client -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>3.2.2</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.slf4j/slf4j-log4j12 -->
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.31</version>
    <!-- <scope>test</scope> -->
</dependency>
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.11</version>
    <scope>test</scope>
</dependency>
```
WordCountMapper
```java
package com.bigdata.mr.wc;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

/**
 * description:
 * KEYIN:    key type read by the Map task: the byte offset of the start of each line (Long)
 * VALUEIN:  value type read by the Map task: one line of text at a time, e.g. "Deer Bear River" (String)
 * KEYOUT:   key type emitted by our map() implementation (String)
 * VALUEOUT: value type emitted by our map() implementation (Integer)
 * Word count: one record per occurrence of a word, e.g. (Deer, 1)
 *
 * Long, String, String, Integer are plain Java types;
 * Hadoop uses its own serializable/deserializable counterparts:
 * LongWritable, Text, IntWritable
 *
 * date: 2021/10/5 12:24 <br>
 * author: Neal <br>
 * version: 1.0 <br>
 */
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // Split the line on the delimiter (tab here; it must match how the input file is delimited)
        String[] words = value.toString().split("\t");
        // Emit (word, 1) for every word
        for (String word : words) {
            context.write(new Text(word), new IntWritable(1));
        }
    }
}
```
WordCountReducer
```java
package com.bigdata.mr.wc;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;
import java.util.Iterator;

/**
 * description:
 * (Deer,1) (Bear,1) (River,1)
 *
 * Map output is shuffled to the reduce side: all records with the same key
 * are sent to the same reducer (see the diagram above).
 *
 * reduce1: (Car,1)(Car,1)(Car,1)  ==> (Car,   <1,1,1>)
 * reduce2: (Deer,1)(Deer,1)       ==> (Deer,  <1,1>)
 * reduce3: (River,1)(River,1)     ==> (River, <1,1>)
 * reduce4: (Bear,1)(Bear,1)       ==> (Bear,  <1,1>)
 *
 * Both Reducer and Mapper follow the template method pattern.
 * date: 2021/10/5 12:28 <br>
 * author: Neal <br>
 * version: 1.0 <br>
 */
public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        // Sum all the 1s grouped under this word
        int count = 0;
        Iterator<IntWritable> it = values.iterator();
        while (it.hasNext()) {
            IntWritable value = it.next();
            count += value.get();
        }
        // key is already a Text, so it can be written back directly
        context.write(key, new IntWritable(count));
    }
}
```
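If the shuffle step described in the comments above feels abstract, the following plain-Java sketch (not part of the MapReduce job; the sample lines are made up and tab-separated to match the mapper) shows what "group the 1s by word, then sum them" amounts to:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Plain-Java illustration of the shuffle + reduce steps; not part of the MapReduce job.
public class ShuffleSketch {
    public static void main(String[] args) {
        // Hypothetical sample lines, tab-separated to match the mapper's split("\t")
        String[] lines = {"Deer\tBear\tRiver", "Car\tCar\tRiver", "Deer\tCar\tBear"};

        // "Map" phase: emit (word, 1); the "shuffle" groups the 1s by word
        Map<String, List<Integer>> grouped = new LinkedHashMap<>();
        for (String line : lines) {
            for (String word : line.split("\t")) {
                grouped.computeIfAbsent(word, w -> new ArrayList<>()).add(1);
            }
        }

        // "Reduce" phase: sum the grouped values, exactly what WordCountReducer does
        grouped.forEach((word, ones) ->
                System.out.println(word + "\t" + ones.stream().mapToInt(Integer::intValue).sum()));
    }
}
```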
Local run: WordCountDriver
```java
package com.bigdata.mr.wc;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

/**
 * description: WordCountDriver, local run <br>
 * date: 2021/10/5 12:31 <br>
 * author: Neal <br>
 * version: 1.0 <br>
 */
public class WordCountDriver {

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        // Use the default configuration
        Configuration configuration = new Configuration();
        // Create a Job
        Job job = Job.getInstance(configuration);
        // Job parameter: the main class
        job.setJarByClass(WordCountDriver.class);
        // Job parameters: our custom Mapper and Reducer
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        // Job parameters: key/value types of the Mapper output
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        // Job parameters: key/value types of the Reducer output
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // Job parameters: input and output paths (relative to the working directory for a local run)
        FileInputFormat.setInputPaths(job, new Path("input"));
        FileOutputFormat.setOutputPath(job, new Path("output"));
        // Submit the job and wait for it to finish
        boolean flag = job.waitForCompletion(true);
        // Exit with 0 on success
        System.exit(flag ? 0 : -1);
    }
}
```
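Before running the driver, create the input directory at the project root and put at least one text file in it, with the words tab-separated so they match the mapper's split("\t"). A small throwaway helper that writes such a file (the file name words.txt is arbitrary):

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;

// One-off helper to create a sample input file for the local run.
// "words.txt" is an arbitrary name; only the "input" directory matters to the driver.
public class CreateSampleInput {
    public static void main(String[] args) throws Exception {
        Files.createDirectories(Paths.get("input"));
        Files.write(Paths.get("input", "words.txt"), Arrays.asList(
                "Deer\tBear\tRiver",
                "Car\tCar\tRiver",
                "Deer\tCar\tBear"));
    }
}
```

Also note that MapReduce refuses to start if the output directory already exists, so delete output between local runs (the cluster version below handles this programmatically).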
Local run result
![](https://img.haomeiwen.com/i13837765/137c81e5f8b0f47c.jpg)
As you can see, the computed counts match the diagram above. To run the job on a Hadoop cluster instead, only the driver needs to change: point it at HDFS and adjust the input and output paths accordingly.
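With the default single reducer, the counts land in output/part-r-00000, one word and its count per line separated by a tab (the default TextOutputFormat), alongside an empty _SUCCESS marker file.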
Cluster run: WordCountApp (packaged and executed on a Hadoop cluster)
```java
package com.bigdata.mr.wc;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.net.URI;

/**
 * description: WordCountApp, runs against a Hadoop cluster <br>
 * date: 2021/10/5 12:31 <br>
 * author: Neal <br>
 * version: 1.0 <br>
 */
public class WordCountApp {

    public static void main(String[] args) throws Exception {
        // Set the Hadoop user the job runs as
        System.setProperty("HADOOP_USER_NAME", "hadoop");

        Configuration configuration = new Configuration();
        // Replace ip:port with your NameNode address
        configuration.set("fs.defaultFS", "hdfs://ip:port");
        // Create a Job
        Job job = Job.getInstance(configuration);
        // Job parameter: the main class
        job.setJarByClass(WordCountApp.class);
        // Job parameters: our custom Mapper and Reducer
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        // Job parameters: key/value types of the Mapper output
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        // Job parameters: key/value types of the Reducer output
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // If the output directory already exists, delete it first
        FileSystem fileSystem = FileSystem.get(new URI("hdfs://ip:port"), configuration, "hadoop");
        Path outputPath = new Path("/wordcount/output");
        if (fileSystem.exists(outputPath)) {
            fileSystem.delete(outputPath, true);
        }
        // Job parameters: input and output paths on HDFS
        FileInputFormat.setInputPaths(job, new Path("/wordcount/input"));
        FileOutputFormat.setOutputPath(job, outputPath);
        // Submit the job and wait for it to finish
        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : -1);
    }
}
```
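After packaging the project with mvn clean package, the job can be submitted from any machine with a Hadoop client installed using the standard hadoop jar command, e.g. `hadoop jar <your-artifact>.jar com.bigdata.mr.wc.WordCountApp`; the jar name depends on your artifactId, so it is left as a placeholder here.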
Summary
Although the Hadoop stack as a whole is being superseded, understanding its execution flow and basic programming model is still a necessary step on the way into big data.