Write a Little MapReduce Program for Fun?

Author: Vector_Wan | Posted 2019-11-20 21:17

    I recently finished setting up my Hadoop environment, so let's try it out right away with a small program (I'll write up how I set up the environment another day).
    The idea is simple: count how many times each distinct word appears. (I've started using Maven, and it really is a great tool: a few lines of configuration and all the jars are set up for you.)
    I'm using the free Community edition of IntelliJ IDEA, which supports Maven; better to avoid pirated software whenever possible. Either IDEA or Eclipse works: just create a new Maven project.

    Next, configure pom.xml. Search http://mvnrepository.com/ for hadoop-common, hadoop-client, and hadoop-mapreduce-client-jobclient, pick the entries matching your Hadoop version (mine is 2.7.3), and add them to the file.

    Then add to pom.xml:

        <dependencies>
            <!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-common -->
            <dependency>
                <groupId>org.apache.hadoop</groupId>
                <artifactId>hadoop-common</artifactId>
                <version>2.7.3</version>
            </dependency>
            <!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-client -->
            <dependency>
                <groupId>org.apache.hadoop</groupId>
                <artifactId>hadoop-client</artifactId>
                <version>2.7.3</version>
            </dependency>
            <!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-mapreduce-client-jobclient -->
            <dependency>
                <groupId>org.apache.hadoop</groupId>
                <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
                <version>2.7.3</version>
                <scope>provided</scope>
            </dependency>
    
        </dependencies>
    

    Once the dependencies have finished downloading, you can happily start writing code. Because the related dependencies are imported automatically, many similarly named classes end up on the classpath, so be careful not to import the wrong package.
    First, the Map part:

    package test;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    import java.io.IOException;

    // Input key: the byte offset of the line; input value: the line itself.
    // For every word on the line, emit one (word, 1) pair.
    public class MyMap1 extends Mapper<LongWritable, Text, Text, IntWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            String line = value.toString();
            String[] words = line.split(" ");
            for (String word : words) {
                context.write(new Text(word), new IntWritable(1));
            }
        }
    }
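To see what the mapper emits, here is a plain-Java sketch of the same split-and-emit logic, without any Hadoop types (the class name `MapperSketch` and the sample line are mine, for illustration only):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;
import java.util.Map.Entry;

public class MapperSketch {
    // Mirrors MyMap1.map(): split the line on spaces, emit (word, 1) per token.
    static List<Entry<String, Integer>> map(String line) {
        List<Entry<String, Integer>> out = new ArrayList<>();
        for (String word : line.split(" ")) {
            out.add(new SimpleEntry<>(word, 1));
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(map("hello world hello"));
        // [hello=1, world=1, hello=1]
    }
}
```

Note that the same word can appear many times in the mapper output; the pairs are not combined yet.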
    

    Then the Reduce part:

    package test;

    import java.io.IOException;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    // For each word, the framework hands us all the 1s emitted by the mappers;
    // summing them gives that word's total count.
    public class MyReduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text text, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int count = 0;
            for (IntWritable value : values) {
                count += value.get();
            }
            context.write(text, new IntWritable(count));
        }
    }
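Between the two phases, the framework's shuffle groups all (word, 1) pairs by key, so the reducer only has to sum. A plain-Java sketch of that grouping-then-summing step (the class name `ReduceSketch` and the sample data are mine):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class ReduceSketch {
    // Mirrors MyReduce.reduce(): sum all the 1s grouped under the same word.
    static int reduce(List<Integer> values) {
        int count = 0;
        for (int v : values) {
            count += v;
        }
        return count;
    }

    public static void main(String[] args) {
        // The shuffle phase delivers, per key, something like this
        // (keys arrive in sorted order, which TreeMap imitates here):
        Map<String, List<Integer>> shuffled = new TreeMap<>();
        shuffled.put("hello", Arrays.asList(1, 1));
        shuffled.put("world", Arrays.asList(1));
        shuffled.forEach((word, ones) ->
                System.out.println(word + "\t" + reduce(ones)));
        // hello	2
        // world	1
    }
}
```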
    

    Next comes the driver class:

    package test;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    public class MyJob extends Configured implements Tool {
        public static void main(String[] args) {
            try {
                ToolRunner.run(new MyJob(), args);
                System.out.println("Job finished!");
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        @Override
        public int run(String[] args) throws Exception {
            Configuration configuration = new Configuration();
            // Point the job at the HDFS NameNode.
            configuration.set("fs.defaultFS", "hdfs://192.168.80.131:9000");
            Job job = Job.getInstance(configuration);
            job.setJarByClass(MyJob.class);
            job.setMapperClass(MyMap1.class);
            job.setReducerClass(MyReduce.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path("/abc/MapReduceTest1.txt"));
            // The output directory must not already exist, or the job will fail.
            FileOutputFormat.setOutputPath(job, new Path("/abc/out1"));
            return job.waitForCompletion(true) ? 0 : 1;
        }
    }
    

    Finally, run the job. A new out1 directory appears, and the results are inside.
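For reference, the results land in a file named like part-r-00000 inside the output directory, with one tab-separated "word, count" line per key; that is Hadoop's default TextOutputFormat behavior for (Text, IntWritable) pairs. A small sketch of that line format (the class name `OutputSketch` and the sample words are mine):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class OutputSketch {
    // Reproduces TextOutputFormat's default "key<TAB>value" line layout.
    static String format(Map<String, Integer> counts) {
        StringBuilder sb = new StringBuilder();
        counts.forEach((k, v) -> sb.append(k).append('\t').append(v).append('\n'));
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        counts.put("hadoop", 3);
        counts.put("mapreduce", 1);
        System.out.print(format(counts));
        // hadoop	3
        // mapreduce	1
    }
}
```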


Original link: https://www.haomeiwen.com/subject/ineaictx.html