hadoop (4): Development Environment and Word Count

Author: cnliu | Published 2018-07-31 23:24

Setting up the development environment

IDE: IDEA
Build tool: Maven

Add the following dependencies:

    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-core</artifactId>
        <version>0.20.2</version>
    </dependency>

    <dependency>
        <groupId>commons-logging</groupId>
        <artifactId>commons-logging</artifactId>
        <version>1.0.3</version>
    </dependency>

    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.11</version>
        <scope>test</scope>
    </dependency>

    <dependency>
        <groupId>org.hamcrest</groupId>
        <artifactId>hamcrest-library</artifactId>
        <version>1.3</version>
        <scope>test</scope>
    </dependency>

Include the cluster's core configuration files in the project (they must be identical to the files on the server):

[image]
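For orientation, a minimal core-site.xml might look like the sketch below; the namenode host and port here are assumptions, so copy the real values from your own cluster rather than these:

    <?xml version="1.0"?>
    <!-- Minimal sketch; fs.default.name must point at the cluster's namenode. -->
    <!-- "master:9000" is an assumed host:port, not taken from this post's cluster. -->
    <configuration>
        <property>
            <name>fs.default.name</name>
            <value>hdfs://master:9000</value>
        </property>
    </configuration>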

Example code:

    package org.cnliu.myhadoop.ex;

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.*;

    import java.io.IOException;
    import java.util.Iterator;
    import java.util.StringTokenizer;

    public class WordCount {

        public static class WordCountMapper extends MapReduceBase implements Mapper<Object, Text, Text, IntWritable> {

            private final static IntWritable one = new IntWritable(1);
            private final Text word = new Text();

            public void map(Object key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
                // Split the line into whitespace-separated tokens and emit (word, 1) for each.
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    output.collect(word, one);
                }
            }
        }

        public static class WordCountReducer extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {

            private final IntWritable result = new IntWritable();

            public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
                // Sum all counts for this word and emit (word, total).
                int sum = 0;
                while (values.hasNext()) {
                    sum += values.next().get();
                }
                result.set(sum);
                output.collect(key, result);
            }
        }

        public static void main(String[] args) throws Exception {

            // Input and output directories in HDFS.
            // Note: the job produces no result if the input directory is missing,
            // and it fails if the output directory already exists.
            String input = "/user/liuzd/in";
            String output = "/user/liuzd/o_t_account/result";

            JobConf conf = new JobConf(WordCount.class);
            conf.setJobName("WordCount");

            // Load the cluster configuration files from the classpath
            // (plain resource names; a "classpath:/" prefix is not resolved here).
            conf.addResource("core-site.xml");
            conf.addResource("hdfs-site.xml");
            conf.addResource("mapred-site.xml");

            conf.setOutputKeyClass(Text.class);
            conf.setOutputValueClass(IntWritable.class);

            // Set the custom map and reduce classes; the reducer also serves as the
            // combiner, which is safe because summing counts is associative.
            conf.setMapperClass(WordCountMapper.class);
            conf.setCombinerClass(WordCountReducer.class);
            conf.setReducerClass(WordCountReducer.class);

            conf.setInputFormat(TextInputFormat.class);
            conf.setOutputFormat(TextOutputFormat.class);

            FileInputFormat.setInputPaths(conf, new Path(input));
            FileOutputFormat.setOutputPath(conf, new Path(output));

            JobClient.runJob(conf);
            System.exit(0);
        }
    }
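With the jar built, a typical end-to-end run looks like the following; the jar and input file names are examples, while the HDFS paths are the ones hard-coded in main above:

    # Upload input text, submit the job, then inspect the result
    hadoop fs -mkdir /user/liuzd/in
    hadoop fs -put words.txt /user/liuzd/in
    hadoop jar myHadoop.jar org.cnliu.myhadoop.ex.WordCount
    hadoop fs -cat /user/liuzd/o_t_account/result/part-00000

The old mapred API writes reducer output as part-00000, part-00001, and so on, one file per reduce task.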
    

Execution result:

[image]

Original data:

[image]

Querying the computed result:

[image]

Appendix: problems encountered and solutions

The Hadoop cluster starts successfully, but the live node count is 0

In my case this was a network problem. The hosts file contained:

    127.0.0.1 master
    192.168.56.101 master

so Hadoop resolved the alias master to 127.0.0.1. Because this is a distributed deployment and the master runs on a separate host, deleting the 127.0.0.1 entry fixed it.
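To confirm that the datanodes actually register after such a fix, the standard HDFS admin report is a quick check (run it on the master):

    hadoop dfsadmin -report    # lists each live datanode and its capacity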
    

When starting Hadoop, the DataNode starts and then disappears after a few moments

Cause: DFS was formatted, Hadoop was started and used, and then the format command (hdfs namenode -format) was run again. Reformatting regenerates the namenode's clusterID, while the datanodes keep their old clusterID, so they can no longer register.
Solution: redeploy the Hadoop environment across the cluster.
See https://www.cnblogs.com/sasan/p/5740367.html
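If redeploying the whole environment is impractical, a commonly used lighter alternative (not what this post did, and the paths below are assumptions; check dfs.name.dir / dfs.data.dir in hdfs-site.xml) is to align the IDs by hand:

    # On the namenode, note the clusterID (replace <dfs.name.dir> with your configured path):
    cat <dfs.name.dir>/current/VERSION
    # On each datanode, set clusterID in this file to the same value, then restart the datanode:
    vi <dfs.data.dir>/current/VERSION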
    

Submitting a MapReduce job from the IDE to the remote Linux cluster fails with ClassNotFoundException: Mapper

Attempts:

(1) Package the job into a jar manually and upload it to Linux (runs successfully)

(2) Package the jar manually and run it on Windows with "java -jar ..." (runs successfully)

(3) Submit the job directly from IDEA (fails with the exception above)

Solution:

Configure the jar path in core-site.xml:

    <property>
        <name>mapred.jar</name>
        <value>E:\java\myHadoop\out\artifacts\myHadoop_jar\myHadoop.jar</value>
    </property>

See https://blog.csdn.net/qq_19648191/article/details/56684268
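The same setting can also be made in code rather than in the config file; JobConf exposes it directly (the path is the same example path as in the snippet above):

    // Equivalent to the mapred.jar property: tell the framework which jar holds the job classes
    conf.setJar("E:\\java\\myHadoop\\out\\artifacts\\myHadoop_jar\\myHadoop.jar");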

The job fails with:

    INFO mapred.JobClient: Task Id : attempt_201210161256_0009_r_000000_0, Status : FAILED
    Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
    WARN mapred.JobClient: Error reading task output: No route to host

Solution: the machine's hostname must match its alias in the hosts file.
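A quick way to verify is to compare the output of hostname with the entry in /etc/hosts; both must agree, and the hosts entry must carry the machine's real IP rather than 127.0.0.1:

    hostname          # the machine's own hostname
    cat /etc/hosts    # must map that name to the machine's real IP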
