I just finished debugging a MapReduce example that computes, for each user (phone number), the total upstream traffic, total downstream traffic, and total traffic. I ran into several problems along the way and solved them one by one, so I'm writing them down while they're fresh.
Step 1: Prepare the input data
I copied some sample data from the web, pasted it into a text file named phones.txt, and uploaded it to HDFS at /a/phones.txt (e.g. with hdfs dfs -put phones.txt /a/phones.txt). I also kept a local copy at D:\hadoopmaterial\phones.txt for debugging. Each record is tab-separated; the number of columns varies from record to record (some lines lack the visited host and category fields), but the last three columns are always upstream bytes, downstream bytes, and HTTP status.
1363157985066 13726230503 00-FD-07-A4-72-B8:CMCC 120.196.100.82 i02.c.aliimg.com 24 27 2481 24681 200
1363157995052 13826544101 5C-0E-8B-C7-F1-E0:CMCC 120.197.40.4 4 0 264 0 200
1363157991076 13926435656 20-10-7A-28-CC-0A:CMCC 120.196.100.99 2 4 132 1512 200
1363154400022 13926251106 5C-0E-8B-8B-B1-50:CMCC 120.197.40.4 4 0 240 0 200
1363157993044 18211575961 94-71-AC-CD-E6-18:CMCC-EASY 120.196.100.99 iface.qiyi.com 视频网站 15 12 1527 2106 200
1363157995074 84138413 5C-0E-8B-8C-E8-20:7DaysInn 120.197.40.4 122.72.52.12 20 16 4116 1432 200
1363157993055 13560439658 C4-17-FE-BA-DE-D9:CMCC 120.196.100.99 18 15 1116 954 200
1363157995033 15920133257 5C-0E-8B-C7-BA-20:CMCC 120.197.40.4 sug.so.360.cn 信息安全 20 20 3156 2936 200
1363157983019 13719199419 68-A1-B7-03-07-B1:CMCC-EASY 120.196.100.82 4 0 240 0 200
1363157984041 13660577991 5C-0E-8B-92-5C-20:CMCC-EASY 120.197.40.4 s19.cnzz.com 站点统计 24 9 6960 690 200
1363157973098 15013685858 5C-0E-8B-C7-F7-90:CMCC 120.197.40.4 rank.ie.sogou.com 搜索引擎 28 27 3659 3538 200
1363157986029 15989002119 E8-99-C4-4E-93-E0:CMCC-EASY 120.196.100.99 www.umeng.com 站点统计 3 3 1938 180 200
1363157992093 13560439658 C4-17-FE-BA-DE-D9:CMCC 120.196.100.99 15 9 918 4938 200
1363157986041 13480253104 5C-0E-8B-C7-FC-80:CMCC-EASY 120.197.40.4 3 3 180 180 200
1363157984040 13602846565 5C-0E-8B-8B-B6-00:CMCC 120.197.40.4 2052.flash2-http.qq.com 综合门户 15 12 1938 2910 200
1363157995093 13922314466 00-FD-07-A2-EC-BA:CMCC 120.196.100.82 img.qfc.cn 12 12 3008 3720 200
1363157982040 13502468823 5C-0A-5B-6A-0B-D4:CMCC-EASY 120.196.100.99 y0.ifengimg.com 综合门户 57 102 7335 110349 200
1363157986072 18320173382 84-25-DB-4F-10-1A:CMCC-EASY 120.196.100.99 input.shouji.sogou.com 搜索引擎 21 18 9531 2412 200
1363157990043 13925057413 00-1F-64-E1-E6-9A:CMCC 120.196.100.55 t3.baidu.com 搜索引擎 69 63 11058 48243 200
1363157988072 13760778710 00-FD-07-A4-7B-08:CMCC 120.196.100.82 2 2 120 120 200
1363157985066 13726238888 00-FD-07-A4-72-B8:CMCC 120.196.100.82 i02.c.aliimg.com 24 27 2481 24681 200
1363157993055 13560436666 C4-17-FE-BA-DE-D9:CMCC 120.196.100.99 18 15 1116 954 200
Step 2: Write the code in Eclipse
// HdfsDao.java — utility class
import org.apache.hadoop.conf.Configuration;

public class HdfsDao {
    public static Configuration config() {
        Configuration conf = new Configuration();
        return conf;
    }
}
A freshly created conf keeps all the defaults (mapreduce.framework.name=local, fs.defaultFS=file:///), which means Hadoop runs the job in-process in local mode against the local filesystem.
// FlowSumMR.java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class FlowSumMR {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Configuration jobConf = HdfsDao.config();
        Job job = Job.getInstance(jobConf, "FlowSumMR");
        job.setJarByClass(FlowSumMR.class);
        job.setMapperClass(FlowSumMRMapper.class);
        job.setReducerClass(FlowSumMRReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.setInputPaths(job, new Path("D:\\hadoopmaterial\\phones.txt"));
        FileOutputFormat.setOutputPath(job, new Path("D:\\hadoopmaterial\\phone_output"));
        boolean isDone = job.waitForCompletion(true);
        System.exit(isDone ? 0 : 1);
    }
}
// FlowSumMRMapper.java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class FlowSumMRMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String[] split = value.toString().split("\t");
        // Field 1 is the phone number. The records have a variable number of
        // columns, but the last three are always upFlow, downFlow and HTTP
        // status, so index the flow fields from the end of the record rather
        // than hard-coding positions 7 and 8 (which only fit 10-column lines).
        String outputKey = split[1];
        String outputValue = split[split.length - 3] + "\t" + split[split.length - 2];
        context.write(new Text(outputKey), new Text(outputValue));
    }
}
// FlowSumMRReducer.java
import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class FlowSumMRReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        // Sum up/down flow over all records of one phone number
        int upFlow = 0;
        int downFlow = 0;
        for (Text value : values) {
            String[] split = value.toString().split("\t");
            upFlow += Integer.parseInt(split[0]);
            downFlow += Integer.parseInt(split[1]);
        }
        int sumFlow = upFlow + downFlow;
        context.write(key, new Text(upFlow + "\t" + downFlow + "\t" + sumFlow));
    }
}
Running the main method of FlowSumMR.java directly completes successfully in local mode.
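As a sanity check, the result can be verified by hand from the input above. For example, 13560439658 appears twice (up 1116 / down 954 and up 918 / down 4938), so its output line should be:

13560439658	2034	5892	7926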
Step 3: Submit the job to the remote MapReduce cluster
Download core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml and log4j.properties from the Hadoop server (or cluster) and put them on the build classpath. I placed them in a hadoop/ directory under the resources folder, which matches the conf.addResource("hadoop/...") paths used below.
Then modify the config method in HdfsDao.java:
public static Configuration config() {
    Configuration conf = new Configuration();
    // Load the cluster-side config files from the classpath
    conf.addResource("hadoop/core-site.xml");
    conf.addResource("hadoop/hdfs-site.xml");
    conf.addResource("hadoop/mapred-site.xml");
    conf.addResource("hadoop/yarn-site.xml");
    // Make the HDFS client reach DataNodes by hostname rather than internal IP
    conf.set("dfs.client.use.datanode.hostname", "true");
    // Submit as the hadoop user
    System.setProperty("HADOOP_USER_NAME", "hadoop");
    // Submit the job to the remote YARN cluster instead of running locally
    conf.set("mapreduce.framework.name", "yarn");
    conf.set("yarn.resourcemanager.hostname", "master");
    return conf;
}
Note: conf.set("mapreduce.framework.name", "yarn") and conf.set("yarn.resourcemanager.hostname", "master") must both be set, otherwise the program simply hangs with no useful error. It took some digging to find that these two settings are what actually make the client submit the job to the remote YARN cluster.
The downloaded mapred-site.xml also needs a few changes. In particular, mapred.remote.os and mapreduce.app-submission.cross-platform are needed when submitting from Windows to a Linux cluster; without them the generated container launch commands use the wrong platform's path and environment-variable syntax and the job fails.
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapred.remote.os</name>
        <value>Linux</value>
    </property>
    <property>
        <name>mapreduce.app-submission.cross-platform</name>
        <value>true</value>
    </property>
    <property>
        <name>mapreduce.application.classpath</name>
        <value>/usr/local/hadoop/etc/hadoop,
            /usr/local/hadoop/share/hadoop/common/*,
            /usr/local/hadoop/share/hadoop/common/lib/*,
            /usr/local/hadoop/share/hadoop/hdfs/*,
            /usr/local/hadoop/share/hadoop/hdfs/lib/*,
            /usr/local/hadoop/share/hadoop/mapreduce/*,
            /usr/local/hadoop/share/hadoop/mapreduce/lib/*,
            /usr/local/hadoop/share/hadoop/yarn/*,
            /usr/local/hadoop/share/hadoop/yarn/lib/*
        </value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <!-- use your actual hostname and port -->
        <value>master:10020</value>
    </property>
</configuration>
And yarn-site.xml:
<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>file:/usr/local/hadoop/tmp/yarn/nm</value>
    </property>
    <property>
        <name>yarn.application.classpath</name>
        <value>/usr/local/hadoop/etc/hadoop,
            /usr/local/hadoop/share/hadoop/common/*,
            /usr/local/hadoop/share/hadoop/common/lib/*,
            /usr/local/hadoop/share/hadoop/hdfs/*,
            /usr/local/hadoop/share/hadoop/hdfs/lib/*,
            /usr/local/hadoop/share/hadoop/mapreduce/*,
            /usr/local/hadoop/share/hadoop/mapreduce/lib/*,
            /usr/local/hadoop/share/hadoop/yarn/*,
            /usr/local/hadoop/share/hadoop/yarn/lib/*
        </value>
    </property>
</configuration>
The FlowSumMR driver also needs updating: the input and output paths are now HDFS paths instead of local Windows paths.
// FlowSumMR.java — remote version (imports unchanged from the local version)
public class FlowSumMR {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Configuration jobConf = HdfsDao.config();
        Job job = Job.getInstance(jobConf, "FlowSumMR");
        job.setJarByClass(FlowSumMR.class);
        job.setMapperClass(FlowSumMRMapper.class);
        job.setReducerClass(FlowSumMRReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.setInputPaths(job, new Path("/a/phones.txt"));
        FileOutputFormat.setOutputPath(job, new Path("/a/flow/output_sum"));
        boolean isDone = job.waitForCompletion(true);
        System.exit(isDone ? 0 : 1);
    }
}
Step 4: Package and deploy
This project is managed with Maven. When packaging, remember to bundle the dependency jars. The pom.xml build configuration:
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <configuration>
                <source>${java.version}</source>
                <target>${java.version}</target>
            </configuration>
        </plugin>
        <plugin>
            <artifactId>maven-assembly-plugin</artifactId>
            <configuration>
                <appendAssemblyId>false</appendAssemblyId>
                <descriptorRefs>
                    <descriptorRef>jar-with-dependencies</descriptorRef>
                </descriptorRefs>
                <archive>
                    <manifest>
                        <!-- the class containing the main method -->
                        <mainClass>com.jiangxl.hadoop.flowcount.FlowSumMR</mainClass>
                    </manifest>
                </archive>
            </configuration>
            <executions>
                <execution>
                    <id>make-assembly</id>
                    <phase>package</phase>
                    <goals>
                        <goal>single</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
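With this configuration, mvn clean package produces a single jar containing all dependencies under target/; because appendAssemblyId is false, it keeps the normal artifact name instead of getting a -jar-with-dependencies suffix.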
Upload the packaged jar to the Hadoop server and run hadoop jar ***.jar; this produces the correct result. One thing to watch when running: the output path set by FileOutputFormat.setOutputPath(job, new Path("/a/flow/output_sum")) must not already exist, otherwise the job fails.
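When rerunning during debugging, one convenient pattern (my own sketch, not part of the original code; the delete is recursive, so use it with care) is to remove the output directory programmatically before submitting:

// in main(), before FileOutputFormat.setOutputPath(...); requires org.apache.hadoop.fs.FileSystem
Path output = new Path("/a/flow/output_sum");
FileSystem fs = FileSystem.get(jobConf);
if (fs.exists(output)) {
    fs.delete(output, true); // true = recursive
}
FileOutputFormat.setOutputPath(job, output);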
Step 5: Local debugging against the remote YARN
With the code above, running the main method locally throws ClassNotFoundException, because the mapper and reducer classes are never shipped to the cluster. The fix is to add the following in FlowSumMR: jobConf.set("mapred.jar", "D:\\workspace-test\\WordCount\\target\\WordCount-0.0.1-SNAPSHOT.jar");
Note that the path points at the jar built locally on Windows, not at any directory on the Linux cluster. (This is only for local debugging; don't set it for the real deployment.)
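One placement detail: Job.getInstance takes a snapshot of the Configuration, so the mapred.jar setting must happen before the Job is created (alternatively, call job.setJar(...) on the Job itself). A sketch:

Configuration jobConf = HdfsDao.config();
// set before Job.getInstance(), which copies the conf
jobConf.set("mapred.jar", "D:\\workspace-test\\WordCount\\target\\WordCount-0.0.1-SNAPSHOT.jar");
Job job = Job.getInstance(jobConf, "FlowSumMR");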
Summary
Note that every address used in this article is a hostname, not a raw IP; the NameNode and DataNodes also talk to each other by hostname. The line conf.set("dfs.client.use.datanode.hostname", "true") makes the client connect to DataNodes by hostname as well; without it, the client tries the IPs returned by the NameNode and HDFS reads fail.
The hostname-to-IP mappings must be configured on both Windows and Linux, in each system's hosts file; on the Linux machines, the hostname itself must also be changed to the designated name.
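For example, assuming the master node's IP is 192.168.1.100 (a made-up address; substitute your own), both C:\Windows\System32\drivers\etc\hosts on Windows and /etc/hosts on Linux need a line like:

192.168.1.100 master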
Reference: https://blog.csdn.net/qq_19648191/article/details/56684268