Serialization converts an in-memory object into a byte sequence so that it can be stored on disk (persisted) or transmitted over a network.
Deserialization converts a byte sequence, or persisted data, back into an in-memory object.
An in-memory object can only be used by the local process: it disappears once the process ends, and it cannot be sent as-is to another machine on the network. Serialization makes it possible to ship in-memory objects to remote machines. Java's built-in serialization framework (Serializable) is heavyweight: it attaches a lot of extra metadata to every serialized object, which makes it inefficient for network transport, so Hadoop developed its own serialization mechanism (Writable).
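To see the overhead concretely, here is a minimal sketch comparing the bytes produced by the two mechanisms for a single long value (the SizeComparison class exists only for this illustration):
import org.apache.hadoop.io.LongWritable;
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
public class SizeComparison {
    public static void main(String[] args) throws IOException {
        // Java built-in serialization: stream header and class descriptor, then the value
        ByteArrayOutputStream javaBytes = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(javaBytes)) {
            oos.writeObject(Long.valueOf(12345L));
        }
        // Hadoop Writable: only the 8 payload bytes
        ByteArrayOutputStream writableBytes = new ByteArrayOutputStream();
        try (DataOutputStream dos = new DataOutputStream(writableBytes)) {
            new LongWritable(12345L).write(dos);
        }
        System.out.println("Java Serializable: " + javaBytes.size() + " bytes"); // on the order of 80 bytes
        System.out.println("Hadoop Writable:   " + writableBytes.size() + " bytes"); // 8 bytes
    }
}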
Implementing serialization for a custom bean object
The steps are as follows:
- The class must implement the Writable interface;
- During deserialization, the framework instantiates the bean via reflection, so a no-argument constructor is required;
public FlowBean() {
    super();
}
- Override the serialization method;
@Override
public void write(DataOutput out) throws IOException {
    out.writeLong(upFlow);
    out.writeLong(downFlow);
    out.writeLong(sumFlow);
}
- Override the deserialization method;
@Override
public void readFields(DataInput in) throws IOException {
    upFlow = in.readLong();
    downFlow = in.readLong();
    sumFlow = in.readLong();
}
Note: fields must be deserialized in exactly the same order they were serialized.
- To make the results readable in the output file, override the toString() method; fields can be separated with "\t";
- If the custom bean is to be transmitted as a key, it must also implement the Comparable interface, because the Shuffle phase of the MapReduce framework requires that keys be sortable. For example, to sort in descending order of total flow:
@Override
public int compareTo(FlowBean o) {
    // Sort in descending order of total flow; Long.compare also handles equal keys correctly
    return Long.compare(o.getSumFlow(), this.sumFlow);
}
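In Hadoop, a sortable key type is conventionally declared with the WritableComparable interface, which simply combines Writable and Comparable; the bean above could equivalently be written as:
import org.apache.hadoop.io.WritableComparable;
// Equivalent declaration using Hadoop's combined interface
public class FlowBean implements WritableComparable<FlowBean> {
    // write(), readFields(), and compareTo() exactly as shown above
}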
Custom serialization example
Count the upstream traffic, downstream traffic, and total traffic for each phone number in a txt file. Sample data is shown below; the second- and third-to-last columns are the downstream and upstream traffic, respectively.
0 13152567890 www.baidu.com 90 100 200
1 16592992187 www.google.com 100 2000 200
2 15716605853 www.vx.com 2000 2043 200
3 16592992187 www.baidu.com 204 222 200
4 13152567890 www.python.org 20 40 500
- The custom bean, implemented according to the requirements above.
package Flowsum;
import org.apache.hadoop.io.Writable;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

public class FlowBean implements Writable {

    private long upFlow;
    private long downFlow;
    private long sumFlow;

    // No-argument constructor, required so the framework can instantiate the bean via reflection
    public FlowBean() {
        super();
    }

    // Convenience constructor
    public FlowBean(long upFlow, long downFlow) {
        super();
        this.upFlow = upFlow;
        this.downFlow = downFlow;
        sumFlow = upFlow + downFlow;
    }

    // Serialization method
    @Override
    public void write(DataOutput dataOutput) throws IOException {
        dataOutput.writeLong(upFlow);
        dataOutput.writeLong(downFlow);
        dataOutput.writeLong(sumFlow);
    }

    // Deserialization method
    @Override
    public void readFields(DataInput dataInput) throws IOException {
        // Fields must be read in the same order they were written
        upFlow = dataInput.readLong();
        downFlow = dataInput.readLong();
        sumFlow = dataInput.readLong();
    }

    @Override
    public String toString() {
        return upFlow + "\t" + downFlow + "\t" + sumFlow;
    }

    public long getUpFlow() {
        return upFlow;
    }

    public void setUpFlow(long upFlow) {
        this.upFlow = upFlow;
    }

    public long getDownFlow() {
        return downFlow;
    }

    public void setDownFlow(long downFlow) {
        this.downFlow = downFlow;
    }

    public long getSumFlow() {
        return sumFlow;
    }

    public void setSumFlow(long sumFlow) {
        this.sumFlow = sumFlow;
    }

    // Set both flows and recompute the total, so reused instances stay consistent
    public void set(long upFlow, long downFlow) {
        this.upFlow = upFlow;
        this.downFlow = downFlow;
        this.sumFlow = upFlow + downFlow;
    }
}
Note:
1) The no-argument constructor is mandatory;
2) The deserialization order must exactly match the serialization order;
3) Every field should have getter and setter methods.
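As a quick sanity check, the write/readFields pair can be exercised outside of MapReduce by round-tripping a bean through an in-memory buffer. This is a minimal sketch (the RoundTripCheck class is hypothetical and not part of the job):
package Flowsum;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
public class RoundTripCheck {
    public static void main(String[] args) throws IOException {
        // Serialize a bean into an in-memory buffer
        FlowBean original = new FlowBean(100, 200);
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        original.write(new DataOutputStream(buffer));
        // Deserialize into a fresh instance and print it
        FlowBean copy = new FlowBean();
        copy.readFields(new DataInputStream(new ByteArrayInputStream(buffer.toByteArray())));
        System.out.println(copy); // 100	200	300
        System.out.println(buffer.size() + " bytes"); // 3 longs x 8 bytes = 24 bytes
    }
}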
- Mapper
package Flowsum;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import java.io.IOException;

public class FlowCountMapper extends Mapper<LongWritable, Text, Text, FlowBean> {

    Text k = new Text();
    FlowBean v = new FlowBean();

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // 1. Get one line
        String line = value.toString();
        // 2. Split into fields (the input is tab-delimited)
        String[] fields = line.split("\t");
        // 3. Populate the output key and value; set() also recomputes sumFlow on the reused bean
        k.set(fields[1]);
        long upFlow = Long.parseLong(fields[fields.length - 3]);
        long downFlow = Long.parseLong(fields[fields.length - 2]);
        v.set(upFlow, downFlow);
        // 4. Write out
        context.write(k, v);
    }
}
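Note that k and v are allocated once and reused across map() calls. This is safe because context.write() serializes the current contents of both objects immediately, and it avoids creating two fresh objects for every input record.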
- Reducer
package Flowsum;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import java.io.IOException;

public class FlowCountReducer extends Reducer<Text, FlowBean, Text, FlowBean> {

    FlowBean v = new FlowBean();

    @Override
    protected void reduce(Text key, Iterable<FlowBean> values, Context context) throws IOException, InterruptedException {
        // 1. Accumulate the totals
        long sum_upFlow = 0;
        long sum_downFlow = 0;
        for (FlowBean flowBean : values) {
            sum_upFlow += flowBean.getUpFlow();
            sum_downFlow += flowBean.getDownFlow();
        }
        v.set(sum_upFlow, sum_downFlow);
        // 2. Write out
        context.write(key, v);
    }
}
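One detail worth knowing: Hadoop deserializes every record in values into a single reused FlowBean instance, so the loop must accumulate the primitive fields as it iterates; collecting the beans themselves would just yield many references to the same object.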
- Driver
package Flowsum;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import java.io.IOException;

public class FlowCountDriver {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        // 1. Get the Job object
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);
        // 2. Set the jar path
        job.setJarByClass(FlowCountDriver.class);
        // 3. Wire up the Mapper and Reducer
        job.setMapperClass(FlowCountMapper.class);
        job.setReducerClass(FlowCountReducer.class);
        // 4. Set the Mapper output key and value types
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(FlowBean.class);
        // 5. Set the final output key and value types
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(FlowBean.class);
        // 6. Set the input and output paths
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // 7. Submit the job and exit with its status
        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }
}
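Assuming the input file is tab-delimited and the job is packaged into a jar (the jar name and HDFS paths below are placeholders), it can be run with:
hadoop jar flowsum.jar Flowsum.FlowCountDriver /input/flow.txt /output/flowsum
For the sample data above, the output file contains one line per phone number, sorted by key:
13152567890	110	140	250
15716605853	2000	2043	4043
16592992187	304	2222	2526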