Converting Data Between HBase and HDFS

Author: 清风_d587 | Published 2018-08-31 10:06

MapReduce Implementation Series:

Part 1 --- Converting data between HBase and HDFS

Part 2 --- Sorting

Part 3 --- Top N

Part 4 --- Mini project (read data from HBase, aggregate it, and output the Top 3 to HDFS in descending order)

Part 5 --- Deduplication (Distinct) and counting (Count)

Part 6 --- Maximum (Max), sum (Sum), and average (Avg)

Part 7 --- Mini project (computing an average with multiple jobs chained in series)

Part 8 --- Partitioning (Partition)

Part 9 --- PV and UV

Part 10 --- Inverted Index (Inverted Index)

Part 11 --- Join

1. Read data from HBase table 1 and store the aggregated results in table 2

First create table 1 in HBase and load some sample rows:

create 'hello','cf'
put 'hello','1','cf:hui','hello world'
put 'hello','2','cf:hui','hello hadoop'
put 'hello','3','cf:hui','hello hive'
put 'hello','4','cf:hui','hello hadoop'
put 'hello','5','cf:hui','hello world'
put 'hello','6','cf:hui','hello world'
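For completeness, the same six rows can also be inserted from Java instead of the shell. Below is a minimal sketch using the same old-style HBase client API as the job that follows; the class name PopulateHello is illustrative, and it assumes hbase-site.xml is on the classpath:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class PopulateHello {
    public static void main(String[] args) throws IOException {
        Configuration cfg = HBaseConfiguration.create();
        HTable table = new HTable(cfg, "hello"); // old-style client, matching the job below
        String[] lines = {"hello world", "hello hadoop", "hello hive",
                          "hello hadoop", "hello world", "hello world"};
        for (int i = 0; i < lines.length; i++) {
            Put put = new Put(Bytes.toBytes(String.valueOf(i + 1))); // row keys "1".."6"
            put.add(Bytes.toBytes("cf"), Bytes.toBytes("hui"), Bytes.toBytes(lines[i]));
            table.put(put);
        }
        table.close();
    }
}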

Java code (the mapper emits each cell value with a count of 1; the reducer sums the counts and writes them into mytb2):

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

public class HBaseToHbase {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        String hbaseTableName1 = "hello";
        String hbaseTableName2 = "mytb2";

        // (Re)create the output table before the job runs.
        prepareTB2(hbaseTableName2);

        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);
        job.setJarByClass(HBaseToHbase.class);
        job.setJobName("mrreadwritehbase");

        Scan scan = new Scan();
        scan.setCaching(500);       // rows fetched per scanner RPC; the default of 1 is far too slow for MR
        scan.setCacheBlocks(false); // don't fill the region server block cache during a full-table scan

        TableMapReduceUtil.initTableMapperJob(hbaseTableName1, scan, doMapper.class, Text.class, IntWritable.class, job);
        TableMapReduceUtil.initTableReducerJob(hbaseTableName2, doReducer.class, job);
        System.exit(job.waitForCompletion(true) ? 0 : 1); // exit 0 on success
    }

    public static class doMapper extends TableMapper<Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);

        @Override
        protected void map(ImmutableBytesWritable key, Result value, Context context) throws IOException, InterruptedException {
            // Each row holds a single cell (cf:hui); emit its value with a count of 1.
            String rowValue = Bytes.toString(value.list().get(0).getValue());
            context.write(new Text(rowValue), one);
        }
    }

    public static class doReducer extends TableReducer<Text, IntWritable, NullWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            System.out.println(key.toString());
            int sum = 0;
            Iterator<IntWritable> haha = values.iterator();
            while (haha.hasNext()) {
                sum += haha.next().get();
            }
            // Use the phrase itself as the row key and store the count as a string.
            Put put = new Put(Bytes.toBytes(key.toString()));
            put.add(Bytes.toBytes("mycolumnfamily"), Bytes.toBytes("count"), Bytes.toBytes(String.valueOf(sum)));
            context.write(NullWritable.get(), put);
        }
    }

    public static void prepareTB2(String hbaseTableName) throws IOException {
        HTableDescriptor tableDesc = new HTableDescriptor(hbaseTableName);
        HColumnDescriptor columnDesc = new HColumnDescriptor("mycolumnfamily");
        tableDesc.addFamily(columnDesc);
        Configuration cfg = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(cfg);
        if (admin.tableExists(hbaseTableName)) {
            System.out.println("Table exists, trying drop and create!");
            admin.disableTable(hbaseTableName);
            admin.deleteTable(hbaseTableName);
            admin.createTable(tableDesc);
        } else {
            System.out.println("create table: " + hbaseTableName);
            admin.createTable(tableDesc);
        }
        admin.close();
    }
}

Compile and run the code on Linux:

    [hadoop@h71 q1]$ /usr/jdk1.7.0_25/bin/javac HBaseToHbase.java

    [hadoop@h71 q1]$ /usr/jdk1.7.0_25/bin/jar cvf xx.jar HBaseToHbase*class

    [hadoop@h71 q1]$ hadoop jar xx.jar HBaseToHbase
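Note: hadoop jar only finds the HBase classes if the HBase client jars are on Hadoop's classpath. If the job fails with a ClassNotFoundException for HBase classes, exporting the HBase classpath first usually helps (a sketch; the exact environment is cluster-specific):

[hadoop@h71 q1]$ export HADOOP_CLASSPATH=$(hbase classpath)
[hadoop@h71 q1]$ hadoop jar xx.jar HBaseToHbase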

Check the mytb2 table:

hbase(main):009:0> scan 'mytb2'
ROW                          COLUMN+CELL
 hello hadoop                column=mycolumnfamily:count, timestamp=1489817182454, value=2
 hello hive                  column=mycolumnfamily:count, timestamp=1489817182454, value=1
 hello world                 column=mycolumnfamily:count, timestamp=1489817182454, value=3
3 row(s) in 0.0260 seconds
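Because the job stores each count as the string form of the number (Bytes.toBytes(String.valueOf(sum))), the values display readably in the shell above and can be decoded with Bytes.toString. A minimal read-back sketch in Java, again using the old-style client API (the class name ScanMytb2 is illustrative; assumes hbase-site.xml is on the classpath):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanMytb2 {
    public static void main(String[] args) throws IOException {
        Configuration cfg = HBaseConfiguration.create();
        HTable table = new HTable(cfg, "mytb2");
        ResultScanner scanner = table.getScanner(new Scan());
        for (Result r : scanner) {
            String row = Bytes.toString(r.getRow());
            // The count was written as a string, so Bytes.toString decodes it directly.
            String count = Bytes.toString(
                r.getValue(Bytes.toBytes("mycolumnfamily"), Bytes.toBytes("count")));
            System.out.println(row + " -> " + count);
        }
        scanner.close();
        table.close();
    }
}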

