
Hive Storage Formats

Author: 喵星人ZC | Published 2019-04-20 01:37

    I. Hive Storage Formats in the Official Docs
    As listed in the official documentation:

    file_format:
      : SEQUENCEFILE
      | TEXTFILE    -- (Default, depending on hive.default.fileformat configuration)
      | RCFILE      -- (Note: Available in Hive 0.6.0 and later)
      | ORC         -- (Note: Available in Hive 0.11.0 and later)
      | PARQUET     -- (Note: Available in Hive 0.13.0 and later)
      | AVRO        -- (Note: Available in Hive 0.14.0 and later)
      | JSONFILE    -- (Note: Available in Hive 4.0.0 and later)
      | INPUTFORMAT input_format_classname OUTPUTFORMAT output_format_classname
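
    For reference, the last alternative in the grammar names the underlying Hadoop I/O classes directly. A minimal sketch that is equivalent to STORED AS TEXTFILE (the table name t is just for illustration):

    create table t(id int)
    stored as
      inputformat 'org.apache.hadoop.mapred.TextInputFormat'
      outputformat 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat';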
    

    In real-world work we only need to master these:

    file_format:
      : SEQUENCEFILE
      | TEXTFILE    -- (Default, depending on hive.default.fileformat configuration)
      | RCFILE      -- (Note: Available in Hive 0.6.0 and later)
      | ORC         -- (Note: Available in Hive 0.11.0 and later)
      | PARQUET     -- (Note: Available in Hive 0.13.0 and later)
    

    II. Testing the Storage Formats
    1. SEQUENCEFILE (serialized key-value pairs)



    In my Hive warehouse I have a views table stored as TEXTFILE, 18.1 MB in size:

    [hadoop@hadoop000 data]$ hadoop fs -du -s -h /user/hive/warehouse/g6_hadoop.db/views
    18.1 M  18.1 M  /user/hive/warehouse/g6_hadoop.db/views
    

    Now let's create a views_seq table with the same schema as views, changing only the storage format to SEQUENCEFILE:

    create table views_seq(
    track_time string,
    url string,
    session_id string,
    referer string,
    ip string,
    end_user_id string,
    city_id string
    ) row format delimited fields terminated by '\t'
    stored as sequencefile;
    

    Insert the data from views into views_seq:

    insert into table views_seq select * from views;
    

    Compare against the original size:

    [hadoop@hadoop000 data]$ hadoop fs -du -s -h /user/hive/warehouse/g6_hadoop.db/views
    18.1 M  18.1 M  /user/hive/warehouse/g6_hadoop.db/views
    [hadoop@hadoop000 data]$ hadoop fs -du -s -h /user/hive/warehouse/g6_hadoop.db/views_seq
    19.6 M  19.6 M  /user/hive/warehouse/g6_hadoop.db/views_seq
    [hadoop@hadoop000 data]$ 
    

    Notice that the SequenceFile output is actually larger than the original data, which is why this format is rarely used in practice.
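
    Because a SequenceFile is a binary key-value container, hadoop fs -cat output is unreadable, but hadoop fs -text can decode it. A quick sketch (the 000000_0 file name is an assumed output file under the warehouse path, not taken from the article):

    # Decode and show the first few records of the SequenceFile-backed table
    hadoop fs -text /user/hive/warehouse/g6_hadoop.db/views_seq/000000_0 | head -3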

    2. RCFILE (hybrid row-columnar)
    Now create a views_rc table with the same schema as views, changing only the storage format to RCFILE, and insert the data from views into it; a sketch of the statements follows below.
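
    A minimal sketch of those statements, assuming the same schema and delimiter as views (the article does not show them):

    create table views_rc(
    track_time string,
    url string,
    session_id string,
    referer string,
    ip string,
    end_user_id string,
    city_id string
    ) row format delimited fields terminated by '\t'
    stored as rcfile;

    insert into table views_rc select * from views;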

    Compare against the original size:

    [hadoop@hadoop000 data]$ hadoop fs -du -s -h /user/hive/warehouse/g6_hadoop.db/views
    18.1 M  18.1 M  /user/hive/warehouse/g6_hadoop.db/views
    [hadoop@hadoop000 data]$ hadoop fs -du -s -h /user/hive/warehouse/g6_hadoop.db/views_rc
    17.9 M  17.9 M  /user/hive/warehouse/g6_hadoop.db/views_rc
    

    This does nothing to optimize queries by itself, and here it saves only about 1% of storage space (18.1 MB down to 17.9 MB).

    3. ORC


    ORC introduces the concept of stripes, which gives it very good query performance; most production systems use ORC.
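
    Once the views_orc table created below is populated, you can inspect its stripe layout with Hive's ORC dump utility. A quick sketch (the 000000_0 file name is an assumed output file, not taken from the article):

    # Print file metadata, including stripe statistics, for one ORC file
    hive --orcfiledump /user/hive/warehouse/g6_hadoop.db/views_orc/000000_0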

    Now let's create a views_orc table with the same schema as views, changing only the storage format to ORC, and insert the data from views into views_orc:

    create table views_orc(
    track_time string,
    url string,
    session_id string,
    referer string,
    ip string,
    end_user_id string,
    city_id string
    ) row format delimited fields terminated by '\t'
    stored as orc;

    insert into views_orc select * from views;
    

    Compare against the original size:

    [hadoop@hadoop000 data]$ hadoop fs -du -s -h /user/hive/warehouse/g6_hadoop.db/views
    18.1 M  18.1 M  /user/hive/warehouse/g6_hadoop.db/views
    [hadoop@hadoop000 data]$ hadoop fs -du -s -h /user/hive/warehouse/g6_hadoop.db/views_orc
    2.8 M  2.8 M  /user/hive/warehouse/g6_hadoop.db/views_orc
    

    The data shrinks this much because ORC applies ZLIB compression by default; even with compression disabled the data comes to about 7.7 MB, still far smaller than the original, so it saves a great deal of space.
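
    Compression is controlled through the orc.compress table property (NONE, ZLIB, or SNAPPY). A minimal sketch of the no-compression variant (the views_orc_none name is just for illustration):

    create table views_orc_none
    stored as orc tblproperties ("orc.compress"="NONE")
    as select * from views;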

    4. PARQUET
    PARQUET's performance (both queries and compressed storage) is close to ORC's, so either is a reasonable choice in production, though big-data teams still tend to pick ORC because its compression is slightly better than PARQUET's.
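
    The query test below uses a views_parquet table; a minimal sketch of how it would be created (the article does not show this statement):

    create table views_parquet
    stored as parquet
    as select * from views;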

    III. Comparing the Formats on the Query Side
    1. Original format (TEXTFILE)

    hive (g6_hadoop)> select count(*) from views where session_id='f55598cafba346eb217ff3fbd0de2930';
    Query ID = hadoop_20190420005050_84924cd6-5269-4d72-993e-056d36dac0d7
    Total jobs = 1
    Launching Job 1 out of 1
    Number of reduce tasks determined at compile time: 1
    In order to change the average load for a reducer (in bytes):
      set hive.exec.reducers.bytes.per.reducer=<number>
    In order to limit the maximum number of reducers:
      set hive.exec.reducers.max=<number>
    In order to set a constant number of reducers:
      set mapreduce.job.reduces=<number>
    Starting Job = job_1555685664231_0011, Tracking URL = http://hadoop000:8088/proxy/application_1555685664231_0011/
    Kill Command = /home/hadoop/soul/app/hadoop-2.6.0-cdh5.7.0/bin/hadoop job  -kill job_1555685664231_0011
    Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
    2019-04-20 01:27:19,121 Stage-1 map = 0%,  reduce = 0%
    2019-04-20 01:27:26,441 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.02 sec
    2019-04-20 01:27:33,724 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.28 sec
    MapReduce Total cumulative CPU time: 3 seconds 280 msec
    Ended Job = job_1555685664231_0011
    MapReduce Jobs Launched: 
    Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 3.28 sec   HDFS Read: 19022693 HDFS Write: 3 SUCCESS
    Total MapReduce CPU Time Spent: 3 seconds 280 msec
    OK
    _c0
    10
    Time taken: 22.757 seconds, Fetched: 1 row(s)
    

    HDFS Read: 19022693 (the entire dataset was loaded)
    Time taken: 22.757 seconds

    2. SEQUENCEFILE

    hive (g6_hadoop)> select count(*) from views_seq where session_id='f55598cafba346eb217ff3fbd0de2930';
    Query ID = hadoop_20190420005050_84924cd6-5269-4d72-993e-056d36dac0d7
    Total jobs = 1
    Launching Job 1 out of 1
    Number of reduce tasks determined at compile time: 1
    In order to change the average load for a reducer (in bytes):
      set hive.exec.reducers.bytes.per.reducer=<number>
    In order to limit the maximum number of reducers:
      set hive.exec.reducers.max=<number>
    In order to set a constant number of reducers:
      set mapreduce.job.reduces=<number>
    Starting Job = job_1555685664231_0012, Tracking URL = http://hadoop000:8088/proxy/application_1555685664231_0012/
    Kill Command = /home/hadoop/soul/app/hadoop-2.6.0-cdh5.7.0/bin/hadoop job  -kill job_1555685664231_0012
    Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
    2019-04-20 01:29:21,115 Stage-1 map = 0%,  reduce = 0%
    2019-04-20 01:29:28,432 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.62 sec
    2019-04-20 01:29:34,714 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.77 sec
    MapReduce Total cumulative CPU time: 3 seconds 770 msec
    Ended Job = job_1555685664231_0012
    MapReduce Jobs Launched: 
    Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 3.77 sec   HDFS Read: 20509194 HDFS Write: 3 SUCCESS
    Total MapReduce CPU Time Spent: 3 seconds 770 msec
    OK
    _c0
    10
    Time taken: 21.403 seconds, Fetched: 1 row(s)
    

    HDFS Read: 20509194
    Time taken: 21.403 seconds

    3. RCFILE

    hive (g6_hadoop)> select count(*) from views_rc where session_id='f55598cafba346eb217ff3fbd0de2930';
    Query ID = hadoop_20190420005050_84924cd6-5269-4d72-993e-056d36dac0d7
    Total jobs = 1
    Launching Job 1 out of 1
    Number of reduce tasks determined at compile time: 1
    In order to change the average load for a reducer (in bytes):
      set hive.exec.reducers.bytes.per.reducer=<number>
    In order to limit the maximum number of reducers:
      set hive.exec.reducers.max=<number>
    In order to set a constant number of reducers:
      set mapreduce.job.reduces=<number>
    Starting Job = job_1555685664231_0013, Tracking URL = http://hadoop000:8088/proxy/application_1555685664231_0013/
    Kill Command = /home/hadoop/soul/app/hadoop-2.6.0-cdh5.7.0/bin/hadoop job  -kill job_1555685664231_0013
    Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
    2019-04-20 01:30:39,670 Stage-1 map = 0%,  reduce = 0%
    2019-04-20 01:30:45,944 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.62 sec
    2019-04-20 01:30:52,201 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 2.78 sec
    MapReduce Total cumulative CPU time: 2 seconds 780 msec
    Ended Job = job_1555685664231_0013
    MapReduce Jobs Launched: 
    Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 2.78 sec   HDFS Read: 3725353 HDFS Write: 3 SUCCESS
    Total MapReduce CPU Time Spent: 2 seconds 780 msec
    OK
    _c0
    10
    Time taken: 20.54 seconds, Fetched: 1 row(s)
    

    HDFS Read: 3725353
    Time taken: 20.54 seconds

    4. ORC

    hive (g6_hadoop)> select count(*) from views_orc where session_id='f55598cafba346eb217ff3fbd0de2930';
    Query ID = hadoop_20190420005050_84924cd6-5269-4d72-993e-056d36dac0d7
    Total jobs = 1
    Launching Job 1 out of 1
    Number of reduce tasks determined at compile time: 1
    In order to change the average load for a reducer (in bytes):
      set hive.exec.reducers.bytes.per.reducer=<number>
    In order to limit the maximum number of reducers:
      set hive.exec.reducers.max=<number>
    In order to set a constant number of reducers:
      set mapreduce.job.reduces=<number>
    Starting Job = job_1555685664231_0014, Tracking URL = http://hadoop000:8088/proxy/application_1555685664231_0014/
    Kill Command = /home/hadoop/soul/app/hadoop-2.6.0-cdh5.7.0/bin/hadoop job  -kill job_1555685664231_0014
    Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
    2019-04-20 01:32:02,728 Stage-1 map = 0%,  reduce = 0%
    2019-04-20 01:32:08,961 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.42 sec
    2019-04-20 01:32:16,253 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 2.66 sec
    MapReduce Total cumulative CPU time: 2 seconds 660 msec
    Ended Job = job_1555685664231_0014
    MapReduce Jobs Launched: 
    Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 2.66 sec   HDFS Read: 1257473 HDFS Write: 3 SUCCESS
    Total MapReduce CPU Time Spent: 2 seconds 660 msec
    OK
    _c0
    10
    Time taken: 21.202 seconds, Fetched: 1 row(s)
    

    HDFS Read: 1257473
    Time taken: 21.202 seconds

    5. PARQUET

    hive (g6_hadoop)> select count(*) from views_parquet where session_id='f55598cafba346eb217ff3fbd0de2930';
    Query ID = hadoop_20190420005050_84924cd6-5269-4d72-993e-056d36dac0d7
    Total jobs = 1
    Launching Job 1 out of 1
    Number of reduce tasks determined at compile time: 1
    In order to change the average load for a reducer (in bytes):
      set hive.exec.reducers.bytes.per.reducer=<number>
    In order to limit the maximum number of reducers:
      set hive.exec.reducers.max=<number>
    In order to set a constant number of reducers:
      set mapreduce.job.reduces=<number>
    Starting Job = job_1555685664231_0015, Tracking URL = http://hadoop000:8088/proxy/application_1555685664231_0015/
    Kill Command = /home/hadoop/soul/app/hadoop-2.6.0-cdh5.7.0/bin/hadoop job  -kill job_1555685664231_0015
    Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
    2019-04-20 01:34:27,219 Stage-1 map = 0%,  reduce = 0%
    2019-04-20 01:34:33,531 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.06 sec
    2019-04-20 01:34:40,785 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.3 sec
    MapReduce Total cumulative CPU time: 3 seconds 300 msec
    Ended Job = job_1555685664231_0015
    MapReduce Jobs Launched: 
    Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 3.3 sec   HDFS Read: 2687019 HDFS Write: 3 SUCCESS
    Total MapReduce CPU Time Spent: 3 seconds 300 msec
    OK
    _c0
    10
    Time taken: 21.342 seconds, Fetched: 1 row(s)
    

    HDFS Read: 2687019
    Time taken: 21.342 seconds
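
    Putting the article's numbers side by side (the PARQUET table size was not measured above):

    Format        Size on HDFS   HDFS Read (bytes)   Time taken (s)
    TEXTFILE      18.1 M         19022693            22.757
    SEQUENCEFILE  19.6 M         20509194            21.403
    RCFILE        17.9 M         3725353             20.54
    ORC           2.8 M          1257473             21.202
    PARQUET       (not shown)    2687019             21.342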

    Although the dataset here is small and the test is not especially rigorous, weighing all of this together, ORC is clearly the better fit for production.
