HIVE: Controlling the Size and Number of Exported Data Files


Author: 润土1030 | Published 2019-01-12 13:00

    At work I ran into a requirement to collect HIVE export data from an SFTP server. Because of design shortcomings in the big-data platform, the exports are produced with Hive's insert overwrite local directory statement.

    The number of exported files, and the size of each one, both affect my collection program. Since the platform's design only lets me embed SQL, I had to control these through Hive SQL settings.

    Setting the number of reducers in HIVE

    set mapred.reduce.tasks=5;
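
    Note that mapred.reduce.tasks is the legacy MRv1 property name; the job output below lists the newer equivalents. A minimal sketch of the same control under the current names (assuming Hive on YARN/MRv2):

    -- Fix a constant reducer count under the MRv2 property name:
    set mapreduce.job.reduces=5;
    -- Or cap whatever count Hive estimates on its own:
    set hive.exec.reducers.max=5;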

    The HIVE export statement

    insert overwrite local directory '/opt/test/data'
    row format delimited fields terminated by '\t'
    select customer_id, customer_fname, customer_zipcode
    from customers
    cluster by customer_zipcode;
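
    Since cluster by customer_zipcode is simply shorthand for distribute by customer_zipcode sort by customer_zipcode, the same export can be written in the longer form; a sketch of the equivalent statement, useful if you ever need to distribute and sort by different columns:

    insert overwrite local directory '/opt/test/data'
    row format delimited fields terminated by '\t'
    select customer_id, customer_fname, customer_zipcode
    from customers
    distribute by customer_zipcode
    sort by customer_zipcode;

    Running the original statement with 5 reducers: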

    hive> set mapred.reduce.tasks=5;
    hive> insert overwrite local directory '/opt/test/data' row format delimited fields terminated by '\t' select customer_id, customer_fname, customer_zipcode from customers cluster by customer_zipcode; 
    Query ID = hdfs_20190111202929_76c074d9-c0ac-4073-a867-4c69aca85c7d
    Total jobs = 1
    Launching Job 1 out of 1
    Number of reduce tasks not specified. Defaulting to jobconf value of: 5
    In order to change the average load for a reducer (in bytes):
      set hive.exec.reducers.bytes.per.reducer=<number>
    In order to limit the maximum number of reducers:
      set hive.exec.reducers.max=<number>
    In order to set a constant number of reducers:
      set mapreduce.job.reduces=<number>
    Starting Job = job_1544843719855_0021, Tracking URL = http://quickstart.cloudera:8088/proxy/application_1544843719855_0021/
    Kill Command = /usr/lib/hadoop/bin/hadoop job  -kill job_1544843719855_0021
    Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 5
    2019-01-11 20:30:06,871 Stage-1 map = 0%,  reduce = 0%
    2019-01-11 20:30:18,011 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 3.85 sec
    2019-01-11 20:30:29,015 Stage-1 map = 100%,  reduce = 20%, Cumulative CPU 6.36 sec
    2019-01-11 20:30:30,091 Stage-1 map = 100%,  reduce = 40%, Cumulative CPU 8.78 sec
    2019-01-11 20:30:39,016 Stage-1 map = 100%,  reduce = 60%, Cumulative CPU 8.78 sec
    2019-01-11 20:30:40,077 Stage-1 map = 100%,  reduce = 80%, Cumulative CPU 13.86 sec
    2019-01-11 20:30:47,615 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 17.0 sec
    MapReduce Total cumulative CPU time: 17 seconds 0 msec
    Ended Job = job_1544843719855_0021
    Copying data to local directory /opt/test/data
    MapReduce Jobs Launched: 
    Stage-Stage-1: Map: 1  Reduce: 5   Cumulative CPU: 17.0 sec   HDFS Read: 110232 HDFS Write: 215580 SUCCESS
    Total MapReduce CPU Time Spent: 17 seconds 0 msec
    OK
    Time taken: 56.395 seconds
    
    Go into the /opt/test/data folder to check the results. Each reducer writes its own output file, so 5 reducers produced 5 files:
    [root@quickstart data]# ll -h
    total 220K
    -rw-r--r-- 1 hdfs hdfs  25K Jan 11 20:30 000000_0
    -rw-r--r-- 1 hdfs hdfs  28K Jan 11 20:30 000001_0
    -rw-r--r-- 1 hdfs hdfs  26K Jan 11 20:30 000002_0
    -rw-r--r-- 1 hdfs hdfs  28K Jan 11 20:30 000003_0
    -rw-r--r-- 1 hdfs hdfs 107K Jan 11 20:30 000004_0
    
    
    If customer_zipcode is unevenly distributed, the statement above skews data across the reducers; for example, the file 000004_0 came out far larger than the others. To spread rows evenly, switch to distribute by rand(), which assigns each row to a random reducer, and raise the reducer count to 10 to verify that 10 evenly sized result files are produced:
    hive> set mapred.reduce.tasks=10;
    hive> insert overwrite local directory '/opt/test/data' row format delimited fields terminated by '\t' select customer_id, customer_fname, customer_zipcode from customers distribute by rand(); 
    Query ID = hdfs_20190111205252_2d4b8cde-7d9b-4276-94fa-8fa83b77b4db
    Total jobs = 1
    Launching Job 1 out of 1
    Number of reduce tasks not specified. Defaulting to jobconf value of: 10
    In order to change the average load for a reducer (in bytes):
      set hive.exec.reducers.bytes.per.reducer=<number>
    In order to limit the maximum number of reducers:
      set hive.exec.reducers.max=<number>
    In order to set a constant number of reducers:
      set mapreduce.job.reduces=<number>
    Starting Job = job_1544843719855_0022, Tracking URL = http://quickstart.cloudera:8088/proxy/application_1544843719855_0022/
    Kill Command = /usr/lib/hadoop/bin/hadoop job  -kill job_1544843719855_0022
    Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 10
    2019-01-11 20:52:24,166 Stage-1 map = 0%,  reduce = 0%
    2019-01-11 20:52:33,763 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 4.71 sec
    2019-01-11 20:52:45,572 Stage-1 map = 100%,  reduce = 20%, Cumulative CPU 9.66 sec
    2019-01-11 20:52:55,322 Stage-1 map = 100%,  reduce = 40%, Cumulative CPU 14.71 sec
    2019-01-11 20:53:03,935 Stage-1 map = 100%,  reduce = 50%, Cumulative CPU 17.42 sec
    2019-01-11 20:53:06,107 Stage-1 map = 100%,  reduce = 60%, Cumulative CPU 20.6 sec
    2019-01-11 20:53:13,643 Stage-1 map = 100%,  reduce = 70%, Cumulative CPU 22.89 sec
    2019-01-11 20:53:14,718 Stage-1 map = 100%,  reduce = 80%, Cumulative CPU 25.14 sec
    2019-01-11 20:53:21,092 Stage-1 map = 100%,  reduce = 90%, Cumulative CPU 27.3 sec
    2019-01-11 20:53:23,198 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 29.8 sec
    MapReduce Total cumulative CPU time: 29 seconds 800 msec
    Ended Job = job_1544843719855_0022
    Copying data to local directory /opt/test/data
    MapReduce Jobs Launched: 
    Stage-Stage-1: Map: 1  Reduce: 10   Cumulative CPU: 29.8 sec   HDFS Read: 126360 HDFS Write: 215580 SUCCESS
    Total MapReduce CPU Time Spent: 29 seconds 800 msec
    OK
    Time taken: 68.831 seconds
    hive> 
    
    
    Going into the folder again, there are now 10 files of roughly equal size:
    [root@quickstart data]# ll -h
    total 240K
    -rw-r--r-- 1 hdfs hdfs 21K Jan 11 20:53 000000_0
    -rw-r--r-- 1 hdfs hdfs 22K Jan 11 20:53 000001_0
    -rw-r--r-- 1 hdfs hdfs 21K Jan 11 20:53 000002_0
    -rw-r--r-- 1 hdfs hdfs 22K Jan 11 20:53 000003_0
    -rw-r--r-- 1 hdfs hdfs 22K Jan 11 20:53 000004_0
    -rw-r--r-- 1 hdfs hdfs 22K Jan 11 20:53 000005_0
    -rw-r--r-- 1 hdfs hdfs 22K Jan 11 20:53 000006_0
    -rw-r--r-- 1 hdfs hdfs 21K Jan 11 20:53 000007_0
    -rw-r--r-- 1 hdfs hdfs 21K Jan 11 20:53 000008_0
    -rw-r--r-- 1 hdfs hdfs 22K Jan 11 20:53 000009_0
    [root@quickstart data]# 
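
    A fixed reducer count controls how many files come out; if the real constraint is the size of each file, Hive can instead derive the reducer count from the input volume. A minimal sketch using the properties Hive itself prints in the job output above (the 64 MB target is an assumed value; tune it to what the collector can handle):

    -- Go back to automatic reducer estimation:
    set mapred.reduce.tasks=-1;
    -- Aim for roughly 64 MB of input per reducer, and so per output file:
    set hive.exec.reducers.bytes.per.reducer=67108864;
    -- Optional safety cap on the estimated reducer count:
    set hive.exec.reducers.max=32;
    insert overwrite local directory '/opt/test/data'
    row format delimited fields terminated by '\t'
    select customer_id, customer_fname, customer_zipcode
    from customers
    distribute by rand();

    One caveat with distribute by rand(): the value is recomputed when a failed task is retried, so row placement is not deterministic across attempts. If deterministic placement matters, distributing by a hash of a stable column, for example distribute by pmod(hash(customer_id), 10), is a common substitute.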
    
    
    There are a few other parameters worth tuning as well, but I'll leave those out of this article for now.
