Basic operations
Run hive to enter the Hive CLI; exit; quits it.
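A one-off statement can also be run without entering the CLI; a small sketch (ad_search is the database used throughout these notes):
hive                          # start the interactive CLI; type exit; to leave
hive -e "show databases;"     # run a single statement and return to the shell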
-
Listing tables/databases by regex match
hive> use ad_search;
hive> show tables;
hive> show databases like "w.*";    -- regex match
Creating a table
To create a Hive table, in a local directory run vim test_wangpei.hql,
press i to enter insert mode, and type the following:
use ad_search;
create table test_wangpei(
id INT,
number FLOAT,
someStrings STRING)
row format delimited
fields terminated by '\t' lines terminated by '\n'
stored as textfile;
Press Esc to leave insert mode, then type :wq to save and quit.
Running an HQL script
If you have written an HQL script (e.g. the table-creation statements above saved as test.hql), make sure it starts with use ad_search;.
Then, without entering the Hive CLI, run hive -f test.hql directly; this executes the script (the table creation here, or any other statements). A short sketch follows.
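A minimal sketch of running scripts from the shell (export.hql and the output path are hypothetical names; --hiveconf passes a variable that a script can read as ${hiveconf:target_path}, as used further below):
hive -f test.hql                                                   # run the table-creation script
hive --hiveconf target_path=/tmp/test_wangpei_out -f export.hql    # run a script that uses ${hiveconf:target_path}
hive -e "use ad_search; describe formatted test_wangpei;"          # quick check that the table exists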
Loading delimited data
With the table created, the data now has to be loaded into it. Assuming the table was created under ad_search:
hive> use ad_search;
hive> load data local inpath '<file in the current directory>' into table test_wangpei;
hive> select * from test_wangpei limit 10;
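For reference, a tab-separated file matching the table definition above can be generated and loaded like this (the two rows are made up purely for illustration, and sample.txt is a hypothetical file name):
printf '1\t0.5\thello\n2\t1.5\tworld\n' > sample.txt    # two made-up tab-separated rows
hive -e "use ad_search; load data local inpath 'sample.txt' into table test_wangpei; select * from test_wangpei limit 10;"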
Querying a large table and writing the query output to a specified directory
use ad_search;
set mapred.max.split.size=3072000000;
set mapred.min.split.size=2048000000;
set mapred.min.split.size.per.node=2048000000;
set mapred.min.split.size.per.rack=2048000000;
set mapreduce.jobtracker.split.metainfo.maxsize=20000000;
set hive.exec.reducers.bytes.per.reducer=500000000;
set hive.exec.reducers.max=40090;
INSERT OVERWRITE DIRECTORY '${hiveconf:target_path}'
row format delimited fields terminated by '\t'
followed by the SELECT statement(s); a complete sketch is given below.
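For completeness, a minimal end-to-end version of such an export script (the SELECT is just a placeholder query against the test table above; the file name and output path are hypothetical):
-- export.hql; run as: hive --hiveconf target_path=/tmp/test_wangpei_out -f export.hql
use ad_search;
INSERT OVERWRITE DIRECTORY '${hiveconf:target_path}'
row format delimited fields terminated by '\t'
select id, number, someStrings
from test_wangpei
where someStrings is not null;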
Bugs encountered:
-
The reduce phase crawls from 0% up to 99% and then fails the moment it reaches 100%. This most likely means the reduce function itself is wrong; debug the reduce function on its own, for example as sketched below.
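If the reduce step is a user script plugged in via TRANSFORM (an assumption here; the file names are hypothetical), it can be exercised locally on a small slice of the map output before resubmitting the job:
# mimic the shuffle: sort a sample of the map output by key, then pipe it into the reducer
head -1000 map_output_sample.txt | sort -t$'\t' -k1,1 | python reduce.py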
-
Reduce stuck at 99%: the job logs showed only one or two nodes still running, and they were progressing very slowly. This is most likely data skew, i.e. some key carries a huge amount of data and overwhelms the nodes it is routed to. At the time I set
set mapreduce.map.memory.mb=20480;
set mapreduce.map.java.opts=-Xmx15360m;
set mapreduce.reduce.memory.mb=20480;
set mapreduce.reduce.java.opts=-Xmx15360m;
but it did not help. I checked the code and the empty keys had indeed already been filtered out, and
set hive.exec.reducers.bytes.per.reducer=500000000;
made no difference either. Since the next step of my pipeline runs another MapReduce job anyway, this step does not actually need its output sorted or distributed by key, so I removed the sort by / distribute by clauses; rows are then no longer routed to reducers by key, and no single node gets overloaded. A sketch of the change is below.
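For reference, a rough sketch of that change, assuming the step feeds a TRANSFORM script (table, column, and script names are placeholders; in a real job the script would first be registered with add file reduce.py;):
-- skew-prone version: rows are grouped and routed to reducers by key
from (select k, v from some_big_table distribute by k sort by k) t
select transform(t.k, t.v) using 'python reduce.py' as k, cnt;
-- version used here: no distribute by / sort by, so no per-key routing
-- (only safe because the next MR step regroups by key anyway)
from some_big_table
select transform(k, v) using 'python reduce.py' as k, cnt;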
Java heap space
'''
java.lang.OutOfMemoryError: Java heap space
at com.hadoop.compression.lzo.LzoIndex.<init>(LzoIndex.java:57)
at com.hadoop.compression.lzo.LzoIndex.readIndex(LzoIndex.java:189)
at com.hadoop.mapred.DeprecatedLzoTextInputFormat.listStatus(DeprecatedLzoTextInputFormat.java:140)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
at com.hadoop.mapred.DeprecatedLzoTextInputFormat.getSplits(DeprecatedLzoTextInputFormat.java:200)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:305)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:385)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getCombineSplits(CombineHiveInputFormat.java:408)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:571)
at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:363)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:355)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:231)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:570)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:570)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:561)
at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:464)
at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:138)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1984)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1726)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1484)
FAILED: Execution Error, return code -101 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. Java heap space
'''
When you hit heap space errors, adjust the map/reduce memory settings and the container size. As a rule of thumb, the JVM heap (the -Xmx in java.opts) should be about 3/4 of the container memory (memory.mb), leaving some room for the rest of the process, e.g.:
set mapreduce.reduce.memory.mb=20480;
set mapreduce.reduce.java.opts=-Xmx15360m;