Apache Hive is a data warehouse built on top of Hadoop that provides data summarization, querying, and analysis. It maps structured data files onto database tables and offers a simple SQL-like query language; the SQL statements are translated into MapReduce jobs for execution.
Variables and properties: --define, --hivevar
[hadoop@qk conf]$ hive --define foo=name
Logging initialized using configuration in jar:file:/hadoop/hadoop-2.7.2/hive-1.1.1-bin/lib/hive-common-1.1.1.jar!/hive-log4j.properties
hive> set hivevar:foo;
hivevar:foo=name
hive> set hiveconf:foo;
hiveconf:foo is undefined as a hive configuration variable
Query returned non-zero code: 1, cause: null
hive> set foo;
foo=name
The test shows that --define and --hivevar mark variables identically (--hivevar is supported from Hive 0.8 onward). Using variables makes scripts easier to maintain later on.
A simple example:
hive> set foo;
foo=name
hive> create table tb_2(id int,${foo} string);
OK
Time taken: 0.508 seconds
hive> desc tb_2;
OK
id int
name string
Time taken: 0.42 seconds, Fetched: 2 row(s)
hive>
Once the variable is set, every ${foo} reference is replaced with name.
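Conceptually, Hive resolves ${...} references by plain text substitution before the statement is parsed. The effect can be sketched in bash (a hypothetical illustration of the substitution step, not Hive's actual implementation):

```shell
#!/usr/bin/env bash
# The statement as typed, containing a variable reference.
query='create table tb_2(id int,${foo} string);'

# The value supplied on the command line via --define foo=name.
foo=name

# Replace every literal ${foo} with the variable's value, much as
# Hive does before handing the text to its parser.
resolved=${query//'${foo}'/$foo}
echo "$resolved"   # -> create table tb_2(id int,name string);
```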
Now let's look at --hiveconf. Parameters that can be set with --hiveconf are generally listed in hive-default.xml; searching the file, we find the entry for hive.spark.client.secret.bits:
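The matching entry in hive-default.xml looks roughly like this (a sketch; the exact wording of the description varies between Hive releases):

```xml
<property>
  <name>hive.spark.client.secret.bits</name>
  <value>256</value>
  <description>Number of bits of randomness in the secret generated for
    communication between the Hive client and the remote Spark driver.</description>
</property>
```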
Its value is 256, which can also be seen in the hive CLI:
hive> set hiveconf:hive.spark.client.secret.bits;
hiveconf:hive.spark.client.secret.bits=256
hive>
So if you need a non-default configuration value for a single session only, you can set it when launching hive:
[hadoop@qk ~]$ hive --hiveconf hive.spark.client.secret.bits=257
Logging initialized using configuration in jar:file:/hadoop/hadoop-2.7.2/hive-1.1.1-bin/lib/hive-common-1.1.1.jar!/hive-log4j.properties
hive> set hiveconf:hive.spark.client.secret.bits;
hiveconf:hive.spark.client.secret.bits=257
hive> exit;
[hadoop@qk ~]$ hive
Logging initialized using configuration in jar:file:/hadoop/hadoop-2.7.2/hive-1.1.1-bin/lib/hive-common-1.1.1.jar!/hive-log4j.properties
hive> set hiveconf:hive.spark.client.secret.bits;
hiveconf:hive.spark.client.secret.bits=256
hive>
After exiting and logging in again, the value is back to 256, and hive-site.xml is unchanged. Similarly, we can also define brand-new hiveconf properties (supported in Hive 0.8 and later):
[hadoop@qk ~]$ hive --hiveconf qk=10
Logging initialized using configuration in jar:file:/hadoop/hadoop-2.7.2/hive-1.1.1-bin/lib/hive-common-1.1.1.jar!/hive-log4j.properties
hive> set hiveconf:qk;
hiveconf:qk=10
[hadoop@qk conf]$ cat hive-default.xml |grep qk
[hadoop@qk conf]$
create table tb_3 (id int);
OK
Time taken: 0.834 seconds
hive> insert into tb_3 values(${hiveconf:qk});
Query ID = hadoop_20170324151313_e17e76fc-30c2-458a-a2b0-baeddbafed3b
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Job running in-process (local Hadoop)
2017-03-24 15:13:47,502 Stage-1 map = 100%, reduce = 0%
Ended Job = job_local1856010364_0001
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to: hdfs://192.168.1.230:9000/user/hive/warehouse/tb_3/.hive-staging_hive_2017-03-24_15-13-43_337_5216464255433775132-1/-ext-10000
Loading data to table default.tb_3
Table default.tb_3 stats: [numFiles=1, numRows=1, totalSize=3, rawDataSize=2]
MapReduce Jobs Launched:
Stage-Stage-1: HDFS Read: 3 HDFS Write: 74 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
Time taken: 4.621 seconds
hive> select * from tb_3;
OK
10
Time taken: 0.104 seconds, Fetched: 1 row(s)
Note: the most direct way to make a parameter permanent is still to edit hive-site.xml.
Note: the system, hiveconf, and hivevar namespaces are readable and writable, but env is read-only:
hive>set env:HOME;
env:HOME=/home/hadoop
hive> set env:HOME=/home/hadoop;
env:* variables can not be set.
Query returned non-zero code: 1, cause: null
"One-shot" commands: hive -e
[hadoop@qk ~]$ hive -e "select * from tb_3";
Logging initialized using configuration in jar:file:/hadoop/hadoop-2.7.2/hive-1.1.1-bin/lib/hive-common-1.1.1.jar!/hive-log4j.properties
OK
10
Time taken: 1.441 seconds, Fetched: 1 row(s)
(-S enables silent mode, suppressing the "OK" and "Time taken" lines; note that -S must precede -e.) In the example below, hive writes the result to standard output, i.e. to the local filesystem rather than to HDFS:
[hadoop@qk ~]$ hive -S -e "select * from tb_3" > result.ext
[hadoop@qk ~]$ cat result.ext
10
Tip:
When you need a set parameter in hive but don't remember its exact name, a one-shot command combined with grep helps, for example:
[hadoop@qk ~]$ hive -S -e "set" |grep HOME
env:HADOOP_COMMON_HOME=/hadoop/hadoop-2.7.2
env:HADOOP_HDFS_HOME=/hadoop/hadoop-2.7.2
env:HADOOP_HOME=/hadoop/hadoop-2.7.2
env:HADOOP_HOME_WARN_SUPPRESS=true
env:HADOOP_MAPRED_HOME=/hadoop/hadoop-2.7.2
env:HADOOP_YARN_HOME=/hadoop/hadoop-2.7.2
env:HIVE_HOME=/hadoop/hadoop-2.7.2/hive-1.1.1-bin
env:HOME=/home/hadoop
env:JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64
env:MODULESHOME=/usr/share/Modules
env:SQOOP_HOME=/hadoop/hadoop-2.7.2/sqoop-1.4.6.bin__hadoop-1.0.0
env:YARN_HOME=/hadoop/hadoop-2.7.2
Executing Hive queries from a file: -f (or source once inside the hive CLI)
[hadoop@qk ~]$ cat select.sql
select * from tb_3;
select * from tb_3
[hadoop@qk ~]$ hive -S -f select.sql
10
10
[hadoop@qk ~]$ hive -S
hive> source select.sql;
10
10
The hive CLI also offers Tab auto-completion: type the first few characters of a keyword such as select and press Tab to complete it. Give it a try.
To run a shell command from within hive, prefix it with ! and end it with a semicolon:
hive> ! ls -ltr LICENSE.txt;
-rw-r--r--. 1 hadoop hadoop 15429 Nov 14 2014 LICENSE.txt
hive>
Running dfs commands inside hive:
hive> dfs -ls -R /user;
drwxr-xr-x - hadoop supergroup 0 2017-03-22 18:25 /user/hadoop
drwxr-xr-x - hadoop supergroup 0 2017-03-22 18:18 /user/hadoop/wc-in
-rw-r--r-- 1 hadoop supergroup 8 2017-03-22 16:27 /user/hadoop/wc-in/a.txt
-rw-r--r-- 2 hadoop supergroup 11 2017-03-22 16:27 /user/hadoop/wc-in/b.txt
-rw-r--r-- 1 hadoop supergroup 8 2017-03-22 17:40 /user/hadoop/wc-in/c.txt
-rw-r--r-- 1 hadoop supergroup 3 2017-03-22 17:41 /user/hadoop/wc-in/d.txt
drwxr-xr-x - hadoop supergroup 0 2017-03-22 18:25 /user/hadoop/wc-out
-rw-r--r-- 1 hadoop supergroup 0 2017-03-22 18:25 /user/hadoop/wc-out/_SUCCESS
-rw-r--r-- 1 hadoop supergroup 26 2017-03-22 18:25 /user/hadoop/wc-out/part-r-00000
drwxr-xr-x - hadoop supergroup 0 2017-03-17 18:40 /user/hive
drwxr-xr-x - hadoop supergroup 0 2017-03-24 15:12 /user/hive/warehouse
drwxr-xr-x - hadoop supergroup 0 2017-03-24 11:53 /user/hive/warehouse/tb_1
drwxr-xr-x - hadoop supergroup 0 2017-03-24 14:44 /user/hive/warehouse/tb_2
drwxr-xr-x - hadoop supergroup 0 2017-03-24 15:13 /user/hive/warehouse/tb_3
-rwxr-xr-x 1 hadoop supergroup 3 2017-03-24 15:13 /user/hive/warehouse/tb_3/000000_0
drwxr-xr-x - hadoop supergroup 0 2017-03-17 18:40 /user/hive/warehouse/test.db
drwxr-xr-x - hadoop supergroup 0 2017-03-20 11:22 /user/hive/warehouse/wh301.db
Use "--" to write comments in a hive script:
[hadoop@qk ~]$ cat select.sql
--select * from tb_3;
select * from tb_3
[hadoop@qk ~]$ hive -S -f select.sql
10
[hadoop@qk ~]$
Here one of the two queries has been commented out.
By default, hive does not print column headers in query results. Why? Because the controlling parameter, hive.cli.print.header, defaults to false:
hive>set hiveconf:hive.cli.print.header;
hiveconf:hive.cli.print.header=false
hive> select * from tb_3;
OK
10
Time taken: 1.145 seconds, Fetched: 1 row(s)
hive> set hiveconf:hive.cli.print.header=true;
hive> set hiveconf:hive.cli.print.header;
hiveconf:hive.cli.print.header=true
hive> select * from tb_3;
OK
tb_3.id
10
Time taken: 0.107 seconds, Fetched: 1 row(s)
To make the change permanent, set it in hive-site.xml.
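For example, the override in hive-site.xml would be a property block like this (a sketch):

```xml
<property>
  <name>hive.cli.print.header</name>
  <value>true</value>
</property>
```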
For the basic CLI options you can also consult the built-in help (hive --help).