Working with Hive from Java, Part 1: Hive UDFs


Author: 只是甲 | Published 2021-07-26 15:46

    1. Create the Hive project

    1.1 Create a new Java project

    To keep things simple we skip Maven here; instead we create a lib directory by hand inside the project and copy the jars we need into it.



    1.2 Copy all jars from Hive's lib directory into the project's lib directory

    Download: https://downloads.apache.org/hive/hive-2.3.9/
    Note: pick the version that matches your cluster.


    Copy every jar from Hive's lib directory into the project's lib directory. The jars live under the lib folder of the Hive installation.
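
    If you only need the jars, the release can be fetched and unpacked from the command line. This is just a sketch; the tarball name follows Apache's usual release naming:

    # Download and unpack Hive 2.3.9; the jars live under apache-hive-2.3.9-bin/lib/
    wget https://downloads.apache.org/hive/hive-2.3.9/apache-hive-2.3.9-bin.tar.gz
    tar -xzf apache-hive-2.3.9-bin.tar.gz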


    1.3 Import the hadoop-common jar

    Download: https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/3.0.3/
    Note: pick the version that matches your cluster.
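
    The jar can also be pulled directly; the file name below follows the standard Maven repository layout:

    # Fetch hadoop-common 3.0.3 from Maven Central
    wget https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/3.0.3/hadoop-common-3.0.3.jar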


    1.4 Add the jars to the project

    After copying the jars into the hive-udf project's lib directory, select all of them, right-click, hover over "Build Path" in the context menu, and click "Add to Build Path" to add them to the project.


    2. Write and package the Java code

    Code:

    package myUdf;
    
    import org.apache.hadoop.hive.ql.exec.UDF;
    
    /**
     * @author  只是甲
     * @date    2021-07-26
     * @remark  Maps a numeric degree-type code to its Chinese degree name.
     */
    
    public class getDegree extends UDF {
        public String evaluate(int degreetype) {
            /* 1 -- 小学 (primary school)
               2 -- 初中 (junior high school)
               3 -- 职业高中 (vocational high school)
               4 -- 中专 (technical secondary school)
               5 -- 高中 (senior high school)
               6 -- 大专 (junior college)
               7 -- 本科 (bachelor's degree)
               8 -- 硕士 (master's degree)
               9 -- 博士 (doctorate)
            */
            String result;
            
            if (degreetype == 1) {
                result = "小学";
            } else if (degreetype == 2) {
                result = "初中";
            } else if (degreetype == 3) {
                result = "职业高中";
            } else if (degreetype == 4) {
                result = "中专";
            } else if (degreetype == 5) {
                result = "高中";
            } else if (degreetype == 6) {
                result = "大专";
            } else if (degreetype == 7) {
                result = "本科";
            } else if (degreetype == 8) {
                result = "硕士";
            } else if (degreetype == 9) {
                result = "博士";
            } else {
                result = "N/A";
            }
            
            return result;
        }
    }
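
    Before packaging, the mapping can be sanity-checked locally with a plain main method. This is only a quick sketch; GetDegreeTest is a throwaway helper class, not part of the project above:

    package myUdf;
    
    // Throwaway local check of the mapping logic; needs only hive-exec on the classpath.
    public class GetDegreeTest {
        public static void main(String[] args) {
            getDegree udf = new getDegree();
            System.out.println(udf.evaluate(3));   // expect 职业高中
            System.out.println(udf.evaluate(1));   // expect 小学
            System.out.println(udf.evaluate(42));  // expect N/A for unknown codes
        }
    }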
    

    The project outline now contains the myUdf package with getDegree.java in it.

    Export the jar: select the HiveUDF project, right-click, and choose Export.
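
    Alternatively, the jar can be packaged from the command line (a sketch assuming Eclipse compiled the classes into bin/):

    # Package the compiled myUdf classes into getDegree.jar
    jar cf getDegree.jar -C bin myUdf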


    3. Register the UDF

    3.1 Upload the jar to the server

    Upload the exported getDegree.jar to the server; in this walkthrough it is placed at /home/java/getDegree.jar.

    3.2 Register the UDF

    Next we register the UDF so that Hive knows about it and can resolve calls to it. Registration is done from the Hive CLI, so first start hive from Hive's bin directory.

    Commands:

    -- Add the jar
    add jar /home/java/getDegree.jar;
    -- List the jars currently on the class path
    list jars;
    -- Create a temporary function
    create temporary function GetDegree as 'myUdf.getDegree';
    

    Session log:

    hive> add jar /home/java/getDegree.jar;
    Added [/home/java/getDegree.jar] to class path
    Added resources: [/home/java/getDegree.jar]
    hive> drop function GetDegree;
    hive> 
        > 
        > list jars;
    /home/java/getDegree.jar
    hive> create temporary function GetDegree as 'myUdf.getDegree';
    OK
    Time taken: 0.023 seconds
    hive> 
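
    To double-check the registration, Hive's inspection commands can be used (a quick sketch; note that Hive stores function names in lowercase):

    -- List registered functions; the new one appears as getdegree
    show functions;
    -- Show details of the function, including the backing class
    describe function extended getdegree;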
    

    3.3 Test the Hive UDF

    Test data:

    degree_type
    1 -- 小学 (primary school)
    2 -- 初中 (junior high school)
    3 -- 职业高中 (vocational high school)
    4 -- 中专 (technical secondary school)
    5 -- 高中 (senior high school)
    6 -- 大专 (junior college)
    7 -- 本科 (bachelor's degree)
    8 -- 硕士 (master's degree)
    9 -- 博士 (doctorate)
    
    
    create table user_info(id int,degree_type int);
    insert into user_info values (1,3);
    insert into user_info values (2,1);
    insert into user_info values (3,6);
    insert into user_info values (4,4);
    insert into user_info values (5,5);
    insert into user_info values (6,9);
    insert into user_info values (7,8);
    insert into user_info values (8,2);
    insert into user_info values (9,7);
    
    
    hive> 
        > select * from user_info;
    OK
    user_info.id    user_info.degree_type
    1       3
    2       1
    3       6
    4       4
    5       5
    6       9
    7       8
    8       2
    9       7
    Time taken: 0.088 seconds, Fetched: 9 row(s)
    

    Run the UDF:

    hive> 
        > 
        > select id,GetDegree(degree_type) from user_info order by id;
    Query ID = root_20210726144704_e3235df0-fa86-4eec-8fcd-b5a5e87234cf
    Total jobs = 1
    Launching Job 1 out of 1
    Number of reduce tasks determined at compile time: 1
    In order to change the average load for a reducer (in bytes):
      set hive.exec.reducers.bytes.per.reducer=<number>
    In order to limit the maximum number of reducers:
      set hive.exec.reducers.max=<number>
    In order to set a constant number of reducers:
      set mapreduce.job.reduces=<number>
    21/07/26 14:47:04 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm69
    Starting Job = job_1627269473896_0006, Tracking URL = http://hp3:8088/proxy/application_1627269473896_0006/
    Kill Command = /opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/lib/hadoop/bin/hadoop job  -kill job_1627269473896_0006
    Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 1
    2021-07-26 14:47:10,383 Stage-1 map = 0%,  reduce = 0%
    2021-07-26 14:47:16,584 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 5.19 sec
    2021-07-26 14:47:22,759 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 6.9 sec
    MapReduce Total cumulative CPU time: 6 seconds 900 msec
    Ended Job = job_1627269473896_0006
    MapReduce Jobs Launched: 
    Stage-Stage-1: Map: 2  Reduce: 1   Cumulative CPU: 6.9 sec   HDFS Read: 12073 HDFS Write: 342 HDFS EC Read: 0 SUCCESS
    Total MapReduce CPU Time Spent: 6 seconds 900 msec
    OK
    1       职业高中
    2       小学
    3       大专
    4       中专
    5       高中
    6       博士
    7       硕士
    8       初中
    9       本科
    Time taken: 20.596 seconds, Fetched: 9 row(s)
    hive> 
    

    3.4 Create a permanent function

    The function created above is temporary: once the current session ends it is gone, and a new session can no longer resolve it:

    [root@hp1 java]# hive
    WARNING: Use "yarn jar" to launch YARN applications.
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/jars/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/jars/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
    
    Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/jars/hive-common-2.1.1-cdh6.3.1.jar!/hive-log4j2.properties Async: false
    
    WARNING: Hive CLI is deprecated and migration to Beeline is recommended.
    hive> 
        > use test;
    OK
    Time taken: 1.154 seconds
    hive> 
        > select id,GetDegree(degree_type) from user_info order by id;
    FAILED: SemanticException [Error 10011]: Invalid function GetDegree
    

    To make the function survive across sessions, create a permanent function:

    -- On the OS shell
    sudo -u hdfs hadoop fs -mkdir /user/hive/lib
    sudo -u hdfs hadoop fs -put /home/java/getDegree.jar /user/hive/lib
    
    -- In the Hive CLI
    CREATE FUNCTION GetDegree AS 'myUdf.getDegree' USING JAR 'hdfs:///user/hive/lib/getDegree.jar';
    
    select id,GetDegree(degree_type) from user_info order by id;
    

    Session log:

    [root@hp1 ~]# sudo -u hdfs  hadoop  fs -mkdir /user/hive/lib
    [root@hp1 ~]# 
    [root@hp1 ~]# 
    [root@hp1 ~]# sudo -u hdfs  hadoop fs -put /home/java/getDegree.jar /user/hive/lib
    
    hive> 
        > 
        > CREATE FUNCTION GetDegree AS 'myUdf.getDegree' USING JAR 'hdfs://hp1:9866/user/hive/lib/getDegree.jar';
    End of File Exception between local host is: "hp1/10.31.1.123"; destination host is: "hp1":9866; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException
    Failed to register test.getdegree using class myUdf.getDegree
    FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.FunctionTask
    hive> 
        > CREATE FUNCTION GetDegree AS 'myUdf.getDegree' USING JAR 'hdfs:///user/hive/lib/getDegree.jar';
    Added [/tmp/4e1c389c-4053-4197-9364-93a01840309c_resources/getDegree.jar] to class path
    Added resources: [hdfs:///user/hive/lib/getDegree.jar]
    OK
    Time taken: 0.203 seconds
    hive> select id,GetDegree(degree_type) from user_info order by id;
    Query ID = root_20210726152617_b3da9372-0724-4126-8d62-f58fa55afc42
    Total jobs = 1
    Launching Job 1 out of 1
    Number of reduce tasks determined at compile time: 1
    In order to change the average load for a reducer (in bytes):
      set hive.exec.reducers.bytes.per.reducer=<number>
    In order to limit the maximum number of reducers:
      set hive.exec.reducers.max=<number>
    In order to set a constant number of reducers:
      set mapreduce.job.reduces=<number>
    21/07/26 15:26:18 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm69
    Starting Job = job_1627269473896_0007, Tracking URL = http://hp3:8088/proxy/application_1627269473896_0007/
    Kill Command = /opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/lib/hadoop/bin/hadoop job  -kill job_1627269473896_0007
    Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 1
    2021-07-26 15:26:26,852 Stage-1 map = 0%,  reduce = 0%
    2021-07-26 15:26:33,129 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.6 sec
    2021-07-26 15:26:38,290 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 7.12 sec
    MapReduce Total cumulative CPU time: 7 seconds 120 msec
    Ended Job = job_1627269473896_0007
    MapReduce Jobs Launched: 
    Stage-Stage-1: Map: 2  Reduce: 1   Cumulative CPU: 7.12 sec   HDFS Read: 11873 HDFS Write: 342 HDFS EC Read: 0 SUCCESS
    Total MapReduce CPU Time Spent: 7 seconds 120 msec
    OK
    1       职业高中
    2       小学
    3       大专
    4       中专
    5       高中
    6       博士
    7       硕士
    8       初中
    9       本科
    Time taken: 22.51 seconds, Fetched: 9 row(s)
    hive> 
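
    A note on the failed first attempt above: in Hadoop 3.x, port 9866 is the DataNode data-transfer port, not the NameNode RPC port, so the CREATE FUNCTION call never reaches the NameNode and dies with an EOFException. The scheme-only form hdfs:/// resolves the authority from fs.defaultFS, which is why the second attempt succeeds. A fully qualified URI would look like the sketch below, assuming the default NameNode RPC port 8020 (verify fs.defaultFS on your cluster):

    -- Hypothetical fully qualified form; 8020 is the assumed NameNode RPC port
    CREATE FUNCTION GetDegree AS 'myUdf.getDegree' USING JAR 'hdfs://hp1:8020/user/hive/lib/getDegree.jar';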
    

