Installing and Starting Hive

Author: 平头哥2 | Published 2019-06-11 12:02

    Server IP configuration:

    master internal IP: 192.168.248.136
    slave01 internal IP: 192.168.248.137
    slave02 internal IP: 192.168.248.138
    slave03 internal IP: 192.168.248.139
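
    The commands later in this post refer to the nodes by hostname (for example the JDBC URL jdbc:hive2://slave01:10000), so each node presumably resolves these names. A minimal /etc/hosts sketch, assuming these are the hostnames actually configured on the cluster:

    # /etc/hosts on every node (sketch; mapping assumed from the IP list above)
    192.168.248.136 master
    192.168.248.137 slave01
    192.168.248.138 slave02
    192.168.248.139 slave03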
    

    Downloading and installing Hive:

    First install MySQL (on slave02); the MySQL installation steps are omitted here.

    Download Hive from:

    https://mirrors.tuna.tsinghua.edu.cn/apache/hive/
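
    Alternatively, the tarball can be fetched directly on slave03; a sketch (the exact path below follows the mirror's usual layout and is an assumption, since old releases are eventually removed from the mirror):

    # sketch: download the Hive 2.3.5 binary tarball straight onto slave03
    [hadoop@slave03 app]$ wget https://mirrors.tuna.tsinghua.edu.cn/apache/hive/hive-2.3.5/apache-hive-2.3.5-bin.tar.gz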

    Upload the tarball to the server (slave03) and extract it:

    [hadoop@slave03 app]$ ls
    apache-hive-2.3.5-bin.tar.gz  hadoop-2.9.2  jdk1.8.0_211
    [hadoop@slave03 app]$ tar -zxf apache-hive-2.3.5-bin.tar.gz 
    [hadoop@slave03 app]$ mv apache-hive-2.3.5-bin hive-2.3.5
    [hadoop@slave03 app]$ ls
    apache-hive-2.3.5-bin.tar.gz  hadoop-2.9.2  hive-2.3.5  jdk1.8.0_211
    [hadoop@slave03 hive-2.3.5]$ ls
    bin  binary-package-licenses  conf  derby.log  examples  hcatalog  jdbc  lib  LICENSE  NOTICE  RELEASE_NOTES.txt  scripts
    

    Create hive-site.xml (in the conf directory) with the following content:

    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <configuration>
        <property>
            <name>javax.jdo.option.ConnectionURL</name>
            <value>jdbc:mysql://192.168.248.138:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false</value>
            <description>JDBC connect string for a JDBC metastore</description>
        </property>
        <property>
            <name>javax.jdo.option.ConnectionDriverName</name>
            <value>com.mysql.jdbc.Driver</value>
            <description>Driver class name for a JDBC metastore</description>
        </property>
        <property>
            <name>javax.jdo.option.ConnectionUserName</name>
            <value>root</value>
            <description>username to use against metastore database</description>
        </property>
        <property>
            <name>javax.jdo.option.ConnectionPassword</name>
            <value>MyNewPass4!</value>
            <description>password to use against metastore database</description>
        </property>
    </configuration>
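
    Depending on the environment, conf/hive-env.sh may also need to point at the Hadoop installation (only necessary if the hadoop command is not already on PATH); a sketch, with paths assumed from the PATH shown in the startup output below:

    # conf/hive-env.sh (sketch; paths assumed from this cluster's layout)
    export HADOOP_HOME=/usr/local/app/hadoop-2.9.2
    export HIVE_CONF_DIR=/usr/local/app/hive-2.3.5/conf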
    

    Upload the MySQL JDBC driver jar to the lib directory:

    [hadoop@slave03 hive-2.3.5]$ ls lib/ |grep mysql-connector
    mysql-connector-java-5.1.47.jar
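
    If the jar is not already at hand, it can be pulled from Maven Central; a sketch (the URL follows the standard Maven Central layout and is an assumption, not taken from the original post):

    # sketch: fetch MySQL Connector/J 5.1.47 directly into Hive's lib directory
    [hadoop@slave03 hive-2.3.5]$ wget -P lib/ https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.47/mysql-connector-java-5.1.47.jar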
    

    Start Hive:

    [hadoop@slave03 hive-2.3.5]$ bin/hive
    which: no hbase in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/app/hadoop-2.9.2/sbin:/usr/local/app/hadoop-2.9.2/bin:/usr/local/app/jdk1.8.0_211/bin)
    
    Logging initialized using configuration in jar:file:/usr/local/app/hive-2.3.5/lib/hive-common-2.3.5.jar!/hive-log4j2.properties Async: true
    Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
    

    Listing all databases hits a bug:

    hive> show databases;
    FAILED: SemanticException org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
    

    Fixing the bug:

    Reference:

    https://blog.csdn.net/hhj724/article/details/79094138

    The root cause is that the metastore schema has not been initialized in MySQL yet; the fix runs as follows:

    [hadoop@slave01 bin]$ ls
    beeline  ext  hive  hive-config.sh  hiveserver2  hplsql  metatool  schematool
    # initialize the metastore schema
    [hadoop@slave01 bin]$ ./schematool -dbType mysql -initSchema
    Metastore connection URL:    jdbc:mysql://192.168.248.138:3306/hive?createDatabaseIfNotExist=true&useSSL=false
    Metastore Connection Driver :    com.mysql.jdbc.Driver
    Metastore connection User:   root
    Starting metastore schema initialization to 2.3.0
    Initialization script hive-schema-2.3.0.mysql.sql
    Initialization script completed
    schemaTool completed
    [hadoop@slave01 bin]$ cd ..
    # start hive
    [hadoop@slave01 hive-2.3.5]$ bin/hive
    # run the query again
    hive> show databases;
    OK
    default
    Time taken: 9.52 seconds, Fetched: 1 row(s)
    hive>
    # success
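
    To double-check that the initialization really landed in MySQL, the metastore database on slave02 can be inspected; a sketch, using the connection details from hive-site.xml above and assuming the mysql client is installed on the node:

    # sketch: the hive database should now contain metastore tables such as DBS and TBLS
    [hadoop@slave01 ~]$ mysql -h 192.168.248.138 -u root -p -e 'USE hive; SHOW TABLES;'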
    

    Copy the Hive installation from slave03 to slave01:

    [hadoop@slave03 app]$ scp -r hive-2.3.5/ hadoop@slave01:$PWD
    

    Start Hive as a service (HiveServer2) on slave01:

    [hadoop@slave01 hive-2.3.5]$ bin/hiveserver2 
    which: no hbase in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/app/hadoop-2.9.2/sbin:/usr/local/app/hadoop-2.9.2/bin:/usr/local/app/jdk1.8.0_211/bin)
    2019-06-11 11:31:00: Starting HiveServer2
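
    Started this way, HiveServer2 ties up the terminal; it can also be pushed to the background, a sketch (the log file name is arbitrary):

    # sketch: run HiveServer2 in the background and keep its output in a log file
    [hadoop@slave01 hive-2.3.5]$ nohup bin/hiveserver2 > hiveserver2.log 2>&1 &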
    

    Start the beeline client on slave03 to access the service; this hits a bug:

    [hadoop@slave03 hive-2.3.5]$ bin/beeline 
    Beeline version 2.3.5 by Apache Hive
    
    beeline> !connect jdbc:hive2://slave01:10000
    Connecting to jdbc:hive2://slave01:10000
    Enter username for jdbc:hive2://slave01:10000: hadoop
    Enter password for jdbc:hive2://slave01:10000: 
    19/06/11 11:31:54 [main]: WARN jdbc.HiveConnection: Failed to connect to slave01:10000
    Error: Could not open client transport with JDBC Uri: jdbc:hive2://slave01:10000: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: hadoop is not allowed to impersonate hadoop (state=08S01,code=0)
    beeline>
    

    Fixing the bug:

    Reference:

    https://blog.csdn.net/zjh_746140129/article/details/83153873

    Cause: HiveServer2 performs proxy-user (impersonation) checks, and the hadoop user has to be allowed to impersonate other users in the Hadoop configuration.

    Fix: add the following to Hadoop's core-site.xml, restart Hadoop, then connect with beeline again.

    Add this to Hadoop's core-site.xml configuration file:

    <property>
        <name>hadoop.proxyuser.hadoop.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.groups</name>
        <value>*</value>
    </property>
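
    Restarting HDFS (as in the next step) picks up the change; on a cluster that should stay up, the proxy-user settings can usually also be reloaded in place instead. A sketch, assuming the NameNode and ResourceManager run on master:

    # sketch: reload proxy-user settings on the NameNode (and ResourceManager, if needed) without a restart
    [hadoop@master hadoop-2.9.2]$ bin/hdfs dfsadmin -refreshSuperUserGroupsConfiguration
    [hadoop@master hadoop-2.9.2]$ bin/yarn rmadmin -refreshSuperUserGroupsConfiguration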
    

    Restart HDFS.

    Start HiveServer2 again, then start beeline and connect to it again:

    [hadoop@slave03 hive-2.3.5]$ bin/beeline 
    Beeline version 2.3.5 by Apache Hive
    beeline> !connect jdbc:hive2://slave01:10000
    Connecting to jdbc:hive2://slave01:10000
    Enter username for jdbc:hive2://slave01:10000: hadoop
    Enter password for jdbc:hive2://slave01:10000: 
    Connected to: Apache Hive (version 2.3.5)
    Driver: Hive JDBC (version 2.3.5)
    Transaction isolation: TRANSACTION_REPEATABLE_READ
    0: jdbc:hive2://slave01:10000> show databases;
    +----------------+
    | database_name  |
    +----------------+
    | default        |
    +----------------+
    1 row selected (1.952 seconds) 
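
    Once the connection works, the same session can also be opened non-interactively; a sketch:

    # sketch: connect in one step instead of using the interactive !connect command
    [hadoop@slave03 hive-2.3.5]$ bin/beeline -u jdbc:hive2://slave01:10000 -n hadoop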
    
