Pitfalls Encountered While Setting Up Hive

Author: 咸鱼翻身记 | Published 2016-12-27 17:37

    I. Basic functionality:


    1. Error when starting Hive:
      java.lang.ExceptionInInitializerError
          at java.lang.Class.forName0(Native Method)
          at java.lang.Class.forName(Class.java:190)
          at org.apache.hadoop.hive.ql.stats.jdbc.JDBCStatsPublisher.init(JDBCStatsPublisher.java:265)
          at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:412)
      Caused by: java.lang.SecurityException: sealing violation: package org.apache.derby.impl.jdbc.authentication is sealed
          at java.net.URLClassLoader.getAndVerifyPackage(URLClassLoader.java:388)
          at java.net.URLClassLoader.defineClass(URLClassLoader.java:417)
    
    Solution:
      Copy the mysql-connector-java-5.1.6-bin.jar package into the $HIVE_HOME/lib directory.
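      A minimal sketch of that step, assuming the connector jar has already been downloaded to the current directory (the intent is that the metastore uses MySQL instead of the embedded Derby classes that trigger the sealing violation):

        cp mysql-connector-java-5.1.6-bin.jar $HIVE_HOME/lib/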
    

    </br>
    </br>

    2. Error when starting Hive:
      [ERROR] Terminal initialization failed; falling back to unsupported
      java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected
          at jline.TerminalFactory.create(TerminalFactory.java:101)
          at jline.TerminalFactory.get(TerminalFactory.java:158)
      Exception in thread "main" java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected
          at jline.console.ConsoleReader.<init>(ConsoleReader.java:230)
          at jline.console.ConsoleReader.<init>(ConsoleReader.java:221)
    
    Solution:
      Copy the jline-2.12.jar that ships with the current Hive from $HIVE_HOME/lib into $HADOOP_HOME/share/hadoop/yarn/lib, and delete the older jline jar from that same $HADOOP_HOME/share/hadoop/yarn/lib directory.
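      A sketch of those two steps (the old jline file name, jline-0.9.94.jar, is what Hadoop 2.x normally ships and is an assumption here):

        cp $HIVE_HOME/lib/jline-2.12.jar $HADOOP_HOME/share/hadoop/yarn/lib/
        rm $HADOOP_HOME/share/hadoop/yarn/lib/jline-0.9.94.jar   # remove the old, incompatible jline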
    

    </br>
    </br>

    3. Error when starting Hive:
      Exception in thread "main" java.lang.RuntimeException: java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D
            at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
            at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
      Caused by: java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D
            at org.apache.hadoop.fs.Path.initialize(Path.java:206)
            at org.apache.hadoop.fs.Path.<init>(Path.java:172)
      Caused by: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D
            at java.net.URI.checkPath(URI.java:1804)
            at java.net.URI.<init>(URI.java:752)
            at org.apache.hadoop.fs.Path.initialize(Path.java:203)
            ... 11 more
    
    Solution:
      1. Look through hive-site.xml; you will find properties whose values contain "system:java.io.tmpdir".
      2. Create a logs directory under ${HIVE_HOME}.
      3. Change the value of every property containing "system:java.io.tmpdir" to a subdirectory of ${HIVE_HOME}/logs,
      i.e. add/modify the following properties:
      <property>
          <name>hive.exec.local.scratchdir</name>
          <value>${HIVE_HOME}/logs/HiveJobsLog</value>
          <description>Local scratch space for Hive jobs</description>
      </property>
      <property>
          <name>hive.downloaded.resources.dir</name>
          <value>${HIVE_HOME}/logs/ResourcesLog</value>
          <description>Temporary local directory for added resources in the remote file system.</description>
      </property>
      <property>
          <name>hive.querylog.location</name>
          <value>${HIVE_HOME}/logs/HiveRunLog</value>
          <description>Location of Hive run time structured log file</description>
      </property>
      <property>
          <name>hive.server2.logging.operation.log.location</name>
          <value>${HIVE_HOME}/logs/OpertitionLog</value>
          <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
      </property>
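      The directories referenced above need to exist before Hive starts; a minimal sketch of creating them (note that, as a reader comment at the end of this article points out, hive-site.xml may not expand the ${HIVE_HOME} environment variable, so you may need to write an absolute path into the XML instead):

        mkdir -p $HIVE_HOME/logs/HiveJobsLog \
                 $HIVE_HOME/logs/ResourcesLog \
                 $HIVE_HOME/logs/HiveRunLog \
                 $HIVE_HOME/logs/OpertitionLog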
    

    </br>
    </br>

    4. Error when starting Hive:
      Caused by: java.sql.SQLException: Access denied for user 'root'@'master' (using password: YES)
          at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
          at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2870)
          at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:812)
          at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3269)
    
    Solution:
      The MySQL password is wrong; check that the password configured in hive-site.xml matches the actual MySQL password.
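      A sketch of the two things to check (javax.jdo.option.ConnectionPassword is Hive's standard metastore password property; the GRANT statement is MySQL 5.x syntax, and the user, host, and password values are placeholders):

        # the password Hive sends is configured here:
        grep -A 1 "javax.jdo.option.ConnectionPassword" $HIVE_HOME/conf/hive-site.xml

        # make sure MySQL actually accepts that user/password from the Hive host:
        mysql -uroot -p -e "GRANT ALL PRIVILEGES ON *.* TO 'root'@'master' IDENTIFIED BY '<password>'; FLUSH PRIVILEGES;"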
    

    </br>
    </br>

    5. Error when working with table data (loading data into a table):
      FAILED: RuntimeException org.apache.hadoop.security.AccessControlException: Permission denied: user=services02, access=EXECUTE, inode="/tmp":services01:supergroup:drwx------
    
    Solution:
      When user=services02 does not match inode="/tmp":services01:supergroup, the host Hive is logged in on is not the host whose HDFS is in the active state.
      Switch the user=services02 host to the HDFS host that is in the active state.
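      A quick way to confirm the ownership mismatch from the error message (a sketch):

        hadoop fs -ls -d /tmp     # shows the owner (services01 here) and the drwx------ mode that blocks other users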
    

    </br>
    </br>
    </br>

    II. Extending Hive with Parquet support:

    1. Creating a table stored as Parquet with the following statement (Hive 0.12):
      create table parquet_test(x int, y string) 
      row format serde 'parquet.hive.serde.ParquetHiveSerDe'    
      stored as inputformat 'parquet.hive.DeprecatedParquetInputFormat'                    
      outputformat 'parquet.hive.DeprecatedParquetOutputFormat';
    
    Error:
      FAILED: SemanticException [Error 10055]: Output Format must implement 
      HiveOutputFormat, otherwise it should be either IgnoreKeyTextOutputFormat or 
      SequenceFileOutputFormat
    
    Solution:
      The parquet.hive.DeprecatedParquetOutputFormat class is not on Hive's CLASSPATH.
      Download parquet-hive-1.2.5.jar separately (the jar that contains this class ships under $IMPALA_HOME/lib) and create a symlink to it in $HIVE_HOME/lib:

      cd $HIVE_HOME/lib
      ln -s /home/hadoop/soft/gz.zip/parquet-hive-1.2.5.jar
    

    </br>
    </br>

    2. Submitting the CREATE TABLE statement again (Hive 0.12):
      create table parquet_test(x int, y string) 
      row format serde 'parquet.hive.serde.ParquetHiveSerDe'    
      stored as inputformat 'parquet.hive.DeprecatedParquetInputFormat'                    
      outputformat 'parquet.hive.DeprecatedParquetOutputFormat';
    
    Error:
      Exception in thread "main" java.lang.NoClassDefFoundError: parquet/hadoop/api/WriteSupport
      at java.lang.Class.forName0(Native Method)
      at java.lang.Class.forName(Class.java:247)
    
    Solution:
      Install Parquet via yum:
      sudo yum -y install parquet

    The Parquet jars are installed under /usr/lib/parquet; copy every jar in /usr/lib/parquet (except the javadoc.jar and sources.jar files) into $HIVE_HOME/lib (a sketch follows below).
    If yum cannot find the parquet package, configure a yum repository that provides it first.
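    A sketch of the copy step, skipping the javadoc and sources jars:

      cd /usr/lib/parquet
      for jar in *.jar; do
        case "$jar" in
          *javadoc.jar|*sources.jar) ;;        # skip documentation and source jars
          *) cp "$jar" $HIVE_HOME/lib/ ;;
        esac
      done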
    

    </br>
    </br>

    3. Submitting the CREATE TABLE statement once more (Hive 0.12):
      create table parquet_test(x int, y string) 
      row format serde 'parquet.hive.serde.ParquetHiveSerDe'    
      stored as inputformat 'parquet.hive.DeprecatedParquetInputFormat'                    
      outputformat 'parquet.hive.DeprecatedParquetOutputFormat';
    
    Error:
      FAILED: Error in metadata: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
    
    Solution:
      Start the metastore service first:
      hive --service metastore
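      To leave the metastore running in the background, a sketch (the log path is arbitrary):

        nohup hive --service metastore > /tmp/hive-metastore.log 2>&1 &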
    

    </br>
    </br>

    4. Error when inserting data from a textFile-format table into a Parquet table:
      Error: java.lang.RuntimeException: Error in configuring object
            at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
            at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
            at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
            at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:426))
      Caused by: java.lang.reflect.InvocationTargetException
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            ... 9 more
      Caused by: java.lang.RuntimeException: Map operator initialization failed
            at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.configure(ExecMapper.java:134)
            ... 22 more
      Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.NullPointerException
            at org.apache.hadoop.hive.ql.exec.FileSinkOperator.initializeOp(FileSinkOperator.java:386)
            at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:377)
            ... 22 more
      Caused by: java.lang.NullPointerException
            at org.apache.hadoop.hive.ql.exec.FileSinkOperator.initializeOp(FileSinkOperator.java:323)
            ... 34 more
      FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
            MapReduce Jobs Launched: 
            Job 0: Map: 1   HDFS Read: 0 HDFS Write: 0 FAIL
            Total MapReduce CPU Time Spent: 0 msec
    
    Solution:
      Add the following to hive-env.sh:
      JAVA_HOME=/home/hadoop/soft/jdk1.7.0_67
      HADOOP_HOME=/home/hadoop/soft/hadoop-2.4.1
      HIVE_HOME=/home/hadoop/soft/hive-0.12.0
      export HIVE_CONF_DIR=$HIVE_HOME/conf
      export HIVE_AUX_JARS_PATH=$HIVE_HOME/lib
      export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$HADOOP_HOME/lib:$HIVE_HOME/lib
    

    </br>
    </br>

    5. Error when inserting data from a textFile table into a Parquet table:
      Error: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error 
                while processing row {"time":1471928602,"uid":687994,"billid":1004,"archiveid":null,"year":"2016","mouth":"2016-08","day":"2016-08-23","hour":"2016-08-23-04"}
      Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row 
      Caused by: java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to org.apache.hadoop.io.ArrayWritable
    
    Solution:
    1. Configure hive-env.sh and add:
          JAVA_HOME=/home/hadoop/soft/jdk1.7.0_67
          HADOOP_HOME=/home/hadoop/soft/hadoop-2.4.1
          HIVE_HOME=/home/hadoop/soft/hive-0.12.0
          export HIVE_CONF_DIR=$HIVE_HOME/conf
          export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$HADOOP_HOME/lib:$HIVE_HOME/lib

    2. Install Parquet via yum:
          sudo yum -y install parquet

    3. The Parquet jars are installed under /usr/lib/parquet; copy every jar in /usr/lib/parquet (except the javadoc.jar and sources.jar files) into $HIVE_HOME/lib.

    4. File compression format: before running the INSERT, set the compression format. Three codecs are available, and SNAPPY is the usual choice:

        CompressionCodecName.UNCOMPRESSED
        CompressionCodecName.SNAPPY
        CompressionCodecName.GZIP

    5. In the Hive CLI run the statement set parquet.compression = SNAPPY; and then run the INSERT (a sketch follows below).
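    A minimal sketch of step 5 run from the shell (the source table name textfile_test is a placeholder):

      hive -e "
        SET parquet.compression=SNAPPY;
        INSERT OVERWRITE TABLE parquet_test SELECT x, y FROM textfile_test;
      "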


    </br>
    </br>

    6. Error when creating a table stored as Parquet:
      FAILED: SemanticException [Error 10055]: Output Format must implement HiveOutputFormat, otherwise it should be either IgnoreKeyTextOutputFormat or SequenceFileOutputFormat
    
    Solution:
      Check the Hive version. If Hive is older than 0.13, create the table with: stored as inputformat 'parquet.hive.DeprecatedParquetInputFormat' outputformat 'parquet.hive.DeprecatedParquetOutputFormat';
      If Hive is 0.13 or later, simply use: stored as parquet; (a sketch follows below)
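      For example, on Hive 0.13+ the table from item 1 above can be declared natively (a sketch):

        hive -e "CREATE TABLE parquet_test (x INT, y STRING) STORED AS PARQUET;"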
    

    </br>
    </br>

    III. Exceptions when upgrading Hive 1.x to 2.x


    1. Error when starting Hive:
      Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
                at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:591)
                at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:531)
      Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
                at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:226)
      Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
                at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1654)
                at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:80)
      Caused by: java.lang.reflect.InvocationTargetException
                at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
                at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
      Caused by: MetaException(message:Hive Schema version 2.1.0 does not match metastore schema version 1.2.0 Metastore is not upgraded or corrupt)
                at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:7768)
                at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:7731)
    
    Solution:
      (1) Delete the Hive data and Hive directories on HDFS:
            hadoop fs -rm -r -f /tmp/hive
            hadoop fs -rm -r -f /user/hive
      (2) Delete Hive's metadata in MySQL:
            mysql -uroot -p
            drop database hive;
    
    This is immediately followed by another error:
      Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
                at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:591)
                at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:531)
      Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
                at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:226)
                at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:366)
      Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
                at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1654)
                at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:80)
      Caused by: java.lang.reflect.InvocationTargetException
                at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
                at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
      Caused by: MetaException(message:Version information not found in metastore. )
                at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:7753)
                at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:7731)
    
    Solution:
      Initialize Hive, using MySQL as the metastore database:
      schematool -dbType mysql -initSchema
    

    </br>
    </br>

    2. Error when using HPL/SQL:
      java.sql.SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://localhost:10000: java.net.ConnectException: Connection refused
    
    Solution (if option 1 still fails, move on to option 2):

    Option 1:
    Start the HiveServer2 service:

      cd $HIVE_HOME/bin
      ./hiveserver2
    

    Option 2:

    1. Edit hplsql-site.xml and change the following settings:
      <property>
          <name>hplsql.conn.default</name>
          <value>hive2conn</value>
          <description>The default connection profile</description>
      </property>
      <property>
          <name>hplsql.conn.hive2conn</name>
          <value>org.apache.hive.jdbc.HiveDriver;jdbc:hive2://m1:10000</value>
          <description>HiveServer2 JDBC connection</description>
      </property>
    
    2. Then start HiveServer2:
        cd $HIVE_HOME/bin
        ./hiveserver2
    
    3. Test the connection with Beeline:
        cd $HIVE_HOME/bin
        ./beeline
        !connect jdbc:hive2://m1:10000
    

    </br>
    </br>

    3. Error when executing a stored-procedure file with HPL/SQL on Hive 2.0:
      java.sql.SQLException: 
      Could not open client transport with JDBC Uri: jdbc:hive2://localhost:10000: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: centos is not allowed to impersonate hive
                at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:209)
                at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107)
      Caused by: org.apache.hive.service.cli.HiveSQLException: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: centos is not allowed to impersonate hive
                at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:266)
      Caused by: org.apache.hive.service.cli.HiveSQLException: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: centos is not allowed to impersonate hive
                at org.apache.hive.service.cli.session.SessionManager.createSession(SessionManager.java:336)
                at org.apache.hive.service.cli.session.SessionManager.openSession(SessionManager.java:279)
      Caused by: java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: centos is not allowed to impersonate hive
                at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:89)
                at org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36)
      Caused by: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: centos is not allowed to impersonate hive
                at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:591)
                at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:526)
      Caused by: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException:User: centos is not allowed to impersonate hive
                at org.apache.hadoop.ipc.Client.call(Client.java:1470)
                at org.apache.hadoop.ipc.Client.call(Client.java:1401)
    
    Solution:
    1. Stop the HDFS service:
      stop-dfs.sh
    2. Edit core-site.xml:
      Add the following settings to $HADOOP_HOME/etc/hadoop/core-site.xml. Note that the centos part of the hadoop.proxyuser.centos.hosts property name must match the user name reported in the User: ... part of the error message:
      <property>
          <name>hadoop.proxyuser.centos.groups</name>
          <value>*</value>
      </property>
      <property>
          <name>hadoop.proxyuser.centos.hosts</name>
          <value>*</value>
      </property>
    
    3. Distribute the updated core-site.xml to the other hosts (a sketch follows below).
    4. Start the HDFS service:
      start-dfs.sh
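      A sketch of step 3; the hostnames s1 and s2 are placeholders for the other cluster nodes:

        for host in s1 s2; do
          scp $HADOOP_HOME/etc/hadoop/core-site.xml $host:$HADOOP_HOME/etc/hadoop/
        done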

    </br>
    </br>

    4. Error when executing a stored-procedure file with HPL/SQL on Hive 2.0:
    java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. Permission denied: user=anonymous, access=EXECUTE, inode="/tmp":centos:supergroup:drwx------
            at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)
            at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)
            at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:208)
    
    Solution:
    1. Fix the permissions on the directory from the error message:
        hadoop fs -chmod 755 /tmp

    2. Change the following property in hive-site.xml:
    <property>
        <name>hive.scratch.dir.permission</name>
        <value>755</value>
    </property>
    
    

    </br>
    </br>

    5. Error when using HPL/SQL stored procedures

    With HPL/SQL 2.2.1, a FROM statement always looks up the hplsql.dual.table value in the configuration file and keeps failing with errors that the dual table cannot be found, or that a column named in the SELECT statement cannot be found.

    Solution:

    HPL/SQL website:

      http://www.hplsql.org/doc

    Note: you must use HPL/SQL version 0.3.17 or later.
    Download the HPL/SQL 0.3.17 tar.gz, extract it, place hplsql-0.3.17.jar under $HIVE_HOME, and rename it to match the hive-hplsql-*.jar pattern, e.g. hive-hplsql-0.3.17.jar (a sketch follows below).
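      A sketch of those steps, assuming the 0.3.17 tarball has already been downloaded from the site above (the extracted directory name and the destination directly under $HIVE_HOME follow the article's wording and are assumptions):

        tar -xzf hplsql-0.3.17.tar.gz
        cp hplsql-0.3.17/hplsql-0.3.17.jar $HIVE_HOME/hive-hplsql-0.3.17.jar   # copy and rename in one step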
    

    </br>
    </br>
    </br>

    IV. Hive on Spark as the execution engine:


      Official configuration reference:
          https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-Spark
      Troubleshooting guide for common pitfalls:
          http://www.cnblogs.com/breg/p/5552342.html
      Setup tutorial with some pitfall fixes:
          http://www.cnblogs.com/linbingdong/p/5806329.html
    

    </br>
    </br>

    #####1. Error when starting Hive:
    ```
    Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
            at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:591)
            at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:531)
    Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
            at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:226)
            at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:366)
    Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
            at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1654)
            at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:80)
    Caused by: java.lang.reflect.InvocationTargetException
            at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused
            at org.apache.thrift.transport.TSocket.open(TSocket.java:226)
            at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:477)
    Caused by: java.net.ConnectException: Connection refused
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    ```
    
    #####Solution:
    ```
        Start the metastore service:
        hive --service metastore
    ```
    
    
    ---
    </br>
    </br>
    #####2. Error when running a query on the Spark engine:
    ```
      Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)' FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
    ```
    
    ######Solution:
    ```
      When the error log shows Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask, it is usually because the Spark distribution was built with the Hive dependencies bundled in; in that case build a Spark package by hand yourself (a sketch follows below)
    ```
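    A sketch of such a build, assuming Spark 1.6.x sources and a Hadoop 2.x cluster; the exact profile names depend on the Spark and Hadoop versions in use:
    ```
    cd /path/to/spark-1.6.0-source
    # build a distribution that does not bundle the Hive classes
    ./make-distribution.sh --name hadoop2-without-hive --tgz \
        -Pyarn -Phadoop-2.6 -Phadoop-provided -Pparquet-provided
    ```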
    
    
    
    ---
    </br>
    </br>
    #####3. Error when running a query on the Spark engine:
    ```
    2016-12-19T20:19:15,491 ERROR [main] client.SparkClientImpl: Error while waiting for client to connect.
    java.util.concurrent.ExecutionException: java.lang.RuntimeException: Cancel client 'dcee57ba-ea77-4e92-bd43-640e8385e2e7'. 
    Error: Child process exited before connecting back with error log 
            Warning: Ignoring non-spark config property: hive.spark.client.server.connect.timeout=200000
            Warning: Ignoring non-spark config property: hive.spark.client.rpc.threads=8
            Warning: Ignoring non-spark config property: hive.spark.client.connect.timeout=1000
            Warning: Ignoring non-spark config property: hive.spark.client.secret.bits=256
            Warning: Ignoring non-spark config property: hive.spark.client.rpc.max.size=52428800
            16/12/19 20:19:15 INFO client.RemoteDriver: Connecting to: m1:48286
    Exception in thread "main" java.lang.NoSuchFieldError: SPARK_RPC_SERVER_ADDRESS
            at org.apache.hive.spark.client.rpc.RpcConfiguration.<clinit>(RpcConfiguration.java:45)
            at org.apache.hive.spark.client.RemoteDriver.<init>(RemoteDriver.java:134)
    Caused by: java.lang.RuntimeException: Cancel client 'dcee57ba-ea77-4e92-bd43-640e8385e2e7'. 
    Error: Child process exited before connecting back with error log Warning: Ignoring non-spark config property: hive.spark.client.server.connect.timeout=200000
            Warning: Ignoring non-spark config property: hive.spark.client.rpc.threads=8
            Warning: Ignoring non-spark config property: hive.spark.client.connect.timeout=1000
            Warning: Ignoring non-spark config property: hive.spark.client.secret.bits=256
            Warning: Ignoring non-spark config property: hive.spark.client.rpc.max.size=52428800
            16/12/19 20:19:15 INFO client.RemoteDriver: Connecting to: m1:48286
    Exception in thread "main" java.lang.NoSuchFieldError: SPARK_RPC_SERVER_ADDRESS
          at org.apache.hive.spark.client.rpc.RpcConfiguration.<clinit>(RpcConfiguration.java:45)
          at org.apache.hive.spark.client.RemoteDriver.<init>(RemoteDriver.java:134)
    ```
    
    ######Solution:
    ```
    Reference:
        http://www.cnblogs.com/breg/p/5552342.html
    ```

    When the log shows ```Exception in thread "main" java.lang.NoSuchFieldError: SPARK_RPC_SERVER_ADDRESS```, it is usually because the **spark** distribution was built with the **Hive** dependencies bundled in; in that case build a **spark** package by hand yourself
    
    
    
    
    ---
    </br>
    </br>
    #####4. After installing a pre-built Spark package, starting the master fails with:
    ```
      Spark Command: /home/centos/soft/jdk1.7.0_67/bin/java -cp /home/centos/soft/spark/conf/:/home/centos/soft/spark/lib/spark-assembly-1.6.0-hadoop2.6.0.jar:/home/centos/soft/hadoop/etc/hadoop/:/home/centos/soft/hadoop/etc/hadoop/:/home/centos/soft/hadoop/lib/spark-assembly-1.6.0-hadoop2.6.0.jar -Xms1g -Xmx1g -XX:MaxPermSize=256m org.apache.spark.deploy.master.Master --ip m1 --port 7077 --webui-port 8080
      ========================================
      Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/conf/Configuration
            at java.lang.Class.getDeclaredMethods0(Native Method)
            at java.lang.Class.privateGetDeclaredMethods(Class.java:2570)
            at java.lang.Class.getMethod0(Class.java:2813)
      Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.conf.Configuration
            at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
            at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
            at java.security.AccessController.doPrivileged(Native Method)
    ```
    
    ######Solution:
    ```
      Problems of this kind mean the Spark source build went wrong; rebuilding with Maven is recommended
    ```
    
    
    
    ---
    </br>
    </br>
    #####5. After installing a pre-built Spark package, starting the master fails with:
    ```
      Exception in thread "main" java.lang.NoClassDefFoundError: org/slf4j/Logger
              at java.lang.Class.getDeclaredMethods0(Native Method)
              at java.lang.Class.privateGetDeclaredMethods(Class.java:2570)
              at java.lang.Class.getMethod0(Class.java:2813)
      Caused by: java.lang.ClassNotFoundException: org.slf4j.Logger
              at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
              at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
              at java.security.AccessController.doPrivileged(Native Method)
    ```
    
    
    ######Solution:
    ```
      Problems of this kind mean the Spark source build went wrong; rebuilding with Maven is recommended
    ```
    
    
    
    ---
    </br>
    </br>
    #####6. Error when using an HPL/SQL stored procedure on Hive on Spark
    The error reported in **Beeline**:
    ```
      Unhandled exception in HPL/SQL
      java.sql.SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://m1:10000: null
                at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:209)
                at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107)
                at java.sql.DriverManager.getConnection(DriverManager.java:571)
      Caused by: org.apache.thrift.transport.TTransportException
                at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
                at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
    ```
    
    
    The error reported in **Hive-log.log**:
    ```
    ERROR [HiveServer2-Handler-Pool: Thread-43] server.TThreadPoolServer: 
    Thrift error occurred during processing of message.
    org.apache.thrift.protocol.TProtocolException: Missing version in readMessageBegin, old client?
            at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:228)
            at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:27)
            at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
            at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
            at java.lang.Thread.run(Thread.java:745)
    ```
    
    ######Solution:
    Set the ```hive.server2.authentication``` property to ```NONE```:
    ```
    <property>
        <name>hive.server2.authentication</name>
        <value>NONE</value>
    </property>
    ```
    
    
    
    ---
    </br>
    </br>
    #####7. Submitting jobs with spark-submit --master yarn works fine, but submitting a Spark job through an HPL/SQL stored procedure fails with:
    ```
      16/12/26 16:45:01 WARN cluster.ClusterScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered
      16/12/26 16:45:16 WARN cluster.ClusterScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered
    ```
    
    ######Solution:

    This problem can have many causes. In my case the fix was to comment out the existing ```127.0.0.1``` entry in ```/etc/hosts``` and set a new value
    ```
      sudo vi /etc/hosts
          #127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
          127.0.0.1   localhost
    ```
    
    
    
    
    
    
    ---
    </br>
    </br>
    #####8. The error message is:
    ```
    java.lang.StackOverflowError   at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:53)
    ```
    
    ######Solution:
    The `where` clause of the SQL statement is too long, and the string overflows the stack
    
    
    
    
    
    
    
    
    ---
    </br>
    </br>
    #####9. The error message is:
    ```
    Error: Could not find or load main class org.apache.hive.beeline.BeeLine
    ```
    
    ######Solution:
    Recompile Hive with the `-Phive-thriftserver` parameter
    
    
    
    
    
    
    
    
    
    ---
    </br>
    </br>
    #####10. The error message is:
    ```
    check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1
    ```
    
    ######Solution:
    Use a newer version of the `mysql-connector` driver
    
    
    
    
    
    
    
    
    ---
    </br>
    </br>
    #####11. The error message is:
    ```
    java.lang.NoSuchMethodError: org.apache.parquet.schema.Types$MessageTypeBuilder.addFields([Lorg/apache/parquet/schema/Type;)Lorg/apache/parquet/schema/Types$BaseGroupBuilder;
    ```
    
    ######Solution:
    A version conflict: align the `parquet` component versions used by `hive` and `spark`
    
    
    
    
    
    
    ---
    </br>
    </br>
    #####12. The error message is:
    ```
    Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)'
    ```
    
    ######Solution:
    Do not use the pre-built Spark from the official site. Download the Spark source and rebuild it yourself, making sure the Spark version you build satisfies the major Spark version required by the pom.xml in the Hive source. If you are using Hive on Spark, do not pass any -Phive related parameters during the build
    
    
    
    
    
    
    
    
    
    
    ---
    </br>
    </br>
    #####13. The error message is:
    ```
    java.lang.NoSuchFieldError: SPARK_RPC_SERVER_ADDRESS  at org.apache.hive.spark.client.rpc.RpcConfiguration.<clinit>(RpcConfiguration.java:45)
    ```
    
    ######Solution:
    Same as issue 12: do not use the pre-built Spark from the official site; download the Spark source and rebuild it yourself, making sure the built Spark version satisfies the major Spark version required by the pom.xml in the Hive source, and, for Hive on Spark, do not pass any -Phive related parameters during the build
    
    
    
    
    
    
    
    
    
    ---
    </br>
    </br>
    </br>


      Reader comments

      • neolc: In troubleshooting item 3, the author writes
        "Change the value of every property containing "system:java.io.tmpdir" to a subdirectory of ${HIVE_HOME}/logs,
        i.e. add the property
        <property>
        <name>hive.exec.local.scratchdir</name>
        <value>${HIVE_HOME}/logs/HiveJobsLog</value>
        <description>Local scratch space for Hive jobs</description>
        </property>

        However, my experiment shows that an XML configuration file cannot reference an environment variable with $; the variable is not resolved, and a directory literally named "${HIVE_HOME}" is created instead.
        咸鱼翻身记: @neolc The ${HIVE_HOME} there was really just meant to make it clear to readers that this is the Hive installation path.
