1. HDFS configuration (proxyuser settings, in core-site.xml):

hadoop.proxyuser.HTTP.groups=*
hadoop.proxyuser.knox.groups=*
hadoop.proxyuser.knox.hosts=*
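After changing these proxyuser entries, a full HDFS restart is usually not required; the superuser group mappings can be reloaded in place (a sketch, assuming a shell with HDFS/YARN admin rights on the cluster):
# reload proxyuser/superuser group mappings on the NameNode
hdfs dfsadmin -refreshSuperUserGroupsConfiguration
# do the same on the ResourceManager so YARN picks up the change
yarn rmadmin -refreshSuperUserGroupsConfiguration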
2. In Spark: change hive.server2.transport.mode from binary to http

3. In Hive: hive.server2.thrift.http.path=cliservice
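Steps 2 and 3 together switch the Thrift server from the binary protocol to HTTP, served under the /cliservice path. As an alternative to editing the config, the same two properties could also be passed at start time with --hiveconf, the same mechanism step 5 uses for the port (a sketch of that alternative, not what is done here):
--hiveconf hive.server2.transport.mode=http --hiveconf hive.server2.thrift.http.path=cliservice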


4. In Spark:
Custom spark2-hive-site-override settings:
hive.server2.allow.user.substitution=true
hive.server2.authentication.kerberos.keytab=/etc/security/keytabs/spnego.service.keytab
hive.server2.authentication.kerberos.principal=HTTP/_HOST@INDATA.COM
hive.server2.authentication.spnego.keytab=/etc/security/keytabs/spnego.service.keytab
hive.server2.authentication.spnego.principal=HTTP/_HOST@INDATA.COM
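Before starting the service, it is worth checking that the SPNEGO keytab referenced above really contains the HTTP/_HOST principal for this host (a quick sanity check, assuming the keytab path from the override):
# list the principals stored in the keytab; expect HTTP/<fqdn>@INDATA.COM entries
klist -kt /etc/security/keytabs/spnego.service.keytab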

5. Start the service:
/usr/hdp/2.6.1.0-129/spark2/sbin/start-thriftserver.sh --master yarn --deploy-mode client \
  --queue default \
  --hiveconf hive.server2.thrift.http.port=10088 \
  --keytab /etc/security/keytabs/hive.service.keytab \
  --principal hive/indata-10-110-13-42.indata.com@INDATA.COM
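Once the Thrift server is up, a quick way to confirm the HTTP endpoint is reachable is to probe the configured port (a sketch; the SPNEGO probe assumes curl was built with GSSAPI support and that a valid ticket from step 6 is held):
# confirm something is listening on the HTTP port
ss -tlnp | grep 10088
# probe the cliservice path with SPNEGO; a 401 usually means the Kerberos negotiation failed,
# any other status shows the endpoint is answering
curl -sS -o /dev/null -w "%{http_code}\n" --negotiate -u : http://indata-10-110-13-42.indata.com:10088/cliservice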
6. Connection test:
1. Initialize the Kerberos ticket:
kinit -kt spnego.service.keytab HTTP/indata-10-110-13-45.indata.com@INDATA.COM
2. Connect with beeline (a non-interactive equivalent is sketched after this list):
!connect jdbc:hive2://indata-10-110-13-42.indata.com:10088/default;principal=HTTP/indata-10-110-13-42.indata.com@INDATA.COM?hive.server2.transport.mode=http;hive.server2.thrift.http.path=cliservice;
3. Connection succeeds.
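For scripted testing, the same ticket check and connection can be done non-interactively from the shell; this sketch reuses the JDBC URL from sub-step 2 above:
# confirm the ticket obtained with kinit is valid
klist
# connect without entering the beeline prompt and run a trivial query
beeline -u "jdbc:hive2://indata-10-110-13-42.indata.com:10088/default;principal=HTTP/indata-10-110-13-42.indata.com@INDATA.COM?hive.server2.transport.mode=http;hive.server2.thrift.http.path=cliservice" -e "show databases;"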
