Integrating HBase with Kerberos


Author: 一个奇怪的程序员 | Published 2019-08-14 20:49

    Environment

    This article describes how to integrate Kerberos into an existing non-secure cluster, using the following environment:

    Component   Version
    OS          CentOS-7
    JDK         jdk-8u111-linux
    Hadoop      hadoop-2.5.2
    Zookeeper   zookeeper-3.4.9
    HBase       hbase-1.3.1

    Preparation

    • Disable the firewall

      Stop the firewall: systemctl stop firewalld.service
      Disable it at boot: systemctl disable firewalld.service

    • Disable SELinux

      Temporarily: setenforce 0
      Permanently: edit /etc/selinux/config and set SELINUX=disabled
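The two changes above can be double-checked from the command line (a quick sanity check, assuming systemd and the libselinux-utils package are present):

```shell
# Confirm the firewall is stopped and SELinux is no longer enforcing.
systemctl is-active firewalld   # expect "inactive"
getenforce                      # expect "Permissive" (until reboot) or "Disabled"
```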
      

    Installing Kerberos

    Wherever MQ or MQ.COM appears below, it can be replaced with your own REALM.

    • Install the Kerberos packages

      yum install -y krb5-libs krb5-server krb5-workstation pam_krb5
      
    • Edit krb5.conf and kdc.conf

      • /etc/krb5.conf
      [logging]
       default = FILE:/var/log/krb5libs.log
       kdc = FILE:/var/log/krb5kdc.log
       admin_server = FILE:/var/log/kadmind.log
      [libdefaults]
       dns_lookup_realm = true
       ticket_lifetime = 24h
       renew_lifetime = 7d
       forwardable = true
       rdns = false
       pkinit_anchors = /etc/pki/tls/certs/ca-bundle.crt
       default_realm = MQ.COM
       dns_lookup_kdc = true
      [realms]
       MQ.COM = {
        default_domain=mq.com
        kdc = mq
        admin_server = mq
       }
      [domain_realm]
       .mq.com = MQ.COM
       mq.com = MQ.COM
      
      • /var/kerberos/krb5kdc/kdc.conf
      [kdcdefaults]
        v4_mode = nopreauth
        kdc_tcp_ports = 88
      
      [realms]
        MQ.COM = {
          acl_file = /var/kerberos/krb5kdc/kadm5.acl
          dict_file = /usr/share/dict/words
          admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
          supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal des-cbc-crc:v4 des-cbc-crc:afs3
        }
      
    • Create the database

      kdb5_util create -s -r MQ.COM
      
    • Edit kadm5.acl

      Path: /var/kerberos/krb5kdc/kadm5.acl
      Set the following content:
      */admin@MQ.COM    *
      
    • Start Kerberos

      Start the services:
      systemctl start krb5kdc
      systemctl start kadmin
      Enable them at boot:
      systemctl enable krb5kdc
      systemctl enable kadmin
      
    • Edit /etc/ssh/ssh_config

      GSSAPIAuthentication yes
      GSSAPIDelegateCredentials yes
      GSSAPITrustDNS yes
      
    • Reload sshd

      systemctl reload sshd
      
    • Configure PAM

      authconfig-tui
      Select "[*] Use Kerberos" and choose Next,
      confirm that Realm, KDC and Admin Server are correct,
      select "[*] Use DNS to resolve hosts to realms"
             "[*] Use DNS to locate KDCs for realms"
      and choose OK to save.
      authconfig --enablekrb5 --update
      
    • Common commands

      • Enter the admin shell

        kadmin.local
        
      • Add a principal

        addprinc username
        addprinc -randkey username
        addprinc -randkey username/host
        
      • Delete a principal

        delete_principal username
        
      • Show a principal

        getprinc username
        
      • Authenticate

        kinit username
        kinit -k -t <keytab path> principal
        
      • Check ticket status

        klist
        
      • Destroy tickets

        kdestroy
        
      • Generate a keytab

        ktadd -k <keytab path> principal [principal ...]
        
      • List the entries in a keytab

        klist -ket <keytab path>
        
      • Set the maximum renewable ticket lifetime

        modprinc -maxrenewlife 7days principal
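Putting these commands together, the principals and keytabs referenced in the Hadoop, ZooKeeper, and HBase configs later in this article could be created roughly as follows. This is a sketch: the host name mq, realm MQ.COM, and keytab paths are taken from the configs below; adjust them to your own cluster, and run on the KDC host as root.

```shell
# Create the service principals used later in this article.
kadmin.local -q "addprinc -randkey hadoop/mq@MQ.COM"
kadmin.local -q "addprinc -randkey zookeeper/mq@MQ.COM"
kadmin.local -q "addprinc -randkey zkcli@MQ.COM"
kadmin.local -q "addprinc -randkey hbase/mq@MQ.COM"

# Export them to the keytab paths the configs expect.
mkdir -p /opt/hadoop/keytab/hadoop
kadmin.local -q "ktadd -k /opt/hadoop/keytab/hadoop/hadoop.keytab hadoop/mq@MQ.COM"
kadmin.local -q "ktadd -k /opt/hadoop/keytab/hadoop/zookeeper.keytab zookeeper/mq@MQ.COM zkcli@MQ.COM"
kadmin.local -q "ktadd -k /opt/hadoop/keytab/hadoop/hbase.keytab hbase/mq@MQ.COM"

# Verify the keytab entries.
klist -ket /opt/hadoop/keytab/hadoop/hadoop.keytab
```

Remember to chown the keytab files to the users that run each service, since they contain long-term keys.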
        

    Hadoop configuration

    • Install jsvc

      Download commons-daemon-x.x.x-src.tar.gz and commons-daemon-x.x.x-bin.tar.gz
      Download from: http://mirror.bit.edu.cn/apache//commons/daemon/
      Unpack commons-daemon-x.x.x-src.tar.gz
      Enter the unpacked directory (the native source lives under src/native/unix) and run ./configure --with-java=$JAVA_HOME && make
      Copy the generated jsvc binary to the hadoop-x.x.x/libexec directory
      
    • Download the JCE policy files

      CentOS 5.6 and later use AES-256 encryption for Kerberos by default, while Oracle's default JCE policy limits key lengths to 128 bits (16 bytes), so the Java Cryptography Extension (JCE) Unlimited Strength policy files must be installed.
      Download for JDK 6:
      http://www.oracle.com/technetwork/java/javase/downloads/jce-6-download-429243.html
      Download for JDK 7:
      http://www.oracle.com/technetwork/java/javase/downloads/jce-7-download-432124.html
      Download for JDK 8:
      http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html
      After downloading, copy the unpacked jar files to $JAVA_HOME/jre/lib/security
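Whether the unlimited-strength policy actually took effect can be checked without writing a Java class, for example with jrunscript, which ships with the JDK (a quick check, assuming the JDK's bin directory is on the PATH):

```shell
# Print the maximum AES key length permitted by the installed JCE policy.
# 2147483647 means unlimited; 128 means the restricted default is still active.
jrunscript -e 'print(javax.crypto.Cipher.getMaxAllowedKeyLength("AES"))'
```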
      
    • Edit the configuration files

      • core-site.xml

        <?xml version="1.0" encoding="UTF-8"?>
        <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
        <configuration>
            <property>
                <name>local.realm</name>
                <value>MQ</value>
            </property>
            <property>
                <name>fs.defaultFS</name>
                <value>hdfs://mq:8020</value>
            </property>
            <property>
                <name>hadoop.tmp.dir</name>
                <value>/usr/local/hadoop-2.5.2/tmp</value>
            </property>
            <property>
                <name>hadoop.proxyuser.hduser.hosts</name>
                <value>*</value>
            </property>
            <property>
                <name>hadoop.proxyuser.hduser.groups</name>
                <value>*</value>
            </property>
            <property>
                <name>hadoop.security.authentication</name>
                <value>kerberos</value>
            </property>
        </configuration>
        
      • hdfs-site.xml

        <?xml version="1.0" encoding="UTF-8"?>
        <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
        <configuration>
          <property>
            <name>dfs.replication</name>
            <value>1</value>
          </property>
          <property>
            <name>dfs.data.dir</name>
            <value>/usr/local/hadoop-2.5.2/data</value>
          </property>
          <property>
            <name>dfs.name.dir</name>
            <value>/usr/local/hadoop-2.5.2/name</value>
          </property>
          <property>
            <name>dfs.block.access.token.enable</name>
            <value>true</value>
          </property>
          <!-- NameNode security config -->
          <property>
            <name>dfs.https.address</name>
            <value>mq:50470</value>
          </property>
          <property>
            <name>dfs.https.port</name>
            <value>50470</value>
          </property>
          <property>
            <name>dfs.namenode.keytab.file</name>
            <value>/opt/hadoop/keytab/hadoop/hadoop.keytab</value>
          </property>
          <property>
            <name>dfs.namenode.kerberos.principal</name>
            <value>hadoop/mq@MQ.COM</value>
          </property>
          <property>
            <name>dfs.namenode.kerberos.https.principal</name>
            <value>hadoop/mq@MQ.COM</value>
          </property>
          <!-- Secondary NameNode security config -->
          <property>
            <name>dfs.secondary.https.address</name>
            <value>mq:50495</value>
          </property>
          <property>
            <name>dfs.secondary.https.port</name>
            <value>50495</value>
          </property>
          <property>
            <name>dfs.secondary.namenode.keytab.file</name>
            <value>/opt/hadoop/keytab/hadoop/hadoop.keytab</value>
          </property>
          <property>
            <name>dfs.secondary.namenode.kerberos.principal</name>
            <value>hadoop/mq@MQ.COM</value>
          </property>
          <property>
            <name>dfs.secondary.namenode.kerberos.https.principal</name>
            <value>hadoop/mq@MQ.COM</value>
          </property>
          <!-- DataNode security config -->
          <property>
            <name>dfs.datanode.data.dir.perm</name>
            <value>700</value>
          </property>
          <property>
            <name>dfs.datanode.address</name>
            <value>0.0.0.0:1004</value>
          </property>
          <property>
            <name>dfs.datanode.http.address</name>
            <value>0.0.0.0:1006</value>
          </property>
          <property>
            <name>dfs.datanode.keytab.file</name>
            <value>/opt/hadoop/keytab/hadoop/hadoop.keytab</value>
          </property>
          <property>
            <name>dfs.datanode.kerberos.principal</name>
            <value>hadoop/mq@MQ.COM</value>
          </property>
          <property>
            <name>dfs.datanode.kerberos.https.principal</name>
            <value>hadoop/mq@MQ.COM</value>
          </property>
          <property>
            <name>dfs.web.authentication.kerberos.principal</name>
            <value>hadoop/mq@MQ.COM</value>
          </property>
          <property>
            <name>dfs.datanode.require.secure.ports</name>
            <value>false</value>
          </property>
          <property>
            <name>dfs.namenode.kerberos.principal.pattern</name>
            <value>hdfs/*@MQ.COM</value>
          </property>
        </configuration>
        
      • hadoop-env.sh, add the following

        export JSVC_HOME=/opt/hadoop/hadoop-2.5.2/libexec
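Because dfs.datanode.address above binds a privileged port (1004), the DataNode must be started as root through jsvc, and on Hadoop 2.x the scripts also need to know which user to drop privileges to. A sketch of the additional hadoop-env.sh lines (the hadoop user name and the pid/log directories are assumptions; match them to your own service account and layout):

```shell
# Assumed additions to hadoop-env.sh for a secure DataNode:
# the unprivileged user the DataNode switches to after binding its ports.
export HADOOP_SECURE_DN_USER=hadoop
export HADOOP_SECURE_DN_PID_DIR=/opt/hadoop/hadoop-2.5.2/pids
export HADOOP_SECURE_DN_LOG_DIR=/opt/hadoop/hadoop-2.5.2/logs

# With these set, the DataNode is started as root, e.g.:
#   sbin/hadoop-daemon.sh start datanode
```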
        

    ZooKeeper configuration

    • jaas.conf (new file in the ZooKeeper conf directory)

      Server {
        com.sun.security.auth.module.Krb5LoginModule required
        useKeyTab=true
        keyTab="/opt/hadoop/keytab/hadoop/zookeeper.keytab"
        storeKey=true
        useTicketCache=false
        principal="zookeeper/mq@MQ.COM";
      };
      Client {
        com.sun.security.auth.module.Krb5LoginModule required
        useKeyTab=true
        keyTab="/opt/hadoop/keytab/hadoop/zookeeper.keytab"
        storeKey=true
        useTicketCache=false
        principal="zkcli@MQ.COM";
      };
      
    • java.env (new file in the ZooKeeper conf directory)

      export JVMFLAGS="-Djava.security.auth.login.config=/opt/hadoop/zookeeper-3.4.9/conf/jaas.conf"
      export JAVA_HOME="/opt/hadoop/jdk1.8.0_111"
      
    • zoo.cfg, add the following

      kerberos.removeHostFromPrincipal=true
      kerberos.removeRealmFromPrincipal=true
      authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
      jaasLoginRenew=3600000
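After restarting ZooKeeper, the Client section of jaas.conf can be exercised from the command line; zkCli.sh picks up extra JVM flags from the CLIENT_JVMFLAGS environment variable (a quick check, assuming the paths used in this article):

```shell
# Point the ZooKeeper CLI at the same JAAS file so it logs in as zkcli@MQ.COM.
export CLIENT_JVMFLAGS="-Djava.security.auth.login.config=/opt/hadoop/zookeeper-3.4.9/conf/jaas.conf"
/opt/hadoop/zookeeper-3.4.9/bin/zkCli.sh -server mq:2181

# On success, the ZooKeeper server log records a SASL-authenticated
# session for zkcli@MQ.COM instead of an anonymous connection.
```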
      

    HBase configuration

    • hbase-site.xml

      <?xml version="1.0"?>
      <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
      <configuration>
        <property>
          <name>hbase.rootdir</name>
          <value>hdfs://mq:8020/hbase</value>
        </property>
        <property>
          <name>hbase.zookeeper.quorum</name>
          <value>mq</value>
        </property>
        <property>
          <name>hbase.cluster.distributed</name>
          <value>true</value>
        </property>
        <property>
          <name>hbase.security.authentication</name>
          <value>kerberos</value>
        </property>
        <property>
          <name>hbase.rpc.engine</name>
          <value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value>
        </property>
        <property>
          <name>hbase.regionserver.kerberos.principal</name>
          <value>hbase/mq@MQ.COM</value>
        </property>
        <property>
          <name>hbase.regionserver.keytab.file</name>
          <value>/opt/hadoop/keytab/hadoop/hbase.keytab</value>
        </property>
        <property>
          <name>hbase.master.kerberos.principal</name>
          <value>hbase/mq@MQ.COM</value>
        </property>
        <property>
          <name>hbase.master.keytab.file</name>
          <value>/opt/hadoop/keytab/hadoop/hbase.keytab</value>
        </property>
        <property>
          <name>dfs.namenode.kerberos.principal.pattern</name>
          <value>*</value>
        </property>
        <property>
          <name>javax.security.auth.useSubjectCredsOnly</name>
          <value>false</value>
        </property>
      </configuration>
      
    • zk-jaas.conf (new file in the HBase conf directory)

      Client {
        com.sun.security.auth.module.Krb5LoginModule required
        useKeyTab=true
        useTicketCache=false
        keyTab="/opt/hadoop/keytab/hadoop/zookeeper.keytab"
        principal="zookeeper/mq@MQ.COM";
      };
      
    • hbase-env.sh, add the following

      export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC -Djava.security.auth.login.config=/opt/hadoop/hbase-1.3.1/conf/zk-jaas.conf"
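Once all services are restarted, the whole setup can be verified end to end from a shell. A sketch, using the keytabs and principals from this article's configs (paths and principal names are the ones assumed throughout; adjust as needed):

```shell
# Without a ticket, access should be refused with a GSS/SASL error:
kdestroy
hadoop fs -ls /            # expected to fail: no valid Kerberos credentials

# Authenticate from the keytab, then retry HDFS:
kinit -k -t /opt/hadoop/keytab/hadoop/hadoop.keytab hadoop/mq@MQ.COM
hadoop fs -ls /

# Same round trip for HBase:
kinit -k -t /opt/hadoop/keytab/hadoop/hbase.keytab hbase/mq@MQ.COM
echo "list" | hbase shell  # should list tables instead of failing to authenticate
```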
      

    Original link: https://www.haomeiwen.com/subject/xvdgjctx.html