Reworking Hadoop's permission-check logic

Author: JX907 | Published 2021-02-07 18:25


Background: the Hive and HDFS permissions that our in-house data-security management system assigns to users ultimately have to be enforced when those users access Hadoop. Borrowing the approach used by Ranger, we use javassist to rewrite the NameNode's permission-check logic.

The permission error logged by a failing Spark job pinpoints exactly where the NameNode performs its check.

The job fails with:

    java.lang.RuntimeException: Cannot create staging directory 'hdfs://hadoop2cluster/user/hive/warehouse/dw_growth.db/t_base_properties/.hive-staging_hive_2018-08-17_15-01-41_191_5939745887863123482-1': Permission denied: user=fuqiang.zhao, access=WRITE, inode="/user/hive/warehouse/dw_growth.db/t_base_properties":hadoop:supergroup:drwxr-xr-x
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:238)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:179)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6547)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6529)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6481)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4290)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4260)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4233)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:853)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:975)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)
    

The stack trace shows that FSPermissionChecker.checkFsPermission is Hadoop's own user/group-based permission check:

    private void checkFsPermission(INode inode, int snapshotId, FsAction access,
        FsPermission mode) throws AccessControlException {
      if (user.equals(inode.getUserName(snapshotId))) { //user class
        if (mode.getUserAction().implies(access)) { return; }
      }
      else if (groups.contains(inode.getGroupName(snapshotId))) { //group class
        if (mode.getGroupAction().implies(access)) { return; }
      }
      else { //other class
        if (mode.getOtherAction().implies(access)) { return; }
      }
      throw new AccessControlException(
          toAccessControlString(inode, snapshotId, access, mode));
    }
    

We can borrow the javassist code-injection approach used by the HadoopAuthClassTransformer class in Apache Ranger 0.4 (the injection actually instruments the check method) to add our own authorization rules. (Ranger 0.4 targets Hadoop 2.6.0, so the parameters of check differ from those in Hadoop 2.6.5.)

    try {
        CtClass[] paramArgs = null;
        if (inodeClass != null && fsActionClass != null) {
            CtMethod checkMethod = null;
            if (snapShotClass != null) {
                paramArgs = new CtClass[] { inodeClass, snapShotClass, fsActionClass };
                try {
                    checkMethod = curClass.getDeclaredMethod("check", paramArgs);
                }
                catch (NotFoundException SSnfe) {
                    System.out.println("Unable to find check method with snapshot class. Trying to find check method without snapshot support.");
                    snapShotClass = null;
                    paramArgs = new CtClass[] { inodeClass, CtClass.intType, fsActionClass };
                    checkMethod = curClass.getDeclaredMethod("check", paramArgs);
                    withIntParamInMiddle = true;
                    System.out.println("Found method check() - without snapshot support");
                }
            }
            else {
                System.out.println("Snapshot class was already null ... Trying to find check method");
                paramArgs = new CtClass[] { inodeClass, fsActionClass };
                checkMethod = curClass.getDeclaredMethod("check", paramArgs);
                System.out.println("Found method check() - without snapshot support");
            }
            if (checkMethod != null) {
                if (snapShotClass == null && (!withIntParamInMiddle)) {
                    checkMethod.insertAfter("org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker.logHadoopEvent(ugi,$1,$2,true) ;");
                    CtClass throwable = ClassPool.getDefault().get("java.lang.Throwable");
                    checkMethod.addCatch("{ org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker.logHadoopEvent(ugi,$1,$2,false) ; throw $e; }", throwable);
                    checkMethod.insertBefore("{ if ( org.apache.hadoop.hdfs.server.namenode.XaSecureFSPermissionChecker.check(ugi,$1,$2) ) { return ; } }");
                }
                // ... (excerpt truncated; the remainder of HadoopAuthClassTransformer is omitted here)
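
For our own system the same pattern applies: the NameNode classpath needs a static hook class that the injected insertBefore fragment can call, returning true when our rules allow the access. The sketch below is illustrative only; CustomFSPermissionChecker, its check method and the PolicyCache lookup are hypothetical stand-ins for the data-security management system's rule store, and the class sits in the org.apache.hadoop.hdfs.server.namenode package so it can see the package-private INode type (the same trick XaSecureFSPermissionChecker uses).

    // Hypothetical hook class - a sketch, not Ranger's XaSecureFSPermissionChecker.
    // Lives in the NameNode package so the package-private INode type is visible.
    package org.apache.hadoop.hdfs.server.namenode;

    import org.apache.hadoop.fs.permission.FsAction;
    import org.apache.hadoop.security.UserGroupInformation;

    public class CustomFSPermissionChecker {

        // Called by the code injected at the top of FSPermissionChecker.check();
        // returning true means our rules allow the access, so Hadoop's own
        // user/group check is skipped.
        public static boolean check(UserGroupInformation ugi, INode inode, FsAction access) {
            if (ugi == null || inode == null) {
                return false; // fall through to Hadoop's default check
            }
            String user = ugi.getShortUserName();
            String path = inode.getFullPathName();
            return PolicyCache.isAccessAllowed(user, path, access);
        }

        // Hypothetical in-memory cache of rules synced from the management system.
        static class PolicyCache {
            static boolean isAccessAllowed(String user, String path, FsAction access) {
                return false; // deny by default; real rules come from the management system
            }
        }
    }

With the Hadoop 2.6.5 signature check(INode, int, FsAction), the injected fragment would pass $1 and $3 rather than $1 and $2, so the hook's parameter list has to match whichever check variant the transformer actually found.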
    

Ranger's install script hooks its own shell file into hadoop-env.sh:

[screenshot: 1.png]

That shell file appends the -javaagent setting to both the HADOOP_NAMENODE_OPTS and HADOOP_SECONDARYNAMENODE_OPTS variables, roughly as sketched below the screenshot:

[screenshot: 2.png]
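
For reference, the resulting hadoop-env.sh additions look roughly like the lines below; the agent jar name and path are placeholders, since the real values are written by the Ranger (XaSecure) installer and differ between releases.

    # Illustrative only - the actual agent jar name/path is filled in by the installer.
    export HADOOP_NAMENODE_OPTS="-javaagent:/path/to/hdfs-agent.jar ${HADOOP_NAMENODE_OPTS}"
    export HADOOP_SECONDARYNAMENODE_OPTS="-javaagent:/path/to/hdfs-agent.jar ${HADOOP_SECONDARYNAMENODE_OPTS}"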

To apply the same technique with javassist in our own project, start the process to be instrumented with the following VM argument:

    -javaagent:D:\test\target\javassist-1.0-SNAPSHOT.jar=JAgent
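
A minimal premain agent along these lines is sketched below. It is an assumption-laden illustration, not Ranger's actual HadoopAuthClassTransformer: the class name JAgent matches the agent argument above, the jar's MANIFEST.MF is assumed to declare Premain-Class: JAgent, the check signature used is the Hadoop 2.6.5 variant (INode, int, FsAction), the injected fragment calls the hypothetical CustomFSPermissionChecker sketched earlier, and it assumes FSPermissionChecker still exposes the ugi field that Ranger's injection relies on.

    // Sketch of a javassist premain agent - simplified, not Ranger's HadoopAuthClassTransformer.
    import java.lang.instrument.ClassFileTransformer;
    import java.lang.instrument.Instrumentation;
    import java.security.ProtectionDomain;

    import javassist.ClassPool;
    import javassist.CtClass;
    import javassist.CtMethod;
    import javassist.LoaderClassPath;

    public class JAgent {

        // Runs before the NameNode's main() thanks to -javaagent:...=JAgent.
        public static void premain(String agentArgs, Instrumentation inst) {
            inst.addTransformer(new ClassFileTransformer() {
                @Override
                public byte[] transform(ClassLoader loader, String className,
                                        Class<?> classBeingRedefined,
                                        ProtectionDomain protectionDomain,
                                        byte[] classfileBuffer) {
                    // Only rewrite the permission checker; every other class is left alone.
                    if (!"org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker".equals(className)) {
                        return null;
                    }
                    try {
                        ClassPool pool = ClassPool.getDefault();
                        pool.insertClassPath(new LoaderClassPath(loader));
                        CtClass cc = pool.get("org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker");
                        // Hadoop 2.6.5 signature: check(INode, int, FsAction)
                        CtMethod check = cc.getDeclaredMethod("check", new CtClass[] {
                                pool.get("org.apache.hadoop.hdfs.server.namenode.INode"),
                                CtClass.intType,
                                pool.get("org.apache.hadoop.fs.permission.FsAction") });
                        // If our own rules allow the access, return before Hadoop's user/group check.
                        // Assumes the 'ugi' field referenced by Ranger's injection is still present.
                        check.insertBefore(
                            "{ if (org.apache.hadoop.hdfs.server.namenode.CustomFSPermissionChecker.check(ugi, $1, $3)) { return; } }");
                        byte[] bytecode = cc.toBytecode();
                        cc.detach();
                        return bytecode;
                    } catch (Exception e) {
                        e.printStackTrace(); // never break NameNode startup on a transform failure
                        return null;
                    }
                }
            });
        }
    }

Packaged as javassist-1.0-SNAPSHOT.jar with Premain-Class: JAgent in its manifest, this fills the role that the HadoopAuthClassTransformer agent plays in Ranger 0.4.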
