Ensuring NameNode Reliability in Production

Author: 伍柒大人的三言两语 | Published 2018-02-25 14:23

    Preface: The Hadoop NameNode maintains all of the metadata for an HDFS cluster. If the NameNode suffers split-brain or its service becomes unavailable, the entire HDFS cluster is unusable, which in a production environment means incalculable losses. Monitoring the NameNode's state is therefore indispensable: whenever split-brain occurs or the service stops unexpectedly, operations staff must be notified promptly. NameNode HA does ship with a fencing mechanism, but Hadoop's default fencing performs no action, so it is up to the administrator to configure what the fencing step should actually do.

    This article covers: ① how to monitor NameNode state transitions; ② how to configure a custom fencing action.

    Cluster environment: Ambari 2.2.1, HDP 2.3.4, Hadoop 2.7.1

    [NameNode Status Monitoring]

    ① Fetch the cluster's NameNode status from the JMX URL:

    http://${hostname}:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus
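
    For a quick manual check, the endpoint can be queried with curl (a sketch; hostname1 stands in for an actual NameNode host):

    # Query the NameNodeStatus MBean over the NameNode HTTP port (50070 by default)
    curl -s "http://hostname1:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus"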
    

    The request returns a result like the following:

    {
      "beans" : [ {
        "name" : "Hadoop:service=NameNode,name=NameNodeStatus",
        "modelerType" : "org.apache.hadoop.hdfs.server.namenode.NameNode",
        "State" : "active",
        "NNRole" : "NameNode",
        "HostAndPort" : "hostname1:8020",
        "SecurityEnabled" : false,
        "LastHATransitionTime" : 1519300091101,
        "BytesWithFutureGenerationStamps" : 0
      } ]
    }
    

    ② Query each of the two NameNode hosts in turn and parse the returned JSON with a Python script to extract the node's state:

    # -*- coding: utf-8 -*-
    # Python 2 script: urllib.urlopen and the print statement match the
    # cluster environment (HDP 2.3.4) this was written for.
    import json
    import urllib
    import sys

    def get_namenode_status(hostname):
        """Return 'active', 'standby', or None for the given NameNode host."""
        jmxport = 50070
        url = "http://{0}:{1}/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus".format(hostname, jmxport)
        try:
            response = urllib.urlopen(url)
            data = json.load(response)
            # The State field of the NameNodeStatus MBean is 'active' or 'standby'.
            nnstatus = data.get('beans', [{}])[0].get('State', '')
            if nnstatus in ('active', 'standby'):
                return nnstatus
            return None
        except Exception:
            # Connection refused, timeout, malformed JSON, etc.: status unknown.
            return None

    if __name__ == "__main__":
        hostname = sys.argv[1] if len(sys.argv) > 1 else None
        if hostname:
            print get_namenode_status(hostname)
        else:
            print None

    The script prints one of three values: active (primary), standby (backup), or None (the service is abnormal and the status cannot be retrieved).
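
    Assuming the script is saved as namenodestatus.py (the name the shell script below expects), a quick check looks like this, with illustrative output:

    $ python namenodestatus.py hostname1
    active
    $ python namenodestatus.py hostname2
    standby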

    ③ A shell script then uses the returned states to decide whether to raise an alert:

    #!/bin/bash

    bin=`dirname "$0"`
    code_path=`cd "$bin";pwd`
    file_name="last_ha_status"

    export monitor_info=""

    nn1=`python $code_path/namenodestatus.py hostname1`
    nn2=`python $code_path/namenodestatus.py hostname2`

    # The status file does not exist on the first run, so suppress cat's error.
    last_status_hdfs=`cat /tmp/$file_name 2>/dev/null`
    current_status_hdfs=$nn1,$nn2

    # Alert when both NameNodes are standby, both are active, or either is down.
    if [[ ( "standby" == $nn1 && "standby" == $nn2 ) || \
          ( "active" == $nn1 && "active" == $nn2 ) || \
          "None" == $nn1 || "None" == $nn2 ]]; then
            monitor_info="hostname1:$nn1 hostname2:$nn2 previous HA status: $last_status_hdfs current HA status: $current_status_hdfs"
            echo $nn1,$nn2 > /tmp/$file_name
            # TODO: call the alerting API and log the contents of monitor_info
            exit 1
    else
            # Status is normal; persist it to the local file.
            echo $nn1,$nn2 > /tmp/$file_name
            exit 0
    fi
    

    Alert conditions (a cron scheduling example follows the list):
    ① both NameNodes are in standby state;
    ② both NameNodes are in active state;
    ③ either NameNode service has stopped.
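
    A minimal sketch of running the check periodically with cron, assuming the shell script above is saved at the hypothetical path /usr/custom/hdfs/check_namenode_ha.sh:

    # Check HA status every minute; a non-zero exit code means an alert was raised.
    * * * * * /usr/custom/hdfs/check_namenode_ha.sh >> /var/log/namenode_ha_check.log 2>&1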

    Besides reading the JMX metrics on port 50070, the NameNode's active/standby state can also be obtained with the hdfs haadmin command:

    nn1=`ssh hdfs@hostname1 hdfs haadmin -Dipc.client.connect.max.retries.on.timeouts=3 -Dipc.client.connect.timeout=3000 -getServiceState nn1`
    nn2=`ssh hdfs@hostname2 hdfs haadmin -Dipc.client.connect.max.retries.on.timeouts=3 -Dipc.client.connect.timeout=3000 -getServiceState nn2`
    

    Note ①: user A, who runs the monitoring script, needs passwordless SSH login to the hdfs user (alternatively, user A can be granted permission to run hdfs haadmin directly);
    Note ②: the core-site.xml defaults define a maximum of 45 socket connect retries with a 20 s wait per attempt. Under these defaults, if a NameNode host is down, the command waits up to 45 * 20 s before returning a result. Such a long wait is clearly unreasonable for a monitoring script, hence the -D overrides above that cap the wait time. The defaults are:

    <property>
      <name>ipc.client.connect.timeout</name>
      <value>20000</value>
      <description>Indicates the number of milliseconds a client will wait for the socket to establish a server connection.
      </description>
    </property>
    
    <property>
      <name>ipc.client.connect.max.retries.on.timeouts</name>
      <value>45</value>
      <description>Indicates the number of retries a client will make on socket timeout to establish a server connection.
      </description>
    </property>
    

    [Custom Fencing Mechanism]

    Official documentation: https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
    In a distributed system, split-brain is also called a dual-master scenario: a ZooKeeper session that falsely expires, a long garbage-collection pause, or other causes can leave two active NameNodes, both serving clients, with no way to guarantee data consistency. In a production environment this is catastrophic, so the built-in fencing mechanism must be configured to prevent it.

    Hadoop currently provides two fencing methods:
    sshfence:SSH to the Active NameNode and kill the process;
    shellfence:run an arbitrary shell command to fence the Active NameNode;
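
    For comparison, a typical sshfence configuration (a sketch based on the official HA documentation linked above; the private-key path is an assumption for illustration) looks like:

    <property>
      <name>dfs.ha.fencing.methods</name>
      <value>sshfence</value>
    </property>
    <property>
      <name>dfs.ha.fencing.ssh.private-key-files</name>
      <value>/home/hdfs/.ssh/id_rsa</value>
    </property>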

    Only after fencing has completed successfully does the ActiveStandbyElector that won the election call ZKFC's becomeActive method to switch its NameNode to active and begin serving requests. The ZKFC log below shows a successful shellfence run:

    2018-02-22 18:57:11,865 INFO  ha.NodeFencer (NodeFencer.java:fence(91)) - ====== Beginning Service Fencing Process... ======
    2018-02-22 18:57:11,865 INFO  ha.NodeFencer (NodeFencer.java:fence(94)) - Trying method 1/1: org.apache.hadoop.ha.ShellCommandFencer(/usr/custom/hdfs/restart_namenode.sh)
    2018-02-22 18:57:11,871 INFO  ha.ShellCommandFencer (ShellCommandFencer.java:tryFence(99)) - Launched fencing command '/usr/custom/hdfs/restart_namenode.sh' with pid 21772
    2018-02-22 18:57:11,894 WARN  ha.ShellCommandFencer (StreamPumper.java:pump(88)) - [PID 21772] /usr/cus...menode.sh:   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
    2018-02-22 18:57:11,894 WARN  ha.ShellCommandFencer (StreamPumper.java:pump(88)) - [PID 21772] /usr/cus...menode.sh:                                  Dload  Upload   Total   Spent    Left  Speed
    2018-02-22 18:57:11,894 WARN  ha.ShellCommandFencer (StreamPumper.java:pump(88)) - [PID 21772] /usr/cus...menode.sh:
    2018-02-22 18:57:11,965 INFO  ha.ShellCommandFencer (StreamPumper.java:pump(86)) - [PID 21772] /usr/cus...menode.sh: {
    2018-02-22 18:57:11,965 INFO  ha.ShellCommandFencer (StreamPumper.java:pump(86)) - [PID 21772] /usr/cus...menode.sh:   "href" : "http://hostname1:8080/api/v1/clusters/cbas_cluster/requests/5363",
    2018-02-22 18:57:11,965 INFO  ha.ShellCommandFencer (StreamPumper.java:pump(86)) - [PID 21772] /usr/cus...menode.sh:   "Requests" : {
    2018-02-22 18:57:11,965 INFO  ha.ShellCommandFencer (StreamPumper.java:pump(86)) - [PID 21772] /usr/cus...menode.sh:     "id" : 5363,
    2018-02-22 18:57:11,965 INFO  ha.ShellCommandFencer (StreamPumper.java:pump(86)) - [PID 21772] /usr/cus...menode.sh:     "status" : "Accepted"
    2018-02-22 18:57:11,965 WARN  ha.ShellCommandFencer (StreamPumper.java:pump(88)) - [PID 21772] /usr/cus...menode.sh:   0     0    0     0    0   346      0   343k --:--:-- --:--:-- --:--:--  343k
    2018-02-22 18:57:11,965 INFO  ha.ShellCommandFencer (StreamPumper.java:pump(86)) - [PID 21772] /usr/cus...menode.sh:   }
    2018-02-22 18:57:11,966 INFO  ha.ShellCommandFencer (StreamPumper.java:pump(86)) - [PID 21772] /usr/cus...menode.sh: }
    2018-02-22 18:57:11,965 WARN  ha.ShellCommandFencer (StreamPumper.java:pump(88)) - [PID 21772] /usr/cus...menode.sh: 495   149  149   149    0   346   2073   4815 --:--:-- --:--:-- --:--:--     0
    2018-02-22 18:57:11,967 INFO  ha.NodeFencer (NodeFencer.java:fence(98)) - ====== Fencing successful by method org.apache.hadoop.ha.ShellCommandFencer(/usr/custom/hdfs/restart_namenode.sh) ======
    2018-02-22 18:57:11,967 INFO  ha.ActiveStandbyElector (ActiveStandbyElector.java:writeBreadCrumbNode(878)) - Writing znode /hadoop-ha/cbas/ActiveBreadCrumb to indicate that the local node is the most recent active...
    2018-02-22 18:57:11,969 INFO  ha.ZKFailoverController (ZKFailoverController.java:becomeActive(380)) - Trying to make NameNode at hostname1/xx.x.xx.xxx:8021 active...
    2018-02-22 18:57:13,710 INFO  ha.ZKFailoverController (ZKFailoverController.java:becomeActive(387)) - Successfully transitioned NameNode at hostname1/xx.x.xx.xxx:8021 to active state
    

    This article covers only the shellfence mechanism. Because the author's cluster is managed by Ambari, the Ambari REST API can be called to restart a NameNode.
    Ambari REST API documentation: https://cwiki.apache.org/confluence/display/AMBARI/Restart+Host+Components

    #!/bin/bash
    # Note: the JSON body below is single-quoted, so ${admin_password} is expanded
    # by the shell but ${cluster_name} and ${hostname1} inside the body are NOT;
    # replace all of these placeholders with literal values for your cluster.
    curl -u admin:${admin_password} -H 'X-Requested-By: ambari' -X POST -d '
    {
       "RequestInfo":{
          "command":"RESTART",
          "context":"Restart NameNode",
          "operation_level":{
             "level":"HOST",
             "cluster_name":"${cluster_name}"
          }
       },
       "Requests/resource_filters":[
          {
             "service_name":"HDFS",
             "component_name":"NAMENODE",
             "hosts":"${hostname1}"
          }
       ]
    }' http://${ambari_hostname}:8080/api/v1/clusters/${cluster_name}/requests
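
    When Ambari accepts the request, it responds with a request resource whose status is "Accepted"; this is exactly the JSON visible in the fencing log excerpt above, confirming that the restart was queued.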
    

    ① Place the script on both NameNode hosts as /usr/custom/hdfs/restart_namenode.sh, make sure it is executable, and adjust the variables in each copy so that it restarts the other NameNode;
    ② Modify the HDFS configuration:
    Ambari entry point: Ambari UI => HDFS => Configs => Advanced => Custom hdfs-site

    <property>
      <name>dfs.ha.fencing.methods</name>
      <value>shell(/usr/custom/hdfs/restart_namenode.sh)</value>
    </property>
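
    A rough way to exercise the hook on a test cluster (an assumption-laden sketch; the log path varies by distribution) is to kill the active NameNode process and watch the standby's ZKFC log for the fencing lines shown earlier:

    # On the active NameNode host: simulate a crash of the NameNode process
    kill -9 $(pgrep -f org.apache.hadoop.hdfs.server.namenode.NameNode)
    # On the standby host: watch the ZKFC log for "Beginning Service Fencing Process"
    tail -f /var/log/hadoop/hdfs/hadoop-hdfs-zkfc-*.log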
    

    Closing remarks: when building a big-data platform for production, every HA state transition of the NameNode should be followed up actively to find its root cause and avoid, or at least reduce, the frequency of switches. Status monitoring and a custom fencing mechanism are only remedies applied after a NameNode problem has occurred. Even when a state transition does happen, the handling should be automated, avoiding manual intervention and remaining invisible to users.

    Blog homepage: https://www.jianshu.com/u/e97bb429f278
