
How HDFS fsimage and edits Merging Is Implemented

By 专职掏大粪 · Published 2020-07-14 19:09

How fsimage and edits merging is implemented in Hadoop 2.x

As we know, Hadoop 2.x solves the NameNode single point of failure, and the SecondaryNameNode is no longer needed. In Hadoop 1.x, the SecondaryNameNode merged fsimage and edits to keep the edits file small and thus shorten NameNode restart time. If Hadoop 2.x no longer uses a SecondaryNameNode, how does it merge fsimage and edits? First, note that Hadoop 2.x provides an HA mechanism (which removes the NameNode single point of failure) that can be set up with an odd number of JournalNodes; the configuration details are out of scope here. HA works by running two NameNodes in the same cluster, an active NN and a standby NN. At any moment exactly one of them is in the Active state and the other is in Standby. The active NN handles all client operations against the cluster, while the standby NN serves as a backup: it maintains just enough state to provide fast failover when necessary.
To keep the standby NN's state synchronized with the active NN, i.e., to keep their metadata consistent, both NameNodes communicate with the JournalNode daemons. Whenever the active NN performs any namespace modification, it must persist the change to a majority of the JournalNodes (as an edit log record). The standby NN watches the edit log for changes, reads new edits from the JNs, and applies them to its own namespace. When the active NN fails, the standby NN first makes sure it has read all remaining edits from the JNs and only then switches to the Active state; reading all of the edits guarantees that its namespace is fully synchronized with the active NN's at the moment of failover.
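
The "majority of JournalNodes" requirement is a standard quorum rule. Below is a minimal sketch of that rule, using a made-up QuorumCheck class purely for illustration; this is not Hadoop's actual QuorumJournalManager code:

    // Illustrative only: an edit is durable once a majority of JNs ack it.
    public class QuorumCheck {
        private final int journalNodeCount;  // configured odd number, e.g. 3 or 5

        public QuorumCheck(int journalNodeCount) {
            this.journalNodeCount = journalNodeCount;
        }

        /** True once more than half of the JournalNodes have acknowledged. */
        public boolean isCommitted(int acks) {
            return acks > journalNodeCount / 2;
        }
    }

With 3 JournalNodes a write commits after 2 acks and the cluster tolerates losing 1 JN; with 5 JNs it tolerates losing 2, which is why an odd count is configured.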
How, then, does this architecture merge fsimage and edits? The standby NameNode runs a thread called CheckpointerThread, which invokes the doWork() method of the StandbyCheckpointer class; doWork() performs a merge check every Math.min(checkpointCheckPeriod, checkpointPeriod) seconds. The relevant code is as follows:

    
    // StandbyCheckpointer.CheckpointerThread#doWork(): sleep between two checks.
    try {
        Thread.sleep(1000 * checkpointConf.getCheckPeriod());
    } catch (InterruptedException ie) {
    }

    // The check interval is the smaller of the two configured periods.
    public long getCheckPeriod() {
        return Math.min(checkpointCheckPeriod, checkpointPeriod);
    }

    // Both periods are read from the configuration.
    checkpointCheckPeriod = conf.getLong(
            DFS_NAMENODE_CHECKPOINT_CHECK_PERIOD_KEY,
            DFS_NAMENODE_CHECKPOINT_CHECK_PERIOD_DEFAULT);

    checkpointPeriod = conf.getLong(DFS_NAMENODE_CHECKPOINT_PERIOD_KEY,
                                    DFS_NAMENODE_CHECKPOINT_PERIOD_DEFAULT);
    

The checkpointCheckPeriod and checkpointPeriod variables above are populated from the following two properties in hdfs-site.xml:

    <property>
      <name>dfs.namenode.checkpoint.period</name>
      <value>3600</value>
      <description>The number of seconds between two periodic checkpoints.
      </description>
    </property>
     
    <property>
      <name>dfs.namenode.checkpoint.check.period</name>
      <value>60</value>
      <description>The SecondaryNameNode and CheckpointNode will poll the NameNode
      every 'dfs.namenode.checkpoint.check.period' seconds to query the number
      of uncheckpointed transactions.
      </description>
    </property>
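
With these default values the checkpointing thread wakes up every Math.min(60, 3600) = 60 seconds: dfs.namenode.checkpoint.check.period controls how often the trigger conditions are evaluated, while dfs.namenode.checkpoint.period controls how much time may pass before a checkpoint is forced.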
    

A checkpoint is executed when either of the following two conditions holds:

    boolean needCheckpoint = false;
    // Condition 1: enough transactions have accumulated since the last checkpoint.
    if (uncheckpointed >= checkpointConf.getTxnCount()) {
        LOG.info("Triggering checkpoint because there have been " +
                uncheckpointed + " txns since the last checkpoint, which " +
                "exceeds the configured threshold " +
                checkpointConf.getTxnCount());
        needCheckpoint = true;
    // Condition 2: enough time has passed since the last checkpoint.
    } else if (secsSinceLast >= checkpointConf.getPeriod()) {
        LOG.info("Triggering checkpoint because it has been " +
                secsSinceLast + " seconds since the last checkpoint, which " +
                "exceeds the configured interval " + checkpointConf.getPeriod());
        needCheckpoint = true;
    }
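
The transaction-count threshold returned by checkpointConf.getTxnCount() comes from a third hdfs-site.xml property, dfs.namenode.checkpoint.txns; the snippet below shows it with its stock default of 1000000 transactions:

    <property>
      <name>dfs.namenode.checkpoint.txns</name>
      <value>1000000</value>
      <description>The Secondary NameNode or CheckpointNode will create a checkpoint
      of the namespace every 'dfs.namenode.checkpoint.txns' transactions, regardless
      of whether 'dfs.namenode.checkpoint.period' has expired.
      </description>
    </property>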
    

When needCheckpoint is set to true, the doWork() method of StandbyCheckpointer calls doCheckpoint() to actually perform the checkpoint. Once fsimage and edits have been merged, the merged fsimage is uploaded to the active NameNode; after downloading it, the active NameNode deletes its own old fsimage and purges the old edits files. The process can be summarized in the following steps (a simplified sketch of the whole cycle follows the list):
(1) With HA configured, every client update is written to the shared edits directory on the JournalNodes, which is set via the following properties:

    <property>
      <name>dfs.namenode.shared.edits.dir</name>
      <value>qjournal://XXXX/mycluster</value>
    </property>

    <property>
      <name>dfs.journalnode.edits.dir</name>
      <value>/export1/hadoop2x/dfs/journal</value>
    </property>

(2) The active and standby NameNodes sync edits from the JournalNodes' shared edits directory into their own edits directories;
(3) The StandbyCheckpointer class on the standby NameNode periodically checks whether the merge conditions are met, and if so merges fsimage and edits;
(4) After the merge, StandbyCheckpointer uploads the merged fsimage to the appropriate directory on the active NameNode;
(5) Upon receiving the latest fsimage, the active NameNode cleans up its old fsimage and edits files;
(6) Through these steps, fsimage and edits are merged, and thanks to the HA mechanism both the standby and the active NameNode always hold an up-to-date fsimage and edits (whereas in Hadoop 1.x the SecondaryNameNode's fsimage and edits were not the latest).
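
Putting the steps together, here is a heavily simplified sketch of the standby checkpoint cycle. It is illustrative only, not Hadoop's actual StandbyCheckpointer implementation: currentTxId(), doCheckpoint(), and uploadImageToActive() are hypothetical stand-ins, and only the sleep interval and the two trigger conditions mirror the real snippets quoted above.

    // Heavily simplified sketch of the standby checkpoint cycle described above.
    public class StandbyCheckpointSketch {
        private final long checkPeriodSecs = 60;      // Math.min(check.period, period)
        private final long txnThreshold = 1_000_000;  // dfs.namenode.checkpoint.txns
        private final long periodSecs = 3_600;        // dfs.namenode.checkpoint.period

        private long lastCheckpointTxId;
        private long lastCheckpointTimeMs = System.currentTimeMillis();

        public void loop() throws InterruptedException {
            while (true) {
                Thread.sleep(1000 * checkPeriodSecs);  // wake up periodically
                long uncheckpointed = currentTxId() - lastCheckpointTxId;
                long secsSinceLast =
                        (System.currentTimeMillis() - lastCheckpointTimeMs) / 1000;
                // Checkpoint when either trigger condition holds.
                if (uncheckpointed >= txnThreshold || secsSinceLast >= periodSecs) {
                    doCheckpoint();          // merge fsimage + edits on the standby
                    uploadImageToActive();   // ship merged fsimage to the active NN
                    lastCheckpointTxId = currentTxId();
                    lastCheckpointTimeMs = System.currentTimeMillis();
                }
            }
        }

        // Hypothetical stand-ins for the real HDFS internals.
        private long currentTxId() { return 0; }
        private void doCheckpoint() { }
        private void uploadImageToActive() { }
    }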

