Spark Support for NameNode HA Across Multiple HDFS Clusters

Author: chinfun | Published 2018-10-16 19:30
To let a single Spark job work against two HA-enabled HDFS clusters, register both nameservices and the NameNode RPC addresses of each cluster in the SparkContext's Hadoop configuration. Note that fs.defaultFS takes a single URI, so only one cluster can be the default filesystem; the other is addressed explicitly through its hdfs://cluster2/... prefix.

    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(new SparkConf().setAppName("multi-hdfs-ha"))
    // Configuration shared by both clusters: cluster1 is the default
    // filesystem, and both logical nameservices are declared.
    sc.hadoopConfiguration.set("fs.defaultFS", "hdfs://cluster1")
    sc.hadoopConfiguration.setStrings("dfs.nameservices", "cluster1", "cluster2")
    // cluster1: its two NameNodes and the client-side failover proxy
    sc.hadoopConfiguration.set("dfs.ha.namenodes.cluster1", "nn1,nn2")
    sc.hadoopConfiguration.set("dfs.namenode.rpc-address.cluster1.nn1", "namenode001:8020")
    sc.hadoopConfiguration.set("dfs.namenode.rpc-address.cluster1.nn2", "namenode002:8020")
    sc.hadoopConfiguration.set("dfs.client.failover.proxy.provider.cluster1", "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider")
    // cluster2: its two NameNodes and the client-side failover proxy
    sc.hadoopConfiguration.set("dfs.ha.namenodes.cluster2", "nn3,nn4")
    sc.hadoopConfiguration.set("dfs.namenode.rpc-address.cluster2.nn3", "namenode003:8020")
    sc.hadoopConfiguration.set("dfs.namenode.rpc-address.cluster2.nn4", "namenode004:8020")
    sc.hadoopConfiguration.set("dfs.client.failover.proxy.provider.cluster2", "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider")
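
With both nameservices registered, paths under either prefix resolve through the corresponding HA failover proxy. A minimal usage sketch, assuming the configuration above (the /data/a and /data/b paths are hypothetical placeholders):

    // Read from each cluster in the same job; both prefixes fail over
    // automatically between the active and standby NameNodes.
    val fromCluster1 = sc.textFile("hdfs://cluster1/data/a") // hypothetical path
    val fromCluster2 = sc.textFile("hdfs://cluster2/data/b") // hypothetical path
    println(fromCluster1.union(fromCluster2).count())

The same settings can also be supplied without code changes through Spark's spark.hadoop.* property prefix (for example, --conf spark.hadoop.dfs.nameservices=cluster1,cluster2), which Spark copies into the Hadoop configuration at startup.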
    
