When writing files with Hadoop's built-in benchmark tool, the job failed with:
There are 0 datanode(s) running and 0 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2205)
at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2731)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:892)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:568)
.......
Running `jps` on the worker nodes showed that only the NodeManager process was up; there was no DataNode process at all.
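On a healthy worker, `jps` shows both a DataNode and a NodeManager. Here is a minimal sketch for filtering `jps` output across nodes; the `filter_daemons` helper and the worker hostnames are my own illustration, not from the original setup:

```shell
# filter_daemons: keep only the HDFS/YARN worker daemons from jps output
filter_daemons() {
    grep -E 'DataNode|NodeManager'
}

# On each worker (hostnames are placeholders for your own nodes):
#   ssh worker1 jps | filter_daemons
# If only "NodeManager" comes back, HDFS storage on that node is down.
```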
Most blog posts found by searching point to the same cause: formatting the NameNode multiple times leaves the NameNode and the DataNodes with mismatched cluster IDs.
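The mismatch can be confirmed directly: each format writes a fresh `clusterID` into the NameNode's `current/VERSION` file, while the DataNodes' storage directories keep the old one. A sketch for pulling the ID out for comparison; the `cluster_id` helper is my own, and the paths assume this post's directory layout:

```shell
# cluster_id FILE: print the clusterID recorded in an HDFS VERSION file
cluster_id() {
    sed -n 's/^clusterID=//p' "$1"
}

# Compare the two (paths are examples from this setup):
#   cluster_id /home/user/bigData/hdfs/name/current/VERSION   # NameNode
#   cluster_id /home/user/bigData/hdfs/data/current/VERSION   # DataNode
# If they differ, the DataNode refuses to register and exits at startup.
```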
So I deleted the old dfs.datanode.data.dir directories (I had no data to lose anyway) and updated the data paths in /hadoop-3.1.3/etc/hadoop/hdfs-site.xml on each node:
<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>your-node-host:9001</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/user/bigData/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/user/bigData/hdfs/data</value>
    </property>
    ......
</configuration>
Then I reformatted with `hadoop namenode -format`, started the cluster again, and that really fixed it!
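The whole recovery boils down to a few commands. A sketch, wrapped in a function so nothing runs by accident; note that it destroys all HDFS data, which was only acceptable here because the cluster held nothing yet, and the paths are the ones from the hdfs-site.xml above:

```shell
# reset_hdfs: wipe the storage dirs, reformat, and restart HDFS.
# WARNING: destroys every block on the cluster -- only safe when
# there is no data to keep.
reset_hdfs() {
    stop-dfs.sh
    # run the rm on every node that hosts a dfs.datanode.data.dir
    rm -rf /home/user/bigData/hdfs/name /home/user/bigData/hdfs/data
    hadoop namenode -format      # writes a fresh clusterID
    start-dfs.sh
    hdfs dfsadmin -report        # should now list live DataNodes
}
```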