5. Accessing Files with the HDFS API

Author: qimogao | Published 2018-12-26 14:58

The client throws:

org.apache.hadoop.security.AccessControlException: Permission denied: user=zhangsan, access=WRITE, inode="/input/fd.txt":root:supergroup:drwxr-xr-x
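HDFS checks permissions against the user name reported by the client. Here the client runs as zhangsan, but the target path under /input is owned by root:supergroup with mode drwxr-xr-x, so only root may write there. The configuration snippet at the end of this post works around this by setting HADOOP_USER_NAME to root before the FileSystem is created; alternatively, an administrator could grant access with hdfs dfs -chown or hdfs dfs -chmod.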

In my case, the key to the problem was the following error message.

There are 1 datanode(s) running and 1 node(s) are excluded in this operation.

It means that your hdfs-client couldn't connect to your datanode on port 50010. Since you could connect to the HDFS namenode, you were able to get the datanode's status, but your hdfs-client then failed to connect to the datanode itself.

(In HDFS, a namenode manages the file directory tree and the datanodes. When an hdfs-client connects to a namenode, it looks up the target file path and gets the addresses of the datanodes that hold the data. The hdfs-client then communicates with those datanodes directly. You can check the datanode addresses with netstat, because the hdfs-client will try to communicate with the datanodes at the addresses reported by the namenode.)
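A quick way to confirm this from the client side is to open a raw TCP connection to the datanode's data-transfer port. Below is a minimal sketch; the host name datanode1 is a placeholder for whatever address the namenode reported, not a name from the original post.

import java.net.InetSocketAddress;
import java.net.Socket;

public class DatanodePortProbe {
    public static void main(String[] args) {
        String host = "datanode1"; // placeholder: address reported by the namenode
        int port = 50010;          // default datanode data-transfer port
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 3000); // 3 s timeout
            System.out.println("reachable: " + host + ":" + port);
        } catch (Exception e) {
            // A timeout or "connection refused" here matches the
            // "1 node(s) are excluded" symptom seen in the HDFS client.
            System.out.println("NOT reachable: " + host + ":" + port + " (" + e + ")");
        }
    }
}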

I solved that problem by:

opening port 50010 in the firewall,

adding the property dfs.client.use.datanode.hostname = true (explained in the note after this list),

adding the datanode's hostname to the hosts file on my client PC.
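These three steps work together: with dfs.client.use.datanode.hostname set to true, the hdfs-client connects to datanodes by hostname rather than by the (possibly internal) IP address the namenode reports; the hosts-file entry then maps that hostname to an address the client can actually reach; and the firewall rule lets the data-transfer traffic through on port 50010.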

The VPC must also be configured correctly, so that the client can actually reach the datanodes.

Finally:

import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// ConfConst is the author's constants class holding the namenode URI and replication factor.
conf.set("fs.defaultFS", ConfConst.fs_defaultFS);
conf.set("dfs.replication", ConfConst.dfs_replication);
// Connect to datanodes by hostname instead of the IP reported by the namenode.
conf.set("dfs.client.use.datanode.hostname", "true");
// Run HDFS operations as "root" to avoid the AccessControlException above.
System.setProperty("HADOOP_USER_NAME", "root");
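With this configuration in place, a minimal usage sketch, continuing from the conf object above (the local and HDFS paths are illustrative, not from the original post):

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

try (FileSystem fs = FileSystem.get(conf)) {
    // Upload a local file into HDFS.
    fs.copyFromLocalFile(new Path("/tmp/fd.txt"), new Path("/input/fd.txt"));
}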
