
Hive Error

Author: 米卡啦 | Published 2018-09-14 11:09

The error message:

Cannot obtain block length for LocatedBlock

impala.error.OperationalError: Disk I/O error: Failed to open HDFS file hdfs://hadoop1:8020/stat/pv/2018/09/13/2018091315_.1536822002327
Error(255): Unknown error 255
Root cause: IOException: Cannot obtain block length for LocatedBlock{BP-999421447-172.17.147.101-1532439509915:blk_1073851749_111052; getBlockSize()=13026; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[172.17.147.101:50010,DS-143ef70f-aa5b-4a16-a791-b17a59ad1dea,DISK], DatanodeInfoWithStorage[172.17.147.104:50010,DS-c81f9c73-0867-4485-a909-231eff51d17c,DISK], DatanodeInfoWithStorage[172.17.147.103:50010,DS-37704d37-8894-4ba8-973d-7a9fec6039a2,DISK]]}

Cause:
This error means the file is still open for write — it was never properly closed. Because the last block is still under construction, a reader cannot confirm the file's final length with the DataNodes, so the read fails.
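Before recovering anything, it helps to confirm which files are still open. A minimal sketch, assuming a reachable HDFS cluster and using the directory from the error message above — the `-openforwrite` flag makes `hdfs fsck` report files whose leases are still held:

```shell
# List files under the affected directory that are still open for write.
# Files flagged OPENFORWRITE have not been closed and will trigger
# "Cannot obtain block length for LocatedBlock" when read.
hdfs fsck /stat/pv/2018/09/13 -files -blocks -locations -openforwrite
```

Any path reported as OPENFORWRITE is a candidate for the lease-recovery command below.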

Solution:

hdfs debug recoverLease -path <path-of-the-file> -retries <retry times>

This command asks the NameNode to try to recover the lease on the file; from the NameNode log you can then trace the relevant DataNodes to understand the state of the replicas. If healthy replicas still exist, the command will close the file. Otherwise, it surfaces more internal detail about the file/block state.
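Applied to the file from the error message above, the invocation looks like the following sketch — the retry count of 3 is an arbitrary choice, not a value from the original post:

```shell
# Ask the NameNode to recover the lease on the unclosed file and,
# if healthy replicas exist, close it. Retry up to 3 times.
hdfs debug recoverLease \
  -path /stat/pv/2018/09/13/2018091315_.1536822002327 \
  -retries 3
```

If recovery succeeds, re-running the query (or `hdfs fsck` on the path) should no longer report the file as open for write.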


Original title: Hive报错
Original link: https://www.haomeiwen.com/subject/kazngftx.html