I've recently been using Spark Streaming to consume data from Kafka and land it in HDFS, which produces one small file per minute. Yesterday a colleague on the architecture team asked me to clean up the historical files, but there are too many directories to remove by hand, so I decided to collect all the directory paths into a text file, path_to_clean.txt, and have a shell script loop over the paths and delete them. The file looks like this:
hdfs://nameservice1/user/hadoop/dw_realtime/dw_real_for_path_list/mb_pageinfo_hash2/date=2016-01-09
hdfs://nameservice1/user/hadoop/dw_realtime/dw_real_for_path_list/mb_pageinfo_hash2/date=2016-07-05
hdfs://nameservice1/user/hadoop/dw_realtime/dw_real_for_path_list/mb_pageinfo_hash2/date=2016-09-05
hdfs://nameservice1/user/hadoop/dw_realtime/dw_real_for_path_list/mb_pageinfo_hash2/date=2016-10-20
hdfs://nameservice1/user/hadoop/dw_realtime/dw_real_for_path_list/mb_pageinfo_hash2/date=2016-11-06
hdfs://nameservice1/user/hadoop/dw_realtime/dw_real_for_path_list/mb_pageinfo_hash2/date=2016-11-07
...
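How the path list gets built is not shown above; here is a minimal sketch of one way to do it, assuming the date= partitions all live under the mb_pageinfo_hash2 directory from the paths above and that everything older than a cutoff date should go (BASE and CUTOFF are illustrative assumptions, not values from the original):

#!/bin/bash
# Sketch: build path_to_clean.txt from a directory listing.
# BASE and CUTOFF are assumptions for illustration only.
BASE=hdfs://nameservice1/user/hadoop/dw_realtime/dw_real_for_path_list/mb_pageinfo_hash2
CUTOFF="$BASE/date=2016-11-08"   # keep this date and anything newer

# The 8th field of `hadoop fs -ls` output is the full path; plain string
# comparison works because the dates are zero-padded YYYY-MM-DD.
hadoop fs -ls "$BASE" | awk -v cutoff="$CUTOFF" \
    '$8 ~ /date=/ && $8 < cutoff {print $8}' > path_to_clean.txt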
Here is the code.
Method 1 (this is the one I used)
#!/bin/bash
# Loop over every HDFS path in the file and delete it recursively.
# Note: this form word-splits the file on whitespace, which is fine
# here because HDFS paths contain no spaces.
for line in `cat path_to_clean.txt`
do
  echo "$line"
  hadoop fs -rm -r "$line"
done
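Two notes on Method 1 that are not in the script above: by default hadoop fs -rm moves data to the user's .Trash directory rather than freeing space immediately, so add -skipTrash if the blocks must be released right away; and since every hadoop fs invocation starts a fresh JVM, deleting one path per call is slow for long lists. If that matters, batching the paths with xargs is one way to cut the overhead:

# Sketch: pass many paths per `hadoop fs -rm` call instead of one per call.
# Drop -skipTrash if you would rather keep the trash safety net.
xargs hadoop fs -rm -r -skipTrash < path_to_clean.txt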
Method 2
#!/bin/bash
# Read the file line by line; `read -r` keeps backslashes intact.
while read -r line
do
  echo "$line"
done < path_to_clean.txt
Method 3
#!/bin/bash
# Pipe the file into the loop; the loop body runs in a subshell here.
cat path_to_clean.txt | while read -r line
do
  echo "$line"
done
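Methods 2 and 3 are shown with echo only; add the hadoop fs -rm -r call as in Method 1 to actually delete. They are nearly interchangeable, but with one difference worth knowing: in Method 3 the pipe runs the while loop in a subshell, so any variable set inside the loop is lost when the loop ends, whereas Method 2's redirection keeps the loop in the current shell. A small demonstration:

#!/bin/bash
count=0
cat path_to_clean.txt | while read -r line; do
  count=$((count + 1))   # increments a copy inside the subshell
done
echo "after pipe: $count"   # prints 0

count=0
while read -r line; do
  count=$((count + 1))   # runs in the current shell
done < path_to_clean.txt
echo "after redirect: $count"   # prints the actual line count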