Case 1:
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
The job hangs at this point and makes no progress. Likely cause: insufficient memory. Fix: stop the other running tasks; the problem was resolved after restarting the cluster.
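Before restarting the whole cluster, it can be worth checking what is actually holding the memory. A minimal sketch using standard YARN commands (the application ID and the script path are examples based on the install path shown below, not taken from a real session):

```shell
# List the applications currently holding cluster resources
yarn application -list -appStates RUNNING

# Kill a stale application that is tying up memory (ID is hypothetical)
yarn application -kill application_XXXXXXXXXXXXX_0002

# If freeing memory is not enough, restart YARN itself
# (run on the ResourceManager node; path assumed from this install)
/opt/module/hadoop-3.1.3/sbin/stop-yarn.sh
/opt/module/hadoop-3.1.3/sbin/start-yarn.sh
```

Killing idle applications first is less disruptive than a full cluster restart, so it is usually worth trying in that order.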
Case 2:
The job hangs here:
Starting Job = job_1604227043139_0001, Tracking URL = http://hadoop103:8088/proxy/application_1604227043139_0001/
Kill Command = /opt/module/hadoop-3.1.3/bin/mapred job -kill job_1604227043139_0001
Cause:
The NodeManager on one of the nodes had died.
Fix: on that node, run: yarn --daemon start nodemanager
Once it is started back up, the job proceeds normally.
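To find which node is affected in the first place, the standard YARN commands below can be used (a sketch; output depends on the cluster, and the grep pattern is just an example):

```shell
# List every node the ResourceManager knows about, including
# LOST/UNHEALTHY ones; a LOST node means its NodeManager is down
yarn node -list -all

# On the suspect node, check whether the NodeManager JVM is running
jps | grep NodeManager

# If it is absent, start the daemon back up (Hadoop 3.x syntax)
yarn --daemon start nodemanager
```

After starting the daemon, yarn node -list -all should show the node back in the RUNNING state within a few seconds.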