After setting up a Hadoop environment, the first thing we do is run the start-all.sh command.
So what does that command actually do? Let's walk through it today.
(1) Contents of start-all.sh
# Start all hadoop daemons. Run this on master node.
bin=`dirname "$0"`
bin=`cd "$bin"; pwd`

. "$bin"/hadoop-config.sh

# start dfs daemons
"$bin"/start-dfs.sh --config $HADOOP_CONF_DIR

# start mapred daemons
"$bin"/start-mapred.sh --config $HADOOP_CONF_DIR
From the listing above we can see that this script pulls in three other scripts:
hadoop-config.sh, start-dfs.sh, and start-mapred.sh
It sources hadoop-config.sh for the shared environment setup, then executes the other two.
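Before moving on, note the two bin= lines at the top: they are the standard idiom for a script to locate its own directory, so it can call sibling scripts by absolute path no matter what the caller's working directory is. A minimal standalone sketch of that idiom (whereami.sh is a hypothetical name, not part of Hadoop):

#!/usr/bin/env bash
# whereami.sh - hypothetical demo of the self-locating idiom start-all.sh uses
bin=`dirname "$0"`    # directory portion of the path we were invoked with
bin=`cd "$bin"; pwd`  # canonicalize it to an absolute path
echo "this script lives in: $bin"
# start-all.sh uses $bin the same way, e.g. "$bin"/start-dfs.sh

Invoked as ./scripts/whereami.sh or via any other relative path, it prints the same absolute directory.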
(2) Contents of hadoop-config.sh
this="this" ]; do
ls=ls -ld "$this"
link=expr "$ls" : '.*-> \(.*\)$'
if expr "link"
else
this=dirname "$this"
/"$link"
fi
done
convert relative path to absolute path
bin=dirname "$this"
script=basename "$this"
bin=cd "$bin"; pwd
this="script"
the root of the Hadoop installation
export HADOOP_HOME=dirname "$this"
/..
check to see if the conf dir is given as an optional argument
if [ 1" ]
then
shift
confdir=confdir
fi
fi
Allow alternate conf dir location.
HADOOP_CONF_DIR="HADOOP_HOME/conf}"
check to see it is specified whether to use the slaves or the
masters file
if [ 1" ]
then
shift
slavesfile={HADOOP_CONF_DIR}/$slavesfile"
fi
fi
Each step is commented inline, so I won't walk through it line by line. The net effect is that HADOOP_HOME is derived from the script's own location, and HADOOP_CONF_DIR (plus, optionally, HADOOP_SLAVES) is taken from the --config and --hosts arguments or falls back to defaults.
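One line worth calling out is HADOOP_CONF_DIR="${HADOOP_CONF_DIR:-$HADOOP_HOME/conf}", which uses the shell's default-value expansion: keep the existing value if one was already set (e.g. via --config above), otherwise fall back to $HADOOP_HOME/conf. A minimal sketch of that behavior (conf-demo.sh and the /opt/hadoop path are hypothetical):

#!/usr/bin/env bash
# conf-demo.sh - hypothetical demo of ${VAR:-default} as hadoop-config.sh uses it
HADOOP_HOME=/opt/hadoop   # assumed install root, just for the demo
HADOOP_CONF_DIR="${HADOOP_CONF_DIR:-$HADOOP_HOME/conf}"
echo "conf dir: $HADOOP_CONF_DIR"

Running ./conf-demo.sh prints /opt/hadoop/conf, while HADOOP_CONF_DIR=/etc/hadoop ./conf-demo.sh prints /etc/hadoop.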
(3) start-dfs.sh
# Start hadoop dfs daemons. Run this on master node.
usage="Usage: start-dfs.sh [-upgrade|-rollback]"
bin=`dirname "$0"`
bin=`cd "$bin"; pwd`

. "$bin"/hadoop-config.sh

# get arguments
if [ $# -ge 1 ]; then
  nameStartOpt=$1
  shift
  case $nameStartOpt in
    (-upgrade)
      ;;
    (-rollback)
      dataStartOpt=$nameStartOpt
      ;;
    (*)
      echo $usage
      exit 1
      ;;
  esac
fi

# start dfs daemons
"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR start namenode $nameStartOpt
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR start datanode $dataStartOpt
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts masters start secondarynamenode
Here start-dfs.sh first sources hadoop-config.sh, then parses its optional argument and starts the HDFS daemons (namenode, datanodes, secondarynamenode).
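The case block above is what lets you pass -upgrade or -rollback through to HDFS, e.g. start-dfs.sh -upgrade after installing a new Hadoop version. A standalone sketch of the same parsing pattern, runnable on its own (args-demo.sh is a hypothetical name):

#!/usr/bin/env bash
# args-demo.sh - hypothetical demo of start-dfs.sh's argument handling
usage="Usage: $0 [-upgrade|-rollback]"
if [ $# -ge 1 ]; then
  nameStartOpt=$1
  shift
  case $nameStartOpt in
    (-upgrade)                    # accepted; forwarded only to the namenode
      ;;
    (-rollback)                   # accepted; forwarded to the datanodes as well
      dataStartOpt=$nameStartOpt
      ;;
    (*)                           # anything else: print usage and bail out
      echo "$usage"
      exit 1
      ;;
  esac
fi
echo "namenode opt: '$nameStartOpt'  datanode opt: '$dataStartOpt'"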
(4) start-mapred.sh
bin=`dirname "$0"`
bin=`cd "$bin"; pwd`

. "$bin"/hadoop-config.sh

# start mapred daemons
# start jobtracker first to minimize connection errors at startup
"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR start jobtracker
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR start tasktracker
Here again hadoop-config.sh is sourced first, and then the mapred daemons are started.
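Note the singular/plural naming: hadoop-daemon.sh starts a daemon on the local node (the jobtracker runs on the master), while hadoop-daemons.sh fans the same command out over the hosts listed in the slaves file (one tasktracker per slave). After start-all.sh finishes, a quick sanity check is to list the running JVMs with the JDK's jps tool; below is a hypothetical helper, assuming a pseudo-distributed setup where all five daemons share one node:

#!/usr/bin/env bash
# check-daemons.sh - hypothetical post-start-all.sh sanity check;
# assumes the JDK's jps tool is on the PATH and all daemons run locally
for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
  if jps | grep -qw "$d"; then
    echo "$d: running"
  else
    echo "$d: NOT running"
  fi
done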
Leaving a placeholder here; I'll come back to this later. It is at odds with the write-up linked below: I don't see hadoop-env.sh being invoked anywhere. (One possible explanation, worth verifying: in the full hadoop-config.sh, past the excerpt above, ${HADOOP_CONF_DIR}/hadoop-env.sh is sourced if it exists.)
https://www.cnblogs.com/wolfblogs/p/4147485.html