Installing Hadoop, YARN, Flink, and Anaconda on Mac


By 大数据菜鸟 · Published 2021-12-15 19:49

Set up passwordless SSH login
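
If ~/.ssh/id_rsa.pub does not exist yet, generate a key pair first (the empty passphrase here is an assumption for convenience):

# generate an RSA key pair with no passphrase
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa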

cd ~/.ssh
cp id_rsa.pub authorized_keys
# Test that login no longer asks for a password
ssh localhost

Install Hadoop with Homebrew

brew install hadoop

# Output indicating a successful install
🍺  /usr/local/Cellar/hadoop/3.3.1: 22,487 files, 1GB

Edit the configuration files

core-site.xml

vi /usr/local/Cellar/hadoop/3.3.1/libexec/etc/hadoop/core-site.xml

# Add the following under the <configuration> tag
<property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/local/Cellar/hadoop/3.3.1/libexec/tmp</value>
</property>
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:8020</value>
</property>

hdfs-site.xml

vi /usr/local/Cellar/hadoop/3.3.1/libexec/etc/hadoop/hdfs-site.xml

# Add the following under the <configuration> tag
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/Cellar/hadoop/3.3.1/libexec/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/Cellar/hadoop/3.3.1/libexec/tmp/dfs/data</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
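
In Hadoop 3.x the canonical name for this switch is dfs.permissions.enabled (dfs.permissions is the deprecated alias, which 3.3.1 still honors). If permission errors persist, the modern name is worth trying:

    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>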

The next two XML files configure YARN.

yarn-site.xml

vi /usr/local/Cellar/hadoop/3.3.1/libexec/etc/hadoop/yarn-site.xml

# Add the following under the <configuration> tag
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>

mapred-site.xml

vi /usr/local/Cellar/hadoop/3.3.1/libexec/etc/hadoop/mapred-site.xml

# Add the following under the <configuration> tag
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
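
On Hadoop 3.x, MapReduce jobs submitted to YARN sometimes fail with "Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster". A common fix, not part of the original article, is to also declare HADOOP_MAPRED_HOME in mapred-site.xml:

  <property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=/usr/local/Cellar/hadoop/3.3.1/libexec</value>
  </property>
  <property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=/usr/local/Cellar/hadoop/3.3.1/libexec</value>
  </property>
  <property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=/usr/local/Cellar/hadoop/3.3.1/libexec</value>
  </property>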

Add the Hadoop environment variables

export HADOOP_HOME=/usr/local/Cellar/hadoop/3.3.1/libexec
export HADOOP_COMMON_HOME=$HADOOP_HOME
export PATH="$PATH:$HADOOP_HOME/bin"
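
The article doesn't say where these exports go; on a stock macOS zsh setup they would typically be appended to ~/.zshrc, for example:

# persist the exports in the shell profile and reload it
echo 'export HADOOP_HOME=/usr/local/Cellar/hadoop/3.3.1/libexec' >> ~/.zshrc
echo 'export HADOOP_COMMON_HOME=$HADOOP_HOME' >> ~/.zshrc
echo 'export PATH="$PATH:$HADOOP_HOME/bin"' >> ~/.zshrc
source ~/.zshrc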

Format the NameNode before the first start (only needed once)

cd /usr/local/Cellar/hadoop/3.3.1/bin
./hdfs namenode -format

Start HDFS and YARN

# Start HDFS
cd /usr/local/Cellar/hadoop/3.3.1/sbin 
./start-dfs.sh

# Check that the daemons came up
jps
6306 SecondaryNameNode
6069 NameNode
6392 Jps
6170 DataNode

# Start YARN
cd /usr/local/Cellar/hadoop/3.3.1/sbin
./start-yarn.sh

HDFS web UI: http://localhost:9870/dfshealth.html#tab-overview

YARN web UI: http://localhost:8088/cluster
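
A quick scripted check that both UIs are answering (each line should print 200; purely illustrative):

# expect HTTP 200 from the HDFS and YARN web UIs
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9870/
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8088/cluster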

If the daemons fail to start, set the Java environment variables in hadoop-env.sh (see the reference link below).

vi /usr/local/Cellar/hadoop/3.3.1/libexec/etc/hadoop/hadoop-env.sh

# point at your JDK and Hadoop install
export JAVA_HOME=/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home
export HADOOP_HOME=/usr/local/Cellar/hadoop/3.3.1/libexec
# on macOS (Darwin), blank out the Kerberos settings to silence realm warnings
export HADOOP_OS_TYPE=${HADOOP_OS_TYPE:-$(uname -s)}
case ${HADOOP_OS_TYPE} in
  Darwin*)
    export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.realm= "
    export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.kdc= "
    export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.conf= "
  ;;
esac
# verbose logging, useful while troubleshooting startup
export HADOOP_ROOT_LOGGER=DEBUG,console
export HADOOP_DAEMON_ROOT_LOGGER=DEBUG,RFA
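
The JAVA_HOME above depends on which JDK is installed; on macOS the built-in helper prints the correct path:

# print JAVA_HOME for the default JDK, or for a specific version
/usr/libexec/java_home
/usr/libexec/java_home -v 1.8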

Verify with WordCount
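
The job reads from /input in HDFS, so that directory must exist and contain some text first, a step the article skips; the sample file below is illustrative:

# upload a sample text file into HDFS for WordCount to read
echo "hello hadoop hello flink" > /tmp/words.txt
hdfs dfs -mkdir -p /input
hdfs dfs -put /tmp/words.txt /input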

hadoop jar /usr/local/Cellar/hadoop/3.3.1/libexec/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar wordcount /input /output
# The last three arguments: wordcount is the name of the example, /input is the input directory, /output is the output directory


Note ⚠️: the output path must not already exist; if you point a Hadoop job at an existing directory, the job refuses to run. This is a safety mechanism to keep Hadoop from overwriting useful files.


Finally, check where the results landed by running hadoop fs -ls /output in the terminal.


The result is stored in the part-r-00000 file; view it with hadoop fs -cat /output/part-r-00000

I hit permission problems and never got this to run cleanly; see the original reference below.

Reference: https://blog.csdn.net/pgs1004151212/article/details/104391391

Install a specific Flink version

Download the Flink release from the official site and extract it: flink-1.14.0-bin-scala_2.11.tgz
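
For reference, a scripted download and extract might look like this (the Apache archive mirror and the /usr/local/develop target directory are assumptions):

# fetch and unpack the Flink 1.14.0 Scala 2.11 build
cd /usr/local/develop
curl -LO https://archive.apache.org/dist/flink/flink-1.14.0/flink-1.14.0-bin-scala_2.11.tgz
tar -xzf flink-1.14.0-bin-scala_2.11.tgz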

Copy the Hadoop configuration files into Flink's lib directory (which puts them on Flink's classpath)

cd /usr/local/Cellar/hadoop/3.3.1/libexec/etc/hadoop
cp hdfs-site.xml yarn-site.xml core-site.xml /usr/local/develop/flink-1.14.0/lib

Add the Hadoop and Flink environment variables

export PATH="/usr/local/develop/flink-1.14.0/bin:$PATH"
export HADOOP_HOME=/usr/local/Cellar/hadoop/3.3.1/libexec
export HADOOP_COMMON_HOME=$HADOOP_HOME
export PATH="$PATH:$HADOOP_HOME/bin"

# build the Hadoop classpath from every jar under $HADOOP_HOME
export HADOOP_CLASSPATH=$(find $HADOOP_HOME -name '*.jar' | xargs echo | tr ' ' ':')
# a simpler alternative Hadoop computes itself: export HADOOP_CLASSPATH=$(hadoop classpath)

Run the WordCount example

flink run -m yarn-cluster /usr/local/develop/flink-1.14.0/examples/batch/WordCount.jar
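
-m yarn-cluster is the legacy per-job deployment syntax; on Flink 1.10+ the same intent can be written with the newer execution-target flag (same example jar assumed):

flink run -t yarn-per-job /usr/local/develop/flink-1.14.0/examples/batch/WordCount.jar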

Install Anaconda

brew search anaconda
# the cask without a version suffix is the latest release
brew install --cask anaconda
# put Anaconda on the PATH and reload the shell profile
echo 'export PATH="/usr/local/anaconda3/bin:$PATH"' >> ~/.zshrc
source ~/.zshrc

# Check the version
conda --version
conda 4.10.1



Conda environments

Reference: https://www.jianshu.com/p/ce99bf9d9008

# list existing environments
conda env list

# create an environment named learn
conda create -n learn

# switch into it
conda activate learn

# and back out
conda deactivate
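
conda create with no package arguments makes an empty environment; to pin a specific interpreter, pass it at creation time (the version number here is illustrative):

conda create -n learn python=3.8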

Install TensorFlow

pip install --upgrade pip

pip install tensorflow
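
A one-line sanity check that the install imports cleanly and prints its version:

python -c "import tensorflow as tf; print(tf.__version__)"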


