Setting Up a Single-Machine Hadoop and Spark Environment

Author: iE简 | Published 2020-11-28 19:06

The author's environment:

  • CPU: E5-2678 v3, 32 GB DDR4
  • CentOS 7 (2003)
  • Java 1.8
  • Hadoop 2.10.1
  • Hive 2.3.7
  • Scala 2.11.8
  • Spark 2.4.7

Since these packages are updated frequently, I won't post download links for them. My contact information is at the end of this article; if you need the files, feel free to reach out.

Set the hostname

Edit /etc/hostname and change it to master:

nano /etc/hostname

Reboot:

reboot
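
The configurations below refer to this machine as master (e.g. hdfs://master:9000), so the name must also resolve locally. A minimal sketch for CentOS 7, assuming the LAN address 192.168.31.66 that the web UI URLs later in this article use:

# Alternative to editing /etc/hostname and rebooting: takes effect immediately
hostnamectl set-hostname master

# Make "master" resolve to this machine; adjust the IP to your own address
echo "192.168.31.66 master" >> /etc/hosts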

Install Java

Copy jdk-8u261-linux-x64.tar.gz to /home and extract it:

cd /home
tar -xvf jdk-8u261-linux-x64.tar.gz

Configure the environment variables:

nano ~/.bashrc

Append the following:

export JAVA_HOME=/home/jdk1.8.0_261
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH

Reload the environment:

source ~/.bashrc

Check the Java version:

java -version

Output:

java version "1.8.0_261"
Java(TM) SE Runtime Environment (build 1.8.0_261-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.261-b12, mixed mode)

Install Hadoop

Copy hadoop-2.10.1.tar.gz to /home and extract it:

cd /home
tar -xvf hadoop-2.10.1.tar.gz

Configure hadoop-env.sh

nano /home/hadoop-2.10.1/etc/hadoop/hadoop-env.sh

Find the JAVA_HOME entry and change it to:

export JAVA_HOME=/home/jdk1.8.0_261

Configure core-site.xml

nano /home/hadoop-2.10.1/etc/hadoop/core-site.xml

Replace the file contents with:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoopdata</value>
    </property>
</configuration>
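
Here fs.defaultFS sets the HDFS endpoint to hdfs://master:9000 (which is why master must resolve), and hadoop.tmp.dir places the HDFS data under /home/hadoopdata, a directory created later in this article.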

Configure hdfs-site.xml

nano /home/hadoop-2.10.1/etc/hadoop/hdfs-site.xml

Replace the file contents with the following (replication is 1 because there is only a single DataNode):

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>

Configure yarn-site.xml

nano /home/hadoop-2.10.1/etc/hadoop/yarn-site.xml

Replace the file contents with:

<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:18040</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:18030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:18025</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:18141</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:18088</value>
    </property>
</configuration>
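
Note that these addresses override YARN's defaults; in particular the ResourceManager web UI moves from the default port 8088 to 18088, matching the URL in the Web UIs section below.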

Configure mapred-site.xml

cp /home/hadoop-2.10.1/etc/hadoop/mapred-site.xml.template /home/hadoop-2.10.1/etc/hadoop/mapred-site.xml
nano /home/hadoop-2.10.1/etc/hadoop/mapred-site.xml

Replace the file contents with:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

Configure Hadoop environment variables

nano ~/.bashrc

Append the following:

export HADOOP_HOME=/home/hadoop-2.10.1
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

Reload the environment:

source ~/.bashrc

Create the data directory

cd /home
mkdir hadoopdata

Format the HDFS filesystem:

hdfs namenode -format
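
Formatting is a one-time step: on success the log should report that the storage directory under /home/hadoopdata was successfully formatted, and re-running the command later will wipe the existing HDFS metadata.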

Check the version:

hadoop version

Output:

Hadoop 2.10.1
Subversion https://github.com/apache/hadoop -r 1827467c9a56f133025f28557bfc2c562d78e816
Compiled by centos on 2020-09-14T13:17Z
Compiled with protoc 2.5.0
From source with checksum 3114edef868f1f3824e7d0f68be03650
This command was run using /home/hadoop-2.10.1/share/hadoop/common/hadoop-common-2.10.1.jar

Start Hadoop
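
start-all.sh launches each daemon over SSH, even on a single node, so passwordless SSH to the local machine must be set up first. A minimal sketch, assuming everything runs as root:

ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# Verify: this should log in without prompting for a password
ssh -o StrictHostKeyChecking=no localhost exit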

cd /home/hadoop-2.10.1/sbin
./start-all.sh

Check the running processes with jps:

jps

Output:

2323 NameNode
2979 ResourceManager
3510 Jps
2505 DataNode
3323 NodeManager
2748 SecondaryNameNode
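
As a quick smoke test (a sketch, assuming you are running as root), create a home directory in HDFS and list the filesystem root:

hdfs dfs -mkdir -p /user/root
hdfs dfs -ls /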

Web UIs

  • NameNode and DataNode: http://192.168.31.66:50070/
  • YARN: http://192.168.31.66:18088/
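
If these pages are unreachable from another machine, CentOS 7's firewalld may be blocking the ports; for a throwaway single-node setup, one option is simply to disable it:

systemctl stop firewalld
systemctl disable firewalld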

Install Hive

Install MariaDB

yum install mariadb-server -y

Start MariaDB and enable it at boot:

systemctl start mariadb
systemctl enable mariadb

Set the MariaDB root password:

mysql_secure_installation

Log in to the database:

mysql -uroot -p

Create the hadoop user, grant it privileges, and create the metastore database:

grant all on *.* to hadoop@'%' identified by '123456';
grant all on *.* to hadoop@'localhost' identified by '123456';
grant all on *.* to hadoop@'master' identified by '123456';
flush privileges;
create database hivedata;
quit;
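
To verify the grants, log back in as the new user (using the password set above):

mysql -uhadoop -p123456 -e 'show databases;'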

Install Hive

Copy apache-hive-2.3.7-bin.tar.gz to /home and extract it:

cd /home
tar -xvf apache-hive-2.3.7-bin.tar.gz

Configure hive-site.xml; this file does not exist by default and must be created by hand:

nano /home/apache-hive-2.3.7-bin/conf/hive-site.xml

Add the following:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>hive.metastore.local</name>
        <value>false</value>
    </property>
    <property>  
        <name>hive.metastore.uris</name>  
        <value>thrift://localhost:9083</value>  
        <description>Thrift uri for the remote metastore. Used by metastore client to connect to remote metastore.</description>  
    </property>
    <property>
        <name>hive.server2.thrift.bind.host</name>
        <value>localhost</value>
    </property>
    <property>
        <name>hive.server2.thrift.port</name>
        <value>10000</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://localhost:3306/hivedata?characterEncoding=UTF-8</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>
    <property>
        <name>hive.server2.enable.doAs</name>
        <value>false</value> 
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>hadoop</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>123456</value>
    </property>
    <property>
        <name>hive.metastore.schema.verification</name>
        <value>false</value>
    </property>
    <property>
        <name>hive.server2.thrift.client.user</name>
        <value>hive</value>
    </property>
    <property>
        <name>hive.server2.thrift.client.password</name>
        <value>hive123456</value>
    </property>
</configuration>

  • javax.jdo.option.ConnectionURL: the hivedata database in this URL must match the database created in MySQL above;
  • javax.jdo.option.ConnectionUserName: the MySQL user granted above;
  • javax.jdo.option.ConnectionPassword: that MySQL user's password;
  • hive.server2.thrift.client.user: the user for logging in to hiveserver2;
  • hive.server2.thrift.client.password: the password for logging in to hiveserver2.

Copy the MySQL Java connector into Hive's lib directory:

cp mysql-connector-java-5.1.36-bin.jar /home/apache-hive-2.3.7-bin/lib/

Configure Hive environment variables:

nano ~/.bashrc

Add the following:

export HIVE_HOME=/home/apache-hive-2.3.7-bin
export PATH=$PATH:$HIVE_HOME/bin

Reload the environment:

source ~/.bashrc

Initialize the Hive metastore schema:

schematool -dbType mysql -initSchema

Start the metastore. Because hive.metastore.uris points at thrift://localhost:9083, this service must be running before the Hive CLI or hiveserver2 can connect:

hive --service metastore

In another terminal, start Hive:

hive

Create a test database:

show databases;
create database hive_data;
show databases;
quit;

Start hiveserver2:

hive --service hiveserver2
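
Both services run in the foreground, so in practice you may want to background them and capture logs (a sketch; the log paths are just for illustration). Once hiveserver2 is up, beeline can check the thrift user/password from hive-site.xml:

nohup hive --service metastore > /home/metastore.log 2>&1 &
nohup hive --service hiveserver2 > /home/hiveserver2.log 2>&1 &

# Connect using hive.server2.thrift.client.user / password
beeline -u jdbc:hive2://localhost:10000 -n hive -p hive123456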

Install Spark

Copy scala-2.11.8.tgz and spark-2.4.7-bin-hadoop2.7.tgz to /home and extract them:

cd /home
tar -xvf scala-2.11.8.tgz
tar -xvf spark-2.4.7-bin-hadoop2.7.tgz

Configure environment variables:

nano ~/.bashrc

Add the following:

export SCALA_HOME=/home/scala-2.11.8
export SPARK_HOME=/home/spark-2.4.7-bin-hadoop2.7

Reload the environment:

source ~/.bashrc

Configure spark-env.sh:

cd /home/spark-2.4.7-bin-hadoop2.7/conf
cp spark-env.sh.template spark-env.sh
nano spark-env.sh

Add the following:

export JAVA_HOME=/home/jdk1.8.0_261
export HADOOP_HOME=/home/hadoop-2.10.1
export HIVE_HOME=/home/apache-hive-2.3.7-bin
export SCALA_HOME=/home/scala-2.11.8
export HIVE_CONF_DIR=$HIVE_HOME/conf
export SPARK_MASTER_IP=master
export SPARK_WORKER_MEMORY=24G

Configure the slaves file:

cp slaves.template slaves
nano slaves

Change localhost to master.
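
One non-interactive way to do this, since we are still in the conf directory:

sed -i 's/^localhost$/master/' slaves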

Start Spark:

cd /home/spark-2.4.7-bin-hadoop2.7/sbin
./start-all.sh

  • Web UI: http://192.168.31.66:8080/
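
To confirm the standalone cluster accepts jobs, you can run the bundled SparkPi example against the master (a sketch; run-example forwards options to spark-submit, and 7077 is the default standalone master port):

cd /home/spark-2.4.7-bin-hadoop2.7
./bin/run-example --master spark://master:7077 SparkPi 10

The output should end with a line like "Pi is roughly 3.14...".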

Configure pyspark:

Copy the pyspark package from the Spark distribution into your Python environment's site-packages:

cd /home/spark-2.4.7-bin-hadoop2.7/python/
cp -rf pyspark /home/anaconda3/envs/tf12/lib/python3.6/site-packages/

Install the required third-party package:

pip install py4j
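
A quick import check (assuming the tf12 conda environment from the cp step above is active):

python -c "import pyspark; print(pyspark.__version__)"

Alternatively, pip install pyspark==2.4.7 installs a matching, self-contained package from PyPI.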

QQ: 1982248707. Done.
