Task objectives
- Build a three-node big data test environment on a laptop
- Install HDFS, Hive, Spark, MLlib, and other foundational big data service components
- One NameNode (reused as a DataNode) plus two DataNodes
Test environment
- Host machine: macOS 10.13.5, 4 cores / 16 GB RAM
- Virtual machines: standard CentOS 7
- centos-hdp11: 2 cores / 6 GB RAM, 50 GB dynamically allocated SSD, IP: 192.168.56.11
- centos-hdp12: 1 core / 2 GB RAM, 20 GB dynamically allocated SSD, IP: 192.168.56.12
- centos-hdp13: 1 core / 2 GB RAM, 20 GB dynamically allocated SSD, IP: 192.168.56.13
- VirtualBox version: 5.2.12
Build steps
Set the hostnames
# Example below; repeat on all three hosts, each with its own name
vi /etc/hostname
hdp11
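Alternatively, on CentOS 7 the same change can be made in one shot with systemd's hostnamectl (run on each host with that host's own name; no reboot needed):
hostnamectl set-hostname hdp11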
Disable the firewall
# Run on all three hosts
systemctl disable firewalld.service
systemctl stop firewalld.service
Disable SELinux
# Run on all three hosts
vi /etc/selinux/config
SELINUX=disabled
# Verify the SELinux status (the config change takes full effect after a reboot)
/usr/sbin/sestatus -v
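If you prefer a non-interactive version of the edit above, a sed one-liner should achieve the same thing; setenforce 0 additionally turns enforcement off immediately, while the config change is what persists across reboots:
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0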
Configure /etc/hosts
# Run on all three hosts; the IPs must match the VM addresses listed above
vi /etc/hosts
192.168.56.11 hdp11
192.168.56.12 hdp12
192.168.56.13 hdp13
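Once /etc/hosts is correct on hdp11, one way to keep the other two nodes consistent is to push the same file over scp (you will be prompted for root's password until the SSH keys from the next step are in place):
for h in hdp12 hdp13; do scp /etc/hosts root@$h:/etc/hosts; done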
Passwordless SSH (chmod 600 /root/.ssh/authorized_keys)
# Generate a key pair on each of the three hosts
ssh-keygen -t rsa -P ''
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
# Copy the public key to the other machines (they must also have run ssh-keygen -t rsa -P '')
scp /root/.ssh/id_rsa.pub root@hdp12:/root
# Log in to hdp12 and append the key to its authorized_keys
cat /root/id_rsa.pub >> /root/.ssh/authorized_keys
scp /root/.ssh/id_rsa.pub root@hdp13:/root
# Log in to hdp13 and append the key to its authorized_keys
cat /root/id_rsa.pub >> /root/.ssh/authorized_keys
# Restart sshd on all three hosts
systemctl restart sshd.service
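A quick round-trip test that key-based login works; each iteration should print the remote hostname without prompting for a password:
for h in hdp11 hdp12 hdp13; do ssh root@$h hostname; done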
Install wget
# Run on all three hosts; needed later to download the JDK and other files
yum -y install wget
Install JDK 1.8
mkdir /usr/java
cd /usr/java
wget --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie;" http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.tar.gz
tar -zxvf jdk-8u131-linux-x64.tar.gz
ln -s jdk1.8.0_131 jdk1.8
# Declare the JDK environment variables
vi /etc/profile
# Append the following lines
export JAVA_HOME=/usr/java/jdk1.8
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
source /etc/profile
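To confirm the JDK and the environment variables took effect:
java -version      # should report java version "1.8.0_131"
echo $JAVA_HOME    # should print /usr/java/jdk1.8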
Set up NTP for clock synchronization
yum install -y ntp
systemctl is-enabled ntpd   # (chkconfig --list ntpd is the legacy SysV equivalent)
systemctl enable ntpd
systemctl start ntpd
ntpdate -u cn.pool.ntp.org
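To verify that ntpd is actually synchronizing, list its peers; the line prefixed with * is the currently selected time source:
ntpq -p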
Disable Transparent Huge Pages (THP)
vi /etc/default/grub
# Append transparent_hugepage=never to the GRUB_CMDLINE_LINUX line
grub2-mkconfig -o /boot/grub2/grub.cfg
systemctl disable tuned
# After rebooting, check with:
cat /sys/kernel/mm/transparent_hugepage/enabled
# [never] in the output means THP is disabled
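To turn THP off immediately without waiting for a reboot, you can also write to sysfs directly (this does not survive a reboot, which is why the grub change above is still needed):
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag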
Configure umask
# Set the default permissions for files and directories created by users
umask 0022
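A umask entered at the prompt only lasts for the current session; to make it the default for every login shell, append it to /etc/profile:
echo 'umask 0022' >> /etc/profile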
Disable offline updates (PackageKit)
vi /etc/yum/pluginconf.d/refresh-packagekit.conf
# Add: enabled=0
======================================================================================================
Configure the HTTP service
# Install httpd and register it to start automatically with the system
yum -y install httpd
systemctl is-enabled httpd
systemctl enable httpd
systemctl start httpd
Install repo tools
# Tools for building the local yum repository
yum install yum-utils createrepo yum-plugin-priorities -y
vi /etc/yum/pluginconf.d/priorities.conf
# Add: gpgcheck = 0
Download Ambari and HDP
CentOS 7:
http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.5.0.3/ambari-2.5.0.3-centos7.tar.gz
http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.0.3/HDP-2.6.0.3-centos7-rpm.tar.gz
http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7/HDP-UTILS-1.1.0.21-centos7.tar.gz
Create the local repository
# Extract the three downloaded tarballs under /var/www/html:
mkdir -p /var/www/html/HDP-UTILS/centos7/
tar zxvf /opt/ambari-2.5.0.3-centos7.tar.gz -C /var/www/html
tar zxvf /opt/HDP-2.6.0.3-centos7-rpm.tar.gz -C /var/www/html
tar zxvf /opt/HDP-UTILS-1.1.0.21-centos7.tar.gz -C /var/www/html/HDP-UTILS/centos7
# Generate the repo metadata
cd /var/www/html/
createrepo ./
Download ambari.repo
wget -nv http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.5.0.3/ambari.repo -O /etc/yum.repos.d/ambari.repo
Modify ambari.repo to point at the local repository
vi /etc/yum.repos.d/ambari.repo
#VERSION_NUMBER=2.5.0.3-7
[ambari-2.5.0.3]
name=ambari Version - ambari-2.5.0.3
baseurl=http://hdp11/ambari/centos7/
gpgcheck=0
gpgkey=http://hdp11/ambari/centos7/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1
# Download hdp.repo and point it at the local repository as well
wget -nv http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.0.3/hdp.repo -O /etc/yum.repos.d/hdp.repo
vi /etc/yum.repos.d/hdp.repo
#VERSION_NUMBER=2.6.0.3-8
[HDP-2.6.0.3]
name=HDP Version - HDP-2.6.0.3
baseurl=http://hdp11/HDP/centos7/
gpgcheck=0
gpgkey=http://hdp11/HDP/centos7/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1
[HDP-UTILS-1.1.0.21]
name=HDP-UTILS Version - HDP-UTILS-1.1.0.21
baseurl=http://hdp11/HDP-UTILS/centos7/
gpgcheck=0
gpgkey=http://hdp11/HDP/centos7/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1
yum clean all
yum makecache
yum repolist
# Verify the three repos in a browser:
http://hdp11/ambari/centos7/
http://hdp11/HDP/centos7/
http://hdp11/HDP-UTILS/centos7/
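The same check from the shell; each URL should return HTTP 200 once httpd is serving the extracted tarballs:
for u in ambari/centos7 HDP/centos7 HDP-UTILS/centos7; do curl -s -o /dev/null -w "%{http_code} $u\n" http://hdp11/$u/; done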
=========================================================================================================
Install MySQL
# Ambari's default database is PostgreSQL, used to store installation metadata; here we install MySQL ourselves and use it as the Ambari metadata store instead.
wget http://dev.mysql.com/get/mysql-community-release-el7-5.noarch.rpm
rpm -ivh mysql-community-release-el7-5.noarch.rpm
yum install mysql-community-server
# Start mysqld before running the mysql client, otherwise it fails with "Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock'"
service mysqld start
mysql
Install Ambari
yum install ambari-server
Copy the MySQL JDBC driver
mkdir -p /usr/share/java
cp /opt/mysql-connector-java-5.1.38.jar /usr/share/java/mysql-connector-java.jar
cp /usr/share/java/mysql-connector-java.jar /var/lib/ambari-server/resources/mysql-jdbc-driver.jar
vi /etc/ambari-server/conf/ambari.properties
# Add: server.jdbc.driver.path=/usr/share/java/mysql-connector-java.jar
# Create the databases and users; run the following inside the mysql client
CREATE DATABASE ambari;
use ambari;
CREATE USER 'ambari'@'%' IDENTIFIED BY 'adminal';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'%';
CREATE USER 'ambari'@'localhost' IDENTIFIED BY 'adminal';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'localhost';
CREATE USER 'ambari'@'hdp11' IDENTIFIED BY 'adminal';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'hdp11';
FLUSH PRIVILEGES;
source /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql
show tables;
use mysql;
select Host,User,Password from user where user='ambari';
CREATE DATABASE hive;
use hive;
CREATE USER 'hive'@'%' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%';
CREATE USER 'hive'@'localhost' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'localhost';
CREATE USER 'hive'@'hdp11' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'hdp11';
FLUSH PRIVILEGES;
CREATE DATABASE oozie;
use oozie;
CREATE USER 'oozie'@'%' IDENTIFIED BY 'oozie';
GRANT ALL PRIVILEGES ON *.* TO 'oozie'@'%';
CREATE USER 'oozie'@'localhost' IDENTIFIED BY 'oozie';
GRANT ALL PRIVILEGES ON *.* TO 'oozie'@'localhost';
CREATE USER 'oozie'@'hdp11' IDENTIFIED BY 'oozie';
GRANT ALL PRIVILEGES ON *.* TO 'oozie'@'hdp11';
FLUSH PRIVILEGES;
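A quick check that the new accounts can actually log in over the network from the Ambari host (the passwords match the CREATE USER statements above):
mysql -u hive -phive -h hdp11 -e 'show databases;'
mysql -u oozie -poozie -h hdp11 -e 'show databases;'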
================================================================================
Configure Ambari (shut down the VMs you don't need first, to avoid excessive memory use)
ambari-server setup
1. Customize user account for ambari-server daemon [y/n] (n)? y
2. Enter user account for ambari-server daemon (root):ambari
3. OK to continue [y/n] (y)?y
4.
Enter choice (1): 3
Path to JAVA_HOME: /usr/java/jdk1.8
5. Enter advanced database configuration [y/n] (n)? y
6. Enter choice (3): 3
7.
Hostname (localhost):hdp11
Port (3306):
Database name (ambari):
Username (ambari):
Enter Database Password (bigdata):adminal
Re-Enter password:adminal
8. Proceed with configuring remote database connection properties [y/n] (y)?
Note the following for Hive and Oozie:
If their metadata tables were already created in MySQL, choosing the existing-MySQL option in the wizard and entering the URL and password makes the connection test fail every time.
Stop Ambari first and register the JDBC driver with the command below:
ambari-server stop
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
===============================================================================================
Start Ambari (it starts automatically at boot)
ambari-server start
http://hdp11:8080/
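Before opening the UI, confirm the server actually came up; the default web login is admin / admin:
ambari-server status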
The wizard screens look like this:
- Launch Install Wizard
- Cluster name: hmcluster
- Use Local Repository
# Fill the redhat7 input fields with the local repo URLs:
http://hdp11/HDP/centos7/
http://hdp11/HDP-UTILS/centos7/
# Target hosts (Ambari accepts bracket patterns):
hdp1[1-3]
# Locate hdp11's private key and copy it to an easy-to-find path:
cp -a /root/.ssh/id_rsa /
# Upload hdp11's private key id_rsa in the Ambari UI
- Choose services (other components can be installed later): HDFS, YARN, ZooKeeper, Ambari Metrics
- Assign masters and slaves: defaults
- Customize services:
- HDFS
  /hadoop/hdfs/namenode
  /hadoop/hdfs/data
- YARN
  /hadoop/yarn/local
  /hadoop/yarn/log
- Ambari Metrics: admin / admin
- SmartSense: admin / admin
==================================================================================================
Since this is only a test environment, I did not run the steps below myself.
Delete the cluster (if the final Install, Start and Test step fails and you want to reinstall):
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE http://hdp11:8080/api/v1/clusters/hmcluster
#500 status code received on POST method for API: /api/v1/stacks/HDP/versions/2.6/recommendations
#Error message: Error occured during stack advisor command invocation: Cannot create /var/run/ambari-server/stack-recommendations
# Fix: give the ambari user ownership of the directory
sudo chown -R ambari /var/run/ambari-server
# If Ambari services hang during startup, HDFS may be stuck in safe mode; leave it as the hdfs user:
sudo su -l hdfs
hdfs dfsadmin -safemode leave
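You can check the current safe-mode state at any time with:
hdfs dfsadmin -safemode get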