1. Install Hadoop
Hive runs on top of Hadoop, so a working Hadoop environment is required. In this walkthrough, Hive is installed on the NameNode of a fully distributed Hadoop cluster.
See: Hadoop setup [fully distributed]
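A quick sanity check before continuing (assuming the Hadoop binaries are already on the hadoop user's PATH): hadoop version should print the installed release, and jps should show the NameNode process (plus the ResourceManager if YARN runs on this node).
[hadoop@s101 /home/hadoop]$hadoop version
[hadoop@s101 /home/hadoop]$jps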
2. Install Hive
- Download
[hadoop@s101 /home/hadoop]$cd /app/
[hadoop@s101 /app]$wget http://archive.apache.org/dist/hive/stable-2/apache-hive-2.3.4-bin.tar.gz
- Extract and create a symlink
[hadoop@s101 /app]$cd /app
[hadoop@s101 /app]$tar zxvf apache-hive-2.3.4-bin.tar.gz
[hadoop@s101 /app]$ln -s apache-hive-2.3.4-bin hive
- Configure the Hive environment variables for the hadoop user
echo -e '################## HIVE environment variables #############\nexport HIVE_HOME=/app/hive\nexport PATH=$HIVE_HOME/bin:$PATH' >> ~/.bash_profile && source ~/.bash_profile && tail -3 ~/.bash_profile
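To confirm the environment variable took effect, which should resolve to /app/hive/bin/hive; hive --version also prints the release (it relies on a working Hadoop being found on the PATH).
[hadoop@s101 /home/hadoop]$which hive
[hadoop@s101 /home/hadoop]$hive --version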
3. Configure Hive
3.1 Create hive-site.xml and hive-env.sh from the bundled templates
[hadoop@s101 /app]$cd /app/hive/conf/
[hadoop@s101 /app/hive/conf]$cp hive-default.xml.template hive-site.xml
[hadoop@s101 /app/hive/conf]$cp hive-env.sh.template hive-env.sh
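hive-env.sh is only copied here and not edited further in this walkthrough. If Hive does not find Hadoop on its own, a minimal tweak is to point HADOOP_HOME at the local Hadoop installation; the path below is an assumption and should match the actual install directory.
[hadoop@s101 /app/hive/conf]$echo 'export HADOOP_HOME=/app/hadoop' >> hive-env.sh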
3.2 In hive-site.xml, replace the variables ${system:java.io.tmpdir} and ${system:user.name}, which cannot be resolved at runtime
- Test (preview the substitution without modifying the file)
[hadoop@s101 /app/hive/conf]$sed -n 's#${system:java.io.tmpdir}#/app/hive.java.io.tmpdir#pg' hive-site.xml
<value>/app/hive.java.io.tmpdir/${system:user.name}</value>
<value>/app/hive.java.io.tmpdir/${hive.session.id}_resources</value>
<value>/app/hive.java.io.tmpdir/${system:user.name}</value>
<value>/app/hive.java.io.tmpdir/${system:user.name}/operation_logs</value>
[hadoop@s101 /app/hive/conf]$sed -n 's#${system:user.name}#hadoop#pg' hive-site.xml
<value>/app/hive.java.io.tmpdir/hadoop</value>
<value>/app/hive.java.io.tmpdir/hadoop</value>
<value>/app/hive.java.io.tmpdir/hadoop/operation_logs</value>
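Since the next step edits hive-site.xml in place, it can be worth keeping a copy of the original first (optional, but cheap insurance):
[hadoop@s101 /app/hive/conf]$cp hive-site.xml hive-site.xml.bak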
- Replace (in place)
[hadoop@s101 /app/hive/conf]$sed -i 's#${system:java.io.tmpdir}#/app/hive.java.io.tmpdir#g' hive-site.xml
[hadoop@s101 /app/hive/conf]$sed -i 's#${system:user.name}#hadoop#g' hive-site.xml
- Verify
[hadoop@s101 /app/hive/conf]$grep 'hive.java.io.tmpdir' hive-site.xml
<value>/app/hive.java.io.tmpdir/hadoop</value>
<value>/app/hive.java.io.tmpdir/${hive.session.id}_resources</value>
<value>/app/hive.java.io.tmpdir/hadoop</value>
<value>/app/hive.java.io.tmpdir/hadoop/operation_logs</value>
[hadoop@s101 /app/hive/conf]$grep 'hive.java.io.tmpdir/hadoop' hive-site.xml
<value>/app/hive.java.io.tmpdir/hadoop</value>
<value>/app/hive.java.io.tmpdir/hadoop</value>
<value>/app/hive.java.io.tmpdir/hadoop/operation_logs</value>
- Create the directory
[hadoop@s101 /app]$mkdir -p /app/hive.java.io.tmpdir/hadoop
3.3 Initialize the default Derby database
[hadoop@s101 /home/hadoop]$schematool -initSchema -dbType derby
.......
Metastore connection URL: jdbc:derby:;databaseName=metastore_db;create=true
Metastore Connection Driver : org.apache.derby.jdbc.EmbeddedDriver
Metastore connection User: APP
Starting metastore schema initialization to 2.3.0
Initialization script hive-schema-2.3.0.derby.sql
Initialization script completed
schemaTool completed
- After initialization, a metastore_db directory is created in the current working directory; this is the Derby database.
Corresponding directories are also created in the HDFS file system.
[hadoop@s101 /home/hadoop]$ll /home/hadoop/metastore_db/
[hadoop@s101 /home/hadoop]$hdfs dfs -ls /tmp
Found 2 items
drwx-wx-wx - hadoop supergroup 0 2018-11-26 19:12 /tmp/hive
4. Enter the Hive CLI
- Check whether the Hadoop cluster is running; start it if it is not
[hadoop@s101 /home/hadoop]$start-dfs.sh
[hadoop@s101 /home/hadoop]$start-yarn.sh
- Enter the Hive CLI
[hadoop@s101 /home/hadoop]$hive
...
hive>
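A minimal smoke test inside the CLI (a sketch; the table name t1 is arbitrary) confirms that the metastore and HDFS access both work; table directories end up under the default warehouse location /user/hive/warehouse.
hive> show databases;
hive> create table t1(id int);
hive> show tables;
hive> drop table t1;
hive> exit;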
5. Configure MySQL to store the metastore
5.1 Set the connection URL, driver name, username, and password
[hadoop@s101 /home/hadoop]$vim /app/hive/conf/hive-site.xml
<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.200.9:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
</property>
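These properties already exist in hive-site.xml (it was copied from hive-default.xml.template), so edit their <value> elements in place rather than appending duplicate <property> blocks; grep locates them quickly:
[hadoop@s101 /app/hive/conf]$grep -n 'javax.jdo.option.Connection' hive-site.xml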
See: MySQL installation
5.2 MySQL-side setup
- Check the MySQL configuration file /etc/my.cnf and make sure binlog_format is set to ROW:
binlog_format=ROW
Otherwise the schema initialization below will fail with a binlog-related error.
- Log in with mysql -uroot -p, then create the hive database and user:
create database hive;
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'192.168.%.%' IDENTIFIED BY '123456' WITH GRANT OPTION;
flush privileges;
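Before initializing the schema, it can help to verify that the hive account can reach MySQL from the Hive host (assuming a mysql client is installed on s101; host and password are the values configured above):
[hadoop@s101 /home/hadoop]$mysql -h192.168.200.9 -uhive -p123456 -e 'show databases;'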
- Upload the MySQL driver mysql-connector-java-5.1.38.jar to /app/hive/lib.
[hadoop@s101 /home/hadoop]$ls /app/hive/lib/mysql-connector-java-5.1.38.jar
/app/hive/lib/mysql-connector-java-5.1.38.jar
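If the jar is not already at hand, one common source is Maven Central; the URL below follows its standard repository layout and is given as an example rather than the only option.
[hadoop@s101 /app/hive/lib]$wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.38/mysql-connector-java-5.1.38.jar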
- Initialize the metastore database
[hadoop@s101 /home/hadoop]$schematool -initSchema -dbType mysql
....
Metastore connection URL: jdbc:mysql://192.168.200.9:3306/hive?createDatabaseIfNotExist=true
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: hive
Starting metastore schema initialization to 2.3.0
Initialization script hive-schema-2.3.0.mysql.sql
Initialization script completed
schemaTool completed
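schematool can also report the schema version it finds in the configured metastore, which is a quick way to double-check the MySQL initialization:
[hadoop@s101 /home/hadoop]$schematool -dbType mysql -info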
- Check the generated tables
mysql> use hive;
Database changed
mysql> show tables;
(output: the Hive metastore tables are listed)
- If the Hive CLI starts again, the configuration succeeded.
[hadoop@s101 /home/hadoop]$hive
...
...
hive> exit;
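As a final end-to-end check (a sketch; the table name t2 is arbitrary), a table created in Hive should appear in the TBLS table of the MySQL metastore:
hive> create table t2(id int);
hive> exit;
mysql> use hive;
mysql> select TBL_NAME from TBLS;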