Integrating CarbonData with Hive and Spark

Author: david9 | Published 2020-02-20 14:48

Hardware requirements:

OS: CentOS 7.6 (1810)
CPU: 4 cores
Memory: 16 GB

Software requirements:

hadoop-2.7.2
hive-1.2.1
spark-2.2.1-bin-hadoop2.7.tgz
apache-carbondata-1.6.1-bin-spark2.2.1-hadoop2.7.2.jar
apache-carbondata-1.6.1-source-release
MySQL 5.7 installation package, plus the JDBC driver mysql-connector-java-5.1.47.jar

Note:
CarbonData 1.6.1 pins its supported component versions:
Hadoop is supported up to 2.7.2
Hive is supported up to 1.2.1 (see carbondata-parent-1.6.1/integration/hive/pom.xml)
Spark is supported up to 2.x (see carbondata-parent-1.6.1/integration/spark2/pom.xml)

Installation:

1. Install the JDK

yum install -y java-1.8.0-openjdk java-devel

Configure JAVA_HOME:

vim /root/.bashrc

Append at the end:

export JAVA_HOME=/usr/lib/jvm/java
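
A quick sanity check (on CentOS, the OpenJDK packages should provide the /usr/lib/jvm/java symlink; this should print the path and a 1.8 version):

source /root/.bashrc
echo $JAVA_HOME
java -version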
2. Configure passwordless SSH to the local machine

ssh-keygen

Press Enter through every prompt.

ssh-copy-id root@your-IP
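
A quick check that key-based login works (this should print the hostname without asking for a password):

ssh root@your-IP hostname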

3. Install single-node Hadoop

Disable the firewall:

systemctl stop firewalld && systemctl disable firewalld

Extract Hadoop:
Upload hadoop-2.7.2.tar.gz to the /root directory, then run:

cd /root && tar -xvf hadoop-2.7.2.tar.gz

Create the /data directory:

mkdir /data

Edit

vim /root/hadoop-2.7.2/etc/hadoop/core-site.xml

with the following content:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://your-IP:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/data/hadoop/tmp</value>
    </property>
</configuration>

Format the NameNode:

cd /root/hadoop-2.7.2/bin && ./hadoop namenode -format

Start Hadoop:

cd /root/hadoop-2.7.2/sbin && ./start-all.sh

Add the HADOOP_HOME environment variable:

vim /root/.bashrc

Append at the end:

export HADOOP_HOME=/root/hadoop-2.7.2
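
A quick check that the daemons came up (jps should list NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager, and the fs command should return without errors):

source /root/.bashrc
jps
$HADOOP_HOME/bin/hadoop fs -ls /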
4. Install single-node Hive

Upload apache-hive-1.2.1-bin.tar.gz to the /root directory, then run:

cd /root && tar -xvf apache-hive-1.2.1-bin.tar.gz

Install MySQL 5.7 and configure a user and password (omitted here).
Download the MySQL JDBC driver and copy it into Hive's lib directory:

cp mysql-connector-java-5.1.47.jar /root/apache-hive-1.2.1-bin/lib/

Edit hive-site.xml:

vim /root/apache-hive-1.2.1-bin/conf/hive-site.xml

with the following content:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://127.0.0.1:3306/hive?createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8&amp;useSSL=false</value>
     </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>your-db-user</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>your-db-password</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>
    <property>
        <name>hive.metastore.schema.verification</name>
        <value>false</value>
    </property>
</configuration>

Initialize the Hive schema:

cd /root/apache-hive-1.2.1-bin/bin && ./schematool -initSchema -dbType mysql
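
If initialization succeeded, the metastore tables now exist in MySQL (the database name hive comes from the JDBC URL above):

mysql -u your-db-user -p -e 'USE hive; SHOW TABLES;'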

Add the HIVE_HOME environment variable:

vim /root/.bashrc

Append at the end:

export HIVE_HOME=/root/apache-hive-1.2.1-bin
5. Install single-node Spark

Upload spark-2.2.1-bin-hadoop2.7.tgz to the /root directory, then run:

cd /root && tar -xvf spark-2.2.1-bin-hadoop2.7.tgz

Upload apache-carbondata-1.6.1-bin-spark2.2.1-hadoop2.7.2.jar to the /root directory, then run:

cd /root && cp apache-carbondata-1.6.1-bin-spark2.2.1-hadoop2.7.2.jar /root/spark-2.2.1-bin-hadoop2.7/jars

Upload mysql-connector-java-5.1.47.jar to the /root directory, then run:

cd /root && cp mysql-connector-java-5.1.47.jar /root/spark-2.2.1-bin-hadoop2.7/jars

Copy hive-site.xml into Spark:

cd /root/spark-2.2.1-bin-hadoop2.7/conf && cp /root/apache-hive-1.2.1-bin/conf/hive-site.xml .
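
As a quick smoke test that Spark can now reach the Hive metastore, the bundled spark-sql CLI should list at least the default database:

cd /root/spark-2.2.1-bin-hadoop2.7/bin && ./spark-sql -e 'show databases;'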

6. Integrate CarbonData with Spark

Create CarbonData test data:

cd /root && mkdir carbondata && cd carbondata
cat > sample.csv << EOF
id,name,scale,country,salary
1,yuhai,1.77,china,33000.1
2,runlin,1.70,china,33000.2
EOF

Upload the test data sample.csv to the /tmp directory on HDFS:

cd /root/hadoop-2.7.2/bin && ./hadoop fs -put /root/carbondata/sample.csv /tmp
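
Confirm the upload landed:

cd /root/hadoop-2.7.2/bin && ./hadoop fs -cat /tmp/sample.csv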

Launch spark-shell:

cd /root/spark-2.2.1-bin-hadoop2.7/bin && ./spark-shell

In the spark-shell session that opens, run the following:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.CarbonSession._
val rootPath = "hdfs://your-IP:9000/user/hadoop/carbon"
val storeLocation = s"$rootPath/store"
val warehouse = s"$rootPath/warehouse"
val metaStoreDB = s"$rootPath/metastore_db"

val carbon = SparkSession.builder().
  enableHiveSupport().
  config("spark.sql.warehouse.dir", warehouse).
  config(org.apache.carbondata.core.constants.CarbonCommonConstants.STORE_LOCATION, storeLocation).
  getOrCreateCarbonSession(storeLocation, metaStoreDB)

carbon.sql("create table hive_carbon(id int, name string, scale decimal, country string, salary double) STORED BY 'carbondata'")
carbon.sql("LOAD DATA INPATH 'hdfs://你的机器IP:9000/tmp/sample.csv' INTO TABLE hive_carbon")
carbon.sql("SELECT * FROM hive_carbon").show()

You should see:

scala> carbon.sql("SELECT * FROM hive_carbon").show()
+---+------+-----+-------+-------+
| id|  name|scale|country| salary|
+---+------+-----+-------+-------+
|  1| yuhai|    2|  china|33000.1|
|  2|runlin|    2|  china|33000.2|
+---+------+-----+-------+-------+

Note that the scale column shows 2 instead of 1.77 and 1.70: the table declares scale as a bare decimal, which defaults to decimal(10,0), so the values are rounded at load time; declare it as decimal(10,2) if you want to keep the fractional digits.

At this point, the CarbonData-Spark integration is complete.

7. Integrate CarbonData with Hive

Modify hive-site.xml to add the CarbonData configuration:

vim /root/apache-hive-1.2.1-bin/conf/hive-site.xml

The full file now reads:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://127.0.0.1:3306/hive?createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8&amp;useSSL=false</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>your-db-user</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>your-db-password</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>
    <property>
        <name>hive.metastore.schema.verification</name>
        <value>false</value>
    </property>
    <property>
        <name>hive.metastore.pre.event.listeners</name>
        <value>org.apache.carbondata.hive.CarbonHiveMetastoreListener</value>
    </property>
</configuration>

Add carbondata-hive-1.6.1.jar to Hive:
Build carbondata-parent-1.6.1 to obtain carbondata-hive-1.6.1.jar (located under carbondata-parent-1.6.1/integration/hive/target), upload carbondata-hive-1.6.1.jar to /root, then run:

cp /root/carbondata-hive-1.6.1.jar /root/apache-hive-1.2.1-bin/lib
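
If you need to build this jar yourself, a typical invocation is the following (assuming Maven 3 and JDK 8, with the source extracted to /root/carbondata-parent-1.6.1; the spark-2.2 profile matches the Spark version used in this setup):

cd /root/carbondata-parent-1.6.1 && mvn -DskipTests -Pspark-2.2 -Dspark.version=2.2.1 clean package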

Add apache-carbondata-1.6.1-bin-spark2.2.1-hadoop2.7.2.jar to Hive:
Upload apache-carbondata-1.6.1-bin-spark2.2.1-hadoop2.7.2.jar to the /root directory, then run:

cp /root/apache-carbondata-1.6.1-bin-spark2.2.1-hadoop2.7.2.jar /root/apache-hive-1.2.1-bin/lib

Copy the Spark jars that the integration needs into Hive:

cd /root/apache-hive-1.2.1-bin/lib && cp /root/spark-2.2.1-bin-hadoop2.7/jars/scala-* . && cp /root/spark-2.2.1-bin-hadoop2.7/jars/spark-catalyst_2.11-2.2.1.jar .
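
The Hive lib directory should now contain the Scala runtime and spark-catalyst jars, which the CarbonData Hive integration needs at query time:

ls /root/apache-hive-1.2.1-bin/lib | grep -E '^scala|spark-catalyst'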

Start Hive:

cd /root && nohup ~/apache-hive-1.2.1-bin/bin/hive --service metastore 1>metastore.log 2>&1 &
cd /root && nohup ~/apache-hive-1.2.1-bin/bin/hive --service hiveserver2 1>hiveserver2.log 2>&1 &
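
Before connecting, confirm both services are listening (the metastore defaults to port 9083 and HiveServer2 to port 10000):

ss -lnt | grep -E '9083|10000'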

Connect to Hive with Beeline and query the data:

cd /root/apache-hive-1.2.1-bin/bin && ./beeline -u 'jdbc:hive2://localhost:10000/default' -n root -p '' -e 'set hive.fetch.task.conversion=none;select * from hive_carbon;'

You should see the query result:

Connecting to jdbc:hive2://localhost:10000/default
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/root/apache-hive-1.2.1-bin/lib/apache-carbondata-1.6.1-bin-spark2.2.1-hadoop2.7.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/root/hadoop-2.7.2/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Connected to: Apache Hive (version 1.2.1)
Driver: Hive JDBC (version 1.2.1)
Transaction isolation: TRANSACTION_REPEATABLE_READ
No rows affected (0.077 seconds)
INFO  : Number of reduce tasks is set to 0 since there's no reduce operator
INFO  : number of splits:1
INFO  : Submitting tokens for job: job_local382345580_0004
INFO  : The url to track the job: http://localhost:8080/
INFO  : Job running in-process (local Hadoop)
INFO  : 2020-02-20 14:48:00,962 Stage-1 map = 100%,  reduce = 0%
INFO  : Ended Job = job_local382345580_0004
+-----------------+-------------------+--------------------+----------------------+---------------------+--+
| hive_carbon.id  | hive_carbon.name  | hive_carbon.scale  | hive_carbon.country  | hive_carbon.salary  |
+-----------------+-------------------+--------------------+----------------------+---------------------+--+
| 1               | yuhai             | 2                  | china                | 33000.1             |
| 2               | runlin            | 2                  | china                | 33000.2             |
+-----------------+-------------------+--------------------+----------------------+---------------------+--+
2 rows selected (1.653 seconds)
Beeline version 1.2.1 by Apache Hive
Closing: 0: jdbc:hive2://localhost:10000/default

At this point, the CarbonData-Hive integration is complete.

Known issues:

1. Loading data into the table through Spark and then querying it from Hive works fine, but creating tables or running INSERT statements from Hive raises errors; I have not yet had time to trace through the code to resolve this.

