(1) Three Ways to Launch Spark SQL

Author: 白面葫芦娃92 | Published 2018-10-31 20:11

    Spark SQL is Apache Spark's module for working with structured data.
    Note that Spark SQL is not the same thing as Hive on Spark (Hive using Spark as its execution engine).

    Environment setup
    Copy hive-site.xml from $HIVE_HOME/conf into the $SPARK_HOME/conf directory.
    Copy mysql-connector-java-5.1.27.jar from $HIVE_HOME/lib into the ~/software directory.
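
    For reference, the two copy steps look roughly like this (a sketch assuming the paths used in this article):

    [hadoop@hadoop001 ~]$ cp $HIVE_HOME/conf/hive-site.xml $SPARK_HOME/conf/
    [hadoop@hadoop001 ~]$ cp $HIVE_HOME/lib/mysql-connector-java-5.1.27.jar ~/software/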
    1. First way to launch: spark-shell

    [hadoop@hadoop001 bin]$ ./spark-shell --master local[2] --jars ~/software/mysql-connector-java-5.1.27.jar
    18/09/02 17:15:54 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
    Setting default log level to "WARN".
    To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
    Spark context Web UI available at http://hadoop001:4040
    Spark context available as 'sc' (master = local[2], app id = local-1535879816467).
    Spark session available as 'spark'.
    Welcome to
          ____              __
         / __/__  ___ _____/ /__
        _\ \/ _ \/ _ `/ __/  '_/
       /___/ .__/\_,_/_/ /_/\_\   version 2.3.1
          /_/
             
    Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_45)
    Type in expressions to have them evaluated.
    Type :help for more information.
    
    scala> 
    
    scala> spark.sql("show tables").show(false)
    +--------+---------+-----------+
    |database|tableName|isTemporary|
    +--------+---------+-----------+
    |default |dept     |false      |
    |default |emp      |false      |
    +--------+---------+-----------+
    scala> spark.sql("use ruozedata")
    scala> spark.sql("show tables").show(false)
    +---------+-----------------------+-----------+
    |database |tableName              |isTemporary|
    +---------+-----------------------+-----------+
    |ruozedata|a                      |false      |
    |ruozedata|b                      |false      |
    |ruozedata|city_info              |false      |
    |ruozedata|dual                   |false      |
    |ruozedata|emp_sqoop              |false      |
    |ruozedata|order_4_partition      |false      |
    |ruozedata|order_mulit_partition  |false      |
    |ruozedata|order_partition        |false      |
    |ruozedata|product_info           |false      |
    |ruozedata|product_rank           |false      |
    |ruozedata|productrevenue         |false      |
    |ruozedata|ruoze_dept             |false      |
    |ruozedata|ruozedata_dynamic_emp  |false      |
    |ruozedata|ruozedata_emp          |false      |
    |ruozedata|ruozedata_emp2         |false      |
    |ruozedata|ruozedata_emp3_new     |false      |
    |ruozedata|ruozedata_emp4         |false      |
    |ruozedata|ruozedata_emp_partition|false      |
    |ruozedata|ruozedata_person       |false      |
    |ruozedata|ruozedata_static_emp   |false      |
    +---------+-----------------------+-----------+
    

    Start Hive to verify that the listing is correct:

    hive> show tables;
    OK
    dept
    emp
    Time taken: 0.196 seconds, Fetched: 2 row(s)
    

    If you omit --jars ~/software/mysql-connector-java-5.1.27.jar, startup fails because the MySQL driver cannot be found:

    Caused by: org.datanucleus.store.rdbms.connectionpool.DatastoreDriverNotFoundException: The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
            at org.datanucleus.store.rdbms.connectionpool.AbstractConnectionPoolFactory.loadDriver(AbstractConnectionPoolFactory.java:58)
            at org.datanucleus.store.rdbms.connectionpool.BoneCPConnectionPoolFactory.createConnectionPool(BoneCPConnectionPoolFactory.java:54)
            at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:238)
            ... 141 more
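
    To avoid passing --jars on every launch, one common alternative (a sketch, not from the original article; adjust the jar path to your environment) is to put the driver on the driver classpath permanently via $SPARK_HOME/conf/spark-defaults.conf:

    # spark-defaults.conf -- hypothetical entry, path must match your machine
    spark.driver.extraClassPath /home/hadoop/software/mysql-connector-java-5.1.27.jar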
    

    Now run the same two-table join in both Hive and Spark SQL to compare speed.
    Hive:

    hive> select e.empno,e.ename,d.dname from emp e join dept d on e.deptno=d.deptno;
    Query ID = hadoop_20180920130606_8a945386-250b-4887-af0a-e39c59c16e8e
    Total jobs = 1
    Execution log at: /tmp/hadoop/hadoop_20180920130606_8a945386-250b-4887-af0a-e39c59c16e8e.log
    2018-09-20 02:44:19     Starting to launch local task to process map join;     maximum memory = 518979584
    2018-09-20 02:44:25     Dump the side-table for tag: 1 with group count: 4 into file: file:/tmp/hadoop/cb727170-007a-4881-8818-3e6b196854ae/hive_2018-09-20_14-44-05_018_7183384201525743718-1/-local-10003/HashTable-Stage-3/MapJoin-mapfile01--.hashtable
    2018-09-20 02:44:25     Uploaded 1 File to: file:/tmp/hadoop/cb727170-007a-4881-8818-3e6b196854ae/hive_2018-09-20_14-44-05_018_7183384201525743718-1/-local-10003/HashTable-Stage-3/MapJoin-mapfile01--.hashtable (373 bytes)
    2018-09-20 02:44:25     End of local task; Time Taken: 6.432 sec.
    Execution completed successfully
    MapredLocal task succeeded
    Launching Job 1 out of 1
    Number of reduce tasks is set to 0 since there's no reduce operator
    Starting Job = job_1537370027569_0003, Tracking URL = http://hadoop000:8088/proxy/application_1537370027569_0003/
    Kill Command = /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/bin/hadoop job  -kill job_1537370027569_0003
    Hadoop job information for Stage-3: number of mappers: 1; number of reducers: 0
    2018-09-20 14:44:46,265 Stage-3 map = 0%,  reduce = 0%
    2018-09-20 14:45:02,478 Stage-3 map = 100%,  reduce = 0%, Cumulative CPU 3.08 sec
    MapReduce Total cumulative CPU time: 3 seconds 80 msec
    Ended Job = job_1537370027569_0003
    MapReduce Jobs Launched: 
    Stage-Stage-3: Map: 1   Cumulative CPU: 3.08 sec   HDFS Read: 6646 HDFS Write: 268 SUCCESS
    Total MapReduce CPU Time Spent: 3 seconds 80 msec
    OK
    7369    SMITH   RESEARCH
    7499    ALLEN   SALES
    7521    WARD    SALES
    7566    JONES   RESEARCH
    7654    MARTIN  SALES
    7698    BLAKE   SALES
    7782    CLARK   ACCOUNTING
    7788    SCOTT   RESEARCH
    7839    KING    ACCOUNTING
    7844    TURNER  SALES
    7876    ADAMS   RESEARCH
    7900    JAMES   SALES
    7902    FORD    RESEARCH
    7934    MILLER  ACCOUNTING
    Time taken: 58.786 seconds, Fetched: 14 row(s)
    

    This takes close to 1 minute.
    Now the same query in Spark SQL:

    scala> spark.sql("show tables").show(false)
    +--------+---------+-----------+
    |database|tableName|isTemporary|
    +--------+---------+-----------+
    |default |dept     |false      |
    |default |emp      |false      |
    +--------+---------+-----------+
    scala> spark.sql("select e.empno,e.ename,d.dname from emp e join dept d on e.deptno=d.deptno").show(false)
    +-----+------+----------+                                                       
    |empno|ename |dname     |
    +-----+------+----------+
    |7369 |SMITH |RESEARCH  |
    |7499 |ALLEN |SALES     |
    |7521 |WARD  |SALES     |
    |7566 |JONES |RESEARCH  |
    |7654 |MARTIN|SALES     |
    |7698 |BLAKE |SALES     |
    |7782 |CLARK |ACCOUNTING|
    |7788 |SCOTT |RESEARCH  |
    |7839 |KING  |ACCOUNTING|
    |7844 |TURNER|SALES     |
    |7876 |ADAMS |RESEARCH  |
    |7900 |JAMES |SALES     |
    |7902 |FORD  |RESEARCH  |
    |7934 |MILLER|ACCOUNTING|
    +-----+------+----------+
    

    This takes roughly 5 seconds.
    2. Second way to launch: spark-sql:

    [hadoop@hadoop000 bin]$ ./spark-sql --master local[2] --driver-class-path ~/software/mysql-connector-java-5.1.27.jar
    18/09/20 14:50:32 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    18/09/20 14:50:34 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
    18/09/20 14:50:34 INFO metastore.ObjectStore: ObjectStore, initialize called
    18/09/20 14:50:35 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
    18/09/20 14:50:35 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
    18/09/20 14:50:37 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
    18/09/20 14:50:39 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
    18/09/20 14:50:39 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
    18/09/20 14:50:40 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
    18/09/20 14:50:40 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
    18/09/20 14:50:40 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
    18/09/20 14:50:40 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
    18/09/20 14:50:40 INFO metastore.ObjectStore: Initialized ObjectStore
    18/09/20 14:50:40 INFO metastore.HiveMetaStore: Added admin role in metastore
    18/09/20 14:50:40 INFO metastore.HiveMetaStore: Added public role in metastore
    18/09/20 14:50:40 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
    18/09/20 14:50:41 INFO metastore.HiveMetaStore: 0: get_all_databases
    18/09/20 14:50:41 INFO HiveMetaStore.audit: ugi=hadoop  ip=unknown-ip-addr     cmd=get_all_databases
    18/09/20 14:50:41 INFO metastore.HiveMetaStore: 0: get_functions: db=default pat=*
    18/09/20 14:50:41 INFO HiveMetaStore.audit: ugi=hadoop  ip=unknown-ip-addr     cmd=get_functions: db=default pat=*
    18/09/20 14:50:41 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
    18/09/20 14:50:42 INFO session.SessionState: Created local directory: /tmp/63e8318a-5966-49b3-801d-1baca1a82baa_resources
    18/09/20 14:50:42 INFO session.SessionState: Created HDFS directory: /tmp/hive/hadoop/63e8318a-5966-49b3-801d-1baca1a82baa
    18/09/20 14:50:42 INFO session.SessionState: Created local directory: /tmp/hadoop/63e8318a-5966-49b3-801d-1baca1a82baa
    18/09/20 14:50:42 INFO session.SessionState: Created HDFS directory: /tmp/hive/hadoop/63e8318a-5966-49b3-801d-1baca1a82baa/_tmp_space.db
    18/09/20 14:50:42 INFO spark.SparkContext: Running Spark version 2.3.1
    18/09/20 14:50:42 INFO spark.SparkContext: Submitted application: SparkSQL::192.168.137.251
    18/09/20 14:50:42 INFO spark.SecurityManager: Changing view acls to: hadoop
    18/09/20 14:50:42 INFO spark.SecurityManager: Changing modify acls to: hadoop
    18/09/20 14:50:42 INFO spark.SecurityManager: Changing view acls groups to: 
    18/09/20 14:50:42 INFO spark.SecurityManager: Changing modify acls groups to: 
    18/09/20 14:50:42 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(hadoop); groups with view permissions: Set(); users  with modify permissions: Set(hadoop); groups with modify permissions: Set()
    18/09/20 14:50:43 INFO util.Utils: Successfully started service 'sparkDriver' on port 44723.
    18/09/20 14:50:43 INFO spark.SparkEnv: Registering MapOutputTracker
    18/09/20 14:50:43 INFO spark.SparkEnv: Registering BlockManagerMaster
    18/09/20 14:50:43 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
    18/09/20 14:50:43 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
    18/09/20 14:50:43 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-57fde9a6-7aa1-45fc-9a2f-1e8e0a24c65f
    18/09/20 14:50:43 INFO memory.MemoryStore: MemoryStore started with capacity 413.9 MB
    18/09/20 14:50:43 INFO spark.SparkEnv: Registering OutputCommitCoordinator
    18/09/20 14:50:43 INFO util.log: Logging initialized @14001ms
    18/09/20 14:50:44 INFO server.Server: jetty-9.3.z-SNAPSHOT
    18/09/20 14:50:44 INFO server.Server: Started @14152ms
    18/09/20 14:50:44 INFO server.AbstractConnector: Started ServerConnector@59a81f73{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
    18/09/20 14:50:44 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
    18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2287395{/jobs,null,AVAILABLE,@Spark}
    18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7e34b127{/jobs/json,null,AVAILABLE,@Spark}
    18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@679dd234{/jobs/job,null,AVAILABLE,@Spark}
    18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1e5eb20a{/jobs/job/json,null,AVAILABLE,@Spark}
    18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4538856f{/stages,null,AVAILABLE,@Spark}
    18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4c3de38e{/stages/json,null,AVAILABLE,@Spark}
    18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@74b86971{/stages/stage,null,AVAILABLE,@Spark}
    18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3d8d17a3{/stages/stage/json,null,AVAILABLE,@Spark}
    18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@ac91282{/stages/pool,null,AVAILABLE,@Spark}
    18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7f79edee{/stages/pool/json,null,AVAILABLE,@Spark}
    18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1ca610a0{/storage,null,AVAILABLE,@Spark}
    18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@49433c98{/storage/json,null,AVAILABLE,@Spark}
    18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@b5c6a30{/storage/rdd,null,AVAILABLE,@Spark}
    18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3bfae028{/storage/rdd/json,null,AVAILABLE,@Spark}
    18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1775c4e7{/environment,null,AVAILABLE,@Spark}
    18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@47829d6d{/environment/json,null,AVAILABLE,@Spark}
    18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2f677247{/executors,null,AVAILABLE,@Spark}
    18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@43f03c23{/executors/json,null,AVAILABLE,@Spark}
    18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7a1b8a46{/executors/threadDump,null,AVAILABLE,@Spark}
    18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2921199d{/executors/threadDump/json,null,AVAILABLE,@Spark}
    18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3d40a3b4{/static,null,AVAILABLE,@Spark}
    18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1e1232cf{/,null,AVAILABLE,@Spark}
    18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6f6efa4f{/api,null,AVAILABLE,@Spark}
    18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7c1a8f0f{/jobs/job/kill,null,AVAILABLE,@Spark}
    18/09/20 14:50:44 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3730f716{/stages/stage/kill,null,AVAILABLE,@Spark}
    18/09/20 14:50:44 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://hadoop000:4040
    18/09/20 14:50:44 INFO executor.Executor: Starting executor ID driver on host localhost
    18/09/20 14:50:44 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 34125.
    18/09/20 14:50:44 INFO netty.NettyBlockTransferService: Server created on hadoop000:34125
    18/09/20 14:50:44 INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
    18/09/20 14:50:44 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, hadoop000, 34125, None)
    18/09/20 14:50:44 INFO storage.BlockManagerMasterEndpoint: Registering block manager hadoop000:34125 with 413.9 MB RAM, BlockManagerId(driver, hadoop000, 34125, None)
    18/09/20 14:50:44 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, hadoop000, 34125, None)
    18/09/20 14:50:44 INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(driver, hadoop000, 34125, None)
    18/09/20 14:50:45 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@11b5f4e2{/metrics/json,null,AVAILABLE,@Spark}
    18/09/20 14:50:45 INFO scheduler.EventLoggingListener: Logging events to hdfs://hadoop000:9000/directory/local-1537426244466
    18/09/20 14:50:45 INFO internal.SharedState: loading hive config file: file:/home/hadoop/app/spark-2.3.1-bin-2.6.0-cdh5.7.0/conf/hive-site.xml
    18/09/20 14:50:45 INFO internal.SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/home/hadoop/app/spark-2.3.1-bin-2.6.0-cdh5.7.0/bin/spark-warehouse').
    18/09/20 14:50:45 INFO internal.SharedState: Warehouse path is 'file:/home/hadoop/app/spark-2.3.1-bin-2.6.0-cdh5.7.0/bin/spark-warehouse'.
    18/09/20 14:50:45 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4cc12db2{/SQL,null,AVAILABLE,@Spark}
    18/09/20 14:50:45 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5ea7bc4{/SQL/json,null,AVAILABLE,@Spark}
    18/09/20 14:50:45 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7a64cb0c{/SQL/execution,null,AVAILABLE,@Spark}
    18/09/20 14:50:45 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@785ed99c{/SQL/execution/json,null,AVAILABLE,@Spark}
    18/09/20 14:50:45 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2cccf134{/static/sql,null,AVAILABLE,@Spark}
    18/09/20 14:50:46 INFO hive.HiveUtils: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
    18/09/20 14:50:46 INFO client.HiveClientImpl: Warehouse location for Hive client (version 1.2.2) is file:/home/hadoop/app/spark-2.3.1-bin-2.6.0-cdh5.7.0/bin/spark-warehouse
    18/09/20 14:50:46 INFO hive.metastore: Mestastore configuration hive.metastore.warehouse.dir changed from /user/hive/warehouse to file:/home/hadoop/app/spark-2.3.1-bin-2.6.0-cdh5.7.0/bin/spark-warehouse
    18/09/20 14:50:46 INFO metastore.HiveMetaStore: 0: Shutting down the object store...
    18/09/20 14:50:46 INFO HiveMetaStore.audit: ugi=hadoop  ip=unknown-ip-addr     cmd=Shutting down the object store...
    18/09/20 14:50:46 INFO metastore.HiveMetaStore: 0: Metastore shutdown complete.
    18/09/20 14:50:46 INFO HiveMetaStore.audit: ugi=hadoop  ip=unknown-ip-addr     cmd=Metastore shutdown complete.
    18/09/20 14:50:46 INFO metastore.HiveMetaStore: 0: get_database: default
    18/09/20 14:50:46 INFO HiveMetaStore.audit: ugi=hadoop  ip=unknown-ip-addr     cmd=get_database: default
    18/09/20 14:50:46 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
    18/09/20 14:50:46 INFO metastore.ObjectStore: ObjectStore, initialize called
    18/09/20 14:50:46 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
    18/09/20 14:50:46 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
    18/09/20 14:50:46 INFO metastore.ObjectStore: Initialized ObjectStore
    18/09/20 14:50:47 INFO state.StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
    spark-sql> show tables;
    18/09/20 14:51:02 INFO metastore.HiveMetaStore: 0: get_database: global_temp
    18/09/20 14:51:02 INFO HiveMetaStore.audit: ugi=hadoop  ip=unknown-ip-addr     cmd=get_database: global_temp
    18/09/20 14:51:02 WARN metastore.ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
    18/09/20 14:51:05 INFO metastore.HiveMetaStore: 0: get_database: default
    18/09/20 14:51:05 INFO HiveMetaStore.audit: ugi=hadoop  ip=unknown-ip-addr     cmd=get_database: default
    18/09/20 14:51:05 INFO metastore.HiveMetaStore: 0: get_database: default
    18/09/20 14:51:05 INFO HiveMetaStore.audit: ugi=hadoop  ip=unknown-ip-addr     cmd=get_database: default
    18/09/20 14:51:05 INFO metastore.HiveMetaStore: 0: get_tables: db=default pat=*
    18/09/20 14:51:05 INFO HiveMetaStore.audit: ugi=hadoop  ip=unknown-ip-addr     cmd=get_tables: db=default pat=*
    18/09/20 14:51:06 INFO codegen.CodeGenerator: Code generated in 497.365376 ms
    default dept    false
    default emp     false
    Time taken: 4.426 seconds, Fetched 2 row(s)
    18/09/20 14:51:06 INFO thriftserver.SparkSQLCLIDriver: Time taken: 4.426 seconds, Fetched 2 row(s)
    

    With spark-sql, passing the driver via --jars ~/software/mysql-connector-java-5.1.27.jar fails instead: the metastore connection is opened from the driver JVM, and the JDBC driver must already be on the driver's classpath at startup, which --jars does not guarantee:

    ...
    Caused by: java.sql.SQLException: No suitable driver found for jdbc:mysql://localhost:3306/ruozedata_basic03?//createDatabaseIfNotExist=true
    ...
    
    spark-sql> select * from emp;
    
    spark-sql> cache table emp;
    

    Spark SQL's cache table is eager, not lazy: the table is materialized in memory as soon as the statement runs.

    spark-sql> select * from emp;
    

    After the cache, reading the same table again shows the input size change from 714 to 1992, which is caused by the cache operation…
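
    If eager caching is not wanted, Spark SQL also offers an explicitly lazy variant and an uncache command; a minimal sketch (same emp table as above):

    spark-sql> cache lazy table emp;   -- materialized only when the table is first scanned
    spark-sql> uncache table emp;      -- drop the cached copy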

    spark-sql> select * from hive_map;
    1       zhangsan        {"brother":"xiaoxu","father":"xiaoming","mother":"xiaohuang"}   28
    2       lisi    {"brother":"guanyu","father":"mayun","mother":"huangyi"}        22
    3       wangwu  {"father":"wangjianlin","mother":"ruhua","sister":"jingtian"}   29
    4       mayun   {"father":"mayongzhen","mother":"angelababy"}   26
    
    spark-sql> create table ruoze_test(key string,value string);
    spark-sql> explain extended select a.key*(5+6),b.value from ruoze_test a join ruoze_test b on a.key=b.key and a.key>10;
    == Parsed Logical Plan ==
    'Project [unresolvedalias(('a.key * (5 + 6)), None), 'b.value]
    +- 'Join Inner, (('a.key = 'b.key) && ('a.key > 10))
       :- 'SubqueryAlias a
       :  +- 'UnresolvedRelation `ruoze_test`
       +- 'SubqueryAlias b
          +- 'UnresolvedRelation `ruoze_test`
    
    == Analyzed Logical Plan ==
    (CAST(key AS DOUBLE) * CAST((5 + 6) AS DOUBLE)): double, value: string
    Project [(cast(key#111 as double) * cast((5 + 6) as double)) AS (CAST(key AS DOUBLE) * CAST((5 + 6) AS DOUBLE))#115, value#114]
    +- Join Inner, ((key#111 = key#113) && (cast(key#111 as int) > 10))
       :- SubqueryAlias a
       :  +- SubqueryAlias ruoze_test
       :     +- HiveTableRelation `default`.`ruoze_test`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [key#111, value#112]
       +- SubqueryAlias b
          +- SubqueryAlias ruoze_test
             +- HiveTableRelation `default`.`ruoze_test`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [key#113, value#114]
    
    == Optimized Logical Plan ==
    Project [(cast(key#111 as double) * 11.0) AS (CAST(key AS DOUBLE) * CAST((5 + 6) AS DOUBLE))#115, value#114]
    +- Join Inner, (key#111 = key#113)
       :- Project [key#111]
       :  +- Filter (isnotnull(key#111) && (cast(key#111 as int) > 10))
    // A key point of big-data optimization: irrelevant data is filtered out as early as possible
       :     +- HiveTableRelation `default`.`ruoze_test`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [key#111, value#112]
       +- Filter (isnotnull(key#113) && (cast(key#113 as int) > 10))
          +- HiveTableRelation `default`.`ruoze_test`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [key#113, value#114]
    
    == Physical Plan ==
    *(5) Project [(cast(key#111 as double) * 11.0) AS (CAST(key AS DOUBLE) * CAST((5 + 6) AS DOUBLE))#115, value#114]
    +- *(5) SortMergeJoin [key#111], [key#113], Inner
       :- *(2) Sort [key#111 ASC NULLS FIRST], false, 0
       :  +- Exchange hashpartitioning(key#111, 200)
       :     +- *(1) Filter (isnotnull(key#111) && (cast(key#111 as int) > 10))
       :        +- HiveTableScan [key#111], HiveTableRelation `default`.`ruoze_test`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [key#111, value#112]
       +- *(4) Sort [key#113 ASC NULLS FIRST], false, 0
          +- Exchange hashpartitioning(key#113, 200)
             +- *(3) Filter (isnotnull(key#113) && (cast(key#113 as int) > 10))
                +- HiveTableScan [key#113, value#114], HiveTableRelation `default`.`ruoze_test`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [key#113, value#114]
    

    Spark SQL optimizes the query automatically: note how 5+6 is constant-folded to 11.0 and the key>10 filter is pushed below the join. The same plans can also be inspected from spark-shell, as sketched below.
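
    A minimal sketch using the DataFrame API (same query as above):

    scala> val q = spark.sql("select a.key*(5+6),b.value from ruoze_test a join ruoze_test b on a.key=b.key and a.key>10")
    scala> q.explain(true)   // prints the parsed, analyzed, optimized and physical plans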
    3. Third way to launch: the server-side thriftserver:

    [hadoop@hadoop001 sbin]$ ./start-thriftserver.sh --master local[2] --jars ~/software/mysql-connector-java-5.1.27.jar
    starting org.apache.spark.sql.hive.thriftserver.HiveThriftServer2, logging to /home/hadoop/app/spark-2.3.1-bin-2.6.0-cdh5.7.0/logs/spark-hadoop-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-hadoop001.out
    [hadoop@hadoop001 sbin]$ tail -200f /home/hadoop/app/spark-2.3.1-bin-2.6.0-cdh5.7.0/logs/spark-hadoop-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-hadoop001.out
    ...
    18/09/02 18:27:26 INFO AbstractService: Service:ThriftBinaryCLIService is started.
    18/09/02 18:27:26 INFO AbstractService: Service:HiveServer2 is started.
    18/09/02 18:27:26 INFO HiveThriftServer2: HiveThriftServer2 started
    18/09/02 18:27:28 INFO ThriftCLIService: Starting ThriftBinaryCLIService on port 10000 with 5...500 worker threads
    [hadoop@hadoop001 bin]$ ./beeline -u jdbc:hive2://localhost:10000 -n hadoop
    Connecting to jdbc:hive2://localhost:10000
    log4j:WARN No appenders could be found for logger (org.apache.hive.jdbc.Utils).
    log4j:WARN Please initialize the log4j system properly.
    log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
    Connected to: Spark SQL (version 2.3.1)
    Driver: Hive JDBC (version 1.2.1.spark2)
    Transaction isolation: TRANSACTION_REPEATABLE_READ
    Beeline version 1.2.1.spark2 by Apache Hive
    0: jdbc:hive2://localhost:10000> show tables;
    +-----------+------------+--------------+--+
    | database  | tableName  | isTemporary  |
    +-----------+------------+--------------+--+
    | default   | dept       | false        |
    | default   | emp        | false        |
    +-----------+------------+--------------+--+
    2 rows selected (1.123 seconds)
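
    The thriftserver listens on port 10000 by default; per the Spark docs this can be overridden at startup, e.g. (untested sketch):

    [hadoop@hadoop001 sbin]$ ./start-thriftserver.sh --master local[2] --jars ~/software/mysql-connector-java-5.1.27.jar --hiveconf hive.server2.thrift.port=14000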
    

    4. Connecting to the Spark thriftserver via JDBC
    First add the dependency to the pom file:

    <dependency>
          <groupId>org.apache.hive</groupId>
          <artifactId>hive-jdbc</artifactId>
          <version>1.1.0-cdh5.7.0</version>
     </dependency>
    

    The code is then as follows:

    import java.sql.DriverManager

    object SparkSQLApp {
      def main(args: Array[String]): Unit = {
        // Register the Hive JDBC driver
        Class.forName("org.apache.hive.jdbc.HiveDriver")
        // Connect to the thriftserver started above (default port 10000)
        val conn = DriverManager.getConnection("jdbc:hive2://hadoop000:10000")
        val stmt = conn.prepareStatement("select empno, ename, deptno from emp")
        val rs = stmt.executeQuery()
        while (rs.next()) {
          println("empno:" + rs.getInt("empno") + "    ename:" + rs.getString("ename"))
        }
        rs.close()
        stmt.close()
        conn.close()
      }
    }
    
    The output:
    ---------------------------------------------------------------------
    log4j:WARN No appenders could be found for logger (org.apache.hive.jdbc.Utils).
    log4j:WARN Please initialize the log4j system properly.
    log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
    empno:7369    ename:SMITH
    empno:7499    ename:ALLEN
    empno:7521    ename:WARD
    empno:7566    ename:JONES
    empno:7654    ename:MARTIN
    empno:7698    ename:BLAKE
    empno:7782    ename:CLARK
    empno:7788    ename:SCOTT
    empno:7839    ename:KING
    empno:7844    ename:TURNER
    empno:7876    ename:ADAMS
    empno:7900    ename:JAMES
    empno:7902    ename:FORD
    empno:7934    ename:MILLER
    empno:8888    ename:HIVE
    

    How do the three launch modes differ?
    With the server-side mode, the thriftserver is started once and runs as a long-lived 24/7 service; clients can then connect via JDBC from code at any time, avoiding the startup cost of launching a new application for each query.
