Sqoop Data Import/Export

Author: FantJ | Published 2018-07-29 15:15

1. Exporting from HDFS to an RDBMS

1.1 Preparation

Create a file named sqoop_export.txt with the following contents:

1201,laojiao, manager,50000, TP
1202,fantj,preader,50000,TP
1203,jiao,dev,30000,AC
1204,laowang,dev,30000,AC
1205,laodu,admin,20000,TP
1206,laop,grp des,20000,GR
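Each line is one comma-delimited record with five fields, matching the employee table created below. A minimal Python sketch (illustrative only; Sqoop's MapReduce job does the real parsing) shows how these lines map to typed columns:

```python
# Parse the sample records the way Sqoop's export will:
# five comma-separated fields mapping to (id, name, deg, salary, dept).
records_txt = """1201,laojiao, manager,50000, TP
1202,fantj,preader,50000,TP
1203,jiao,dev,30000,AC
1204,laowang,dev,30000,AC
1205,laodu,admin,20000,TP
1206,laop,grp des,20000,GR"""

rows = []
for line in records_txt.splitlines():
    id_, name, deg, salary, dept = line.split(",")
    rows.append((int(id_), name, deg, int(salary), dept))

print(len(rows))  # 6 records
print(rows[0])    # (1201, 'laojiao', ' manager', 50000, ' TP')
```

Note that splitting on ',' preserves leading spaces, so ' manager' and ' TP' keep their spaces; this is why those spaces later show up in the MySQL table.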

Upload it to HDFS:
hadoop fs -put sqoop_export.txt /sqoop/export/

Create the MySQL database and grant privileges:

create database sqoopdb;
grant all privileges on sqoopdb.* to 'sqoop'@'%' identified by 'sqoop';
grant all privileges on sqoopdb.* to 'sqoop'@'localhost' identified by 'sqoop';
grant all privileges on sqoopdb.* to 'sqoop'@'s166' identified by 'sqoop';
flush privileges;

Create the table:

use sqoopdb;
CREATE TABLE employee ( 
   id INT NOT NULL PRIMARY KEY, 
   name VARCHAR(20), 
   deg VARCHAR(20),
   salary INT,
   dept VARCHAR(10));
1.2 Run the export command
bin/sqoop export \
--connect jdbc:mysql://s166:3306/sqoopdb \
--username sqoop \
--password sqoop \
--table employee \
--export-dir /sqoop/export/ \
--input-fields-terminated-by ','
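The value of --input-fields-terminated-by must match the delimiter actually used in the HDFS files, or the export job cannot split the lines into columns. A small Python sketch of what goes wrong when it doesn't match:

```python
line = "1201,laojiao, manager,50000, TP"

# Correct delimiter: five fields, as the employee table expects.
print(len(line.split(",")))  # 5

# Wrong delimiter (e.g. tab): the whole line is seen as a single
# field, and the export job would fail to parse the records.
print(len(line.split("\t")))  # 1
```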

When I ran the export, it kept failing with this error:

 ERROR tool.ExportTool: Encountered IOException running export job: java.io.FileNotFoundException: File does not exist: hdfs://s166/home/fantj/sqoop/lib/avro-mapred-1.5.3.jar

After searching for solutions, two fixes worked:

  1. Replace the mysql connector jar with a newer version.
  2. Edit Hadoop's mapred-site.xml (first rename it: mv mapred-site.xml.template mapred-site.xml) and add:
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

After applying the fixes, running the export again produced:

    Map-Reduce Framework
        Map input records=6
        Map output records=6
        Input split bytes=107
        Spilled Records=0
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=95
        CPU time spent (ms)=1210
        Physical memory (bytes) snapshot=97288192
        Virtual memory (bytes) snapshot=2075623424
        Total committed heap usage (bytes)=17006592
    File Input Format Counters 
        Bytes Read=0
    File Output Format Counters 
        Bytes Written=0
 22:34:37 INFO mapreduce.ExportJobBase: Transferred 274 bytes in 47.346 seconds (5.7872 bytes/sec)
 22:34:37 INFO mapreduce.ExportJobBase: Exported 6 records.

That means the export succeeded.

1.3 Verify the MySQL table
mysql> select * from employee;
+------+---------+----------+--------+------+
| id   | name    | deg      | salary | dept |
+------+---------+----------+--------+------+
| 1201 | laojiao |  manager |  50000 | TP   |
| 1202 | fantj   | preader  |  50000 | TP   |
| 1203 | jiao    | dev      |  30000 | AC   |
| 1204 | laowang | dev      |  30000 | AC   |
| 1205 | laodu   | admin    |  20000 | TP   |
| 1206 | laop    | grp des  |  20000 | GR   |
+------+---------+----------+--------+------+
6 rows in set (0.07 sec)

2. Importing table data into HDFS

bin/sqoop import \
--connect jdbc:mysql://s166:3306/sqoopdb \
--username sqoop \
--password sqoop \
--table employee --m 1
22:44:26 INFO mapreduce.Job: The url to track the job: http://s166:8088/proxy/application_1532679575794_0002/

    File System Counters
        FILE: Number of bytes read=0
        FILE: Number of bytes written=123111
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=87
        HDFS: Number of bytes written=161
        HDFS: Number of read operations=4
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters 
        Launched map tasks=1
        Other local map tasks=1
        Total time spent by all maps in occupied slots (ms)=5972
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=5972
        Total vcore-seconds taken by all map tasks=5972
        Total megabyte-seconds taken by all map tasks=6115328
    Map-Reduce Framework
        Map input records=6
        Map output records=6
        Input split bytes=87
        Spilled Records=0
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=195
        CPU time spent (ms)=970
        Physical memory (bytes) snapshot=99921920
        Virtual memory (bytes) snapshot=2079825920
        Total committed heap usage (bytes)=18358272
    File Input Format Counters 
        Bytes Read=0
    File Output Format Counters 
        Bytes Written=161
 22:44:57 INFO mapreduce.ImportJobBase: Transferred 161 bytes in 34.5879 seconds (4.6548 bytes/sec)
 22:44:57 INFO mapreduce.ImportJobBase: Retrieved 6 records.

3. Importing a relational table into Hive

sqoop import --connect jdbc:mysql://s166:3306/sqoopdb --username sqoop --password sqoop --table employee --hive-import --m 1

4. Importing into a specified HDFS directory

sqoop import \
--connect jdbc:mysql://s166:3306/sqoopdb \
--username sqoop \
--password sqoop \
--target-dir /queryresult \
--table employee --m 1

5. Importing a subset of table data

Using the Sqoop import tool's --where option, we can import a subset of a table and store the result in a target directory on HDFS.

sqoop import \
--connect jdbc:mysql://s166:3306/sqoopdb \
--username sqoop \
--password sqoop \
--where "salary>10000" \
--target-dir /wherequery \
--table employee --m 1
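The --where predicate is evaluated in MySQL before the data leaves the database. This hypothetical sketch applies the same predicate to the sample rows to show which records would be imported:

```python
# Sample rows mirroring the employee table: (id, name, deg, salary, dept).
employees = [
    (1201, "laojiao", "manager", 50000, "TP"),
    (1202, "fantj", "preader", 50000, "TP"),
    (1203, "jiao", "dev", 30000, "AC"),
    (1204, "laowang", "dev", 30000, "AC"),
    (1205, "laodu", "admin", 20000, "TP"),
    (1206, "laop", "grp des", 20000, "GR"),
]

# Equivalent of --where "salary>10000".
subset = [e for e in employees if e[3] > 10000]
print(len(subset))  # every sample salary exceeds 10000, so all 6 rows match
```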

