Data Sharding and Migration in Practice (Part 2): Testing Performance on Large Data Volumes

Author: 大猪小猪在菜盘 | Published 2019-01-09 19:59

Test VM configuration: 16 GB RAM, 200 GB spinning disk

In the previous article we set up the environment; in this one we'll test MySQL query performance on a large table. My abilities are limited, so I hope the DBA experts out there will chime in with pointers. Thank you!

Many articles online claim that performance drops off sharply once a single table passes 10 million rows, so I chose 10 million as my test target. It can't compare with a complex production table, but it still gives us a rough baseline.

Discard the test table from the previous article and create the table below, with indexes on two of the columns and none on the third:

CREATE TABLE `BIG`.`huge_table` (
  `id` INT NOT NULL AUTO_INCREMENT,
  `username` varchar(20) NOT NULL,
  `username_no_index` varchar(20) NOT NULL,
  `batch_no` CHAR(10) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `USERNAME_INDEX` (`username`),
  KEY `BATCH_NO_INDEX` (`batch_no`)
) ENGINE=InnoDB AUTO_INCREMENT=0;
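
Before moving on, a quick sanity check (just a habit of mine; output omitted here) confirms that only username and batch_no carry secondary indexes, while username_no_index is deliberately left bare:

SHOW INDEX FROM BIG.huge_table;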

Create a Spring Boot project, bring in MyBatis and Lombok, and add the following Maven dependencies:

<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.42</version>
</dependency>
<dependency>
    <groupId>org.mybatis.spring.boot</groupId>
    <artifactId>mybatis-spring-boot-starter</artifactId>
    <version>1.3.2</version>
</dependency>
<dependency>
    <groupId>org.mybatis.generator</groupId>
    <artifactId>mybatis-generator-maven-plugin</artifactId>
    <version>1.3.7</version>
</dependency>
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <optional>true</optional>
</dependency>

Enter the following configuration in application.properties:

spring.datasource.url=jdbc:mysql://localhost:3306/BIG?verifyServerCertificate=false&useSSL=false&requireSSL=false
spring.datasource.username=root
spring.datasource.password=aq1sw2de
spring.datasource.driver-class-name=com.mysql.jdbc.Driver

Use the mybatis-generator plugin to generate the mapping files.
mybatis-generator.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE generatorConfiguration
    PUBLIC "-//mybatis.org//DTD MyBatis Generator Configuration 1.0//EN"
    "http://mybatis.org/dtd/mybatis-generator-config_1_0.dtd">

<generatorConfiguration>
    <!-- database driver jar -->
    <classPathEntry location="YOUR_MAVEN_LOCATION\repository\mysql\mysql-connector-java\5.1.42\mysql-connector-java-5.1.42.jar"/>
    <context id="DB2Tables" targetRuntime="MyBatis3">
        <commentGenerator>
            <property name="suppressDate" value="true"/>
            <property name="suppressAllComments" value="true"/>
        </commentGenerator>
        <!-- JDBC URL, username, and password; note that & must be escaped as &amp; inside XML -->
        <jdbcConnection driverClass="com.mysql.jdbc.Driver" connectionURL="jdbc:mysql://localhost:3306/BIG?verifyServerCertificate=false&amp;useSSL=false&amp;requireSSL=false" userId="root" password="aq1sw2de"></jdbcConnection>
        <javaTypeResolver>
            <property name="forceBigDecimals" value="false"/>
        </javaTypeResolver>
        <!-- where the generated model classes go -->
        <javaModelGenerator targetPackage="zsh.demos.big.dao.pojo" targetProject="./src/main/java">
            <property name="enableSubPackages" value="true"/>
            <property name="trimStrings" value="true"/>
        </javaModelGenerator>
        <!-- where the generated mapper XML files go -->
        <sqlMapGenerator targetPackage="zsh.demos.big.dao.mapping" targetProject="./src/main/resources">
            <property name="enableSubPackages" value="true"/>
        </sqlMapGenerator>
        <!-- where the generated DAO interfaces go -->
        <javaClientGenerator type="XMLMAPPER" targetPackage="zsh.demos.big.dao.mapper" targetProject="./src/main/java">
            <property name="enableSubPackages" value="true"/>
        </javaClientGenerator>
        <!-- the table to generate against and its domain class name -->
        <table tableName="huge_table" domainObjectName="HugeTable" enableCountByExample="false" enableUpdateByExample="false" enableDeleteByExample="false" enableSelectByExample="false" selectByExampleQueryId="false"></table>
    </context>
</generatorConfiguration>

Use IDEA's mybatis-generator plugin to generate the code, then check that the field types in the generated POJO and mapper are correct.
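
If you'd rather drive generation from Maven than from the IDEA plugin, a minimal setup (a sketch of the standard approach, not what I used here; the generator artifact moves from <dependencies> into <build><plugins>) would look like:

<build>
    <plugins>
        <plugin>
            <groupId>org.mybatis.generator</groupId>
            <artifactId>mybatis-generator-maven-plugin</artifactId>
            <version>1.3.7</version>
            <configuration>
                <!-- point the plugin at the config file shown above -->
                <configurationFile>mybatis-generator.xml</configurationFile>
                <overwrite>true</overwrite>
            </configuration>
        </plugin>
    </plugins>
</build>

and then run mvn mybatis-generator:generate.
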
To speed up inserting 10 million rows into MySQL, we use multi-row INSERT statements to cut the time spent on network and IO round trips, so we change the insert statement generated by mybatis-generator to the following:

<insert id="insert" parameterType="java.util.List">
    insert into huge_table (id, username, username_no_index, batch_no) values
    <foreach collection="list" item="item" index="index" separator=",">
        (
            #{item.id,jdbcType=INTEGER},
            #{item.username,jdbcType=VARCHAR},
            #{item.usernameNoIndex,jdbcType=VARCHAR},
            #{item.batchNo,jdbcType=CHAR}
        )
    </foreach>
</insert>
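
One caveat with multi-row inserts (an aside; I didn't have to change anything for this test): a 1000-row batch produces a statement on the order of 100 KB, and the whole statement must fit within the server's max_allowed_packet. Worth checking before blaming MyBatis for mysterious failures:

SHOW VARIABLES LIKE 'max_allowed_packet';
-- raise it if the batches ever outgrow it (value is in bytes):
SET GLOBAL max_allowed_packet = 67108864;  -- 64 MB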

Then run the bulk-insert code:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Collectors;

import org.apache.commons.lang3.RandomStringUtils;
import org.apache.commons.lang3.RandomUtils;
import org.mybatis.spring.annotation.MapperScan;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

import zsh.demos.big.dao.mapper.HugeTableMapper;
import zsh.demos.big.dao.pojo.HugeTable;

@SpringBootApplication
@MapperScan("zsh.demos.big.dao.mapper")
public class BigApplication {
    public static void main(String[] args) throws Exception {
        // SpringApplication.run returns the application context; fetch our own bean from it
        BigApplication app = SpringApplication.run(BigApplication.class, args)
                .getBean(BigApplication.class);
        app.insert();
    }
    @Autowired
    private HugeTableMapper hugeTableMapper;
    private volatile int batchNo = 10000000;                    // written by batchNoRunnable only
    private final AtomicInteger finished = new AtomicInteger(); // incremented by 5 worker threads
    private static final int WORKERS = 6;
    private List<String> userList;
    private ExecutorService executorService = Executors.newFixedThreadPool(WORKERS);
    private Runnable
    batchNoRunnable = () -> {
        while (true) {
            try {
                Thread.sleep(1000 * 30); // advance the batch number every 30 seconds
            } catch (InterruptedException e) {
                return;
            }
            if (finished.get() >= WORKERS - 1) {
                System.out.println("batchNoRunnable FINISH");
                return; // all other workers are done
            }
            batchNo++;
        }
    },
    batchRunnable = () -> {
        // 2.5 million rows per worker, inserted in batches of 1000
        for (int i = 0; i < 2500; i++) {
            List<HugeTable> list = new ArrayList<>();
            for (int j = 0; j < 1000; j++) {
                // nextInt's upper bound is exclusive, so only 99 of the 100
                // generated users are ever picked (99 + 2 = the 101 groups seen later)
                String user = userList.get(RandomUtils.nextInt(0, 99));
                HugeTable huge = new HugeTable();
                huge.setBatchNo(String.format("%08d", batchNo));
                huge.setUsername(user);
                huge.setUsernameNoIndex(user);
                list.add(huge);
            }
            hugeTableMapper.insert(list);
        }
        System.out.println(Thread.currentThread().getId() + " FINISH INSERT");
        finished.incrementAndGet();
    },
    singleRunnable = () -> {
        try {
            Thread.sleep(1000 * 60 * 3); // wait 3 minutes, then insert the 2 rows used later for point queries
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        List<String> ul = generateRandomUser(2);
        List<HugeTable> list = ul.stream().map(user -> {
            HugeTable huge = new HugeTable();
            huge.setBatchNo(String.format("%08d", batchNo));
            huge.setUsername(user);
            huge.setUsernameNoIndex(user);
            return huge;
        }).collect(Collectors.toList());
        hugeTableMapper.insert(list);
        System.out.println("singleRunnable FINISH");
        finished.incrementAndGet();
    };

    private List<String> generateRandomUser(int size) {
        List<String> users = new ArrayList<>();
        for (int i = 0; i < size; i++) {
            users.add(RandomStringUtils.randomAlphabetic(20));
        }
        return users;
    }

    public void insert() {
        userList = generateRandomUser(100);
        executorService.submit(batchNoRunnable);
        executorService.submit(singleRunnable);
        for (int i = 0; i < WORKERS - 2; i++) {
            executorService.submit(batchRunnable);
        }
        executorService.shutdown(); // stop accepting tasks; submitted work keeps running
    }
}

I ran this from my own machine, with the database deployed on the VM. The insert completed in just under 6 minutes. We can check from the mysql client whether the row count is 10 million + 2:

mysql> select count(0) from BIG.huge_table;
+----------+
| count(0) |
+----------+
| 10000002 |
+----------+
1 row in set (1.67 sec)

The result is indeed 10 million + 2 rows, and at about 1.67 seconds the speed is tolerable. The EXPLAIN below shows why: the count is satisfied entirely from an index scan (type=index, Extra: Using index). Note that it is not the primary key but the secondary BATCH_NO_INDEX that gets scanned; the optimizer prefers the smallest index that covers the query, which means fewer pages to read than walking the clustered primary key:

mysql> explain select count(0) from BIG.huge_table;
+----+-------------+------------+------------+-------+---------------+----------------+---------+------+----------+----------+-------------+
| id | select_type | table      | partitions | type  | possible_keys | key            | key_len | ref  | rows     | filtered | Extra       |
+----+-------------+------------+------------+-------+---------------+----------------+---------+------+----------+----------+-------------+
|  1 | SIMPLE      | huge_table | NULL       | index | NULL          | BATCH_NO_INDEX | 40      | NULL | 10044034 |   100.00 | Using index |
+----+-------------+------------+------------+-------+---------------+----------------+---------+------+----------+----------+-------------+
1 row in set, 1 warning (0.00 sec)
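
If you're curious how much picking the smaller secondary index buys here, you can force the clustered primary key and compare timings yourself (a sketch; the difference will depend on your buffer pool and disk):

SELECT count(0) FROM BIG.huge_table FORCE INDEX (PRIMARY);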

Next, let's run the kind of GROUP BY that comes up all the time in real business queries:

mysql> select count(0), batch_no from BIG.huge_table group by batch_no;
+----------+----------+
| count(0) | batch_no |
+----------+----------+
|   868000 | 10000000 |
|  1038000 | 10000001 |
|  1007000 | 10000002 |
|   974000 | 10000003 |
|   990000 | 10000004 |
|   967002 | 10000005 |
|   955000 | 10000006 |
|   983000 | 10000007 |
|   950000 | 10000008 |
|   971000 | 10000009 |
|   297000 | 10000010 |
+----------+----------+
11 rows in set (5.20 sec)

As you can see, with only about 11 batch intervals to group into, GROUP BY performance is already dismal at 5.2 seconds.
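
For what it's worth (a diagnostic sketch; I haven't captured the plan output here), asking the optimizer about this query should show why: even with BATCH_NO_INDEX available, all 10 million index entries still have to be read and aggregated:

EXPLAIN SELECT count(0), batch_no
FROM BIG.huge_table
GROUP BY batch_no;
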
Let's also look at the username column, which has far more groups:

mysql> select count(0), username from BIG.huge_table group by username;
+----------+----------------------+
| count(0) | username             |
+----------+----------------------+
|   100706 | aMFPaXqtcSQQaxkkXGUC |
|   101055 | ANlLQLuJLEzNbogPrgwO |
|   100886 | AUIneBVOyOASuhpBrraz |
|   101081 | BGCVgRKfYuTUYmZVZjbL |
|   100614 | BsIeltkklHQfwTuZdCaj |
|   100958 | bSMpvVpIrzSLEuDMVnzm |
|   100684 | BTnLyhAIYJhKnCfWZreO |
|   101298 | BwAeAEwJcxxznwAIleWq |
|   100940 | CHQaoJWvACsmpIYfNmrN |
|   101307 | cSgthGUgDMPsYRYBcQGx |
|   100835 | duQNNwJdLOMbjkaNlgTr |
|   101110 | dZBFUQtbPDptugscrXIj |
|   101230 | EDQJPpftculVnXrqJqVJ |
|   101017 | eEUyTQkvctehoHCUEqQu |
|   101470 | EuGAacdaXvIWGmqjnOmg |
|   101042 | EXdTiIvGCvgIYYcczUkW |
|   100804 | eXNqZhtyVbnaTieGvQQj |
|   101055 | eYdcUfrHQOLDxdfHMDfl |
|   100756 | FFSuvyGMuYbhphNhssBN |
|   101229 | fgZElNlHnheCiNbeXelN |
|   100935 | FTdaEblDhNIrxGdEOttR |
|        1 | fWgvYkWEYjvHbJyivwZv |
|   101102 | giwsJOaZdQKWbkmyNCGb |
|   100953 | gPYDvKgoYyXqjtCqqVnw |
|   100987 | gqHlkVEAeuTlFeYSYrWV |
|   100783 | GSHETgRdZDVCRDwJNMoP |
|   101443 | hixUxeyPqWleYtNMhPyX |
|   100855 | hQZiOIXaAWcyvEmDAjBC |
|   101308 | hypUPgLQzqOoCpBoEXhf |
|   101139 | iyoVJpSJHYJHXSdLGtBx |
|   101455 | JbmEzqUiBsHIkaPEJHuA |
|        1 | JesbYOewndtjHJfEDiku |
|   101054 | jOuprIrfoKBQjxHqSntE |
|   100645 | jVOzzAaYHTgyeBLAEjmh |
|   101446 | kbphQdcxnwuhvQNTqUla |
|   101510 | KcWGebHpNsZlGSBHvUCB |
|   101381 | KdBPaenGBsLQQEIhbygZ |
|   100890 | kGIxwXeSZXgTimEiQGCg |
|   101317 | kKLVYXDiEWYXHxZEumKZ |
|   100669 | kLHmBzInrDAklSrBtcoQ |
|   101004 | KYOJRGumlswlpfrHdOax |
|   101154 | lDVtnJNeWKaikUGBBpcq |
|   101192 | LOrzhmsWsojusNWqEPQs |
|   100752 | lPrjldYvMBFCeigNMjjf |
|   100837 | LwfkFnpUnGNkmsfvEwyE |
|   100353 | MfHzAwcYQNWsVjYjXLMi |
|   100935 | mMyVfCjkklvHWJWSPLor |
|   100907 | nAwkaLqsrOxsnREFhChJ |
|   101125 | NEMxiidjBhfpWsLGEKNf |
|   100975 | NrbYbACQFBHEgetfNdbL |
|   101179 | NruHEwmVSsukUGYVtlCL |
|   100801 | olFKfUAIjLbeebNWovzw |
|   100867 | OLicFawCCCyagExArDpv |
|   100994 | OOHgHSJaQKzLDLqtysGH |
|   101212 | oqXoYuGeGYmAcoGhTXqa |
|   101297 | orjVpkQMQKlmaLQbNEKp |
|   100482 | oybYsJThQBtdVTKZYHfr |
|   100756 | OzuNwfkysbKeOOQHXOUt |
|   100019 | pIhRJPrJzoYHbOFnqhpM |
|   100881 | PMoCbyGlNNjqeEMFmfzW |
|   101390 | ppLAVetsXMXcdiwYLWXh |
|   100695 | PRSJTcnYNZwvepjBGMPr |
|   101212 | PUpBfnxQBUNlTwGbqEsr |
|   100995 | PVkZLcoQdBSLCgAdPFVa |
|   100847 | pXhlTKsabZwLAmjQweTn |
|   100843 | pzPbDnebaMvMUygSzCBU |
|   101080 | QdsEagqMvIhjTYdgdbcc |
|   101469 | qJroAWExJyTRcwhexaEd |
|   101354 | QlvoojOrRdGbZMsWKqfy |
|   101106 | QwnegsDLdgzuUEjttCqq |
|   101237 | rcvaTkYCQsvEjxnJGrpO |
|   100732 | RHHQlxuWjvNAVzjWBhJi |
|   101241 | RlYIKBFThMSyoeteWjQQ |
|   100792 | SdtGbAjyWIZGbMbRsArx |
|   101355 | sWWGfnzRTfVvCQXCBnnX |
|   100386 | szFfPIrQQWkPZlkHdIoI |
|   100871 | TCHmJzcWWXeqJJGzuptb |
|   101082 | tHEPDOXbJDjnFYgQywYo |
|   101194 | tnCJNLAbLUZGWmgjRoXz |
|   101437 | TzQbyVQqiSwPTuveLeJd |
|   100872 | TzSSldZFRJLpxgoNqfhN |
|   101031 | uxYCsPNJWXtjyeniEiLT |
|   100521 | VgjQPwEQnzpRdRkdHGTj |
|   100614 | VIjteykiEkubcyglSGQJ |
|   101130 | VkjpXlVJCZtVKvdkGLdK |
|   101047 | wBsEFCistGgMUnqwcHah |
|   100849 | WPGqDIziVUoOgzYUJqJY |
|   101055 | WQwOZAbWUmMiDgeZNQWu |
|   101027 | wSVZIaZFQbLxUyOxSAKf |
|   101403 | wZOqaXmUfdgojPwHVGOo |
|   101322 | xbqUzvKiAnoIKCuQifBU |
|   100739 | XgilGmpWVNQDVXvVMgfA |
|   101291 | xizoCcXvjVpCjTWRNvBv |
|   101165 | xWLdSSjFosbcrBumRvrx |
|   101178 | YBwNmVBfEwWMWkfztqMd |
|   100843 | YHbgRsJRfhfoaBGzjGbz |
|   101117 | YHXqreegLLdcVmRKNOPH |
|   100964 | YiqhludMgCjhdGqxYXjk |
|   100456 | yJUBQXeCzSkcTMYZNYrg |
|   101072 | ZHkuCaKYOdExHKdUzkzm |
|   101315 | ZtSbeUXFaGEdORKQLYKR |
+----------+----------------------+
101 rows in set (6.36 sec)

With 101 groups the query takes about a second longer than with 11. I don't intend to push this test further; if any DBA gurus are reading, could you tell me where the biggest GROUP BY bottleneck lies? Thanks!
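
One way to start chasing that bottleneck yourself is MySQL's per-stage profiler (deprecated since 5.7 but still functional; the query number below depends on your session, so treat this as a sketch):

SET profiling = 1;
SELECT count(0), username FROM BIG.huge_table GROUP BY username;
SHOW PROFILES;               -- recent statements with their total time
SHOW PROFILE FOR QUERY 1;    -- per-stage breakdown for a given statement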

Of course, when we do an exact-match query, things fly again, and the index gets all the credit:

mysql> select * from BIG.huge_table where username='JesbYOewndtjHJfEDiku';
+---------+----------------------+----------------------+----------+
| id      | username             | username_no_index    | batch_no |
+---------+----------------------+----------------------+----------+
| 5840001 | JesbYOewndtjHJfEDiku | JesbYOewndtjHJfEDiku | 10000005 |
+---------+----------------------+----------------------+----------+
1 row in set (0.00 sec)

But what happens when the query matches no index? The result, naturally, is grim:

mysql> select * from BIG.huge_table where username_no_index='JesbYOewndtjHJfEDiku';
+---------+----------------------+----------------------+----------+
| id      | username             | username_no_index    | batch_no |
+---------+----------------------+----------------------+----------+
| 5840001 | JesbYOewndtjHJfEDiku | JesbYOewndtjHJfEDiku | 10000005 |
+---------+----------------------+----------------------+----------+
1 row in set (3.50 sec)
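
3.5 seconds for a single-row lookup is the full table scan at work. If username_no_index ever needed to be queried like this for real, the obvious fix would be a secondary index (sketched below with an index name I made up; building it over 10 million existing rows takes a while itself), turning the scan into a ref lookup:

ALTER TABLE BIG.huge_table
    ADD KEY `USERNAME_NO_INDEX_IDX` (`username_no_index`);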

Rumor has it that the big tech firms require queries to come back within 100 ms, and anything above that is a failing grade that must be optimized.

The third article in the series is brewing. Stay tuned.
