
[14] Let Me Talk About Distributed Transactions

Author: lemonMT | Published 2020-08-21 23:24

    Notes on distributed transactions

    1. Eventual consistency scheme [asynchronous, eventually consistent]

    1.1 Flow diagram

    1.2 Business flow steps

    1. Business system A sends a message to the message system, which marks the message status as "to be confirmed".
    2. After the message system stores the message in its database, it returns the result to system A; this can succeed or fail.
    3. System A completes its own business logic, e.g. deducting a fee, and then notifies the message system.
    4. The message system receives the notification, changes the message status to "sending", and sends the message through the message middleware.
    5. System B receives the message.
    6. System B completes its local business logic, e.g. adding points, and then notifies the message system.
    7. The message system deletes the corresponding stored message.

    1.3 Exception handling

    1. If step 1 fails, business system A retries; if retries keep failing, raise an alert and stop the flow. The business data is still consistent at this point.
    2. If storing the message as "to be confirmed" succeeds but returning the result to system A fails, either system A retries, or a scheduled task repeats the notification.
    3. A scheduled task scans messages in the "sending" state and queries system B for the corresponding status; make the check idempotent.
    4. A scheduled task scans messages in the "to be confirmed" state and queries system A for the corresponding status; again, keep it idempotent.
    5. Once a scheduled task has used up its retry count and time budget, the message is moved to a manual-handling queue, monitored through a dead-letter queue.
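    The scheduled-task side of steps 2–5 boils down to a small decision function. The sketch below only illustrates that logic; the status names ("CONFIRMING", "SENDING"), the `nextAction` method, and the retry limit are assumptions invented for this example, not part of any framework.

```java
public class MessageRecoveryTask {
    static final int MAX_RETRIES = 5; // assumed retry budget before escalation

    // Decide what the scheduled task should do with one stored message.
    // "CONFIRMING" messages are checked back against system A's state,
    // "SENDING" messages against system B's state; once the retry budget
    // is exhausted, the message is handed to the manual (dead-letter) queue.
    static String nextAction(String status, int retryCount) {
        if (retryCount >= MAX_RETRIES) return "DEAD_LETTER";
        if ("CONFIRMING".equals(status)) return "QUERY_SYSTEM_A";
        if ("SENDING".equals(status))    return "QUERY_SYSTEM_B";
        return "IGNORE"; // unknown or already-deleted message
    }

    public static void main(String[] args) {
        System.out.println(nextAction("SENDING", 2));    // still retrying
        System.out.println(nextAction("CONFIRMING", 5)); // escalate to a human
    }
}
```

    Because the task may query systems A and B repeatedly, both checks must be idempotent, as the notes stress.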

    1.4 Demo project flow

    1. Order system: creates and updates orders; simulates a successful payment.
    2. User system: creates users and updates user points.
    3. Message system: creates and updates messages; runs the scheduled tasks.
    4. Queue system: listens for messages and handles the corresponding business.
    5. Tooling: RabbitMQ.
    6. First create an order with status "paying" and send a message to the message system for recording. After a successful response, simulate the payment succeeding to complete the local transaction, then update the message status, have the user system add the points, and finally delete the message. Scheduled tasks handle the leftover failure cases.

    2. Best-effort notification [asynchronous, messages may be lost]

    2.1 Flow diagram

    2.2 Business flow steps

    1. System A completes its local transaction and makes an asynchronous call to the message system.
    2. The message system records the message; storing a single row is enough.
    3. The message system connects to the queue system and sends the message.
    4. If the response is 200, or otherwise indicates success, the message is deleted.
    5. After more than five failed attempts, the message is deleted and dropped into the dead-letter queue.
    6. Note that the receiver must be idempotent.
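    The send loop in steps 3–5 can be sketched like this; `send` stands in for the HTTP call to the receiver, and the method name and the 200-only success check are simplifying assumptions for this illustration.

```java
import java.util.function.IntSupplier;

public class BestEffortNotifier {
    static final int MAX_ATTEMPTS = 5;

    // send stands in for the HTTP call to the receiver and returns the
    // response status code. Returns true once the receiver answers 200
    // (the stored message can then be deleted); after MAX_ATTEMPTS
    // failures the caller should delete the message and move it to the
    // dead-letter queue instead.
    static boolean notifyWithRetry(IntSupplier send) {
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            if (send.getAsInt() == 200) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(notifyWithRetry(() -> 200)); // acked on the first attempt
    }
}
```

    This is exactly why the scheme is "best effort": once the attempts run out, the notification is given up and only the dead-letter queue remembers it.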

    2.3 Demo project

    1. Order system: completes the order and calls the message system.
    2. Message system: records the message; scheduled tasks send/delete messages and call the queue system.
    3. Queue system: connects to RabbitMQ to send messages; ACKs can be used.
    4. User system: receives the user's order information.

    3. LCN solution [strong consistency]

    Note: the official site is no longer reachable. The latest version is 5.0.2, and the code still works fine.

    https://github.com/codingapi/tx-lcn/releases

    https://github.com/codingapi/txlcn-docs/tree/master/docs/zh-cn

    3.1 Flow diagram

    Creating a transaction group
    Before the initiator starts executing its business code, it calls the TxManager to create a transaction group object and obtains the transaction identifier, the GroupId.

    Joining a transaction group
    After a participant finishes executing its business method, it reports that module's transaction information to the TxManager.

    Notifying the transaction group
    After the initiator finishes executing its business code, it reports its execution status to the TxManager. Based on the transaction's final state and the group's information, the TxManager tells each participating module to commit or roll back, and returns the result to the initiator.
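    The three operations above can be sketched as an in-memory TxManager. Everything here (class name, method names, the printed notifications) is illustrative; it is not the real tx-lcn API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;

// In-memory sketch of the TxManager's three group operations.
public class TxManagerSketch {
    private final Map<String, List<String>> groups = new HashMap<>();

    // Create transaction group: called by the initiator before its business code.
    public String createGroup() {
        String groupId = UUID.randomUUID().toString();
        groups.put(groupId, new ArrayList<>());
        return groupId;
    }

    // Join transaction group: called by a participant after its business method.
    public void joinGroup(String groupId, String unitId) {
        groups.get(groupId).add(unitId);
    }

    // Notify transaction group: the initiator reports its result; the TxManager
    // then tells every registered unit to commit or roll back.
    public String notifyGroup(String groupId, boolean initiatorSucceeded) {
        List<String> units = groups.remove(groupId);
        String decision = initiatorSucceeded ? "COMMIT" : "ROLLBACK";
        for (String unit : units) {
            System.out.println(unit + " -> " + decision); // notify each participant
        }
        return decision;
    }

    public static void main(String[] args) {
        TxManagerSketch tm = new TxManagerSketch();
        String groupId = tm.createGroup();     // initiator obtains the GroupId
        tm.joinGroup(groupId, "user-service"); // participant registers its branch
        tm.notifyGroup(groupId, true);         // initiator succeeded: commit all
    }
}
```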

    3.2 LCN's three modes

    LCN mode:

    Local transactions are handled by proxying the database connection; the TxManager then coordinates and controls all the transactions centrally.

    Characteristics:

    1. Low code intrusiveness.
    2. Limited to modules that hold a local connection object and use that connection to control transactions.
    3. Commit and rollback are driven by the local transaction, so the data-consistency guarantee is very strong.
    4. The proxied connection is only released together with the initiator's transaction, so it is held for a relatively long time.

    TCC mode:

    Try: attempt to execute the business. Confirm: confirm the execution. Cancel: cancel the execution.

    Characteristics:

    1. High code intrusiveness: every business action must implement all three steps.
    2. Works with or without a local transaction, so it is broadly applicable.
    3. Data consistency is entirely up to the developer, which places very high demands on the business code.
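    As a concrete illustration of the three steps, here is a minimal in-memory TCC participant for a "deduct points" action. The class and method names are invented for this sketch and are not tied to LCN's actual TCC interfaces.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal TCC sketch: Try freezes the points, Confirm makes the change
// final, Cancel releases the reservation and restores the balance.
public class PointsTccAction {
    int balance = 100;
    private final Map<String, Integer> reserved = new HashMap<>();

    // Try: check the resource and reserve it under the transaction id.
    boolean tryDeduct(String txId, int points) {
        if (balance < points) return false;
        balance -= points;
        reserved.put(txId, points);
        return true;
    }

    // Confirm: the reservation becomes permanent.
    void confirm(String txId) {
        reserved.remove(txId);
    }

    // Cancel: undo the reservation and give the points back.
    void cancel(String txId) {
        Integer points = reserved.remove(txId);
        if (points != null) balance += points;
    }

    public static void main(String[] args) {
        PointsTccAction action = new PointsTccAction();
        if (action.tryDeduct("tx-1", 30)) action.confirm("tx-1");
        System.out.println(action.balance); // prints 70
    }
}
```

    Note how all three methods have to agree about the reservation bookkeeping; that coordination burden is exactly the "high demands on the business code" mentioned above.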

    TXC mode:

    Before a SQL statement executes, its details are inspected and a lock is created; the lock lives in Redis. On rollback, the recorded SQL impact information is used to undo the changes.

    Characteristics:

    1. Low code intrusiveness.
    2. Only supports SQL.
    3. Queries the data affected by each SQL statement first, so it is slower than LCN mode.
    4. This mode does not hold on to database connection resources.

    3.3 Demo project

    1. Redis project
    2. TM project
    3. Order system
    4. User system

    3.4 Integrating LCN with Spring Boot

    3.4.1 Create a new project
    <dependency>
        <groupId>com.codingapi.txlcn</groupId>
        <artifactId>txlcn-tm</artifactId>
        <version>5.0.2.RELEASE</version>
    </dependency>
    
    3.4.2 Add configuration
    spring.application.name=TransactionManager
    server.port=7970
    spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
    spring.datasource.url=jdbc:mysql://127.0.0.1:3306/tx-manager?characterEncoding=UTF-8&serverTimezone=Asia/Shanghai
    spring.datasource.username=root
    spring.datasource.password=root
    spring.jpa.database-platform=org.hibernate.dialect.MySQL5InnoDBDialect
    spring.jpa.hibernate.ddl-auto=update
    
    mybatis.configuration.map-underscore-to-camel-case=true
    mybatis.configuration.use-generated-keys=true
    
    #tx-lcn.logger.enabled=true
    # TxManager host IP
    #tx-lcn.manager.host=127.0.0.1
    # Port TxClient connects to
    #tx-lcn.manager.port=8070
    # Heartbeat interval (ms)
    #tx-lcn.manager.heart-time=15000
    # Total distributed transaction execution time
    #tx-lcn.manager.dtx-time=30000
    # Delay before parameters are deleted, in ms
    #tx-lcn.message.netty.attr-delay-time=10000
    #tx-lcn.manager.concurrent-level=128
    # Enable logging
    #tx-lcn.logger.enabled=true
    #logging.level.com.codingapi=debug
    # Redis host
    #spring.redis.host=127.0.0.1
    # Redis port
    #spring.redis.port=6379
    # Redis password
    #spring.redis.password=
    
    3.4.3 Start the TM project

    http://localhost:7970/admin/index.html#/task   Password: codingapi

    3.4.4 Add annotations to the startup class
    @EnableDistributedTransaction
    @SpringBootApplication
    public class ChlLoanServiceOrderApplication {
    
        public static void main(String[] args) {
            SpringApplication.run(ChlLoanServiceOrderApplication.class, args);
        }
    
    }
    
    3.4.5 Create client-A

    Create a new project and add the corresponding Maven dependencies.

      <!-- Step 1: add the LCN dependencies -->
            <!-- distributed transactions -->
            <dependency>
                <groupId>com.codingapi.txlcn</groupId>
                <artifactId>txlcn-tc</artifactId>
                <version>5.0.2.RELEASE</version>
            </dependency>
    
            <dependency>
                <groupId>com.codingapi.txlcn</groupId>
                <artifactId>txlcn-txmsg-netty</artifactId>
                <version>5.0.2.RELEASE</version>
            </dependency>
    
    3.4.6 Create client-B

    Create a new project and add the same Maven dependencies as above.

    3.4.7 Add configuration to both projects
    # Whether to enable the LCN load-balancing strategy (an optimization; features work either way)
    tx-lcn.ribbon.loadbalancer.dtx.enabled=true
    # Defaults to the TM's default port on the local machine
    tx-lcn.client.manager-address=127.0.0.1:8070
    # Enable logging; defaults to false
    tx-lcn.logger.enabled=true
    
    3.4.8 Client-A processing

    The flow: in Client-A, add a method in the service layer annotated with both

    @Transactional

    @LcnTransaction

    1. Step 1: run the local transaction first, inserting the order and marking it as paid.
    2. Step 2: call client-B to add points to the user.
    3. On success, both projects end up with the corresponding rows inserted.
    4. On failure, both projects roll their data back.
    /**
         * Test LCN
         * @return
         */
        @Transactional // local transaction annotation
        @LcnTransaction // distributed transaction annotation
        public ResultVO  testLcn()throws Exception{
    
    
        //Step 1: insert the order
            LoanOrderPO  loanOrderPO  = new LoanOrderPO();
    
            loanOrderPO.setConsumeAccount(new BigDecimal(1001));
            loanOrderPO.setCreateTime(System.currentTimeMillis());
            loanOrderPO.setEditTime(System.currentTimeMillis());
            loanOrderPO.setOrderId(UUID.randomUUID().toString());
            loanOrderPO.setUserId("1001");
    
            loanOrderPOMapper.insertSelective(loanOrderPO);
    
        //Step 2: add points to the user
    
            RpTransactionMessage rpTransactionMessage = new RpTransactionMessage();
            String paramJson = JSON.toJSONString(loanOrderPO);
    
            rpTransactionMessage.setConsumerQueue("order.pay");
            rpTransactionMessage.setCreater("lemon-order");
            rpTransactionMessage.setMessageBody(paramJson);
            rpTransactionMessage.setMessageDataType("json");
            rpTransactionMessage.setMessageId(UUID.randomUUID().toString());
            rpTransactionMessage.setField1("paying");
    
            paramJson = JSON.toJSONString(rpTransactionMessage);
            String url = "http://127.0.0.1:8092/user/create";
            String result = HttpClientUtil.postBody(url, paramJson);
    
    
            ResultVO  resultVO  = new ResultVO();
    
            resultVO.setData(result);
            return  resultVO ;
        }
    
    3.4.9 Client-B business processing

    The flow: in Client-B, add a method in the service layer annotated with both

    @Transactional

    @LcnTransaction

    1. First check the request's idempotency.
    2. If it was already handled, just query the result and return it.
    3. Otherwise, continue with the logic below.
    4. Run the local transaction and commit the user's points.
    5. If the local transaction throws, both client-A and client-B roll back.
    @LcnTransaction
        @Transactional
        @Override
        public ResultVO addUserCount(RpTransactionMessage rpTransactionMessage) {
    
    
        //has this message already been processed? (message idempotency)
    
            Example example = new Example(UserLoanConsumeLogPO.class);
            example.createCriteria().andEqualTo("messageId", rpTransactionMessage.getMessageId());
            List<UserLoanConsumeLogPO> list = consumeLogPOMapper.selectByExample(example);
    
            if (null != list && list.size() > 0) {
    
                return new ResultVO();
            }
    
        //parse the message body
            UserCountForm  userCountForm = JSONObject.parseObject(rpTransactionMessage.getMessageBody()).toJavaObject(UserCountForm.class);
    
        //check whether the user exists
            Example useExample = new Example(UserLoanConsumePO.class);
    
            useExample.createCriteria().andEqualTo("userId", userCountForm.getUserId());
    
            List<UserLoanConsumePO> userLoanConsumePOList = consumePOMapper.selectByExample(useExample);
    
            if (null != userLoanConsumePOList && userLoanConsumePOList.size() > 0) {
    
            //add points
    
                Example consumePoExample = new Example(UserLoanConsumePO.class);
    
                consumePoExample.createCriteria().andEqualTo("userId", userCountForm.getUserId());
    
                UserLoanConsumePO userLoanConsumePO = new UserLoanConsumePO();
                userLoanConsumePO.setConsumeAccount(userLoanConsumePOList.get(0).getConsumeAccount().add(userCountForm.getConsumeAccount()));
    
                consumePOMapper.updateByExampleSelective(userLoanConsumePO, consumePoExample);
            } else {
    
                UserLoanConsumePO userLoanConsumePO = new UserLoanConsumePO();
                userLoanConsumePO.setConsumeAccount(userCountForm.getConsumeAccount());
                userLoanConsumePO.setCreateTime(System.currentTimeMillis());
                userLoanConsumePO.setEditTime(System.currentTimeMillis());
                userLoanConsumePO.setUserId(userCountForm.getUserId());
            //create the user and add the points
                consumePOMapper.insertSelective(userLoanConsumePO);
            }
    
        //record the processed message
    
            UserLoanConsumeLogPO  userLoanConsumeLogPO = new UserLoanConsumeLogPO();
            userLoanConsumeLogPO.setCreateTime(System.currentTimeMillis());
            userLoanConsumeLogPO.setEditTime(System.currentTimeMillis());
            userLoanConsumeLogPO.setMessageId(rpTransactionMessage.getMessageId());
            userLoanConsumeLogPO.setUserId(userCountForm.getUserId());
            consumeLogPOMapper.insertSelective(userLoanConsumeLogPO);
    
        if(1==1){ // force an exception to demonstrate the distributed rollback
            throw new BusinessException("safa","sdfs");
        }
            return new ResultVO();
        }
    

    4. Seata solution [strong consistency]

    4.1 Latest version

    https://github.com/seata/seata/releases/tag/v1.3.0

    http://seata.io/zh-cn/docs/ops/deploy-guide-beginner.html

    For now the recommended stack is Spring Boot 2.2.5, Spring Cloud Hoxton.SR3, and Spring Cloud Alibaba 2.2.1.

    Terminology:

    TC (Transaction Coordinator): maintains the state of global and branch transactions, and drives global commit or rollback.

    TM (Transaction Manager): defines the scope of a global transaction; begins, commits, or rolls back the global transaction.

    RM (Resource Manager): manages the resources used by branch transactions; talks to the TC to register branch transactions and report their status, and drives branch commit or rollback.

    4.2 Latest documentation

    http://seata.io/zh-cn/index.html

    4.3 Modes supported by Seata

    4.3.1 AT mode
    4.3.1.1 AT overview

    Prerequisite: a relational database that supports local ACID transactions.

    Mechanism. Phase one: the business data and the rollback log record are committed in the same local transaction, after which the local lock and connection resources are released.

    Phase two: commit is asynchronous; on rollback, the logged information is used to compensate in reverse.

    4.3.1.2 Write isolation

    Two global transactions, tx1 and tx2, each update field m of table a; m starts at 1000.

    tx1 starts first: it opens a local transaction, takes the local lock, and performs the update m = 1000 - 100 = 900. Before its local commit it acquires the record's global lock, then commits locally and releases the local lock. tx2 starts next: it opens a local transaction, takes the local lock, and performs the update m = 900 - 100 = 800. Before its local commit it tries to take the record's global lock; until tx1's global commit, that lock is held by tx1, so tx2 must retry and wait for the global lock.

    Write-Isolation: Commit

    When tx1 commits globally in phase two, it releases the global lock; tx2 can then take the global lock and commit its local transaction.

    If tx1's phase two is a global rollback instead, tx1 must re-acquire the record's local lock and perform the compensating update to roll the branch back.

    If at that point tx2 is still waiting for the record's global lock while holding the local lock, tx1's branch rollback will fail. The branch rollback keeps retrying until tx2's wait for the global lock times out; tx2 then gives up the global lock, rolls back its local transaction, and releases the local lock, and tx1's branch rollback finally succeeds.

    Because the global lock is held by tx1 for the whole time until tx1 finishes, dirty writes cannot occur.

    Write-Isolation: Rollback
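    The global-lock interplay between tx1 and tx2 can be sketched with a tiny in-memory lock table. This only models the behaviour described above; the class name and the row-key format are assumptions, not Seata's implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Before its local commit, a branch must hold the record's global lock;
// a competing global transaction keeps retrying until the holder releases it.
public class GlobalLockTable {
    private final Map<String, String> locks = new HashMap<>(); // rowKey -> holding xid

    public synchronized boolean acquire(String rowKey, String xid) {
        String holder = locks.get(rowKey);
        if (holder == null || holder.equals(xid)) {
            locks.put(rowKey, xid); // free, or re-entrant for the same transaction
            return true;
        }
        return false;               // held by another global transaction: retry later
    }

    public synchronized void release(String rowKey, String xid) {
        if (xid.equals(locks.get(rowKey))) locks.remove(rowKey);
    }

    public static void main(String[] args) {
        GlobalLockTable locks = new GlobalLockTable();
        System.out.println(locks.acquire("a:m", "tx1")); // true
        System.out.println(locks.acquire("a:m", "tx2")); // false: tx2 must retry
    }
}
```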
    4.3.1.4 Read isolation

    On top of a local database isolation level of Read Committed or higher, Seata's default global isolation level in AT mode is Read Uncommitted.

    If an application requires global Read Committed in a particular scenario, Seata currently provides it by proxying SELECT FOR UPDATE statements.

    Executing a SELECT FOR UPDATE statement acquires the global lock. If the global lock is held by another transaction, the local lock is released (the local execution of the SELECT FOR UPDATE is rolled back) and the statement is retried. Throughout this process the query is blocked, and it only returns once the global lock is obtained, i.e. once the data being read is committed.

    For overall performance reasons, Seata does not proxy every SELECT statement, only SELECT ... FOR UPDATE.

    Read Isolation: SELECT FOR UPDATE

    4.3.1.5 Business demo

    1. The AT branch performs its business operation: update product set name = 'GTS' where name = 'TXC';

    2. Phase one: obtain the SQL's type (UPDATE), table (product), condition (where name = 'TXC'), and other related information.

    3. Phase one: query the before image: generate a query from the parsed condition to locate the data.

    4. select id, name, since from product where name = 'TXC';

    5. Phase one: execute the business SQL, updating the record's name to 'GTS'.

    6. Phase one: query the after image: locate the data by primary key, using the before image's result.

    7. select id, name, since from product where id = 1;

    8. Insert the rollback log: combine the before and after images and the business SQL's metadata into one rollback log record, and insert it into the UNDO_LOG table.

    9. {
       "branchId": 641789253,
       "undoItems": [{
           "afterImage": {
               "rows": [{
                   "fields": [{
                       "name": "id",
                       "type": 4,
                       "value": 1
                   }, {
                       "name": "name",
                       "type": 12,
                       "value": "GTS"
                   }, {
                       "name": "since",
                       "type": 12,
                       "value": "2014"
                   }]
               }],
               "tableName": "product"
           },
           "beforeImage": {
               "rows": [{
                   "fields": [{
                       "name": "id",
                       "type": 4,
                       "value": 1
                   }, {
                       "name": "name",
                       "type": 12,
                       "value": "TXC"
                   }, {
                       "name": "since",
                       "type": 12,
                       "value": "2014"
                   }]
               }],
               "tableName": "product"
           },
           "sqlType": "UPDATE"
       }],
       "xid": "xid:xxx"
      }
      
      1. Before committing, register the branch with the TC and request the global lock on the record in table product with primary key 1.
      2. Commit the local transaction: the business data update and the UNDO LOG generated above are committed together.
      3. Report the local commit result to the TC.
      4. Phase two, on rollback: look up the corresponding UNDO LOG record via the XID and Branch ID.
      5. Data validation: compare the UNDO LOG's after image with the current data; a difference means the data was modified by something outside this global transaction, and the configured policy decides how to handle it (detailed elsewhere in the docs).
      6. Generate and execute the rollback statement from the UNDO LOG's before image and the business SQL's metadata:
      7. update product set name = 'TXC' where id = 1;
      8. Commit the local transaction and report its result (the branch rollback result) to the TC.
      9. Phase two, on commit: when the branch commit request arrives from the TC, put it into an asynchronous task queue and immediately return success to the TC.
      10. The asynchronous task then deletes the corresponding UNDO LOG records asynchronously and in batches.
    4.3.2 TCC mode
    • Phase-one prepare: invoke the custom prepare logic.
    • Phase-two commit: invoke the custom commit logic.
    • Phase-two rollback: invoke the custom rollback logic.
    4.3.3 Saga mode

    Saga is Seata's solution for long-running transactions. In Saga mode, every participant in the business flow commits its own local transaction; when a participant fails, the participants that already succeeded are compensated. Both the phase-one forward services and the phase-two compensation services are implemented by the business developer.

    Saga mode diagram
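    The forward-then-compensate flow can be sketched as follows; the `Step` shape and the trace strings are invented for this illustration and are unrelated to Seata's state-machine DSL.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Saga sketch: each step commits its local transaction immediately; on the
// first failure, the compensations of the steps that already succeeded run
// in reverse order.
public class SagaRunner {

    public static class Step {
        final String name;
        final boolean succeeds; // stand-in for the forward service's outcome
        public Step(String name, boolean succeeds) { this.name = name; this.succeeds = succeeds; }
    }

    // Returns the trace of forward and compensating calls that were executed.
    public static List<String> run(List<Step> steps) {
        List<String> trace = new ArrayList<>();
        Deque<String> toCompensate = new ArrayDeque<>(); // LIFO: compensate newest first
        for (Step step : steps) {
            if (step.succeeds) {
                trace.add("forward:" + step.name);
                toCompensate.push(step.name);
            } else {
                trace.add("failed:" + step.name);
                while (!toCompensate.isEmpty()) {
                    trace.add("compensate:" + toCompensate.pop());
                }
                break;
            }
        }
        return trace;
    }

    public static void main(String[] args) {
        System.out.println(run(java.util.Arrays.asList(
                new Step("create-order", true), new Step("add-points", false))));
    }
}
```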

    4.4 Seata's two deployment modes

    4.4.1 Without a third-party registry

    The client talks to Seata directly; in this case file.conf is needed. If a registry center is used instead, the file.conf file must be deleted.

    4.4.2 Database-backed mode

    This mode uses the contents of registry.conf.

    4.5 Spring Boot + Nacos + Seata

    Run and start Nacos; that is not covered again here. The Seata server started later listens on port 8091 by default.

    sh startup.sh -m standalone
    
    4.5.1 Download the Seata server

    https://seata.io/zh-cn/blog/download.html

    4.5.2 Execute the SQL

    Download the source; the script can also be found under seata\seata-1.3.0\script\server\db.

    https://github.com/seata/seata/blob/develop/script/server/db/mysql.sql
    
    4.5.3 Add this SQL to each business database
    -- Note: 0.3.0+ adds the unique index ux_undo_log
    CREATE TABLE `undo_log` (
      `id` bigint(20) NOT NULL AUTO_INCREMENT,
      `branch_id` bigint(20) NOT NULL,
      `xid` varchar(100) NOT NULL,
      `context` varchar(128) NOT NULL,
      `rollback_info` longblob NOT NULL,
      `log_status` int(11) NOT NULL,
      `log_created` datetime NOT NULL,
      `log_modified` datetime NOT NULL,
      `ext` varchar(100) DEFAULT NULL,
      PRIMARY KEY (`id`),
      UNIQUE KEY `ux_undo_log` (`xid`,`branch_id`)
    ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
    
    4.5.4 Modify the original config.txt

    The configuration from the source tree needs to be pushed to Nacos; pay attention to the fields that must be changed.

    https://github.com/seata/seata/tree/develop/script/config-center/config.txt

    The main changes are setting store.mode=db and updating the MySQL-related settings.

    transport.type=TCP
    transport.server=NIO
    transport.heartbeat=true
    transport.enableClientBatchSendRequest=false
    transport.threadFactory.bossThreadPrefix=NettyBoss
    transport.threadFactory.workerThreadPrefix=NettyServerNIOWorker
    transport.threadFactory.serverExecutorThreadPrefix=NettyServerBizHandler
    transport.threadFactory.shareBossWorker=false
    transport.threadFactory.clientSelectorThreadPrefix=NettyClientSelector
    transport.threadFactory.clientSelectorThreadSize=1
    transport.threadFactory.clientWorkerThreadPrefix=NettyClientWorkerThread
    transport.threadFactory.bossThreadSize=1
    transport.threadFactory.workerThreadSize=default
    transport.shutdown.wait=3
    service.vgroupMapping.my_test_tx_group=default
    service.default.grouplist=127.0.0.1:8091
    service.enableDegrade=false
    service.disableGlobalTransaction=false
    client.rm.asyncCommitBufferLimit=10000
    client.rm.lock.retryInterval=10
    client.rm.lock.retryTimes=30
    client.rm.lock.retryPolicyBranchRollbackOnConflict=true
    client.rm.reportRetryCount=5
    client.rm.tableMetaCheckEnable=false
    client.rm.sqlParserType=druid
    client.rm.reportSuccessEnable=false
    client.rm.sagaBranchRegisterEnable=false
    client.tm.commitRetryCount=5
    client.tm.rollbackRetryCount=5
    client.tm.degradeCheck=false
    client.tm.degradeCheckAllowTimes=10
    client.tm.degradeCheckPeriod=2000
    store.mode=db
    store.file.dir=file_store/data
    store.file.maxBranchSessionSize=16384
    store.file.maxGlobalSessionSize=512
    store.file.fileWriteBufferCacheSize=16384
    store.file.flushDiskMode=async
    store.file.sessionReloadReadSize=100
    store.db.datasource=druid
    store.db.dbType=mysql
    store.db.driverClassName=com.mysql.cj.jdbc.Driver
    store.db.url=jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true
    store.db.user=root
    store.db.password=root
    store.db.minConn=5
    store.db.maxConn=30
    store.db.globalTable=global_table
    store.db.branchTable=branch_table
    store.db.queryLimit=100
    store.db.lockTable=lock_table
    store.db.maxWait=5000
    store.redis.host=127.0.0.1
    store.redis.port=6379
    store.redis.maxConn=10
    store.redis.minConn=1
    store.redis.database=0
    store.redis.password=null
    store.redis.queryLimit=100
    server.recovery.committingRetryPeriod=1000
    server.recovery.asynCommittingRetryPeriod=1000
    server.recovery.rollbackingRetryPeriod=1000
    server.recovery.timeoutRetryPeriod=1000
    server.maxCommitRetryTimeout=-1
    server.maxRollbackRetryTimeout=-1
    server.rollbackRetryTimeoutUnlockEnable=false
    client.undo.dataValidation=true
    client.undo.logSerialization=jackson
    client.undo.onlyCareUpdateColumns=true
    server.undo.logSaveDays=7
    server.undo.logDeletePeriod=86400000
    client.undo.logTable=undo_log
    client.log.exceptionRate=100
    transport.serialization=seata
    transport.compressor=none
    metrics.enabled=false
    metrics.registryType=compact
    metrics.exporterList=prometheus
    metrics.exporterPrometheusPort=9898
    

    Push the configuration to the already-running Nacos instance:

    ndmicro@bogon nacos % sh ./nacos-config.sh
    
    4.5.5 Start seata-server
    ndmicro@bogon bin % sh ./seata-server.sh
    

    The startup log looks like this:

    2020-08-13 18:29:19.970  INFO 13200 --- [eoutChecker_1_1] i.s.c.r.netty.NettyClientChannelManager  : will connect to 127.0.0.1:8091
    2020-08-13 18:29:19.975  INFO 13200 --- [eoutChecker_1_1] i.s.core.rpc.netty.NettyPoolableFactory  : NettyPool create channel to transactionRole:TMROLE,address:127.0.0.1:8091,msg:< RegisterTMRequest{applicationId='order-service', transactionServiceGroup='my_test_tx_group'} >
    2020-08-13 18:29:19.978  INFO 13200 --- [eoutChecker_2_1] i.s.c.r.netty.NettyClientChannelManager  : will connect to 127.0.0.1:8091
    2020-08-13 18:29:19.979  INFO 13200 --- [eoutChecker_2_1] i.s.core.rpc.netty.NettyPoolableFactory  : NettyPool create channel to transactionRole:RMROLE,address:127.0.0.1:8091,msg:< RegisterRMRequest{resourceIds='null', applicationId='order-service', transactionServiceGroup='my_test_tx_group'} >
    2020-08-13 18:29:20.202  INFO 13200 --- [eoutChecker_2_1] i.s.c.rpc.netty.RmNettyRemotingClient    : register RM success. client version:1.3.0, server version:1.3.0,channel:[id: 0xe5a6191b, L:/127.0.0.1:53413 - R:/127.0.0.1:8091]
    2020-08-13 18:29:20.202  INFO 13200 --- [eoutChecker_1_1] i.s.c.rpc.netty.TmNettyRemotingClient    : register TM success. client version:1.3.0, server version:1.3.0,channel:[id: 0x9b715e24, L:/127.0.0.1:53412 - R:/127.0.0.1:8091]
    2020-08-13 18:29:20.212  INFO 13200 --- [eoutChecker_1_1] i.s.core.rpc.netty.NettyPoolableFactory  : register success, cost 141 ms, version:1.3.0,role:TMROLE,channel:[id: 0x9b715e24, L:/127.0.0.1:53412 - R:/127.0.0.1:8091]
    2020-08-13 18:29:20.212  INFO 13200 --- [eoutChecker_2_1] i.s.core.rpc.netty.NettyPoolableFactory  : register success, cost 142 ms, version:1.3.0,role:RMROLE,channel:[id: 0xe5a6191b, L:/127.0.0.1:53413 - R:/127.0.0.1:8091]
    
    4.5.6 Check that it registered with Nacos

    4.5.7 Client Maven dependencies
    • Depends on seata-all
    • Depends on seata-spring-boot-starter, which supports yml/properties configuration (the .conf file can be deleted) and already includes seata-all
    • Depends on spring-cloud-alibaba-seata, which integrates seata and implements xid propagation
     <dependency>
                <groupId>com.alibaba.cloud</groupId>
                <artifactId>spring-cloud-starter-alibaba-seata</artifactId>
                <version>2.2.1.RELEASE</version>
                <exclusions>
                    <exclusion>
                        <groupId>io.seata</groupId>
                        <artifactId>seata-spring-boot-starter</artifactId>
                    </exclusion>
                </exclusions>
            </dependency>
            <dependency>
                <groupId>io.seata</groupId>
                <artifactId>seata-spring-boot-starter</artifactId>
                <version>1.3.0</version>
            </dependency>
    
    4.5.8 Set up the first order system
    <!-- register with Nacos -->
            <dependency>
                <groupId>com.alibaba.cloud</groupId>
                <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
                <version>2.2.1.RELEASE</version>
            </dependency>
    
        <!-- Seata dependencies -->
            <dependency>
                <groupId>com.alibaba.cloud</groupId>
                <artifactId>spring-cloud-starter-alibaba-seata</artifactId>
                <version>2.2.1.RELEASE</version>
                <exclusions>
                    <exclusion>
                        <groupId>io.seata</groupId>
                        <artifactId>seata-spring-boot-starter</artifactId>
                    </exclusion>
                </exclusions>
            </dependency>
            <dependency>
                <groupId>io.seata</groupId>
                <artifactId>seata-spring-boot-starter</artifactId>
                <version>1.3.0</version>
            </dependency>
    
    

    Copy the most complete configuration from the source tree (this is the template we should adapt from):

    seata:
      enabled: true
      application-id: applicationName
      tx-service-group: my_test_tx_group
      enable-auto-data-source-proxy: true
      use-jdk-proxy: false
      excludes-for-auto-proxying: firstClassNameForExclude,secondClassNameForExclude
      client:
        rm:
          async-commit-buffer-limit: 1000
          report-retry-count: 5
          table-meta-check-enable: false
          report-success-enable: false
          saga-branch-register-enable: false
          lock:
            retry-interval: 10
            retry-times: 30
            retry-policy-branch-rollback-on-conflict: true
        tm:
          degrade-check: false
          degrade-check-period: 2000
          degrade-check-allow-times: 10
          commit-retry-count: 5
          rollback-retry-count: 5
        undo:
          data-validation: true
          log-serialization: jackson
          log-table: undo_log
          only-care-update-columns: true
        log:
          exceptionRate: 100
      service:
        vgroup-mapping:
          my_test_tx_group: default
        grouplist:
          default: 127.0.0.1:8091
        enable-degrade: false
        disable-global-transaction: false
      transport:
        shutdown:
          wait: 3
        thread-factory:
          boss-thread-prefix: NettyBoss
          worker-thread-prefix: NettyServerNIOWorker
          server-executor-thread-prefix: NettyServerBizHandler
          share-boss-worker: false
          client-selector-thread-prefix: NettyClientSelector
          client-selector-thread-size: 1
          client-worker-thread-prefix: NettyClientWorkerThread
          worker-thread-size: default
          boss-thread-size: 1
        type: TCP
        server: NIO
        heartbeat: true
        serialization: seata
        compressor: none
        enable-client-batch-send-request: true
      config:
        type: file
        consul:
          server-addr: 127.0.0.1:8500
        apollo:
          apollo-meta: http://192.168.1.204:8801
          app-id: seata-server
          namespace: application
        etcd3:
          server-addr: http://localhost:2379
        nacos:
          namespace:
          serverAddr: 127.0.0.1:8848
          group: SEATA_GROUP
          username: ""
          password: ""
        zk:
          server-addr: 127.0.0.1:2181
          session-timeout: 6000
          connect-timeout: 2000
          username: ""
          password: ""
      registry:
        type: file
        consul:
          server-addr: 127.0.0.1:8500
        etcd3:
          serverAddr: http://localhost:2379
        eureka:
          weight: 1
          service-url: http://localhost:8761/eureka
        nacos:
          application: seata-server
          server-addr: 127.0.0.1:8848
          group : "SEATA_GROUP"
          namespace:
          username: ""
          password: ""
        redis:
          server-addr: localhost:6379
          db: 0
          password:
          timeout: 0
        sofa:
          server-addr: 127.0.0.1:9603
          region: DEFAULT_ZONE
          datacenter: DefaultDataCenter
          group: SEATA_GROUP
          addressWaitTime: 3000
          application: default
        zk:
          server-addr: 127.0.0.1:2181
          session-timeout: 6000
          connect-timeout: 2000
          username: ""
          password: ""
    
    

    The final parameters used by the order system are:

    spring:
      datasource:
        druid:
          url: jdbc:mysql://localhost:3306/seata_order?useUnicode=true&characterEncoding=UTF-8&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=UTC&useSSL=false
          username: root
          password: root
          driver-class-name: com.mysql.cj.jdbc.Driver
          initial-size: 5
          max-active: 20
          min-idle: 5
          pool-prepared-statements: true
          max-pool-prepared-statement-per-connection-size: 20
          max-open-prepared-statements: 22
          validation-query: SELECT 1 FROM DUAL
          validation-query-timeout: 30000
          test-on-borrow: false
          test-on-return: false
          test-while-idle: true
          time-between-eviction-runs-millis: 60000
          min-evictable-idle-time-millis: 30000
          max-evictable-idle-time-millis: 60000
          filters: stat
          filter:
            stat:
              db-type: mysql
              enabled: true
              log-slow-sql: true
              slow-sql-millis: 1000
              merge-sql: true
          stat-view-servlet:
            login-password: root
            login-username: root
    mybatis-plus:
      type-aliases-package: com.example.order.entity
      mapper-locations: classpath*:mapper/order/*.xml
    server:
      port: 8080
    
    seata:
      enabled: true
      application-id: order-service
      tx-service-group: my_test_tx_group
      service:
        vgroup-mapping:
          my_test_tx_group: default
        grouplist:
          default: 127.0.0.1:8091
      config:
        type: file
    
    
    4.5.9 Integrating the data source and the proxy

    The latest JAR already supports data-source proxying, so it does not need to be written by hand. Because this project integrates mybatis-plus, the wiring has to be redone.

    Here, do not use the plain SqlSessionFactoryBean class, or it will keep failing with "method not found" load errors; MybatisSqlSessionFactoryBean is what finally worked.

    @Configuration
    public class DruidConfig {
    
    
        @Value("${spring.datasource.druid.stat-view-servlet.login-username}")
        private String loginUserName ;
    
        @Value("${spring.datasource.druid.stat-view-servlet.login-password}")
        private String loginPassWord ;
    
        @Value("${mybatis-plus.type-aliases-package}")
        private String  typePackage;
    
        @Value("${mybatis-plus.mapper-locations}")
        private String xmlDir ;
    /**
     * Use Druid as the proxied data source
     */
        @Bean
        @ConfigurationProperties(prefix = "spring.datasource.druid")
        public DataSource druidDataSource() {
            return new DruidDataSource();
        }
    
    
        @Bean
        public MybatisSqlSessionFactoryBean sqlSessionFactory() throws Exception{
            MybatisSqlSessionFactoryBean sqlSessionFactoryBean = new MybatisSqlSessionFactoryBean();
            sqlSessionFactoryBean.setDataSource(druidDataSource());
            VFS.addImplClass(SpringBootVFS.class);
            PathMatchingResourcePatternResolver resolver = new PathMatchingResourcePatternResolver();
            sqlSessionFactoryBean.setMapperLocations(resolver.getResources(xmlDir));
            return sqlSessionFactoryBean;
        }
    
        @Bean
        public PlatformTransactionManager transactionManager() throws SQLException {
            return new DataSourceTransactionManager(druidDataSource());
        }
    
    
    /**
     * Filter rules, so the Druid console stays accessible
     */
        @Bean
        public FilterRegistrationBean<WebStatFilter> druidStatFilter() {
    
            FilterRegistrationBean<WebStatFilter> filterRegistrationBean = new FilterRegistrationBean<WebStatFilter>(
                    new WebStatFilter());
        // add the filter rules
            filterRegistrationBean.addUrlPatterns("/*");
        // resource patterns to exclude from the statistics
            filterRegistrationBean.addInitParameter("exclusions", "*.js,*.gif,*.jpg,*.png,*.css,*.ico,/druid/*");
            return filterRegistrationBean;
        }
    
    
        @Bean
        public ServletRegistrationBean<StatViewServlet> druidStatViewServlet() {
            ServletRegistrationBean<StatViewServlet> servletRegistrationBean = new ServletRegistrationBean<StatViewServlet>(
                    new StatViewServlet(), "/druid/*");
    
            servletRegistrationBean.addInitParameter("loginUsername", loginUserName);
            servletRegistrationBean.addInitParameter("loginPassword", loginPassWord);
            servletRegistrationBean.addInitParameter("resetEnable", "false");
            return servletRegistrationBean;
        }
    
    }
    
    4.5.10 Key business code
    1. The business-flow method needs the @GlobalTransactional annotation:

      @GlobalTransactional
          @Override
          public String business(OrderTblPO orderTblPO) throws Exception {
      
          //insert the order
              addOrder(orderTblPO);
      
              System.out.println("order begin :" + RootContext.getXID());
          //insert the account entry
              String result = addAccount(orderTblPO);
      
              if (result.equals("SUCCESS")) {
                  return "SUCCESS";
              } else {
                throw new RuntimeException("account update failed, so fail here too");
              }
          }
      
    2. Order-insertion logic:

      public void addOrder(OrderTblPO orderTblPO) throws Exception {
    
        //insert the order
            orderTblMapper.insert(orderTblPO);
        }
    
    3. Account-entry code:
    String addAccount(OrderTblPO orderTblPO) throws Exception {
    
        String url = "http://localhost:9898/account/update";
    
            AccountTblPO accountTblPO = new AccountTblPO();
            accountTblPO.setMoney(orderTblPO.getMoney());
            accountTblPO.setUserId(orderTblPO.getUserId());
    
            HttpHeaders headers = new HttpHeaders();
    
        //submit the data as a JSON payload: the body must be a String, and the header must be set to "application/json"
    
            headers.setContentType(MediaType.APPLICATION_JSON_UTF8);
    
            ObjectMapper mapper = new ObjectMapper();
    
            String value = mapper.writeValueAsString(accountTblPO);
    
            HttpEntity<String> requestEntity = new HttpEntity<String>(value, headers);
    
            ResponseEntity<String> responseEntity = restTemplate.postForEntity(url, requestEntity, String.class);
    
            return responseEntity.getBody();
        }
    

    4.5.11 Account-service configuration

    The account service is configured the same way as the order service. Note that the transaction-group value must be identical everywhere, i.e. the same on the Seata server and on every client; the demo uses my_test_tx_group.
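For reference, a minimal sketch of that shared setting (a hypothetical fragment for the spring-cloud-alibaba Seata starter; the exact property path varies between starter versions, so check the reference docs of the version you use):

```yaml
# application.yml -- identical in order-service and account-service
spring:
  cloud:
    alibaba:
      seata:
        # every participant and the Seata TC server must agree on this group name
        tx-service-group: my_test_tx_group
```

The client-side file/registry configuration must then map the same group to a TC cluster, e.g. `vgroupMapping.my_test_tx_group = "default"` in the `service` block of file.conf.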

    The key business code is:

     /**
         * Record one account transaction
         */
        @Override
        public String updateAccount(AccountTblPO accountTblPO) throws Exception {
    
            accountTblMapper.insert(accountTblPO);
    
            if (accountTblPO.getUserId().equals("10087")) {
                throw new RuntimeException("Deliberately failing to trigger a rollback");
            }
    
            return "SUCCESS";
        }
    

    4.6 Runtime logs

    4.6.1 Order service: successful registration log
    2020-08-20 11:01:34.824  INFO 35060 --- [eoutChecker_1_1] i.s.c.r.netty.NettyClientChannelManager  : will connect to 127.0.0.1:8091
    2020-08-20 11:01:34.828  INFO 35060 --- [eoutChecker_1_1] i.s.core.rpc.netty.NettyPoolableFactory  : NettyPool create channel to transactionRole:TMROLE,address:127.0.0.1:8091,msg:< RegisterTMRequest{applicationId='order-service', transactionServiceGroup='my_test_tx_group'} >
    2020-08-20 11:01:34.845  INFO 35060 --- [eoutChecker_2_1] i.s.c.r.netty.NettyClientChannelManager  : will connect to 127.0.0.1:8091
    2020-08-20 11:01:34.846  INFO 35060 --- [eoutChecker_2_1] i.s.core.rpc.netty.NettyPoolableFactory  : NettyPool create channel to transactionRole:RMROLE,address:127.0.0.1:8091,msg:< RegisterRMRequest{resourceIds='null', applicationId='order-service', transactionServiceGroup='my_test_tx_group'} >
    2020-08-20 11:01:34.979  INFO 35060 --- [eoutChecker_1_1] i.s.c.rpc.netty.TmNettyRemotingClient    : register TM success. client version:1.3.0, server version:1.3.0,channel:[id: 0x4b3975a2, L:/127.0.0.1:53522 - R:/127.0.0.1:8091]
    2020-08-20 11:01:34.979  INFO 35060 --- [eoutChecker_2_1] i.s.c.rpc.netty.RmNettyRemotingClient    : register RM success. client version:1.3.0, server version:1.3.0,channel:[id: 0x245302be, L:/127.0.0.1:53523 - R:/127.0.0.1:8091]
    2020-08-20 11:01:34.989  INFO 35060 --- [eoutChecker_2_1] i.s.core.rpc.netty.NettyPoolableFactory  : register success, cost 60 ms, version:1.3.0,role:RMROLE,channel:[id: 0x245302be, L:/127.0.0.1:53523 - R:/127.0.0.1:8091]
    2020-08-20 11:01:34.989  INFO 35060 --- [eoutChecker_1_1] i.s.core.rpc.netty.NettyPoolableFactory  : register success, cost 59 ms, version:1.3.0,role:TMROLE,channel:[id: 0x4b3975a2, L:/127.0.0.1:53522 - R:/127.0.0.1:8091]
    
    
    4.6.2 Account service: successful registration log
    2020-08-20 11:01:36.318  INFO 35062 --- [eoutChecker_1_1] i.s.c.r.netty.NettyClientChannelManager  : will connect to 127.0.0.1:8091
    2020-08-20 11:01:36.320  INFO 35062 --- [eoutChecker_1_1] i.s.core.rpc.netty.NettyPoolableFactory  : NettyPool create channel to transactionRole:TMROLE,address:127.0.0.1:8091,msg:< RegisterTMRequest{applicationId='account-service', transactionServiceGroup='my_test_tx_group'} >
    2020-08-20 11:01:36.333  INFO 35062 --- [eoutChecker_2_1] i.s.c.r.netty.NettyClientChannelManager  : will connect to 127.0.0.1:8091
    2020-08-20 11:01:36.333  INFO 35062 --- [eoutChecker_2_1] i.s.core.rpc.netty.NettyPoolableFactory  : NettyPool create channel to transactionRole:RMROLE,address:127.0.0.1:8091,msg:< RegisterRMRequest{resourceIds='null', applicationId='account-service', transactionServiceGroup='my_test_tx_group'} >
    2020-08-20 11:01:36.426  INFO 35062 --- [eoutChecker_1_1] i.s.c.rpc.netty.TmNettyRemotingClient    : register TM success. client version:1.3.0, server version:1.3.0,channel:[id: 0x9cfcd088, L:/127.0.0.1:53524 - R:/127.0.0.1:8091]
    2020-08-20 11:01:36.426  INFO 35062 --- [eoutChecker_2_1] i.s.c.rpc.netty.RmNettyRemotingClient    : register RM success. client version:1.3.0, server version:1.3.0,channel:[id: 0xab34bbdf, L:/127.0.0.1:53525 - R:/127.0.0.1:8091]
    2020-08-20 11:01:36.435  INFO 35062 --- [eoutChecker_1_1] i.s.core.rpc.netty.NettyPoolableFactory  : register success, cost 49 ms, version:1.3.0,role:TMROLE,channel:[id: 0x9cfcd088, L:/127.0.0.1:53524 - R:/127.0.0.1:8091]
    2020-08-20 11:01:36.435  INFO 35062 --- [eoutChecker_2_1] i.s.core.rpc.netty.NettyPoolableFactory  : register success, cost 50 ms, version:1.3.0,role:RMROLE,channel:[id: 0xab34bbdf, L:/127.0.0.1:53525 - R:/127.0.0.1:8091]
    
    
    4.6.3 Order service: log of a successful call
    2020-08-20 11:13:29.908  INFO 35060 --- [nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring DispatcherServlet 'dispatcherServlet'
    2020-08-20 11:13:29.908  INFO 35060 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet        : Initializing Servlet 'dispatcherServlet'
    2020-08-20 11:13:29.916  INFO 35060 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet        : Completed initialization in 8 ms
    2020-08-20 11:13:29.971  INFO 35060 --- [nio-8080-exec-1] io.seata.tm.TransactionManagerHolder     : TransactionManager Singleton io.seata.tm.DefaultTransactionManager@646fc986
    2020-08-20 11:13:29.986  INFO 35060 --- [nio-8080-exec-1] i.seata.tm.api.DefaultGlobalTransaction  : Begin new global transaction [192.168.0.145:8091:39669769082765312]
    2020-08-20 11:13:30.189  INFO 35060 --- [nio-8080-exec-1] com.alibaba.druid.pool.DruidDataSource   : {dataSource-1} inited
    2020-08-20 11:13:30.233  INFO 35060 --- [nio-8080-exec-1] i.s.c.rpc.netty.RmNettyRemotingClient    : will register resourceId:jdbc:mysql://localhost:3306/seata_order
    2020-08-20 11:13:30.235  INFO 35060 --- [ctor_RMROLE_1_1] io.seata.rm.AbstractRMHandler            : the rm client received response msg [version=1.3.0,extraData=null,identified=true,resultCode=null,msg=null] from tc server.
    order begin :192.168.0.145:8091:39669769082765312
    2020-08-20 11:13:31.236  INFO 35060 --- [nio-8080-exec-1] i.seata.tm.api.DefaultGlobalTransaction  : [192.168.0.145:8091:39669769082765312] commit status: Committed
    2020-08-20 11:13:31.799  INFO 35060 --- [h_RMROLE_1_1_24] i.s.c.r.p.c.RmBranchCommitProcessor      : rm client handle branch commit process:xid=192.168.0.145:8091:39669769082765312,branchId=39669770907287553,branchType=AT,resourceId=jdbc:mysql://localhost:3306/seata_order,applicationData=null
    2020-08-20 11:13:31.800  INFO 35060 --- [h_RMROLE_1_1_24] io.seata.rm.AbstractRMHandler            : Branch committing: 192.168.0.145:8091:39669769082765312 39669770907287553 jdbc:mysql://localhost:3306/seata_order null
    2020-08-20 11:13:31.801  INFO 35060 --- [h_RMROLE_1_1_24] io.seata.rm.AbstractRMHandler            : Branch commit result: PhaseTwo_Committed
    
    4.6.4 Account service: log of a successful call
    2020-08-20 11:13:30.647  INFO 35062 --- [nio-9898-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring DispatcherServlet 'dispatcherServlet'
    2020-08-20 11:13:30.647  INFO 35062 --- [nio-9898-exec-1] o.s.web.servlet.DispatcherServlet        : Initializing Servlet 'dispatcherServlet'
    2020-08-20 11:13:30.653  INFO 35062 --- [nio-9898-exec-1] o.s.web.servlet.DispatcherServlet        : Completed initialization in 6 ms
    开始访问 account192.168.0.145:8091:39669769082765312
    account:192.168.0.145:8091:39669769082765312
    2020-08-20 11:13:30.866  INFO 35062 --- [nio-9898-exec-1] com.alibaba.druid.pool.DruidDataSource   : {dataSource-1} inited
    2020-08-20 11:13:30.906  INFO 35062 --- [nio-9898-exec-1] i.s.c.rpc.netty.RmNettyRemotingClient    : will register resourceId:jdbc:mysql://localhost:3306/seata_account
    2020-08-20 11:13:30.908  INFO 35062 --- [ctor_RMROLE_1_1] io.seata.rm.AbstractRMHandler            : the rm client received response msg [version=1.3.0,extraData=null,identified=true,resultCode=null,msg=null] from tc server.
    2020-08-20 11:13:31.218  WARN 35062 --- [nio-9898-exec-1] c.a.c.seata.web.SeataHandlerInterceptor  : xid in change during RPC from 192.168.0.145:8091:39669769082765312 to null
    2020-08-20 11:13:31.807  INFO 35062 --- [h_RMROLE_1_1_24] i.s.c.r.p.c.RmBranchCommitProcessor      : rm client handle branch commit process:xid=192.168.0.145:8091:39669769082765312,branchId=39669774061404161,branchType=AT,resourceId=jdbc:mysql://localhost:3306/seata_account,applicationData=null
    2020-08-20 11:13:31.809  INFO 35062 --- [h_RMROLE_1_1_24] io.seata.rm.AbstractRMHandler            : Branch committing: 192.168.0.145:8091:39669769082765312 39669774061404161 jdbc:mysql://localhost:3306/seata_account null
    2020-08-20 11:13:31.809  INFO 35062 --- [h_RMROLE_1_1_24] io.seata.rm.AbstractRMHandler            : Branch commit result: PhaseTwo_Committed
    
    
    4.6.5 Order service: log of a failed call
    2020-08-20 11:16:55.121  INFO 35060 --- [nio-8080-exec-5] i.seata.tm.api.DefaultGlobalTransaction  : Begin new global transaction [192.168.0.145:8091:39670629510676480]
    order begin :192.168.0.145:8091:39670629510676480
    2020-08-20 11:16:55.161  INFO 35060 --- [h_RMROLE_1_2_24] i.s.c.r.p.c.RmBranchRollbackProcessor    : rm handle branch rollback process:xid=192.168.0.145:8091:39670629510676480,branchId=39670629561008129,branchType=AT,resourceId=jdbc:mysql://localhost:3306/seata_order,applicationData=null
    2020-08-20 11:16:55.162  INFO 35060 --- [h_RMROLE_1_2_24] io.seata.rm.AbstractRMHandler            : Branch Rollbacking: 192.168.0.145:8091:39670629510676480 39670629561008129 jdbc:mysql://localhost:3306/seata_order
    2020-08-20 11:16:55.227  INFO 35060 --- [h_RMROLE_1_2_24] i.s.r.d.undo.AbstractUndoLogManager      : xid 192.168.0.145:8091:39670629510676480 branch 39670629561008129, undo_log deleted with GlobalFinished
    2020-08-20 11:16:55.228  INFO 35060 --- [h_RMROLE_1_2_24] io.seata.rm.AbstractRMHandler            : Branch Rollbacked result: PhaseTwo_Rollbacked
    2020-08-20 11:16:55.235  INFO 35060 --- [nio-8080-exec-5] i.seata.tm.api.DefaultGlobalTransaction  : [192.168.0.145:8091:39670629510676480] rollback status: Rollbacked
    
    
    4.6.6 Account service: log of a failed call
    开始访问 account192.168.0.145:8091:39423456063782912
    account:192.168.0.145:8091:39423456063782912
    2020-08-19 18:54:44.418 ERROR 32738 --- [nio-9898-exec-3] c.a.druid.pool.DruidAbstractDataSource   : discard long time none received connection. , jdbcUrl : jdbc:mysql://localhost:3306/seata_account?useUnicode=true&characterEncoding=UTF-8&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=UTC&useSSL=false, jdbcUrl : jdbc:mysql://localhost:3306/seata_account?useUnicode=true&characterEncoding=UTF-8&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=UTC&useSSL=false, lastPacketReceivedIdleMillis : 67156
    2020-08-19 18:54:44.436  WARN 32738 --- [nio-9898-exec-3] c.a.c.seata.web.SeataHandlerInterceptor  : xid in change during RPC from 192.168.0.145:8091:39423456063782912 to null
    2020-08-19 18:54:44.446  INFO 32738 --- [h_RMROLE_1_2_24] i.s.c.r.p.c.RmBranchRollbackProcessor    : rm handle branch rollback process:xid=192.168.0.145:8091:39423456063782912,branchId=39423456269303809,branchType=AT,resourceId=jdbc:mysql://localhost:3306/seata_account,applicationData=null
    2020-08-19 18:54:44.447  INFO 32738 --- [h_RMROLE_1_2_24] io.seata.rm.AbstractRMHandler            : Branch Rollbacking: 192.168.0.145:8091:39423456063782912 39423456269303809 jdbc:mysql://localhost:3306/seata_account
    2020-08-19 18:54:44.491  INFO 32738 --- [h_RMROLE_1_2_24] i.s.r.d.undo.AbstractUndoLogManager      : xid 192.168.0.145:8091:39423456063782912 branch 39423456269303809, undo_log deleted with GlobalFinished
    2020-08-19 18:54:44.492  INFO 32738 --- [h_RMROLE_1_2_24] io.seata.rm.AbstractRMHandler            : Branch Rollbacked result: PhaseTwo_Rollbacked
    
    4.6.8 Other approaches

    Any database operation can additionally be annotated with @Transactional for local transaction protection; this likewise allows the operation to be rolled back.

    5. Using plain local transactions

    5.1 Intended behavior: service A inserts a row, then service B inserts a row; either both succeed or both fail. This scheme is not fully reliable.

    It mainly suits teams that do not want to adopt a distributed-transaction framework and whose data can tolerate occasional inconsistency.

    Step 1: Service A opens a transaction and performs its database writes.

    Step 2: Inside that still-open transactional method, A calls service B.

    Step 3: Service B opens its own transaction and writes to its database.

    Step 4: If B reports an error, A throws and everything on A's side rolls back; if B succeeds, the flow continues and A commits.

    The hole in this scheme: B's commit may succeed while A subsequently rolls back for some other reason (for example, A times out waiting for B's response), so the data ends up inconsistent. It is only suitable for a first version of a system.
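The failure mode in step 4 can be sketched with a plain-Java simulation (hypothetical in-memory "databases"; no Spring and no real RPC involved):

```java
import java.util.ArrayList;
import java.util.List;

public class LocalTxPitfall {

    // In-memory stand-ins for the two services' databases.
    static final List<String> DB_A = new ArrayList<>();
    static final List<String> DB_B = new ArrayList<>();

    // Service B: runs and commits its own local transaction immediately.
    static void serviceB(String record) {
        DB_B.add(record); // B's commit point
    }

    // Service A: buffers its write, calls B, then either commits or rolls back.
    // If the call "times out" after B has committed, only A rolls back.
    static void serviceA(String record, boolean timeoutAfterB) {
        List<String> txBuffer = new ArrayList<>(); // A's uncommitted transaction
        txBuffer.add(record);
        serviceB(record);                          // remote call: B commits here
        if (timeoutAfterB) {
            txBuffer.clear();                      // A rolls back its local transaction...
            return;                                // ...but B's row is already durable
        }
        DB_A.addAll(txBuffer);                     // A's commit point
    }

    public static void main(String[] args) {
        serviceA("order-1", false); // happy path: both commit
        serviceA("order-2", true);  // step-4 failure mode: A rolls back, B keeps the row
        System.out.println("A=" + DB_A + " B=" + DB_B);
        // prints A=[order-1] B=[order-1, order-2] -> the two databases disagree
    }
}
```

The simulation shows why this scheme only bounds, rather than eliminates, inconsistency: nothing ever revisits B's committed row once A has rolled back.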

          Title: [十四] 我来说说分布式事务

          Link: https://www.haomeiwen.com/subject/ancmjktx.html