RocketMq broker delayed messages

Author: 晴天哥_王志 | Published 2020-05-08 07:33

    Series

    Overview

    • This series introduces the principles and usage of the RocketMq broker. It covers the broker configuration file, broker startup flow, broker delayed messages, broker message storage, and the broker's retry and dead-letter queues.

    • This article focuses on broker delayed messages. Essentially, all message data is stored in the commitLog file; the consumeQueue merely partitions it by topic. For the storage flow, see RocketMq broker CommitLog介绍 and RocketMq broker consumeQueue介绍.

    • A delayed message essentially just lands in the consumeQueue of a dedicated delay topic; the queueIds under that consumeQueue are grouped by delay level.
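As an illustration of this grouping, the broker maps each delay level to its own queue via ScheduleMessageService.delayLevel2QueueId, which in the RocketMq source is simply delayLevel - 1. A minimal standalone sketch (the class name here is hypothetical, not the broker class itself):

```java
public class DelayQueueMapping {
    // Mirrors ScheduleMessageService.delayLevel2QueueId: queueId = delayLevel - 1
    public static int delayLevel2QueueId(final int delayLevel) {
        return delayLevel - 1;
    }

    public static void main(String[] args) {
        // Under the default levels "1s 5s 10s 30s 1m ...", level 3 (10s) maps to queueId 2
        System.out.println(delayLevel2QueueId(3));
    }
}
```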

    Delayed message topic rewrite

    public class CommitLog {
    
        public static final String SCHEDULE_TOPIC = "SCHEDULE_TOPIC_XXXX";
    
        public PutMessageResult putMessage(final MessageExtBrokerInner msg) {
    
            msg.setStoreTimestamp(System.currentTimeMillis());
            msg.setBodyCRC(UtilAll.crc32(msg.getBody()));
            AppendMessageResult result = null;
            StoreStatsService storeStatsService = this.defaultMessageStore.getStoreStatsService();
            String topic = msg.getTopic();
            int queueId = msg.getQueueId();
    
            final int tranType = MessageSysFlag.getTransactionValue(msg.getSysFlag());
            // Handle delayed messages (non-transactional and transaction-commit types)
            if (tranType == MessageSysFlag.TRANSACTION_NOT_TYPE
                || tranType == MessageSysFlag.TRANSACTION_COMMIT_TYPE) {
                // Normalize the delay level, clamping it to the configured maximum
                if (msg.getDelayTimeLevel() > 0) {
                    if (msg.getDelayTimeLevel() > this.defaultMessageStore.getScheduleMessageService().getMaxDelayLevel()) {
                        msg.setDelayTimeLevel(this.defaultMessageStore.getScheduleMessageService().getMaxDelayLevel());
                    }
                    // Generate the delayed message's topic and queueId
                    topic = ScheduleMessageService.SCHEDULE_TOPIC;
                    queueId = ScheduleMessageService.delayLevel2QueueId(msg.getDelayTimeLevel());
    
                    // Back up the original topic and queueId
                    MessageAccessor.putProperty(msg, MessageConst.PROPERTY_REAL_TOPIC, msg.getTopic());
                    MessageAccessor.putProperty(msg, MessageConst.PROPERTY_REAL_QUEUE_ID, String.valueOf(msg.getQueueId()));
                    msg.setPropertiesString(MessageDecoder.messageProperties2String(msg.getProperties()));
    
                    msg.setTopic(topic);
                    msg.setQueueId(queueId);
                }
            }
    
            // ... code omitted ...
    
            putMessageLock.lock(); //spin or ReentrantLock ,depending on store config
            try {
    
                result = mappedFile.appendMessage(msg, this.appendMessageCallback);
    
                // ... code omitted ...
            } finally {
                putMessageLock.unlock();
            }
    
            if (elapsedTimeInLock > 500) {
                log.warn("[NOTIFYME]putMessage in lock cost time(ms)={}, bodyLength={} AppendMessageResult={}", elapsedTimeInLock, msg.getBody().length, result);
            }
    
            if (null != unlockMappedFile && this.defaultMessageStore.getMessageStoreConfig().isWarmMapedFileEnable()) {
                this.defaultMessageStore.unlockMappedFile(unlockMappedFile);
            }
    
            PutMessageResult putMessageResult = new PutMessageResult(PutMessageStatus.PUT_OK, result);
    
            // Statistics
            storeStatsService.getSinglePutMessageTopicTimesTotal(msg.getTopic()).incrementAndGet();
            storeStatsService.getSinglePutMessageTopicSizeTotal(topic).addAndGet(result.getWroteBytes());
    
            handleDiskFlush(result, putMessageResult, msg);
            handleHA(result, putMessageResult, msg);
    
            return putMessageResult;
        }
    }
    
    • Delayed message data is still stored in the commitLog; the corresponding consumeQueue becomes SCHEDULE_TOPIC_XXXX, with the queueId regenerated from the delay level.
    • The delayed-message handling consists of: normalizing the delay level, generating the delayed message's topic and queueId, and backing up the original topic and queueId. Everything else follows the normal message-write path.
    • For the write path, see RocketMq broker CommitLog介绍 and RocketMq broker consumeQueue介绍.
    • The consumeQueue for delayed messages cannot be subscribed to and consumed directly; a delayed message becomes consumable only after it is re-appended to the commitLog and made visible in the consumeQueue of its original topic.
    • Moving entries from the SCHEDULE_TOPIC_XXXX consumeQueue back to the original topic's consumeQueue is the job of ScheduleMessageService.
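The redirect logic in putMessage above can be condensed into a standalone sketch. The Msg class and the REAL_TOPIC/REAL_QID property keys are simplified stand-ins for MessageExtBrokerInner and the MessageConst constants, and the 18-level maximum assumes the default broker configuration:

```java
import java.util.HashMap;
import java.util.Map;

public class DelayRedirectSketch {
    static final String SCHEDULE_TOPIC = "SCHEDULE_TOPIC_XXXX";
    static final int MAX_DELAY_LEVEL = 18; // default config defines 18 levels

    // Hypothetical stand-in for MessageExtBrokerInner
    static class Msg {
        String topic;
        int queueId;
        int delayTimeLevel;
        Map<String, String> properties = new HashMap<>();
    }

    static void redirectIfDelayed(Msg msg) {
        if (msg.delayTimeLevel <= 0) {
            return; // not a delayed message, leave it alone
        }
        // Clamp the level to the maximum configured level
        if (msg.delayTimeLevel > MAX_DELAY_LEVEL) {
            msg.delayTimeLevel = MAX_DELAY_LEVEL;
        }
        // Back up the real topic/queueId so delivery can restore them later
        msg.properties.put("REAL_TOPIC", msg.topic);
        msg.properties.put("REAL_QID", String.valueOf(msg.queueId));
        // Redirect into the schedule topic; queueId = delayLevel - 1
        msg.topic = SCHEDULE_TOPIC;
        msg.queueId = msg.delayTimeLevel - 1;
    }
}
```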

    ScheduleMessageService

    public class ScheduleMessageService extends ConfigManager {
        private static final InternalLogger log = InternalLoggerFactory.getLogger(LoggerName.STORE_LOGGER_NAME);
    
        public static final String SCHEDULE_TOPIC = "SCHEDULE_TOPIC_XXXX";
        private static final long FIRST_DELAY_TIME = 1000L;
        private static final long DELAY_FOR_A_WHILE = 100L;
        private static final long DELAY_FOR_A_PERIOD = 10000L;
    
        private final ConcurrentMap<Integer /* level */, Long/* delay timeMillis */> delayLevelTable =
            new ConcurrentHashMap<Integer, Long>(32);
    
        private final ConcurrentMap<Integer /* level */, Long/* offset */> offsetTable =
            new ConcurrentHashMap<Integer, Long>(32);
        private final DefaultMessageStore defaultMessageStore;
        private final AtomicBoolean started = new AtomicBoolean(false);
        private Timer timer;
        private MessageStore writeMessageStore;
        private int maxDelayLevel;
    
        public ScheduleMessageService(final DefaultMessageStore defaultMessageStore) {
            this.defaultMessageStore = defaultMessageStore;
            this.writeMessageStore = defaultMessageStore;
        }
    
        public void start() {
            if (started.compareAndSet(false, true)) {
                this.timer = new Timer("ScheduleMessageTimerThread", true);
                for (Map.Entry<Integer, Long> entry : this.delayLevelTable.entrySet()) {
                    Integer level = entry.getKey();
                    Long timeDelay = entry.getValue();
                    Long offset = this.offsetTable.get(level);
                    if (null == offset) {
                        offset = 0L;
                    }
    
                    if (timeDelay != null) {
                        this.timer.schedule(new DeliverDelayedMessageTimerTask(level, offset), FIRST_DELAY_TIME);
                    }
                }
    
                this.timer.scheduleAtFixedRate(new TimerTask() {
    
                    @Override
                    public void run() {
                        try {
                            if (started.get()) ScheduleMessageService.this.persist();
                        } catch (Throwable e) {
                            log.error("scheduleAtFixedRate flush exception", e);
                        }
                    }
                }, 10000, this.defaultMessageStore.getMessageStoreConfig().getFlushDelayOffsetInterval());
            }
        }
    
        public boolean load() {
            boolean result = super.load();
            result = result && this.parseDelayLevel();
            return result;
        }
    
       public boolean parseDelayLevel() {
            HashMap<String, Long> timeUnitTable = new HashMap<String, Long>();
            timeUnitTable.put("s", 1000L);
            timeUnitTable.put("m", 1000L * 60);
            timeUnitTable.put("h", 1000L * 60 * 60);
            timeUnitTable.put("d", 1000L * 60 * 60 * 24);
    
            String levelString = this.defaultMessageStore.getMessageStoreConfig().getMessageDelayLevel();
            try {
                String[] levelArray = levelString.split(" ");
                for (int i = 0; i < levelArray.length; i++) {
                    String value = levelArray[i];
                    String ch = value.substring(value.length() - 1);
                    Long tu = timeUnitTable.get(ch);
    
                    int level = i + 1;
                    if (level > this.maxDelayLevel) {
                        this.maxDelayLevel = level;
                    }
                    long num = Long.parseLong(value.substring(0, value.length() - 1));
                    long delayTimeMillis = tu * num;
                    this.delayLevelTable.put(level, delayTimeMillis);
                }
            } catch (Exception e) {
                log.error("parseDelayLevel exception", e);
                log.info("levelString String = {}", levelString);
                return false;
            }
    
            return true;
        }
    }
    
    • During load, ScheduleMessageService parses the delay levels in the configuration file and builds the delayLevelTable.

    • ScheduleMessageService then schedules one timer task per delay level in delayLevelTable to handle that level's delayed messages.
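The parsing step can be sketched independently of the broker. The messageDelayLevel value in the comment is the RocketMq default; the class name is hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

public class DelayLevelParser {
    // Parses a messageDelayLevel string, e.g. the broker default:
    // "1s 5s 10s 30s 1m 2m 3m 4m 5m 6m 7m 8m 9m 10m 20m 30m 1h 2h"
    // into a table of level (1-based) -> delay in milliseconds.
    public static Map<Integer, Long> parse(String levelString) {
        Map<String, Long> unit = new HashMap<>();
        unit.put("s", 1000L);
        unit.put("m", 1000L * 60);
        unit.put("h", 1000L * 60 * 60);
        unit.put("d", 1000L * 60 * 60 * 24);

        Map<Integer, Long> table = new HashMap<>();
        String[] levels = levelString.split(" ");
        for (int i = 0; i < levels.length; i++) {
            String v = levels[i];
            long num = Long.parseLong(v.substring(0, v.length() - 1));
            long tu = unit.get(v.substring(v.length() - 1)); // unit suffix
            table.put(i + 1, num * tu); // levels are 1-based
        }
        return table;
    }
}
```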

    DeliverDelayedMessageTimerTask

    public class ScheduleMessageService extends ConfigManager {
    
        class DeliverDelayedMessageTimerTask extends TimerTask {
            private final int delayLevel;
            private final long offset;
    
            public DeliverDelayedMessageTimerTask(int delayLevel, long offset) {
                this.delayLevel = delayLevel;
                this.offset = offset;
            }
    
            @Override
            public void run() {
                try {
                    if (isStarted()) {
                        this.executeOnTimeup();
                    }
                } catch (Exception e) {
                    ScheduleMessageService.this.timer.schedule(new DeliverDelayedMessageTimerTask(
                        this.delayLevel, this.offset), DELAY_FOR_A_PERIOD);
                }
            }
    
            private long correctDeliverTimestamp(final long now, final long deliverTimestamp) {
    
                long result = deliverTimestamp;
    
                long maxTimestamp = now + ScheduleMessageService.this.delayLevelTable.get(this.delayLevel);
                if (deliverTimestamp > maxTimestamp) {
                    result = now;
                }
    
                return result;
            }
    
            public void executeOnTimeup() {
                // Find the consumeQueue for this delay level under SCHEDULE_TOPIC
                ConsumeQueue cq =
                    ScheduleMessageService.this.defaultMessageStore.findConsumeQueue(SCHEDULE_TOPIC,
                        delayLevel2QueueId(delayLevel));
    
                long failScheduleOffset = offset;
    
                if (cq != null) {
                    // Read entries starting at offset, returning a SelectMappedBufferResult
                    SelectMappedBufferResult bufferCQ = cq.getIndexBuffer(this.offset);
                    if (bufferCQ != null) {
                        try {
                            long nextOffset = offset;
                            int i = 0;
                            ConsumeQueueExt.CqExtUnit cqExtUnit = new ConsumeQueueExt.CqExtUnit();
                            // Iterate over the buffer in fixed CQ_STORE_UNIT_SIZE steps
                            for (; i < bufferCQ.getSize(); i += ConsumeQueue.CQ_STORE_UNIT_SIZE) {
                                long offsetPy = bufferCQ.getByteBuffer().getLong();
                                int sizePy = bufferCQ.getByteBuffer().getInt();
                                long tagsCode = bufferCQ.getByteBuffer().getLong();
    
                                if (cq.isExtAddr(tagsCode)) {
                                    if (cq.getExt(tagsCode, cqExtUnit)) {
                                        tagsCode = cqExtUnit.getTagsCode();
                                    } else {
                                        long msgStoreTime = defaultMessageStore.getCommitLog().pickupStoreTimestamp(offsetPy, sizePy);
                                        tagsCode = computeDeliverTimestamp(delayLevel, msgStoreTime);
                                    }
                                }
                                // Compute the deliver timestamp
                                long now = System.currentTimeMillis();
                                long deliverTimestamp = this.correctDeliverTimestamp(now, tagsCode);
    
                                nextOffset = offset + (i / ConsumeQueue.CQ_STORE_UNIT_SIZE);
                                // Check whether the deliver time has been reached
                                long countdown = deliverTimestamp - now;
                                // Handle entries whose deliver time has arrived
                                if (countdown <= 0) {
                                    MessageExt msgExt =
                                        ScheduleMessageService.this.defaultMessageStore.lookMessageByOffset(
                                            offsetPy, sizePy);
    
                                    if (msgExt != null) {
                                        try {
                                            // Rebuild a MessageExtBrokerInner to re-append to the commitLog
                                            MessageExtBrokerInner msgInner = this.messageTimeup(msgExt);
                                            if (MixAll.RMQ_SYS_TRANS_HALF_TOPIC.equals(msgInner.getTopic())) {
                                                continue;
                                            }
                                            PutMessageResult putMessageResult =
                                                ScheduleMessageService.this.writeMessageStore
                                                    .putMessage(msgInner);
    
                                            if (putMessageResult != null
                                                && putMessageResult.getPutMessageStatus() == PutMessageStatus.PUT_OK) {
                                                continue;
                                            } else {
                                                ScheduleMessageService.this.timer.schedule(
                                                    new DeliverDelayedMessageTimerTask(this.delayLevel,
                                                        nextOffset), DELAY_FOR_A_PERIOD);
                                                ScheduleMessageService.this.updateOffset(this.delayLevel,
                                                    nextOffset);
                                                return;
                                            }
                                        } catch (Exception e) {
                                            log.error("ScheduleMessageService, messageTimeup execute error, drop it. msgExt="
                                                + msgExt + ", nextOffset=" + nextOffset
                                                + ", offsetPy=" + offsetPy + ", sizePy=" + sizePy, e);
                                        }
                                    }
                                } else {
                                    ScheduleMessageService.this.timer.schedule(
                                        new DeliverDelayedMessageTimerTask(this.delayLevel, nextOffset),
                                        countdown);
                                    ScheduleMessageService.this.updateOffset(this.delayLevel, nextOffset);
                                    return;
                                }
                            } // end of for
    
                            nextOffset = offset + (i / ConsumeQueue.CQ_STORE_UNIT_SIZE);
                            ScheduleMessageService.this.timer.schedule(new DeliverDelayedMessageTimerTask(
                                this.delayLevel, nextOffset), DELAY_FOR_A_WHILE);
                            ScheduleMessageService.this.updateOffset(this.delayLevel, nextOffset);
                            return;
                        } finally {
    
                            bufferCQ.release();
                        }
                    } // end of if (bufferCQ != null)
                    else {
    
                        long cqMinOffset = cq.getMinOffsetInQueue();
                        if (offset < cqMinOffset) {
                            failScheduleOffset = cqMinOffset;
                        }
                    }
                } // end of if (cq != null)
    
                ScheduleMessageService.this.timer.schedule(new DeliverDelayedMessageTimerTask(this.delayLevel,
                    failScheduleOffset), DELAY_FOR_A_WHILE);
            }
    
    
            private MessageExtBrokerInner messageTimeup(MessageExt msgExt) {
                MessageExtBrokerInner msgInner = new MessageExtBrokerInner();
                msgInner.setBody(msgExt.getBody());
                msgInner.setFlag(msgExt.getFlag());
                MessageAccessor.setProperties(msgInner, msgExt.getProperties());
    
                TopicFilterType topicFilterType = MessageExt.parseTopicFilterType(msgInner.getSysFlag());
                long tagsCodeValue =
                    MessageExtBrokerInner.tagsString2tagsCode(topicFilterType, msgInner.getTags());
                msgInner.setTagsCode(tagsCodeValue);
                msgInner.setPropertiesString(MessageDecoder.messageProperties2String(msgExt.getProperties()));
    
                msgInner.setSysFlag(msgExt.getSysFlag());
                msgInner.setBornTimestamp(msgExt.getBornTimestamp());
                msgInner.setBornHost(msgExt.getBornHost());
                msgInner.setStoreHost(msgExt.getStoreHost());
                msgInner.setReconsumeTimes(msgExt.getReconsumeTimes());
    
                msgInner.setWaitStoreMsgOK(false);
                MessageAccessor.clearProperty(msgInner, MessageConst.PROPERTY_DELAY_TIME_LEVEL);
    
                msgInner.setTopic(msgInner.getProperty(MessageConst.PROPERTY_REAL_TOPIC));
    
                String queueIdStr = msgInner.getProperty(MessageConst.PROPERTY_REAL_QUEUE_ID);
                int queueId = Integer.parseInt(queueIdStr);
                msgInner.setQueueId(queueId);
    
                return msgInner;
            }
    
    
        }
    
    
        public long computeDeliverTimestamp(final int delayLevel, final long storeTimestamp) {
            Long time = this.delayLevelTable.get(delayLevel);
            if (time != null) {
                return time + storeTimestamp;
            }
    
            return storeTimestamp + 1000;
        }
    }
    
    • Find the consumeQueue for the delay level under SCHEDULE_TOPIC.
    • Read entries starting from the recorded offset, returning a SelectMappedBufferResult.
    • Iterate over the SelectMappedBufferResult in fixed CQ_STORE_UNIT_SIZE steps.
    • For each entry, compute the deliver timestamp and compare it with the current time; if it has expired, rebuild a MessageExtBrokerInner and re-append it to the commitLog.
    • If the current entry has not yet reached its deliver time, reschedule the timer task for this delay level with a delay of (deliver time - current time).
    • Delayed-message consumption is thus implemented by re-appending the message to the commitLog and rebuilding the entry in the original topic's consumeQueue.
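The timing arithmetic above can be sketched as two pure functions mirroring computeDeliverTimestamp and correctDeliverTimestamp (class name hypothetical):

```java
public class DeliverTimeSketch {
    // Mirrors computeDeliverTimestamp: deliver time = store time + delay for the level
    public static long computeDeliverTimestamp(long storeTimestamp, long delayMillis) {
        return storeTimestamp + delayMillis;
    }

    // Mirrors correctDeliverTimestamp: a deliver time further out than
    // now + delayMillis is treated as invalid and clamped to "deliver now"
    public static long correctDeliverTimestamp(long now, long deliverTimestamp, long delayMillis) {
        return deliverTimestamp > now + delayMillis ? now : deliverTimestamp;
    }
}
```

A message is delivered when `correctDeliverTimestamp(now, ts, delay) - now <= 0`; otherwise the task is rescheduled with exactly that countdown.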
