
RocketMQ broker: receiving different types of messages


The message receiving flow

NettyServerHandler

@ChannelHandler.Sharable
class NettyServerHandler extends SimpleChannelInboundHandler<RemotingCommand> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, RemotingCommand msg) throws Exception {
        processMessageReceived(ctx, msg);
    }
}

org.apache.rocketmq.remoting.netty.NettyRemotingAbstract#processMessageReceived

public void processMessageReceived(ChannelHandlerContext ctx, RemotingCommand msg) throws Exception {
    final RemotingCommand cmd = msg;
    if (cmd != null) {
        switch (cmd.getType()) {
            case REQUEST_COMMAND:
                processRequestCommand(ctx, cmd);
                break;
            case RESPONSE_COMMAND:
                processResponseCommand(ctx, cmd);
                break;
            default:
                break;
        }
    }
}

org.apache.rocketmq.remoting.netty.NettyRemotingAbstract#processRequestCommand

public void processRequestCommand(final ChannelHandlerContext ctx, final RemotingCommand cmd) {
    final Pair<NettyRequestProcessor, ExecutorService> matched = this.processorTable.get(cmd.getCode());
    final Pair<NettyRequestProcessor, ExecutorService> pair = null == matched ? this.defaultRequestProcessor : matched;
    final int opaque = cmd.getOpaque();
    if (pair != null) {
        // the Runnable is only defined here; it is submitted to the processor's thread pool further below
        Runnable run = new Runnable() {
            @Override
            public void run() {
                try {
                    doBeforeRpcHooks(RemotingHelper.parseChannelRemoteAddr(ctx.channel()), cmd);
                    final RemotingResponseCallback callback = new RemotingResponseCallback() {
                        @Override
                        public void callback(RemotingCommand response) {
                            doAfterRpcHooks(RemotingHelper.parseChannelRemoteAddr(ctx.channel()), cmd, response);
                            if (!cmd.isOnewayRPC()) {
                                if (response != null) {
                                    response.setOpaque(opaque);
                                    response.markResponseType();
                                    try {
                                        ctx.writeAndFlush(response);
                                    } catch (Throwable e) {
                                        log.error("process request over, but response failed", e);
                                        log.error(cmd.toString());
                                        log.error(response.toString());
                                    }
                                } else {
                                }
                            }
                        }
                    };
                    if (pair.getObject1() instanceof AsyncNettyRequestProcessor) {
                        AsyncNettyRequestProcessor processor = (AsyncNettyRequestProcessor) pair.getObject1();
                        processor.asyncProcessRequest(ctx, cmd, callback);
                    } else {
                        NettyRequestProcessor processor = pair.getObject1();
                        RemotingCommand response = processor.processRequest(ctx, cmd);
                        callback.callback(response);
                    }
                } catch (Throwable e) {
                    log.error("process request exception", e);
                    log.error(cmd.toString());
                    if (!cmd.isOnewayRPC()) {
                        final RemotingCommand response = RemotingCommand.createResponseCommand(
                            RemotingSysResponseCode.SYSTEM_ERROR, RemotingHelper.exceptionSimpleDesc(e));
                        response.setOpaque(opaque);
                        ctx.writeAndFlush(response);
                    }
                }
            }
        };

        if (pair.getObject1().rejectRequest()) {
            final RemotingCommand response = RemotingCommand.createResponseCommand(
                RemotingSysResponseCode.SYSTEM_BUSY, "[REJECTREQUEST]system busy, start flow control for a while");
            response.setOpaque(opaque);
            ctx.writeAndFlush(response);
            return;
        }

        try {
            final RequestTask requestTask = new RequestTask(run, ctx.channel(), cmd);
            pair.getObject2().submit(requestTask);
        } catch (RejectedExecutionException e) {
            if ((System.currentTimeMillis() % 10000) == 0) {
                log.warn(RemotingHelper.parseChannelRemoteAddr(ctx.channel())
                    + ", too many requests and system thread pool busy, RejectedExecutionException "
                    + pair.getObject2().toString()
                    + " request code: " + cmd.getCode());
            }
            if (!cmd.isOnewayRPC()) {
                final RemotingCommand response = RemotingCommand.createResponseCommand(
                    RemotingSysResponseCode.SYSTEM_BUSY, "[OVERLOAD]system busy, start flow control for a while");
                response.setOpaque(opaque);
                ctx.writeAndFlush(response);
            }
        }
    } else {
        String error = " request type " + cmd.getCode() + " not supported";
        final RemotingCommand response =
            RemotingCommand.createResponseCommand(RemotingSysResponseCode.REQUEST_CODE_NOT_SUPPORTED, error);
        response.setOpaque(opaque);
        ctx.writeAndFlush(response);
        log.error(RemotingHelper.parseChannelRemoteAddr(ctx.channel()) + error);
    }
}

As before, the processor pair registered for the request code is looked up and the task is submitted to the corresponding thread pool for execution.

org.apache.rocketmq.broker.BrokerController#registerProcessor

this.remotingServer.registerProcessor(RequestCode.SEND_MESSAGE, sendProcessor, this.sendMessageExecutor);
this.remotingServer.registerProcessor(RequestCode.SEND_MESSAGE_V2, sendProcessor, this.sendMessageExecutor);
this.remotingServer.registerProcessor(RequestCode.SEND_BATCH_MESSAGE, sendProcessor, this.sendMessageExecutor);
this.remotingServer.registerProcessor(RequestCode.CONSUMER_SEND_MSG_BACK, sendProcessor, this.sendMessageExecutor);
this.fastRemotingServer.registerProcessor(RequestCode.SEND_MESSAGE, sendProcessor, this.sendMessageExecutor);
this.fastRemotingServer.registerProcessor(RequestCode.SEND_MESSAGE_V2, sendProcessor, this.sendMessageExecutor);
this.fastRemotingServer.registerProcessor(RequestCode.SEND_BATCH_MESSAGE, sendProcessor, this.sendMessageExecutor);
this.fastRemotingServer.registerProcessor(RequestCode.CONSUMER_SEND_MSG_BACK, sendProcessor, this.sendMessageExecutor);

All send-related request codes use sendMessageExecutor:

this.sendMessageExecutor = new BrokerFixedThreadPoolExecutor(
    this.brokerConfig.getSendMessageThreadPoolNums(),
    this.brokerConfig.getSendMessageThreadPoolNums(),
    1000 * 60,
    TimeUnit.MILLISECONDS,
    this.sendThreadPoolQueue,
    new ThreadFactoryImpl("SendMessageThread_"));


    private int sendMessageThreadPoolNums = 1; 

As you can see, sendMessageThreadPoolNums defaults to 1 (in BrokerConfig), so send requests are handled by a single thread by default.
When the submitted task runs, execution returns to the run method inside org.apache.rocketmq.remoting.netty.NettyRemotingAbstract#processRequestCommand:

if (pair.getObject1() instanceof AsyncNettyRequestProcessor) {
    AsyncNettyRequestProcessor processor = (AsyncNettyRequestProcessor) pair.getObject1();
    processor.asyncProcessRequest(ctx, cmd, callback);
} else {
    NettyRequestProcessor processor = pair.getObject1();
    RemotingCommand response = processor.processRequest(ctx, cmd);
    callback.callback(response);
}

SendMessageProcessor is an AsyncNettyRequestProcessor, so execution eventually reaches org.apache.rocketmq.broker.processor.SendMessageProcessor#asyncProcessRequest(io.netty.channel.ChannelHandlerContext, org.apache.rocketmq.remoting.protocol.RemotingCommand):

public CompletableFuture<RemotingCommand> asyncProcessRequest(ChannelHandlerContext ctx,
    RemotingCommand request) throws RemotingCommandException {
    final SendMessageContext mqtraceContext;
    switch (request.getCode()) {
        case RequestCode.CONSUMER_SEND_MSG_BACK:
            return this.asyncConsumerSendMsgBack(ctx, request);
        default:
            SendMessageRequestHeader requestHeader = parseRequestHeader(request);
            if (requestHeader == null) {
                return CompletableFuture.completedFuture(null);
            }
            mqtraceContext = buildMsgContext(ctx, requestHeader);
            this.executeSendMessageHookBefore(ctx, request, mqtraceContext);
            if (requestHeader.isBatch()) {
                return this.asyncSendBatchMessage(ctx, request, mqtraceContext, requestHeader);
            } else {
                return this.asyncSendMessage(ctx, request, mqtraceContext, requestHeader);
            }
    }
}

Before the message is processed, the registered org.apache.rocketmq.broker.mqtrace.SendMessageHook hooks are executed.

public interface SendMessageHook {
    public String hookName();

    public void sendMessageBefore(final SendMessageContext context);

    public void sendMessageAfter(final SendMessageContext context);
}
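As a quick illustration, here is a hedged sketch of a hook that just logs around the store operation. It assumes the SendMessageContext getters used below (getTopic/getMsgId) and registration through BrokerController#registerSendMessageHook, which the 4.x broker exposes as far as I can tell; the hook name is arbitrary.

// Hedged sketch: a trivial SendMessageHook that logs before/after a send request is handled.
public class LoggingSendMessageHook implements SendMessageHook {
    @Override
    public String hookName() {
        return "loggingSendMessageHook";   // arbitrary, illustrative name
    }

    @Override
    public void sendMessageBefore(SendMessageContext context) {
        System.out.println("about to store message for topic " + context.getTopic());
    }

    @Override
    public void sendMessageAfter(SendMessageContext context) {
        System.out.println("stored message, msgId=" + context.getMsgId());
    }
}

It would typically be registered during broker startup, e.g. brokerController.registerSendMessageHook(new LoggingSendMessageHook()) (assuming that registration method).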

Next, org.apache.rocketmq.broker.processor.SendMessageProcessor#asyncSendMessage is entered:

private CompletableFuture<RemotingCommand> asyncSendMessage(ChannelHandlerContext ctx, RemotingCommand request,
    SendMessageContext mqtraceContext, SendMessageRequestHeader requestHeader) {
    final RemotingCommand response = preSend(ctx, request, requestHeader);
    final SendMessageResponseHeader responseHeader = (SendMessageResponseHeader) response.readCustomHeader();

    if (response.getCode() != -1) {
        return CompletableFuture.completedFuture(response);
    }

    final byte[] body = request.getBody();
    int queueIdInt = requestHeader.getQueueId();
    TopicConfig topicConfig = this.brokerController.getTopicConfigManager().selectTopicConfig(requestHeader.getTopic());
    if (queueIdInt < 0) {
        queueIdInt = randomQueueId(topicConfig.getWriteQueueNums());
    }

    MessageExtBrokerInner msgInner = new MessageExtBrokerInner();
    msgInner.setTopic(requestHeader.getTopic());
    msgInner.setQueueId(queueIdInt);

    if (!handleRetryAndDLQ(requestHeader, response, request, msgInner, topicConfig)) {
        return CompletableFuture.completedFuture(response);
    }

    msgInner.setBody(body);
    msgInner.setFlag(requestHeader.getFlag());
    MessageAccessor.setProperties(msgInner, MessageDecoder.string2messageProperties(requestHeader.getProperties()));
    msgInner.setPropertiesString(requestHeader.getProperties());
    msgInner.setBornTimestamp(requestHeader.getBornTimestamp());
    msgInner.setBornHost(ctx.channel().remoteAddress());
    msgInner.setStoreHost(this.getStoreHost());
    msgInner.setReconsumeTimes(requestHeader.getReconsumeTimes() == null ? 0 : requestHeader.getReconsumeTimes());
    String clusterName = this.brokerController.getBrokerConfig().getBrokerClusterName();
    MessageAccessor.putProperty(msgInner, MessageConst.PROPERTY_CLUSTER, clusterName);
    msgInner.setPropertiesString(MessageDecoder.messageProperties2String(msgInner.getProperties()));

    CompletableFuture<PutMessageResult> putMessageResult = null;
    Map<String, String> origProps = MessageDecoder.string2messageProperties(requestHeader.getProperties());
    String transFlag = origProps.get(MessageConst.PROPERTY_TRANSACTION_PREPARED);
    if (transFlag != null && Boolean.parseBoolean(transFlag)) {
        if (this.brokerController.getBrokerConfig().isRejectTransactionMessage()) {
            response.setCode(ResponseCode.NO_PERMISSION);
            response.setRemark("the broker[" + this.brokerController.getBrokerConfig().getBrokerIP1()
                + "] sending transaction message is forbidden");
            return CompletableFuture.completedFuture(response);
        }
        putMessageResult = this.brokerController.getTransactionalMessageService().asyncPrepareMessage(msgInner);
    } else {
        putMessageResult = this.brokerController.getMessageStore().asyncPutMessage(msgInner);
    }
    return handlePutMessageResultFuture(putMessageResult, response, request, msgInner, responseHeader, mqtraceContext, ctx, queueIdInt);
}

This leads to the familiar call:

   putMessageResult = this.brokerController.getMessageStore().asyncPutMessage(msgInner);

DefaultMessageStore writes the message to the CommitLog.
Because flushing may be asynchronous, a CompletableFuture<PutMessageResult> is returned.

return putMessageResult.thenApply((r) ->
    handlePutMessageResult(r, response, request, msgInner, responseHeader, sendMessageContext, ctx, queueIdInt));

Once the put succeeds, the response is sent back to the producer client.

Thoughts on message ordering under multiple threads

The sendMessageExecutor thread pool that handles send requests can be configured with more than one thread.
So the question arises: with multiple threads, tasks in the queue finish at different speeds, which would seem to make ordering impossible to guarantee.

So how is ordering guaranteed?
1. First, all messages that must stay ordered have to be routed to the same logical queue (ConsumeQueue), so consumers cannot consume them out of order.
2. Second, the messages must also land in that logical queue in order.
For example, suppose a producer sends two messages back to back (synchronously):

producer.send(msg1);
producer.send(msg2);

A synchronous send only counts as successful once the broker has finished processing and replied.
Could it happen that the thread handling msg1 has not finished while the thread handling msg2 has already started?
No. Synchronous sending has a confirmation step: until the thread handling msg1 finishes, success is not returned, so msg2 never gets a chance to be sent. And even with asynchronous flushing, by the time the msg1 thread finishes the data has at least been written to the mappedByteBuffer, i.e. it has been stored, so the order is preserved. The reput service then builds the ConsumeQueue entries from the CommitLog, and their order is also correct.
So ordered messages require synchronous sending; asynchronous sending cannot guarantee order.
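To make the two points above concrete, here is a minimal producer-side sketch (group name, namesrv address, topic and order key are made up for illustration): it pins every message of one orderId to one MessageQueue with a MessageQueueSelector and sends synchronously, which is the standard client API for ordered sending.

import org.apache.rocketmq.client.producer.DefaultMQProducer;
import org.apache.rocketmq.client.producer.MessageQueueSelector;
import org.apache.rocketmq.client.producer.SendResult;
import org.apache.rocketmq.common.message.Message;
import org.apache.rocketmq.common.message.MessageQueue;

import java.util.List;

public class OrderedSendSketch {
    public static void main(String[] args) throws Exception {
        DefaultMQProducer producer = new DefaultMQProducer("order_producer_group"); // hypothetical group
        producer.setNamesrvAddr("127.0.0.1:9876");                                  // hypothetical address
        producer.start();

        long orderId = 10001L;                                                       // hypothetical order key
        for (int step = 0; step < 3; step++) {
            Message msg = new Message("TopicOrder", ("order " + orderId + " step " + step).getBytes());
            // The selector pins every message of the same orderId to the same MessageQueue,
            // so the broker stores them in one ConsumeQueue and the consumer sees them in order.
            SendResult result = producer.send(msg, new MessageQueueSelector() {
                @Override
                public MessageQueue select(List<MessageQueue> mqs, Message message, Object arg) {
                    long id = (Long) arg;
                    return mqs.get((int) (id % mqs.size()));
                }
            }, orderId);   // synchronous send: blocks until the broker replies
            System.out.println(result.getSendStatus());
        }
        producer.shutdown();
    }
}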

Handling different message types

Sync / async messages

To the broker, sync and async messages are no different; the distinction is only whether the producer blocks while sending. The broker-side processing is identical and is exactly the receive flow described above.

Oneway messages

Look again at the response callback inside org.apache.rocketmq.remoting.netty.NettyRemotingAbstract#processRequestCommand (shown in full above):

final RemotingResponseCallback callback = new RemotingResponseCallback() {
    @Override
    public void callback(RemotingCommand response) {
        doAfterRpcHooks(RemotingHelper.parseChannelRemoteAddr(ctx.channel()), cmd, response);
        if (!cmd.isOnewayRPC()) {          // oneway requests: nothing is written back
            if (response != null) {
                response.setOpaque(opaque);
                response.markResponseType();
                try {
                    ctx.writeAndFlush(response);
                } catch (Throwable e) {
                    log.error("process request over, but response failed", e);
                    log.error(cmd.toString());
                    log.error(response.toString());
                }
            }
        }
    }
};
// the exception path performs the same !cmd.isOnewayRPC() check before replying with SYSTEM_ERROR

For a oneway message no response is written back; otherwise the processing flow is the same as for sync/async messages.
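On the producer side this corresponds to sendOneway, which returns as soon as the request is flushed to the socket and never waits for a broker response; a minimal sketch (group name and address are placeholders):

import org.apache.rocketmq.client.producer.DefaultMQProducer;
import org.apache.rocketmq.common.message.Message;

public class OnewaySendSketch {
    public static void main(String[] args) throws Exception {
        DefaultMQProducer producer = new DefaultMQProducer("oneway_producer_group"); // hypothetical group
        producer.setNamesrvAddr("127.0.0.1:9876");                                   // hypothetical address
        producer.start();
        Message msg = new Message("TopicTest", "oneway hello".getBytes());
        producer.sendOneway(msg);   // fire and forget: cmd.isOnewayRPC() is true on the broker, no response is flushed back
        producer.shutdown();
    }
}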

Transactional messages

Terminology
HALF MESSAGE: a transactional message, also called a half message, is in a "temporarily undeliverable" state and will not be consumed by consumers; once the server receives the producer's commit or rollback for it, the message is either delivered normally or rolled back (discarded).
RMQ_SYS_TRANS_HALF_TOPIC: after a half message reaches the MQ server it is stored in the consume queue of the topic RMQ_SYS_TRANS_HALF_TOPIC.
RMQ_SYS_TRANS_OP_HALF_TOPIC: after a half message is committed or rolled back, a record is stored in the queue of the topic RMQ_SYS_TRANS_OP_HALF_TOPIC to mark that the half message has been handled.

Back in org.apache.rocketmq.broker.processor.SendMessageProcessor#asyncSendMessage (shown in full earlier), the transaction branch is:

String transFlag = origProps.get(MessageConst.PROPERTY_TRANSACTION_PREPARED);
if (transFlag != null && Boolean.parseBoolean(transFlag)) {
    if (this.brokerController.getBrokerConfig().isRejectTransactionMessage()) {
        response.setCode(ResponseCode.NO_PERMISSION);
        response.setRemark("the broker[" + this.brokerController.getBrokerConfig().getBrokerIP1()
            + "] sending transaction message is forbidden");
        return CompletableFuture.completedFuture(response);
    }
    putMessageResult = this.brokerController.getTransactionalMessageService().asyncPrepareMessage(msgInner);
}

For a transactional message the producer sets the property TRAN_MSG (MessageConst.PROPERTY_TRANSACTION_PREPARED) to true.
The broker then calls TransactionalMessageService#asyncPrepareMessage.

Starting a transaction (prepare)

TransactionalMessageService

package org.apache.rocketmq.broker.transaction;

import org.apache.rocketmq.common.message.MessageExt;
import org.apache.rocketmq.common.protocol.header.EndTransactionRequestHeader;
import org.apache.rocketmq.store.MessageExtBrokerInner;
import org.apache.rocketmq.store.PutMessageResult;

import java.util.concurrent.CompletableFuture;

public interface TransactionalMessageService {

    /** Process a prepare message; normally this puts the message into the storage service. */
    PutMessageResult prepareMessage(MessageExtBrokerInner messageInner);

    /** Process a prepare message asynchronously; the future completes when the put succeeds (flush and replication done). */
    CompletableFuture<PutMessageResult> asyncPrepareMessage(MessageExtBrokerInner messageInner);

    /** Delete the prepare message once it has been committed or rolled back. */
    boolean deletePrepareMessage(MessageExt messageExt);

    /** Invoked to commit a prepare message; the result contains the prepare message and an error code. */
    OperationResult commitMessage(EndTransactionRequestHeader requestHeader);

    /** Invoked to roll back a prepare message; the result contains the prepare message and an error code. */
    OperationResult rollbackMessage(EndTransactionRequestHeader requestHeader);

    /**
     * Traverse half messages that are neither committed nor rolled back and send check-back requests
     * to the producer to obtain the transaction status.
     *
     * @param transactionTimeout  minimum age a transactional message must reach before it can be checked
     * @param transactionCheckMax maximum number of checks before the message is discarded
     * @param listener            invoked when a message should be checked or discarded
     */
    void check(long transactionTimeout, int transactionCheckMax, AbstractTransactionalMessageCheckListener listener);

    /** Open the transaction service. */
    boolean open();

    /** Close the transaction service. */
    void close();
}

As you can see, the interface roughly covers preparing the half message, committing, rolling back, and the check-back.

org.apache.rocketmq.broker.transaction.queue.TransactionalMessageServiceImpl#asyncPrepareMessage

@Override
public CompletableFuture<PutMessageResult> asyncPrepareMessage(MessageExtBrokerInner messageInner) {
    return transactionalMessageBridge.asyncPutHalfMessage(messageInner);
}

org.apache.rocketmq.broker.transaction.queue.TransactionalMessageBridge#asyncPutHalfMessage

public CompletableFuture<PutMessageResult> asyncPutHalfMessage(MessageExtBrokerInner messageInner) {
    return store.asyncPutMessage(parseHalfMessageInner(messageInner));
}

private MessageExtBrokerInner parseHalfMessageInner(MessageExtBrokerInner msgInner) {
    // back up the real topic and queue id as message properties
    MessageAccessor.putProperty(msgInner, MessageConst.PROPERTY_REAL_TOPIC, msgInner.getTopic());
    MessageAccessor.putProperty(msgInner, MessageConst.PROPERTY_REAL_QUEUE_ID,
        String.valueOf(msgInner.getQueueId()));
    msgInner.setSysFlag(MessageSysFlag.resetTransactionValue(msgInner.getSysFlag(), MessageSysFlag.TRANSACTION_NOT_TYPE));
    // redirect the message to RMQ_SYS_TRANS_HALF_TOPIC, queue 0
    msgInner.setTopic(TransactionalMessageUtil.buildHalfTopic());
    msgInner.setQueueId(0);
    msgInner.setPropertiesString(MessageDecoder.messageProperties2String(msgInner.getProperties()));
    return msgInner;
}

The original topic and queueId are saved into the message properties, and the topic is switched to RMQ_SYS_TRANS_HALF_TOPIC with queueId 0 (this topic has only one queue).
Then store.asyncPutMessage is called as in the normal flow, so the half message is stored in the transaction queue rather than in the real topic's queue.

Committing the transaction

EndTransactionProcessor

org.apache.rocketmq.broker.processor.EndTransactionProcessor#processRequest

OperationResult result = new OperationResult();
if (MessageSysFlag.TRANSACTION_COMMIT_TYPE == requestHeader.getCommitOrRollback()) {
    result = this.brokerController.getTransactionalMessageService().commitMessage(requestHeader);
    if (result.getResponseCode() == ResponseCode.SUCCESS) {
        RemotingCommand res = checkPrepareMessage(result.getPrepareMessage(), requestHeader);
        if (res.getCode() == ResponseCode.SUCCESS) {
            MessageExtBrokerInner msgInner = endMessageTransaction(result.getPrepareMessage());
            msgInner.setSysFlag(MessageSysFlag.resetTransactionValue(msgInner.getSysFlag(), requestHeader.getCommitOrRollback()));
            msgInner.setQueueOffset(requestHeader.getTranStateTableOffset());
            msgInner.setPreparedTransactionOffset(requestHeader.getCommitLogOffset());
            msgInner.setStoreTimestamp(result.getPrepareMessage().getStoreTimestamp());
            MessageAccessor.clearProperty(msgInner, MessageConst.PROPERTY_TRANSACTION_PREPARED);
            RemotingCommand sendResult = sendFinalMessage(msgInner);
            if (sendResult.getCode() == ResponseCode.SUCCESS) {
                this.brokerController.getTransactionalMessageService().deletePrepareMessage(result.getPrepareMessage());
            }
            return sendResult;
        }
        return res;
    }
}

This goes into org.apache.rocketmq.broker.transaction.queue.TransactionalMessageServiceImpl#commitMessage: the half message of the committed transaction is fetched, rewritten with its real topic and queueId and stored again, and then the previously stored half message is deleted.
Deleting the message actually means inserting a record into the Op Half Topic to mark it as handled.

Rolling back the transaction

Still in EndTransactionProcessor:

} else if (MessageSysFlag.TRANSACTION_ROLLBACK_TYPE == requestHeader.getCommitOrRollback()) {
    result = this.brokerController.getTransactionalMessageService().rollbackMessage(requestHeader);
    if (result.getResponseCode() == ResponseCode.SUCCESS) {
        RemotingCommand res = checkPrepareMessage(result.getPrepareMessage(), requestHeader);
        if (res.getCode() == ResponseCode.SUCCESS) {
            this.brokerController.getTransactionalMessageService().deletePrepareMessage(result.getPrepareMessage());
        }
        return res;
    }
}

Again, deleting the message actually means inserting a record into the Op Half Topic to mark it as handled.
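For reference, a hedged sketch of the idea behind deletePrepareMessage in the 4.x TransactionalMessageServiceImpl (simplified from memory, not the exact source): the "delete" is just an op record tagged "d" whose body is the half message's queue offset.

// Hedged sketch (simplified): "deleting" a half message appends an op record to the op half topic.
public boolean deletePrepareMessage(MessageExt halfMsg) {
    // TransactionalMessageUtil.REMOVETAG is "d"; the bridge builds a message on
    // RMQ_SYS_TRANS_OP_HALF_TOPIC whose body is String.valueOf(halfMsg.getQueueOffset())
    return transactionalMessageBridge.putOpMessage(halfMsg, TransactionalMessageUtil.REMOVETAG);
}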

Transaction check-back

The overall flow of a transactional message:
The producer sends a transactional message; the broker turns it into a half message, backing up the original topic and queueId.
The producer executes the local transaction and, depending on its outcome, sends a commit or rollback request. On commit, the broker first restores the half message to its original topic and queueId, puts it into a queue consumers can read, and marks it as deleted; on rollback it is simply marked as deleted. In both cases a record tagged 'd' is written to the half_op queue to indicate that the message has been handled, which the later check-back relies on.
If the broker does not receive the commit/rollback within roughly 60 seconds, it starts a check-back: it looks up the op records corresponding to the half message to de-duplicate, writes the message back into the commitlog so the consumption offset can move forward, and finally sends the check request to the producer.
The producer then answers with another commit/rollback request according to its transaction state.

How check-back is handled for transactional messages

In RocketMQ, messages are written sequentially and read randomly, with offsets recording where messages are stored and how far they have been consumed, so the prepare message of a transaction cannot be physically deleted. After startup the broker checks every 60 seconds which prepare messages need a check-back. From the analysis above we know that all prepare messages live in the Half Topic, so how are the ones that need checking picked out? This is where the Op Half Topic and two internal consumption offsets come in:
Half Topic: defaults to RMQ_SYS_TRANS_HALF_TOPIC, with one queue that stores all prepare messages.
Op Half Topic: defaults to RMQ_SYS_TRANS_OP_HALF_TOPIC, with the same number of queues as the Half Topic; it stores every prepare message whose state has been decided (rollback or commit), and the message body is that message's offset in the Half Topic.
Half Topic consumption offset: the default consumer is CID_RMQ_SYS_TRANS; each check-back round fetches prepare messages starting from this offset.
Op Half Topic consumption offset: the default consumer is also CID_RMQ_SYS_TRANS; every prepare message fetched must be checked against the Op Topic, and if an op record already exists, the prepare message has finished its flow and needs no check-back. The op messages used for the comparison are pulled starting from this offset, at most 32 per pull.
The check itself runs as follows:
1. Get all queues of the Half Topic and loop over them looking for prepare messages to check (in practice the Half Topic has only one queue).
2. Read the consumption offsets of the Half Topic and the Op Half Topic.
3. Call fillOpRemoveMap to collect the prepare messages already completed according to the Op Half Topic.
4. Fetch messages from the Half Topic starting at its current offset and compare each with the completed set from step 3 to decide whether a check-back is needed:
   - if the op messages already contain it, no check-back is performed;
   - if not, the message is read from the Half Topic and its write time is checked against the check-back conditions; a freshly written message is left for the next round. If the check count is below 15 and the write time is within 72h, the check count is updated, the message is written back to the Half Topic and a check-back is performed; otherwise the message is discarded.
5. After the loop, the consumption offsets of the Half Topic and the Op Half Topic are updated, so the next round starts from the newest offsets.
On the producer client, ClientRemotingProcessor#processRequest handles the broker's CHECK_TRANSACTION_STATE request and eventually calls checkLocalTransactionState, which is where the business code implements its own check-back logic; the result is sent back to the broker with endTransactionOneway. The whole flow can be followed starting from ClientRemotingProcessor#processRequest.

What means do we have to monitor the state of a transactional message?
A transactional message has three main states:

UNKNOW: the transaction is still undecided, usually because the local transaction takes too long or because of network problems. This state makes the broker check back with the producer, by default up to 15 times in total, the first check after 6s and each subsequent check 60s apart.
ROLLBACK: the transactional message has been rolled back because the local transaction failed.
COMMIT: the transactional message has been committed and will be dispatched to consumers normally.

TransactionalMessageCheckService

@Override
public void run() {
    log.info("Start transaction check service thread!");
    long checkInterval = brokerController.getBrokerConfig().getTransactionCheckInterval();
    while (!this.isStopped()) {
        this.waitForRunning(checkInterval);
    }
    log.info("End transaction check service thread!");
}

@Override
protected void onWaitEnd() {
    long timeout = brokerController.getBrokerConfig().getTransactionTimeOut();
    int checkMax = brokerController.getBrokerConfig().getTransactionCheckMax();
    long begin = System.currentTimeMillis();
    log.info("Begin to check prepare message, begin time:{}", begin);
    this.brokerController.getTransactionalMessageService().check(timeout, checkMax, this.brokerController.getTransactionalMessageCheckListener());
    log.info("End to check prepare message, consumed time:{}", System.currentTimeMillis() - begin);
}

The default check interval is 60 seconds.
Next, org.apache.rocketmq.broker.transaction.queue.TransactionalMessageServiceImpl#check:

public void check(long transactionTimeout, int transactionCheckMax,
    AbstractTransactionalMessageCheckListener listener) {
    try {
        String topic = TopicValidator.RMQ_SYS_TRANS_HALF_TOPIC;
        Set<MessageQueue> msgQueues = transactionalMessageBridge.fetchMessageQueues(topic);
        if (msgQueues == null || msgQueues.size() == 0) {
            log.warn("The queue of topic is empty :" + topic);
            return;
        }
        log.debug("Check topic={}, queues={}", topic, msgQueues);
        for (MessageQueue messageQueue : msgQueues) {
            long startTime = System.currentTimeMillis();
            MessageQueue opQueue = getOpQueue(messageQueue);
            long halfOffset = transactionalMessageBridge.fetchConsumeOffset(messageQueue);
            long opOffset = transactionalMessageBridge.fetchConsumeOffset(opQueue);
            log.info("Before check, the queue={} msgOffset={} opOffset={}", messageQueue, halfOffset, opOffset);
            if (halfOffset < 0 || opOffset < 0) {
                log.error("MessageQueue: {} illegal offset read: {}, op offset: {},skip this queue", messageQueue,
                    halfOffset, opOffset);
                continue;
            }

            List<Long> doneOpOffset = new ArrayList<>();
            HashMap<Long, Long> removeMap = new HashMap<>();
            PullResult pullResult = fillOpRemoveMap(removeMap, opQueue, opOffset, halfOffset, doneOpOffset);
            if (null == pullResult) {
                log.error("The queue={} check msgOffset={} with opOffset={} failed, pullResult is null",
                    messageQueue, halfOffset, opOffset);
                continue;
            }
            // single thread
            int getMessageNullCount = 1;
            long newOffset = halfOffset;
            long i = halfOffset;
            while (true) {
                if (System.currentTimeMillis() - startTime > MAX_PROCESS_TIME_LIMIT) {
                    log.info("Queue={} process time reach max={}", messageQueue, MAX_PROCESS_TIME_LIMIT);
                    break;
                }
                if (removeMap.containsKey(i)) {
                    log.info("Half offset {} has been committed/rolled back", i);
                    Long removedOpOffset = removeMap.remove(i);
                    doneOpOffset.add(removedOpOffset);
                } else {
                    GetResult getResult = getHalfMsg(messageQueue, i);
                    MessageExt msgExt = getResult.getMsg();
                    if (msgExt == null) {
                        if (getMessageNullCount++ > MAX_RETRY_COUNT_WHEN_HALF_NULL) {
                            break;
                        }
                        if (getResult.getPullResult().getPullStatus() == PullStatus.NO_NEW_MSG) {
                            log.debug("No new msg, the miss offset={} in={}, continue check={}, pull result={}", i,
                                messageQueue, getMessageNullCount, getResult.getPullResult());
                            break;
                        } else {
                            log.info("Illegal offset, the miss offset={} in={}, continue check={}, pull result={}",
                                i, messageQueue, getMessageNullCount, getResult.getPullResult());
                            i = getResult.getPullResult().getNextBeginOffset();
                            newOffset = i;
                            continue;
                        }
                    }

                    if (needDiscard(msgExt, transactionCheckMax) || needSkip(msgExt)) {
                        listener.resolveDiscardMsg(msgExt);
                        newOffset = i + 1;
                        i++;
                        continue;
                    }
                    if (msgExt.getStoreTimestamp() >= startTime) {
                        log.debug("Fresh stored. the miss offset={}, check it later, store={}", i,
                            new Date(msgExt.getStoreTimestamp()));
                        break;
                    }

                    long valueOfCurrentMinusBorn = System.currentTimeMillis() - msgExt.getBornTimestamp();
                    long checkImmunityTime = transactionTimeout;
                    String checkImmunityTimeStr = msgExt.getUserProperty(MessageConst.PROPERTY_CHECK_IMMUNITY_TIME_IN_SECONDS);
                    if (null != checkImmunityTimeStr) {
                        checkImmunityTime = getImmunityTime(checkImmunityTimeStr, transactionTimeout);
                        if (valueOfCurrentMinusBorn < checkImmunityTime) {
                            if (checkPrepareQueueOffset(removeMap, doneOpOffset, msgExt)) {
                                newOffset = i + 1;
                                i++;
                                continue;
                            }
                        }
                    } else {
                        if ((0 <= valueOfCurrentMinusBorn) && (valueOfCurrentMinusBorn < checkImmunityTime)) {
                            log.debug("New arrived, the miss offset={}, check it later checkImmunity={}, born={}", i,
                                checkImmunityTime, new Date(msgExt.getBornTimestamp()));
                            break;
                        }
                    }
                    List<MessageExt> opMsg = pullResult.getMsgFoundList();
                    boolean isNeedCheck = (opMsg == null && valueOfCurrentMinusBorn > checkImmunityTime)
                        || (opMsg != null && (opMsg.get(opMsg.size() - 1).getBornTimestamp() - startTime > transactionTimeout))
                        || (valueOfCurrentMinusBorn <= -1);

                    if (isNeedCheck) {
                        if (!putBackHalfMsgQueue(msgExt, i)) {
                            continue;
                        }
                        listener.resolveHalfMsg(msgExt);   // ask the producer for the transaction status again
                    } else {
                        pullResult = fillOpRemoveMap(removeMap, opQueue, pullResult.getNextBeginOffset(), halfOffset, doneOpOffset);
                        log.debug("The miss offset:{} in messageQueue:{} need to get more opMsg, result is:{}", i,
                            messageQueue, pullResult);
                        continue;
                    }
                }
                newOffset = i + 1;
                i++;
            }
            if (newOffset != halfOffset) {
                transactionalMessageBridge.updateConsumeOffset(messageQueue, newOffset);
            }
            long newOpOffset = calculateOpOffset(doneOpOffset, opOffset);
            if (newOpOffset != opOffset) {
                transactionalMessageBridge.updateConsumeOffset(opQueue, newOpOffset);
            }
        }
    } catch (Throwable e) {
        log.error("Check error", e);
    }
}

Here the consumption offsets of the two queues, RMQ_SYS_TRANS_HALF_TOPIC and RMQ_SYS_TRANS_OP_HALF_TOPIC, are read separately.

Failure recovery for transactional messages
The main abnormal situations are:

1. The producer submits the prepare message to the broker successfully, but that producer instance then crashes
2. The producer fails to submit the prepare message, possibly because the target broker has crashed
3. The producer submits the prepare message successfully and the local transaction succeeds, but the broker crashes before the transaction state is decided
4. The producer submits the prepare message successfully, but the broker crashes during the check-back process, leaving the state undecided

How they are resolved:
For 1: the check-back is routed to another producer instance of the same producerGroup, so the transactionId must be kept in central storage, and the producer group used for transactional messages must not be shared with other messages.
For 2: the producer instance will look for another available broker master to submit to; since the local transaction only runs after the prepare message is submitted, there is no impact. Note that when the producer reports a timeout exception, it does not resend.
For 3: because the commit/rollback is sent oneway, if consumers have not received the message you need other means to determine the transaction state; restart the broker as soon as possible, and after restart the check-back will complete the transactional message.
For 4: same as 3, restart the broker as soon as possible.

Why commit/rollback uses oneway

Oneway does not guarantee that the broker receives the request, but thanks to the check-back mechanism the transaction will still be correctly committed or rolled back.

The role of the producer group

1. Identifies a class of producers
2. Operations tools can query how many producer instances belong to this sending application
3. When sending distributed transactional messages, if the producer crashes midway, the broker will actively call back any machine in the producer group to confirm the transaction state.

The check-back is sent in org.apache.rocketmq.broker.transaction.AbstractTransactionalMessageCheckListener#sendCheckMessage:

Channel channel = brokerController.getProducerManager().getAvaliableChannel(groupId);
if (channel != null) {
    brokerController.getBroker2Client().checkProducerTransactionState(groupId, channel, checkTransactionStateRequestHeader, msgExt);
} else {
    LOGGER.warn("Check transaction failed, channel is null. groupId={}", groupId);
}

Delayed messages

org.apache.rocketmq.store.CommitLog#asyncPutMessage

if (tranType == MessageSysFlag.TRANSACTION_NOT_TYPE
    || tranType == MessageSysFlag.TRANSACTION_COMMIT_TYPE) {
    // Delay Delivery
    if (msg.getDelayTimeLevel() > 0) {
        if (msg.getDelayTimeLevel() > this.defaultMessageStore.getScheduleMessageService().getMaxDelayLevel()) {
            msg.setDelayTimeLevel(this.defaultMessageStore.getScheduleMessageService().getMaxDelayLevel());
        }

        topic = TopicValidator.RMQ_SYS_SCHEDULE_TOPIC;
        queueId = ScheduleMessageService.delayLevel2QueueId(msg.getDelayTimeLevel());

        // Back up the real topic and queueId as properties, then switch topic and queueId
        MessageAccessor.putProperty(msg, MessageConst.PROPERTY_REAL_TOPIC, msg.getTopic());
        MessageAccessor.putProperty(msg, MessageConst.PROPERTY_REAL_QUEUE_ID, String.valueOf(msg.getQueueId()));
        msg.setPropertiesString(MessageDecoder.messageProperties2String(msg.getProperties()));

        msg.setTopic(topic);
        msg.setQueueId(queueId);
    }
}

The same trick again: the real topic is replaced, this time by SCHEDULE_TOPIC_XXXX (TopicValidator.RMQ_SYS_SCHEDULE_TOPIC).
The delay level determines the queueId (queueId = level - 1); the maximum level is 18, so there are 18 delay levels in total.
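On the producer side a delayed message is just an ordinary send with a delay level set on the message; a minimal sketch (group name, address and topic are placeholders):

import org.apache.rocketmq.client.producer.DefaultMQProducer;
import org.apache.rocketmq.common.message.Message;

public class DelayedSendSketch {
    public static void main(String[] args) throws Exception {
        DefaultMQProducer producer = new DefaultMQProducer("delay_producer_group"); // hypothetical group
        producer.setNamesrvAddr("127.0.0.1:9876");                                  // hypothetical address
        producer.start();
        Message msg = new Message("TopicDelay", "delayed hello".getBytes());
        msg.setDelayTimeLevel(3);   // level 3 = 10s with the default 18-level configuration
        producer.send(msg);         // the broker rewrites the topic to SCHEDULE_TOPIC_XXXX, queueId = level - 1
        producer.shutdown();
    }
}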

ScheduleMessageService

public void start() {
    if (started.compareAndSet(false, true)) {
        this.timer = new Timer("ScheduleMessageTimerThread", true);
        for (Map.Entry<Integer, Long> entry : this.delayLevelTable.entrySet()) {
            Integer level = entry.getKey();
            Long timeDelay = entry.getValue();
            Long offset = this.offsetTable.get(level);
            if (null == offset) {
                offset = 0L;
            }
            if (timeDelay != null) {
                this.timer.schedule(new DeliverDelayedMessageTimerTask(level, offset), FIRST_DELAY_TIME);
            }
        }

        this.timer.scheduleAtFixedRate(new TimerTask() {
            @Override
            public void run() {
                try {
                    if (started.get()) ScheduleMessageService.this.persist();
                } catch (Throwable e) {
                    log.error("scheduleAtFixedRate flush exception", e);
                }
            }
        }, 10000, this.defaultMessageStore.getMessageStoreConfig().getFlushDelayOffsetInterval());
    }
}

delayLevelTable stores the configured delay levels and their corresponding delay times; each delay level gets its own timer task.
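For reference, the 18 levels come from the default value of messageDelayLevel in MessageStoreConfig (it can be overridden in the broker configuration); in recent 4.x versions the default is:

// org.apache.rocketmq.store.config.MessageStoreConfig (default value)
private String messageDelayLevel = "1s 5s 10s 30s 1m 2m 3m 4m 5m 6m 7m 8m 9m 10m 20m 30m 1h 2h";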

                Long offset = this.offsetTable.get(level);

offsetTable maintains the consumption progress of each delay-level queue.

if (timeDelay != null) {
    this.timer.schedule(new DeliverDelayedMessageTimerTask(level, offset), FIRST_DELAY_TIME);
}

java.util.Timer is backed by a single thread that schedules the TimerTasks it owns.
A TimerTask is simply a class with a run method; the code to execute periodically goes inside run, and TimerTasks are usually created as anonymous classes.
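A minimal, self-contained java.util.Timer sketch mirroring that usage (names and delays are arbitrary):

import java.util.Timer;
import java.util.TimerTask;

public class TimerSketch {
    public static void main(String[] args) throws InterruptedException {
        Timer timer = new Timer("DemoTimerThread", true);   // one daemon thread drives all tasks
        timer.schedule(new TimerTask() {                    // anonymous TimerTask, like DeliverDelayedMessageTimerTask
            @Override
            public void run() {
                System.out.println("task fired");
                // DeliverDelayedMessageTimerTask reschedules itself here to keep polling its queue
            }
        }, 1000L);                                          // first execution after 1 second
        Thread.sleep(2000);                                 // keep the JVM alive long enough to see the task fire
        timer.cancel();
    }
}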

The thread that processes delayed messages

org.apache.rocketmq.store.schedule.ScheduleMessageService.DeliverDelayedMessageTimerTask
org.apache.rocketmq.store.schedule.ScheduleMessageService.DeliverDelayedMessageTimerTask#executeOnTimeup

public void executeOnTimeup() {
    // each delay level has its own queue under RMQ_SYS_SCHEDULE_TOPIC; the queue is looked up by level
    ConsumeQueue cq =
        ScheduleMessageService.this.defaultMessageStore.findConsumeQueue(TopicValidator.RMQ_SYS_SCHEDULE_TOPIC,
            delayLevel2QueueId(delayLevel));

    long failScheduleOffset = offset;

    if (cq != null) {
        SelectMappedBufferResult bufferCQ = cq.getIndexBuffer(this.offset);
        if (bufferCQ != null) {
            try {
                long nextOffset = offset;
                int i = 0;
                ConsumeQueueExt.CqExtUnit cqExtUnit = new ConsumeQueueExt.CqExtUnit();
                for (; i < bufferCQ.getSize(); i += ConsumeQueue.CQ_STORE_UNIT_SIZE) {
                    long offsetPy = bufferCQ.getByteBuffer().getLong();
                    int sizePy = bufferCQ.getByteBuffer().getInt();
                    long tagsCode = bufferCQ.getByteBuffer().getLong();

                    if (cq.isExtAddr(tagsCode)) {
                        if (cq.getExt(tagsCode, cqExtUnit)) {
                            tagsCode = cqExtUnit.getTagsCode();
                        } else {
                            // can't find ext content, so re-compute the tags code
                            log.error("[BUG] can't find consume queue extend file content!addr={}, offsetPy={}, sizePy={}",
                                tagsCode, offsetPy, sizePy);
                            long msgStoreTime = defaultMessageStore.getCommitLog().pickupStoreTimestamp(offsetPy, sizePy);
                            tagsCode = computeDeliverTimestamp(delayLevel, msgStoreTime);
                        }
                    }

                    long now = System.currentTimeMillis();
                    long deliverTimestamp = this.correctDeliverTimestamp(now, tagsCode);

                    nextOffset = offset + (i / ConsumeQueue.CQ_STORE_UNIT_SIZE);

                    long countdown = deliverTimestamp - now;
                    // deliver the message once its delivery time has been reached
                    if (countdown <= 0) {
                        MessageExt msgExt =
                            ScheduleMessageService.this.defaultMessageStore.lookMessageByOffset(offsetPy, sizePy);
                        if (msgExt != null) {
                            try {
                                MessageExtBrokerInner msgInner = this.messageTimeup(msgExt);
                                if (TopicValidator.RMQ_SYS_TRANS_HALF_TOPIC.equals(msgInner.getTopic())) {
                                    log.error("[BUG] the real topic of schedule msg is {}, discard the msg. msg={}",
                                        msgInner.getTopic(), msgInner);
                                    continue;
                                }
                                PutMessageResult putMessageResult =
                                    ScheduleMessageService.this.writeMessageStore.putMessage(msgInner);

                                if (putMessageResult != null
                                    && putMessageResult.getPutMessageStatus() == PutMessageStatus.PUT_OK) {
                                    continue;
                                } else {
                                    // XXX: warn and notify me
                                    log.error("ScheduleMessageService, a message time up, but reput it failed, topic: {} msgId {}",
                                        msgExt.getTopic(), msgExt.getMsgId());
                                    ScheduleMessageService.this.timer.schedule(
                                        new DeliverDelayedMessageTimerTask(this.delayLevel, nextOffset), DELAY_FOR_A_PERIOD);
                                    ScheduleMessageService.this.updateOffset(this.delayLevel, nextOffset);
                                    return;
                                }
                            } catch (Exception e) {
                                /* XXX: warn and notify me */
                                log.error("ScheduleMessageService, messageTimeup execute error, drop it. msgExt="
                                    + msgExt + ", nextOffset=" + nextOffset + ",offsetPy="
                                    + offsetPy + ",sizePy=" + sizePy, e);
                            }
                        }
                    } else {
                        ScheduleMessageService.this.timer.schedule(
                            new DeliverDelayedMessageTimerTask(this.delayLevel, nextOffset), countdown);
                        ScheduleMessageService.this.updateOffset(this.delayLevel, nextOffset);
                        return;
                    }
                } // end of for

                nextOffset = offset + (i / ConsumeQueue.CQ_STORE_UNIT_SIZE);
                ScheduleMessageService.this.timer.schedule(
                    new DeliverDelayedMessageTimerTask(this.delayLevel, nextOffset), DELAY_FOR_A_WHILE);
                ScheduleMessageService.this.updateOffset(this.delayLevel, nextOffset);
                return;
            } finally {
                bufferCQ.release();
            }
        } // end of if (bufferCQ != null)
        else {
            long cqMinOffset = cq.getMinOffsetInQueue();
            if (offset < cqMinOffset) {
                failScheduleOffset = cqMinOffset;
                log.error("schedule CQ offset invalid. offset=" + offset + ", cqMinOffset="
                    + cqMinOffset + ", queueId=" + cq.getQueueId());
            }
        }
    } // end of if (cq != null)

    ScheduleMessageService.this.timer.schedule(
        new DeliverDelayedMessageTimerTask(this.delayLevel, failScheduleOffset), DELAY_FOR_A_WHILE);
}
ConsumeQueue cq =
    ScheduleMessageService.this.defaultMessageStore.findConsumeQueue(TopicValidator.RMQ_SYS_SCHEDULE_TOPIC,
        delayLevel2QueueId(delayLevel));   // each delay level maps to one queue of the schedule topic

The delayed-message task looks up the corresponding consume queue by the schedule topic and the delay level.

     SelectMappedBufferResult bufferCQ = cq.getIndexBuffer(this.offset);

Entries are read starting from the current offset; if nothing is returned, there are no new delayed messages to process.
org.apache.rocketmq.store.schedule.ScheduleMessageService#computeDeliverTimestamp computes the delivery time:

public long computeDeliverTimestamp(final int delayLevel, final long storeTimestamp) {
    Long time = this.delayLevelTable.get(delayLevel);
    if (time != null) {
        // delivery time = store time + the delay configured for this level
        return time + storeTimestamp;
    }
    return storeTimestamp + 1000;
}

If the delay level is configured, the corresponding delay is added to the store timestamp (for example, level 3 with the default configuration adds 10 seconds); otherwise one second is added.
org.apache.rocketmq.store.schedule.ScheduleMessageService.DeliverDelayedMessageTimerTask#correctDeliverTimestamp

private long correctDeliverTimestamp(final long now, final long deliverTimestamp) {
    long result = deliverTimestamp;
    // the delivery time can never legitimately exceed now + the delay of this level
    long maxTimestamp = now + ScheduleMessageService.this.delayLevelTable.get(this.delayLevel);
    if (deliverTimestamp > maxTimestamp) {
        result = now;
    }
    return result;
}

The delivery time is compared with now plus the delay of this level; if it is larger than that maximum, the data is abnormal and the message is delivered immediately.

long countdown = deliverTimestamp - now;
// delivery time has been reached: look the message up in the commit log and re-deliver it
if (countdown <= 0) {
    MessageExt msgExt =
        ScheduleMessageService.this.defaultMessageStore.lookMessageByOffset(offsetPy, sizePy);
    if (msgExt != null) {
        MessageExtBrokerInner msgInner = this.messageTimeup(msgExt);
        // ... restore the real topic/queueId and put the message back into the store (shown in full above)
    }
}

Restoring the topic

private MessageExtBrokerInner messageTimeup(MessageExt msgExt) {
    MessageExtBrokerInner msgInner = new MessageExtBrokerInner();
    msgInner.setBody(msgExt.getBody());
    msgInner.setFlag(msgExt.getFlag());
    MessageAccessor.setProperties(msgInner, msgExt.getProperties());

    TopicFilterType topicFilterType = MessageExt.parseTopicFilterType(msgInner.getSysFlag());
    long tagsCodeValue =
        MessageExtBrokerInner.tagsString2tagsCode(topicFilterType, msgInner.getTags());
    msgInner.setTagsCode(tagsCodeValue);
    msgInner.setPropertiesString(MessageDecoder.messageProperties2String(msgExt.getProperties()));

    msgInner.setSysFlag(msgExt.getSysFlag());
    msgInner.setBornTimestamp(msgExt.getBornTimestamp());
    msgInner.setBornHost(msgExt.getBornHost());
    msgInner.setStoreHost(msgExt.getStoreHost());
    msgInner.setReconsumeTimes(msgExt.getReconsumeTimes());
    msgInner.setWaitStoreMsgOK(false);
    MessageAccessor.clearProperty(msgInner, MessageConst.PROPERTY_DELAY_TIME_LEVEL);

    // restore the real topic and queue id that were backed up as properties
    msgInner.setTopic(msgInner.getProperty(MessageConst.PROPERTY_REAL_TOPIC));
    String queueIdStr = msgInner.getProperty(MessageConst.PROPERTY_REAL_QUEUE_ID);
    int queueId = Integer.parseInt(queueIdStr);
    msgInner.setQueueId(queueId);

    return msgInner;
}

The message is then stored back under the topic it really belongs to:

PutMessageResult putMessageResult =
    ScheduleMessageService.this.writeMessageStore.putMessage(msgInner);
if (putMessageResult != null
    && putMessageResult.getPutMessageStatus() == PutMessageStatus.PUT_OK) {
    continue;
} else {
    // XXX: warn and notify me
    log.error("ScheduleMessageService, a message time up, but reput it failed, topic: {} msgId {}",
        msgExt.getTopic(), msgExt.getMsgId());
    ScheduleMessageService.this.timer.schedule(
        new DeliverDelayedMessageTimerTask(this.delayLevel, nextOffset), DELAY_FOR_A_PERIOD);
    ScheduleMessageService.this.updateOffset(this.delayLevel, nextOffset);
    return;
}

The message with its topic restored is written back so it lands in its real queue; if the write fails, a new task is created at the same offset and rescheduled on the timer.

Each entry that ConsumeQueue uses to store a message's position is CQ_STORE_UNIT_SIZE = 20 bytes. cqOffset is the number of entries already recorded in the ConsumeQueue, so expectLogicOffset is the byte position, within the ConsumeQueue's MappedFileQueue, at which the next entry will be stored.
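A one-line sketch of that calculation (cqOffset being the number of entries already recorded):

// Each ConsumeQueue entry is 20 bytes: 8-byte commit log offset + 4-byte size + 8-byte tags code,
// so the byte position of the next entry in the mapped file queue is:
long expectLogicOffset = cqOffset * ConsumeQueue.CQ_STORE_UNIT_SIZE;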

Calculating the next offset

  nextOffset = offset + (i / ConsumeQueue.CQ_STORE_UNIT_SIZE);


nextOffset = offset + (i / ConsumeQueue.CQ_STORE_UNIT_SIZE);
ScheduleMessageService.this.timer.schedule(
    new DeliverDelayedMessageTimerTask(this.delayLevel, nextOffset), DELAY_FOR_A_WHILE);
ScheduleMessageService.this.updateOffset(this.delayLevel, nextOffset);

After all entries readable from the current offset have been checked, the offset is updated and wrapped into the next task, which is scheduled to run again after 100 ms (DELAY_FOR_A_WHILE) to keep scanning the queue.

How could this be extended to support arbitrary delay times?

RocketMQ's current logic puts messages of different delay levels into different queues, scans those queues in timer tasks, and re-delivers messages whose time has come to their real queues.
With arbitrary delay times, messages could no longer be bucketed into per-level queues. In-memory delayed-task queues (for example, the priority blocking queue used by scheduled thread pools) solve this by keeping tasks sorted, but MQ queues are actually stored in files.
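For contrast, arbitrary delays are trivial in memory with the JDK's DelayQueue, which keeps elements ordered by remaining delay; the sketch below is purely illustrative and shows exactly what a file-backed commit log cannot do cheaply:

import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class DelayQueueSketch {
    // A heap-ordered, in-memory delayed element: fine inside a process, not for a commit log.
    static class DelayedMsg implements Delayed {
        final String body;
        final long fireAtMillis;

        DelayedMsg(String body, long delayMillis) {
            this.body = body;
            this.fireAtMillis = System.currentTimeMillis() + delayMillis;
        }

        @Override
        public long getDelay(TimeUnit unit) {
            return unit.convert(fireAtMillis - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
        }

        @Override
        public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.MILLISECONDS), other.getDelay(TimeUnit.MILLISECONDS));
        }
    }

    public static void main(String[] args) throws InterruptedException {
        DelayQueue<DelayedMsg> queue = new DelayQueue<>();
        queue.put(new DelayedMsg("delay 3s", 3000));
        queue.put(new DelayedMsg("delay 1s", 1000));
        System.out.println(queue.take().body);   // "delay 1s" comes out first, although it was put second
        System.out.println(queue.take().body);
    }
}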

Where is the difficulty with arbitrary-delay messages?

The open-source version does not support arbitrary delays; I suspect a couple of reasons:

  1. The demand for arbitrary delays is not strong
  2. It may be a technically valuable feature that they prefer not to open-source

Weak demand
The demand for arbitrary delays really is weak, because:

  1. Delay is not a core feature of the MQ scenario, and the cost for a business to build its own workaround is small
  2. Business delay requirements are usually fixed, e.g. check whether an order was paid half an hour after it is placed, or check whether goods were received 7 days after shipping

At my company, it was more than a year after our MQ went live before a business team asked for delayed messages, and even then they did not need arbitrary delays; matching the open-source RocketMQ levels was enough.
Reluctance to open-source
For differentiation (so the feature sells on the cloud), the open-source version's capability was cut down, which is why open-source RocketMQ only supports a fixed set of delay levels.
Where is the difficulty?
Since the business needs it, we certainly have to support it.
First, let's define the scope:
Within our system, supporting arbitrary-delay messages means:

  1. Precision down to the second
  2. A maximum delay of 30 days

Holding ourselves to a high standard, we are not satisfied with the 18-level scheme of open-source RocketMQ. So if we were to implement a message queue that supports arbitrary delays ourselves, where would the difficulty be?

  1. Sorting
  2. Message storage

First, supporting arbitrary delays means messages have to be sorted on the server side.
For example, a user first sends a message delayed by 1 minute and one second later sends a message delayed by 3 seconds; clearly the 3-second message must be delivered first, so the server would have to sort messages after receiving them and before delivering them.
In an MQ, messages must be persisted to disk for reliability, and the performance and latency requirements make sorting messages on the server side completely unacceptable.
Second, current MQ designs (RocketMQ, Kafka) are all WAL-based: log files expire and are deleted, and usually only a recent window of data is retained.

Supporting arbitrary delays would therefore require keeping the last 30 days of messages.
Quoting the marketing line: used by 1000+ core applications inside Alibaba, moving hundreds of billions of messages a day, battle-tested on the Double 11 trading and product links, stable and reliable.
Think about hundreds of billions of messages per day kept for 30 days: the number of servers required makes this clearly infeasible.