
Kafka Producer Source Code

Author: alvin_wang | Published 2018-08-14 09:07

    Introduction

    Kafka is a distributed streaming platform. It lets you:

    • Publish and subscribe to streams of records, similar to a message queue or an enterprise messaging system
    • Store streams of records in a fault-tolerant way
    • Process streams of records as they arrive

    Kafka is mainly used for two kinds of applications:

    • Building real-time data pipelines that reliably move data between systems or applications
    • Building real-time streaming applications that transform or react to streams of data

    Kafka has four core APIs:

    • The Producer API publishes a stream of records to one or more Kafka topics.
    • The Consumer API subscribes to one or more topics and processes the stream of records in them.
    • The Streams API acts as a stream processor, consuming input streams from one or more topics and producing output streams to one or more topics.
    • The Connector API builds and runs reusable producers or consumers that connect Kafka topics to existing applications or data systems.
    [Figure: kafka_Intro.png]
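
    To make the Producer API concrete before diving into the source, here is a minimal usage sketch; the broker address, topic name and serializer choices are illustrative assumptions, not taken from this article:

        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");          // assumed broker address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // send() is asynchronous: the record is buffered and transmitted by a background thread
            producer.send(new ProducerRecord<>("demo-topic", "key-1", "hello kafka")); // "demo-topic" is a placeholder
        }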

    This article analyzes the Producer from the source-code perspective.

    Producer

    The producer publishes records to the topics it chooses. It is responsible for deciding which partition of the topic each record is assigned to. The simplest form of load balancing is round-robin assignment across partitions; a partition function can also be used (for example, one based on a key in the record), as the sketch below shows.
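
    For example, using a producer configured as in the sketch above, a record with an explicit partition goes exactly where the caller puts it, a record with a key is routed by the default partitioner (a hash of the serialized key), and a record with neither is spread across the topic's partitions; topic and values are placeholders:

        // Explicit partition: always lands on partition 0 of the topic.
        producer.send(new ProducerRecord<>("demo-topic", 0, "key-1", "value-1"));

        // Key, no partition: the default partitioner hashes the key, so the same key
        // always maps to the same partition.
        producer.send(new ProducerRecord<>("demo-topic", "key-1", "value-1"));

        // No key, no partition: records are distributed across the partitions.
        producer.send(new ProducerRecord<>("demo-topic", "value-1"));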

    The following code uses a KafkaTemplate to obtain a Kafka client and then calls its send method to publish a message to the topic "abc123".

        @Autowired
        private KafkaTemplate<String, String> kafkaTemplate;
        public void send() {
            String message = "2018-08-07 08:21:47578|1|18701046390|001003|0|2|NULL|2018-08-07 08:21:47:544|2018-08-07 08:21:47:578|0|10.200.1.85|10.200.1.147:7022|";
            kafkaTemplate.send("abc123", message);
        }
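
    The @Autowired KafkaTemplate above assumes a template bean has been configured elsewhere. A minimal sketch of such a configuration is shown below; the class name, broker address and serializers are assumptions for illustration:

        @Configuration
        public class KafkaProducerConfig {

            @Bean
            public ProducerFactory<String, String> producerFactory() {
                Map<String, Object> config = new HashMap<>();
                config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
                config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
                config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
                return new DefaultKafkaProducerFactory<>(config);
            }

            @Bean
            public KafkaTemplate<String, String> kafkaTemplate() {
                return new KafkaTemplate<>(producerFactory());
            }
        }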
    

    Internally this relies on the doSend method. It first checks whether the template is transactional and, if so, asserts that a transaction is in progress. It then obtains a producer instance and calls its send method with a callback. When the callback completes, closeProducer is invoked to release the producer, unless the template is transactional.

        protected ListenableFuture<SendResult<K, V>> doSend(final ProducerRecord<K, V> producerRecord) {
            if (this.transactional) {
                Assert.state(inTransaction(),
                        "No transaction is in process; "
                            + "possible solutions: run the template operation within the scope of a "
                            + "template.executeInTransaction() operation, start a transaction with @Transactional "
                            + "before invoking the template method, "
                            + "run in a transaction started by a listener container when consuming a record");
            }
            final Producer<K, V> producer = getTheProducer();
            if (this.logger.isTraceEnabled()) {
                this.logger.trace("Sending: " + producerRecord);
            }
            final SettableListenableFuture<SendResult<K, V>> future = new SettableListenableFuture<>();
            producer.send(producerRecord, new Callback() {
    
            @Override
            public void onCompletion(RecordMetadata metadata, Exception exception) {
                try {
                    if (exception == null) {
                        future.set(new SendResult<>(producerRecord, metadata));
                        if (KafkaTemplate.this.producerListener != null) {
                            KafkaTemplate.this.producerListener.onSuccess(producerRecord, metadata);
                        }
                        if (KafkaTemplate.this.logger.isTraceEnabled()) {
                            KafkaTemplate.this.logger.trace("Sent ok: " + producerRecord + ", metadata: " + metadata);
                        }
                    }
                    else {
                        future.setException(new KafkaProducerException(producerRecord, "Failed to send", exception));
                        if (KafkaTemplate.this.producerListener != null) {
                            KafkaTemplate.this.producerListener.onError(producerRecord, exception);
                        }
                        if (KafkaTemplate.this.logger.isDebugEnabled()) {
                            KafkaTemplate.this.logger.debug("Failed to send: " + producerRecord, exception);
                        }
                    }
                }
                finally {
                    if (!KafkaTemplate.this.transactional) {
                        closeProducer(producer, false);
                    }
                }
            }
    
            });
            if (this.autoFlush) {
                flush();
            }
            if (this.logger.isTraceEnabled()) {
                this.logger.trace("Sent: " + producerRecord);
            }
            return future;
        }
    

    The doSend method of KafkaProducer implements the asynchronous send of a record to a topic. It does the following:

    1. Make sure the metadata for the topic is available, within the configured maximum blocking time.
    2. Serialize the record key (the topic and headers are passed to the key serializer).
    3. Serialize the record value in the same way.
    4. Determine the record's partition: if a partition was set when the record was created, use it; otherwise let the configured partitioner compute one (by default a hash of the key, or a rotation across partitions when there is no key).
    5. Check that the estimated size of the serialized record does not exceed the configured limits.
    6. Append the partition, serialized key and value, timestamp, headers and callback to the accumulator, along with the remaining wait time.
    7. If the result reports that the batch is full or that a new batch was created, wake up the sender to transmit the data.
    8. Return the result's future to the caller.
        private Future<RecordMetadata> doSend(ProducerRecord<K, V> record, Callback callback) {
            TopicPartition tp = null;
            try {
                // first make sure the metadata for the topic is available
                ClusterAndWaitTime clusterAndWaitTime = waitOnMetadata(record.topic(), record.partition(), maxBlockTimeMs);
                long remainingWaitMs = Math.max(0, maxBlockTimeMs - clusterAndWaitTime.waitedOnMetadataMs);
                Cluster cluster = clusterAndWaitTime.cluster;
                byte[] serializedKey;
                try {
                    serializedKey = keySerializer.serialize(record.topic(), record.headers(), record.key());
                } catch (ClassCastException cce) {
                    throw new SerializationException("Can't convert key of class " + record.key().getClass().getName() +
                            " to class " + producerConfig.getClass(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG).getName() +
                            " specified in key.serializer", cce);
                }
                byte[] serializedValue;
                try {
                    serializedValue = valueSerializer.serialize(record.topic(), record.headers(), record.value());
                } catch (ClassCastException cce) {
                    throw new SerializationException("Can't convert value of class " + record.value().getClass().getName() +
                            " to class " + producerConfig.getClass(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG).getName() +
                            " specified in value.serializer", cce);
                }
                int partition = partition(record, serializedKey, serializedValue, cluster);
                tp = new TopicPartition(record.topic(), partition);
    
                setReadOnly(record.headers());
                Header[] headers = record.headers().toArray();
    
                int serializedSize = AbstractRecords.estimateSizeInBytesUpperBound(apiVersions.maxUsableProduceMagic(),
                        compressionType, serializedKey, serializedValue, headers);
                ensureValidRecordSize(serializedSize);
                long timestamp = record.timestamp() == null ? time.milliseconds() : record.timestamp();
                log.trace("Sending record {} with callback {} to topic {} partition {}", record, callback, record.topic(), partition);
                // producer callback will make sure to call both 'callback' and interceptor callback
                Callback interceptCallback = this.interceptors == null ? callback : new InterceptorCallback<>(callback, this.interceptors, tp);
    
                if (transactionManager != null && transactionManager.isTransactional())
                    transactionManager.maybeAddPartitionToTransaction(tp);
    
                RecordAccumulator.RecordAppendResult result = accumulator.append(tp, timestamp, serializedKey,
                        serializedValue, headers, interceptCallback, remainingWaitMs);
                if (result.batchIsFull || result.newBatchCreated) {
                    log.trace("Waking up the sender since topic {} partition {} is either full or getting a new batch", record.topic(), partition);
                    this.sender.wakeup();
                }
                return result.future;
                // handling exceptions and record the errors;
                // for API exceptions return them in the future,
                // for other exceptions throw directly
            } catch (ApiException e) {
                log.debug("Exception occurred during message send:", e);
                if (callback != null)
                    callback.onCompletion(null, e);
                this.errors.record();
                if (this.interceptors != null)
                    this.interceptors.onSendError(record, tp, e);
                return new FutureFailure(e);
            } catch (InterruptedException e) {
                this.errors.record();
                if (this.interceptors != null)
                    this.interceptors.onSendError(record, tp, e);
                throw new InterruptException(e);
            } catch (BufferExhaustedException e) {
                this.errors.record();
                this.metrics.sensor("buffer-exhausted-records").record();
                if (this.interceptors != null)
                    this.interceptors.onSendError(record, tp, e);
                throw e;
            } catch (KafkaException e) {
                this.errors.record();
                if (this.interceptors != null)
                    this.interceptors.onSendError(record, tp, e);
                throw e;
            } catch (Exception e) {
                // we notify interceptor about all exceptions, since onSend is called before anything else in this method
                if (this.interceptors != null)
                    this.interceptors.onSendError(record, tp, e);
                throw e;
            }
        }
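
    From the caller's point of view, the Future<RecordMetadata> returned here can either be blocked on for a synchronous send or combined with a callback for fully asynchronous handling. A small sketch (topic and value are placeholders; get() also throws InterruptedException/ExecutionException):

        // Synchronous: block until the send completes and inspect the metadata.
        RecordMetadata metadata = producer.send(new ProducerRecord<>("demo-topic", "value")).get();
        System.out.printf("stored at %s-%d, offset %d%n", metadata.topic(), metadata.partition(), metadata.offset());

        // Asynchronous: the callback runs on the producer's I/O thread when the send completes.
        producer.send(new ProducerRecord<>("demo-topic", "value"), (md, exception) -> {
            if (exception != null) {
                exception.printStackTrace(); // e.g. record too large, retries exhausted, ...
            }
        });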
    

    The append method of RecordAccumulator adds the record to the accumulator and returns the result of the append.

    It first checks whether there is an in-progress batch for the partition; if there is, it tries to append the serialized record to it directly. If not, it allocates a buffer and creates a new batch, appends the serialized record to that batch's buffer, and adds the batch to the partition's queue. Finally it returns the append result.

        public RecordAppendResult append(TopicPartition tp,
                                         long timestamp,
                                         byte[] key,
                                         byte[] value,
                                         Header[] headers,
                                         Callback callback,
                                         long maxTimeToBlock) throws InterruptedException {
            // We keep track of the number of appending thread to make sure we do not miss batches in
            // abortIncompleteBatches().
            appendsInProgress.incrementAndGet();
            ByteBuffer buffer = null;
            if (headers == null) headers = Record.EMPTY_HEADERS;
            try {
                // check if we have an in-progress batch
                Deque<ProducerBatch> dq = getOrCreateDeque(tp);
                synchronized (dq) {
                    if (closed)
                        throw new IllegalStateException("Cannot send after the producer is closed.");
                    RecordAppendResult appendResult = tryAppend(timestamp, key, value, headers, callback, dq);
                    if (appendResult != null)
                        return appendResult;
                }
    
                // we don't have an in-progress record batch try to allocate a new batch
                byte maxUsableMagic = apiVersions.maxUsableProduceMagic();
                int size = Math.max(this.batchSize, AbstractRecords.estimateSizeInBytesUpperBound(maxUsableMagic, compression, key, value, headers));
                log.trace("Allocating a new {} byte message buffer for topic {} partition {}", size, tp.topic(), tp.partition());
                buffer = free.allocate(size, maxTimeToBlock);
                synchronized (dq) {
                    // Need to check if producer is closed again after grabbing the dequeue lock.
                    if (closed)
                        throw new IllegalStateException("Cannot send after the producer is closed.");
    
                    RecordAppendResult appendResult = tryAppend(timestamp, key, value, headers, callback, dq);
                    if (appendResult != null) {
                        // Somebody else found us a batch, return the one we waited for! Hopefully this doesn't happen often...
                        return appendResult;
                    }
    
                    MemoryRecordsBuilder recordsBuilder = recordsBuilder(buffer, maxUsableMagic);
                    ProducerBatch batch = new ProducerBatch(tp, recordsBuilder, time.milliseconds());
                    FutureRecordMetadata future = Utils.notNull(batch.tryAppend(timestamp, key, value, headers, callback, time.milliseconds()));
    
                    dq.addLast(batch);
                    incomplete.add(batch);
    
                    // Don't deallocate this buffer in the finally block as it's being used in the record batch
                    buffer = null;
    
                    return new RecordAppendResult(future, dq.size() > 1 || batch.isFull(), true);
                }
            } finally {
                if (buffer != null)
                    free.deallocate(buffer);
                appendsInProgress.decrementAndGet();
            }
        }
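
    The behaviour of append is governed by a handful of producer settings: batch.size bounds the buffer allocated for a new batch, buffer.memory caps the total memory the BufferPool (the free field) can hand out, max.block.ms is where maxTimeToBlock ultimately comes from, and linger.ms controls how long a not-yet-full batch may wait before it becomes sendable. A sketch of tuning them; the values are illustrative only:

        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 32 * 1024);              // 32 KB per in-progress batch
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 64 * 1024 * 1024L);   // 64 MB total accumulator memory
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 5_000);                // give up after 5 s when no buffer is free
        props.put(ProducerConfig.LINGER_MS_CONFIG, 10);                      // wait up to 10 ms to fill a batch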
    

    The class that actually sends records to the cluster is Sender, which runs as a background thread. In its run method it calls sendProducerData.

    sendProducerData does the following:

    1. Get from the accumulator the list of partitions with data ready to send.
    2. If the leader of any partition is not yet known, force a metadata update.
    3. Remove any nodes that are not yet ready to be sent to.
    4. Create produce requests for the drained batches.
    5. Fail expired batches, resetting the producer id if an expired batch had previously been sent to the broker.
    6. Send the produce requests.
        private long sendProducerData(long now) {
            Cluster cluster = metadata.fetch();
    
            // get the list of partitions with data ready to send
            RecordAccumulator.ReadyCheckResult result = this.accumulator.ready(cluster, now);
    
            // if there are any partitions whose leaders are not known yet, force metadata update
            if (!result.unknownLeaderTopics.isEmpty()) {
                // The set of topics with unknown leader contains topics with leader election pending as well as
                // topics which may have expired. Add the topic again to metadata to ensure it is included
                // and request metadata update, since there are messages to send to the topic.
                for (String topic : result.unknownLeaderTopics)
                    this.metadata.add(topic);
                this.metadata.requestUpdate();
            }
    
            // remove any nodes we aren't ready to send to
            Iterator<Node> iter = result.readyNodes.iterator();
            long notReadyTimeout = Long.MAX_VALUE;
            while (iter.hasNext()) {
                Node node = iter.next();
                if (!this.client.ready(node, now)) {
                    iter.remove();
                    notReadyTimeout = Math.min(notReadyTimeout, this.client.connectionDelay(node, now));
                }
            }
    
            // create produce requests
            Map<Integer, List<ProducerBatch>> batches = this.accumulator.drain(cluster, result.readyNodes,
                    this.maxRequestSize, now);
            if (guaranteeMessageOrder) {
                // Mute all the partitions drained
                for (List<ProducerBatch> batchList : batches.values()) {
                    for (ProducerBatch batch : batchList)
                        this.accumulator.mutePartition(batch.topicPartition);
                }
            }
    
            List<ProducerBatch> expiredBatches = this.accumulator.expiredBatches(this.requestTimeout, now);
            // Reset the producer id if an expired batch has previously been sent to the broker. Also update the metrics
            // for expired batches. see the documentation of @TransactionState.resetProducerId to understand why
            // we need to reset the producer id here.
            if (!expiredBatches.isEmpty())
                log.trace("Expired {} batches in accumulator", expiredBatches.size());
            for (ProducerBatch expiredBatch : expiredBatches) {
                failBatch(expiredBatch, -1, NO_TIMESTAMP, expiredBatch.timeoutException(), false);
                if (transactionManager != null && expiredBatch.inRetry()) {
                    // This ensures that no new batches are drained until the current in flight batches are fully resolved.
                    transactionManager.markSequenceUnresolved(expiredBatch.topicPartition);
                }
            }
    
            sensors.updateProduceRequestMetrics(batches);
    
            // If we have any nodes that are ready to send + have sendable data, poll with 0 timeout so this can immediately
            // loop and try sending more data. Otherwise, the timeout is determined by nodes that have partitions with data
            // that isn't yet sendable (e.g. lingering, backing off). Note that this specifically does not include nodes
            // with sendable data that aren't ready to send since they would cause busy looping.
            long pollTimeout = Math.min(result.nextReadyCheckDelayMs, notReadyTimeout);
            if (!result.readyNodes.isEmpty()) {
                log.trace("Nodes with data ready to send: {}", result.readyNodes);
                // if some partitions are already ready to be sent, the select time would be 0;
                // otherwise if some partition already has some data accumulated but not ready yet,
                // the select time will be the time difference between now and its linger expiry time;
                // otherwise the select time will be the time difference between now and the metadata expiry time;
                pollTimeout = 0;
            }
            sendProduceRequests(batches, now);
    
            return pollTimeout;
        }
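
    Several fields used above map directly to producer settings, at least in the client version this article is based on: maxRequestSize comes from max.request.size, requestTimeout from request.timeout.ms, and guaranteeMessageOrder is enabled when max.in.flight.requests.per.connection is 1. A sketch of setting them; the values are illustrative only:

        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 1024 * 1024);       // maxRequestSize: 1 MB per produce request
        props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 30_000);          // requestTimeout: 30 s
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 1);   // guaranteeMessageOrder = true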
    

    Summary

    This article gave a brief overview of Kafka and its four core APIs: Producer, Consumer, Streams, and Connector. It then walked through the Producer's send path at the source level: records are first placed into the accumulator's buffers, and the Sender thread then drains those buffers and sends the batches to the cluster.
