Kafka Official Documentation

Author: blotstorm | Published 2017-10-31 11:47

    4.3 Efficiency

    There are two common causes of inefficiency in this type of system: too many small I/O operations, and excessive byte copying.
    The small I/O problem happens both between the client and the server and in the server's own persistent operations.
    To avoid this, our protocol is built around a "message set" abstraction that naturally groups messages together. This allows network requests to group messages together and amortize the overhead of the network roundtrip rather than sending a single message at a time. The server in turn appends chunks of messages to its log in one go, and the consumer fetches large linear chunks at a time.
    This simple optimization produces an orders-of-magnitude speedup. Batching leads to larger network packets, larger sequential disk operations, contiguous memory blocks, and so on, all of which allow Kafka to turn a bursty stream of random message writes into linear writes that flow to the consumers.
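    The message-set batching described above is what the producer's batch.size and linger.ms settings feed into. A minimal sketch, not from the original text; the broker address, topic name, and values are illustrative:

        // Sketch: producer tuned to accumulate records into larger batches.
        import java.util.Properties;
        import org.apache.kafka.clients.producer.KafkaProducer;
        import org.apache.kafka.clients.producer.ProducerRecord;

        public class BatchingProducerSketch {
            public static void main(String[] args) {
                Properties props = new Properties();
                props.put("bootstrap.servers", "localhost:9092"); // illustrative
                props.put("key.serializer",
                          "org.apache.kafka.common.serialization.StringSerializer");
                props.put("value.serializer",
                          "org.apache.kafka.common.serialization.StringSerializer");
                props.put("batch.size", "65536"); // accumulate up to 64 KB per partition batch
                props.put("linger.ms", "10");     // wait up to 10 ms for more records to batch

                try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                    for (int i = 0; i < 1000; i++) {
                        // Sends are asynchronous; records accumulate into message sets
                        // and go out as fewer, larger network requests.
                        producer.send(new ProducerRecord<>("events",
                                Integer.toString(i), "payload-" + i));
                    }
                } // close() flushes any outstanding batches
            }
        }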
    The other inefficiency is in byte copying. At low message rates this is not an issue, but under load the impact is significant. To avoid this we employ a standardized binary message format that is shared by the producer, the broker, and the consumer (so data chunks can be transferred without modification between them).
    Modern unix operating systems offer a highly optimized code path for transferring data out of pagecache to a socket; in Linux this is done with the sendfile system call.
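    In Java, this code path is exposed as FileChannel.transferTo(), which on Linux can be serviced by sendfile(2), moving bytes from pagecache to the socket without copying them through user space. A minimal sketch of pushing a file to a socket this way; the file name and port are illustrative:

        // Sketch: zero-copy transfer of a log file to a connected socket.
        import java.io.IOException;
        import java.net.InetSocketAddress;
        import java.nio.channels.FileChannel;
        import java.nio.channels.SocketChannel;
        import java.nio.file.Path;
        import java.nio.file.StandardOpenOption;

        public class SendfileSketch {
            public static void main(String[] args) throws IOException {
                try (FileChannel log = FileChannel.open(Path.of("segment.log"),
                             StandardOpenOption.READ);
                     SocketChannel socket = SocketChannel.open(
                             new InetSocketAddress("localhost", 9999))) {
                    long position = 0;
                    long remaining = log.size();
                    while (remaining > 0) {
                        // transferTo may move fewer bytes than requested; loop until done.
                        long sent = log.transferTo(position, remaining, socket);
                        position += sent;
                        remaining -= sent;
                    }
                }
            }
        }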
    4.6 Message Delivery Semantics

    • At most once—Messages may be lost but are never redelivered.
    • At least once—Messages are never lost but may be redelivered.
    • Exactly once—this is what people actually want, each message is delivered once and only once.
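    On the consumer side, the first two semantics come down to when the offset is committed relative to processing. A minimal sketch, not in the original text; the broker address, topic, group id, and process() are illustrative:

        // Sketch: commit order determines at-most-once vs at-least-once.
        import java.time.Duration;
        import java.util.List;
        import java.util.Properties;
        import org.apache.kafka.clients.consumer.ConsumerRecord;
        import org.apache.kafka.clients.consumer.ConsumerRecords;
        import org.apache.kafka.clients.consumer.KafkaConsumer;

        public class DeliverySemanticsSketch {
            public static void main(String[] args) {
                Properties props = new Properties();
                props.put("bootstrap.servers", "localhost:9092");
                props.put("group.id", "sketch-group");
                props.put("enable.auto.commit", "false"); // commit manually to control semantics
                props.put("key.deserializer",
                          "org.apache.kafka.common.serialization.StringDeserializer");
                props.put("value.deserializer",
                          "org.apache.kafka.common.serialization.StringDeserializer");

                try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                    consumer.subscribe(List.of("events"));
                    while (true) {
                        ConsumerRecords<String, String> records =
                                consumer.poll(Duration.ofSeconds(1));

                        // At-most-once: commit first; a crash during processing loses messages.
                        consumer.commitSync();
                        for (ConsumerRecord<String, String> r : records) {
                            process(r);
                        }
                        // At-least-once would instead process first and commit afterwards;
                        // a crash between the two steps redelivers the uncommitted records.
                    }
                }
            }

            static void process(ConsumerRecord<String, String> r) { /* illustrative */ }
        }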

    These are not the strongest possible semantics for publishers. Although we cannot be sure of what happened in the case of a network error, it is possible to allow the producer to generate a sort of "primary key" that makes retrying the produce request idempotent. This feature is not trivial for a replicated system because of course it must work even (or especially) in the case of a server failure. With this feature it would suffice for the producer to retry until it receives acknowledgement of a successfully committed message, at which point we would guarantee the message had been published exactly once. We hope to add this in a future Kafka version. (Not implemented on the producer side as of this writing.)
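    Kafka 0.11 later shipped this idea as the idempotent producer: the broker deduplicates retried sends using a producer id and per-partition sequence numbers. A minimal sketch, assuming a 0.11+ client; the broker address and topic are illustrative:

        // Sketch: enabling idempotent writes so retries cannot duplicate a message.
        import java.util.Properties;
        import org.apache.kafka.clients.producer.KafkaProducer;
        import org.apache.kafka.clients.producer.ProducerRecord;

        public class IdempotentProducerSketch {
            public static void main(String[] args) {
                Properties props = new Properties();
                props.put("bootstrap.servers", "localhost:9092");
                props.put("key.serializer",
                          "org.apache.kafka.common.serialization.StringSerializer");
                props.put("value.serializer",
                          "org.apache.kafka.common.serialization.StringSerializer");
                props.put("enable.idempotence", "true"); // retries can no longer duplicate a write
                props.put("acks", "all");                // wait for the full ISR

                try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                    producer.send(new ProducerRecord<>("events", "key", "value"));
                }
            }
        }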
    The classic way of achieving this would be to introduce a two-phase commit between the storage for the consumer position and the storage of the consumers output. But this can be handled more simply and generally by simply letting the consumer store its offset in the same place as its output.
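    A minimal sketch of that pattern, with a hypothetical Storage class standing in for whatever transactional store holds the consumer's output; broker address, topic, and partition are illustrative:

        // Sketch: store the offset atomically with the output, then resume from it.
        import java.time.Duration;
        import java.util.List;
        import java.util.Properties;
        import org.apache.kafka.clients.consumer.ConsumerRecord;
        import org.apache.kafka.clients.consumer.KafkaConsumer;
        import org.apache.kafka.common.TopicPartition;

        public class OffsetWithOutputSketch {
            public static void main(String[] args) {
                Storage db = new Storage(); // hypothetical transactional store
                TopicPartition tp = new TopicPartition("events", 0);

                Properties props = new Properties();
                props.put("bootstrap.servers", "localhost:9092");
                props.put("enable.auto.commit", "false"); // Kafka no longer tracks our position
                props.put("key.deserializer",
                          "org.apache.kafka.common.serialization.StringDeserializer");
                props.put("value.deserializer",
                          "org.apache.kafka.common.serialization.StringDeserializer");

                try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                    consumer.assign(List.of(tp));
                    consumer.seek(tp, db.loadOffset(tp)); // resume from the stored offset
                    while (true) {
                        for (ConsumerRecord<String, String> r :
                                consumer.poll(Duration.ofSeconds(1))) {
                            // Output and offset persist in one transaction: either both
                            // survive a crash or neither does.
                            db.writeOutputAndOffsetAtomically(r.value(), tp, r.offset() + 1);
                        }
                    }
                }
            }

            // Hypothetical stand-in for a transactional store (e.g. an RDBMS).
            static class Storage {
                long loadOffset(TopicPartition tp) { return 0L; }
                void writeOutputAndOffsetAtomically(String out, TopicPartition tp, long next) { }
            }
        }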
    4.7 Replication
    The unit of replication is the topic partition.
    For Kafka, node liveness has two conditions:

    • A node must be able to maintain its session with ZooKeeper (via ZooKeeper's heartbeat mechanism)
    • If it is a slave it must replicate the writes happening on the leader and not fall "too far" behind

    We refer to nodes satisfying these two conditions as being "in sync" to avoid the vagueness of "alive" or "failed". The leader keeps track of the set of "in sync" nodes. If a follower dies, gets stuck, or falls behind, the leader will remove it from the list of in sync replicas. The determination of stuck and lagging replicas is controlled by the replica.lag.time.max.ms configuration.
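    A broker-side sketch of that setting; the value shown is illustrative, not a recommendation:

        # server.properties: a follower that has not caught up to the leader's
        # log end within this window is dropped from the ISR.
        replica.lag.time.max.ms=10000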
    A message is considered "committed" when all in sync replicas for that partition have applied it to their log.

    Replicated Logs: Quorums, ISRs, and State Machines (Oh my!)
    This majority-vote approach (a write is committed once a majority of the replicas have acknowledged it) has a very nice property: the latency depends only on the fastest servers. That is, if the replication factor is three, the latency is determined by the faster slave, not the slower one.
    There are a rich variety of algorithms in this family including ZooKeeper's Zab, Raft, and Viewstamped Replication.
    Instead of majority vote, Kafka dynamically maintains a set of in-sync replicas (ISR) that are caught up to the leader. Only members of this set are eligible for election as leader. A write to a Kafka partition is not considered committed until all in-sync replicas have received the write. This ISR set is persisted to ZooKeeper whenever it changes. Because of this, any replica in the ISR is eligible to be elected leader. With this model, Kafka can tolerate f failures with only f + 1 replicas, whereas a majority-vote quorum needs 2f + 1 replicas to tolerate f failures.
