Let us see what issues we have with the above setup because of its dependence on ZooKeeper. Every partition leader maintains the in-sync replica (ISR) list, and there should be multiple replicas of each partition, each stored on a different broker. The more topics and partitions a cluster has, the longer a Kafka controller failover takes. Managing a ZooKeeper cluster also creates an additional burden on the infrastructure and the admins. Before KIP-500, our Kafka setup looks as depicted below.

In Apache Kafka 2.5, some preparatory work was done towards the removal of Apache ZooKeeper (ZK). As of version 2.5, Kafka supports authenticating to ZooKeeper with SASL and mTLS, either individually or together. KIP-612 adds the ability to limit the connection creation rate on brokers, while KIP-651 adds support for the PEM format for SSL certificates and private keys; these improvements inherit the security characteristics of the functionality they extend.

KIP-500 expresses a vision of how we would like to evolve Kafka in the future, and follow-on KIPs will hash out the concrete details of each change. Rather than being stored in a separate system, metadata should be stored in Kafka itself. This avoids all the problems associated with discrepancies between the controller state and the ZooKeeper state. Rather than having notifications pushed out to them, brokers should simply consume metadata events from the event log, which ensures that metadata changes always arrive in the same order. Brokers will be able to store metadata locally in a file; when they start up, they will only need to read what has changed from the controller, not the full state. This will let us support more partitions with less CPU consumption. Post KIP-500, a change such as creating a topic only needs to be appended to the metadata partition. With KIP-500, Kafka will not need ZooKeeper anymore.

KIP-500 also introduced the concept of a bridge release that can coexist with both pre- and post-KIP-500 versions of Kafka. Once it has taken over the /controller node, the active controller will load the full state of ZooKeeper and write it out to the quorum's metadata storage. After this point, the metadata quorum, rather than the data in ZooKeeper, will be the metadata store of record. Just like ZooKeeper, Raft requires a majority of nodes to be running in order to continue: a three-node controller cluster can survive one failure, a five-node controller cluster can survive two failures, and so on.
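To make the majority rule concrete, here is a tiny illustrative Java helper (not something that ships with Kafka) that computes how many controller failures a quorum of a given size can tolerate:

```java
public final class QuorumMath {
    /**
     * A Raft quorum needs a strict majority of nodes to stay available,
     * so a cluster of n controllers tolerates floor((n - 1) / 2) failures.
     */
    static int toleratedFailures(int quorumSize) {
        return (quorumSize - 1) / 2;
    }

    public static void main(String[] args) {
        System.out.println(toleratedFailures(3)); // 1 failure
        System.out.println(toleratedFailures(5)); // 2 failures
    }
}
```

This is also why three or five controller nodes are the usual choices: adding a fourth or sixth node costs hardware without buying any extra fault tolerance.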
Note that this diagram is slightly misleading. Other brokers besides the controller can and do communicate with ZooKeeper, so really a line should be drawn from each broker to ZK; however, drawing that many lines would make the diagram difficult to read. Another issue which this diagram leaves out is that external command line tools and utilities can modify the state in ZooKeeper without the involvement of the controller. As discussed earlier, these issues make it difficult to know whether the state in memory on the controller truly reflects the persistent state in ZooKeeper. Today, this coordination work is done by the Kafka broker that is acting as the controller.

KIP-500 describes a complete removal of ZooKeeper. ZooKeeper, a centralized service for maintaining configuration in a distributed system, is currently used to store a Kafka cluster's metadata. For example, if you lost the Kafka data in ZooKeeper, the mapping of replicas to brokers and the topic configurations would be lost as well, making your Kafka cluster no longer functional and potentially resulting in total data loss. This, however, will change shortly as part of KIP-500, as Kafka is going to have its own metadata quorum. It is no longer necessary to use ZooKeeper for making these changes, hence this improvement also plays a part in the bigger effort to remove ZooKeeper… This update continues to work towards deprecating ZooKeeper and expands the non-ZK functionality of dynamic configs. ZooKeeper connections that use mTLS are encrypted.

Here we have a 3 node ZooKeeper cluster and a 4 node Kafka cluster. Siva Janapati is an Architect with experience in building Cloud Native Microservices architectures, Reactive Systems, large scale distributed systems, and Serverless Systems.

KIP-500 introduces a new way of storing data in Kafka itself, rather than in external systems such as ZooKeeper, and the purpose of this KIP is to go into detail about how the Kafka Controller will change during this transition. Let us think about a Kafka cluster without ZooKeeper, as proposed by KIP-500. Soon, Apache Kafka® will no longer need ZooKeeper! For more information, see Preparing Your Clients and Tools for KIP-500: ZooKeeper Removal from Apache Kafka. In the future, I want to see the elimination of the second Kafka cluster for controllers, and eventually we should be able to manage the metadata within the actual Kafka cluster. Until then, Happy Messaging!!

The cluster must be upgraded to the bridge release, if it isn't already. In place of the ZooKeeper ensemble there will then be a small cluster of controller nodes; consider that cluster as a controller cluster. Using the Raft algorithm, the controller nodes will elect a leader from amongst themselves, without relying on any external system. The leader of the metadata log is called the active controller. The active controller handles all RPCs made from the brokers. The follower controllers replicate the data which is written to the active controller, and serve as hot standbys if the active controller should fail. Because the controllers will now all track the latest state, controller failover will not require a lengthy reloading period where we transfer all the state to the new controller.
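To give a feel for what one node of such a controller cluster might look like in configuration terms, here is a hedged sketch expressed as Java Properties. The property names (process.roles, node.id, controller.quorum.voters, and so on) are taken from the later KRaft implementation described in KIP-631, and the host names and ports are placeholders, so treat this purely as an illustration:

```java
import java.util.Properties;

public class ControllerQuorumConfigSketch {
    public static void main(String[] args) {
        // Hypothetical configuration for one node of a three-node controller quorum.
        // Names follow the later KRaft implementation (KIP-631); hosts are placeholders.
        Properties props = new Properties();
        props.put("process.roles", "controller");
        props.put("node.id", "1");
        props.put("controller.quorum.voters",
                "1@controller1:9093,2@controller2:9093,3@controller3:9093");
        props.put("listeners", "CONTROLLER://controller1:9093");
        props.put("controller.listener.names", "CONTROLLER");
        props.forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```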
Currently, Apache Kafka uses ZooKeeper to manage its cluster metadata. Based on the notifications provided by ZooKeeper, producers and consumers discover the presence of any new broker, or the failure of a broker, in the Kafka cluster. When we delete or create a topic, the Kafka cluster needs to talk to ZooKeeper to get the updated list of topics. For the latest version (2.4.1), ZooKeeper is still required for running Kafka, but in the near future the ZooKeeper dependency will be removed from Apache Kafka. KIP-500 outlines a better way of handling metadata in Kafka. Post KIP-500, metadata scalability increases, which eventually improves the scalability of Kafka as a whole.

Currently, if a broker loses its ZooKeeper session, the controller removes it from the cluster metadata. Previously, under certain rare conditions, a broker could become partitioned from ZooKeeper but … In the post-ZooKeeper world, the active controller removes a broker from the cluster metadata if it has not sent a MetadataFetch heartbeat in a long enough time. Eventually, the active controller will ask the broker to finally go offline, by returning a special result code in the MetadataFetchResponse. Alternately, the broker will shut down if the leaders can't be moved in a predetermined amount of time.

Additionally, if we supported multiple metadata storage options, we would have to use "least common denominator" APIs. In other words, we could not use any API unless all possible metadata storage options supported it. In practice, this would make it difficult to optimize the system. However, as described in the KIP (emphasis mine): "This KIP expresses a vision of how we would like to evolve Kafka in the future." Note that although the controller processes are logically separate from the broker processes, they need not be physically separate. In some cases, it may make sense to deploy some or all of the controller processes on the same node as the broker processes. This is similar to how ZooKeeper processes may be deployed on the same nodes as Kafka brokers today in smaller clusters. As per usual, all sorts of deployment options are possible, including running in the same JVM.

Siva has hands-on experience in architecture, design, and implementation of scalable systems using Cloud, Java, Go lang, Apache Kafka, Apache Solr, Spring, Spring Boot, the Lightbend reactive tech stack, APIGEE Edge and on-premise, and other open-source and proprietary technologies.

Currently, some tools and scripts directly contact ZooKeeper. In a post-ZooKeeper world, these tools must use Kafka APIs instead. Fortunately, "KIP-4: Command line and centralized administrative operations" began the task of removing direct ZooKeeper access several years ago, and it is nearly complete. KIP-555: Deprecate Direct ZooKeeper Access in Kafka Administrative Tools is another step towards removing the ZooKeeper dependency.

When deploying a secure Kafka cluster, it's critical to use TLS to encrypt communication in transit. KIP-554 adds a new broker-based API to change SCRAM settings for users.
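As a concrete illustration of the broker-based SCRAM API, here is a hedged sketch using the Java Admin client. The class and method names follow what KIP-554 introduced in Kafka 2.7, and the bootstrap address, user name, and password are placeholders:

```java
import java.util.Collections;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ScramCredentialInfo;
import org.apache.kafka.clients.admin.ScramMechanism;
import org.apache.kafka.clients.admin.UserScramCredentialAlteration;
import org.apache.kafka.clients.admin.UserScramCredentialUpsertion;

public class ScramAdminExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder

        try (Admin admin = Admin.create(props)) {
            // Upsert a SCRAM-SHA-256 credential for user "alice" via the brokers,
            // instead of writing the credential directly into ZooKeeper.
            ScramCredentialInfo info = new ScramCredentialInfo(ScramMechanism.SCRAM_SHA_256, 4096);
            List<UserScramCredentialAlteration> alterations =
                    Collections.singletonList(new UserScramCredentialUpsertion("alice", info, "alice-secret"));
            admin.alterUserScramCredentials(alterations).all().get();
        }
    }
}
```

The same operation is exposed through the kafka-configs tool, so neither the tool nor an application has to write credentials into ZooKeeper.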
We would like to remove this dependency on ZooKeeper. Kafka stores its basic metadata in ZooKeeper: topics, the list of Kafka cluster instances, message consumers, and so on. In the past, clients would literally connect to ZooKeeper to fetch information about the cluster, which corroborates the fact that ZooKeeper has played an important role in Kafka's world. Making the ZooKeeper cluster highly available is an issue in itself, because without the ZooKeeper cluster the Kafka cluster is dead. Deploying and managing a separate ZooKeeper system can also be a daunting task for administrators, especially if they are not very familiar with deploying Java services. Unifying the system would greatly improve the "day one" experience of running Kafka, and help broaden its adoption. Currently, removing this dependency on ZooKeeper is work in progress (through KIP-500). As described in the blog post "Apache Kafka Needs No Keeper: Removing the Apache ZooKeeper Dependency", when KIP-500 lands next year, Apache Kafka will replace its usage of Apache ZooKeeper with its own built-in consensus layer. This means that you'll be able to remove ZooKeeper from your Apache Kafka deployments so that the only thing you need to run Kafka is…Kafka itself.

Apache Kafka is in the process of moving from storing metadata in Apache ZooKeeper to storing metadata in an internal Raft topic. We often talk about the benefits of managing state as a stream of events; this is, in effect, handling metadata via write-ahead logging. Instead of the controller pushing out updates to the other brokers, those brokers will fetch updates from the active controller via the new MetadataFetch API. The brokers in the Kafka cluster will periodically pull the metadata from the controller, so all the brokers in the cluster will be in sync. The KIP's Motivation and Overview sections mention the LeaderAndIsr and UpdateMetadata RPCs that the controller uses for these pushes today. Many operations that were formerly performed by a direct write to ZooKeeper will become controller operations instead: for example, changing configurations, altering ACLs that are stored with the default Authorizer, and so on.

If a broker fails, partitions whose leader was on that broker temporarily become inaccessible, and today the controller has to read metadata from ZooKeeper for each of the affected partitions. The rolling upgrade from the bridge release will take several steps: we will roll the broker nodes as usual, the new broker nodes will not contact ZooKeeper, and if the ZooKeeper server addresses are left in the configuration they will be ignored.

The Kafka Streams client library sees three new KIPs. A new change to Kafka Connect (KIP-558) tracks the set of topics actively used by connectors, and KIP-558 is enabled by default. Post KIP-500, topic creation and deletion will also speed up considerably.
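As an illustration of what topic creation and deletion look like from a client's point of view, here is a hedged sketch using the Java Admin client; the bootstrap address, topic name, partition count, and replication factor are placeholders:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class TopicAdminExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder bootstrap address; note there is no zookeeper.connect setting here.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");

        try (Admin admin = Admin.create(props)) {
            // Create a topic with 3 partitions and replication factor 2, then delete it.
            admin.createTopics(Collections.singleton(new NewTopic("demo-topic", 3, (short) 2)))
                 .all().get();
            admin.deleteTopics(Collections.singleton("demo-topic")).all().get();
        }
    }
}
```

The client only ever talks to the brokers; post KIP-500 the resulting change is simply appended to the metadata log by the controller quorum rather than written into ZooKeeper.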
Because the Kafka and ZooKeeper configurations are separate, it is easy to make mistakes. For example, administrators may set up SASL on Kafka and incorrectly think that they have secured all of the data travelling over the network. In fact, it is also necessary to configure security in the separate, external ZooKeeper system in order to do this. Unifying the two systems would give a uniform security configuration model. Beginning with ZooKeeper 3.5.7 (the version shipped with Kafka 2.5), ZooKeeper supports the server-side configuration ssl.clientAuth=none; the value is case-insensitive, and the valid options are want, need (the default), and none.

The ZooKeeper dependency confuses newcomers and makes Kafka deployment more complex. Information such as partitions, topic configurations, and access control lists is part of the metadata stored in the ZooKeeper cluster. At present, there is no alternative to ZooKeeper in Kafka. Unfortunately, replacing ZooKeeper with a different external coordination system would not address either of the two main goals of ZooKeeper removal. Because such systems have ZooKeeper-like APIs and design goals, they would not let us treat metadata as an event log, and because they are still external systems that are not integrated with the project, deployment and configuration would remain more complex than they need to be.

KIP-500 instead stores the metadata in Kafka. You can think of this as "Kafka on Kafka", since it involves storing Kafka's metadata in Kafka itself rather than in an external system such as ZooKeeper. That reduces the burden on the infrastructure and simplifies the administrator's job. The overall plan for compatibility is to create a "bridge release" of Kafka where the ZooKeeper dependency is well-isolated. The original proposal was announced on the Kafka dev mailing list on 2019/08/01 under the subject "[DISCUSS] KIP-500: Replace ZooKeeper with a Self-Managed Metadata Quorum". With KIP-500, Kafka will include its own built-in consensus layer, removing the ZooKeeper dependency altogether. The next big milestone in this effort is coming in Apache Kafka 2.8.0. Colin McCabe and Jason Gustafson discuss the history of Kafka, the creation of KIP-500, and the implications of removing the ZooKeeper dependency and replacing … To find out what the key improvements in the new version are and how you can get in on the action, read on!

During the transition period, the new active controller will monitor ZooKeeper for legacy broker node registrations, and it will know how to send the legacy "push" metadata requests to those nodes. Brokers enter the stopping state when they receive a SIGINT; this indicates that the system administrator wants to shut down the broker.

Work on KIP-500 also includes removing direct access to ZooKeeper from the admin tools. At the moment, the kafka-configs tool still requires ZooKeeper to update topic configurations and quotas, but all tools have been updated to not rely on ZooKeeper, so this KIP proposes deprecating the --zookeeper flag to … With KIP-554, SCRAM credentials can be managed via the Kafka protocol, and the kafka-configs tool was updated to use the newly introduced protocol APIs. The release of Kafka 2.7 furthermore includes end-to-end latency metrics and sliding windows.
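Sliding windows in Kafka Streams 2.7 can be sketched as follows. This is a minimal example that assumes a hypothetical page-views input topic with string keys and values; the window and grace sizes are arbitrary:

```java
import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.SlidingWindows;

public class SlidingWindowExample {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        // Topic name is a placeholder.
        KStream<String, String> events =
                builder.stream("page-views", Consumed.with(Serdes.String(), Serdes.String()));

        // Count events per key over a 5-minute sliding window with a 1-minute grace period.
        events.groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
              .windowedBy(SlidingWindows.withTimeDifferenceAndGrace(
                      Duration.ofMinutes(5), Duration.ofMinutes(1)))
              .count();

        System.out.println(builder.build().describe());
    }
}
```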
He has successfully delivered multiple applications in retail, telco, and financial services domains. He manages the GitHub repository (https://github.com/2013techsmarts) where he puts the source code related to his blog posts.

Electing another Kafka broker as the controller requires pulling the metadata from ZooKeeper, which leads to Kafka cluster unavailability. The communication between the controller broker and ZooKeeper happens in a serial manner, which leads to unavailability of a partition if the leader broker dies. At the moment, Kafka still requires ZooKeeper, but it is time for ZooKeeper to retire.

This KIP presents an overall vision for a scalable post-ZooKeeper Kafka. In order to present the big picture, I have mostly left out details like RPC formats, on-disk formats, and so on. We will want to have follow-on KIPs to describe each step in greater detail. This is similar to KIP-4, which presented an overall vision which subsequent KIPs enlarged upon. Apache Kafka 2.4 already ships with ZooKeeper 3.5, which adds TLS support between the broker and ZooKeeper. This guide will help you prepare and plan for ZooKeeper removal, ensuring your Kafka tools, applications, and CLI commands can run ZooKeeper-free.

Currently, a Kafka cluster contains several broker nodes and an external quorum of ZooKeeper nodes. We have pictured 4 broker nodes and 3 ZooKeeper nodes in this diagram; this is a typical size for a small cluster, and this setup is a minimum for sustaining one Kafka broker failure. The orange Kafka node is a controller node. The controller (depicted in orange) loads its state from the ZooKeeper quorum after it is elected. The lines extending from the controller to the other broker nodes represent the updates which the controller pushes, such as LeaderAndIsr and UpdateMetadata messages.

In the post-ZK world, cluster membership is integrated with metadata updates. Brokers cannot continue to be members of the cluster if they cannot receive metadata updates. While it is still possible for a broker to be partitioned from a particular client, the broker will be removed from the cluster if it is partitioned from the controller. When a broker is stopping, it is still running, but we are trying to migrate the partition leaders off of the broker.

In the current world, a broker which can contact ZooKeeper but which is partitioned from the controller will continue serving user requests, but will not receive any metadata updates. This can lead to some confusing and difficult situations. For example, a producer using acks=1 might continue to produce to a leader that actually was not the leader any more, but which failed to receive the controller's LeaderAndIsrRequest moving the leadership.
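The acks setting is where this risk shows up in client code. The following hedged producer sketch (placeholder broker address and topic name) notes the trade-off in comments and opts for acks=all:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class AcksExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker address and topic name are placeholders.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // acks=1 only waits for the leader; if that leader is stale because it missed
        // a LeaderAndIsr update, acknowledged writes can still be lost.
        // acks=all waits for the full in-sync replica set and is the safer choice.
        props.put(ProducerConfig.ACKS_CONFIG, "all");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("demo-topic", "key", "value"));
        }
    }
}
```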
Although Kafka's users enjoy the benefits of managing state as a stream of events, Kafka itself has been left out: we treat changes to metadata as isolated changes with no relationship to each other. When the controller pushes out state change notifications (such as LeaderAndIsrRequest) to other brokers in the cluster, it is possible for brokers to get some of the changes, but not all. Although the controller retries several times, it eventually gives up. This can leave brokers in a divergent state. And when the Kafka controller fails, a new one needs to load the full cluster state from ZooKeeper, which can take a while.

In 2019, a KIP was raised to address this: KIP-500. With KIP-500, we are going to see a Kafka cluster without the ZooKeeper cluster, where metadata management is done by Kafka itself. A MetadataFetch is similar to a fetch request. Just like with a fetch request, the broker will track the offset of the last updates it fetched, and only request newer updates from the active controller. The broker will persist the metadata it fetched to disk. This will allow the broker to start up very quickly, even if there are hundreds of thousands or even millions of partitions. (Note that since this persistence is an optimization, we can leave it out of the first version, if it makes development easier.)

KIP-555 covers the details of the ZooKeeper deprecation process in the admin tools, and KIP-543 covers dynamic configs. KIP-497, which adds an inter-broker API to alter the ISR, is also related to the removal of ZooKeeper.

References:
KIP-500: Replace ZooKeeper with a Self-Managed Metadata Quorum
KIP-455: Create an Administrative API for Replica Reassignment
KIP-497: Add Inter-Broker API to Alter ISR
KIP-543: Expand ConfigCommand's Non-ZK Functionality
KIP-555: Deprecate Direct ZooKeeper Access in Kafka Administrative Tools
KIP-589: Add API to Update Replica State in Controller
KIP-590: Redirect ZooKeeper Mutation Protocols to the Controller
KIP-595: A Raft Protocol for the Metadata Quorum
KIP-631: The Quorum-Based Kafka Controller
Ongaro, D., Ousterhout, J. "In Search of an Understandable Consensus Algorithm" (the Raft paper)
Balakrishnan, M., Malkhi, D., Wobber, T. "Tango: Distributed Data Structures over a Shared Log"
Shvachko, K., Kuang, H., Radia, S., Chansler, R. "The Hadoop Distributed File System"
In the Kafka world we have ZooKeeper, which plays an important role in keeping Kafka on its feet and serving its clients. Kafka uses ZooKeeper to manage the cluster: ZooKeeper is used to coordinate the brokers and the cluster topology, it acts as a consistent file system for configuration information, and it is used for leadership election of broker topic partition leaders. ZooKeeper stands as the leader for Kafka to update the changes of topology in the cluster. One of the replicas of a partition is designated as the leader, and the rest of the replicas are followers. Running ZooKeeper in production is a serious undertaking: Apache Kafka uses ZooKeeper to store persistent cluster metadata, and ZooKeeper is a critical component of a Confluent Platform deployment. Seeing the impact of topic deletion or creation across the Kafka cluster also takes time today.

While this has worked well over the years, KIP-500 was met with applause from much of the Kafka community, who were sick and tired of dealing with ZooKeeper. KIP-500 is coming! Say goodbye to the Kafka ZooKeeper dependency! Here are all the ways ZooKeeper removal benefits Kafka, with 42 things you can finally stop doing when Kafka 2.8.0 is released. If you look at the post-KIP-500 picture, the metadata is stored in the Kafka cluster itself. The controller marked in orange is the active controller, and the other nodes are standby controllers. These efforts will take a few Kafka releases and additional KIPs; there is an umbrella JIRA for tasks related to KIP-500: Replace ZooKeeper with a Metadata Quorum.

New versions of the clients should send these operations directly to the active controller. This is a backwards compatible change: it will work with both old and new clusters. In order to preserve compatibility with old clients that sent these operations to a random broker, the brokers will forward these requests to the active controller. Note that while this KIP only discusses broker metadata management, client metadata management is important for scalability as well. Once the infrastructure for sending incremental metadata updates exists, we will want to use it for clients as well as for brokers. After all, there are typically a lot more clients than brokers. As the number of partitions grows, it will become more and more important to deliver metadata updates incrementally to clients that are interested in many partitions. We will discuss this further in follow-on KIPs.

Aiven for Apache Kafka moves to version 2.7. It is also possible to configure Kafka to communicate with ZooKeeper over TLS (KIP-515). Scala 2.11 is no longer supported; only Scala 2.12 and 2.13 (support for which was added in Kafka 2.4.0) are now supported (KIP-531). Apache Kafka 2.6 works to remove the ZooKeeper dependency, and adds client quota APIs, metrics to track disk reads, as well as updates to Kafka Streams and more.
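The client quota APIs added in Kafka 2.6 (introduced by KIP-546) can be used along these lines; the bootstrap address, user name, and quota value are placeholders, and producer_byte_rate is the standard byte-rate quota name:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.quota.ClientQuotaAlteration;
import org.apache.kafka.common.quota.ClientQuotaEntity;

public class ClientQuotaExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder

        try (Admin admin = Admin.create(props)) {
            // Throttle produce throughput for user "alice" to roughly 1 MB/s,
            // going through the brokers rather than writing the quota into ZooKeeper.
            ClientQuotaEntity entity = new ClientQuotaEntity(
                    Collections.singletonMap(ClientQuotaEntity.USER, "alice"));
            ClientQuotaAlteration alteration = new ClientQuotaAlteration(
                    entity,
                    Collections.singleton(new ClientQuotaAlteration.Op("producer_byte_rate", 1048576.0)));
            admin.alterClientQuotas(Collections.singleton(alteration)).all().get();
        }
    }
}
```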
"I read that Kafka no longer requires ZooKeeper." You may well have read that in the future Apache Kafka will not need ZooKeeper; this is detailed in KIP-500. However, this is not yet implemented, so for the time being (January 2021) Kafka uses ZooKeeper to store its metadata about partitions and brokers, and to elect a broker to be the Kafka controller. Right now, Apache Kafka® utilizes Apache ZooKeeper™ to store its metadata, and the availability of the Kafka cluster suffers if the controller dies. KIP-500 described the overall architecture and plan. KIP-515 introduces the necessary changes to … and is another important step towards KIP-500, where ZooKeeper is replaced by a built-in quorum. KIP-91 provides intuitive user timeouts in the producer, and Kafka's replication protocol now supports improved fencing of zombies. KIP-599 has to do with throttling the rate of creating topics, deleting topics, and creating partitions.

All of this is fairly complicated and will require lots of code, but the design means that when a new controller is elected, we never need to go through a lengthy metadata loading process. When a broker is online, it is ready to respond to requests from clients. The broker will periodically ask for metadata updates from the active controller; this request will double as a heartbeat, letting the controller know that the broker is alive.
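To visualize that fetch-and-heartbeat loop, here is a purely conceptual Java sketch. None of these types exist in Kafka: MetadataFetchClient, MetadataDelta, and LocalMetadataStore are hypothetical stand-ins, since the real RPC and on-disk formats are deliberately left to follow-on KIPs.

```java
// Conceptual sketch of the broker-side metadata fetch loop described above.
// All names here are hypothetical and only illustrate the idea.
public class MetadataFetchLoopSketch {
    interface MetadataFetchClient {
        MetadataDelta fetchSince(long offset) throws InterruptedException; // request doubles as a heartbeat
    }
    interface MetadataDelta {
        void applyTo(LocalMetadataStore store);
    }
    interface LocalMetadataStore {
        long lastAppliedOffset();   // survives restarts, so only newer changes are fetched
        void flushToDisk();
    }

    static void run(MetadataFetchClient controller, LocalMetadataStore store) throws InterruptedException {
        while (true) {
            MetadataDelta delta = controller.fetchSince(store.lastAppliedOffset());
            delta.applyTo(store);   // apply only the changes since the last fetch
            store.flushToDisk();    // persist locally so startup does not need the full state
            Thread.sleep(2_000);    // periodic fetch also tells the controller the broker is alive
        }
    }
}
```

Nothing here is the final protocol; it is simply a way to picture the loop that replaces ZooKeeper watches and controller pushes once the metadata lives in Kafka itself.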