KRaft (aka KIP-500) mode Preview Release
=========================================================
# Introduction
It is now possible to run Apache Kafka without Apache ZooKeeper! We call this the [Kafka Raft metadata mode](https://cwiki.apache.org/confluence/display/KAFKA/KIP-500%3A+Replace+ZooKeeper+with+a+Self-Managed+Metadata+Quorum), typically shortened to `KRaft mode`.
`KRaft` is intended to be pronounced like `craft` (as in `craftsmanship`). It is currently in *PREVIEW AND SHOULD NOT BE USED IN PRODUCTION*, but it
is available for testing in the Kafka 3.1 release.
When the Kafka cluster is in KRaft mode, it does not store its metadata in ZooKeeper. In fact, you do not have to run ZooKeeper at all, because Kafka stores its metadata in a KRaft quorum of controller nodes instead.
KRaft mode has many benefits -- some obvious, and some not so obvious. Clearly, it is nice to manage and configure one service rather than two services. In addition, you can now run a single-process Kafka cluster.
Most important of all, KRaft mode is more scalable. We expect to be able to [support many more topics and partitions](https://www.confluent.io/kafka-summit-san-francisco-2019/kafka-needs-no-keeper/) in this mode.
# Quickstart
## Warning
KRaft mode in Kafka 3.1 is provided for testing only, *NOT* for production. We do not yet support upgrading existing ZooKeeper-based Kafka clusters into this mode.
There may be bugs, including serious ones. You should *assume that your data could be lost at any time* if you try the preview release of KRaft mode.
## Generate a cluster ID
The first step is to generate an ID for your new cluster, using the kafka-storage tool:
~~~~
$ ./bin/kafka-storage.sh random-uuid
xtzWWN4bTjitpL3kfd9s5g
~~~~
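The generated ID looks like a random UUID rendered as 22 URL-safe base64 characters. As a rough Python sketch of that encoding (an assumption about the output format for illustration, not the tool's actual implementation):

```python
import base64
import uuid

def random_cluster_id() -> str:
    """Produce a 22-character ID in the same shape as the tool's output:
    a random UUID's 16 bytes, base64url-encoded with padding stripped."""
    raw = uuid.uuid4().bytes
    return base64.urlsafe_b64encode(raw).decode("ascii").rstrip("=")

print(random_cluster_id())
```

Sixteen random bytes encode to 24 base64 characters, two of which are padding, leaving the 22-character string seen above.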
## Format Storage Directories
The next step is to format your storage directories. If you are running in single-node mode, you can do this with one command:
~~~~
$ ./bin/kafka-storage.sh format -t <uuid> -c ./config/kraft/server.properties
Formatting /tmp/kraft-combined-logs
~~~~
If you are using multiple nodes, then you should run the format command on each node. Be sure to use the same cluster ID for each one.
This example configures the node as both a broker and controller (i.e. `process.roles=broker,controller`). It is also possible to run the broker and controller nodes separately.
Please see [here](https://github.com/apache/kafka/blob/trunk/config/kraft/broker.properties) and [here](https://github.com/apache/kafka/blob/trunk/config/kraft/controller.properties) for example configurations.
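As an illustrative sketch of the split deployment (hostnames and node IDs here are hypothetical; see the linked example files for the authoritative settings), a standalone broker might use configuration along these lines:

```
process.roles=broker
node.id=11
listeners=PLAINTEXT://broker1.example.com:9092
controller.listener.names=CONTROLLER
controller.quorum.voters=1@controller1.example.com:9093,2@controller2.example.com:9093,3@controller3.example.com:9093
```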
## Start the Kafka Server
Finally, you are ready to start the Kafka server on each node.
~~~~
$ ./bin/kafka-server-start.sh ./config/kraft/server.properties
[2021-02-26 15:37:11,071] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2021-02-26 15:37:11,294] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2021-02-26 15:37:11,466] INFO [Log partition=__cluster_metadata-0, dir=/tmp/kraft-combined-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-02-26 15:37:11,509] INFO [raft-expiration-reaper]: Starting (kafka.raft.TimingWheelExpirationService$ExpiredOperationReaper)
[2021-02-26 15:37:11,640] INFO [RaftManager nodeId=1] Completed transition to Unattached(epoch=0, voters=[1], electionTimeoutMs=9037) (org.apache.kafka.raft.QuorumState)
...
~~~~
Just like with a ZooKeeper-based broker, you can connect to port 9092 (or whatever port you configured) to perform administrative operations, or to produce or consume data.
~~~~
$ ./bin/kafka-topics.sh --create --topic foo --partitions 1 --replication-factor 1 --bootstrap-server localhost:9092
Created topic foo.
~~~~
# Deployment
## Controller Servers
In KRaft mode, only a small group of specially selected servers can act as controllers (unlike the ZooKeeper-based mode, where any server can become the
controller). The specially selected controller servers participate in the metadata quorum. Each controller server is either the active controller or a hot
standby for the current active controller.
You will typically select 3 or 5 servers for this role, depending on factors like cost and the number of concurrent failures your system should withstand
without availability impact. Just like with ZooKeeper, you must keep a majority of the controllers alive in order to maintain availability. So if you have 3
controllers, you can tolerate 1 failure; with 5 controllers, you can tolerate 2 failures.
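The majority rule above can be expressed as a quick arithmetic sanity check (plain math, not a Kafka API):

```python
def tolerated_failures(controllers: int) -> int:
    """A quorum stays available while a strict majority of voters is alive,
    so a cluster of n controllers tolerates floor((n - 1) / 2) failures."""
    return (controllers - 1) // 2

for n in (1, 3, 5, 7):
    print(f"{n} controllers -> tolerates {tolerated_failures(n)} failure(s)")
```

This is also why even controller counts buy nothing: 4 controllers tolerate the same single failure as 3, while adding one more node that must be kept in sync.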
## Process Roles
Each Kafka server now has a new configuration key called `process.roles` which can have the following values:
* If `process.roles` is set to `broker`, the server acts as a broker in KRaft mode.
* If `process.roles` is set to `controller`, the server acts as a controller in KRaft mode.
* If `process.roles` is set to `broker,controller`, the server acts as both a broker and a controller in KRaft mode.
* If `process.roles` is not set at all, the server is assumed to be in ZooKeeper mode. As mentioned earlier, you currently cannot transition back and forth between ZooKeeper mode and KRaft mode without reformatting.
Nodes that act as both brokers and controllers are referred to as "combined" nodes. Combined nodes are simpler to operate in small deployments and let you avoid
some of the fixed memory overhead of running multiple JVMs. The key disadvantage is that the controller is less isolated from the rest of the system. For example, if activity on the broker causes an out-of-memory
condition, the controller part of the server is not isolated from it.
## Quorum Voters
All nodes in the system must set the `controller.quorum.voters` configuration, which identifies the quorum controller servers that should be used. All of the controllers must be enumerated.
This is similar to how, when using ZooKeeper, the `zookeeper.connect` configuration must contain all of the ZooKeeper servers. Unlike the ZooKeeper configuration, however, `controller.quorum.voters`
also includes the ID of each node. The format is `id1@host1:port1,id2@host2:port2`, and so on.
So if you have 10 brokers and 3 controllers named controller1, controller2, controller3, you might have the following configuration on controller1:
```
process.roles=controller
node.id=1
listeners=CONTROLLER://controller1.example.com:9093
controller.quorum.voters=1@controller1.example.com:9093,2@controller2.example.com:9093,3@controller3.example.com:9093
```
Each broker and each controller must set `controller.quorum.voters`. Note that the node ID supplied in the `controller.quorum.voters` configuration must match the one supplied to the server.
So on controller1, `node.id` must be set to 1, and so forth. There is no requirement for controller IDs to start at 0 or 1; however, the easiest and least confusing way to allocate
node IDs is probably just to give each server a numeric ID starting from 0. Also note that each node ID must be unique across all nodes in a particular cluster; no two nodes can share a node ID regardless of their `process.roles` values.
Note that clients never need to configure `controller.quorum.voters`; only servers do.
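To make the format concrete, here is a small Python sketch (illustrative only, not part of Kafka) that parses a `controller.quorum.voters` string into a map of node ID to endpoint and rejects duplicate node IDs:

```python
def parse_quorum_voters(value: str) -> dict[int, tuple[str, int]]:
    """Parse an id@host:port,... string into {node_id: (host, port)}."""
    voters: dict[int, tuple[str, int]] = {}
    for entry in value.split(","):
        node_id, _, endpoint = entry.partition("@")
        # rpartition on ":" splits off the port, leaving the hostname intact
        host, _, port = endpoint.rpartition(":")
        nid = int(node_id)
        if nid in voters:
            raise ValueError(f"duplicate node id {nid}")
        voters[nid] = (host, int(port))
    return voters

voters = parse_quorum_voters(
    "1@controller1.example.com:9093,"
    "2@controller2.example.com:9093,"
    "3@controller3.example.com:9093"
)
print(voters)
```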
## Kafka Storage Tool
As described above in the Quickstart section, you must use the `kafka-storage.sh` tool to generate a cluster ID for your new cluster, and then run the format command on each node before starting it.
This is different from how Kafka has operated in the past. Previously, Kafka would format blank storage directories automatically, and also generate a new cluster UUID automatically. One reason for the change
is that auto-formatting can sometimes obscure an error condition. For example, under UNIX, if a data directory can't be mounted, it may show up as blank. In this case, auto-formatting would be the wrong thing to do.
This is particularly important for the metadata log maintained by the controller servers.