Kafka Download, Installation, and Deployment

Apache Kafka is a distributed publish-subscribe messaging system that is fast, scalable, and durable. It is an open-source Apache project and, as part of the Hadoop ecosystem, is widely used by commercial companies. Its greatest strength is processing large volumes of data in real time for a variety of scenarios, such as Hadoop-based batch systems, low-latency real-time systems, and Storm/Spark stream-processing engines.

Key features of Kafka:

1. High throughput and low latency: Kafka handles hundreds of thousands of messages per second, with latency as low as a few milliseconds.

2. Scalability: Kafka clusters support hot scaling.

3. Durability and reliability: messages are persisted to local disk, and replication guards against data loss.

4. Fault tolerance: nodes in the cluster may fail (with a replication factor of n, up to n-1 nodes can fail).

5. High concurrency: thousands of clients can read and write simultaneously.

Kafka is a distributed messaging system written in Scala at LinkedIn, where it underpins LinkedIn's Activity Stream and operational data pipeline. It offers horizontal scalability and high throughput.

Comparison of Kafka with other mainstream distributed messaging systems (the comparison table from the original post is not reproduced here).

1. Download and install Kafka

Kafka requires a Java runtime, so install a JDK first. Kafka ships with a built-in ZooKeeper and can also use an external ZooKeeper; if you use an external one, download and install it in advance (see: ZooKeeper download, installation, and deployment).

Before installing the JDK, remove the JDK that ships with the Linux distribution.

Use rpm -qa | grep jdk to list the bundled JDK packages, then remove them one by one with rpm -e --nodeps.
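A minimal sketch of that check-and-remove sequence (assuming an RPM-based distribution such as CentOS; the package name below is only an example, substitute whatever the query actually lists):

rpm -qa | grep -i jdk
# for each package reported above, e.g.:
rpm -e --nodeps java-1.8.0-openjdk-headless-1.8.0.262.b10-1.el7.x86_64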

JDK 8 download page: Java Downloads | Oracle

After downloading, upload the archive to a directory on the Linux system, then extract it and move it under /usr/local.

tar -zxvf jdk-8u391-linux-x64.tar.gz
mv jdk1.8.0_391 /usr/local/jdk1.8/jdk1.8.0_391

Configure the environment variables: edit /etc/profile and append the following JDK settings.

#set java environment
export JAVA_HOME=/usr/local/jdk1.8/jdk1.8.0_391
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH:$HOME/bin

Then run source /etc/profile to apply the changes immediately.
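A quick way to confirm the JDK is picked up (illustrative output; the exact version string depends on the JDK you installed):

java -version
# java version "1.8.0_391"
echo $JAVA_HOME
# /usr/local/jdk1.8/jdk1.8.0_391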

Kafka download page: Apache Kafka

Create a kafka directory under /usr/local/ and download the Kafka tarball into it with wget (or download it on a Windows machine and transfer it to the kafka directory with Xftp), then extract it.

tar -zxvf kafka_2.12-3.6.1.tgz

Finally, as the root user, edit /etc/profile again and append the following Kafka settings. Adding Kafka's bin directory to the PATH lets the Kafka scripts be run from any directory, and setting KAFKA_HOME means cd $KAFKA_HOME jumps straight to the installation directory:

export KAFKA_HOME=/usr/local/kafka/kafka_2.12-3.6.1
export PATH=$KAFKA_HOME/bin:$PATH

Then run source /etc/profile to apply the changes immediately.
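A quick sanity check that the Kafka scripts are on the PATH (kafka-topics.sh has supported a --version flag for many releases, so this should work on 3.6.1):

echo $KAFKA_HOME
# /usr/local/kafka/kafka_2.12-3.6.1
kafka-topics.sh --version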

2. Single-node deployment

2.1 Edit the configuration file

Under /usr/local/kafka/kafka_2.12-3.6.1, create a directory named mylogs for storing Kafka's data logs; it is referenced by the server.properties file below.

mkdir -p /usr/local/kafka/kafka_2.12-3.6.1/mylogs

Edit Kafka's configuration file, server.properties:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
port=9092
host.name=192.168.10.188

############################# Socket Server Settings #############################
# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://192.168.10.188:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3
# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################
# A comma separated list of directories under which to store log files
#log.dirs=/tmp/kafka-logs
#log.dirs=D:/MySoftware/Install/tools/kafka/logs
log.dirs=/usr/local/kafka/kafka_2.12-3.6.1/mylogs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
# zookeeper.connect=localhost:2181
zookeeper.connect=192.168.10.188:2181/kafka

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000

############################# Group Coordinator Settings #############################
# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0

# Delete topic
delete.topic.enable=true

In server.properties, only port and host.name were added, and the log.dirs and zookeeper.connect properties were changed; everything else keeps its default. The listeners property must also be enabled, otherwise the consumer started later will not see any messages.

Parameter reference for the configuration file:

broker.id=0  # Unique ID of this machine in the cluster, analogous to ZooKeeper's myid
port=9092  # Port on which this Kafka broker serves clients; defaults to 9092
host.name=192.168.7.100  # Disabled by default; version 0.8.1 had a bug involving DNS resolution and request failures. Set it to this machine's (CentOS) IP address.
num.network.threads=3  # Number of threads the broker uses for network processing
num.io.threads=8  # Number of threads the broker uses for disk I/O
log.dirs=/opt/kafka/kafkalogs/  # Directories where messages are stored; may be a comma-separated list. num.io.threads should be at least the number of directories. With multiple directories, a newly created partition is placed in the directory currently holding the fewest partitions.
socket.send.buffer.bytes=102400  # Send buffer (SO_SNDBUF) size; data accumulates in the buffer until it reaches a certain size before being sent, which improves performance
socket.receive.buffer.bytes=102400  # Receive buffer (SO_RCVBUF) size; data accumulates to a certain size before being serialized to disk
socket.request.max.bytes=104857600  # Maximum size of a fetch or produce request sent to Kafka; must not exceed the JVM heap size
num.partitions=1  # Default number of partitions; a topic gets 1 partition by default
log.retention.hours=168  # Default maximum retention time for messages: 168 hours, i.e. 7 days
message.max.bytes=5242880  # Maximum message size the broker accepts: 5 MB
default.replication.factor=2  # Number of replicas Kafka keeps per partition; if one replica fails, another can continue serving
replica.fetch.max.bytes=5242880  # Maximum number of bytes per replica fetch request: 5 MB
log.segment.bytes=1073741824  # Kafka appends messages to segment files; when a segment exceeds this size, a new file is started
log.retention.check.interval.ms=300000  # Check every 300000 ms whether any segments have exceeded the retention time configured above (log.retention.hours=168) and, if so, delete them
log.cleaner.enable=false  # Whether to enable log compaction; usually left off, though enabling it can improve performance
zookeeper.connect=192.168.7.100:2181,192.168.7.101:2181,192.168.7.107:2181/kafka  # ZooKeeper connection string; in a cluster, list the addresses of all machines (three machines in this example). For a single-node deployment, one address suffices.

Note: the /kafka suffix on zookeeper.connect is there because Kafka depends on ZooKeeper and, by default, creates many Kafka-related znodes directly under the root of the ZooKeeper ensemble, cluttering it. If several Kafka clusters share one ZooKeeper ensemble, the clutter under the root gets even worse. Appending /kafka makes Kafka create all of its znodes under the /kafka subtree instead, and each Kafka cluster sharing the ensemble can be given its own chroot subtree to keep them apart.

After ZooKeeper and Kafka are started, the /kafka znode is created automatically on the ZooKeeper nodes.
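You can verify this with the ZooKeeper CLI once both services are up (a sketch, run from the ZooKeeper installation's bin directory; the exact znode names vary slightly by version):

./zkCli.sh -server 192.168.10.188:2181
ls /kafka
# [admin, brokers, cluster, config, consumers, controller, ...]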

2.2 Configure and start ZooKeeper

Option 1: use the ZooKeeper bundled with Kafka. Edit the zookeeper.properties configuration file:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
# 
#    http://www.apache.org/licenses/LICENSE-2.0
# 
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# the directory where the snapshot is stored.
# dataDir=D:/MySoftware/Install/tools/kafka/tmp/zookeeper
dataDir=/usr/local/kafka/kafka_2.12-3.6.1/tmp/zk/data
dataLogDir=/usr/local/kafka/kafka_2.12-3.6.1/tmp/zk/logs
# the port at which the clients will connect
clientPort=2181
# disable the per-ip limit on the number of connections since this is a non-production config
maxClientCnxns=0
# Disable the adminserver by default to avoid port conflicts.
# Set the port to something non-conflicting if choosing to enable this
admin.enableServer=false
# admin.serverPort=8080

From the Kafka installation directory, start ZooKeeper with:

bin/zookeeper-server-start.sh config/zookeeper.properties

Option 2: use an externally installed ZooKeeper.

This guide uses an external ZooKeeper. Since Kafka is deployed on a single node here, ZooKeeper is likewise deployed standalone; see the single-node section of the ZooKeeper download/installation guide. ZooKeeper was already installed and configured there, so no further configuration is needed; just start it.

From the bin directory of the ZooKeeper installation, start the ZooKeeper server:

./zkServer.sh start
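Then confirm it is running (for a single-node deployment, expect Mode: standalone):

./zkServer.sh status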

2.3 Start Kafka

Important: ZooKeeper must be started before Kafka.

进入到kafkabin目录下,启动kafka。参数-daemon的含义是指启动的服务进程是作为后台进程(守护进程模式)启动,不加就是作为前端线程来启动。Kafka在启动一段时间后,如果出现服务自动关闭情况,可在启动kafka的时使用守护进程模式启动,即在原启动命令中加 -daemon启动之后用jps命令检查是否启动启动命令./kafka-server-start.sh -daemon ../config/server.properties
# From the bin directory
./kafka-server-start.sh -daemon ../config/server.properties
# Or, from the Kafka installation directory:
bin/kafka-server-start.sh -daemon config/server.properties
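To confirm the broker started, run jps (illustrative output; PIDs will differ, and QuorumPeerMain only appears if ZooKeeper runs on the same host):

jps
# 12001 QuorumPeerMain
# 12345 Kafka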

2.4 Create, list, and delete topics

Create a topic:

Create a topic named testKafka with one replica and one partition.

Go to the bin directory under the Kafka installation and run the kafka-topics.sh script.

The --zookeeper parameter specifies the node(s) where the ZooKeeper service runs; multiple nodes are comma-separated. Append the path set by the zookeeper.connect property in server.properties, where Kafka stores its startup metadata (/kafka in the configuration above).

Command: ./kafka-topics.sh --zookeeper 192.168.10.188:2181/kafka --create --replication-factor 1 --partitions 1 --topic testKafka

./kafka-topics.sh --zookeeper 192.168.10.188:2181/kafka --create --replication-factor 1 --partitions 1 --topic testKafka
# Parameter reference
--replication-factor 1   # replication factor of 1
--partitions 1   # create 1 partition
--topic testKafka   # the topic name is testKafka

192.168.10.188:2181 is the zookeeper.connect value configured in server.properties, i.e., the ZooKeeper client address and port.
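Note: the --zookeeper option was removed from kafka-topics.sh in Kafka 3.0, so on the kafka_2.12-3.6.1 build used in this guide the command above will be rejected. The equivalent command talks to the broker directly via --bootstrap-server (using the address configured in listeners):

./kafka-topics.sh --bootstrap-server 192.168.10.188:9092 --create --replication-factor 1 --partitions 1 --topic testKafka

The same substitution applies to the --list, --describe, and --delete commands below.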

List topics and check topic status:

Command to list topics (single-dash and double-dash option prefixes are both accepted): ./kafka-topics.sh --zookeeper 192.168.10.188:2181 --list

./kafka-topics.sh --zookeeper 192.168.10.188:2181 --list

Command to describe a topic: ./kafka-topics.sh --zookeeper 192.168.10.188:2181 --describe --topic testKafka

./kafka-topics.sh --zookeeper 192.168.10.188:2181 --describe --topic testKafka

leader: the node responsible for reads and writes of the partition; chosen from among all the nodes.
replicas: lists all replica nodes, whether or not they are currently in service.
isr: the replicas currently in service (in-sync replicas).
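For this single-node topic, the describe output looks roughly like the following (an illustrative sketch, not captured from a live run):

Topic: testKafka   PartitionCount: 1   ReplicationFactor: 1   Configs:
    Topic: testKafka   Partition: 0   Leader: 0   Replicas: 0   Isr: 0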

Since this is a single-node Kafka deployment with only one broker, and server.properties sets broker.id=0, the leader here is the broker with node ID 0.

Delete a topic:

Command: ./kafka-topics.sh --zookeeper 192.168.10.188:2181 --delete --topic testKafka

./kafka-topics.sh --zookeeper 192.168.10.188:2181 --delete --topic testKafka

Deletion only takes full effect when the following is enabled in server.properties (as configured earlier):

# Delete topic
delete.topic.enable=true

2.5 Start a producer and a consumer

Before the producer and consumer start commands, a quick look at the broker-list, bootstrap-server, and zookeeper parameters.

1. broker: the Kafka server side, either a single server or a cluster. Producers and consumers are both clients of it.

2. broker-list: specifies one or more servers in the Kafka cluster. It is generally a required parameter for kafka-console-producer.sh, along with topic.

3. bootstrap-server: the address of the target Kafka cluster's servers; functionally the same as broker-list. Traditionally the producer takes --broker-list while the consumer takes --bootstrap-server (newer Kafka versions also accept --bootstrap-server for the producer, with --broker-list deprecated).

4. zookeeper: the address of the ZooKeeper server or ensemble. In old Kafka versions (before 0.9), consumer progress (the offset) was stored in ZooKeeper, so starting a consumer required the ZooKeeper address. Later versions have the brokers manage offsets, so consumers are started with --bootstrap-server instead.

Start the producer and send messages; press Ctrl+C to exit when done.

Command: ./kafka-console-producer.sh --broker-list 192.168.10.188:9092 --topic testKafka

./kafka-console-producer.sh --broker-list 192.168.10.188:9092 --topic testKafka

Start the consumer and receive messages; press Ctrl+C to exit.

Command: ./kafka-console-consumer.sh --zookeeper 192.168.10.188:2181 --topic testKafka --from-beginning  (the zookeeper parameter has since been replaced by bootstrap-server)

Or: ./kafka-console-consumer.sh --bootstrap-server 192.168.10.188:9092 --topic testKafka --from-beginning

./kafka-console-consumer.sh --bootstrap-server 192.168.10.188:9092 --topic testKafka --from-beginning

In --zookeeper 192.168.10.188:2181, the IP and port are those of the ZooKeeper node; in --bootstrap-server 192.168.10.188:9092, they are those of the Kafka broker.

2.6 Inspect consumer groups and check for message backlog

Command to list consumer groups:

bin/kafka-consumer-groups.sh --bootstrap-server 192.168.10.188:9092 --list

Command to check whether messages are backlogged:

bin/kafka-consumer-groups.sh --bootstrap-server 192.168.10.188:9092 --describe --group consumer-group-01

The original post shows a screenshot (not reproduced here) of these commands run on Windows; the Linux commands are analogous. In the describe output, GROUP is the consumer group, TOPIC the topic, PARTITION the partition, CURRENT-OFFSET the number of messages consumed so far, LOG-END-OFFSET the number of messages produced to Kafka, and LAG the number of messages not yet consumed, i.e. the backlog.
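The describe output has roughly this shape (values are illustrative):

GROUP              TOPIC      PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG
consumer-group-01  testKafka  0          100             120             20

Here 120 messages were produced, 100 have been consumed, and 20 are backlogged.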

In Kafka, consumers pull data in batches; each batch contains 0 to n messages. A consumer may pull from multiple partitions, but each partition's data can only be pulled by one consumer within a consumer group. If a consumer pulls from several partitions, a single batch can contain data from several partitions. After processing a batch, the consumer commits its offsets to the __consumer_offsets topic. __consumer_offsets is the topic that tracks how far each consumer has read; it is maintained at partition granularity, and the offsets of different partitions are independent of each other. For example, suppose a consumer pulls from two partitions, p0 and p1. If the p0 data has been processed and its offset committed to __consumer_offsets, while the p1 data is still being processed and its offset has not yet been committed, then after the consumer crashes and restarts it will not re-pull the already-consumed p0 data, but it will re-pull the p1 data that was consumed but not committed.

The __consumer_offsets topic is created automatically by Kafka; after consuming data, consumers commit their offsets to it.
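Once a consumer has committed offsets, this topic can be inspected like any other (a quick check; by default it has 50 partitions, so the output is long):

./kafka-topics.sh --bootstrap-server 192.168.10.188:9092 --describe --topic __consumer_offsets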

2.7 Stop ZooKeeper and Kafka

Command to stop Kafka: ./kafka-server-stop.sh  (run from Kafka's bin directory)

Command to stop ZooKeeper: ./zkServer.sh stop  (run from ZooKeeper's bin directory)

3. Cluster deployment

The cluster deployment steps are almost identical to the single-node ones; the main difference lies in Kafka's configuration file.

A Kafka cluster keeps its state in ZooKeeper, so build the ZooKeeper ensemble first.

For the ZooKeeper ensemble setup, see the cluster section of the ZooKeeper download/installation guide. As with the ZooKeeper ensemble, three machines form the Kafka cluster. Kafka is deployed on one machine below; the other two are configured identically, changing only the node-specific IP and broker.id in the configuration file. Assume the three machines' IP addresses are 192.168.1.128, 192.168.1.129, and 192.168.1.130.

3.1 Edit the configuration file

Under /usr/local/kafka/kafka_2.12-3.6.1, create a directory named mylogs for storing Kafka's data logs; it is referenced by the server.properties file below.

mkdir -p /usr/local/kafka/kafka_2.12-3.6.1/mylogs

Edit Kafka's configuration file, server.properties:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1
port=9092
host.name=192.168.1.128

############################# Socket Server Settings #############################
# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://192.168.1.128:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3
# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################
# A comma separated list of directories under which to store log files
#log.dirs=/tmp/kafka-logs
#log.dirs=D:/MySoftware/Install/tools/kafka/logs
log.dirs=/usr/local/kafka/kafka_2.12-3.6.1/mylogs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
# zookeeper.connect=localhost:2181
zookeeper.connect=192.168.1.128:2181,192.168.1.129:2181,192.168.1.130:2181/kafka

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000

############################# Group Coordinator Settings #############################
# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0

# Delete topic
delete.topic.enable=true

The key properties set in server.properties are broker.id, port, host.name, listeners, log.dirs, and zookeeper.connect; everything else keeps its default value.
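On the other two machines, only the node-specific values differ (a sketch; broker.id must be unique across the cluster, and the IDs below are just a natural continuation of broker.id=1 above):

# On 192.168.1.129
broker.id=2
host.name=192.168.1.129
listeners=PLAINTEXT://192.168.1.129:9092

# On 192.168.1.130
broker.id=3
host.name=192.168.1.130
listeners=PLAINTEXT://192.168.1.130:9092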

As in the single-node setup, the /kafka suffix on zookeeper.connect is a chroot: Kafka creates all of its znodes under the /kafka subtree instead of cluttering ZooKeeper's root, and multiple Kafka clusters sharing one ZooKeeper ensemble can each be given their own chroot to keep them apart.

After ZooKeeper and Kafka are started, the /kafka znode is created automatically on the ZooKeeper nodes.

3.2 Configure and start ZooKeeper

For the ZooKeeper ensemble setup, see the cluster section of the ZooKeeper download/installation guide.

3.3 Start Kafka

Start Kafka on all three machines: go to Kafka's bin directory and start Kafka, then check with jps that the process is running.

Start command: ./kafka-server-start.sh -daemon ../config/server.properties

# From the bin directory
./kafka-server-start.sh -daemon ../config/server.properties
# Or, from the Kafka installation directory:
bin/kafka-server-start.sh -daemon config/server.properties

3.4 Create a topic

Create a topic named testKafka with two replicas and two partitions.

The --zookeeper parameter specifies the ZooKeeper node(s), comma-separated for multiple nodes, followed by the chroot path set in zookeeper.connect in server.properties (/kafka in the configuration above).

Command: ./kafka-topics.sh --zookeeper 192.168.1.128:2181/kafka --create --replication-factor 2 --partitions 2 --topic testKafka

./kafka-topics.sh --zookeeper 192.168.1.128:2181/kafka --create --replication-factor 2 --partitions 2 --topic testKafka
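As noted in the single-node section, Kafka 3.x removed the --zookeeper option from kafka-topics.sh; the equivalent --bootstrap-server form is:

./kafka-topics.sh --bootstrap-server 192.168.1.128:9092 --create --replication-factor 2 --partitions 2 --topic testKafka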

3.5 Start a producer and a consumer

Start the producer and send messages; press Ctrl+C to exit when done.

Command: ./kafka-console-producer.sh --broker-list 192.168.1.128:9092 --topic testKafka

./kafka-console-producer.sh --broker-list 192.168.1.128:9092 --topic testKafka

Start the consumer and receive messages; press Ctrl+C to exit.

Command: ./kafka-console-consumer.sh --bootstrap-server 192.168.1.128:9092 --topic testKafka --from-beginning

./kafka-console-consumer.sh --bootstrap-server 192.168.1.128:9092 --topic testKafka --from-beginning
