Configuring SSL encryption and authentication in Kafka 2.x

Background: I searched around for material, but every write-up only covered bits and pieces and I still couldn't get it working, so I decided to follow the official documentation instead.

Official guide: Apache Kafka

Prerequisite knowledge: keytool and openssl.

keytool reference: keytool的使用 - CSDN博客

openssl reference: openssl常用命令大全_openssl命令参数大全 - CSDN博客

For now we only look at the SSL security mechanism.

Apache Kafka allows clients to connect over SSL. SSL is disabled by default and can be turned on as needed.

1.1 Generate SSL keys and certificates for each Kafka broker

The first step in deploying one or more SSL-enabled brokers is to generate a key and certificate for each machine in the cluster. You can use Java's keytool utility for this task. We initially generate the key into a temporary keystore so that we can export it and sign it with the CA later.

keytool -keystore server.keystore.jks -alias localhost -validity 700 -genkey -keyalg RSA

您需要在上面的命令中指定两个参数:

  1. 密钥库:存储证书的密钥库文件。密钥库文件包含证书的私钥;因此,它需要安全保存。这里是server.keystore.jks
  2. 有效期:证书的有效时间,单位为天。这里是700天。
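keytool will prompt for a keystore password and the distinguished-name fields. A non-interactive sketch is shown below; the password, CN and SAN values are assumptions for illustration (they only need to match what you later put into server.properties), but giving the certificate a CN or SAN that matches the hostname clients will connect to (here localhost) matters once hostname verification comes into play:

keytool -keystore server.keystore.jks -alias localhost -validity 700 -genkey -keyalg RSA -storepass test1234 -keypass test1234 -dname "CN=localhost" -ext SAN=DNS:localhost,IP:127.0.0.1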

You can see that the corresponding file has been generated in the directory.

You can then run the following command to verify the contents of the generated certificate:

keytool -list -v -keystore server.keystore.jks

1.2 Create your own CA

After the first step, every machine in the cluster has a public-private key pair and a certificate that identifies it. However, the certificate is unsigned, which means an attacker could create such a certificate and pretend to be any machine.

It is therefore important to prevent forged certificates by signing the certificate of every machine in the cluster. A certificate authority (CA) is responsible for signing certificates. A CA works like a government that issues passports: the government stamps (signs) each passport so that it becomes hard to forge, and other governments verify the stamp to confirm the passport is genuine. Similarly, the CA signs certificates, and cryptography guarantees that a signed certificate is computationally hard to forge. As long as the CA is a genuine and trusted authority, clients can have high assurance that they are connecting to the real machines.

openssl req -new -x509 -keyout ca-key -out ca-cert -days 365
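openssl will also prompt for a passphrase and the certificate subject. A non-interactive sketch (the subject and passphrase below are made-up values for illustration) would be:

openssl req -new -x509 -keyout ca-key -out ca-cert -days 365 -subj "/CN=Kafka-Test-CA" -passout pass:test1234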

The generated CA is simply a public-private key pair and certificate, intended for signing other certificates.
The next step is to add the generated CA to the truststore so that this CA is trusted — here the broker's truststore (the client truststore gets the same import later on):

keytool -keystore server.truststore.jks -alias CARoot -import -file ca-cert

Unlike the keystore in step 1, which stores each machine's own identity, a truststore stores all the certificates that the machine should trust. Importing a certificate into a truststore also means trusting all certificates signed by that certificate. As in the analogy above, trusting the government (CA) also means trusting all passports (certificates) it has issued. This property is called a chain of trust, and it is particularly useful when deploying SSL on a large Kafka cluster. You can sign all certificates in the cluster with a single CA and have all machines share the same truststore that trusts this CA. That way, all machines can authenticate all other machines.

1.3 Sign the certificates

The next step is to sign all certificates generated in step 1 with the CA generated in step 2. First, import the CA into the client truststore as well (same as for the broker truststore above), then export the certificate to be signed from the server keystore:

keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert
keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file

Then sign it with the CA:

openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 700 -CAcreateserial -passin pass:{ca-password}

Finally, you need to import both the CA's certificate and the signed certificate into the keystore:

keytool -keystore server.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore server.keystore.jks -alias localhost -import -file cert-signed

The parameters are defined as follows:

  1. keystore: the location of the keystore
  2. ca-cert: the certificate of the CA
  3. ca-key: the private key of the CA
  4. ca-password: the passphrase of the CA
  5. cert-file: the exported, unsigned certificate of the server
  6. cert-signed: the signed certificate of the server
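At this point a quick sanity check can save time later (a sketch, assuming the files above sit in the current directory): the signed certificate should verify against the CA, and listing the keystore should now show both the CARoot and localhost entries.

openssl verify -CAfile ca-cert cert-signed
keytool -list -keystore server.keystore.jks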

1.4 Configure Kafka brokers

Kafka brokers support listening for connections on multiple ports. We need to configure the following property in server.properties, which must have one or more comma-separated values:

If SSL is not enabled for inter-broker communication (see below for how to enable it), both a PLAINTEXT and an SSL port are needed. Note that each listener must use its own port, so the SSL listener goes on a separate port (9093 below), or you can configure an SSL listener only.

listeners=PLAINTEXT://localhost:9092,SSL://localhost:9093

The broker side needs the following SSL configuration:

ssl.keystore.location=/home/lighthouse/server.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234
ssl.truststore.location=/home/lighthouse/server.truststore.jks
ssl.truststore.password=test1234

Note: ssl.truststore.password is technically optional but strongly recommended. If no password is set, access to the truststore still works, but integrity checking is disabled. Optional settings worth considering:

  1. ssl.client.auth=none ("required" => client authentication is required, "requested" => client authentication is requested, and a client without a certificate can still connect. Using "requested" is discouraged because it gives a false sense of security, and misconfigured clients will still connect successfully.)
  2. ssl.cipher.suites (optional). A cipher suite is a named combination of authentication, encryption, MAC and key-exchange algorithms used to negotiate the security settings for a network connection using the TLS or SSL protocol. (The default is an empty list.)
  3. ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1 (lists the SSL protocols that will be accepted from clients. Note that SSL has been deprecated in favor of TLS, and using SSL in production is not recommended.)
  4. ssl.keystore.type=JKS
  5. ssl.truststore.type=JKS
  6. ssl.secure.random.implementation=SHA1PRNG

If you want to enable SSL for inter-broker communication, add the following to server.properties (it defaults to PLAINTEXT):

security.inter.broker.protocol=SSL
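For reference, putting the broker-side pieces together: an SSL-only variant (no PLAINTEXT listener) that matches the port and file paths used later in this post would look roughly as follows. This is a sketch assembled from the settings above, not the exact server.properties used here:

listeners=SSL://localhost:9092
security.inter.broker.protocol=SSL
ssl.keystore.location=/home/lighthouse/server.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234
ssl.truststore.location=/home/lighthouse/server.truststore.jks
ssl.truststore.password=test1234
ssl.client.auth=none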

1.5 Configure Kafka clients

SSL is supported only by the new Kafka producer and consumer; the older APIs are not supported. The SSL configuration is the same for both producer and consumer.
If the broker does not require client authentication, the following is a minimal configuration example:

security.protocol=SSL
ssl.truststore.location=/var/private/ssl/client.truststore.jks
ssl.truststore.password=test1234

Note: ssl.truststore.password is technically optional but strongly recommended. If no password is set, access to the truststore still works, but integrity checking is disabled. If client authentication is required, a keystore must be created as in step 1, and the following must also be configured:

ssl.keystore.location=/var/private/ssl/client.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234

Depending on our requirements and the broker configuration, other settings may also be needed:

  1. ssl.provider (optional). The name of the security provider used for SSL connections. The default is the JVM's default security provider.
  2. ssl.cipher.suites (optional). A cipher suite is a named combination of authentication, encryption, MAC and key-exchange algorithms used to negotiate the security settings for a network connection using the TLS or SSL protocol.
  3. ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1. It should list at least one of the protocols configured on the broker side.
  4. ssl.truststore.type=JKS
  5. ssl.keystore.type=JKS

The client-ssl.properties file shared by the producer and the consumer contains the following:
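(The screenshot of the file is not reproduced here. Judging from the settings above and the paths that appear in the error logs below, it presumably contained something close to the lines below, with the ssl.keystore.* entries added only if the broker requires client authentication:)

security.protocol=SSL
ssl.truststore.location=/home/lighthouse/client.truststore.jks
ssl.truststore.password=test1234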

Example using console-producer and console-consumer:

./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config ./config/client-ssl.properties
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --consumer.config ./config/client-ssl.properties

It failed with an error:

You also need to run the following commands in the user's home directory so that the client trusts the CA:

keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert
keytool -keystore client.keystore.jks -alias CARoot -import -file ca-cert

If the password is wrong, you will additionally get the following error:

lighthouse@VM-8-10-ubuntu:~/kafkaWithZk/kafka_2.12-2.2.1$ ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config ./config/client-ssl.properties
org.apache.kafka.common.KafkaException: Failed to construct kafka producer
    at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:431)
    at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:299)
    at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:44)
    at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)
Caused by: org.apache.kafka.common.KafkaException: org.apache.kafka.common.KafkaException: org.apache.kafka.common.KafkaException: Failed to load SSL keystore /home/lighthouse/client.truststore.jks of type JKS
    at org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:73)
    at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:146)
    at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:67)
    at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:99)
    at org.apache.kafka.clients.producer.KafkaProducer.newSender(KafkaProducer.java:439)
    at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:420)
    ... 3 more
Caused by: org.apache.kafka.common.KafkaException: org.apache.kafka.common.KafkaException: Failed to load SSL keystore /home/lighthouse/client.truststore.jks of type JKS
    at org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:144)
    at org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:71)
    ... 8 more
Caused by: org.apache.kafka.common.KafkaException: Failed to load SSL keystore /home/lighthouse/client.truststore.jks of type JKS
    at org.apache.kafka.common.security.ssl.SslFactory$SecurityStore.load(SslFactory.java:357)
    at org.apache.kafka.common.security.ssl.SslFactory.createSSLContext(SslFactory.java:248)
    at org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:141)
    ... 9 more
Caused by: java.io.IOException: keystore password was incorrect
    at java.base/sun.security.pkcs12.PKCS12KeyStore.engineLoad(PKCS12KeyStore.java:2092)
    at java.base/sun.security.util.KeyStoreDelegator.engineLoad(KeyStoreDelegator.java:243)
    at java.base/java.security.KeyStore.load(KeyStore.java:1479)
    at org.apache.kafka.common.security.ssl.SslFactory$SecurityStore.load(SslFactory.java:354)
    ... 11 more
Caused by: java.security.UnrecoverableKeyException: failed to decrypt safe contents entry: javax.crypto.BadPaddingException: Given final block not properly padded. Such issues can arise if a bad key is used during decryption.
    ... 15 more

I then double-checked the configuration in client-ssl.properties (including the passwords) and started the producer again; this time it reported the following:

lighthouse@VM-8-10-ubuntu:~/kafkaWithZk/kafka_2.12-2.2.1$ ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config ./config/client-ssl.properties
>[2024-03-19 13:42:49,783] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 13:42:49,835] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 13:42:49,937] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 13:42:50,140] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 13:42:50,543] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 13:42:51,298] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 13:42:52,203] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 13:42:53,158] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 13:42:54,264] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 13:42:55,220] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 13:42:56,376] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
^C^Clighthouse@VM-8-10-ubuntu:~/kafkaWithZk/kafka_2.12-2.2.1$

After checking the passwords in server.properties as well, starting Kafka still failed; the key part of the error is:

[2024-03-19 14:34:31,955] INFO [SocketServer brokerId=0] Failed authentication with /127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2024-03-19 14:34:31,957] WARN SSL handshake failed (kafka.utils.CoreUtils$)
org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
Caused by: javax.net.ssl.SSLHandshakeException: No name matching localhost found
    at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131)
    at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:360)
    at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:303)
    at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:298)
    at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:654)
    at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.onCertificate(CertificateMessage.java:473)
    at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.consume(CertificateMessage.java:369)
    at java.base/sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:392)
    at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:443)
    at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1076)
    at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1063)
    at java.base/java.security.AccessController.doPrivileged(Native Method)
    at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:1010)
    at org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:402)
    at org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:484)
    at org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:340)
    at org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:265)
    at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:170)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:547)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:535)
    at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:74)
    at kafka.server.KafkaServer.doControlledShutdown$1(KafkaServer.scala:510)
    at kafka.server.KafkaServer.controlledShutdown(KafkaServer.scala:563)
    at kafka.server.KafkaServer.$anonfun$shutdown$2(KafkaServer.scala:585)
    at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:86)
    at kafka.server.KafkaServer.shutdown(KafkaServer.scala:585)
    at kafka.server.KafkaServerStartable.shutdown(KafkaServerStartable.scala:48)
    at kafka.Kafka$$anon$1.run(Kafka.scala:72)
Caused by: java.security.cert.CertificateException: No name matching localhost found
    at java.base/sun.security.util.HostnameChecker.matchDNS(HostnameChecker.java:234)
    at java.base/sun.security.util.HostnameChecker.match(HostnameChecker.java:103)
    at java.base/sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:461)
    at java.base/sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:435)
    at java.base/sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:283)
    at java.base/sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:141)
    at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:632)
    ... 24 more
[2024-03-19 14:34:31,957] ERROR [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2024-03-19 14:34:31,960] INFO [/config/changes-event-process-thread]: Shutting down (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2024-03-19 14:34:31,961] INFO [/config/changes-event-process-thread]: Shutdown completed (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2024-03-19 14:34:31,961] INFO [/config/changes-event-process-thread]: Stopped (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2024-03-19 14:34:31,962] INFO [SocketServer brokerId=0] Stopping socket server request processors (kafka.network.SocketServer)
[2024-03-19 14:34:31,979] INFO [SocketServer brokerId=0] Stopped socket server request processors (kafka.network.SocketServer)
[2024-03-19 14:34:31,980] INFO [data-plane Kafka Request Handler on Broker 0], shutting down (kafka.server.KafkaRequestHandlerPool)
[2024-03-19 14:34:31,988] INFO [data-plane Kafka Request Handler on Broker 0], shut down completely (kafka.server.KafkaRequestHandlerPool)
[2024-03-19 14:34:31,995] INFO [KafkaApi-0] Shutdown complete. (kafka.server.KafkaApis)
[2024-03-19 14:34:31,997] INFO [ExpirationReaper-0-topic]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,059] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
^C[2024-03-19 14:34:32,114] INFO Terminating process due to signal SIGINT (org.apache.kafka.common.utils.LoggingSignalHandler)
[2024-03-19 14:34:32,132] INFO [ExpirationReaper-0-topic]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,132] INFO [ExpirationReaper-0-topic]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,134] INFO [TransactionCoordinator id=0] Shutting down. (kafka.coordinator.transaction.TransactionCoordinator)
[2024-03-19 14:34:32,135] INFO [ProducerId Manager 0]: Shutdown complete: last producerId assigned 1000 (kafka.coordinator.transaction.ProducerIdManager)
[2024-03-19 14:34:32,136] INFO [Transaction State Manager 0]: Shutdown complete (kafka.coordinator.transaction.TransactionStateManager)
[2024-03-19 14:34:32,136] INFO [Transaction Marker Channel Manager 0]: Shutting down (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2024-03-19 14:34:32,139] INFO [Transaction Marker Channel Manager 0]: Stopped (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2024-03-19 14:34:32,140] INFO [Transaction Marker Channel Manager 0]: Shutdown completed (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2024-03-19 14:34:32,141] INFO [TransactionCoordinator id=0] Shutdown complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2024-03-19 14:34:32,141] INFO [GroupCoordinator 0]: Shutting down. (kafka.coordinator.group.GroupCoordinator)
[2024-03-19 14:34:32,144] INFO [ExpirationReaper-0-Heartbeat]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,160] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 14:34:32,261] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 14:34:32,344] INFO [ExpirationReaper-0-Heartbeat]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,344] INFO [ExpirationReaper-0-Heartbeat]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,344] INFO [ExpirationReaper-0-Rebalance]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,362] INFO [ExpirationReaper-0-Rebalance]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,362] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 14:34:32,363] INFO [ExpirationReaper-0-Rebalance]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,363] INFO [GroupCoordinator 0]: Shutdown complete. (kafka.coordinator.group.GroupCoordinator)
[2024-03-19 14:34:32,364] INFO [ReplicaManager broker=0] Shutting down (kafka.server.ReplicaManager)
[2024-03-19 14:34:32,364] INFO [LogDirFailureHandler]: Shutting down (kafka.server.ReplicaManager$LogDirFailureHandler)
[2024-03-19 14:34:32,366] INFO [LogDirFailureHandler]: Stopped (kafka.server.ReplicaManager$LogDirFailureHandler)
[2024-03-19 14:34:32,366] INFO [LogDirFailureHandler]: Shutdown completed (kafka.server.ReplicaManager$LogDirFailureHandler)
[2024-03-19 14:34:32,368] INFO [ReplicaFetcherManager on broker 0] shutting down (kafka.server.ReplicaFetcherManager)
[2024-03-19 14:34:32,369] INFO [ReplicaFetcherManager on broker 0] shutdown completed (kafka.server.ReplicaFetcherManager)
[2024-03-19 14:34:32,370] INFO [ReplicaAlterLogDirsManager on broker 0] shutting down (kafka.server.ReplicaAlterLogDirsManager)
[2024-03-19 14:34:32,370] INFO [ReplicaAlterLogDirsManager on broker 0] shutdown completed (kafka.server.ReplicaAlterLogDirsManager)
[2024-03-19 14:34:32,370] INFO [ExpirationReaper-0-Fetch]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,463] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 14:34:32,492] INFO [ExpirationReaper-0-Fetch]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,492] INFO [ExpirationReaper-0-Fetch]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,492] INFO [ExpirationReaper-0-Produce]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,564] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 14:34:32,666] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 14:34:32,674] INFO [ExpirationReaper-0-Produce]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,674] INFO [ExpirationReaper-0-Produce]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,674] INFO [ExpirationReaper-0-DeleteRecords]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,692] INFO [ExpirationReaper-0-DeleteRecords]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,692] INFO [ExpirationReaper-0-DeleteRecords]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,693] INFO [ExpirationReaper-0-ElectPreferredLeader]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,768] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 14:34:32,870] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 14:34:32,893] INFO [ExpirationReaper-0-ElectPreferredLeader]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,893] INFO [ExpirationReaper-0-ElectPreferredLeader]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,897] INFO [ReplicaManager broker=0] Shut down completely (kafka.server.ReplicaManager)
[2024-03-19 14:34:32,898] INFO Shutting down. (kafka.log.LogManager)
[2024-03-19 14:34:32,934] INFO Shutdown complete. (kafka.log.LogManager)
[2024-03-19 14:34:32,960] INFO [ZooKeeperClient] Closing. (kafka.zookeeper.ZooKeeperClient)
[2024-03-19 14:34:32,964] INFO Session: 0x100cd6124170002 closed (org.apache.zookeeper.ZooKeeper)
[2024-03-19 14:34:32,966] INFO EventThread shut down for session: 0x100cd6124170002 (org.apache.zookeeper.ClientCnxn)
[2024-03-19 14:34:32,966] INFO [ZooKeeperClient] Closed. (kafka.zookeeper.ZooKeeperClient)
[2024-03-19 14:34:32,968] INFO [ThrottledChannelReaper-Fetch]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-19 14:34:33,168] INFO [ThrottledChannelReaper-Fetch]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-19 14:34:33,168] INFO [ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-19 14:34:33,168] INFO [ThrottledChannelReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-19 14:34:33,170] INFO [ThrottledChannelReaper-Produce]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-19 14:34:33,170] INFO [ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-19 14:34:33,170] INFO [ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
^C[2024-03-19 14:34:33,740] INFO Terminating process due to signal SIGINT (org.apache.kafka.common.utils.LoggingSignalHandler)
^C[2024-03-19 14:34:33,972] INFO Terminating process due to signal SIGINT (org.apache.kafka.common.utils.LoggingSignalHandler)
[2024-03-19 14:34:34,170] INFO [ThrottledChannelReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-19 14:34:34,170] INFO [ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-19 14:34:34,172] INFO [SocketServer brokerId=0] Shutting down socket server (kafka.network.SocketServer)
^C[2024-03-19 14:34:34,204] INFO [SocketServer brokerId=0] Shutdown completed (kafka.network.SocketServer)
[2024-03-19 14:34:34,204] INFO Terminating process due to signal SIGINT (org.apache.kafka.common.utils.LoggingSignalHandler)
[2024-03-19 14:34:34,206] INFO [KafkaServer id=0] shut down completed (kafka.server.KafkaServer)
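The "No name matching localhost found" part is the key clue. Since Kafka 2.0, ssl.endpoint.identification.algorithm defaults to https, so during the handshake the connecting side checks that the broker certificate's CN or SubjectAlternativeName matches the hostname it dialed (localhost here); a certificate whose CN/SAN does not contain localhost will therefore fail. Two common ways out, sketched here as options rather than what this post actually did: regenerate the certificates with a matching CN/SAN (see the keytool sketch in section 1.1), or, for a test setup, disable hostname verification by adding an empty value to both server.properties and client-ssl.properties:

ssl.endpoint.identification.algorithm=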

After cleaning up the ca-*, cert-*, client-* and server-* files, I regenerated the keys, certificates, CA and signatures. The steps were as follows:

1. Generate SSL keys and certificates
keytool -keystore server.keystore.jks -alias localhost -validity 700 -genkey -keyalg RSA
keytool -keystore server.truststore.jks -alias localhost -validity 700 -genkey -keyalg RSA
keytool -keystore client.keystore.jks -alias localhost -validity 700 -genkey -keyalg RSA
keytool -keystore client.truststore.jks -alias localhost -validity 700 -genkey -keyalg RSA

2. Create my own CA
openssl req -new -x509 -keyout ca-key -out ca-cert -days 700
keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 700 -CAcreateserial -passin pass:123456

3. Sign the certificates
keytool -keystore server.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore server.keystore.jks -alias localhost -import -file cert-signed
keytool -keystore server.truststore.jks -alias CARoot -import -file ca-cert
keytool -keystore client.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert

I started ZooKeeper and Kafka again and ran the producer command, but it still failed:

[2024-03-19 17:52:38,773] INFO [SocketServer brokerId=0] Failed authentication with /127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2024-03-19 17:52:38,876] INFO [Controller id=0, targetBrokerId=0] Failed authentication with localhost/127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2024-03-19 17:52:38,876] ERROR [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2024-03-19 17:52:38,876] INFO [SocketServer brokerId=0] Failed authentication with /127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2024-03-19 17:52:38,979] INFO [Controller id=0, targetBrokerId=0] Failed authentication with localhost/127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2024-03-19 17:52:38,980] ERROR [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2024-03-19 17:52:38,980] INFO [SocketServer brokerId=0] Failed authentication with /127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2024-03-19 17:52:39,083] INFO [Controller id=0, targetBrokerId=0] Failed authentication with localhost/127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2024-03-19 17:52:39,083] INFO [SocketServer brokerId=0] Failed authentication with /127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2024-03-19 17:52:39,083] ERROR [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
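When the handshake keeps failing like this, it helps to look at what the broker is actually presenting. The official Kafka documentation suggests a quick check with openssl s_client; the port and TLS version below are assumptions matching this setup. In the output, inspect the server certificate's subject and subjectAltName to confirm the CA import worked and that the name matches the host you connect to:

openssl s_client -debug -connect localhost:9092 -tls1_2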
