ClickHouse 5-Node Cluster Installation

In this architecture, five servers are configured: two host replicas of the data, and the other three coordinate the replication of that data. In this example, we will create a database and a table, and replicate them between the two data nodes using the ReplicatedMergeTree table engine.

Official documentation: https://clickhouse.com/docs/en/architecture/replication

Deployment environment

(Architecture diagram: one shard with two replicas on clickhouse-01 and clickhouse-02, coordinated by three ClickHouse Keeper nodes)

Node inventory:

| Hostname             | Node IP       | OS           | Node spec       | Description                |
| -------------------- | ------------- | ------------ | --------------- | -------------------------- |
| clickhouse-01        | 192.168.72.51 | Ubuntu 22.04 | 2C/4G/100G disk | ClickHouse server, client  |
| clickhouse-02        | 192.168.72.52 | Ubuntu 22.04 | 2C/4G/100G disk | ClickHouse server, client  |
| clickhouse-keeper-01 | 192.168.72.53 | Ubuntu 22.04 | 2C/4G/100G disk | ClickHouse Keeper          |
| clickhouse-keeper-02 | 192.168.72.54 | Ubuntu 22.04 | 2C/4G/100G disk | ClickHouse Keeper          |
| clickhouse-keeper-03 | 192.168.72.55 | Ubuntu 22.04 | 2C/4G/100G disk | ClickHouse Keeper          |

Note:

In production, we strongly recommend dedicated hosts for ClickHouse Keeper. In a test environment, it is acceptable to run ClickHouse Server and ClickHouse Keeper combined on the same server; the other basic example, "Scaling out", uses that approach. This example demonstrates the recommended method of separating Keeper from ClickHouse Server. The Keeper servers can be smaller: 4 GB of RAM is generally enough for each Keeper server until your ClickHouse servers grow very large.

Configure the hostname on each node

# Run the line that matches the node you are on:
hostnamectl set-hostname clickhouse-01
hostnamectl set-hostname clickhouse-02
hostnamectl set-hostname clickhouse-keeper-01
hostnamectl set-hostname clickhouse-keeper-02
hostnamectl set-hostname clickhouse-keeper-03

Edit the /etc/hosts file on all nodes

cat >>/etc/hosts <<EOF
192.168.72.51 clickhouse-01 clickhouse-01.example.com
192.168.72.52 clickhouse-02 clickhouse-02.example.com
192.168.72.53 clickhouse-keeper-01 clickhouse-keeper-01.example.com
192.168.72.54 clickhouse-keeper-02 clickhouse-keeper-02.example.com
192.168.72.55 clickhouse-keeper-03 clickhouse-keeper-03.example.com
EOF
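As a quick sanity check, the sketch below confirms that every cluster hostname appears in the entries just written. The entries are inlined in a heredoc so the sketch is self-contained; on a real node, point the grep at /etc/hosts instead.

```shell
# Sanity check: every cluster hostname should be present in the hosts entries.
# On a real node, replace the heredoc with:  hosts_entries=$(cat /etc/hosts)
hosts_entries=$(cat <<'EOF'
192.168.72.51 clickhouse-01 clickhouse-01.example.com
192.168.72.52 clickhouse-02 clickhouse-02.example.com
192.168.72.53 clickhouse-keeper-01 clickhouse-keeper-01.example.com
192.168.72.54 clickhouse-keeper-02 clickhouse-keeper-02.example.com
192.168.72.55 clickhouse-keeper-03 clickhouse-keeper-03.example.com
EOF
)
ok=1
for h in clickhouse-01 clickhouse-02 clickhouse-keeper-01 clickhouse-keeper-02 clickhouse-keeper-03; do
  printf '%s\n' "$hosts_entries" | grep -qw "$h" || { echo "missing: $h"; ok=0; }
done
echo "ok=$ok"
```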

Install ClickHouse

Run on the clickhouse-01 and clickhouse-02 nodes

Install clickhouse-server and clickhouse-client on clickhouse-01 and clickhouse-02:

sudo apt-get install -y apt-transport-https ca-certificates dirmngr
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 8919F6BD2B48D754
echo "deb https://packages.clickhouse.com/deb stable main" | sudo tee /etc/apt/sources.list.d/clickhouse.list
sudo apt-get update
sudo apt-get install -y clickhouse-server clickhouse-client

Run on the clickhouse-keeper-01 to 03 nodes

Install only clickhouse-keeper on the clickhouse-keeper-01 to 03 nodes:

sudo apt-get install -y apt-transport-https ca-certificates dirmngr
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 8919F6BD2B48D754
echo "deb https://packages.clickhouse.com/deb stable main" | sudo tee /etc/apt/sources.list.d/clickhouse.list
sudo apt-get update
sudo apt-get install -y clickhouse-keeper

Create the clickhouse-keeper directories

mkdir -p /etc/clickhouse-keeper/config.d
mkdir -p /var/log/clickhouse-keeper
mkdir -p /var/lib/clickhouse-keeper/coordination/log
mkdir -p /var/lib/clickhouse-keeper/coordination/snapshots
mkdir -p /var/lib/clickhouse-keeper/cores
chown -R clickhouse:clickhouse /etc/clickhouse-keeper /var/log/clickhouse-keeper /var/lib/clickhouse-keeper
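A quick way to check the result on a Keeper node is to report each directory's owner, or "missing" where it does not exist (a sketch; `stat -c` is the GNU coreutils form):

```shell
# Verify that the Keeper directories exist and are owned by clickhouse:clickhouse.
# On a machine without the packages installed, each line reports "missing".
report=$(for d in /etc/clickhouse-keeper /var/log/clickhouse-keeper /var/lib/clickhouse-keeper; do
  if [ -d "$d" ]; then
    stat -c '%U:%G %n' "$d"    # expected: clickhouse:clickhouse <dir>
  else
    echo "missing: $d"
  fi
done)
echo "$report"
```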

clickhouse-01 configuration

For clickhouse-01 there are four configuration files. You could combine them into a single file, but for clarity of documentation it may be simpler to look at them separately. As you read through the configuration files, you will see that most of the configuration is the same between clickhouse-01 and clickhouse-02; the differences will be pointed out.

Network and logging configuration

These values can be customized as you wish. This example configuration gives you:

  • a debug log that rolls over at 1000M, keeping 3 files
  • the name shown when you connect with clickhouse-client: cluster_1S_2R node 1
  • ClickHouse listening on ports 8123 and 9000 on the IPv4 network

/etc/clickhouse-server/config.d/network-and-logging.xml on clickhouse-01

root@clickhouse-01:~# cat /etc/clickhouse-server/config.d/network-and-logging.xml
<clickhouse>
    <logger>
        <level>debug</level>
        <log>/var/log/clickhouse-server/clickhouse-server.log</log>
        <errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
        <size>1000M</size>
        <count>3</count>
    </logger>
    <display_name>cluster_1S_2R node 1</display_name>
    <listen_host>0.0.0.0</listen_host>
    <http_port>8123</http_port>
    <tcp_port>9000</tcp_port>
</clickhouse>

Macros configuration

The shard and replica macros reduce the complexity of distributed DDL. The configured values are substituted into your DDL queries automatically, which simplifies your DDL. The macros in this configuration specify the shard and replica number of each node.
In this 1-shard, 2-replica example, the replica macro is 01 on clickhouse-01 and 02 on clickhouse-02. The shard macro is 01 on both clickhouse-01 and clickhouse-02, because there is only one shard.

/etc/clickhouse-server/config.d/macros.xml on clickhouse-01

root@clickhouse-01:~# cat /etc/clickhouse-server/config.d/macros.xml
<clickhouse>
    <macros>
        <shard>01</shard>
        <replica>01</replica>
        <cluster>cluster_1S_2R</cluster>
    </macros>
</clickhouse>
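To see what the macros do, the sketch below substitutes the values from macros.xml above into a path template, the same way ClickHouse expands {shard}, {replica}, and {cluster} placeholders in DDL. The template string here is a hypothetical example, not the engine's built-in default:

```shell
# Illustrative macro expansion: ClickHouse substitutes values from macros.xml
# into {shard}/{replica}/{cluster} placeholders in DDL. Template is hypothetical.
SHARD=01; REPLICA=01; CLUSTER=cluster_1S_2R
template='/clickhouse/{cluster}/tables/{shard}/table1 with replica name {replica}'
expanded=$(printf '%s' "$template" \
  | sed "s/{shard}/$SHARD/; s/{replica}/$REPLICA/; s/{cluster}/$CLUSTER/")
echo "$expanded"   # /clickhouse/cluster_1S_2R/tables/01/table1 with replica name 01
```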

Replication and sharding configuration

Starting from the top:

  • The remote_servers section of the XML specifies each of the clusters in the environment. The attribute replace=true replaces the sample remote_servers in the default ClickHouse configuration with the remote_servers configuration specified in this file. Without this attribute, the remote servers in this file would be appended to the list of samples in the default.
  • In this example, there is one cluster, named cluster_1S_2R.
  • A secret is created for the cluster named cluster_1S_2R with the value mysecretphrase. The secret is shared across all of the remote servers in the environment to ensure that the correct servers are joined together.
  • The cluster cluster_1S_2R has one shard and two replicas. Take a look at the architecture diagram toward the beginning of this document, and compare it with the shard definition in the XML below. The shard definition contains the two replicas. The host and port for each replica is specified: one replica is stored on clickhouse-01, and the other replica is stored on clickhouse-02.
  • Internal replication for the shard is set to true. Each shard can have the internal_replication parameter defined in the configuration file. If this parameter is set to true, the write operation selects the first healthy replica and writes data to it.

/etc/clickhouse-server/config.d/remote-servers.xml on clickhouse-01

root@clickhouse-01:~# cat /etc/clickhouse-server/config.d/remote-servers.xml
<clickhouse>
    <remote_servers replace="true">
        <cluster_1S_2R>
            <secret>mysecretphrase</secret>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>clickhouse-01</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>clickhouse-02</host>
                    <port>9000</port>
                </replica>
            </shard>
        </cluster_1S_2R>
    </remote_servers>
</clickhouse>

Configuring the use of Keeper

This configuration file, use-keeper.xml, configures ClickHouse Server to use ClickHouse Keeper for the coordination of replication and distributed DDL. It specifies that ClickHouse Server should use Keeper on nodes clickhouse-keeper-01 through 03 on port 9181, and the file is the same on clickhouse-01 and clickhouse-02.

/etc/clickhouse-server/config.d/use-keeper.xml on clickhouse-01

root@clickhouse-01:~# cat /etc/clickhouse-server/config.d/use-keeper.xml
<clickhouse>
    <zookeeper>
        <!-- where are the ZK nodes -->
        <node>
            <host>clickhouse-keeper-01</host>
            <port>9181</port>
        </node>
        <node>
            <host>clickhouse-keeper-02</host>
            <port>9181</port>
        </node>
        <node>
            <host>clickhouse-keeper-03</host>
            <port>9181</port>
        </node>
    </zookeeper>
</clickhouse>

clickhouse-02 configuration

As the configuration is very similar on clickhouse-01 and clickhouse-02, only the differences are pointed out here.

Network and logging configuration

This file is the same on clickhouse-01 and clickhouse-02, with the exception of display_name.

/etc/clickhouse-server/config.d/network-and-logging.xml on clickhouse-02

root@clickhouse-02:~# cat /etc/clickhouse-server/config.d/network-and-logging.xml
<clickhouse>
    <logger>
        <level>debug</level>
        <log>/var/log/clickhouse-server/clickhouse-server.log</log>
        <errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
        <size>1000M</size>
        <count>3</count>
    </logger>
    <display_name>cluster_1S_2R node 2</display_name>
    <listen_host>0.0.0.0</listen_host>
    <http_port>8123</http_port>
    <tcp_port>9000</tcp_port>
</clickhouse>

Macros configuration

The macros configuration differs between clickhouse-01 and clickhouse-02: replica is set to 02 on this node.

/etc/clickhouse-server/config.d/macros.xml on clickhouse-02

root@clickhouse-02:~# cat /etc/clickhouse-server/config.d/macros.xml
<clickhouse>
    <macros>
        <shard>01</shard>
        <replica>02</replica>
        <cluster>cluster_1S_2R</cluster>
    </macros>
</clickhouse>

Replication and sharding configuration

This file is the same on clickhouse-01 and clickhouse-02.

/etc/clickhouse-server/config.d/remote-servers.xml on clickhouse-02

root@clickhouse-02:~# cat /etc/clickhouse-server/config.d/remote-servers.xml
<clickhouse>
    <remote_servers replace="true">
        <cluster_1S_2R>
            <secret>mysecretphrase</secret>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>clickhouse-01</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>clickhouse-02</host>
                    <port>9000</port>
                </replica>
            </shard>
        </cluster_1S_2R>
    </remote_servers>
</clickhouse>

Configuring the use of Keeper

This file is the same on clickhouse-01 and clickhouse-02.

/etc/clickhouse-server/config.d/use-keeper.xml on clickhouse-02

root@clickhouse-02:~# cat /etc/clickhouse-server/config.d/use-keeper.xml
<clickhouse>
    <zookeeper>
        <!-- where are the ZK nodes -->
        <node>
            <host>clickhouse-keeper-01</host>
            <port>9181</port>
        </node>
        <node>
            <host>clickhouse-keeper-02</host>
            <port>9181</port>
        </node>
        <node>
            <host>clickhouse-keeper-03</host>
            <port>9181</port>
        </node>
    </zookeeper>
</clickhouse>

clickhouse-keeper-01 configuration

Best practice

When configuring ClickHouse Keeper by editing configuration files, you should:

  • Back up /etc/clickhouse-keeper/keeper_config.xml
  • Edit the /etc/clickhouse-keeper/keeper_config.xml file

ClickHouse Keeper provides the coordination system for data replication and distributed DDL query execution, and is compatible with Apache ZooKeeper. This configuration enables ClickHouse Keeper on port 9181. The line specifying server_id gives this Keeper instance a server_id of 1, and it is the only difference in the keeper_config.xml file across the three servers: clickhouse-keeper-02 has server_id set to 2, and clickhouse-keeper-03 has server_id set to 3. The raft configuration section is identical on all three servers; it is what ties each server_id to a server instance in the raft configuration.

Note

If for any reason a Keeper node is replaced or rebuilt, do not reuse an existing server_id. For example, if the Keeper node with server_id 2 is rebuilt, give it server_id 4 or higher.

Back up the keeper_config.xml configuration on all Keeper nodes

# Back up the configuration
cp /etc/clickhouse-keeper/keeper_config.xml{,.bak}
# Empty the default configuration
echo > /etc/clickhouse-keeper/keeper_config.xml

/etc/clickhouse-keeper/keeper_config.xml on clickhouse-keeper-01

root@clickhouse-keeper-01:~# cat /etc/clickhouse-keeper/keeper_config.xml
<clickhouse>
    <logger>
        <level>trace</level>
        <log>/var/log/clickhouse-keeper/clickhouse-keeper.log</log>
        <errorlog>/var/log/clickhouse-keeper/clickhouse-keeper.err.log</errorlog>
        <size>1000M</size>
        <count>3</count>
    </logger>
    <listen_host>0.0.0.0</listen_host>
    <keeper_server>
        <tcp_port>9181</tcp_port>
        <server_id>1</server_id>
        <log_storage_path>/var/lib/clickhouse-keeper/coordination/log</log_storage_path>
        <snapshot_storage_path>/var/lib/clickhouse-keeper/coordination/snapshots</snapshot_storage_path>
        <coordination_settings>
            <operation_timeout_ms>10000</operation_timeout_ms>
            <session_timeout_ms>30000</session_timeout_ms>
            <raft_logs_level>trace</raft_logs_level>
        </coordination_settings>
        <raft_configuration>
            <server>
                <id>1</id>
                <hostname>clickhouse-keeper-01</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>2</id>
                <hostname>clickhouse-keeper-02</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>3</id>
                <hostname>clickhouse-keeper-03</hostname>
                <port>9234</port>
            </server>
        </raft_configuration>
    </keeper_server>
</clickhouse>
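Since server_id is the single line that differs across the three Keeper nodes, a small sed pulls the value out for a quick per-node check. The sketch parses an inlined one-line sample so it is self-contained; on a real node, run the same sed against /etc/clickhouse-keeper/keeper_config.xml:

```shell
# Extract server_id from a keeper_config.xml fragment. On a node, use:
#   sed -n 's:.*<server_id>\([0-9]\{1,\}\)</server_id>.*:\1:p' /etc/clickhouse-keeper/keeper_config.xml
config='<keeper_server><tcp_port>9181</tcp_port><server_id>1</server_id></keeper_server>'
server_id=$(printf '%s\n' "$config" | sed -n 's:.*<server_id>\([0-9]\{1,\}\)</server_id>.*:\1:p')
echo "server_id=$server_id"   # server_id=1
```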

clickhouse-keeper-02 configuration

There is only one line of difference between clickhouse-keeper-01 and clickhouse-keeper-02: server_id is set to 2 on this node.

/etc/clickhouse-keeper/keeper_config.xml on clickhouse-keeper-02

root@clickhouse-keeper-02:~# cat /etc/clickhouse-keeper/keeper_config.xml
<clickhouse>
    <logger>
        <level>trace</level>
        <log>/var/log/clickhouse-keeper/clickhouse-keeper.log</log>
        <errorlog>/var/log/clickhouse-keeper/clickhouse-keeper.err.log</errorlog>
        <size>1000M</size>
        <count>3</count>
    </logger>
    <listen_host>0.0.0.0</listen_host>
    <keeper_server>
        <tcp_port>9181</tcp_port>
        <server_id>2</server_id>
        <log_storage_path>/var/lib/clickhouse-keeper/coordination/log</log_storage_path>
        <snapshot_storage_path>/var/lib/clickhouse-keeper/coordination/snapshots</snapshot_storage_path>
        <coordination_settings>
            <operation_timeout_ms>10000</operation_timeout_ms>
            <session_timeout_ms>30000</session_timeout_ms>
            <raft_logs_level>trace</raft_logs_level>
        </coordination_settings>
        <raft_configuration>
            <server>
                <id>1</id>
                <hostname>clickhouse-keeper-01</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>2</id>
                <hostname>clickhouse-keeper-02</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>3</id>
                <hostname>clickhouse-keeper-03</hostname>
                <port>9234</port>
            </server>
        </raft_configuration>
    </keeper_server>
</clickhouse>

clickhouse-keeper-03 configuration

There is only one line of difference between clickhouse-keeper-01 and clickhouse-keeper-03: server_id is set to 3 on this node.

/etc/clickhouse-keeper/keeper_config.xml on clickhouse-keeper-03

root@clickhouse-keeper-03:~# cat /etc/clickhouse-keeper/keeper_config.xml
<clickhouse>
    <logger>
        <level>trace</level>
        <log>/var/log/clickhouse-keeper/clickhouse-keeper.log</log>
        <errorlog>/var/log/clickhouse-keeper/clickhouse-keeper.err.log</errorlog>
        <size>1000M</size>
        <count>3</count>
    </logger>
    <listen_host>0.0.0.0</listen_host>
    <keeper_server>
        <tcp_port>9181</tcp_port>
        <server_id>3</server_id>
        <log_storage_path>/var/lib/clickhouse-keeper/coordination/log</log_storage_path>
        <snapshot_storage_path>/var/lib/clickhouse-keeper/coordination/snapshots</snapshot_storage_path>
        <coordination_settings>
            <operation_timeout_ms>10000</operation_timeout_ms>
            <session_timeout_ms>30000</session_timeout_ms>
            <raft_logs_level>trace</raft_logs_level>
        </coordination_settings>
        <raft_configuration>
            <server>
                <id>1</id>
                <hostname>clickhouse-keeper-01</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>2</id>
                <hostname>clickhouse-keeper-02</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>3</id>
                <hostname>clickhouse-keeper-03</hostname>
                <port>9234</port>
            </server>
        </raft_configuration>
    </keeper_server>
</clickhouse>

Start the services

Run on the clickhouse-keeper-01 to 03 nodes:

systemctl enable --now clickhouse-keeper.service
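Right after starting the service, a hedged liveness probe can be run on each Keeper host (it assumes netcat is installed; a healthy Keeper answers the four-letter command ruok with imok, and the sketch reports the absence of a response instead of failing):

```shell
# Probe the local Keeper with the four-letter command `ruok`.
# A healthy Keeper replies `imok`; otherwise the probe reports what happened.
if command -v nc >/dev/null 2>&1; then
  resp=$(echo ruok | nc -w 2 localhost 9181 2>/dev/null)
  msg="${resp:-no response from localhost:9181}"
else
  msg="nc not installed"
fi
echo "$msg"
```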

Confirm the service status on clickhouse-keeper-01

root@clickhouse-keeper-01:~# systemctl status clickhouse-keeper.service
● clickhouse-keeper.service - ClickHouse Keeper - zookeeper compatible distributed coordination server
     Loaded: loaded (/lib/systemd/system/clickhouse-keeper.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2024-10-27 19:44:26 CST; 3h 0min ago
   Main PID: 3460 (clickhouse-keep)
      Tasks: 41 (limit: 4556)
     Memory: 58.8M
        CPU: 1min 13.000s
     CGroup: /system.slice/clickhouse-keeper.service
             └─3460 /usr/bin/clickhouse-keeper --config=/etc/clickhouse-keeper/keeper_config.xml --pid-file=/run/clickhouse-keeper/clickhouse-keeper.pid

Oct 27 19:44:26 clickhouse-keeper-01 systemd[1]: Started ClickHouse Keeper - zookeeper compatible distributed coordination server.
Oct 27 19:44:26 clickhouse-keeper-01 clickhouse-keeper[3460]: Processing configuration file '/etc/clickhouse-keeper/keeper_config.xml'.
Oct 27 19:44:26 clickhouse-keeper-01 clickhouse-keeper[3460]: Logging trace to /var/log/clickhouse-keeper/clickhouse-keeper.log
Oct 27 19:44:26 clickhouse-keeper-01 clickhouse-keeper[3460]: Logging errors to /var/log/clickhouse-keeper/clickhouse-keeper.err.log

Confirm the service status on clickhouse-keeper-02

root@clickhouse-keeper-02:~# systemctl status clickhouse-keeper.service
● clickhouse-keeper.service - ClickHouse Keeper - zookeeper compatible distributed coordination server
     Loaded: loaded (/lib/systemd/system/clickhouse-keeper.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2024-10-27 19:44:28 CST; 3h 0min ago
   Main PID: 3053 (clickhouse-keep)
      Tasks: 41 (limit: 4556)
     Memory: 44.7M
        CPU: 1min 557ms
     CGroup: /system.slice/clickhouse-keeper.service
             └─3053 /usr/bin/clickhouse-keeper --config=/etc/clickhouse-keeper/keeper_config.xml --pid-file=/run/clickhouse-keeper/clickhouse-keeper.pid

Oct 27 19:44:28 clickhouse-keeper-02 systemd[1]: Started ClickHouse Keeper - zookeeper compatible distributed coordination server.
Oct 27 19:44:28 clickhouse-keeper-02 clickhouse-keeper[3053]: Processing configuration file '/etc/clickhouse-keeper/keeper_config.xml'.
Oct 27 19:44:28 clickhouse-keeper-02 clickhouse-keeper[3053]: Logging trace to /var/log/clickhouse-keeper/clickhouse-keeper.log
Oct 27 19:44:28 clickhouse-keeper-02 clickhouse-keeper[3053]: Logging errors to /var/log/clickhouse-keeper/clickhouse-keeper.err.log

Confirm the service status on clickhouse-keeper-03

root@clickhouse-keeper-03:~# systemctl status clickhouse-keeper.service
● clickhouse-keeper.service - ClickHouse Keeper - zookeeper compatible distributed coordination server
     Loaded: loaded (/lib/systemd/system/clickhouse-keeper.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2024-10-27 19:44:30 CST; 3h 0min ago
   Main PID: 2991 (clickhouse-keep)
      Tasks: 41 (limit: 4556)
     Memory: 43.4M
        CPU: 1min 336ms
     CGroup: /system.slice/clickhouse-keeper.service
             └─2991 /usr/bin/clickhouse-keeper --config=/etc/clickhouse-keeper/keeper_config.xml --pid-file=/run/clickhouse-keeper/clickhouse-keeper.pid

Oct 27 19:44:30 clickhouse-keeper-03 systemd[1]: Started ClickHouse Keeper - zookeeper compatible distributed coordination server.
Oct 27 19:44:30 clickhouse-keeper-03 clickhouse-keeper[2991]: Processing configuration file '/etc/clickhouse-keeper/keeper_config.xml'.
Oct 27 19:44:30 clickhouse-keeper-03 clickhouse-keeper[2991]: Logging trace to /var/log/clickhouse-keeper/clickhouse-keeper.log
Oct 27 19:44:30 clickhouse-keeper-03 clickhouse-keeper[2991]: Logging errors to /var/log/clickhouse-keeper/clickhouse-keeper.err.log

Run on the clickhouse-01 and clickhouse-02 nodes:

systemctl enable --now clickhouse-server.service
systemctl restart clickhouse-server.service

Confirm the service status on clickhouse-01

root@clickhouse-01:~# systemctl status clickhouse-server.service
● clickhouse-server.service - ClickHouse Server (analytic DBMS for big data)
     Loaded: loaded (/lib/systemd/system/clickhouse-server.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2024-10-27 19:55:27 CST; 2h 51min ago
   Main PID: 3107 (clickhouse-serv)
      Tasks: 701 (limit: 4556)
     Memory: 802.6M
        CPU: 25min 4.495s
     CGroup: /system.slice/clickhouse-server.service
             ├─3104 clickhouse-watchdog "" "" "" "" "" "" "" --config=/etc/clickhouse-server/config.xml --pid-file=/run/clickhouse-server/clickhouse-server.pid
             └─3107 /usr/bin/clickhouse-server --config=/etc/clickhouse-server/config.xml --pid-file=/run/clickhouse-server/clickhouse-server.pid

Oct 27 19:55:26 clickhouse-01 systemd[1]: Starting ClickHouse Server (analytic DBMS for big data)...
Oct 27 19:55:26 clickhouse-01 clickhouse-server[3104]: Processing configuration file '/etc/clickhouse-server/config.xml'.
Oct 27 19:55:26 clickhouse-01 clickhouse-server[3104]: Merging configuration file '/etc/clickhouse-server/config.d/macros.xml'.
Oct 27 19:55:26 clickhouse-01 clickhouse-server[3104]: Merging configuration file '/etc/clickhouse-server/config.d/network-and-logging.xml'.
Oct 27 19:55:26 clickhouse-01 clickhouse-server[3104]: Merging configuration file '/etc/clickhouse-server/config.d/remote-servers.xml'.
Oct 27 19:55:26 clickhouse-01 clickhouse-server[3104]: Merging configuration file '/etc/clickhouse-server/config.d/use-keeper.xml'.
Oct 27 19:55:26 clickhouse-01 clickhouse-server[3104]: Logging debug to /var/log/clickhouse-server/clickhouse-server.log
Oct 27 19:55:26 clickhouse-01 clickhouse-server[3104]: Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
Oct 27 19:55:26 clickhouse-01 systemd[1]: clickhouse-server.service: Supervising process 3107 which is not our child. We'll most likely not notice when it exits.
Oct 27 19:55:27 clickhouse-01 systemd[1]: Started ClickHouse Server (analytic DBMS for big data).
root@clickhouse-01:~#

Confirm the service status on clickhouse-02

root@clickhouse-02:~# systemctl status clickhouse-server.service
● clickhouse-server.service - ClickHouse Server (analytic DBMS for big data)
     Loaded: loaded (/lib/systemd/system/clickhouse-server.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2024-10-27 19:55:27 CST; 2h 51min ago
   Main PID: 3107 (clickhouse-serv)
      Tasks: 701 (limit: 4556)
     Memory: 759.0M
        CPU: 25min 6.801s
     CGroup: /system.slice/clickhouse-server.service
             ├─3104 clickhouse-watchdog "" "" "" "" "" "" "" --config=/etc/clickhouse-server/config.xml --pid-file=/run/clickhouse-server/clickhouse-server.pid
             └─3107 /usr/bin/clickhouse-server --config=/etc/clickhouse-server/config.xml --pid-file=/run/clickhouse-server/clickhouse-server.pid

Oct 27 19:55:26 clickhouse-02 systemd[1]: Starting ClickHouse Server (analytic DBMS for big data)...
Oct 27 19:55:26 clickhouse-02 clickhouse-server[3104]: Processing configuration file '/etc/clickhouse-server/config.xml'.
Oct 27 19:55:26 clickhouse-02 clickhouse-server[3104]: Merging configuration file '/etc/clickhouse-server/config.d/macros.xml'.
Oct 27 19:55:26 clickhouse-02 clickhouse-server[3104]: Merging configuration file '/etc/clickhouse-server/config.d/network-and-logging.xml'.
Oct 27 19:55:26 clickhouse-02 clickhouse-server[3104]: Merging configuration file '/etc/clickhouse-server/config.d/remote-servers.xml'.
Oct 27 19:55:26 clickhouse-02 clickhouse-server[3104]: Merging configuration file '/etc/clickhouse-server/config.d/use-keeper.xml'.
Oct 27 19:55:26 clickhouse-02 clickhouse-server[3104]: Logging debug to /var/log/clickhouse-server/clickhouse-server.log
Oct 27 19:55:26 clickhouse-02 clickhouse-server[3104]: Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
Oct 27 19:55:26 clickhouse-02 systemd[1]: clickhouse-server.service: Supervising process 3107 which is not our child. We'll most likely not notice when it exits.
Oct 27 19:55:27 clickhouse-02 systemd[1]: Started ClickHouse Server (analytic DBMS for big data).
root@clickhouse-02:~#

Test the cluster

To gain experience with ReplicatedMergeTree and ClickHouse Keeper, you can run the following commands, which will have you:

  • Create a database on the cluster configured above
  • Create a table on the database using the ReplicatedMergeTree table engine
  • Insert data on one node, and query it on another node
  • Stop one ClickHouse server node
  • Insert more data on the running node
  • Restart the stopped node
  • Verify that the data is available when querying the restarted node

Verify that ClickHouse Keeper is running

The mntr command is used to verify that ClickHouse Keeper is running and to get state information about the relationship of the three Keeper nodes. In the configuration used in this example, the three nodes work together: the nodes elect a leader, and the remaining nodes become followers. The mntr command gives information related to performance and to whether a particular node is a follower or a leader.

Tip

You may need to install netcat in order to send the mntr command to Keeper. See the nmap.org page for download information.

Run from a shell on clickhouse-keeper-01, clickhouse-keeper-02, and clickhouse-keeper-03:

echo mntr | nc localhost 9181

Response from a follower:

zk_version  v23.3.1.2823-testing-46e85357ce2da2a99f56ee83a079e892d7ec3726
zk_avg_latency  0
zk_max_latency  0
zk_min_latency  0
zk_packets_received 0
zk_packets_sent 0
zk_num_alive_connections    0
zk_outstanding_requests 0
zk_server_state follower
zk_znode_count  6
zk_watch_count  0
zk_ephemerals_count 0
zk_approximate_data_size    1271
zk_key_arena_size   4096
zk_latest_snapshot_size 0
zk_open_file_descriptor_count   46
zk_max_file_descriptor_count    18446744073709551615

Response from the leader:

zk_version  v23.3.1.2823-testing-46e85357ce2da2a99f56ee83a079e892d7ec3726
zk_avg_latency  0
zk_max_latency  0
zk_min_latency  0
zk_packets_received 0
zk_packets_sent 0
zk_num_alive_connections    0
zk_outstanding_requests 0
zk_server_state leader
zk_znode_count  6
zk_watch_count  0
zk_ephemerals_count 0
zk_approximate_data_size    1271
zk_key_arena_size   4096
zk_latest_snapshot_size 0
zk_open_file_descriptor_count   48
zk_max_file_descriptor_count    18446744073709551615
zk_followers    2
zk_synced_followers 2
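To pick the node's role out of an mntr response programmatically, a small awk does the job. The sketch parses an inlined sample like the responses above; against a live node, you would pipe `echo mntr | nc <host> 9181` into the same awk:

```shell
# Extract zk_server_state from an mntr response (sample inlined; mntr fields
# are whitespace separated, which awk's default field splitting handles).
mntr_sample='zk_version	v23.3.1.2823-testing
zk_server_state	follower
zk_znode_count	6'
state=$(printf '%s\n' "$mntr_sample" | awk '$1 == "zk_server_state" { print $2 }')
echo "state=$state"   # state=follower
```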

Verify ClickHouse cluster functionality

In one shell, connect to node clickhouse-01 with clickhouse-client; in another shell, connect to node clickhouse-02 with clickhouse-client.

1. Create a database on the cluster configured above

Run on either node clickhouse-01 or clickhouse-02:

CREATE DATABASE db1 ON CLUSTER cluster_1S_2R

┌─host──────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ clickhouse-02 │ 9000 │      0 │       │                   1 │                0 │
│ clickhouse-01 │ 9000 │      0 │       │                   0 │                0 │
└───────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘

2. Create a table on the database using the ReplicatedMergeTree table engine

Run on either node clickhouse-01 or clickhouse-02:

CREATE TABLE db1.table1 ON CLUSTER cluster_1S_2R
(
    `id` UInt64,
    `column1` String
)
ENGINE = ReplicatedMergeTree
ORDER BY id

┌─host──────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ clickhouse-02 │ 9000 │      0 │       │                   1 │                0 │
│ clickhouse-01 │ 9000 │      0 │       │                   0 │                0 │
└───────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘

3. Insert data on one node and query it on another node

Run on node clickhouse-01:

INSERT INTO db1.table1 (id, column1) VALUES (1, 'abc');

4. Query the table on node clickhouse-02

Run on node clickhouse-02:

SELECT *
FROM db1.table1
┌─id─┬─column1─┐
│  1 │ abc     │
└────┴─────────┘

5. Insert data on the other node and query it on node clickhouse-01

Run on node clickhouse-02:

INSERT INTO db1.table1 (id, column1) VALUES (2, 'def');

Run on node clickhouse-01:

SELECT *
FROM db1.table1
┌─id─┬─column1─┐
│  1 │ abc     │
└────┴─────────┘
┌─id─┬─column1─┐
│  2 │ def     │
└────┴─────────┘

6. Stop one ClickHouse server node

Stop one of the ClickHouse server nodes by running an operating-system command analogous to the one used to start it. If you started the node with systemctl start, stop it with systemctl stop.

root@clickhouse-01:~# systemctl stop clickhouse-server.service

7. Insert more data on the running node

Run on the running node:

INSERT INTO db1.table1 (id, column1) VALUES (3, 'ghi');

Select the data:

Run on the running node:

SELECT *
FROM db1.table1
┌─id─┬─column1─┐
│  1 │ abc     │
└────┴─────────┘
┌─id─┬─column1─┐
│  2 │ def     │
└────┴─────────┘
┌─id─┬─column1─┐
│  3 │ ghi     │
└────┴─────────┘

8. Restart the stopped node and select from it

root@clickhouse-01:~# systemctl start clickhouse-server.service

Run on the restarted node:

SELECT *
FROM db1.table1
┌─id─┬─column1─┐
│  1 │ abc     │
└────┴─────────┘
┌─id─┬─column1─┐
│  2 │ def     │
└────┴─────────┘
┌─id─┬─column1─┐
│  3 │ ghi     │
└────┴─────────┘
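As a final hedged consistency check, each replica should now report the same row count of 3. The sketch below assumes clickhouse-client is on the PATH and both servers are reachable; on any other machine, each line simply reports a skip instead of failing:

```shell
# Compare row counts on both replicas; skip gracefully where the client or
# the hosts are unavailable.
result=$(for host in clickhouse-01 clickhouse-02; do
  if command -v clickhouse-client >/dev/null 2>&1; then
    count=$(clickhouse-client --host "$host" --query "SELECT count() FROM db1.table1" 2>/dev/null)
    echo "$host: ${count:-unreachable}"
  else
    echo "$host: skipped (clickhouse-client not installed)"
  fi
done)
echo "$result"
```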
