Installing ELK with Docker Compose

1. Introduction

Solution overview

We use Filebeat as the log collector, feeding into a Redis queue; Logstash consumes the log data from the queue, parses and processes it, and outputs it to Elasticsearch, where Kibana presents it in the browser. We run a 3-node Elasticsearch cluster for high availability and scalability.

System architecture

  1. Filebeat: log shipper; collects logs from each source and sends them to Redis.
  2. Redis: message queue; buffers log data to ensure reliable delivery.
  3. Logstash: log processor; consumes log data from Redis, parses and processes it, then sends it to Elasticsearch.
  4. Elasticsearch: distributed search and analytics engine; stores and indexes the log data.
  5. Kibana: data visualization tool; displays the log data held in Elasticsearch.

Deployment architecture

Elasticsearch

  • Node count: 3 nodes forming a highly available cluster.
  • Configuration: each node acts as both a master-eligible and a data node, giving the cluster high availability and data redundancy.

Logstash

  • Instance count: at least 2 Logstash instances are needed for high availability and load balancing.
  • Load balancing: the Redis list data structure provides it naturally; multiple Logstash instances can consume log data from Redis concurrently.

Redis

  • Deployment mode: standalone or cluster, depending on your needs.
    • Standalone: suited to smaller log volumes; simple to configure.
    • Cluster: suited to larger log volumes; higher availability and scalability.
  • Data structure: a Redis list buffers the log data, preserving ordering and ensuring reliable delivery.

Filebeat

  • Deployment: Filebeat runs on every log-source server, collecting local logs and sending them to Redis. In Kubernetes, Filebeat is deployed as a DaemonSet on every node, and collection can be scoped to specific namespaces and application pods.
  • Configuration: configure Filebeat to push logs into a Redis list.

Kibana

  • Instance count: a single instance is usually enough; for high-concurrency access, consider multiple instances behind a load balancer.
  • Configuration: point Kibana at the Elasticsearch cluster.

Solution diagram

+----------------+       +----------------+       +----------------+       +----------------+
|                |       |                |       |                |       |                |
|    Filebeat    |  -->  |     Redis      |  -->  |   Logstash     |  -->  | Elasticsearch  |
|                |       |     (list)     |       |                |       |                |
+----------------+       +----------------+       +----------------+       +----------------+

Summary

  1. Elasticsearch: 3 nodes forming a highly available cluster.
  2. Logstash: at least 2 instances for high availability and load balancing.
  3. Redis: standalone or cluster mode, depending on your needs.
  4. Filebeat: deployed on every log-source server to collect logs and send them to Redis.
  5. Kibana: usually 1 instance, unless high-concurrency access is required.

This deployment architecture provides high availability and scalability across log collection, processing, storage, and presentation.
In testing, however, the Redis hop proved inefficient, so we later disabled it; Filebeat is in any case recommended to ship directly to Elasticsearch, with Logstash used only when needed.

2. Installation

Because certificate generation is involved, we bring the ES cluster up on a single machine first, then sync the data and certificate directories out to the other physical nodes.

Preparing the docker and docker-compose environment is omitted here.

hosts:

10.1.205.165 elk-node1
10.1.205.166 elk-node2
10.1.205.167 elk-node3
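These mappings must exist on every node (the compose files below also repeat them as extra_hosts). A minimal sketch of appending them; it targets a scratch file here, whereas on a real node you would append to /etc/hosts as root:

```shell
# Append the ELK node mappings to a scratch copy instead of /etc/hosts.
hosts_file=$(mktemp)
cat >> "$hosts_file" <<'EOF'
10.1.205.165 elk-node1
10.1.205.166 elk-node2
10.1.205.167 elk-node3
EOF
grep elk-node "$hosts_file"
```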

2.1 Starting the ES cluster on a single machine

mkdir -p /data/docker/elk
cd /data/docker/elk
mkdir -p es01/data es02/data es03/data kibana
chown 1000.1000 -R es01/data es02/data es03/data kibana
vim .env

The .env file contents:

# Password for the 'elastic' user (at least 6 characters)
ELASTIC_PASSWORD=changeme

# Password for the 'kibana_system' user (at least 6 characters)
KIBANA_PASSWORD=changeme

# Version of Elastic products
STACK_VERSION=8.15.0

# Set the cluster name
CLUSTER_NAME=docker-cluster

# Set to 'basic' or 'trial' to automatically start the 30-day trial
LICENSE=basic
#LICENSE=trial

# Port to expose Elasticsearch HTTP API to the host
ES_PORT=9200
#ES_PORT=127.0.0.1:9200

# Port to expose Kibana to the host
KIBANA_PORT=5601
#KIBANA_PORT=80

# Increase or decrease based on the available host memory (in bytes)
MEM_LIMIT=1073741824

# Project namespace (defaults to the current folder name if not set)
#COMPOSE_PROJECT_NAME=myproject
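The setup container refuses to start when the two passwords are empty; the same sanity check can be run by hand before `docker-compose up`. A minimal sketch (it writes a throwaway `.env.example` with the placeholder values above, rather than touching your real .env):

```shell
# Load an env file and verify the two required passwords are set and
# at least 6 characters long (mirrors the setup container's check).
cat > .env.example <<'EOF'
ELASTIC_PASSWORD=changeme
KIBANA_PASSWORD=changeme
STACK_VERSION=8.15.0
EOF
set -a; . ./.env.example; set +a
for var in ELASTIC_PASSWORD KIBANA_PASSWORD; do
  val=$(eval echo "\$$var")
  if [ -z "$val" ] || [ "${#val}" -lt 6 ]; then
    echo "Set $var (at least 6 characters) in the .env file"
    exit 1
  fi
done
echo "env check passed"
```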

The docker-compose.yml is below (when generating the certificates, it is best to include the node IPs as well):

version: "2.2"

services:
  setup:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - ./certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        if [ x${ELASTIC_PASSWORD} == x ]; then
          echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
          exit 1;
        elif [ x${KIBANA_PASSWORD} == x ]; then
          echo "Set the KIBANA_PASSWORD environment variable in the .env file";
          exit 1;
        fi;
        if [ ! -f config/certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f config/certs/certs.zip ]; then
          echo "Creating certs";
          echo -ne \
          "instances:\n"\
          "  - name: es01\n"\
          "    dns:\n"\
          "      - es01\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: es02\n"\
          "    dns:\n"\
          "      - es02\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: es03\n"\
          "    dns:\n"\
          "      - es03\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          > config/certs/instances.yml;
          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
        fi;
        echo "Setting file permissions"
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
        echo "All done!";
      '
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120

  es01:
    depends_on:
      setup:
        condition: service_healthy
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - ./certs:/usr/share/elasticsearch/config/certs
      - ./es01/data:/usr/share/elasticsearch/data:rw
    ports:
      - ${ES_PORT}:9200
    restart: always
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es02,es03
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es01/es01.key
      - xpack.security.http.ssl.certificate=certs/es01/es01.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es01/es01.key
      - xpack.security.transport.ssl.certificate=certs/es01/es01.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  es02:
    depends_on:
      - es01
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - ./certs:/usr/share/elasticsearch/config/certs
      - ./es02/data:/usr/share/elasticsearch/data:rw
    restart: always
    environment:
      - node.name=es02
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es03
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es02/es02.key
      - xpack.security.http.ssl.certificate=certs/es02/es02.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es02/es02.key
      - xpack.security.transport.ssl.certificate=certs/es02/es02.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  es03:
    depends_on:
      - es02
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - ./certs:/usr/share/elasticsearch/config/certs
      - ./es03/data:/usr/share/elasticsearch/data:rw
    restart: always
    environment:
      - node.name=es03
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es02
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es03/es03.key
      - xpack.security.http.ssl.certificate=certs/es03/es03.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es03/es03.key
      - xpack.security.transport.ssl.certificate=certs/es03/es03.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  kibana:
    depends_on:
      es01:
        condition: service_healthy
      es02:
        condition: service_healthy
      es03:
        condition: service_healthy
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    volumes:
      - ./certs:/usr/share/kibana/config/certs
      - ./kibana/data:/usr/share/kibana/data:rw
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
    mem_limit: ${MEM_LIMIT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

Bring it up first:

docker-compose up -d

Once it is up, open it in a browser and confirm it is reachable. Log in as user elastic with the ELASTIC_PASSWORD set in .env.

If everything looks good, shut the ES cluster down:

docker-compose down

2.2 Starting ES node 1

Since the 3-node ES cluster was just started on a single machine, first sync the files and data directories to the other nodes:

# Before syncing, create the /data/docker/elk directory on nodes 2 and 3
rsync -avz /data/docker/elk/.env elk-node2:/data/docker/elk/.env
rsync -avz /data/docker/elk/docker-compose.yml elk-node2:/data/docker/elk/docker-compose.yml
rsync -avz /data/docker/elk/certs/ elk-node2:/data/docker/elk/certs/
rsync -avz /data/docker/elk/es02/ elk-node2:/data/docker/elk/es02/

rsync -avz /data/docker/elk/.env elk-node3:/data/docker/elk/.env
rsync -avz /data/docker/elk/docker-compose.yml elk-node3:/data/docker/elk/docker-compose.yml
rsync -avz /data/docker/elk/certs/ elk-node3:/data/docker/elk/certs/
rsync -avz /data/docker/elk/es03/ elk-node3:/data/docker/elk/es03/

Now edit docker-compose.yml on elk-node1.

Since the certificates no longer need to be generated, comment out the setup service, along with the following block:

    depends_on:
      setup:
        condition: service_healthy
version: "2.2"

services:
#  setup:
#    image: 10.1.205.109/library/elasticsearch/elasticsearch:${STACK_VERSION}
#    volumes:
#      - ./certs:/usr/share/elasticsearch/config/certs
#    user: "0"
#    network_mode: host
#    extra_hosts:
#      - "es01:10.1.205.165"
#      - "es02:10.1.205.166"
#      - "es03:10.1.205.167"
#    command: >
#      bash -c '
#        if [ x${ELASTIC_PASSWORD} == x ]; then
#          echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
#          exit 1;
#        elif [ x${KIBANA_PASSWORD} == x ]; then
#          echo "Set the KIBANA_PASSWORD environment variable in the .env file";
#          exit 1;
#        fi;
#        if [ ! -f config/certs/ca.zip ]; then
#          echo "Creating CA";
#          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
#          unzip config/certs/ca.zip -d config/certs;
#        fi;
#        if [ ! -f config/certs/certs.zip ]; then
#          echo "Creating certs";
#          echo -ne \
#          "instances:\n"\
#          "  - name: es01\n"\
#          "    dns:\n"\
#          "      - es01\n"\
#          "      - localhost\n"\
#          "    ip:\n"\
#          "      - 127.0.0.1\n"\
#          "  - name: es02\n"\
#          "    dns:\n"\
#          "      - es02\n"\
#          "      - localhost\n"\
#          "    ip:\n"\
#          "      - 127.0.0.1\n"\
#          "  - name: es03\n"\
#          "    dns:\n"\
#          "      - es03\n"\
#          "      - localhost\n"\
#          "    ip:\n"\
#          "      - 127.0.0.1\n"\
#          > config/certs/instances.yml;
#          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
#          unzip config/certs/certs.zip -d config/certs;
#        fi;
#        echo "Setting file permissions"
#        chown -R root:root config/certs;
#        find . -type d -exec chmod 750 \{\} \;;
#        find . -type f -exec chmod 640 \{\} \;;
#        echo "Waiting for Elasticsearch availability";
#        until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
#        echo "Setting kibana_system password";
#        until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
#        echo "All done!";
#      '
#    healthcheck:
#      test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
#      interval: 1s
#      timeout: 5s
#      retries: 120

  es01:
#    depends_on:
#      setup:
#        condition: service_healthy
    container_name: es01
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - ./certs:/usr/share/elasticsearch/config/certs
      - ./es01/data:/usr/share/elasticsearch/data:rw
    ports:
      - ${ES_PORT}:9200
      - 9300:9300
    restart: always
    network_mode: host
    # add hosts entries
    extra_hosts:
      - "es01:10.1.205.165"
      - "es02:10.1.205.166"
      - "es03:10.1.205.167"
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es02,es03
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es01/es01.key
      - xpack.security.http.ssl.certificate=certs/es01/es01.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es01/es01.key
      - xpack.security.transport.ssl.certificate=certs/es01/es01.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  kibana:
    depends_on:
      es01:
        condition: service_healthy
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    volumes:
      - ./certs:/usr/share/kibana/config/certs
      - ./kibana/data:/usr/share/kibana/data:rw
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
    mem_limit: ${MEM_LIMIT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

2.3 Starting ES node 2

docker-compose.yml

version: "2.2"

services:
  test_es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - ./certs:/usr/share/elasticsearch/config/certs
    user: "0"
    extra_hosts:
      - "es01:10.1.205.165"
      - "es02:10.1.205.166"
      - "es03:10.1.205.167"
    restart: always
    network_mode: host
    command: >
      bash -c 'sleep infinity'
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  es02:
    depends_on:
      test_es01:
        condition: service_healthy
    container_name: es02
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    extra_hosts:
      - "es01:10.1.205.165"
      - "es02:10.1.205.166"
      - "es03:10.1.205.167"
    ports:
      - ${ES_PORT}:9200
      - 9300:9300
    volumes:
      - ./certs:/usr/share/elasticsearch/config/certs
      - ./es02/data:/usr/share/elasticsearch/data:rw
    restart: always
    environment:
      - node.name=es02
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es03
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es02/es02.key
      - xpack.security.http.ssl.certificate=certs/es02/es02.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es02/es02.key
      - xpack.security.transport.ssl.certificate=certs/es02/es02.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

Start it:

docker-compose up -d

2.4 Starting ES node 3

docker-compose.yml

version: "2.2"

services:
  test_es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - ./certs:/usr/share/elasticsearch/config/certs
    user: "0"
    extra_hosts:
      - "es01:10.1.205.165"
      - "es02:10.1.205.166"
      - "es03:10.1.205.167"
    restart: always
    network_mode: host
    command: >
      bash -c 'sleep infinity'
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://es02:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  es03:
    depends_on:
      test_es02:
        condition: service_healthy
    container_name: es03
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    # extra_hosts/ports mirror the es02 node; without them es03 cannot reach its peers
    extra_hosts:
      - "es01:10.1.205.165"
      - "es02:10.1.205.166"
      - "es03:10.1.205.167"
    ports:
      - ${ES_PORT}:9200
      - 9300:9300
    volumes:
      - ./certs:/usr/share/elasticsearch/config/certs
      - ./es03/data:/usr/share/elasticsearch/data:rw
    restart: always
    environment:
      - node.name=es03
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es02
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es03/es03.key
      - xpack.security.http.ssl.certificate=certs/es03/es03.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es03/es03.key
      - xpack.security.transport.ssl.certificate=certs/es03/es03.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

Start it:

docker-compose up -d

# Best to run this check on all 3 nodes.
# If it fails, exec into a node, run `elasticsearch-reset-password -u elastic`, then test again.
curl -X GET "https://es01:9200/_cluster/health?pretty" \
  -H "Content-Type: application/json" \
  -u "elastic:<password>" \
  --cacert certs/ca/ca.crt

After startup, open https://10.1.205.165:9200/_cluster/health?pretty in a browser.
When prompted, authenticate with the elastic user and password.

3. Installing Kibana

Kibana was already installed above, so nothing more is needed here.

An error we hit:

-response-actions] with timeout of [5m] and run interval of [60s]
[2024-08-27T10:09:54.254+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. Request timed out
[2024-08-27T10:09:54.753+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. security_exceptionRoot causes:security_exception: unable to authenticate user [kibana_system] for REST request [/_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip]
[2024-08-27T10:09:54.908+00:00][INFO ][plugins.screenshotting.chromium] Browser executable: /usr/share/kibana/node_modules/@kbn/screenshotting-plugin/chromium/headless_shell-linux_x64/headless_shell

This is probably because the directories and data were cleaned without re-enabling the setup service to run the initialization once.

4. Installing Logstash

Add the following block to docker-compose.yml:

  logstash:
    image: docker.elastic.co/logstash/logstash:8.15.0
    restart: always
    extra_hosts:
      - "es01:10.1.205.165"
      - "es02:10.1.205.166"
      - "es03:10.1.205.167"
    network_mode: host
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline/

Start it once, then copy the default config out of the container and fix ownership:

cd /data/docker/elk
docker-compose up -d
docker cp elk-logstash-1:/usr/share/logstash/config logstash/
chown 1000.1000 -R logstash

Then extend the volumes section of the logstash service to:

    volumes:
      - ./logstash/data:/usr/share/logstash/data/
      - ./logstash/pipeline:/usr/share/logstash/pipeline/
      - ./logstash/config:/usr/share/logstash/config/
      - ./certs:/usr/share/logstash/certs/   # the ES CA, referenced by logstash.yml and the pipelines

Also make sure Logstash can read the certificates:

cd /data/docker/elk
chown 1000.1000 -R certs

In a browser, open Kibana, click "Management" -> "Stack Management" in the left menu, then under "Security" click "Users", select "logstash_system", and change its password; it is needed below.

Edit logstash/config/logstash.yml and substitute the password you just set:

http.host: "0.0.0.0"
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: [ "https://es01:9200" ]
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: "logstash_system_password"
xpack.monitoring.elasticsearch.ssl.certificate_authority: "/usr/share/logstash/certs/ca/ca.crt"

Here are two pipelines:
k8s-filebeat.conf

#
input {
  beats {
    port => 5044
  }
}

filter {
}

output {
  # events that parsed successfully
  elasticsearch {
    hosts => ["https://es01:9200", "https://es02:9200", "https://es03:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
    user => "logstash_writer"
    password => "logstash_password"
    ssl => true
    cacert => "/usr/share/logstash/certs/ca/ca.crt"
  }
  # stdout { codec => rubydebug }
}

k8s-filebeat-redis.conf

#
input {
  redis {
    host => "10.1.205.167"    # IP address of the Redis server
    port => 6379              # Redis server port
    data_type => "list"       # the data type is a list
    key => "filebeat"         # key of the Redis list
    password => "redispwd"    # fill in if Redis has a password set
  }
}

filter {
}

output {
  elasticsearch {
    hosts => ["https://es01:9200", "https://es02:9200", "https://es03:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
    user => "logstash_writer"
    password => "logstash_password"
    ssl => true
    cacert => "/usr/share/logstash/certs/ca/ca.crt"
  }
  # stdout { codec => rubydebug }
}

Create the user and role:

cd /data/docker/elk
curl -X POST "https://127.0.0.1:9200/_security/user/logstash_writer" \
  -H "Content-Type: application/json" \
  -u elastic:$ELASTIC_PASSWORD \
  --cacert certs/ca/ca.crt \
  -d '{
  "password" : "logstash_password",
  "roles" : [ "logstash_writer_role" ]
}'

curl -X POST "https://127.0.0.1:9200/_security/role/logstash_writer_role" \
  -H "Content-Type: application/json" \
  -u elastic:$ELASTIC_PASSWORD \
  --cacert certs/ca/ca.crt \
  -d '{
  "description": "Role for Logstash to write data to Elasticsearch",
  "cluster": ["monitor", "manage_index_templates", "manage"],
  "indices": [
    {
      "names": [ "logstash-*" ],
      "privileges": ["write", "create", "create_index", "manage"]
    }
  ]
}'

Then try starting it: docker-compose up -d

5. Installing Redis

For this ELK setup, a standalone Redis is enough.
On es03, add the following block to /data/docker/elk/docker-compose.yml:

  redis:
    restart: always
    image: redis:7.4
    command: redis-server /etc/redis/redis.conf
    volumes:
      - ./redis/conf/redis.conf:/etc/redis/redis.conf
      - ./redis/data:/data
    ports:
      - "6379:6379"

Below is redis.conf:

protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile ""
databases 16
always-show-logo no
set-proc-title yes
proc-title-template "{title} {listen-addr} {server-mode}"
locale-collate ""
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
rdb-del-sync-files no
dir /data
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync yes
repl-diskless-sync-delay 5
repl-diskless-sync-max-replicas 0
repl-diskless-load disabled
repl-disable-tcp-nodelay no
replica-priority 100
acllog-max-len 128
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
lazyfree-lazy-user-del no
lazyfree-lazy-user-flush no
oom-score-adj no
oom-score-adj-values 0 200 800
disable-thp yes
appendonly no
appendfilename "appendonly.aof"
appenddirname "appendonlydir"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
aof-timestamp-enabled no
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-listpack-entries 512
hash-max-listpack-value 64
list-max-listpack-size -2
list-compress-depth 0
set-max-intset-entries 512
set-max-listpack-entries 128
set-max-listpack-value 64
zset-max-listpack-entries 128
zset-max-listpack-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
jemalloc-bg-thread yes

# Set the Redis access password
requirepass redispwd
# Cap memory usage so the queue cannot exhaust the host
maxmemory 2gb
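The 2 GB cap is effectively a buffer-sizing decision: with Redis acting as the queue, it bounds how much log backlog can accumulate if Logstash falls behind. A rough capacity estimate, assuming an average event size of about 1 KB (a made-up figure; measure your own logs to refine it):

```shell
# Rough estimate of how many queued Filebeat events fit under maxmemory.
maxmemory_bytes=$((2 * 1024 * 1024 * 1024))   # maxmemory 2gb
avg_event_bytes=1024                          # assumed ~1 KB per JSON event
capacity=$((maxmemory_bytes / avg_event_bytes))
echo "approx. $capacity events can queue before Redis starts refusing writes"
```

This ignores Redis per-key overhead, so the real figure is somewhat lower, but it gives a feel for how long an outage the queue can absorb.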

Start it:

cd /data/docker/elk
docker-compose up -d

6. Installing Filebeat

6.1 Installing Filebeat in Kubernetes

The filebeat-kubernetes.yaml used in k8s:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  - nodes
  verbs:
  - get
  - watch
  - list
- apiGroups: ["apps"]
  resources:
  - replicasets
  verbs: ["get", "list", "watch"]
- apiGroups: ["batch"]
  resources:
  - jobs
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: filebeat
  # should be the namespace where filebeat is running
  namespace: kube-system
  labels:
    k8s-app: filebeat
rules:
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs: ["get", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: filebeat-kubeadm-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""]
  resources:
  - configmaps
  resourceNames:
  - kubeadm-config
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat
  namespace: kube-system
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: Role
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat-kubeadm-config
  namespace: kube-system
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: Role
  name: filebeat-kubeadm-config
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    # filebeat.inputs:
    # - type: filestream
    #   id: kubernetes-container-logs
    #   paths:
    #     - /var/log/containers/*.log
    #   parsers:
    #     - container: ~
    #   prospector:
    #     scanner:
    #       fingerprint.enabled: true
    #       symlinks: true
    #   file_identity.fingerprint: ~
    #   processors:
    #     - add_kubernetes_metadata:
    #         host: ${NODE_NAME}
    #         matchers:
    #         - logs_path:
    #             logs_path: "/var/log/containers/"

    # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          node: ${NODE_NAME}
          hints.enabled: true
          hints.default_config:
            type: filestream
            id: kubernetes-container-logs-${data.kubernetes.pod.name}-${data.kubernetes.container.id}
            paths:
              - /var/log/containers/*-${data.kubernetes.container.id}.log
            parsers:
              - container: ~
            prospector:
              scanner:
                fingerprint.enabled: true
                symlinks: true
            file_identity.fingerprint: ~
            ignore_older: 48h
            clean_inactive: 72h
            close_inactive: 5m
            scan_frequency: 10s

    processors:
      - add_cloud_metadata:
      - add_host_metadata:

    output.redis:
      hosts: ["10.1.205.167:6379"]
      key: "filebeat"
      datatype: "list"
      db: 0
      timeout: 10
      password: "redispwd"

    logging.level: warning
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:8.15.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 300Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0640
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlog
        hostPath:
          path: /var/log
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          # When filebeat runs as non-root user, this directory needs to be writable by group (g+w).
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---

6.2 Installing Filebeat on a regular server

Install from the binary tarball; it is ready to use after unpacking.
I placed it in /usr/local/filebeat-8.15.0-linux-x86_64 and created a symlink:

cd /usr/local/
ln -s filebeat-8.15.0-linux-x86_64 filebeat
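The symlink indirection makes upgrades a matter of unpacking the new release next to the old one and repointing the link. A sketch of the swap in a scratch directory (the 8.15.1 directory is hypothetical; on a real host this happens in /usr/local with the actual tarballs):

```shell
# Simulate the symlink-based upgrade flow in a scratch directory.
workdir=$(mktemp -d)
cd "$workdir"
mkdir filebeat-8.15.0-linux-x86_64 filebeat-8.15.1-linux-x86_64
ln -s filebeat-8.15.0-linux-x86_64 filebeat
# Upgrade: repoint the link; -n stops ln from descending into the old target
ln -sfn filebeat-8.15.1-linux-x86_64 filebeat
readlink filebeat
```

The start/stop scripts below never need editing, since they always go through the `filebeat` link.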

First, two start/stop scripts. start.sh:

#!/usr/bin/env bash
cd "$(dirname "$0")" || exit 1
SH_DIR=$(pwd)
nohup "$SH_DIR/filebeat" 2>&1 &

stop.sh

pkill filebeat 

filebeat.yml

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

#output.elasticsearch:
#  hosts: ["https://10.1.205.165:9200"]
#  username: "filebeat_internal"
#  password: "YOUR_PASSWORD"
#  ssl:
#    enabled: true
#    # fingerprint=$(openssl x509 -fingerprint -sha256 -noout -in certs/ca/ca.crt | awk -F"=" '{print $2}' | sed 's/://g')
#    ca_trusted_fingerprint: "33CB5A3B3ECCA59FDF7333D9XXXXXXXXFD34D5386FF9205AB8E1"
#    # copy the certs/ca directory over from the ES nodes
#    certificate_authorities: ["certs/ca/ca.crt"]

output.logstash:
  hosts: ["10.1.205.165:5044", "10.1.205.166:5044"]

setup.kibana:
  host: "10.1.205.165:5601"

logging.level: warning
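The commented `ca_trusted_fingerprint` is derived from the CA certificate with the openssl one-liner shown in the comment. The computation can be checked offline against any certificate; here a throwaway self-signed cert stands in for the real certs/ca/ca.crt, which lives on the ES nodes:

```shell
# Generate a throwaway cert and compute its SHA-256 fingerprint the same
# way as for certs/ca/ca.crt (colon-stripped, as Filebeat expects).
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout "$tmpdir/ca.key" -out "$tmpdir/ca.crt" -days 1 2>/dev/null
fingerprint=$(openssl x509 -fingerprint -sha256 -noout -in "$tmpdir/ca.crt" \
  | awk -F"=" '{print $2}' | sed 's/://g')
echo "$fingerprint"   # 64 hex characters
```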

6.3 Importing the Filebeat template

Because Filebeat does not connect directly to the ES cluster, create a role and a user, and have Filebeat connect to ES once to create its index template.

PUT /_security/role/filebeat_writer
{
  "cluster": ["manage_index_templates", "monitor", "manage_ilm"],
  "indices": [
    {
      "names": ["filebeat-*"],
      "privileges": ["write", "create_index", "manage"]
    }
  ],
  "applications": [
    {
      "application": "kibana-.kibana",
      "privileges": ["read", "write"],
      "resources": ["*"]
    }
  ]
}

Create these in Kibana Dev Tools.

Create a second role:

PUT /_security/role/kibana_dashboard_manager
{
  "cluster": ["monitor"],
  "indices": [
    {
      "names": ["filebeat-*"],
      "privileges": ["read", "view_index_metadata", "create_index"]
    }
  ],
  "applications": [
    {
      "application": "kibana-.kibana",
      "privileges": ["all"],
      "resources": ["*"]
    }
  ]
}

Create the user:

POST /_security/user/filebeat_internal
{
  "password" : "YOUR_PASSWORD",
  "roles" : [ "filebeat_writer", "kibana_dashboard_manager" ],
  "full_name" : "Filebeat Internal User",
  "email" : "filebeat_internal@example.com",
  "enabled": true
}

Connect to ES once and import the template data:

cd /data/docker/elk
fingerprint=$(openssl x509 -fingerprint -sha256 -noout -in certs/ca/ca.crt | awk -F"=" '{print $2}' | sed 's/://g')

docker run --net="host" --rm \
  -v $(pwd)/certs:/usr/share/filebeat/certs \
  10.1.205.109/library/beats/filebeat:8.15.0 setup -e \
  -E output.logstash.enabled=false \
  -E output.elasticsearch.hosts=['https://10.1.205.165:9200'] \
  -E output.elasticsearch.username=filebeat_internal \
  -E output.elasticsearch.password=YOUR_PASSWORD \
  -E output.elasticsearch.ssl.enabled=true \
  -E output.elasticsearch.ssl.ca_trusted_fingerprint=${fingerprint} \
  -E output.elasticsearch.ssl.certificate_authorities=["/usr/share/filebeat/certs/ca/ca.crt"] \
  -E setup.kibana.host=10.1.205.165:5601

7. Troubleshooting

Logstash write-permission problem

After everything was started, Logstash kept exiting; the logs showed a permissions error:

dle/jruby/3.1.0/gems/logstash-output-elasticsearch-11.22.7-java/lib/logstash/plugin_mixins/elasticsearch/common.rb:172:in `block in after_successful_connection'"], :body=>"{\"error\":{\"root_cause\":[{\"type\":\"security_exception\",\"reason\":\"action [indices:admin/index_template/put] is unauthorized for user [logstash_writer] with effective roles [logstash_writer_role], this action is granted by the cluster privileges [manage_index_templates,manage,all]\"}],\"type\":\"security_exception\",\"reason\":\"action [indices:admin/index_template/put] is unauthorized for user [logstash_writer] with effective roles [logstash_writer_role], this action is granted by the cluster privileges [manage_index_templates,manage,all]\"},\"status\":403}"}

The complete role definition is (already updated earlier in this article):

PUT /_security/role/logstash_writer_role
{
  "cluster": ["manage_index_templates", "manage"],
  "indices": [
    {
      "names": ["logstash-*"],
      "privileges": ["write", "create", "create_index", "manage"]
    }
  ]
}

Redis was inefficient
Having filebeat output logs directly to logstash turned out to be much more efficient.

8. Index pattern creation

8.1 Create a Data View

In newer Kibana versions index patterns are called Data Views; create one as shown below.

(screenshot: index pattern creation)

8.2 Configure an ILM policy

ILM is somewhat involved and takes some familiarization and testing.
The following example ILM policy automatically deletes indices 15 days after creation:

  1. Create the ILM policy

You can run the following command in Kibana's Dev Tools to create the ILM policy:

PUT _ilm/policy/delete-after-15-days
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {"max_age": "1d"}
        }
      },
      "delete": {
        "min_age": "15d",
        "actions": {"delete": {}}
      }
    }
  }
}

This policy defines two phases:

  • hot: roll the index over once it is 1 day old.
  • delete: delete the index once 15 days have passed (min_age is counted from rollover).
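The two phases above can be sketched as a small decision function. This is illustrative only (names and logic are mine, not Elasticsearch's implementation); the one subtlety it encodes is that the delete phase's `min_age` is measured from rollover, not from index creation:

```python
def ilm_phase(rolled_over: bool, days_since_rollover: float, delete_min_age: float = 15.0) -> str:
    """Which phase an index sits in under the delete-after-15-days policy."""
    if not rolled_over:
        return "hot"        # still the write index, pre-rollover
    if days_since_rollover < delete_min_age:
        return "hot"        # rolled over, waiting out min_age
    return "delete"         # eligible for deletion

print(ilm_phase(False, 0))    # hot
print(ilm_phase(True, 14))    # hot
print(ilm_phase(True, 15))    # delete
```

So with a 1-day rollover and a 15-day delete min_age, data can live up to roughly 16 days in total.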
  2. Apply the ILM policy to an index template

Next, apply this ILM policy to your index template. Assuming you have an index template named logstash:

PUT _template/logstash_template
{
  "index_patterns": ["logstash-*"],
  "settings": {
    "index.lifecycle.name": "delete-after-15-days",
    "index.lifecycle.rollover_alias": "logstash"
  }
}

This template applies the delete-after-15-days policy to every index matching the logstash-* pattern.

  3. Create the index and set the alias

PUT /logstash-000001
{
  "aliases": {
    "logstash": {"is_write_index": true}
  }
}

  4. Verify the configuration

You can verify that the ILM policy is applied correctly with:

GET logstash-*/_ilm/explain

This command shows each index's ILM status and current phase.

By configuring Elasticsearch's Index Lifecycle Management (ILM) policies, you can automatically manage and clean up indices.
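For a scripted sanity check, the explain output can be summarized per index. The snippet below parses a hypothetical, heavily trimmed `_ilm/explain` response; the field names follow the real API shape, but the values are made up for illustration:

```python
import json

# Trimmed, made-up example of a GET logstash-*/_ilm/explain response body.
explain_response = json.loads("""
{
  "indices": {
    "logstash-000001": {"managed": true, "policy": "delete-after-15-days", "phase": "delete"},
    "logstash-000002": {"managed": true, "policy": "delete-after-15-days", "phase": "hot"}
  }
}
""")

# One line per index: handy for a cron-driven check that the policy is attached.
for name, info in sorted(explain_response["indices"].items()):
    print(f"{name}: policy={info['policy']} phase={info['phase']}")
```

In a real check you would fetch the response from the cluster (e.g. with curl plus the CA certificate from the setup above) instead of the inline sample.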

