ELK: Filebeat

Contents

  • Quick links
  • Preface
  • I. Concepts
    • 1. Key features
    • 2. Architecture
    • 3. Use cases
    • 4. Modules
    • 5. Monitoring and management
  • II. Download links
  • III. Installing version 7.6.2 on Linux
    • filebeat.yml configuration reference (do not copy it verbatim)
    • Multiline matching configuration
    • Filter configuration
    • Final configuration (1: multiline matching, reading log files directly, EFK setup)
    • Final configuration (2: multiline matching, reading log files directly, ELFK setup)
  • IV. A comprehensive case (theory)

Quick links

SpringMVC Source Code Analysis (featured)
Spring 6 Source Code Analysis (featured)
SpringBoot 3 Framework (featured)
MyBatis Framework (featured)
MyBatis-Plus
SpringDataJPA
SpringCloudNetflix
SpringCloudAlibaba (featured)
Shiro
SpringSecurity
Java LOG Logging Frameworks
Activiti (coming soon)
JDK 8 New Features
JDK 9 New Features
JDK 10 New Features
JDK 11 New Features
JDK 12 New Features
JDK 13 New Features
JDK 14 New Features
JDK 15 New Features
JDK 16 New Features
JDK 17 New Features
JDK 18 New Features
JDK 19 New Features
JDK 20 New Features
JDK 21 New Features
Portal to other technical articles

Preface

Once ELK is set up it is extremely handy for collecting logs, and it is not limited to log collection: it is powerful, with full-text search and much more.

The following articles are updated from time to time.

ELK: Elastic Stack Concepts
ELK: Elastic Stack Syntax
ELK: Elastic Stack Installation
ELK: Logstash
ELK: Kibana
ELK: Filebeat

I. Concepts

Filebeat is a lightweight data shipper in the Elastic Stack, built specifically to forward and centralize log data. It is designed to read log files efficiently and forward them to Elasticsearch or Logstash, enabling log analysis and monitoring. The following sections describe Filebeat in detail, covering its key features, architecture, use cases, and configuration examples.

1. Key features

1. Lightweight: Filebeat is a lightweight agent that does not put a noticeable load on the source system, so it can be deployed in all kinds of environments.
2. Log collection: it monitors and collects local log files and forwards them to a configured destination.
3. Data transformation: it supports simple processing of log data, such as parsing and formatting.
4. Fault tolerance: when the destination is unavailable, Filebeat buffers the data so that no log entries are lost.

2. Architecture

Filebeat's architecture consists of the following parts (a minimal sketch follows the list):

1. Input: defines which log files to collect and where they live.
2. Processing: the collected data can be processed, for example parsed or enriched with extra fields.
3. Output: the processed data is sent to a destination such as Elasticsearch or Logstash.
4. Modules: Filebeat ships with predefined modules that simplify log collection for common applications and services.
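A minimal filebeat.yml sketch of how these parts map onto the configuration; the path and host below are placeholders for illustration, not values taken from this article:

filebeat.inputs:                  # input: which files to read
- type: log
  enabled: true
  paths:
    - /var/log/app/*.log          # placeholder path

processors:                       # processing: enrich or modify events
  - add_host_metadata: ~

output.elasticsearch:             # output: where the events are shipped
  hosts: ["localhost:9200"]       # placeholder host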

3. Use cases

1. Centralized log management: collect the logs of many servers in one place so they can be managed and analyzed together.
2. Application and system log monitoring: monitor and analyze logs from applications, systems, and network devices in real time.
3. Security event monitoring: collect and analyze security-related logs to detect and respond to security incidents.

4. Modules

Filebeat provides a number of predefined modules that simplify log collection for common applications and services.
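A module is enabled with the filebeat CLI and then configured in its file under modules.d/. A sketch, assuming the nginx module and a typical log path (adjust both to what you actually run):

./filebeat modules enable nginx     # activates modules.d/nginx.yml
./filebeat modules list             # shows which modules are enabled or disabled

# modules.d/nginx.yml (excerpt)
- module: nginx
  access:
    enabled: true
    var.paths: ["/var/log/nginx/access.log*"]   # assumed log location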

5. Monitoring and management

Filebeat integrates with the other tools in the Elastic Stack (such as Kibana) for log visualization and monitoring. Users can build Kibana dashboards to view the collected log data in real time.

Conclusion

Filebeat is a powerful tool for efficient log collection and forwarding in all kinds of environments. Combined with Elasticsearch and Kibana, it makes it easy to centralize and analyze log data, helping you identify and resolve problems quickly.

The most commonly used filebeat.inputs options, as an annotated reference:

type: log                          # input type: log
enabled: true                      # enables this log input
paths:                             # the logs to monitor; patterns are resolved with Go's glob function and
                                   # directories are not scanned recursively. For example:
- /var/log/*/*.log                 # this only finds files ending in ".log" in the subdirectories of /var/log,
                                   # not ".log" files directly under /var/log.
recursive_glob.enabled:            # enables recursive glob expansion, e.g. /foo/** covers /foo, /foo/*, /foo/*/*
encoding:                          # encoding of the monitored files; both plain and utf-8 handle Chinese logs
exclude_lines: ['^DBG']            # drop lines that match these regular expressions
include_lines: ['^ERR', '^WARN']   # keep only lines that match these regular expressions
harvester_buffer_size: 16384       # buffer size in bytes used by each harvester when reading a file
max_bytes: 10485760                # maximum number of bytes a single log message may have; everything beyond
                                   # max_bytes is discarded and not sent. Default is 10 MB (10485760).
exclude_files: ['\.gz$']           # regular expressions for files Filebeat should ignore
ignore_older: 0                    # 0 (default) disables it; values such as 2h or 2m are allowed. Note that
                                   # ignore_older must be larger than close_inactive. Files that have not been
                                   # updated within this period, or have never been harvested, are ignored.
close_*                            # the close_* options close the harvester after certain criteria or timeouts.
                                   # Closing a harvester means closing the file handle. If the file is updated
                                   # after the harvester was closed, it is picked up again after scan_frequency.
                                   # If the file is moved or deleted while the harvester is closed, Filebeat
                                   # cannot pick it up again and any unread data is lost.
close_inactive                     # closes the file handle if the file has not been read within the given time.
                                   # The countdown starts from the last log line read, not from the file's
                                   # modification time. If a closed file changes, a new harvester is started
                                   # after the next scan_frequency run. Set it to a value larger than the
                                   # frequency at which the log is written, and configure multiple inputs for
                                   # logs with different update rates. Values such as 2h or 5m are accepted.
close_renamed                      # when enabled, Filebeat stops reading the file once it is renamed or moved
close_removed                      # when enabled, Filebeat stops reading the file once it is deleted; when this
                                   # option is enabled, clean_removed must be enabled as well
close_eof                          # closes the file once EOF is reached; suitable for files written only once
close_timeout                      # when enabled, every harvester gets a predefined lifetime and is closed when
                                   # it expires, whether or not the file is still being read.
                                   # close_timeout must not equal ignore_older, otherwise an updated file may
                                   # never be read again. If the output produces no events, the timeout never
                                   # fires: at least one event must be sent before the harvester is closed.
                                   # 0 disables the option.
clean_inactive                     # removes the state of previously harvested files from the registry.
                                   # It must be larger than ignore_older + scan_frequency so that no state is
                                   # removed while a file is still being collected. It helps keep the registry
                                   # file small, especially when many new files are generated every day, and it
                                   # also works around the inode-reuse problem Filebeat has on Linux.
clean_removed                      # when enabled, a file's state is removed from the registry if the file can
                                   # no longer be found on disk. If close_removed is disabled, clean_removed
                                   # must be disabled as well.
scan_frequency                     # how often the input checks the configured paths for new files; default 10s
tail_files:                        # if true, Filebeat starts reading at the end of each file and sends only newly
                                   # appended lines as events instead of re-sending the whole file
symlinks:                          # allows Filebeat to harvest symlinks in addition to regular files; the path of
                                   # the symlink is reported, but Filebeat opens and reads the original file
backoff:                           # how aggressively Filebeat checks a file for updates; default 1s. It defines
                                   # how long Filebeat waits before checking the file again after EOF is reached.
max_backoff:                       # the maximum time Filebeat waits before checking a file again after EOF
backoff_factor:                    # the factor by which the backoff wait time grows; default 2
harvester_limit:                   # limits the number of harvesters started in parallel for one input, which
                                   # directly affects the number of open file handles
tags                               # a list of tags added to every event, useful for filtering, e.g. tags: ["json"]
fields                             # optional extra fields added to the output; values can be scalars, tuples,
                                   # dictionaries, or other nested types. By default they are stored under a
                                   # "fields" sub-dictionary, for example:
                                   #   filebeat.inputs:
                                   #     fields:
                                   #       app_id: query_engine_12
fields_under_root                  # if true, the fields are stored at the top level of the output document
multiline.pattern                  # the regexp pattern that has to be matched
multiline.negate                   # whether the pattern match is negated; default false. With pattern '^b' and
                                   # negate: false, consecutive lines that start with b are merged into the
                                   # previous non-matching line; with negate: true, lines that do NOT start
                                   # with b are merged into the previous line that does.
multiline.match                    # how Filebeat combines matching lines into an event ("after" or "before"),
                                   # depending on the negate setting above
multiline.max_lines                # maximum number of lines combined into one event; extra lines are dropped; default 500
multiline.timeout                  # if no new matching line is found within this time, the pending event is
                                   # sent anyway; default 5s
max_procs                          # maximum number of CPUs that may be used simultaneously; defaults to the
                                   # number of logical CPUs available on the system
name                               # the name of this Filebeat instance; defaults to the hostname

II. Download links

https://www.elastic.co/cn/downloads/filebeat

https://www.elastic.co/cn/downloads/past-releases/filebeat-7-6-2

III. Installing version 7.6.2 on Linux

cd /usr/local/filebeat
tar -zxvf filebeat-7.6.2-linux-x86_64.tar.gz  # extract the tarball (extraction takes a while)
cd /usr/local/filebeat/filebeat-7.6.2-linux-x86_64  # the company's Gubo server
cp /usr/local/filebeat/filebeat-7.6.2-linux-x86_64/filebeat.yml  /usr/local/filebeat/filebeat-7.6.2-linux-x86_64/filebeat.bak.yml  # back up the original file
vim /usr/local/filebeat/filebeat-7.6.2-linux-x86_64/filebeat.yml  # back it up first; changing the paths entry is enough, and with a single-node ES nothing else needs to change. multiline is the multi-line matching (merging exceptions and the like). Big pitfall: the indentation must be exact, otherwise the settings are silently ignored. The relevant part:
#   paths:
#     - /java/logs/*/*.log
#   multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
#   multiline.negate: true
#   multiline.match: after
/usr/local/filebeat/filebeat-7.6.2-linux-x86_64/filebeat test config  # validate the configuration file
nohup /usr/local/filebeat/filebeat-7.6.2-linux-x86_64/filebeat -e -c filebeat.yml > /dev/null 2>&1 &  disown  # start in the background and send the output to /dev/null; be sure to add disown, which keeps Filebeat from exiting automatically after the Xshell session is closed.
#### nohup /usr/local/filebeat/filebeat-7.6.2-linux-x86_64/filebeat -e -c filebeat.yml > /usr/local/filebeat/filebeat-7.6.2-linux-x86_64/filebeat.txt  &  # variant without /dev/null, so the log can be inspected for troubleshooting
#### tail -n 10 -f  /usr/local/filebeat/filebeat-7.6.2-linux-x86_64/filebeat.txt
###  -c specifies a custom configuration file, -d enables debug mode; the default port is 5044

filebeat.yml configuration reference (do not copy it verbatim)

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /java/logs/*/*.log
    #- /var/logs/es_aaa_index_search_slowlog.log
    #- /var/logs/es_bbb_index_search_slowlog.log
    #- /var/logs/es_ccc_index_search_slowlog.log
    #- /var/logs/es_dddd_index_search_slowlog.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
name: filebeat222

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#cloud.auth:

#================================ Outputs =====================================

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["192.168.110.130:9200","92.168.110.131:9200"]
  hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "${ES_PWD}"   # password set via the keystore

Multiline matching configuration

There are 4 possible combinations of these settings, centered on the pattern 'b': rules for merging lines before b or after b. For example, they can make exception lines starting with 'at' stay in the same event instead of being split up.

Exception-merging version:
multiline.pattern: '^[[:space:]]+(at|\.{3})\b|^Caused by:'
multiline.negate: false
multiline.match: after

Exception-merging version 2 (used in production; when Kibana sorts everything in reverse chronological order, the lines belonging to one request still read in chronological order, which works well):
multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
multiline.negate: true
multiline.match: after

Per-thread merging (the officially recommended pattern, but it merges too aggressively; it did not work here because the timestamps end up out of order after merging):
multiline.pattern: '^\['
multiline.negate: true
multiline.match: after
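To make the effect concrete, with the second ("production") variant only lines starting with a date open a new event, so a hypothetical Java log like the following:

2024-05-01 10:00:00.123 ERROR c.e.OrderService - failed to place order
java.lang.NullPointerException: null
    at com.example.OrderService.place(OrderService.java:42)
    at com.example.OrderController.create(OrderController.java:18)
Caused by: java.lang.IllegalStateException: no stock
2024-05-01 10:00:01.456 INFO  c.e.OrderService - retrying

is shipped as two events: the first contains the ERROR line together with the whole stack trace, the second contains only the INFO line.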

Filter configuration

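A rough sketch of a typical filter setup, assuming the goal is to drop DEBUG lines at the input and strip a couple of metadata fields and noisy events with processors; the field names and patterns are illustrative only:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /java/logs/*/*.log
  exclude_lines: ['^DEBUG', '^DBG']        # drop matching lines before they are sent

processors:
  - drop_fields:
      fields: ["agent.ephemeral_id", "ecs.version"]   # remove fields you do not need
  - drop_event:
      when:
        contains:
          message: "health-check"                     # discard noisy health-check events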

Final configuration (1: multiline matching, reading log files directly, EFK setup)

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    # - /var/log/*.log
    - /java/logs/*/*.log
    - /java/logs/auth/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after

  # multiline.pattern: '^\['
  # multiline.negate: true
  # multiline.match: after

  # multiline.pattern: '^[[:space:]]+(at|\.{3})\b|^Caused by:'
  # multiline.negate: false
  # multiline.match: after

  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== X-Pack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

#================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
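After starting Filebeat with this EFK configuration, a quick sanity check (assuming Elasticsearch is on localhost:9200 as configured above) is to list the indices and look for a filebeat-* index:

curl -s 'http://localhost:9200/_cat/indices?v' | grep filebeat
# a filebeat-7.6.2-* index appearing here means events are being ingested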

Final configuration (2: multiline matching, reading log files directly, ELFK setup)

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    # - /var/log/*.log
    - /java/logs/*/*.log
    - /java/logs/auth/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after

  # multiline.pattern: '^\['
  # multiline.negate: true
  # multiline.match: after

  # multiline.pattern: '^[[:space:]]+(at|\.{3})\b|^Caused by:'
  # multiline.negate: false
  # multiline.match: after

  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  # hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== X-Pack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

#================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
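For the ELFK variant, Logstash has to listen for the beats protocol on port 5044 and forward events to Elasticsearch. A minimal pipeline sketch (the index name and hosts are assumptions; adapt them to your environment):

input {
  beats {
    port => 5044                             # matches output.logstash.hosts in filebeat.yml
  }
}
filter {
  # optional grok/date filters to parse the merged Java log events
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "java-logs-%{+YYYY.MM.dd}"      # assumed index naming
  }
  stdout { codec => rubydebug }              # convenient while testing
}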

1. Console output never worked; I assumed the whole setup was broken, but redeploying on another Linux machine gave the same result.
2. Output to ES did not work at first either. It finally succeeded with the most basic filebeat.yml; the real problem was the paths entry pointing at my project's log directory. Compare it against the examples carefully: the spacing and format must be exact. Type it in by hand, bit by bit, instead of copy-pasting.
3. Huge pitfall: when started with nohup and /dev/null, Filebeat kept exiting on its own. According to what I found, after starting it with nohup you must type exit before closing Xshell (easy to forget, and someone else may close the session by mistake). Another workaround is an extra configuration file (also not great). The best fix is to append one parameter, disown, to the nohup command line: it removes the started process from the current shell's job list, so the process no longer exits when that shell is closed.
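An alternative that sidesteps the nohup/disown problem entirely is to run Filebeat as a systemd service. A minimal unit file sketch (the paths follow the tarball layout used above and are assumptions; the official RPM/DEB packages install their own unit):

# /etc/systemd/system/filebeat.service
[Unit]
Description=Filebeat log shipper
After=network.target

[Service]
ExecStart=/usr/local/filebeat/filebeat-7.6.2-linux-x86_64/filebeat -e -c /usr/local/filebeat/filebeat-7.6.2-linux-x86_64/filebeat.yml
Restart=always

[Install]
WantedBy=multi-user.target

# then: systemctl daemon-reload && systemctl enable --now filebeat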

IV. A comprehensive case (theory)

Case: ELFK across multiple VMs, plus Redis and TCP inputs

The setup, step by step:

  • Logstash configuration 1, refined gradually in later steps; a stdout output makes it easy to verify the pipeline while testing.
  • Logstash configuration 2.
  • Filebeat's redis output configuration.
  • Filebeat's beats configuration.
  • The TCP source is read by Logstash directly, so no Filebeat configuration is needed for it.
  • Kibana ends up showing the 3 indices directly.
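A rough sketch of the Filebeat redis output used in such a setup (the host, key, and other values are illustrative assumptions):

output.redis:
  hosts: ["192.168.1.100:6379"]     # assumed Redis host
  # password: "your_password"
  key: "filebeat-java-logs"         # the Redis list Logstash reads from
  db: 0
  timeout: 5

On the Logstash side, a matching redis input reads from the same key and forwards the events to Elasticsearch.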
