Installing and Deploying Elasticsearch, Logstash, and Kibana (the ELK Stack)

Introduction

In today's digital era, the rapid growth of information confronts organizations of every kind with the challenge of processing and analyzing massive volumes of data. Against this backdrop, the ELK Stack (Elasticsearch, Logstash, and Kibana) has become a go-to open-source toolset for data management, search, and visualization. Whether you are monitoring logs, analyzing data in real time, or building dashboards to track business metrics, the ELK Stack offers a one-stop solution.

Each component of the ELK Stack plays a key role:

  • Elasticsearch: a distributed search and analytics engine that stores, searches, and analyzes huge volumes of data efficiently. Its powerful full-text search and distributed architecture make it possible to locate the information you need quickly, even across massive datasets.
  • Logstash: a data-processing engine for collecting, transforming, and shipping data. It ingests data from a wide range of sources, processes it, and forwards it to Elasticsearch or other destinations. Whether the input is logs, event data, or metrics, Logstash can normalize it and deliver it accurately to the right place.
  • Kibana: the visualization layer of the ELK Stack. Kibana provides an intuitive, user-friendly interface for exploring, analyzing, and presenting data through dashboards, charts, and visualizations, so even people without deep data-analysis expertise can extract valuable insights.

This document walks through installing, configuring, and using the ELK Stack.

The system environment is as follows:

OS: Ubuntu 20.04 LTS

Hardware: 8 CPU cores, 12 GB RAM, 500 GB disk

Installing Java

sudo apt-get update
# Install the JDK that matches your system; check the installed version with java --version
sudo apt install openjdk-16-jre-headless

Adding the Elastic repository

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
sudo sh -c 'echo "deb https://artifacts.elastic.co/packages/8.x/apt stable main" > /etc/apt/sources.list.d/elastic-8.x.list'
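Note: apt-key is deprecated on newer Ubuntu releases. If it is unavailable on your system, a keyring-based variant along the lines of Elastic's documented install instructions should work instead:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elastic-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/elastic-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list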

Updating the package index

sudo apt-get update

Installing Elasticsearch

sudo apt-get install elasticsearch

Once installation completes, enable the service at boot and start it:

sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service && sudo systemctl start elasticsearch.service

Generate a password for logging in to Elasticsearch. The username is elastic; a random password is printed to the screen.

cd /usr/share/elasticsearch && sudo bin/elasticsearch-reset-password -u elastic
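To confirm the node is up, you can query it with the password just generated; the CA path below is where the 8.x security auto-configuration places the HTTP CA on deb installs:

sudo curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://localhost:9200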

Be sure to back up the original Elasticsearch configuration file, in case you need to restore it later.

sudo cp /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.bak

Generate an enrollment token; it is needed for verification the first time Kibana connects. (Enrollment tokens expire after 30 minutes; rerun the command to generate a fresh one if needed.)

cd /usr/share/elasticsearch && sudo bin/elasticsearch-create-enrollment-token --scope kibana

Installing Kibana

sudo apt install kibana
sudo systemctl enable kibana.service && sudo systemctl start kibana.service
# If the service was already running, restart it instead:
sudo systemctl restart kibana.service
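After starting, check that the service is healthy and listening on Kibana's default port, 5601:

systemctl status kibana.service
sudo ss -ltnp | grep 5601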

Generate the verification code required after submitting the enrollment token:

cd /usr/share/kibana/ && sudo bin/kibana-verification-code

Note: the "L" in ELK refers to Logstash, but this guide installs Filebeat as the collection tool instead.

Logstash and Filebeat are both tools for collecting and shipping data, but they differ in functionality and usage. The main differences are:

Logstash:

Logstash is a powerful engine for collecting, transforming, and shipping data. Its main job is to gather data from different sources (logs, events, metrics, and so on), filter, parse, and transform it, and then send the processed data to a target such as Elasticsearch, another storage system, or an analysis tool. Its main characteristics include:

  1. Data-processing power: Logstash offers a rich set of plugins for parsing, filtering, normalizing, and otherwise transforming data, so it arrives at its destination properly processed.
  2. Diverse data sources: Logstash can ingest from many kinds of sources, including log files, network traffic, and message queues, making it useful across a wide range of data types and formats.
  3. Data delivery: Logstash can send processed data to many targets, such as Elasticsearch, files, or message queues, to suit different storage and analysis needs.
  4. Flexibility: Logstash configuration is very flexible; you define each stage of the data flow in a configuration file, enabling highly customized processing pipelines.

Filebeat:

Filebeat is a lightweight log shipper designed specifically to collect log data from the filesystem and forward it to central storage or an analysis system. Its main characteristics include:

  1. Lightweight: Filebeat is designed to be light on resources, making it suitable for environments with limited capacity.
  2. Real-time: Filebeat monitors log files for changes and ships new content as soon as it is written, ensuring near-real-time delivery.
  3. Simplified processing: Filebeat focuses on collecting and forwarding log data; its processing capabilities are limited, and it cannot perform the complex parsing and transformation that Logstash can.
  4. Easy deployment: thanks to its light footprint, Filebeat suits distributed deployments and scales out easily.

In short, Logstash suits scenarios that require complex processing and transformation, while Filebeat suits lightweight, real-time log shipping. In practice, you can choose Logstash, Filebeat, or a combination of the two, depending on your needs.
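To make the contrast concrete, below is a minimal Logstash pipeline sketch (not part of this deployment): Filebeat would ship raw lines to port 5044, the Beats input's default, a grok filter parses them, and the result goes to Elasticsearch. The file name and the assumption of Apache/Nginx combined-format logs are purely illustrative.

# /etc/logstash/conf.d/beats-pipeline.conf (illustrative sketch)
input {
  beats {
    port => 5044                      # default port for the Beats input
  }
}
filter {
  grok {
    # parse combined-format access-log lines into structured fields
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
  }
}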

Installing the Filebeat collector

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.9.0-amd64.deb
sudo dpkg -i filebeat-8.9.0-amd64.deb
sudo systemctl enable filebeat && sudo systemctl start filebeat
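Before relying on Filebeat, point its output at the secured Elasticsearch node in /etc/filebeat/filebeat.yml. A minimal excerpt, assuming Filebeat runs on the same host as Elasticsearch; the password is a placeholder for the one generated earlier:

# /etc/filebeat/filebeat.yml (excerpt; values are placeholders)
output.elasticsearch:
  hosts: ["https://localhost:9200"]
  username: "elastic"
  password: "<password-generated-earlier>"
  ssl.certificate_authorities: ["/etc/elasticsearch/certs/http_ca.crt"]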

After installation, check that each service's status is healthy; then we move on to configuration.

Elasticsearch configuration

vi /etc/elasticsearch/elasticsearch.yml

The main settings here are the bind address and the HTTP port:

network.host: 127.0.0.1
http.port: 9200
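Note that the security auto-configuration appended at the end of this file also sets http.host: 0.0.0.0, and an explicit http.host takes precedence over network.host for the HTTP layer, so the REST API remains reachable from other machines despite the 127.0.0.1 setting above.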

The complete configuration is shown below for reference.

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
#network.host: 192.168.0.1
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
network.host: 127.0.0.1
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false

#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically
# generated to configure Elasticsearch security features on 09-08-2023 02:38:11
#
# --------------------------------------------------------------------------------

# Enable security features
xpack.security.enabled: true

xpack.security.enrollment.enabled: true

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12

# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12

# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
cluster.initial_master_nodes: ["ubuntu"]

# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0

#logger.org.elasticsearch: "ERROR"

# Allow other nodes to join the cluster from anywhere
# Connections are encrypted and mutually authenticated
#transport.host: 0.0.0.0

#----------------------- END SECURITY AUTO CONFIGURATION -------------------------
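After editing, restart Elasticsearch so the changes take effect:

sudo systemctl restart elasticsearch.service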

Kibana configuration

Edit /etc/kibana/kibana.yml (the default location on deb installs); the full file is shown below for reference.

# For more configuration options see the configuration guide for Kibana in
# https://www.elastic.co/guide/index.html

# =================== System: Kibana Server ===================
# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "123.58.97.169"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# Defaults to `false`.
#server.rewriteBasePath: false

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayload: 1048576

# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"

# =================== System: Kibana Server (Optional) ===================
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# =================== System: Elasticsearch ===================
# The URLs of the Elasticsearch instances to use for all your queries.
#elasticsearch.hosts: ["http://localhost:9200"]

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana_system"
#elasticsearch.password: "pass"

# Kibana can also authenticate to Elasticsearch via "service account tokens".
# Service account tokens are Bearer style tokens that replace the traditional username/password based configuration.
# Use this token instead of a username/password.
# elasticsearch.serviceAccountToken: "my_token"

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# The maximum number of sockets that can be used for communications with elasticsearch.
# Defaults to `Infinity`.
#elasticsearch.maxSockets: 1024

# Specifies whether Kibana should use compression for communications with elasticsearch
# Defaults to `false`.
#elasticsearch.compression: false

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# =================== System: Elasticsearch (Optional) ===================
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# =================== System: Logging ===================
# Set the value of this setting to off to suppress all logging output, or to debug to log everything. Defaults to 'info'
#logging.root.level: debug

# Enables you to specify a file where Kibana stores log output.
logging:
  appenders:
    file:
      type: file
      fileName: /var/log/kibana/kibana.log
      layout:
        type: json
  root:
    appenders:
      - default
      - file

# Logs queries sent to Elasticsearch.
#logging.loggers:
#  - name: elasticsearch.query
#    level: debug

# Logs http responses.
#logging.loggers:
#  - name: http.server.response
#    level: debug

# Logs system usage information.
#logging.loggers:
#  - name: metrics.ops
#    level: debug

# =================== System: Other ===================
# The path where Kibana stores persistent data not saved in Elasticsearch. Defaults to data
#path.data: data

# Specifies the path where Kibana creates the process ID file.
pid.file: /run/kibana/kibana.pid

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000ms.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English (default) "en", Chinese "zh-CN", Japanese "ja-JP", French "fr-FR".
#i18n.locale: "en"

# =================== Frequently used (Optional)===================

# =================== Saved Objects: Migrations ===================
# Saved object migrations run at startup. If you run into migration-related issues, you might need to adjust these settings.

# The number of documents migrated at a time.
# If Kibana can't start up or upgrade due to an Elasticsearch `circuit_breaking_exception`,
# use a smaller batchSize value to reduce the memory pressure. Defaults to 1000 objects per batch.
#migrations.batchSize: 1000

# The maximum payload size for indexing batches of upgraded saved objects.
# To avoid migrations failing due to a 413 Request Entity Too Large response from Elasticsearch.
# This value should be lower than or equal to your Elasticsearch cluster's `http.max_content_length`
# configuration option. Default: 100mb
#migrations.maxBatchSizeBytes: 100mb

# The number of times to retry temporary migration failures. Increase the setting
# if migrations fail frequently with a message such as `Unable to complete the [...] step after
# 15 attempts, terminating`. Defaults to 15
#migrations.retryAttempts: 15

# =================== Search Autocomplete ===================
# Time in milliseconds to wait for autocomplete suggestions from Elasticsearch.
# This value must be a whole number greater than zero. Defaults to 1000ms
#unifiedSearch.autocomplete.valueSuggestions.timeout: 1000

# Maximum number of documents loaded by each shard to generate autocomplete suggestions.
# This value must be a whole number greater than zero. Defaults to 100_000
#unifiedSearch.autocomplete.valueSuggestions.terminateAfter: 100000

i18n.locale: "zh-CN"

# This section was automatically generated during setup.
elasticsearch.hosts: ['https://123.58.97.169:9200']
elasticsearch.serviceAccountToken: AAEAAWVsYXN0aWMva2liYW5hL2Vucm9sbC1wcm9jZXNzLXRva2VuLTE2OTE1NDk3NTYyNDE6NE55LU1IdVFRRTY0UkVpUloyZDhQdw
elasticsearch.ssl.certificateAuthorities: [/var/lib/kibana/ca_1691549757740.crt]
xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: ['https://123.58.97.169:9200'], ca_trusted_fingerprint: 27991095e8dddf17d06a00968bd1b693fc906ea2d52d9f5563134505625791f1}]
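After saving, restart Kibana, then browse to the server.host address on port 5601 and sign in as the elastic user:

sudo systemctl restart kibana.service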

Common questions

1. Why doesn't the dashboard display after I add panels?

A: After confirming that the index configuration is correct, don't forget to run "sudo filebeat setup" to initialize the dashboards. Running this initialization fixes the problem.
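If you only need part of the setup, Filebeat's setup command can also load individual assets; these flags exist in Filebeat 8.x:

sudo filebeat setup --dashboards   # load only the Kibana dashboards
sudo filebeat setup --pipelines    # load only the ingest pipelines for enabled modules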

2. After installing Filebeat and enabling and configuring the system module, clicking "Check data" on the module status page shows "Not connected".

A: This happens when the filesets in Filebeat's modules.d/system.yml are misconfigured, i.e. the module cannot find the log file paths. After correcting the configuration, run systemctl status filebeat to check the service state and look for error logs.
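For reference, a minimal shape for that file is sketched below; the var.paths values are typical Ubuntu locations and must match where your system actually writes its logs. (The module itself is enabled with sudo filebeat modules enable system.)

# /etc/filebeat/modules.d/system.yml (illustrative example)
- module: system
  syslog:
    enabled: true
    var.paths: ["/var/log/syslog*"]
  auth:
    enabled: true
    var.paths: ["/var/log/auth.log*"]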

3. Why can't I delete an index under Index Management?

A: To delete an index, first stop the data source service. With Filebeat, for example, run systemctl stop filebeat first, then open Data Streams under Index Management and delete the data stream; this removes the indices backing it.
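The same cleanup can be scripted against the data stream API. A sketch, assuming the default Filebeat 8.x data stream name (filebeat-8.9.0 here; check the actual name under Index Management first):

sudo systemctl stop filebeat
sudo curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic -X DELETE "https://localhost:9200/_data_stream/filebeat-8.9.0"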
