ELK

  • ELK overview
  • Preliminary preparation
    • 1. Set hostnames
    • 2. Configure /etc/hosts
    • 3. Check that firewalld and SELinux are disabled
    • 4. Time synchronization
  • Elasticsearch deployment
    • Overview
    • 1. Install the Java package
    • 2. Install the package and edit the configuration file
  • Elasticsearch cluster deployment
  • Basic Elasticsearch API operations
    • 1. RESTful API format
    • 2. View node information
    • 3. View indices and create an index
    • 4. Delete an index
    • 5. Import data
    • 6. Query the bank index (using a query string)
    • 7. Query the bank index (using a JSON body)
    • 8. match_all query
    • 9. from, size query
    • 10. Specify offset and number of results
    • 11. Select the fields returned
    • 12. match query
    • 13. Basic search against a specific field or set of fields
    • 14. bool query
    • 15. range query
  • elasticsearch-head
    • Install Node.js
    • Install es-head
    • Update the ES cluster configuration and restart the service
  • Logstash deployment
    • Deployment
    • Verification method 1
    • Verification method 2
  • Log collection
    • Collect the messages log
    • Collect multiple log sources
  • Kibana deployment
    • Deployment
    • Localization

ELK overview

Operations staff need precise visibility into system and business logs in order to analyze system and business state. Logs are scattered across different servers, and the traditional approach of logging in to each server in turn to inspect them is both tedious and inefficient. We therefore need a centralized log-management tool that collects the logs from all servers into one place for analysis and presentation.

Preliminary preparation

1. Set hostnames

[root@node1 ~]# hostnamectl hostname vm1.example.com
[root@node1 ~]# bash
[root@vm1 ~]# 
[root@node1 ~]# hostnamectl hostname vm2.example.com
[root@node1 ~]# bash
[root@vm2 ~]# 
[root@node1 ~]# hostnamectl hostname v3.example.com
[root@node1 ~]# bash
[root@v3 ~]# 

2. Configure /etc/hosts

[root@vm1 ~]# vim /etc/hosts 
[root@vm1 ~]# cat /etc/hosts 
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.30	vm1.example.com	kibana
192.168.100.80	vm2.example.com	elasticsearch
192.168.100.90	vm3.example.com	logstash
[root@vm1 ~]# scp /etc/hosts root@192.168.100.80:/etc/hosts
The authenticity of host '192.168.100.80 (192.168.100.80)' can't be established.
ED25519 key fingerprint is SHA256:Ci2qzv2Hvt2jld5Q8LBu35qRbAnKzC3EaGZRV6Htsw0.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.100.80' (ED25519) to the list of known hosts.
root@192.168.100.80's password: 
hosts                                                    100%  281   249.2KB/s   00:00    
[root@vm1 ~]# scp /etc/hosts root@192.168.100.90:/etc/hosts
The authenticity of host '192.168.100.90 (192.168.100.90)' can't be established.
ED25519 key fingerprint is SHA256:Ci2qzv2Hvt2jld5Q8LBu35qRbAnKzC3EaGZRV6Htsw0.
This host key is known by the following other names/addresses:
    ~/.ssh/known_hosts:1: 192.168.100.80
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.100.90' (ED25519) to the list of known hosts.
root@192.168.100.90's password: 
hosts                                                    100%  281   681.0KB/s   00:00    
[root@vm1 ~]# 
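
The three mappings above can also be generated by a short loop instead of hand-editing each entry. A minimal sketch (the ip:hostname:alias triples are the ones used in this lab; adjust for your environment):

```shell
# Print the hosts entries for this lab; append the output to /etc/hosts.
nodes="192.168.100.30:vm1.example.com:kibana 192.168.100.80:vm2.example.com:elasticsearch 192.168.100.90:vm3.example.com:logstash"
for n in $nodes; do
  ip=${n%%:*}             # part before the first colon
  rest=${n#*:}            # hostname:alias
  printf '%s\t%s\t%s\n' "$ip" "${rest%%:*}" "${rest#*:}"
done
```

The output can be appended with `sudo tee -a /etc/hosts` and then distributed with scp exactly as shown above.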

3. Check that firewalld and SELinux are disabled

[root@vm1 ~]# systemctl status firewalld
○ firewalld.service - firewalld - dynamic firewall daemon
     Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; preset: enabled)
     Active: inactive (dead)
       Docs: man:firewalld(1)
[root@vm1 ~]# getenforce 
Disabled
[root@vm1 ~]# 
[root@vm2 ~]# yum -y install lrzsz tar net-tools wget

4. Time synchronization

[root@vm1 ~]# yum -y install chrony
[root@vm1 ~]# systemctl restart chronyd
[root@vm1 ~]# systemctl enable chronyd
[root@vm1 ~]# timedatectl
               Local time: Mon 2024-08-19 16:02:19 CST
           Universal time: Mon 2024-08-19 08:02:19 UTC
                 RTC time: Mon 2024-08-19 08:02:19
                Time zone: Asia/Shanghai (CST, +0800)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no
[root@vm1 ~]# hwclock -w

Elasticsearch deployment

Overview

Elasticsearch (ES) is an open-source distributed search engine and also a distributed document database, so it
provides storage for large volumes of data together with fast search and analytics.

1. Install the Java package

[root@vm1 ~]# yum -y install java-1.8.0*
[root@vm2 ~]# yum -y install java-1.8.0*
[root@vm3 ~]# yum -y install java-1.8.0*
[root@vm1 ~]# java -version
openjdk version "1.8.0_422"
OpenJDK Runtime Environment (build 1.8.0_422-b05)
OpenJDK 64-Bit Server VM (build 25.422-b05, mixed mode)

2. Install the package and edit the configuration file

[root@vm2 ~]# ls
anaconda-ks.cfg  -e  elasticsearch-6.5.2.rpm  -i.bak
[root@vm2 ~]# rpm -ivh elasticsearch-6.5.2.rpm 
[root@vm2 ~]# 
[root@vm2 ~]# cd /etc/elasticsearch/
[root@vm2 elasticsearch]# ls
elasticsearch.keystore  jvm.options        role_mapping.yml  users
elasticsearch.yml       log4j2.properties  roles.yml         users_roles
[root@vm2 elasticsearch]# vim elasticsearch.yml 
cluster.name: elk-cluster 
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0 
http.port: 9200
[root@vm2 elasticsearch]# systemctl restart elasticsearch
[root@vm2 elasticsearch]# systemctl enable elasticsearch
Created symlink /etc/systemd/system/multi-user.target.wants/elasticsearch.service → /usr/lib/systemd/system/elasticsearch.service.
[root@vm2 elasticsearch]# ss -anlt
State    Recv-Q   Send-Q       Local Address:Port       Peer Address:Port   Process   
LISTEN   0        128                0.0.0.0:22              0.0.0.0:*                
LISTEN   0        4096                     *:9300                  *:*                
LISTEN   0        128                   [::]:22                 [::]:*                
LISTEN   0        4096                     *:9200                  *:*   
[root@vm2 elasticsearch]# curl http://192.168.100.80:9200/_cluster/health?pretty
{
  "cluster_name" : "elk-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
[root@vm2 elasticsearch]# 
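
In scripts, the health response above can be reduced to just the status field. A minimal sed sketch against a captured response (the sample variable mirrors the output above; with a live cluster you would feed in `curl -s http://192.168.100.80:9200/_cluster/health` instead):

```shell
# Sample response, same shape as the curl output shown above
health='{"cluster_name":"elk-cluster","status":"green","timed_out":false,"number_of_nodes":1}'
# Extract the value of the "status" field (green / yellow / red)
status=$(printf '%s' "$health" | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p')
echo "cluster status: $status"
```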

Elasticsearch cluster deployment

vm1:
[root@vm1 ~]# ls
anaconda-ks.cfg  -e  elasticsearch-6.5.2.rpm  -i.bak
[root@vm1 ~]# rpm -ivh elasticsearch-6.5.2.rpm 
warning: elasticsearch-6.5.2.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Verifying...                          ################################# [100%]
Preparing...                          ################################# [100%]
Creating elasticsearch group... OK
Creating elasticsearch user... OK
Updating / installing...
   1:elasticsearch-0:6.5.2-1          ################################# [100%]
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
 sudo systemctl daemon-reload
 sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
 sudo systemctl start elasticsearch.service
Created elasticsearch keystore in /etc/elasticsearch
/usr/lib/tmpfiles.d/elasticsearch.conf:1: Line references path below legacy directory /var/run/, updating /var/run/elasticsearch → /run/elasticsearch; please update the tmpfiles.d/ drop-in file accordingly.
------------------------------------------------------------------------
[root@vm1 ~]# vim /etc/elasticsearch/elasticsearch.yml 
------------------------------------------------------------------------
cluster.name: elk-cluster
node.name: 192.168.100.30          # this node's IP or hostname
node.master: false                 # not a master-eligible node
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.100.30", "192.168.100.80"]   # IPs of all cluster nodes
------------------------------------------------------------------------
[root@vm1 ~]# systemctl restart elasticsearch
[root@vm1 ~]# systemctl enable elasticsearch
Created symlink /etc/systemd/system/multi-user.target.wants/elasticsearch.service → /usr/lib/systemd/system/elasticsearch.service.
[root@vm1 ~]# 

vm2:
[root@vm2 elasticsearch]# vim elasticsearch.yml 
-------------------------------------------------------------------
cluster.name: elk-cluster
node.name: 192.168.100.80          # this node's IP or hostname
node.master: true                  # master-eligible node
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.100.30", "192.168.100.80"]   # IPs of all cluster nodes
-----------------------------------------------------------------
[root@vm2 elasticsearch]# systemctl restart elasticsearch


Basic Elasticsearch API operations

1. RESTful API format

RESTful API format: curl -X<verb> '<protocol>://<host>:<port>/<path>?<query_string>' -d '<body>'
Parameter     Description
verb          HTTP method, e.g. GET, POST, PUT, HEAD, DELETE
host          hostname of any node in the ES cluster
port          ES HTTP service port, 9200 by default
path          index path
query_string  optional query parameters; e.g. ?pretty returns formatted JSON
-d            carries a JSON-format request body (even with GET)
body          the JSON request body you write yourself
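
The parts in the table assemble into one command line. A sketch that builds the cluster health request used earlier from the individual components (the values are just examples):

```shell
verb=GET
protocol=http
host=192.168.100.80        # any node in the cluster
port=9200
path=_cluster/health
query_string=pretty
# Assemble the pieces into the final command line
cmd="curl -X${verb} ${protocol}://${host}:${port}/${path}?${query_string}"
echo "$cmd"
```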

2. View node information

[root@vm2 elasticsearch]# curl http://192.168.100.80:9200/_cat/nodes?v
ip             heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.100.30           13          95   0    0.00    0.00     0.00 di        -      192.168.100.30
192.168.100.80           11          96   0    0.00    0.00     0.00 mdi       *      192.168.100.80


3. View indices and create an index

[root@vm2 elasticsearch]# curl http://192.168.100.80:9200/_cat/indices?v
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size  // no indices yet
[root@vm2 elasticsearch]# curl -X PUT http://192.168.100.80:9200/nginx_access_log 
{"acknowledged":true,"shards_acknowledged":true,"index":"nginx_access_log"}
[root@vm2 elasticsearch]# 
[root@vm2 elasticsearch]# curl http://192.168.100.80:9200/_cat/indices?v
health status index            uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   nginx_access_log PGrIVaIERO2IizDOKL9b9A   5   1          0            0      2.2kb          1.1kb


4. Delete an index
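
No transcript is given for this step in the original; a minimal sketch, mirroring the PUT that created nginx_access_log above (a successful DELETE returns {"acknowledged":true}):

```shell
# Against the live cluster you would run:
#   curl -X DELETE http://192.168.100.80:9200/nginx_access_log
# Checking the acknowledgement from a captured response (sample shape):
resp='{"acknowledged":true}'
case "$resp" in
  *'"acknowledged":true'*) echo "index deleted" ;;
  *)                       echo "delete failed" ;;
esac
```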

5. Import data

[root@vm2 ~]# ls
accounts.json  anaconda-ks.cfg  -e  elasticsearch-6.5.2.rpm  -i.bak
[root@vm2 ~]#  curl -H "Content-Type: application/json" -XPOST "192.168.100.80:9200/bank/_doc/_bulk?pretty&refresh" --data-binary "@accounts.json"
[root@vm2 ~]# curl "192.168.100.80:9200/_cat/indices?v"
health status index            uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   nginx_access_log PGrIVaIERO2IizDOKL9b9A   5   1          0            0      2.5kb          1.2kb
green  open   bank             RZH-6IBNSOmQpduyCHSRKA   5   1       1000            0    965.6kb        482.5kb
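
The accounts.json file used above is in the bulk format: each document is preceded by an action line. A two-line illustrative sample (the document values here are made up for illustration):

```shell
# Write a minimal bulk payload; every document needs an action line before it.
cat > /tmp/mini-bulk.json <<'EOF'
{"index":{"_id":"1"}}
{"account_number":1,"balance":39225,"firstname":"Amber"}
EOF
# It would be sent the same way as accounts.json:
#   curl -H "Content-Type: application/json" -XPOST "192.168.100.80:9200/bank/_doc/_bulk?pretty&refresh" --data-binary "@/tmp/mini-bulk.json"
wc -l < /tmp/mini-bulk.json
```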

6. Query the bank index (using a query string)

[root@vm2 ~]# curl -X GET "192.168.100.80:9200/bank/_search?q=*&sort=account_number:asc&pretty"
The default result set is 10 documents.
_search is the family of APIs used to run queries.
q=* matches every document in the index.
sort=account_number:asc sorts the results by account_number in ascending order.
pretty formats the returned JSON.

7. Query the bank index (using a JSON body)

[root@vm2 ~]# curl -X GET "192.168.100.80:9200/bank/_search" -H 'content-Type:application/json' -d'
> {
> “query”: { "match_all": {} },
> "sort": [ 
> { "account_number": "asc"}
> ]
> }
> '
{"error":{"root_cause":[{"type":"json_parse_exception","reason":"Unexpected character ('“' (code 8220 / 0x201c)): was expecting double-quote to start field name\n at [Source: org.elasticsearch.transport.netty4.ByteBufStreamInput@6738f56b; line: 3, column: 4]"}],"type":"json_parse_exception","reason":"Unexpected character ('“' (code 8220 / 0x201c)): was expecting double-quote to start field name\n at [Source: org.elasticsearch.transport.netty4.ByteBufStreamInput@6738f56b; line: 3, column: 4]"},"status":500}
[root@vm2 ~]# 

Note: the json_parse_exception here is caused by the curly quotes around “query” in the request body; retype them as straight double quotes (") and the query runs normally.
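
Errors like the json_parse_exception above can be caught before sending by validating the body locally. A sketch using the Python standard library's json.tool (assumes python3 is installed on the node):

```shell
# The intended body from the query above, with straight quotes
body='{"query":{"match_all":{}},"sort":[{"account_number":"asc"}]}'
if echo "$body" | python3 -m json.tool >/dev/null 2>&1; then
  echo "body is valid JSON"
else
  echo "body is NOT valid JSON"
fi
```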

8. match_all query

Matches all documents; this is the default query.

[root@vm2 ~]# curl -X GET "192.168.100.80:9200/bank/_search?pretty" -H "content-Type:application/json" -d'
> {
> "query": { "match_all": {} }
> }
> '
# query specifies what to search for
# match_all is the type of query
# match_all simply searches every document in the specified index

9. from, size query

Besides query, other parameters can be passed to shape the results, such as sort (seen earlier) and size (used next).

[root@vm2 ~]# curl -X GET "192.168.100.80:9200/bank/_search?pretty" -H 'content-Type:application/json' -d'
{
"query":{ "match_all": {} },
"size":1
}
'

10. Specify offset and number of results

[root@vm2 ~]# curl -X GET "192.168.100.80:9200/bank/_search?pretty" -H 'content-Type:application/json' -d'
> {
>   "query": { "match_all": {} },
>   "from": 0,
>   "size": 2
> }
> '

11. Select the fields returned

Return only the specified fields from _source.

[root@vm2 ~]# curl -X GET "192.168.100.80:9200/bank/_search?pretty" -H 'content-Type:application/json' -d'
> {
>   "query": { "match_all": {} },
>   "_source": ["account_number","balance"]
> }
> '

12. match query

[root@vm2 ~]# curl -X GET "192.168.100.80:9200/bank/_search?pretty" -H 'content-Type:application/json' -d'
> {
>   "query": { "match": {"account_number": 20} }
> }
> '

13. Basic search against a specific field or set of fields

[root@vm2 ~]# curl -X GET "192.168.100.80:9200/bank/_search?pretty" -H 'content-Type:application/json' -d'
> {
>   "query": { "match": {"address": "mill"} }
> }
> '

14. bool query

With bool must, every listed clause has to match.
The query below finds all accounts whose address contains both mill and lane.

[root@vm2 ~]# curl -X GET "192.168.100.80:9200/bank/_search?pretty" -H 'content-Type:application/json' -d'
> {
>   "query": {
>     "bool": {
>       "must": [
>         { "match": {"address": "mill"} },
>         { "match": {"address": "lane"} }
>       ]
>     }
>   }
> }
> '

15. range query

Matches numbers or dates within a specified interval.
Operators: gt (greater than), gte (greater than or equal to), lt (less than), lte (less than or equal to).

[root@vm2 ~]# curl -X GET "192.168.100.80:9200/bank/_search?pretty" -H 'Content-Type:application/json' -d'
> {
>   "query": {
>     "bool": {
>       "must": { "match_all": {} },
>       "filter": {
>         "range": {
>           "balance": {
>             "gte": 20000,
>             "lte": 30000
>           }
>         }
>       }
>     }
>   }
> }
> '

elasticsearch-head

elasticsearch-head is a tool for cluster management, data visualization, CRUD operations, and visual query building. Installation
changed significantly after ES 5: in ES 2 you could simply run plugin install xxxx from the bin directory, but
from ES 5 onward you must first install Node.js and then launch Head through it.

Install Node.js

[root@vm1 ~]# ls
anaconda-ks.cfg  -e  elasticsearch-6.5.2.rpm  -i.bak  node-v10.24.1-linux-x64.tar.xz
[root@vm1 ~]# tar xf node-v10.24.1-linux-x64.tar.xz -C /usr/local/
[root@vm1 ~]# ls /usr/local/
bin  etc  games  include  lib  lib64  libexec  node-v10.24.1-linux-x64  sbin  share  src
[root@vm1 ~]# mv /usr/local/node-v10.24.1-linux-x64/  /usr/local/nodejs
[root@vm1 ~]# ls /usr/local/
bin  etc  games  include  lib  lib64  libexec  nodejs  sbin  share  src
[root@vm1 ~]# ln -s /usr/local/nodejs/bin/npm /bin/npm
[root@vm1 ~]# ln -s /usr/local/nodejs/bin/node /bin/node
[root@vm1 ~]# 

Install es-head

[root@vm2 bin]# yum -y install unzip
[root@vm2 ~]# ls
accounts.json    -e                       elasticsearch-head-master.zip  node-v10.24.1-linux-x64.tar.xz
anaconda-ks.cfg  elasticsearch-6.5.2.rpm  -i.bak
[root@vm2 ~]# unzip elasticsearch-head-master.zip
[root@vm2 ~]# cd elasticsearch-head-master/
[root@vm2 elasticsearch-head-master]# npm install -g grunt-cli --registry=http://registry.npm.taobao.org
## --registry=http://registry.npm.taobao.org is only needed when access to the default registry is slow
added 56 packages in 5s
5 packages are looking for funding
  run `npm fund` for details
npm notice 
npm notice New major version of npm available! 8.19.4 -> 10.8.2
npm notice Changelog: https://github.com/npm/cli/releases/tag/v10.8.2
npm notice Run npm install -g npm@10.8.2 to update!
npm notice 
[root@vm2 elasticsearch-head-master]# npm install --registry=http://registry.npm.taobao.org


Fixing the install error:
[root@vm2 elasticsearch-head-master]# npm install phantomjs-prebuilt@2.1.16 --ignore-scripts
npm WARN EBADENGINE Unsupported engine {
npm WARN EBADENGINE   package: 'karma@1.3.0',
npm WARN EBADENGINE   required: { node: '0.10 || 0.12 || 4 || 5 || 6' },
npm WARN EBADENGINE   current: { node: 'v16.20.2', npm: '8.19.4' }
npm WARN EBADENGINE }
npm WARN EBADENGINE Unsupported engine {
npm WARN EBADENGINE   package: 'http2@3.3.7',
npm WARN EBADENGINE   required: { node: '>=0.12.0 <9.0.0' },
npm WARN EBADENGINE   current: { node: 'v16.20.2', npm: '8.19.4' }
npm WARN EBADENGINE }
npm WARN deprecated inflight@1.0.6: This module is not supported, and leaks memory. Do not use it. Check out lru-cache if you want a good and tested way to coalesce async requests by a key value, which is much more comprehensive and powerful.
npm WARN deprecated source-map-url@0.4.1: See https://github.com/lydell/source-map-url#deprecated
npm WARN deprecated rimraf@2.7.1: Rimraf versions prior to v4 are no longer supported
npm WARN deprecated rimraf@2.7.1: Rimraf versions prior to v4 are no longer supported
npm WARN deprecated rimraf@2.7.1: Rimraf versions prior to v4 are no longer supported
npm WARN deprecated urix@0.1.0: Please see https://github.com/lydell/urix#deprecated
npm WARN deprecated har-validator@5.1.5: this library is no longer supported
npm WARN deprecated resolve-url@0.2.1: https://github.com/lydell/resolve-url#deprecated
npm WARN deprecated json3@3.3.2: Please use the native JSON object instead of JSON 3
npm WARN deprecated rimraf@2.2.8: Rimraf versions prior to v4 are no longer supported
npm WARN deprecated glob@7.2.3: Glob versions prior to v9 are no longer supported
npm WARN deprecated glob@5.0.15: Glob versions prior to v9 are no longer supported
npm WARN deprecated glob@7.2.3: Glob versions prior to v9 are no longer supported
npm WARN deprecated glob@7.2.3: Glob versions prior to v9 are no longer supported
npm WARN deprecated source-map-resolve@0.5.3: See https://github.com/lydell/source-map-resolve#deprecated
npm WARN deprecated chokidar@1.7.0: Chokidar 2 will break on node v14+. Upgrade to chokidar 3 with 15x less dependencies.
npm WARN deprecated glob@7.1.7: Glob versions prior to v9 are no longer supported
npm WARN deprecated glob@7.0.6: Glob versions prior to v9 are no longer supported
npm WARN deprecated uuid@3.4.0: Please upgrade  to version 7 or higher.  Older versions may use Math.random() in certain circumstances, which is known to be problematic.  See https://v8.dev/blog/math-random for details.
npm WARN deprecated phantomjs-prebuilt@2.1.16: this package is now deprecated
npm WARN deprecated request@2.88.2: request has been deprecated, see https://github.com/request/request/issues/3142
npm WARN deprecated http2@3.3.7: Use the built-in module in node 9.0.0 or newer, instead
npm WARN deprecated json3@3.2.6: Please use the native JSON object instead of JSON 3
npm WARN deprecated coffee-script@1.10.0: CoffeeScript on NPM has moved to "coffeescript" (no hyphen)
npm WARN deprecated log4js@0.6.38: 0.x is no longer supported. Please upgrade to 6.x or higher.
npm WARN deprecated core-js@2.6.12: core-js@<3.23.3 is no longer maintained and not recommended for usage due to the number of issues. Because of the V8 engine whims, feature detection in old core-js versions could cause a slowdown up to 100x even if nothing is polyfilled. Some versions have web compatibility issues. Please, upgrade your dependencies to the actual version of core-js.
added 528 packages, and audited 529 packages in 33s
22 packages are looking for funding
  run `npm fund` for details
45 vulnerabilities (3 low, 7 moderate, 27 high, 8 critical)
To address issues that do not require attention, run:
  npm audit fix
To address all issues possible (including breaking changes), run:
  npm audit fix --force
Some issues need review, and may require choosing a different dependency.
Run `npm audit` for details.
[root@vm2 elasticsearch-head-master]# 
[root@vm2 elasticsearch-head-master]# npm install  --registry=http://registry.npm.taobao.org
[root@vm2 elasticsearch-head-master]# nohup npm run start &
[root@vm2 elasticsearch-head-master]# ss -anlt
State       Recv-Q       Send-Q             Local Address:Port             Peer Address:Port      Process      
LISTEN      0            511                      0.0.0.0:9100                  0.0.0.0:*                      
LISTEN      0            128                      0.0.0.0:22                    0.0.0.0:*                      
LISTEN      0            4096                           *:9300                        *:*                      
LISTEN      0            4096                           *:9200                        *:*                      
LISTEN      0            128                         [::]:22                       [::]:*                      
[root@vm2 elasticsearch-head-master]# 


Update the ES cluster configuration and restart the service

[root@vm1 ~]# vim /etc/elasticsearch/elasticsearch.yml
http.cors.enabled: true
http.cors.allow-origin: "*"
# add these two lines
[root@vm1 ~]# systemctl restart elasticsearch
[root@vm2 ~]# systemctl restart elasticsearch
[root@vm1 ~]# ss -anlt
State       Recv-Q       Send-Q             Local Address:Port             Peer Address:Port      Process      
LISTEN      0            128                      0.0.0.0:22                    0.0.0.0:*                      
LISTEN      0            4096                           *:9300                        *:*                      
LISTEN      0            4096                           *:9200                        *:*                      
LISTEN      0            128                         [::]:22                       [::]:*                      
[root@vm1 ~]# 
[root@vm2 ~]# ss -anlt
State       Recv-Q       Send-Q             Local Address:Port             Peer Address:Port      Process      
LISTEN      0            511                      0.0.0.0:9100                  0.0.0.0:*                      
LISTEN      0            128                      0.0.0.0:22                    0.0.0.0:*                      
LISTEN      0            4096                           *:9300                        *:*                      
LISTEN      0            4096                           *:9200                        *:*                      
LISTEN      0            128                         [::]:22                       [::]:*                      
[root@vm2 ~]# 
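
Before restarting, it is worth confirming that both CORS lines actually landed in the file. A sketch against a temporary copy (on the real nodes, point grep at /etc/elasticsearch/elasticsearch.yml instead):

```shell
cfg=/tmp/elasticsearch.yml         # stand-in for /etc/elasticsearch/elasticsearch.yml
cat > "$cfg" <<'EOF'
http.cors.enabled: true
http.cors.allow-origin: "*"
EOF
# Count the CORS settings; 2 means both lines are present
grep -c '^http\.cors\.' "$cfg"
```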


Logstash deployment

Deployment

[root@v3 ~]# ls 
anaconda-ks.cfg  -e  -i.bak  logstash-6.5.2.rpm
[root@v3 ~]# rpm -ivh logstash-6.5.2.rpm 
[root@v3 ~]# cd /etc/logstash/
[root@v3 logstash]# ls
conf.d       log4j2.properties     logstash.yml   startup.options
jvm.options  logstash-sample.conf  pipelines.yml
[root@v3 logstash]# vim logstash.yml
-------------------------------------------------------------
path.data: /var/lib/logstash
path.config: /etc/logstash/conf.d/ 
http.host: "0.0.0.0" 
path.logs: /var/log/logstash
-------------------------------------------------------------

Verification method 1:

[root@v3 logstash]# cd /usr/share/logstash/bin/
[root@v3 bin]# ./logstash -e 'input {stdin {}} output {stdout {}}'

At the end you should see: (screenshot omitted)

Verification method 2:

[root@v3 ~]# vim /etc/logstash/conf.d/test.conf
[root@v3 ~]# cat /etc/logstash/conf.d/test.conf 
input {stdin {}
}filter {
}output {stdout {codec => rubydebug}
}
[root@v3 ~]# [root@v3 ~]# cd /usr/share/logstash/bin/
[root@v3 bin]# ./logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/test.conf -t
--path.settings specifies the directory holding the main Logstash configuration
-f specifies the pipeline configuration file
-t tests whether the configuration file is valid
-r is very useful: it reloads the configuration dynamically, so after startup you can edit the file without restarting
codec => rubydebug is optional; it is the default output format anyway

You should see: (screenshot omitted)

[root@v3 bin]# ./logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/test.conf -t
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2024-08-20T14:35:14,083][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/var/lib/logstash/queue"}
[2024-08-20T14:35:14,106][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/var/lib/logstash/dead_letter_queue"}
[2024-08-20T14:35:14,542][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK
[2024-08-20T14:35:16,347][INFO ][logstash.runner          ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
[root@v3 bin]# 
[root@v3 bin]# ./logstash --path.settings /etc/logstash -r -f /etc/logstash/conf.d/test.conf 
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2024-08-20T14:38:00,603][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2024-08-20T14:38:00,615][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.5.2"}
[2024-08-20T14:38:00,645][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"8843a144-df1e-45d7-a38b-c67a4758c30e", :path=>"/var/lib/logstash/uuid"}
[2024-08-20T14:38:02,829][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2024-08-20T14:38:03,016][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0xce3fefd sleep>"}
The stdin plugin is now waiting for input:
[2024-08-20T14:38:03,059][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}

Log collection

Collect the messages log

[root@v3 bin]# vim /etc/logstash/conf.d/test.conf 
[root@v3 bin]# cat /etc/logstash/conf.d/test.conf 
input {
  file {
    path => "/var/log/messages"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["192.168.100.80:9200"]
    index => "test-%{+YYYY.MM.dd}"
  }
}
[root@v3 bin]# ps -ef | grep java   # find the running instance in order to stop the service
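
The index setting test-%{+YYYY.MM.dd} makes Logstash write to a new index per day, expanded from each event's timestamp. The name that today's events would land in can be previewed with date (a sketch; the real expansion happens inside Logstash):

```shell
# Logstash expands %{+YYYY.MM.dd} from the event @timestamp;
# for an event occurring right now the target index would be:
echo "test-$(date +%Y.%m.%d)"
```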


Collect multiple log sources

[root@v3 bin]# vim /etc/logstash/conf.d/test.conf 
[root@v3 bin]# cat /etc/logstash/conf.d/test.conf 
input {
  file {
    path => "/var/log/messages"
    start_position => "beginning"
    type => "messages"
  }
  file {
    path => "/var/log/dnf.log"
    start_position => "beginning"
    type => "dnf"
  }
}
filter {
}
output {
  if [type] == "messages" {
    elasticsearch {
      hosts => ["192.168.100.30:9200","192.168.100.80:9200"]
      index => "messages-%{+YYYY-MM-dd}"
    }
  }
  if [type] == "dnf" {
    elasticsearch {
      hosts => ["192.168.100.30:9200","192.168.100.80:9200"]
      index => "yum-%{+YYYY-MM-dd}"
    }
  }
}
[root@v3 bin]# ./logstash --path.settings /etc/logstash -r -f /etc/logstash/conf.d/test.conf &
[root@v3 bin]# ss -anlt
State        Recv-Q       Send-Q             Local Address:Port             Peer Address:Port       Process       
LISTEN       0            128                      0.0.0.0:22                    0.0.0.0:*                        
LISTEN       0            50                             *:9600                        *:*                        
LISTEN       0            128                         [::]:22                       [::]:*   


Kibana deployment

Deployment

[root@vm1 ~]# ls
04-ELK2.pdf      -e                       -i.bak                   node-v10.24.1-linux-x64.tar.xz
anaconda-ks.cfg  elasticsearch-6.5.2.rpm  kibana-6.5.2-x86_64.rpm
[root@vm1 ~]# rpm -ivh kibana-6.5.2-x86_64.rpm 
warning: kibana-6.5.2-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Verifying...                          ################################# [100%]
Preparing...                          ################################# [100%]
Updating / installing...
   1:kibana-6.5.2-1                   ################################# [100%]
[root@vm1 ~]# 
[root@vm1 ~]# cd /etc/kibana/
[root@vm1 kibana]# ls
kibana.yml
[root@vm1 kibana]# vim kibana.yml 
---------------------------------------------------------------
server.port: 5601                                # port
server.host: "0.0.0.0"                           # listen on all interfaces so anyone can access it
elasticsearch.url: "http://192.168.100.30:9200"  # URL of the ES cluster
logging.dest: /var/log/kibana.log                # a Kibana log file, added here for easier troubleshooting and debugging
---------------------------------------------------------------
[root@vm1 kibana]# cd /var/log/
[root@vm1 log]# ls
anaconda  cron             dnf.rpm.log    hawkey.log-20240819  messages           secure            sssd
audit     cron-20240819    elasticsearch  lastlog              messages-20240819  secure-20240819   tallylog
btmp      dnf.librepo.log  firewalld      maillog              private            spooler           wtmp
chrony    dnf.log          hawkey.log     maillog-20240819     README             spooler-20240819
[root@vm1 log]# touch kibana.log
[root@vm1 log]# chown kibana.kibana kibana.log 
[root@vm1 log]# systemctl restart kibana
[root@vm1 log]# systemctl enable kibana
[root@vm1 log]# 


Localization

[root@vm1 ~]# unzip kibana-6.5.4_hanization-master.zip -d /usr/local/
[root@vm1 ~]# cd /usr/local/kibana-6.5.4_hanization-master
Note: (1) Python must be installed; (2) the rpm-installed Kibana lives under /usr/share/kibana/.
[root@vm1 kibana-6.5.4_hanization-master]# python main.py /usr/share/kibana/
# restart Kibana after the localization finishes
[root@vm1 Kibana_Hanization-master]# systemctl stop kibana
[root@vm1 Kibana_Hanization-master]# systemctl start kibana
