Deploying an EFK Log Collection System on Kubernetes

Case Analysis

1. Node Planning

The node plan is shown in Table 1.

Table 1 Node plan

IP              Hostname   Kubernetes Version
192.168.100.3   master     v1.25.2
192.168.100.4   node       v1.25.2
2. Prerequisites

The Kubernetes environment is already installed. Upload the provided package efk-img.tar.gz to the /root directory of the master node and extract it.

3. EFK Overview

EFK stands for Elasticsearch, Fluentd, and Kibana. It is one of the most widely used open-source stacks for log aggregation and analysis on Kubernetes.

  1. Elasticsearch is a distributed, scalable search engine commonly used to sift through large volumes of log data. It is a NoSQL datastore built on the Lucene search library from Apache. Its main job here is to store the logs shipped by Fluentd and make them retrievable.
  2. Fluentd is a log shipper: an open-source log collection agent that supports many data sources and output formats. It can also forward logs to solutions such as Stackdriver, CloudWatch, Elasticsearch, Splunk, and BigQuery. In short, it is the unifying layer between the systems that generate log data and the systems that store it.
  3. Kibana is a UI tool for querying, data visualization, and dashboards. It provides a web interface for browsing log data, building visualizations of event logs, and filtering information with targeted queries to detect problems. You can build virtually any kind of dashboard with Kibana. The Kibana Query Language (KQL) is used to query Elasticsearch data. Here we use Kibana to query the indexed data in Elasticsearch.
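To make the "unifying layer" idea concrete, here is a minimal Python sketch of what a log shipper does: turn one raw container log line into a structured record for a search backend. The function name and the simplified "timestamp stream message" line format are illustrative assumptions, not the exact on-disk format Fluentd reads.

```python
import json

def to_record(raw_line: str, pod: str, namespace: str) -> dict:
    """Turn one raw container log line ("<timestamp> <stream> <message>")
    into a structured record, roughly what a shipper like Fluentd
    forwards to a store like Elasticsearch."""
    timestamp, stream, message = raw_line.split(" ", 2)
    return {
        "@timestamp": timestamp,
        "stream": stream,  # stdout or stderr
        "log": message,
        "kubernetes": {"pod_name": pod, "namespace_name": namespace},
    }

line = "2024-11-26T08:00:00.000000000Z stdout 0: Hello, are you collecting my data?"
record = to_record(line, pod="data-logs", namespace="default")
print(json.dumps(record, indent=2))
```

The enrichment with pod and namespace metadata is what the kubernetes_metadata filter does in the real pipeline configured later in this case.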

Case Implementation

1. Prepare the Base Environment
(1) Load the image package
[root@master ~]# nerdctl load -i efk-img.tar.gz

Check the cluster status:

[root@master ~]# kubectl cluster-info
Kubernetes control plane is running at https://apiserver.cluster.local:6443
CoreDNS is running at https://apiserver.cluster.local:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
2. Configure Dynamic Provisioning with a StorageClass
(1) Configure the NFS server
[root@master ~]# yum install -y nfs-utils rpcbind

Enable the NFS services to start on boot:

[root@master ~]# systemctl enable nfs rpcbind --now

Create the shared directory for EFK:

[root@master ~]# mkdir -p /root/data/
[root@master ~]# chmod -R 777 /root/data/

Edit the NFS exports file:

[root@master ~]# vim /etc/exports
/root/data/ *(rw,sync,no_all_squash,no_root_squash)

Re-export the share list and restart the NFS service:

[root@master ~]# exportfs -r
[root@master ~]# systemctl restart nfs

Check the exported shares:

[root@master ~]# showmount -e
Export list for master:
/root/data *
(2) Create a ServiceAccount and configure RBAC

Create a directory for the EFK manifests:

[root@master ~]# mkdir efk
[root@master ~]# cd efk/

Write the RBAC manifest:

[root@master efk]# vim rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: leader-locking-nfs-client-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups:
      - ""
    resources:
      - persistentvolumes
    verbs:
      - get
      - list
      - watch
      - create
      - delete
  - apiGroups:
      - ""
    resources:
      - persistentvolumeclaims
    verbs:
      - get
      - list
      - watch
      - update
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
  - apiGroups:
      - storage.k8s.io
    resources:
      - storageclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - update
      - patch
  - apiGroups:
      - ""
    resources:
      - pods
      - namespaces
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-nfs-client-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nfs-client-provisioner-runner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default

Apply the RBAC manifest:

[root@master efk]# kubectl apply -f rbac.yaml
(3) Configure the NFS provisioner

Load the provisioner image:

[root@master ~]# nerdctl -n k8s.io load -i nfs-subdir-external-provisioner-v4.0.2.tar

Write the NFS provisioner Deployment manifest:

[root@master efk]# vim nfs-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner          # Deployment name
  namespace: default                    # deployed in the default namespace
spec:
  replicas: 1                           # one replica
  selector:
    matchLabels:
      app: nfs-client-provisioner       # select Pods with this label
  template:
    metadata:
      labels:
        app: nfs-client-provisioner     # label applied to the Pod
    spec:
      serviceAccountName: nfs-client-provisioner   # use the nfs-client-provisioner ServiceAccount
      containers:
        - name: nfs-client-provisioner  # container name
          image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2  # NFS provisioner image
          imagePullPolicy: IfNotPresent
          env:
            - name: PROVISIONER_NAME    # provisioner name, referenced by the StorageClass
              value: nfs.client.com
            - name: NFS_SERVER          # NFS server address
              value: 192.168.100.3      # replace with the actual NFS server IP
            - name: NFS_PATH            # NFS shared directory
              value: /root/data/        # replace with the actual shared directory
          volumeMounts:
            - name: nfs-client-root     # volume mount name
              mountPath: /persistentvolumes   # mount path inside the container
      volumes:
        - name: nfs-client-root         # NFS volume definition
          nfs:
            server: 192.168.100.3       # replace with the actual NFS server IP
            path: /root/data/           # replace with the actual shared directory

Apply the Deployment manifest:

[root@master efk]# kubectl apply -f nfs-deployment.yaml

Check the Pod status:

[root@master efk]# kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-76fdd7948b-2n9wv   1/1     Running   0          2s
(4) Configure the StorageClass for dynamic provisioning
[root@master efk]# vim storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage             # StorageClass name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # mark this as the default StorageClass
provisioner: nfs.client.com             # must match the NFS provisioner's PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"              # do not keep an archived copy on NFS when a PVC is deleted

Apply the StorageClass manifest:

[root@master efk]# kubectl apply -f storageclass.yaml
3. Configure EFK
(1) Configure Elasticsearch
[root@master efk]# vim elasticsearch.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      serviceAccountName: nfs-client-provisioner   # ServiceAccount used for the NFS-backed volumes
      initContainers:
        - name: fix-permissions
          image: busybox
          imagePullPolicy: IfNotPresent
          command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
          securityContext:
            privileged: true
          volumeMounts:
            - name: elasticsearch-data
              mountPath: /usr/share/elasticsearch/data
        - name: increase-vm-max-map
          image: busybox
          imagePullPolicy: IfNotPresent
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          securityContext:
            privileged: true
        - name: increase-fd-ulimit
          image: busybox
          imagePullPolicy: IfNotPresent
          command: ["sh", "-c", "ulimit -n 65536"]
          securityContext:
            privileged: true
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
          ports:
            - containerPort: 9200
              name: rest
              protocol: TCP
            - containerPort: 9300
              name: inter-node
              protocol: TCP
          volumeMounts:
            - name: elasticsearch-data
              mountPath: /usr/share/elasticsearch/data
          env:
            - name: cluster.name
              value: k8s-logs
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: discovery.seed_hosts
              value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
            - name: cluster.initial_master_nodes
              value: "es-cluster-0,es-cluster-1,es-cluster-2"
            - name: ES_JAVA_OPTS
              value: "-Xms512m -Xmx512m"
  volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
        labels:
          app: elasticsearch
        annotations:
          volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"   # name of the NFS StorageClass
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 100Gi

Apply the Elasticsearch StatefulSet:

[root@master efk]# kubectl apply -f elasticsearch.yaml

Check the Pod status:

[root@master efk]# kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
es-cluster-0                              1/1     Running   0          17s
es-cluster-1                              1/1     Running   0          13s
es-cluster-2                              1/1     Running   0          7s
nfs-client-provisioner-79865487f9-tnssn   1/1     Running   0          5m29s

Write the Service manifest:

[root@master efk]# vim elasticsearch-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
    - port: 9200
      name: rest
    - port: 9300
      name: inter-node

Apply the Service manifest:

[root@master efk]# kubectl apply -f elasticsearch-svc.yaml

Check the Service:

[root@master efk]# kubectl get svc
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
elasticsearch   ClusterIP   None         <none>        9200/TCP,9300/TCP   3s
kubernetes      ClusterIP   10.96.0.1    <none>        443/TCP             7d19h

Verify the Elasticsearch deployment. Start a port forward:

[root@master efk]# kubectl port-forward es-cluster-0 9200:9200

Open a new terminal window so we can query the REST API:

[root@master efk]# curl -XGET 'localhost:9200/_cluster/health?pretty'
{
  "cluster_name": "k8s-logs",               // cluster name
  "status": "green",                        // all primary and replica shards are allocated
  "timed_out": false,                       // the query did not time out
  "number_of_nodes": 3,                     // 3 nodes in the cluster
  "number_of_data_nodes": 3,                // 3 of them hold data
  "active_primary_shards": 1,               // 1 active primary shard
  "active_shards": 2,                       // 2 active shards in total (primaries plus replicas)
  "relocating_shards": 0,                   // no shards being relocated
  "initializing_shards": 0,                 // no shards initializing
  "unassigned_shards": 0,                   // every shard is assigned to a node
  "delayed_unassigned_shards": 0,           // no delayed unassigned shards
  "number_of_pending_tasks": 0,             // no pending tasks
  "number_of_in_flight_fetch": 0,           // no in-flight fetch requests
  "task_max_waiting_in_queue_millis": 0,    // no tasks waiting in the queue
  "active_shards_percent_as_number": 100.0  // 100% of shards are active
}
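To turn this health check into a simple pass/fail signal, it is enough to look at status and unassigned_shards. A minimal Python sketch, using a canned response in the same shape as the output above (a live check would fetch it over HTTP instead):

```python
import json

# A response in the shape returned by GET /_cluster/health,
# trimmed to the fields the check actually uses.
health = json.loads("""
{
  "cluster_name": "k8s-logs",
  "status": "green",
  "number_of_nodes": 3,
  "number_of_data_nodes": 3,
  "active_shards": 2,
  "unassigned_shards": 0,
  "active_shards_percent_as_number": 100.0
}
""")

# Green status plus zero unassigned shards means every primary
# and replica shard is allocated to a node.
healthy = health["status"] == "green" and health["unassigned_shards"] == 0
print(f"{health['cluster_name']}: {'healthy' if healthy else 'degraded'}")
```

A yellow status (replicas unassigned) would still serve queries but with reduced redundancy, which is why the check insists on green.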

Test headless Service DNS resolution

### Create a test Pod
[root@master efk]# vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nslookup
spec:
  containers:
    - name: data
      image: busybox
      imagePullPolicy: IfNotPresent
      args:
        - /bin/sh
        - -c
        - sleep 36000

Apply the Pod manifest:

[root@master efk]# kubectl apply -f pod.yaml

Test name resolution:

[root@master efk]# kubectl exec -it nslookup -- sh
/ # nslookup es-cluster-0.elasticsearch.default.svc.cluster.local
Server:		10.96.0.10
Address:	10.96.0.10:53

Name:	es-cluster-0.elasticsearch.default.svc.cluster.local
Address: 10.244.0.18

/ # nslookup elasticsearch.default.svc.cluster.local
Server:		10.96.0.10
Address:	10.96.0.10:53

Name:	elasticsearch.default.svc.cluster.local
Address: 10.244.0.18
Name:	elasticsearch.default.svc.cluster.local
Address: 10.244.0.20
Name:	elasticsearch.default.svc.cluster.local
Address: 10.244.0.21
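The names resolved above follow the headless-Service pattern <pod>.<service>.<namespace>.svc.cluster.local, while the Service name itself resolves to every Pod IP. A small Python sketch (the helper function is ours, not a Kubernetes API) that generates the per-pod names for this StatefulSet:

```python
def stateful_pod_dns(statefulset, service, namespace, replicas):
    """Per-pod DNS names a headless Service publishes for a StatefulSet:
    <statefulset>-<ordinal>.<service>.<namespace>.svc.cluster.local"""
    return [
        f"{statefulset}-{i}.{service}.{namespace}.svc.cluster.local"
        for i in range(replicas)
    ]

names = stateful_pod_dns("es-cluster", "elasticsearch", "default", replicas=3)
for name in names:
    print(name)
```

These are exactly the stable names used in discovery.seed_hosts in the StatefulSet manifest (there in the short form es-cluster-N.elasticsearch, which resolves within the same namespace).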
(2) Configure Kibana
[root@master efk]# vim kibana.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana:7.2.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5601
          resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
          env:
            - name: ELASTICSEARCH_URL
              value: http://elasticsearch.default.svc.cluster.local:9200

Apply the Kibana manifest:

[root@master efk]# kubectl apply -f kibana.yaml

Check the Pod status:

[root@master efk]# kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
es-cluster-0                              1/1     Running   0          3m30s
es-cluster-1                              1/1     Running   0          3m26s
es-cluster-2                              1/1     Running   0          3m20s
kibana-774758dd6c-r5c9s                   1/1     Running   0          3s
nfs-client-provisioner-79865487f9-tnssn   1/1     Running   0          8m42s

Write the Service manifest:

[root@master efk]# vim kibana-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: kibana
  labels:
    app: kibana
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 5601
  selector:
    app: kibana

Apply the Service manifest:

[root@master efk]# kubectl apply -f kibana-svc.yaml

Check the Service:

[root@master efk]# kubectl get svc
NAME            TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)             AGE
elasticsearch   ClusterIP      None          <none>        9200/TCP,9300/TCP   8m
kibana          LoadBalancer   10.99.16.75   <pending>     80:32361/TCP        32s
kubernetes      ClusterIP      10.96.0.1     <none>        443/TCP             7d19h

Verify the Kibana deployment. Start a port forward (substitute the actual Kibana Pod name shown by kubectl get pod):

[root@master efk]# kubectl port-forward kibana-786f7f49d-4vlqk 5601:5601

Open a new terminal window so we can query the REST API:

[root@master efk]# curl http://localhost:5601/app/kibana
(3) Configure Fluentd

Create the RBAC manifest:

[root@master efk]# vim fluentd-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  labels:
    app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  labels:
    app: fluentd
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - namespaces
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: fluentd
    namespace: default

Apply the Fluentd RBAC manifest:

[root@master efk]# kubectl apply -f fluentd-rbac.yaml

Create the ConfigMap:

[root@master efk]# vim fluentd-config.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: fluentd-config
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
data:
  kubernetes.conf: |-
    # AUTOMATICALLY GENERATED
    # DO NOT EDIT THIS FILE DIRECTLY, USE /templates/conf/kubernetes.conf.erb
    <match fluent.**>
      @type elasticsearch
      request_timeout 2147483648
      include_tag_key true
      host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
      port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
      path "#{ENV['FLUENT_ELASTICSEARCH_PATH']}"
      scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
      ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
      ssl_version "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERSION'] || 'TLSv1'}"
      reload_connections "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS'] || 'false'}"
      reconnect_on_error "#{ENV['FLUENT_ELASTICSEARCH_RECONNECT_ON_ERROR'] || 'true'}"
      reload_on_failure "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_ON_FAILURE'] || 'true'}"
      log_es_400_reason "#{ENV['FLUENT_ELASTICSEARCH_LOG_ES_400_REASON'] || 'false'}"
      logstash_prefix "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX'] || 'logstash'}"
      logstash_format "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_FORMAT'] || 'true'}"
      index_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_INDEX_NAME'] || 'logstash'}"
      type_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_TYPE_NAME'] || 'fluentd'}"
      enable_ilm true
      #ilm_policy_id watch-history-ilm-policy
      #ilm_policy_overwrite false
      #rollover_index true
      #ilm_policy {}
      template_name delete-after-7days
      template_file /fluentd/etc/index_template.json
      #customize_template {"<<index_prefix>>": "fluentd"}
      <buffer>
        @type file
        path /var/log/fluentd-buffer
        flush_thread_count "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_THREAD_COUNT'] || '8'}"
        flush_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_INTERVAL'] || '5s'}"
        chunk_limit_size "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_CHUNK_LIMIT_SIZE'] || '4M'}"
        queue_limit_length "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_QUEUE_LIMIT_LENGTH'] || '32'}"
        retry_max_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_RETRY_MAX_INTERVAL'] || '30'}"
        retry_forever true
      </buffer>
    </match>
    <source>
      @type tail
      tag kubernetes.*
      path /var/log/containers/*.log
      pos_file /var/log/kube-containers.log.pos
      read_from_head false
      <parse>
        @type multi_format
        <pattern>
          format json
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </pattern>
        <pattern>
          format regexp
          time_format %Y-%m-%dT%H:%M:%S.%N%:z
          expression /^(?<time>.+)\b(?<stream>stdout|stderr)\b(?<log>.*)$/
        </pattern>
      </parse>
    </source>
    <source>
      @type tail
      @id in_tail_minion
      path /var/log/salt/minion
      pos_file /var/log/fluentd-salt.pos
      tag salt
      read_from_head false
      <parse>
        @type regexp
        expression /^(?<time>[^ ]* [^ ,]*)[^\[]*\[[^\]]*\]\[(?<severity>[^ \]]*) *\] (?<message>.*)$/
        time_format %Y-%m-%d %H:%M:%S
      </parse>
    </source>
    <source>
      @type tail
      @id in_tail_startupscript
      path /var/log/startupscript.log
      pos_file /var/log/fluentd-startupscript.log.pos
      tag startupscript
      read_from_head false
      <parse>
        @type syslog
      </parse>
    </source>
    <source>
      @type tail
      @id in_tail_docker
      path /var/log/docker.log
      pos_file /var/log/fluentd-docker.log.pos
      tag docker
      read_from_head false
      <parse>
        @type regexp
        expression /^time="(?<time>[^)]*)" level=(?<severity>[^ ]*) msg="(?<message>[^"]*)"( err="(?<error>[^"]*)")?( statusCode=($<status_code>\d+))?/
      </parse>
    </source>
    <source>
      @type tail
      @id in_tail_etcd
      path /var/log/etcd.log
      pos_file /var/log/fluentd-etcd.log.pos
      tag etcd
      read_from_head false
      <parse>
        @type none
      </parse>
    </source>
    <source>
      @type tail
      @id in_tail_kubelet
      multiline_flush_interval 5s
      path /var/log/kubelet.log
      pos_file /var/log/fluentd-kubelet.log.pos
      tag kubelet
      read_from_head false
      <parse>
        @type kubernetes
      </parse>
    </source>
    <source>
      @type tail
      @id in_tail_kube_proxy
      multiline_flush_interval 5s
      path /var/log/kube-proxy.log
      pos_file /var/log/fluentd-kube-proxy.log.pos
      tag kube-proxy
      read_from_head false
      <parse>
        @type kubernetes
      </parse>
    </source>
    <source>
      @type tail
      @id in_tail_kube_apiserver
      multiline_flush_interval 5s
      path /var/log/kube-apiserver.log
      pos_file /var/log/fluentd-kube-apiserver.log.pos
      tag kube-apiserver
      read_from_head false
      <parse>
        @type kubernetes
      </parse>
    </source>
    <source>
      @type tail
      @id in_tail_kube_controller_manager
      multiline_flush_interval 5s
      path /var/log/kube-controller-manager.log
      pos_file /var/log/fluentd-kube-controller-manager.log.pos
      tag kube-controller-manager
      read_from_head false
      <parse>
        @type kubernetes
      </parse>
    </source>
    <source>
      @type tail
      @id in_tail_kube_scheduler
      multiline_flush_interval 5s
      path /var/log/kube-scheduler.log
      pos_file /var/log/fluentd-kube-scheduler.log.pos
      tag kube-scheduler
      read_from_head false
      <parse>
        @type kubernetes
      </parse>
    </source>
    <source>
      @type tail
      @id in_tail_rescheduler
      multiline_flush_interval 5s
      path /var/log/rescheduler.log
      pos_file /var/log/fluentd-rescheduler.log.pos
      tag rescheduler
      read_from_head false
      <parse>
        @type kubernetes
      </parse>
    </source>
    <source>
      @type tail
      @id in_tail_glbc
      multiline_flush_interval 5s
      path /var/log/glbc.log
      pos_file /var/log/fluentd-glbc.log.pos
      tag glbc
      read_from_head false
      <parse>
        @type kubernetes
      </parse>
    </source>
    <source>
      @type tail
      @id in_tail_cluster_autoscaler
      multiline_flush_interval 5s
      path /var/log/cluster-autoscaler.log
      pos_file /var/log/fluentd-cluster-autoscaler.log.pos
      tag cluster-autoscaler
      read_from_head false
      <parse>
        @type kubernetes
      </parse>
    </source>
    # Example:
    # 2017-02-09T00:15:57.992775796Z AUDIT: id="90c73c7c-97d6-4b65-9461-f94606ff825f" ip="104.132.1.72" method="GET" user="kubecfg" as="<self>" asgroups="<lookup>" namespace="default" uri="/api/v1/namespaces/default/pods"
    # 2017-02-09T00:15:57.993528822Z AUDIT: id="90c73c7c-97d6-4b65-9461-f94606ff825f" response="200"
    <source>
      @type tail
      @id in_tail_kube_apiserver_audit
      multiline_flush_interval 5s
      path /var/log/kubernetes/kube-apiserver-audit.log
      pos_file /var/log/kube-apiserver-audit.log.pos
      tag kube-apiserver-audit
      read_from_head false
      <parse>
        @type multiline
        format_firstline /^\S+\s+AUDIT:/
        # Fields must be explicitly captured by name to be parsed into the record.
        # Fields may not always be present, and order may change, so this just looks
        # for a list of key="\"quoted\" value" pairs separated by spaces.
        # Unknown fields are ignored.
        # Note: We can't separate query/response lines as format1/format2 because
        #       they don't always come one after the other for a given query.
        format1 /^(?<time>\S+) AUDIT:(?: (?:id="(?<id>(?:[^"\\]|\\.)*)"|ip="(?<ip>(?:[^"\\]|\\.)*)"|method="(?<method>(?:[^"\\]|\\.)*)"|user="(?<user>(?:[^"\\]|\\.)*)"|groups="(?<groups>(?:[^"\\]|\\.)*)"|as="(?<as>(?:[^"\\]|\\.)*)"|asgroups="(?<asgroups>(?:[^"\\]|\\.)*)"|namespace="(?<namespace>(?:[^"\\]|\\.)*)"|uri="(?<uri>(?:[^"\\]|\\.)*)"|response="(?<response>(?:[^"\\]|\\.)*)"|\w+="(?:[^"\\]|\\.)*"))*/
        time_format %Y-%m-%dT%T.%L%Z
      </parse>
    </source>
    <filter kubernetes.**>
      @type kubernetes_metadata
      @id filter_kube_metadata
    </filter>
  index_template.json: |-
    {
      "index_patterns": ["logstash-*"],
      "settings": {
        "index": {
          "lifecycle": {
            "name": "watch-history-ilm-policy",
            "rollover_alias": ""
          }
        }
      }
    }
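The fallback <pattern> in the container tail source splits a plain-text log line into time, stream, and message when the line is not JSON. A Python sketch of the same expression (Ruby's (?<name>…) named groups become (?P<name>…) in Python; the sample line is made up for illustration):

```python
import re

# Same shape as the Fluentd fallback pattern:
#   /^(?<time>.+)\b(?<stream>stdout|stderr)\b(?<log>.*)$/
pattern = re.compile(r"^(?P<time>.+)\b(?P<stream>stdout|stderr)\b(?P<log>.*)$")

line = "2024-11-26T08:00:00.000000000Z stdout 0: Hello, are you collecting my data?"
m = pattern.match(line)
assert m is not None

print(m.group("time").strip())   # the timestamp prefix
print(m.group("stream"))         # stdout or stderr
print(m.group("log").strip())    # the message body
```

Because `.+` is greedy, the time group captures everything up to the stdout/stderr token, so the captured timestamp needs trimming (Fluentd applies the declared time_format to it instead).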

Apply the ConfigMap:

[root@master efk]# kubectl apply -f fluentd-config.yaml

Create the Fluentd DaemonSet:

[root@master efk]# vim fluentd.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: default
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
        - operator: Exists
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: "elasticsearch.default.svc.cluster.local"
            - name: FLUENT_ELASTICSEARCH_PORT
              value: "9200"
            - name: FLUENT_ELASTICSEARCH_SCHEME
              value: "http"
            - name: FLUENTD_SYSTEMD_CONF
              value: disable
            - name: FLUENT_ELASTICSEARCH_SED_DISABLE
              value: "true"
          resources:
            limits:
              cpu: 300m
              memory: 512Mi
            requests:
              cpu: 50m
              memory: 100Mi
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: config
              mountPath: /fluentd/etc/kubernetes.conf
              subPath: kubernetes.conf
            - name: index-template
              mountPath: /fluentd/etc/index_template.json
              subPath: index_template.json
      terminationGracePeriodSeconds: 30
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: config
          configMap:
            name: fluentd-config
            items:
              - key: kubernetes.conf
                path: kubernetes.conf
        - name: index-template
          configMap:
            name: fluentd-config
            items:
              - key: index_template.json
                path: index_template.json

Apply the DaemonSet:

[root@master efk]# kubectl apply -f fluentd.yaml

Check the Pod status:

[root@master efk]# kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
es-cluster-0                              1/1     Running   0          6h31m
es-cluster-1                              1/1     Running   0          6h31m
es-cluster-2                              1/1     Running   0          6h31m
fluentd-5dkzc                             1/1     Running   0          17m
fluentd-k6h54                             1/1     Running   0          17m
kibana-786f7f49d-x6qt8                    1/1     Running   0          6h31m
nfs-client-provisioner-79865487f9-c28ql   1/1     Running   0          7h12m
(4) Log in to the Kibana console

(Screenshot: Kibana web console)

(5) Simulate a data source
[root@master efk]# vim data.yaml
apiVersion: v1
kind: Pod
metadata:
  name: data-logs
spec:
  containers:
    - name: counter
      image: busybox
      imagePullPolicy: IfNotPresent
      args:
        - /bin/sh
        - -c
        - 'i=0; while true; do echo "$i: Hello, are you collecting my data? $(date)"; i=$((i+1)); sleep 5; done'

Apply the manifest:

[root@master efk]# kubectl apply -f data.yaml
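For reference, the counter container's shell loop emits one line every five seconds. This Python sketch (the helper name is ours, and the date is left as a placeholder) mimics the first few iterations, so you know what to search for in Kibana:

```python
def sample_log_lines(count, stamp="<date>"):
    """Mimic the busybox counter loop:
    echo "$i: Hello, are you collecting my data? $(date)"
    The real Pod loops forever with a 5-second sleep between lines."""
    return [f"{i}: Hello, are you collecting my data? {stamp}" for i in range(count)]

lines = sample_log_lines(3)
for line in lines:
    print(line)
```

In Kibana, a query such as "collecting my data" against the logstash-* index pattern should match these records once Fluentd has shipped them.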
(6) Create an index pattern

Configure the index pattern

(Screenshot: index pattern configuration)

Configure the timestamp field for filtering

(Screenshot: timestamp field selection)

View the data

(Screenshot: log data in Kibana Discover)
