【Microservice Deployment】08 - Monitoring and Alerting

Table of Contents

    • 1. Prometheus Operator
      • 1.1 Advantages
      • 1.2 Configuration Script
      • 1.3 Deployment Script
    • 2. Monitoring Dashboards with Grafana
      • 2.1 Grafana Core Features
      • 2.2 Deployment Files
    • 3. Collecting Custom Metrics with prometheus-net
      • 3.1 Packages
      • 3.2 Use Cases

Prometheus combined with Alertmanager is currently the most popular monitoring solution for Kubernetes.

1. Prometheus Operator

1.1 Advantages

  • Automated installation
  • Configuration expressed as Kubernetes resources
  • Flexible extensibility


The Prometheus Operator abstracts the monitoring system into ServiceMonitor and Prometheus custom resources.
A Prometheus object can be configured into multiple instances through the Helm values file.
A ServiceMonitor bridges a Service and Prometheus: declaring a ServiceMonitor is all it takes to have Prometheus monitor a service.
The Operator watches ServiceMonitor resources for changes; when a ServiceMonitor definition changes, the Operator automatically hot-reloads the updated configuration into the Prometheus server.
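As a concrete sketch, here is a minimal ServiceMonitor for a hypothetical service labeled app: demo-api (the name, labels, and port are illustrative, not from this article; the release label assumes the chart's default serviceMonitorSelector, which matches the Helm release name):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: demo-api                   # illustrative name
  labels:
    release: prometheus-operator   # assumed to match the Prometheus object's serviceMonitorSelector
spec:
  namespaceSelector:
    any: true                      # look for matching Services in all namespaces
  selector:
    matchLabels:
      app: demo-api                # scrape Services carrying this label
  endpoints:
  - port: http                     # named port on the Service
    path: /metrics                 # path where the service exposes Prometheus metrics

Once this object is created, the Operator regenerates the scrape configuration and the service's targets appear in Prometheus without a manual restart.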

1.2 Configuration Script

Excerpt from values.yaml:

alertmanager:
  ## Deploy alertmanager
  ##
  enabled: true

  ## Api that prometheus will use to communicate with alertmanager. Possible values are v1, v2
  ##
  apiVersion: v2

  ## Service account for Alertmanager to use.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
  ##
  serviceAccount:
    create: true
    name: ""

  ## Configure pod disruption budgets for Alertmanager
  ## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget
  ## This configuration is immutable once created and will require the PDB to be deleted to be changed
  ## https://github.com/kubernetes/kubernetes/issues/45398
  ##
  podDisruptionBudget:
    enabled: false
    minAvailable: 1
    maxUnavailable: ""

  ## Alertmanager configuration directives
  ## ref: https://prometheus.io/docs/alerting/configuration/#configuration-file
  ##      https://prometheus.io/webtools/alerting/routing-tree-editor/
  ##
  config:
    global:
      resolve_timeout: 5m
    route:
      group_by: ['job']
      group_wait: 30s
      group_interval: 5m
      repeat_interval: 12h
      receiver: 'null'
      routes:
      - match:
          alertname: Watchdog
        receiver: 'null'
    receivers:
    - name: 'null'

  ## Pass the Alertmanager configuration directives through Helm's templating
  ## engine. If the Alertmanager configuration contains Alertmanager templates,
  ## they'll need to be properly escaped so that they are not interpreted by
  ## Helm
  ## ref: https://helm.sh/docs/developing_charts/#using-the-tpl-function
  ##      https://prometheus.io/docs/alerting/configuration/#%3Ctmpl_string%3E
  ##      https://prometheus.io/docs/alerting/notifications/
  ##      https://prometheus.io/docs/alerting/notification_examples/
  tplConfig: false

  ## Alertmanager template files to format alerts
  ## ref: https://prometheus.io/docs/alerting/notifications/
  ##      https://prometheus.io/docs/alerting/notification_examples/
  ##
  templateFiles: {}
  #
  ## An example template:
  #   template_1.tmpl: |-
  #       {{ define "cluster" }}{{ .ExternalURL | reReplaceAll ".*alertmanager\\.(.*)" "$1" }}{{ end }}
  #
  #       {{ define "slack.myorg.text" }}
  #       {{- $root := . -}}
  #       {{ range .Alerts }}
  #         *Alert:* {{ .Annotations.summary }} - `{{ .Labels.severity }}`
  #         *Cluster:*  {{ template "cluster" $root }}
  #         *Description:* {{ .Annotations.description }}
  #         *Graph:* <{{ .GeneratorURL }}|:chart_with_upwards_trend:>
  #         *Runbook:* <{{ .Annotations.runbook }}|:spiral_note_pad:>
  #         *Details:*
  #           {{ range .Labels.SortedPairs }} • *{{ .Name }}:* `{{ .Value }}`
  #           {{ end }}

  ingress:
    enabled: true
    annotations: {}
    labels: {}

    ## Hosts must be provided if Ingress is enabled.
    ##
    hosts:
    - alertmanager.local

    ## Paths to use for ingress rules - one path should match the alertmanagerSpec.routePrefix
    ##
    paths: []
    # - /

    ## TLS configuration for Alertmanager Ingress
    ## Secret must be manually created in the namespace
    ##
    tls: []
    # - secretName: alertmanager-general-tls
    #   hosts:
    #   - alertmanager.example.com

  ## Configuration for Alertmanager secret
  ##
  ...

prometheus:
  enabled: true

  ## Annotations for Prometheus
  ##
  annotations: {}

  ## Service account for Prometheuses to use.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
  ##
  serviceAccount:
    create: true
    name: ""

  ## Configuration for Prometheus service
  ##
  service:
    annotations: {}
    labels: {}
    clusterIP: ""

    ## Port for Prometheus Service to listen on
    ##
    port: 9090

    ## To be used with a proxy extraContainer port
    targetPort: 9090

    ## List of IP addresses at which the Prometheus server service is available
    ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
    ##
    externalIPs: []

    ## Port to expose on each node
    ## Only used if service.type is 'NodePort'
    ##
    nodePort: 30090

    ## Loadbalancer IP
    ## Only use if service.type is "loadbalancer"
    loadBalancerIP: ""
    loadBalancerSourceRanges: []

    ## Service type
    ##
    type: ClusterIP

    sessionAffinity: ""

  ## Configuration for creating a separate Service for each statefulset Prometheus replica
  ##
  ...

  ## Custom ServiceMonitors are added here
  additionalServiceMonitors:
  - name: "hc"
    namespaceSelector:
      any: true
    selector:
      matchLabels:
        geek: app-metrics
    endpoints:
    - path: /metrics
      port: http
  - name: ""
    namespaceSelector:
      any: true
    selector:
      matchLabels:
        geek: app-metrics
    endpoints:
    - path: /metrics
      port: http
  ...
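The additionalServiceMonitors entry selects Services by the geek: app-metrics label and scrapes /metrics on a port named http, so a workload's Service must carry both for Prometheus to pick it up. A hypothetical matching Service (the name and pod selector are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: hc-api             # illustrative service name
  labels:
    geek: app-metrics      # matched by the ServiceMonitor's selector.matchLabels
spec:
  selector:
    app: hc-api            # pods backing this service (illustrative)
  ports:
  - name: http             # must match the ServiceMonitor endpoint's `port: http`
    port: 80
    targetPort: 80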

1.3 Deployment Script

helm repo add stable http://mirror.azure.cn/kubernetes/charts/
helm repo update
helm install prometheus-operator stable/prometheus-operator --version 8.10.0 --values .\prometheus-operator\values.yaml  --namespace kube-system

Complete values.yaml:

# Default values for prometheus-operator.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

## Provide a name in place of prometheus-operator for `app:` labels
##
nameOverride: ""

## Provide a name to substitute for the full names of resources
##
fullnameOverride: ""

## Labels to apply to all resources
##
commonLabels: {}
# scmhash: abc123
# myLabel: aakkmd

## Create default rules for monitoring the cluster
##
defaultRules:
  create: true
  rules:
    alertmanager: true
    etcd: true
    general: true
    k8s: true
    kubeApiserver: true
    kubeApiserverError: true
    kubePrometheusNodeAlerting: true
    kubePrometheusNodeRecording: true
    kubernetesAbsent: true
    kubernetesApps: true
    kubernetesResources: true
    kubernetesStorage: true
    kubernetesSystem: true
    kubeScheduler: true
    network: true
    node: true
    prometheus: true
    prometheusOperator: true
    time: true

  ## Runbook url prefix for default rules
  runbookUrl: https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#
  ## Reduce app namespace alert scope
  appNamespacesTarget: ".*"

  ## Labels for default rules
  labels: {}
  ## Annotations for default rules
  annotations: {}

## Provide custom recording or alerting rules to be deployed into the cluster.
##
additionalPrometheusRules:
- name: my-rule-file
  groups:
  - name: my_group
    rules:
    - record: my_record
      expr: 100 * my_record

##
global:
  rbac:
    create: true
    pspEnabled: true

  ## Reference to one or more secrets to be used when pulling images
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  imagePullSecrets: []
  # - name: "image-pull-secret"

## Configuration for alertmanager
## ref: https://prometheus.io/docs/alerting/alertmanager/
##
alertmanager:
  ## Deploy alertmanager
  ##
  enabled: true

  ## Api that prometheus will use to communicate with alertmanager. Possible values are v1, v2
  ##
  apiVersion: v2

  ## Service account for Alertmanager to use.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
  ##
  serviceAccount:
    create: true
    name: ""

  ## Configure pod disruption budgets for Alertmanager
  ## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget
  ## This configuration is immutable once created and will require the PDB to be deleted to be changed
  ## https://github.com/kubernetes/kubernetes/issues/45398
  ##
  podDisruptionBudget:
    enabled: false
    minAvailable: 1
    maxUnavailable: ""

  ## Alertmanager configuration directives
  ## ref: https://prometheus.io/docs/alerting/configuration/#configuration-file
  ##      https://prometheus.io/webtools/alerting/routing-tree-editor/
  ##
  config:
    global:
      resolve_timeout: 5m
    route:
      group_by: ['job']
      group_wait: 30s
      group_interval: 5m
      repeat_interval: 12h
      receiver: 'null'
      routes:
      - match:
          alertname: Watchdog
        receiver: 'null'
    receivers:
    - name: 'null'

  ## Pass the Alertmanager configuration directives through Helm's templating
  ## engine. If the Alertmanager configuration contains Alertmanager templates,
  ## they'll need to be properly escaped so that they are not interpreted by
  ## Helm
  ## ref: https://helm.sh/docs/developing_charts/#using-the-tpl-function
  ##      https://prometheus.io/docs/alerting/configuration/#%3Ctmpl_string%3E
  ##      https://prometheus.io/docs/alerting/notifications/
  ##      https://prometheus.io/docs/alerting/notification_examples/
  tplConfig: false

  ## Alertmanager template files to format alerts
  ## ref: https://prometheus.io/docs/alerting/notifications/
  ##      https://prometheus.io/docs/alerting/notification_examples/
  ##
  templateFiles: {}
  #
  ## An example template:
  #   template_1.tmpl: |-
  #       {{ define "cluster" }}{{ .ExternalURL | reReplaceAll ".*alertmanager\\.(.*)" "$1" }}{{ end }}
  #
  #       {{ define "slack.myorg.text" }}
  #       {{- $root := . -}}
  #       {{ range .Alerts }}
  #         *Alert:* {{ .Annotations.summary }} - `{{ .Labels.severity }}`
  #         *Cluster:*  {{ template "cluster" $root }}
  #         *Description:* {{ .Annotations.description }}
  #         *Graph:* <{{ .GeneratorURL }}|:chart_with_upwards_trend:>
  #         *Runbook:* <{{ .Annotations.runbook }}|:spiral_note_pad:>
  #         *Details:*
  #           {{ range .Labels.SortedPairs }} • *{{ .Name }}:* `{{ .Value }}`
  #           {{ end }}

  ingress:
    enabled: true
    annotations: {}
    labels: {}

    ## Hosts must be provided if Ingress is enabled.
    ##
    hosts:
    - alertmanager.local

    ## Paths to use for ingress rules - one path should match the alertmanagerSpec.routePrefix
    ##
    paths: []
    # - /

    ## TLS configuration for Alertmanager Ingress
    ## Secret must be manually created in the namespace
    ##
    tls: []
    # - secretName: alertmanager-general-tls
    #   hosts:
    #   - alertmanager.example.com

  ## Configuration for Alertmanager secret
  ##
  secret:
    annotations: {}

  ## Configuration for Alertmanager service
  ##
  service:
    annotations: {}
    labels: {}
    clusterIP: ""

    ## Port for Alertmanager Service to listen on
    ##
    port: 9093

    ## Port to expose on each node
    ## Only used if service.type is 'NodePort'
    ##
    nodePort: 30903

    ## List of IP addresses at which the Prometheus server service is available
    ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
    ##
    externalIPs: []
    loadBalancerIP: ""
    loadBalancerSourceRanges: []

    ## Service type
    ##
    type: ClusterIP

  ## If true, create a serviceMonitor for alertmanager
  ##
  serviceMonitor:
    ## Scrape interval. If not set, the Prometheus default scrape interval is used.
    ##
    interval: ""
    selfMonitor: true

    ## metric relabel configs to apply to samples before ingestion.
    ##
    metricRelabelings: []
    # - action: keep
    #   regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
    #   sourceLabels: [__name__]

    # relabel configs to apply to samples before ingestion.
    ##
    relabelings: []
    # - sourceLabels: [__meta_kubernetes_pod_node_name]
    #   separator: ;
    #   regex: ^(.*)$
    #   targetLabel: nodename
    #   replacement: $1
    #   action: replace

  ## Settings affecting alertmanagerSpec
  ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#alertmanagerspec
  ##
  alertmanagerSpec:
    ## Standard object's metadata. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#metadata
    ## Metadata Labels and Annotations gets propagated to the Alertmanager pods.
    ##
    podMetadata: {}

    ## Image of Alertmanager
    ##
    image:
      repository: quay.io/prometheus/alertmanager
      tag: v0.20.0

    ## If true then the user will be responsible to provide a secret with alertmanager configuration
    ## So when true the config part will be ignored (including templateFiles) and the one in the secret will be used
    ##
    useExistingSecret: false

    ## Secrets is a list of Secrets in the same namespace as the Alertmanager object, which shall be mounted into the
    ## Alertmanager Pods. The Secrets are mounted into /etc/alertmanager/secrets/.
    ##
    secrets: []

    ## ConfigMaps is a list of ConfigMaps in the same namespace as the Alertmanager object, which shall be mounted into the Alertmanager Pods.
    ## The ConfigMaps are mounted into /etc/alertmanager/configmaps/.
    ##
    configMaps: []

    ## ConfigSecret is the name of a Kubernetes Secret in the same namespace as the Alertmanager object, which contains configuration for
    ## this Alertmanager instance. Defaults to 'alertmanager-' The secret is mounted into /etc/alertmanager/config.
    ##
    # configSecret:

    ## Define Log Format
    # Use logfmt (default) or json-formatted logging
    logFormat: logfmt

    ## Log level for Alertmanager to be configured with.
    ##
    logLevel: info

    ## Size is the expected size of the alertmanager cluster. The controller will eventually make the size of the
    ## running cluster equal to the expected size.
    replicas: 1

    ## Time duration Alertmanager shall retain data for. Default is '120h', and must match the regular expression
    ## [0-9]+(ms|s|m|h) (milliseconds seconds minutes hours).
    ##
    retention: 120h

    ## Storage is the definition of how storage will be used by the Alertmanager instances.
    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/storage.md
    ##
    storage: {}
    # volumeClaimTemplate:
    #   spec:
    #     storageClassName: gluster
    #     accessModes: ["ReadWriteOnce"]
    #     resources:
    #       requests:
    #         storage: 50Gi
    #   selector: {}

    ## The external URL the Alertmanager instances will be available under. This is necessary to generate correct URLs. This is necessary if Alertmanager is not served from root of a DNS name.
    ##
    externalUrl:

    ## The route prefix Alertmanager registers HTTP handlers for. This is useful, if using ExternalURL and a proxy is rewriting HTTP routes of a request, and the actual ExternalURL is still true,
    ## but the server serves requests under a different route prefix. For example for use with kubectl proxy.
    ##
    routePrefix: /

    ## If set to true all actions on the underlying managed objects are not going to be performed, except for delete actions.
    ##
    paused: false

    ## Define which Nodes the Pods are scheduled on.
    ## ref: https://kubernetes.io/docs/user-guide/node-selection/
    ##
    nodeSelector: {}

    ## Define resources requests and limits for single Pods.
    ## ref: https://kubernetes.io/docs/user-guide/compute-resources/
    ##
    resources: {}
    # requests:
    #   memory: 400Mi

    ## Pod anti-affinity can prevent the scheduler from placing Prometheus replicas on the same node.
    ## The default value "soft" means that the scheduler should *prefer* to not schedule two replica pods onto the same node but no guarantee is provided.
    ## The value "hard" means that the scheduler is *required* to not schedule two replica pods onto the same node.
    ## The value "" will disable pod anti-affinity so that no anti-affinity rules will be configured.
    ##
    podAntiAffinity: ""

    ## If anti-affinity is enabled sets the topologyKey to use for anti-affinity.
    ## This can be changed to, for example, failure-domain.beta.kubernetes.io/zone
    ##
    podAntiAffinityTopologyKey: kubernetes.io/hostname

    ## Assign custom affinity rules to the alertmanager instance
    ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
    ##
    affinity: {}
    # nodeAffinity:
    #   requiredDuringSchedulingIgnoredDuringExecution:
    #     nodeSelectorTerms:
    #     - matchExpressions:
    #       - key: kubernetes.io/e2e-az-name
    #         operator: In
    #         values:
    #         - e2e-az1
    #         - e2e-az2

    ## If specified, the pod's tolerations.
    ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
    ##
    tolerations: []
    # - key: "key"
    #   operator: "Equal"
    #   value: "value"
    #   effect: "NoSchedule"

    ## SecurityContext holds pod-level security attributes and common container settings.
    ## This defaults to non root user with uid 1000 and gid 2000.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
    ##
    securityContext:
      runAsNonRoot: true
      runAsUser: 1000
      fsGroup: 2000

    ## ListenLocal makes the Alertmanager server listen on loopback, so that it does not bind against the Pod IP.
    ## Note this is only for the Alertmanager UI, not the gossip communication.
    ##
    listenLocal: false

    ## Containers allows injecting additional containers. This is meant to allow adding an authentication proxy to an Alertmanager pod.
    ##
    containers: []

    ## Priority class assigned to the Pods
    ##
    priorityClassName: ""

    ## AdditionalPeers allows injecting a set of additional Alertmanagers to peer with to form a highly available cluster.
    ##
    additionalPeers: []

    ## PortName to use for Alert Manager.
    ##
    portName: "web"

## Using default values from https://github.com/helm/charts/blob/master/stable/grafana/values.yaml
##
grafana:
  enabled: true

  ## Deploy default dashboards.
  ##
  defaultDashboardsEnabled: true

  adminPassword: admin

  ingress:
    ## If true, Grafana Ingress will be created
    ##
    enabled: true

    ## Annotations for Grafana Ingress
    ##
    annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"

    ## Labels to be added to the Ingress
    ##
    labels: {}

    ## Hostnames.
    ## Must be provided if Ingress is enable.
    ##
    hosts:
    - grafana.local

    ## Path for grafana ingress
    path: /

    ## TLS configuration for grafana Ingress
    ## Secret must be manually created in the namespace
    ##
    tls: []
    # - secretName: grafana-general-tls
    #   hosts:
    #   - grafana.example.com

  sidecar:
    dashboards:
      enabled: true
      label: grafana_dashboard
    datasources:
      enabled: true
      defaultDatasourceEnabled: true

      ## Annotations for Grafana datasource configmaps
      ##
      annotations: {}

      ## Create datasource for each Pod of Prometheus StatefulSet;
      ## this uses headless service `prometheus-operated` which is
      ## created by Prometheus Operator
      ## ref: https://git.io/fjaBS
      createPrometheusReplicasDatasources: false
      label: grafana_datasource

  extraConfigmapMounts: []
  # - name: certs-configmap
  #   mountPath: /etc/grafana/ssl/
  #   configMap: certs-configmap
  #   readOnly: true

  ## Configure additional grafana datasources
  ## ref: http://docs.grafana.org/administration/provisioning/#datasources
  additionalDataSources: []
  # - name: prometheus-sample
  #   access: proxy
  #   basicAuth: true
  #   basicAuthPassword: pass
  #   basicAuthUser: daco
  #   editable: false
  #   jsonData:
  #       tlsSkipVerify: true
  #   orgId: 1
  #   type: prometheus
  #   url: https://prometheus.svc:9090
  #   version: 1

  ## Passed to grafana subchart and used by servicemonitor below
  ##
  service:
    portName: service

  ## If true, create a serviceMonitor for grafana
  ##
  serviceMonitor:
    ## Scrape interval. If not set, the Prometheus default scrape interval is used.
    ##
    interval: ""
    selfMonitor: true

    ## metric relabel configs to apply to samples before ingestion.
    ##
    metricRelabelings: []
    # - action: keep
    #   regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
    #   sourceLabels: [__name__]

    # relabel configs to apply to samples before ingestion.
    ##
    relabelings: []
    # - sourceLabels: [__meta_kubernetes_pod_node_name]
    #   separator: ;
    #   regex: ^(.*)$
    #   targetLabel: nodename
    #   replacement: $1
    #   action: replace

## Component scraping the kube api server
##
kubeApiServer:
  enabled: true
  tlsConfig:
    serverName: kubernetes
    insecureSkipVerify: false

  ## If your API endpoint address is not reachable (as in AKS) you can replace it with the kubernetes service
  ##
  relabelings: []
  # - sourceLabels:
  #     - __meta_kubernetes_namespace
  #     - __meta_kubernetes_service_name
  #     - __meta_kubernetes_endpoint_port_name
  #   action: keep
  #   regex: default;kubernetes;https
  # - targetLabel: __address__
  #   replacement: kubernetes.default.svc:443

  serviceMonitor:
    ## Scrape interval. If not set, the Prometheus default scrape interval is used.
    ##
    interval: ""
    jobLabel: component
    selector:
      matchLabels:
        component: apiserver
        provider: kubernetes

    ## metric relabel configs to apply to samples before ingestion.
    ##
    metricRelabelings: []
    # - action: keep
    #   regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
    #   sourceLabels: [__name__]

## Component scraping the kubelet and kubelet-hosted cAdvisor
##
kubelet:
  enabled: true
  namespace: kube-system

  serviceMonitor:
    ## Scrape interval. If not set, the Prometheus default scrape interval is used.
    ##
    interval: ""

    ## Enable scraping the kubelet over https. For requirements to enable this see
    ## https://github.com/coreos/prometheus-operator/issues/926
    ##
    https: true

    ## Metric relabellings to apply to samples before ingestion
    ##
    cAdvisorMetricRelabelings: []
    # - sourceLabels: [__name__, image]
    #   separator: ;
    #   regex: container_([a-z_]+);
    #   replacement: $1
    #   action: drop
    # - sourceLabels: [__name__]
    #   separator: ;
    #   regex: container_(network_tcp_usage_total|network_udp_usage_total|tasks_state|cpu_load_average_10s)
    #   replacement: $1
    #   action: drop

    # relabel configs to apply to samples before ingestion.
    #   metrics_path is required to match upstream rules and charts
    ##
    cAdvisorRelabelings:
    - sourceLabels: [__metrics_path__]
      targetLabel: metrics_path
    # - sourceLabels: [__meta_kubernetes_pod_node_name]
    #   separator: ;
    #   regex: ^(.*)$
    #   targetLabel: nodename
    #   replacement: $1
    #   action: replace

    metricRelabelings: []
    # - sourceLabels: [__name__, image]
    #   separator: ;
    #   regex: container_([a-z_]+);
    #   replacement: $1
    #   action: drop
    # - sourceLabels: [__name__]
    #   separator: ;
    #   regex: container_(network_tcp_usage_total|network_udp_usage_total|tasks_state|cpu_load_average_10s)
    #   replacement: $1
    #   action: drop

    # relabel configs to apply to samples before ingestion.
    #   metrics_path is required to match upstream rules and charts
    ##
    relabelings:
    - sourceLabels: [__metrics_path__]
      targetLabel: metrics_path
    # - sourceLabels: [__meta_kubernetes_pod_node_name]
    #   separator: ;
    #   regex: ^(.*)$
    #   targetLabel: nodename
    #   replacement: $1
    #   action: replace

## Component scraping the kube controller manager
##
kubeControllerManager:
  enabled: true

  ## If your kube controller manager is not deployed as a pod, specify IPs it can be found on
  ##
  endpoints: []
  # - 10.141.4.22
  # - 10.141.4.23
  # - 10.141.4.24

  ## If using kubeControllerManager.endpoints only the port and targetPort are used
  ##
  service:
    port: 10252
    targetPort: 10252
    # selector:
    #   component: kube-controller-manager

  serviceMonitor:
    ## Scrape interval. If not set, the Prometheus default scrape interval is used.
    ##
    interval: ""

    ## Enable scraping kube-controller-manager over https.
    ## Requires proper certs (not self-signed) and delegated authentication/authorization checks
    ##
    https: false

    # Skip TLS certificate validation when scraping
    insecureSkipVerify: null

    # Name of the server to use when validating TLS certificate
    serverName: null

    ## metric relabel configs to apply to samples before ingestion.
    ##
    metricRelabelings: []
    # - action: keep
    #   regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
    #   sourceLabels: [__name__]

    # relabel configs to apply to samples before ingestion.
    ##
    relabelings: []
    # - sourceLabels: [__meta_kubernetes_pod_node_name]
    #   separator: ;
    #   regex: ^(.*)$
    #   targetLabel: nodename
    #   replacement: $1
    #   action: replace

## Component scraping coreDns. Use either this or kubeDns
##
coreDns:
  enabled: true
  service:
    port: 9153
    targetPort: 9153
    # selector:
    #   k8s-app: kube-dns
  serviceMonitor:
    ## Scrape interval. If not set, the Prometheus default scrape interval is used.
    ##
    interval: ""

    ## metric relabel configs to apply to samples before ingestion.
    ##
    metricRelabelings: []
    # - action: keep
    #   regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
    #   sourceLabels: [__name__]

    # relabel configs to apply to samples before ingestion.
    ##
    relabelings: []
    # - sourceLabels: [__meta_kubernetes_pod_node_name]
    #   separator: ;
    #   regex: ^(.*)$
    #   targetLabel: nodename
    #   replacement: $1
    #   action: replace

## Component scraping kubeDns. Use either this or coreDns
##
kubeDns:
  enabled: false
  service:
    dnsmasq:
      port: 10054
      targetPort: 10054
    skydns:
      port: 10055
      targetPort: 10055
    # selector:
    #   k8s-app: kube-dns
  serviceMonitor:
    ## Scrape interval. If not set, the Prometheus default scrape interval is used.
    ##
    interval: ""

    ## metric relabel configs to apply to samples before ingestion.
    ##
    metricRelabelings: []
    # - action: keep
    #   regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
    #   sourceLabels: [__name__]

    # relabel configs to apply to samples before ingestion.
    ##
    relabelings: []
    # - sourceLabels: [__meta_kubernetes_pod_node_name]
    #   separator: ;
    #   regex: ^(.*)$
    #   targetLabel: nodename
    #   replacement: $1
    #   action: replace

    dnsmasqMetricRelabelings: []
    # - action: keep
    #   regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
    #   sourceLabels: [__name__]

    # relabel configs to apply to samples before ingestion.
    ##
    dnsmasqRelabelings: []
    # - sourceLabels: [__meta_kubernetes_pod_node_name]
    #   separator: ;
    #   regex: ^(.*)$
    #   targetLabel: nodename
    #   replacement: $1
    #   action: replace

## Component scraping etcd
##
kubeEtcd:
  enabled: true

  ## If your etcd is not deployed as a pod, specify IPs it can be found on
  ##
  endpoints: []
  # - 10.141.4.22
  # - 10.141.4.23
  # - 10.141.4.24

  ## Etcd service. If using kubeEtcd.endpoints only the port and targetPort are used
  ##
  service:
    port: 2379
    targetPort: 2379
    # selector:
    #   component: etcd

  ## Configure secure access to the etcd cluster by loading a secret into prometheus and
  ## specifying security configuration below. For example, with a secret named etcd-client-cert
  ##
  ## serviceMonitor:
  ##   scheme: https
  ##   insecureSkipVerify: false
  ##   serverName: localhost
  ##   caFile: /etc/prometheus/secrets/etcd-client-cert/etcd-ca
  ##   certFile: /etc/prometheus/secrets/etcd-client-cert/etcd-client
  ##   keyFile: /etc/prometheus/secrets/etcd-client-cert/etcd-client-key
  ##
  serviceMonitor:
    ## Scrape interval. If not set, the Prometheus default scrape interval is used.
    ##
    interval: ""
    scheme: http
    insecureSkipVerify: false
    serverName: ""
    caFile: ""
    certFile: ""
    keyFile: ""

    ## metric relabel configs to apply to samples before ingestion.
    ##
    metricRelabelings: []
    # - action: keep
    #   regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
    #   sourceLabels: [__name__]

    # relabel configs to apply to samples before ingestion.
    ##
    relabelings: []
    # - sourceLabels: [__meta_kubernetes_pod_node_name]
    #   separator: ;
    #   regex: ^(.*)$
    #   targetLabel: nodename
    #   replacement: $1
    #   action: replace

## Component scraping kube scheduler
##
kubeScheduler:
  enabled: true

  ## If your kube scheduler is not deployed as a pod, specify IPs it can be found on
  ##
  endpoints: []
  # - 10.141.4.22
  # - 10.141.4.23
  # - 10.141.4.24

  ## If using kubeScheduler.endpoints only the port and targetPort are used
  ##
  service:
    port: 10251
    targetPort: 10251
    # selector:
    #   component: kube-scheduler

  serviceMonitor:
    ## Scrape interval. If not set, the Prometheus default scrape interval is used.
    ##
    interval: ""

    ## Enable scraping kube-scheduler over https.
    ## Requires proper certs (not self-signed) and delegated authentication/authorization checks
    ##
    https: false

    ## Skip TLS certificate validation when scraping
    insecureSkipVerify: null

    ## Name of the server to use when validating TLS certificate
    serverName: null

    ## metric relabel configs to apply to samples before ingestion.
    ##
    metricRelabelings: []
    # - action: keep
    #   regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
    #   sourceLabels: [__name__]

    # relabel configs to apply to samples before ingestion.
    ##
    relabelings: []
    # - sourceLabels: [__meta_kubernetes_pod_node_name]
    #   separator: ;
    #   regex: ^(.*)$
    #   targetLabel: nodename
    #   replacement: $1
    #   action: replace

## Component scraping kube proxy
##
kubeProxy:
  enabled: true

  ## If your kube proxy is not deployed as a pod, specify IPs it can be found on
  ##
  endpoints: []
  # - 10.141.4.22
  # - 10.141.4.23
  # - 10.141.4.24

  service:
    port: 10249
    targetPort: 10249
    # selector:
    #   k8s-app: kube-proxy

  serviceMonitor:
    ## Scrape interval. If not set, the Prometheus default scrape interval is used.
    ##
    interval: ""

    ## Enable scraping kube-proxy over https.
    ## Requires proper certs (not self-signed) and delegated authentication/authorization checks
    ##
    https: false

    ## metric relabel configs to apply to samples before ingestion.
    ##
    metricRelabelings: []
    # - action: keep
    #   regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
    #   sourceLabels: [__name__]

    # relabel configs to apply to samples before ingestion.
    ##
    relabelings: []
    # - action: keep
    #   regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
    #   sourceLabels: [__name__]

## Component scraping kube state metrics
##
kubeStateMetrics:
  enabled: true
  serviceMonitor:
    ## Scrape interval. If not set, the Prometheus default scrape interval is used.
    ##
    interval: ""

    ## metric relabel configs to apply to samples before ingestion.
    ##
    metricRelabelings: []
    # - action: keep
    #   regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
    #   sourceLabels: [__name__]

    # relabel configs to apply to samples before ingestion.
    ##
    relabelings: []
    # - sourceLabels: [__meta_kubernetes_pod_node_name]
    #   separator: ;
    #   regex: ^(.*)$
    #   targetLabel: nodename
    #   replacement: $1
    #   action: replace

## Configuration for kube-state-metrics subchart
##
kube-state-metrics:
  rbac:
    create: true
  podSecurityPolicy:
    enabled: true

## Deploy node exporter as a daemonset to all nodes
##
nodeExporter:
  enabled: true

  ## Use the value configured in prometheus-node-exporter.podLabels
  ##
  jobLabel: jobLabel

  serviceMonitor:
    ## Scrape interval. If not set, the Prometheus default scrape interval is used.
    ##
    interval: ""

    ## How long until a scrape request times out. If not set, the Prometheus default scape timeout is used.
    ##
    scrapeTimeout: ""

    ## metric relabel configs to apply to samples before ingestion.
    ##
    metricRelabelings: []
    # - sourceLabels: [__name__]
    #   separator: ;
    #   regex: ^node_mountstats_nfs_(event|operations|transport)_.+
    #   replacement: $1
    #   action: drop

    ## relabel configs to apply to samples before ingestion.
    ##
    relabelings: []
    # - sourceLabels: [__meta_kubernetes_pod_node_name]
    #   separator: ;
    #   regex: ^(.*)$
    #   targetLabel: nodename
    #   replacement: $1
    #   action: replace

## Configuration for prometheus-node-exporter subchart
##
prometheus-node-exporter:
  podLabels:
    ## Add the 'node-exporter' label to be used by serviceMonitor to match standard common usage in rules and grafana dashboards
    ##
    jobLabel: node-exporter
  extraArgs:
  - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/)
  - --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$

## Manages Prometheus and Alertmanager components
##
prometheusOperator:
  enabled: true

  # If true prometheus operator will create and update its CRDs on startup
  manageCrds: true

  tlsProxy:
    enabled: true
    image:
      repository: squareup/ghostunnel
      tag: v1.5.2
      pullPolicy: IfNotPresent
    resources: {}

  ## Admission webhook support for PrometheusRules resources added in Prometheus Operator 0.30 can be enabled to prevent incorrectly formatted
  ## rules from making their way into prometheus and potentially preventing the container from starting
  admissionWebhooks:
    failurePolicy: Fail
    enabled: true
    ## If enabled, generate a self-signed certificate, then patch the webhook configurations with the generated data.
    ## On chart upgrades (or if the secret exists) the cert will not be re-generated. You can use this to provide your own
    ## certs ahead of time if you wish.
    ##
    patch:
      enabled: true
      image:
        repository: jettech/kube-webhook-certgen
        tag: v1.0.0
        pullPolicy: IfNotPresent
      resources: {}
      ## Provide a priority class name to the webhook patching job
      ##
      priorityClassName: ""
      podAnnotations: {}
      nodeSelector: {}
      affinity: {}
      tolerations: []

  ## Namespaces to scope the interaction of the Prometheus Operator and the apiserver (allow list).
  ## This is mutually exclusive with denyNamespaces. Setting this to an empty object will disable the configuration
  ##
  namespaces: {}
  # releaseNamespace: true
  # additional:
  # - kube-system

  ## Namespaces not to scope the interaction of the Prometheus Operator (deny list).
  ##
  denyNamespaces: []

  ## Service account for Alertmanager to use.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
  ##
  serviceAccount:
    create: true
    name: ""

  ## Configuration for Prometheus operator service
  ##
  service:
    annotations: {}
    labels: {}
    clusterIP: ""

    ## Port to expose on each node
    ## Only used if service.type is 'NodePort'
    ##
    nodePort: 30080
    nodePortTls: 30443

    ## Additional ports to open for Prometheus service
    ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services
    ##
    additionalPorts: []

    ## Loadbalancer IP
    ## Only use if service.type is "loadbalancer"
    ##
    loadBalancerIP: ""
    loadBalancerSourceRanges: []

    ## Service type
    ## NodePort, ClusterIP, loadbalancer
    ##
    type: ClusterIP

    ## List of IP addresses at which the Prometheus server service is available
    ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
    ##
    externalIPs: []

  ## Deploy CRDs used by Prometheus Operator.
  ##
  createCustomResource: true

  ## Attempt to clean up CRDs created by Prometheus Operator.
  ##
  cleanupCustomResource: false

  ## Labels to add to the operator pod
  ##
  podLabels: {}

  ## Annotations to add to the operator pod
  ##
  podAnnotations: {}

  ## Assign a PriorityClassName to pods if set
  # priorityClassName: ""

  ## Define Log Format
  # Use logfmt (default) or json-formatted logging
  # logFormat: logfmt

  ## Decrease log verbosity to errors only
  # logLevel: error

  ## If true, the operator will create and maintain a service for scraping kubelets
  ## ref: https://github.com/coreos/prometheus-operator/blob/master/helm/prometheus-operator/README.md
  ##
  kubeletService:
    enabled: true
    namespace: kube-system

  ## Create a servicemonitor for the operator
  ##
  serviceMonitor:
    ## Scrape interval. If not set, the Prometheus default scrape interval is used.
    ##
    interval: ""
    selfMonitor: true

    ## metric relabel configs to apply to samples before ingestion.
    ##
    metricRelabelings: []
    # - action: keep
    #   regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
    #   sourceLabels: [__name__]

    # relabel configs to apply to samples before ingestion.
    ##
    relabelings: []
    # - sourceLabels: [__meta_kubernetes_pod_node_name]
    #   separator: ;
    #   regex: ^(.*)$
    #   targetLabel: nodename
    #   replacement: $1
    #   action: replace

  ## Resource limits & requests
  ##
  resources: {}
  # limits:
  #   cpu: 200m
  #   memory: 200Mi
  # requests:
  #   cpu: 100m
  #   memory: 100Mi

  ## Define which Nodes the Pods are scheduled on.
  ## ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  nodeSelector: {}

  ## Tolerations for use with node taints
  ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  ##
  tolerations: []
  # - key: "key"
  #   operator: "Equal"
  #   value: "value"
  #   effect: "NoSchedule"

  ## Assign custom affinity rules to the prometheus operator
  ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
  ##
  affinity: {}
  # nodeAffinity:
  #   requiredDuringSchedulingIgnoredDuringExecution:
  #     nodeSelectorTerms:
  #     - matchExpressions:
  #       - key: kubernetes.io/e2e-az-name
  #         operator: In
  #         values:
  #         - e2e-az1
  #         - e2e-az2

  securityContext:
    runAsNonRoot: true
    runAsUser: 65534

  ## Prometheus-operator image
  ##
  image:
    repository: quay.io/coreos/prometheus-operator
    tag: v0.36.0
    pullPolicy: IfNotPresent

  ## Configmap-reload image to use for reloading configmaps
  ##
  configmapReloadImage:
    repository: quay.io/coreos/configmap-reload
    tag: v0.0.1

  ## Prometheus-config-reloader image to use for config and rule reloading
  ##
  prometheusConfigReloaderImage:
    repository: quay.io/coreos/prometheus-config-reloader
    tag: v0.36.0

  ## Set the prometheus config reloader side-car CPU limit
  ##
  configReloaderCpu: 100m

  ## Set the prometheus config reloader side-car memory limit
  ##
  configReloaderMemory: 25Mi

  ## Hyperkube image to use when cleaning up
  ##
  hyperkubeImage:
    repository: k8s.gcr.io/hyperkube
    tag: v1.12.1
    pullPolicy: IfNotPresent

## Deploy a Prometheus instance
##
prometheus:enabled: true## Annotations for Prometheus##annotations: {}## Service account for Prometheuses to use.## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/##serviceAccount:create: truename: ""## Configuration for Prometheus service##service:annotations: {}labels: {}clusterIP: ""## Port for Prometheus Service to listen on##port: 9090## To be used with a proxy extraContainer porttargetPort: 9090## List of IP addresses at which the Prometheus server service is available## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips##externalIPs: []## Port to expose on each node## Only used if service.type is 'NodePort'##nodePort: 30090## Loadbalancer IP## Only use if service.type is "loadbalancer"loadBalancerIP: ""loadBalancerSourceRanges: []## Service type##type: ClusterIPsessionAffinity: ""## Configuration for creating a separate Service for each statefulset Prometheus replica##servicePerReplica:enabled: falseannotations: {}## Port for Prometheus Service per replica to listen on##port: 9090## To be used with a proxy extraContainer porttargetPort: 9090## Port to expose on each node## Only used if servicePerReplica.type is 'NodePort'##nodePort: 30091## Loadbalancer source IP ranges## Only used if servicePerReplica.type is "loadbalancer"loadBalancerSourceRanges: []## Service type##type: ClusterIP## Configure pod disruption budgets for Prometheus## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget## This configuration is immutable once created and will require the PDB to be deleted to be changed## https://github.com/kubernetes/kubernetes/issues/45398##podDisruptionBudget:enabled: falseminAvailable: 1maxUnavailable: ""ingress:enabled: trueannotations: {}labels: {}## Hostnames.## Must be provided if Ingress is enabled.##hosts:- prometheus.local## Paths to use for ingress rules - one path should match the prometheusSpec.routePrefix##paths: []# - /## TLS configuration 
for Prometheus Ingress## Secret must be manually created in the namespace##tls: []# - secretName: prometheus-general-tls#   hosts:#     - prometheus.example.com## Configuration for creating an Ingress that will map to each Prometheus replica service## prometheus.servicePerReplica must be enabled##ingressPerReplica:enabled: falseannotations: {}labels: {}## Final form of the hostname for each per replica ingress is## {{ ingressPerReplica.hostPrefix }}-{{ $replicaNumber }}.{{ ingressPerReplica.hostDomain }}#### Prefix for the per replica ingress that will have `-$replicaNumber`## appended to the endhostPrefix: ""## Domain that will be used for the per replica ingresshostDomain: ""## Paths to use for ingress rules##paths: []# - /## Secret name containing the TLS certificate for Prometheus per replica ingress## Secret must be manually created in the namespacetlsSecretName: ""## Configure additional options for default pod security policy for Prometheus## ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/podSecurityPolicy:allowedCapabilities: []serviceMonitor:## Scrape interval. If not set, the Prometheus default scrape interval is used.##interval: ""selfMonitor: true## scheme: HTTP scheme to use for scraping. Can be used with `tlsConfig` for example if using istio mTLS.scheme: ""## tlsConfig: TLS configuration to use when scraping the endpoint. 
For example if using istio mTLS.## Of type: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#tlsconfigtlsConfig: {}bearerTokenFile:## 	metric relabel configs to apply to samples before ingestion.##metricRelabelings: []# - action: keep#   regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'#   sourceLabels: [__name__]# 	relabel configs to apply to samples before ingestion.##relabelings: []# - sourceLabels: [__meta_kubernetes_pod_node_name]#   separator: ;#   regex: ^(.*)$#   targetLabel: nodename#   replacement: $1#   action: replace## Settings affecting prometheusSpec## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec##prometheusSpec:## If true, pass --storage.tsdb.max-block-duration=2h to prometheus. This is already done if using Thanos##disableCompaction: false## APIServerConfig## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#apiserverconfig##apiserverConfig: {}## Interval between consecutive scrapes.##scrapeInterval: ""## Interval between consecutive evaluations.##evaluationInterval: ""## ListenLocal makes the Prometheus server listen on loopback, so that it does not bind against the Pod IP.##listenLocal: false## EnableAdminAPI enables Prometheus the administrative HTTP API which includes functionality such as deleting time series.## This is disabled by default.## ref: https://prometheus.io/docs/prometheus/latest/querying/api/#tsdb-admin-apis##enableAdminAPI: false## Image of Prometheus.##image:repository: quay.io/prometheus/prometheustag: v2.15.2## Tolerations for use with node taints## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/##tolerations: []#  - key: "key"#    operator: "Equal"#    value: "value"#    effect: "NoSchedule"## Alertmanagers to which alerts will be sent## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#alertmanagerendpoints#### Default 
configuration will connect to the alertmanager deployed as part of this release##alertingEndpoints: []# - name: ""#   namespace: ""#   port: http#   scheme: http#   pathPrefix: ""#   tlsConfig: {}#   bearerTokenFile: ""#   apiVersion: v2## External labels to add to any time series or alerts when communicating with external systems##externalLabels: {}## Name of the external label used to denote replica name##replicaExternalLabelName: ""## If true, the Operator won't add the external label used to denote replica name##replicaExternalLabelNameClear: false## Name of the external label used to denote Prometheus instance name##prometheusExternalLabelName: ""## If true, the Operator won't add the external label used to denote Prometheus instance name##prometheusExternalLabelNameClear: false## External URL at which Prometheus will be reachable.##externalUrl: ""## Define which Nodes the Pods are scheduled on.## ref: https://kubernetes.io/docs/user-guide/node-selection/##nodeSelector: {}## Secrets is a list of Secrets in the same namespace as the Prometheus object, which shall be mounted into the Prometheus Pods.## The Secrets are mounted into /etc/prometheus/secrets/. Secrets changes after initial creation of a Prometheus object are not## reflected in the running Pods. To change the secrets mounted into the Prometheus Pods, the object must be deleted and recreated## with the new list of secrets.##secrets: []## ConfigMaps is a list of ConfigMaps in the same namespace as the Prometheus object, which shall be mounted into the Prometheus Pods.## The ConfigMaps are mounted into /etc/prometheus/configmaps/.##configMaps: []## QuerySpec defines the query command line flags when starting Prometheus.## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#queryspec##query: {}## Namespaces to be selected for PrometheusRules discovery.## If nil, select own namespace. 
Namespaces to be selected for ServiceMonitor discovery.## See https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#namespaceselector for usage##ruleNamespaceSelector: {}## If true, a nil or {} value for prometheus.prometheusSpec.ruleSelector will cause the## prometheus resource to be created with selectors based on values in the helm deployment,## which will also match the PrometheusRule resources created##ruleSelectorNilUsesHelmValues: true## PrometheusRules to be selected for target discovery.## If {}, select all ServiceMonitors##ruleSelector: {}## Example which select all prometheusrules resources## with label "prometheus" with values any of "example-rules" or "example-rules-2"# ruleSelector:#   matchExpressions:#     - key: prometheus#       operator: In#       values:#         - example-rules#         - example-rules-2### Example which select all prometheusrules resources with label "role" set to "example-rules"# ruleSelector:#   matchLabels:#     role: example-rules## If true, a nil or {} value for prometheus.prometheusSpec.serviceMonitorSelector will cause the## prometheus resource to be created with selectors based on values in the helm deployment,## which will also match the servicemonitors created##serviceMonitorSelectorNilUsesHelmValues: true## ServiceMonitors to be selected for target discovery.## If {}, select all ServiceMonitors##serviceMonitorSelector: {}## Example which selects ServiceMonitors with label "prometheus" set to "somelabel"# serviceMonitorSelector:#   matchLabels:#     prometheus: somelabel## Namespaces to be selected for ServiceMonitor discovery.## See https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#namespaceselector for usage##serviceMonitorNamespaceSelector: {}## If true, a nil or {} value for prometheus.prometheusSpec.podMonitorSelector will cause the## prometheus resource to be created with selectors based on values in the helm deployment,## which will also match the 
podmonitors created##podMonitorSelectorNilUsesHelmValues: true## PodMonitors to be selected for target discovery.## If {}, select all PodMonitors##podMonitorSelector: {}## Example which selects PodMonitors with label "prometheus" set to "somelabel"# podMonitorSelector:#   matchLabels:#     prometheus: somelabel## Namespaces to be selected for PodMonitor discovery.## See https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#namespaceselector for usage##podMonitorNamespaceSelector: {}## How long to retain metrics##retention: 10d## Maximum size of metrics##retentionSize: ""## Enable compression of the write-ahead log using Snappy.##walCompression: false## If true, the Operator won't process any Prometheus configuration changes##paused: false## Number of Prometheus replicas desired##replicas: 1## Log level for Prometheus be configured in##logLevel: info## Log format for Prometheus be configured in##logFormat: logfmt## Prefix used to register routes, overriding externalUrl route.## Useful for proxies that rewrite URLs.##routePrefix: /## Standard object’s metadata. 
More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#metadata## Metadata Labels and Annotations gets propagated to the prometheus pods.##podMetadata: {}# labels:#   app: prometheus#   k8s-app: prometheus## Pod anti-affinity can prevent the scheduler from placing Prometheus replicas on the same node.## The default value "soft" means that the scheduler should *prefer* to not schedule two replica pods onto the same node but no guarantee is provided.## The value "hard" means that the scheduler is *required* to not schedule two replica pods onto the same node.## The value "" will disable pod anti-affinity so that no anti-affinity rules will be configured.podAntiAffinity: ""## If anti-affinity is enabled sets the topologyKey to use for anti-affinity.## This can be changed to, for example, failure-domain.beta.kubernetes.io/zone##podAntiAffinityTopologyKey: kubernetes.io/hostname## Assign custom affinity rules to the prometheus instance## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/##affinity: {}# nodeAffinity:#   requiredDuringSchedulingIgnoredDuringExecution:#     nodeSelectorTerms:#     - matchExpressions:#       - key: kubernetes.io/e2e-az-name#         operator: In#         values:#         - e2e-az1#         - e2e-az2## The remote_read spec configuration for Prometheus.## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#remotereadspecremoteRead: []# - url: http://remote1/read## The remote_write spec configuration for Prometheus.## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#remotewritespecremoteWrite: []# - url: http://remote1/push## Enable/Disable Grafana dashboards provisioning for prometheus remote write featureremoteWriteDashboards: false## Resource limits & requests##resources: {}# requests:#   memory: 400Mi## Prometheus StorageSpec for persistent data## ref: 
https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/storage.md##storageSpec: {}#  volumeClaimTemplate:#    spec:#      storageClassName: gluster#      accessModes: ["ReadWriteOnce"]#      resources:#        requests:#          storage: 50Gi#    selector: {}## AdditionalScrapeConfigs allows specifying additional Prometheus scrape configurations. Scrape configurations## are appended to the configurations generated by the Prometheus Operator. Job configurations must have the form## as specified in the official Prometheus documentation:## https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config. As scrape configs are## appended, the user is responsible to make sure it is valid. Note that using this feature may expose the possibility## to break upgrades of Prometheus. It is advised to review Prometheus release notes to ensure that no incompatible## scrape configs are going to break Prometheus after the upgrade.#### The scrape configuraiton example below will find master nodes, provided they have the name .*mst.*, relabel the## port to 2379 and allow etcd scraping provided it is running on all Kubernetes master nodes##additionalScrapeConfigs: []# - job_name: kube-etcd#   kubernetes_sd_configs:#     - role: node#   scheme: https#   tls_config:#     ca_file:   /etc/prometheus/secrets/etcd-client-cert/etcd-ca#     cert_file: /etc/prometheus/secrets/etcd-client-cert/etcd-client#     key_file:  /etc/prometheus/secrets/etcd-client-cert/etcd-client-key#   relabel_configs:#   - action: labelmap#     regex: __meta_kubernetes_node_label_(.+)#   - source_labels: [__address__]#     action: replace#     targetLabel: __address__#     regex: ([^:;]+):(\d+)#     replacement: ${1}:2379#   - source_labels: [__meta_kubernetes_node_name]#     action: keep#     regex: .*mst.*#   - source_labels: [__meta_kubernetes_node_name]#     action: replace#     targetLabel: node#     regex: (.*)#     replacement: ${1}#   
metric_relabel_configs:#   - regex: (kubernetes_io_hostname|failure_domain_beta_kubernetes_io_region|beta_kubernetes_io_os|beta_kubernetes_io_arch|beta_kubernetes_io_instance_type|failure_domain_beta_kubernetes_io_zone)#     action: labeldrop## additionalPrometheusSecretsAnnotations allows to add annotations to the kubernetes secret. This can be useful## when deploying via spinnaker to disable versioning on the secret, strategy.spinnaker.io/versioned: 'false'additionalPrometheusSecretsAnnotations: {}## AdditionalAlertManagerConfigs allows for manual configuration of alertmanager jobs in the form as specified## in the official Prometheus documentation https://prometheus.io/docs/prometheus/latest/configuration/configuration/#<alertmanager_config>.## AlertManager configurations specified are appended to the configurations generated by the Prometheus Operator.## As AlertManager configs are appended, the user is responsible to make sure it is valid. Note that using this## feature may expose the possibility to break upgrades of Prometheus. It is advised to review Prometheus release## notes to ensure that no incompatible AlertManager configs are going to break Prometheus after the upgrade.##additionalAlertManagerConfigs: []# - consul_sd_configs:#   - server: consul.dev.test:8500#     scheme: http#     datacenter: dev#     tag_separator: ','#     services:#       - metrics-prometheus-alertmanager## AdditionalAlertRelabelConfigs allows specifying Prometheus alert relabel configurations. Alert relabel configurations specified are appended## to the configurations generated by the Prometheus Operator. Alert relabel configurations specified must have the form as specified in the## official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#alert_relabel_configs.## As alert relabel configs are appended, the user is responsible to make sure it is valid. 
Note that using this feature may expose the## possibility to break upgrades of Prometheus. It is advised to review Prometheus release notes to ensure that no incompatible alert relabel## configs are going to break Prometheus after the upgrade.##additionalAlertRelabelConfigs: []# - separator: ;#   regex: prometheus_replica#   replacement: $1#   action: labeldrop## SecurityContext holds pod-level security attributes and common container settings.## This defaults to non root user with uid 1000 and gid 2000.## https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md##securityContext:runAsNonRoot: truerunAsUser: 1000fsGroup: 2000## 	Priority class assigned to the Pods##priorityClassName: ""## Thanos configuration allows configuring various aspects of a Prometheus server in a Thanos environment.## This section is experimental, it may change significantly without deprecation notice in any release.## This is experimental and may change significantly without backward compatibility in any release.## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#thanosspec##thanos: {}## Containers allows injecting additional containers. This is meant to allow adding an authentication proxy to a Prometheus pod.##  if using proxy extraContainer  update targetPort with proxy container portcontainers: []## Enable additional scrape configs that are managed externally to this chart. Note that the prometheus## will fail to provision if the correct secret does not exist.## This option requires that you are maintaining a secret in the same namespace as Prometheus with## a name of 'prometheus-operator-prometheus-scrape-confg' and a key of 'additional-scrape-configs.yaml' that## contains a list of scrape_config's. 
The name of the secret may vary if you utilize the "fullnameOverride".## This feature cannot be used in conjunction with the additionalScrapeConfigs attribute (the helm-generated## secret will overwrite your self-maintained secret).#### scrape_config docs: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config## explanation of "confg" typo: https://github.com/helm/charts/issues/13368additionalScrapeConfigsExternal: false## PortName to use for Prometheus.##portName: "web"additionalServiceMonitors:- name: "hc"namespaceSelector:any: trueselector:matchLabels:geek: app-metricsendpoints:- path: /metricsport: http- name: ""namespaceSelector:any: trueselector:matchLabels:geek: app-metricsendpoints:- path: /metricsport: http## Name of the ServiceMonitor to create### - name: ""## Additional labels to set used for the ServiceMonitorSelector. Together with standard labels from## the chart### additionalLabels: {}## Service label for use in assembling a job name of the form <label value>-<port>## If no label is specified, the service name is used.### jobLabel: ""## labels to transfer from the kubernetes service to the target### targetLabels: ""## Label selector for services to which this ServiceMonitor applies### selector: {}## Namespaces from which services are selected### namespaceSelector:## Match any namespace### any: false## Explicit list of namespace names to select### matchNames: []## Endpoints of the selected service to be monitored### endpoints: []## Name of the endpoint's service port## Mutually exclusive with targetPort# - port: ""## Name or number of the endpoint's target port## Mutually exclusive with port# - targetPort: ""## File containing bearer token to be used when scraping targets###   bearerTokenFile: ""## Interval at which metrics should be scraped###   interval: 30s## HTTP path to scrape for metrics###   path: /metrics## HTTP scheme to use for scraping###   scheme: http## TLS configuration to use when scraping the 
endpoint###   tlsConfig:## Path to the CA file### caFile: ""## Path to client certificate file### certFile: ""## Skip certificate verification### insecureSkipVerify: false## Path to client key file### keyFile: ""## Server name used to verify host name### serverName: ""additionalPodMonitors: []## Name of the PodMonitor to create### - name: ""## Additional labels to set used for the PodMonitorSelector. Together with standard labels from## the chart### additionalLabels: {}## Pod label for use in assembling a job name of the form <label value>-<port>## If no label is specified, the pod endpoint name is used.### jobLabel: ""## Label selector for pods to which this PodMonitor applies### selector: {}## PodTargetLabels transfers labels on the Kubernetes Pod onto the target.### podTargetLabels: {}## SampleLimit defines per-scrape limit on number of scraped samples that will be accepted.### sampleLimit: 0## Namespaces from which pods are selected### namespaceSelector:## Match any namespace### any: false## Explicit list of namespace names to select### matchNames: []## Endpoints of the selected pods to be monitored## https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#podmetricsendpoint### podMetricsEndpoints: []
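1.3 Deployment Script

With the values.yaml above prepared, the chart can be installed with Helm. This is a minimal sketch: the repository URL, release name, and namespace are assumptions, matching the stable/prometheus-operator chart from the same era as the values file above (the positional release name is Helm 3 syntax; Helm 2 uses --name instead).

```
# Add the chart repository that hosted prometheus-operator (assumed URL)
helm repo add stable https://charts.helm.sh/stable
helm repo update

# Install into a dedicated namespace using the customized values.yaml
kubectl create namespace monitoring
helm install prometheus-operator stable/prometheus-operator \
  --namespace monitoring \
  -f values.yaml
```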

Summary

In the prometheus-operator architecture, ServiceMonitor is the central object. It can be configured through additionalServiceMonitors in values.yaml, where matchLabels defines the selector policy for the target services. Pay particular attention to path and port:
the port name must be identical to the one defined in the Service.
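As a sketch of that pairing (all names here are illustrative, not taken from the original deployment): the Service must carry the label that additionalServiceMonitors selects via matchLabels, and its metrics port name must equal the ServiceMonitor endpoint's port.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hc-api                # illustrative service name
  labels:
    geek: app-metrics         # selected by matchLabels in additionalServiceMonitors
spec:
  selector:
    app: hc-api
  ports:
    - name: http              # must equal endpoints[].port ("http") in the ServiceMonitor
      port: 80
      targetPort: 5000        # assumed container port serving /metrics
```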

2. Monitoring Dashboards with Grafana

2.1 Grafana Core Features

  • Support for all mainstream data sources
  • A rich set of visualization components
  • Flexible dashboard organization
  • Flexible permission model
  • OAuth account support

2.2 Deployment File

grafana:
  enabled: true

  ## Deploy default dashboards.
  ##
  defaultDashboardsEnabled: true

  adminPassword: admin

  ingress:
    ## If true, Grafana Ingress will be created
    ##
    enabled: true

    ## Annotations for Grafana Ingress
    ##
    annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"

    ## Labels to be added to the Ingress
    ##
    labels: {}

    ## Hostnames.
    ## Must be provided if Ingress is enabled.
    ##
    hosts:
      - grafana.local

    ## Path for grafana ingress
    path: /

    ## TLS configuration for grafana Ingress
    ## Secret must be manually created in the namespace
    ##
    tls: []
    # - secretName: grafana-general-tls
    #   hosts:
    #   - grafana.example.com

  ...

  # Add extra data sources on top of the default Prometheus data source
  additionalDataSources: []
  # - name: prometheus-sample
  #   access: proxy
  #   basicAuth: true
  #   basicAuthPassword: pass
  #   basicAuthUser: daco
  #   editable: false
  #   jsonData:
  #       tlsSkipVerify: true
  #   orgId: 1
  #   type: prometheus
  #   url: https://prometheus.svc:9090
  #   version: 1

3. Collecting Custom Metrics with prometheus-net

prometheus-net is an open-source client library.
Project repository: prometheus-net on GitHub

3.1 Packages

  • prometheus-net
  • prometheus-net.AspNetCore
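Both packages are published on NuGet; for an ASP.NET Core service, only the second is usually needed, since it depends on the core package:

```
dotnet add package prometheus-net
dotnet add package prometheus-net.AspNetCore
```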

3.2 Use Cases

  • Application runtime and health metrics
  • Custom business metrics
// Startup
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    ...
    endpoints.MapMetrics(); // expose the /metrics endpoint
    ...
}

// Export custom metrics
[HttpGet]
public async Task GetMetricsAsync()
{
    var registry = Metrics.NewCustomRegistry();
    MetricFactory factory = Metrics.WithCustomRegistry(registry);
    registry.AddBeforeCollectCallback(() =>
    {
        factory.CreateCounter("counter_v1", "").Inc(100);
    });

    Response.ContentType = PrometheusConstants.ExporterContentType;
    Response.StatusCode = 200;
    await registry.CollectAndExportAsTextAsync(Response.Body, HttpContext.RequestAborted);
}
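When scraped, the custom registry above emits the Prometheus text exposition format; the response body should look roughly like this (the exact HELP/TYPE formatting may vary across prometheus-net versions):

```
# HELP counter_v1 
# TYPE counter_v1 counter
counter_v1 100
```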
