Using apache-skywalking-apm-10.1.0

This article explains how to use apache-skywalking-apm-10.1.0 together with elasticsearch-8.17.0-windows-x86_64 as the storage backend, so that data is persisted in Elasticsearch (ES).

The steps are as follows:

I. Download elasticsearch-8.17.0-windows-x86_64

1. Download ES (Elasticsearch, ES for short; download page: https://www.elastic.co/downloads/elasticsearch).

The direct download link for this version is https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.17.0-windows-x86_64.zip. After extracting the archive you need to adjust the ES configuration: go into the config directory of the extracted folder, open elasticsearch.yml, and change the settings you need. For details, see the companion post "elasticsearch-8.17.0-windows-x86_64使用" (龙骑科技, 博客园).
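
The exact settings depend on your environment; purely as an illustration (none of these values come from the original post), a minimal single-node elasticsearch.yml for a local test with SkyWalking could look like this:

# elasticsearch.yml - minimal single-node setup for local testing (example values, adjust as needed)
cluster.name: skywalking-es        # any name; SkyWalking connects over HTTP, not by cluster name
node.name: node-1
network.host: 127.0.0.1            # listen on localhost only
http.port: 9200                    # must match the clusterNodes / SW_STORAGE_ES_CLUSTER_NODES value used by SkyWalking later
discovery.type: single-node        # no cluster formation for a local test instance
xpack.security.enabled: false      # if you keep security on (the 8.x default), fill in user/password on the SkyWalking side instead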

II. Download SkyWalking

Download page: https://skywalking.apache.org/downloads/. Extract the archive after downloading.

III. Install JDK 21

The JDK 23 bundled with elasticsearch-8.17.0 is too new for skywalking-v10.1.0 to run on; skywalking-v10.1.0 supports at most JDK 21. So you need to download a matching JDK, i.e. JDK 21, from the Archived OpenJDK GA Releases page. For setting up the Java environment you can also refer to the CSDN post 自学[vue+SpringCloud]-002-配置本机java环境.

You can download and install a JDK from Oracle's JDK download page or from other sources such as AdoptOpenJDK. If you use an installer, run java -version in a Command Prompt after installation to confirm the JDK was installed successfully.

If you use the portable (zip) build, https://download.java.net/java/GA/jdk21.0.2/f2283984656d49d69e91c558476027ac/13/GPL/openjdk-21.0.2_windows-x64_bin.zip, you only need to extract it to a directory of your choice. This article uses that portable build.
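
Extraction can also be done from a Command Prompt; Windows 10 and later ship a tar.exe that can unpack zip archives. The target directory below is only an example, chosen to match the JAVA_HOME path used in the next step:

mkdir D:\soft\java\openjdk-21.0.2_windows-x64_bin
tar -xf openjdk-21.0.2_windows-x64_bin.zip -C D:\soft\java\openjdk-21.0.2_windows-x64_bin

The archive contains a top-level jdk-21.0.2 folder, so the JDK ends up in D:\soft\java\openjdk-21.0.2_windows-x64_bin\jdk-21.0.2.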

1. Configure the JAVA_HOME environment variable
1.1 Open the system properties:
    Right-click the "This PC" icon on the desktop and choose "Properties".
    In the window that opens, click "Advanced system settings" on the left.
1.2 Open the environment variable settings:
    In the "System Properties" window, click the "Environment Variables" button on the "Advanced" tab.
1.3 Add JAVA_HOME:
    In the Environment Variables window, click "New" under "System variables".
    Enter JAVA_HOME as the variable name and the JDK installation path as the value, for example C:\Program Files\Java\jdk-17. This article uses the portable build, extracted to D:\soft\java\openjdk-21.0.2_windows-x64_bin\jdk-21.0.2.
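
If you prefer the command line over the GUI dialogs, the same variable can be written with the built-in setx command; a sketch using the example directory above (/M writes a system-wide variable and needs an administrator Command Prompt):

setx /M JAVA_HOME "D:\soft\java\openjdk-21.0.2_windows-x64_bin\jdk-21.0.2"

Note that setx only affects newly opened Command Prompt windows, not the one you ran it in.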

2. Configure the PATH variable
Edit the PATH variable:
Under "System variables", select Path and click "Edit".
In the "Edit environment variable" window, click "New" and add %JAVA_HOME%\bin.

3. Verify the configuration
After configuring, open a new Command Prompt window and run:

echo %JAVA_HOME%

It should print the JDK path you set. You can also run java -version and javac -version to make sure the Java runtime and compiler versions are correct.
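
If more than one JDK is installed, running where java lists every java.exe on the PATH in the order Windows will use them, which is a quick way to confirm that %JAVA_HOME%\bin\java.exe is the one being picked up:

where java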

Switching between multiple JDK versions (optional)

If you have several JDK versions installed and need to switch between them, change the value of JAVA_HOME. You can do this by hand, or write a small batch script to switch quickly.

For example, create a batch file switchJDK.bat with the following content:

@echo off
setlocal
:: Adjust these paths to the JDK installation directories on your machine
set "JDK8=C:\Program Files\Java\jdk1.8.0_271"
set "JDK11=C:\Program Files\Java\jdk-11.0.9"
set "JDK17=C:\Program Files\Java\jdk-17"
if "%1"=="8" (set "JAVA_HOME=%JDK8%"
) else if "%1"=="11" (set "JAVA_HOME=%JDK11%"
) else if "%1"=="17" (set "JAVA_HOME=%JDK17%"
) else (
    echo Invalid JDK version
    exit /b 1
)
:: setx persists the value for new windows; it does not change the current one
setx JAVA_HOME "%JAVA_HOME%"
echo JAVA_HOME set to %JAVA_HOME%
endlocal

To switch JDK versions with this script, just run one of the following in a Command Prompt:

switchJDK 8     (switch to JDK 8)
switchJDK 11    (switch to JDK 11)
switchJDK 17    (switch to JDK 17)

This makes it easy to switch between different JDK versions.
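
Since this setup runs SkyWalking on JDK 21, the script extends naturally with one more path variable and one more branch. A sketch of the extra lines (the path is the example portable-JDK directory from above):

rem add next to the other JDK path variables:
set "JDK21=D:\soft\java\openjdk-21.0.2_windows-x64_bin\jdk-21.0.2"
rem and add one more branch before the final ) else ( line:
) else if "%1"=="21" (set "JAVA_HOME=%JDK21%"

After that, switchJDK 21 selects the JDK 21 used in the rest of this article.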

IV. Configure apache-skywalking-apm-10.1.0

Edit the collector configuration files under D:\Temp\1\apache-skywalking-apm-bin\config.

1. Edit application.yml. The full file as used here is reproduced below; the part that matters for this setup is the storage section, where the selector defaults to elasticsearch, the index namespace defaults to "linjie", and clusterNodes must point at the ES instance from step I (localhost:9200 here).

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

cluster:
  selector: ${SW_CLUSTER:standalone}
  standalone:
  # Please check your ZooKeeper is 3.5+, However, it is also compatible with ZooKeeper 3.4.x. Replace the ZooKeeper 3.5+
  # library the oap-libs folder with your ZooKeeper 3.4.x library.
  zookeeper:
    namespace: ${SW_NAMESPACE:""}
    hostPort: ${SW_CLUSTER_ZK_HOST_PORT:localhost:2181}
    # Retry Policy
    baseSleepTimeMs: ${SW_CLUSTER_ZK_SLEEP_TIME:1000} # initial amount of time to wait between retries
    maxRetries: ${SW_CLUSTER_ZK_MAX_RETRIES:3} # max number of times to retry
    # Enable ACL
    enableACL: ${SW_ZK_ENABLE_ACL:false} # disable ACL in default
    schema: ${SW_ZK_SCHEMA:digest} # only support digest schema
    expression: ${SW_ZK_EXPRESSION:skywalking:skywalking}
    internalComHost: ${SW_CLUSTER_INTERNAL_COM_HOST:""}
    internalComPort: ${SW_CLUSTER_INTERNAL_COM_PORT:-1}
  kubernetes:
    namespace: ${SW_CLUSTER_K8S_NAMESPACE:default}
    labelSelector: ${SW_CLUSTER_K8S_LABEL:app=collector,release=skywalking}
    uidEnvName: ${SW_CLUSTER_K8S_UID:SKYWALKING_COLLECTOR_UID}
  consul:
    serviceName: ${SW_SERVICE_NAME:"SkyWalking_OAP_Cluster"}
    # Consul cluster nodes, example: 10.0.0.1:8500,10.0.0.2:8500,10.0.0.3:8500
    hostPort: ${SW_CLUSTER_CONSUL_HOST_PORT:localhost:8500}
    aclToken: ${SW_CLUSTER_CONSUL_ACLTOKEN:""}
    internalComHost: ${SW_CLUSTER_INTERNAL_COM_HOST:""}
    internalComPort: ${SW_CLUSTER_INTERNAL_COM_PORT:-1}
  etcd:
    # etcd cluster nodes, example: 10.0.0.1:2379,10.0.0.2:2379,10.0.0.3:2379
    endpoints: ${SW_CLUSTER_ETCD_ENDPOINTS:localhost:2379}
    namespace: ${SW_CLUSTER_ETCD_NAMESPACE:/skywalking}
    serviceName: ${SW_CLUSTER_ETCD_SERVICE_NAME:"SkyWalking_OAP_Cluster"}
    authentication: ${SW_CLUSTER_ETCD_AUTHENTICATION:false}
    user: ${SW_CLUSTER_ETCD_USER:}
    password: ${SW_CLUSTER_ETCD_PASSWORD:}
    internalComHost: ${SW_CLUSTER_INTERNAL_COM_HOST:""}
    internalComPort: ${SW_CLUSTER_INTERNAL_COM_PORT:-1}
  nacos:
    serviceName: ${SW_SERVICE_NAME:"SkyWalking_OAP_Cluster"}
    hostPort: ${SW_CLUSTER_NACOS_HOST_PORT:localhost:8848}
    # Nacos Naming namespace
    namespace: ${SW_CLUSTER_NACOS_NAMESPACE:"public"}
    contextPath: ${SW_CLUSTER_NACOS_CONTEXT_PATH:""}
    # Nacos auth username
    username: ${SW_CLUSTER_NACOS_USERNAME:""}
    password: ${SW_CLUSTER_NACOS_PASSWORD:""}
    # Nacos auth accessKey
    accessKey: ${SW_CLUSTER_NACOS_ACCESSKEY:""}
    secretKey: ${SW_CLUSTER_NACOS_SECRETKEY:""}
    internalComHost: ${SW_CLUSTER_INTERNAL_COM_HOST:""}
    internalComPort: ${SW_CLUSTER_INTERNAL_COM_PORT:-1}

core:
  selector: ${SW_CORE:default}
  default:
    # Mixed: Receive agent data, Level 1 aggregate, Level 2 aggregate
    # Receiver: Receive agent data, Level 1 aggregate
    # Aggregator: Level 2 aggregate
    role: ${SW_CORE_ROLE:Mixed} # Mixed/Receiver/Aggregator
    restHost: ${SW_CORE_REST_HOST:0.0.0.0}
    restPort: ${SW_CORE_REST_PORT:12800}
    restContextPath: ${SW_CORE_REST_CONTEXT_PATH:/}
    restMaxThreads: ${SW_CORE_REST_MAX_THREADS:200}
    restIdleTimeOut: ${SW_CORE_REST_IDLE_TIMEOUT:30000}
    restAcceptQueueSize: ${SW_CORE_REST_QUEUE_SIZE:0}
    httpMaxRequestHeaderSize: ${SW_CORE_HTTP_MAX_REQUEST_HEADER_SIZE:8192}
    gRPCHost: ${SW_CORE_GRPC_HOST:0.0.0.0}
    gRPCPort: ${SW_CORE_GRPC_PORT:11800}
    maxConcurrentCallsPerConnection: ${SW_CORE_GRPC_MAX_CONCURRENT_CALL:0}
    maxMessageSize: ${SW_CORE_GRPC_MAX_MESSAGE_SIZE:52428800} #50MB
    gRPCThreadPoolSize: ${SW_CORE_GRPC_THREAD_POOL_SIZE:-1}
    gRPCSslEnabled: ${SW_CORE_GRPC_SSL_ENABLED:false}
    gRPCSslKeyPath: ${SW_CORE_GRPC_SSL_KEY_PATH:""}
    gRPCSslCertChainPath: ${SW_CORE_GRPC_SSL_CERT_CHAIN_PATH:""}
    gRPCSslTrustedCAPath: ${SW_CORE_GRPC_SSL_TRUSTED_CA_PATH:""}
    downsampling:
      - Hour
      - Day
    # Set a timeout on metrics data. After the timeout has expired, the metrics data will automatically be deleted.
    enableDataKeeperExecutor: ${SW_CORE_ENABLE_DATA_KEEPER_EXECUTOR:true} # Turn it off then automatically metrics data delete will be close.
    dataKeeperExecutePeriod: ${SW_CORE_DATA_KEEPER_EXECUTE_PERIOD:5} # How often the data keeper executor runs periodically, unit is minute
    recordDataTTL: ${SW_CORE_RECORD_DATA_TTL:3} # Unit is day
    metricsDataTTL: ${SW_CORE_METRICS_DATA_TTL:7} # Unit is day
    # The period of L1 aggregation flush to L2 aggregation. Unit is ms.
    l1FlushPeriod: ${SW_CORE_L1_AGGREGATION_FLUSH_PERIOD:500}
    # The threshold of session time. Unit is ms. Default value is 70s.
    storageSessionTimeout: ${SW_CORE_STORAGE_SESSION_TIMEOUT:70000}
    # The period of doing data persistence. Unit is second. Default value is 25s
    persistentPeriod: ${SW_CORE_PERSISTENT_PERIOD:25}
    topNReportPeriod: ${SW_CORE_TOPN_REPORT_PERIOD:10} # top_n record worker report cycle, unit is minute
    # Extra model column are the column defined by in the codes, These columns of model are not required logically in aggregation or further query,
    # and it will cause more load for memory, network of OAP and storage.
    # But, being activated, user could see the name in the storage entities, which make users easier to use 3rd party tool, such as Kibana->ES, to query the data by themselves.
    activeExtraModelColumns: ${SW_CORE_ACTIVE_EXTRA_MODEL_COLUMNS:false}
    # The max length of service + instance names should be less than 200
    serviceNameMaxLength: ${SW_SERVICE_NAME_MAX_LENGTH:70}
    # The period(in seconds) of refreshing the service cache. Default value is 10s.
    serviceCacheRefreshInterval: ${SW_SERVICE_CACHE_REFRESH_INTERVAL:10}
    instanceNameMaxLength: ${SW_INSTANCE_NAME_MAX_LENGTH:70}
    # The max length of service + endpoint names should be less than 240
    endpointNameMaxLength: ${SW_ENDPOINT_NAME_MAX_LENGTH:150}
    # Define the set of span tag keys, which should be searchable through the GraphQL.
    # The max length of key=value should be less than 256 or will be dropped.
    searchableTracesTags: ${SW_SEARCHABLE_TAG_KEYS:http.method,http.status_code,rpc.status_code,db.type,db.instance,mq.queue,mq.topic,mq.broker}
    # Define the set of log tag keys, which should be searchable through the GraphQL.
    # The max length of key=value should be less than 256 or will be dropped.
    searchableLogsTags: ${SW_SEARCHABLE_LOGS_TAG_KEYS:level,http.status_code}
    # Define the set of alarm tag keys, which should be searchable through the GraphQL.
    # The max length of key=value should be less than 256 or will be dropped.
    searchableAlarmTags: ${SW_SEARCHABLE_ALARM_TAG_KEYS:level}
    # The max size of tags keys for autocomplete select.
    autocompleteTagKeysQueryMaxSize: ${SW_AUTOCOMPLETE_TAG_KEYS_QUERY_MAX_SIZE:100}
    # The max size of tags values for autocomplete select.
    autocompleteTagValuesQueryMaxSize: ${SW_AUTOCOMPLETE_TAG_VALUES_QUERY_MAX_SIZE:100}
    # The number of threads used to prepare metrics data to the storage.
    prepareThreads: ${SW_CORE_PREPARE_THREADS:2}
    # Turn it on then automatically grouping endpoint by the given OpenAPI definitions.
    enableEndpointNameGroupingByOpenapi: ${SW_CORE_ENABLE_ENDPOINT_NAME_GROUPING_BY_OPENAPI:true}
    # The period of HTTP URI pattern recognition. Unit is second.
    syncPeriodHttpUriRecognitionPattern: ${SW_CORE_SYNC_PERIOD_HTTP_URI_RECOGNITION_PATTERN:10}
    # The training period of HTTP URI pattern recognition. Unit is second.
    trainingPeriodHttpUriRecognitionPattern: ${SW_CORE_TRAINING_PERIOD_HTTP_URI_RECOGNITION_PATTERN:60}
    # The max number of HTTP URIs per service for further URI pattern recognition.
    maxHttpUrisNumberPerService: ${SW_CORE_MAX_HTTP_URIS_NUMBER_PER_SVR:3000}
    # If disable the hierarchy, the service and instance hierarchy relation will not be built. And the query of hierarchy will return empty result.
    # All the hierarchy relations are defined in the `hierarchy-definition.yml`.
    # Notice: some of the configurations only available for kubernetes environments.
    enableHierarchy: ${SW_CORE_ENABLE_HIERARCHY:true}

storage:
  selector: ${SW_STORAGE:elasticsearch}
  elasticsearch:
    namespace: ${SW_NAMESPACE:"linjie"}
    clusterNodes: ${SW_STORAGE_ES_CLUSTER_NODES:localhost:9200}
    protocol: ${SW_STORAGE_ES_HTTP_PROTOCOL:"http"}
    connectTimeout: ${SW_STORAGE_ES_CONNECT_TIMEOUT:3000}
    socketTimeout: ${SW_STORAGE_ES_SOCKET_TIMEOUT:30000}
    responseTimeout: ${SW_STORAGE_ES_RESPONSE_TIMEOUT:15000}
    numHttpClientThread: ${SW_STORAGE_ES_NUM_HTTP_CLIENT_THREAD:0}
    user: ${SW_ES_USER:""}
    password: ${SW_ES_PASSWORD:""}
    trustStorePath: ${SW_STORAGE_ES_SSL_JKS_PATH:""}
    trustStorePass: ${SW_STORAGE_ES_SSL_JKS_PASS:""}
    secretsManagementFile: ${SW_ES_SECRETS_MANAGEMENT_FILE:""} # Secrets management file in the properties format includes the username, password, which are managed by 3rd party tool.
    dayStep: ${SW_STORAGE_DAY_STEP:1} # Represent the number of days in the one minute/hour/day index.
    indexShardsNumber: ${SW_STORAGE_ES_INDEX_SHARDS_NUMBER:1} # Shard number of new indexes
    indexReplicasNumber: ${SW_STORAGE_ES_INDEX_REPLICAS_NUMBER:1} # Replicas number of new indexes
    # Specify the settings for each index individually.
    # If configured, this setting has the highest priority and overrides the generic settings.
    specificIndexSettings: ${SW_STORAGE_ES_SPECIFIC_INDEX_SETTINGS:""}
    # Super data set has been defined in the codes, such as trace segments. The following 3 config would be improve es performance when storage super size data in es.
    superDatasetDayStep: ${SW_STORAGE_ES_SUPER_DATASET_DAY_STEP:-1} # Represent the number of days in the super size dataset record index, the default value is the same as dayStep when the value is less than 0
    superDatasetIndexShardsFactor: ${SW_STORAGE_ES_SUPER_DATASET_INDEX_SHARDS_FACTOR:5} #  This factor provides more shards for the super data set, shards number = indexShardsNumber * superDatasetIndexShardsFactor. Also, this factor effects Zipkin traces.
    superDatasetIndexReplicasNumber: ${SW_STORAGE_ES_SUPER_DATASET_INDEX_REPLICAS_NUMBER:0} # Represent the replicas number in the super size dataset record index, the default value is 0.
    indexTemplateOrder: ${SW_STORAGE_ES_INDEX_TEMPLATE_ORDER:0} # the order of index template
    bulkActions: ${SW_STORAGE_ES_BULK_ACTIONS:5000} # Execute the async bulk record data every ${SW_STORAGE_ES_BULK_ACTIONS} requests
    batchOfBytes: ${SW_STORAGE_ES_BATCH_OF_BYTES:10485760} # A threshold to control the max body size of ElasticSearch Bulk flush.
    # flush the bulk every 5 seconds whatever the number of requests
    flushInterval: ${SW_STORAGE_ES_FLUSH_INTERVAL:5}
    concurrentRequests: ${SW_STORAGE_ES_CONCURRENT_REQUESTS:2} # the number of concurrent requests
    resultWindowMaxSize: ${SW_STORAGE_ES_QUERY_MAX_WINDOW_SIZE:10000}
    metadataQueryMaxSize: ${SW_STORAGE_ES_QUERY_MAX_SIZE:10000}
    scrollingBatchSize: ${SW_STORAGE_ES_SCROLLING_BATCH_SIZE:5000}
    segmentQueryMaxSize: ${SW_STORAGE_ES_QUERY_SEGMENT_SIZE:200}
    profileTaskQueryMaxSize: ${SW_STORAGE_ES_QUERY_PROFILE_TASK_SIZE:200}
    profileDataQueryBatchSize: ${SW_STORAGE_ES_QUERY_PROFILE_DATA_BATCH_SIZE:100}
    oapAnalyzer: ${SW_STORAGE_ES_OAP_ANALYZER:"{\"analyzer\":{\"oap_analyzer\":{\"type\":\"stop\"}}}"} # the oap analyzer.
    oapLogAnalyzer: ${SW_STORAGE_ES_OAP_LOG_ANALYZER:"{\"analyzer\":{\"oap_log_analyzer\":{\"type\":\"standard\"}}}"} # the oap log analyzer. It could be customized by the ES analyzer configuration to support more language log formats, such as Chinese log, Japanese log and etc.
    advanced: ${SW_STORAGE_ES_ADVANCED:""}
    # Enable shard metrics and records indices into multi-physical indices, one index template per metric/meter aggregation function or record.
    logicSharding: ${SW_STORAGE_ES_LOGIC_SHARDING:false}
    # Custom routing can reduce the impact of searches. Instead of having to fan out a search request to all the shards in an index, the request can be sent to just the shard that matches the specific routing value (or values).
    enableCustomRouting: ${SW_STORAGE_ES_ENABLE_CUSTOM_ROUTING:false}
  h2:
    properties:
      jdbcUrl: ${SW_STORAGE_H2_URL:jdbc:h2:mem:skywalking-oap-db;DB_CLOSE_DELAY=-1;DATABASE_TO_UPPER=FALSE}
      dataSource.user: ${SW_STORAGE_H2_USER:sa}
    metadataQueryMaxSize: ${SW_STORAGE_H2_QUERY_MAX_SIZE:5000}
    maxSizeOfBatchSql: ${SW_STORAGE_MAX_SIZE_OF_BATCH_SQL:100}
    asyncBatchPersistentPoolSize: ${SW_STORAGE_ASYNC_BATCH_PERSISTENT_POOL_SIZE:1}
  mysql:
    properties:
      jdbcUrl: ${SW_JDBC_URL:"jdbc:mysql://localhost:3306/swtest?rewriteBatchedStatements=true&allowMultiQueries=true"}
      dataSource.user: ${SW_DATA_SOURCE_USER:root}
      dataSource.password: ${SW_DATA_SOURCE_PASSWORD:root@1234}
      dataSource.cachePrepStmts: ${SW_DATA_SOURCE_CACHE_PREP_STMTS:true}
      dataSource.prepStmtCacheSize: ${SW_DATA_SOURCE_PREP_STMT_CACHE_SQL_SIZE:250}
      dataSource.prepStmtCacheSqlLimit: ${SW_DATA_SOURCE_PREP_STMT_CACHE_SQL_LIMIT:2048}
      dataSource.useServerPrepStmts: ${SW_DATA_SOURCE_USE_SERVER_PREP_STMTS:true}
    metadataQueryMaxSize: ${SW_STORAGE_MYSQL_QUERY_MAX_SIZE:5000}
    maxSizeOfBatchSql: ${SW_STORAGE_MAX_SIZE_OF_BATCH_SQL:2000}
    asyncBatchPersistentPoolSize: ${SW_STORAGE_ASYNC_BATCH_PERSISTENT_POOL_SIZE:4}
  postgresql:
    properties:
      jdbcUrl: ${SW_JDBC_URL:"jdbc:postgresql://localhost:5432/skywalking"}
      dataSource.user: ${SW_DATA_SOURCE_USER:postgres}
      dataSource.password: ${SW_DATA_SOURCE_PASSWORD:123456}
      dataSource.cachePrepStmts: ${SW_DATA_SOURCE_CACHE_PREP_STMTS:true}
      dataSource.prepStmtCacheSize: ${SW_DATA_SOURCE_PREP_STMT_CACHE_SQL_SIZE:250}
      dataSource.prepStmtCacheSqlLimit: ${SW_DATA_SOURCE_PREP_STMT_CACHE_SQL_LIMIT:2048}
      dataSource.useServerPrepStmts: ${SW_DATA_SOURCE_USE_SERVER_PREP_STMTS:true}
    metadataQueryMaxSize: ${SW_STORAGE_MYSQL_QUERY_MAX_SIZE:5000}
    maxSizeOfBatchSql: ${SW_STORAGE_MAX_SIZE_OF_BATCH_SQL:2000}
    asyncBatchPersistentPoolSize: ${SW_STORAGE_ASYNC_BATCH_PERSISTENT_POOL_SIZE:4}
  banyandb:
    # Targets is the list of BanyanDB servers, separated by commas.
    # Each target is a BanyanDB server in the format of `host:port`
    # The host is the IP address or domain name of the BanyanDB server, and the port is the port number of the BanyanDB server.
    targets: ${SW_STORAGE_BANYANDB_TARGETS:localhost:17912}
    # The max number of records in a bulk write request.
    # Bigger value can improve the write performance, but also increase the OAP and BanyanDB Server memory usage.
    maxBulkSize: ${SW_STORAGE_BANYANDB_MAX_BULK_SIZE:10000}
    # The minimum seconds between two bulk flushes.
    # If the data in a bulk is less than maxBulkSize, the data will be flushed after this period.
    # If the data in a bulk is more than maxBulkSize, the data will be flushed immediately.
    # Bigger value can reduce the write pressure on BanyanDB Server, but also increase the latency of the data.
    flushInterval: ${SW_STORAGE_BANYANDB_FLUSH_INTERVAL:15}
    # The timeout seconds of a bulk flush.
    flushTimeout: ${SW_STORAGE_BANYANDB_FLUSH_TIMEOUT:10}
    # The shard number of `measure` groups that store the metrics data.
    metricsShardsNumber: ${SW_STORAGE_BANYANDB_METRICS_SHARDS_NUMBER:1}
    # The shard number of `stream` groups that store the trace, log and profile data.
    recordShardsNumber: ${SW_STORAGE_BANYANDB_RECORD_SHARDS_NUMBER:1}
    # The multiplier of the number of shards of the super dataset.
    # Super dataset is a special dataset that stores the trace or log data that is too large to be stored in the normal dataset.
    # If the normal dataset has `n` shards, the super dataset will have `n * superDatasetShardsFactor` shards.
    # For example, supposing `recordShardsNumber` is 3, and `superDatasetShardsFactor` is 2,
    # `segment-default` is a normal dataset that has 3 shards, and `segment-minute` is a super dataset that has 6 shards.
    superDatasetShardsFactor: ${SW_STORAGE_BANYANDB_SUPERDATASET_SHARDS_FACTOR:2}
    # The number of threads that write data to BanyanDB concurrently.
    # Bigger value can improve the write performance, but also increase the OAP and BanyanDB Server CPU usage.
    concurrentWriteThreads: ${SW_STORAGE_BANYANDB_CONCURRENT_WRITE_THREADS:15}
    # The max number of profile task query in a request.
    profileTaskQueryMaxSize: ${SW_STORAGE_BANYANDB_PROFILE_TASK_QUERY_MAX_SIZE:200}
    # Data is stored in BanyanDB in segments. A segment is a time range of data.
    # The segment interval is the time range of a segment.
    # The value should be less or equal to data TTL relevant settings.
    segmentIntervalDays: ${SW_STORAGE_BANYANDB_SEGMENT_INTERVAL_DAYS:1}
    # The super dataset segment interval is the time range of a segment in the super dataset.
    superDatasetSegmentIntervalDays: ${SW_STORAGE_BANYANDB_SUPER_DATASET_SEGMENT_INTERVAL_DAYS:1}
    # Specific groups settings.
    # For example, {"group1": {"blockIntervalHours": 4, "segmentIntervalDays": 1}}
    # Please refer to https://github.com/apache/skywalking-banyandb/blob/${BANYANDB_RELEASE}/docs/interacting/bydbctl/schema/group.md#create-operation
    # for group setting details.
    specificGroupSettings: ${SW_STORAGE_BANYANDB_SPECIFIC_GROUP_SETTINGS:""}
    # If the BanyanDB server is configured with TLS, config the TLS cert file path and open tls connection.
    sslTrustCAPath: ${SW_STORAGE_BANYANDB_SSL_TRUST_CA_PATH:""}

agent-analyzer:
  selector: ${SW_AGENT_ANALYZER:default}
  default:
    # The default sampling rate and the default trace latency time configured by the 'traceSamplingPolicySettingsFile' file.
    traceSamplingPolicySettingsFile: ${SW_TRACE_SAMPLING_POLICY_SETTINGS_FILE:trace-sampling-policy-settings.yml}
    slowDBAccessThreshold: ${SW_SLOW_DB_THRESHOLD:default:200,mongodb:100} # The slow database access thresholds. Unit ms.
    forceSampleErrorSegment: ${SW_FORCE_SAMPLE_ERROR_SEGMENT:true} # When sampling mechanism active, this config can open(true) force save some error segment. true is default.
    segmentStatusAnalysisStrategy: ${SW_SEGMENT_STATUS_ANALYSIS_STRATEGY:FROM_SPAN_STATUS} # Determine the final segment status from the status of spans. Available values are `FROM_SPAN_STATUS` , `FROM_ENTRY_SPAN` and `FROM_FIRST_SPAN`. `FROM_SPAN_STATUS` represents the segment status would be error if any span is in error status. `FROM_ENTRY_SPAN` means the segment status would be determined by the status of entry spans only. `FROM_FIRST_SPAN` means the segment status would be determined by the status of the first span only.
    # Nginx and Envoy agents can't get the real remote address.
    # Exit spans with the component in the list would not generate the client-side instance relation metrics.
    noUpstreamRealAddressAgents: ${SW_NO_UPSTREAM_REAL_ADDRESS:6000,9000}
    meterAnalyzerActiveFiles: ${SW_METER_ANALYZER_ACTIVE_FILES:datasource,threadpool,satellite,go-runtime,python-runtime,continuous-profiling,java-agent} # Which files could be meter analyzed, files split by ","
    slowCacheReadThreshold: ${SW_SLOW_CACHE_SLOW_READ_THRESHOLD:default:20,redis:10} # The slow cache read operation thresholds. Unit ms.
    slowCacheWriteThreshold: ${SW_SLOW_CACHE_SLOW_WRITE_THRESHOLD:default:20,redis:10} # The slow cache write operation thresholds. Unit ms.

log-analyzer:
  selector: ${SW_LOG_ANALYZER:default}
  default:
    lalFiles: ${SW_LOG_LAL_FILES:envoy-als,mesh-dp,mysql-slowsql,pgsql-slowsql,redis-slowsql,k8s-service,nginx,default}
    malFiles: ${SW_LOG_MAL_FILES:"nginx"}

event-analyzer:
  selector: ${SW_EVENT_ANALYZER:default}
  default:

receiver-sharing-server:
  selector: ${SW_RECEIVER_SHARING_SERVER:default}
  default:
    # For HTTP server
    restHost: ${SW_RECEIVER_SHARING_REST_HOST:0.0.0.0}
    restPort: ${SW_RECEIVER_SHARING_REST_PORT:0}
    restContextPath: ${SW_RECEIVER_SHARING_REST_CONTEXT_PATH:/}
    restMaxThreads: ${SW_RECEIVER_SHARING_REST_MAX_THREADS:200}
    restIdleTimeOut: ${SW_RECEIVER_SHARING_REST_IDLE_TIMEOUT:30000}
    restAcceptQueueSize: ${SW_RECEIVER_SHARING_REST_QUEUE_SIZE:0}
    httpMaxRequestHeaderSize: ${SW_RECEIVER_SHARING_HTTP_MAX_REQUEST_HEADER_SIZE:8192}
    # For gRPC server
    gRPCHost: ${SW_RECEIVER_GRPC_HOST:0.0.0.0}
    gRPCPort: ${SW_RECEIVER_GRPC_PORT:0}
    maxConcurrentCallsPerConnection: ${SW_RECEIVER_GRPC_MAX_CONCURRENT_CALL:0}
    maxMessageSize: ${SW_RECEIVER_GRPC_MAX_MESSAGE_SIZE:52428800} #50MB
    gRPCThreadPoolSize: ${SW_RECEIVER_GRPC_THREAD_POOL_SIZE:0}
    gRPCSslEnabled: ${SW_RECEIVER_GRPC_SSL_ENABLED:false}
    gRPCSslKeyPath: ${SW_RECEIVER_GRPC_SSL_KEY_PATH:""}
    gRPCSslCertChainPath: ${SW_RECEIVER_GRPC_SSL_CERT_CHAIN_PATH:""}
    gRPCSslTrustedCAsPath: ${SW_RECEIVER_GRPC_SSL_TRUSTED_CAS_PATH:""}
    authentication: ${SW_AUTHENTICATION:""}

receiver-register:
  selector: ${SW_RECEIVER_REGISTER:default}
  default:

receiver-trace:
  selector: ${SW_RECEIVER_TRACE:default}
  default:

receiver-jvm:
  selector: ${SW_RECEIVER_JVM:default}
  default:

receiver-clr:
  selector: ${SW_RECEIVER_CLR:default}
  default:

receiver-profile:
  selector: ${SW_RECEIVER_PROFILE:default}
  default:

receiver-zabbix:
  selector: ${SW_RECEIVER_ZABBIX:-}
  default:
    port: ${SW_RECEIVER_ZABBIX_PORT:10051}
    host: ${SW_RECEIVER_ZABBIX_HOST:0.0.0.0}
    activeFiles: ${SW_RECEIVER_ZABBIX_ACTIVE_FILES:agent}

service-mesh:
  selector: ${SW_SERVICE_MESH:default}
  default:

envoy-metric:
  selector: ${SW_ENVOY_METRIC:default}
  default:
    acceptMetricsService: ${SW_ENVOY_METRIC_SERVICE:true}
    alsHTTPAnalysis: ${SW_ENVOY_METRIC_ALS_HTTP_ANALYSIS:""}
    alsTCPAnalysis: ${SW_ENVOY_METRIC_ALS_TCP_ANALYSIS:""}
    # `k8sServiceNameRule` allows you to customize the service name in ALS via Kubernetes metadata,
    # the available variables are `pod`, `service`, f.e., you can use `${service.metadata.name}-${pod.metadata.labels.version}`
    # to append the version number to the service name.
    # Be careful, when using environment variables to pass this configuration, use single quotes(`''`) to avoid it being evaluated by the shell.
    k8sServiceNameRule: ${K8S_SERVICE_NAME_RULE:"${pod.metadata.labels.(service.istio.io/canonical-name)}.${pod.metadata.namespace}"}
    istioServiceNameRule: ${ISTIO_SERVICE_NAME_RULE:"${serviceEntry.metadata.name}.${serviceEntry.metadata.namespace}"}
    # When looking up service informations from the Istio ServiceEntries, some
    # of the ServiceEntries might be created in several namespaces automatically
    # by some components, and OAP will randomly pick one of them to build the
    # service name, users can use this config to exclude ServiceEntries that
    # they don't want to be used. Comma separated.
    istioServiceEntryIgnoredNamespaces: ${SW_ISTIO_SERVICE_ENTRY_IGNORED_NAMESPACES:""}
    gRPCHost: ${SW_ALS_GRPC_HOST:0.0.0.0}
    gRPCPort: ${SW_ALS_GRPC_PORT:0}
    maxConcurrentCallsPerConnection: ${SW_ALS_GRPC_MAX_CONCURRENT_CALL:0}
    maxMessageSize: ${SW_ALS_GRPC_MAX_MESSAGE_SIZE:0}
    gRPCThreadPoolSize: ${SW_ALS_GRPC_THREAD_POOL_SIZE:0}
    gRPCSslEnabled: ${SW_ALS_GRPC_SSL_ENABLED:false}
    gRPCSslKeyPath: ${SW_ALS_GRPC_SSL_KEY_PATH:""}
    gRPCSslCertChainPath: ${SW_ALS_GRPC_SSL_CERT_CHAIN_PATH:""}
    gRPCSslTrustedCAsPath: ${SW_ALS_GRPC_SSL_TRUSTED_CAS_PATH:""}

kafka-fetcher:
  selector: ${SW_KAFKA_FETCHER:-}
  default:
    bootstrapServers: ${SW_KAFKA_FETCHER_SERVERS:localhost:9092}
    namespace: ${SW_NAMESPACE:""}
    partitions: ${SW_KAFKA_FETCHER_PARTITIONS:3}
    replicationFactor: ${SW_KAFKA_FETCHER_PARTITIONS_FACTOR:2}
    enableNativeProtoLog: ${SW_KAFKA_FETCHER_ENABLE_NATIVE_PROTO_LOG:true}
    enableNativeJsonLog: ${SW_KAFKA_FETCHER_ENABLE_NATIVE_JSON_LOG:true}
    consumers: ${SW_KAFKA_FETCHER_CONSUMERS:1}
    kafkaHandlerThreadPoolSize: ${SW_KAFKA_HANDLER_THREAD_POOL_SIZE:-1}
    kafkaHandlerThreadPoolQueueSize: ${SW_KAFKA_HANDLER_THREAD_POOL_QUEUE_SIZE:-1}

cilium-fetcher:
  selector: ${SW_CILIUM_FETCHER:-}
  default:
    peerHost: ${SW_CILIUM_FETCHER_PEER_HOST:hubble-peer.kube-system.svc.cluster.local}
    peerPort: ${SW_CILIUM_FETCHER_PEER_PORT:80}
    fetchFailureRetrySecond: ${SW_CILIUM_FETCHER_FETCH_FAILURE_RETRY_SECOND:10}
    sslConnection: ${SW_CILIUM_FETCHER_SSL_CONNECTION:false}
    sslPrivateKeyFile: ${SW_CILIUM_FETCHER_PRIVATE_KEY_FILE_PATH:}
    sslCertChainFile: ${SW_CILIUM_FETCHER_CERT_CHAIN_FILE_PATH:}
    sslCaFile: ${SW_CILIUM_FETCHER_CA_FILE_PATH:}
    convertClientAsServerTraffic: ${SW_CILIUM_FETCHER_CONVERT_CLIENT_AS_SERVER_TRAFFIC:true}

receiver-meter:
  selector: ${SW_RECEIVER_METER:default}
  default:

receiver-otel:
  selector: ${SW_OTEL_RECEIVER:default}
  default:
    enabledHandlers: ${SW_OTEL_RECEIVER_ENABLED_HANDLERS:"otlp-metrics,otlp-logs"}
    enabledOtelMetricsRules: ${SW_OTEL_RECEIVER_ENABLED_OTEL_METRICS_RULES:"apisix,nginx/*,k8s/*,istio-controlplane,vm,mysql/*,postgresql/*,oap,aws-eks/*,windows,aws-s3/*,aws-dynamodb/*,aws-gateway/*,redis/*,elasticsearch/*,rabbitmq/*,mongodb/*,kafka/*,pulsar/*,bookkeeper/*,rocketmq/*,clickhouse/*,activemq/*"}

receiver-zipkin:
  selector: ${SW_RECEIVER_ZIPKIN:-}
  default:
    # Defines a set of span tag keys which are searchable.
    # The max length of key=value should be less than 256 or will be dropped.
    searchableTracesTags: ${SW_ZIPKIN_SEARCHABLE_TAG_KEYS:http.method}
    # The sample rate precision is 1/10000, should be between 0 and 10000
    sampleRate: ${SW_ZIPKIN_SAMPLE_RATE:10000}
    ## The below configs are for OAP collect zipkin trace from HTTP
    enableHttpCollector: ${SW_ZIPKIN_HTTP_COLLECTOR_ENABLED:true}
    restHost: ${SW_RECEIVER_ZIPKIN_REST_HOST:0.0.0.0}
    restPort: ${SW_RECEIVER_ZIPKIN_REST_PORT:9411}
    restContextPath: ${SW_RECEIVER_ZIPKIN_REST_CONTEXT_PATH:/}
    restMaxThreads: ${SW_RECEIVER_ZIPKIN_REST_MAX_THREADS:200}
    restIdleTimeOut: ${SW_RECEIVER_ZIPKIN_REST_IDLE_TIMEOUT:30000}
    restAcceptQueueSize: ${SW_RECEIVER_ZIPKIN_REST_QUEUE_SIZE:0}
    ## The below configs are for OAP collect zipkin trace from kafka
    enableKafkaCollector: ${SW_ZIPKIN_KAFKA_COLLECTOR_ENABLED:false}
    kafkaBootstrapServers: ${SW_ZIPKIN_KAFKA_SERVERS:localhost:9092}
    kafkaGroupId: ${SW_ZIPKIN_KAFKA_GROUP_ID:zipkin}
    kafkaTopic: ${SW_ZIPKIN_KAFKA_TOPIC:zipkin}
    # Kafka consumer config, JSON format as Properties. If it contains the same key with above, would override.
    kafkaConsumerConfig: ${SW_ZIPKIN_KAFKA_CONSUMER_CONFIG:"{\"auto.offset.reset\":\"earliest\",\"enable.auto.commit\":true}"}
    # The Count of the topic consumers
    kafkaConsumers: ${SW_ZIPKIN_KAFKA_CONSUMERS:1}
    kafkaHandlerThreadPoolSize: ${SW_ZIPKIN_KAFKA_HANDLER_THREAD_POOL_SIZE:-1}
    kafkaHandlerThreadPoolQueueSize: ${SW_ZIPKIN_KAFKA_HANDLER_THREAD_POOL_QUEUE_SIZE:-1}

receiver-browser:
  selector: ${SW_RECEIVER_BROWSER:default}
  default:
    # The sample rate precision is 1/10000. 10000 means 100% sample in default.
    sampleRate: ${SW_RECEIVER_BROWSER_SAMPLE_RATE:10000}

receiver-log:
  selector: ${SW_RECEIVER_LOG:default}
  default:

query:
  selector: ${SW_QUERY:graphql}
  graphql:
    # Enable the log testing API to test the LAL.
    # NOTE: This API evaluates untrusted code on the OAP server.
    # A malicious script can do significant damage (steal keys and secrets, remove files and directories, install malware, etc).
    # As such, please enable this API only when you completely trust your users.
    enableLogTestTool: ${SW_QUERY_GRAPHQL_ENABLE_LOG_TEST_TOOL:false}
    # Maximum complexity allowed for the GraphQL query that can be used to
    # abort a query if the total number of data fields queried exceeds the defined threshold.
    maxQueryComplexity: ${SW_QUERY_MAX_QUERY_COMPLEXITY:3000}
    # Allow user add, disable and update UI template
    enableUpdateUITemplate: ${SW_ENABLE_UPDATE_UI_TEMPLATE:false}
    # "On demand log" allows users to fetch Pod containers' log in real time,
    # because this might expose secrets in the logs (if any), users need
    # to enable this manually, and add permissions to OAP cluster role.
    enableOnDemandPodLog: ${SW_ENABLE_ON_DEMAND_POD_LOG:false}

# This module is for Zipkin query API and support zipkin-lens UI
query-zipkin:
  selector: ${SW_QUERY_ZIPKIN:-}
  default:
    # For HTTP server
    restHost: ${SW_QUERY_ZIPKIN_REST_HOST:0.0.0.0}
    restPort: ${SW_QUERY_ZIPKIN_REST_PORT:9412}
    restContextPath: ${SW_QUERY_ZIPKIN_REST_CONTEXT_PATH:/zipkin}
    restMaxThreads: ${SW_QUERY_ZIPKIN_REST_MAX_THREADS:200}
    restIdleTimeOut: ${SW_QUERY_ZIPKIN_REST_IDLE_TIMEOUT:30000}
    restAcceptQueueSize: ${SW_QUERY_ZIPKIN_REST_QUEUE_SIZE:0}
    # Default look back for traces and autocompleteTags, 1 day in millis
    lookback: ${SW_QUERY_ZIPKIN_LOOKBACK:86400000}
    # The Cache-Control max-age (seconds) for serviceNames, remoteServiceNames and spanNames
    namesMaxAge: ${SW_QUERY_ZIPKIN_NAMES_MAX_AGE:300}
    ## The below config are OAP support for zipkin-lens UI
    # Default traces query max size
    uiQueryLimit: ${SW_QUERY_ZIPKIN_UI_QUERY_LIMIT:10}
    # Default look back on the UI for search traces, 15 minutes in millis
    uiDefaultLookback: ${SW_QUERY_ZIPKIN_UI_DEFAULT_LOOKBACK:900000}

#This module is for PromQL API.
promql:
  selector: ${SW_PROMQL:default}
  default:
    # For HTTP server
    restHost: ${SW_PROMQL_REST_HOST:0.0.0.0}
    restPort: ${SW_PROMQL_REST_PORT:9090}
    restContextPath: ${SW_PROMQL_REST_CONTEXT_PATH:/}
    restMaxThreads: ${SW_PROMQL_REST_MAX_THREADS:200}
    restIdleTimeOut: ${SW_PROMQL_REST_IDLE_TIMEOUT:30000}
    restAcceptQueueSize: ${SW_PROMQL_REST_QUEUE_SIZE:0}
    # The below config is for the API buildInfo, set the value to mock the build info.
    buildInfoVersion: ${SW_PROMQL_BUILD_INFO_VERSION:"2.45.0"}
    buildInfoRevision: ${SW_PROMQL_BUILD_INFO_REVISION:""}
    buildInfoBranch: ${SW_PROMQL_BUILD_INFO_BRANCH:""}
    buildInfoBuildUser: ${SW_PROMQL_BUILD_INFO_BUILD_USER:""}
    buildInfoBuildDate: ${SW_PROMQL_BUILD_INFO_BUILD_DATE:""}
    buildInfoGoVersion: ${SW_PROMQL_BUILD_INFO_GO_VERSION:""}

#This module is for LogQL API.
logql:
  selector: ${SW_LOGQL:default}
  default:
    # For HTTP server
    restHost: ${SW_LOGQL_REST_HOST:0.0.0.0}
    restPort: ${SW_LOGQL_REST_PORT:3100}
    restContextPath: ${SW_LOGQL_REST_CONTEXT_PATH:/}
    restMaxThreads: ${SW_LOGQL_REST_MAX_THREADS:200}
    restIdleTimeOut: ${SW_LOGQL_REST_IDLE_TIMEOUT:30000}
    restAcceptQueueSize: ${SW_LOGQL_REST_QUEUE_SIZE:0}

alarm:
  selector: ${SW_ALARM:default}
  default:

telemetry:
  selector: ${SW_TELEMETRY:none}
  none:
  prometheus:
    host: ${SW_TELEMETRY_PROMETHEUS_HOST:0.0.0.0}
    port: ${SW_TELEMETRY_PROMETHEUS_PORT:1234}
    sslEnabled: ${SW_TELEMETRY_PROMETHEUS_SSL_ENABLED:false}
    sslKeyPath: ${SW_TELEMETRY_PROMETHEUS_SSL_KEY_PATH:""}
    sslCertChainPath: ${SW_TELEMETRY_PROMETHEUS_SSL_CERT_CHAIN_PATH:""}

configuration:
  selector: ${SW_CONFIGURATION:none}
  none:
  grpc:
    host: ${SW_DCS_SERVER_HOST:""}
    port: ${SW_DCS_SERVER_PORT:80}
    clusterName: ${SW_DCS_CLUSTER_NAME:SkyWalking}
    period: ${SW_DCS_PERIOD:20}
    maxInboundMessageSize: ${SW_DCS_MAX_INBOUND_MESSAGE_SIZE:4194304}
  apollo:
    apolloMeta: ${SW_CONFIG_APOLLO:http://localhost:8080}
    apolloCluster: ${SW_CONFIG_APOLLO_CLUSTER:default}
    apolloEnv: ${SW_CONFIG_APOLLO_ENV:""}
    appId: ${SW_CONFIG_APOLLO_APP_ID:skywalking}
  zookeeper:
    period: ${SW_CONFIG_ZK_PERIOD:60} # Unit seconds, sync period. Default fetch every 60 seconds.
    namespace: ${SW_CONFIG_ZK_NAMESPACE:/default}
    hostPort: ${SW_CONFIG_ZK_HOST_PORT:localhost:2181}
    # Retry Policy
    baseSleepTimeMs: ${SW_CONFIG_ZK_BASE_SLEEP_TIME_MS:1000} # initial amount of time to wait between retries
    maxRetries: ${SW_CONFIG_ZK_MAX_RETRIES:3} # max number of times to retry
  etcd:
    period: ${SW_CONFIG_ETCD_PERIOD:60} # Unit seconds, sync period. Default fetch every 60 seconds.
    endpoints: ${SW_CONFIG_ETCD_ENDPOINTS:http://localhost:2379}
    namespace: ${SW_CONFIG_ETCD_NAMESPACE:/skywalking}
    authentication: ${SW_CONFIG_ETCD_AUTHENTICATION:false}
    user: ${SW_CONFIG_ETCD_USER:}
    password: ${SW_CONFIG_ETCD_password:}
  consul:
    # Consul host and ports, separated by comma, e.g. 1.2.3.4:8500,2.3.4.5:8500
    hostAndPorts: ${SW_CONFIG_CONSUL_HOST_AND_PORTS:1.2.3.4:8500}
    # Sync period in seconds. Defaults to 60 seconds.
    period: ${SW_CONFIG_CONSUL_PERIOD:60}
    # Consul aclToken
    aclToken: ${SW_CONFIG_CONSUL_ACL_TOKEN:""}
  k8s-configmap:
    period: ${SW_CONFIG_CONFIGMAP_PERIOD:60}
    namespace: ${SW_CLUSTER_K8S_NAMESPACE:default}
    labelSelector: ${SW_CLUSTER_K8S_LABEL:app=collector,release=skywalking}
  nacos:
    # Nacos Server Host
    serverAddr: ${SW_CONFIG_NACOS_SERVER_ADDR:127.0.0.1}
    # Nacos Server Port
    port: ${SW_CONFIG_NACOS_SERVER_PORT:8848}
    # Nacos Configuration Group
    group: ${SW_CONFIG_NACOS_SERVER_GROUP:skywalking}
    # Nacos Configuration namespace
    namespace: ${SW_CONFIG_NACOS_SERVER_NAMESPACE:}
    # Unit seconds, sync period. Default fetch every 60 seconds.
    period: ${SW_CONFIG_NACOS_PERIOD:60}
    # Nacos auth username
    username: ${SW_CONFIG_NACOS_USERNAME:""}
    password: ${SW_CONFIG_NACOS_PASSWORD:""}
    # Nacos auth accessKey
    accessKey: ${SW_CONFIG_NACOS_ACCESSKEY:""}
    secretKey: ${SW_CONFIG_NACOS_SECRETKEY:""}

exporter:
  selector: ${SW_EXPORTER:-}
  default:
    # gRPC exporter
    enableGRPCMetrics: ${SW_EXPORTER_ENABLE_GRPC_METRICS:false}
    gRPCTargetHost: ${SW_EXPORTER_GRPC_HOST:127.0.0.1}
    gRPCTargetPort: ${SW_EXPORTER_GRPC_PORT:9870}
    # Kafka exporter
    enableKafkaTrace: ${SW_EXPORTER_ENABLE_KAFKA_TRACE:false}
    enableKafkaLog: ${SW_EXPORTER_ENABLE_KAFKA_LOG:false}
    kafkaBootstrapServers: ${SW_EXPORTER_KAFKA_SERVERS:localhost:9092}
    # Kafka producer config, JSON format as Properties.
    kafkaProducerConfig: ${SW_EXPORTER_KAFKA_PRODUCER_CONFIG:""}
    kafkaTopicTrace: ${SW_EXPORTER_KAFKA_TOPIC_TRACE:skywalking-export-trace}
    kafkaTopicLog: ${SW_EXPORTER_KAFKA_TOPIC_LOG:skywalking-export-log}
    exportErrorStatusTraceOnly: ${SW_EXPORTER_KAFKA_TRACE_FILTER_ERROR:false}

health-checker:
  selector: ${SW_HEALTH_CHECKER:-}
  default:
    checkIntervalSeconds: ${SW_HEALTH_CHECKER_INTERVAL_SECONDS:5}

debugging-query:
  selector: ${SW_DEBUGGING_QUERY:default}
  default:
    # Include the list of keywords to filter configurations including secrets. Separate keywords by a comma.
    keywords4MaskingSecretsOfConfig: ${SW_DEBUGGING_QUERY_KEYWORDS_FOR_MASKING_SECRETS:user,password,token,accessKey,secretKey,authentication}

configuration-discovery:
  selector: ${SW_CONFIGURATION_DISCOVERY:default}
  default:
    disableMessageDigest: ${SW_DISABLE_MESSAGE_DIGEST:false}

receiver-event:
  selector: ${SW_RECEIVER_EVENT:default}
  default:

receiver-ebpf:
  selector: ${SW_RECEIVER_EBPF:default}
  default:
    # The continuous profiling policy cache time, Unit is second.
    continuousPolicyCacheTimeout: ${SW_CONTINUOUS_POLICY_CACHE_TIMEOUT:60}
    gRPCHost: ${SW_EBPF_GRPC_HOST:0.0.0.0}
    gRPCPort: ${SW_EBPF_GRPC_PORT:0}
    maxConcurrentCallsPerConnection: ${SW_EBPF_GRPC_MAX_CONCURRENT_CALL:0}
    maxMessageSize: ${SW_EBPF_ALS_GRPC_MAX_MESSAGE_SIZE:0}
    gRPCThreadPoolSize: ${SW_EBPF_GRPC_THREAD_POOL_SIZE:0}
    gRPCSslEnabled: ${SW_EBPF_GRPC_SSL_ENABLED:false}
    gRPCSslKeyPath: ${SW_EBPF_GRPC_SSL_KEY_PATH:""}
    gRPCSslCertChainPath: ${SW_EBPF_GRPC_SSL_CERT_CHAIN_PATH:""}
    gRPCSslTrustedCAsPath: ${SW_EBPF_GRPC_SSL_TRUSTED_CAS_PATH:""}

receiver-telegraf:
  selector: ${SW_RECEIVER_TELEGRAF:default}
  default:
    activeFiles: ${SW_RECEIVER_TELEGRAF_ACTIVE_FILES:vm}

aws-firehose:
  selector: ${SW_RECEIVER_AWS_FIREHOSE:default}
  default:
    host: ${SW_RECEIVER_AWS_FIREHOSE_HTTP_HOST:0.0.0.0}
    port: ${SW_RECEIVER_AWS_FIREHOSE_HTTP_PORT:12801}
    contextPath: ${SW_RECEIVER_AWS_FIREHOSE_HTTP_CONTEXT_PATH:/}
    maxThreads: ${SW_RECEIVER_AWS_FIREHOSE_HTTP_MAX_THREADS:200}
    idleTimeOut: ${SW_RECEIVER_AWS_FIREHOSE_HTTP_IDLE_TIME_OUT:30000}
    acceptQueueSize: ${SW_RECEIVER_AWS_FIREHOSE_HTTP_ACCEPT_QUEUE_SIZE:0}
    maxRequestHeaderSize: ${SW_RECEIVER_AWS_FIREHOSE_HTTP_MAX_REQUEST_HEADER_SIZE:8192}
    firehoseAccessKey: ${SW_RECEIVER_AWS_FIREHOSE_ACCESS_KEY:}
    enableTLS: ${SW_RECEIVER_AWS_FIREHOSE_HTTP_ENABLE_TLS:false}
    tlsKeyPath: ${SW_RECEIVER_AWS_FIREHOSE_HTTP_TLS_KEY_PATH:}
    tlsCertChainPath: ${SW_RECEIVER_AWS_FIREHOSE_HTTP_TLS_CERT_CHAIN_PATH:}

ai-pipeline:
  selector: ${SW_AI_PIPELINE:default}
  default:
    uriRecognitionServerAddr: ${SW_AI_PIPELINE_URI_RECOGNITION_SERVER_ADDR:}
    uriRecognitionServerPort: ${SW_AI_PIPELINE_URI_RECOGNITION_SERVER_PORT:17128}
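
Every value in application.yml is written as ${ENV_VAR:default}, so the same settings can also be supplied through environment variables instead of editing the file. A minimal sketch for a Command Prompt session, using variable names taken from the storage section above (set them in the same window that will later run startup.bat):

set SW_STORAGE=elasticsearch
set SW_NAMESPACE=linjie
set SW_STORAGE_ES_CLUSTER_NODES=localhost:9200
rem only needed if X-Pack security stays enabled on the ES side (values below are placeholders)
set SW_ES_USER=elastic
set SW_ES_PASSWORD=your-es-password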

2. Edit log4j2.xml

<?xml version="1.0" encoding="UTF-8"?>
<!--
  ~ Licensed to the Apache Software Foundation (ASF) under one or more
  ~ contributor license agreements.  See the NOTICE file distributed with
  ~ this work for additional information regarding copyright ownership.
  ~ The ASF licenses this file to You under the Apache License, Version 2.0
  ~ (the "License"); you may not use this file except in compliance with
  ~ the License.  You may obtain a copy of the License at
  ~
  ~     http://www.apache.org/licenses/LICENSE-2.0
  ~
  ~ Unless required by applicable law or agreed to in writing, software
  ~ distributed under the License is distributed on an "AS IS" BASIS,
  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  ~ See the License for the specific language governing permissions and
  ~ limitations under the License.
  -->
<Configuration status="info">
    <Properties>
        <Property name="log-path">${sys:oap.logDir}</Property>
    </Properties>
    <Appenders>
        <RollingFile name="RollingFile" fileName="${log-path}/skywalking-oap-server.log"
                     filePattern="${log-path}/skywalking-oap-server-%d{yyyy-MM-dd}-%i.log">
            <PatternLayout>
                <pattern>%d - %c - %L [%t] %-5p %x - %m%n</pattern>
            </PatternLayout>
            <Policies>
                <SizeBasedTriggeringPolicy size="102400KB"/>
            </Policies>
            <DefaultRolloverStrategy max="30"/>
        </RollingFile>
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout>
                <pattern>%m%n</pattern>
            </PatternLayout>
            <MarkerFilter marker="Console" onMatch="ACCEPT" onMismatch="DENY"/>
        </Console>
    </Appenders>
    <Loggers>
        <logger name="org.apache.zookeeper" level="INFO"/>
        <logger name="io.grpc.netty" level="INFO"/>
        <Logger name="org.apache.skywalking.oap.server.library.server.grpc.GRPCServer" level="INFO" additivity="false">
            <AppenderRef ref="Console"/>
        </Logger>
        <Root level="info">
            <AppenderRef ref="RollingFile"/>
            <AppenderRef ref="Console"/>
        </Root>
    </Loggers>
</Configuration>

Next, modify the UI configuration under D:\Temp\1\apache-skywalking-apm-bin\webapp.

1. Edit application.yml. serverPort (8080 by default) is the port the web UI listens on, and oapServices must point at the OAP HTTP endpoint (http://localhost:12800 by default):

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

serverPort: ${SW_SERVER_PORT:-8080}

# Comma seperated list of OAP addresses.
oapServices: ${SW_OAP_ADDRESS:-http://localhost:12800}
zipkinServices: ${SW_ZIPKIN_ADDRESS:-http://localhost:9412}
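
If port 8080 is already occupied on the machine, the UI port can be overridden the same way as the OAP settings, by setting the corresponding environment variable before starting (the value below is only an example):

set SW_SERVER_PORT=18080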

2. Edit log4j2.xml

<?xml version="1.0" encoding="UTF-8"?>
<!--
  ~ Licensed to the Apache Software Foundation (ASF) under one or more
  ~ contributor license agreements.  See the NOTICE file distributed with
  ~ this work for additional information regarding copyright ownership.
  ~ The ASF licenses this file to You under the Apache License, Version 2.0
  ~ (the "License"); you may not use this file except in compliance with
  ~ the License.  You may obtain a copy of the License at
  ~
  ~     http://www.apache.org/licenses/LICENSE-2.0
  ~
  ~ Unless required by applicable law or agreed to in writing, software
  ~ distributed under the License is distributed on an "AS IS" BASIS,
  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  ~ See the License for the specific language governing permissions and
  ~ limitations under the License.
  -->
<Configuration status="info">
    <Properties>
        <Property name="log-path">${sys:webapp.logDir}</Property>
    </Properties>
    <Appenders>
        <RollingFile name="RollingFile" fileName="${log-path}/skywalking-webapp.log"
                     filePattern="${log-path}/skywalking-webapp-%d{yyyy-MM-dd}-%i.log">
            <PatternLayout>
                <pattern>%d - %c - %L [%t] %-5p %x - %m%n</pattern>
            </PatternLayout>
            <Policies>
                <SizeBasedTriggeringPolicy size="102400KB"/>
            </Policies>
            <DefaultRolloverStrategy max="30"/>
        </RollingFile>
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout>
                <pattern>%m%n</pattern>
            </PatternLayout>
            <MarkerFilter marker="Console" onMatch="ACCEPT" onMismatch="DENY"/>
        </Console>
    </Appenders>
    <Loggers>
        <logger name="org.apache.zookeeper" level="INFO"/>
        <logger name="io.grpc.netty" level="INFO"/>
        <Logger name="org.apache.skywalking.oap.server.webapp.ApplicationStartUp" level="INFO" additivity="false">
            <AppenderRef ref="Console"/>
        </Logger>
        <Root level="info">
            <AppenderRef ref="RollingFile"/>
            <AppenderRef ref="Console"/>
        </Root>
    </Loggers>
</Configuration>

Notes

The SkyWalking distribution must be placed in a folder whose path contains no spaces. For example, never put it under Program Files; otherwise the service window just flashes and closes on startup, without even writing a log file.
By default it listens on ports 11800 (gRPC) and 12800 (HTTP).
The extracted distribution contains two parts, the collector and the UI:
collector: collects traces, logs and other reported data; its configuration lives in the config folder (application.yml plus the log configuration log4j2.xml).
UI: displays the data; its configuration lives in the webapp folder (application.yml plus the log configuration log4j2.xml).

V. Startup

1. First start Elasticsearch from D:\Temp\1\elasticsearch-8.17.0-windows-x86_64\bin: double-click elasticsearch.bat, or right-click it and run it as administrator.
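
Before starting SkyWalking you can confirm that ES is reachable (curl ships with Windows 10 and later). With security disabled as in the sketch from step I no credentials are needed; otherwise add -u elastic:<password>:

curl http://localhost:9200

A short JSON response containing the cluster name and version number means ES is up.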

2. Then start SkyWalking: switch to D:\Temp\1\apache-skywalking-apm-bin\bin and double-click startup.bat, or right-click it and run it as administrator. startup.bat launches both the OAP collector and the web UI.
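
Once startup.bat has run, you can check that the ports from the notes above are actually listening and then open the UI in a browser (the UI port is the serverPort from the webapp configuration, 8080 by default):

netstat -ano | findstr "11800 12800 8080"
start http://localhost:8080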

If something goes wrong:

If Elasticsearch fails to start, check the logs under D:\Temp\1\elasticsearch-8.17.0-windows-x86_64\logs.

If SkyWalking fails to start, check the logs under D:\Temp\1\apache-skywalking-apm-bin\logs.
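
The OAP log file name comes from the log4j2.xml shown earlier (skywalking-oap-server.log) and the UI writes skywalking-webapp.log; a quick way to page through them from a Command Prompt:

type D:\Temp\1\apache-skywalking-apm-bin\logs\skywalking-oap-server.log | more
type D:\Temp\1\apache-skywalking-apm-bin\logs\skywalking-webapp.log | more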

References:

skywalking-v10.0.1 installation: https://blog.csdn.net/smdai/article/details/142479469

Java environment variable configuration (original post): https://blog.csdn.net/smdai/article/details/140670008
