Elasticsearch: Jira connector tutorial part II - 6 optimization tips

Author: Gustavo Llermaly, from Elastic

Having connected Jira to Elasticsearch, we will now review best practices to level up this deployment.

In the first part of this series, we configured the Jira connector and indexed objects into Elasticsearch. In this second part, we will review some best practices and advanced configurations to get the most out of the connector. These practices complement the current documentation and are applied at the indexing stage.

Getting the connector running is only the first step. When you want to index large volumes of data, every detail matters, and there are many optimization points you can take advantage of when indexing documents from Jira.

Optimization points

  1. Index only the documents you need by applying advanced sync rules
  2. Index only the fields you will use
  3. Optimize the mappings for your needs
  4. Automate document level security
  5. Offload attachment extraction
  6. Monitor the connector's logs

1. Index only the documents you need by applying advanced sync rules

By default, Jira sends all projects, issues, and attachments. If you are only interested in some of them, or, for example, only in issues that are "In Progress", we recommend not indexing everything.

There are three places where documents can be filtered before they land in Elasticsearch:

  1. Remote: we can use native Jira filters to fetch only what we need. This is the best option, and you should use it whenever possible, because the documents we don't want never even leave the source on their way to Elasticsearch. We will use advanced sync rules for this.
  2. Integration: if the source does not offer a native filter for what we need, we can still filter at the integration level with basic sync rules, before the data is ingested into Elasticsearch.
  3. Ingest pipeline: the last chance to process the data before indexing is an Elasticsearch ingest pipeline. Painless scripts give us a lot of flexibility to filter or manipulate documents. The downside is that the data has already left the source and gone through the connector, so this can put a heavy load on the system and raise security concerns. A minimal sketch of this approach is shown right after this list.
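To make the third option concrete, here is a minimal sketch of an ingest pipeline that drops every document whose issue status is not "In Progress", using a Painless condition on a drop processor. The pipeline name is hypothetical, and unlike the remote option, this filtering happens after the data has already left Jira:

PUT _ingest/pipeline/drop-non-in-progress
{
  "description": "Illustrative only: drop documents that are not In Progress Jira issues",
  "processors": [
    {
      "drop": {
        "if": "ctx.Issue?.status?.name != 'In Progress'"
      }
    }
  ]
}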

Let's take a quick look at the Jira issues:

GET bank/_search
{
  "_source": ["Issue.status.name", "Issue.summary"],
  "query": {
    "exists": {
      "field": "Issue.status.name"
    }
  }
}

Note: we use an exists query to return only documents that contain the field we are filtering on.

You can see there are many issues in "To Do" that we don't need:

{"took": 3,"timed_out": false,"_shards": {"total": 2,"successful": 2,"skipped": 0,"failed": 0},"hits": {"total": {"value": 6,"relation": "eq"},"max_score": 1,"hits": [{"_index": "bank","_id": "Marketing Mars-MM-1","_score": 1,"_source": {"Issue": {"summary": "Conquer Mars","status": {"name": "To Do"}}}},{"_index": "bank","_id": "Marketing Mars-MM-3","_score": 1,"_source": {"Issue": {"summary": "Conquering Earth","status": {"name": "In Progress"}}}},{"_index": "bank","_id": "Marketing Mars-MM-2","_score": 1,"_source": {"Issue": {"summary": "Conquer the moon","status": {"name": "To Do"}}}},{"_index": "bank","_id": "Galactic Banking Project-GBP-3","_score": 1,"_source": {"Issue": {"summary": "Intergalactic Security and Compliance","status": {"name": "In Progress"}}}},{"_index": "bank","_id": "Galactic Banking Project-GBP-2","_score": 1,"_source": {"Issue": {"summary": "Bank Application Frontend","status": {"name": "To Do"}}}},{"_index": "bank","_id": "Galactic Banking Project-GBP-1","_score": 1,"_source": {"Issue": {"summary": "Development of API for International Transfers","status": {"name": "To Do"}}}}]}
}

To fetch only the "In Progress" issues, we will create an advanced sync rule using a JQL (Jira Query Language) query:

Go to the connector and click the Sync rules tab, then click Draft Rules. Once inside, go to Advanced Sync Rules and add the following:

  [{"query": "status IN ('In Progress')"}]

Once the rule is applied, run a Full Content Sync.

This rule will exclude every issue that is not "In Progress". You can check it by running the query again:

GET bank/_search
{
  "_source": ["Issue.status.name", "Issue.summary"],
  "query": {
    "exists": {
      "field": "Issue.status.name"
    }
  }
}

This is the new response:

{"took": 2,"timed_out": false,"_shards": {"total": 2,"successful": 2,"skipped": 0,"failed": 0},"hits": {"total": {"value": 2,"relation": "eq"},"max_score": 1,"hits": [{"_index": "bank","_id": "Marketing Mars-MM-3","_score": 1,"_source": {"Issue": {"summary": "Conquering Earth","status": {"name": "In Progress"}}}},{"_index": "bank","_id": "Galactic Banking Project-GBP-3","_score": 1,"_source": {"Issue": {"summary": "Intergalactic Security and Compliance","status": {"name": "In Progress"}}}}]}
}

2. Index only the fields you will use

Now that we only have the documents we want, you can see we still get many fields we don't need. We could hide them with _source when running queries, but the best option is to not index them at all.

For that we will use an ingest pipeline. We can create a pipeline that removes all the fields we are not going to use. Let's say we only want the following information from each issue:

  • Assignee
  • Title
  • Status

We can create a new ingest pipeline that keeps only those fields, using the ingest pipelines Content UI:

Click Copy and customize, and then modify the pipeline called index-name@custom, which should have just been created and be empty. We can do this from the Kibana DevTools console by running the following command:

PUT _ingest/pipeline/bank@custom
{
  "description": "Only keep needed fields for jira issues and move them to root",
  "processors": [
    {
      "remove": {
        "keep": [
          "Issue.assignee.displayName",
          "Issue.summary",
          "Issue.status.name"
        ],
        "ignore_missing": true
      }
    },
    {
      "rename": {
        "field": "Issue.assignee.displayName",
        "target_field": "assignee",
        "ignore_missing": true
      }
    },
    {
      "rename": {
        "field": "Issue.summary",
        "target_field": "summary",
        "ignore_missing": true
      }
    },
    {
      "rename": {
        "field": "Issue.status.name",
        "target_field": "status",
        "ignore_missing": true
      }
    },
    {
      "remove": {
        "field": "Issue"
      }
    }
  ]
}

This pipeline removes the fields we don't need and moves the ones we do need to the root of the document.

The remove processor with the keep parameter removes every field in the document except the ones listed in the keep array.

We can check that this works by running a simulation, using the content of one of the documents from the index:

POST /_ingest/pipeline/bank@custom/_simulate
{"docs": [{"_index": "bank","_id": "Galactic Banking Project-GBP-3","_score": 1,"_source": {"Type": "Epic","Custom_Fields": {"Satisfaction": null,"Approvals": null,"Change reason": null,"Epic Link": null,"Actual end": null,"Design": null,"Campaign assets": null,"Story point estimate": null,"Approver groups": null,"[CHART] Date of First Response": null,"Request Type": null,"Campaign goals": null,"Project overview key": null,"Related projects": null,"Campaign type": null,"Impact": null,"Request participants": [],"Locked forms": null,"Time to first response": null,"Work category": null,"Audience": null,"Open forms": null,"Details": null,"Sprint": null,"Stakeholders": null,"Marketing asset type": null,"Submitted forms": null,"Start date": null,"Actual start": null,"Category": null,"Change risk": null,"Target start": null,"Issue color": "purple","Parent Link": {"hasEpicLinkFieldDependency": false,"showField": false,"nonEditableReason": {"reason": "EPIC_LINK_SHOULD_BE_USED","message": "To set an epic as the parent, use the epic link instead"}},"Format": null,"Target end": null,"Approvers": null,"Team": null,"Change type": null,"Satisfaction date": null,"Request language": null,"Amount": null,"Rank": "0|i0001b:","Affected services": null,"Type": null,"Time to resolution": null,"Total forms": null,"[CHART] Time in Status": null,"Organizations": [],"Flagged": null,"Project overview status": null},"Issue": {"statuscategorychangedate": "2024-11-07T16:59:54.786-0300","issuetype": {"avatarId": 10307,"hierarchyLevel": 1,"name": "Epic","self": "https://tomasmurua.atlassian.net/rest/api/2/issuetype/10008","description": "Epics track collections of related bugs, stories, and tasks.","entityId": "f5637521-ec75-48b8-a1b8-de18520807ca","id": "10008","iconUrl": "https://tomasmurua.atlassian.net/rest/api/2/universal_avatar/view/type/issuetype/avatar/10307?size=medium","subtask": false},"components": [],"timespent": null,"timeoriginalestimate": null,"project": {"simplified": true,"avatarUrls": {"48x48": "https://tomasmurua.atlassian.net/rest/api/2/universal_avatar/view/type/project/avatar/10415","24x24": "https://tomasmurua.atlassian.net/rest/api/2/universal_avatar/view/type/project/avatar/10415?size=small","16x16": "https://tomasmurua.atlassian.net/rest/api/2/universal_avatar/view/type/project/avatar/10415?size=xsmall","32x32": "https://tomasmurua.atlassian.net/rest/api/2/universal_avatar/view/type/project/avatar/10415?size=medium"},"name": "Galactic Banking Project","self": "https://tomasmurua.atlassian.net/rest/api/2/project/10001","id": "10001","projectTypeKey": "software","key": "GBP"},"description": null,"fixVersions": [],"aggregatetimespent": null,"resolution": null,"timetracking": {},"security": null,"aggregatetimeestimate": null,"attachment": [],"resolutiondate": null,"workratio": -1,"summary": "Intergalactic Security and Compliance","watches": {"self": "https://tomasmurua.atlassian.net/rest/api/2/issue/GBP-3/watchers","isWatching": true,"watchCount": 1},"issuerestriction": {"issuerestrictions": {},"shouldDisplay": true},"lastViewed": "2024-11-08T02:04:25.247-0300","creator": {"accountId": "712020:88983800-6c97-469a-9451-79c2dd3732b5","emailAddress": "contornan_cliche.0y@icloud.com","avatarUrls": {"48x48": "https://secure.gravatar.com/avatar/f098101294d1a0da282bb2388df8c257?d=https%3A%2F%2Favatar-management--avatars.us-west-2.prod.public.atl-paas.net%2Finitials%2FTM-3.png","24x24": 
"https://secure.gravatar.com/avatar/f098101294d1a0da282bb2388df8c257?d=https%3A%2F%2Favatar-management--avatars.us-west-2.prod.public.atl-paas.net%2Finitials%2FTM-3.png","16x16": "https://secure.gravatar.com/avatar/f098101294d1a0da282bb2388df8c257?d=https%3A%2F%2Favatar-management--avatars.us-west-2.prod.public.atl-paas.net%2Finitials%2FTM-3.png","32x32": "https://secure.gravatar.com/avatar/f098101294d1a0da282bb2388df8c257?d=https%3A%2F%2Favatar-management--avatars.us-west-2.prod.public.atl-paas.net%2Finitials%2FTM-3.png"},"displayName": "Tomas Murua","accountType": "atlassian","self": "https://tomasmurua.atlassian.net/rest/api/2/user?accountId=712020%3A88983800-6c97-469a-9451-79c2dd3732b5","active": true,"timeZone": "Chile/Continental"},"subtasks": [],"created": "2024-10-29T15:52:40.306-0300","reporter": {"accountId": "712020:88983800-6c97-469a-9451-79c2dd3732b5","emailAddress": "contornan_cliche.0y@icloud.com","avatarUrls": {"48x48": "https://secure.gravatar.com/avatar/f098101294d1a0da282bb2388df8c257?d=https%3A%2F%2Favatar-management--avatars.us-west-2.prod.public.atl-paas.net%2Finitials%2FTM-3.png","24x24": "https://secure.gravatar.com/avatar/f098101294d1a0da282bb2388df8c257?d=https%3A%2F%2Favatar-management--avatars.us-west-2.prod.public.atl-paas.net%2Finitials%2FTM-3.png","16x16": "https://secure.gravatar.com/avatar/f098101294d1a0da282bb2388df8c257?d=https%3A%2F%2Favatar-management--avatars.us-west-2.prod.public.atl-paas.net%2Finitials%2FTM-3.png","32x32": "https://secure.gravatar.com/avatar/f098101294d1a0da282bb2388df8c257?d=https%3A%2F%2Favatar-management--avatars.us-west-2.prod.public.atl-paas.net%2Finitials%2FTM-3.png"},"displayName": "Tomas Murua","accountType": "atlassian","self": "https://tomasmurua.atlassian.net/rest/api/2/user?accountId=712020%3A88983800-6c97-469a-9451-79c2dd3732b5","active": true,"timeZone": "Chile/Continental"},"aggregateprogress": {"total": 0,"progress": 0},"priority": {"name": "Medium","self": "https://tomasmurua.atlassian.net/rest/api/2/priority/3","iconUrl": "https://tomasmurua.atlassian.net/images/icons/priorities/medium.svg","id": "3"},"labels": [],"environment": null,"timeestimate": null,"aggregatetimeoriginalestimate": null,"versions": [],"duedate": null,"progress": {"total": 0,"progress": 0},"issuelinks": [],"votes": {"hasVoted": false,"self": "https://tomasmurua.atlassian.net/rest/api/2/issue/GBP-3/votes","votes": 0},"comment": {"total": 0,"comments": [],"maxResults": 0,"self": "https://tomasmurua.atlassian.net/rest/api/2/issue/10008/comment","startAt": 0},"assignee": {"accountId": "712020:88983800-6c97-469a-9451-79c2dd3732b5","emailAddress": "contornan_cliche.0y@icloud.com","avatarUrls": {"48x48": "https://secure.gravatar.com/avatar/f098101294d1a0da282bb2388df8c257?d=https%3A%2F%2Favatar-management--avatars.us-west-2.prod.public.atl-paas.net%2Finitials%2FTM-3.png","24x24": "https://secure.gravatar.com/avatar/f098101294d1a0da282bb2388df8c257?d=https%3A%2F%2Favatar-management--avatars.us-west-2.prod.public.atl-paas.net%2Finitials%2FTM-3.png","16x16": "https://secure.gravatar.com/avatar/f098101294d1a0da282bb2388df8c257?d=https%3A%2F%2Favatar-management--avatars.us-west-2.prod.public.atl-paas.net%2Finitials%2FTM-3.png","32x32": "https://secure.gravatar.com/avatar/f098101294d1a0da282bb2388df8c257?d=https%3A%2F%2Favatar-management--avatars.us-west-2.prod.public.atl-paas.net%2Finitials%2FTM-3.png"},"displayName": "Tomas Murua","accountType": "atlassian","self": 
"https://tomasmurua.atlassian.net/rest/api/2/user?accountId=712020%3A88983800-6c97-469a-9451-79c2dd3732b5","active": true,"timeZone": "Chile/Continental"},"worklog": {"total": 0,"maxResults": 20,"startAt": 0,"worklogs": []},"updated": "2024-11-07T16:59:54.786-0300","status": {"name": "In Progress","self": "https://tomasmurua.atlassian.net/rest/api/2/status/10004","description": "","iconUrl": "https://tomasmurua.atlassian.net/","id": "10004","statusCategory": {"colorName": "yellow","name": "In Progress","self": "https://tomasmurua.atlassian.net/rest/api/2/statuscategory/4","id": 4,"key": "indeterminate"}}},"id": "Galactic Banking Project-GBP-3","_timestamp": "2024-11-07T16:59:54.786-0300","Key": "GBP-3","_allow_access_control": ["account_id:63c04b092341bff4fff6e0cb","account_id:712020:88983800-6c97-469a-9451-79c2dd3732b5","name:Gustavo","name:Tomas-Murua"]}}]
}

The response will be:

{"docs": [{"doc": {"_index": "bank","_version": "-3","_id": "Galactic Banking Project-GBP-3","_source": {"summary": "Intergalactic Security and Compliance","assignee": "Tomas Murua","status": "In Progress"},"_ingest": {"timestamp": "2024-11-10T06:58:25.494057572Z"}}}]
}

This looks much better! Now let's run a Full Content Sync to apply the changes.

3. Optimize the mappings for your needs

The documents are clean. However, we can optimize further, and here we enter "it depends" territory: some mappings will work for your use case while others won't. The best way to find out is to experiment.

Let's say we experimented and settled on this mapping design:

  • assignee: full-text search and filters
  • summary: full-text search
  • status: filters and sorting

By default, the connector creates the mappings using dynamic_templates, which configure every text field for full-text search, filtering, and sorting. This is a solid baseline, but if we know exactly what we want to do with our fields, we can optimize it.

This is the rule:

{"all_text_fields": {"match_mapping_type": "string","mapping": {"analyzer": "iq_text_base","fields": {"delimiter": {"analyzer": "iq_text_delimiter","type": "text","index_options": "freqs"},"joined": {"search_analyzer": "q_text_bigram","analyzer": "i_text_bigram","type": "text","index_options": "freqs"},"prefix": {"search_analyzer": "q_prefix","analyzer": "i_prefix","type": "text","index_options": "docs"},"enum": {"ignore_above": 2048,"type": "keyword"},"stem": {"analyzer": "iq_text_stem","type": "text"}}}}
}

It creates different subfields for every text field, each serving a different purpose. You can find more information about the analyzers in the documentation.

To use these mappings, you have to:

  1. Create the index before creating the connector
  2. When creating the connector, pick that index instead of creating a new one
  3. Create the ingest pipeline that keeps only the fields you need
  4. Run a Full Content Sync*

*A Full Content Sync sends all documents to Elasticsearch. An Incremental Sync only sends the documents that changed since the last incremental or full content sync. Both methods fetch all the data from the data source.

Our optimized mappings look like this:

PUT bank-optimal
{
  "mappings": {
    "properties": {
      "assignee": {
        "type": "text",
        "fields": {
          "delimiter": {
            "type": "text",
            "index_options": "freqs",
            "analyzer": "iq_text_delimiter"
          },
          "enum": {
            "type": "keyword",
            "ignore_above": 2048
          },
          "joined": {
            "type": "text",
            "index_options": "freqs",
            "analyzer": "i_text_bigram",
            "search_analyzer": "q_text_bigram"
          },
          "prefix": {
            "type": "text",
            "index_options": "docs",
            "analyzer": "i_prefix",
            "search_analyzer": "q_prefix"
          },
          "stem": {
            "type": "text",
            "analyzer": "iq_text_stem"
          }
        },
        "analyzer": "iq_text_base"
      },
      "summary": {
        "type": "text",
        "fields": {
          "delimiter": {
            "type": "text",
            "index_options": "freqs",
            "analyzer": "iq_text_delimiter"
          },
          "joined": {
            "type": "text",
            "index_options": "freqs",
            "analyzer": "i_text_bigram",
            "search_analyzer": "q_text_bigram"
          },
          "prefix": {
            "type": "text",
            "index_options": "docs",
            "analyzer": "i_prefix",
            "search_analyzer": "q_prefix"
          },
          "stem": {
            "type": "text",
            "analyzer": "iq_text_stem"
          }
        },
        "analyzer": "iq_text_base"
      },
      "status": {
        "type": "keyword"
      }
    }
  }
}

For assignee we kept the original mapping because we want this field optimized for both search and filters. For summary we removed the enum keyword subfield because we don't plan to filter on the summary. We mapped status as a keyword because we only plan to filter on that field.

Note: if you are not sure how a field will be used, the baseline analyzers should be fine.
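To see how the optimized mapping is used at query time, here is a hedged example (assuming the bank-optimal index above, with its analyzers available in the index settings): a full-text search on the stemmed summary subfield combined with a filter on the status keyword:

GET bank-optimal/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "summary.stem": "security compliance" } }
      ],
      "filter": [
        { "term": { "status": "In Progress" } }
      ]
    }
  }
}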

4. Automate document level security

In part one we learned how to use Document Level Security (DLS) by manually creating an API key for a user and restricting access based on it. However, if you want an API key with the right permissions to be created automatically every time a user hits your website, you need a script that receives the request, generates an API key from the user ID, and then uses that key to search Elasticsearch.

Here is a reference file in Python:

import os
import requests


class ElasticsearchKeyGenerator:
    def __init__(self):
        self.es_url = "https://xxxxxxx.es.us-central1.gcp.cloud.es.io"  # Your Elasticsearch URL
        self.es_user = ""  # Your Elasticsearch User
        self.es_password = ""  # Your Elasticsearch password

        # Basic configuration for requests
        self.auth = (self.es_user, self.es_password)
        self.headers = {'Content-Type': 'application/json'}

    def create_api_key(self, user_id, index, expiration='1d', metadata=None):
        """Create an Elasticsearch API key for a single index with user-specific filters.

        Args:
            user_id (str): User identifier on the source system
            index (str): Index name
            expiration (str): Key expiration time (default: '1d')
            metadata (dict): Additional metadata for the API key

        Returns:
            str: Encoded API key if successful, None if failed
        """
        try:
            # Get user-specific ACL filters
            acl_index = f'.search-acl-filter-{index}'
            response = requests.get(
                f'{self.es_url}/{acl_index}/_doc/{user_id}',
                auth=self.auth,
                headers=self.headers
            )
            response.raise_for_status()

            # Build the query
            query = {
                'bool': {
                    'must': [
                        {'term': {'_index': index}},
                        response.json()['_source']['query']
                    ]
                }
            }

            # Set default metadata if none provided
            if not metadata:
                metadata = {'created_by': 'create-api-key'}

            # Prepare API key request body
            api_key_body = {
                'name': user_id,
                'expiration': expiration,
                'role_descriptors': {
                    'jira-role': {
                        'index': [{
                            'names': [index],
                            'privileges': ['read'],
                            'query': query
                        }]
                    }
                },
                'metadata': metadata
            }
            print(api_key_body)

            # Create API key
            api_key_response = requests.post(
                f'{self.es_url}/_security/api_key',
                json=api_key_body,
                auth=self.auth,
                headers=self.headers
            )
            api_key_response.raise_for_status()
            return api_key_response.json()['encoded']

        except requests.exceptions.RequestException as e:
            print(f"Error creating API key: {str(e)}")
            return None


# Example usage
if __name__ == "__main__":
    key_generator = ElasticsearchKeyGenerator()
    encoded_key = key_generator.create_api_key(
        user_id="63c04b092341bff4fff6e0cb",  # User id on Jira
        index="bank",
        expiration="1d",
        metadata={
            "application": "my-search-app",
            "namespace": "dev",
            "foo": "bar"
        }
    )
    if encoded_key:
        print(f"Generated API key: {encoded_key}")
    else:
        print("Failed to generate API key")

You can call this create_api_key function on every API request to generate an API key that the user can then use to query Elasticsearch in subsequent requests. You can set an expiration time, and you can also attach arbitrary metadata in case you want to record information about the user or about the API that generated the key.
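As a usage sketch (the URL, index name, and query below are assumptions for illustration), the encoded key returned by create_api_key is sent in the Authorization header with the ApiKey scheme, so every search the user runs is automatically restricted by the DLS filter embedded in the key:

import requests

ES_URL = "https://xxxxxxx.es.us-central1.gcp.cloud.es.io"  # Assumed Elasticsearch URL


def search_as_user(encoded_key, index="bank", text="security"):
    # Query with the user-scoped key instead of the admin credentials
    response = requests.post(
        f"{ES_URL}/{index}/_search",
        headers={
            "Authorization": f"ApiKey {encoded_key}",
            "Content-Type": "application/json",
        },
        json={"query": {"match": {"summary": text}}},
    )
    response.raise_for_status()
    return response.json()["hits"]["hits"]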

5. Offload attachment extraction

For content extraction, such as getting the text out of PDF and PowerPoint files, Elastic provides an out-of-the-box service that works well but has size limits.

By default, the extraction service for native connectors supports attachments of up to 10MB each. If you have larger attachments, for example PDFs with large images inside, or if you want to host the extraction service yourself, Elastic provides a tool that lets you deploy your own extraction service.

This option is only compatible with connector clients, so if you are using a native connector you will need to convert it to a connector client and host it on your own infrastructure.

Follow these steps:

a. Configure the custom extraction service and run it with Docker

docker run \
  -p 8090:8090 \
  -it \
  --name extraction-service \
  docker.elastic.co/enterprise-search/data-extraction-service:$EXTRACTION_SERVICE_VERSION

For EXTRACTION_SERVICE_VERSION you should use 0.3.x with Elasticsearch 8.15.

b. Configure the yaml file with the extraction service settings

Go to the connector client and add the following to the config.yml file to use the extraction service:

extraction_service:
  host: http://localhost:8090

c. Follow the steps to run the connector client

Once it is configured, you can run the connector client with the connector you want to use.

docker run \
-v "</absolute/path/to>/connectors-config:/config" \ # NOTE: change absolute path to match where config.yml is located on your machine
--tty \
--rm \
docker.elastic.co/enterprise-search/elastic-connectors:{version}.0 \
/app/bin/elastic-ingest \
-c /config/config.yml # Path to your configuration file in the container

You can follow the full process in the documentation.

6. Monitor the connector's logs

When something goes wrong, it is very important to be able to look at the connector's logs, and Elastic provides this out of the box.

The first step is to enable logging in the cluster. Sending logs to a different cluster (a monitoring deployment) is recommended, but in a development environment you can also send them to the same cluster where you index your documents.

By default, the connector sends its logs to the elastic-cloud-logs-8 index. If you are using Elastic Cloud, you can inspect the logs in the new Logs Explorer.
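If you prefer the Dev Tools console over Logs Explorer, a minimal sketch like the following should work, assuming the default elastic-cloud-logs-8 indices and the standard @timestamp and message fields in the log entries:

GET elastic-cloud-logs-8*/_search
{
  "size": 20,
  "sort": [{ "@timestamp": "desc" }],
  "query": {
    "match": { "message": "connector" }
  }
}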

Conclusion

In this article we covered different strategies to consider when running connectors in production. Optimizing resources, automating security, and monitoring the cluster are key to running a large system properly.

Want to get Elastic certified? Find out when the next Elasticsearch Engineer training is running!

Elasticsearch is packed with new features to help you build the best search solutions for your use case. Dive into our sample notebooks to learn more, start a free cloud trial, or try Elastic on your local machine today.

Original article: Jira connector tutorial part II: 6 optimization tips - Elasticsearch Labs
