AI - On Query Analysis in RAG (2)

Hello everyone. Query analysis in RAG is an interesting topic, and there is more to it than a single post can cover. Following up on the previous article, this post continues exploring query analysis in RAG and improves it at both the feature level and the code level.

[Figure: ai-langchain-rag]

Feature Level

If the user asks an off-topic question, that is, one unrelated to the tool, there is no need to call the tool; the answer is generated directly. Otherwise, the tool is called to search the local knowledge base, and the answer is generated from what it retrieves.

[Figure: ai-rag-state]

Code Level

  • Since conversation state matters so much in a chat exchange, we can use LangChain's built-in MessagesState directly instead of defining our own State class, which looked like this in the previous article (see the minimal sketch after this list):

    class State(TypedDict):
        question: str
        query: Search
        context: List[Document]
        answer: str

  • In the previous article, the Search tool existed mainly for structured output and had no real substance of its own. This article instead makes retrieve a genuine tool: it can be bound to the LLM, and it can be wrapped by LangGraph's built-in ToolNode component to form a graph node. After the LLM's response arrives, that node runs a semantic search against the local knowledge base and ultimately produces a ToolMessage.
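
For reference, here is a minimal sketch of what MessagesState provides, assuming the current langgraph package layout (the add_messages reducer lives in langgraph.graph.message); the class names below are illustrative only:

from typing_extensions import Annotated, TypedDict
from langchain_core.messages import AnyMessage
from langgraph.graph import MessagesState
from langgraph.graph.message import add_messages

# MessagesState is essentially a TypedDict with a single "messages" key
# whose reducer appends new messages rather than overwriting the list.
class EquivalentState(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]

# To carry extra keys alongside the message history, subclass it.
class StateWithAnswer(MessagesState):
    answer: str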

Example Code

Note: the code snippets in this article are based largely on the LangChain official documentation; interested readers can look them up on the official site.

import os
from langchain_openai import ChatOpenAI
from langchain_openai import OpenAIEmbeddings
from langchain_core.vectorstores import InMemoryVectorStore
import bs4
from langchain import hub
from langchain_community.document_loaders import WebBaseLoader
from langchain_core.documents import Document
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langgraph.graph import START, StateGraph, MessagesState
from typing_extensions import List, TypedDict
from langchain_core.tools import tool
from langchain_core.messages import SystemMessage
from langgraph.graph import END
from langgraph.prebuilt import ToolNode, tools_condition

# Set up environment variables for authentication
os.environ["OPENAI_API_KEY"] = 'your_openai_api_key'

# Initialize OpenAI embeddings using a specified model
embeddings = OpenAIEmbeddings(model="text-embedding-3-large")

# Create an in-memory vector store to hold the embeddings
vector_store = InMemoryVectorStore(embeddings)

# Initialize the language model from OpenAI
llm = ChatOpenAI(model="gpt-4o-mini")

# Set up the document loader for a given web URL, specifying elements to parse
loader = WebBaseLoader(
    web_paths=("https://lilianweng.github.io/posts/2023-06-23-agent/",),
    bs_kwargs=dict(
        parse_only=bs4.SoupStrainer(class_=("post-content", "post-title", "post-header"))
    ),
)

# Load the documents from the web page
docs = loader.load()

# Initialize a text splitter to chunk the document text
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
all_splits = text_splitter.split_documents(docs)

# Index the chunks in the vector store
_ = vector_store.add_documents(documents=all_splits)

# Define a retrieval tool to get relevant documents for a query
@tool(response_format="content_and_artifact")
def retrieve(query: str):
    """Retrieve information related to a query."""
    retrieved_docs = vector_store.similarity_search(query, k=2)
    serialized = "\n\n".join(
        f"Source: {doc.metadata}\nContent: {doc.page_content}"
        for doc in retrieved_docs
    )
    return serialized, retrieved_docs

# Step 1: Generate a tool call for retrieval, or respond directly
def query_or_respond(state: MessagesState):
    """Generate tool call for retrieval or respond."""
    llm_with_tools = llm.bind_tools([retrieve])          # Bind the retrieve tool to the LLM
    response = llm_with_tools.invoke(state["messages"])  # Invoke the LLM with the current messages
    return {"messages": [response]}                      # Return the response message

# Step 2: Execute the retrieval tool
tools = ToolNode([retrieve])

# Step 3: Generate a response using the retrieved content
def generate(state: MessagesState):
    """Generate answer."""
    # Collect the most recent tool messages
    recent_tool_messages = []
    for message in reversed(state["messages"]):
        if message.type == "tool":
            recent_tool_messages.append(message)
        else:
            break
    tool_messages = recent_tool_messages[::-1]  # Reverse back to the original order

    # Create a system message containing the retrieved context
    docs_content = "\n\n".join(doc.content for doc in tool_messages)
    system_message_content = (
        "You are an assistant for question-answering tasks. "
        "Use the following pieces of retrieved context to answer "
        "the question. If you don't know the answer, say that you "
        "don't know. Use three sentences maximum and keep the "
        "answer concise."
        "\n\n"
        f"{docs_content}"
    )

    # Keep human, system, and plain (non-tool-calling) AI messages for the prompt
    conversation_messages = [
        message
        for message in state["messages"]
        if message.type in ("human", "system")
        or (message.type == "ai" and not message.tool_calls)
    ]
    prompt = [SystemMessage(system_message_content)] + conversation_messages

    # Invoke the LLM with the prompt
    response = llm.invoke(prompt)
    return {"messages": [response]}

# Build the state graph for managing message state transitions
graph_builder = StateGraph(MessagesState)
graph_builder.add_node(query_or_respond)  # Add the query_or_respond node
graph_builder.add_node(tools)             # Add the tools node
graph_builder.add_node(generate)          # Add the generate node

# Set the entry point of the state graph
graph_builder.set_entry_point("query_or_respond")

# Define conditional edges based on whether a tool was invoked
graph_builder.add_conditional_edges(
    "query_or_respond",
    tools_condition,
    {END: END, "tools": "tools"},
)
graph_builder.add_edge("tools", "generate")  # Transition from tools to generate
graph_builder.add_edge("generate", END)      # Transition from generate to END

# Compile the graph
graph = graph_builder.compile()

# Interact with the compiled graph using an initial input message
input_message = "Hello"
for step in graph.stream(
    {"messages": [{"role": "user", "content": input_message}]},
    stream_mode="values",
):
    step["messages"][-1].pretty_print()  # Print the latest message

# Another interaction, this time with a question that needs retrieval
input_message = "What is Task Decomposition?"
for step in graph.stream(
    {"messages": [{"role": "user", "content": input_message}]},
    stream_mode="values",
):
    step["messages"][-1].pretty_print()  # Print the latest message

Code Walkthrough

Importing the Required Libraries
import os
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_core.vectorstores import InMemoryVectorStore
import bs4
from langchain import hub
from langchain_community.document_loaders import WebBaseLoader
from langchain_core.documents import Document
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langgraph.graph import START, StateGraph, MessagesState
from typing_extensions import List, TypedDict
from langchain_core.tools import tool
from langchain_core.messages import SystemMessage
from langgraph.graph import END
from langgraph.prebuilt import ToolNode, tools_condition

We first import the required libraries; they provide the tooling for language processing and vector storage.

Setting Environment Variables
os.environ["OPENAI_API_KEY"] = 'your_openai_api_key'

We set environment variables for API authentication and project configuration.
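
Hard-coding the key is fine for a demo; a slightly safer sketch reads it from the environment and prompts only when it is missing:

import getpass
import os

# Prefer an existing environment variable; prompt interactively otherwise.
if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API key: ")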

Initializing the Embedding Model and Vector Store
embeddings = OpenAIEmbeddings(model="text-embedding-3-large")
vector_store = InMemoryVectorStore(embeddings)

We use OpenAI's embedding model to create text embeddings, and initialize an in-memory vector store for the vector operations that follow.
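
As a quick sanity check, you can embed a probe string and, once documents are indexed, query the store directly. A minimal sketch; the probe text here is made up:

# Inspect the embedding dimensionality (3072 for text-embedding-3-large).
vec = embeddings.embed_query("What is task decomposition?")
print(len(vec))

# After add_documents() has run, the store can be searched directly.
for doc in vector_store.similarity_search("task decomposition", k=1):
    print(doc.metadata, doc.page_content[:80])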

llm = ChatOpenAI(model="gpt-4o-mini")

We initialize the gpt-4o-mini language model, which will generate the AI replies later on.

Loading and Splitting Documents
loader = WebBaseLoader(
    web_paths=("https://lilianweng.github.io/posts/2023-06-23-agent/",),
    bs_kwargs=dict(
        parse_only=bs4.SoupStrainer(class_=("post-content", "post-title", "post-header"))
    ),
)
docs = loader.load()

text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
all_splits = text_splitter.split_documents(docs)

We load the content of the given web page, parse only the relevant elements, and split the text into chunks. The chunks are what get embedded and stored in the vector store.
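
A quick, purely illustrative way to see what the splitter produced:

print(f"{len(docs)} document(s) loaded, {len(all_splits)} chunks produced")
print(all_splits[0].page_content[:200])  # peek at the first chunk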

Storing Documents in the Vector Store
_ = vector_store.add_documents(documents=all_splits)

We add the split document chunks to the vector store for later retrieval.

Defining the Retrieval Tool
@tool(response_format="content_and_artifact")
def retrieve(query: str):
    """Retrieve information related to a query."""
    retrieved_docs = vector_store.similarity_search(query, k=2)
    serialized = "\n\n".join(
        f"Source: {doc.metadata}\nContent: {doc.page_content}"
        for doc in retrieved_docs
    )
    return serialized, retrieved_docs

We define a retrieval tool, retrieve, which runs a similarity search against the vector store for a given query and returns the retrieved document content. Because response_format="content_and_artifact", the tool returns both a string for the model (the content) and the raw Document objects (the artifact).
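
The tool can also be exercised on its own, outside the graph. In my understanding of the LangChain tool interface, invoking a content_and_artifact tool with plain arguments returns only the content string, while invoking it with a full tool-call payload yields a ToolMessage whose artifact carries the raw documents. A sketch:

# Plain arguments: returns the serialized content string.
content = retrieve.invoke({"query": "Task Decomposition"})
print(content[:200])

# Full tool-call payload: returns a ToolMessage with the artifact attached.
tool_msg = retrieve.invoke({
    "name": "retrieve",
    "args": {"query": "Task Decomposition"},
    "id": "call_demo_1",  # hypothetical call id
    "type": "tool_call",
})
print(type(tool_msg.artifact))  # list of Document objects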

Step Definition: Generate a Tool Call or Respond Directly
def query_or_respond(state: MessagesState):
    """Generate tool call for retrieval or respond."""
    llm_with_tools = llm.bind_tools([retrieve])
    response = llm_with_tools.invoke(state["messages"])
    return {"messages": [response]}

Based on the current message state, this function either generates a tool call for retrieval or replies directly. The decision is made by the LLM itself: an unrelated question gets a direct answer, while a knowledge-base question yields an AIMessage carrying a tool call.
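
The two branches are easy to observe by calling the tool-bound model directly. A small illustration with made-up inputs:

llm_with_tools = llm.bind_tools([retrieve])

# Off-topic greeting: the model answers directly, with no tool calls.
resp = llm_with_tools.invoke([{"role": "user", "content": "Hello"}])
print(resp.tool_calls)  # typically []

# Knowledge-base question: the model emits a retrieve tool call instead.
resp = llm_with_tools.invoke(
    [{"role": "user", "content": "What is Task Decomposition?"}]
)
print(resp.tool_calls)  # e.g. [{"name": "retrieve", "args": {"query": "..."}, ...}]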

Step Definition: Execute the Retrieval Tool
tools = ToolNode([retrieve])

This defines the step that executes the retrieval tool. ToolNode reads the tool calls from the latest AIMessage, runs the matching tools, and appends the results to the state as ToolMessages.
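
ToolNode is itself a runnable, so it can be tried in isolation. A sketch, assuming resp is an AIMessage carrying a retrieve tool call as in the previous snippet:

# Feed a state containing the tool-calling AIMessage into the node;
# it executes the call and returns the resulting ToolMessage(s).
result = tools.invoke({"messages": [resp]})
print(result["messages"][-1].type)           # "tool"
print(result["messages"][-1].content[:120])  # serialized retrieval results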

Step Definition: Generate the Answer
def generate(state: MessagesState):
    """Generate answer."""
    recent_tool_messages = []
    for message in reversed(state["messages"]):
        if message.type == "tool":
            recent_tool_messages.append(message)
        else:
            break
    tool_messages = recent_tool_messages[::-1]

    docs_content = "\n\n".join(doc.content for doc in tool_messages)
    system_message_content = (
        "You are an assistant for question-answering tasks. "
        "Use the following pieces of retrieved context to answer "
        "the question. If you don't know the answer, say that you "
        "don't know. Use three sentences maximum and keep the "
        "answer concise."
        "\n\n"
        f"{docs_content}"
    )

    conversation_messages = [
        message
        for message in state["messages"]
        if message.type in ("human", "system")
        or (message.type == "ai" and not message.tool_calls)
    ]
    prompt = [SystemMessage(system_message_content)] + conversation_messages

    response = llm.invoke(prompt)
    return {"messages": [response]}

This function produces the final answer. It first collects the most recent tool messages, folds their content into a system message, combines that with the existing conversation messages to build the prompt, and finally calls the LLM to generate the reply.

Building the State Graph
graph_builder = StateGraph(MessagesState)
graph_builder.add_node(query_or_respond)
graph_builder.add_node(tools)
graph_builder.add_node(generate)

graph_builder.set_entry_point("query_or_respond")
graph_builder.add_conditional_edges(
    "query_or_respond",
    tools_condition,
    {END: END, "tools": "tools"},
)
graph_builder.add_edge("tools", "generate")
graph_builder.add_edge("generate", END)

graph = graph_builder.compile()

We use the state graph builder to create a message-state graph, adding nodes and a conditional edge that determine how messages flow. tools_condition routes to the tools node when the latest AIMessage contains tool calls, and to END otherwise.
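
To double-check the wiring, the compiled graph can be rendered with LangGraph's built-in drawing helper. A minimal sketch:

# Print a Mermaid diagram of the node/edge structure. Expected shape:
# query_or_respond --(tools_condition)--> tools or END; tools --> generate --> END.
print(graph.get_graph().draw_mermaid())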

Interacting with the State Graph
input_message = "Hello"
for step in graph.stream({"messages": [{"role": "user", "content": input_message}]},stream_mode="values",
):step["messages"][-1].pretty_print()input_message = "What is Task Decomposition?"
for step in graph.stream({"messages": [{"role": "user", "content": input_message}]},stream_mode="values",
):step["messages"][-1].pretty_print()

We interact with the state graph using the given input messages, stream the processing step by step, and print the latest message at each step. "Hello" takes the direct-answer branch, while "What is Task Decomposition?" triggers retrieval first.
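
If only the final answer matters, invoke returns the final state in one call instead of streaming intermediate steps. A small sketch:

final_state = graph.invoke(
    {"messages": [{"role": "user", "content": "What is Task Decomposition?"}]}
)
print(final_state["messages"][-1].content)  # the generated answer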

Capturing the LLM Messages

Throughout the process above, we interact with the LLM through the LangChain API and see nothing of the requests sent underneath. In some scenarios it is still worth digging into those details to get a complete picture. Below is what is actually sent; the code above involves three LLM interactions.
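
The article does not say how these payloads were captured; one way to surface them (an assumption on my part) is LangChain's global debug switch, which logs serialized inputs and outputs for every LLM call:

from langchain.globals import set_debug

set_debug(True)  # log serialized prompts/responses for each LLM call
graph.invoke({"messages": [{"role": "user", "content": "Hello"}]})
set_debug(False)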

Interaction 1

[Figure: rag2-LLM1]

Request
{"messages": [[{"lc": 1,"type": "constructor","id": ["langchain","schema","messages","HumanMessage"],"kwargs": {"content": "Hello","type": "human","id": "da95e909-50bb-4204-8aad-4181dcccbffb"}}]]
}
Response
{"generations": [[{"text": "Hello! How can I assist you today?","generation_info": {"finish_reason": "stop","logprobs": null},"type": "ChatGeneration","message": {"lc": 1,"type": "constructor","id": ["langchain","schema","messages","AIMessage"],"kwargs": {"content": "Hello! How can I assist you today?","additional_kwargs": {"refusal": null},"response_metadata": {"token_usage": {"completion_tokens": 10,"prompt_tokens": 44,"total_tokens": 54,"completion_tokens_details": {"accepted_prediction_tokens": 0,"audio_tokens": 0,"reasoning_tokens": 0,"rejected_prediction_tokens": 0},"prompt_tokens_details": {"audio_tokens": 0,"cached_tokens": 0}},"model_name": "gpt-4o-mini-2024-07-18","system_fingerprint": "fp_3de1288069","finish_reason": "stop","logprobs": null},"type": "ai","id": "run-611efcc9-1fe5-47e4-83fc-f42623556d93-0","usage_metadata": {"input_tokens": 44,"output_tokens": 10,"total_tokens": 54,"input_token_details": {"audio": 0,"cache_read": 0},"output_token_details": {"audio": 0,"reasoning": 0}},"tool_calls": [],"invalid_tool_calls": []}}}]],"llm_output": {"token_usage": {"completion_tokens": 10,"prompt_tokens": 44,"total_tokens": 54,"completion_tokens_details": {"accepted_prediction_tokens": 0,"audio_tokens": 0,"reasoning_tokens": 0,"rejected_prediction_tokens": 0},"prompt_tokens_details": {"audio_tokens": 0,"cached_tokens": 0}},"model_name": "gpt-4o-mini-2024-07-18","system_fingerprint": "fp_3de1288069"},"run": null,"type": "LLMResult"
}

Interaction 2

[Figure: rag2-LLM2]

Request
{"messages": [[{"lc": 1,"type": "constructor","id": ["langchain","schema","messages","HumanMessage"],"kwargs": {"content": "What is Task Decomposition?","type": "human","id": "6a790b36-fafd-4ff3-b293-9bb3ac9f4157"}}]]
}
Response
{"generations": [[{"text": "","generation_info": {"finish_reason": "tool_calls","logprobs": null},"type": "ChatGeneration","message": {"lc": 1,"type": "constructor","id": ["langchain","schema","messages","AIMessage"],"kwargs": {"content": "","additional_kwargs": {"tool_calls": [{"id": "call_RClqnmrtp2sbwIbb2jHm0VeQ","function": {"arguments": "{\"query\":\"Task Decomposition\"}","name": "retrieve"},"type": "function"}],"refusal": null},"response_metadata": {"token_usage": {"completion_tokens": 15,"prompt_tokens": 49,"total_tokens": 64,"completion_tokens_details": {"accepted_prediction_tokens": 0,"audio_tokens": 0,"reasoning_tokens": 0,"rejected_prediction_tokens": 0},"prompt_tokens_details": {"audio_tokens": 0,"cached_tokens": 0}},"model_name": "gpt-4o-mini-2024-07-18","system_fingerprint": "fp_0705bf87c0","finish_reason": "tool_calls","logprobs": null},"type": "ai","id": "run-056b1c5a-cd5c-40cf-940c-bbf98512615d-0","tool_calls": [{"name": "retrieve","args": {"query": "Task Decomposition"},"id": "call_RClqnmrtp2sbwIbb2jHm0VeQ","type": "tool_call"}],"usage_metadata": {"input_tokens": 49,"output_tokens": 15,"total_tokens": 64,"input_token_details": {"audio": 0,"cache_read": 0},"output_token_details": {"audio": 0,"reasoning": 0}},"invalid_tool_calls": []}}}]],"llm_output": {"token_usage": {"completion_tokens": 15,"prompt_tokens": 49,"total_tokens": 64,"completion_tokens_details": {"accepted_prediction_tokens": 0,"audio_tokens": 0,"reasoning_tokens": 0,"rejected_prediction_tokens": 0},"prompt_tokens_details": {"audio_tokens": 0,"cached_tokens": 0}},"model_name": "gpt-4o-mini-2024-07-18","system_fingerprint": "fp_0705bf87c0"},"run": null,"type": "LLMResult"
}

Interaction 3

[Figure: rag2-LLM3]

Request
{"messages": [[{"lc": 1,"type": "constructor","id": ["langchain","schema","messages","SystemMessage"],"kwargs": {"content": "You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, say that you don't know. Use three sentences maximum and keep the answer concise.\n\nSource: {'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}\nContent: Fig. 1. Overview of a LLM-powered autonomous agent system.\nComponent One: Planning#\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\nTask Decomposition#\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.\n\nSource: {'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}\nContent: Tree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.","type": "system"}},{"lc": 1,"type": "constructor","id": ["langchain","schema","messages","HumanMessage"],"kwargs": {"content": "What is Task Decomposition?","type": "human","id": "6a790b36-fafd-4ff3-b293-9bb3ac9f4157"}}]]
}
Response
{"generations": [[{"text": "Task Decomposition is the process of breaking down a complicated task into smaller, more manageable steps. It often involves techniques like Chain of Thought (CoT), where the model is prompted to think step-by-step, enhancing performance on complex tasks. This approach helps to clarify the model's thinking process and makes it easier to tackle difficult problems.","generation_info": {"finish_reason": "stop","logprobs": null},"type": "ChatGeneration","message": {"lc": 1,"type": "constructor","id": ["langchain","schema","messages","AIMessage"],"kwargs": {"content": "Task Decomposition is the process of breaking down a complicated task into smaller, more manageable steps. It often involves techniques like Chain of Thought (CoT), where the model is prompted to think step-by-step, enhancing performance on complex tasks. This approach helps to clarify the model's thinking process and makes it easier to tackle difficult problems.","additional_kwargs": {"refusal": null},"response_metadata": {"token_usage": {"completion_tokens": 67,"prompt_tokens": 384,"total_tokens": 451,"completion_tokens_details": {"accepted_prediction_tokens": 0,"audio_tokens": 0,"reasoning_tokens": 0,"rejected_prediction_tokens": 0},"prompt_tokens_details": {"audio_tokens": 0,"cached_tokens": 0}},"model_name": "gpt-4o-mini-2024-07-18","system_fingerprint": "fp_0705bf87c0","finish_reason": "stop","logprobs": null},"type": "ai","id": "run-b3565b23-18d5-439d-a87b-f836ee281d91-0","usage_metadata": {"input_tokens": 384,"output_tokens": 67,"total_tokens": 451,"input_token_details": {"audio": 0,"cache_read": 0},"output_token_details": {"audio": 0,"reasoning": 0}},"tool_calls": [],"invalid_tool_calls": []}}}]],"llm_output": {"token_usage": {"completion_tokens": 67,"prompt_tokens": 384,"total_tokens": 451,"completion_tokens_details": {"accepted_prediction_tokens": 0,"audio_tokens": 0,"reasoning_tokens": 0,"rejected_prediction_tokens": 0},"prompt_tokens_details": {"audio_tokens": 0,"cached_tokens": 0}},"model_name": "gpt-4o-mini-2024-07-18","system_fingerprint": "fp_0705bf87c0"},"run": null,"type": "LLMResult"
}

Summary

This article builds an intelligent question-answering system from an OpenAI language model and a custom retrieval tool. First, document content is loaded from the web, split into chunks, and stored in a vector database. Next, a retrieval tool is defined that finds relevant documents for a query. A state graph manages the conversation flow: depending on the condition, the system decides whether to call the retrieval tool or to generate a reply directly. Finally, interacting with the state graph yields intelligent answers. Such a system greatly strengthens automated question answering; by combining an embedding model with a language model, it can handle more complex and varied user queries.
