(Notes) InternLM (书生·浦语) LLM Camp, Season 3 (Cohort 11) — Basic Island, Level 5: XTuner Fine-Tuning a Personal Assistant

Student handbook: https://aicarrier.feishu.cn/wiki/ZcgkwqteZi9s4ZkYr0Gcayg1n1g?open_in_browser=true
Course video: https://www.bilibili.com/video/BV1tz421B72y/
Course documentation:
https://github.com/InternLM/Tutorial/tree/camp3/docs/L1/XTuner
Assignment: https://github.com/InternLM/Tutorial/blob/camp3/docs/L1/XTuner/task.md
Dev machine platform: https://studio.intern-ai.org.cn/
Dev machine platform introduction: https://aicarrier.feishu.cn/wiki/GQ1Qwxb3UiQuewk8BVLcuyiEnHe


Preparation

# 1. Create the virtual environment
# Create a dev machine: Cuda12.2-conda
# Clone the tutorial repo
mkdir -p /root/InternLM/Tutorial
git clone -b camp3 https://github.com/InternLM/Tutorial /root/InternLM/Tutorial
# Create the virtual environment
conda create -n xtuner0121 python=3.10 -y
# Activate it (note: all subsequent steps run inside this environment)
conda activate xtuner0121
# Install the required libraries
conda install pytorch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 pytorch-cuda=12.1 -c pytorch -c nvidia -y
# Install the other dependencies
pip install transformers==4.39.3
pip install streamlit==1.36.0
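
A quick sanity check before moving on (an optional step of my own, not part of the course material): confirm the pinned versions installed and that the GPU is visible from PyTorch.

# check_env.py — optional environment check
import torch
import transformers

print(torch.__version__)          # expect 2.1.2
print(transformers.__version__)   # expect 4.39.3
print(torch.cuda.is_available())  # should print True on the dev machine
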
# 2. Install XTuner
# Create a directory for the source code
mkdir -p /root/InternLM/code
cd /root/InternLM/code
git clone -b v0.1.21 https://github.com/InternLM/XTuner /root/InternLM/code/XTuner
# Enter the source directory
cd /root/InternLM/code/XTuner
conda activate xtuner0121
# Run the install
pip install -e '.[deepspeed]'
# Faster install via a mirror:
# pip install -e '.[deepspeed]' -i https://mirrors.aliyun.com/pypi/simple/
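
Optionally, confirm the editable install is importable from Python. A small check of my own; it assumes the xtuner package exposes __version__:

import xtuner
print(xtuner.__version__)  # expect 0.1.21, matching the cloned tag
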
# 3. Prepare the model
# Create a directory for all fine-tuning assets; all later steps run from this path
mkdir -p /root/InternLM/XTuner
cd /root/InternLM/XTuner
mkdir -p Shanghai_AI_Laboratory
ln -s /root/share/new_models/Shanghai_AI_Laboratory/internlm2-chat-1_8b Shanghai_AI_Laboratory/internlm2-chat-1_8b
# Use tree to inspect the directory structure
apt-get install -y tree
tree -l
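
Before wiring up Streamlit, you can smoke-test the symlinked base model. A minimal sketch of my own, using only standard transformers APIs; the prompt string follows the internlm2 chat template used throughout this tutorial, and 92542 is the <|im_end|> token id used by the demo script below.

# quick_check.py — optional smoke test of the base model (not part of the course steps)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = '/root/InternLM/XTuner/Shanghai_AI_Laboratory/internlm2-chat-1_8b'
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    path, trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()

prompt = '<|im_start|>user\n请介绍一下你自己<|im_end|>\n<|im_start|>assistant\n'
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
out = model.generate(**inputs, max_new_tokens=128,
                     eos_token_id=[tokenizer.eos_token_id, 92542])
print(tokenizer.decode(out[0][inputs['input_ids'].shape[-1]:],
                       skip_special_tokens=True))
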
# 4. xtuner_streamlit_demo.py
# tools/xtuner_streamlit_demo.py
import copy
import warnings
from dataclasses import asdict, dataclass
from typing import Callable, List, Optional

import streamlit as st
import torch
from torch import nn
from transformers.generation.utils import (LogitsProcessorList,
                                           StoppingCriteriaList)
from transformers.utils import logging

from transformers import AutoTokenizer, AutoModelForCausalLM  # isort: skip

logger = logging.get_logger(__name__)

model_name_or_path = "/root/InternLM/XTuner/Shanghai_AI_Laboratory/internlm2-chat-1_8b"


@dataclass
class GenerationConfig:
    # this config is used for chat to provide more diversity
    max_length: int = 2048
    top_p: float = 0.75
    temperature: float = 0.1
    do_sample: bool = True
    repetition_penalty: float = 1.000


@torch.inference_mode()
def generate_interactive(
    model,
    tokenizer,
    prompt,
    generation_config: Optional[GenerationConfig] = None,
    logits_processor: Optional[LogitsProcessorList] = None,
    stopping_criteria: Optional[StoppingCriteriaList] = None,
    prefix_allowed_tokens_fn: Optional[Callable[[int, torch.Tensor],
                                                List[int]]] = None,
    additional_eos_token_id: Optional[int] = None,
    **kwargs,
):
    inputs = tokenizer([prompt], padding=True, return_tensors='pt')
    input_length = len(inputs['input_ids'][0])
    for k, v in inputs.items():
        inputs[k] = v.cuda()
    input_ids = inputs['input_ids']
    _, input_ids_seq_length = input_ids.shape[0], input_ids.shape[-1]
    if generation_config is None:
        generation_config = model.generation_config
    generation_config = copy.deepcopy(generation_config)
    model_kwargs = generation_config.update(**kwargs)
    bos_token_id, eos_token_id = (  # noqa: F841  # pylint: disable=W0612
        generation_config.bos_token_id,
        generation_config.eos_token_id,
    )
    if isinstance(eos_token_id, int):
        eos_token_id = [eos_token_id]
    if additional_eos_token_id is not None:
        eos_token_id.append(additional_eos_token_id)
    has_default_max_length = kwargs.get(
        'max_length') is None and generation_config.max_length is not None
    if has_default_max_length and generation_config.max_new_tokens is None:
        warnings.warn(
            f"Using 'max_length''s default ({repr(generation_config.max_length)}) "
            'to control the generation length. '
            'This behaviour is deprecated and will be removed from the '
            'config in v5 of Transformers -- we'
            ' recommend using `max_new_tokens` to control the maximum '
            'length of the generation.',
            UserWarning,
        )
    elif generation_config.max_new_tokens is not None:
        generation_config.max_length = generation_config.max_new_tokens + \
            input_ids_seq_length
        if not has_default_max_length:
            logger.warn(  # pylint: disable=W4902
                f"Both 'max_new_tokens' (={generation_config.max_new_tokens}) "
                f"and 'max_length'(={generation_config.max_length}) seem to "
                "have been set. 'max_new_tokens' will take precedence. "
                'Please refer to the documentation for more information. '
                '(https://huggingface.co/docs/transformers/main/'
                'en/main_classes/text_generation)',
                UserWarning,
            )

    if input_ids_seq_length >= generation_config.max_length:
        input_ids_string = 'input_ids'
        logger.warning(
            f"Input length of {input_ids_string} is {input_ids_seq_length}, "
            f"but 'max_length' is set to {generation_config.max_length}. "
            'This can lead to unexpected behavior. You should consider'
            " increasing 'max_new_tokens'.")

    # 2. Set generation parameters if not already defined
    logits_processor = logits_processor if logits_processor is not None \
        else LogitsProcessorList()
    stopping_criteria = stopping_criteria if stopping_criteria is not None \
        else StoppingCriteriaList()

    logits_processor = model._get_logits_processor(
        generation_config=generation_config,
        input_ids_seq_length=input_ids_seq_length,
        encoder_input_ids=input_ids,
        prefix_allowed_tokens_fn=prefix_allowed_tokens_fn,
        logits_processor=logits_processor,
    )

    stopping_criteria = model._get_stopping_criteria(
        generation_config=generation_config,
        stopping_criteria=stopping_criteria)
    logits_warper = model._get_logits_warper(generation_config)

    unfinished_sequences = input_ids.new(input_ids.shape[0]).fill_(1)
    scores = None
    while True:
        model_inputs = model.prepare_inputs_for_generation(
            input_ids, **model_kwargs)
        # forward pass to get next token
        outputs = model(
            **model_inputs,
            return_dict=True,
            output_attentions=False,
            output_hidden_states=False,
        )

        next_token_logits = outputs.logits[:, -1, :]

        # pre-process distribution
        next_token_scores = logits_processor(input_ids, next_token_logits)
        next_token_scores = logits_warper(input_ids, next_token_scores)

        # sample
        probs = nn.functional.softmax(next_token_scores, dim=-1)
        if generation_config.do_sample:
            next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
        else:
            next_tokens = torch.argmax(probs, dim=-1)

        # update generated ids, model inputs, and length for next step
        input_ids = torch.cat([input_ids, next_tokens[:, None]], dim=-1)
        model_kwargs = model._update_model_kwargs_for_generation(
            outputs, model_kwargs, is_encoder_decoder=False)
        unfinished_sequences = unfinished_sequences.mul(
            (min(next_tokens != i for i in eos_token_id)).long())

        output_token_ids = input_ids[0].cpu().tolist()
        output_token_ids = output_token_ids[input_length:]
        for each_eos_token_id in eos_token_id:
            if output_token_ids[-1] == each_eos_token_id:
                output_token_ids = output_token_ids[:-1]
        response = tokenizer.decode(output_token_ids)

        yield response
        # stop when each sentence is finished
        # or if we exceed the maximum length
        if unfinished_sequences.max() == 0 or stopping_criteria(
                input_ids, scores):
            break


def on_btn_click():
    del st.session_state.messages


@st.cache_resource
def load_model():
    model = (AutoModelForCausalLM.from_pretrained(
        model_name_or_path, trust_remote_code=True).to(torch.bfloat16).cuda())
    tokenizer = AutoTokenizer.from_pretrained(model_name_or_path,
                                              trust_remote_code=True)
    return model, tokenizer


def prepare_generation_config():
    with st.sidebar:
        max_length = st.slider('Max Length',
                               min_value=8,
                               max_value=32768,
                               value=2048)
        top_p = st.slider('Top P', 0.0, 1.0, 0.75, step=0.01)
        temperature = st.slider('Temperature', 0.0, 1.0, 0.1, step=0.01)
        st.button('Clear Chat History', on_click=on_btn_click)

    generation_config = GenerationConfig(max_length=max_length,
                                         top_p=top_p,
                                         temperature=temperature)

    return generation_config


user_prompt = '<|im_start|>user\n{user}<|im_end|>\n'
robot_prompt = '<|im_start|>assistant\n{robot}<|im_end|>\n'
cur_query_prompt = '<|im_start|>user\n{user}<|im_end|>\n\
<|im_start|>assistant\n'


def combine_history(prompt):
    messages = st.session_state.messages
    meta_instruction = ('')
    total_prompt = f"<s><|im_start|>system\n{meta_instruction}<|im_end|>\n"
    for message in messages:
        cur_content = message['content']
        if message['role'] == 'user':
            cur_prompt = user_prompt.format(user=cur_content)
        elif message['role'] == 'robot':
            cur_prompt = robot_prompt.format(robot=cur_content)
        else:
            raise RuntimeError
        total_prompt += cur_prompt
    total_prompt = total_prompt + cur_query_prompt.format(user=prompt)
    return total_prompt


def main():
    # torch.cuda.empty_cache()
    print('load model begin.')
    model, tokenizer = load_model()
    print('load model end.')

    st.title('InternLM2-Chat-1.8B')

    generation_config = prepare_generation_config()

    # Initialize chat history
    if 'messages' not in st.session_state:
        st.session_state.messages = []

    # Display chat messages from history on app rerun
    for message in st.session_state.messages:
        with st.chat_message(message['role'], avatar=message.get('avatar')):
            st.markdown(message['content'])

    # Accept user input
    if prompt := st.chat_input('What is up?'):
        # Display user message in chat message container
        with st.chat_message('user'):
            st.markdown(prompt)
        real_prompt = combine_history(prompt)
        # Add user message to chat history
        st.session_state.messages.append({
            'role': 'user',
            'content': prompt,
        })

        with st.chat_message('robot'):
            message_placeholder = st.empty()
            for cur_response in generate_interactive(
                    model=model,
                    tokenizer=tokenizer,
                    prompt=real_prompt,
                    additional_eos_token_id=92542,
                    **asdict(generation_config),
            ):
                # Display robot response in chat message container
                message_placeholder.markdown(cur_response + '▌')
            message_placeholder.markdown(cur_response)
        # Add robot response to chat history
        st.session_state.messages.append({
            'role': 'robot',
            'content': cur_response,  # pylint: disable=undefined-loop-variable
        })
        torch.cuda.empty_cache()


if __name__ == '__main__':
    main()

Chatting with the model before fine-tuning

conda activate xtuner0121
streamlit run /root/InternLM/Tutorial/tools/xtuner_streamlit_demo.py
# PowerShell port forwarding (run on your local machine; the SSH port — 34911 here — is specific to your dev machine)
ssh -CNg -L 8501:127.0.0.1:8501 root@ssh.intern-ai.org.cn -p 34911
# Browser: http://127.0.0.1:8501

Instruction-following fine-tuning

Dataset preparation

cd /root/InternLM/XTuner
mkdir -p datas
touch datas/assistant.json
# Create the generation script yourself, or copy the ready-made one from the tutorial repo
cd /root/InternLM/XTuner
touch xtuner_generate_assistant.py
cd /root/InternLM/XTuner
cp /root/InternLM/Tutorial/tools/xtuner_generate_assistant.py ./
# xtuner_generate_assistant.py
import json

# Set the user's name
name = '伍鲜同志'
# Number of times to duplicate each seed record
n = 8000

# Initialize the data with two seed conversations
data = [
    {"conversation": [{"input": "请介绍一下你自己", "output": "我是{}的小助手,内在是上海AI实验室书生·浦语的1.8B大模型哦".format(name)}]},
    {"conversation": [{"input": "你在实战营做什么", "output": "我在这里帮助{}完成XTuner微调个人小助手的任务".format(name)}]}
]

# Repeat the seed conversations n times
for i in range(n):
    data.append(data[0])
    data.append(data[1])

# Write the data to 'datas/assistant.json'
with open('datas/assistant.json', 'w', encoding='utf-8') as f:
    # ensure_ascii=False keeps the Chinese text readable in the file
    # indent=4 pretty-prints the output
    json.dump(data, f, ensure_ascii=False, indent=4)

# Set name to your own (line 4 of the script):
- name = '伍鲜同志'
+ name = "朱娅梅"
# Generate the data file
cd /root/InternLM/XTuner
conda activate xtuner0121
python xtuner_generate_assistant.py
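
To verify the generated file (an optional check of my own): the list should hold 16002 records — the 2 seeds plus 8000 copies of each.

import json

with open('datas/assistant.json', encoding='utf-8') as f:
    data = json.load(f)
print(len(data))  # expect 16002
print(data[0])    # {'conversation': [{'input': '请介绍一下你自己', 'output': ...}]}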

Modifying the configuration file

# List candidate config files
conda activate xtuner0121
xtuner list-cfg -p internlm2
cd /root/InternLM/XTuner
conda activate xtuner0121
xtuner copy-cfg internlm2_chat_1_8b_qlora_alpaca_e3 .

#######################################################################
#                          PART 1  Settings                           #
#######################################################################
- pretrained_model_name_or_path = 'internlm/internlm2-chat-1_8b'
+ pretrained_model_name_or_path = '/root/InternLM/XTuner/Shanghai_AI_Laboratory/internlm2-chat-1_8b'

- alpaca_en_path = 'tatsu-lab/alpaca'
+ alpaca_en_path = 'datas/assistant.json'

evaluation_inputs = [
-    '请给我介绍五个上海的景点', 'Please tell me five scenic spots in Shanghai'
+    '请介绍一下你自己', 'Please introduce yourself'
]

#######################################################################
#                      PART 3  Dataset & Dataloader                   #
#######################################################################
alpaca_en = dict(
    type=process_hf_dataset,
-   dataset=dict(type=load_dataset, path=alpaca_en_path),
+   dataset=dict(type=load_dataset, path='json', data_files=dict(train=alpaca_en_path)),
    tokenizer=tokenizer,
    max_length=max_length,
-   dataset_map_fn=alpaca_map_fn,
+   dataset_map_fn=None,
    template_map_fn=dict(
        type=template_map_fn_factory, template=prompt_template),
    remove_unused_columns=True,
    shuffle_before_pack=True,
    pack_to_max_length=pack_to_max_length,
    use_varlen_attn=use_varlen_attn)
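
The path='json' change routes loading through the Hugging Face datasets JSON builder instead of the Hub, and dataset_map_fn=None skips the Alpaca field remapping because our records are already in XTuner's conversation format. You can preview exactly what the trainer will load (an optional check of my own, using only the documented datasets API):

from datasets import load_dataset

ds = load_dataset('json', data_files=dict(train='datas/assistant.json'))
print(ds['train'][0])  # one {'conversation': [...]} record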

The complete configuration file after the changes:

# configs/internlm2_chat_1_8b_qlora_alpaca_e3_copy.py
# Copyright (c) OpenMMLab. All rights reserved.
import torch
from datasets import load_dataset
from mmengine.dataset import DefaultSampler
from mmengine.hooks import (CheckpointHook, DistSamplerSeedHook, IterTimerHook,
                            LoggerHook, ParamSchedulerHook)
from mmengine.optim import AmpOptimWrapper, CosineAnnealingLR, LinearLR
from peft import LoraConfig
from torch.optim import AdamW
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig)

from xtuner.dataset import process_hf_dataset
from xtuner.dataset.collate_fns import default_collate_fn
from xtuner.dataset.map_fns import alpaca_map_fn, template_map_fn_factory
from xtuner.engine.hooks import (DatasetInfoHook, EvaluateChatHook,
                                 VarlenAttnArgsToMessageHubHook)
from xtuner.engine.runner import TrainLoop
from xtuner.model import SupervisedFinetune
from xtuner.parallel.sequence import SequenceParallelSampler
from xtuner.utils import PROMPT_TEMPLATE, SYSTEM_TEMPLATE

#######################################################################
#                          PART 1  Settings                           #
#######################################################################
# Model
pretrained_model_name_or_path = '/root/InternLM/XTuner/Shanghai_AI_Laboratory/internlm2-chat-1_8b'
use_varlen_attn = False

# Data
alpaca_en_path = 'datas/assistant.json'
prompt_template = PROMPT_TEMPLATE.internlm2_chat
max_length = 2048
pack_to_max_length = True

# parallel
sequence_parallel_size = 1

# Scheduler & Optimizer
batch_size = 1  # per_device
accumulative_counts = 16
accumulative_counts *= sequence_parallel_size
dataloader_num_workers = 0
max_epochs = 3
optim_type = AdamW
lr = 2e-4
betas = (0.9, 0.999)
weight_decay = 0
max_norm = 1  # grad clip
warmup_ratio = 0.03

# Save
save_steps = 500
save_total_limit = 2  # Maximum checkpoints to keep (-1 means unlimited)

# Evaluate the generation performance during the training
evaluation_freq = 500
SYSTEM = SYSTEM_TEMPLATE.alpaca
evaluation_inputs = [
    '请介绍一下你自己', 'Please introduce yourself'
]

#######################################################################
#                      PART 2  Model & Tokenizer                      #
#######################################################################
tokenizer = dict(
    type=AutoTokenizer.from_pretrained,
    pretrained_model_name_or_path=pretrained_model_name_or_path,
    trust_remote_code=True,
    padding_side='right')

model = dict(
    type=SupervisedFinetune,
    use_varlen_attn=use_varlen_attn,
    llm=dict(
        type=AutoModelForCausalLM.from_pretrained,
        pretrained_model_name_or_path=pretrained_model_name_or_path,
        trust_remote_code=True,
        torch_dtype=torch.float16,
        quantization_config=dict(
            type=BitsAndBytesConfig,
            load_in_4bit=True,
            load_in_8bit=False,
            llm_int8_threshold=6.0,
            llm_int8_has_fp16_weight=False,
            bnb_4bit_compute_dtype=torch.float16,
            bnb_4bit_use_double_quant=True,
            bnb_4bit_quant_type='nf4')),
    lora=dict(
        type=LoraConfig,
        r=64,
        lora_alpha=16,
        lora_dropout=0.1,
        bias='none',
        task_type='CAUSAL_LM'))

#######################################################################
#                      PART 3  Dataset & Dataloader                   #
#######################################################################
alpaca_en = dict(
    type=process_hf_dataset,
    dataset=dict(type=load_dataset, path='json', data_files=dict(train=alpaca_en_path)),
    tokenizer=tokenizer,
    max_length=max_length,
    dataset_map_fn=None,
    template_map_fn=dict(
        type=template_map_fn_factory, template=prompt_template),
    remove_unused_columns=True,
    shuffle_before_pack=True,
    pack_to_max_length=pack_to_max_length,
    use_varlen_attn=use_varlen_attn)

sampler = SequenceParallelSampler \
    if sequence_parallel_size > 1 else DefaultSampler

train_dataloader = dict(
    batch_size=batch_size,
    num_workers=dataloader_num_workers,
    dataset=alpaca_en,
    sampler=dict(type=sampler, shuffle=True),
    collate_fn=dict(type=default_collate_fn, use_varlen_attn=use_varlen_attn))

#######################################################################
#                    PART 4  Scheduler & Optimizer                    #
#######################################################################
# optimizer
optim_wrapper = dict(
    type=AmpOptimWrapper,
    optimizer=dict(
        type=optim_type, lr=lr, betas=betas, weight_decay=weight_decay),
    clip_grad=dict(max_norm=max_norm, error_if_nonfinite=False),
    accumulative_counts=accumulative_counts,
    loss_scale='dynamic',
    dtype='float16')

# learning policy
# More information: https://github.com/open-mmlab/mmengine/blob/main/docs/en/tutorials/param_scheduler.md  # noqa: E501
param_scheduler = [
    dict(
        type=LinearLR,
        start_factor=1e-5,
        by_epoch=True,
        begin=0,
        end=warmup_ratio * max_epochs,
        convert_to_iter_based=True),
    dict(
        type=CosineAnnealingLR,
        eta_min=0.0,
        by_epoch=True,
        begin=warmup_ratio * max_epochs,
        end=max_epochs,
        convert_to_iter_based=True)
]

# train, val, test setting
train_cfg = dict(type=TrainLoop, max_epochs=max_epochs)

#######################################################################
#                           PART 5  Runtime                           #
#######################################################################
# Log the dialogue periodically during the training process, optional
custom_hooks = [
    dict(type=DatasetInfoHook, tokenizer=tokenizer),
    dict(
        type=EvaluateChatHook,
        tokenizer=tokenizer,
        every_n_iters=evaluation_freq,
        evaluation_inputs=evaluation_inputs,
        system=SYSTEM,
        prompt_template=prompt_template)
]

if use_varlen_attn:
    custom_hooks += [dict(type=VarlenAttnArgsToMessageHubHook)]

# configure default hooks
default_hooks = dict(
    # record the time of every iteration.
    timer=dict(type=IterTimerHook),
    # print log every 10 iterations.
    logger=dict(type=LoggerHook, log_metric_by_epoch=False, interval=10),
    # enable the parameter scheduler.
    param_scheduler=dict(type=ParamSchedulerHook),
    # save checkpoint per `save_steps`.
    checkpoint=dict(
        type=CheckpointHook,
        by_epoch=False,
        interval=save_steps,
        max_keep_ckpts=save_total_limit),
    # set sampler seed in distributed environment.
    sampler_seed=dict(type=DistSamplerSeedHook),
)

# configure environment
env_cfg = dict(
    # whether to enable cudnn benchmark
    cudnn_benchmark=False,
    # set multi process parameters
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
    # set distributed parameters
    dist_cfg=dict(backend='nccl'),
)

# set visualizer
visualizer = None

# set log level
log_level = 'INFO'

# load from which checkpoint
load_from = None

# whether to resume training from the loaded checkpoint
resume = False

# Defaults to use random seed and disable `deterministic`
randomness = dict(seed=None, deterministic=False)

# set log processor
log_processor = dict(by_epoch=False)
# Or copy the ready-made config from the tutorial repo (equivalent to the edits above)
cd /root/InternLM/XTuner
cp /root/InternLM/Tutorial/configs/internlm2_chat_1_8b_qlora_alpaca_e3_copy.py ./

Start fine-tuning

cd /root/InternLM/XTuner
conda activate xtuner0121
xtuner train ./internlm2_chat_1_8b_qlora_alpaca_e3_copy.py

Model conversion

cd /root/InternLM/XTuner
conda activate xtuner0121
# Grab the most recently saved .pth checkpoint
pth_file=`ls -t ./work_dirs/internlm2_chat_1_8b_qlora_alpaca_e3_copy/*.pth | head -n 1`
export MKL_SERVICE_FORCE_INTEL=1
export MKL_THREADING_LAYER=GNU
xtuner convert pth_to_hf ./internlm2_chat_1_8b_qlora_alpaca_e3_copy.py ${pth_file} ./hf
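
Before merging, you can sanity-check the converted adapter. A sketch of my own, assuming ./hf is a standard PEFT LoRA adapter directory (which is what pth_to_hf produces for a LoRA run):

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    '/root/InternLM/XTuner/Shanghai_AI_Laboratory/internlm2-chat-1_8b',
    trust_remote_code=True, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, './hf')
# count the LoRA parameters that were loaded; should be well above zero
print(sum(p.numel() for n, p in model.named_parameters() if 'lora' in n))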

Model merging

cd /root/InternLM/XTuner
conda activate xtuner0121
export MKL_SERVICE_FORCE_INTEL=1
export MKL_THREADING_LAYER=GNU
xtuner convert merge /root/InternLM/XTuner/Shanghai_AI_Laboratory/internlm2-chat-1_8b ./hf ./merged --max-shard-size 2GB
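
At this point ./merged is a self-contained Hugging Face model. As a quick check (my own addition, mirroring the earlier base-model smoke test), load it and ask the training question; the reply should now use the name set in assistant.json:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = '/root/InternLM/XTuner/merged'
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    path, trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()

prompt = '<|im_start|>user\n请介绍一下你自己<|im_end|>\n<|im_start|>assistant\n'
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
out = model.generate(**inputs, max_new_tokens=128,
                     eos_token_id=[tokenizer.eos_token_id, 92542])
print(tokenizer.decode(out[0][inputs['input_ids'].shape[-1]:],
                       skip_special_tokens=True))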

Modify xtuner_streamlit_demo.py and launch it

# Edit line 18 of the script to point at the merged model
- model_name_or_path = "/root/InternLM/XTuner/Shanghai_AI_Laboratory/internlm2-chat-1_8b"
+ model_name_or_path = "/root/InternLM/XTuner/merged"
conda activate xtuner0121
streamlit run /root/InternLM/Tutorial/tools/xtuner_streamlit_demo.py
# PowerShell port forwarding (local machine): ssh -CNg -L 8501:127.0.0.1:8501 root@ssh.intern-ai.org.cn -p 43551
# Browser: http://127.0.0.1:8501
