InternLM Camp · Basic Track: Getting to Know XTuner by Fine-tuning a Personal Assistant

  • Environment Setup
  • Model Preview
  • Fine-tuning
    • Data Preparation
    • Fine-tuning Configuration
    • Training
    • Weight Format Conversion
    • Model Merging
    • Web Chat

Environment Setup

# Create a virtual environment
conda create -n xtuner0812 python=3.10 -y
# Activate it (note: every subsequent step must run inside this environment)
conda activate xtuner0812
# Install PyTorch and the CUDA runtime libraries
conda install pytorch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 pytorch-cuda=12.1 -c pytorch -c nvidia -y
# Install other dependencies
pip install transformers==4.39.3
pip install streamlit==1.36.0
# Install XTuner from source
mkdir -p /root/demo
cd /root/demo
git clone -b v0.1.21 https://github.com/InternLM/XTuner /root/demo/XTuner
cd /root/demo/XTuner
pip install -e '.[deepspeed]' -i https://mirrors.aliyun.com/pypi/simple/
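
As an optional sanity check that the environment is usable, the following minimal sketch only assumes the packages installed above:

import importlib.metadata

import torch
import transformers

print('torch:', torch.__version__)                      # expect 2.1.2
print('transformers:', transformers.__version__)        # expect 4.39.3
print('xtuner:', importlib.metadata.version('xtuner'))  # expect 0.1.21
print('CUDA available:', torch.cuda.is_available())     # should be True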

Model Preview

Before fine-tuning, preview how the base model answers, using the tutorial's Streamlit demo:

python -m streamlit run /root/Tutorial/tools/xtuner_streamlit_demo.py

Fine-tuning

Data Preparation

Use the following script to generate the fine-tuning data:

import json
import os

# The user's name that the assistant should learn
name = '靓仔'
# How many times to repeat the seed conversations
n = 1875

# Seed conversations
data = [
    {"conversation": [{"input": "请介绍一下你自己", "output": "我是{}的小助手,内在是上海AI实验室书生·浦语的1.8B大模型哦".format(name)}]},
    {"conversation": [{"input": "你是谁", "output": "我是{}的小助手,内在是上海AI实验室书生·浦语的1.8B大模型哦".format(name)}]},
    {"conversation": [{"input": "你在实战营做什么", "output": "我在这里帮助{}完成XTuner微调个人小助手的任务".format(name)}]},
    {"conversation": [{"input": "你可以做什么", "output": "我在这里帮助{}完成XTuner微调个人小助手的任务".format(name)}]}
]

# Repeat the first two conversations n times so they dominate the dataset
for i in range(n):
    data.append(data[0])
    data.append(data[1])

# Make sure the output directory exists, then write the dataset
os.makedirs('datas', exist_ok=True)
with open('datas/assistant.json', 'w', encoding='utf-8') as f:
    # ensure_ascii=False keeps the Chinese characters readable;
    # indent=4 pretty-prints the file
    json.dump(data, f, ensure_ascii=False, indent=4)
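
A quick check of the generated file; this minimal sketch assumes the script above has already been run from the same directory:

import json

with open('datas/assistant.json', encoding='utf-8') as f:
    data = json.load(f)

# 4 seed conversations + 2 * 1875 repetitions = 3754 entries
print(len(data))
print(data[0])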

Fine-tuning Configuration

# List all built-in config files
xtuner list-cfg
# List only the configs matching "internlm2"
xtuner list-cfg -p internlm2
# Copy the chosen config into the working directory
xtuner copy-cfg internlm2_chat_1_8b_qlora_alpaca_e3 /root/demo/xtuner_demo/

The modified configuration file is shown below; the comments marked "# modified" are the only changes from the copied template:

# Copyright (c) OpenMMLab. All rights reserved.
import torch
from datasets import load_dataset
from mmengine.dataset import DefaultSampler
from mmengine.hooks import (CheckpointHook, DistSamplerSeedHook, IterTimerHook,
                            LoggerHook, ParamSchedulerHook)
from mmengine.optim import AmpOptimWrapper, CosineAnnealingLR, LinearLR
from peft import LoraConfig
from torch.optim import AdamW
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig)

from xtuner.dataset import process_hf_dataset
from xtuner.dataset.collate_fns import default_collate_fn
from xtuner.dataset.map_fns import alpaca_map_fn, template_map_fn_factory
from xtuner.engine.hooks import (DatasetInfoHook, EvaluateChatHook,
                                 VarlenAttnArgsToMessageHubHook)
from xtuner.engine.runner import TrainLoop
from xtuner.model import SupervisedFinetune
from xtuner.parallel.sequence import SequenceParallelSampler
from xtuner.utils import PROMPT_TEMPLATE, SYSTEM_TEMPLATE

#######################################################################
#                          PART 1  Settings                           #
#######################################################################
# Model
pretrained_model_name_or_path = '/root/share/new_models/Shanghai_AI_Laboratory/internlm2-chat-1_8b'  # modified: local model path
use_varlen_attn = False

# Data
alpaca_en_path = '/root/demo/xtuner_demo/datas/assistant.json'  # modified: local data path
prompt_template = PROMPT_TEMPLATE.internlm2_chat
max_length = 2048
pack_to_max_length = True

# parallel
sequence_parallel_size = 1

# Scheduler & Optimizer
batch_size = 1  # per_device
accumulative_counts = 16
accumulative_counts *= sequence_parallel_size
dataloader_num_workers = 0
max_epochs = 3
optim_type = AdamW
lr = 2e-4
betas = (0.9, 0.999)
weight_decay = 0
max_norm = 1  # grad clip
warmup_ratio = 0.03

# Save
save_steps = 500
save_total_limit = 2  # Maximum checkpoints to keep (-1 means unlimited)

# Evaluate the generation performance during the training
evaluation_freq = 500
SYSTEM = SYSTEM_TEMPLATE.alpaca
evaluation_inputs = [
    '请介绍一下你自己', 'Please introduce yourself'
]  # modified: evaluation prompts

#######################################################################
#                      PART 2  Model & Tokenizer                      #
#######################################################################
tokenizer = dict(
    type=AutoTokenizer.from_pretrained,
    pretrained_model_name_or_path=pretrained_model_name_or_path,
    trust_remote_code=True,
    padding_side='right')

model = dict(
    type=SupervisedFinetune,
    use_varlen_attn=use_varlen_attn,
    llm=dict(
        type=AutoModelForCausalLM.from_pretrained,
        pretrained_model_name_or_path=pretrained_model_name_or_path,
        trust_remote_code=True,
        torch_dtype=torch.float16,
        quantization_config=dict(
            type=BitsAndBytesConfig,
            load_in_4bit=True,
            load_in_8bit=False,
            llm_int8_threshold=6.0,
            llm_int8_has_fp16_weight=False,
            bnb_4bit_compute_dtype=torch.float16,
            bnb_4bit_use_double_quant=True,
            bnb_4bit_quant_type='nf4')),
    lora=dict(
        type=LoraConfig,
        r=64,
        lora_alpha=16,
        lora_dropout=0.1,
        bias='none',
        task_type='CAUSAL_LM'))

#######################################################################
#                      PART 3  Dataset & Dataloader                   #
#######################################################################
alpaca_en = dict(
    type=process_hf_dataset,
    dataset=dict(type=load_dataset, path='json', data_files=dict(train=alpaca_en_path)),  # modified: load the local json file
    tokenizer=tokenizer,
    max_length=max_length,
    dataset_map_fn=None,  # modified: the data is already in XTuner's format
    template_map_fn=dict(
        type=template_map_fn_factory, template=prompt_template),
    remove_unused_columns=True,
    shuffle_before_pack=True,
    pack_to_max_length=pack_to_max_length,
    use_varlen_attn=use_varlen_attn)

sampler = SequenceParallelSampler \
    if sequence_parallel_size > 1 else DefaultSampler

train_dataloader = dict(
    batch_size=batch_size,
    num_workers=dataloader_num_workers,
    dataset=alpaca_en,
    sampler=dict(type=sampler, shuffle=True),
    collate_fn=dict(type=default_collate_fn, use_varlen_attn=use_varlen_attn))

#######################################################################
#                    PART 4  Scheduler & Optimizer                    #
#######################################################################
# optimizer
optim_wrapper = dict(
    type=AmpOptimWrapper,
    optimizer=dict(
        type=optim_type, lr=lr, betas=betas, weight_decay=weight_decay),
    clip_grad=dict(max_norm=max_norm, error_if_nonfinite=False),
    accumulative_counts=accumulative_counts,
    loss_scale='dynamic',
    dtype='float16')

# learning policy
# More information: https://github.com/open-mmlab/mmengine/blob/main/docs/en/tutorials/param_scheduler.md  # noqa: E501
param_scheduler = [
    dict(
        type=LinearLR,
        start_factor=1e-5,
        by_epoch=True,
        begin=0,
        end=warmup_ratio * max_epochs,
        convert_to_iter_based=True),
    dict(
        type=CosineAnnealingLR,
        eta_min=0.0,
        by_epoch=True,
        begin=warmup_ratio * max_epochs,
        end=max_epochs,
        convert_to_iter_based=True)
]

# train, val, test setting
train_cfg = dict(type=TrainLoop, max_epochs=max_epochs)

#######################################################################
#                           PART 5  Runtime                           #
#######################################################################
# Log the dialogue periodically during the training process, optional
custom_hooks = [
    dict(type=DatasetInfoHook, tokenizer=tokenizer),
    dict(
        type=EvaluateChatHook,
        tokenizer=tokenizer,
        every_n_iters=evaluation_freq,
        evaluation_inputs=evaluation_inputs,
        system=SYSTEM,
        prompt_template=prompt_template)
]

if use_varlen_attn:
    custom_hooks += [dict(type=VarlenAttnArgsToMessageHubHook)]

# configure default hooks
default_hooks = dict(
    # record the time of every iteration.
    timer=dict(type=IterTimerHook),
    # print log every 10 iterations.
    logger=dict(type=LoggerHook, log_metric_by_epoch=False, interval=10),
    # enable the parameter scheduler.
    param_scheduler=dict(type=ParamSchedulerHook),
    # save checkpoint per `save_steps`.
    checkpoint=dict(
        type=CheckpointHook,
        by_epoch=False,
        interval=save_steps,
        max_keep_ckpts=save_total_limit),
    # set sampler seed in distributed environment.
    sampler_seed=dict(type=DistSamplerSeedHook),
)

# configure environment
env_cfg = dict(
    # whether to enable cudnn benchmark
    cudnn_benchmark=False,
    # set multi process parameters
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
    # set distributed parameters
    dist_cfg=dict(backend='nccl'),
)

# set visualizer
visualizer = None

# set log level
log_level = 'INFO'

# load from which checkpoint
load_from = None

# whether to resume training from the loaded checkpoint
resume = False

# Defaults to use random seed and disable `deterministic`
randomness = dict(seed=None, deterministic=False)

# set log processor
log_processor = dict(by_epoch=False)
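
Optionally, confirm the edited config parses before launching training. A minimal sketch using mmengine's Config, which XTuner configs are written for; the file name assumes the copy made above:

from mmengine.config import Config

cfg = Config.fromfile('./internlm2_chat_1_8b_qlora_alpaca_e3_copy.py')
print(cfg.pretrained_model_name_or_path)  # should point at the local model
print(cfg.alpaca_en_path)                 # should point at datas/assistant.json
print(cfg.max_epochs, cfg.lr)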

Training

# Launch training
xtuner train ./internlm2_chat_1_8b_qlora_alpaca_e3_copy.py

After training finishes, a work_dirs directory appears under the working directory. It holds the training logs as well as the trained weights, saved as iter_*.pth checkpoints.
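
To see which checkpoint to convert in the next step, you can list what was saved. A minimal sketch, assuming training was launched from /root/demo/xtuner_demo as above:

from pathlib import Path

work_dir = Path('./work_dirs/internlm2_chat_1_8b_qlora_alpaca_e3_copy')
for ckpt in sorted(work_dir.glob('iter_*.pth')):
    print(ckpt)  # e.g. iter_192.pth, used in the conversion command below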

Weight Format Conversion

Convert the trained .pth checkpoint into a HuggingFace-format LoRA adapter:

# Work around an MKL threading-layer conflict in the conda environment
export MKL_SERVICE_FORCE_INTEL=1
export MKL_THREADING_LAYER=GNU
# Arguments: <config file> <pth checkpoint> <output adapter directory>
xtuner convert pth_to_hf ./internlm2_chat_1_8b_qlora_alpaca_e3_copy.py /root/demo/xtuner_demo/work_dirs/internlm2_chat_1_8b_qlora_alpaca_e3_copy/iter_192.pth ./hf

Model Merging

A model fine-tuned with LoRA or QLoRA is not a complete model on its own: what training produces is an extra set of adapter layers. To be used normally, the trained adapter must be merged back into the original model.

Fully fine-tuned (full) models do not need this merging step, because full fine-tuning updates the original model's weights directly rather than training a separate adapter.
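
Conceptually, merging folds the low-rank update into each adapted weight matrix (W ← W + (lora_alpha / r) · BA). The xtuner convert merge command below does this for you; purely as an illustration, here is a minimal sketch of the same operation with the peft API, assuming ./hf is a standard PEFT adapter directory:

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the frozen base model
base = AutoModelForCausalLM.from_pretrained(
    '/root/share/new_models/Shanghai_AI_Laboratory/internlm2-chat-1_8b',
    torch_dtype=torch.float16, trust_remote_code=True)
# Attach the trained adapter, then fold its weights into the base model
merged = PeftModel.from_pretrained(base, './hf').merge_and_unload()
merged.save_pretrained('./merged_peft')  # hypothetical output dir, just for comparison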

export MKL_SERVICE_FORCE_INTEL=1
export MKL_THREADING_LAYER=GNU
# Arguments: <base model> <adapter directory> <output directory>
xtuner convert merge /root/share/new_models/Shanghai_AI_Laboratory/internlm2-chat-1_8b ./hf ./merged --max-shard-size 2GB
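
After merging, ./merged loads like any regular HuggingFace model. A quick smoke test (a minimal sketch; the .chat helper is provided by InternLM2's remote modeling code):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = '/root/demo/xtuner_demo/merged'
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    path, torch_dtype=torch.float16, trust_remote_code=True).cuda().eval()

# Ask the identity question the model was fine-tuned on
response, _ = model.chat(tokenizer, '请介绍一下你自己', history=[])
print(response)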

Web Chat

Finally, chat with the fine-tuned model through a Streamlit page. The script below is essentially the preview demo with model_name_or_path pointed at the merged weights:

import copy
import warnings
from dataclasses import asdict, dataclass
from typing import Callable, List, Optional

import streamlit as st
import torch
from torch import nn
from transformers.generation.utils import (LogitsProcessorList,
                                           StoppingCriteriaList)
from transformers.utils import logging

from transformers import AutoTokenizer, AutoModelForCausalLM  # isort: skip

logger = logging.get_logger(__name__)
model_name_or_path = "/root/demo/xtuner_demo/merged"


@dataclass
class GenerationConfig:
    # this config is used for chat to provide more diversity
    max_length: int = 2048
    top_p: float = 0.75
    temperature: float = 0.1
    do_sample: bool = True
    repetition_penalty: float = 1.000


@torch.inference_mode()
def generate_interactive(
    model,
    tokenizer,
    prompt,
    generation_config: Optional[GenerationConfig] = None,
    logits_processor: Optional[LogitsProcessorList] = None,
    stopping_criteria: Optional[StoppingCriteriaList] = None,
    prefix_allowed_tokens_fn: Optional[Callable[[int, torch.Tensor],
                                                List[int]]] = None,
    additional_eos_token_id: Optional[int] = None,
    **kwargs,
):
    inputs = tokenizer([prompt], padding=True, return_tensors='pt')
    input_length = len(inputs['input_ids'][0])
    for k, v in inputs.items():
        inputs[k] = v.cuda()
    input_ids = inputs['input_ids']
    _, input_ids_seq_length = input_ids.shape[0], input_ids.shape[-1]
    if generation_config is None:
        generation_config = model.generation_config
    generation_config = copy.deepcopy(generation_config)
    model_kwargs = generation_config.update(**kwargs)
    bos_token_id, eos_token_id = (  # noqa: F841  # pylint: disable=W0612
        generation_config.bos_token_id,
        generation_config.eos_token_id,
    )
    if isinstance(eos_token_id, int):
        eos_token_id = [eos_token_id]
    if additional_eos_token_id is not None:
        eos_token_id.append(additional_eos_token_id)
    has_default_max_length = kwargs.get(
        'max_length') is None and generation_config.max_length is not None
    if has_default_max_length and generation_config.max_new_tokens is None:
        warnings.warn(
            f"Using 'max_length''s default ({repr(generation_config.max_length)}) \
                to control the generation length. "
            'This behaviour is deprecated and will be removed from the \
                config in v5 of Transformers -- we'
            ' recommend using `max_new_tokens` to control the maximum \
                length of the generation.',
            UserWarning,
        )
    elif generation_config.max_new_tokens is not None:
        generation_config.max_length = generation_config.max_new_tokens + \
            input_ids_seq_length
        if not has_default_max_length:
            logger.warn(  # pylint: disable=W4902
                f"Both 'max_new_tokens' (={generation_config.max_new_tokens}) "
                f"and 'max_length'(={generation_config.max_length}) seem to "
                "have been set. 'max_new_tokens' will take precedence. "
                'Please refer to the documentation for more information. '
                '(https://huggingface.co/docs/transformers/main/'
                'en/main_classes/text_generation)',
                UserWarning,
            )

    if input_ids_seq_length >= generation_config.max_length:
        input_ids_string = 'input_ids'
        logger.warning(
            f"Input length of {input_ids_string} is {input_ids_seq_length}, "
            f"but 'max_length' is set to {generation_config.max_length}. "
            'This can lead to unexpected behavior. You should consider'
            " increasing 'max_new_tokens'.")

    # 2. Set generation parameters if not already defined
    logits_processor = logits_processor if logits_processor is not None \
        else LogitsProcessorList()
    stopping_criteria = stopping_criteria if stopping_criteria is not None \
        else StoppingCriteriaList()

    logits_processor = model._get_logits_processor(
        generation_config=generation_config,
        input_ids_seq_length=input_ids_seq_length,
        encoder_input_ids=input_ids,
        prefix_allowed_tokens_fn=prefix_allowed_tokens_fn,
        logits_processor=logits_processor,
    )

    stopping_criteria = model._get_stopping_criteria(
        generation_config=generation_config,
        stopping_criteria=stopping_criteria)
    logits_warper = model._get_logits_warper(generation_config)

    unfinished_sequences = input_ids.new(input_ids.shape[0]).fill_(1)
    scores = None
    while True:
        model_inputs = model.prepare_inputs_for_generation(
            input_ids, **model_kwargs)
        # forward pass to get next token
        outputs = model(
            **model_inputs,
            return_dict=True,
            output_attentions=False,
            output_hidden_states=False,
        )

        next_token_logits = outputs.logits[:, -1, :]

        # pre-process distribution
        next_token_scores = logits_processor(input_ids, next_token_logits)
        next_token_scores = logits_warper(input_ids, next_token_scores)

        # sample
        probs = nn.functional.softmax(next_token_scores, dim=-1)
        if generation_config.do_sample:
            next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
        else:
            next_tokens = torch.argmax(probs, dim=-1)

        # update generated ids, model inputs, and length for next step
        input_ids = torch.cat([input_ids, next_tokens[:, None]], dim=-1)
        model_kwargs = model._update_model_kwargs_for_generation(
            outputs, model_kwargs, is_encoder_decoder=False)
        unfinished_sequences = unfinished_sequences.mul(
            (min(next_tokens != i for i in eos_token_id)).long())

        output_token_ids = input_ids[0].cpu().tolist()
        output_token_ids = output_token_ids[input_length:]
        for each_eos_token_id in eos_token_id:
            if output_token_ids[-1] == each_eos_token_id:
                output_token_ids = output_token_ids[:-1]
        response = tokenizer.decode(output_token_ids)

        yield response
        # stop when each sentence is finished
        # or if we exceed the maximum length
        if unfinished_sequences.max() == 0 or stopping_criteria(
                input_ids, scores):
            break


def on_btn_click():
    del st.session_state.messages


@st.cache_resource
def load_model():
    model = (AutoModelForCausalLM.from_pretrained(
        model_name_or_path,
        trust_remote_code=True).to(torch.bfloat16).cuda())
    tokenizer = AutoTokenizer.from_pretrained(
        model_name_or_path, trust_remote_code=True)
    return model, tokenizer


def prepare_generation_config():
    with st.sidebar:
        max_length = st.slider('Max Length',
                               min_value=8,
                               max_value=32768,
                               value=2048)
        top_p = st.slider('Top P', 0.0, 1.0, 0.75, step=0.01)
        temperature = st.slider('Temperature', 0.0, 1.0, 0.1, step=0.01)
        st.button('Clear Chat History', on_click=on_btn_click)

    generation_config = GenerationConfig(max_length=max_length,
                                         top_p=top_p,
                                         temperature=temperature)

    return generation_config


user_prompt = '<|im_start|>user\n{user}<|im_end|>\n'
robot_prompt = '<|im_start|>assistant\n{robot}<|im_end|>\n'
cur_query_prompt = '<|im_start|>user\n{user}<|im_end|>\n\
<|im_start|>assistant\n'


def combine_history(prompt):
    messages = st.session_state.messages
    meta_instruction = ('')
    total_prompt = f"<s><|im_start|>system\n{meta_instruction}<|im_end|>\n"
    for message in messages:
        cur_content = message['content']
        if message['role'] == 'user':
            cur_prompt = user_prompt.format(user=cur_content)
        elif message['role'] == 'robot':
            cur_prompt = robot_prompt.format(robot=cur_content)
        else:
            raise RuntimeError
        total_prompt += cur_prompt
    total_prompt = total_prompt + cur_query_prompt.format(user=prompt)
    return total_prompt


def main():
    # torch.cuda.empty_cache()
    print('load model begin.')
    model, tokenizer = load_model()
    print('load model end.')

    st.title('InternLM2-Chat-1.8B')

    generation_config = prepare_generation_config()

    # Initialize chat history
    if 'messages' not in st.session_state:
        st.session_state.messages = []

    # Display chat messages from history on app rerun
    for message in st.session_state.messages:
        with st.chat_message(message['role'], avatar=message.get('avatar')):
            st.markdown(message['content'])

    # Accept user input
    if prompt := st.chat_input('What is up?'):
        # Display user message in chat message container
        with st.chat_message('user'):
            st.markdown(prompt)
        real_prompt = combine_history(prompt)
        # Add user message to chat history
        st.session_state.messages.append({
            'role': 'user',
            'content': prompt,
        })

        with st.chat_message('robot'):
            message_placeholder = st.empty()
            for cur_response in generate_interactive(
                    model=model,
                    tokenizer=tokenizer,
                    prompt=real_prompt,
                    additional_eos_token_id=92542,
                    **asdict(generation_config),
            ):
                # Display robot response in chat message container
                message_placeholder.markdown(cur_response + '▌')
            message_placeholder.markdown(cur_response)
        # Add robot response to chat history
        st.session_state.messages.append({
            'role': 'robot',
            'content': cur_response,  # pylint: disable=undefined-loop-variable
        })
        torch.cuda.empty_cache()


if __name__ == '__main__':
    main()
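
Save the script (for example as /root/demo/xtuner_demo/xtuner_streamlit_demo.py, a hypothetical location) and launch it the same way as in the preview step, with python -m streamlit run <path-to-script>. Since model_name_or_path points at the merged weights, the page now serves the fine-tuned personal assistant.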
