Getting the MiniCPM-V-2.6 Web Demo Running on a Mac with the M1 Pro Chip

Motivation

MiniCPM-V 2.6 (the "little cannon") can now take video input, so I had to pull it down and run it locally. There are two main problems to solve: version 2.6 binds to flash_attn by default, and `pip install flash_attn` fails because the package hard-requires CUDA; and on top of that, the `BFloat16 is not supported on MPS` error.

Environment

  • macOS version: 15.0 Beta (24A5279h) / 15.1 Beta (24B5009l)
  • Chip: M1 Pro
  • Repository: https://github.com/OpenBMB/MiniCPM-V.git
  • Branch: main
  • Commit: b0125d8a, yiranyyu 2606375857@qq.com, 2024/8/9 10:25
  • Python version: 3.9

Solving the problems

# Clone the repository
git clone https://github.com/OpenBMB/MiniCPM-V.git
# Install the dependencies from requirements.txt
pip install -r requirements.txt
# modelscope_studio has to be installed manually
pip install http://thunlp.oss-cn-qingdao.aliyuncs.com/multi_modal/never_delete/modelscope_studio-0.4.0.9-py3-none-any.whl
# If decord fails to install, see my LAVIS blog post
# Then find web_demo_2.6.py in the repo root and run it.
# First add the environment variable and the mps launch argument (see the screenshot below):
--device mps
PYTORCH_ENABLE_MPS_FALLBACK=1

[screenshot: run configuration with the --device mps argument and PYTORCH_ENABLE_MPS_FALLBACK=1 environment variable]
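If you'd rather not touch the IDE run configuration, a minimal alternative sketch (my own addition, not part of the demo) is to set the variable programmatically at the very top of web_demo_2.6.py, before torch is imported, since PyTorch reads the fallback flag when it initializes:

# Alternative to the IDE run configuration: set the MPS fallback flag in code.
# This must run before `import torch`.
import os
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch  # unsupported MPS ops now fall back to the CPU instead of raising

Equivalently, from a terminal: PYTORCH_ENABLE_MPS_FALLBACK=1 python web_demo_2.6.py --device mps, which is exactly what the README comment inside the demo suggests.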


# The first run of web_demo_2.6.py fails with:
ImportError: This modeling file requires the following packages that were not found in your environment: flash_attn. Run `pip install flash_attn`

# Patch the code directly. Add the following near the top of web_demo_2.6.py:
from typing import Union
from transformers.dynamic_module_utils import get_imports
from unittest.mock import patch

# fix the imports: drop flash_attn from the remote module's import list
# when no CUDA device is available
def fixed_get_imports(filename: Union[str, os.PathLike]) -> list[str]:
    imports = get_imports(filename)
    if not torch.cuda.is_available() and "flash_attn" in imports:
        imports.remove("flash_attn")
    return imports

# Then, around line 79 of the file, change the model loading to:
with patch("transformers.dynamic_module_utils.get_imports", fixed_get_imports):
    model = AutoModel.from_pretrained(model_path, trust_remote_code=True, torch_dtype=torch.bfloat16)
model = model.to(device=device)
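Before relaunching, it may be worth a quick check that your torch build actually exposes MPS at all. This is standard PyTorch API, not part of the demo:

# Sanity check: was torch compiled with MPS support, and is the device usable?
import torch
print(torch.backends.mps.is_built())      # compiled with MPS support?
print(torch.backends.mps.is_available())  # MPS device usable on this machine?

If is_available() prints False, --device mps will fail regardless of the flash_attn patch.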

The complete modified web_demo_2.6.py is as follows:

#!/usr/bin/env python
# encoding: utf-8
import torch
import argparse
from transformers import AutoModel, AutoTokenizer
import gradio as gr
from PIL import Image
from decord import VideoReader, cpu
import io
import os
import copy
import requests
import base64
import json
import traceback
import re
import modelscope_studio as mgr
from typing import Union
from transformers.dynamic_module_utils import get_imports
from unittest.mock import patch

# README, How to run demo on different devices
# For Nvidia GPUs.
# python web_demo_2.6.py --device cuda
# For Mac with MPS (Apple silicon or AMD GPUs).
# PYTORCH_ENABLE_MPS_FALLBACK=1 python web_demo_2.6.py --device mps

# Argparser
parser = argparse.ArgumentParser(description='demo')
parser.add_argument('--device', type=str, default='cuda', help='cuda or mps')
parser.add_argument('--multi-gpus', action='store_true', default=False, help='use multi-gpus')
args = parser.parse_args()
device = args.device
assert device in ['cuda', 'mps']

# fix the imports
def fixed_get_imports(filename: Union[str, os.PathLike]) -> list[str]:
    imports = get_imports(filename)
    if not torch.cuda.is_available() and "flash_attn" in imports:
        imports.remove("flash_attn")
    return imports

# Load model
model_path = 'openbmb/MiniCPM-V-2_6'
if 'int4' in model_path:
    if device == 'mps':
        print('Error: running int4 model with bitsandbytes on Mac is not supported right now.')
        exit()
    model = AutoModel.from_pretrained(model_path, trust_remote_code=True)
else:
    if args.multi_gpus:
        from accelerate import load_checkpoint_and_dispatch, init_empty_weights, infer_auto_device_map
        with init_empty_weights():
            model = AutoModel.from_pretrained(model_path, trust_remote_code=True,
                attn_implementation='sdpa', torch_dtype=torch.bfloat16)
        device_map = infer_auto_device_map(model, max_memory={0: "10GB", 1: "10GB"},
            no_split_module_classes=['SiglipVisionTransformer', 'Qwen2DecoderLayer'])
        device_id = device_map["llm.model.embed_tokens"]
        device_map["llm.lm_head"] = device_id  # first and last layer should be in same device
        device_map["vpm"] = device_id
        device_map["resampler"] = device_id
        device_id2 = device_map["llm.model.layers.26"]
        device_map["llm.model.layers.8"] = device_id2
        device_map["llm.model.layers.9"] = device_id2
        device_map["llm.model.layers.10"] = device_id2
        device_map["llm.model.layers.11"] = device_id2
        device_map["llm.model.layers.12"] = device_id2
        device_map["llm.model.layers.13"] = device_id2
        device_map["llm.model.layers.14"] = device_id2
        device_map["llm.model.layers.15"] = device_id2
        device_map["llm.model.layers.16"] = device_id2
        #print(device_map)
        model = load_checkpoint_and_dispatch(model, model_path, dtype=torch.bfloat16, device_map=device_map)
    else:
        with patch("transformers.dynamic_module_utils.get_imports", fixed_get_imports):
            model = AutoModel.from_pretrained(model_path, trust_remote_code=True, torch_dtype=torch.bfloat16)
        model = model.to(device=device)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model.eval()

ERROR_MSG = "Error, please retry"
model_name = 'MiniCPM-V 2.6'
MAX_NUM_FRAMES = 64
IMAGE_EXTENSIONS = {'.jpg', '.jpeg', '.png', '.bmp', '.tiff', '.webp'}
VIDEO_EXTENSIONS = {'.mp4', '.mkv', '.mov', '.avi', '.flv', '.wmv', '.webm', '.m4v'}

def get_file_extension(filename):
    return os.path.splitext(filename)[1].lower()

def is_image(filename):
    return get_file_extension(filename) in IMAGE_EXTENSIONS

def is_video(filename):
    return get_file_extension(filename) in VIDEO_EXTENSIONS

form_radio = {
    'choices': ['Beam Search', 'Sampling'],
    #'value': 'Beam Search',
    'value': 'Sampling',
    'interactive': True,
    'label': 'Decode Type'
}

def create_component(params, comp='Slider'):
    if comp == 'Slider':
        return gr.Slider(
            minimum=params['minimum'],
            maximum=params['maximum'],
            value=params['value'],
            step=params['step'],
            interactive=params['interactive'],
            label=params['label']
        )
    elif comp == 'Radio':
        return gr.Radio(
            choices=params['choices'],
            value=params['value'],
            interactive=params['interactive'],
            label=params['label']
        )
    elif comp == 'Button':
        return gr.Button(
            value=params['value'],
            interactive=True
        )

def create_multimodal_input(upload_image_disabled=False, upload_video_disabled=False):
    return mgr.MultimodalInput(
        upload_image_button_props={'label': 'Upload Image', 'disabled': upload_image_disabled, 'file_count': 'multiple'},
        upload_video_button_props={'label': 'Upload Video', 'disabled': upload_video_disabled, 'file_count': 'single'},
        submit_button_props={'label': 'Submit'}
    )

def chat(img, msgs, ctx, params=None, vision_hidden_states=None):
    try:
        print('msgs:', msgs)
        answer = model.chat(
            image=None,
            msgs=msgs,
            tokenizer=tokenizer,
            **params
        )
        res = re.sub(r'(<box>.*</box>)', '', answer)
        res = res.replace('<ref>', '')
        res = res.replace('</ref>', '')
        res = res.replace('<box>', '')
        answer = res.replace('</box>', '')
        print('answer:', answer)
        return 0, answer, None, None
    except Exception as e:
        print(e)
        traceback.print_exc()
        return -1, ERROR_MSG, None, None

def encode_image(image):
    if not isinstance(image, Image.Image):
        if hasattr(image, 'path'):
            image = Image.open(image.path).convert("RGB")
        else:
            image = Image.open(image.file.path).convert("RGB")
    # resize to max_size
    max_size = 448*16
    if max(image.size) > max_size:
        w, h = image.size
        if w > h:
            new_w = max_size
            new_h = int(h * max_size / w)
        else:
            new_h = max_size
            new_w = int(w * max_size / h)
        image = image.resize((new_w, new_h), resample=Image.BICUBIC)
    return image
    ## save by BytesIO and convert to base64
    #buffered = io.BytesIO()
    #image.save(buffered, format="png")
    #im_b64 = base64.b64encode(buffered.getvalue()).decode()
    #return {"type": "image", "pairs": im_b64}

def encode_video(video):
    def uniform_sample(l, n):
        gap = len(l) / n
        idxs = [int(i * gap + gap / 2) for i in range(n)]
        return [l[i] for i in idxs]
    if hasattr(video, 'path'):
        vr = VideoReader(video.path, ctx=cpu(0))
    else:
        vr = VideoReader(video.file.path, ctx=cpu(0))
    sample_fps = round(vr.get_avg_fps() / 1)  # FPS
    frame_idx = [i for i in range(0, len(vr), sample_fps)]
    if len(frame_idx) > MAX_NUM_FRAMES:
        frame_idx = uniform_sample(frame_idx, MAX_NUM_FRAMES)
    video = vr.get_batch(frame_idx).asnumpy()
    video = [Image.fromarray(v.astype('uint8')) for v in video]
    video = [encode_image(v) for v in video]
    print('video frames:', len(video))
    return video

def check_mm_type(mm_file):
    if hasattr(mm_file, 'path'):
        path = mm_file.path
    else:
        path = mm_file.file.path
    if is_image(path):
        return "image"
    if is_video(path):
        return "video"
    return None

def encode_mm_file(mm_file):
    if check_mm_type(mm_file) == 'image':
        return [encode_image(mm_file)]
    if check_mm_type(mm_file) == 'video':
        return encode_video(mm_file)
    return None

def make_text(text):
    #return {"type": "text", "pairs": text} # For remote call
    return text

def encode_message(_question):
    files = _question.files
    question = _question.text
    pattern = r"\[mm_media\]\d+\[/mm_media\]"
    matches = re.split(pattern, question)
    message = []
    if len(matches) != len(files) + 1:
        gr.Warning("Number of Images not match the placeholder in text, please refresh the page to restart!")
    assert len(matches) == len(files) + 1
    text = matches[0].strip()
    if text:
        message.append(make_text(text))
    for i in range(len(files)):
        message += encode_mm_file(files[i])
        text = matches[i + 1].strip()
        if text:
            message.append(make_text(text))
    return message

def check_has_videos(_question):
    images_cnt = 0
    videos_cnt = 0
    for file in _question.files:
        if check_mm_type(file) == "image":
            images_cnt += 1
        else:
            videos_cnt += 1
    return images_cnt, videos_cnt

def count_video_frames(_context):
    num_frames = 0
    for message in _context:
        for item in message["content"]:
            #if item["type"] == "image": # For remote call
            if isinstance(item, Image.Image):
                num_frames += 1
    return num_frames

def respond(_question, _chat_bot, _app_cfg, params_form):
    _context = _app_cfg['ctx'].copy()
    _context.append({'role': 'user', 'content': encode_message(_question)})
    images_cnt = _app_cfg['images_cnt']
    videos_cnt = _app_cfg['videos_cnt']
    files_cnts = check_has_videos(_question)
    if files_cnts[1] + videos_cnt > 1 or (files_cnts[1] + videos_cnt == 1 and files_cnts[0] + images_cnt > 0):
        gr.Warning("Only supports single video file input right now!")
        return _question, _chat_bot, _app_cfg
    if params_form == 'Beam Search':
        params = {
            'sampling': False,
            'num_beams': 3,
            'repetition_penalty': 1.2,
            "max_new_tokens": 2048
        }
    else:
        params = {
            'sampling': True,
            'top_p': 0.8,
            'top_k': 100,
            'temperature': 0.7,
            'repetition_penalty': 1.05,
            "max_new_tokens": 2048
        }
    if files_cnts[1] + videos_cnt > 0:
        params["max_inp_length"] = 4352  # 4096+256
        params["use_image_id"] = False
        params["max_slice_nums"] = 1 if count_video_frames(_context) > 16 else 2
    code, _answer, _, sts = chat("", _context, None, params)
    images_cnt += files_cnts[0]
    videos_cnt += files_cnts[1]
    _context.append({"role": "assistant", "content": [make_text(_answer)]})
    _chat_bot.append((_question, _answer))
    if code == 0:
        _app_cfg['ctx'] = _context
        _app_cfg['sts'] = sts
    _app_cfg['images_cnt'] = images_cnt
    _app_cfg['videos_cnt'] = videos_cnt
    upload_image_disabled = videos_cnt > 0
    upload_video_disabled = videos_cnt > 0 or images_cnt > 0
    return create_multimodal_input(upload_image_disabled, upload_video_disabled), _chat_bot, _app_cfg

def fewshot_add_demonstration(_image, _user_message, _assistant_message, _chat_bot, _app_cfg):
    ctx = _app_cfg["ctx"]
    message_item = []
    if _image is not None:
        image = Image.open(_image).convert("RGB")
        ctx.append({"role": "user", "content": [encode_image(image), make_text(_user_message)]})
        message_item.append({"text": "[mm_media]1[/mm_media]" + _user_message, "files": [_image]})
    else:
        if _user_message:
            ctx.append({"role": "user", "content": [make_text(_user_message)]})
            message_item.append({"text": _user_message, "files": []})
        else:
            message_item.append(None)
    if _assistant_message:
        ctx.append({"role": "assistant", "content": [make_text(_assistant_message)]})
        message_item.append({"text": _assistant_message, "files": []})
    else:
        message_item.append(None)
    _chat_bot.append(message_item)
    return None, "", "", _chat_bot, _app_cfg

def fewshot_respond(_image, _user_message, _chat_bot, _app_cfg, params_form):
    user_message_contents = []
    _context = _app_cfg["ctx"].copy()
    if _image:
        image = Image.open(_image).convert("RGB")
        user_message_contents += [encode_image(image)]
    if _user_message:
        user_message_contents += [make_text(_user_message)]
    if user_message_contents:
        _context.append({"role": "user", "content": user_message_contents})
    if params_form == 'Beam Search':
        params = {
            'sampling': False,
            'num_beams': 3,
            'repetition_penalty': 1.2,
            "max_new_tokens": 2048
        }
    else:
        params = {
            'sampling': True,
            'top_p': 0.8,
            'top_k': 100,
            'temperature': 0.7,
            'repetition_penalty': 1.05,
            "max_new_tokens": 2048
        }
    code, _answer, _, sts = chat("", _context, None, params)
    _context.append({"role": "assistant", "content": [make_text(_answer)]})
    if _image:
        _chat_bot.append([
            {"text": "[mm_media]1[/mm_media]" + _user_message, "files": [_image]},
            {"text": _answer, "files": []}
        ])
    else:
        _chat_bot.append([
            {"text": _user_message, "files": [_image]},
            {"text": _answer, "files": []}
        ])
    if code == 0:
        _app_cfg['ctx'] = _context
        _app_cfg['sts'] = sts
    return None, '', '', _chat_bot, _app_cfg

def regenerate_button_clicked(_question, _image, _user_message, _assistant_message, _chat_bot, _app_cfg, params_form):
    if len(_chat_bot) <= 1 or not _chat_bot[-1][1]:
        gr.Warning('No question for regeneration.')
        return '', _image, _user_message, _assistant_message, _chat_bot, _app_cfg
    if _app_cfg["chat_type"] == "Chat":
        images_cnt = _app_cfg['images_cnt']
        videos_cnt = _app_cfg['videos_cnt']
        _question = _chat_bot[-1][0]
        _chat_bot = _chat_bot[:-1]
        _app_cfg['ctx'] = _app_cfg['ctx'][:-2]
        files_cnts = check_has_videos(_question)
        images_cnt -= files_cnts[0]
        videos_cnt -= files_cnts[1]
        _app_cfg['images_cnt'] = images_cnt
        _app_cfg['videos_cnt'] = videos_cnt
        upload_image_disabled = videos_cnt > 0
        upload_video_disabled = videos_cnt > 0 or images_cnt > 0
        _question, _chat_bot, _app_cfg = respond(_question, _chat_bot, _app_cfg, params_form)
        return _question, _image, _user_message, _assistant_message, _chat_bot, _app_cfg
    else:
        last_message = _chat_bot[-1][0]
        last_image = None
        last_user_message = ''
        if last_message.text:
            last_user_message = last_message.text
        if last_message.files:
            last_image = last_message.files[0].file.path
        _chat_bot = _chat_bot[:-1]
        _app_cfg['ctx'] = _app_cfg['ctx'][:-2]
        _image, _user_message, _assistant_message, _chat_bot, _app_cfg = fewshot_respond(last_image, last_user_message, _chat_bot, _app_cfg, params_form)
        return _question, _image, _user_message, _assistant_message, _chat_bot, _app_cfg

def flushed():
    return gr.update(interactive=True)

def clear(txt_message, chat_bot, app_session):
    txt_message.files.clear()
    txt_message.text = ''
    chat_bot = copy.deepcopy(init_conversation)
    app_session['sts'] = None
    app_session['ctx'] = []
    app_session['images_cnt'] = 0
    app_session['videos_cnt'] = 0
    return create_multimodal_input(), chat_bot, app_session, None, '', ''

def select_chat_type(_tab, _app_cfg):
    _app_cfg["chat_type"] = _tab
    return _app_cfg

init_conversation = [
    [
        None,
        {
            # The first message of bot closes the typewriter.
            "text": "You can talk to me now",
            "flushing": False
        }
    ],
]

css = """
video { height: auto !important; }
.example label { font-size: 16px;}
"""

introduction = """## Features:
1. Chat with single image
2. Chat with multiple images
3. Chat with video
4. In-context few-shot learning

Click `How to use` tab to see examples.
"""

with gr.Blocks(css=css) as demo:
    with gr.Tab(model_name):
        with gr.Row():
            with gr.Column(scale=1, min_width=300):
                gr.Markdown(value=introduction)
                params_form = create_component(form_radio, comp='Radio')
                regenerate = create_component({'value': 'Regenerate'}, comp='Button')
                clear_button = create_component({'value': 'Clear History'}, comp='Button')
            with gr.Column(scale=3, min_width=500):
                app_session = gr.State({'sts': None, 'ctx': [], 'images_cnt': 0, 'videos_cnt': 0, 'chat_type': 'Chat'})
                chat_bot = mgr.Chatbot(label=f"Chat with {model_name}", value=copy.deepcopy(init_conversation), height=600, flushing=False, bubble_full_width=False)
                with gr.Tab("Chat") as chat_tab:
                    txt_message = create_multimodal_input()
                    chat_tab_label = gr.Textbox(value="Chat", interactive=False, visible=False)
                    txt_message.submit(
                        respond,
                        [txt_message, chat_bot, app_session, params_form],
                        [txt_message, chat_bot, app_session]
                    )
                with gr.Tab("Few Shot") as fewshot_tab:
                    fewshot_tab_label = gr.Textbox(value="Few Shot", interactive=False, visible=False)
                    with gr.Row():
                        with gr.Column(scale=1):
                            image_input = gr.Image(type="filepath", sources=["upload"])
                        with gr.Column(scale=3):
                            user_message = gr.Textbox(label="User")
                            assistant_message = gr.Textbox(label="Assistant")
                            with gr.Row():
                                add_demonstration_button = gr.Button("Add Example")
                                generate_button = gr.Button(value="Generate", variant="primary")
                    add_demonstration_button.click(
                        fewshot_add_demonstration,
                        [image_input, user_message, assistant_message, chat_bot, app_session],
                        [image_input, user_message, assistant_message, chat_bot, app_session]
                    )
                    generate_button.click(
                        fewshot_respond,
                        [image_input, user_message, chat_bot, app_session, params_form],
                        [image_input, user_message, assistant_message, chat_bot, app_session]
                    )
                chat_tab.select(
                    select_chat_type,
                    [chat_tab_label, app_session],
                    [app_session]
                )
                chat_tab.select(  # do clear
                    clear,
                    [txt_message, chat_bot, app_session],
                    [txt_message, chat_bot, app_session, image_input, user_message, assistant_message]
                )
                fewshot_tab.select(
                    select_chat_type,
                    [fewshot_tab_label, app_session],
                    [app_session]
                )
                fewshot_tab.select(  # do clear
                    clear,
                    [txt_message, chat_bot, app_session],
                    [txt_message, chat_bot, app_session, image_input, user_message, assistant_message]
                )
                chat_bot.flushed(
                    flushed,
                    outputs=[txt_message]
                )
                regenerate.click(
                    regenerate_button_clicked,
                    [txt_message, image_input, user_message, assistant_message, chat_bot, app_session, params_form],
                    [txt_message, image_input, user_message, assistant_message, chat_bot, app_session]
                )
                clear_button.click(
                    clear,
                    [txt_message, chat_bot, app_session],
                    [txt_message, chat_bot, app_session, image_input, user_message, assistant_message]
                )
    with gr.Tab("How to use"):
        with gr.Column():
            with gr.Row():
                image_example = gr.Image(value="http://thunlp.oss-cn-qingdao.aliyuncs.com/multi_modal/never_delete/m_bear2.gif", label='1. Chat with single or multiple images', interactive=False, width=400, elem_classes="example")
                example2 = gr.Image(value="http://thunlp.oss-cn-qingdao.aliyuncs.com/multi_modal/never_delete/video2.gif", label='2. Chat with video', interactive=False, width=400, elem_classes="example")
                example3 = gr.Image(value="http://thunlp.oss-cn-qingdao.aliyuncs.com/multi_modal/never_delete/fshot.gif", label='3. Few shot', interactive=False, width=400, elem_classes="example")

# launch
demo.launch(share=False, debug=True, show_api=False, server_port=8885, server_name="0.0.0.0")
# Running web_demo_2.6.py again then fails with:
File "/Usxxxxxxxckages/torch/nn/modules/module.py", line 1158, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
TypeError: BFloat16 is not supported on MPS

# Reinstall PyTorch with a nightly build:
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu
# After that, the demo runs without errors.
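Since that nightly wheel comes from the cpu index, it sidesteps the dtype problem by not exercising the MPS bfloat16 path at all. An alternative I have not benchmarked would be to stay on MPS but load the weights in float16, which MPS does support — a hypothetical sketch, reusing the names from the demo code above:

# Untested alternative: keep --device mps but avoid bfloat16, which the M1's
# MPS backend rejects; load the checkpoint in float16 instead.
dtype = torch.float16 if device == 'mps' else torch.bfloat16
with patch("transformers.dynamic_module_utils.get_imports", fixed_get_imports):
    model = AutoModel.from_pretrained(model_path, trust_remote_code=True, torch_dtype=dtype)
model = model.to(device=device)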
# The model is roughly a 20 GB download, so this can take a while (I ended up using a proxy).
# As long as the network traffic keeps moving, it is working.
# A successful run prints the following:
Loading checkpoint shards: 100%|██████████| 4/4 [00:21<00:00,  5.33s/it]
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Running on local URL:  http://0.0.0.0:8885

To create a public link, set `share=True` in `launch()`.
IMPORTANT: You are using gradio version 4.22.0, however version 4.29.0 is available, please upgrade.
--------

Results

Image understanding

Sampling decoding

[screenshot]

Beam Search decoding

[screenshot]

Video understanding

Sampling decoding

[screenshot]

Beam Search decoding

[screenshot]

System resource usage

[screenshot]

Summary

  • Fixed the hard flash_attn dependency.
  • Fixed bfloat16 being unusable on MPS.
  • Judging by the system usage, inference is not actually going through MPS; the PYTORCH_ENABLE_MPS_FALLBACK environment variable we had to add points the same way (unsupported ops fall back to the CPU).
  • Sampling tends to answer wildly; Beam Search answers were surprisingly good.
  • With Beam Search, a 4-second video takes about 230 s on the M1 Pro with the current code (a timing sketch follows this list).
  • An ollama deployment is still under investigation…
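The ~230 s figure above was eyeballed from the web UI. To time a single call directly, here is a minimal hypothetical sketch reusing model, tokenizer, encode_video, and the Beam Search parameters from the demo code above (demo_4s.mp4 is a stand-in for your own clip):

import time
from types import SimpleNamespace

# encode_video() from the demo expects an object with a .path attribute.
video = SimpleNamespace(path="demo_4s.mp4")  # hypothetical local test clip
frames = encode_video(video)                 # list of PIL frames, capped at MAX_NUM_FRAMES

# Same message layout the demo builds: sampled frames followed by the question.
msgs = [{"role": "user", "content": frames + ["Describe this video."]}]
params = {"sampling": False, "num_beams": 3, "repetition_penalty": 1.2,
          "max_new_tokens": 2048, "max_inp_length": 4352,
          "use_image_id": False,
          "max_slice_nums": 1 if len(frames) > 16 else 2}

start = time.perf_counter()
answer = model.chat(image=None, msgs=msgs, tokenizer=tokenizer, **params)
print(f"beam search took {time.perf_counter() - start:.1f}s")
print(answer)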

