LLaVA: A Multimodal GPT-4-Level Assistant That Combines Language and Vision to Meet Complex User Needs

An assistant built toward multimodal GPT-4-level capability. It combines natural language processing and computer vision to give users powerful multimodal interaction and understanding. LLaVA aims to understand and process language and visual information more deeply, enabling more complex tasks and conversations. The project represents the direction of the next generation of intelligent assistants: better understanding of, and better responses to, user needs.

  • Demo showcase

Demo link: https://llava.hliu.cc/

What should I be cautious about when I visit here?

When visiting the pier on this calm lake, there are a few things you should be cautious about. First, pay attention to the weather conditions, as the image shows a cloudy sky. This could indicate a possibility of sudden weather changes, such as rain or strong winds, which could make the pier slippery or unsafe. Second, be aware of the water depth and the possibility of sudden changes in water level, since the pier extends into the lake. Finally, watch out for any wildlife or insects in the area, as they could pose a risk to your safety or health. To ensure a safe and enjoyable experience, it is essential to stay alert and respect the environment.

  • Comparison with mainstream large models

1. Installation

  1. Clone this repository and navigate to the LLaVA folder
git clone https://github.com/haotian-liu/LLaVA.git
cd LLaVA
  2. Install Package
conda create -n llava python=3.10 -y
conda activate llava
pip install --upgrade pip  # enable PEP 660 support
pip install -e .
  3. Install additional packages for training cases
pip install ninja
pip install flash-attn --no-build-isolation

1.1 Upgrade to the latest code base

git pull
pip uninstall transformers
pip install -e .

2. LLaVA Weights

Please check out our Model Zoo for all public LLaVA checkpoints, and the instructions of how to use the weights.

2.1 Demo

To run our demo, you need to prepare LLaVA checkpoints locally. Please follow the instructions here to download the checkpoints.

2.2 Gradio Web UI

To launch a Gradio demo locally, run the following commands one by one. If you plan to launch multiple model workers to compare different checkpoints, you only need to launch the controller and the web server once.

  • Launch a controller
python -m llava.serve.controller --host 0.0.0.0 --port 10000
  • Launch a gradio web server.
python -m llava.serve.gradio_web_server --controller http://localhost:10000 --model-list-mode reload

You have just launched the Gradio web interface. You can now open it in your browser with the URL printed on screen. You may notice that there is no model in the model list; do not worry, we have not launched any model worker yet. The list will be updated automatically when you launch a model worker.

  • Launch a model worker

This is the actual worker that performs the inference on the GPU. Each worker is responsible for a single model specified in --model-path.

python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/llava-v1.5-13b

Wait until the process finishes loading the model and you see “Uvicorn running on …”. Now, refresh your Gradio web UI, and you will see the model you just launched in the model list.

You can launch as many workers as you want, and compare between different model checkpoints in the same Gradio interface. Please keep the --controller the same, and modify the --port and --worker to a different port number for each worker.

python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port <different from 40000, say 40001> --worker http://localhost:<change accordingly, i.e. 40001> --model-path <ckpt2>

If you are using an Apple device with an M1 or M2 chip, you can specify the mps device by using the --device flag: --device mps.
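
If you prefer to drive the whole launch sequence from one script, the following is a minimal sketch (not an official LLaVA script) that wraps the three commands above with Python's subprocess module; the ports, model path, and sleep duration are illustrative assumptions taken from the examples above.

# launch_llava_demo.py -- illustrative helper, not part of the official repo.
# Starts the controller, the Gradio web server, and one model worker,
# using the same commands documented above.
import subprocess
import sys
import time

procs = []

def launch(module_args):
    # Start a component and keep the handle so we can shut it down later.
    procs.append(subprocess.Popen([sys.executable, "-m"] + module_args))

launch(["llava.serve.controller", "--host", "0.0.0.0", "--port", "10000"])
time.sleep(10)  # assumption: give the controller a moment to come up

launch(["llava.serve.gradio_web_server",
        "--controller", "http://localhost:10000",
        "--model-list-mode", "reload"])

launch(["llava.serve.model_worker",
        "--host", "0.0.0.0",
        "--controller", "http://localhost:10000",
        "--port", "40000",
        "--worker", "http://localhost:40000",
        "--model-path", "liuhaotian/llava-v1.5-13b"])

try:
    for p in procs:
        p.wait()
except KeyboardInterrupt:
    for p in procs:
        p.terminate()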

  • Launch a model worker (Multiple GPUs, when GPU VRAM <= 24GB)

If the VRAM of your GPU is less than 24GB (e.g., RTX 3090, RTX 4090), you may try running it with multiple GPUs. Our latest code base will automatically try to use multiple GPUs if more than one is available. You can use CUDA_VISIBLE_DEVICES to specify which GPUs to use. Below is an example of running with the first two GPUs.

CUDA_VISIBLE_DEVICES=0,1 python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/llava-v1.5-13b
  • Launch a model worker (4-bit, 8-bit inference, quantized)

You can launch the model worker with quantized bits (4-bit, 8-bit), which lets you run inference with a reduced GPU memory footprint, potentially on a GPU with as little as 12GB of VRAM. Note that inference with quantized bits may not be as accurate as with the full-precision model. Simply append --load-4bit or --load-8bit to the model worker command you are executing. Below is an example of running with 4-bit quantization.

python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/llava-v1.5-13b --load-4bit
  • Launch a model worker (LoRA weights, unmerged)

You can launch the model worker with LoRA weights, without merging them into the base checkpoint, to save disk space. There will be additional loading time, while the inference speed is the same as with merged checkpoints. Unmerged LoRA checkpoints do not have lora-merge in the model name, and are usually much smaller (less than 1GB) than the merged checkpoints (13G for 7B, and 25G for 13B).

To load unmerged LoRA weights, you simply need to pass an additional argument --model-base, which is the base LLM used to train the LoRA weights. You can check the base LLM of each LoRA weight in the Model Zoo.

python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/llava-v1-0719-336px-lora-vicuna-13b-v1.3 --model-base lmsys/vicuna-13b-v1.3

3. CLI Inference

Chat about images with LLaVA without needing the Gradio interface. It also supports multi-GPU, 4-bit and 8-bit quantized inference. With 4-bit quantization, our LLaVA-1.5-7B uses less than 8GB of VRAM on a single GPU.

python -m llava.serve.cli \
    --model-path liuhaotian/llava-v1.5-7b \
    --image-file "https://llava-vl.github.io/static/images/view.jpg" \
    --load-4bit

4. Model Training

Below is the latest training configuration for LLaVA v1.5. For legacy models, please refer to the README of that version. We will add them to a separate document later.

LLaVA training consists of two stages: (1) feature alignment stage: use a 558K subset of the LAION-CC-SBU dataset to connect a frozen pretrained vision encoder to a frozen LLM; (2) visual instruction tuning stage: use 150K GPT-generated multimodal instruction-following data, plus around 515K VQA data from academic-oriented tasks, to teach the model to follow multimodal instructions.

LLaVA is trained on 8 A100 GPUs with 80GB memory. To train on fewer GPUs, you can reduce the per_device_train_batch_size and increase the gradient_accumulation_steps accordingly. Always keep the global batch size the same: per_device_train_batch_size x gradient_accumulation_steps x num_gpus.
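
As a quick sanity check of that relationship, the small helper below (illustrative only, not part of the repo) computes how many gradient accumulation steps are needed to keep the global batch size fixed when the number of GPUs or the per-device batch size changes.

# Keep global batch size = per_device_train_batch_size x gradient_accumulation_steps x num_gpus.
def accumulation_steps(global_batch_size, per_device_train_batch_size, num_gpus):
    per_step = per_device_train_batch_size * num_gpus
    if global_batch_size % per_step != 0:
        raise ValueError("global batch size must be divisible by per_device_batch_size * num_gpus")
    return global_batch_size // per_step

# Example: the 13B finetuning recipe uses a global batch size of 128.
# On 4 GPUs with a per-device batch size of 16, that requires 2 accumulation steps.
print(accumulation_steps(128, 16, 4))  # -> 2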

4.1 Hyperparameters

We use a similar set of hyperparameters as Vicuna in finetuning. Both hyperparameters used in pretraining and finetuning are provided below.

  1. Pretraining
Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay
LLaVA-v1.5-13B | 256 | 1e-3 | 1 | 2048 | 0
  2. Finetuning
Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay
LLaVA-v1.5-13B | 128 | 2e-5 | 1 | 2048 | 0

4.2 Download Vicuna checkpoints (automatically)

Our base model Vicuna v1.5, an instruction-tuned chatbot, will be downloaded automatically when you run our provided training scripts. No action is needed.

4.3 Pretraining (feature alignment)

Please download the 558K subset of the LAION-CC-SBU dataset with BLIP captions that we use in the paper here.

Pretraining takes around 5.5 hours for LLaVA-v1.5-13B on 8x A100 (80G), due to the increased resolution to 336px. It takes around 3.5 hours for LLaVA-v1.5-7B.

Training script with DeepSpeed ZeRO-2: pretrain.sh.

  • --mm_projector_type mlp2x_gelu: the two-layer MLP vision-language connector.
  • --vision_tower openai/clip-vit-large-patch14-336: CLIP ViT-L/14 336px.
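
For intuition, --mm_projector_type mlp2x_gelu denotes a two-layer MLP with a GELU activation that maps vision-encoder features into the LLM embedding space. Below is a minimal PyTorch sketch of such a connector; the dimensions (CLIP ViT-L/14 features of size 1024, an LLM hidden size of 5120) are illustrative assumptions, not the exact implementation in the repo.

import torch
import torch.nn as nn

class MLP2xGeLUProjector(nn.Module):
    """Two-layer MLP vision-language connector (illustrative sketch)."""
    def __init__(self, vision_hidden_size=1024, llm_hidden_size=5120):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vision_hidden_size, llm_hidden_size),
            nn.GELU(),
            nn.Linear(llm_hidden_size, llm_hidden_size),
        )

    def forward(self, image_features):
        # image_features: (batch, num_patches, vision_hidden_size)
        return self.proj(image_features)

# Example: project a dummy batch of CLIP patch features into the LLM embedding space.
dummy = torch.randn(1, 576, 1024)  # 576 patches for a 336px image with 14px patches
print(MLP2xGeLUProjector()(dummy).shape)  # torch.Size([1, 576, 5120])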

4.4 Visual Instruction Tuning

  1. Prepare data

Please download the annotation of the final mixture of our instruction tuning data, llava_v1_5_mix665k.json, and download the images from the constituting datasets:

  • COCO: train2017
  • GQA: images
  • OCR-VQA: download script
  • TextVQA: train_val_images
  • VisualGenome: part1, part2

After downloading all of them, organize the data as follows in ./playground/data,

├── coco
│   └── train2017
├── gqa
│   └── images
├── ocr_vqa
│   └── images
├── textvqa
│   └── train_images
└── vg
    ├── VG_100K
    └── VG_100K_2
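
To catch path mistakes before a long training run, a small check like the one below (an illustrative helper, not part of the repo) verifies that the image folders listed above exist under ./playground/data.

from pathlib import Path

# Expected layout under ./playground/data, as described above.
EXPECTED = [
    "coco/train2017",
    "gqa/images",
    "ocr_vqa/images",
    "textvqa/train_images",
    "vg/VG_100K",
    "vg/VG_100K_2",
]

root = Path("./playground/data")
missing = [d for d in EXPECTED if not (root / d).is_dir()]
if missing:
    print("Missing image folders:", ", ".join(missing))
else:
    print("All image folders are in place.")
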
  2. Start training!

You may download our pretrained projectors in Model Zoo. It is not recommended to use legacy projectors, as they may be trained with a different version of the codebase, and if any option is off, the model will not function/train as we expected.

Visual instruction tuning takes around 20 hours for LLaVA-v1.5-13B on 8x A100 (80G), due to the increased resolution to 336px. It takes around 10 hours for LLaVA-v1.5-7B on 8x A100 (40G).

Training script with DeepSpeed ZeRO-3: finetune.sh.

New options to note:

  • --mm_projector_type mlp2x_gelu: the two-layer MLP vision-language connector.
  • --vision_tower openai/clip-vit-large-patch14-336: CLIP ViT-L/14 336px.
  • --image_aspect_ratio pad: this pads the non-square images to square, instead of cropping them; it slightly reduces hallucination.
  • --group_by_modality_length True: this should only be used when your instruction tuning dataset contains both language (e.g. ShareGPT) and multimodal (e.g. LLaVA-Instruct). It makes the training sampler only sample a single modality (either image or language) during training, which we observe to speed up training by ~25%, and does not affect the final outcome.
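
The --image_aspect_ratio pad option above avoids cropping by padding non-square images to a square before they reach the vision encoder. The sketch below shows the general idea with Pillow; the gray fill color is an illustrative choice, not necessarily what the training code uses.

from PIL import Image

def expand_to_square(img, fill=(127, 127, 127)):
    """Pad a PIL image to a square canvas instead of cropping it (illustrative sketch)."""
    w, h = img.size
    if w == h:
        return img
    side = max(w, h)
    canvas = Image.new("RGB", (side, side), fill)
    # Center the original image on the square canvas.
    canvas.paste(img, ((side - w) // 2, (side - h) // 2))
    return canvas

# Example: a 640x480 image becomes 640x640, with the content centered vertically.
print(expand_to_square(Image.new("RGB", (640, 480))).size)  # (640, 640)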

5. Model Evaluation

In LLaVA-1.5, we evaluate models on a diverse set of 12 benchmarks. To ensure the reproducibility, we evaluate the models with greedy decoding. We do not evaluate using beam search to make the inference process consistent with the chat demo of real-time outputs.

See Evaluation.md.
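
In Hugging Face transformers terms, greedy decoding corresponds to generating with sampling disabled and a single beam. The snippet below is only a minimal illustration of those settings; it is not the evaluation harness itself, and the model variables are placeholders.

# Illustrative only: generation settings that amount to greedy decoding.
# "model" and "input_ids" stand for whatever LLaVA checkpoint and inputs you have loaded.
generation_kwargs = dict(
    do_sample=False,   # disable sampling
    num_beams=1,       # a single beam == greedy decoding
    max_new_tokens=128,
)
# output_ids = model.generate(input_ids, **generation_kwargs)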

5.1 GPT-assisted Evaluation

Our GPT-assisted evaluation pipeline for multimodal modeling provides a comprehensive understanding of the capabilities of vision-language models. Please see our paper for more details.

  1. Generate LLaVA responses
python model_vqa.py \
    --model-path ./checkpoints/LLaVA-13B-v0 \
    --question-file playground/data/coco2014_val_qa_eval/qa90_questions.jsonl \
    --image-folder /path/to/coco2014_val \
    --answers-file /path/to/answer-file-our.jsonl
  2. Evaluate the generated responses. In our case, answer-file-ref.jsonl is the response generated by text-only GPT-4 (0314), with the context captions/boxes provided.
OPENAI_API_KEY="sk-***********************************" python llava/eval/eval_gpt_review_visual.py \
    --question playground/data/coco2014_val_qa_eval/qa90_questions.jsonl \
    --context llava/eval/table/caps_boxes_coco2014_val_80.jsonl \
    --answer-list \
        /path/to/answer-file-ref.jsonl \
        /path/to/answer-file-our.jsonl \
    --rule llava/eval/table/rule.json \
    --output /path/to/review.json
  3. Summarize the evaluation results
python summarize_gpt_review.py

6. Model Zoo

To use LLaVA-1.5 checkpoints, your llava package version must be newer than 1.1.0. See the instructions above on how to upgrade to the latest code base.

If you are interested in including any other details in the Model Zoo, please open an issue :)

The model weights below are merged weights. You do not need to apply delta. The usage of LLaVA checkpoints should comply with the base LLM's model license: Llama 2.

LLaVA-v1.5

Version | Size | Schedule | Checkpoint | VQAv2 | GQA | VizWiz | SQA | T-VQA | POPE | MME | MM-Bench | MM-Bench-CN | SEED | LLaVA-Bench-Wild | MM-Vet
LLaVA-1.5 | 7B | full_ft-1e | liuhaotian/llava-v1.5-7b | 78.5 | 62.0 | 50.0 | 66.8 | 58.2 | 85.9 | 1510.7 | 64.3 | 58.3 | 58.6 | 65.4 | 31.1
LLaVA-1.5 | 13B | full_ft-1e | liuhaotian/llava-v1.5-13b | 80.0 | 63.3 | 53.6 | 71.6 | 61.3 | 85.9 | 1531.3 | 67.7 | 63.6 | 61.6 | 72.5 | 36.1
LLaVA-1.5 | 7B | lora-1e | coming soon
LLaVA-1.5 | 13B | lora-1e | coming soon

LLaVA-v1

Note: We recommend using the most capable LLaVA-v1.5 series above for the best performance.

Base LLM | Vision Encoder | Pretrain Data | Pretraining schedule | Finetuning Data | Finetuning schedule | LLaVA-Bench-Conv | LLaVA-Bench-Detail | LLaVA-Bench-Complex | LLaVA-Bench-Overall | Download
Vicuna-13B-v1.3 | CLIP-L-336px | LCS-558K | 1e | LLaVA-Instruct-80K | proj-1e, lora-1e | 64.3 | 55.9 | 81.7 | 70.1 | LoRA LoRA-Merged
LLaMA-2-13B-Chat | CLIP-L | LCS-558K | 1e | LLaVA-Instruct-80K | full_ft-1e | 56.7 | 58.6 | 80.0 | 67.9 | ckpt
LLaMA-2-7B-Chat | CLIP-L | LCS-558K | 1e | LLaVA-Instruct-80K | lora-1e | 51.2 | 58.9 | 71.6 | 62.8 | LoRA

Projector weights

These are projector weights we have pretrained. You can use these projector weights for visual instruction tuning. They are only pretrained on image-text pairs and are NOT instruction tuned, which means they do NOT follow instructions as well as our official models, and can output repetitive, lengthy, and garbled outputs. If you want to have nice conversations with LLaVA, use the checkpoints above (LLaVA v1.5).

NOTE: These projector weights are only compatible with the llava>=1.0.0, please check out the latest code base if your local code version is below v1.0.0.

NOTE: When you use our pretrained projector for visual instruction tuning, it is very important to use the same base LLM and vision encoder as the one we used for pretraining the projector. Otherwise, the performance will be very bad.

When using these projector weights to instruction tune your LMM, please make sure that these options are correctly set as follows,

--mm_use_im_start_end False
--mm_use_im_patch_token False
Base LLM | Vision Encoder | Projection | Pretrain Data | Pretraining schedule | Download
Vicuna-13B-v1.5 | CLIP-L-336px | MLP-2x | LCS-558K | 1e | projector
Vicuna-7B-v1.5 | CLIP-L-336px | MLP-2x | LCS-558K | 1e | projector
LLaMA-2-13B-Chat | CLIP-L-336px | Linear | LCS-558K | 1e | projector
LLaMA-2-7B-Chat | CLIP-L-336px | Linear | LCS-558K | 1e | projector
LLaMA-2-13B-Chat | CLIP-L | Linear | LCS-558K | 1e | projector
LLaMA-2-7B-Chat | CLIP-L | Linear | LCS-558K | 1e | projector
Vicuna-13B-v1.3 | CLIP-L-336px | Linear | LCS-558K | 1e | projector
Vicuna-7B-v1.3 | CLIP-L-336px | Linear | LCS-558K | 1e | projector
Vicuna-13B-v1.3 | CLIP-L | Linear | LCS-558K | 1e | projector
Vicuna-7B-v1.3 | CLIP-L | Linear | LCS-558K | 1e | projector

Science QA Checkpoints

Base LLM | Vision Encoder | Pretrain Data | Pretraining schedule | Finetuning Data | Finetuning schedule | Download
Vicuna-13B-v1.3 | CLIP-L | LCS-558K | 1e | ScienceQA | full_ft-12e | ckpt

Legacy Models (merged weights)

The model weights below are merged weights. You do not need to apply delta. The usage of LLaVA checkpoints should comply with the base LLM’s model license.

Base LLM | Vision Encoder | Pretrain Data | Pretraining schedule | Finetuning Data | Finetuning schedule | Download
MPT-7B-Chat | CLIP-L | LCS-558K | 1e | LLaVA-Instruct-80K | full_ft-1e | preview

Legacy Models (delta weights)

The model weights below are delta weights. The usage of LLaVA checkpoints should comply with the base LLM’s model license: LLaMA.

You can add our delta to the original LLaMA weights to obtain the LLaVA weights.

Instructions:

  1. Get the original LLaMA weights in the huggingface format by following the instructions here.
  2. Use the following scripts to get LLaVA weights by applying our delta. It will automatically download delta weights from our Hugging Face account. In the script below, we use the delta weights of liuhaotian/LLaVA-7b-delta-v0 as an example. It can be adapted for other delta weights by changing the --delta argument (and base/target accordingly).
python3 -m llava.model.apply_delta \
    --base /path/to/llama-7b \
    --target /output/path/to/LLaVA-7B-v0 \
    --delta liuhaotian/LLaVA-7b-delta-v0
Base LLM | Vision Encoder | Pretrain Data | Pretraining schedule | Finetuning Data | Finetuning schedule | Download
Vicuna-13B-v1.1 | CLIP-L | CC-595K | 1e | LLaVA-Instruct-158K | full_ft-3e | delta-weights
Vicuna-7B-v1.1 | CLIP-L | LCS-558K | 1e | LLaVA-Instruct-80K | full_ft-1e | delta-weights
Vicuna-13B-v0 | CLIP-L | CC-595K | 1e | LLaVA-Instruct-158K | full_ft-3e | delta-weights
Vicuna-13B-v0 | CLIP-L | CC-595K | 1e | ScienceQA | full_ft-12e | delta-weights
Vicuna-7B-v0 | CLIP-L | CC-595K | 1e | LLaVA-Instruct-158K | full_ft-3e | delta-weights

Legacy Projector weights

The following projector weights are deprecated, and the support for them may be removed in the future. They do not support zero-shot inference. Please use the projector weights in the table above if possible.

NOTE: When you use our pretrained projector for visual instruction tuning, it is very important to use the same base LLM and vision encoder as the one we used for pretraining the projector. Otherwise, the performance will be very bad.

When using these projector weights to instruction tune your LMM, please make sure that these options are correctly set as follows,

--mm_use_im_start_end True
--mm_use_im_patch_token False
Base LLM | Vision Encoder | Pretrain Data | Pretraining schedule | Download
Vicuna-7B-v1.1 | CLIP-L | LCS-558K | 1e | projector
Vicuna-13B-v0 | CLIP-L | CC-595K | 1e | projector
Vicuna-7B-v0 | CLIP-L | CC-595K | 1e | projector

When using these projector weights to instruction tune your LMM, please make sure that these options are correctly set as follows,

--mm_use_im_start_end False
--mm_use_im_patch_token False
Base LLM | Vision Encoder | Pretrain Data | Pretraining schedule | Download
Vicuna-13B-v0 | CLIP-L | CC-595K | 1e | projector

7. Datasets

Data file name | Size
llava_instruct_150k.json | 229 MB
llava_instruct_80k.json | 229 MB
conversation_58k.json | 126 MB
detail_23k.json | 20.5 MB
complex_reasoning_77k.json | 79.6 MB

7.1 Pretraining Dataset

The pretraining dataset used in this release is a subset of CC-3M dataset, filtered with a more balanced concept coverage distribution. Please see here for a detailed description of the dataset structure and how to download the images.

If you already have CC-3M dataset on your disk, the image names follow this format: GCC_train_000000000.jpg. You may edit the image field correspondingly if necessary.
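
If your local CC-3M copy uses different file names, a one-off rewrite like the sketch below can adjust the image field. This is illustrative only: it assumes chat.json is a JSON list of records with an "image" key, which you should verify against your download, and the mapping function is a placeholder.

import json

# Illustrative sketch: adjust the "image" field of each record in chat.json
# to match your local CC-3M file names. Assumes a JSON list of dicts with an
# "image" key -- check your copy before running.
def remap_name(old_name):
    # Hypothetical mapping; replace with whatever your local naming scheme requires.
    return old_name

with open("chat.json") as f:
    records = json.load(f)

for record in records:
    record["image"] = remap_name(record["image"])

with open("chat_local.json", "w") as f:
    json.dump(records, f)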

Data | Chat File | Meta Data | Size
CC-3M Concept-balanced 595K | chat.json | metadata.json | 211 MB
LAION/CC/SBU BLIP-Caption Concept-balanced 558K | blip_laion_cc_sbu_558k.json | metadata.json | 181 MB

Important notice: Upon the request from the community, as ~15% images of the original CC-3M dataset are no longer accessible, we upload images.zip for better reproducing our work in research community. It must not be used for any other purposes. The use of these images must comply with the CC-3M license. This may be taken down at any time when requested by the original CC-3M dataset owner or owners of the referenced images.

7.2 GPT-4 Prompts

We provide our prompts and few-shot samples for GPT-4 queries, to better facilitate research in this domain. Please check out the prompts folder for three kinds of questions: conversation, detail description, and complex reasoning.

它们以’ system_message.txt ‘的格式组织,用于系统消息,’ abc_caps.txt ‘对用于少数几个示例用户输入,’ abc_conf .txt '用于少数几个示例参考输出。

Note that their formats may differ. For example, conversation is in json, while detail description is answer-only. The format we selected in our preliminary experiments works slightly better than the limited set of alternatives we tried: json, a more natural format, and answer-only. If interested, you may try other variants or conduct a more careful study on this. Contributions are welcome!
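
As a rough illustration of how those files fit together, the sketch below assembles an OpenAI-style message list from the system message and one caption/response pair. The sub-folder name, the "abc" sample name, and the message roles are assumptions for illustration; adapt them to the actual files you find under prompts/.

from pathlib import Path

# Illustrative sketch: build a few-shot GPT-4 query from the prompts folder.
prompts = Path("prompts/conversation")  # placeholder path

messages = [{"role": "system", "content": (prompts / "system_message.txt").read_text()}]

# One few-shot example: captions/boxes as the user turn, reference output as the assistant turn.
messages.append({"role": "user", "content": (prompts / "abc_caps.txt").read_text()})
messages.append({"role": "assistant", "content": (prompts / "abc_conv.txt").read_text()})

# The actual query (your own captions text) goes last.
messages.append({"role": "user", "content": "..."})
print(len(messages))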

