Pitfalls of Distributed Multi-GPU Training (DDP)

I have recently been running RT-DETR from the yolov10 codebase for object detection, and needed multi-GPU training.

Single-GPU training command (runs fine):

python main.py

Multi-GPU training command:

It has to be launched through torch.distributed.run (the successor of torch.distributed.launch), usually on a single node. CUDA_VISIBLE_DEVICES selects which GPU indices to use; it is optional, since you can also specify the device inside main.py. --nproc_per_node is the number of GPUs per node.

python -m torch.distributed.run --nproc_per_node=3 main.py
CUDA_VISIBLE_DEVICES=0,6,7 python -m torch.distributed.run --nproc_per_node=3 main.py

But once multi-GPU training starts it crashes, and sometimes the training processes simply hang. The error output is:

[rank0]: Traceback (most recent call last):
[rank0]:   File "/home/zyy23/yolov10/run_detr.py", line 5, in <module>
[rank0]:     model.train(pretrained=True,
[rank0]:   File "/home/zyy23/yolov10/ultralytics/engine/model.py", line 657, in train
[rank0]:     self.trainer.train()
[rank0]:   File "/home/zyy23/yolov10/ultralytics/engine/trainer.py", line 213, in train
[rank0]:     self._do_train(world_size)
[rank0]:   File "/home/zyy23/yolov10/ultralytics/engine/trainer.py", line 381, in _do_train
[rank0]:     self.loss, self.loss_items = self.model(batch)
[rank0]:   File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
[rank0]:     return self._call_impl(*args, **kwargs)
[rank0]:   File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
[rank0]:     return forward_call(*args, **kwargs)
[rank0]:   File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 1632, in forward
[rank0]:     inputs, kwargs = self._pre_forward(*inputs, **kwargs)
[rank0]:   File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 1523, in _pre_forward
[rank0]:     if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
[rank0]: RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by
[rank0]: making sure all `forward` function outputs participate in calculating loss.
[rank0]: If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
[rank0]: Parameters which did not receive grad for rank 0: model.28.dec_bbox_head.5.layers.2.bias, model.28.dec_bbox_head.5.layers.2.weight, model.28.dec_bbox_head.5.layers.1.bias, model.28.dec_bbox_head.5.layers.1.weight, model.28.dec_bbox_head.5.layers.0.bias, model.28.dec_bbox_head.5.layers.0.weight, model.28.dec_bbox_head.4.layers.2.bias, model.28.dec_bbox_head.4.layers.2.weight, model.28.dec_bbox_head.4.layers.1.bias, model.28.dec_bbox_head.4.layers.1.weight, model.28.dec_bbox_head.4.layers.0.bias, model.28.dec_bbox_head.4.layers.0.weight, model.28.dec_bbox_head.3.layers.2.bias, model.28.dec_bbox_head.3.layers.2.weight, model.28.dec_bbox_head.3.layers.1.bias, model.28.dec_bbox_head.3.layers.1.weight, model.28.dec_bbox_head.3.layers.0.bias, model.28.dec_bbox_head.3.layers.0.weight, model.28.dec_bbox_head.2.layers.2.bias, model.28.dec_bbox_head.2.layers.2.weight, model.28.dec_bbox_head.2.layers.1.bias, model.28.dec_bbox_head.2.layers.1.weight, model.28.dec_bbox_head.2.layers.0.bias, model.28.dec_bbox_head.2.layers.0.weight, model.28.dec_bbox_head.1.layers.2.bias, model.28.dec_bbox_head.1.layers.2.weight, model.28.dec_bbox_head.1.layers.1.bias, model.28.dec_bbox_head.1.layers.1.weight, model.28.dec_bbox_head.1.layers.0.bias, model.28.dec_bbox_head.1.layers.0.weight, model.28.dec_bbox_head.0.layers.2.bias, model.28.dec_bbox_head.0.layers.2.weight, model.28.dec_bbox_head.0.layers.1.bias, model.28.dec_bbox_head.0.layers.1.weight, model.28.dec_bbox_head.0.layers.0.bias, model.28.dec_bbox_head.0.layers.0.weight, model.28.enc_bbox_head.layers.2.bias, model.28.enc_bbox_head.layers.2.weight, model.28.enc_bbox_head.layers.1.bias, model.28.enc_bbox_head.layers.1.weight, model.28.enc_bbox_head.layers.0.bias, model.28.enc_bbox_head.layers.0.weight, model.28.denoising_class_embed.weight
[rank0]: Parameter indices which did not receive grad for rank 0: 510 521 522 523 524 525 526 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574
[rank1]:[E1122 21:12:02.018431947 ProcessGroupGloo.cpp:143] Rank 1 successfully reached monitoredBarrier, but received errors while waiting for send/recv from rank 0. Please check rank 0 logs for faulty rank.
[rank2]:[E1122 21:12:02.018445283 ProcessGroupGloo.cpp:143] Rank 2 successfully reached monitoredBarrier, but received errors while waiting for send/recv from rank 0. Please check rank 0 logs for faulty rank.
[rank1]: Traceback (most recent call last):
[rank1]:   File "/home/zyy23/yolov10/run_detr.py", line 5, in <module>
[rank1]:     model.train(pretrained=True,
[rank1]:   File "/home/zyy23/yolov10/ultralytics/engine/model.py", line 657, in train
[rank1]:     self.trainer.train()
[rank1]:   File "/home/zyy23/yolov10/ultralytics/engine/trainer.py", line 213, in train
[rank1]:     self._do_train(world_size)
[rank1]:   File "/home/zyy23/yolov10/ultralytics/engine/trainer.py", line 389, in _do_train
[rank1]:     self.scaler.scale(self.loss).backward()
[rank1]:   File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/_tensor.py", line 521, in backward
[rank1]:     torch.autograd.backward(
[rank1]:   File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/autograd/__init__.py", line 289, in backward
[rank1]:     _engine_run_backward(
[rank1]:   File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/autograd/graph.py", line 768, in _engine_run_backward
[rank1]:     return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[rank1]: RuntimeError: Rank 1 successfully reached monitoredBarrier, but received errors while waiting for send/recv from rank 0. Please check rank 0 logs for faulty rank.
[rank1]:  Original exception:
[rank1]: [../third_party/gloo/gloo/transport/tcp/pair.cc:534] Connection closed by peer [127.0.1.1]:27022
[rank2]: Traceback (most recent call last):
[rank2]:   File "/home/zyy23/yolov10/run_detr.py", line 5, in <module>
[rank2]:     model.train(pretrained=True,
[rank2]:   File "/home/zyy23/yolov10/ultralytics/engine/model.py", line 657, in train
[rank2]:     self.trainer.train()
[rank2]:   File "/home/zyy23/yolov10/ultralytics/engine/trainer.py", line 213, in train
[rank2]:     self._do_train(world_size)
[rank2]:   File "/home/zyy23/yolov10/ultralytics/engine/trainer.py", line 389, in _do_train
[rank2]:     self.scaler.scale(self.loss).backward()
[rank2]:   File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/_tensor.py", line 521, in backward
[rank2]:     torch.autograd.backward(
[rank2]:   File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/autograd/__init__.py", line 289, in backward
[rank2]:     _engine_run_backward(
[rank2]:   File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/autograd/graph.py", line 768, in _engine_run_backward
[rank2]:     return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[rank2]: RuntimeError: Rank 2 successfully reached monitoredBarrier, but received errors while waiting for send/recv from rank 0. Please check rank 0 logs for faulty rank.
[rank2]:  Original exception:
[rank2]: [../third_party/gloo/gloo/transport/tcp/pair.cc:534] Connection closed by peer [127.0.1.1]:27022
W1122 21:12:02.606069 139664836297920 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 1281666 closing signal SIGTERM
W1122 21:12:02.608416 139664836297920 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 1281667 closing signal SIGTERM
E1122 21:12:02.987694 139664836297920 torch/distributed/elastic/multiprocessing/api.py:833] failed (exitcode: 1) local_rank: 0 (pid: 1281665) of binary: /home/zyy23/anaconda3/envs/mypytorch_3.9/bin/python
Traceback (most recent call last):
  File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/distributed/run.py", line 905, in <module>
    main()
  File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 348, in wrapper
    return f(*args, **kwargs)
  File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/distributed/run.py", line 901, in main
    run(args)
  File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/distributed/run.py", line 892, in run
    elastic_launch(
  File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 133, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
run_detr.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2024-11-22_21:12:02
  host      : lab10
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 1281665)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

A RuntimeError occurred:

RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by making sure all `forward` function outputs participate in calculating loss. If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).

In plain terms: the module has parameters that never contributed to the loss, so DDP's gradient reduction from the previous iteration never finished before the next one started. The suggested remedies are (1) passing find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel to enable unused-parameter detection, and (2) making sure every output of forward() participates in computing the loss.

Causes of the error:

  • A layer is defined in the model but never used in forward()

  • A value returned by forward() does not take part in the loss/gradient computation

  • Parameters that never receive gradients are handed to the optimizer

(A minimal sketch reproducing the first cause follows this list.)
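Below is a minimal, self-contained sketch (a hypothetical toy model and script, not the RT-DETR code) that reproduces the first cause under DDP. It assumes two or more GPUs, the nccl backend, and a torchrun/torch.distributed.run launch; the error typically surfaces at the start of the second iteration's forward, exactly as in the rank 0 traceback above.

# toy_ddp.py -- launch with: python -m torch.distributed.run --nproc_per_node=2 toy_ddp.py
import os
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.used = nn.Linear(8, 8)
        self.unused = nn.Linear(8, 8)  # defined but never called in forward(), so it never receives a gradient

    def forward(self, x):
        return self.used(x)  # self.unused does not participate in the loss

def main():
    dist.init_process_group("nccl")                 # torchrun provides MASTER_ADDR/PORT, RANK, WORLD_SIZE
    rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(rank)
    # find_unused_parameters defaults to False, which is what triggers the error
    model = DDP(ToyModel().cuda(rank), device_ids=[rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(2):                               # the RuntimeError appears on the 2nd forward
        x = torch.randn(4, 8, device=f"cuda:{rank}")
        loss = model(x).sum()
        loss.backward()
        opt.step()
        opt.zero_grad()
    dist.destroy_process_group()

if __name__ == "__main__":
    main()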

There are two ways to deal with this.

The first is to pass find_unused_parameters=True when wrapping the model with torch.nn.parallel.DistributedDataParallel. find_unused_parameters is a PyTorch argument that controls how DDP handles gradient reduction during distributed training.

self.model = nn.parallel.DistributedDataParallel(self.model, device_ids=[RANK], find_unused_parameters=True)

What find_unused_parameters does:

Detects unused parameters: when set to True, PyTorch checks after every forward pass which parameters did not participate. This matters for models where some sub-modules are only exercised by particular inputs.

Keeps gradient synchronization from stalling: by identifying unused parameters and marking them as ready, DDP does not wait for gradients that will never arrive, so the reduction step is not held up by parameters that never take part in the loss. In large or complex networks, and especially in distributed settings, this avoids exactly the stalled reduction behind this error.

Suits dynamic computation graphs: for architectures whose active branches change from input to input, find_unused_parameters=True ensures all parameters are still handled correctly.

The trade-off is extra per-iteration work to determine which parameters did not contribute to the loss, so it is best to enable it only when needed, i.e. when the model genuinely has unused parameters.
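Applied to the toy sketch above, this first fix is a single extra keyword argument when wrapping the model (again only an illustrative sketch, not the ultralytics call site, which is shown further down):

model = DDP(ToyModel().cuda(rank), device_ids=[rank], find_unused_parameters=True)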

Because yolov10's code is heavily wrapped, it took a long time to find where this DDP call actually lives. I tried adding the argument at model-initialization time and on the command line without success, and eventually found it in trainer.py under ultralytics/engine, inside the _setup_train function.

def _setup_train(self, world_size):
    """Builds dataloaders and optimizer on correct rank process."""
    # Model
    self.run_callbacks("on_pretrain_routine_start")
    ckpt = self.setup_model()
    self.model = self.model.to(self.device)
    self.set_model_attributes()

    # Freeze layers
    freeze_list = (
        self.args.freeze
        if isinstance(self.args.freeze, list)
        else range(self.args.freeze)
        if isinstance(self.args.freeze, int)
        else []
    )
    always_freeze_names = [".dfl"]  # always freeze these layers
    freeze_layer_names = [f"model.{x}." for x in freeze_list] + always_freeze_names
    for k, v in self.model.named_parameters():
        # v.register_hook(lambda x: torch.nan_to_num(x))  # NaN to 0 (commented for erratic training results)
        if any(x in k for x in freeze_layer_names):
            LOGGER.info(f"Freezing layer '{k}'")
            v.requires_grad = False
        elif not v.requires_grad and v.dtype.is_floating_point:  # only floating point Tensor can require gradients
            LOGGER.info(
                f"WARNING ⚠️ setting 'requires_grad=True' for frozen layer '{k}'. "
                "See ultralytics.engine.trainer for customization of frozen layers."
            )
            v.requires_grad = True

    # Check AMP
    self.amp = torch.tensor(self.args.amp).to(self.device)  # True or False
    if self.amp and RANK in (-1, 0):  # Single-GPU and DDP
        callbacks_backup = callbacks.default_callbacks.copy()  # backup callbacks as check_amp() resets them
        self.amp = torch.tensor(check_amp(self.model), device=self.device)
        callbacks.default_callbacks = callbacks_backup  # restore callbacks
    if RANK > -1 and world_size > 1:  # DDP
        dist.broadcast(self.amp, src=0)  # broadcast the tensor from rank 0 to all other ranks (returns None)
    self.amp = bool(self.amp)  # as boolean
    self.scaler = torch.cuda.amp.GradScaler(enabled=self.amp)
    if world_size > 1:
        self.model = nn.parallel.DistributedDataParallel(self.model, device_ids=[RANK], find_unused_parameters=True)
The second approach is to set the environment variable TORCH_DISTRIBUTED_DEBUG to INFO or DETAIL, which makes the error include information about exactly which parameters did not receive gradients. The error output above does not show the usual hint suggesting this variable because it was already set for that run; with it enabled, the output lists the parameters that received no gradient (the "Parameters which did not receive grad for rank 0" lines above).
TORCH_DISTRIBUTED_DEBUG=DETAIL python -m torch.distributed.run --nproc_per_node=3 main.py

Alternatively, the following snippet also shows which parameters were not updated:

for name, param in self.model.named_parameters():
    if param.grad is None:
        print("The None grad model is:")
        print(name)
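The snippet does not say where to call it; a reasonable spot (a hedged suggestion based on the trainer code quoted earlier, not something specified in the original post) is right after the backward pass and before the gradients are cleared, e.g. in _do_train:

self.scaler.scale(self.loss).backward()
if RANK in (-1, 0):  # print only from the main process
    for name, param in self.model.named_parameters():
        if param.requires_grad and param.grad is None:
            print("The None grad model is:")
            print(name)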

In an nn.Module's __init__, if a layer is defined via self.xx but its output is never used to compute the loss, or the layer is never called at all, this error appears. Go through __init__ and forward carefully: any self attribute defined in __init__() but not used in forward() should be commented out. Either use it or delete it.
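For example, in the hypothetical toy module from earlier, the fix is simply to drop (or comment out) the definition of the layer that forward() never calls:

class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.used = nn.Linear(8, 8)
        # self.unused = nn.Linear(8, 8)  # never called in forward(): either use it or delete it

    def forward(self, x):
        return self.used(x)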
