YOLOv8 Improvement 009: Introducing ASFF to Optimize the YOLOv8 Detection Head (for Small-Object Detection Tasks)

Paper title: Learning Spatial Fusion for Single-Shot Object Detection

Paper link: Paper - ASFF

Official source code: GitHub - GOATmessi8/ASFF

Introduction

Multi-scale feature fusion is a key technique for tackling multi-scale object detection. FPN (Feature Pyramid Network) improves detection by combining high-level semantic features with low-level detail features through a simple top-down fusion mechanism. However, because FPN's fusion does not account for the representational inconsistency between feature maps at different pyramid levels, it can introduce conflicting information and limit further gains. ASFF (Adaptively Spatial Feature Fusion) instead learns a dynamic weighting that fuses features adaptively across scales and spatial positions, effectively suppressing the conflicting information between levels and improving multi-scale detection. This design reflects the attention paid in feature-fusion work to inter-level differences and spatial adaptivity.
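At its core, ASFF predicts, for every spatial position, one weight per pyramid level (normalized with a softmax so the weights sum to 1) and blends the level features with those weights. Below is a minimal sketch of just that weighting step, with the resizing logic stripped out; the tensor sizes and layer widths are illustrative, not taken from the paper's code:

import torch
import torch.nn as nn
import torch.nn.functional as F

# three feature maps already resized to a common resolution (sizes are illustrative)
f0, f1, f2 = (torch.rand(1, 256, 40, 40) for _ in range(3))

compress = nn.ModuleList(nn.Conv2d(256, 16, 1) for _ in range(3))  # per-level weight features
weight_head = nn.Conv2d(16 * 3, 3, 1)  # one fusion logit per level, per pixel

w = weight_head(torch.cat([c(f) for c, f in zip(compress, (f0, f1, f2))], dim=1))
w = F.softmax(w, dim=1)  # per-pixel weights across the 3 levels, summing to 1

fused = f0 * w[:, 0:1] + f1 * w[:, 1:2] + f2 * w[:, 2:3]
print(fused.shape)  # torch.Size([1, 256, 40, 40])

The full implementation below adds the stride-convolution and interpolation branches that bring the three levels to a common resolution before this blend.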

Core Code

(1) Fusing adjacent and non-adjacent levels:

import torch
import torch.nn as nn
from ultralytics.utils.tal import dist2bbox, make_anchors
import math
import torch.nn.functional as F

__all__ = ['ASFF_Detect']


def autopad(k, p=None, d=1):  # kernel, padding, dilation
    """Pad to 'same' shape outputs."""
    if d > 1:
        k = d * (k - 1) + 1 if isinstance(k, int) else [d * (x - 1) + 1 for x in k]  # actual kernel-size
    if p is None:
        p = k // 2 if isinstance(k, int) else [x // 2 for x in k]  # auto-pad
    return p


class Conv(nn.Module):
    """Standard convolution with args(ch_in, ch_out, kernel, stride, padding, groups, dilation, activation)."""

    default_act = nn.SiLU()  # default activation

    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, d=1, act=True):
        """Initialize Conv layer with given arguments including activation."""
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p, d), groups=g, dilation=d, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = self.default_act if act is True else act if isinstance(act, nn.Module) else nn.Identity()

    def forward(self, x):
        """Apply convolution, batch normalization and activation to input tensor."""
        return self.act(self.bn(self.conv(x)))

    def forward_fuse(self, x):
        """Apply convolution and activation without batch normalization (fused-inference path)."""
        return self.act(self.conv(x))


class DFL(nn.Module):
    """
    Integral module of Distribution Focal Loss (DFL).

    Proposed in Generalized Focal Loss: https://ieeexplore.ieee.org/document/9792391
    """

    def __init__(self, c1=16):
        """Initialize a convolutional layer with a given number of input channels."""
        super().__init__()
        self.conv = nn.Conv2d(c1, 1, 1, bias=False).requires_grad_(False)
        x = torch.arange(c1, dtype=torch.float)
        self.conv.weight.data[:] = nn.Parameter(x.view(1, c1, 1, 1))
        self.c1 = c1

    def forward(self, x):
        """Compute the softmax expectation over the c1 distribution bins, yielding 4 distances per anchor."""
        b, c, a = x.shape  # batch, channels, anchors
        return self.conv(x.view(b, 4, self.c1, a).transpose(2, 1).softmax(1)).view(b, 4, a)


class ASFFV5(nn.Module):
    def __init__(self, level, ch, multiplier=1, rfb=False, vis=False, act_cfg=True):
        """
        ASFF version for YOLOv5 (different from the YOLOv3 original).

        multiplier should be 1 or 0.5, meaning the ASFF channels are
        512, 256, 128 -> multiplier=1
        256, 128, 64  -> multiplier=0.5
        For even smaller channels, change the code manually.
        """
        super(ASFFV5, self).__init__()
        self.level = level
        self.dim = [int(ch[2] * multiplier), int(ch[1] * multiplier), int(ch[0] * multiplier)]
        self.inter_dim = self.dim[self.level]
        if level == 0:
            self.stride_level_1 = Conv(int(ch[1] * multiplier), self.inter_dim, 3, 2)
            self.stride_level_2 = Conv(int(ch[0] * multiplier), self.inter_dim, 3, 2)
            self.expand = Conv(self.inter_dim, int(ch[2] * multiplier), 3, 1)
        elif level == 1:
            self.compress_level_0 = Conv(int(ch[2] * multiplier), self.inter_dim, 1, 1)
            self.stride_level_2 = Conv(int(ch[0] * multiplier), self.inter_dim, 3, 2)
            self.expand = Conv(self.inter_dim, int(ch[1] * multiplier), 3, 1)
        elif level == 2:
            self.compress_level_0 = Conv(int(ch[2] * multiplier), self.inter_dim, 1, 1)
            self.compress_level_1 = Conv(int(ch[1] * multiplier), self.inter_dim, 1, 1)
            self.expand = Conv(self.inter_dim, int(ch[0] * multiplier), 3, 1)

        # when adding rfb, we use half the number of channels to save memory
        compress_c = 8 if rfb else 16
        self.weight_level_0 = Conv(self.inter_dim, compress_c, 1, 1)
        self.weight_level_1 = Conv(self.inter_dim, compress_c, 1, 1)
        self.weight_level_2 = Conv(self.inter_dim, compress_c, 1, 1)
        self.weight_levels = Conv(compress_c * 3, 3, 1, 1)
        self.vis = vis

    def forward(self, x):  # x: [small-stride, medium, large-stride] feature maps
        """
        Input channels run 128, 256, 512 from the largest map to the smallest;
        x_level_0/1/2 index them in reverse order (512, 256, 128).
        """
        x_level_0 = x[2]  # large stride (lowest resolution)
        x_level_1 = x[1]  # medium
        x_level_2 = x[0]  # small stride (highest resolution)

        if self.level == 0:
            level_0_resized = x_level_0
            level_1_resized = self.stride_level_1(x_level_1)
            level_2_downsampled_inter = F.max_pool2d(x_level_2, 3, stride=2, padding=1)
            level_2_resized = self.stride_level_2(level_2_downsampled_inter)
        elif self.level == 1:
            level_0_compressed = self.compress_level_0(x_level_0)
            level_0_resized = F.interpolate(level_0_compressed, scale_factor=2, mode='nearest')
            level_1_resized = x_level_1
            level_2_resized = self.stride_level_2(x_level_2)
        elif self.level == 2:
            level_0_compressed = self.compress_level_0(x_level_0)
            level_0_resized = F.interpolate(level_0_compressed, scale_factor=4, mode='nearest')
            x_level_1_compressed = self.compress_level_1(x_level_1)
            level_1_resized = F.interpolate(x_level_1_compressed, scale_factor=2, mode='nearest')
            level_2_resized = x_level_2

        level_0_weight_v = self.weight_level_0(level_0_resized)
        level_1_weight_v = self.weight_level_1(level_1_resized)
        level_2_weight_v = self.weight_level_2(level_2_resized)

        levels_weight_v = torch.cat((level_0_weight_v, level_1_weight_v, level_2_weight_v), 1)
        levels_weight = self.weight_levels(levels_weight_v)
        levels_weight = F.softmax(levels_weight, dim=1)

        fused_out_reduced = level_0_resized * levels_weight[:, 0:1, :, :] + \
                            level_1_resized * levels_weight[:, 1:2, :, :] + \
                            level_2_resized * levels_weight[:, 2:, :, :]

        out = self.expand(fused_out_reduced)

        if self.vis:
            return out, levels_weight, fused_out_reduced.sum(dim=1)
        else:
            return out


class ASFF_Detect(nn.Module):
    """YOLOv8 Detect head with ASFF fusion applied to the input feature maps."""

    dynamic = False  # force grid reconstruction
    export = False  # export mode
    shape = None
    anchors = torch.empty(0)  # init
    strides = torch.empty(0)  # init

    def __init__(self, nc=80, ch=(), multiplier=1, rfb=False):
        """Initializes the YOLOv8 detection layer with specified number of classes and channels."""
        super().__init__()
        self.nc = nc  # number of classes
        self.nl = len(ch)  # number of detection layers
        self.reg_max = 16  # DFL channels (ch[0] // 16 to scale 4/8/12/16/20 for n/s/m/l/x)
        self.no = nc + self.reg_max * 4  # number of outputs per anchor
        self.stride = torch.zeros(self.nl)  # strides computed during build
        c2, c3 = max((16, ch[0] // 4, self.reg_max * 4)), max(ch[0], min(self.nc, 100))  # channels
        self.cv2 = nn.ModuleList(
            nn.Sequential(Conv(x, c2, 3), Conv(c2, c2, 3), nn.Conv2d(c2, 4 * self.reg_max, 1)) for x in ch)
        self.cv3 = nn.ModuleList(
            nn.Sequential(Conv(x, c3, 3), Conv(c3, c3, 3), nn.Conv2d(c3, self.nc, 1)) for x in ch)
        self.dfl = DFL(self.reg_max) if self.reg_max > 1 else nn.Identity()
        self.l0_fusion = ASFFV5(level=0, ch=ch, multiplier=multiplier, rfb=rfb)
        self.l1_fusion = ASFFV5(level=1, ch=ch, multiplier=multiplier, rfb=rfb)
        self.l2_fusion = ASFFV5(level=2, ch=ch, multiplier=multiplier, rfb=rfb)

    def forward(self, x):
        """Concatenates and returns predicted bounding boxes and class probabilities."""
        x1 = self.l0_fusion(x)
        x2 = self.l1_fusion(x)
        x3 = self.l2_fusion(x)
        x = [x3, x2, x1]
        shape = x[0].shape  # BCHW
        for i in range(self.nl):
            x[i] = torch.cat((self.cv2[i](x[i]), self.cv3[i](x[i])), 1)
        if self.training:
            return x
        elif self.dynamic or self.shape != shape:
            self.anchors, self.strides = (x.transpose(0, 1) for x in make_anchors(x, self.stride, 0.5))
            self.shape = shape

        x_cat = torch.cat([xi.view(shape[0], self.no, -1) for xi in x], 2)
        # self.format is set externally during model export
        if self.export and self.format in ('saved_model', 'pb', 'tflite', 'edgetpu', 'tfjs'):  # avoid TF FlexSplitV ops
            box = x_cat[:, :self.reg_max * 4]
            cls = x_cat[:, self.reg_max * 4:]
        else:
            box, cls = x_cat.split((self.reg_max * 4, self.nc), 1)
        dbox = dist2bbox(self.dfl(box), self.anchors.unsqueeze(0), xywh=True, dim=1) * self.strides

        if self.export and self.format in ('tflite', 'edgetpu'):
            # Normalize xywh with image size to mitigate quantization error of TFLite integer models as done in YOLOv5:
            # https://github.com/ultralytics/yolov5/blob/0c8de3fca4a702f8ff5c435e67f378d1fce70243/models/tf.py#L307-L309
            # See this PR for details: https://github.com/ultralytics/ultralytics/pull/1695
            img_h = shape[2] * self.stride[0]
            img_w = shape[3] * self.stride[0]
            img_size = torch.tensor([img_w, img_h, img_w, img_h], device=dbox.device).reshape(1, 4, 1)
            dbox /= img_size

        y = torch.cat((dbox, cls.sigmoid()), 1)
        return y if self.export else (y, x)

    def bias_init(self):
        """Initialize Detect() biases. WARNING: requires stride availability."""
        m = self  # self.model[-1]  # Detect() module
        for a, b, s in zip(m.cv2, m.cv3, m.stride):  # from
            a[-1].bias.data[:] = 1.0  # box
            b[-1].bias.data[:m.nc] = math.log(5 / m.nc / (640 / s) ** 2)  # cls (.01 objects, 80 classes, 640 img)


if __name__ == "__main__":
    image1 = torch.rand(1, 128, 160, 160)
    image2 = torch.rand(1, 256, 80, 80)
    image3 = torch.rand(1, 512, 40, 40)
    image = [image1, image2, image3]
    channel = (128, 256, 512)
    model = ASFF_Detect(nc=80, ch=channel)
    out = model(image)
    print(out[1].shape)  # training mode: raw medium-level map, torch.Size([1, 144, 80, 80])
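A freshly constructed module is in training mode, so the smoke test above returns the three raw per-level maps and prints torch.Size([1, 144, 80, 80]) (144 = 4 × 16 DFL bins + 80 classes). As a hedged usage sketch, the head can also be exercised in eval mode to get decoded boxes; assigning strides of 8/16/32 by hand (as for a 640-pixel input with P3/P4/P5 maps) is our assumption here, since in a full model they are computed during build:

# hedged usage sketch: run in the same file as the ASFF_Detect listing above
import torch

model = ASFF_Detect(nc=80, ch=(128, 256, 512))
model.stride = torch.tensor([8.0, 16.0, 32.0])  # required before anchor/box decoding (assumed values)
model.eval()

feats = [torch.rand(1, 128, 80, 80),   # P3
         torch.rand(1, 256, 40, 40),   # P4
         torch.rand(1, 512, 20, 20)]   # P5
with torch.no_grad():
    y, raw = model(feats)
print(y.shape)  # torch.Size([1, 84, 8400]): 4 box coords + 80 class scores per anchor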

(2) Fusing adjacent levels only:

import torch
import torch.nn as nn
from ultralytics.utils.tal import dist2bbox, make_anchors
import math
import torch.nn.functional as F

__all__ = ['ASFF_Detect']


def autopad(k, p=None, d=1):  # kernel, padding, dilation
    """Pad to 'same' shape outputs."""
    if d > 1:
        k = d * (k - 1) + 1 if isinstance(k, int) else [d * (x - 1) + 1 for x in k]  # actual kernel-size
    if p is None:
        p = k // 2 if isinstance(k, int) else [x // 2 for x in k]  # auto-pad
    return p


class Conv(nn.Module):
    """Standard convolution with args(ch_in, ch_out, kernel, stride, padding, groups, dilation, activation)."""

    default_act = nn.SiLU()  # default activation

    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, d=1, act=True):
        """Initialize Conv layer with given arguments including activation."""
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p, d), groups=g, dilation=d, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = self.default_act if act is True else act if isinstance(act, nn.Module) else nn.Identity()

    def forward(self, x):
        """Apply convolution, batch normalization and activation to input tensor."""
        return self.act(self.bn(self.conv(x)))

    def forward_fuse(self, x):
        """Apply convolution and activation without batch normalization (fused-inference path)."""
        return self.act(self.conv(x))


class DFL(nn.Module):
    """
    Integral module of Distribution Focal Loss (DFL).

    Proposed in Generalized Focal Loss: https://ieeexplore.ieee.org/document/9792391
    """

    def __init__(self, c1=16):
        """Initialize a convolutional layer with a given number of input channels."""
        super().__init__()
        self.conv = nn.Conv2d(c1, 1, 1, bias=False).requires_grad_(False)
        x = torch.arange(c1, dtype=torch.float)
        self.conv.weight.data[:] = nn.Parameter(x.view(1, c1, 1, 1))
        self.c1 = c1

    def forward(self, x):
        """Compute the softmax expectation over the c1 distribution bins, yielding 4 distances per anchor."""
        b, c, a = x.shape  # batch, channels, anchors
        return self.conv(x.view(b, 4, self.c1, a).transpose(2, 1).softmax(1)).view(b, 4, a)


class ASFFV5(nn.Module):
    """Adjacent-levels-only ASFF: each level fuses itself with its neighboring level(s)."""

    def __init__(self, level, ch, multiplier=1, rfb=False, vis=False, act_cfg=True):
        super(ASFFV5, self).__init__()
        self.level = level
        self.dim = [int(ch[2] * multiplier), int(ch[1] * multiplier), int(ch[0] * multiplier)]
        self.inter_dim = self.dim[self.level]
        if level == 0:  # fuses levels 0 and 1 only
            self.stride_level_1 = Conv(int(ch[1] * multiplier), self.inter_dim, 3, 2)
            self.expand = Conv(self.inter_dim, int(ch[2] * multiplier), 3, 1)
        elif level == 1:  # the middle level is adjacent to both others, so it fuses all three
            self.compress_level_0 = Conv(int(ch[2] * multiplier), self.inter_dim, 1, 1)
            self.stride_level_2 = Conv(int(ch[0] * multiplier), self.inter_dim, 3, 2)
            self.expand = Conv(self.inter_dim, int(ch[1] * multiplier), 3, 1)
        elif level == 2:  # fuses levels 1 and 2 only
            self.compress_level_1 = Conv(int(ch[1] * multiplier), self.inter_dim, 1, 1)
            self.expand = Conv(self.inter_dim, int(ch[0] * multiplier), 3, 1)

        compress_c = 8 if rfb else 16
        self.weight_level_0 = Conv(self.inter_dim, compress_c, 1, 1)
        self.weight_level_1 = Conv(self.inter_dim, compress_c, 1, 1)
        self.weight_level_2 = Conv(self.inter_dim, compress_c, 1, 1)
        if level == 1:
            self.weight_levels = Conv(compress_c * 3, 3, 1, 1)
        else:
            self.weight_levels = Conv(compress_c * 2, 2, 1, 1)
        self.vis = vis

    def forward(self, x):  # x: [small-stride, medium, large-stride] feature maps
        x_level_0 = x[2]  # large stride, e.g. (1, 256, 8, 8)
        x_level_1 = x[1]  # medium, e.g. (1, 128, 16, 16)
        x_level_2 = x[0]  # small stride, e.g. (1, 64, 32, 32)

        if self.level == 0:
            level_0_resized = x_level_0
            level_1_resized = self.stride_level_1(x_level_1)
            level_0_weight_v = self.weight_level_0(level_0_resized)
            level_1_weight_v = self.weight_level_1(level_1_resized)
            levels_weight_v = torch.cat((level_0_weight_v, level_1_weight_v), 1)
            levels_weight = self.weight_levels(levels_weight_v)
            levels_weight = F.softmax(levels_weight, dim=1)
            fused_out_reduced = level_0_resized * levels_weight[:, 0:1, :, :] + \
                                level_1_resized * levels_weight[:, 1:, :, :]
        elif self.level == 1:
            level_0_resized = self.compress_level_0(x_level_0)
            level_0_resized = F.interpolate(level_0_resized, scale_factor=2, mode='nearest')
            level_1_resized = x_level_1
            level_2_resized = self.stride_level_2(x_level_2)
            level_0_weight_v = self.weight_level_0(level_0_resized)
            level_1_weight_v = self.weight_level_1(level_1_resized)
            level_2_weight_v = self.weight_level_2(level_2_resized)
            levels_weight_v = torch.cat((level_0_weight_v, level_1_weight_v, level_2_weight_v), 1)
            levels_weight = self.weight_levels(levels_weight_v)
            levels_weight = F.softmax(levels_weight, dim=1)
            fused_out_reduced = level_0_resized * levels_weight[:, 0:1, :, :] + \
                                level_1_resized * levels_weight[:, 1:2, :, :] + \
                                level_2_resized * levels_weight[:, 2:, :, :]
        elif self.level == 2:
            level_1_resized = self.compress_level_1(x_level_1)
            level_1_resized = F.interpolate(level_1_resized, scale_factor=2, mode='nearest')
            level_2_resized = x_level_2
            level_1_weight_v = self.weight_level_1(level_1_resized)
            level_2_weight_v = self.weight_level_2(level_2_resized)
            levels_weight_v = torch.cat((level_1_weight_v, level_2_weight_v), 1)
            levels_weight = self.weight_levels(levels_weight_v)
            levels_weight = F.softmax(levels_weight, dim=1)
            fused_out_reduced = level_1_resized * levels_weight[:, 0:1, :, :] + \
                                level_2_resized * levels_weight[:, 1:, :, :]

        out = self.expand(fused_out_reduced)

        if self.vis:
            return out, levels_weight, fused_out_reduced.sum(dim=1)
        else:
            return out


class ASFF_Detect(nn.Module):
    """YOLOv8 Detect head with adjacent-level ASFF fusion applied to the input feature maps."""

    dynamic = False  # force grid reconstruction
    export = False  # export mode
    shape = None
    anchors = torch.empty(0)  # init
    strides = torch.empty(0)  # init

    def __init__(self, nc=80, ch=(), multiplier=1, rfb=False):
        """Initializes the YOLOv8 detection layer with specified number of classes and channels."""
        super().__init__()
        self.nc = nc  # number of classes
        self.nl = len(ch)  # number of detection layers
        self.reg_max = 16  # DFL channels (ch[0] // 16 to scale 4/8/12/16/20 for n/s/m/l/x)
        self.no = nc + self.reg_max * 4  # number of outputs per anchor
        self.stride = torch.zeros(self.nl)  # strides computed during build
        c2, c3 = max((16, ch[0] // 4, self.reg_max * 4)), max(ch[0], min(self.nc, 100))  # channels
        self.cv2 = nn.ModuleList(
            nn.Sequential(Conv(x, c2, 3), Conv(c2, c2, 3), nn.Conv2d(c2, 4 * self.reg_max, 1)) for x in ch)
        self.cv3 = nn.ModuleList(
            nn.Sequential(Conv(x, c3, 3), Conv(c3, c3, 3), nn.Conv2d(c3, self.nc, 1)) for x in ch)
        self.dfl = DFL(self.reg_max) if self.reg_max > 1 else nn.Identity()
        self.l0_fusion = ASFFV5(level=0, ch=ch, multiplier=multiplier, rfb=rfb)
        self.l1_fusion = ASFFV5(level=1, ch=ch, multiplier=multiplier, rfb=rfb)
        self.l2_fusion = ASFFV5(level=2, ch=ch, multiplier=multiplier, rfb=rfb)

    def forward(self, x):
        """Concatenates and returns predicted bounding boxes and class probabilities."""
        x1 = self.l0_fusion(x)
        x2 = self.l1_fusion(x)
        x3 = self.l2_fusion(x)
        x = [x3, x2, x1]
        shape = x[0].shape  # BCHW
        for i in range(self.nl):
            x[i] = torch.cat((self.cv2[i](x[i]), self.cv3[i](x[i])), 1)
        if self.training:
            return x
        elif self.dynamic or self.shape != shape:
            self.anchors, self.strides = (x.transpose(0, 1) for x in make_anchors(x, self.stride, 0.5))
            self.shape = shape

        x_cat = torch.cat([xi.view(shape[0], self.no, -1) for xi in x], 2)
        # self.format is set externally during model export
        if self.export and self.format in ('saved_model', 'pb', 'tflite', 'edgetpu', 'tfjs'):  # avoid TF FlexSplitV ops
            box = x_cat[:, :self.reg_max * 4]
            cls = x_cat[:, self.reg_max * 4:]
        else:
            box, cls = x_cat.split((self.reg_max * 4, self.nc), 1)
        dbox = dist2bbox(self.dfl(box), self.anchors.unsqueeze(0), xywh=True, dim=1) * self.strides

        if self.export and self.format in ('tflite', 'edgetpu'):
            # Normalize xywh with image size to mitigate quantization error of TFLite integer models as done in YOLOv5:
            # https://github.com/ultralytics/yolov5/blob/0c8de3fca4a702f8ff5c435e67f378d1fce70243/models/tf.py#L307-L309
            # See this PR for details: https://github.com/ultralytics/ultralytics/pull/1695
            img_h = shape[2] * self.stride[0]
            img_w = shape[3] * self.stride[0]
            img_size = torch.tensor([img_w, img_h, img_w, img_h], device=dbox.device).reshape(1, 4, 1)
            dbox /= img_size

        y = torch.cat((dbox, cls.sigmoid()), 1)
        return y if self.export else (y, x)

    def bias_init(self):
        """Initialize Detect() biases. WARNING: requires stride availability."""
        m = self  # self.model[-1]  # Detect() module
        for a, b, s in zip(m.cv2, m.cv3, m.stride):  # from
            a[-1].bias.data[:] = 1.0  # box
            b[-1].bias.data[:m.nc] = math.log(5 / m.nc / (640 / s) ** 2)  # cls (.01 objects, 80 classes, 640 img)


if __name__ == "__main__":
    image1 = torch.rand(1, 128, 160, 160)
    image2 = torch.rand(1, 256, 80, 80)
    image3 = torch.rand(1, 512, 40, 40)
    image = [image1, image2, image3]
    channel = (128, 256, 512)
    model = ASFF_Detect(nc=80, ch=channel)
    out = model(image)
    print(out[1].shape)  # training mode: raw medium-level map, torch.Size([1, 144, 80, 80])
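The only difference from listing (1) is that each ASFFV5 instance now fuses only its neighbors: levels 0 and 2 blend two inputs instead of three, so the non-adjacent resize branches disappear and weight_levels shrinks from compress_c * 3 -> 3 to compress_c * 2 -> 2 channels, trading some fusion context for fewer parameters and less compute. A quick, hedged way to see the saving (the module file names below are hypothetical, assuming each listing is saved to its own file):

# hypothetical filenames: listing (1) saved as asff_detect_full.py,
# listing (2) saved as asff_detect_adjacent.py
import torch

from asff_detect_full import ASFF_Detect as FullASFFDetect
from asff_detect_adjacent import ASFF_Detect as AdjacentASFFDetect

def count_params(m: torch.nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

ch = (128, 256, 512)
print(f"full fusion:     {count_params(FullASFFDetect(nc=80, ch=ch)):,} parameters")
print(f"adjacent fusion: {count_params(AdjacentASFFDetect(nc=80, ch=ch)):,} parameters")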
