Adding MobileOne to YOLO

Code: https://github.com/apple/ml-mobileone (the official implementation of "An Improved One millisecond Mobile Backbone")

Paper: https://arxiv.org/abs/2206.04040

MobileOne comes from Apple. Its authors report that on an iPhone 12, MobileOne runs inference in about 1 millisecond, which is where the "One" in the name comes from. MobileOne's rapid adoption shows the potential of re-parameterization on mobile devices: simple, efficient, and plug-and-play.

The left side of Figure 3 shows one complete MobileOne building block. It consists of two stacked parts: the upper part is based on depthwise convolution and the lower part on pointwise convolution. The terms depthwise and pointwise come from MobileNet. A depthwise convolution is essentially a grouped convolution whose number of groups g equals the number of input channels, while a pointwise convolution is a 1×1 convolution.
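As a quick illustration (a minimal standalone sketch, not part of the MobileOne code; the channel count is an arbitrary example), a depthwise convolution in PyTorch is just nn.Conv2d with groups equal to the input channels, which sharply cuts the parameter count relative to a standard convolution:

```python
import torch.nn as nn

c = 32  # channels (illustrative value)

# Standard 3x3 convolution: 32 * 32 * 3 * 3 weights
standard = nn.Conv2d(c, c, kernel_size=3, padding=1, bias=False)
# Depthwise 3x3 convolution: groups == in_channels, so 32 * 1 * 3 * 3 weights
depthwise = nn.Conv2d(c, c, kernel_size=3, padding=1, groups=c, bias=False)
# Pointwise convolution: a plain 1x1 convolution, 32 * 32 * 1 * 1 weights
pointwise = nn.Conv2d(c, c, kernel_size=1, bias=False)

print(sum(p.numel() for p in standard.parameters()))   # 9216
print(sum(p.numel() for p in depthwise.parameters()))  # 288
print(sum(p.numel() for p in pointwise.parameters()))  # 1024
```

The depthwise + pointwise pair together still cost far less than one standard convolution, which is the basic trade MobileNet-style blocks make.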

The depthwise-convolution module in Figure 3 has three branches. The leftmost branch is a 1×1 convolution; the middle branch is an over-parameterized 3×3 convolution, i.e. k parallel 3×3 convolutions; the right branch is a shortcut connection containing a BN layer. Both the 1×1 and 3×3 convolutions here are depthwise (grouped convolutions with the number of groups g equal to the number of input channels).

The pointwise-convolution module in Figure 3 has two branches. The left branch is an over-parameterized 1×1 convolution made up of k parallel 1×1 convolutions; the right branch is a skip connection containing a BN layer. During training, MobileOne is built by stacking such building blocks. Once training is done, re-parameterization converts the building block on the left of Figure 3 into the plain structure on the right.
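The core trick behind this re-parameterization is folding a BatchNorm layer into the convolution that precedes it, which is the same math _fuse_bn_tensor implements in the code further below. A minimal standalone sketch (random illustrative values; BN in eval mode so the running statistics are used, as at inference time):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(8, 8, 3, padding=1, bias=False)
bn = nn.BatchNorm2d(8)
bn.eval()  # use running statistics, as at inference time
# Give the BN parameters and running statistics non-trivial values
bn.weight.data.uniform_(0.5, 1.5)
bn.bias.data.uniform_(-0.5, 0.5)
bn.running_mean.uniform_(-1, 1)
bn.running_var.uniform_(0.5, 1.5)

# Fold BN into the conv: w' = w * gamma/std, b' = beta - mean * gamma/std
std = (bn.running_var + bn.eps).sqrt()
t = (bn.weight / std).reshape(-1, 1, 1, 1)
fused = nn.Conv2d(8, 8, 3, padding=1, bias=True)
fused.weight.data = conv.weight.data * t
fused.bias.data = bn.bias.data - bn.running_mean * bn.weight.data / std

x = torch.randn(1, 8, 16, 16)
print(torch.allclose(bn(conv(x)), fused(x), atol=1e-5))  # True
```

Because every branch (including the BN-only skip, which can be written as an identity 1×1 kernel) reduces to a single conv this way, the whole multi-branch block collapses into one convolution plus bias.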

1. YOLOv5

Create yolov5s-mobileone.yaml:

# YOLOv5 🚀 by Ultralytics, GPL-3.0 license

# Parameters
nc: 80  # number of classes
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 v6.0 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2
   [-1, 1, MobileOne, [128, True, 2]],  # 1-P2/4
   [-1, 1, MobileOne, [256, True, 8]],  # 2-P3/8
   [-1, 1, MobileOne, [512, True, 10]],  # 3-P4/16
   [-1, 1, MobileOne, [1024, True, 1]],  # 4-P5/32
   [-1, 1, SPPF, [1024, 5]],  # 5
  ]

# YOLOv5 v6.0 head
head:
  [[-1, 1, Conv, [512, 1, 1]],  # 6
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],  # 7
   [[-1, 3], 1, Concat, [1]],  # cat backbone P4
   [-1, 1, MobileOne, [512, False, 3]],  # 9

   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 2], 1, Concat, [1]],  # cat backbone P3
   [-1, 1, MobileOne, [256, False, 3]],  # 13 (P3/8-small)

   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 10], 1, Concat, [1]],  # cat head P4
   [-1, 1, MobileOne, [512, False, 3]],  # 16 (P4/16-medium)

   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 6], 1, Concat, [1]],  # cat head P5
   [-1, 1, MobileOne, [1024, False, 3]],  # 19 (P5/32-large)

   [[13, 16, 19], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]

Add the following to common.py:

from typing import Optional, List, Tuple
import torch.nn.functional as F


class SEBlock(nn.Module):
    """ Squeeze and Excite module.

    Pytorch implementation of `Squeeze-and-Excitation Networks` -
    https://arxiv.org/pdf/1709.01507.pdf
    """

    def __init__(self,
                 in_channels: int,
                 rd_ratio: float = 0.0625) -> None:
        """ Construct a Squeeze and Excite Module.

        :param in_channels: Number of input channels.
        :param rd_ratio: Input channel reduction ratio.
        """
        super(SEBlock, self).__init__()
        self.reduce = nn.Conv2d(in_channels=in_channels,
                                out_channels=int(in_channels * rd_ratio),
                                kernel_size=1,
                                stride=1,
                                bias=True)
        self.expand = nn.Conv2d(in_channels=int(in_channels * rd_ratio),
                                out_channels=in_channels,
                                kernel_size=1,
                                stride=1,
                                bias=True)

    def forward(self, inputs: torch.Tensor) -> torch.Tensor:
        """ Apply forward pass. """
        b, c, h, w = inputs.size()
        x = F.avg_pool2d(inputs, kernel_size=[h, w])
        x = self.reduce(x)
        x = F.relu(x)
        x = self.expand(x)
        x = torch.sigmoid(x)
        x = x.view(-1, c, 1, 1)
        return inputs * x


class MobileOneBlock(nn.Module):
    """ MobileOne building block.

    This block has a multi-branched architecture at train-time
    and plain-CNN style architecture at inference time
    For more details, please refer to our paper:
    `An Improved One millisecond Mobile Backbone` -
    https://arxiv.org/pdf/2206.04040.pdf
    """

    def __init__(self,
                 in_channels: int,
                 out_channels: int,
                 kernel_size: int,
                 stride: int = 1,
                 padding: int = 0,
                 dilation: int = 1,
                 groups: int = 1,
                 inference_mode: bool = False,
                 use_se: bool = False,
                 num_conv_branches: int = 1) -> None:
        """ Construct a MobileOneBlock module.

        :param in_channels: Number of channels in the input.
        :param out_channels: Number of channels produced by the block.
        :param kernel_size: Size of the convolution kernel.
        :param stride: Stride size.
        :param padding: Zero-padding size.
        :param dilation: Kernel dilation factor.
        :param groups: Group number.
        :param inference_mode: If True, instantiates model in inference mode.
        :param use_se: Whether to use SE-ReLU activations.
        :param num_conv_branches: Number of linear conv branches.
        """
        super(MobileOneBlock, self).__init__()
        self.inference_mode = inference_mode
        self.groups = groups
        self.stride = stride
        self.kernel_size = kernel_size
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.num_conv_branches = num_conv_branches

        # Check if SE-ReLU is requested
        if use_se:
            self.se = SEBlock(out_channels)
        else:
            self.se = nn.Identity()
        self.activation = nn.ReLU()

        if inference_mode:
            self.reparam_conv = nn.Conv2d(in_channels=in_channels,
                                          out_channels=out_channels,
                                          kernel_size=kernel_size,
                                          stride=stride,
                                          padding=padding,
                                          dilation=dilation,
                                          groups=groups,
                                          bias=True)
        else:
            # Re-parameterizable skip connection
            self.rbr_skip = nn.BatchNorm2d(num_features=in_channels) \
                if out_channels == in_channels and stride == 1 else None

            # Re-parameterizable conv branches
            rbr_conv = list()
            for _ in range(self.num_conv_branches):
                rbr_conv.append(self._conv_bn(kernel_size=kernel_size,
                                              padding=padding))
            self.rbr_conv = nn.ModuleList(rbr_conv)

            # Re-parameterizable scale branch
            self.rbr_scale = None
            if kernel_size > 1:
                self.rbr_scale = self._conv_bn(kernel_size=1,
                                               padding=0)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """ Apply forward pass. """
        # Inference mode forward pass.
        if self.inference_mode:
            return self.activation(self.se(self.reparam_conv(x)))

        # Multi-branched train-time forward pass.
        # Skip branch output
        identity_out = 0
        if self.rbr_skip is not None:
            identity_out = self.rbr_skip(x)

        # Scale branch output
        scale_out = 0
        if self.rbr_scale is not None:
            scale_out = self.rbr_scale(x)

        # Other branches
        out = scale_out + identity_out
        for ix in range(self.num_conv_branches):
            out += self.rbr_conv[ix](x)

        return self.activation(self.se(out))

    def reparameterize(self):
        """ Following works like `RepVGG: Making VGG-style ConvNets Great Again` -
        https://arxiv.org/pdf/2101.03697.pdf. We re-parameterize multi-branched
        architecture used at training time to obtain a plain CNN-like structure
        for inference.
        """
        if self.inference_mode:
            return
        kernel, bias = self._get_kernel_bias()
        self.reparam_conv = nn.Conv2d(in_channels=self.rbr_conv[0].conv.in_channels,
                                      out_channels=self.rbr_conv[0].conv.out_channels,
                                      kernel_size=self.rbr_conv[0].conv.kernel_size,
                                      stride=self.rbr_conv[0].conv.stride,
                                      padding=self.rbr_conv[0].conv.padding,
                                      dilation=self.rbr_conv[0].conv.dilation,
                                      groups=self.rbr_conv[0].conv.groups,
                                      bias=True)
        self.reparam_conv.weight.data = kernel
        self.reparam_conv.bias.data = bias

        # Delete un-used branches
        for para in self.parameters():
            para.detach_()
        self.__delattr__('rbr_conv')
        self.__delattr__('rbr_scale')
        if hasattr(self, 'rbr_skip'):
            self.__delattr__('rbr_skip')

        self.inference_mode = True

    def _get_kernel_bias(self) -> Tuple[torch.Tensor, torch.Tensor]:
        """ Method to obtain re-parameterized kernel and bias.
        Reference: https://github.com/DingXiaoH/RepVGG/blob/main/repvgg.py#L83

        :return: Tuple of (kernel, bias) after fusing branches.
        """
        # get weights and bias of scale branch
        kernel_scale = 0
        bias_scale = 0
        if self.rbr_scale is not None:
            kernel_scale, bias_scale = self._fuse_bn_tensor(self.rbr_scale)
            # Pad scale branch kernel to match conv branch kernel size.
            pad = self.kernel_size // 2
            kernel_scale = torch.nn.functional.pad(kernel_scale,
                                                   [pad, pad, pad, pad])

        # get weights and bias of skip branch
        kernel_identity = 0
        bias_identity = 0
        if self.rbr_skip is not None:
            kernel_identity, bias_identity = self._fuse_bn_tensor(self.rbr_skip)

        # get weights and bias of conv branches
        kernel_conv = 0
        bias_conv = 0
        for ix in range(self.num_conv_branches):
            _kernel, _bias = self._fuse_bn_tensor(self.rbr_conv[ix])
            kernel_conv += _kernel
            bias_conv += _bias

        kernel_final = kernel_conv + kernel_scale + kernel_identity
        bias_final = bias_conv + bias_scale + bias_identity
        return kernel_final, bias_final

    def _fuse_bn_tensor(self, branch) -> Tuple[torch.Tensor, torch.Tensor]:
        """ Method to fuse batchnorm layer with preceding conv layer.
        Reference: https://github.com/DingXiaoH/RepVGG/blob/main/repvgg.py#L95

        :param branch:
        :return: Tuple of (kernel, bias) after fusing batchnorm.
        """
        if isinstance(branch, nn.Sequential):
            kernel = branch.conv.weight
            running_mean = branch.bn.running_mean
            running_var = branch.bn.running_var
            gamma = branch.bn.weight
            beta = branch.bn.bias
            eps = branch.bn.eps
        else:
            assert isinstance(branch, nn.BatchNorm2d)
            if not hasattr(self, 'id_tensor'):
                input_dim = self.in_channels // self.groups
                kernel_value = torch.zeros((self.in_channels,
                                            input_dim,
                                            self.kernel_size,
                                            self.kernel_size),
                                           dtype=branch.weight.dtype,
                                           device=branch.weight.device)
                for i in range(self.in_channels):
                    kernel_value[i, i % input_dim,
                                 self.kernel_size // 2,
                                 self.kernel_size // 2] = 1
                self.id_tensor = kernel_value
            kernel = self.id_tensor
            running_mean = branch.running_mean
            running_var = branch.running_var
            gamma = branch.weight
            beta = branch.bias
            eps = branch.eps
        std = (running_var + eps).sqrt()
        t = (gamma / std).reshape(-1, 1, 1, 1)
        return kernel * t, beta - running_mean * gamma / std

    def _conv_bn(self,
                 kernel_size: int,
                 padding: int) -> nn.Sequential:
        """ Helper method to construct conv-batchnorm layers.

        :param kernel_size: Size of the convolution kernel.
        :param padding: Zero-padding size.
        :return: Conv-BN module.
        """
        mod_list = nn.Sequential()
        mod_list.add_module('conv',
                            nn.Conv2d(in_channels=self.in_channels,
                                      out_channels=self.out_channels,
                                      kernel_size=kernel_size,
                                      stride=self.stride,
                                      padding=padding,
                                      groups=self.groups,
                                      bias=False))
        mod_list.add_module('bn',
                            nn.BatchNorm2d(num_features=self.out_channels))
        return mod_list

In yolo.py, add MobileOne to the module list:

        if m in (Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv,
                 BottleneckCSP, C3, C3TR, C3SPP, C3Ghost, nn.ConvTranspose2d, DWConvTranspose2d, C3x, C2f_Add,
                 MobileOne):
            c1, c2 = ch[f], args[0]
            if c2 != no:  # if not output
                c2 = make_divisible(c2 * gw, 8)

            args = [c1, c2, *args[1:]]
            if m in [BottleneckCSP, C3, C3TR, C3Ghost, C3x, C2f_Add]:
                args.insert(2, n)  # number of repeats
                n = 1
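For reference, this is how the channel arguments in the yaml get scaled: with width_multiple gw = 0.50, each requested output-channel count c2 is halved and rounded up to a multiple of 8. A sketch reproducing YOLOv5's make_divisible helper for illustration:

```python
import math

def make_divisible(x, divisor):
    # Round x up to the nearest multiple of divisor (as in YOLOv5's utils)
    return math.ceil(x / divisor) * divisor

gw = 0.50  # width_multiple from yolov5s-mobileone.yaml
for c2 in (128, 256, 512, 1024):
    print(c2, '->', make_divisible(c2 * gw, 8))
# 128 -> 64, 256 -> 128, 512 -> 256, 1024 -> 512
```

So the MobileOne stages in the yolov5s variant actually build with 64/128/256/512 channels, not the nominal values written in the yaml.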

Also update the fuse method of BaseModel in yolo.py:

    def fuse(self):  # fuse model Conv2d() + BatchNorm2d() layers
        LOGGER.info('Fusing layers... ')
        for m in self.model.modules():
            if isinstance(m, (Conv, DWConv)) and hasattr(m, 'bn'):
                m.conv = fuse_conv_and_bn(m.conv, m.bn)  # update conv
                delattr(m, 'bn')  # remove batchnorm
                m.forward = m.forward_fuse  # update forward
            if hasattr(m, 'reparameterize'):
                m.reparameterize()
        self.info()
        return self
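The extra hasattr check works because reparameterize() is defined on MobileOneBlock, so any module exposing that method gets collapsed during fuse() without BaseModel needing to import the class. A toy standalone sketch of this duck-typed dispatch pattern (ToyRep is a made-up stand-in, not YOLOv5 or MobileOne code):

```python
import torch.nn as nn

class ToyRep(nn.Module):
    """Stand-in for a re-parameterizable block (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.fused = False

    def reparameterize(self):
        self.fused = True  # a real block would fold its branches here

model = nn.Sequential(ToyRep(), nn.ReLU(), ToyRep())
for m in model.modules():
    if hasattr(m, 'reparameterize'):
        m.reparameterize()

print(all(m.fused for m in model.modules() if isinstance(m, ToyRep)))  # True
```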

Run yolo.py to check that the model builds.

