深度学习camp-第J4周:ResNet与DenseNet结合探索

  • 🍨 本文为🔗365天深度学习训练营 中的学习记录博客
  • 🍖 原作者:K同学啊

本周任务:

  • 探索ResNet和DenseNet的结合可能性
  • 本周任务较难,我们在 ChatGPT 的帮助下完成

一、网络的构建

本周的目标是设计一种结合 ResNet 和 DenseNet 的网络架构,在性能与复杂度之间取得平衡,同时保持与 DenseNet-121 相当的训练速度。按照以下思路,我们设计了一种新的网络结构,暂命名为 ResDenseNet:它结合了 ResNet 的残差连接与 DenseNet 的密集连接的优点,同时对计算复杂度加以控制。

设计思路:

  1. 残差模块与密集模块结合
     • 在网络的不同阶段,使用残差模块(ResBlock)捕获浅层特征。
     • 在每个阶段的后期引入密集模块(DenseBlock),实现高效的特征复用。
     • 通过调整每层的通道数,避免过多的计算和内存消耗。
  2. 瓶颈设计(Bottleneck Block)
     • 每个模块采用瓶颈层,减少计算复杂度。
     • 通过 1x1 卷积压缩和扩展特征通道数。
  3. 混合连接方式
     • 引入局部密集连接,只连接同一模块内的层,避免 DenseNet 全局密集连接带来的内存开销。
     • 在模块之间使用残差连接,便于信息流通。
  4. 网络深度与宽度的平衡
     • 减小 DenseNet 的增长率(growth rate),适当放缓特征图通道数的增长(通道数的变化见下面的简单推算)。
     • 在模块之间引入过渡层(Transition Layer),压缩特征图尺寸和通道数。
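按上面的设计,每个 DenseBlock 的输出通道数等于 in_channels + num_layers × growth_rate,残差分支则用 1x1 卷积把输入调整到相同通道数后相加。以下面代码中 stage1~stage4 的配置为例,可以简单推算各阶段 DenseBlock 的输出通道数(仅作核对,过渡层随后再把通道数调整为该阶段的 out_channels):

# stage1: 64  + 4 × 16 = 128 (过渡层: 128 -> 128)
# stage2: 128 + 4 × 16 = 192 (过渡层: 192 -> 256)
# stage3: 256 + 6 × 12 = 328 (过渡层: 328 -> 512)
# stage4: 512 + 6 × 12 = 584 (过渡层: 584 -> 1024)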

import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    def __init__(self, in_channels, growth_rate):
        super(Bottleneck, self).__init__()
        self.bn1 = nn.BatchNorm2d(in_channels)
        self.conv1 = nn.Conv2d(in_channels, 4 * growth_rate, kernel_size=1, stride=1, bias=False)
        self.bn2 = nn.BatchNorm2d(4 * growth_rate)
        self.conv2 = nn.Conv2d(4 * growth_rate, growth_rate, kernel_size=3, stride=1, padding=1, bias=False)

    def forward(self, x):
        out = self.conv1(self.bn1(x))
        out = self.conv2(self.bn2(out))
        return torch.cat([x, out], dim=1)

class DenseBlock(nn.Module):
    def __init__(self, num_layers, in_channels, growth_rate):
        super(DenseBlock, self).__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(Bottleneck(in_channels + i * growth_rate, growth_rate))
        # 为了残差连接,可能需要调整通道数以匹配输入输出
        self.residual = nn.Conv2d(in_channels, in_channels + num_layers * growth_rate, kernel_size=1, bias=False)

    def forward(self, x):
        identity = self.residual(x)  # 将输入调整为与 DenseBlock 输出通道一致
        for layer in self.layers:
            x = layer(x)  # 密集连接,逐层拼接
        return x + identity  # 残差连接:输入与输出相加

class TransitionLayer(nn.Module):
    def __init__(self, in_channels, out_channels):
        super(TransitionLayer, self).__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=1, bias=False)
        self.pool = nn.AvgPool2d(kernel_size=2, stride=2)

    def forward(self, x):
        x = self.conv(self.bn(x))
        return self.pool(x)

class ResDenseNet(nn.Module):
    def __init__(self, num_classes=1000):
        super(ResDenseNet, self).__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        )
        self.stage1 = self._make_stage(64, 128, num_layers=4, growth_rate=16)
        self.stage2 = self._make_stage(128, 256, num_layers=4, growth_rate=16)
        self.stage3 = self._make_stage(256, 512, num_layers=6, growth_rate=12)
        self.stage4 = self._make_stage(512, 1024, num_layers=6, growth_rate=12)
        self.classifier = nn.Linear(1024, num_classes)

    def _make_stage(self, in_channels, out_channels, num_layers, growth_rate):
        dense_block = DenseBlock(num_layers, in_channels, growth_rate)
        transition = TransitionLayer(in_channels + num_layers * growth_rate, out_channels)
        return nn.Sequential(dense_block, transition)

    def forward(self, x):
        x = self.stem(x)
        x = self.stage1(x)
        x = self.stage2(x)
        x = self.stage3(x)
        x = self.stage4(x)
        x = torch.mean(x, dim=[2, 3])  # Global Average Pooling
        return self.classifier(x)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = ResDenseNet().to(device)
model

代码输出:

ResDenseNet(
  (stem): Sequential(
    (0): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
    (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU(inplace=True)
    (3): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  )
  (stage1): Sequential(
    (0): DenseBlock(
      (layers): ModuleList(
        (0): Bottleneck(
          (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(64, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (1): Bottleneck(
          (bn1): BatchNorm2d(80, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(80, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(64, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (2): Bottleneck(
          (bn1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(96, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(64, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (3): Bottleneck(
          (bn1): BatchNorm2d(112, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(112, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(64, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
      )
      (residual): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
    )
    (1): TransitionLayer(
      (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (pool): AvgPool2d(kernel_size=2, stride=2, padding=0)
    )
  )
  (stage2): Sequential(
    (0): DenseBlock(
      (layers): ModuleList(
        (0): Bottleneck(
          (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(64, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (1): Bottleneck(
          (bn1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(144, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(64, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (2): Bottleneck(
          (bn1): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(160, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(64, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (3): Bottleneck(
          (bn1): BatchNorm2d(176, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(176, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(64, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
      )
      (residual): Conv2d(128, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
    )
    (1): TransitionLayer(
      (bn): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv): Conv2d(192, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (pool): AvgPool2d(kernel_size=2, stride=2, padding=0)
    )
  )
  (stage3): Sequential(
    (0): DenseBlock(
      (layers): ModuleList(
        (0): Bottleneck(
          (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(256, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(48, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (1): Bottleneck(
          (bn1): BatchNorm2d(268, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(268, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(48, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (2): Bottleneck(
          (bn1): BatchNorm2d(280, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(280, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(48, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (3): Bottleneck(
          (bn1): BatchNorm2d(292, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(292, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(48, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (4): Bottleneck(
          (bn1): BatchNorm2d(304, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(304, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(48, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (5): Bottleneck(
          (bn1): BatchNorm2d(316, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(316, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(48, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
      )
      (residual): Conv2d(256, 328, kernel_size=(1, 1), stride=(1, 1), bias=False)
    )
    (1): TransitionLayer(
      (bn): BatchNorm2d(328, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv): Conv2d(328, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (pool): AvgPool2d(kernel_size=2, stride=2, padding=0)
    )
  )
  (stage4): Sequential(
    (0): DenseBlock(
      (layers): ModuleList(
        (0): Bottleneck(
          (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(512, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(48, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (1): Bottleneck(
          (bn1): BatchNorm2d(524, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(524, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(48, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (2): Bottleneck(
          (bn1): BatchNorm2d(536, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(536, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(48, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (3): Bottleneck(
          (bn1): BatchNorm2d(548, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(548, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(48, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (4): Bottleneck(
          (bn1): BatchNorm2d(560, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(560, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(48, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (5): Bottleneck(
          (bn1): BatchNorm2d(572, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(572, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(48, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
      )
      (residual): Conv2d(512, 584, kernel_size=(1, 1), stride=(1, 1), bias=False)
    )
    (1): TransitionLayer(
      (bn): BatchNorm2d(584, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv): Conv2d(584, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (pool): AvgPool2d(kernel_size=2, stride=2, padding=0)
    )
  )
  (classifier): Linear(in_features=1024, out_features=1000, bias=True)
)

代码输入:

import torchsummary as summary
summary.summary(model, (3, 224, 224))

代码输出:

----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1         [-1, 64, 112, 112]           9,408
       BatchNorm2d-2         [-1, 64, 112, 112]             128
              ReLU-3         [-1, 64, 112, 112]               0
         MaxPool2d-4           [-1, 64, 56, 56]               0
            Conv2d-5          [-1, 128, 56, 56]           8,192
       BatchNorm2d-6           [-1, 64, 56, 56]             128
            Conv2d-7           [-1, 64, 56, 56]           4,096
       BatchNorm2d-8           [-1, 64, 56, 56]             128
            Conv2d-9           [-1, 16, 56, 56]           9,216
        Bottleneck-10          [-1, 80, 56, 56]               0
       BatchNorm2d-11          [-1, 80, 56, 56]             160
            Conv2d-12          [-1, 64, 56, 56]           5,120
       BatchNorm2d-13          [-1, 64, 56, 56]             128
            Conv2d-14          [-1, 16, 56, 56]           9,216
        Bottleneck-15          [-1, 96, 56, 56]               0
       BatchNorm2d-16          [-1, 96, 56, 56]             192
            Conv2d-17          [-1, 64, 56, 56]           6,144
       BatchNorm2d-18          [-1, 64, 56, 56]             128
            Conv2d-19          [-1, 16, 56, 56]           9,216
        Bottleneck-20         [-1, 112, 56, 56]               0
       BatchNorm2d-21         [-1, 112, 56, 56]             224
            Conv2d-22          [-1, 64, 56, 56]           7,168
       BatchNorm2d-23          [-1, 64, 56, 56]             128
            Conv2d-24          [-1, 16, 56, 56]           9,216
        Bottleneck-25         [-1, 128, 56, 56]               0
        DenseBlock-26         [-1, 128, 56, 56]               0
       BatchNorm2d-27         [-1, 128, 56, 56]             256
            Conv2d-28         [-1, 128, 56, 56]          16,384
         AvgPool2d-29         [-1, 128, 28, 28]               0
   TransitionLayer-30         [-1, 128, 28, 28]               0
            Conv2d-31         [-1, 192, 28, 28]          24,576
       BatchNorm2d-32         [-1, 128, 28, 28]             256
            Conv2d-33          [-1, 64, 28, 28]           8,192
       BatchNorm2d-34          [-1, 64, 28, 28]             128
            Conv2d-35          [-1, 16, 28, 28]           9,216
        Bottleneck-36         [-1, 144, 28, 28]               0
       BatchNorm2d-37         [-1, 144, 28, 28]             288
            Conv2d-38          [-1, 64, 28, 28]           9,216
       BatchNorm2d-39          [-1, 64, 28, 28]             128
            Conv2d-40          [-1, 16, 28, 28]           9,216
        Bottleneck-41         [-1, 160, 28, 28]               0
       BatchNorm2d-42         [-1, 160, 28, 28]             320
            Conv2d-43          [-1, 64, 28, 28]          10,240
       BatchNorm2d-44          [-1, 64, 28, 28]             128
            Conv2d-45          [-1, 16, 28, 28]           9,216
        Bottleneck-46         [-1, 176, 28, 28]               0
       BatchNorm2d-47         [-1, 176, 28, 28]             352
            Conv2d-48          [-1, 64, 28, 28]          11,264
       BatchNorm2d-49          [-1, 64, 28, 28]             128
            Conv2d-50          [-1, 16, 28, 28]           9,216
        Bottleneck-51         [-1, 192, 28, 28]               0
        DenseBlock-52         [-1, 192, 28, 28]               0
       BatchNorm2d-53         [-1, 192, 28, 28]             384
            Conv2d-54         [-1, 256, 28, 28]          49,152
         AvgPool2d-55         [-1, 256, 14, 14]               0
   TransitionLayer-56         [-1, 256, 14, 14]               0
            Conv2d-57         [-1, 328, 14, 14]          83,968
       BatchNorm2d-58         [-1, 256, 14, 14]             512
            Conv2d-59          [-1, 48, 14, 14]          12,288
       BatchNorm2d-60          [-1, 48, 14, 14]              96
            Conv2d-61          [-1, 12, 14, 14]           5,184
        Bottleneck-62         [-1, 268, 14, 14]               0
       BatchNorm2d-63         [-1, 268, 14, 14]             536
            Conv2d-64          [-1, 48, 14, 14]          12,864
       BatchNorm2d-65          [-1, 48, 14, 14]              96
            Conv2d-66          [-1, 12, 14, 14]           5,184
        Bottleneck-67         [-1, 280, 14, 14]               0
       BatchNorm2d-68         [-1, 280, 14, 14]             560
            Conv2d-69          [-1, 48, 14, 14]          13,440
       BatchNorm2d-70          [-1, 48, 14, 14]              96
            Conv2d-71          [-1, 12, 14, 14]           5,184
        Bottleneck-72         [-1, 292, 14, 14]               0
       BatchNorm2d-73         [-1, 292, 14, 14]             584
            Conv2d-74          [-1, 48, 14, 14]          14,016
       BatchNorm2d-75          [-1, 48, 14, 14]              96
            Conv2d-76          [-1, 12, 14, 14]           5,184
        Bottleneck-77         [-1, 304, 14, 14]               0
       BatchNorm2d-78         [-1, 304, 14, 14]             608
            Conv2d-79          [-1, 48, 14, 14]          14,592
       BatchNorm2d-80          [-1, 48, 14, 14]              96
            Conv2d-81          [-1, 12, 14, 14]           5,184
        Bottleneck-82         [-1, 316, 14, 14]               0
       BatchNorm2d-83         [-1, 316, 14, 14]             632
            Conv2d-84          [-1, 48, 14, 14]          15,168
       BatchNorm2d-85          [-1, 48, 14, 14]              96
            Conv2d-86          [-1, 12, 14, 14]           5,184
        Bottleneck-87         [-1, 328, 14, 14]               0
        DenseBlock-88         [-1, 328, 14, 14]               0
       BatchNorm2d-89         [-1, 328, 14, 14]             656
            Conv2d-90         [-1, 512, 14, 14]         167,936
         AvgPool2d-91           [-1, 512, 7, 7]               0
   TransitionLayer-92           [-1, 512, 7, 7]               0
            Conv2d-93           [-1, 584, 7, 7]         299,008
       BatchNorm2d-94           [-1, 512, 7, 7]           1,024
            Conv2d-95            [-1, 48, 7, 7]          24,576
       BatchNorm2d-96            [-1, 48, 7, 7]              96
            Conv2d-97            [-1, 12, 7, 7]           5,184
        Bottleneck-98           [-1, 524, 7, 7]               0
       BatchNorm2d-99           [-1, 524, 7, 7]           1,048
           Conv2d-100            [-1, 48, 7, 7]          25,152
      BatchNorm2d-101            [-1, 48, 7, 7]              96
           Conv2d-102            [-1, 12, 7, 7]           5,184
       Bottleneck-103           [-1, 536, 7, 7]               0
      BatchNorm2d-104           [-1, 536, 7, 7]           1,072
           Conv2d-105            [-1, 48, 7, 7]          25,728
      BatchNorm2d-106            [-1, 48, 7, 7]              96
           Conv2d-107            [-1, 12, 7, 7]           5,184
       Bottleneck-108           [-1, 548, 7, 7]               0
      BatchNorm2d-109           [-1, 548, 7, 7]           1,096
           Conv2d-110            [-1, 48, 7, 7]          26,304
      BatchNorm2d-111            [-1, 48, 7, 7]              96
           Conv2d-112            [-1, 12, 7, 7]           5,184
       Bottleneck-113           [-1, 560, 7, 7]               0
      BatchNorm2d-114           [-1, 560, 7, 7]           1,120
           Conv2d-115            [-1, 48, 7, 7]          26,880
      BatchNorm2d-116            [-1, 48, 7, 7]              96
           Conv2d-117            [-1, 12, 7, 7]           5,184
       Bottleneck-118           [-1, 572, 7, 7]               0
      BatchNorm2d-119           [-1, 572, 7, 7]           1,144
           Conv2d-120            [-1, 48, 7, 7]          27,456
      BatchNorm2d-121            [-1, 48, 7, 7]              96
           Conv2d-122            [-1, 12, 7, 7]           5,184
       Bottleneck-123           [-1, 584, 7, 7]               0
       DenseBlock-124           [-1, 584, 7, 7]               0
      BatchNorm2d-125           [-1, 584, 7, 7]           1,168
           Conv2d-126          [-1, 1024, 7, 7]         598,016
        AvgPool2d-127          [-1, 1024, 3, 3]               0
  TransitionLayer-128          [-1, 1024, 3, 3]               0
           Linear-129                [-1, 1000]       1,025,000
================================================================
Total params: 2,734,104
Trainable params: 2,734,104
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.57
Forward/backward pass size (MB): 95.40
Params size (MB): 10.43
Estimated Total Size (MB): 106.41
----------------------------------------------------------------
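torchsummary 统计的总参数量约为 273 万(2,734,104)。也可以直接用 PyTorch 自带的方法核对一下(一个简单的核对示例,非原文代码):

total_params = sum(p.numel() for p in model.parameters())   # 汇总所有参数张量的元素个数
print(f"Total params: {total_params:,}")                     # 预期输出 2,734,104,与 torchsummary 一致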

接下来简单梳理一下我们构建的网络:

  1. 首先构建 Bottleneck:它是 DenseBlock 的基本组成单元,由两个 BatchNorm 层和两次卷积(1x1 压缩、3x3 提取特征)构成,输出与输入在通道维度上拼接。
  2. 随后构建 DenseBlock:块内逐层密集拼接,同时用一个 1x1 卷积把输入调整到与输出相同的通道数,再做残差相加。
  3. 构建 TransitionLayer:用 1x1 卷积调整通道数并做平均池化,压缩特征图尺寸,使网络最终可以接入全连接层。
  4. 整体网络结构如下(图后附一个简单的形状自检示例):
Input (224x224x3)
    |
    |   Conv2d (7x7, stride=2)
    |   BatchNorm2d
    |   ReLU
    |   MaxPool2d (3x3, stride=2)
    v
Stem Layer (64 channels)
    |
    v
Stage 1: DenseBlock + TransitionLayer (64 -> 128 channels)
    |
    v
Stage 2: DenseBlock + TransitionLayer (128 -> 256 channels)
    |
    v
Stage 3: DenseBlock + TransitionLayer (256 -> 512 channels)
    |
    v
Stage 4: DenseBlock + TransitionLayer (512 -> 1024 channels)
    |
    v
Global Average Pooling (1024x1x1)
    |
    v
Fully Connected Layer (1024 -> num_classes)
    |
    v
Output (num_classes)
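为了确认上面的结构图与代码实现一致,可以用一个随机张量做一次前向传播,检查输出形状(一个简单的形状自检示例,非原文代码):

x = torch.randn(1, 3, 224, 224).to(device)   # 随机生成一张 224x224 的伪"图片"
with torch.no_grad():                         # 仅做形状检查,不需要梯度
    out = model(x)
print(out.shape)                              # 预期输出 torch.Size([1, 1000]),对应默认的 num_classes=1000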

二、用新网络完成上周的乳腺癌识别任务

import pathlib
data_dir = './data/J3-1-data'
data_dir = pathlib.Path(data_dir)
data_path = list(data_dir.glob('*'))
classNames = [path.name for path in data_path]
print(classNames)

代码输出:

['0', '1']
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

train_transforms = transforms.Compose([
    transforms.Resize([224, 224]),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

total_data = datasets.ImageFolder(data_dir, transform=train_transforms)
total_data

代码输出:

Dataset ImageFolder
    Number of datapoints: 13403
    Root location: data\J3-1-data
    StandardTransform
Transform: Compose(
               Resize(size=[224, 224], interpolation=bilinear, max_size=None, antialias=True)
               ToTensor()
               Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
           )
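ImageFolder 会按子文件夹名自动生成"类别 -> 标签"的映射,可以用 class_to_idx 属性核对一下(一个简单的检查示例,非原文代码):

print(total_data.class_to_idx)   # 预期输出 {'0': 0, '1': 1}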
train_size = int(0.7 * len(total_data)) 
remain_size  = len(total_data) - train_size  
train_dataset, remain_dataset = torch.utils.data.random_split(total_data, [train_size, remain_size])
test_size = int(0.6 * len(remain_dataset))
validate_size = len(remain_dataset) - test_size
test_dataset, validate_dataset = torch.utils.data.random_split(remain_dataset, [test_size, validate_size]) #随机分配数据
train_dataset, test_dataset, validate_dataset

代码输出:

(<torch.utils.data.dataset.Subset at 0x2138402dbb0>,
 <torch.utils.data.dataset.Subset at 0x21383feb590>,
 <torch.utils.data.dataset.Subset at 0x21383ece690>)
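上面的输出只能看到三个 Subset 对象,看不出各子集的大小。可以顺手打印一下长度,核对大约 7 : 1.8 : 1.2 的划分比例(一个简单的检查示例,按 13403 张图片推算):

print(len(train_dataset), len(test_dataset), len(validate_dataset))
# 训练集: int(0.7 * 13403) = 9382
# 剩余:   13403 - 9382     = 4021
# 测试集: int(0.6 * 4021)  = 2412
# 验证集: 4021 - 2412      = 1609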
batch_size = 32

train_dl = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_dl = DataLoader(test_dataset, batch_size=batch_size, shuffle=True)
validate_dl = DataLoader(validate_dataset, batch_size=batch_size, shuffle=False)

for x, y in validate_dl:
    print("shape of x [N, C, H, W]:", x.shape)
    print("shape of y:", y.shape, y.dtype)
    break

代码输出:

shape of x [N, C, H, W]: torch.Size([32, 3, 224, 224])
shape of y: torch.Size([32]) torch.int64
def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    train_loss, train_acc = 0, 0
    for x, y in dataloader:
        x, y = x.to(device), y.to(device)
        pred = model(x)
        loss = loss_fn(pred, y)

        # backward
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        train_acc += (pred.argmax(1) == y).type(torch.float).sum().item()
        train_loss += loss.item()
    train_acc /= size
    train_loss /= num_batches
    return train_acc, train_loss

def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    test_loss, test_acc = 0, 0
    for x, y in dataloader:
        x, y = x.to(device), y.to(device)
        pred = model(x)
        loss = loss_fn(pred, y)
        test_acc += (pred.argmax(1) == y).type(torch.float).sum().item()
        test_loss += loss.item()
    test_acc /= size
    test_loss /= num_batches
    return test_acc, test_loss
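上面的 test 函数没有关闭梯度计算,功能上没有问题,但评估时会额外占用显存。一个可选的小改动(示例写法,非原文代码)是把评估循环放进 torch.no_grad() 中:

def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    test_loss, test_acc = 0, 0
    with torch.no_grad():                       # 评估阶段不做反向传播,关闭梯度以节省显存
        for x, y in dataloader:
            x, y = x.to(device), y.to(device)
            pred = model(x)
            test_acc += (pred.argmax(1) == y).type(torch.float).sum().item()
            test_loss += loss_fn(pred, y).item()
    return test_acc / size, test_loss / num_batches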

训练:

import copy
from torch.optim.lr_scheduler import ReduceLROnPlateau

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = ReduceLROnPlateau(opt, mode='min', factor=0.1, patience=5, verbose=True)  # 当指标(如损失)连续 5 次没有改善时,将学习率乘以 0.1
loss_fn = nn.CrossEntropyLoss()  # 交叉熵
epochs = 32

train_loss = []
train_acc  = []
test_loss  = []
test_acc   = []

best_acc = 0    # 设置一个最佳准确率,作为最佳模型的判别指标

for epoch in range(epochs):
    model.train()
    epoch_train_acc, epoch_train_loss = train(train_dl, model, loss_fn, opt)

    model.eval()
    epoch_test_acc, epoch_test_loss = test(test_dl, model, loss_fn)

    scheduler.step(epoch_test_loss)

    if epoch_test_acc > best_acc:
        best_acc = epoch_test_acc
        best_model = copy.deepcopy(model)

    train_acc.append(epoch_train_acc)
    train_loss.append(epoch_train_loss)
    test_acc.append(epoch_test_acc)
    test_loss.append(epoch_test_loss)

    # 获取当前的学习率
    lr = opt.state_dict()['param_groups'][0]['lr']

    template = ('Epoch:{:2d}, Train_acc:{:.1f}%, Train_loss:{:.3f}, Test_acc:{:.1f}%, Test_loss:{:.3f}, Lr:{:.2E}')
    print(template.format(epoch+1, epoch_train_acc*100, epoch_train_loss, epoch_test_acc*100, epoch_test_loss, lr))

# 保存最佳模型到文件中
PATH = './best_model.pth'  # 保存的参数文件名
torch.save(best_model.state_dict(), PATH)

print('Done')

代码输出:

Epoch: 1, Train_acc:80.7%, Train_loss:0.892, Test_acc:71.2%, Test_loss:1.992, Lr:1.00E-04
Epoch: 2, Train_acc:82.5%, Train_loss:0.409, Test_acc:83.9%, Test_loss:0.393, Lr:1.00E-04
Epoch: 3, Train_acc:83.4%, Train_loss:0.395, Test_acc:82.8%, Test_loss:0.443, Lr:1.00E-04
Epoch: 4, Train_acc:83.8%, Train_loss:0.380, Test_acc:84.1%, Test_loss:0.378, Lr:1.00E-04
Epoch: 5, Train_acc:84.2%, Train_loss:0.375, Test_acc:54.6%, Test_loss:1.337, Lr:1.00E-04
Epoch: 6, Train_acc:84.2%, Train_loss:0.378, Test_acc:84.7%, Test_loss:0.354, Lr:1.00E-04
Epoch: 7, Train_acc:84.7%, Train_loss:0.368, Test_acc:64.4%, Test_loss:0.696, Lr:1.00E-04
Epoch: 8, Train_acc:84.9%, Train_loss:0.360, Test_acc:84.7%, Test_loss:0.493, Lr:1.00E-04
Epoch: 9, Train_acc:85.1%, Train_loss:0.362, Test_acc:73.7%, Test_loss:0.506, Lr:1.00E-04
Epoch:10, Train_acc:85.2%, Train_loss:0.350, Test_acc:77.3%, Test_loss:0.791, Lr:1.00E-04
Epoch:11, Train_acc:85.5%, Train_loss:0.352, Test_acc:53.7%, Test_loss:2.223, Lr:1.00E-04
Epoch:12, Train_acc:85.6%, Train_loss:0.351, Test_acc:84.5%, Test_loss:0.438, Lr:1.00E-05
Epoch:13, Train_acc:86.7%, Train_loss:0.321, Test_acc:87.4%, Test_loss:0.295, Lr:1.00E-05
Epoch:14, Train_acc:86.5%, Train_loss:0.314, Test_acc:87.3%, Test_loss:0.296, Lr:1.00E-05
Epoch:15, Train_acc:87.2%, Train_loss:0.310, Test_acc:87.1%, Test_loss:0.320, Lr:1.00E-05
Epoch:16, Train_acc:87.6%, Train_loss:0.307, Test_acc:87.2%, Test_loss:0.297, Lr:1.00E-05
Epoch:17, Train_acc:87.4%, Train_loss:0.309, Test_acc:88.2%, Test_loss:0.289, Lr:1.00E-05
Epoch:18, Train_acc:87.0%, Train_loss:0.310, Test_acc:87.6%, Test_loss:0.293, Lr:1.00E-05
Epoch:19, Train_acc:87.1%, Train_loss:0.305, Test_acc:88.3%, Test_loss:0.281, Lr:1.00E-05
Epoch:20, Train_acc:87.6%, Train_loss:0.298, Test_acc:87.6%, Test_loss:0.299, Lr:1.00E-05
Epoch:21, Train_acc:87.5%, Train_loss:0.299, Test_acc:87.9%, Test_loss:0.289, Lr:1.00E-05
Epoch:22, Train_acc:87.5%, Train_loss:0.299, Test_acc:88.3%, Test_loss:0.292, Lr:1.00E-05
Epoch:23, Train_acc:88.0%, Train_loss:0.296, Test_acc:86.4%, Test_loss:0.347, Lr:1.00E-05
Epoch:24, Train_acc:87.7%, Train_loss:0.299, Test_acc:88.1%, Test_loss:0.286, Lr:1.00E-05
Epoch:25, Train_acc:87.8%, Train_loss:0.294, Test_acc:86.4%, Test_loss:0.327, Lr:1.00E-06
Epoch:26, Train_acc:87.9%, Train_loss:0.290, Test_acc:87.5%, Test_loss:0.291, Lr:1.00E-06
Epoch:27, Train_acc:88.2%, Train_loss:0.286, Test_acc:88.9%, Test_loss:0.272, Lr:1.00E-06
Epoch:28, Train_acc:88.1%, Train_loss:0.287, Test_acc:88.6%, Test_loss:0.277, Lr:1.00E-06
Epoch:29, Train_acc:88.2%, Train_loss:0.286, Test_acc:89.4%, Test_loss:0.269, Lr:1.00E-06
Epoch:30, Train_acc:88.1%, Train_loss:0.285, Test_acc:89.1%, Test_loss:0.271, Lr:1.00E-06
Epoch:31, Train_acc:88.1%, Train_loss:0.288, Test_acc:88.9%, Test_loss:0.274, Lr:1.00E-06
Epoch:32, Train_acc:87.9%, Train_loss:0.291, Test_acc:89.1%, Test_loss:0.275, Lr:1.00E-06
Done

从结果上看,测试集准确率不如上周的 DenseNet121;不过本网络的参数量(约 2.73M)也远小于 DenseNet-121(约 8M)。

结果可视化:

import matplotlib.pyplot as plt

epochs_range = range(epochs)

plt.figure(figsize=(12, 3))

plt.subplot(1, 2, 1)
plt.plot(epochs_range, train_acc, label='Training Accuracy')
plt.plot(epochs_range, test_acc, label='Test Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, train_loss, label='Training Loss')
plt.plot(epochs_range, test_loss, label='Test Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')

plt.show()

代码输出:
(图:训练/测试的准确率与损失随 epoch 变化的曲线)
对验证集的准确率:

def validate(dataloader, model):
    model.eval()
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    validate_acc = 0
    for x, y in dataloader:
        x, y = x.to(device), y.to(device)
        pred = model(x)
        validate_acc += (pred.argmax(1) == y).type(torch.float).sum().item()
    validate_acc /= size
    return validate_acc

# 计算验证集准确率
validate_acc = validate(validate_dl, best_model)
print(f"Validation Accuracy: {validate_acc:.2%}")

代码输出:

Validation Accuracy: 89.37%

验证集准确率达到 89.4%。
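训练过程中我们已经把最佳模型参数保存到了 best_model.pth,之后若想单独做推理,可以按下面的方式重新加载(一个简单的示例,假设 ResDenseNet 的定义和 device 仍然可用):

model_loaded = ResDenseNet(num_classes=1000).to(device)                             # num_classes 需与训练时一致(这里沿用默认值 1000)
model_loaded.load_state_dict(torch.load('./best_model.pth', map_location=device))   # 加载最佳模型参数
model_loaded.eval()                                                                 # 推理前切换到评估模式

另外,本任务实际只有 2 个类别,严格来说把 num_classes 设为 2 会更合理,这里为了与上文保持一致仍沿用默认值。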

三、总结

这次的结合主要是在 ChatGPT 的帮助下完成的,属于比较简单的拼接式结合。看到很多人提到文献中已有类似思路的 DPN(Dual Path Networks,双路径网络)结构,后面我也会去读一读相关论文。
