Knowledge Distillation from ResNet50 to VGG16 on CIFAR100 with PyTorch: An Experiment Log

The Concept of Knowledge Distillation

For an introduction to knowledge distillation, see the paper "Distilling the Knowledge in a Neural Network" (Hinton et al., 2015) [1].


In its narrow sense, knowledge distillation means transferring knowledge from a complex model to improve the performance of a simple model. The complex model is called the teacher model, and the simple model is called the student model. I recently revisited knowledge distillation and verified it experimentally on the CIFAR100 dataset.

Logits, hard targets, and soft targets: logits are the raw outputs of the network's last layer (before softmax); hard targets are the one-hot encodings of the ground-truth labels; soft targets are the probabilities obtained by applying softmax to the logits.

Soft targets with a temperature coefficient: to make the post-softmax probability distribution even softer, Hinton proposed applying softmax to the logits with a temperature parameter:

q_i = exp(z_i / T) / Σ_j exp(z_j / T)
where z_i are the logits and T is the temperature; the larger T is, the flatter the resulting probability distribution.
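
The effect of the temperature can be checked directly in PyTorch. A minimal sketch (the logits below are made-up numbers for illustration):

import torch
import torch.nn.functional as F

logits = torch.tensor([8.0, 2.0, 1.0, 0.5, 0.1])  # hypothetical logits for 5 classes

for T in (1.0, 4.0, 10.0):
    probs = F.softmax(logits / T, dim=0)
    print("T = {}: {}".format(T, probs.numpy().round(3)))
# T = 1 puts almost all probability mass on the first class;
# larger T spreads the mass more evenly, exposing the relative
# similarities between the non-target classes.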

Dataset: CIFAR100, a classic image classification dataset with 100 classes.

The dataset is loaded directly with the official dataset class defined in torchvision:
import torchvision
from torchvision import transforms
from torch.utils.data import DataLoader

CIFAR100_TRAIN_MEAN = (0.5070751592371323, 0.48654887331495095, 0.4409178433670343)
CIFAR100_TRAIN_STD = (0.2673342858792401, 0.2564384629170883, 0.27615047132568404)

transform_train = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ToTensor(),
    transforms.Normalize(CIFAR100_TRAIN_MEAN, CIFAR100_TRAIN_STD)
])
transform_test = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(CIFAR100_TRAIN_MEAN, CIFAR100_TRAIN_STD)
])

train_dataset = torchvision.datasets.cifar.CIFAR100(
    root="./dataset/", train=True, transform=transform_train, download=True
)
test_dataset = torchvision.datasets.cifar.CIFAR100(
    root="./dataset/", train=False, transform=transform_test, download=True
)

train_loader = DataLoader(dataset=train_dataset, batch_size=128, num_workers=4, shuffle=True)
test_loader = DataLoader(dataset=test_dataset, batch_size=128, num_workers=4, shuffle=False)
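
As a quick sanity check appended to the script above, the loaders should expose the standard CIFAR100 split of 50,000 training images and 10,000 test images in batches of 128:

print(len(train_dataset), len(test_dataset))  # 50000 10000
images, labels = next(iter(train_loader))
print(images.shape, labels.shape)             # torch.Size([128, 3, 32, 32]) torch.Size([128])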

Classification models: ResNet50 is used as the teacher model and VGG16 as the student model.

VGG16 network definition:
"""vgg in pytorch[1] Karen Simonyan, Andrew ZissermanVery Deep Convolutional Networks for Large-Scale Image Recognition.https://arxiv.org/abs/1409.1556v6
"""
'''VGG11/13/16/19 in Pytorch.'''import torch
import torch.nn as nncfg = {'A' : [64,     'M', 128,      'M', 256, 256,           'M', 512, 512,           'M', 512, 512,           'M'],'B' : [64, 64, 'M', 128, 128, 'M', 256, 256,           'M', 512, 512,           'M', 512, 512,           'M'],'D' : [64, 64, 'M', 128, 128, 'M', 256, 256, 256,      'M', 512, 512, 512,      'M', 512, 512, 512,      'M'],'E' : [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M']
}class VGG(nn.Module):def __init__(self, features, num_class=100):super().__init__()self.features = featuresself.classifier = nn.Sequential(nn.Linear(512, 4096),nn.ReLU(inplace=True),nn.Dropout(),nn.Linear(4096, 4096),nn.ReLU(inplace=True),nn.Dropout(),nn.Linear(4096, num_class))def forward(self, x):output = self.features(x)output = output.view(output.size()[0], -1)output = self.classifier(output)return outputdef make_layers(cfg, batch_norm=False):layers = []input_channel = 3for l in cfg:if l == 'M':layers += [nn.MaxPool2d(kernel_size=2, stride=2)]continuelayers += [nn.Conv2d(input_channel, l, kernel_size=3, padding=1)]if batch_norm:layers += [nn.BatchNorm2d(l)]layers += [nn.ReLU(inplace=True)]input_channel = lreturn nn.Sequential(*layers)def vgg16_bn():return VGG(make_layers(cfg['D'], batch_norm=True))
ResNet50 network definition:
"""resnet in pytorch[1] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.Deep Residual Learning for Image Recognitionhttps://arxiv.org/abs/1512.03385v1
"""import torch
import torch.nn as nnclass BasicBlock(nn.Module):"""Basic Block for resnet 18 and resnet 34"""#BasicBlock and BottleNeck block#have different output size#we use class attribute expansion#to distinctexpansion = 1def __init__(self, in_channels, out_channels, stride=1):super().__init__()#residual functionself.residual_function = nn.Sequential(nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1, bias=False),nn.BatchNorm2d(out_channels),nn.ReLU(inplace=True),nn.Conv2d(out_channels, out_channels * BasicBlock.expansion, kernel_size=3, padding=1, bias=False),nn.BatchNorm2d(out_channels * BasicBlock.expansion))#shortcutself.shortcut = nn.Sequential()#the shortcut output dimension is not the same with residual function#use 1*1 convolution to match the dimensionif stride != 1 or in_channels != BasicBlock.expansion * out_channels:self.shortcut = nn.Sequential(nn.Conv2d(in_channels, out_channels * BasicBlock.expansion, kernel_size=1, stride=stride, bias=False),nn.BatchNorm2d(out_channels * BasicBlock.expansion))def forward(self, x):return nn.ReLU(inplace=True)(self.residual_function(x) + self.shortcut(x))class BottleNeck(nn.Module):"""Residual block for resnet over 50 layers"""expansion = 4def __init__(self, in_channels, out_channels, stride=1):super().__init__()self.residual_function = nn.Sequential(nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False),nn.BatchNorm2d(out_channels),nn.ReLU(inplace=True),nn.Conv2d(out_channels, out_channels, stride=stride, kernel_size=3, padding=1, bias=False),nn.BatchNorm2d(out_channels),nn.ReLU(inplace=True),nn.Conv2d(out_channels, out_channels * BottleNeck.expansion, kernel_size=1, bias=False),nn.BatchNorm2d(out_channels * BottleNeck.expansion),)self.shortcut = nn.Sequential()if stride != 1 or in_channels != out_channels * BottleNeck.expansion:self.shortcut = nn.Sequential(nn.Conv2d(in_channels, out_channels * BottleNeck.expansion, stride=stride, kernel_size=1, bias=False),nn.BatchNorm2d(out_channels * BottleNeck.expansion))def forward(self, x):return nn.ReLU(inplace=True)(self.residual_function(x) + self.shortcut(x))class ResNet(nn.Module):def __init__(self, block, num_block, num_classes=100):super().__init__()self.in_channels = 64self.conv1 = nn.Sequential(nn.Conv2d(3, 64, kernel_size=3, padding=1, bias=False),nn.BatchNorm2d(64),nn.ReLU(inplace=True))#we use a different inputsize than the original paper#so conv2_x's stride is 1self.conv2_x = self._make_layer(block, 64, num_block[0], 1)self.conv3_x = self._make_layer(block, 128, num_block[1], 2)self.conv4_x = self._make_layer(block, 256, num_block[2], 2)self.conv5_x = self._make_layer(block, 512, num_block[3], 2)self.avg_pool = nn.AdaptiveAvgPool2d((1, 1))self.fc = nn.Linear(512 * block.expansion, num_classes)def _make_layer(self, block, out_channels, num_blocks, stride):"""make resnet layers(by layer i didnt mean this 'layer' was thesame as a neuron netowork layer, ex. 
conv layer), one layer maycontain more than one residual blockArgs:block: block type, basic block or bottle neck blockout_channels: output depth channel number of this layernum_blocks: how many blocks per layerstride: the stride of the first block of this layerReturn:return a resnet layer"""# we have num_block blocks per layer, the first block# could be 1 or 2, other blocks would always be 1strides = [stride] + [1] * (num_blocks - 1)layers = []for stride in strides:layers.append(block(self.in_channels, out_channels, stride))self.in_channels = out_channels * block.expansionreturn nn.Sequential(*layers)def forward(self, x):output = self.conv1(x)output = self.conv2_x(output)output = self.conv3_x(output)output = self.conv4_x(output)output = self.conv5_x(output)output = self.avg_pool(output)output = output.view(output.size(0), -1)output = self.fc(output)return outputdef resnet50():""" return a ResNet 50 object"""return ResNet(BottleNeck, [3, 4, 6, 3])
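
As a quick shape check (a minimal sketch, assuming the two definitions above are saved as my_vgg.py and my_resnet.py, the module names used by the training scripts below), both networks should map a batch of CIFAR100-sized inputs to 100-dimensional logits:

import torch
from my_vgg import vgg16_bn
from my_resnet import resnet50

x = torch.randn(2, 3, 32, 32)  # a dummy batch of two 32x32 RGB images
for name, model in (("VGG16-BN", vgg16_bn()), ("ResNet50", resnet50())):
    logits = model(x)
    print(name, tuple(logits.shape))  # expected: (2, 100) for both models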

First, the teacher and student models are trained independently, and the accuracy of each is recorded.

Loss function: nn.CrossEntropyLoss()
Optimizer: torch.optim.SGD(model.parameters(), lr=0.02, momentum=0.9, weight_decay=5e-4)
Learning-rate schedule: torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[60, 120, 160], gamma=0.2)
Epochs: 200
Teacher model training code:
import torch
from torch import nn
from tqdm import tqdm
import torchvision
from torchvision import transforms
from torch.utils.data import DataLoader
from my_resnet import resnet50

def TeacherModel():
    """return a ResNet 50 object"""
    model = resnet50()
    return model

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

CIFAR100_TRAIN_MEAN = (0.5070751592371323, 0.48654887331495095, 0.4409178433670343)
CIFAR100_TRAIN_STD = (0.2673342858792401, 0.2564384629170883, 0.27615047132568404)

transform_train = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ToTensor(),
    transforms.Normalize(CIFAR100_TRAIN_MEAN, CIFAR100_TRAIN_STD)
])
transform_test = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(CIFAR100_TRAIN_MEAN, CIFAR100_TRAIN_STD)
])

train_dataset = torchvision.datasets.cifar.CIFAR100(
    root="./dataset/", train=True, transform=transform_train, download=True
)
test_dataset = torchvision.datasets.cifar.CIFAR100(
    root="./dataset/", train=False, transform=transform_test, download=True
)

train_loader = DataLoader(dataset=train_dataset, batch_size=128, num_workers=4, shuffle=True)
test_loader = DataLoader(dataset=test_dataset, batch_size=128, num_workers=4, shuffle=False)

if __name__ == "__main__":
    """train the teacher model from scratch"""
    model = TeacherModel().to(device)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.02, momentum=0.9, weight_decay=5e-4)
    train_scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[60, 120, 160], gamma=0.2)  # learning rate decay
    iter_per_epoch = len(train_loader)

    epochs = 200
    best_acc = 0.0
    global_step = 0
    for epoch in range(epochs):
        model.train()
        train_scheduler.step(epoch)
        for data, targets in tqdm(train_loader):
            data = data.to(device)
            targets = targets.to(device)

            optimizer.zero_grad()
            prediction = model(data)
            loss = criterion(prediction, targets)
            loss.backward()
            optimizer.step()
            global_step += 1

        model.eval()
        num_correct = 0
        num_samples = 0
        with torch.no_grad():
            for x, y in test_loader:
                x = x.to(device)
                y = y.to(device)
                prediction = model(x)
                prediction = prediction.max(1).indices
                num_correct += (prediction == y).sum()
                num_samples += prediction.size(0)
            acc = (num_correct / num_samples).item()

        if acc > best_acc:
            torch.save(model.state_dict(), './weights/teacher_cifar100/teacher_{}.pth'.format(acc))
            best_acc = acc

        print("Epoch {}: current best accuracy: {:.4f}".format(epoch, best_acc))

    """
    Teacher model
    Epoch 199: best accuracy: 0.7840
    """
The teacher model reaches a classification accuracy of 78.40%.
Student model training code:
import torch
from torch import nn
from tqdm import tqdm
import torchvision
from torchvision import transforms
from torch.utils.data import DataLoader
from my_vgg import vgg16_bn

def StudentModel():
    model = vgg16_bn()
    return model

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

CIFAR100_TRAIN_MEAN = (0.5070751592371323, 0.48654887331495095, 0.4409178433670343)
CIFAR100_TRAIN_STD = (0.2673342858792401, 0.2564384629170883, 0.27615047132568404)

transform_train = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ToTensor(),
    transforms.Normalize(CIFAR100_TRAIN_MEAN, CIFAR100_TRAIN_STD)
])
transform_test = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(CIFAR100_TRAIN_MEAN, CIFAR100_TRAIN_STD)
])

train_dataset = torchvision.datasets.cifar.CIFAR100(
    root="./dataset/", train=True, transform=transform_train, download=True
)
test_dataset = torchvision.datasets.cifar.CIFAR100(
    root="./dataset/", train=False, transform=transform_test, download=True
)

train_loader = DataLoader(dataset=train_dataset, batch_size=128, num_workers=4, shuffle=True)
test_loader = DataLoader(dataset=test_dataset, batch_size=128, num_workers=4, shuffle=False)

if __name__ == "__main__":
    """train the student model from scratch"""
    model = StudentModel().to(device)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.02, momentum=0.9, weight_decay=5e-4)
    train_scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[60, 120, 160], gamma=0.2)  # learning rate decay
    iter_per_epoch = len(train_loader)

    epochs = 200
    best_acc = 0.0
    global_step = 0
    for epoch in range(epochs):
        model.train()
        train_scheduler.step(epoch)
        for data, targets in tqdm(train_loader):
            data = data.to(device)
            targets = targets.to(device)

            optimizer.zero_grad()
            prediction = model(data)
            loss = criterion(prediction, targets)
            loss.backward()
            optimizer.step()
            global_step += 1

        model.eval()
        num_correct = 0
        num_samples = 0
        with torch.no_grad():
            for x, y in test_loader:
                x = x.to(device)
                y = y.to(device)
                prediction = model(x)
                prediction = prediction.max(1).indices
                num_correct += (prediction == y).sum()
                num_samples += prediction.size(0)
            acc = (num_correct / num_samples).item()

        if acc > best_acc:
            torch.save(model.state_dict(), './weights/student_cifar100_vgg16/student_{}.pth'.format(acc))
            best_acc = acc

        print("Epoch {}: current best accuracy: {:.4f}".format(epoch, best_acc))

    """
    Student model VGG16
    Epoch 199: best accuracy: 0.7121
    """
The student model trained on its own reaches a classification accuracy of 71.21%.

Teacher-student distillation training: the student loss is the cross-entropy (CE) loss, and the distillation loss is the KL-divergence loss.

Key point 1: the overall distillation training loss is loss = (1 - alpha) * T * T * soft_loss + alpha * hard_loss, where alpha is a weighting parameter and T is the temperature used to soften the targets (see the consolidated sketch after these notes).

For details, see the bilibili video [3].

Key point 2: how the distillation loss is computed. The student predictions are divided by the temperature and passed through F.log_softmax, and the teacher predictions are divided by the temperature and passed through F.softmax; this matches nn.KLDivLoss, which expects log-probabilities as its input and probabilities as its target:

distillation_loss = soft_loss(
    F.log_softmax(student_predictions / Temp, dim=1),
    F.softmax(teacher_predictions / Temp, dim=1)
)

Key point 3: the teacher model must be put in eval() mode, and its outputs must be computed under with torch.no_grad() and detached with .detach():
with torch.no_grad():
    teacher_predictions = teacher_model(data)
    teacher_predictions = teacher_predictions.detach()
Key point 4: the choice of the loss weight alpha and the temperature T. Following the settings in the bilibili video [3], I set alpha = 0.3 and T = 4.
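
Putting the four key points together, here is a minimal self-contained sketch of the loss computation (random tensors stand in for real model outputs; alpha and Temp follow the settings above):

import torch
import torch.nn as nn
import torch.nn.functional as F

Temp, alpha = 4.0, 0.3
hard_loss = nn.CrossEntropyLoss()
soft_loss = nn.KLDivLoss(reduction='batchmean')

# stand-ins for a batch of 8 samples over 100 classes
student_predictions = torch.randn(8, 100, requires_grad=True)
teacher_predictions = torch.randn(8, 100)  # already computed without gradients
targets = torch.randint(0, 100, (8,))

student_loss = hard_loss(student_predictions, targets)
distillation_loss = soft_loss(
    F.log_softmax(student_predictions / Temp, dim=1),  # input: log-probabilities
    F.softmax(teacher_predictions / Temp, dim=1)       # target: probabilities
)
# multiplying by T*T compensates for the 1/T scaling of the soft-target gradients
loss = (1 - alpha) * Temp * Temp * distillation_loss + alpha * student_loss
loss.backward()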
Distillation training code:
import torch
from torch import nn
import torch.nn.functional as F
import torchvision
from torchvision import transforms
from torch.utils.data import DataLoader
from torchinfo import summary
from tqdm import tqdm
from teacher_cifar100 import TeacherModel
from vgg_student_cifar100 import StudentModel

torch.manual_seed(0)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
torch.backends.cudnn.benchmark = True

CIFAR100_TRAIN_MEAN = (0.5070751592371323, 0.48654887331495095, 0.4409178433670343)
CIFAR100_TRAIN_STD = (0.2673342858792401, 0.2564384629170883, 0.27615047132568404)

transform_train = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ToTensor(),
    transforms.Normalize(CIFAR100_TRAIN_MEAN, CIFAR100_TRAIN_STD)
])
transform_test = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(CIFAR100_TRAIN_MEAN, CIFAR100_TRAIN_STD)
])

# load CIFAR100 datasets
train_dataset = torchvision.datasets.cifar.CIFAR100(
    root="./dataset/", train=True, transform=transform_train, download=True
)
test_dataset = torchvision.datasets.cifar.CIFAR100(
    root="./dataset/", train=False, transform=transform_test, download=True
)

train_loader = DataLoader(dataset=train_dataset, batch_size=128, num_workers=4, shuffle=True)
test_loader = DataLoader(dataset=test_dataset, batch_size=128, num_workers=4, shuffle=False)

if __name__ == "__main__":
    """distill the trained teacher model into the student model"""
    teacher_model = TeacherModel().to(device).eval()
    teacher_model.load_state_dict(torch.load("./weights/teacher_cifar100/teacher_0.7839999794960022.pth"))

    student_model = StudentModel().to(device)

    Temp = 4
    alpha = 0.3

    hard_loss = nn.CrossEntropyLoss()
    soft_loss = nn.KLDivLoss(reduction='batchmean')
    optimizer = torch.optim.SGD(student_model.parameters(), lr=0.02, momentum=0.9, weight_decay=5e-4)
    train_scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[60, 120, 160], gamma=0.2)  # learning rate decay
    iter_per_epoch = len(train_loader)

    epochs = 200
    best_acc = 0.0
    global_step = 0
    for epoch in range(epochs):
        student_model.train()
        train_scheduler.step(epoch)
        for data, targets in tqdm(train_loader):
            data = data.to(device)
            targets = targets.to(device)

            optimizer.zero_grad()

            # teacher predictions, computed without gradients (see [3])
            with torch.no_grad():
                teacher_predictions = teacher_model(data)
                teacher_predictions = teacher_predictions.detach()

            student_predictions = student_model(data)
            student_loss = hard_loss(student_predictions, targets)
            distillation_loss = soft_loss(
                F.log_softmax(student_predictions / Temp, dim=1),  # see [3]
                F.softmax(teacher_predictions / Temp, dim=1)
            )
            loss = (1 - alpha) * Temp * Temp * distillation_loss + alpha * student_loss  # T^2 factor, see [3]

            loss.backward()
            optimizer.step()
            global_step += 1

        student_model.eval()
        num_correct = 0
        num_samples = 0
        with torch.no_grad():
            for x, y in test_loader:
                x = x.to(device)
                y = y.to(device)
                prediction = student_model(x)
                prediction = prediction.max(1).indices
                num_correct += (prediction == y).sum()
                num_samples += prediction.size(0)
            acc = (num_correct / num_samples).item()

        if acc > best_acc:
            torch.save(student_model.state_dict(), './weights/knowledge_distillation_cifar100_vgg16/student_{}.pth'.format(acc))
            best_acc = acc

        print("Epoch {}: current best accuracy: {:.4f}".format(epoch, best_acc))

    """
    Distilled student model: ResNet50 --> VGG16
    ResNet50  best accuracy: 0.7840
    VGG16     best accuracy: 0.7121
    Temp = 4, alpha = 0.3: Epoch 199: best accuracy: 0.7388
    """

Knowledge distillation: experimental comparison

Model                     Architecture    Classification accuracy
Student model             VGG16           71.21%
Teacher model             ResNet50        78.40%
Distilled student model   VGG16           73.88%

Summary and Analysis

The teacher-student distillation experiment from ResNet50 to VGG16 on CIFAR100 confirms the effectiveness of the knowledge distillation method proposed by Hinton et al. The experiments also highlighted several details that require care: the soft loss uses F.log_softmax for the student but F.softmax for the teacher; the teacher model must be set to eval() and its outputs detached so that no gradients flow through it; the temperature T and the loss weight alpha must be chosen appropriately; and the soft loss must be scaled by T^2.

Acknowledgments

[1] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean, "Distilling the Knowledge in a Neural Network," arXiv:1503.02531, 2015.

[2] https://github.com/weiaicunzai/pytorch-cifar100

[3] https://www.bilibili.com/video/BV1Go4y1u72L/?spm_id_from=333.337.search-card.all.click&vd_source=e71c4eae27444c44f2de6239f04c4757
