[AI Learning: Modifying HD-GCN for 18 Keypoints]

  • Training
  • Modifications

Training

Please refer to the article: [AI Learning: Training HD-GCN on Your Own Dataset].

Modifications

Following the body-region partition used for the 25 keypoints in the original source code, we divide the 18 keypoints as follows (a small coverage check follows the list):

Head:

  • nose
  • left eye and left ear
  • right eye and right ear

Upper limbs:

  • left shoulder, left elbow, left wrist
  • right shoulder, right elbow, right wrist

Lower limbs:

  • left hip, left knee, left ankle
  • right hip, right knee, right ankle

Torso:

  • neck, both shoulders, both hips
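
As a quick sanity check, the grouping above can be written out as index lists and verified to cover all 18 joints. This is only a minimal sketch using the same 1-based joint numbering as get_groups further down; it is not part of the project code:

groups_18 = [
    [1],                 # nose
    [15, 17],            # left eye, left ear
    [16, 18],            # right eye, right ear
    [3, 4, 5],           # left shoulder, elbow, wrist
    [6, 7, 8],           # right shoulder, elbow, wrist
    [9, 10, 11],         # left hip, knee, ankle
    [12, 13, 14],        # right hip, knee, ankle
    [2, 3, 6, 9, 12],    # neck, both shoulders, both hips
]
covered = set().union(*[set(g) for g in groups_18])
assert covered == set(range(1, 19)), "every one of the 18 joints should appear in some group"
print(len(groups_18), "groups ->", len(groups_18) - 1, "hierarchy levels")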

For the model porting and modification part of [AI Learning: Training HD-GCN on Your Own Dataset], my changes are as follows:
HDhierarchy.py:

import sys

import numpy as np

sys.path.extend(['../'])

num_node = 18


def edge2mat(link, num_node):
    A = np.zeros((num_node, num_node))
    for i, j in link:
        A[j, i] = 1
    return A


def normalize_digraph(A):
    Dl = np.sum(A, 0)
    h, w = A.shape
    Dn = np.zeros((w, w))
    for i in range(w):
        if Dl[i] > 0:
            Dn[i, i] = Dl[i] ** (-1)
    AD = np.dot(A, Dn)
    return AD


def get_spatial_graph(num_node, hierarchy):
    A = []
    for i in range(len(hierarchy)):
        A.append(normalize_digraph(edge2mat(hierarchy[i], num_node)))
    A = np.stack(A)
    return A


def get_spatial_graph_original(num_node, self_link, inward, outward):
    I = edge2mat(self_link, num_node)
    In = normalize_digraph(edge2mat(inward, num_node))
    Out = normalize_digraph(edge2mat(outward, num_node))
    A = np.stack((I, In, Out))
    return A


def normalize_adjacency_matrix(A):
    node_degrees = A.sum(-1)
    degs_inv_sqrt = np.power(node_degrees, -0.5)
    norm_degs_matrix = np.eye(len(node_degrees)) * degs_inv_sqrt
    return (norm_degs_matrix @ A @ norm_degs_matrix).astype(np.float32)


def get_graph(num_node, edges):
    I = edge2mat(edges[0], num_node)
    Forward = normalize_digraph(edge2mat(edges[1], num_node))
    Reverse = normalize_digraph(edge2mat(edges[2], num_node))
    A = np.stack((I, Forward, Reverse))
    return A  # 3, 18, 18


def get_hierarchical_graph(num_node, edges):
    A = []
    for edge in edges:
        A.append(get_graph(num_node, edge))
    A = np.stack(A)
    return A


def get_groups(dataset='NTU', CoM=18):
    groups = []

    if dataset == 'NTU':
        if CoM == 2:
            groups.append([2])
            groups.append([1, 21])
            groups.append([13, 17, 3, 5, 9])
            groups.append([14, 18, 4, 6, 10])
            groups.append([15, 19, 7, 11])
            groups.append([16, 20, 8, 12])
            groups.append([22, 23, 24, 25])

        ## Center of mass : 21
        elif CoM == 21:
            groups.append([21])
            groups.append([2, 3, 5, 9])
            groups.append([4, 6, 10, 1])
            groups.append([7, 11, 13, 17])
            groups.append([8, 12, 14, 18])
            groups.append([22, 23, 24, 25, 15, 19])
            groups.append([16, 20])

        ## Center of Mass : 1
        elif CoM == 1:
            groups.append([1])
            groups.append([2, 13, 17])
            groups.append([14, 18, 21])
            groups.append([3, 5, 9, 15, 19])
            groups.append([4, 6, 10, 16, 20])
            groups.append([7, 11])
            groups.append([8, 12, 22, 23, 24, 25])

        elif CoM == 18:
            # Head
            groups.append([1])              # nose
            groups.append([15, 17])         # left eye and left ear
            groups.append([16, 18])         # right eye and right ear
            # Upper limbs
            groups.append([3, 4, 5])        # left shoulder, left elbow, left wrist
            groups.append([6, 7, 8])        # right shoulder, right elbow, right wrist
            # Lower limbs
            groups.append([9, 10, 11])      # left hip, left knee, left ankle
            groups.append([12, 13, 14])     # right hip, right knee, right ankle
            # Torso
            groups.append([2, 3, 6, 9, 12]) # neck, both shoulders, both hips

        else:
            raise ValueError()

    return groups


def get_edgeset(dataset='NTU', CoM=18):
    groups = get_groups(dataset=dataset, CoM=CoM)

    for i, group in enumerate(groups):
        group = [i - 1 for i in group]
        groups[i] = group

    identity = []
    forward_hierarchy = []
    reverse_hierarchy = []

    for i in range(len(groups) - 1):
        self_link = groups[i] + groups[i + 1]
        self_link = [(i, i) for i in self_link]
        identity.append(self_link)

        forward_g = []
        for j in groups[i]:
            for k in groups[i + 1]:
                forward_g.append((j, k))
        forward_hierarchy.append(forward_g)

        reverse_g = []
        for j in groups[-1 - i]:
            for k in groups[-2 - i]:
                reverse_g.append((j, k))
        reverse_hierarchy.append(reverse_g)

    edges = []
    for i in range(len(groups) - 1):
        edges.append([identity[i], forward_hierarchy[i], reverse_hierarchy[-1 - i]])

    return edges


class Graph:
    def __init__(self, CoM=18, labeling_mode='spatial'):
        self.num_node = num_node
        self.CoM = CoM
        self.A = self.get_adjacency_matrix(labeling_mode)

    def get_adjacency_matrix(self, labeling_mode=None):
        if labeling_mode is None:
            return self.A
        if labeling_mode == 'spatial':
            A = get_hierarchical_graph(num_node, get_edgeset(dataset='NTU', CoM=self.CoM))  # L, 3, 18, 18
        else:
            raise ValueError()

        return A, self.CoM
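
Before wiring this into the network, it helps to confirm the adjacency tensor has the expected shape: with 8 groups there are 7 hierarchy levels, so A should come out as (7, 3, 18, 18). A minimal check, assuming the file above is saved as net/HDhierarchy.py:

from net.HDhierarchy import Graph

A, com = Graph(CoM=18, labeling_mode='spatial').A   # Graph.A is the (A, CoM) tuple returned above
print(A.shape, com)                                 # expected: (7, 3, 18, 18) 18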

Along with the corresponding changes to the network:
hd_gcn.py:

import torch
import torch.nn as nn
import math

import numpy as np
from einops import rearrange, repeat

from net.HDhierarchy import get_groups


def import_class(name):
    components = name.split('.')
    mod = __import__(components[0])
    for comp in components[1:]:
        mod = getattr(mod, comp)
    return mod


def conv_branch_init(conv, branches):
    weight = conv.weight
    n = weight.size(0)
    k1 = weight.size(1)
    k2 = weight.size(2)
    nn.init.normal_(weight, 0, math.sqrt(2. / (n * k1 * k2 * branches)))
    if conv.bias is not None:
        nn.init.constant_(conv.bias, 0)


def conv_init(conv):
    if conv.weight is not None:
        nn.init.kaiming_normal_(conv.weight, mode='fan_out')
    if conv.bias is not None:
        nn.init.constant_(conv.bias, 0)


def bn_init(bn, scale):
    nn.init.constant_(bn.weight, scale)
    nn.init.constant_(bn.bias, 0)


def weights_init(m):
    classname = m.__class__.__name__
    if classname.find('Conv') != -1:
        if hasattr(m, 'weight'):
            nn.init.kaiming_normal_(m.weight, mode='fan_out')
        if hasattr(m, 'bias') and m.bias is not None and isinstance(m.bias, torch.Tensor):
            nn.init.constant_(m.bias, 0)
    elif classname.find('BatchNorm') != -1:
        if hasattr(m, 'weight') and m.weight is not None:
            m.weight.data.normal_(1.0, 0.02)
        if hasattr(m, 'bias') and m.bias is not None:
            m.bias.data.fill_(0)


class TemporalConv(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, dilation=1):
        super(TemporalConv, self).__init__()
        pad = (kernel_size + (kernel_size - 1) * (dilation - 1) - 1) // 2
        self.conv = nn.Conv2d(
            in_channels,
            out_channels,
            kernel_size=(kernel_size, 1),
            padding=(pad, 0),
            stride=(stride, 1),
            dilation=(dilation, 1),
            bias=False)
        self.bias = nn.Parameter(torch.zeros(1, out_channels, 1, 1), requires_grad=True)
        self.bn = nn.BatchNorm2d(out_channels)

    def forward(self, x):
        x = self.conv(x) + self.bias
        x = self.bn(x)
        return x


class MultiScale_TemporalConv(nn.Module):
    def __init__(self,
                 in_channels,
                 out_channels,
                 kernel_size=5,
                 stride=1,
                 dilations=[1, 2],
                 residual=True,
                 residual_kernel_size=1):
        super().__init__()
        assert out_channels % (len(dilations) + 2) == 0, '# out channels should be multiples of # branches'

        # Multiple branches of temporal convolution
        self.num_branches = len(dilations) + 2
        branch_channels = out_channels // self.num_branches
        if type(kernel_size) == list:
            assert len(kernel_size) == len(dilations)
        else:
            kernel_size = [kernel_size] * len(dilations)

        # Temporal Convolution branches
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_channels, branch_channels, kernel_size=1, padding=0),
                nn.BatchNorm2d(branch_channels),
                nn.ReLU(inplace=True),
                TemporalConv(branch_channels, branch_channels, kernel_size=ks, stride=stride, dilation=dilation),
            )
            for ks, dilation in zip(kernel_size, dilations)
        ])

        # Additional Max & 1x1 branch
        self.branches.append(nn.Sequential(
            nn.Conv2d(in_channels, branch_channels, kernel_size=1, padding=0),
            nn.BatchNorm2d(branch_channels),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=(3, 1), stride=(stride, 1), padding=(1, 0)),
            nn.BatchNorm2d(branch_channels)
        ))

        self.branches.append(nn.Sequential(
            nn.Conv2d(in_channels, branch_channels, kernel_size=1, padding=0, stride=(stride, 1)),
            nn.BatchNorm2d(branch_channels)
        ))

        # Residual connection
        if not residual:
            self.residual = lambda x: 0
        elif (in_channels == out_channels) and (stride == 1):
            self.residual = lambda x: x
        else:
            self.residual = TemporalConv(in_channels, out_channels, kernel_size=residual_kernel_size, stride=stride)

        # initialize
        self.apply(weights_init)

    def forward(self, x):
        branch_outs = []
        for tempconv in self.branches:
            out = tempconv(x)
            branch_outs.append(out)

        out = torch.cat(branch_outs, dim=1)
        out += self.residual(x)
        return out


class residual_conv(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=5, stride=1):
        super(residual_conv, self).__init__()
        pad = int((kernel_size - 1) / 2)
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=(kernel_size, 1), padding=(pad, 0),
                              stride=(stride, 1))
        self.bn = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

        conv_init(self.conv)
        bn_init(self.bn, 1)

    def forward(self, x):
        x = self.bn(self.conv(x))
        return x


class EdgeConv(nn.Module):
    def __init__(self, in_channels, out_channels, k):
        super(EdgeConv, self).__init__()
        self.k = k
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels * 2, out_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.LeakyReLU(inplace=True, negative_slope=0.2)
        )
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                conv_init(m)
            elif isinstance(m, nn.BatchNorm2d):
                bn_init(m, 1)

    def forward(self, x, dim=4):  # N, C, T, V
        if dim == 3:
            N, C, L = x.size()
            pass
        else:
            N, C, T, V = x.size()
            x = x.mean(dim=-2, keepdim=False)  # N, C, V

        x = self.get_graph_feature(x, self.k)
        x = self.conv(x)
        x = x.max(dim=-1, keepdim=False)[0]

        if dim == 3:
            pass
        else:
            x = repeat(x, 'n c v -> n c t v', t=T)

        return x

    def knn(self, x, k):
        inner = -2 * torch.matmul(x.transpose(2, 1), x)  # N, V, V
        xx = torch.sum(x ** 2, dim=1, keepdim=True)
        pairwise_distance = -xx - inner - xx.transpose(2, 1)

        idx = pairwise_distance.topk(k=k, dim=-1)[1]  # N, V, k
        return idx

    def get_graph_feature(self, x, k, idx=None):
        N, C, V = x.size()
        if idx is None:
            idx = self.knn(x, k=k)
        device = x.get_device()

        idx_base = torch.arange(0, N, device=device).view(-1, 1, 1) * V
        idx = idx + idx_base
        idx = idx.view(-1)

        x = rearrange(x, 'n c v -> n v c')
        feature = rearrange(x, 'n v c -> (n v) c')[idx, :]
        feature = feature.view(N, V, k, C)
        x = repeat(x, 'n v c -> n v k c', k=k)

        feature = torch.cat((feature - x, x), dim=3)
        feature = rearrange(feature, 'n v k c -> n c v k')

        return feature


class AHA(nn.Module):
    def __init__(self, in_channels, num_layers, CoM):
        super(AHA, self).__init__()
        self.num_layers = num_layers

        groups = get_groups(dataset='NTU', CoM=CoM)

        for i, group in enumerate(groups):
            group = [i - 1 for i in group]
            groups[i] = group

        inter_channels = in_channels // 4

        self.layers = [groups[i] + groups[i + 1] for i in range(len(groups) - 1)]

        self.conv_down = nn.Sequential(
            nn.Conv2d(in_channels, inter_channels, kernel_size=1),
            nn.BatchNorm2d(inter_channels),
            nn.ReLU(inplace=True)
        )

        self.edge_conv = EdgeConv(inter_channels, inter_channels, k=3)

        self.aggregate = nn.Conv1d(inter_channels, in_channels, kernel_size=1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        N, C, L, T, V = x.size()

        x_t = x.max(dim=-2, keepdim=False)[0]
        x_t = self.conv_down(x_t)

        x_sampled = []
        for i in range(self.num_layers):
            s_t = x_t[:, :, i, self.layers[i]]
            s_t = s_t.mean(dim=-1, keepdim=True)
            x_sampled.append(s_t)
        x_sampled = torch.cat(x_sampled, dim=2)

        att = self.edge_conv(x_sampled, dim=3)
        att = self.aggregate(att).view(N, C, L, 1, 1)

        out = (x * self.sigmoid(att)).sum(dim=2, keepdim=False)

        return out


class HD_Gconv(nn.Module):
    def __init__(self, in_channels, out_channels, A, adaptive=True, residual=True, att=False, CoM=18):
        super(HD_Gconv, self).__init__()
        self.num_layers = A.shape[0]
        self.num_subset = A.shape[1]

        self.att = att

        inter_channels = out_channels // (self.num_subset + 1)
        self.adaptive = adaptive

        if adaptive:
            self.PA = nn.Parameter(torch.from_numpy(A.astype(np.float32)), requires_grad=True)
        else:
            raise ValueError()

        self.conv_down = nn.ModuleList()
        self.conv = nn.ModuleList()
        for i in range(self.num_layers):
            self.conv_d = nn.ModuleList()
            self.conv_down.append(nn.Sequential(
                nn.Conv2d(in_channels, inter_channels, kernel_size=1),
                nn.BatchNorm2d(inter_channels),
                nn.ReLU(inplace=True)
            ))
            for j in range(self.num_subset):
                self.conv_d.append(nn.Sequential(
                    nn.Conv2d(inter_channels, inter_channels, kernel_size=1),
                    nn.BatchNorm2d(inter_channels)
                ))
            self.conv_d.append(EdgeConv(inter_channels, inter_channels, k=5))
            self.conv.append(self.conv_d)

        if self.att:
            self.aha = AHA(out_channels, num_layers=self.num_layers, CoM=CoM)

        if residual:
            if in_channels != out_channels:
                self.down = nn.Sequential(
                    nn.Conv2d(in_channels, out_channels, 1),
                    nn.BatchNorm2d(out_channels)
                )
            else:
                self.down = lambda x: x
        else:
            self.down = lambda x: 0

        self.bn = nn.BatchNorm2d(out_channels)  # 7 conv layers
        self.relu = nn.ReLU(inplace=True)

        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                conv_init(m)
            elif isinstance(m, nn.BatchNorm2d):
                bn_init(m, 1)
        bn_init(self.bn, 1e-6)

    def forward(self, x):
        A = self.PA

        out = []
        for i in range(self.num_layers):
            y = []
            x_down = self.conv_down[i](x)
            for j in range(self.num_subset):
                z = torch.einsum('n c t u, v u -> n c t v', x_down, A[i, j])
                z = self.conv[i][j](z)
                y.append(z)
            y_edge = self.conv[i][-1](x_down)
            y.append(y_edge)
            y = torch.cat(y, dim=1)
            out.append(y)

        out = torch.stack(out, dim=2)
        if self.att:
            out = self.aha(out)
        else:
            out = out.sum(dim=2, keepdim=False)

        out = self.bn(out)
        out += self.down(x)
        out = self.relu(out)

        return out


class TCN_GCN_unit(nn.Module):
    def __init__(self, in_channels, out_channels, A, stride=1, residual=True, adaptive=True,
                 kernel_size=5, dilations=[1, 2], att=True, CoM=18):
        super(TCN_GCN_unit, self).__init__()
        self.gcn1 = HD_Gconv(in_channels, out_channels, A, adaptive=adaptive, att=att, CoM=CoM)
        self.tcn1 = MultiScale_TemporalConv(out_channels, out_channels, kernel_size=kernel_size, stride=stride,
                                            dilations=dilations, residual=False)
        self.relu = nn.ReLU(inplace=True)

        if not residual:
            self.residual = lambda x: 0
        elif (in_channels == out_channels) and (stride == 1):
            self.residual = lambda x: x
        else:
            self.residual = residual_conv(in_channels, out_channels, kernel_size=1, stride=stride)

    def forward(self, x):
        y = self.relu(self.tcn1(self.gcn1(x)) + self.residual(x))
        return y


class Model(nn.Module):
    def __init__(self, num_class=2, num_point=18, num_person=1, graph=None, graph_args=dict(), in_channels=3,
                 drop_out=0, adaptive=True):
        super(Model, self).__init__()

        if graph is None:
            raise ValueError()
        else:
            Graph = import_class(graph)
            self.graph = Graph(**graph_args)

        A, CoM = self.graph.A
        self.dataset = 'NTU' if num_point == 18 else 'UCLA'

        self.num_class = num_class
        self.num_point = num_point
        self.data_bn = nn.BatchNorm1d(num_person * in_channels * num_point)

        base_channels = 64

        self.l1 = TCN_GCN_unit(3, base_channels, A, residual=False, adaptive=adaptive, att=False, CoM=CoM)
        self.l2 = TCN_GCN_unit(base_channels, base_channels, A, adaptive=adaptive, CoM=CoM)
        self.l3 = TCN_GCN_unit(base_channels, base_channels, A, adaptive=adaptive, CoM=CoM)
        self.l4 = TCN_GCN_unit(base_channels, base_channels, A, adaptive=adaptive, CoM=CoM)
        self.l5 = TCN_GCN_unit(base_channels, base_channels * 2, A, stride=2, adaptive=adaptive, CoM=CoM)
        self.l6 = TCN_GCN_unit(base_channels * 2, base_channels * 2, A, adaptive=adaptive, CoM=CoM)
        self.l7 = TCN_GCN_unit(base_channels * 2, base_channels * 2, A, adaptive=adaptive, CoM=CoM)
        self.l8 = TCN_GCN_unit(base_channels * 2, base_channels * 4, A, stride=2, adaptive=adaptive, CoM=CoM)
        self.l9 = TCN_GCN_unit(base_channels * 4, base_channels * 4, A, adaptive=adaptive, CoM=CoM)
        self.l10 = TCN_GCN_unit(base_channels * 4, base_channels * 4, A, adaptive=adaptive, CoM=CoM)

        self.fc = nn.Linear(base_channels * 4, num_class)

        nn.init.normal_(self.fc.weight, 0, math.sqrt(2. / num_class))
        bn_init(self.data_bn, 1)
        if drop_out:
            self.drop_out = nn.Dropout(drop_out)
        else:
            self.drop_out = lambda x: x

    def forward(self, x):
        N, C, T, V, M = x.size()

        x = rearrange(x, 'n c t v m -> n (m v c) t')
        x = self.data_bn(x)
        x = rearrange(x, 'n (m v c) t -> (n m) c t v', m=M, v=V)

        x = self.l1(x)
        x = self.l2(x)
        x = self.l3(x)
        x = self.l4(x)
        x = self.l5(x)
        x = self.l6(x)
        x = self.l7(x)
        x = self.l8(x)
        x = self.l9(x)
        x = self.l10(x)

        # N*M, C, T, V
        c_new = x.size(1)
        x = x.view(N, M, c_new, -1)
        x = x.mean(3).mean(1)
        x = self.drop_out(x)

        return self.fc(x)
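
A quick forward-pass smoke test helps catch shape mistakes before launching full training. This is only a sketch under the assumption that the two files above are saved as net/HDhierarchy.py and net/hd_gcn.py; since EdgeConv.get_graph_feature calls x.get_device(), it is meant to be run on a CUDA device:

import torch
from net.hd_gcn import Model

model = Model(num_class=2, num_point=18, num_person=1,
              graph='net.HDhierarchy.Graph',
              graph_args={'labeling_mode': 'spatial', 'CoM': 18}).cuda()
x = torch.randn(2, 3, 30, 18, 1).cuda()   # N, C, T, V, M; T=30 matches window_size in the yaml below
print(model(x).shape)                     # expected: torch.Size([2, 2])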

Finally, we write the YAML configuration file needed for training:

work_dir: ./work_dir/recognition/kinetics_skeleton/HD_GCN

# feeder
feeder: feeder.feeder.Feeder
train_feeder_args:
  random_choose: True
  random_move: True
  window_size: 30
  data_path: C:/WorkFiles/company_server_SSH/st-gcn-master/dataset/HDdataset/train_data.npy
  label_path: C:/WorkFiles/company_server_SSH/st-gcn-master/dataset/HDdataset/train_label.pkl
test_feeder_args:
  data_path: C:/WorkFiles/company_server_SSH/st-gcn-master/dataset/HDdataset/val_data.npy
  label_path: C:/WorkFiles/company_server_SSH/st-gcn-master/dataset/HDdataset/val_label.pkl

# model
model: net.hd_gcn.Model
model_args:
  in_channels: 3
  num_class: 2
  num_person: 1
  graph: net.HDhierarchy.Graph
  graph_args:
    labeling_mode: 'spatial'
    CoM: 18

# training
device: [0]
batch_size: 64
test_batch_size: 64

# optim
base_lr: 0.01
step: [20, 40, 60, 80]
num_epoch: 100
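
The model_args block above is passed straight into Model(**model_args), and the graph string is resolved inside Model by import_class. A minimal sketch of that plumbing, assuming the config is saved under a hypothetical name such as train_hdgcn18.yaml and that the net package is on the Python path:

import yaml
from net.hd_gcn import Model

with open('train_hdgcn18.yaml', 'r', encoding='utf-8') as f:   # hypothetical file name
    cfg = yaml.safe_load(f)

model = Model(**cfg['model_args'])        # graph / graph_args handled inside Model via import_class
print(model.num_point, model.num_class)   # expected: 18 2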

One more thing to note: if the clips in your training dataset contain more than one person, you must change num_person accordingly; otherwise data_bn will raise a batch-norm dimension-mismatch error, as the sketch below illustrates.
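
The reason is the rearrange in Model.forward: the input (N, C, T, V, M) is flattened to (N, M*V*C, T) before data_bn, while data_bn was built as BatchNorm1d(num_person * in_channels * num_point). A minimal illustration (not project code):

import torch
import torch.nn as nn
from einops import rearrange

num_person, in_channels, num_point = 2, 3, 18                   # e.g. clips containing two people
data_bn = nn.BatchNorm1d(num_person * in_channels * num_point)  # expects 108 features

x = torch.randn(4, 3, 30, 18, 2)              # N, C, T, V, M with M = 2
x = rearrange(x, 'n c t v m -> n (m v c) t')  # -> (4, 108, 30)
print(data_bn(x).shape)   # fine; with num_person=1 the layer would expect 54 features and raise a size mismatch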
