[AI] LoFTR Image Matching: A Source Code Walkthrough

0. LoFTR Overview

Local Feature TRansformer (LoFTR) is a detector-free local feature matching method. A Transformer module with self-attention and cross-attention layers processes the dense local features extracted by a convolutional backbone: dense matches are first established at low feature resolution (1/8 of the image dimensions), the high-confidence ones among them are kept, and a correlation-based method refines those to sub-pixel accuracy at higher resolution. The model's large receptive field lets the transformed features carry context and position information, and through the repeated self- and cross-attention layers LoFTR learns the matching priors present in the ground truth. In addition, LoFTR adopts linear attention to bring the computational complexity down to an acceptable level.

Source code: https://github.com/zju3dv/LoFTR
Paper: https://arxiv.org/pdf/2104.00680.pdf

Image matching methods before LoFTR depended heavily on detected keypoints: if a keypoint is not found, the match simply cannot be made, and two distinct locations whose surroundings look alike or lack texture will also fail to match. LoFTR's advantage is that it needs no keypoints up front and runs end to end, which makes it convenient to use.
(Figure: the LoFTR pipeline overview, summarized in the four steps below.)

  1. A local feature CNN extracts coarse feature maps $\tilde{F}^A$, $\tilde{F}^B$ (at 1/8 of the image resolution) and fine feature maps $\hat{F}^A$, $\hat{F}^B$ (at 1/2) from the image pair.
  2. The coarse features are flattened and augmented with positional encoding, then processed by the LoFTR module, whose self-attention and cross-attention layers are repeated $N_c$ times.
  3. A differentiable matching layer matches the transformed features $\tilde{F}^A_{tr}$, $\tilde{F}^B_{tr}$, yielding a confidence matrix $P_c$; entries above a confidence threshold that are also mutual nearest neighbors (MNN) are selected as the coarse match predictions $M_c$.
  4. For every coarse match, a local window of size $w \times w$ is cropped from the fine feature maps; within this window the match is refined to sub-pixel accuracy, giving the final match predictions $M_f$.

1. Source Code Walkthrough

The source code is cleanly layered. In the forward function of the model file loftr.py, the full forward path of the model is easy to read off (a minimal usage sketch follows the list):

  1. Local Feature CNN
  2. coarse-level loftr module
  3. match coarse-level
  4. fine-level refinement
  5. match fine-level
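
Before stepping through each stage, it helps to see how the model is driven end to end. Below is a minimal usage sketch modeled on the repo's demo notebook; the config import, checkpoint path, and input shapes are assumptions that depend on how you set up the repository:

import torch
from src.loftr import LoFTR, default_cfg  # assumed import, as in the repo's demo

matcher = LoFTR(config=default_cfg)
matcher.load_state_dict(torch.load("weights/indoor_ds.ckpt")['state_dict'])  # assumed weight path
matcher = matcher.eval().cuda()

# grayscale images, [N, 1, H, W], values in [0, 1] (random tensors stand in for real images)
img0 = torch.rand(1, 1, 480, 640).cuda()
img1 = torch.rand(1, 1, 480, 640).cuda()

batch = {'image0': img0, 'image1': img1}
with torch.no_grad():
    matcher(batch)                            # results are written back into `batch`
mkpts0 = batch['mkpts0_f'].cpu().numpy()      # matched keypoints in image0
mkpts1 = batch['mkpts1_f'].cpu().numpy()      # matched keypoints in image1
mconf = batch['mconf'].cpu().numpy()          # per-match confidence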

Feature Extraction: Local Feature CNN

The backbone CNN extracts the feature maps:

        # 1. Local Feature CNN
        data.update({
            'bs': data['image0'].size(0),
            'hw0_i': data['image0'].shape[2:], 'hw1_i': data['image1'].shape[2:]
        })

        if data['hw0_i'] == data['hw1_i']:  # faster & better BN convergence
            print(torch.cat([data['image0'], data['image1']], dim=0).shape)
            feats_c, feats_f = self.backbone(torch.cat([data['image0'], data['image1']], dim=0))
            print(feats_c.shape)  # 1/8
            print(feats_f.shape)  # 1/2
            (feat_c0, feat_c1), (feat_f0, feat_f1) = feats_c.split(data['bs']), feats_f.split(data['bs'])
            print(feat_c0.shape)
            print(feat_c1.shape)
            print(feat_f0.shape)
            print(feat_f1.shape)
        else:  # handle different input shapes
            (feat_c0, feat_f0), (feat_c1, feat_f1) = self.backbone(data['image0']), self.backbone(data['image1'])
            print(feat_c0.shape)
            print(feat_c1.shape)
            print(feat_f0.shape)
            print(feat_f1.shape)

        data.update({
            'hw0_c': feat_c0.shape[2:], 'hw1_c': feat_c1.shape[2:],
            'hw0_f': feat_f0.shape[2:], 'hw1_f': feat_f1.shape[2:]
        })

The backbone here is a standard stack of convolutions plus FPN-style fusions; stepping into self.backbone:

    def forward(self, x):
        # ResNet Backbone
        x0 = self.relu(self.bn1(self.conv1(x)))
        x1 = self.layer1(x0)  # 1/2
        x2 = self.layer2(x1)  # 1/4
        x3 = self.layer3(x2)  # 1/8

        # FPN
        x3_out = self.layer3_outconv(x3)

        x3_out_2x = F.interpolate(x3_out, scale_factor=2., mode='bilinear', align_corners=True)
        x2_out = self.layer2_outconv(x2)
        x2_out = self.layer2_outconv2(x2_out + x3_out_2x)

        x2_out_2x = F.interpolate(x2_out, scale_factor=2., mode='bilinear', align_corners=True)
        x1_out = self.layer1_outconv(x1)
        x1_out = self.layer1_outconv2(x1_out + x2_out_2x)

        return [x3_out, x1_out]
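
To make the 1/2, 1/4, 1/8 annotations concrete, here is a self-contained toy version of the same 8_2 pattern (toy channel sizes, not the repo's ResNetFPN_8_2) that you can run to watch the strides accumulate:

import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_s2(c_in, c_out):
    # one stride-2 conv stands in for each ResNet stage
    return nn.Conv2d(c_in, c_out, 3, stride=2, padding=1)

stem, layer2, layer3 = conv_s2(1, 32), conv_s2(32, 64), conv_s2(64, 128)

x = torch.randn(1, 1, 480, 640)
x1 = stem(x)      # [1, 32, 240, 320] -> 1/2, feeds the fine path
x2 = layer2(x1)   # [1, 64, 120, 160] -> 1/4
x3 = layer3(x2)   # [1, 128, 60, 80]  -> 1/8, feeds the coarse path

# FPN direction: upsample the coarse features and fuse them back down the pyramid
x3_up = F.interpolate(x3, scale_factor=2., mode='bilinear', align_corners=True)
print(x1.shape, x3.shape, x3_up.shape)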

Applying Attention: the coarse-level LoFTR module

        # 2. coarse-level loftr module
        # add featmap with positional encoding, then flatten it to sequence [N, HW, C]
        # add positional encoding
        feat_c0 = rearrange(self.pos_encoding(feat_c0), 'n c h w -> n (h w) c')
        print(feat_c0.shape)
        feat_c1 = rearrange(self.pos_encoding(feat_c1), 'n c h w -> n (h w) c')
        print(feat_c1.shape)

        mask_c0 = mask_c1 = None  # mask is useful in training
        if 'mask0' in data:
            mask_c0, mask_c1 = data['mask0'].flatten(-2), data['mask1'].flatten(-2)
        # enter the Transformer module -- the core module of the paper
        feat_c0, feat_c1 = self.loftr_coarse(feat_c0, feat_c1, mask_c0, mask_c1)
        print(feat_c0.shape)
        print(feat_c1.shape)
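
self.pos_encoding adds a 2D sinusoidal position encoding before the features are flattened into a sequence. Below is a hedged sketch in the spirit of the repo's PositionEncodingSine; the channel interleaving and frequency schedule here are illustrative and not guaranteed to match the repo bit for bit:

import math
import torch

def sine_pos_encoding_2d(d_model=256, max_shape=(256, 256)):
    # channels are split into groups of 4: sin/cos of x and sin/cos of y
    pe = torch.zeros(d_model, *max_shape)
    y = torch.ones(max_shape).cumsum(0).float().unsqueeze(0)  # row index per pixel
    x = torch.ones(max_shape).cumsum(1).float().unsqueeze(0)  # column index per pixel
    div = torch.exp(torch.arange(0, d_model // 2, 2).float()
                    * (-math.log(10000.0) / (d_model // 2)))[:, None, None]
    pe[0::4] = torch.sin(x * div)
    pe[1::4] = torch.cos(x * div)
    pe[2::4] = torch.sin(y * div)
    pe[3::4] = torch.cos(y * div)
    return pe.unsqueeze(0)  # [1, C, H_max, W_max]

pe = sine_pos_encoding_2d()
feat_c = torch.randn(1, 256, 60, 80)
feat_c = feat_c + pe[:, :, :60, :80]  # crop the table to the feature map, then add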

Stepping into self.loftr_coarse: each image computes attention over itself (self-attention) and against the other image (cross-attention). In the loop below, the 'self' layers attend a feature map to itself, while the 'cross' layers attend each image's features to the other's; the arguments passed to each layer make this explicit.

    def forward(self, feat0, feat1, mask0=None, mask1=None):
        """
        Args:
            feat0 (torch.Tensor): [N, L, C]
            feat1 (torch.Tensor): [N, S, C]
            mask0 (torch.Tensor): [N, L] (optional)
            mask1 (torch.Tensor): [N, S] (optional)
        """
        assert self.d_model == feat0.size(2), "the feature number of src and transformer must be equal"

        for layer, name in zip(self.layers, self.layer_names):
            if name == 'self':
                feat0 = layer(feat0, feat0, mask0, mask0)
                print(feat0.shape)
                feat1 = layer(feat1, feat1, mask1, mask1)
                print(feat1.shape)
            elif name == 'cross':
                feat0 = layer(feat0, feat1, mask0, mask1)
                print(feat0.shape)
                feat1 = layer(feat1, feat0, mask1, mask0)
                print(feat1.shape)
            else:
                raise KeyError
        print(feat0.shape)
        print(feat1.shape)
        return feat0, feat1

The per-layer computation is defined in LoFTREncoderLayer. It is the standard attention recipe: Q/K/V projections followed by linear layers and a concatenation-based feed-forward network.

    def forward(self, x, source, x_mask=None, source_mask=None):
        """
        Args:
            x (torch.Tensor): [N, L, C]
            source (torch.Tensor): [N, S, C]
            x_mask (torch.Tensor): [N, L] (optional)
            source_mask (torch.Tensor): [N, S] (optional)
        """
        bs = x.size(0)
        query, key, value = x, source, source

        # multi-head attention
        query = self.q_proj(query).view(bs, -1, self.nhead, self.dim)  # [N, L, (H, D)]
        key = self.k_proj(key).view(bs, -1, self.nhead, self.dim)  # [N, S, (H, D)]
        value = self.v_proj(value).view(bs, -1, self.nhead, self.dim)
        message = self.attention(query, key, value, q_mask=x_mask, kv_mask=source_mask)  # [N, L, (H, D)]
        message = self.merge(message.view(bs, -1, self.nhead*self.dim))  # [N, L, C]
        message = self.norm1(message)

        # feed-forward network
        message = self.mlp(torch.cat([x, message], dim=2))
        message = self.norm2(message)

        return x + message
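
Note that self.attention here is not the vanilla O(L·S) softmax attention: LoFTR substitutes linear attention (the kernel trick of Katharopoulos et al.) so that thousands of coarse tokens stay affordable. A minimal sketch close in spirit to the repo's LinearAttention module:

import torch

def elu_feature_map(x):
    # positive kernel feature map: phi(x) = elu(x) + 1
    return torch.nn.functional.elu(x) + 1

def linear_attention(queries, keys, values, eps=1e-6):
    # queries: [N, L, H, D], keys/values: [N, S, H, D]
    Q = elu_feature_map(queries)
    K = elu_feature_map(keys)
    # aggregate values once: cost is linear in sequence length, not quadratic
    KV = torch.einsum("nshd,nshv->nhdv", K, values)
    Z = 1 / (torch.einsum("nlhd,nhd->nlh", Q, K.sum(dim=1)) + eps)
    return torch.einsum("nlhd,nhdv,nlh->nlhv", Q, KV, Z)  # [N, L, H, D]

q = torch.randn(1, 4800, 8, 32)  # 4800 = 60*80 coarse tokens for a 480x640 image; 8 heads of dim 32
out = linear_attention(q, q, q)
print(out.shape)  # torch.Size([1, 4800, 8, 32])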

Coarse Matching: match coarse-level

        # 3. match coarse-level
        self.coarse_matching(feat_c0, feat_c1, data, mask_c0=mask_c0, mask_c1=mask_c1)

Step into the coarse_matching module.
With the attention-processed feature maps of both images in hand, coarse matching scores every feature pair with an inner product — the torch.einsum call below — and converts the scores into probabilities via softmax (a dual softmax over both dimensions).

    def forward(self, feat_c0, feat_c1, data, mask_c0=None, mask_c1=None):
        """
        Args:
            feat0 (torch.Tensor): [N, L, C]
            feat1 (torch.Tensor): [N, S, C]
            data (dict)
            mask_c0 (torch.Tensor): [N, L] (optional)
            mask_c1 (torch.Tensor): [N, S] (optional)
        Update:
            data (dict): {
                'b_ids' (torch.Tensor): [M'],
                'i_ids' (torch.Tensor): [M'],
                'j_ids' (torch.Tensor): [M'],
                'gt_mask' (torch.Tensor): [M'],
                'mkpts0_c' (torch.Tensor): [M, 2],
                'mkpts1_c' (torch.Tensor): [M, 2],
                'mconf' (torch.Tensor): [M]}
            NOTE: M' != M during training.
        """
        N, L, S, C = feat_c0.size(0), feat_c0.size(1), feat_c1.size(1), feat_c0.size(2)

        # normalize
        feat_c0, feat_c1 = map(lambda feat: feat / feat.shape[-1]**.5,
                               [feat_c0, feat_c1])

        if self.match_type == 'dual_softmax':
            sim_matrix = torch.einsum("nlc,nsc->nls", feat_c0,
                                      feat_c1) / self.temperature
            print(sim_matrix.shape)
            if mask_c0 is not None:
                sim_matrix.masked_fill_(
                    ~(mask_c0[..., None] * mask_c1[:, None]).bool(),
                    -INF)
            conf_matrix = F.softmax(sim_matrix, 1) * F.softmax(sim_matrix, 2)
            print(conf_matrix.shape)

        elif self.match_type == 'sinkhorn':
            # sinkhorn, dustbin included
            sim_matrix = torch.einsum("nlc,nsc->nls", feat_c0, feat_c1)
            if mask_c0 is not None:
                sim_matrix[:, :L, :S].masked_fill_(
                    ~(mask_c0[..., None] * mask_c1[:, None]).bool(),
                    -INF)

            # build uniform prior & use sinkhorn
            log_assign_matrix = self.log_optimal_transport(
                sim_matrix, self.bin_score, self.skh_iters)
            assign_matrix = log_assign_matrix.exp()
            conf_matrix = assign_matrix[:, :-1, :-1]

            # filter prediction with dustbin score (only in evaluation mode)
            if not self.training and self.skh_prefilter:
                filter0 = (assign_matrix.max(dim=2)[1] == S)[:, :-1]  # [N, L]
                filter1 = (assign_matrix.max(dim=1)[1] == L)[:, :-1]  # [N, S]
                conf_matrix[filter0[..., None].repeat(1, 1, S)] = 0
                conf_matrix[filter1[:, None].repeat(1, L, 1)] = 0

            if self.config['sparse_spvs']:
                data.update({'conf_matrix_with_bin': assign_matrix.clone()})

        data.update({'conf_matrix': conf_matrix})

        # predict coarse matches from conf_matrix
        data.update(**self.get_coarse_match(conf_matrix, data))
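
What the dual softmax buys over a single softmax is easiest to see on a toy similarity matrix (made-up numbers): an entry keeps high confidence only if it wins both its row and its column, which suppresses one-to-many assignments:

import torch
import torch.nn.functional as F

sim = torch.tensor([[[9., 1., 1.],
                     [8., 9., 1.],   # row 1 also scores well against column 0 ...
                     [1., 1., 7.]]])
conf = F.softmax(sim, 1) * F.softmax(sim, 2)
print(conf)
# confidence concentrates on (0,0), (1,1), (2,2);
# the competing (1,0) entry is damped because column 0 is already "claimed" by row 0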

Step into the get_coarse_match module.
Once the softmax probabilities are available, a mask filters out entries below the confidence threshold, and mutual nearest neighbors then determine the one-to-one match assignments.

    def get_coarse_match(self, conf_matrix, data):
        """
        Args:
            conf_matrix (torch.Tensor): [N, L, S]
            data (dict): with keys ['hw0_i', 'hw1_i', 'hw0_c', 'hw1_c']
        Returns:
            coarse_matches (dict): {
                'b_ids' (torch.Tensor): [M'],
                'i_ids' (torch.Tensor): [M'],
                'j_ids' (torch.Tensor): [M'],
                'gt_mask' (torch.Tensor): [M'],
                'm_bids' (torch.Tensor): [M],
                'mkpts0_c' (torch.Tensor): [M, 2],
                'mkpts1_c' (torch.Tensor): [M, 2],
                'mconf' (torch.Tensor): [M]}
        """
        axes_lengths = {
            'h0c': data['hw0_c'][0],
            'w0c': data['hw0_c'][1],
            'h1c': data['hw1_c'][0],
            'w1c': data['hw1_c'][1]
        }
        _device = conf_matrix.device
        # 1. confidence thresholding
        mask = conf_matrix > self.thr
        print(mask.shape)
        mask = rearrange(mask, 'b (h0c w0c) (h1c w1c) -> b h0c w0c h1c w1c',
                         **axes_lengths)
        print(mask.shape)
        if 'mask0' not in data:
            mask_border(mask, self.border_rm, False)
        else:
            mask_border_with_padding(mask, self.border_rm, False,
                                     data['mask0'], data['mask1'])
        mask = rearrange(mask, 'b h0c w0c h1c w1c -> b (h0c w0c) (h1c w1c)',
                         **axes_lengths)
        print(mask.shape)

        # 2. mutual nearest
        print(conf_matrix.max(dim=2, keepdim=True)[0].shape)
        mask = mask \
            * (conf_matrix == conf_matrix.max(dim=2, keepdim=True)[0]) \
            * (conf_matrix == conf_matrix.max(dim=1, keepdim=True)[0])
        print(mask.shape)

        # 3. find all valid coarse matches
        # this only works when at most one `True` in each row
        mask_v, all_j_ids = mask.max(dim=2)
        print(mask_v.shape)
        print(all_j_ids.shape)
        b_ids, i_ids = torch.where(mask_v)
        print(b_ids.shape)
        print(i_ids.shape)
        j_ids = all_j_ids[b_ids, i_ids]
        print(j_ids.shape)
        mconf = conf_matrix[b_ids, i_ids, j_ids]
        print(mconf.shape)

        # 4. Random sampling of training samples for fine-level LoFTR
        # (optional) pad samples with gt coarse-level matches
        if self.training:
            # NOTE:
            # The sampling is performed across all pairs in a batch without manually balancing
            # #samples for fine-level increases w.r.t. batch_size
            if 'mask0' not in data:
                num_candidates_max = mask.size(0) * max(mask.size(1), mask.size(2))
            else:
                num_candidates_max = compute_max_candidates(
                    data['mask0'], data['mask1'])
            num_matches_train = int(num_candidates_max *
                                    self.train_coarse_percent)
            num_matches_pred = len(b_ids)
            assert self.train_pad_num_gt_min < num_matches_train, "min-num-gt-pad should be less than num-train-matches"

            # pred_indices is to select from prediction
            if num_matches_pred <= num_matches_train - self.train_pad_num_gt_min:
                pred_indices = torch.arange(num_matches_pred, device=_device)
            else:
                pred_indices = torch.randint(
                    num_matches_pred,
                    (num_matches_train - self.train_pad_num_gt_min, ),
                    device=_device)

            # gt_pad_indices is to select from gt padding. e.g. max(3787-4800, 200)
            gt_pad_indices = torch.randint(
                len(data['spv_b_ids']),
                (max(num_matches_train - num_matches_pred,
                     self.train_pad_num_gt_min), ),
                device=_device)
            mconf_gt = torch.zeros(len(data['spv_b_ids']), device=_device)  # set conf of gt paddings to all zero

            b_ids, i_ids, j_ids, mconf = map(
                lambda x, y: torch.cat([x[pred_indices], y[gt_pad_indices]],
                                       dim=0),
                *zip([b_ids, data['spv_b_ids']], [i_ids, data['spv_i_ids']],
                     [j_ids, data['spv_j_ids']], [mconf, mconf_gt]))

        # These matches select patches that feed into fine-level network
        coarse_matches = {'b_ids': b_ids, 'i_ids': i_ids, 'j_ids': j_ids}

        # 4. Update with matches in original image resolution
        scale = data['hw0_i'][0] / data['hw0_c'][0]
        scale0 = scale * data['scale0'][b_ids] if 'scale0' in data else scale
        scale1 = scale * data['scale1'][b_ids] if 'scale1' in data else scale
        mkpts0_c = torch.stack(
            [i_ids % data['hw0_c'][1], i_ids // data['hw0_c'][1]],
            dim=1) * scale0
        mkpts1_c = torch.stack(
            [j_ids % data['hw1_c'][1], j_ids // data['hw1_c'][1]],
            dim=1) * scale1

        # These matches are the current prediction (for visualization)
        coarse_matches.update({
            'gt_mask': mconf == 0,
            'm_bids': b_ids[mconf != 0],  # mconf == 0 => gt matches
            'mkpts0_c': mkpts0_c[mconf != 0],
            'mkpts1_c': mkpts1_c[mconf != 0],
            'mconf': mconf[mconf != 0]
        })

        return coarse_matches
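
The thresholding and mutual-nearest-neighbor steps are easy to verify on a toy confidence matrix (made-up values):

import torch

conf = torch.tensor([[[0.70, 0.10, 0.05],
                      [0.60, 0.50, 0.10],
                      [0.05, 0.04, 0.03]]])
thr = 0.2
mask = (conf > thr) \
    * (conf == conf.max(dim=2, keepdim=True)[0]) \
    * (conf == conf.max(dim=1, keepdim=True)[0])

mask_v, all_j_ids = mask.max(dim=2)
b_ids, i_ids = torch.where(mask_v)
j_ids = all_j_ids[b_ids, i_ids]
print(b_ids, i_ids, j_ids)  # tensor([0]) tensor([0]) tensor([0]): one match, cell 0 <-> cell 0
# row 1's best is column 0, but column 0's best row is 0, so row 1 yields no match at all;
# row 2 never clears the threshold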

Fine-level Refinement

This stage converts feature-map-level matches into pixel-level matches:

        # 4. fine-level refinement
        # preprocessing: unfold the fine feature maps and crop windows around the matched candidate points
        feat_f0_unfold, feat_f1_unfold = self.fine_preprocess(feat_f0, feat_f1, feat_c0, feat_c1, data)
        if feat_f0_unfold.size(0) != 0:  # at least one coarse level predicted
            feat_f0_unfold, feat_f1_unfold = self.loftr_fine(feat_f0_unfold, feat_f1_unfold)
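
fine_preprocess is essentially an F.unfold: it cuts a w×w window out of the fine feature map around every coarse cell. A hedged sketch with the default w=5 and the fine/coarse stride ratio of 4 — an approximation of the repo code, not a verbatim copy:

import torch
import torch.nn.functional as F
from einops import rearrange

W, stride = 5, 4                        # window size; the fine map is 4x the coarse map
feat_f = torch.randn(1, 128, 240, 320)  # fine feature map [N, C, Hf, Wf]

patches = F.unfold(feat_f, kernel_size=(W, W), stride=stride, padding=W // 2)
patches = rearrange(patches, 'n (c ww) l -> n l ww c', ww=W**2)
print(patches.shape)  # [1, 4800, 25, 128]: one 5x5 window per coarse cell (60*80 = 4800)

# a coarse match (b, i, j) then selects the window pair patches0[b, i] and patches1[b, j]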

The self.loftr_fine call in the snippet above again goes through the attention computation — the same algorithm used at the coarse level, now applied to the cropped windows.

Fine Matching: match fine-level

The final output is obtained by computing an expectation over a match heatmap:

        # 5. match fine-level
        self.fine_matching(feat_f0_unfold, feat_f1_unfold, data)

Jump into FineMatching's forward function.
The paper performs fine matching over a 5×5 window: the feature at the center of the window from image A is compared against every position of the corresponding 5×5 window from image B, producing a heatmap; dsnt.spatial_expectation2d then computes the expectation of that heatmap, and the result is mapped back to image coordinates.

    def forward(self, feat_f0, feat_f1, data):
        """
        Args:
            feat0 (torch.Tensor): [M, WW, C]
            feat1 (torch.Tensor): [M, WW, C]
            data (dict)
        Update:
            data (dict):{
                'expec_f' (torch.Tensor): [M, 3],
                'mkpts0_f' (torch.Tensor): [M, 2],
                'mkpts1_f' (torch.Tensor): [M, 2]}
        """
        M, WW, C = feat_f0.shape
        W = int(math.sqrt(WW))
        scale = data['hw0_i'][0] / data['hw0_f'][0]
        self.M, self.W, self.WW, self.C, self.scale = M, W, WW, C, scale

        # corner case: if no coarse matches found
        if M == 0:
            assert self.training == False, "M is always >0, when training, see coarse_matching.py"
            # logger.warning('No matches found in coarse-level.')
            data.update({
                'expec_f': torch.empty(0, 3, device=feat_f0.device),
                'mkpts0_f': data['mkpts0_c'],
                'mkpts1_f': data['mkpts1_c'],
            })
            return

        feat_f0_picked = feat_f0[:, WW//2, :]
        print(feat_f0_picked.shape)
        sim_matrix = torch.einsum('mc,mrc->mr', feat_f0_picked, feat_f1)
        print(sim_matrix.shape)
        softmax_temp = 1. / C**.5
        heatmap = torch.softmax(softmax_temp * sim_matrix, dim=1).view(-1, W, W)
        print(heatmap.shape)

        # compute coordinates from heatmap
        coords_normalized = dsnt.spatial_expectation2d(heatmap[None], True)[0]  # [M, 2]
        print(coords_normalized.shape)
        grid_normalized = create_meshgrid(W, W, True, heatmap.device).reshape(1, -1, 2)  # [1, WW, 2]
        print(grid_normalized.shape)

        # compute std over <x, y>
        var = torch.sum(grid_normalized**2 * heatmap.view(-1, WW, 1), dim=1) - coords_normalized**2  # [M, 2]
        std = torch.sum(torch.sqrt(torch.clamp(var, min=1e-10)), -1)  # [M]  clamp needed for numerical stability

        # for fine-level supervision
        data.update({'expec_f': torch.cat([coords_normalized, std.unsqueeze(1)], -1)})

        # compute absolute kpt coords
        self.get_fine_match(coords_normalized, data)
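
The expectation step is a plain soft-argmax. A small sketch with random heatmaps (shapes follow the function above) showing that the kornia call agrees with a manual expectation against the normalized meshgrid:

import torch
from kornia.geometry.subpix import dsnt
from kornia.utils import create_meshgrid

W = 5
heatmap = torch.softmax(torch.randn(3, W * W), dim=1).view(3, W, W)  # 3 fake matches

# kornia's expectation over normalized [-1, 1] coordinates ...
coords = dsnt.spatial_expectation2d(heatmap[None], True)[0]  # [3, 2]

# ... matches the manual expectation against a normalized meshgrid
grid = create_meshgrid(W, W, True, heatmap.device).reshape(1, -1, 2)  # [1, WW, 2]
coords_manual = (grid * heatmap.view(3, -1, 1)).sum(dim=1)            # [3, 2]
print(torch.allclose(coords, coords_manual, atol=1e-6))  # expected: True

The std computed right after the expectation in the function above is the same trick applied to the second moment, giving a per-match uncertainty estimate used for fine-level supervision.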
