Exploring Coal-Mine Skip State Recognition with YOLOv5 + BoT-SORT/ByteTrack on Huawei Atlas

Preface:

The code in this project is prototyped on YOLOv5 and YOLOv8: the detection model comes from YOLOv5, and the tracking code comes from YOLOv8.

A note on why I did not simply use YOLOv8 end to end. YOLOv8 is without doubt better than YOLOv5, but after repeated attempts I could not get it through the Atlas toolchain, so the detector had to stay on YOLOv5. Why borrow the tracker from YOLOv8, then? The goal here is real-time video, and I wanted to avoid a heavyweight framework like DeepSORT that drags along its own re-identification model. After reading the YOLOv8 source I found its tracking module solid, so I decided to extract it. At first I doubted I could carve that code out at all, since it has been polished for years by people whose skill is well above mine. But the best way to do something well is to start doing it: after two consecutive evenings of overtime, the extraction finally worked. The process was winding, the outcome was sweet. When forces are evenly matched, the bolder one prevails.

Reference code (git repositories):

Yolov5: https://github.com/ultralytics/yolov5.git (v6.1)

Yolov8: https://github.com/ultralytics/ultralytics.git

Project goal:

Recognize the state of each skip (hoist bucket), running (run) or stationary (still), and count the skips visible in the frame (num).

The approach described here supports both the BoT-SORT and ByteTrack tracking algorithms.

A quick look at the two tracking algorithms:

The BoT-SORT algorithm:

BoT-SORT (from the paper "BoT-SORT: Robust Associations Multi-Pedestrian Tracking") is a deep-learning-based multi-object tracking algorithm in the SORT family.

Its main features are:

  1. It strengthens the motion model of SORT-style trackers. A refined Kalman filter state together with camera-motion compensation keeps the motion prediction accurate even when the camera or scene shifts, which helps hold onto targets in complex scenes.
  2. It fuses the targets' appearance features with their motion features.
    By combining ReID appearance information with motion prediction, it improves tracking accuracy and stability; for example, when a target is occluded or briefly disappears and then reappears, it can be re-identified and re-tracked more reliably. (A simplified numpy sketch of this fusion follows the list.)
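For intuition, the fusion step can be sketched with numpy as below. The function and parameter names are illustrative, not taken from any particular codebase, though the gate-then-minimum pattern mirrors how BoT-SORT combines its two distance matrices:

import numpy as np

def fused_distance(iou_dist, emb_dist, proximity_thresh=0.5, appearance_thresh=0.25):
    """Combine a motion distance (1 - IoU) with an appearance distance
    (e.g. cosine distance between ReID embeddings). Both matrices have
    shape (num_tracks, num_detections); smaller means a better match."""
    emb = emb_dist.copy()
    emb[emb > appearance_thresh] = 1.0      # distrust weak appearance matches
    emb[iou_dist > proximity_thresh] = 1.0  # gate out spatially implausible pairs
    return np.minimum(iou_dist, emb)        # trust whichever cue is more confident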

The ByteTrack algorithm:

ByteTrack is an efficient and accurate multi-object tracking algorithm.

Its standout features are:

  1. It uses a simple but effective association strategy.
    Instead of relying only on high-confidence detection boxes, it also exploits the information in low-confidence boxes, which greatly reduces lost targets; in dense traffic scenes, for instance, it can keep tracking partially occluded vehicles. (A simplified sketch of this two-stage association follows the list.)
  2. It is computationally efficient.
    It keeps the tracking quality while lowering the compute cost, making it suitable for real-time applications.
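Here is a minimal sketch of the two-stage BYTE association idea, not the actual Ultralytics implementation: it uses greedy IoU matching where the real algorithm uses Hungarian assignment, and the track/detection dicts are hypothetical.

def box_iou(a, b):
    """IoU of two boxes in x1y1x2y2 format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-7)

def greedy_match(tracks, dets, iou_thresh=0.3):
    """Greedy IoU matching (BYTE itself uses Hungarian assignment)."""
    matches, used = [], set()
    for trk in tracks:
        best_iou, best_di = iou_thresh, None
        for di, det in enumerate(dets):
            if di in used:
                continue
            v = box_iou(trk["box"], det["box"])
            if v > best_iou:
                best_iou, best_di = v, di
        if best_di is not None:
            matches.append((trk, dets[best_di]))
            used.add(best_di)
    unmatched_tracks = [t for t in tracks if all(m[0] is not t for m in matches)]
    unmatched_dets = [d for i, d in enumerate(dets) if i not in used]
    return matches, unmatched_tracks, unmatched_dets

def byte_associate(tracks, dets, high_thresh=0.5):
    """Two-stage association: high-confidence boxes first, then give the
    leftover tracks a second chance against low-confidence boxes."""
    high = [d for d in dets if d["conf"] >= high_thresh]
    low = [d for d in dets if d["conf"] < high_thresh]
    matches_high, leftover, new_candidates = greedy_match(tracks, high)
    # Low-confidence boxes are often occluded targets; matching them to
    # surviving tracks recovers objects that would otherwise be dropped.
    matches_low, lost_tracks, _ = greedy_match(leftover, low)
    # Unmatched high-confidence detections seed new tracks;
    # unmatched low-confidence leftovers are discarded as background.
    return matches_high + matches_low, lost_tracks, new_candidates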

Differences:

  1. Accuracy:

BoT-SORT ranked first on the MOTChallenge leaderboards for both the MOT17 and MOT20 test sets, reaching 80.5 MOTA, 80.2 IDF1, and 65.0 HOTA on MOT17 (MOTA measures overall tracking accuracy; IDF1 and HOTA emphasize identity preservation). ByteTrack also pushed all of these metrics upward while running at 30 FPS on a single V100, and compared with DeepSORT its gains under occlusion are particularly clear.

  2. Speed:

In my tests ByteTrack runs noticeably faster and feels smoother than BoT-SORT, which is expected since it skips appearance-feature extraction.

  3. Other characteristics:

BoT-SORT copes well when a target is occluded or briefly disappears and then reappears, re-identifying and re-tracking it reliably. ByteTrack does not use appearance features for matching, so its tracking quality depends heavily on the detector: with a good detector the tracking is also good, but a weak detector severely degrades the tracking.

Dataset preparation:

The data comes from frames extracted from video and was annotated with labelimg; labeling took me about four days, 872 images in total.
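For reference, frames can be dumped from the source video with a minimal OpenCV snippet like the one below before labeling; the paths and the sampling step are assumptions, adjust them to your footage:

import os
import cv2

def extract_frames(video_path="./videos/jidou.mp4", out_dir="./frames", step=25):
    """Save every step-th frame of the video as a JPEG for labeling in labelimg."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        success, frame = cap.read()
        if not success:
            break
        if idx % step == 0:
            cv2.imwrite(os.path.join(out_dir, "%05d.jpg" % saved), frame)
            saved += 1
        idx += 1
    cap.release()
    print("saved %d frames to %s" % (saved, out_dir))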

YOLOv5 model training:

The dataset directory layout is as follows,
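This is the standard YOLOv5 layout that the yaml below assumes; the file names are illustrative:

datasets/jidou/
├── images/
│   └── train/    # 00000.jpg, 00001.jpg, ...
└── labels/
    └── train/    # 00000.txt, ..., one "class x_center y_center w h" line per box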

Contents of data/jidou.yaml (note that train, val, and test all point to the same images/train folder here),

path: ./datasets/jidou  # dataset root dir
train: images/train  # train images (relative to 'path')
val: images/train  # val images (relative to 'path')
test: images/train  # test images (optional)

# Classes
nc: 1  # number of classes
names: ['jidou']

Start training,

python3 train.py --img 640 --epochs 100 --data ./data/jidou.yaml --weights yolov5s.pt

Model conversion: first export the .pt weights to ONNX,

python export.py --weights ./jidou_model/best.pt --simplify
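The three node names passed to --out_nodes in the atc command below (/model.24/Transpose and friends) can be read off the exported graph in Netron, or listed with a short onnx script; the sketch below is just one way to find the candidates:

import onnx

model = onnx.load("./best.onnx")
for node in model.graph.node:
    if node.op_type == "Transpose" and "model.24" in node.name:
        # atc expects "<node_name>:<output_index>" entries in --out_nodes
        print(node.name, list(node.output))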

Next, convert the ONNX model to an Ascend offline model with atc; --out_nodes pins the three per-scale Transpose outputs of the Detect head (model.24), and --insert_op_conf attaches the AIPP preprocessing operator defined further below,

atc  --input_shape="images:1,3,640,640" --out_nodes="/model.24/Transpose:0;/model.24/Transpose_1:0;/model.24/Transpose_2:0" --output_type=FP32 --input_format=NCHW --output="./yolov5_add_bs1_fp16" --soc_version=Ascend310P3 --framework=5 --model="./best.onnx" --insert_op_conf=./insert_op.cfg

The fusion_result.json generated by the conversion reads,

[{"graph_fusion": {"AConv2dMulFusion": {"effect_times": "0","match_times": "57"},"ConstToAttrPass": {"effect_times": "5","match_times": "5"},"ConvConcatFusionPass": {"effect_times": "0","match_times": "13"},"ConvFormatRefreshFusionPass": {"effect_times": "0","match_times": "60"},"ConvToFullyConnectionFusionPass": {"effect_times": "0","match_times": "60"},"ConvWeightCompressFusionPass": {"effect_times": "0","match_times": "60"},"CubeTransFixpipeFusionPass": {"effect_times": "0","match_times": "3"},"FIXPIPEAPREQUANTFUSIONPASS": {"effect_times": "0","match_times": "60"},"FIXPIPEFUSIONPASS": {"effect_times": "0","match_times": "60"},"MulAddFusionPass": {"effect_times": "0","match_times": "14"},"MulSquareFusionPass": {"effect_times": "0","match_times": "57"},"RefreshInt64ToInt32FusionPass": {"effect_times": "1","match_times": "1"},"RemoveCastFusionPass": {"effect_times": "0","match_times": "123"},"ReshapeTransposeFusionPass": {"effect_times": "0","match_times": "3"},"SplitConvConcatFusionPass": {"effect_times": "0","match_times": "13"},"TransdataCastFusionPass": {"effect_times": "0","match_times": "63"},"TransposedUpdateFusionPass": {"effect_times": "3","match_times": "3"},"V200NotRequantFusionPass": {"effect_times": "0","match_times": "7"},"ZConcatDFusionPass": {"effect_times": "0","match_times": "13"}},"session_and_graph_id": "0_0","ub_fusion": {"AutomaticUbFusion": {"effect_times": "1","match_times": "1","repository_hit_times": "0"},"TbeAippCommonFusionPass": {"effect_times": "1","match_times": "1","repository_hit_times": "0"},"TbeConvSigmoidMulQuantFusionPass": {"effect_times": "56","match_times": "56","repository_hit_times": "0"}}
}]

Contents of insert_op.cfg (the static AIPP configuration),

aipp_op {
aipp_mode : static
related_input_rank : 0
input_format : YUV420SP_U8
src_image_size_w : 640
src_image_size_h : 640
crop : false
csc_switch : true
rbuv_swap_switch : false
matrix_r0c0 : 256
matrix_r0c1 : 0
matrix_r0c2 : 359
matrix_r1c0 : 256
matrix_r1c1 : -88
matrix_r1c2 : -183
matrix_r2c0 : 256
matrix_r2c1 : 454
matrix_r2c2 : 0
input_bias_0 : 0
input_bias_1 : 128
input_bias_2 : 128
var_reci_chn_0 : 0.0039216
var_reci_chn_1 : 0.0039216
var_reci_chn_2 : 0.0039216
}
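What this configuration does, roughly: the decoder hands the model YUV420SP images, AIPP converts them to RGB with the 256-scaled integer matrix above (359/256 ≈ 1.402, -88/256 ≈ -0.344, and so on), and then scales each channel by var_reci_chn = 1/255. A per-pixel numpy approximation of the arithmetic, keeping in mind that the exact AIPP rounding is hardware-defined:

import numpy as np

def aipp_csc(y, u, v):
    """Approximate the AIPP color-space conversion configured above."""
    yuv = np.array([y - 0, u - 128, v - 128], dtype=np.float32)  # input_bias_0/1/2
    m = np.array([[256, 0, 359],     # R row
                  [256, -88, -183],  # G row
                  [256, 454, 0]],    # B row
                 dtype=np.float32)
    rgb = np.clip(m @ yuv / 256.0, 0, 255)
    return rgb * 0.0039216  # var_reci_chn = 1/255, normalizes to [0, 1]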

Contents of jidou.names,

jidou

Contents of yolov5_add_bs1_fp16.cfg (the detector post-processing config; the 18 BIASES values are the width/height pairs of the 9 default yolov5s anchors, 3 per detection scale),
CLASS_NUM=1
BIASES_NUM=18
BIASES=10,13,16,30,33,23,30,61,62,45,59,119,116,90,156,198,373,326
SCORE_THRESH=0.25
#SEPARATE_SCORE_THRESH=0.001,0.001,0.001,0.001,0.001,0.001,0.001,0.001,0.001,0.001
OBJECTNESS_THRESH=0.0
IOU_THRESH=0.5
YOLO_TYPE=3
ANCHOR_DIM=3
MODEL_TYPE=2
RESIZE_FLAG=0
YOLO_VERSION=5

Writing the code, part 1: extracting the tracking code:

The overall approach to the extraction was as follows,

  1. First get the original code running and verify the results are correct.
  2. Get familiar with the code, mainly the .py files under trackers, plus predictor.py, results.py, and model.py in engine.
  3. Understand the essence of tracking: it boils down to two functions, an initializer and an update function.
  4. Separate the model from the tracking part (replace model.track with model.predict).
  5. Strip out the Results structure (replace Results with a plain list to get a more generic context variable).
  6. Reimplement the update step (write my own code around the calls to tracker.update and predictor.results[i].update(**update_args)).
  7. Strip out the tracker's yaml configuration file and assign the values in the tracker's initializer instead.
  8. Strip out the remaining dependency files, metrics.py and ops.py.
  9. Remove the torch dependency: reimplement batch_probiou in metrics.py on top of numpy.
  10. Fix the remaining small bugs.

The final tracking code, track.py, is below. The run/still decision works on each track's center-point history: once a track has accumulated frame_rate (30) center points, about one second of video, the Manhattan distance between the newest and the oldest center is compared against the 5-pixel distance threshold; anything larger is labeled "run", otherwise "still".

import os
import json
import cv2
import numpy as np
from collections import defaultdict

from plots import box_label, colors
from trackers.bot_sort import BOTSORT
from trackers.byte_tracker import BYTETracker
from names import names


class TRACK(object):
    def __init__(self):
        # tracking configuration
        self.frame_rate = 30
        # BOTSORT
        self.tracker = BOTSORT(frame_rate=self.frame_rate)
        # BYTETracker
        # self.tracker = BYTETracker(frame_rate=self.frame_rate)
        self.track_history = defaultdict(lambda: [])
        self.move_state = defaultdict(lambda: [])
        self.move_state_dict = {0: "still", 1: "run"}
        self.distance = 5  # pixel threshold separating "still" from "run"

    def track(self, track_results, frame):
        if len(track_results[0]["cls"]) != 0:
            tracks = self.tracker.update(track_results[0], frame)
            if len(tracks) != 0:
                idx = tracks[:, -1].astype(int)
                if track_results[0]["id"] is not None:
                    track_results[0]["id"] = np.array([track_results[0]["id"][i] for i in idx])
                else:
                    track_results[0]["id"] = np.array(tracks[:, 4].astype(int))
                track_results[0]["cls"] = np.array([track_results[0]["cls"][i] for i in idx])
                track_results[0]["conf"] = np.array([track_results[0]["conf"][i] for i in idx])
                track_results[0]["xywh"] = np.array([track_results[0]["xywh"][i] for i in idx])

        # update track_history and move_state
        boxes = track_results[0]["xywh"]
        clses = track_results[0]["cls"]
        track_ids = []
        if track_results[0]["id"] is not None:
            track_ids = track_results[0]["id"].tolist()
        else:
            print("No tracks found in this frame")

        for cls, box, track_id in zip(clses, boxes, track_ids):
            x, y, w, h = box
            track = self.track_history[track_id]
            track.append((float(x + w / 2.0), float(y + h / 2.0)))  # box center point
            if len(track) > 30:  # retain at most 30 center points per track
                track.pop(0)
            if len(track) >= self.frame_rate:
                # compare the newest and oldest centers (about one second apart)
                if abs(track[-1][0] - track[0][0]) + abs(track[-1][1] - track[0][1]) >= self.distance:
                    self.move_state[track_id] = self.move_state_dict[1]
                else:
                    self.move_state[track_id] = self.move_state_dict[0]
            else:
                self.move_state[track_id] = self.move_state_dict[0]
        return track_results

    def draw(self, image, track_results):
        # draw boxes, labels, object count and trajectories on the frame
        annotated_frame = image  # fall back to the raw frame if nothing is drawn
        for index, info in enumerate(track_results[0]["xywh"]):
            xyxy = [int(info[0]), int(info[1]), int(info[0]) + int(info[2]), int(info[1]) + int(info[3])]
            classVec = int(track_results[0]["cls"][index])
            conf = float(track_results[0]["conf"][index])
            if track_results[0]["id"] is not None:
                id = int(track_results[0]["id"][index])
            else:
                id = ""
            if id == "":
                label = f'{names[classVec]} {conf:.4f} track_id {id}'
            else:
                label = f'{names[classVec]} {conf:.4f} track_id {id} state {self.move_state[id]}'
            annotated_frame = box_label(image, xyxy, label, color=colors[classVec])
        cv2.putText(annotated_frame, "num:{}".format(len(track_results[0]["cls"])), (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 0, 0), thickness=2, lineType=cv2.LINE_AA)

        boxes = track_results[0]["xywh"]
        clses = track_results[0]["cls"]
        track_ids = []
        if track_results[0]["id"] is not None:
            track_ids = track_results[0]["id"].tolist()
        else:
            print("No tracks found in this frame")

        # draw the trajectory polyline of each track
        for cls, box, track_id in zip(clses, boxes, track_ids):
            track = self.track_history[track_id]
            points = np.hstack(track).astype(np.int32).reshape((-1, 1, 2))
            cv2.polylines(annotated_frame, [points], isClosed=False, color=colors[int(cls)], thickness=4)
        return annotated_frame

The metrics.py code is as follows,

# Ultralytics YOLO 🚀, AGPL-3.0 license
"""Model validation metrics."""

import numpy as np


def bbox_ioa(box1, box2, iou=False, eps=1e-7):
    """
    Calculate the intersection over box2 area given box1 and box2. Boxes are in x1y1x2y2 format.

    Args:
        box1 (np.ndarray): A numpy array of shape (n, 4) representing n bounding boxes.
        box2 (np.ndarray): A numpy array of shape (m, 4) representing m bounding boxes.
        iou (bool): Calculate the standard IoU if True else return inter_area/box2_area.
        eps (float, optional): A small value to avoid division by zero. Defaults to 1e-7.

    Returns:
        (np.ndarray): A numpy array of shape (n, m) representing the intersection over box2 area.
    """
    # Get the coordinates of bounding boxes
    b1_x1, b1_y1, b1_x2, b1_y2 = box1.T
    b2_x1, b2_y1, b2_x2, b2_y2 = box2.T

    # Intersection area
    inter_area = (np.minimum(b1_x2[:, None], b2_x2) - np.maximum(b1_x1[:, None], b2_x1)).clip(0) * (
        np.minimum(b1_y2[:, None], b2_y2) - np.maximum(b1_y1[:, None], b2_y1)
    ).clip(0)

    # Box2 area
    area = (b2_x2 - b2_x1) * (b2_y2 - b2_y1)
    if iou:
        box1_area = (b1_x2 - b1_x1) * (b1_y2 - b1_y1)
        area = area + box1_area[:, None] - inter_area

    # Intersection over box2 area
    return inter_area / (area + eps)


def batch_probiou(obb1, obb2, eps=1e-7):
    """
    Calculate the prob IoU between oriented bounding boxes, https://arxiv.org/pdf/2106.06072v1.pdf.

    Args:
        obb1 (np.ndarray): An array of shape (N, 5) representing ground truth obbs, in xywhr format.
        obb2 (np.ndarray): An array of shape (M, 5) representing predicted obbs, in xywhr format.
        eps (float, optional): A small value to avoid division by zero. Defaults to 1e-7.

    Returns:
        (np.ndarray): An array of shape (N, M) representing obb similarities.
    """
    x1, y1 = np.split(obb1[..., :2], 2, axis=-1)
    x2, y2 = (x.squeeze(-1)[None] for x in np.split(obb2[..., :2], 2, axis=-1))
    a1, b1, c1 = _get_covariance_matrix(obb1)
    a2, b2, c2 = (x.squeeze(-1)[None] for x in _get_covariance_matrix(obb2))

    t1 = (
        ((a1 + a2) * np.power(y1 - y2, 2) + (b1 + b2) * np.power(x1 - x2, 2))
        / ((a1 + a2) * (b1 + b2) - np.power(c1 + c2, 2) + eps)
    ) * 0.25
    t2 = (
        ((c1 + c2) * (x2 - x1) * (y1 - y2))
        / ((a1 + a2) * (b1 + b2) - np.power(c1 + c2, 2) + eps)
    ) * 0.5
    t3 = (
        np.log(
            ((a1 + a2) * (b1 + b2) - np.power(c1 + c2, 2))
            / (4 * np.sqrt(np.clip(a1 * b1 - np.power(c1, 2), 0, np.inf)
                           * np.clip(a2 * b2 - np.power(c2, 2), 0, np.inf)) + eps)
            + eps
        )
        * 0.5
    )
    bd = np.clip(t1 + t2 + t3, eps, 100.0)
    hd = np.sqrt(1.0 - np.exp(-bd) + eps)
    return 1 - hd


def _get_covariance_matrix(boxes):
    """
    Generate covariance matrices from obbs.

    Args:
        boxes (np.ndarray): An array of shape (N, 5) representing rotated bounding boxes, in xywhr format.

    Returns:
        (np.ndarray): Covariance matrices corresponding to the original rotated bounding boxes.
    """
    # Gaussian bounding boxes; the center points (first two columns) are not needed here.
    gbbs = np.concatenate((np.power(boxes[:, 2:4], 2) / 12, boxes[:, 4:]), axis=-1)
    a, b, c = np.split(gbbs, 3, axis=-1)
    cos = np.cos(c)
    sin = np.sin(c)
    cos2 = np.power(cos, 2)
    sin2 = np.power(sin, 2)
    return a * cos2 + b * sin2, a * sin2 + b * cos2, (a - b) * cos * sin

Writing the code, part 2: the detection code yolov5.py:

import os
import json
import time

import cv2
import numpy as np
from StreamManagerApi import StreamManagerApi, MxDataInput

from plots import box_label, colors
from utils import scale_coords, xyxy2xywh, is_legal, preproc
from track import TRACK
from names import names


class YOLOV5(object):
    def __init__(self):
        # init stream manager
        self.streamManagerApi = StreamManagerApi()
        ret = self.streamManagerApi.InitManager()
        if ret != 0:
            print("Failed to init Stream manager, ret=%s" % str(ret))
            exit()
        # create streams by pipeline config file
        with open("./pipeline/jidou.pipeline", 'rb') as f:
            pipelineStr = f.read()
        ret = self.streamManagerApi.CreateMultipleStreams(pipelineStr)
        if ret != 0:
            print("Failed to create Stream, ret=%s" % str(ret))
            exit()

    def process(self, image):
        # Construct the input of the stream
        dataInput = MxDataInput()
        h0, w0 = image.shape[:2]
        r = 640 / max(h0, w0)  # resize ratio
        input_shape = (640, 640)
        pre_img = preproc(image, input_shape)[0]
        pre_img = np.ascontiguousarray(pre_img)
        image_bytes = cv2.imencode('.jpg', pre_img)[1].tobytes()
        dataInput.data = image_bytes

        # Inputs data to a specified stream based on streamName.
        STREAMNAME = b'classification+detection'
        INPLUGINID = 0
        uniqueId = self.streamManagerApi.SendDataWithUniqueId(STREAMNAME, INPLUGINID, dataInput)
        if uniqueId < 0:
            print("Failed to send data to stream.")
            exit()

        # Obtain the inference result by specifying streamName and uniqueId.
        inferResult = self.streamManagerApi.GetResultWithUniqueId(STREAMNAME, uniqueId, 10000)
        if inferResult.errorCode != 0:
            print("GetResultWithUniqueId error. errorCode=%d, errorMsg=%s" % (
                inferResult.errorCode, inferResult.data.decode()))
            exit()

        # Convert the SDK detections into the generic track_results structure
        results = json.loads(inferResult.data.decode())
        track_results = [{"id": None, "className": [], "cls": [], "conf": [], "xywh": []}]
        for num, info in enumerate(results['MxpiObject']):
            xyxy = [int(info['x0']), int(info['y0']), int(info['x1']), int(info['y1'])]
            xyxy = scale_coords(pre_img.shape[:2], np.array(xyxy), image.shape[:2])
            classVec = info["classVec"]
            track_results[0]["className"].append(names[classVec[0]["classId"]])
            track_results[0]["cls"].append(classVec[0]["classId"])
            track_results[0]["conf"].append(classVec[0]["confidence"])
            track_results[0]["xywh"].append([xyxy[0], xyxy[1], xyxy[2] - xyxy[0], xyxy[3] - xyxy[1]])
        track_results[0]["cls"] = np.array(track_results[0]["cls"])
        track_results[0]["conf"] = np.array(track_results[0]["conf"])
        track_results[0]["xywh"] = np.array(track_results[0]["xywh"])
        return track_results

    def __del__(self):
        # destroy streams
        self.streamManagerApi.DestroyAllStreams()

    def draw(self, image, track_results):
        # draw the detection-only result (no track ids yet)
        annotated_frame = image  # fall back to the raw frame if nothing is drawn
        for index, info in enumerate(track_results[0]["xywh"]):
            xyxy = [int(info[0]), int(info[1]), int(info[0]) + int(info[2]), int(info[1]) + int(info[3])]
            classVec = int(track_results[0]["cls"][index])
            conf = float(track_results[0]["conf"][index])
            label = f'{names[classVec]} {conf:.4f}'
            annotated_frame = box_label(image, xyxy, label, color=colors[classVec])
        return annotated_frame


def test_img():
    # read image
    ORI_IMG_PATH = "./test_images/00004.jpg"
    image = cv2.imread(ORI_IMG_PATH, 1)
    yolov5 = YOLOV5()
    track_results = yolov5.process(image)
    print(track_results)
    save_img = yolov5.draw(image, track_results)
    cv2.imwrite('./result.jpg', save_img)


def test_video():
    yolov5 = YOLOV5()
    tracker = TRACK()
    # Open the video file
    video_path = "./test_images/jidou.mp4"
    cap = cv2.VideoCapture(video_path)
    fourcc = cv2.VideoWriter_fourcc('X', 'V', 'I', 'D')  # codec for the saved video
    output = cv2.VideoWriter("output.mp4", fourcc, 20, (1280, 720))  # create the VideoWriter

    # Loop through the video frames
    while cap.isOpened():
        # Read a frame from the video
        success, frame = cap.read()
        if success:
            # Run detection and tracking on the frame, persisting tracks between frames
            t1 = time.time()
            track_results = yolov5.process(frame)
            t2 = time.time()
            track_results = tracker.track(track_results, frame)
            t3 = time.time()
            annotated_frame = tracker.draw(frame, track_results)
            t4 = time.time()
            print("time", t2 - t1, t3 - t2, t4 - t3, t4 - t1)
            output.write(annotated_frame)
            # Display the annotated frame
            # cv2.imshow("YOLOv8 Tracking", annotated_frame)
            # Break the loop if 'q' is pressed
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
        else:
            # Break the loop if the end of the video is reached
            break

    # Release the video capture object and close the display window
    cap.release()
    cv2.destroyAllWindows()


if __name__ == '__main__':
    # test_img()
    test_video()
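yolov5.py reads ./pipeline/jidou.pipeline, which is not reproduced here. As orientation only, a MindX SDK pipeline for this kind of stream usually follows the pattern below; the plugin names come from the Ascend samples, while the property values and the post-processing library name are assumptions that must match your SDK version and file paths:

{
    "classification+detection": {
        "stream_config": { "deviceId": "0" },
        "appsrc0": { "factory": "appsrc", "next": "mxpi_imagedecoder0" },
        "mxpi_imagedecoder0": { "factory": "mxpi_imagedecoder", "next": "mxpi_tensorinfer0" },
        "mxpi_tensorinfer0": {
            "props": { "dataSource": "mxpi_imagedecoder0", "modelPath": "./models/yolov5_add_bs1_fp16.om" },
            "factory": "mxpi_tensorinfer", "next": "mxpi_objectpostprocessor0" },
        "mxpi_objectpostprocessor0": {
            "props": { "dataSource": "mxpi_tensorinfer0",
                       "postProcessConfigPath": "./models/yolov5_add_bs1_fp16.cfg",
                       "labelPath": "./models/jidou.names",
                       "postProcessLibPath": "libMpYOLOv3PostProcessor.so" },
            "factory": "mxpi_objectpostprocessor", "next": "mxpi_dataserialize0" },
        "mxpi_dataserialize0": {
            "props": { "outputDataKeys": "mxpi_objectpostprocessor0" },
            "factory": "mxpi_dataserialize", "next": "appsink0" },
        "appsink0": { "factory": "appsink" }
    }
}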

Final overall code directory structure:

Final result:
