Setting up a YOLOv8 environment on a Mac: object tracking, counting, trajectory plotting, and multithreading

🥇 Copyright: this article was originally written by 【墨理学AI】; thanks for reading, liking, commenting, and following.
🎉 Statement: as one of the bloggers sharing the most hands-on AI content on the web, ❤️ making every moment count ❤️


Table of Contents

    • 📙 Setting up the yolov8 environment on a Mac
    • 📙 Running the code
        • Inference test
        • Model training and ONNX export
        • Object detection on video
        • Using the Mac's camera
        • Persisting Tracks Loop: continuous tracking
        • Plotting Tracks: drawing trajectories
        • Multithreaded Tracking: a multithreaded example
    • 📙 Round-up of my YOLO series posts
        • 🟦 YOLO theory
        • 🟧 Yolov5 series
        • 🟨 YOLOX series
        • 🟦 Yolov3 series
        • 🟨 YOLOv8 series
        • 🟦 Continuously updated
    • ❤️ Life is short; come learn AI with Moli

📙 Setting up the yolov8 environment on a Mac

  • A basic Mac is enough for YOLO inference testing and small-dataset training
  • The machine used for the code in this post is a Mac with an M1 Pro chip

The conda environment is set up as follows:


conda create -n yolopy39 python=3.9
conda activate yolopy39

pip3 install torch torchvision torchaudio

# ultralytics needs opencv-python>=4.6.0, so this is the version I installed
pip3 install opencv-python==4.6.0.66

cd Desktop
mkdir moli
cd moli
git clone https://github.com/ultralytics/ultralytics.git
cd ultralytics        # enter the cloned repo before the editable install
pip install -e .

pwd
/Users/moli/Desktop/moli/ultralytics
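To confirm the editable install works inside the new conda environment, an optional quick check can be run from Python (ultralytics.checks() prints the detected version plus Python, torch, and hardware info):

# optional sanity check for the installation
import ultralytics

ultralytics.checks()  # prints the ultralytics version and the Python/torch/CPU setup it detected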

📙 Running the code

The code below mainly follows these two official references:

  • https://github.com/ultralytics/ultralytics
  • https://docs.ultralytics.com/modes/track/#persisting-tracks-loop
Inference test

yolo predict model=yolov8n.pt source='https://ultralytics.com/images/bus.jpg'

# Output:
Matplotlib is building the font cache; this may take a moment.
Downloading https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n.pt to 'yolov8n.pt'...
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6.25M/6.25M [01:34<00:00, 69.6kB/s]
Ultralytics YOLOv8.2.77 🚀 Python-3.9.19 torch-2.2.2 CPU (Apple M1 Pro)
[W NNPACK.cpp:64] Could not initialize NNPACK! Reason: Unsupported hardware.
YOLOv8n summary (fused): 168 layers, 3,151,904 parameters, 0 gradients, 8.7 GFLOPs
Downloading https://ultralytics.com/images/bus.jpg to 'bus.jpg'...
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 134k/134k [00:00<00:00, 470kB/s]
image 1/1 /Users/moli/Desktop/moli/ultralytics/bus.jpg: 640x480 4 persons, 1 bus, 1 stop sign, 221.3ms
Speed: 5.8ms preprocess, 221.3ms inference, 4.0ms postprocess per image at shape (1, 3, 640, 480)
Results saved to /Users/moli/Desktop/moli/ultralytics/runs/detect/predict
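The same inference can also be run from Python instead of the yolo CLI; a minimal sketch (the output filename below is my own choice):

from ultralytics import YOLO

# Same prediction as the CLI call above, but from Python
model = YOLO("yolov8n.pt")
results = model("https://ultralytics.com/images/bus.jpg")

for r in results:
    print(r.boxes.cls, r.boxes.conf)      # class indices and confidence scores
    r.save(filename="bus_predicted.jpg")  # save the annotated image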
Model training and ONNX export

vim train_test.py

from ultralytics import YOLO

# Load a model
model = YOLO("yolov8n.yaml")  # build a new model from scratch
model = YOLO("yolov8n.pt")  # load a pretrained model (recommended for training)

# Use the model
model.train(data="coco8.yaml", epochs=3)  # train the model
metrics = model.val()  # evaluate model performance on the validation set
results = model("https://ultralytics.com/images/bus.jpg")  # predict on an image

# ONNX conversion is also a built-in module; just call export() with the target format
path = model.export(format="onnx")  # export the model to ONNX format

The run produces the following output:

python train_test.py

[W NNPACK.cpp:64] Could not initialize NNPACK! Reason: Unsupported hardware.
Ultralytics YOLOv8.2.77 🚀 Python-3.9.19 torch-2.2.2 CPU (Apple M1 Pro)
engine/trainer: task=detect, mode=train, model=yolov8n.pt, data=coco8.yaml, epochs=3, time=None, patience=100, batch=16, imgsz=640, save=True, save_period=-1, cache=False, device=None, workers=8, project=None, name=train, exist_ok=False, pretrained=True, optimizer=auto, verbose=True, seed=0, deterministic=True, single_cls=False, rect=False, cos_lr=False, close_mosaic=10, resume=False, amp=True, fraction=1.0, profile=False, freeze=None, multi_scale=False, overlap_mask=True, mask_ratio=4, dropout=0.0, val=True, split=val, save_json=False, save_hybrid=False, conf=None, iou=0.7, max_det=300, half=False, dnn=False, plots=True, source=None, vid_stride=1, stream_buffer=False, visualize=False, augment=False, agnostic_nms=False, classes=None, retina_masks=False, embed=None, show=False, save_frames=False, save_txt=False, save_conf=False, save_crop=False, show_labels=True, show_conf=True, show_boxes=True, line_width=None, format=torchscript, keras=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=None, workspace=4, nms=False, lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=7.5, cls=0.5, dfl=1.5, pose=12.0, kobj=1.0, label_smoothing=0.0, nbs=64, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, bgr=0.0, mosaic=1.0, mixup=0.0, copy_paste=0.0, auto_augment=randaugment, erasing=0.4, crop_fraction=1.0, cfg=None, tracker=botsort.yaml, save_dir=/Users/moli/Desktop/moli/ultralytics/runs/detect/train
Dataset 'coco8.yaml' images not found ⚠️, missing path '/Users/moli/Desktop/moli/datasets/coco8/images/val'
Downloading https://ultralytics.com/assets/coco8.zip to '/Users/moli/Desktop/moli/datasets/coco8.zip'...
100%|███████████████████████████████████████████████████████████████████████████████████████████| 433k/433k [00:03<00:00, 135kB/s]
Unzipping /Users/moli/Desktop/moli/datasets/coco8.zip to /Users/moli/Desktop/moli/datasets/coco8...: 100%|██████████| 25/25 [00:00
Dataset download success ✅ (5.4s), saved to /Users/moli/Desktop/moli/datasets...
...Logging results to /Users/moli/Desktop/moli/ultralytics/runs/detect/train
Starting training for 3 epochs...

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
        1/3         0G      1.412      2.815      1.755         22        640: 100%|██████████| 1/1 [00:01<00:00,  1.90s/it]
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%|██████████| 1/1 [00:00<00:00,  1.30
                   all          4         17      0.613      0.883      0.888      0.616

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
        2/3         0G      1.249      2.621      1.441         23        640: 100%|██████████| 1/1 [00:01<00:00,  1.51s/it]
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%|██████████| 1/1 [00:00<00:00,  2.24
                   all          4         17      0.598      0.896      0.888      0.618

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
        3/3         0G      1.142      4.221      1.495         16        640: 100%|██████████| 1/1 [00:01<00:00,  1.50s/it]
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%|██████████| 1/1 [00:00<00:00,  2.06
                   all          4         17       0.58      0.833      0.874      0.613

3 epochs completed in 0.002 hours.
Optimizer stripped from /Users/moli/Desktop/moli/ultralytics/runs/detect/train/weights/last.pt, 6.5MB
Optimizer stripped from /Users/moli/Desktop/moli/ultralytics/runs/detect/train/weights/best.pt, 6.5MB

Validating /Users/moli/Desktop/moli/ultralytics/runs/detect/train/weights/best.pt...
Ultralytics YOLOv8.2.77 🚀 Python-3.9.19 torch-2.2.2 CPU (Apple M1 Pro)
Model summary (fused): 168 layers, 3,151,904 parameters, 0 gradients, 8.7 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%|██████████| 1/1 [00:00<00:00,  1.72
                   all          4         17      0.599      0.898      0.888      0.618
                person          3         10      0.647        0.5       0.52       0.29
                   dog          1          1      0.315          1      0.995      0.597
                 horse          1          2      0.689          1      0.995      0.598
              elephant          1          2      0.629      0.887      0.828      0.332
              umbrella          1          1      0.539          1      0.995      0.995
          potted plant          1          1      0.774          1      0.995      0.895
Speed: 4.2ms preprocess, 134.0ms inference, 0.0ms loss, 0.8ms postprocess per image
Results saved to /Users/moli/Desktop/moli/ultralytics/runs/detect/train
Ultralytics YOLOv8.2.77 🚀 Python-3.9.19 torch-2.2.2 CPU (Apple M1 Pro)
Model summary (fused): 168 layers, 3,151,904 parameters, 0 gradients, 8.7 GFLOPs
val: Scanning /Users/moli/Desktop/moli/datasets/coco8/labels/val.cache... 4 images, 0 backgrounds, 0 corrupt: 100%|██████████| 4/4
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%|██████████| 1/1 [00:00<00:00,  2.02
                   all          4         17      0.599      0.898      0.888      0.618
                person          3         10      0.647        0.5       0.52       0.29
                   dog          1          1      0.315          1      0.995      0.597
                 horse          1          2      0.689          1      0.995      0.598
              elephant          1          2      0.629      0.887      0.828      0.332
              umbrella          1          1      0.539          1      0.995      0.995
          potted plant          1          1      0.774          1      0.995      0.895
Speed: 4.1ms preprocess, 113.0ms inference, 0.0ms loss, 0.7ms postprocess per image
Results saved to /Users/moli/Desktop/moli/ultralytics/runs/detect/train2

image 1/1 /Users/moli/Desktop/moli/ultralytics/ultralytics/assets/bus.jpg: 640x480 4 persons, 1 bus, 188.4ms
Speed: 3.9ms preprocess, 188.4ms inference, 1.0ms postprocess per image at shape (1, 3, 640, 480)
Ultralytics YOLOv8.2.77 🚀 Python-3.9.19 torch-2.2.2 CPU (Apple M1 Pro)

# The ONNX conversion starts here
PyTorch: starting from '/Users/moli/Desktop/moli/ultralytics/runs/detect/train/weights/best.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 84, 8400) (6.2 MB)
requirements: Ultralytics requirement ['onnx>=1.12.0'] not found, attempting AutoUpdate...
Looking in indexes: http://pypi.douban.com/simple, http://mirrors.aliyun.com/pypi/simple/, https://pypi.tuna.tsinghua.edu.cn/simple/, http://pypi.mirrors.ustc.edu.cn/simple/
Collecting onnx>=1.12.0
  Downloading http://mirrors.ustc.edu.cn/pypi/packages/4e/35/abbf2fa3dbb96b430f6e810e3fb7bc042ed150f371cb1aedb47052c40f8e/onnx-1.16.2-cp39-cp39-macosx_11_0_universal2.whl (16.5 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 16.5/16.5 MB 11.4 MB/s eta 0:00:00
Requirement already satisfied: numpy>=1.20 in /Users/moli/opt/anaconda3/envs/yolopy39/lib/python3.9/site-packages (from onnx>=1.12.0) (1.26.4)
Collecting protobuf>=3.20.2 (from onnx>=1.12.0)
  Downloading http://mirrors.ustc.edu.cn/pypi/packages/ca/bc/bceb11aa96dd0b2ae7002d2f46870fbdef7649a0c28420f0abb831ee3294/protobuf-5.27.3-cp38-abi3-macosx_10_9_universal2.whl (412 kB)
Installing collected packages: protobuf, onnx
Successfully installed onnx-1.16.2 protobuf-5.27.3

requirements: AutoUpdate success ✅ 22.0s, installed 1 package: ['onnx>=1.12.0']
requirements: ⚠️ Restart runtime or rerun command for updates to take effect

ONNX: starting export with onnx 1.16.2 opset 17...
ONNX: export success ✅ 24.4s, saved as '/Users/moli/Desktop/moli/ultralytics/runs/detect/train/weights/best.onnx' (12.2 MB)

Export complete (26.1s)
Results saved to /Users/moli/Desktop/moli/ultralytics/runs/detect/train/weights
Predict:         yolo predict task=detect model=/Users/moli/Desktop/moli/ultralytics/runs/detect/train/weights/best.onnx imgsz=640  
Validate:        yolo val task=detect model=/Users/moli/Desktop/moli/ultralytics/runs/detect/train/weights/best.onnx imgsz=640 data=/Users/moli/Desktop/moli/ultralytics/ultralytics/cfg/datasets/coco8.yaml  
Visualize:       https://netron.app

You can see that the run succeeded; the training artifacts and the exported ONNX model are listed below:

ls runs/detect/train/
F1_curve.png			R_curve.png			confusion_matrix_normalized.png	results.csv			train_batch1.jpg		val_batch0_pred.jpg
PR_curve.png			args.yaml			labels.jpg			results.png			train_batch2.jpg		weights
P_curve.png			confusion_matrix.png		labels_correlogram.jpg		train_batch0.jpg		val_batch0_labels.jpg

(yolopy39) moli@molideMacBook-Pro ultralytics % ls runs/detect/train/weights 
best.onnx	best.pt		last.pt
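To sanity-check the exported weights, ultralytics can load the .onnx file directly for prediction, as the Predict hint in the log above suggests; a minimal sketch assuming the paths from this training run:

from ultralytics import YOLO

# Load the exported ONNX weights directly (inference runs through onnxruntime)
onnx_model = YOLO("runs/detect/train/weights/best.onnx")

results = onnx_model("https://ultralytics.com/images/bus.jpg", imgsz=640)
print(results[0].boxes.xyxy)  # detected boxes (xyxy) from the ONNX model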
Object detection on video
cat yolov8_1.py

from ultralytics import YOLO

# Load an official or custom model
model = YOLO("yolov8n.pt")  # Load an official Detect model
#model = YOLO("yolov8n-seg.pt")  # Load an official Segment model
#model = YOLO("yolov8n-pose.pt")  # Load an official Pose model
#model = YOLO("path/to/best.pt")  # Load a custom trained model

# Perform tracking with the model
source = 'video/people.mp4'
results = model.track(source, show=True)  # Tracking with default tracker

The tracking result looks like this:

(demo screenshot)

Using the Mac's camera

Just set source = 0:

from ultralytics import YOLO

# Load an official or custom model
model = YOLO("yolov8n.pt")  # Load an official Detect model

#source = 'video/people.mp4'
source = 0
results = model.track(source, show=True)  # Tracking with default tracker
# results = model.track(source, show=True, tracker="bytetrack.yaml")  # with ByteTrack

Example output:

(demo screenshot)

Persisting Tracks Loop: continuous tracking
  • https://docs.ultralytics.com/modes/track/#tracker-selection

vim yolov8PersistingTracksLoop.py

                  
import cv2

from ultralytics import YOLO

# Load the YOLOv8 model
model = YOLO("yolov8n.pt")

# Open the video file
video_path = "./video/test_people.mp4"
cap = cv2.VideoCapture(video_path)

# Loop through the video frames
while cap.isOpened():
    # Read a frame from the video
    success, frame = cap.read()

    if success:
        # Run YOLOv8 tracking on the frame, persisting tracks between frames
        results = model.track(frame, persist=True)

        # Visualize the results on the frame
        annotated_frame = results[0].plot()

        # Display the annotated frame
        cv2.imshow("YOLOv8 Tracking", annotated_frame)

        # Break the loop if 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        # Break the loop if the end of the video is reached
        break

# Release the video capture object and close the display window
cap.release()
cv2.destroyAllWindows()

Running python3 yolov8PersistingTracksLoop.py gives the following:

(demo screenshot)
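If you want to keep the annotated result as a file instead of only displaying it, the loop above can be extended with cv2.VideoWriter. This is a sketch of my own, not part of the official tutorial; the output filename and the mp4v codec are assumptions:

import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("./video/test_people.mp4")

# Prepare a writer with the same size/FPS as the input (codec choice is an assumption)
fps = cap.get(cv2.CAP_PROP_FPS) or 25
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter("tracked_out.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    results = model.track(frame, persist=True)
    annotated = results[0].plot()
    writer.write(annotated)                    # save the annotated frame
    cv2.imshow("YOLOv8 Tracking", annotated)   # still show it live
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
writer.release()
cv2.destroyAllWindows()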

Plotting Tracks: drawing trajectories

vim yolov8PlottingTracks.py


from collections import defaultdict

import cv2
import numpy as np

from ultralytics import YOLO

# Load the YOLOv8 model
model = YOLO("yolov8n.pt")

# Open the video file
video_path = "./video/test_people.mp4"
cap = cv2.VideoCapture(video_path)

# Store the track history
track_history = defaultdict(lambda: [])

# Loop through the video frames
while cap.isOpened():
    # Read a frame from the video
    success, frame = cap.read()

    if success:
        # Run YOLOv8 tracking on the frame, persisting tracks between frames
        results = model.track(frame, persist=True)

        # Get the boxes and track IDs
        boxes = results[0].boxes.xywh.cpu()
        track_ids = results[0].boxes.id.int().cpu().tolist()

        # Visualize the results on the frame
        annotated_frame = results[0].plot()

        # Plot the tracks
        for box, track_id in zip(boxes, track_ids):
            x, y, w, h = box
            track = track_history[track_id]
            track.append((float(x), float(y)))  # x, y center point
            if len(track) > 30:  # retain the last 30 points (about 30 frames) per track
                track.pop(0)

            # Draw the tracking lines
            points = np.hstack(track).astype(np.int32).reshape((-1, 1, 2))
            cv2.polylines(annotated_frame, [points], isClosed=False, color=(230, 230, 230), thickness=10)

        # Display the annotated frame
        cv2.imshow("YOLOv8 Tracking", annotated_frame)

        # Break the loop if 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        # Break the loop if the end of the video is reached
        break

# Release the video capture object and close the display window
cap.release()
cv2.destroyAllWindows()

Running python3 yolov8PlottingTracks.py gives the following; you can see the trajectory trailing behind each pedestrian.

(demo screenshot)
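The title also promises counting, which the official tracking examples stop short of. The following is a minimal sketch of my own (not from the docs) that reuses the same results[0].boxes.id field as the trajectory example to count how many distinct track IDs have appeared so far, and overlays the number on each frame:

import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("./video/test_people.mp4")

seen_ids = set()  # every track id that has appeared so far

while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    results = model.track(frame, persist=True)
    if results[0].boxes.id is not None:
        seen_ids.update(results[0].boxes.id.int().cpu().tolist())
    annotated = results[0].plot()
    # Overlay the running count of unique tracked objects
    cv2.putText(annotated, f"count: {len(seen_ids)}", (20, 40),
                cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
    cv2.imshow("YOLOv8 Counting", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()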

Multithreaded Tracking: a multithreaded example

vim yolov8MultithreadedTracking.py

  • This example loads two models and runs two tracking threads. In my test the threads contended for the display, no windows appeared, and the code needs further modification; see the sketch after the code below.
import threading

import cv2

from ultralytics import YOLO


def run_tracker_in_thread(filename, model, file_index):
    """Runs a video file or webcam stream concurrently with the YOLOv8 model using threading.

    This function captures video frames from a given file or camera source and utilizes the YOLOv8 model for object
    tracking. The function runs in its own thread for concurrent processing.

    Args:
        filename (str): The path to the video file or the identifier for the webcam/external camera source.
        model (obj): The YOLOv8 model object.
        file_index (int): An index to uniquely identify the file being processed, used for display purposes.

    Note:
        Press 'q' to quit the video display window.
    """
    video = cv2.VideoCapture(filename)  # Read the video file

    while True:
        ret, frame = video.read()  # Read the video frames

        # Exit the loop if no more frames in either video
        if not ret:
            break

        # Track objects in frames if available
        results = model.track(frame, persist=True)
        res_plotted = results[0].plot()
        cv2.imshow(f"Tracking_Stream_{file_index}", res_plotted)

        key = cv2.waitKey(1)
        if key == ord("q"):
            break

    # Release video sources
    video.release()


# Load the models
model1 = YOLO("yolov8n.pt")
model2 = YOLO("yolov8n-seg.pt")

# Define the video files for the trackers
video_file1 = "video/test_people.mp4"  # Path to video file, 0 for webcam
#video_file2 = 'video/test_traffic.mp4'  # Path to video file, 0 for webcam, 1 for external camera
video_file2 = 0

# Create the tracker threads
tracker_thread1 = threading.Thread(target=run_tracker_in_thread, args=(video_file1, model1, 1), daemon=True)
tracker_thread2 = threading.Thread(target=run_tracker_in_thread, args=(video_file2, model2, 2), daemon=True)

# Start the tracker threads
tracker_thread1.start()
tracker_thread2.start()

# Wait for the tracker threads to finish
tracker_thread1.join()
tracker_thread2.join()

# Clean up and close windows
cv2.destroyAllWindows()
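As noted before the code, calling cv2.imshow from two worker threads is fragile; OpenCV GUI calls are generally only reliable on the main thread, especially on macOS. One possible workaround, sketched here under that assumption (my own variation, not the official example), keeps tracking in the worker threads but hands the annotated frames to the main thread through queues, so imshow only ever runs in the main thread:

import queue
import threading

import cv2
from ultralytics import YOLO


def run_tracker_in_thread(source, model, frame_queue):
    """Run tracking in a worker thread and hand annotated frames to the main thread."""
    cap = cv2.VideoCapture(source)
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        results = model.track(frame, persist=True)
        frame_queue.put(results[0].plot())  # no GUI calls in the worker thread
    cap.release()
    frame_queue.put(None)  # signal end of stream


sources = ["video/test_people.mp4", 0]             # video file and webcam
models = [YOLO("yolov8n.pt"), YOLO("yolov8n-seg.pt")]
queues = [queue.Queue(maxsize=4) for _ in sources]

threads = [
    threading.Thread(target=run_tracker_in_thread, args=(s, m, q), daemon=True)
    for s, m, q in zip(sources, models, queues)
]
for t in threads:
    t.start()

finished = [False] * len(queues)
while not all(finished):
    for i, q in enumerate(queues):
        if finished[i]:
            continue
        try:
            frame = q.get(timeout=0.01)
        except queue.Empty:
            continue
        if frame is None:
            finished[i] = True
            continue
        cv2.imshow(f"Tracking_Stream_{i}", frame)  # GUI calls stay on the main thread
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cv2.destroyAllWindows()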

📙 Round-up of my YOLO series posts


🟦 YOLO theory
🟧 Yolov5 series
  • 💜 YOLOv5 environment setup | coco128 training example | ❤️ detailed walkthrough ❤️ | [YOLOv5]
  • 💜 YOLOv5 training on the COCO dataset | [YOLOv5 training]
🟨 YOLOX series
  • 💛 YOLOX environment setup | testing | reproducing COCO training [YOLOX in practice]
  • 💛 YOLOX (PyTorch) model ONNX export | running inference [YOLOX in practice II]
  • 💛 YOLOX (PyTorch) model to ONNX to ncnn, running inference [YOLOX in practice III]
  • 💛 YOLOX (PyTorch) model to TensorRT, running inference [YOLOX in practice IV]
🟦 Yolov3 series
  • 💙 yolov3 (darknet) training, testing, and model conversion ❤️ darknet to ncnn with C++ inference ❤️ [yolov3 in practice overview]
  • 💙 YOLOv3 ncnn model yolov3-spp.cpp ❤️ [YOLOv3 ncnn inference implementation, with code]
🟨 YOLOv8 series
  • Ubuntu 22.04 yolov8 environment setup and example code (trajectory tracking, line-crossing people counting, object heatmaps)
🟦 Continuously updated

❤️ Life is short; come learn AI with Moli


  • 🎉 As one of the bloggers sharing the most hands-on AI content on the web, ❤️ making every moment count ❤️
  • ❤️ If this post helped you even a little, please like and comment to encourage every bit of careful writing

