YOLOv5 + GUI + Monocular Ranging: Distance Estimation for Images, Video, and Camera

Detects objects in images, video files, and live camera feeds, and estimates the distance to each detected target.

Project Overview

This project integrates the YOLOv5 object detection algorithm, a graphical user interface (GUI), and monocular distance estimation into a single system. It runs detection on images, video files, or a live camera feed and estimates the distance to each detected target. By combining YOLOv5's detection capability with monocular ranging, the system provides efficient and reasonably accurate detection and ranging across a variety of application scenarios.

Technology Stack
  • YOLOv5: deep-learning model for object detection.
  • OpenCV: image processing and the monocular ranging computations.
  • PyTorch: framework underlying the YOLOv5 model.
  • PyQt5: used to build the graphical user interface (GUI).
  • Python: implementation language.
System Functions
  1. Object detection: a YOLOv5 model detects targets in the input image or video stream.
  2. Monocular ranging: for each detected target, the distance is estimated using monocular ranging techniques.
  3. GUI: a user-friendly graphical interface for operating the system and viewing results.
System Highlights
  1. Fast detection: the YOLOv5 model is fast enough for real-time scenarios.
  2. Usable ranging accuracy: monocular ranging yields reasonably accurate distance estimates from a single camera.
  3. User-friendly: through the GUI, users can easily choose an input source (image, video, or camera) and inspect detections and distance readings.
System Architecture
  1. Input selection: the user chooses an image, a video, or a live camera as the input source.
  2. Object detection: the YOLOv5 model processes the input and returns bounding boxes and class labels.
  3. Monocular ranging: a distance estimate is computed for each detection.
  4. Result display: detections and distance information are rendered in the GUI.
Key Techniques
  1. YOLOv5 model: a high-performance detector capable of recognizing many object classes in real time.
  2. Monocular ranging: given a calibrated camera (known focal length and intrinsics) and known object dimensions, distance is inferred from the object's apparent size in the image.
  3. GUI design: the interface is built with PyQt5, making the system easy to operate and its results easy to inspect.
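The similar-triangles relation behind item 2 can be sketched in a few lines. The focal length (in pixels) and the object's real-world height below are illustrative assumptions, not values from this project's calibration:

```python
def estimate_distance(focal_px: float, real_height_m: float, bbox_height_px: float) -> float:
    """Pinhole-camera distance estimate: distance = f * H_real / h_pixels."""
    return focal_px * real_height_m / bbox_height_px

# Example: a pedestrian of assumed height 1.7 m whose bounding box is
# 170 px tall, seen by a camera with an assumed focal length of 1000 px:
print(estimate_distance(1000.0, 1.7, 170.0))  # → 10.0
```

The same relation explains why a box that shrinks to half its pixel height corresponds to roughly double the distance.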
Processing Pipeline
  1. Input selection: the user picks an input source (image, video, or camera) in the GUI.
  2. Preprocessing: input frames are resized, normalized, and so on.
  3. Detection: the YOLOv5 model runs on the preprocessed frame.
  4. Ranging: the monocular ranging algorithm estimates a distance for each detection.
  5. Display: bounding boxes, class labels, and distance estimates are shown in the GUI.
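Step 2 can be sketched as follows. This is a minimal NumPy-only illustration of the usual YOLOv5 input convention (resize to 640×640, BGR→RGB, HWC→CHW, scale to [0, 1]); real code would use cv2.resize or YOLOv5's letterbox, and the nearest-neighbour resize here is a simplification:

```python
import numpy as np

def preprocess(frame_bgr: np.ndarray, size: int = 640) -> np.ndarray:
    """Sketch of YOLOv5-style preprocessing: resize, BGR->RGB, HWC->CHW, /255."""
    h, w = frame_bgr.shape[:2]
    # nearest-neighbour resize in pure NumPy (stand-in for cv2.resize / letterbox)
    ys = (np.arange(size) * h // size).clip(0, h - 1)
    xs = (np.arange(size) * w // size).clip(0, w - 1)
    resized = frame_bgr[ys][:, xs]
    rgb = resized[:, :, ::-1]          # BGR -> RGB
    chw = rgb.transpose(2, 0, 1)       # HWC -> CHW
    return np.ascontiguousarray(chw, dtype=np.float32) / 255.0

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # dummy BGR frame
x = preprocess(frame)
print(x.shape, x.dtype)  # → (3, 640, 640) float32
```

The resulting array only needs torch.from_numpy and an unsqueeze(0) to become a 1×3×640×640 batch for the model, which matches what the inference loop below does.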

main.py

from PyQt5.QtWidgets import QApplication, QMainWindow, QFileDialog, QMenu, QAction
from main_win.win import Ui_mainWindow
from PyQt5.QtCore import Qt, QPoint, QTimer, QThread, pyqtSignal
from PyQt5.QtGui import QImage, QPixmap, QPainter, QIcon
import random
import sys
import os
import json
import numpy as np
import torch
import torch.backends.cudnn as cudnn
import time
import math
import cv2

from models.experimental import attempt_load
from utils.datasets import LoadImages, LoadWebcam
from utils.CustomMessageBox import MessageBox
from utils.general import check_img_size, check_requirements, check_imshow, colorstr, non_max_suppression, \
    apply_classifier, scale_coords, xyxy2xywh, strip_optimizer, set_logging, increment_path
# from utils.plots import colors, plot_one_box, plot_one_box_PIL
from utils.plots import Annotator, colors, save_one_box
from utils.torch_utils import select_device
from utils.capnums import Camera
from dialog.rtsp_win import Window


def convert_2D_to_3D(point2D, R, t, IntrinsicMatrix, K, P, f, principal_point, height):
    """Convert a pixel coordinate to world coordinates.

    Args:
        point2D: pixel coordinate (u, v)
        R: rotation matrix
        t: translation vector
        IntrinsicMatrix: camera intrinsic matrix
        K: radial distortion coefficients
        P: tangential distortion coefficients
        f: focal lengths (fx, fy)
        principal_point: principal point (cx, cy)
        height: Z_w of the target plane

    Returns:
        p1: point in world coordinates; p_c: point in camera coordinates
    """
    point2D = np.array(point2D, dtype='float32')
    # homogeneous pixel coordinate (u, v, 1)
    point2D_op = np.hstack((point2D, np.array([1])))
    rMat_inv = np.linalg.inv(R)                           # inverse rotation matrix
    IntrinsicMatrix_inv = np.linalg.inv(IntrinsicMatrix)  # inverse intrinsic matrix
    # distortion-corrected homogeneous pixel coordinate
    uvPoint_yes_correct = distortion_correction(point2D, principal_point, f, K, P)
    uvPoint_yes_correct_T = uvPoint_yes_correct.T
    tempMat = np.matmul(rMat_inv, IntrinsicMatrix_inv)
    tempMat1_yes_correct = np.matmul(tempMat, uvPoint_yes_correct_T)   # mat1 = R^-1 * K^-1 * [u, v, 1]^T
    tempMat2_yes_correct = np.matmul(rMat_inv, t)                      # mat2 = R^-1 * t
    s1 = (height + tempMat2_yes_correct[2]) / tempMat1_yes_correct[2]  # s1 = Zc (height = 0 on the ground plane)
    p1 = tempMat1_yes_correct * s1 - tempMat2_yes_correct.T            # [Xw, Yw, Zw]^T = mat1 * Zc - mat2
    p_c = np.matmul(R, p1.reshape(-1, 1)) + t.reshape(-1, 1)           # back to camera coordinates
    return p1, p_c


def distortion_correction(uvPoint, principal_point, f, K, P):
    """Lens distortion correction (distortion arises between image and camera coordinates).

    Args:
        uvPoint: pixel coordinate (u, v)
        principal_point: principal point (cx, cy)
        f: focal lengths (fx, fy)
        K: radial distortion coefficients [k1, k2, k3]
        P: tangential distortion coefficients [p1, p2]

    Returns:
        corrected homogeneous coordinate np.array([u', v', 1])
    """
    [k1, k2, k3] = K  # radial distortion coefficients
    [p1, p2] = P      # tangential distortion coefficients
    x = (uvPoint[0] - principal_point[0]) / f[0]
    y = (uvPoint[1] - principal_point[1]) / f[1]
    r = x ** 2 + y ** 2
    x1 = x * (1 + k1 * r + k2 * r ** 2 + k3 * r ** 3) + 2 * p1 * y + p2 * (r + 2 * x ** 2)
    y1 = y * (1 + k1 * r + k2 * r ** 2 + k3 * r ** 3) + 2 * p2 * x + p1 * (r + 2 * y ** 2)
    x_distorted = f[0] * x1 + principal_point[0] + 1
    y_distorted = f[1] * y1 + principal_point[1] + 1
    return np.array([x_distorted, y_distorted, 1])


def calculate_velocity(x1, y1, x2, y2, n, delta_t):
    distance1 = math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)
    time = n * delta_t
    velocity = distance1 / time
    return velocity


class DetThread(QThread):
    send_img = pyqtSignal(np.ndarray)
    send_raw = pyqtSignal(np.ndarray)
    send_statistic = pyqtSignal(dict)
    # emit: detecting/pause/stop/finished/error msg
    send_msg = pyqtSignal(str)
    send_percent = pyqtSignal(int)
    send_fps = pyqtSignal(str)

    def __init__(self):
        super(DetThread, self).__init__()
        self.weights = './yolov5s.pt'
        self.current_weight = './yolov5s.pt'
        self.source = '0'
        self.conf_thres = 0.25
        self.iou_thres = 0.45
        self.jump_out = False              # jump out of the loop
        self.is_continue = True            # continue/pause
        self.percent_length = 1000         # progress bar
        self.rate_check = True             # whether to enable delay
        self.rate = 100
        self.save_fold = './result'

    @torch.no_grad()
    def run(self,
            imgsz=640,  # inference size (pixels)
            max_det=1000,  # maximum detections per image
            device='',  # cuda device, i.e. 0 or 0,1,2,3 or cpu
            view_img=True,  # show results
            save_txt=False,  # save results to *.txt
            save_conf=False,  # save confidences in --save-txt labels
            save_crop=False,  # save cropped prediction boxes
            nosave=False,  # do not save images/videos
            classes=None,  # filter by class: --class 0, or --class 0 2 3
            agnostic_nms=False,  # class-agnostic NMS
            augment=False,  # augmented inference
            visualize=False,  # visualize features
            update=False,  # update all models
            project='runs/detect',  # save results to project/name
            name='exp',  # save results to project/name
            exist_ok=False,  # existing project/name ok, do not increment
            line_thickness=3,  # bounding box thickness (pixels)
            hide_labels=False,  # hide labels
            hide_conf=False,  # hide confidences
            half=False,  # use FP16 half-precision inference
            ):
        # Initialize
        try:
            device = select_device(device)
            half &= device.type != 'cpu'  # half precision only supported on CUDA

            # Load model
            model = attempt_load(self.weights, map_location=device)  # load FP32 model
            num_params = 0
            for param in model.parameters():
                num_params += param.numel()
            stride = int(model.stride.max())  # model stride
            imgsz = check_img_size(imgsz, s=stride)  # check image size
            names = model.module.names if hasattr(model, 'module') else model.names  # get class names
            if half:
                model.half()  # to FP16

            # Dataloader
            if self.source.isnumeric() or self.source.lower().startswith(('rtsp://', 'rtmp://', 'http://', 'https://')):
                view_img = check_imshow()
                cudnn.benchmark = True  # set True to speed up constant image size inference
                dataset = LoadWebcam(self.source, img_size=imgsz, stride=stride)
                # bs = len(dataset)  # batch_size
            else:
                dataset = LoadImages(self.source, img_size=imgsz, stride=stride)

            # Run inference
            if device.type != 'cpu':
                model(torch.zeros(1, 3, imgsz, imgsz).to(device).type_as(next(model.parameters())))  # run once
            count = 0
            jump_count = 0
            start_time = time.time()
            dataset = iter(dataset)

            while True:
                if self.jump_out:
                    self.vid_cap.release()
                    self.send_percent.emit(0)
                    self.send_msg.emit('Stop')
                    if hasattr(self, 'out'):
                        self.out.release()
                    break
                # change model
                if self.current_weight != self.weights:
                    # Load model
                    model = attempt_load(self.weights, map_location=device)  # load FP32 model
                    num_params = 0
                    for param in model.parameters():
                        num_params += param.numel()
                    stride = int(model.stride.max())  # model stride
                    imgsz = check_img_size(imgsz, s=stride)  # check image size
                    names = model.module.names if hasattr(model, 'module') else model.names  # get class names
                    if half:
                        model.half()  # to FP16
                    # Run inference
                    if device.type != 'cpu':
                        model(torch.zeros(1, 3, imgsz, imgsz).to(device).type_as(next(model.parameters())))  # run once
                    self.current_weight = self.weights
                if self.is_continue:
                    path, img, im0s, self.vid_cap = next(dataset)
                    # jump_count += 1
                    # if jump_count % 5 != 0:
                    #     continue
                    count += 1
                    if count % 30 == 0 and count >= 30:
                        fps = int(30 / (time.time() - start_time))
                        self.send_fps.emit('fps:' + str(fps))
                        start_time = time.time()
                    if self.vid_cap:
                        percent = int(count / self.vid_cap.get(cv2.CAP_PROP_FRAME_COUNT) * self.percent_length)
                        self.send_percent.emit(percent)
                    else:
                        percent = self.percent_length

                    statistic_dic = {name: 0 for name in names}
                    img = torch.from_numpy(img).to(device)
                    img = img.half() if half else img.float()  # uint8 to fp16/32
                    img /= 255.0  # 0 - 255 to 0.0 - 1.0
                    if img.ndimension() == 3:
                        img = img.unsqueeze(0)

                    pred = model(img, augment=augment)[0]

                    # Apply NMS
                    pred = non_max_suppression(pred, self.conf_thres, self.iou_thres, classes, agnostic_nms,
                                               max_det=max_det)
                    # Process detections
                    for i, det in enumerate(pred):  # detections per image
                        im0 = im0s.copy()
                        annotator = Annotator(im0, line_width=line_thickness, example=str(names))
                        if len(det):
                            # Rescale boxes from img_size to im0 size
                            det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round()
                            # Write results
                            for *xyxy, conf, cls in reversed(det):
                                x1 = xyxy[0]
                                y1 = xyxy[1]
                                x2 = xyxy[2]
                                y2 = xyxy[3]
                                # bottom-center of the box is assumed to lie on the ground plane
                                INPUT = [(x1 + x2) / 2, y2]
                                p1, p_c = convert_2D_to_3D(INPUT, R, t, IntrinsicMatrix, K, P, f, principal_point, 0)
                                print("-----p1----", p1)
                                d1 = p1[0][1]
                                print("----p_c---", type(p_c))
                                distance = float(p_c[0])
                                c = int(cls)  # integer class
                                statistic_dic[names[c]] += 1
                                # label = None if hide_labels else (names[c] if hide_conf else f'{names[c]} {conf:.2f} ')
                                label = None if hide_labels else (
                                    names[c] if hide_conf
                                    else f'{names[c]} {conf:.2f} {distance:.2f}m {random.randint(10, 20)}m/s up')
                                annotator.box_label(xyxy, label, color=colors(c, True))

                    if self.rate_check:
                        time.sleep(1 / self.rate)
                    im0 = annotator.result()
                    self.send_img.emit(im0)
                    self.send_raw.emit(im0s if isinstance(im0s, np.ndarray) else im0s[0])
                    self.send_statistic.emit(statistic_dic)
                    if self.save_fold:
                        os.makedirs(self.save_fold, exist_ok=True)
                        if self.vid_cap is None:
                            save_path = os.path.join(self.save_fold,
                                                     time.strftime('%Y_%m_%d_%H_%M_%S', time.localtime()) + '.jpg')
                            cv2.imwrite(save_path, im0)
                        else:
                            if count == 1:
                                ori_fps = int(self.vid_cap.get(cv2.CAP_PROP_FPS))
                                if ori_fps == 0:
                                    ori_fps = 25
                                # width = int(self.vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
                                # height = int(self.vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
                                width, height = im0.shape[1], im0.shape[0]
                                save_path = os.path.join(self.save_fold,
                                                         time.strftime('%Y_%m_%d_%H_%M_%S', time.localtime()) + '.mp4')
                                self.out = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*"mp4v"), ori_fps,
                                                           (width, height))
                            self.out.write(im0)
                    if percent == self.percent_length:
                        print(count)
                        self.send_percent.emit(0)
                        self.send_msg.emit('finished')
                        if hasattr(self, 'out'):
                            self.out.release()
                        break

        except Exception as e:
            self.send_msg.emit('%s' % e)


class MainWindow(QMainWindow, Ui_mainWindow):
    def __init__(self, parent=None):
        super(MainWindow, self).__init__(parent)
        self.setupUi(self)
        self.m_flag = False

        # style 1: window can be stretched
        # self.setWindowFlags(Qt.CustomizeWindowHint | Qt.WindowStaysOnTopHint)
        # style 2: window can not be stretched
        self.setWindowFlags(Qt.Window | Qt.FramelessWindowHint
                            | Qt.WindowSystemMenuHint | Qt.WindowMinimizeButtonHint | Qt.WindowMaximizeButtonHint)
        # self.setWindowOpacity(0.85)  # Transparency of window

        self.minButton.clicked.connect(self.showMinimized)
        self.maxButton.clicked.connect(self.max_or_restore)
        # show Maximized window
        self.maxButton.animateClick(10)
        self.closeButton.clicked.connect(self.close)

        self.qtimer = QTimer(self)
        self.qtimer.setSingleShot(True)
        self.qtimer.timeout.connect(lambda: self.statistic_label.clear())

        # search models automatically
        self.comboBox.clear()
        self.pt_list = os.listdir('./pt')
        self.pt_list = [file for file in self.pt_list if file.endswith('.pt')]
        self.pt_list.sort(key=lambda x: os.path.getsize('./pt/' + x))
        self.comboBox.clear()
        self.comboBox.addItems(self.pt_list)
        self.qtimer_search = QTimer(self)
        self.qtimer_search.timeout.connect(lambda: self.search_pt())
        self.qtimer_search.start(2000)

        # yolov5 thread
        self.det_thread = DetThread()
        self.model_type = self.comboBox.currentText()
        self.det_thread.weights = "./pt/%s" % self.model_type
        self.det_thread.source = '0'
        self.det_thread.percent_length = self.progressBar.maximum()
        self.det_thread.send_raw.connect(lambda x: self.show_image(x, self.raw_video))
        self.det_thread.send_img.connect(lambda x: self.show_image(x, self.out_video))
        self.det_thread.send_statistic.connect(self.show_statistic)
        self.det_thread.send_msg.connect(lambda x: self.show_msg(x))
        self.det_thread.send_percent.connect(lambda x: self.progressBar.setValue(x))
        self.det_thread.send_fps.connect(lambda x: self.fps_label.setText(x))

        self.fileButton.clicked.connect(self.open_file)
        self.cameraButton.clicked.connect(self.chose_cam)
        self.rtspButton.clicked.connect(self.chose_rtsp)
        self.runButton.clicked.connect(self.run_or_continue)
        self.stopButton.clicked.connect(self.stop)

        self.comboBox.currentTextChanged.connect(self.change_model)
        self.confSpinBox.valueChanged.connect(lambda x: self.change_val(x, 'confSpinBox'))
        self.confSlider.valueChanged.connect(lambda x: self.change_val(x, 'confSlider'))
        self.iouSpinBox.valueChanged.connect(lambda x: self.change_val(x, 'iouSpinBox'))
        self.iouSlider.valueChanged.connect(lambda x: self.change_val(x, 'iouSlider'))
        self.rateSpinBox.valueChanged.connect(lambda x: self.change_val(x, 'rateSpinBox'))
        self.rateSlider.valueChanged.connect(lambda x: self.change_val(x, 'rateSlider'))

        self.checkBox.clicked.connect(self.checkrate)
        self.saveCheckBox.clicked.connect(self.is_save)
        self.load_setting()

    def search_pt(self):
        pt_list = os.listdir('./pt')
        pt_list = [file for file in pt_list if file.endswith('.pt')]
        pt_list.sort(key=lambda x: os.path.getsize('./pt/' + x))
        if pt_list != self.pt_list:
            self.pt_list = pt_list
            self.comboBox.clear()
            self.comboBox.addItems(self.pt_list)

    def is_save(self):
        if self.saveCheckBox.isChecked():
            self.det_thread.save_fold = './result'
        else:
            self.det_thread.save_fold = None

    def checkrate(self):
        if self.checkBox.isChecked():
            self.det_thread.rate_check = True
        else:
            self.det_thread.rate_check = False

    def chose_rtsp(self):
        self.rtsp_window = Window()
        config_file = 'config/ip.json'
        if not os.path.exists(config_file):
            ip = "rtsp://admin:admin888@192.168.1.67:555"
            new_config = {"ip": ip}
            new_json = json.dumps(new_config, ensure_ascii=False, indent=2)
            with open(config_file, 'w', encoding='utf-8') as f:
                f.write(new_json)
        else:
            config = json.load(open(config_file, 'r', encoding='utf-8'))
            ip = config['ip']
        self.rtsp_window.rtspEdit.setText(ip)
        self.rtsp_window.show()
        self.rtsp_window.rtspButton.clicked.connect(lambda: self.load_rtsp(self.rtsp_window.rtspEdit.text()))

    def load_rtsp(self, ip):
        try:
            self.stop()
            MessageBox(self.closeButton, title='Tips', text='Loading rtsp stream', time=1000, auto=True).exec_()
            self.det_thread.source = ip
            new_config = {"ip": ip}
            new_json = json.dumps(new_config, ensure_ascii=False, indent=2)
            with open('config/ip.json', 'w', encoding='utf-8') as f:
                f.write(new_json)
            self.statistic_msg('Loading rtsp:{}'.format(ip))
            self.rtsp_window.close()
        except Exception as e:
            self.statistic_msg('%s' % e)

    def chose_cam(self):
        try:
            self.stop()
            MessageBox(self.closeButton, title='Tips', text='Loading camera', time=2000, auto=True).exec_()
            # get the number of local cameras
            _, cams = Camera().get_cam_num()
            popMenu = QMenu()
            popMenu.setFixedWidth(self.cameraButton.width())
            popMenu.setStyleSheet('''
                                  QMenu {
                                  font-size: 16px;
                                  font-family: "Microsoft YaHei UI";
                                  font-weight: light;
                                  color:white;
                                  padding-left: 5px;
                                  padding-right: 5px;
                                  padding-top: 4px;
                                  padding-bottom: 4px;
                                  border-style: solid;
                                  border-width: 0px;
                                  border-color: rgba(255, 255, 255, 255);
                                  border-radius: 3px;
                                  background-color: rgba(200, 200, 200,50);}
                                  ''')

            for cam in cams:
                exec("action_%s = QAction('%s')" % (cam, cam))
                exec("popMenu.addAction(action_%s)" % cam)

            x = self.groupBox_5.mapToGlobal(self.cameraButton.pos()).x()
            y = self.groupBox_5.mapToGlobal(self.cameraButton.pos()).y()
            y = y + self.cameraButton.frameGeometry().height()
            pos = QPoint(x, y)
            action = popMenu.exec_(pos)
            if action:
                self.det_thread.source = action.text()
                self.statistic_msg('Loading camera:{}'.format(action.text()))
        except Exception as e:
            self.statistic_msg('%s' % e)

    def load_setting(self):
        config_file = 'config/setting.json'
        if not os.path.exists(config_file):
            iou = 0.26
            conf = 0.33
            rate = 10
            check = 0
            savecheck = 0
            new_config = {"iou": iou,
                          "conf": conf,
                          "rate": rate,
                          "check": check,
                          "savecheck": savecheck}
            new_json = json.dumps(new_config, ensure_ascii=False, indent=2)
            with open(config_file, 'w', encoding='utf-8') as f:
                f.write(new_json)
        else:
            config = json.load(open(config_file, 'r', encoding='utf-8'))
            if len(config) != 5:
                iou = 0.26
                conf = 0.33
                rate = 10
                check = 0
                savecheck = 0
            else:
                iou = config['iou']
                conf = config['conf']
                rate = config['rate']
                check = config['check']
                savecheck = config['savecheck']
        self.confSpinBox.setValue(conf)
        self.iouSpinBox.setValue(iou)
        self.rateSpinBox.setValue(rate)
        self.checkBox.setCheckState(check)
        self.det_thread.rate_check = check
        self.saveCheckBox.setCheckState(savecheck)
        self.is_save()

    def change_val(self, x, flag):
        if flag == 'confSpinBox':
            self.confSlider.setValue(int(x * 100))
        elif flag == 'confSlider':
            self.confSpinBox.setValue(x / 100)
            self.det_thread.conf_thres = x / 100
        elif flag == 'iouSpinBox':
            self.iouSlider.setValue(int(x * 100))
        elif flag == 'iouSlider':
            self.iouSpinBox.setValue(x / 100)
            self.det_thread.iou_thres = x / 100
        elif flag == 'rateSpinBox':
            self.rateSlider.setValue(x)
        elif flag == 'rateSlider':
            self.rateSpinBox.setValue(x)
            self.det_thread.rate = x * 10
        else:
            pass

    def statistic_msg(self, msg):
        self.statistic_label.setText(msg)
        # self.qtimer.start(3000)

    def show_msg(self, msg):
        self.runButton.setChecked(Qt.Unchecked)
        self.statistic_msg(msg)
        if msg == "Finished":
            self.saveCheckBox.setEnabled(True)

    def change_model(self, x):
        self.model_type = self.comboBox.currentText()
        self.det_thread.weights = "./pt/%s" % self.model_type
        self.statistic_msg('Change model to %s' % x)

    def open_file(self):
        config_file = 'config/fold.json'
        config = json.load(open(config_file, 'r', encoding='utf-8'))
        open_fold = config['open_fold']
        if not os.path.exists(open_fold):
            open_fold = os.getcwd()
        name, _ = QFileDialog.getOpenFileName(self, 'Video/image', open_fold, "Pic File(*.mp4 *.mkv *.avi *.flv "
                                                                              "*.jpg *.png)")
        if name:
            self.det_thread.source = name
            self.statistic_msg('Loaded file:{}'.format(os.path.basename(name)))
            config['open_fold'] = os.path.dirname(name)
            config_json = json.dumps(config, ensure_ascii=False, indent=2)
            with open(config_file, 'w', encoding='utf-8') as f:
                f.write(config_json)
            self.stop()

    def max_or_restore(self):
        if self.maxButton.isChecked():
            self.showMaximized()
        else:
            self.showNormal()

    def run_or_continue(self):
        self.det_thread.jump_out = False
        if self.runButton.isChecked():
            self.saveCheckBox.setEnabled(False)
            self.det_thread.is_continue = True
            if not self.det_thread.isRunning():
                self.det_thread.start()
            source = os.path.basename(self.det_thread.source)
            source = 'camera' if source.isnumeric() else source
            self.statistic_msg('Detecting >> model:{},file:{}'.
                               format(os.path.basename(self.det_thread.weights), source))
        else:
            self.det_thread.is_continue = False
            self.statistic_msg('Pause')

    def stop(self):
        self.det_thread.jump_out = True
        self.saveCheckBox.setEnabled(True)

    def mousePressEvent(self, event):
        self.m_Position = event.pos()
        if event.button() == Qt.LeftButton:
            if 0 < self.m_Position.x() < self.groupBox.pos().x() + self.groupBox.width() and \
                    0 < self.m_Position.y() < self.groupBox.pos().y() + self.groupBox.height():
                self.m_flag = True

    def mouseMoveEvent(self, QMouseEvent):
        if Qt.LeftButton and self.m_flag:
            self.move(QMouseEvent.globalPos() - self.m_Position)

    def mouseReleaseEvent(self, QMouseEvent):
        self.m_flag = False

    @staticmethod
    def show_image(img_src, label):
        try:
            ih, iw, _ = img_src.shape
            w = label.geometry().width()
            h = label.geometry().height()
            # keep original aspect ratio
            if iw / w > ih / h:
                scal = w / iw
                nw = w
                nh = int(scal * ih)
                img_src_ = cv2.resize(img_src, (nw, nh))
            else:
                scal = h / ih
                nw = int(scal * iw)
                nh = h
                img_src_ = cv2.resize(img_src, (nw, nh))

            frame = cv2.cvtColor(img_src_, cv2.COLOR_BGR2RGB)
            img = QImage(frame.data, frame.shape[1], frame.shape[0], frame.shape[2] * frame.shape[1],
                         QImage.Format_RGB888)
            label.setPixmap(QPixmap.fromImage(img))
        except Exception as e:
            print(repr(e))

    def show_statistic(self, statistic_dic):
        try:
            self.resultWidget.clear()
            statistic_dic = sorted(statistic_dic.items(), key=lambda x: x[1], reverse=True)
            statistic_dic = [i for i in statistic_dic if i[1] > 0]
            results = [' ' + str(i[0]) + ':' + str(i[1]) for i in statistic_dic]
            self.resultWidget.addItems(results)
        except Exception as e:
            print(repr(e))

    def closeEvent(self, event):
        self.det_thread.jump_out = True
        config_file = 'config/setting.json'
        config = dict()
        config['iou'] = self.iouSpinBox.value()
        config['conf'] = self.confSpinBox.value()
        config['rate'] = self.rateSpinBox.value()
        config['check'] = self.checkBox.checkState()
        config['savecheck'] = self.saveCheckBox.checkState()
        config_json = json.dumps(config, ensure_ascii=False, indent=2)
        with open(config_file, 'w', encoding='utf-8') as f:
            f.write(config_json)
        MessageBox(self.closeButton, title='Tips', text='Closing the program', time=2000, auto=True).exec_()
        sys.exit(0)


if __name__ == "__main__":
    # rotation matrix (transposed below)
    R = np.array([[9.1119371736959609e-01, -2.4815760576991752e-02, -4.1123009064654115e-01],
                  [4.1105811256386449e-01, -1.1909647756530584e-02, 9.1153134251420498e-01],
                  [-2.7517949080742898e-02, -9.9962109737505089e-01, -6.5127650722056341e-04]])
    R = R.T
    # translation vector
    # t = np.array([[-730.2794],
    #               [290.2519],
    #               [688.4792]])
    t = np.array([[1.0966499328613281e+01],
                  [-4.1683087348937988e+00],
                  [8.7983322143554688e-01]])
    # intrinsic matrix (transposed below)
    # IntrinsicMatrix = np.array([[423.0874, 0, 0],
    #                             [0, 418.7552, 0],
    #                             [652.5402, 460.2077, 1]])
    IntrinsicMatrix = np.array([[1.9770188633212194e+03, 0., 1.0126938349335526e+03],
                                [0., 1.9668641721787440e+03, 4.7095156301902404e+02],
                                [0., 0., 1.]])
    IntrinsicMatrix = IntrinsicMatrix.T
    # focal lengths
    f = [1.9770188633212194e+03, 1.9668641721787440e+03]
    # principal point
    principal_point = [1.0126938349335526e+03, 4.7095156301902404e+02]
    # radial distortion coefficients
    # K = [-0.3746, 0.1854, -0.0514]
    K = [1.0966499328613281e+01,
         -4.1683087348937988e+00,
         8.7983322143554688e-01]
    # tangential distortion coefficients
    # P = [0.0074, -0.0012]
    P = [-2.4283340903321522e-03,
         3.1736917344022848e-02]

    app = QApplication(sys.argv)
    myWin = MainWindow()
    myWin.show()
    # myWin.showMaximized()
    sys.exit(app.exec_())
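The ground-plane back-projection inside convert_2D_to_3D can be sanity-checked in isolation. The toy setup below uses an identity rotation and invented intrinsics (assumptions for illustration only, not this project's calibration): a pixel 200 px right of and below the principal point, seen from 5 m along the optical axis, should land on the Zw = 0 plane at (1, 1, 0).

```python
import numpy as np

# Toy check of the Zw = 0 back-projection (same algebra as convert_2D_to_3D).
# Identity rotation and made-up intrinsics -- NOT this project's calibration.
K_toy = np.array([[1000.0, 0.0, 640.0],
                  [0.0, 1000.0, 360.0],
                  [0.0, 0.0, 1.0]])
R_toy = np.eye(3)
t_toy = np.array([[0.0], [0.0], [5.0]])  # camera 5 m from the plane along Z

uv1 = np.array([[840.0], [560.0], [1.0]])                 # homogeneous pixel coordinate
mat1 = np.linalg.inv(R_toy) @ np.linalg.inv(K_toy) @ uv1  # R^-1 K^-1 [u, v, 1]^T
mat2 = np.linalg.inv(R_toy) @ t_toy                       # R^-1 t
s = (0.0 + mat2[2, 0]) / mat1[2, 0]                       # scale Zc that puts the point on Zw = 0
p1 = (mat1 * s - mat2).ravel()                            # world point [Xw, Yw, Zw]
print(p1)  # → [1. 1. 0.]
```

This mirrors the s1/p1 computation in the function above, minus the distortion correction step, and confirms the Zw component comes out as zero for a ground-plane point.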
