67. Deploying YOLOv8 Object Detection and Rotated Object Detection on the Atlas 200I DK A2 Development Board

Basic idea: deploy the YOLOv8 object-detection and rotated-object-detection models on the Atlas 200I DK A2 development board.

1. Converting the model

Model files: https://pan.baidu.com/s/1hJPX2QvybI4AGgeJKO6QgQ?pwd=q2s5 (extraction code: q2s5)

Export the PyTorch weights to ONNX with the ultralytics API:

from ultralytics import YOLO

# Load a model
model = YOLO("yolov8s.yaml")  # build a new model from scratch
model = YOLO("yolov8s.pt")  # load a pretrained model (recommended for training)

# Use the model
results = model(r"F:\zhy\Git\McQuic\bus.jpg")  # predict on an image
path = model.export(format="onnx")  # export the model to ONNX format
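Optionally, the exported ONNX can be sanity-checked with onnxruntime before going any further (a minimal sketch; assumes the onnxruntime wheel is installed and bus.jpg is present):

import cv2
import numpy as np
import onnxruntime as ort

# Run the exported model once on CPU and confirm the raw output layout
sess = ort.InferenceSession("yolov8s.onnx", providers=["CPUExecutionProvider"])
img = cv2.cvtColor(cv2.imread("bus.jpg"), cv2.COLOR_BGR2RGB)
blob = (cv2.resize(img, (640, 640)).transpose(2, 0, 1)[None] / 255.0).astype(np.float32)
outputs = sess.run(None, {sess.get_inputs()[0].name: blob})
print(outputs[0].shape)  # expect (1, 84, 8400) for YOLOv8s at 640x640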

Simplify the model with onnx-simplifier:

(base) root@davinci-mini:~/sxj731533730# python3 -m onnxsim yolov8s.onnx yolov8s_sim.onnx
Simplifying...
Finish! Here is the difference:
┏━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┓
┃            ┃ Original Model ┃ Simplified Model ┃
┡━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━┩
│ Add        │ 9              │ 8                │
│ Concat     │ 19             │ 19               │
│ Constant   │ 22             │ 0                │
│ Conv       │ 64             │ 64               │
│ Div        │ 2              │ 1                │
│ Gather     │ 1              │ 0                │
│ MaxPool    │ 3              │ 3                │
│ Mul        │ 60             │ 58               │
│ Reshape    │ 5              │ 5                │
│ Resize     │ 2              │ 2                │
│ Shape      │ 1              │ 0                │
│ Sigmoid    │ 58             │ 58               │
│ Slice      │ 2              │ 2                │
│ Softmax    │ 1              │ 1                │
│ Split      │ 9              │ 9                │
│ Sub        │ 2              │ 2                │
│ Transpose  │ 1              │ 1                │
│ Model Size │ 42.8MiB        │ 42.7MiB          │
└────────────┴────────────────┴──────────────────┘

Convert the model to an offline .om model on the Huawei board with ATC (--framework=5 selects ONNX input; --soc_version=Ascend310B1 matches the Atlas 200I DK A2):

(base) root@davinci-mini:~/sxj731533730# atc --model=yolov8s.onnx --framework=5 --output=yolov8s --input_format=NCHW --input_shape="images:1,3,640,640" --log=error --soc_version=Ascend310B1
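If the ais_bench wheel is installed, the generated yolov8s.om can be given a quick pure-inference benchmark before writing any application code (a sketch of the standard ais_bench invocation; the loop count is arbitrary):

(base) root@davinci-mini:~/sxj731533730# python3 -m ais_bench --model yolov8s.om --loop 10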

Configure PyCharm Professional

PyCharm Professional settings:

Interpreter: /usr/local/miniconda3/bin/python3

Environment variables (PyCharm takes these as one semicolon-separated string; split here for readability):
LD_LIBRARY_PATH=/usr/local/Ascend/ascend-toolkit/latest/lib64:/usr/local/Ascend/ascend-toolkit/latest/lib64/plugin/opskernel:/usr/local/Ascend/ascend-toolkit/latest/lib64/plugin/nnengine:$LD_LIBRARY_PATH
PYTHONPATH=/usr/local/Ascend/ascend-toolkit/latest/python/site-packages:/usr/local/Ascend/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe:$PYTHONPATH
PATH=/usr/local/Ascend/ascend-toolkit/latest/bin:/usr/local/Ascend/ascend-toolkit/latest/compiler/ccec_compiler/bin:$PATH
ASCEND_AICPU_PATH=/usr/local/Ascend/ascend-toolkit/latest
ASCEND_OPP_PATH=/usr/local/Ascend/ascend-toolkit/latest/opp
TOOLCHAIN_HOME=/usr/local/Ascend/ascend-toolkit/latest/toolkit
ASCEND_HOME_PATH=/usr/local/Ascend/ascend-toolkit/latest

Run command:
/usr/local/miniconda3/bin/python3.9 /home/sxj731533730/sgagtyeray.py
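With the interpreter and environment in place, a minimal smoke test (a sketch; assumes yolov8s.om sits in the working directory and device 0 is free) confirms that InferSession loads the offline model and returns the expected output shape:

import numpy as np
from ais_bench.infer.interface import InferSession

# Feed an all-zero NCHW tensor through the .om model on device 0
session = InferSession(0, "yolov8s.om")
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)
outputs = session.infer([dummy])
print([o.shape for o in outputs])  # expect [(1, 84, 8400)]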

Python test code

import os
import pickle
import random

import cv2
import numpy as np
from ais_bench.infer.interface import InferSession

class_names = ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat',
               'traffic light', 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog',
               'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella',
               'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', 'kite',
               'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', 'bottle',
               'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich',
               'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
               'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote',
               'keyboard', 'cell phone', 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book',
               'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush']

# Create a list of colors for each class where each color is a tuple of 3 integer values
rng = np.random.default_rng(3)
colors = rng.uniform(0, 255, size=(len(class_names), 3))

model_path = 'yolov8s.om'
IMG_PATH = 'bus.jpg'
conf_threshold = 0.5
iou_threshold = 0.4
input_w = 640
input_h = 640


def get_img_path_batches(batch_size, img_dir):
    # Walk img_dir and group image paths into batches of batch_size
    ret = []
    batch = []
    for root, dirs, files in os.walk(img_dir):
        for name in files:
            if len(batch) == batch_size:
                ret.append(batch)
                batch = []
            batch.append(os.path.join(root, name))
    if len(batch) > 0:
        ret.append(batch)
    return ret


def plot_one_box(x, img, color=None, label=None, line_thickness=None):
    """
    description: Plots one bounding box on image img; this function comes from the YOLOv8 project.
    param:
        x:      a box like [x1, y1, x2, y2]
        img:    an opencv image object
        color:  color to draw rectangle, such as (0, 255, 0)
        label:  str
        line_thickness: int
    return:
        no return
    """
    tl = (line_thickness or round(0.002 * (img.shape[0] + img.shape[1]) / 2) + 1)  # line/font thickness
    color = color or [random.randint(0, 255) for _ in range(3)]
    c1, c2 = (int(x[0]), int(x[1])), (int(x[2]), int(x[3]))
    cv2.rectangle(img, c1, c2, color, thickness=tl, lineType=cv2.LINE_AA)
    if label:
        tf = max(tl - 1, 1)  # font thickness
        t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0]
        c2 = c1[0] + t_size[0], c1[1] - t_size[1] - 3
        cv2.rectangle(img, c1, c2, color, -1, cv2.LINE_AA)  # filled
        cv2.putText(img, label, (c1[0], c1[1] - 2), 0, tl / 3, [225, 255, 255],
                    thickness=tf, lineType=cv2.LINE_AA)


def preprocess_image(raw_bgr_image):
    """
    description: Convert BGR image to RGB, resize and pad it to target size,
                 normalize to [0, 1], transform to NCHW format.
    param:
        raw_bgr_image: the input BGR image
    return:
        image:      the processed image
        image_raw:  the original image
        h:          original height
        w:          original width
        r_h, r_w:   resize ratios
    """
    image_raw = raw_bgr_image
    h, w, c = image_raw.shape
    image = cv2.cvtColor(image_raw, cv2.COLOR_BGR2RGB)
    # Calculate width, height and paddings
    r_w = input_w / w
    r_h = input_h / h
    if r_h > r_w:
        tw = input_w
        th = int(r_w * h)
        tx1 = tx2 = 0
        ty1 = int((input_h - th) / 2)
        ty2 = input_h - th - ty1
    else:
        tw = int(r_h * w)
        th = input_h
        tx1 = int((input_w - tw) / 2)
        tx2 = input_w - tw - tx1
        ty1 = ty2 = 0
    # Resize the long side while maintaining the aspect ratio
    image = cv2.resize(image, (tw, th))
    # Pad the short side with (128, 128, 128)
    image = cv2.copyMakeBorder(image, ty1, ty2, tx1, tx2, cv2.BORDER_CONSTANT, None, (128, 128, 128))
    image = image.astype(np.float32)
    # Normalize to [0, 1]
    image /= 255.0
    # HWC to CHW format
    image = np.transpose(image, [2, 0, 1])
    # CHW to NCHW format
    image = np.expand_dims(image, axis=0)
    # Convert the image to row-major order, also known as "C order"
    image = np.ascontiguousarray(image)
    return image, image_raw, h, w, r_h, r_w


def xywh2xyxy_letterbox(origin_h, origin_w, x):
    """
    description: Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where
                 xy1=top-left, xy2=bottom-right, undoing the letterbox padding.
                 (Renamed from xywh2xyxy so it is not shadowed by the plain
                 xywh2xyxy defined below.)
    param:
        origin_h:   height of original image
        origin_w:   width of original image
        x:          a boxes numpy array, each row is a box [center_x, center_y, w, h]
    return:
        y:          a boxes numpy array, each row is a box [x1, y1, x2, y2]
    """
    y = np.zeros_like(x)
    r_w = input_w / origin_w
    r_h = input_h / origin_h
    if r_h > r_w:
        y[:, 0] = x[:, 0]
        y[:, 2] = x[:, 2]
        y[:, 1] = x[:, 1] - (input_h - r_w * origin_h) / 2
        y[:, 3] = x[:, 3] - (input_h - r_w * origin_h) / 2
        y /= r_w
    else:
        y[:, 0] = x[:, 0] - (input_w - r_h * origin_w) / 2
        y[:, 2] = x[:, 2] - (input_w - r_h * origin_w) / 2
        y[:, 1] = x[:, 1]
        y[:, 3] = x[:, 3]
        y /= r_h
    return y


def rescale_boxes(boxes, img_width, img_height, input_width, input_height):
    # Rescale boxes to original image dimensions
    input_shape = np.array([input_width, input_height, input_width, input_height])
    boxes = np.divide(boxes, input_shape, dtype=np.float32)
    boxes *= np.array([img_width, img_height, img_width, img_height])
    return boxes


def xywh2xyxy(x):
    # Convert bounding box (x, y, w, h) to bounding box (x1, y1, x2, y2)
    y = np.copy(x)
    y[..., 0] = x[..., 0] - x[..., 2] / 2
    y[..., 1] = x[..., 1] - x[..., 3] / 2
    y[..., 2] = x[..., 0] + x[..., 2] / 2
    y[..., 3] = x[..., 1] + x[..., 3] / 2
    return y


def compute_iou(box, boxes):
    # Compute xmin, ymin, xmax, ymax for both boxes
    xmin = np.maximum(box[0], boxes[:, 0])
    ymin = np.maximum(box[1], boxes[:, 1])
    xmax = np.minimum(box[2], boxes[:, 2])
    ymax = np.minimum(box[3], boxes[:, 3])
    # Compute intersection area
    intersection_area = np.maximum(0, xmax - xmin) * np.maximum(0, ymax - ymin)
    # Compute union area
    box_area = (box[2] - box[0]) * (box[3] - box[1])
    boxes_area = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    union_area = box_area + boxes_area - intersection_area
    # Compute IoU
    iou = intersection_area / union_area
    return iou


def nms(boxes, scores, iou_threshold):
    # Sort by score
    sorted_indices = np.argsort(scores)[::-1]
    keep_boxes = []
    while sorted_indices.size > 0:
        # Pick the box with the highest remaining score
        box_id = sorted_indices[0]
        keep_boxes.append(box_id)
        # Compute IoU of the picked box with the rest
        ious = compute_iou(boxes[box_id, :], boxes[sorted_indices[1:], :])
        # Remove boxes with IoU over the threshold
        keep_indices = np.where(ious < iou_threshold)[0]
        sorted_indices = sorted_indices[keep_indices + 1]
    return keep_boxes


def multiclass_nms(boxes, scores, class_ids, iou_threshold):
    # Run NMS independently for each detected class
    unique_class_ids = np.unique(class_ids)
    keep_boxes = []
    for class_id in unique_class_ids:
        class_indices = np.where(class_ids == class_id)[0]
        class_boxes = boxes[class_indices, :]
        class_scores = scores[class_indices]
        class_keep_boxes = nms(class_boxes, class_scores, iou_threshold)
        keep_boxes.extend(class_indices[class_keep_boxes])
    return keep_boxes


def extract_boxes(predictions, img_width, img_height, input_width, input_height):
    # Extract boxes from predictions
    boxes = predictions[:, :4]
    # Scale boxes to original image dimensions
    boxes = rescale_boxes(boxes, img_width, img_height, input_width, input_height)
    # Convert boxes to xyxy format
    boxes = xywh2xyxy(boxes)
    return boxes


def process_output(output, img_width, img_height, input_width, input_height):
    # (1, 84, 8400) -> (8400, 84): one row per anchor, 4 box coords + 80 class scores
    predictions = np.squeeze(output[0]).T
    # Filter out object confidence scores below threshold
    scores = np.max(predictions[:, 4:], axis=1)
    predictions = predictions[scores > conf_threshold, :]
    scores = scores[scores > conf_threshold]
    if len(scores) == 0:
        return [], [], []
    # Get the class with the highest confidence
    class_ids = np.argmax(predictions[:, 4:], axis=1)
    # Get bounding boxes for each object
    boxes = extract_boxes(predictions, img_width, img_height, input_width, input_height)
    # Apply non-maximum suppression to suppress weak, overlapping bounding boxes
    # indices = nms(boxes, scores, self.iou_threshold)
    indices = multiclass_nms(boxes, scores, class_ids, iou_threshold)
    return boxes[indices], scores[indices], class_ids[indices]


def bbox_iou(box1, box2, x1y1x2y2=True):
    """
    description: Compute the IoU of two sets of bounding boxes.
    param:
        box1: a box coordinate (can be (x1, y1, x2, y2) or (x, y, w, h))
        box2: a box coordinate (can be (x1, y1, x2, y2) or (x, y, w, h))
        x1y1x2y2: select the coordinate format
    return:
        iou: computed IoU
    """
    if not x1y1x2y2:
        # Transform from center and width to exact coordinates
        b1_x1, b1_x2 = box1[:, 0] - box1[:, 2] / 2, box1[:, 0] + box1[:, 2] / 2
        b1_y1, b1_y2 = box1[:, 1] - box1[:, 3] / 2, box1[:, 1] + box1[:, 3] / 2
        b2_x1, b2_x2 = box2[:, 0] - box2[:, 2] / 2, box2[:, 0] + box2[:, 2] / 2
        b2_y1, b2_y2 = box2[:, 1] - box2[:, 3] / 2, box2[:, 1] + box2[:, 3] / 2
    else:
        # Get the coordinates of bounding boxes
        b1_x1, b1_y1, b1_x2, b1_y2 = box1[:, 0], box1[:, 1], box1[:, 2], box1[:, 3]
        b2_x1, b2_y1, b2_x2, b2_y2 = box2[:, 0], box2[:, 1], box2[:, 2], box2[:, 3]
    # Get the coordinates of the intersection rectangle
    inter_rect_x1 = np.maximum(b1_x1, b2_x1)
    inter_rect_y1 = np.maximum(b1_y1, b2_y1)
    inter_rect_x2 = np.minimum(b1_x2, b2_x2)
    inter_rect_y2 = np.minimum(b1_y2, b2_y2)
    # Intersection area
    inter_area = np.clip(inter_rect_x2 - inter_rect_x1 + 1, 0, None) * \
                 np.clip(inter_rect_y2 - inter_rect_y1 + 1, 0, None)
    # Union area
    b1_area = (b1_x2 - b1_x1 + 1) * (b1_y2 - b1_y1 + 1)
    b2_area = (b2_x2 - b2_x1 + 1) * (b2_y2 - b2_y1 + 1)
    iou = inter_area / (b1_area + b2_area - inter_area + 1e-16)
    return iou


def non_max_suppression(prediction, origin_h, origin_w, conf_thres=0.5, nms_thres=0.4):
    """
    description: Remove detections with an object confidence score lower than
                 'conf_thres' and perform non-maximum suppression to further
                 filter detections. (Unused in this demo; kept together with
                 the letterbox helpers above.)
    param:
        prediction: detections, (x1, y1, x2, y2, conf, cls_id)
        origin_h:   original image height
        origin_w:   original image width
        conf_thres: a confidence threshold to filter detections
        nms_thres:  an IoU threshold to filter detections
    return:
        boxes: output after nms with the shape (x1, y1, x2, y2, conf, cls_id)
    """
    # Get the boxes that score > conf_thres
    boxes = prediction[prediction[:, 4] >= conf_thres]
    # Transform bbox from [center_x, center_y, w, h] to [x1, y1, x2, y2]
    boxes[:, :4] = xywh2xyxy_letterbox(origin_h, origin_w, boxes[:, :4])
    # Clip the coordinates
    boxes[:, 0] = np.clip(boxes[:, 0], 0, origin_w - 1)
    boxes[:, 2] = np.clip(boxes[:, 2], 0, origin_w - 1)
    boxes[:, 1] = np.clip(boxes[:, 1], 0, origin_h - 1)
    boxes[:, 3] = np.clip(boxes[:, 3], 0, origin_h - 1)
    # Object confidence
    confs = boxes[:, 4]
    # Sort by confidence
    boxes = boxes[np.argsort(-confs)]
    # Perform non-maximum suppression
    keep_boxes = []
    while boxes.shape[0]:
        large_overlap = bbox_iou(np.expand_dims(boxes[0, :4], 0), boxes[:, :4]) > nms_thres
        label_match = boxes[0, -1] == boxes[:, -1]
        # Indices of boxes with lower confidence scores, large IoUs and matching labels
        invalid = large_overlap & label_match
        keep_boxes += [boxes[0]]
        boxes = boxes[~invalid]
    boxes = np.stack(keep_boxes, 0) if len(keep_boxes) else np.array([])
    return boxes


def draw_box(image: np.ndarray, box: np.ndarray, color: tuple[int, int, int] = (0, 0, 255),
             thickness: int = 2) -> np.ndarray:
    x1, y1, x2, y2 = box.astype(int)
    return cv2.rectangle(image, (x1, y1), (x2, y2), color, thickness)


def draw_text(image: np.ndarray, text: str, box: np.ndarray, color: tuple[int, int, int] = (0, 0, 255),
              font_size: float = 0.001, text_thickness: int = 2) -> np.ndarray:
    x1, y1, x2, y2 = box.astype(int)
    (tw, th), _ = cv2.getTextSize(text=text, fontFace=cv2.FONT_HERSHEY_SIMPLEX,
                                  fontScale=font_size, thickness=text_thickness)
    th = int(th * 1.2)
    cv2.rectangle(image, (x1, y1), (x1 + tw, y1 - th), color, -1)
    return cv2.putText(image, text, (x1, y1), cv2.FONT_HERSHEY_SIMPLEX, font_size,
                       (255, 255, 255), text_thickness, cv2.LINE_AA)


def draw_detections(image, boxes, scores, class_ids, mask_alpha=0.3):
    det_img = image.copy()
    img_height, img_width = image.shape[:2]
    font_size = min([img_height, img_width]) * 0.0006
    text_thickness = int(min([img_height, img_width]) * 0.001)
    # Draw bounding boxes and labels of detections
    for class_id, box, score in zip(class_ids, boxes, scores):
        color = colors[class_id]
        draw_box(det_img, box, color)
        label = class_names[class_id]
        caption = f'{label} {int(score * 100)}%'
        draw_text(det_img, caption, box, color, font_size, text_thickness)
    return det_img


if __name__ == "__main__":
    # Initialize the inference session on device 0
    model = InferSession(0, model_path)
    image = cv2.imread(IMG_PATH)
    h, w, _ = image.shape
    # img, ratio, (dw, dh) = letterbox(img, new_shape=(IMG_SIZE, IMG_SIZE))
    img = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (input_w, input_h))
    img_ = img / 255.0  # normalize to [0, 1]
    img_ = img_.transpose(2, 0, 1)
    img_ = np.ascontiguousarray(img_, dtype=np.float32)
    outputs = model.infer([img_])
    # Dump and reload the raw outputs (handy for debugging postprocessing offline)
    with open('yolov8.pkl', 'wb') as f:
        pickle.dump(outputs, f)
    with open('yolov8.pkl', 'rb') as f:
        outputs = pickle.load(f)
    boxes, scores, class_ids = process_output(outputs, w, h, input_w, input_h)
    print(boxes, scores, class_ids)
    original_image = draw_detections(image, boxes, scores, class_ids, mask_alpha=0.3)
    cv2.imwrite("result.jpg", original_image)
    print("------------------")
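The get_img_path_batches helper above is unused by the single-image demo; a sketch of how it could drive a directory sweep with the same pipeline (the images/ directory name is illustrative):

# Hypothetical batch loop reusing the helpers and the InferSession from above
for batch in get_img_path_batches(batch_size=1, img_dir="images"):
    for img_path in batch:
        image = cv2.imread(img_path)
        h, w, _ = image.shape
        img = cv2.resize(cv2.cvtColor(image, cv2.COLOR_BGR2RGB), (input_w, input_h))
        blob = np.ascontiguousarray(img.transpose(2, 0, 1) / 255.0, dtype=np.float32)
        boxes, scores, class_ids = process_output(model.infer([blob]), w, h, input_w, input_h)
        cv2.imwrite(os.path.basename(img_path) + "_det.jpg",
                    draw_detections(image, boxes, scores, class_ids))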

Test results

[INFO] acl init success
[INFO] open device 0 success
[INFO] load model yolov8s.om success
[INFO] create model description success
[[5.0941406e+01 4.0057031e+02 2.4648047e+02 9.0386719e+02]
 [6.6951562e+02 3.9065625e+02 8.1000000e+02 8.8087500e+02]
 [2.2377832e+02 4.0732031e+02 3.4512012e+02 8.5999219e+02]
 [3.1640625e-01 5.5096875e+02 7.3762207e+01 8.7075000e+02]
 [3.1640625e+00 2.2865625e+02 8.0873438e+02 7.4503125e+02]] [0.9140625  0.9121094  0.87597656 0.55566406 0.8984375 ] [0 0 0 0 5]
------------------
[INFO] unload model success, model Id is 1
[INFO] end to destroy context
[INFO] end to reset device is 0
[INFO] end to finalize acl

Process finished with exit code 0

2. C++ version

CMakeLists.txt

cmake_minimum_required(VERSION 3.16)
project(untitled10)
set(CMAKE_CXX_FLAGS "-std=c++11")
set(CMAKE_CXX_STANDARD 11)
add_definitions(-DENABLE_DVPP_INTERFACE)

include_directories(/usr/local/samples/cplusplus/common/acllite/include)
include_directories(/usr/local/Ascend/ascend-toolkit/latest/aarch64-linux/include)

find_package(OpenCV REQUIRED)
#message(STATUS ${OpenCV_INCLUDE_DIRS})
# add OpenCV header files
include_directories(${OpenCV_INCLUDE_DIRS})

# import the prebuilt ACL / AclLite shared libraries
add_library(libascendcl SHARED IMPORTED)
set_target_properties(libascendcl PROPERTIES IMPORTED_LOCATION /usr/local/Ascend/ascend-toolkit/latest/aarch64-linux/lib64/libascendcl.so)
add_library(libacllite SHARED IMPORTED)
set_target_properties(libacllite PROPERTIES IMPORTED_LOCATION /usr/local/samples/cplusplus/common/acllite/out/aarch64/libacllite.so)

add_executable(untitled10 main.cpp)
# link OpenCV and the Ascend libraries
target_link_libraries(untitled10 ${OpenCV_LIBS} libascendcl libacllite)

main.cpp

#include <opencv2/opencv.hpp>
#include "AclLiteUtils.h"
#include "AclLiteImageProc.h"
#include "AclLiteResource.h"
#include "AclLiteError.h"
#include "AclLiteModel.h"
#include <opencv2/dnn.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <algorithm>
#include <cstdio>
#include <cstring>

using namespace std;
using namespace cv;

typedef enum Result {
    SUCCESS = 0,
    FAILED = 1
} Result;

std::vector<std::string> CLASS_NAMES = {
        "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light",
        "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow",
        "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee",
        "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard",
        "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple",
        "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch",
        "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone",
        "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear",
        "hair drier", "toothbrush"};

const std::vector<std::vector<float>> COLOR_LIST = {
        {1, 1, 1}, {0.098, 0.325, 0.850}, {0.125, 0.694, 0.929}, {0.556, 0.184, 0.494},
        {0.188, 0.674, 0.466}, {0.933, 0.745, 0.301}, {0.184, 0.078, 0.635}, {0.300, 0.300, 0.300},
        {0.600, 0.600, 0.600}, {0.000, 0.000, 1.000}, {0.000, 0.500, 1.000}, {0.000, 0.749, 0.749},
        {0.000, 1.000, 0.000}, {1.000, 0.000, 0.000}, {1.000, 0.000, 0.667}, {0.000, 0.333, 0.333},
        {0.000, 0.667, 0.333}, {0.000, 1.000, 0.333}, {0.000, 0.333, 0.667}, {0.000, 0.667, 0.667},
        {0.000, 1.000, 0.667}, {0.000, 0.333, 1.000}, {0.000, 0.667, 1.000}, {0.000, 1.000, 1.000},
        {0.500, 0.333, 0.000}, {0.500, 0.667, 0.000}, {0.500, 1.000, 0.000}, {0.500, 0.000, 0.333},
        {0.500, 0.333, 0.333}, {0.500, 0.667, 0.333}, {0.500, 1.000, 0.333}, {0.500, 0.000, 0.667},
        {0.500, 0.333, 0.667}, {0.500, 0.667, 0.667}, {0.500, 1.000, 0.667}, {0.500, 0.000, 1.000},
        {0.500, 0.333, 1.000}, {0.500, 0.667, 1.000}, {0.500, 1.000, 1.000}, {1.000, 0.333, 0.000},
        {1.000, 0.667, 0.000}, {1.000, 1.000, 0.000}, {1.000, 0.000, 0.333}, {1.000, 0.333, 0.333},
        {1.000, 0.667, 0.333}, {1.000, 1.000, 0.333}, {1.000, 0.000, 0.667}, {1.000, 0.333, 0.667},
        {1.000, 0.667, 0.667}, {1.000, 1.000, 0.667}, {1.000, 0.000, 1.000}, {1.000, 0.333, 1.000},
        {1.000, 0.667, 1.000}, {0.000, 0.000, 0.333}, {0.000, 0.000, 0.500}, {0.000, 0.000, 0.667},
        {0.000, 0.000, 0.833}, {0.000, 0.000, 1.000}, {0.000, 0.167, 0.000}, {0.000, 0.333, 0.000},
        {0.000, 0.500, 0.000}, {0.000, 0.667, 0.000}, {0.000, 0.833, 0.000}, {0.000, 1.000, 0.000},
        {0.167, 0.000, 0.000}, {0.333, 0.000, 0.000}, {0.500, 0.000, 0.000}, {0.667, 0.000, 0.000},
        {0.833, 0.000, 0.000}, {1.000, 0.000, 0.000}, {0.000, 0.000, 0.000}, {0.143, 0.143, 0.143},
        {0.286, 0.286, 0.286}, {0.429, 0.429, 0.429}, {0.571, 0.571, 0.571}, {0.714, 0.714, 0.714},
        {0.857, 0.857, 0.857}, {0.741, 0.447, 0.000}, {0.741, 0.717, 0.314}, {0.000, 0.500, 0.500}};

struct Object {
    // The object class.
    int label{};
    // The detection's confidence probability.
    float probability{};
    // The object bounding box rectangle.
    cv::Rect_<float> rect;
};

float clamp_T(float value, float low, float high) {
    if (value < low) return low;
    if (high < value) return high;
    return value;
}

void drawObjectLabels(cv::Mat &image, const std::vector<Object> &objects, unsigned int scale = 1) {
    // Bounding boxes and annotations
    for (auto &object : objects) {
        // Choose the color
        int colorIndex = object.label % COLOR_LIST.size(); // We have only defined 80 unique colors
        cv::Scalar color = cv::Scalar(COLOR_LIST[colorIndex][0],
                                      COLOR_LIST[colorIndex][1],
                                      COLOR_LIST[colorIndex][2]);
        float meanColor = cv::mean(color)[0];
        cv::Scalar txtColor;
        if (meanColor > 0.5) {
            txtColor = cv::Scalar(0, 0, 0);
        } else {
            txtColor = cv::Scalar(255, 255, 255);
        }
        const auto &rect = object.rect;
        // Draw rectangles and text
        char text[256];
        sprintf(text, "%s %.1f%%", CLASS_NAMES[object.label].c_str(), object.probability * 100);
        int baseLine = 0;
        cv::Size labelSize = cv::getTextSize(text, cv::FONT_HERSHEY_SIMPLEX, 0.35 * scale, scale, &baseLine);
        cv::Scalar txt_bk_color = color * 0.7 * 255;
        int x = object.rect.x;
        int y = object.rect.y + 1;
        cv::rectangle(image, rect, color * 255, scale + 1);
        cv::rectangle(image, cv::Rect(cv::Point(x, y), cv::Size(labelSize.width, labelSize.height + baseLine)),
                      txt_bk_color, -1);
        cv::putText(image, text, cv::Point(x, y + labelSize.height), cv::FONT_HERSHEY_SIMPLEX, 0.35 * scale,
                    txtColor, scale);
    }
}

int main() {
    const char *modelPath = "../model/yolov8s.om";
    const char *fileName = "../bus.jpg";
    auto image_process_mode = "letter_box";
    const int label_num = 80;
    const float nms_threshold = 0.6;
    const float conf_threshold = 0.25;
    int TOP_K = 100;
    std::vector<int> iImgSize = {640, 640};
    float resizeScales = 1;

    cv::Mat iImg = cv::imread(fileName);
    int m_imgWidth = iImg.cols;
    int m_imgHeight = iImg.rows;
    cv::Mat oImg = iImg.clone();
    cv::cvtColor(oImg, oImg, cv::COLOR_BGR2RGB);
    cv::Mat detect_image;
    cv::resize(oImg, detect_image, cv::Size(iImgSize.at(0), iImgSize.at(1)));
    float m_ratio_w = 1.f / (iImgSize.at(0) / static_cast<float>(iImg.cols));
    float m_ratio_h = 1.f / (iImgSize.at(1) / static_cast<float>(iImg.rows));
    float *imageBytes;

    AclLiteResource aclResource_;
    AclLiteImageProc imageProcess_;
    AclLiteModel model_;
    aclrtRunMode runMode_;
    ImageData resizedImage_;
    const char *modelPath_;
    int32_t modelWidth_;
    int32_t modelHeight_;

    AclLiteError ret = aclResource_.Init();
    if (ret == FAILED) {
        ACLLITE_LOG_ERROR("resource init failed, errorCode is %d", ret);
        return FAILED;
    }
    ret = aclrtGetRunMode(&runMode_);
    if (ret == FAILED) {
        ACLLITE_LOG_ERROR("get runMode failed, errorCode is %d", ret);
        return FAILED;
    }
    // init dvpp resource
    ret = imageProcess_.Init();
    if (ret == FAILED) {
        ACLLITE_LOG_ERROR("imageProcess init failed, errorCode is %d", ret);
        return FAILED;
    }
    // load model from file
    ret = model_.Init(modelPath);
    if (ret == FAILED) {
        ACLLITE_LOG_ERROR("model init failed, errorCode is %d", ret);
        return FAILED;
    }

    // data standardization
    float meanRgb[3] = {0, 0, 0};
    float stdRgb[3] = {1 / 255.0f, 1 / 255.0f, 1 / 255.0f};
    int32_t channel = detect_image.channels();
    int32_t resizeHeight = detect_image.rows;
    int32_t resizeWeight = detect_image.cols;
    imageBytes = (float *) malloc(channel * (iImgSize.at(0)) * (iImgSize.at(1)) * sizeof(float));
    memset(imageBytes, 0, channel * (iImgSize.at(0)) * (iImgSize.at(1)) * sizeof(float));
    // image to bytes with shape HWC to CHW, and switch channel BGR to RGB
    for (int c = 0; c < channel; ++c) {
        for (int h = 0; h < resizeHeight; ++h) {
            for (int w = 0; w < resizeWeight; ++w) {
                int dstIdx = c * resizeHeight * resizeWeight + h * resizeWeight + w;
                imageBytes[dstIdx] = static_cast<float>(
                        (detect_image.at<cv::Vec3b>(h, w)[c] - 1.0f * meanRgb[c]) * 1.0f * stdRgb[c]);
            }
        }
    }

    std::vector<InferenceOutput> inferOutputs;
    ret = model_.CreateInput(static_cast<void *>(imageBytes),
                             channel * iImgSize.at(0) * iImgSize.at(1) * sizeof(float));
    if (ret == FAILED) {
        ACLLITE_LOG_ERROR("CreateInput failed, errorCode is %d", ret);
        return FAILED;
    }
    // inference
    ret = model_.Execute(inferOutputs);
    if (ret != ACL_SUCCESS) {
        ACLLITE_LOG_ERROR("execute model failed, errorCode is %d", ret);
        return FAILED;
    }

    float *rawData = static_cast<float *>(inferOutputs[0].data.get());
    std::vector<cv::Rect> bboxes;
    std::vector<float> scores;
    std::vector<int> labels;
    std::vector<int> indices;
    auto numChannels = 84;
    auto numAnchors = 8400;
    auto numClasses = CLASS_NAMES.size();
    cv::Mat output = cv::Mat(numChannels, numAnchors, CV_32F, rawData);
    output = output.t();
    // Get all the YOLO proposals
    for (int i = 0; i < numAnchors; i++) {
        auto rowPtr = output.row(i).ptr<float>();
        auto bboxesPtr = rowPtr;
        auto scoresPtr = rowPtr + 4;
        auto maxSPtr = std::max_element(scoresPtr, scoresPtr + numClasses);
        float score = *maxSPtr;
        if (score > conf_threshold) {
            float x = *bboxesPtr++;
            float y = *bboxesPtr++;
            float w = *bboxesPtr++;
            float h = *bboxesPtr;
            float x0 = clamp_T((x - 0.5f * w) * m_ratio_w, 0.f, m_imgWidth);
            float y0 = clamp_T((y - 0.5f * h) * m_ratio_h, 0.f, m_imgHeight);
            float x1 = clamp_T((x + 0.5f * w) * m_ratio_w, 0.f, m_imgWidth);
            float y1 = clamp_T((y + 0.5f * h) * m_ratio_h, 0.f, m_imgHeight);
            int label = maxSPtr - scoresPtr;
            cv::Rect_<float> bbox;
            bbox.x = x0;
            bbox.y = y0;
            bbox.width = x1 - x0;
            bbox.height = y1 - y0;
            bboxes.push_back(bbox);
            labels.push_back(label);
            scores.push_back(score);
        }
    }
    // Run NMS
    cv::dnn::NMSBoxes(bboxes, scores, conf_threshold, nms_threshold, indices);
    std::vector<Object> objects;
    // Choose the top k detections
    int cnt = 0;
    for (auto &chosenIdx : indices) {
        if (cnt >= TOP_K) {
            break;
        }
        Object obj{};
        obj.probability = scores[chosenIdx];
        obj.label = labels[chosenIdx];
        obj.rect = bboxes[chosenIdx];
        objects.push_back(obj);
        cnt += 1;
    }
    drawObjectLabels(iImg, objects);
    cv::imwrite("../result_c++.jpg", iImg);
    model_.DestroyResource();
    imageProcess_.DestroyResource();
    aclResource_.Release();
    return SUCCESS;
}
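A typical out-of-source build and run on the board looks like the following (directory names are illustrative; the program expects ../model/yolov8s.om and ../bus.jpg relative to the build directory):

(base) root@davinci-mini:~/untitled10# mkdir -p build && cd build
(base) root@davinci-mini:~/untitled10/build# cmake .. && make
(base) root@davinci-mini:~/untitled10/build# ./untitled10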

Test results

3. Rotated object detection

To be continued.
