YOLOv8x-P2 TensorRT inference

Overview

YOLOv8 originally ships in several sizes — n, s, m, l, x (model scale increases in that order, controlled by the depth, width, and max_channels constants) — and all of them detect from image features at P3, P4, and P5.
A standard YOLOv8 detection model therefore has three output levels (P3, P4, P5). To improve small-object detection, newer releases of YOLOv8 also include a P2 configuration with four output levels: the P2 level goes through fewer convolutions, so its feature map keeps a larger resolution, which favors small objects. The backbone structure is unchanged; only the Neck and Head are restructured.
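To make the resolution gain concrete: with a 640×640 input, the per-level grids are P2 = 160×160 (stride 4), P3 = 80×80 (stride 8), P4 = 40×40 (stride 16), and P5 = 20×20 (stride 32), so the new P2 level alone contributes 25600 of the 34000 candidate cells.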

yolov8-p2 yaml

# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P2-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80  # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]
  s: [0.33, 0.50, 1024]
  m: [0.67, 0.75, 768]
  l: [1.00, 1.00, 512]
  x: [1.00, 1.25, 512]

# YOLOv8.0 backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]  # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]]  # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]]  # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]]  # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]]  # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]  # 9

# YOLOv8.0-p2 head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]]  # cat backbone P4
  - [-1, 3, C2f, [512]]  # 12
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]]  # cat backbone P3
  - [-1, 3, C2f, [256]]  # 15 (P3/8-small)
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 2], 1, Concat, [1]]  # cat backbone P2
  - [-1, 3, C2f, [128]]  # 18 (P2/4-xsmall)
  - [-1, 1, Conv, [128, 3, 2]]
  - [[-1, 15], 1, Concat, [1]]  # cat head P3
  - [-1, 3, C2f, [256]]  # 21 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 12], 1, Concat, [1]]  # cat head P4
  - [-1, 3, C2f, [512]]  # 24 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]]  # cat head P5
  - [-1, 3, C2f, [1024]]  # 27 (P5/32-large)
  - [[18, 21, 24, 27], 1, Detect, [nc]]  # Detect(P2, P3, P4, P5)
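Note the layer indexing implied by the comments: layers 0–9 are the backbone, layers 10–27 are the head, and Detect becomes layer 28 (versus 22 in the standard P3–P5 model). This is why all the weight names in the TensorRT code below use the "model.28.*" prefix.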

yolov8-p2 TensorRT implementation

Reference: https://github.com/wang-xinyu/tensorrtx/tree/master/yolov8

  1. Add a buildEngineYolov8x_p2 function in model.cpp.

    Backbone

    • The backbone is identical to standard YOLOv8, so it can be copied over unchanged (a sketch of the convBnSiLU helper it relies on follows the block below).

      /************************  YOLOV8 BACKBONE  ************************/
      nvinfer1::IElementWiseLayer *conv0 = convBnSiLU(network, weightMap, *data, 80, 3, 2, 1, "model.0");
      nvinfer1::IElementWiseLayer *conv1 = convBnSiLU(network, weightMap, *conv0->getOutput(0), 160, 3, 2, 1, "model.1");
      nvinfer1::IElementWiseLayer *conv2 = C2F(network, weightMap, *conv1->getOutput(0), 160, 160, 3, true, 0.5, "model.2");
      nvinfer1::IElementWiseLayer *conv3 = convBnSiLU(network, weightMap, *conv2->getOutput(0), 320, 3, 2, 1, "model.3");
      nvinfer1::IElementWiseLayer *conv4 = C2F(network, weightMap, *conv3->getOutput(0), 320, 320, 6, true, 0.5, "model.4");
      nvinfer1::IElementWiseLayer *conv5 = convBnSiLU(network, weightMap, *conv4->getOutput(0), 640, 3, 2, 1, "model.5");
      nvinfer1::IElementWiseLayer *conv6 = C2F(network, weightMap, *conv5->getOutput(0), 640, 640, 6, true, 0.5, "model.6");
      nvinfer1::IElementWiseLayer *conv7 = convBnSiLU(network, weightMap, *conv6->getOutput(0), 640, 3, 2, 1, "model.7");
      nvinfer1::IElementWiseLayer *conv8 = C2F(network, weightMap, *conv7->getOutput(0), 640, 640, 3, true, 0.5, "model.8");
      nvinfer1::IElementWiseLayer *conv9 = SPPF(network, weightMap, *conv8->getOutput(0), 640, 640, 5, "model.9");
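    For context, convBnSiLU, C2F and SPPF are helper functions from the referenced repo. Below is a minimal sketch of the Conv + BatchNorm + SiLU pattern behind convBnSiLU, assuming the repo's weight naming ("<lname>.conv.weight", "<lname>.bn.*") and its addBatchNorm2d helper — an illustration, not the authoritative implementation:

      // requires: #include <NvInfer.h>, <map>, <string>
      // Sketch only; addBatchNorm2d is assumed from the referenced repo.
      nvinfer1::IElementWiseLayer *convBnSiLU(nvinfer1::INetworkDefinition *network,
                                              std::map<std::string, nvinfer1::Weights> &weightMap,
                                              nvinfer1::ITensor &input, int ch, int k, int s, int p,
                                              const std::string &lname) {
          nvinfer1::Weights biasEmpty{nvinfer1::DataType::kFLOAT, nullptr, 0};
          nvinfer1::IConvolutionLayer *conv = network->addConvolutionNd(
              input, ch, nvinfer1::DimsHW{k, k}, weightMap[lname + ".conv.weight"], biasEmpty);
          conv->setStrideNd(nvinfer1::DimsHW{s, s});
          conv->setPaddingNd(nvinfer1::DimsHW{p, p});
          // BatchNorm folded into a per-channel scale layer (repo helper)
          nvinfer1::IScaleLayer *bn = addBatchNorm2d(network, weightMap, *conv->getOutput(0), lname + ".bn", 1e-3);
          // SiLU(x) = x * sigmoid(x)
          nvinfer1::IActivationLayer *sigmoid = network->addActivation(*bn->getOutput(0), nvinfer1::ActivationType::kSIGMOID);
          return network->addElementWise(*bn->getOutput(0), *sigmoid->getOutput(0), nvinfer1::ElementWiseOperation::kPROD);
      }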
      

    Head

    • The head goes from three output levels (P3, P4, P5) to four (P2, P3, P4, P5).

      HEAD
      /************************  YOLOV8 HEAD  ************************/
      float scale[] = {1.0, 2.0, 2.0};  // CHW: keep channels, double H and W
      nvinfer1::IResizeLayer *upsample10 = network->addResize(*conv9->getOutput(0));
      upsample10->setResizeMode(nvinfer1::ResizeMode::kNEAREST);
      upsample10->setScales(scale, 3);
      nvinfer1::ITensor *inputTensor11[] = {upsample10->getOutput(0), conv6->getOutput(0)};
      nvinfer1::IConcatenationLayer *cat11 = network->addConcatenation(inputTensor11, 2);
      nvinfer1::IElementWiseLayer *conv12 = C2F(network, weightMap, *cat11->getOutput(0), 640, 640, 3, false, 0.5, "model.12");
      nvinfer1::IResizeLayer *upsample13 = network->addResize(*conv12->getOutput(0));
      upsample13->setResizeMode(nvinfer1::ResizeMode::kNEAREST);
      upsample13->setScales(scale, 3);
      nvinfer1::ITensor *inputTensor14[] = {upsample13->getOutput(0), conv4->getOutput(0)};
      nvinfer1::IConcatenationLayer *cat14 = network->addConcatenation(inputTensor14, 2);
      nvinfer1::IElementWiseLayer *conv15 = C2F(network, weightMap, *cat14->getOutput(0), 320, 320, 3, false, 0.5, "model.15");
      nvinfer1::IResizeLayer *upsample16 = network->addResize(*conv15->getOutput(0));  // extra upsample stage for P2
      upsample16->setResizeMode(nvinfer1::ResizeMode::kNEAREST);
      upsample16->setScales(scale, 3);
      nvinfer1::ITensor *inputTensor17[] = {upsample16->getOutput(0), conv2->getOutput(0)};
      nvinfer1::IConcatenationLayer *cat17 = network->addConcatenation(inputTensor17, 2);
      nvinfer1::IElementWiseLayer *conv18 = C2F(network, weightMap, *cat17->getOutput(0), 160, 160, 3, false, 0.5, "model.18");
      nvinfer1::IElementWiseLayer *conv19 = convBnSiLU(network, weightMap, *conv18->getOutput(0), 160, 3, 2, 1, "model.19");
      nvinfer1::ITensor *inputTensor20[] = {conv19->getOutput(0), conv15->getOutput(0)};
      nvinfer1::IConcatenationLayer *cat20 = network->addConcatenation(inputTensor20, 2);
      nvinfer1::IElementWiseLayer *conv21 = C2F(network, weightMap, *cat20->getOutput(0), 320, 320, 3, false, 0.5, "model.21");
      nvinfer1::IElementWiseLayer *conv22 = convBnSiLU(network, weightMap, *conv21->getOutput(0), 320, 3, 2, 1, "model.22");
      nvinfer1::ITensor *inputTensor23[] = {conv22->getOutput(0), conv12->getOutput(0)};
      nvinfer1::IConcatenationLayer *cat23 = network->addConcatenation(inputTensor23, 2);
      nvinfer1::IElementWiseLayer *conv24 = C2F(network, weightMap, *cat23->getOutput(0), 640, 640, 3, false, 0.5, "model.24");
      nvinfer1::IElementWiseLayer *conv25 = convBnSiLU(network, weightMap, *conv24->getOutput(0), 640, 3, 2, 1, "model.25");
      nvinfer1::ITensor *inputTensor26[] = {conv25->getOutput(0), conv9->getOutput(0)};
      nvinfer1::IConcatenationLayer *cat26 = network->addConcatenation(inputTensor26, 2);
      nvinfer1::IElementWiseLayer *conv27 = C2F(network, weightMap, *cat26->getOutput(0), 640, 640, 3, false, 0.5, "model.27");
      
      OUTPUT
      /************************  YOLOV8 OUTPUT  ************************/
      // output0 (P2)
      nvinfer1::IElementWiseLayer *conv28_cv2_0_0 = convBnSiLU(network, weightMap, *conv18->getOutput(0), 64, 3, 1, 1, "model.28.cv2.0.0");
      nvinfer1::IElementWiseLayer *conv28_cv2_0_1 = convBnSiLU(network, weightMap, *conv28_cv2_0_0->getOutput(0), 64, 3, 1, 1, "model.28.cv2.0.1");
      nvinfer1::IConvolutionLayer *conv28_cv2_0_2 = network->addConvolutionNd(*conv28_cv2_0_1->getOutput(0), 64, nvinfer1::DimsHW{1, 1}, weightMap["model.28.cv2.0.2.weight"], weightMap["model.28.cv2.0.2.bias"]);
      conv28_cv2_0_2->setStrideNd(nvinfer1::DimsHW{1, 1});
      conv28_cv2_0_2->setPaddingNd(nvinfer1::DimsHW{0, 0});
      nvinfer1::IElementWiseLayer *conv28_cv3_0_0 = convBnSiLU(network, weightMap, *conv18->getOutput(0), 160, 3, 1, 1, "model.28.cv3.0.0");
      nvinfer1::IElementWiseLayer *conv28_cv3_0_1 = convBnSiLU(network, weightMap, *conv28_cv3_0_0->getOutput(0), 160, 3, 1, 1, "model.28.cv3.0.1");
      nvinfer1::IConvolutionLayer *conv28_cv3_0_2 = network->addConvolutionNd(*conv28_cv3_0_1->getOutput(0), kNumClass, nvinfer1::DimsHW{1, 1}, weightMap["model.28.cv3.0.2.weight"], weightMap["model.28.cv3.0.2.bias"]);
      conv28_cv3_0_2->setStrideNd(nvinfer1::DimsHW{1, 1});
      conv28_cv3_0_2->setPaddingNd(nvinfer1::DimsHW{0, 0});
      nvinfer1::ITensor *inputTensor28_0[] = {conv28_cv2_0_2->getOutput(0), conv28_cv3_0_2->getOutput(0)};
      nvinfer1::IConcatenationLayer *cat28_0 = network->addConcatenation(inputTensor28_0, 2);  // P2
      // output1 (P3)
      nvinfer1::IElementWiseLayer *conv28_cv2_1_0 = convBnSiLU(network, weightMap, *conv21->getOutput(0), 64, 3, 1, 1, "model.28.cv2.1.0");
      nvinfer1::IElementWiseLayer *conv28_cv2_1_1 = convBnSiLU(network, weightMap, *conv28_cv2_1_0->getOutput(0), 64, 3, 1, 1, "model.28.cv2.1.1");
      nvinfer1::IConvolutionLayer *conv28_cv2_1_2 = network->addConvolutionNd(*conv28_cv2_1_1->getOutput(0), 64, nvinfer1::DimsHW{1, 1}, weightMap["model.28.cv2.1.2.weight"], weightMap["model.28.cv2.1.2.bias"]);
      conv28_cv2_1_2->setStrideNd(nvinfer1::DimsHW{1, 1});
      conv28_cv2_1_2->setPaddingNd(nvinfer1::DimsHW{0, 0});
      nvinfer1::IElementWiseLayer *conv28_cv3_1_0 = convBnSiLU(network, weightMap, *conv21->getOutput(0), 160, 3, 1, 1, "model.28.cv3.1.0");
      nvinfer1::IElementWiseLayer *conv28_cv3_1_1 = convBnSiLU(network, weightMap, *conv28_cv3_1_0->getOutput(0), 160, 3, 1, 1, "model.28.cv3.1.1");
      nvinfer1::IConvolutionLayer *conv28_cv3_1_2 = network->addConvolutionNd(*conv28_cv3_1_1->getOutput(0), kNumClass, nvinfer1::DimsHW{1, 1}, weightMap["model.28.cv3.1.2.weight"], weightMap["model.28.cv3.1.2.bias"]);
      conv28_cv3_1_2->setStrideNd(nvinfer1::DimsHW{1, 1});
      conv28_cv3_1_2->setPaddingNd(nvinfer1::DimsHW{0, 0});
      nvinfer1::ITensor *inputTensor28_1[] = {conv28_cv2_1_2->getOutput(0), conv28_cv3_1_2->getOutput(0)};
      nvinfer1::IConcatenationLayer *cat28_1 = network->addConcatenation(inputTensor28_1, 2);  // P3
      // output2 (P4)
      nvinfer1::IElementWiseLayer *conv28_cv2_2_0 = convBnSiLU(network, weightMap, *conv24->getOutput(0), 64, 3, 1, 1, "model.28.cv2.2.0");
      nvinfer1::IElementWiseLayer *conv28_cv2_2_1 = convBnSiLU(network, weightMap, *conv28_cv2_2_0->getOutput(0), 64, 3, 1, 1, "model.28.cv2.2.1");
      nvinfer1::IConvolutionLayer *conv28_cv2_2_2 = network->addConvolutionNd(*conv28_cv2_2_1->getOutput(0), 64, nvinfer1::DimsHW{1, 1}, weightMap["model.28.cv2.2.2.weight"], weightMap["model.28.cv2.2.2.bias"]);
      conv28_cv2_2_2->setStrideNd(nvinfer1::DimsHW{1, 1});
      conv28_cv2_2_2->setPaddingNd(nvinfer1::DimsHW{0, 0});
      nvinfer1::IElementWiseLayer *conv28_cv3_2_0 = convBnSiLU(network, weightMap, *conv24->getOutput(0), 160, 3, 1, 1, "model.28.cv3.2.0");
      nvinfer1::IElementWiseLayer *conv28_cv3_2_1 = convBnSiLU(network, weightMap, *conv28_cv3_2_0->getOutput(0), 160, 3, 1, 1, "model.28.cv3.2.1");
      nvinfer1::IConvolutionLayer *conv28_cv3_2_2 = network->addConvolutionNd(*conv28_cv3_2_1->getOutput(0), kNumClass, nvinfer1::DimsHW{1, 1}, weightMap["model.28.cv3.2.2.weight"], weightMap["model.28.cv3.2.2.bias"]);
      conv28_cv3_2_2->setStrideNd(nvinfer1::DimsHW{1, 1});
      conv28_cv3_2_2->setPaddingNd(nvinfer1::DimsHW{0, 0});
      nvinfer1::ITensor *inputTensor28_2[] = {conv28_cv2_2_2->getOutput(0), conv28_cv3_2_2->getOutput(0)};
      nvinfer1::IConcatenationLayer *cat28_2 = network->addConcatenation(inputTensor28_2, 2);  // P4
      // output3 (P5)
      nvinfer1::IElementWiseLayer *conv28_cv2_3_0 = convBnSiLU(network, weightMap, *conv27->getOutput(0), 64, 3, 1, 1, "model.28.cv2.3.0");
      nvinfer1::IElementWiseLayer *conv28_cv2_3_1 = convBnSiLU(network, weightMap, *conv28_cv2_3_0->getOutput(0), 64, 3, 1, 1, "model.28.cv2.3.1");
      nvinfer1::IConvolutionLayer *conv28_cv2_3_2 = network->addConvolutionNd(*conv28_cv2_3_1->getOutput(0), 64, nvinfer1::DimsHW{1, 1}, weightMap["model.28.cv2.3.2.weight"], weightMap["model.28.cv2.3.2.bias"]);
      conv28_cv2_3_2->setStrideNd(nvinfer1::DimsHW{1, 1});
      conv28_cv2_3_2->setPaddingNd(nvinfer1::DimsHW{0, 0});
      nvinfer1::IElementWiseLayer *conv28_cv3_3_0 = convBnSiLU(network, weightMap, *conv27->getOutput(0), 160, 3, 1, 1, "model.28.cv3.3.0");
      nvinfer1::IElementWiseLayer *conv28_cv3_3_1 = convBnSiLU(network, weightMap, *conv28_cv3_3_0->getOutput(0), 160, 3, 1, 1, "model.28.cv3.3.1");
      nvinfer1::IConvolutionLayer *conv28_cv3_3_2 = network->addConvolutionNd(*conv28_cv3_3_1->getOutput(0), kNumClass, nvinfer1::DimsHW{1, 1}, weightMap["model.28.cv3.3.2.weight"], weightMap["model.28.cv3.3.2.bias"]);
      conv28_cv3_3_2->setStrideNd(nvinfer1::DimsHW{1, 1});
      conv28_cv3_3_2->setPaddingNd(nvinfer1::DimsHW{0, 0});
      nvinfer1::ITensor *inputTensor28_3[] = {conv28_cv2_3_2->getOutput(0), conv28_cv3_3_2->getOutput(0)};
      nvinfer1::IConcatenationLayer *cat28_3 = network->addConcatenation(inputTensor28_3, 2);  // P5
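    The 64 regression channels in each cv2 branch come from the DFL head: 4 box sides × 16 distribution bins. The cv3 branch outputs kNumClass score channels, so each concatenated level carries 64 + kNumClass channels, which is exactly what the DETECT stage below slices apart.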
      
      DETECT
      /************************  YOLOV8 DETECT  ************************/
      // P2 (stride 4)
      nvinfer1::IShuffleLayer *shuffle28_0 = network->addShuffle(*cat28_0->getOutput(0));
      shuffle28_0->setReshapeDimensions(nvinfer1::Dims2{64 + kNumClass, (kInputH / 4) * (kInputW / 4)});
      nvinfer1::ISliceLayer *split28_0_0 = network->addSlice(*shuffle28_0->getOutput(0), nvinfer1::Dims2{0, 0}, nvinfer1::Dims2{64, (kInputH / 4) * (kInputW / 4)}, nvinfer1::Dims2{1, 1});
      nvinfer1::ISliceLayer *split28_0_1 = network->addSlice(*shuffle28_0->getOutput(0), nvinfer1::Dims2{64, 0}, nvinfer1::Dims2{kNumClass, (kInputH / 4) * (kInputW / 4)}, nvinfer1::Dims2{1, 1});
      nvinfer1::IShuffleLayer *dfl28_0 = DFL(network, weightMap, *split28_0_0->getOutput(0), 4, (kInputH / 4) * (kInputW / 4), 1, 1, 0, "model.28.dfl.conv.weight");
      nvinfer1::ITensor *inputTensor28_dfl_0[] = {dfl28_0->getOutput(0), split28_0_1->getOutput(0)};
      nvinfer1::IConcatenationLayer *cat28_dfl_0 = network->addConcatenation(inputTensor28_dfl_0, 2);
      // P3 (stride 8)
      nvinfer1::IShuffleLayer *shuffle28_1 = network->addShuffle(*cat28_1->getOutput(0));
      shuffle28_1->setReshapeDimensions(nvinfer1::Dims2{64 + kNumClass, (kInputH / 8) * (kInputW / 8)});
      nvinfer1::ISliceLayer *split28_1_0 = network->addSlice(*shuffle28_1->getOutput(0), nvinfer1::Dims2{0, 0}, nvinfer1::Dims2{64, (kInputH / 8) * (kInputW / 8)}, nvinfer1::Dims2{1, 1});
      nvinfer1::ISliceLayer *split28_1_1 = network->addSlice(*shuffle28_1->getOutput(0), nvinfer1::Dims2{64, 0}, nvinfer1::Dims2{kNumClass, (kInputH / 8) * (kInputW / 8)}, nvinfer1::Dims2{1, 1});
      nvinfer1::IShuffleLayer *dfl28_1 = DFL(network, weightMap, *split28_1_0->getOutput(0), 4, (kInputH / 8) * (kInputW / 8), 1, 1, 0, "model.28.dfl.conv.weight");
      nvinfer1::ITensor *inputTensor28_dfl_1[] = {dfl28_1->getOutput(0), split28_1_1->getOutput(0)};
      nvinfer1::IConcatenationLayer *cat28_dfl_1 = network->addConcatenation(inputTensor28_dfl_1, 2);
      // P4 (stride 16)
      nvinfer1::IShuffleLayer *shuffle28_2 = network->addShuffle(*cat28_2->getOutput(0));
      shuffle28_2->setReshapeDimensions(nvinfer1::Dims2{64 + kNumClass, (kInputH / 16) * (kInputW / 16)});
      nvinfer1::ISliceLayer *split28_2_0 = network->addSlice(*shuffle28_2->getOutput(0), nvinfer1::Dims2{0, 0}, nvinfer1::Dims2{64, (kInputH / 16) * (kInputW / 16)}, nvinfer1::Dims2{1, 1});
      nvinfer1::ISliceLayer *split28_2_1 = network->addSlice(*shuffle28_2->getOutput(0), nvinfer1::Dims2{64, 0}, nvinfer1::Dims2{kNumClass, (kInputH / 16) * (kInputW / 16)}, nvinfer1::Dims2{1, 1});
      nvinfer1::IShuffleLayer *dfl28_2 = DFL(network, weightMap, *split28_2_0->getOutput(0), 4, (kInputH / 16) * (kInputW / 16), 1, 1, 0, "model.28.dfl.conv.weight");
      nvinfer1::ITensor *inputTensor28_dfl_2[] = {dfl28_2->getOutput(0), split28_2_1->getOutput(0)};
      nvinfer1::IConcatenationLayer *cat28_dfl_2 = network->addConcatenation(inputTensor28_dfl_2, 2);
      // P5 (stride 32)
      nvinfer1::IShuffleLayer *shuffle28_3 = network->addShuffle(*cat28_3->getOutput(0));
      shuffle28_3->setReshapeDimensions(nvinfer1::Dims2{64 + kNumClass, (kInputH / 32) * (kInputW / 32)});
      nvinfer1::ISliceLayer *split28_3_0 = network->addSlice(*shuffle28_3->getOutput(0), nvinfer1::Dims2{0, 0}, nvinfer1::Dims2{64, (kInputH / 32) * (kInputW / 32)}, nvinfer1::Dims2{1, 1});
      nvinfer1::ISliceLayer *split28_3_1 = network->addSlice(*shuffle28_3->getOutput(0), nvinfer1::Dims2{64, 0}, nvinfer1::Dims2{kNumClass, (kInputH / 32) * (kInputW / 32)}, nvinfer1::Dims2{1, 1});
      nvinfer1::IShuffleLayer *dfl28_3 = DFL(network, weightMap, *split28_3_0->getOutput(0), 4, (kInputH / 32) * (kInputW / 32), 1, 1, 0, "model.28.dfl.conv.weight");
      nvinfer1::ITensor *inputTensor28_dfl_3[] = {dfl28_3->getOutput(0), split28_3_1->getOutput(0)};
      nvinfer1::IConcatenationLayer *cat28_dfl_3 = network->addConcatenation(inputTensor28_dfl_3, 2);
      nvinfer1::IPluginV2Layer *yolo = addYoLoLayer(network, std::vector<nvinfer1::IConcatenationLayer *>{cat28_dfl_0, cat28_dfl_1, cat28_dfl_2, cat28_dfl_3});
      yolo->getOutput(0)->setName(kOutputTensorName);
      network->markOutput(*yolo->getOutput(0));
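    Conceptually, the DFL helper used above is Ultralytics' Distribution Focal Loss decode: a 16-way softmax per box side, reduced to an expected offset. A rough sketch of how that maps onto TensorRT layers, assuming 16 bins and the fixed "model.28.dfl.conv.weight" tensor holding [0, 1, ..., 15] (the authoritative implementation is in the referenced repo):

      // Hedged sketch: decode one scale's (64, grid) regression slice into
      // (4, grid) expected offsets. Signature simplified vs. the repo's DFL.
      nvinfer1::IShuffleLayer *dflSketch(nvinfer1::INetworkDefinition *network,
                                         std::map<std::string, nvinfer1::Weights> &weightMap,
                                         nvinfer1::ITensor &input, int grid) {
          nvinfer1::IShuffleLayer *reshape = network->addShuffle(input);
          reshape->setReshapeDimensions(nvinfer1::Dims3{4, 16, grid});  // 4 sides x 16 bins
          reshape->setSecondTranspose(nvinfer1::Permutation{1, 0, 2});  // -> (16, 4, grid)
          nvinfer1::ISoftMaxLayer *softmax = network->addSoftMax(*reshape->getOutput(0));
          softmax->setAxes(1 << 0);  // softmax over the 16 bins
          nvinfer1::Weights noBias{nvinfer1::DataType::kFLOAT, nullptr, 0};
          // 1x1 conv with fixed weights [0..15] computes the expectation over bins
          nvinfer1::IConvolutionLayer *expect = network->addConvolutionNd(
              *softmax->getOutput(0), 1, nvinfer1::DimsHW{1, 1},
              weightMap["model.28.dfl.conv.weight"], noBias);
          nvinfer1::IShuffleLayer *out = network->addShuffle(*expect->getOutput(0));
          out->setReshapeDimensions(nvinfer1::Dims2{4, grid});  // back to (4, grid)
          return out;
      }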
      
  2. Modify the forwardGpu method in yololayer.cu.

      void YoloLayerPlugin::forwardGpu(const float *const *inputs, float *output, cudaStream_t stream,
                                       int mYoloV8netHeight, int mYoloV8NetWidth, int batchSize)
      {
          int outputElem = 1 + mMaxOutObject * sizeof(Detection) / sizeof(float);
          cudaMemsetAsync(output, 0, sizeof(float), stream);
          for (int idx = 0; idx < batchSize; ++idx)
          {
              CUDA_CHECK(cudaMemsetAsync(output + idx * outputElem, 0, sizeof(float), stream));
          }
          int numElem = 0;
          // was: int grids[3][2] = {{mYoloV8netHeight / 8, mYoloV8NetWidth / 8}, {mYoloV8netHeight / 16, mYoloV8NetWidth / 16}, {mYoloV8netHeight / 32, mYoloV8NetWidth / 32}};
          int grids[4][2] = {{mYoloV8netHeight / 4, mYoloV8NetWidth / 4},
                             {mYoloV8netHeight / 8, mYoloV8NetWidth / 8},
                             {mYoloV8netHeight / 16, mYoloV8NetWidth / 16},
                             {mYoloV8netHeight / 32, mYoloV8NetWidth / 32}};
          // was: int strides[] = {8, 16, 32};
          int strides[] = {4, 8, 16, 32};
          // was: for (unsigned int i = 0; i < 3; i++)
          for (unsigned int i = 0; i < 4; i++)
          {
              int grid_h = grids[i][0];
              int grid_w = grids[i][1];
              int stride = strides[i];
              numElem = grid_h * grid_w * batchSize;
              if (numElem < mThreadCount)
                  mThreadCount = numElem;
              CalDetection<<<(numElem + mThreadCount - 1) / mThreadCount, mThreadCount, 0, stream>>>(
                  inputs[i], output, numElem, mMaxOutObject, grid_h, grid_w, stride, mClassCount, outputElem);
          }
      }
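    Adding the stride-4 level roughly quadruples the number of candidate cells the kernel scans: at 640×640, the per-image total goes from 80² + 40² + 20² = 8400 to 160² + 80² + 40² + 20² = 34000. Make sure mMaxOutObject (kMaxNumOutputBbox in config.h) leaves enough headroom for the extra detections.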
    
  3. Modify serialize_engine in main.cpp to handle a new sub_type:

      ...
      else if (sub_type == "x-p2")
      {
          serialized_engine = buildEngineYolov8x_p2(builder, config, DataType::kFLOAT, wts_name);
      }
      ...
    
  4. Following the author's repo (https://github.com/wang-xinyu/tensorrtx/tree/master/yolov8), export the .wts weights, then serialize the engine (a hedged export example follows the command):
    ./yolov8 -s ./weights/xxx.wts ./weights/xxx.engine x-p2
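    For reference, exporting the .wts from a trained checkpoint typically looks like the line below; gen_wts.py and its flags are assumed from the referenced repo, so double-check against its README:

      python gen_wts.py -w yolov8x-p2.pt -o yolov8x-p2.wts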

  5. Test inference with the engine (per the repo's usage, the trailing g selects GPU postprocessing; c would select CPU postprocessing):
    ./yolov8 -d xxx.engine ../images g


END

  • No P2 pretrained weights are published on the official site, so you need to train a model on your own dataset.
  • When using your own model, update the corresponding parameters in config.h (see the sketch after this list).
  • All of the above was written up by hand; corrections from more experienced readers are welcome.
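A hedged sketch of the config.h entries that usually need to change (names follow the tensorrtx repo; the values shown are illustrative defaults, so verify against your copy):

      // config.h (excerpt)
      constexpr static int kNumClass = 80;            // set to your dataset's class count
      constexpr static int kInputH = 640;             // network input height
      constexpr static int kInputW = 640;             // network input width
      constexpr static int kMaxNumOutputBbox = 1000;  // consider raising: P2 adds many candidates
      constexpr static float kNmsThresh = 0.45f;
      constexpr static float kConfThresh = 0.5f;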

References:

  • https://github.com/ultralytics/ultralytics/tree/main
  • https://github.com/wang-xinyu/tensorrtx/tree/master/yolov8
