Trying to download cfg and weights files for the YOLOv8 model


I am designing a vehicle tracking program with the DeepStream SDK 6.0.1. I want to use YOLOv8 as my inference model, but I need the cfg and weights files for that model.

When you configure your inference model, you specify which model to use by pointing to the relevant cfg and weights files. Pre-trained cfg and weights files for YOLOv3 are easy to find, but I have not been able to find them for the other YOLO models. I tried to follow the steps from this website: https://wiki.seeedstudio.com/YOLOv8-DeepStream-TRT-Jetson/ , but I receive the error message 'Illegal instruction (core dumped)' when I try to use the PyTorch and torchvision packages (I am using the correct versions and system architectures for these packages). I then ran through the website's steps on my personal computer (Windows 11) and got it working, but the Python script the website refers to (gen_wts_yoloV8.py) is not in the DeepStream-Yolo repository that has to be downloaded. The other Python script in that repository ("export_yoloV8.py") exports an ONNX file, which is also not exactly what I want (unless someone knows whether it can be converted to cfg and weights files).

Does anyone know where I could find cfg and weights files for YOLOv8, or whether it is possible to generate them? I am programming on a Jetson Nano with Ubuntu 18.04 and have installed JetPack 4.6.1 on the SD card.


1 Answer

Armaggheddon

You can use the ONNX file generated by the script directly as the model source for the nvinfer plugin (which I assume you are using for inference in DeepStream). In the config file you just need:

onnx-file=your/file/path/to/model.onnx
model-engine-file=/path/where/enginemodel/will/be/saved.engine

and remove the other model-source fields (such as the cfg and weights entries). You can find the full list of available fields in the NVIDIA DeepStream nvinfer plugin documentation.
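Putting this together, a minimal `[property]` section might look like the sketch below. The paths, class count, and parser library name are assumptions based on the DeepStream-Yolo repository mentioned below; adjust them to your setup:

```ini
[property]
gpu-id=0
# ONNX exported by export_yoloV8.py; nvinfer builds and caches
# the TensorRT engine at model-engine-file on first run
onnx-file=model.onnx
model-engine-file=model.onnx_b1_gpu0_fp16.engine
batch-size=1
network-mode=2
num-detected-classes=80
gie-unique-id=1
# Custom YOLO output parser built from the DeepStream-Yolo repository
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
```

On the first run nvinfer converts the ONNX file to a TensorRT engine (which can take several minutes on a Jetson Nano) and reuses the cached engine afterwards.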

Additionally, I would suggest taking a look at this repository (GitHub REPO), since using YOLOv8 also requires custom parsing of the inference output (the repository also provides the .py scripts to convert to .onnx and config files for each model), which is in a format not supported by the functions NVIDIA provides.

For the weights files of YOLOv8 or any other YOLO model, you can use the `yolo` command line from Ultralytics, which takes care of downloading them while also installing all the required dependencies. Ultralytics Yolo GitHub
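As a sketch, assuming the `ultralytics` package installs cleanly on your machine (the model name `yolov8n.pt` is just an example, any of the published YOLOv8 checkpoints works):

```shell
pip install ultralytics
# Downloads yolov8n.pt automatically if it is not present,
# then exports it to ONNX next to the .pt file
yolo export model=yolov8n.pt format=onnx
```

Note that this produces a .pt checkpoint and an ONNX export, not Darknet-style cfg/weights files; YOLOv8 is a PyTorch model, so cfg/weights files in the YOLOv3 sense do not exist for it, and the ONNX route above is the supported path into DeepStream.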