Hi,
Thanks for the great work. I stumbled on a few issues when converting the model from the v5 release of YOLOv5, but nothing too serious.
The first one was that the 3x20x20x85 output from the YOLO model was returned as an output of the final pipeline. Adding:

if builder.spec.description.output[-1].name == "p20":
    del builder.spec.description.output[-1]

solved this issue.
Here I had previously renamed the outputs of the YOLO model:
- 3x80x80x85 output -> p80
- 3x40x40x85 output -> p40
- 3x20x20x85 output -> p20
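For reference, the rename-and-prune logic above can be sketched as follows. This is a minimal stand-in that mimics the `spec.description.output` list with plain Python objects rather than Core ML protobuf messages; the original output names (`output_0` etc.) are assumptions for illustration. Against a real spec, the renaming would typically go through `coremltools.utils.rename_feature` so that internal references stay consistent.

```python
from types import SimpleNamespace

# Stand-in for builder.spec.description.output: a list of named features.
# In the real spec these are Core ML FeatureDescription protobuf messages.
outputs = [SimpleNamespace(name=n) for n in ("output_0", "output_1", "output_2")]

# Rename the three YOLO head outputs by grid size (80x80, 40x40, 20x20).
# The original names here are placeholders, not the actual exported names.
renames = {"output_0": "p80", "output_1": "p40", "output_2": "p20"}
for feat in outputs:
    feat.name = renames.get(feat.name, feat.name)

# Drop the stray p20 feature so it is not exposed as a pipeline output.
if outputs[-1].name == "p20":
    del outputs[-1]

print([f.name for f in outputs])  # -> ['p80', 'p40']
```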
The second issue showed up when running the model in Apple's object detection demo app, Breakfast Finder: the pipeline model does not appear to run on the ANE. However, it does seem to run on the GPU, as CPU usage stays fairly low (around 30%) in this app with the yolov5s.pt checkpoint from the ultralytics repo.

The yolov5s.pt model was converted with the export.py script and the environment from their repo, mainly coremltools 4.1 and PyTorch 1.9.