How does mmdeploy convert FCOS3D's postprocessing pipeline to ONNX? #1827
Unanswered · asked by tensorturtle in Q&A
Replies: 0 comments
I built a standalone PyTorch model from the components of FCOS3D. I found the top-level forward pass at: https://github.com/open-mmlab/mmdetection3d/blob/47285b3f1e9dba358e98fcd12e523cfd0769c876/mmdet3d/models/detectors/single_stage_mono3d.py#L110-L113
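Roughly, the wrapper looks like this. `ToyDetector` is a hypothetical stand-in so the snippet is self-contained; in the real case it would be the mmdet3d FCOS3D detector, and the attribute names (`extract_feat`, `bbox_head`) follow the linked file:

```python
import torch
import torch.nn as nn

class PureModel(nn.Module):
    """Wraps a detector's feature extraction + head forward pass,
    stopping BEFORE the get_bboxes postprocessing step."""
    def __init__(self, detector):
        super().__init__()
        self.detector = detector

    def forward(self, img):
        x = self.detector.extract_feat(img)
        return self.detector.bbox_head(x)

# Hypothetical stand-in for the real FCOS3D detector, just for illustration
class ToyDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(3, 8, 3, padding=1)
        self.bbox_head = nn.Conv2d(8, 4, 1)

    def extract_feat(self, img):
        return self.backbone(img)

pure_model = PureModel(ToyDetector())
out = pure_model(torch.randn(1, 3, 32, 32))
```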
When I run `torch.onnx.export` on the above `pure_model` WITHOUT the last `self.bboxes_head.get_bboxes` call (the postprocessing step), it exports successfully and can further be converted to TensorRT with seemingly correct results.

However, when I include `self.bboxes_head.get_bboxes`, the ONNX export fails. There are many `TracerWarning`s from attempting to convert a tensor to a Python boolean, and also:

So currently, I have to split inference into two steps (which also means transferring data to and from the GPU twice).
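The `TracerWarning` behavior is reproducible in isolation: `torch.onnx.export` works by tracing, so any Python `if` on a tensor value is evaluated once and only the taken branch is recorded into the graph. A minimal self-contained demonstration, using `torch.jit.trace` (which traces the same way):

```python
import torch
import torch.nn as nn

class Postprocess(nn.Module):
    def forward(self, x):
        # tensor -> Python boolean: tracing evaluates this condition
        # ONCE and bakes the taken branch into the graph, hence the
        # "Converting a tensor to a Python boolean" TracerWarning
        if x.sum() > 0:
            return x * 2
        return x * -1

m = Postprocess()
traced = torch.jit.trace(m, torch.ones(3))  # traced on the positive branch

neg = -torch.ones(3)
eager_out = m(neg)        # eager mode takes the negative branch
traced_out = traced(neg)  # traced graph replays the positive branch
```

The traced module silently produces a different result than the eager module on inputs that would take the other branch, which is why postprocessing full of such branches cannot be exported faithfully by tracing alone.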
My question is: how is `mmdeploy` able to convert FCOS3D into a single-step ONNX model, when the final step seems to contain several ONNX-incompatible tensor operations?

Thank you.
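P.S. My rough guess is that mmdeploy handles this by function rewriting: at export time it swaps trace-unfriendly functions for ONNX-friendly replacements. A toy, self-contained imitation of that idea (the registry and names here are hypothetical, not mmdeploy's actual API):

```python
import contextlib

# Toy registry mapping (class, method name) -> export-friendly replacement
REWRITERS = {}

def register_rewriter(cls, name):
    def deco(fn):
        REWRITERS[(cls, name)] = fn
        return fn
    return deco

@contextlib.contextmanager
def rewrite_for_export():
    """Temporarily patch registered methods, then restore them."""
    originals = {}
    for (cls, name), fn in REWRITERS.items():
        originals[(cls, name)] = getattr(cls, name)
        setattr(cls, name, fn)
    try:
        yield
    finally:
        for (cls, name), fn in originals.items():
            setattr(cls, name, fn)

class Head:
    def get_bboxes(self, scores):
        # data-dependent Python filtering: variable-length output,
        # the kind of thing that breaks ONNX tracing
        return [s for s in scores if s > 0.5]

@register_rewriter(Head, "get_bboxes")
def get_bboxes_static(self, scores):
    # export-friendly version: fixed-size top-k, no data-dependent branch
    return sorted(scores, reverse=True)[:2]

head = Head()
with rewrite_for_export():
    exported = head.get_bboxes([0.9, 0.1, 0.4])  # rewritten behavior
normal = head.get_bboxes([0.9, 0.1, 0.4])        # original restored
```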