how to use this in python? #17
Comments
@debasishaimonk Sorry for the delayed response. I don't think the engine file needs to be changed; you need to write the Python inference code.
Correct, I've used the engine file with the Python API for TensorRT. You just have to set up the memory and run inference. I can share the code with you if you wish.
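For reference, a minimal sketch of what such inference code might look like with the TensorRT 8.x Python API and PyCUDA, assuming a single-input, single-output engine (the `model.engine` path and the binding order are assumptions, not something confirmed in this thread):

```python
import numpy as np
import pycuda.autoinit  # noqa: F401 - creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)

# Deserialize the prebuilt engine file (path is a placeholder).
with open("model.engine", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()

# Allocate page-locked host buffers and device buffers for every binding.
bindings, host_bufs, dev_bufs = [], [], []
for i in range(engine.num_bindings):
    shape = engine.get_binding_shape(i)
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = cuda.pagelocked_empty(trt.volume(shape), dtype)
    dev = cuda.mem_alloc(host.nbytes)
    bindings.append(int(dev))
    host_bufs.append(host)
    dev_bufs.append(dev)

stream = cuda.Stream()

def infer(input_array: np.ndarray) -> np.ndarray:
    """Copy input to the GPU, execute the engine, copy the output back."""
    np.copyto(host_bufs[0], input_array.ravel())
    cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
    context.execute_async_v2(bindings, stream.handle)
    cuda.memcpy_dtoh_async(host_bufs[1], dev_bufs[1], stream)
    stream.synchronize()
    return host_bufs[1].copy()
```

This assumes binding 0 is the input and binding 1 is the output; for engines with more bindings, inspect `engine.get_binding_name(i)` to map them correctly.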
I have written Python inference code, but FP32 model inference takes about the same time as the original PyTorch inference, about 27 ms on a 3090 graphics card. FP16 inference is faster, taking less than 20 ms, but the generated images are distorted.
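A quick way to see why FP16 can degrade outputs is to compare a float32 matrix product against the same product carried out in float16: the half-precision mantissa (10 bits) introduces rounding error that accumulates over the reduction. A minimal NumPy demonstration (the sizes and seed here are arbitrary, not from the model in question):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((256, 256)).astype(np.float32)
b = rng.standard_normal((256, 256)).astype(np.float32)

# Reference result in float32.
ref = a @ b

# Same product with inputs and arithmetic in float16.
half = (a.astype(np.float16) @ b.astype(np.float16)).astype(np.float32)

# Worst-case error relative to the largest output magnitude:
# small, but nonzero - enough to distort a long chain of layers.
rel = np.abs(ref - half).max() / np.abs(ref).max()
```

In a deep network these per-layer errors compound, which is one plausible source of the distorted FP16 images; common mitigations are mixed precision (keeping sensitive layers in FP32) or calibrating/clamping layer ranges when building the engine.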
@Fibonacci134 Please share the Python program for running model.engine, thanks! Please send it to [email protected]
I need the Python inference code, thanks. [email protected] @Fibonacci134
Can you share the Python inference code? Thanks! [email protected]
It's been too long, and I'm no longer working in that area. The code is hard to find now.
How do I integrate the engine file in Python in order to run inference?