
how to use this in python? #17

Open
debasishaimonk opened this issue Oct 20, 2023 · 7 comments

Comments

@debasishaimonk

How do I integrate the engine file in Python in order to run inference?

@bychen7
Owner

bychen7 commented Oct 26, 2023

@debasishaimonk Sorry for the delayed response. I don't think the engine file needs to be changed; you need to write the Python inference code.

@Fibonacci134

@debasishaimonk Sorry for the delayed response. I don't think the engine file needs to be changed; you need to write the Python inference code.

Correct. I've used the engine file with the Python TensorRT API; you just have to set up the memory and run inference. I can share the code with you if you wish.
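
For reference, the setup looks roughly like this. This is a minimal sketch rather than the exact code I ran: it assumes TensorRT 8.x with pycuda installed, a single input binding followed by a single output binding, and a placeholder engine path `model.engine`.

```python
# Minimal sketch: run a serialized TensorRT engine from Python.
# Assumes TensorRT 8.x + pycuda; binding 0 = input, binding 1 = output.
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine and create an execution context.
with open("model.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate page-locked host buffers and device buffers for every binding.
stream = cuda.Stream()
bindings, host_bufs, dev_bufs = [], [], []
for i in range(engine.num_bindings):
    shape = engine.get_binding_shape(i)
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = cuda.pagelocked_empty(trt.volume(shape), dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

def infer(x: np.ndarray) -> np.ndarray:
    """Copy the input to the GPU, execute the engine, copy the output back."""
    np.copyto(host_bufs[0], x.ravel())
    cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
    context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
    cuda.memcpy_dtoh_async(host_bufs[1], dev_bufs[1], stream)
    stream.synchronize()
    return host_bufs[1].reshape(tuple(engine.get_binding_shape(1)))
```

If the engine was built with dynamic shapes, you would also call context.set_binding_shape(0, input_shape) before allocating the buffers.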

@luhairong11

@debasishaimonk Sorry for the delayed response. I don't think the engine file needs to be changed; you need to write the Python inference code.

Correct. I've used the engine file with the Python TensorRT API; you just have to set up the memory and run inference. I can share the code with you if you wish.

I have written Python inference code, but FP32 model inference takes about the same time as the original PyTorch inference, about 27 ms on a 3090 graphics card. FP16 inference is faster, taking less than 20 ms, but the generated images are distorted.
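
One workaround that often helps with FP16 distortion like this is to keep the numerically sensitive layers in FP32 when building the engine. Below is a rough sketch with the TensorRT Python builder API; the `model.onnx` path and the "norm" layer-name filter are hypothetical placeholders you would adapt to the actual network.

```python
# Sketch: build an FP16 engine but pin selected layers to FP32.
# The ONNX path and the "norm" name filter are illustrative assumptions.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(str(parser.get_error(0)))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)
# Make TensorRT honor the per-layer precision requests below.
config.set_flag(trt.BuilderFlag.OBEY_PRECISION_CONSTRAINTS)
for i in range(network.num_layers):
    layer = network.get_layer(i)
    if "norm" in layer.name:  # hypothetical filter for sensitive layers
        layer.precision = trt.float32
        layer.set_output_type(0, trt.float32)

with open("model_fp16.engine", "wb") as f:
    f.write(builder.build_serialized_network(network, config))
```

Starting with most layers pinned to FP32 and gradually relaxing them back to FP16 is a common way to find the ones causing the artifacts.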

@fanghaiquan1

@Fibonacci134 Please share the Python program that runs model.engine, thanks! Please send it to my email: [email protected]

@ggzzzzz628

I need the Python inference code too, thanks: [email protected] @Fibonacci134

@lu-xinyuan

I have written Python inference code, but FP32 model inference takes about the same time as the original PyTorch inference, about 27 ms on a 3090 graphics card. FP16 inference is faster, taking less than 20 ms, but the generated images are distorted.

Can you share the Python inference code? Thanks! [email protected]

@luhairong11

Can you share the Python inference code? Thanks! [email protected]

It's been too long, and I'm no longer working on that area. The code is hard to find now.
