
MobileNetEdgeTPU-V2 final model to be used #810

Closed
mohitmundhragithub opened this issue Nov 3, 2023 · 11 comments

@mohitmundhragithub (Contributor) commented Nov 3, 2023

The links that were shared earlier have different files:

Link1: https://tfhub.dev/google/edgetpu/vision/mobilenet-edgetpu-v2/l/1
Link2: https://drive.google.com/file/d/1--A9EI8Ntr_Q5CH0WblFvUctVI_6VkcR/view?usp=sharing

The TF model shared in Link 2 has its input dimensions fixed to 1x224x224x3.
The TF model shared in Link 1 has a dynamic batch dimension, i.e. ?x224x224x3.

As per the description in the slide, Link 2 must have been extracted from Link 1, but we need confirmation on the final model. I think the one with a dynamic batch dimension would be better. A quick way to verify the input shapes of both models is sketched below.
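
For reference, here is a minimal sketch to compare the input signatures of the two downloaded SavedModels, assuming the usual serving_default signature (the local directory names are placeholders):

```python
import tensorflow as tf

# Placeholder paths for the models downloaded from Link 1 and Link 2.
for path in ["link1_saved_model", "link2_saved_model"]:
    fn = tf.saved_model.load(path).signatures["serving_default"]
    # Prints the input TensorSpec, e.g. (None, 224, 224, 3) vs. (1, 224, 224, 3).
    print(path, fn.structured_input_signature)
```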

@freedomtan (Contributor) commented Nov 7, 2023

Agree that we should support a dynamic batch size for the offline scenario.

Actually, I think the official one is from https://github.com/tensorflow/models/tree/master/official/projects/edgetpu/vision#edgetpu-optimized-vision-models

@mohitmundhragithub (Contributor, Author):

> Agree that we should support a dynamic batch size for the offline scenario.
>
> Actually, I think the official one is from https://github.com/tensorflow/models/tree/master/official/projects/edgetpu/vision#edgetpu-optimized-vision-models

Hi,

I checked this link. The TF version of MobileNetEdgeTPUv2-L shared there is a checkpoint, and the TFLite version shared there is a fixed batch-1 model only.

I think the TF Hub model shared in the slide (Link 1 in my first comment in this issue) should be good to use.

@freedomtan (Contributor):

>> Agree that we should support a dynamic batch size for the offline scenario.
>> Actually, I think the official one is from https://github.com/tensorflow/models/tree/master/official/projects/edgetpu/vision#edgetpu-optimized-vision-models
>
> Hi,
>
> I checked this link. The TF version of MobileNetEdgeTPUv2-L shared there is a checkpoint, and the TFLite version shared there is a fixed batch-1 model only.
>
> I think the TF Hub model shared in the slide (Link 1 in my first comment in this issue) should be good to use.

Yes, the TF version is the checkpoint, but I think it's the original source. To get a frozen pb or a saved_model, you have to clone the repo and export the model using the checkpoints / weights. Once a SavedModel is exported, converting it to TFLite is straightforward (see the sketch below).
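
For reference, a minimal conversion sketch, assuming the SavedModel has already been exported to a local directory (both paths are placeholders):

```python
import tensorflow as tf

# Placeholder path to the SavedModel exported from the repo's checkpoints.
converter = tf.lite.TFLiteConverter.from_saved_model("export/saved_model")
tflite_model = converter.convert()

with open("mobilenet_edgetpu_v2_l.tflite", "wb") as f:
    f.write(tflite_model)
```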

Nope, the TFLite model does support dynamic batch size. When I run it with `benchmark_model --graph=... --use_nnapi=1 --input_layer=input --input_layer_shape=n,224,224,3`, where n > 1, I get the expected results on Pixel and MTK devices.
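
The same dynamic-batch behavior can also be checked from Python by resizing the input tensor before allocation; a sketch, with the model filename and batch size as assumptions:

```python
import numpy as np
import tensorflow as tf

# Placeholder filename for the TFLite model under discussion.
interpreter = tf.lite.Interpreter(model_path="mobilenet_edgetpu_v2_l.tflite")

# Resize the input from the default batch of 1 to n = 4, then reallocate.
inp = interpreter.get_input_details()[0]
interpreter.resize_tensor_input(inp["index"], [4, 224, 224, 3])
interpreter.allocate_tensors()

# Feed a dummy batch; the dtype follows the model (float32, or uint8 if quantized).
interpreter.set_tensor(inp["index"], np.zeros([4, 224, 224, 3], dtype=inp["dtype"]))
interpreter.invoke()
out = interpreter.get_output_details()[0]
print(interpreter.get_tensor(out["index"]).shape)  # expected: (4, num_classes)
```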

@mohitmundhragithub (Contributor, Author) commented Nov 21, 2023

>>> Agree that we should support a dynamic batch size for the offline scenario.
>>> Actually, I think the official one is from https://github.com/tensorflow/models/tree/master/official/projects/edgetpu/vision#edgetpu-optimized-vision-models
>>
>> Hi,
>> I checked this link. The TF version of MobileNetEdgeTPUv2-L shared there is a checkpoint, and the TFLite version shared there is a fixed batch-1 model only.
>> I think the TF Hub model shared in the slide (Link 1 in my first comment in this issue) should be good to use.
>
> Yes, the TF version is the checkpoint, but I think it's the original source. To get a frozen pb or a saved_model, you have to clone the repo and export the model using the checkpoints / weights.
>
> Nope, the TFLite model does support dynamic batch size. When I run it with `benchmark_model --graph=... --use_nnapi=1 --input_layer=input --input_layer_shape=n,224,224,3`, where n > 1, I get the expected results on Pixel and MTK devices.

When I use this TFLite model, somehow it doesn't work for us.

Is it OK to use the saved model from Link 1? The accuracy reported by Parham is also based on this model.

@mohitmundhragithub (Contributor, Author):

It was discussed in the meeting today that it's OK to use Link 1 as well.
We may need to save a copy of this model in github.com/mlcommons/mobile_open too.

@freedomtan (Contributor):

> It was discussed in the meeting today that it's OK to use Link 1 as well. We may need to save a copy of this model in github.com/mlcommons/mobile_open too.

Agreed. Let's send a PR to mobile_open or mobile_models.

@mohitmundhragithub (Contributor, Author) commented Nov 21, 2023

The TF Hub link (Link 1) now redirects to Kaggle... but the downloadable model is the same.

@mohitmundhragithub (Contributor, Author):

I can't create a PR for the model.
@anhappdev, can you please help with it?

@anhappdev (Collaborator):

> I can't create a PR for the model. @anhappdev, can you please help with it?

Can you show a screenshot of the error?

@mohitmundhragithub (Contributor, Author):

No technical issues as such... it's more of a licensing issue.
For me to download the model and then upload it to the mlcommons repo would require more approvals.

anhappdev added this to the v4.0 milestone on Jan 16, 2024
@freedomtan (Contributor):

We chose another model. Closing this for now.
