
Implementing YOLOv8 with Segmentation on Samsung ONE #14547 (Discussion) #14549

Open
HyAgOsK opened this issue Jan 13, 2025 · 19 comments

@HyAgOsK commented Jan 13, 2025

Hello, Community,

I am exploring the use of YOLOv8 for object segmentation on Samsung ONE. I would like to know:

Segmentation Support: Is YOLOv8 suitable for segmentation tasks directly in PyTorch? Are there any specific adjustments or configurations required to optimize segmentation performance?

Python Support: Is it possible to run it using only Python? Is there a specific SDK for developing in Python?

Execution Requirements: What are the basic requirements to run the YOLOv8 segmentation model on Samsung ONE? Is any special configuration needed to ensure good performance?

Thank you for your help, and I look forward to your insights.

Thank you!

@seanshpark (Contributor)

@HyAgOsK , thanks for your interest!

Is YOLOv8 suitable for segmentation tasks directly in PyTorch?

I'd like to explain a bit about the ONE project.
ONE has a compiler that converts ONNX/TF/TFLite models into our native file format, circle, and optimizes the model graph for better performance in onert (our runtime) and our internal NPU backend.
That is, we don't read PyTorch directly; we read ONNX files exported from PyTorch.
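
For example, a minimal sketch of that export step using the Ultralytics Python API (this assumes the ultralytics package is installed; the segmentation variant of the model is yolov8n-seg.pt, and the opset value is just an example):

from ultralytics import YOLO

# Load the pretrained YOLOv8 nano segmentation weights (downloaded automatically by Ultralytics)
model = YOLO("yolov8n-seg.pt")

# Export to ONNX; the resulting yolov8n-seg.onnx is the file the ONE compiler would consume
model.export(format="onnx", opset=12)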

Are there any specific adjustments or configurations required to optimize segmentation performance?

Someone has to check.

Is it possible to run it using only Python? Is there a specific SDK for developing in Python?

I'm not sure about the "how", but if you export to ONNX, I assume that can work.

Execution Requirements: What are the basic requirements to run the YOLOv8 segmentation model on Samsung ONE?

If someone knows YOLOv8 or has time to dig into it, they might answer.
Can you add a link to download a YOLOv8 ONNX model?

@HyAgOsK (Author) commented Jan 13, 2025

Thanks for the reply, @seanshpark.

Sorry about the duplicated issue.

What is the main documentation I should follow regarding the Ubuntu/Linux, Windows, or Android environment to test the YOLO model directly in the following pipeline: YOLO -> .pt -> .ONNX / .TFLITE -> .CIRCLE?

I would like to test it without requiring the specific Samsung processor. Later, I am considering purchasing one, but I want to perform preliminary testing to gain confidence before making the investment.

I noticed that Ultralytics mentions support for the processor in their documentation, but I am unsure of the exact specifications. The requirement is for a tablet, and any information you can provide would be greatly appreciated.

Reference:
Ultralytics Android Documentation

@seanshpark (Contributor)

Please read the documents in https://github.com/Samsung/ONE/tree/master/docs/howto
I cannot say anything about the chips and we are not targeting Exynos.

YOLO -> .pt -> .ONNX / .TFLITE

Please search the internet about this.

.ONNX / .TFLITE -> .CIRCLE?

Do you want to build from source and run, or do you want to install the pre-built Debian package?
How to build from source is described in the howto link above.
Debian packages can be found in https://github.com/Samsung/ONE/releases.

The latest release can be found at https://github.com/Samsung/ONE/releases/tag/1.29.0

  1. Install the appropriate Debian package on Ubuntu (18.04/20.04/22.04).
  2. The command-line tool is onecc.
  3. Run onecc -h for help.

Apologies that the documentation may be poor and the commands are not easy; there are too many to describe here.
Please check the compiler/one-cmds folder for more information.

There may be someone who can provide more details, so please ask more and wait.
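
For reference, a rough install-and-check sketch (the .deb filename below is only illustrative; check the release assets for the exact name for your Ubuntu version):

sudo apt install ./one-compiler_1.29.0_amd64.deb   # filename is an assumption; see the release page
onecc -h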

@HyAgOsK (Author) commented Jan 13, 2025

Do you want to build from source and run, or do you want to install the pre-built Debian package?

I am just researching it for now. I need to understand and test how I can use Samsung ONE with YOLOv8, because I am a master's student and I think it would be very interesting to use in my application.

No problem at all, my friend; I am very interested in developing this as part of my academic work. I will try it tomorrow and report back in this issue with what I get.

@HyAgOsK (Author) commented Jan 13, 2025

@seanshpark, I would like to know if there are any applications or projects that specifically use YOLO with ONE. I found one related work, but it lacks the specific details I am looking for.
https://dl.acm.org/doi/abs/10.1145/3626793

@seanshpark (Contributor) commented Jan 13, 2025

because I am a master's student and I think it would be very interesting to use in my application.

👍🏻 :)

I would like to know if there are any applications or projects that specifically use YOLO with ONE.

I don't think ONE has any project related to specific models, but ONE generally works with any model as long as the operators in the model are supported.

specifically use YOLO with ONE

I think you want to use onert (the ONE runtime) on mobile devices or on ARM devices like the Raspberry Pi for your research.
Roughly, you need to convert and compile the model to circle and run an ARM build of onert on the device.

I'm not an expert on YOLOv8 or Python model-related things, so please provide a link to download an ONNX model from the internet so that I can help you get a compiled yolov8.circle model.
After that, someone with onert expertise can help you run the model on the ARM device, if they have some spare time for it.

@HyAgOsK (Author) commented Jan 13, 2025

I would like to run this model on an Exynos processor, specifically on a tablet. If you could kindly provide me with any assistance or information, I would be immensely grateful! Below is the link to the Yolov8n.onnx model:

https://huggingface.co/SpotLab/YOLOv8Detection/blob/3005c6751fb19cdeb6b10c066185908faf66a097/yolov8n.onnx

@seanshpark (Contributor)

I would like to run this model on an Exynos processor, specifically on a tablet.

As I wrote, our target is not Exynos; that is, our onert does not have any specific features to run better on Exynos. The compiler itself is a general AI model compiler, so you can use it for model optimization. I'll check the model.

I assume you want to run on the Android platform.
For running onert on Android, I have no idea; someone else may have to help.
CC @chunseoklee , @hseok-oh

@HyAgOsK (Author) commented Jan 13, 2025

Yes, exactly: I need to run it in an Android app. I look forward to hearing from you, and thank you very much for all your help.

@seanshpark (Contributor)

Assuming you have installed the compiler and runtime on x86-64 Ubuntu (20.04 or 22.04),
save this as a yolov8n_nnpkg.cfg file:

[onecc]
one-import-onnx=True
one-optimize=True
one-pack=True
include=O1

[one-import-onnx]
input_path=yolov8n.onnx
output_path=yolov8n.circle

[one-optimize]
input_path=yolov8n.circle
output_path=yolov8n.opt.circle
convert_nchw_to_nhwc=True

[one-pack]
input_path=yolov8n.opt.circle
output_path=./nnpackages

Then in the command line, run onecc:

onecc -C yolov8n_nnpkg.cfg

You'll get yolov8n.opt.circle and an nnpackages folder with the tree below.

nnpackages/
└── yolov8n.opt
    ├── metadata
    │   └── MANIFEST
    └── yolov8n.opt.circle

This is as far as I can go.

For Android support, please wait for someone who can help.

@HyAgOsK (Author) commented Jan 14, 2025

Thank you so much, my friend. Can you confirm for me where I can install the compiler and runtime from?
In the howto folder I found some documents, but I am not sure whether this is the right one. Maybe I got it wrong; see below.

https://github.com/Samsung/ONE/blob/master/docs/howto/how-to-build-compiler.md

@seanshpark (Contributor)

FYI, you can view circle model with Netron (https://github.com/lutzroeder/netron)
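
If you prefer to open the model from Python, a minimal sketch (assuming the netron package is installed via pip, and using the file name from the onecc output above):

import netron

# Serve the compiled circle model locally and open it in the default browser
netron.start("yolov8n.opt.circle")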

@HyAgOsK (Author) commented Jan 14, 2025

Yes, I know Netron, thank you very much for your attention. I'm going to evaluate it tomorrow, my friend; I'll get back to you with more details, and if you find out whether it's possible to run on Android, I'd be very grateful.

@hseok-oh (Contributor) commented Jan 14, 2025

I assume you want to run on the Android platform.
For running onert on Android, I have no idea; someone else may have to help.

We provide a runtime Android API and package, but I cannot be sure that it is working, because Android is not our main target and we are not testing it at the moment.

@HyAgOsK (Author) commented Jan 14, 2025

I greatly appreciate your response, my friend. It's a pleasure to connect with you. I would like to ask about the build process: on Ubuntu, I am encountering issues during the build. I followed all the required steps exactly as outlined, but I have not been able to resolve the problem yet.

I referred to this documentation for Ubuntu:

By executing all the steps from the beginning, including the Ubuntu section, should I be able to run what is described above?

If you have any information or updates regarding using Android with YOLO, specifically about the SDK running normally, I would be grateful. I am currently unsure whether to purchase an Android device, as I am uncertain if it will work as expected. I am still undecided and look forward to your response. Thank you in advance for your assistance.

Would the runtime package also be related to the documentation mentioned? Could you please confirm this for me, as referenced above by @seanshpark ?

@seanshpark (Contributor)

@HyAgOsK, as @hseok-oh wrote, we are not sure that running onert on Android is stable, and we may not be able to provide support if there are problems.
We recommend using other runtimes, like ONNX Runtime or TensorFlow Lite.
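
If you go that route, a minimal sanity check of the exported model with ONNX Runtime could look like this (the file name is assumed from the link shared above, and the onnxruntime and numpy packages are assumed to be installed):

import numpy as np
import onnxruntime as ort

# Load the exported model (file name assumed from the Hugging Face link above)
sess = ort.InferenceSession("yolov8n.onnx")
inp = sess.get_inputs()[0]

# Build a dummy input; YOLOv8 ONNX exports typically take a 1x3x640x640 float32 tensor.
# We read the shape from the model and substitute 1 for any dynamic dimension.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.zeros(shape, dtype=np.float32)

outputs = sess.run(None, {inp.name: dummy})
print([o.shape for o in outputs])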

@HyAgOsK (Author) commented Jan 14, 2025

Dear @seanshpark and @hseok-oh,

Thank you very much for your support and the responses provided. I kindly request your assistance in clarifying some doubts regarding development on Android devices using the ONE platform. I am considering the possibility of utilizing an Android device for my projects and would like to ensure the feasibility of this model. Specifically, I would like to confirm whether it can successfully execute projects on Android. Is it possible to involve someone from the team to assist me with this matter?

Additionally, I have a question regarding the use of runtime execution extensions, such as ONNX Runtime and TensorFlow Lite, on mobile devices. Can these tools leverage the NPU of devices such as smartphones and tablets? I understand that specific SDKs are required for these to operate on such devices. I am particularly interested in devices with Exynos processors that integrate NPU and GPU, and I would like to better understand their capabilities for this type of application.

Thank you in advance for your attention and support!

Best regards,

@seanshpark (Contributor)

Is it possible to involve someone from the team to assist me with this matter?

Can you be more specific about the team?

Can these tools leverage the NPU of devices such as smartphones and tablets?

IMHO, you should ask these questions on the sites of the related projects.

devices with Exynos processors that integrate NPU and GPU, and I would like to better understand their capabilities for this type of application.

I don't think anyone working on ONE has any information about Exynos acceleration.

@HyAgOsK (Author) commented Jan 15, 2025

Thank you for your support, my friend @seanshpark. I am currently testing on Ubuntu 20.04 to convert the YOLOv8n model to ONNX and finally to circle, as per your guidance above. Tomorrow I will conduct further tests and share my findings if I encounter errors. Once again, thank you very much.
