Add zoo_helper CI/CD workflows for linux-x86_64 and linux-arm64 #1216
base: v3_develop
Conversation
Thanks!
Left a few questions/suggestions, but if it's too much work we can merge mostly as is.
No newline at the end of the file.
Maybe better to separate this out into its own folder & CMakeLists.txt and depend directly on DAI to avoid duplication. I mean doing `target_link_libraries(zoo_helper depthai_core)`. We should be able to get core pretty lean on size if everything is disabled.
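A minimal sketch of what a standalone `CMakeLists.txt` for `zoo_helper` could look like under this suggestion; the `depthai` package/target names and the source file name are assumptions, not taken from this PR:

```cmake
# Hypothetical standalone CMakeLists.txt for zoo_helper (names are illustrative).
cmake_minimum_required(VERSION 3.14)
project(zoo_helper)

# Depend directly on depthai-core instead of duplicating its sources;
# the exported package/target names may differ in the actual project.
find_package(depthai CONFIG REQUIRED)

add_executable(zoo_helper zoo_helper.cpp)
target_link_libraries(zoo_helper PRIVATE depthai::core)
```

Linking against an already-built core this way avoids duplicating sources, and the binary stays small as long as the optional core features are disabled at configure time.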
program.add_argument("--yaml_folder").help("Folder with YAML files describing models to download"); | ||
program.add_argument("--cache_folder").help("Cache folder to download models into"); |
Can we add defaults here, the same ones that we set in the examples?
The idea is that if you run the model downloader first, you don't need internet access when running the example.
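A minimal sketch of how defaults could be added to the `argparse` calls shown above; the default paths below are placeholders, not the values actually used in the examples:

```cpp
#include <argparse/argparse.hpp>
#include <string>

int main(int argc, char* argv[]) {
    argparse::ArgumentParser program("zoo_helper");

    // Placeholder defaults; they should match the folders used in the examples
    // so that a prior download lets the examples run without internet access.
    program.add_argument("--yaml_folder")
        .help("Folder with YAML files describing models to download")
        .default_value(std::string("./models"));
    program.add_argument("--cache_folder")
        .help("Cache folder to download models into")
        .default_value(std::string("./model_cache"));

    program.parse_args(argc, argv);

    const auto yamlFolder = program.get<std::string>("--yaml_folder");
    const auto cacheFolder = program.get<std::string>("--cache_folder");
    // ... download the models described in yamlFolder into cacheFolder ...
    return 0;
}
```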
This PR adds two workflows for compiling `zoo_helper` binaries for `linux-x86_64` and `linux-arm64` systems. The resulting binary is stored in its own folder on Artifactory, with the current commit's hash as its identifier. It has minimal dependencies, so it should be possible to `scp` the appropriate binary (based on the target architecture) to the target machine and start using it.

The workflow uses `almalinux8` containers because of their relatively old `glibc` library. As `glibc` is backward compatible, this ensures that newer systems can use the produced binary as well. In our case, we should be able to support, among others, any Ubuntu version >= 20.04. `almalinux8` was chosen because its EOL is relatively far in the future: https://wiki.almalinux.org/release-notes/#almalinux-os-8

Action example: https://github.com/luxonis/depthai-core/actions/runs/12896599874
glibc versions on different Linux distributions: https://repology.org/project/glibc/versions
Clickup task: https://app.clickup.com/t/86c0kymut
Let me squash the commits later, this is a mess 😄