
Download helm and kustomize in the target docker image #1501

Conversation

@nresare commented Nov 22, 2024

Previously, helm and kustomize were downloaded for a single architecture as part of the top-level make invocation and copied into the target docker environments.

To facilitate multiple architectures for the resulting docker images, I have moved the artifact downloads into the docker build, extracting the target architecture from the environment.

To complicate matters a bit, the Google location for those binaries doesn't hold versions for arm64, so the scripts download from the official release repositories of the respective upstream projects instead.

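For concreteness, here is a minimal sketch of the technique, not the PR's actual Dockerfile: with BuildKit, each stage of a multi-platform build sees a predefined TARGETARCH build argument (amd64, arm64, ...), so the binaries can be fetched per-architecture from the upstream release pages. The base image and version pins below are illustrative assumptions.

```dockerfile
# Minimal sketch, not this PR's actual Dockerfile. BuildKit sets
# TARGETARCH (amd64, arm64, ...) automatically for each platform
# of a multi-platform build. Version pins are illustrative.
FROM debian:bookworm-slim AS tools
ARG TARGETARCH
ARG HELM_VERSION=v3.16.3
ARG KUSTOMIZE_VERSION=v5.5.0
RUN apt-get update && apt-get install -y --no-install-recommends \
    curl ca-certificates

# helm upstream release: the tarball nests the binary as linux-<arch>/helm
RUN curl -fsSL "https://get.helm.sh/helm-${HELM_VERSION}-linux-${TARGETARCH}.tar.gz" \
    | tar -xzf - --strip-components=1 -C /usr/local/bin "linux-${TARGETARCH}/helm"

# kustomize upstream release: the release tag contains a slash,
# which appears URL-encoded as %2F in the download path
RUN curl -fsSL "https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2F${KUSTOMIZE_VERSION}/kustomize_${KUSTOMIZE_VERSION}_linux_${TARGETARCH}.tar.gz" \
    | tar -xzf - -C /usr/local/bin kustomize
```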

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign karlkfi for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


Hi @nresare. Thanks for your PR.

I'm waiting for a GoogleContainerTools member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@nresare (Author) commented Nov 22, 2024

Please note that this is mainly for inspiration. You have my explicit permission to borrow whatever seems useful from this without attribution.

It is unclear to me whether the current kustomize and helm install scripts are still needed once these are in place.

Obviously it would make the change smaller, and less dependent on upstream not pulling releases or replacing binaries, if the missing binaries could be uploaded to the GCP object store.

@karlkfi (Contributor) commented Nov 22, 2024

Config Sync itself is not being built for arm64, so there's not much point in having helm and kustomize binaries for arm64 yet.

Plus, for compliance and CVE-patching reasons, we're required to ship binaries we built from source. This is why it installs from our GCS buckets, which are released from an internal mirror, giving us the ability to patch and ship CVE fixes before they get approved and shipped upstream.

We're also required to include the notices and licenses of dependencies and transitive dependencies. So we can't just remove those.

In order to do this right, we will need to update our internal mirrors and this project to produce arm64 binaries. So we can't really use this PR as-is.

If you want, you can file an issue with the feature request. But realistically, it's more likely to be prioritized if you go through GCP account management or support with a feature request.

@karlkfi closed this Nov 22, 2024
@nresare (Author) commented Nov 22, 2024

I think I have a pretty good understanding of how these things work, and I am aware that you currently don't build the Config Sync container images for arm64. I don't have a lot of pull with the GCP account managers; the reason I'm looking into this is that I had a good experience with Config Sync at a previous employer, and now I'm setting up a little Raspberry Pi cluster at home where it would be nice to be able to use Config Sync.

Please see my comment at #893 for my breakdown of what I needed to do to build the images.
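(For anyone reproducing this: with a TARGETARCH-aware Dockerfile like the sketch above, the multi-architecture build itself is typically driven by docker buildx. The invocation below is illustrative, not from the PR; the image tag is a placeholder.)

```sh
# Illustrative only: build both platforms in one invocation and push
# the resulting manifest list to a registry. The tag is a placeholder.
docker buildx build --platform linux/amd64,linux/arm64 \
  -t registry.example.com/config-sync:dev --push .
```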

@mikebz (Contributor) commented Nov 22, 2024

@nresare thank you so much for your contribution. We will look at how to incorporate this into a holistic solution, with an eye toward providing ARM support in general. Are you currently unblocked in how you are building and deploying your fork?

@karlkfi (Contributor) commented Nov 23, 2024

Sorry, I didn't know there was a previous feature request filed for this.

Didn't mean to shut you down by closing it quickly after you invested time in the PR. I just didn't see any way this particular PR could be modified to be acceptable given our security requirements. And since our forks for patching are internal, I can't just ask you to make the required changes there.

This PR seems fine on a fork that you build yourself, using public pre-built helm & kustomize binaries; it just won't satisfy FedRAMP requirements, which we have contracts to uphold.

I'll let @mikebz handle the feature request. He has more influence than I do over what features get prioritized.

@nresare (Author) commented Nov 23, 2024

No hard feelings at all about closing this PR. Obviously the right way to go about this would be to fix the forks over on your side. @mikebz I just got the reconciler-manager Pod running, so I think I can consider myself unblocked :)
