# [Question] PV volume path canonicalization #25
I'm wondering how we would handle the case of wanting to include more than one reference, or more than one digest. I think this might be the "include the actions summary manifest" idea that we haven't developed yet. The URI is currently based on one reference, so more than one reference would require more than one bind, which is not ideal - ideally the different references should live under the same directory. Maybe if someone provides a set of actions, they would also provide a custom name for it? And maybe this could allow loading a pre-determined set of identifiers for the cluster to use (for further control)?

### Control Bind Identifier

For this first example, we need better control of how we namespace artifact groups. Assuming that I can also define some group of assets to add, here is how I would say "name the oras-csi bind":

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app-inline
spec:
  containers:
    - name: my-container
      image: ubuntu
      volumeMounts:
        - name: oras-inline
          mountPath: "/mnt/oras"
          readOnly: true
      command: [ "sleep", "1000000" ]
  volumes:
    - name: oras-inline
      csi:
        driver: csi.oras.land
        readOnly: true
        volumeAttributes:
          oras-csi.name: "optional-mount-identifier"
          oras-csi.actions: |
            <not developed yet>
```

This would allow someone to deploy the driver, deploy the pod with the actions defined under a certain namespace, and then (given that the artifact persists) just ask to use it - no need to think about ORAS/OCI references!

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app-inline
spec:
  containers:
    - name: my-container
      image: ubuntu
      volumeMounts:
        - name: oras-inline
          mountPath: "/mnt/oras"
          readOnly: true
      command: [ "sleep", "1000000" ]
  volumes:
    - name: oras-inline
      csi:
        driver: csi.oras.land
        readOnly: true
        volumeAttributes:
          oras-csi.name: "optional-mount-identifier"
```

This might also be provided to the original driver deployment - a directive to say "pre-load these paths and don't allow extension beyond that." It's a slightly different use case than we've talked about before: if you provide a cluster to your users, you can pre-load sets of named artifacts for them, and (although they could still deploy a pod to inspect that space) they wouldn't need to. This would be a lazy way to package a random set, instead of having to build some set of artifacts into a single reference beforehand.

### ORAS OCI Manifest?

I think we will ultimately want something that looks like a manifest, because it could be the case that we want to do a mount that includes something from more than one artifact. The simplest idea is taking your patch reference and using that:

```yaml
volumes:
  - name: oras-inline
    csi:
      driver: csi.oras.land
      readOnly: true
      volumeAttributes:
        # This probably needs some tweaking
        oras.artifact.patch: "ghcr.io/username/repository:latest"
```

And then possibly (since standards take forever) we could design a prototype that would be saved as an artifact itself and allows for assembling such a manifest:

```console
$ oras-csi patch docker.io/library/ubuntu:20.04 --minor sha256:aabbccdd --major sha256:eeffdgghh --crit sha256:iijjkkll
```

and that would result in:
"mediaType": "application/vnd.oci.artifact.manifest.v1+json",
"artifactType": "application/vnd.oci.image.patch.v1+json",
"refers": {
"mediaType": "application/vnd.oci.image.manifest.v1+json",
"size": 1234,
"digest": "sha256:cc06a2839488b8bd2a2b99dcdc03d5cfd818eed72ad08ef3cc197aac64c0d0a0",
"annotations": {
"org.opencontainers.image.ref.name": "docker.io/library/ubuntu:20.04"
}
},
"annotations": {
"org.opencontainers.image.patch.minor.digest": "sha256:aabbccdd",
"org.opencontainers.image.patch.major.digest": "sha256:eeffdgghh",
"org.opencontainers.image.patch.critical.digest": "sha256:iijjkkll"
}
} And that would be pushed as an artifact, and referenced in the oras-oci volume attributes. But I don't totally like that design because (I don't think) a human should need to look up digests! But high level:
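On the consuming side, it would be the driver (not the human) reading those digests. Here is a rough sketch in Go of how a driver might deserialize the patch manifest above, just to show the idea - the struct and field names simply mirror the example JSON and are not an existing OCI or oras-csi API:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// patchManifest mirrors the example manifest above. It is not a real
// OCI Go type - just an illustration of how a driver could consume it.
type patchManifest struct {
	MediaType    string `json:"mediaType"`
	ArtifactType string `json:"artifactType"`
	Refers       struct {
		Digest      string            `json:"digest"`
		Annotations map[string]string `json:"annotations"`
	} `json:"refers"`
	Annotations map[string]string `json:"annotations"`
}

func main() {
	// Pretend this came back from the registry for the patch artifact.
	raw := []byte(`{
	  "mediaType": "application/vnd.oci.artifact.manifest.v1+json",
	  "artifactType": "application/vnd.oci.image.patch.v1+json",
	  "refers": {
	    "digest": "sha256:cc06a2839488b8bd2a2b99dcdc03d5cfd818eed72ad08ef3cc197aac64c0d0a0",
	    "annotations": {"org.opencontainers.image.ref.name": "docker.io/library/ubuntu:20.04"}
	  },
	  "annotations": {"org.opencontainers.image.patch.minor.digest": "sha256:aabbccdd"}
	}`)

	var m patchManifest
	if err := json.Unmarshal(raw, &m); err != nil {
		panic(err)
	}
	// The driver, not the user, resolves the base image and patch digests.
	fmt.Println("base image:", m.Refers.Annotations["org.opencontainers.image.ref.name"])
	fmt.Println("minor patch:", m.Annotations["org.opencontainers.image.patch.minor.digest"])
}
```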
But at a high level: what do you think? I'm happy to prototype some ideas if we like any of them. Conceptually, the "assemble a patch" recipe feels very similar to something I did for dicom data - a deid "recipe" file (https://pydicom.github.io/deid/user-docs/recipe-headers/) that ranges from simple actions (ADD/REMOVE/BLANK) to running user functions across some set of dicom headers.
So many questions, I don't know where to start. It's all so interesting.
@salaxander might be interested in this as well, due to https://github.com/project-copacetic/copacetic
Can Flux or another workflow tool drive the patching of these mounts and/or the base images?
Why do we need a workflow tool, beyond just having the driver retrieve the manifests and dump stuff where it needs to be? Flux could be used as some kind of workflow service in a cluster (and that's a cool idea), even outside of the Flux Operator - there could be a pod running Flux that we could submit jobs or tasks to. The Flux Operator typically runs on a networked set of pods (e.g., they would be using this driver). I've been trying to think of more creative ways to implement this - e.g., you can actually communicate with Flux via a proxy, even with basic ssh. So theoretically, if we needed some kind of worker pod, we would run a small Flux instance there and then proxy commands to it. We can also communicate over RPC. That's a cool idea! Is that what you had in mind - having some kind of pod service with Flux?
The pv datapath currently has a `:` in it if there is a port or a digest form. The good news is that deployments from digest tags work. :) I don't know if this is an issue, but we most likely need to consider whether we should sanitize this further. If this isn't important, let's close the issue.
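For the sanitization question, one option would be to canonicalize the reference into a filesystem-safe path segment before using it as the bind directory. A minimal sketch in Go, not the driver's actual code - the function name and replacement scheme are just an illustration of the idea:

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeReference turns an OCI reference (possibly containing a registry
// port or a digest) into a string safe to use as a single directory name.
// The replacement characters here are arbitrary; a real implementation may
// want something reversible, or a hash of the full reference.
func sanitizeReference(ref string) string {
	r := strings.NewReplacer(
		":", "-", // registry ports and digest separators
		"@", "-", // name@digest form
		"/", "_", // path separators inside the reference
	)
	return r.Replace(ref)
}

func main() {
	refs := []string{
		"ghcr.io/username/repository:latest",
		"registry.local:5000/team/artifact:v1",
		"docker.io/library/ubuntu@sha256:cc06a2839488b8bd2a2b99dcdc03d5cfd818eed72ad08ef3cc197aac64c0d0a0",
	}
	for _, ref := range refs {
		fmt.Printf("%s -> %s\n", ref, sanitizeReference(ref))
	}
}
```

If simple character replacement risks collisions between different references that sanitize to the same string, hashing the full reference (and keeping the human-readable name only for display) would be one way around it.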