Replies: 9 comments 3 replies
-
I had also asked the question over at the Longhorn discussions, and it seems that the Helm deployment does not take existing PVs into account. Is this correct? If so, I think the ability to specify existing PVs/PVCs would be an important feature to include. Since I posted, I also tried backup/restore via Velero, which did not succeed either; upon starting, Vaultwarden seems to recreate new volumes as well.
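For reference, the usual way to bind a claim to a pre-existing volume in plain Kubernetes is to set `spec.volumeName` on the PVC. A minimal sketch; the claim name, namespace, size, and PV name below are placeholders, not chart defaults:

```yaml
# Hypothetical example: bind a new PVC to an already-existing PV.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vaultwarden-data          # placeholder claim name
  namespace: vaultwarden
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: longhorn
  volumeName: pvc-restored-example  # must name an existing PV in the Available phase
```

The PV must not still be bound elsewhere (its `claimRef` must be empty or already point at this claim), otherwise the claim stays Pending.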
-
Any chance of a reply? Are there any plans to support the use of existing PVs/PVCs?
-
Hey @Hr46ph,
Cheers
-
The custom values:
As you can see, the PV/PVC persist after uninstalling, so I don't even have to restore the volumes. If I do restore (which I tested), Longhorn gives me the option to keep the name identical, so the volume will be exactly the same. When I reinstall Vaultwarden with the above settings for the volumes, I get new PVs whose random-UUID-looking suffix is different, and the PVCs point to the new volumes instead of the existing ones.
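One plausible explanation (an assumption on my part, not verified against Longhorn internals): a PV that survives or is restored can sit in the Released phase with a stale `claimRef` still pointing at the old, deleted PVC, so a freshly created claim cannot bind to it and the provisioner makes a new volume instead. Clearing the stale `claimRef` returns the PV to Available:

```yaml
# Sketch of the relevant part of a Released PV's spec (names are placeholders).
# Removing the whole claimRef block (e.g. via `kubectl edit pv <name>`)
# moves the PV back to the Available phase so a new claim can bind to it.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-3f2e-example
spec:
  claimRef:                 # delete this block to release the volume
    namespace: vaultwarden
    name: vaultwarden-data
    uid: 1234-stale-uid     # UID of the old, deleted PVC
```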
-
@Hr46ph Did you ever solve this? I was also playing around with it but did not manage to get a pod working with a PVC.
-
What is interesting to me is that my Vaultwarden Helm deployment runs fine without any persistent data volume.
-
Def would like to see this option as well. It would be huge for cases where we want to back up our data on a NAS. Immich is an example of an app whose chart supports using an existing PVC.
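For comparison, the convention many charts use looks something like the fragment below. This is hypothetical: the `existingClaim` key is not something this chart currently exposes, and the key names are only modeled on charts such as Immich's:

```yaml
# Hypothetical values.yaml fragment illustrating the common
# `existingClaim` convention; NOT a key this chart supports today.
persistence:
  data:
    existingClaim: vaultwarden-data  # pre-created PVC to mount instead of provisioning a new one
```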
-
Just noticed this PR: #19 - fairly old, so it will need a rebase, but it might help do the trick!
-
Forked and testing this: https://github.com/paimonsoror/vaultwarden - it seems to work so far. Nuked the resources and rebuilt, and my environment and the vault that I store on NAS were intact, as well as the users/groups :) The changes were here:
-
I am running Longhorn as the storage provider. I installed Vaultwarden via Helm and supplied a values.yaml in which I set a domain name and custom sizes for the data and files volumes. That's it.
Vaultwarden runs fine, and after creating an ingress I can access the GUI. I create a user and import a JSON file with some passwords.
1. I create a backup of the volumes with Longhorn.
2. I delete Vaultwarden via `helm uninstall`.
3. I delete the current PVs and PVCs.
4. I restore the PVs from backup.
5. I recreate the vaultwarden namespace.
6. I create PVCs vaultwarden-files and vaultwarden-data for the restored volumes.
I reinstall Vaultwarden via Helm, and I kind of expect it to reuse the PVCs, since they exist under the same names it would have created them with. Instead, it insists on creating new PVs and PVCs with those names and the pod name 'vaultwarden-0' appended.
At this point, no matter what I try, I cannot get the pod or the StatefulSet to reuse the restored volumes.
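The appended pod name is a clue: PVCs created from a StatefulSet's `volumeClaimTemplates` are named `<template-name>-<statefulset-name>-<ordinal>`, and the controller adopts a pre-existing PVC only if it matches that exact name. A hedged sketch; the template name `vaultwarden-data` is a guess at what this chart uses, so check the actual name with `kubectl get pvc` after a fresh install:

```yaml
# Pre-create the PVC under the exact name the StatefulSet expects
# (<volumeClaimTemplate name>-<statefulset name>-<ordinal>), so the
# controller adopts it instead of provisioning a new volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vaultwarden-data-vaultwarden-0   # template name is an assumption
  namespace: vaultwarden
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: longhorn
  volumeName: restored-data-pv           # placeholder: name of the restored Longhorn PV
```

The claim's size, access mode, and storage class must also match what the volumeClaimTemplate declares, or the controller will refuse to use it.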
I realize I might lack some knowledge of Kubernetes, as I am still learning. But after a few hours of struggling with this, I figured I'd ask, because from what I understand it shouldn't be difficult.
I also tried to edit the newly created PVCs to point at the restored PVs, but the API didn't accept the change (as far as I can tell, `spec.volumeName` is immutable once a PVC is created).
I tried to edit the StatefulSet with kubectl edit; it accepts the edit, but nothing changes, and reopening it shows the original values again.
Appreciate some help here!
Thanks!