Persistent volume is not a supported volume type for snapshots (Velero, MinIO, FSB) #6004
-
Hello, we need to back up PVs on an on-premises Kubernetes cluster, so we installed Velero linked to MinIO, together with Velero's File System Backup. No PVs are backed up and no error is shown; the only hint in the logs is "Persistent volume is not a supported volume type for snapshots, skipping". Does anyone have a clue how to back up PVs on an on-premises cluster without having to use an external cloud provider? Velero was installed using the following command (credentials-minio contains the MinIO bucket's access keys):
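The exact command isn't reproduced here, but a MinIO-backed install of this kind looks roughly like the sketch below (the bucket name, S3 URL and plugin version are placeholders, not the values we actually used):

```sh
# Illustrative only: MinIO exposed through the AWS S3-compatible plugin,
# with the node agent deployed for File System Backup.
velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.6.0 \
  --bucket velero \
  --secret-file ./credentials-minio \
  --use-node-agent \
  --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.example.local:9000
```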
Backing up the namespaces reports no errors and no warnings, and the backup phase ends up as Completed.
In the backup logs, however, we find the message: "Persistent volume is not a supported volume type for snapshots, skipping".
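For reference, this is roughly how we inspect a backup (the backup name is a placeholder):

```sh
# Overall status, warnings and per-resource details of the backup
velero backup describe <backup-name> --details

# Full backup log; the skipped volumes show up with the message quoted above
velero backup logs <backup-name> | grep -i "not a supported volume type"
```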
Please let us know if you have any idea how to back up PVs on an on-premises cluster without relying on an external cloud provider.
-
That warning just means that the volume does not support native snapshotting, and you have not configured any other backup method for it. You are trying to configure MinIO as a volume snapshot location. Snapshot locations are used for snapshotting cloud-based volumes: we support AWS/EBS, Azure, and GCP as snapshot locations. If you are using any of those volume types, you will need to configure your volume snapshot location accordingly, but you can't use MinIO for that, since EBS snapshots live in Amazon EBS, Azure snapshots live in the Azure cloud, and so on. If you are not using cloud volume types with supported volume snapshotters (and from the message, I imagine that this is the case), you have a couple of other options.
If you want to store your volume backups in MinIO, then you want to use Kopia/Restic (Velero's File System Backup) for those volumes.
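As a rough sketch (the namespace, pod and volume names below are just examples), file system backup can be enabled either per backup or per pod:

```sh
# Option A: default every pod volume in the backup to Kopia/Restic
velero backup create my-app-backup \
  --include-namespaces my-app \
  --default-volumes-to-fs-backup

# Option B: opt individual pods in by listing their volume names in an annotation
kubectl -n my-app annotate pod my-pod \
  backup.velero.io/backup-volumes=data,config
```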
-
Thank you @sseago, it worked well after setting --default-volumes-to-fs-backup to true: persistent volumes were backed up using Restic on an on-premises cluster. Nevertheless, during the restore, pods are created first and the data of their persistent volumes is restored afterwards, so the data only becomes available after the pod has started. This means, for example, that a pod from a StatefulSet like Cassandra will start before its data has been restored, which causes trouble because such pods aren't able to start properly. Do you know how we could do a proper restore for StatefulSets, as they need their data to be restored before they start?
-
@ehemmerlin Restic requires a pod to restore a volume, since the Restic pod running on the same node as your pod accesses your application's volumes by using mount propagation from the node. However, your application shouldn't be running at this point: when Velero restores a pod bound to a volume that Restic will restore, Velero injects an init container into that pod which waits until the pod's volumes have been restored before exiting, so the application container shouldn't run until all Restic restores for that pod have completed. Are you seeing the application container start before Restic is done? If so, that might be a bug. Are there multiple pods mounting the same volume RWX? I wonder whether something is going on with Velero's handling of RWX volumes with Restic.
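A quick way to sanity-check this (namespace and pod names are placeholders, and the injected init container's name varies by Velero version, so treat that as an assumption):

```sh
# The restored pod should carry a Velero-injected wait init container
# (e.g. restic-wait / restore-wait depending on the Velero version)
kubectl -n my-app get pod my-pod -o jsonpath='{.spec.initContainers[*].name}'

# Per-volume restore progress is tracked by PodVolumeRestore objects
kubectl -n velero get podvolumerestores
```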
-
Thank you @sseago, you were right: after taking some time to dive deeper into it, the application container wasn't starting before Restic was done. Nevertheless, the remaining issue we still face is restoring a MongoDB cluster composed of three nodes, as one of them triggers this fatal error: "block header checksum doesn't match expected checksum". Please note that we use the same application on Azure and no such error is triggered there; the backup and the restore on Azure work as expected. We proceeded with the following steps:
In this namespace we use Cassandra, RabbitMQ and MongoDB. Everything is restored well (including two of the MongoDB nodes), except one MongoDB node which most of the time ends up in a "Back-off restarting failed container" state, even after triggering a manual "mongod --repair" on it. Do you know what could cause this issue and how we could solve it?
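For context, this is roughly how we inspect the failing node (namespace and pod names are placeholders):

```sh
# Events and the previous container's log for the crash-looping MongoDB pod
kubectl -n my-app describe pod mongodb-2
kubectl -n my-app logs mongodb-2 --previous

# Check that the Restic restore for that pod's volume actually completed
kubectl -n velero get podvolumerestores | grep mongodb-2
```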
-
Since we switched to CSI snapshots, this issue doesn't occur anymore. Thanks for your help.
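For anyone landing here, this is roughly how we enabled CSI snapshots (the plugin versions, bucket, S3 URL and snapshot class name are placeholders; newer Velero releases bundle CSI support, so the separate CSI plugin may not be needed):

```sh
# Turn on Velero's CSI support and add the CSI plugin alongside the object-store plugin
velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.6.0,velero/velero-plugin-for-csi:v0.4.0 \
  --features=EnableCSI \
  --bucket velero \
  --secret-file ./credentials-minio \
  --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.example.local:9000

# Velero uses the VolumeSnapshotClass labelled for it when creating CSI snapshots
kubectl label volumesnapshotclass <snapshot-class-name> \
  velero.io/csi-volumesnapshot-class="true"
```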