encryptionroot is not preserved when sending hierarchy of encrypted datasets #236
I think the solution is to always send the dataset with the encryption root using -R, but you can optionally use -X to exclude any datasets not in the current selection. Related issue for -R support: #36
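For illustration, a minimal sketch of that approach with hypothetical pool and dataset names (`zfs send -X` needs a reasonably recent OpenZFS, 2.1 or later):

```sh
# Snapshot the whole tree, then raw-send it recursively from the
# encryption root; -X excludes datasets not in the current selection.
zfs snapshot -r tank/enc@backup-1
zfs send -w -R -X tank/enc/scratch tank/enc@backup-1 | \
    ssh backuphost zfs recv -u backup/tank/enc
```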
Can you run with --debug and copy-paste the zfs send/recv line? I want to make sure we're not explicitly filtering out properties.
As far as I could figure out, to preserve encryptionroot, the first send has to be done with `-R`. I opened an issue in the openzfs repo to get clarity in the docs or otherwise change the behaviour.
Ugh, that sucks. I hope there is some other way.
What happens when you just inherit it? Or is it only possible to use `zfs change-key -i`?
Trying to inherit gives an error.
I did some testing and didn't get any issues with `zfs change-key -i`, but running it for every dataset is tedious.
Yeah it is. You could use some `zfs list .. | xargs zfs change-key -i` magic perhaps?
Yeah, for sure some kind of fixup script could be made. It would also have to load the key for each dataset for `zfs change-key -i` to work.
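A rough sketch of that xargs magic, with hypothetical dataset names (as noted, the keys must be loaded for `zfs change-key -i` to succeed):

```sh
# Load the keys for the received tree, then make every child dataset
# inherit the encryption root from its parent.
zfs load-key -r backup/tank/enc
zfs list -H -r -o name backup/tank/enc | tail -n +2 | \
    xargs -n 1 zfs change-key -i
```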
Yeah, you would only need to do that if you want to access the backups on the backup server, so it's fine in that case. However, if you use it as a replication tool it can be quite annoying, I think. In that case you want the source and target to match as much as possible regarding properties, which they won't by default because of this issue.
Indeed, it would be great to just have it be the same on both sides. I started looking at how it could be fixed in zfs itself. A workaround I'm considering is for the initial replication to send 1000 dataset "slots" up front.
I was trying to patch zfs-utils, but that still has issues. It turns out there is already a solution, and it could easily be added to zfs_autobackup if the system has pyzfs included. Basically:

```python
import libzfs_core
libzfs_core.lzc_change_key(b"testpool/enc_copy/data3", "force_inherit")
```

where "force_inherit" makes the received dataset inherit its parent's encryption root. Full example:
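The linked full example is not preserved here; as a hypothetical illustration, the same pyzfs call could be applied to every received child from the shell like this:

```sh
# Force each child of the received tree to inherit its parent's
# encryption root; force_inherit is what zfs recv's own recursive
# fixup path uses, so no keys should need to be loaded.
zfs list -H -r -o name testpool/enc_copy | tail -n +2 | while read -r ds; do
    python3 -c "import libzfs_core; libzfs_core.lzc_change_key(b'$ds', 'force_inherit')"
done
```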
Interesting. Of course, the whole point of zfs-autobackup is that we don't use libzfs and only use regular zfs commands, to make debugging easier. So while tempting, I don't think we should add this to the code? We should add it to the wiki page about encryption, I think.
Hmm, what if we had the ability to add a custom hook or command that is executed after a dataset is received, where the argument passed is the dataset/filesystem name? Similar to the pre/post snapshot commands.
PR to add this to the zfs CLI: openzfs/zfs#15821
We can. Could be useful for other situations as well. A post-create that only runs after initial creation of a dataset?
Awesome! Thanks for trying to improve zfs as well.
That would work. It might also be useful in the general case for any post-receive hook, to do things like custom mounting or logging/notifications. Maybe if one arg is the filesystem name, another could be whether the dataset is new. Or something like ... Possibly even a hacky ...
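To make that concrete, a hypothetical hook script under the proposed interface (both arguments are invented for illustration; no such hook exists yet):

```sh
#!/bin/sh
# Hypothetical post-receive hook: $1 = received dataset name,
# $2 = "new" only on the initial creation of the dataset.
if [ "$2" = "new" ]; then
    # Fix the encryption root once, right after the first receive.
    zfs load-key "$1" && zfs change-key -i "$1"
fi
```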
I've created a ticket for those hooks. Since it's a zfs issue, I'll close this for now.
Example using master commit 6d4f22b. `zfs load-key -r` will prompt for the passphrase for the original encryption root, as well as for each child dataset. See how the encryptionroot property is received wrong:
Originally: `zfs get encryptionroot -r xpool/enc2`

Received: `zfs get encryptionroot -r xpool/enc3`
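(The captured outputs are missing from this copy; illustratively, with the child `test` mentioned further below, the difference would look roughly like this:)

```sh
# Source: children share one encryption root.
$ zfs get -H -o name,value encryptionroot -r xpool/enc2
xpool/enc2            xpool/enc2
xpool/enc2/test       xpool/enc2
# Received: every dataset has become its own encryption root.
$ zfs get -H -o name,value encryptionroot -r xpool/enc3
xpool/enc3                 xpool/enc3
xpool/enc3/enc2            xpool/enc3/enc2
xpool/enc3/enc2/test       xpool/enc3/enc2/test
```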
Command executed by zfs_autobackup, per `zpool history xpool`:

So I discovered this after I had already sent a few TB of datasets, and each received dataset is its own encryption root. Trying to find info in the openzfs issues, I'm not totally clear on how this is supposed to work.
Maybe we have to hit this recv_fix_encryption_hierarchy function, which is only triggered for a recursive send/receive?
openzfs/zfs@bb61cc3#diff-ade451cd0b2212cb5979053cf5202e98ff65d5e2283431841b28ca17770a3fd0
But does that mean you can never update just one dataset at a time, and have to batch-send the whole thing with -R at once?
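For what it's worth, batch-updating the whole tree that way would look something like this (hypothetical snapshot names, using the pools from this report):

```sh
# Raw recursive incremental send of the whole hierarchy between two
# recursive snapshots, received into the existing backup tree.
zfs snapshot -r xpool/enc2@backup-2
zfs send -w -R -I @backup-1 xpool/enc2@backup-2 | \
    zfs recv -u xpool/enc3/enc2
```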
I think you can "fix" the received datasets by unlocking them and doing `zfs change-key -i` on each. But there also seem to be bugs related to that, with a high chance of data loss: openzfs/zfs#12123

Indeed, if I do `zfs change-key -i xpool/enc3/enc2/test`, then `zfs get encryptionroot -r xpool/enc3` is fixed. Sending a new change seems to keep this fixed encryption root, but any newly created child datasets will not be sent with the correct encryption root.