0.12.3 doesn't remove any snapshots if there is a zfs hold on one of them #96
Well, if the snapshots are created by znapzend they should not have holds on them. In your specific setup, where snapshots not created by znapzend exist, you might want to have a look at the so-called 'oracleMode' feature, which will do exactly what you are asking for: https://github.com/oetiker/znapzend/blob/master/doc/znapzend.pod
What if one wants to temporarily avoid destruction of a particular snapshot? (And what if one wants to temporarily mount or clone a particular snapshot?)
@rottegift how about determining holds while scanning the snapshots and then excluding them from destruction? Would this be a worthy task for your first contribution?
Hello, I have the same issue with a zfs clone/promote on a volume: the snapshot in use is locked on purpose, and znapzend cannot delete it (happily!), but snapshots then accumulate on it. For this particular case, origin snapshots can be found with "zfs get origin -r $zpoolname", or by looking for snapshots where "zfs get clones" is non-empty. For snapshots locked by "zfs hold", the 'userrefs' property is incremented, so we need to check all snapshots where userrefs > 0. I checked the value for a snapshot that is the origin of a clone, and its userrefs is 0. I will be happy to contribute on this, even though I'm not particularly fluent in Perl. Can you help me a bit and tell me where it would be best to add this check?
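For what it's worth, a minimal sketch of the two checks described above, in Perl since that is znapzend's language. This is not znapzend code: the helper names are made up, and the zfs invocations assume a standard OpenZFS/illumos command line.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Return true if the snapshot has at least one user hold.
# 'userrefs' counts holds placed with 'zfs hold'; it stays at 0 for a
# snapshot that is merely the origin of a clone.
sub snapshot_is_held {
    my ($snapshot) = @_;
    my $userrefs = qx(zfs get -H -p -o value userrefs "$snapshot");
    chomp $userrefs;
    return $userrefs =~ /^\d+$/ && $userrefs > 0;
}

# Return true if the snapshot is the origin of a clone; such a snapshot
# cannot be destroyed either ('clones' is non-empty in that case).
sub snapshot_has_clones {
    my ($snapshot) = @_;
    my $clones = qx(zfs get -H -o value clones "$snapshot");
    chomp $clones;
    return defined $clones && $clones ne '' && $clones ne '-';
}
```

A destroy candidate would then be skipped whenever either check returns true.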
I would assume that the place where we get the list of existing snapshots would also be the right place to fetch these attributes, so that we are able to skip the 'special' snapshots ... lib/ZnapZend/ZFS.pm#L142
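To make that suggestion concrete, here is a hedged sketch of fetching the hold count together with the snapshot list in a single zfs call. It is not the existing ZFS.pm code; the sub name and return shape are illustrative only.

```perl
# Sketch only: list the snapshots of a dataset together with 'userrefs'
# in one zfs call, so held snapshots can be filtered out before the
# destroy list is built. Names do not match znapzend's actual API.
sub list_destroyable_snapshots {
    my ($dataset) = @_;
    my @destroyable;
    open my $zfs, '-|', 'zfs', 'list', '-H', '-t', 'snapshot',
        '-d', '1', '-o', 'name,userrefs', $dataset
        or die "cannot run zfs list: $!";
    while (my $line = <$zfs>) {
        chomp $line;
        my ($name, $userrefs) = split /\t/, $line;
        next if $userrefs > 0;    # skip snapshots with active holds
        push @destroyable, $name;
    }
    close $zfs;
    return \@destroyable;
}
```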
If there is a zfs hold on a destination snapshot whose name matches znapzend's snapshot format, the combined zfs destroy will fail, and snapshots will accumulate on the destination.
The "dataset is busy" error is caused by, for example, "zfs hold keep pool/from_src/user@2013-04-03-072351".
The target is OmniOS r151010.
Ideally, a failed combined destroy would be retried, omitting the snapshots named in the error message.
Alternatively (or additionally), a failed combined destroy should be retried with a separate zfs destroy per snapshot (see the sketch below).
(Snapshot destruction may fail for other reasons, for instance if a snapshot is cloned or mounted; a combined zfs destroy will destroy nothing in those cases either.)
Presumably this affects removal of snapshots on the source as well.
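To illustrate the fallback idea, here is a hedged sketch in Perl (znapzend's implementation language). The retry-one-by-one behaviour is a proposal, not something znapzend currently does, and the sub name is made up.

```perl
# Sketch: try the combined destroy first; if it fails (for example
# because one snapshot has a hold and zfs reports "dataset is busy"),
# fall back to destroying each snapshot individually, so that a single
# held snapshot no longer blocks removal of all the others.
# @snapshots contains full names such as "pool/dataset@snapname".
sub destroy_snapshots {
    my ($dataset, @snapshots) = @_;
    return 1 unless @snapshots;

    # combined form: zfs destroy pool/dataset@snap1,snap2,snap3
    my $combined = $dataset . '@'
        . join(',', map { (split /\@/, $_)[1] } @snapshots);
    return 1 if system('zfs', 'destroy', $combined) == 0;

    my $ok = 1;
    for my $snapshot (@snapshots) {
        if (system('zfs', 'destroy', $snapshot) != 0) {
            warn "could not destroy $snapshot (held, cloned or mounted?)\n";
            $ok = 0;
        }
    }
    return $ok;
}
```

Parsing the error output so that only the snapshots zfs actually complains about are skipped would be a refinement of the same idea.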