
ERROR: ... unable to destroy quota group: Device or resource busy #586

Open
MeisterP opened this issue Apr 14, 2024 · 4 comments
@MeisterP

With target_qgroup_destroy yes I get errors for every subvolume on /mnt/btrfs/my-hd/:

ERROR: Failed to destroy qgroup "0/3313" for subvolume: /mnt/btrfs/my-hd/btrbk-snapshots/my-subvolume
ERROR: ... Command execution failed (exitcode=1)
ERROR: ... sh: btrfs qgroup destroy 0/3313 '/mnt/btrfs/my-hd/btrbk-snapshots/my-subvolume'
ERROR: ... unable to destroy quota group: Device or resource busy

/mnt/btrfs/my-hd/ is an internal SSD.
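A minimal sketch of a manual retry, using the mountpoint and qgroup id from the error above: "Device or resource busy" often clears after a quota rescan completes, so waiting on the rescan before retrying the destroy is a common workaround. This assumes root access, btrfs-progs, and a live btrfs filesystem; adjust MNT and QGROUP to your setup.

```shell
#!/bin/sh
# Sketch only: retry the failed qgroup destroy after waiting for a rescan.
# MNT and QGROUP are the values from the report above.
MNT=/mnt/btrfs/my-hd
QGROUP=0/3313

if command -v btrfs >/dev/null 2>&1; then
    # `btrfs quota rescan -w` starts a rescan (or joins one already
    # running) and blocks until it finishes.
    btrfs quota rescan -w "$MNT"
    btrfs qgroup destroy "$QGROUP" "$MNT"
fi
```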

@Vladimir-csp

I'm seeing this too. My backup storage has accumulated a lot of stale qgroups, which I'm unable to remove:

Qgroupid    Referenced    Exclusive   Path 
--------    ----------    ---------   ---- 
0/5           16.00KiB     16.00KiB   <toplevel>
0/4257        16.00EiB        0.00B   <stale>
0/4273        16.00EiB        0.00B   <stale>
0/4332        16.00EiB        0.00B   <stale>
0/4472        16.00EiB     16.00EiB   <stale>
0/4504        16.00EiB        0.00B   <stale>
0/4511       140.27MiB        0.00B   <stale>
0/4512       941.40GiB        0.00B   <stale>
0/4516        16.00EiB     16.00EiB   <stale>
0/4520        51.03GiB        0.00B   <stale>
0/4524        16.00EiB        0.00B   <stale>
0/4526        10.07GiB        0.00B   <stale>
0/4528        16.00EiB        0.00B   <stale>
0/4530         1.18TiB        0.00B   <stale>
0/4531        24.76MiB        0.00B   <stale>
0/4532        16.00EiB        0.00B   <stale>
0/4533         9.31GiB      1.29GiB   snapshots/@.20240513T0400
0/4534         3.16TiB      1.42GiB   snapshots/home.20240513T0400
0/4535         8.63GiB    511.02MiB   snapshots/@.20240629T1628
0/4536         3.18TiB    251.29MiB   snapshots/home.20240629T1628
0/4538        16.00EiB        0.00B   <stale>
0/4540        16.00EiB        0.00B   <stale>
0/4541       583.60MiB        0.00B   <stale>
0/4542       746.09GiB     16.00KiB   <stale>
0/4543         8.77GiB    503.85MiB   snapshots/@.20240708T0400
0/4544         3.15TiB    928.46MiB   snapshots/home.20240708T0400
0/4545        16.00EiB        0.00B   <stale>
0/4546        16.00EiB        0.00B   <stale>
0/4547       655.34MiB        0.00B   <stale>
0/4548        16.00EiB        0.00B   <stale>
0/4550        16.00EiB        0.00B   <stale>
0/4551        16.00EiB        0.00B   <stale>
0/4552        16.00EiB        0.00B   <stale>
0/4554       971.65GiB    464.00KiB   <stale>
0/4555       289.70MiB        0.00B   <stale>
0/4556        16.00EiB        0.00B   <stale>
0/4557       127.41MiB        0.00B   <stale>
0/4558        16.00EiB        0.00B   <stale>
0/4559         9.29GiB    481.38MiB   snapshots/@.20240805T0400
0/4560         3.20TiB    347.89MiB   snapshots/home.20240805T0400
0/4564        16.00EiB        0.00B   <stale>
0/4565        16.00EiB        0.00B   <stale>
0/4566        39.07MiB        0.00B   <stale>
0/4567        16.00EiB        0.00B   <stale>
0/4568        16.00EiB        0.00B   <stale>
0/4569        16.00EiB        0.00B   <stale>
0/4570        16.00EiB        0.00B   <stale>
0/4571         2.08TiB    150.30MiB   <stale>
0/4576        16.00EiB        0.00B   <stale>
0/4577         8.81GiB    489.26MiB   snapshots/@.20240826T0400
0/4579         3.26TiB      2.00GiB   snapshots/home.20240826T0400
0/4580        89.73MiB        0.00B   <stale>
0/4582        16.00EiB        0.00B   <stale>
0/4583         8.84GiB    488.07MiB   snapshots/@.20240902T0400
0/4584         3.26TiB    113.34MiB   snapshots/home.20240902T0400
0/4588         8.76GiB    445.28MiB   <stale>
0/4589         3.26TiB     80.16MiB   <stale>
0/4590        11.51GiB    443.86MiB   snapshots/@.20240909T0400
0/4591         3.26TiB    132.71MiB   snapshots/home.20240909T0400
0/4592        11.56GiB    450.87MiB   <stale>
0/4593         3.26TiB    132.83MiB   <stale>
0/4594        11.57GiB    450.12MiB   snapshots/@.20240916T0400
0/4595         3.26TiB    145.27MiB   snapshots/home.20240916T0400
0/4596        11.09GiB    455.03MiB   snapshots/@.20240919T0400
0/4597         3.26TiB    351.88MiB   snapshots/home.20240919T0400
0/4598        11.09GiB    447.25MiB   snapshots/@.20240923T0400
0/4599         3.26TiB    103.12MiB   snapshots/home.20240923T0400
0/4600        11.11GiB    463.07MiB   snapshots/@.20240926T0400
0/4601         3.26TiB    116.32MiB   snapshots/home.20240926T0400
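For bulk cleanup of a listing like the one above, the stale rows can be filtered out of `btrfs qgroup show` output and fed to `btrfs qgroup destroy`. This is a sketch, not a btrbk feature: `list_stale` is a helper name introduced here, and it assumes the four-column table format shown above, where stale qgroups carry the literal path marker "<stale>".

```shell
#!/bin/sh
# Sketch: print the ids of stale qgroups from `btrfs qgroup show` output.
# Assumes columns: Qgroupid, Referenced, Exclusive, Path, with stale
# entries marked "<stale>" in the Path column (as in the table above).
list_stale() {
    awk '$4 == "<stale>" { print $1 }'
}

# Usage against a live filesystem (requires root; xargs -r is GNU):
#   btrfs qgroup show --sync /mnt/backup | list_stale \
#       | xargs -r -n1 -I{} btrfs qgroup destroy {} /mnt/backup
```

Running the destroy loop after a completed quota rescan improves the odds that the kernel no longer considers the groups busy.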

@Vladimir-csp

The source storage has them too.
The documentation suggests doing a quota rescan; I will try that.

@Vladimir-csp

The rescan is quite fast, and it helped.

@Vladimir-csp

But more stale qgroups were created on the next backup.
