Can't restore VM in Proxmox #62
Try to put the metadata on a separate block device. I use something like this:
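The commenter's snippet was not captured; a plausible way to do this with LINSTOR, assuming the `StorPoolNameDrbdMeta` property and example node/pool names, would be:

```
# Create a dedicated storage pool for DRBD metadata (names are examples):
linstor storage-pool create lvm pve-node1 drbd-meta-pool vg_meta

# Have LINSTOR place DRBD metadata in that pool for newly created resources:
linstor controller set-property StorPoolNameDrbdMeta drbd-meta-pool
```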
I tried, but it doesn't help with either volblocksize=16k or 32k.
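For context, one hedged way to change the zvol block size for volumes LINSTOR creates is a pass-through creation option on the storage pool; the property name and the node/pool names below are assumptions, not values from this thread:

```
# Pass -o volblocksize=16k to zfs create for new volumes in this pool
# (StorDriver/ZfscreateOptions is assumed here):
linstor storage-pool set-property pve-node1 zfs-pool StorDriver/ZfscreateOptions "-o volblocksize=16k"
```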
I have some ideas about why this happened in the first place, but when checking whether that was the case, I was not able to reproduce it. My best guess is that PVE is no longer that strict with sizes as long as things fit? I saw these at the end of the restore:
As this issue is already pretty old and I am no longer able to reproduce it, I'm closing it. If this is still an issue with the latest LINSTOR and the latest linstor-proxmox plugin, feel free to re-open.
I am getting the same issue. This is on my reference infrastructure build so nothing that exotic has been done with it - it's pretty vanilla. After reading the last comment I upgraded everything in the cluster to today's latest packages and tried again; same result. Here are the versions of everything:
@rck Are you restoring to a Linstor storage target when you get these "size ... updated" messages? I see those only when I restore (successfully) to an LVM target. A Linstor target always fails. Here is the output of a failed restore:
And a successful restore of the same backup to an LVM target:
Update - I can reproduce this issue when restoring from a backup stored on a node's local storage, but it works fine when restoring backups stored on Proxmox Backup Server to a Linstor target. When restoring from PBS I do see the messages rck noted:
@arcandspark
Yes, in my case that was a DRBD/LINSTOR disk where the backing storage was LVM, backed up to local LVM, restored to DRBD/LINSTOR with LVM as backing disks. What type of storage (pool) do you use for a) the VM (ZFS or LVM?) and b) the backup (LVM, from what I saw)? Is there some LVM vs. ZFS difference going on?
In my case it is a DRBD/LINSTOR disk backed by ZFS, backed up to a local directory, and restored to DRBD/LINSTOR with ZFS backing disks. Given the storage config below, the VM is backed up from …
Thank you for the very detailed and helpful logs, and sorry this took a bit longer to answer... I have an idea and will try to reproduce it in my dev env.
"unfortunately" I still can not reproduce this. First I thought it might be a (block)size issue between zfs and lvm, but backup+restore worked as expected. then I thought it might be the EFI disk, but still:
@arcandspark can you reproduce it with a fresh dummy VM: no EFI disk, no "funny things" like snapshots or resizing. Create, backup, restore. Still failing?
I prepared two identically installed Debian 12 VMs, except one is a Q35/SeaBIOS VM and the other is a Q35/OVMF EFI VM. Each VM has a single disk on the Linstor pool (essd-r2, backed by ZFS). The EFI VM has an EFI disk, also on the Linstor pool (essd-r2). I created a backup of each to the local LVM storage of the same node, then restored each on that node from its backup. The BIOS VM succeeded; the EFI VM failed when creating the EFI disk (a reproduction sketch follows after this comment).

BIOS VM Restore Task:
EFI VM Restore Task:
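As a sketch of the reproduction just described (the VM IDs are assumptions; the storage names follow the setup above):

```
# Create an EFI test VM with its disk and EFI vars disk on the LINSTOR pool:
qm create 9000 --name efi-test --machine q35 --bios ovmf --memory 1024 \
  --scsi0 essd-r2:8 --efidisk0 essd-r2:1

# Back it up to the node-local storage, then restore to the LINSTOR pool:
vzdump 9000 --storage local --mode stop
qmrestore /var/lib/vz/dump/vzdump-qemu-9000-*.vma.zst 9001 --storage essd-r2
```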
Once more, thanks for the detailed info. I always tested with Alpine images, and I did check the EFI disk box... whatever did the trick, Debian or Q35, I can now reproduce it. The problem is:
these are bytes, so that makes 528K.
5242880 bytes are exactly 5M. DRBD devices have a lower limit, and 5M looked like a good lower limit to me. Then add the usual rounding from LINSTOR, different block sizes, obscure …
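The arithmetic behind the two sizes in the error message checks out like this:

```
$ echo $((540672 / 1024))          # size recorded in the backup stream
528                                # -> the 528 KiB EFI vars disk
$ echo $((5242880 / 1024 / 1024))  # size of the newly created DRBD device
5                                  # -> the 5 MiB lower limit
```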
This should be fixed in 2dfcc49. @arcandspark can you confirm this fixes the issue for you? Just replace the file/the line and maybe …
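A hedged sketch of how the one-file fix could be applied manually; the installed plugin path and the raw URL are assumptions for a default linstor-proxmox setup:

```
# Replace the installed plugin with the fixed version from the commit:
wget -O /usr/share/perl5/PVE/Storage/Custom/LINSTORPlugin.pm \
  https://raw.githubusercontent.com/LINBIT/linstor-proxmox/2dfcc49/LINSTORPlugin.pm

# Restart the PVE services so the new plugin code is loaded:
systemctl restart pvedaemon pveproxy
```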
@arcandspark did you have a chance to test the proposed fix?
Environment:
Software Versions:
Proxmox Plugin config:
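For reference, a typical linstor-proxmox entry in /etc/pve/storage.cfg looks like the following; the controller address and resource-group name are placeholders, not the reporter's actual values:

```
drbd: drbdstorage
    content images, rootdir
    controller 10.11.12.13
    resourcegroup defaultpool
```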
Problem:
When I try to create and then restore a backup of a VM with TPM 2.0 and EFI storage enabled, I get an error about different disk sizes for the EFI disk (used to store EFI vars):
vma: vma_reader_register_bs for stream drive-efidisk0 failed - unexpected size 5242880 != 540672
Full restore log:
I think the problem is somehow tied to ZFS thin provisioning and the related functionality in LINSTOR. Please tell me if I'm doing something wrong.