
Support Direct Download on Ceph primary storage #11069


Open
nvazquez wants to merge 1 commit into main

Conversation

@nvazquez (Contributor) commented Jun 22, 2025

Description

This PR adds support for Direct Download on Ceph primary storage. It replicates the logic added for StorPool Direct Download support in #9833, adapted to Ceph.

Fixes: #3065

Types of changes

  • Breaking change (fix or feature that would cause existing functionality to change)
  • New feature (non-breaking change which adds functionality)
  • Bug fix (non-breaking change which fixes an issue)
  • Enhancement (improves an existing feature and functionality)
  • Cleanup (Code refactoring and cleanup, that may add test cases)
  • build/CI
  • test (unit or integration test code)

Feature/Enhancement Scale or Bug Severity

Feature/Enhancement Scale

  • Major
  • Minor

Bug Severity

  • BLOCKER
  • Critical
  • Major
  • Minor
  • Trivial

Screenshots (if appropriate):

How Has This Been Tested?

  • Tested on a KVM environment with NFS and Ceph primary pools
  • Created a service offering containing a storage tag matching the Ceph pool
  • Registered a template using the 'Direct download' option
  • Deployed a VM and observed that it is deployed and boots, with the disk downloaded to a temporary location and then converted to the Ceph destination (a rough sketch of the convert step follows below)
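
A minimal sketch of that convert step on the KVM agent, for orientation only: it assumes CloudStack's QemuImg/QemuImgFile helpers and reuses the RBDStringBuilder call quoted later in this thread; the temporary path, cmdTimeout, destPool and destDisk are placeholders, not the PR's exact code.

import com.cloud.hypervisor.kvm.storage.KVMPhysicalDisk;
import org.apache.cloudstack.utils.qemu.QemuImg;
import org.apache.cloudstack.utils.qemu.QemuImg.PhysicalDiskFormat;
import org.apache.cloudstack.utils.qemu.QemuImgFile;

// Placeholder path: the direct download handler has already fetched the
// template into a temporary file on the KVM host.
String tempTemplatePath = "/var/lib/libvirt/images/dd-tmp-template.qcow2";

// Source: the downloaded file in its registered format (QCOW2 here).
QemuImgFile srcFile = new QemuImgFile(tempTemplatePath, PhysicalDiskFormat.QCOW2);

// Destination: an rbd: target for the Ceph pool. RBDStringBuilder embeds the
// monitor and auth details in the string, so qemu-img can reach the cluster
// without a ceph.conf on the host (see the discussion further down).
QemuImgFile destFile = new QemuImgFile(
        KVMPhysicalDisk.RBDStringBuilder(destPool, destDisk.getPath()),
        PhysicalDiskFormat.RAW);

// Roughly: qemu-img convert -O raw <tmp file> rbd:<pool>/<image>:...
QemuImg qemu = new QemuImg(cmdTimeout);
qemu.convert(srcFile, destFile);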

How did you try to break this feature and the system with this change?

@nvazquez (Contributor, Author) commented:

@blueorangutan package

@blueorangutan commented:

@nvazquez a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.


codecov bot commented Jun 22, 2025

Codecov Report

Attention: Patch coverage is 0% with 24 lines in your changes missing coverage. Please review.

Project coverage is 16.57%. Comparing base (5b7c3b4) to head (8e199ca).

Files with missing lines                                 Patch %   Lines
.../hypervisor/kvm/storage/LibvirtStorageAdaptor.java      0.00%   24 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff              @@
##               main   #11069      +/-   ##
============================================
- Coverage     16.57%   16.57%   -0.01%     
  Complexity    13967    13967              
============================================
  Files          5743     5743              
  Lines        510468   510492      +24     
  Branches      62073    62074       +1     
============================================
+ Hits          84616    84617       +1     
- Misses       416390   416413      +23     
  Partials       9462     9462              
Flag        Coverage Δ
uitests     3.90% <ø> (ø)
unittests   17.47% <0.00%> (-0.01%) ⬇️

Flags with carried forward coverage won't be shown.


@blueorangutan commented:

Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 13869

@wido (Contributor) commented Jun 23, 2025

Great work! Good to see progress here. But requiring ceph.conf on the node will break a lot of things. As CloudStack supports multiple Ceph RBD pools, how will we handle this? We can't guarantee that there will be only one Ceph cluster.

Why don't we pass these arguments to qemu-img?

destDisk.setFormat(PhysicalDiskFormat.RAW);
destDisk.setSize(virtualSize);
destDisk.setVirtualSize(virtualSize);
QemuImgFile destFile = new QemuImgFile(KVMPhysicalDisk.RBDStringBuilder(destPool, destDisk.getPath()));
@wido (Contributor) Jun 23, 2025:
This line here means that it doesn't need ceph.conf, right?

@nvazquez (Contributor, Author) replied:
In my case the logs indicated that qemu-img was looking for ceph.conf in different directories and failed because it couldn't find it. It could be a misconfiguration on my side; I will explore.

@nvazquez (Contributor, Author) replied:
I was initially testing on the 4.20 branch, where RBDStringBuilder accepted more parameters. I have now recreated the environment using this PR's packages and I no longer observe the failure.

@nvazquez (Contributor, Author) commented:

Thanks @wido - sorry, I was not aware of this. I added the ceph.conf file because my qemu-img command execution was complaining that the file was not found. It could be a misconfiguration on my cluster; I will investigate and try passing options to the qemu-img command as you suggested.

@nvazquez (Contributor, Author) commented:

@wido I have recreated my CloudStack environment from the latest packages without copying the ceph.conf files to the KVM hosts. Now I'm not hitting any issue with the qemu-img conversion, and direct download templates are successfully deployed. I have updated the PR description to remove the need for the ceph.conf file.
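
For reference, the reason the convert can work without ceph.conf is that qemu-img accepts an rbd: target with the Ceph monitor and auth options embedded directly in the string. A rough, hand-built illustration with made-up pool, monitor and credential values; the actual string is produced by RBDStringBuilder and may carry additional options:

// Illustration only: hypothetical pool, monitor and credential values.
String pool = "cloudstack";
String image = "dd-template-1234";
String monHost = "10.0.32.1";
String authUser = "cloudstack";
String authKey = "AQB...example...";

// A target of this shape lets qemu-img reach the cluster directly,
// without reading /etc/ceph/ceph.conf:
String rbdTarget = "rbd:" + pool + "/" + image
        + ":mon_host=" + monHost
        + ":auth_supported=cephx"
        + ":id=" + authUser
        + ":key=" + authKey;
// e.g. qemu-img convert -O raw /tmp/template.qcow2 "rbd:cloudstack/dd-template-1234:mon_host=10.0.32.1:..."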

Development

Successfully merging this pull request may close these issues.

Direct Download extension to Ceph storage
3 participants