Disk fixtures rework #330


Open, wants to merge 26 commits into base: master

Commits (26)
33eec4c
host.available_disks: don't overload local with different type
ydirson Jul 8, 2025
7ca0152
host: fix comment as too restrictive
ydirson Jul 8, 2025
4b867a5
host: clarify the lsblk invocations
ydirson Jun 25, 2025
4a7640a
host: drop raw_disk_is_available
ydirson Jun 17, 2025
8f3993f
glusterfs: remove duplicate fixtures
ydirson Jun 18, 2025
ee5f7aa
sr_disk: document non-obvious behavior of features to be reworked
ydirson Jun 18, 2025
efdf4f3
common: introduce type aliases for HostAddress and DiskDevName
ydirson Jun 17, 2025
1f4daad
Add type hints necessary for the new disks fixtures
ydirson Jun 17, 2025
fb58218
Host: use a single lsblk call to cache all static data about disks
ydirson Jul 31, 2025
573d7f4
Host.disk: change to return BlockDeviceInfo not DiskDevName
ydirson Jul 31, 2025
f7c71d7
New fixtures to access disks without assuming uniform naming in first…
ydirson Jun 17, 2025
5cbc4ef
glusterfs: switch from sr_disk_for_all_hosts to pool_with_unused_512B…
ydirson Jul 31, 2025
7f3dd4f
disks: add per-host override using --disks=host:[device[,device]*]
ydirson Jul 31, 2025
fd5473d
host: explicitly type Host.pool
ydirson Jun 24, 2025
0e5b607
linstor: switch from sr_disks_for_all_hosts to pool_with_unused_512B_…
ydirson Jun 20, 2025
cc85347
Drop now-unused sr_disks_for_all_hosts
ydirson Jun 24, 2025
1fd67ac
Use unused_512B_disks in formatted_and_mounted_ext4_disk
ydirson Jun 25, 2025
270c9d8
Use unused_512B_disks in ext tests
ydirson Jun 25, 2025
7ec61c6
Use unused_512B_disks in sr_disk_wiped
ydirson Aug 1, 2025
270c7f4
Use unused_512B_disks in lvm tests
ydirson Jun 25, 2025
e43bac6
Use unused_512B_disks in xfs tests
ydirson Jun 25, 2025
0777782
Drop now-unused sr_disk fixture and flag
ydirson Jun 25, 2025
05dbcfd
New fixture unused_4k_disks for largeblocks tests
ydirson Jun 25, 2025
1fbaadb
Drop now-unused sr_disk_4k
ydirson Jul 31, 2025
d6e63bd
Drop now-unused Host.available_disks
ydirson Jul 8, 2025
4b94df4
Drop obsolete --sr-disk* options and their use in jobs and examples
ydirson Aug 20, 2025
README.md (2 changes: 1 addition & 1 deletion)

@@ -205,7 +205,7 @@ Here's an example of selection we can do thanks to the markers:

```
# Run storage driver tests that either need no VM at all, or advise to use a small VM. Exclude tests that reboot the hosts.
-pytest tests/storage -m "(small_vm or no_vm) and not reboot" --hosts=ip_of_poolmaster1,ip_of_poolmaster2 --vm=http://path/to/a_small_vm.xva --sr-disk=auto
+pytest tests/storage -m "(small_vm or no_vm) and not reboot" --hosts=ip_of_poolmaster1,ip_of_poolmaster2 --vm=http://path/to/a_small_vm.xva
```

conftest.py (186 changes: 80 additions & 106 deletions)

@@ -1,3 +1,5 @@
+from __future__ import annotations
+
import pytest

import itertools
@@ -12,6 +14,8 @@
from lib import pxe
from lib.common import (
    callable_marker,
+    DiskDevName,
+    HostAddress,
    is_uuid,
    prefix_object_name,
    setup_formatted_and_mounted_disk,
@@ -21,6 +25,7 @@
    wait_for,
)
from lib.netutil import is_ipv6
+from lib.host import Host
from lib.pool import Pool
from lib.sr import SR
from lib.vm import VM, vm_cache_key_from_def
@@ -31,7 +36,7 @@
# need to import them in the global conftest.py so that they are recognized as fixtures.
from pkgfixtures import formatted_and_mounted_ext4_disk, sr_disk_wiped

-from typing import Dict
+from typing import Dict, Generator, Iterable

# Do we cache VMs?
try:
@@ -80,19 +85,12 @@ def pytest_addoption(parser):
        help="Max lines to output in a ssh log (0 if no limit)"
    )
    parser.addoption(
-        "--sr-disk",
-        action="append",
-        default=[],
-        help="Name of an available disk (sdb) or partition device (sdb2) to be formatted and used in storage tests. "
-             "Set it to 'auto' to let the fixtures auto-detect available disks."
-    )
-    parser.addoption(
-        "--sr-disk-4k",
+        "--disks",
        action="append",
        default=[],
-        help="Name of an available disk (sdb) or partition device (sdb2) with "
-             "4KiB blocksize to be formatted and used in storage tests. "
-             "Set it to 'auto' to let the fixtures auto-detect available disks."
+        help="HOST:DISKS to authorize for use by tests. "
+             "DISKS is a possibly-empty comma-separated list. "
+             "No mention of a given host authorizes use of all its disks."
A reviewer (Contributor) suggested a change to the help text:

-             "No mention of a given host authorizes use of all its disks."
+             "No mention of a given host authorizes use of all its available disks."

The author (Contributor) replied:

Not really: the --disks flag deliberately acts on the disks feature itself, and all fixtures that filter further on criteria such as "unused" naturally inherit and benefit from this filtering.
We may want in the future to have a fixture using a disk that already carries an SR, and the same single --disks flag will impact it as well.

The reviewer replied:

Understood, but I do think it can easily be misunderstood because of this description.
)
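For illustration, the new flag could be used as below (the host addresses and device names are placeholders, and `--disks` may be repeated, one HOST:DISKS pair per occurrence):

```
# Restrict tests to sdb and sdc on the first master; other hosts keep all their disks:
pytest tests/storage --hosts=ip_of_poolmaster1,ip_of_poolmaster2 --disks=ip_of_poolmaster1:sdb,sdc

# An empty DISKS list forbids tests from using any disk of that host:
pytest tests/storage --hosts=ip_of_poolmaster1 --disks=ip_of_poolmaster1:
```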

def pytest_configure(config):
@@ -116,8 +114,8 @@ def pytest_collection_modifyitems(items, config):
        'windows_vm',
        'hostA2',
        'hostB1',
-        'sr_disk',
-        'sr_disk_4k'
+        'unused_512B_disks',
+        'unused_4k_disks',
    ]

    for item in items:
@@ -157,7 +155,7 @@ def pytest_runtest_makereport(item, call):
# fixtures

@pytest.fixture(scope='session')
-def hosts(pytestconfig):
+def hosts(pytestconfig) -> Generator[list[Host]]:
    nested_list = []

    def setup_host(hostname_or_ip, *, config=None):
@@ -223,6 +221,14 @@ def cleanup_hosts():

    cleanup_hosts()

+@pytest.fixture(scope='session')
+def pools_hosts_by_name_or_ip(hosts: list[Host]) -> Generator[dict[HostAddress, Host]]:
+    """All hosts of all pools, each indexed by their hostname_or_ip."""
+    yield {host.hostname_or_ip: host
+           for pool_master in hosts
+           for host in pool_master.pool.hosts
+           }
+
@pytest.fixture(scope='session')
def registered_xo_cli():
    # The fixture is not responsible for establishing the connection.
@@ -335,103 +341,71 @@ def local_sr_on_hostB1(hostB1):
    yield sr

@pytest.fixture(scope='session')
-def sr_disk(pytestconfig, host):
-    disks = pytestconfig.getoption("sr_disk")
-    if len(disks) != 1:
-        pytest.fail("This test requires exactly one --sr-disk parameter")
-    disk = disks[0]
-    if disk == "auto":
-        logging.info(">> Check for the presence of a free disk device on the master host")
-        disks = host.available_disks()
-        assert len(disks) > 0, "a free disk device is required on the master host"
-        disk = disks[0]
-        logging.info(f">> Found free disk device(s) on hostA1: {' '.join(disks)}. Using {disk}.")
-    else:
-        logging.info(f">> Check that disk or block device {disk} is available on the master host")
-        assert disk in host.available_disks(), \
-            f"disk or block device {disk} is either not present or already used on master host"
-    yield disk
+def disks(pytestconfig, pools_hosts_by_name_or_ip: dict[HostAddress, Host]
+          ) -> Generator[dict[Host, list[Host.BlockDeviceInfo]]]:
+    """Dict identifying all disks on all hosts of the first pool."""
+    def _parse_disk_option(option_text: str) -> tuple[HostAddress, list[DiskDevName]]:
+        parsed = option_text.split(sep=":", maxsplit=1)
+        assert len(parsed) == 2, f"--disks option {option_text!r} is not <host>:<disk>[,<disk>]*"
+        host_address, disks_string = parsed
+        devices = disks_string.split(',') if disks_string else []
+        return host_address, devices
+
+    cli_disks = dict(_parse_disk_option(option_text)
+                     for option_text in pytestconfig.getoption("disks"))
+
+    def _host_disks(host: Host, hosts_cli_disks: list[DiskDevName] | None) -> Iterable[Host.BlockDeviceInfo]:
+        """Filter host disks according to the list from `--disks` if given."""
+        host_disks = host.disks()
+        # no disk specified = allow all
+        if hosts_cli_disks is None:
+            yield from host_disks
+            return
+        # check all disks in --disks=host:... exist
+        for cli_disk in hosts_cli_disks:
+            for disk in host_disks:
+                if disk['name'] == cli_disk:
+                    yield disk
+                    break  # names are unique, don't expect another one
+            else:
+                raise Exception(f"no {cli_disk!r} disk on host {host.hostname_or_ip}, "
+                                f"has {','.join(disk['name'] for disk in host_disks)}")
+
+    ret = {host: list(_host_disks(host, cli_disks.get(host.hostname_or_ip)))
+           for host in pools_hosts_by_name_or_ip.values()
+           }
+    logging.debug("disks collected: %s", {host.hostname_or_ip: value for host, value in ret.items()})
+    yield ret
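As a rough sketch (not part of the diff) of what this fixture yields: the keys are Host objects (shown below as hostname strings for readability), and Host.BlockDeviceInfo is assumed to be a mapping of lsblk fields, as the "name" and "log-sec" accesses suggest:

```python
from typing import TypedDict

# Hypothetical approximation of Host.BlockDeviceInfo, based only on the lsblk
# fields used in this diff; the real type is defined in lib.host.
BlockDeviceInfo = TypedDict("BlockDeviceInfo", {"name": str, "log-sec": str})

# Shape of the mapping yielded by the `disks` fixture: one entry per host of
# the first pool, listing the disks authorized by --disks (or all of the
# host's disks when it is not mentioned on the command line).
example: dict[str, list[BlockDeviceInfo]] = {
    "10.0.0.1": [{"name": "sdb", "log-sec": "512"},
                 {"name": "sdc", "log-sec": "4096"}],
    "10.0.0.2": [{"name": "sdb", "log-sec": "512"}],
}
```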

@pytest.fixture(scope='session')
-def sr_disk_4k(pytestconfig, host):
-    disks = pytestconfig.getoption("sr_disk_4k")
-    if len(disks) != 1:
-        pytest.fail("This test requires exactly one --sr-disks-4k parameter")
-    disk = disks[0]
-    if disk == "auto":
-        logging.info(">> Check for the presence of a free 4KiB block device on the master host")
-        disks = host.available_disks(4096)
-        assert len(disks) > 0, "a free 4KiB block device is required on the master host"
-        disk = disks[0]
-        logging.info(f">> Found free 4KiB block device(s) on hostA1: {' '.join(disks)}. Using {disk}.")
-    else:
-        logging.info(f">> Check that 4KiB block device {disk} is available on the master host")
-        assert disk in host.available_disks(4096), \
-            f"4KiB block device {disk} must be available for use on master host"
-    yield disk
+def unused_512B_disks(disks: dict[Host, list[Host.BlockDeviceInfo]]
+                      ) -> Generator[dict[Host, list[Host.BlockDeviceInfo]]]:
+    """Dict identifying all unused 512-byte-sector disks on all hosts of the first pool."""
+    ret = {host: [disk for disk in host_disks
+                  if disk["log-sec"] == "512" and host.disk_is_available(disk["name"])]
+           for host, host_disks in disks.items()
+           }
+    logging.debug("available disks collected: %s", {host.hostname_or_ip: value for host, value in ret.items()})
+    yield ret

@pytest.fixture(scope='session')
-def sr_disk_for_all_hosts(pytestconfig, request, host):
-    disks = pytestconfig.getoption("sr_disk")
-    if len(disks) != 1:
-        pytest.fail("This test requires exactly one --sr-disk parameter")
-    disk = disks[0]
-    master_disks = host.available_disks()
-    assert len(master_disks) > 0, "a free disk device is required on the master host"
-
-    if disk != "auto":
-        assert disk in master_disks, \
-            f"disk or block device {disk} is either not present or already used on master host"
-        master_disks = [disk]
-
-    candidates = list(master_disks)
-    for h in host.pool.hosts[1:]:
-        other_disks = h.available_disks()
-        candidates = [d for d in candidates if d in other_disks]
-
-    if disk == "auto":
-        assert len(candidates) > 0, \
-            f"a free disk device is required on all pool members. Pool master has: {' '.join(master_disks)}."
-        logging.info(f">> Found free disk device(s) on all pool hosts: {' '.join(candidates)}. Using {candidates[0]}.")
-    else:
-        assert len(candidates) > 0, \
-            f"disk or block device {disk} was not found to be present and free on all hosts"
-        logging.info(f">> Disk or block device {disk} is present and free on all pool members")
-    yield candidates[0]
+def unused_4k_disks(disks: dict[Host, list[Host.BlockDeviceInfo]]
+                    ) -> Generator[dict[Host, list[Host.BlockDeviceInfo]]]:
+    """Dict identifying all unused 4KiB-sector disks on all hosts of the first pool."""
+    ret = {host: [disk for disk in host_disks
+                  if disk["log-sec"] == "4096" and host.disk_is_available(disk["name"])]
+           for host, host_disks in disks.items()
+           }
+    logging.debug("available 4k disks collected: %s", {host.hostname_or_ip: value for host, value in ret.items()})
+    yield ret

@pytest.fixture(scope='session')
-def sr_disks_for_all_hosts(pytestconfig, request, host):
-    disks = pytestconfig.getoption("sr_disk")
-    assert len(disks) > 0, "This test requires at least one --sr-disk parameter"
-    # Fetch available disks on the master host
-    master_disks = host.available_disks()
-    assert len(master_disks) > 0, "a free disk device is required on the master host"
-
-    if "auto" in disks:
-        candidates = list(master_disks)
-    else:
-        # Validate that all specified disks exist on the master host
-        for disk in disks:
-            assert disk in master_disks, \
-                f"Disk or block device {disk} is either not present or already used on the master host"
-        candidates = list(disks)
-
-    # Check if all disks are available on all hosts in the pool
-    for h in host.pool.hosts[1:]:
-        other_disks = h.available_disks()
-        candidates = [d for d in candidates if d in other_disks]
-
-    if "auto" in disks:
-        # Automatically select disks if "auto" is passed
-        assert len(candidates) > 0, \
-            f"Free disk devices are required on all pool members. Pool master has: {' '.join(master_disks)}."
-        logging.info(">> Using free disk device(s) on all pool hosts: %s.", candidates)
-    else:
-        # Ensure specified disks are free on all hosts
-        assert len(candidates) == len(disks), \
-            f"Some specified disks ({', '.join(disks)}) are not free or available on all hosts."
-        logging.info(">> Disk(s) %s are present and free on all pool members", candidates)
-    yield candidates
+def pool_with_unused_512B_disk(host: Host, unused_512B_disks: dict[Host, list[Host.BlockDeviceInfo]]) -> Pool:
+    """Returns the first pool, ensuring all hosts have at least one unused 512-byte-sector disk."""
+    for h in host.pool.hosts:
+        assert h in unused_512B_disks
+        assert unused_512B_disks[h], f"host {h} does not have any unused 512B-block disk"
+    return host.pool
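A hypothetical example of how a test could consume these fixtures (the test body is invented for illustration; the actual users are the glusterfs and linstor tests switched over in the commits above):

```python
import logging

# Hypothetical sketch: pick the first unused 512-byte-sector disk of each
# host of the pool guaranteed by pool_with_unused_512B_disk.
def test_disk_per_host(pool_with_unused_512B_disk, unused_512B_disks):
    pool = pool_with_unused_512B_disk
    for host in pool.hosts:
        disk = unused_512B_disks[host][0]  # non-empty, checked by the fixture
        logging.info("would use /dev/%s on %s", disk["name"], host.hostname_or_ip)
```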

@pytest.fixture(scope='module')
def vm_ref(request):