
Integration testing via pytest-docker #315

Merged: 27 commits merged into netbox-community:master on Jan 27, 2021

Conversation

@raddessi (Contributor) commented Dec 29, 2020

No rush here either - I don't have this at a point where it's ready to merge yet, but I would love to get your input on the direction to take. So far I have a very basic integration system set up, but it needs a lot of polish.

Running list of goals:

  • Ability to test against multiple netbox versions in a single pytest run
  • Ability to automatically determine what versions of netbox to test against
    • Do you know what to test against here? Off the top of my head I would scrape the latest versions from the netbox releases GitHub page and pull the latest release of each of the 3 most recent major/minor combos; right now that would be v2.10.2, v2.9.11, and v2.8.9. Thoughts?
  • Ability to override the default automatically chosen netbox versions to test against via pytest args
  • Clean up any templated docker-compose yaml files after the pytest run completes? Or perhaps clear them before a new run?
  • Clean up any leftover docker containers on an unclean pytest exit? Or clean them before a new run?
  • Add a pytest flag to disable all integration tests? Should the default be enabled or disabled?
  • Add a pytest flag to skip cleaning up the containers after a run so they can be poked at manually
  • Add devicetype library integration and auto-import a few device types to the containers
  • Clean up docker networks and volumes in addition to containers

Is there anything else that should be done for a basic integration suite feature add?

The short version of what this does: when pytest runs, it clones the netbox-docker repo and reads the docker-compose.yml file included in that repo as a base config, then renames some things in there (mostly prefixing names with the netbox version for that group of containers and putting them all in a dedicated network for that netbox version) so that we can spin up multiple versions of netbox at once by writing out multiple slightly altered docker-compose files based on that base config. Is there a better way to test against multiple versions at once?
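For illustration, a minimal sketch of that templating step, assuming PyYAML; the function name, paths, and the exact keys touched are placeholders rather than the PR's actual code:

import copy
import yaml

def render_compose_for_version(base_compose_path, version, output_path):
    """Write a version-specific compose file based on netbox-docker's base file."""
    with open(base_compose_path) as handle:
        base = yaml.safe_load(handle)

    rendered = copy.deepcopy(base)
    network = f"netbox_{version}"

    # Prefix each service's container name and pin it to a per-version network
    # so several netbox stacks can run side by side without name collisions.
    for name, service in rendered.get("services", {}).items():
        service["container_name"] = f"{version}_{name}"
        service["networks"] = [network]

    rendered["networks"] = {network: {"driver": "bridge"}}

    with open(output_path, "w") as handle:
        yaml.safe_dump(rendered, handle)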

Right now, testing against 2 versions of netbox, the total run time is about 2 and a half minutes. Adding more netbox versions does not really add much time, just load on the system, since most of the time is spent populating the database and caching static files, from what I've seen so far.

This is just a jumping off point, let me know what you think. Thanks :)

@raddessi raddessi marked this pull request as draft December 29, 2020 05:47
@raddessi (Contributor, Author) left a comment

Tests pass in GitHub; I'm happy enough for now to mostly leave it alone until you all get a chance to review.

Resolved review threads on tests/integration/conftest.py
@raddessi (Contributor, Author) left a comment

more to clean up

Outdated review threads on tests/integration/conftest.py and tests/integration/test_pytest_docker.py
@zachmoody (Contributor)

  • Ability to automatically determine what versions of netbox to test against

    • Do you know what to test against here? Off the top of my head I would scrape the latest versions from the netbox releases GitHub page and pull the latest release of each of the 3 most recent major/minor combos; right now that would be v2.10.2, v2.9.11, and v2.8.9. Thoughts?

netbox-docker should let us pull the latest patch release by just specifying the major.minor release number, e.g. v2.9 will pull the v2.9.11 image. At least, I'm pretty sure. I think I'd rather manually specify which releases we're testing against. That way, when a new version comes out, we don't have the potential to randomly start failing tests. It's nice to know, but probably frustrating on a PR that deals with something else entirely.

  • Ability to override the default automatically chosen netbox versions to test against via pytest args
  • Clean up any templated docker-compose yaml files after the pytest run completes? Or perhaps clear them before a new run?
  • Clean up any leftover docker containers on an unclean pytest exit? Or clean them before a new run?
  • Add a pytest flag to skip cleaning up the containers after a run so they can be poked at manually

Yep, those all make sense, will have to familiarize myself with pytest-docker before I can help here.

  • Add a pytest flag to disable all integration tests? Should the default be enabled or disabled?

Could just provide a more specific path to the unit tests if you wanted to bypass, e.g. pytest tests/unit.

Is there anything else that should be done for a basic integration suite feature add?

This is just a jumping off point, let me know what you think. Thanks :)

This is a great start, had no idea we could do all this in pytest. I'd like to convert at least a few of the fixture-based tests over to this to get a better idea of how we'd use it in practice. Particularly curious about how we can best handle dependencies. Like how testing devices requires us to add a device_type which, in turn, requires manufacturer etc.

@raddessi (Contributor, Author)

netbox-docker should let us pull the latest patch release by just specifying the major.minor release number, e.g. v2.9 will pull the v2.9.11 image. At least, I'm pretty sure. I think I'd rather manually specify which releases we're testing against. That way, when a new version comes out, we don't have the potential to randomly start failing tests. It's nice to know, but probably frustrating on a PR that deals with something else entirely.

Ah yeah, great point, I totally understand that. So right now we would hard-code just the short versions 2.8, 2.9, and 2.10 and let the containers spin up to whatever the latest patch version is automatically, correct? That works just fine, as I'm only testing the major/minor version at the moment anyway because of the version reporting pre-2.10.

Add a pytest flag to disable all integration tests? Should the default be enabled or disabled?

Could just provide a more specific path to the unit tests if you wanted to bypass, e.g. pytest tests/unit.

Ah, I meant it mostly as a way to single out and run only the fast-running unit tests, to make sure your code at least has no major holes in it before running a much longer (comparatively; there are no tests yet, but I'm sure the integration suite will just get longer and longer as time goes on) integration test. It's been handy to me a few times in other repos, but that may just be my style; if you don't see a need then that's good with me :)
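For illustration, here is one common way such a flag gets wired up in a conftest.py; the --run-integration option name and the "integration" marker are assumptions, not what this PR implements:

import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--run-integration",
        action="store_true",
        default=False,
        help="Also run the slow docker-based integration tests.",
    )

def pytest_collection_modifyitems(config, items):
    # By default, skip anything marked "integration" so the fast unit tests
    # can be run on their own; pass --run-integration to run everything.
    if config.getoption("--run-integration"):
        return
    skip_integration = pytest.mark.skip(reason="needs --run-integration")
    for item in items:
        if "integration" in item.keywords:
            item.add_marker(skip_integration)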

Particularly curious about how we can best handle dependencies. Like how testing devices requires us to add a device_type which, in turn, requires manufacturer etc.

Yes! I wanted to bring up the dependency issue myself once you had looked this over. There is the https://github.com/netbox-community/devicetype-library repo that we could import a few device types from to test with after the containers are spun up. Manufacturers are literally just a name, so those can be read from the yaml config for each of the device types we would load and populated at the same time. We could pin a version of that repo to use as well (bumping occasionally) so that no failures are introduced spontaneously by external updates. This solves for servers and switches, but I haven't seen any sort of patch panels included in the repo yet; I may be wrong though. We could send up a PR to add a few of those there, or perhaps keep a few yaml models in this repo; we wouldn't need many. Now that I think about it, they may not allow patch panels in that repo because of how much of a gray area they are implementation-wise: one setup may not have the same needs as another.
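A rough sketch of that import idea, assuming a pinned checkout of devicetype-library with its device-types/<vendor>/<model>.yaml layout and a pynetbox client; the helper below is illustrative, not code from this PR:

import glob
import yaml

def import_device_types(nb, library_path):
    """Create manufacturers and device types from devicetype-library YAML files.

    nb is assumed to be an authenticated pynetbox.api instance.
    """
    for path in glob.glob(f"{library_path}/device-types/*/*.yaml"):
        with open(path) as handle:
            definition = yaml.safe_load(handle)

        # Manufacturers really are just a name, so get-or-create them from
        # the same YAML definition that describes the device type.
        manufacturer = nb.dcim.manufacturers.get(name=definition["manufacturer"])
        if manufacturer is None:
            manufacturer = nb.dcim.manufacturers.create(
                name=definition["manufacturer"],
                slug=definition["manufacturer"].lower().replace(" ", "-"),
            )

        nb.dcim.device_types.create(
            manufacturer=manufacturer.id,
            model=definition["model"],
            slug=definition["slug"],
        )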

Thank you for checking this out, I'll go ahead and get started on the points you OK'd above.

@zachmoody (Contributor)

So right now we would hard-code just the short versions 2.8, 2.9, and 2.10 and let the containers spin up to whatever the latest patch version is automatically, correct?

Yep, I think the API only makes breaking changes between major releases. If not, we can revisit pinning to minor releases.

Ah, I meant it mostly as a way to single out and run only the fast-running unit tests, to make sure your code at least has no major holes in it before running a much longer (comparatively; there are no tests yet, but I'm sure the integration suite will just get longer and longer as time goes on) integration test. It's been handy to me a few times in other repos, but that may just be my style; if you don't see a need then that's good with me :)

I follow. I was just saying we can do that now if we provide a more specific path to pytest. Running the old fixture-based tests is a little tough going that route, but those are hopefully on the way out anyway. :)

Yes! I wanted to bring up the dependency issue myself once you had looked this over. There is the https://github.com/netbox-community/devicetype-library repo that we could import a few device types from to test with after the containers are spun up. Manufacturers are literally just a name, so those can be read from the yaml config for each of the device types we would load and populated at the same time. We could pin a version of that repo to use as well (bumping occasionally) so that no failures are introduced spontaneously by external updates. This solves for servers and switches, but I haven't seen any sort of patch panels included in the repo yet; I may be wrong though. We could send up a PR to add a few of those there, or perhaps keep a few yaml models in this repo; we wouldn't need many. Now that I think about it, they may not allow patch panels in that repo because of how much of a gray area they are implementation-wise: one setup may not have the same needs as another.

Yeah, I think I like that approach. We should still test adding a device-type from the API, but it'll probably get old having to make all the ones we'll need for some of the interface and cable tests.

@raddessi (Contributor, Author) commented Jan 4, 2021

Gotcha, sounds good then. I have most of the changes done locally; I just need a little more time to clean them up before sending. After that set goes in, I'll add a basic setup of the devicetype library integration as well to get some initial devices populated in the containers.

@raddessi (Contributor, Author) commented Jan 5, 2021

Basic pytest args should work as expected now. Currently netbox 2.8 does not spin up correctly via netbox-docker, though; I'll have to look into that if you want to test that far back. I also added a comment about the container cleanup hacks only being needed until avast/pytest-docker#33 gets resolved.

@zachmoody (Contributor)

Nice, looks good!

Currently netbox 2.8 does not spin up correctly via netbox-docker, though

Hrm, what's it say? I was able to find a tag for it on Docker Hub. I feel like we probably should test at least that far back; 2.9 is only ~6 months old. Once this is settled, though, we should probably come up with some kind of policy around a minimum supported version.

@raddessi (Contributor, Author) commented Jan 5, 2021

It's an error in the "netbox" container when running the setup, so I assumed it was an issue in the 2.8-tagged image. I'll dig a bit. Last lines:

▶️ Running the startup script /opt/netbox/startup_scripts/240_virtualization_interfaces.py
Traceback (most recent call last):
  File "./manage.py", line 10, in <module>
    execute_from_command_line(sys.argv)
  File "/usr/local/lib/python3.7/site-packages/django/core/management/__init__.py", line 401, in execute_from_command_line
    utility.execute()
  File "/usr/local/lib/python3.7/site-packages/django/core/management/__init__.py", line 395, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/usr/local/lib/python3.7/site-packages/django/core/management/base.py", line 328, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/usr/local/lib/python3.7/site-packages/django/core/management/base.py", line 369, in execute
    output = self.handle(*args, **options)
  File "/usr/local/lib/python3.7/site-packages/django/core/management/commands/shell.py", line 92, in handle
    exec(sys.stdin.read())
  File "<string>", line 1, in <module>
  File "/usr/local/lib/python3.7/runpy.py", line 280, in run_path
    run_name, mod_spec, pkg_name).copy()
  File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "../startup_scripts/__main__.py", line 25, in <module>
    runpy.run_path(f.path)
  File "/usr/local/lib/python3.7/runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "/usr/local/lib/python3.7/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/opt/netbox/startup_scripts/240_virtualization_interfaces.py", line 4, in <module>
    from virtualization.models import VirtualMachine, VMInterface
ImportError: cannot import name 'VMInterface' from 'virtualization.models' (/opt/netbox/netbox/virtualization/models.py)

@zachmoody (Contributor)

Ah yeah, getting the same thing. VMInterface wasn't added until 2.9, and I couldn't find where netbox-docker says for how long they maintain backwards compatibility with their initializers. Guess that settles how far back we test. 🤣

@raddessi (Contributor, Author) commented Jan 6, 2021

I asked in the NTC slack how far back they want to test; if I can't get an answer there I'll open an issue and ask directly in the repo. The first iteration of device imports works now, let me know what you think. I added a basic test as well.

@raddessi (Contributor, Author) commented Jan 6, 2021

There are many options for how to set up testing; I only guessed for the test I added in the last change. If you want to change it, feel free.

@raddessi raddessi changed the title WIP: integration testing via pytest-docker Integration testing via pytest-docker Jan 6, 2021
@raddessi raddessi marked this pull request as ready for review January 6, 2021 05:02
@raddessi (Contributor, Author) commented Jan 6, 2021

OK, from talking in the NTC channel, one of the maintainers wrote this in response to my post:

it would be nicer if we could add logic to support older versions from the latest snapshot though.

This would add too much overhead on our (the maintainers) side, so this is unlikely to happen (unless someone starts to pay us for this perhaps 😉 )
But you can check out the relevant version of the repository (the release notes state which versions of Netbox Docker work with which version of Netbox) and re-build the container to get updated dependencies. See https://github.com/netbox-community/netbox-docker/wiki/Build

How would you feel about checking out multiple copies (different tags) of the netbox-docker repo so we could run as many versions as we want? It would need a little more code and maintenance but I think the flow would be fairly simple still. Just a thought.

@zachmoody (Contributor)

How would you feel about checking out multiple copies (different tags) of the netbox-docker repo so we could run as many versions as we want? It would need a little more code and maintenance but I think the flow would be fairly simple still. Just a thought.

Yeah, I saw the reply in the NTC slack. That makes sense. We'll probably need to map repo and NetBox versions together at some point anyway. So yeah, we could use it, but I don't think we have to have it immediately.

- Added support for older netbox versions
- Re-added some changes that I had accidentally removed with a force push earlier when fixing a messed up merge
@raddessi (Contributor, Author) commented Jan 7, 2021

Woohoo! Well, I had some time and decided to just do it. Checking it out side by side works with a few minor tweaks. The test run time is up to about 4 minutes though; maybe we should bump the max timeout pytest-docker will wait for the containers to spin up from 5 minutes to maybe 8?
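For reference, a sketch of what the longer wait could look like using pytest-docker's docker_services.wait_until_responsive helper; the netbox_service fixture, URL, and port are placeholders:

import pytest
import requests

def is_responsive(url):
    try:
        return requests.get(url, timeout=5).status_code == 200
    except requests.RequestException:
        return False

@pytest.fixture(scope="session")
def netbox_service(docker_ip, docker_services):
    # docker_ip and docker_services are fixtures provided by pytest-docker.
    url = f"http://{docker_ip}:8080"  # port is an assumption
    docker_services.wait_until_responsive(
        timeout=480.0,  # the ~8 minutes floated above
        pause=1.0,
        check=lambda: is_responsive(url),
    )
    return url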

@zachmoody (Contributor)

Woohoo! Well, I had some time and decided to just do it. Checking it out side by side works with a few minor tweaks. The test run time is up to about 4 minutes though; maybe we should bump the max timeout pytest-docker will wait for the containers to spin up from 5 minutes to maybe 8?

Oh snap, nice work! Yeah, bumping the timeout makes sense. I wonder if, ultimately, we'd be better off putting these in their own workflow 🤔. Anything specific you want help on?

@raddessi (Contributor, Author) commented Jan 7, 2021

Ah yeah, I gotcha. To split it out to its own workflow all we would need is some args to select only unit vs integration tests, but that's doable, I've done it before. Right now I think it's fully functional. I'd like your input on the current implementation of the setup and the actual tests, and to know what style or content you want in the docstrings for args/returns; I'll check out the rest of the codebase for examples in a bit. I think it's just about ready now.

I'll bump the timeout for container generation as well for now.

Comment on lines +3 to +4

class TestSimpleServerRackingAndConnecting:
@raddessi (Contributor, Author) left a comment

Instead of setting up class-based tests where we would have to do lots of spin-up repeatedly, we could also use something like the pytest-depends module to just spin up some commonly used objects and reuse them a lot, but that has its own caveats as well. Maybe a mix of the two is best?
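A tiny illustration of the pytest-depends alternative being floated here; the api fixture and object names are made up for the example:

import pytest

def test_create_shared_site(api):
    # Creates the object that later tests read; "api" is a hypothetical
    # session-scoped pynetbox fixture.
    api.dcim.sites.create(name="pytest-shared-site", slug="pytest-shared-site")

@pytest.mark.depends(on=["test_create_shared_site"])
def test_shared_site_is_queryable(api):
    # pytest-depends skips this automatically if the creation test failed.
    # It is read-only and must not modify the shared object.
    assert api.dcim.sites.get(slug="pytest-shared-site") is not None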

@zachmoody (Contributor)

Ah yeah, I gotcha. To split it out to its own workflow all we would need is some args to select only unit vs integration tests, but that's doable, I've done it before.

Seems to work now with pytest tests/integration, don't think we need an arg, but it wouldn't hurt.

Right now I think it's fully functional. I'd like your input on the current implementation of the setup and the actual tests, and to know what style or content you want in the docstrings for args/returns; I'll check out the rest of the codebase for examples in a bit. I think it's just about ready now.

I'll bump the timeout for container generation as well for now.

Ok, yeah, made a quick pass at it. I might have to stew on the functional approach you have in test_dcim a little more. My fear is that it'll quickly get out of hand with a multitude of tests that do a lot of the same thing (e.g. making devices), and it might be difficult to tell exactly what's under test. OTOH, we might run into the same problem either way.

@raddessi (Contributor, Author) commented Jan 7, 2021

Seems to work now with pytest tests/integration, don't think we need an arg, but it wouldn't hurt.

Can you select only unit tests currently, though? They are pretty spread out, unless you do a glob match like pytest tests/test_* tests/unit. That's totally fine as long as you just expect the normal behaviour of running pytest to run all tests.

My fear is that it'll quickly get out of hand with a multitude of tests that do a lot of the same thing (e.g. making devices), and it might be difficult to tell exactly what's under test. OTOH, we might run into the same problem either way.

Yeah, I worry about that also. I had just left a comment about using dependencies via pytest-depends as an alternative; you just have to be careful about not modifying the items between the dependent tests. Maybe tests that would modify items would use classes with dedicated objects, and tests that only need to read objects could use commonly created objects via dependencies?

@zachmoody (Contributor)

Can you select only unit tests currently, though? They are pretty spread out, unless you do a glob match like pytest tests/test_* tests/unit. That's totally fine as long as you just expect the normal behaviour of running pytest to run all tests.

I think that's fair, and yeah, it depends on deprecating the fixture-based tests in favour of these so that eventually we only have unit and integration in tests.

@raddessi (Contributor, Author)

Hey, I hope things are going well :) Is there anything else you need done on this at this time, or are you just holding on to it for testing? I only ask because I should have some spare time this weekend to put in if needed. Thanks.

@zachmoody (Contributor)

👋 I don't think so; I think it's a great foundation. I've been meaning to spend some time on it to see if we could generate tests similar to how we do with the fixture-based ones (i.e. test each endpoint with a CRUD request). I wanted to do that today, but it doesn't look like that's going to happen. I'll see if I can't take a swing at it over the weekend or sometime next week.

@raddessi (Contributor, Author)

Oh, no rush, I was just checking in. Don't rush it on my part, as I won't be able to extend it much in the next few weeks anyhow. Sounds good; if I can help set up a few tests closer to how you want, just let me know. Have a good weekend!

@zachmoody (Contributor)

I didn't make it very far into my idea before realizing it's probably too involved to get done on this PR. I'll CC you when I get a chance to finish it out.

Many thanks for tackling this!

@zachmoody zachmoody merged commit ee98b8e into netbox-community:master Jan 27, 2021
@raddessi (Contributor, Author) commented Jan 28, 2021

Ah, right on, no worries. Thank you for the assist and for the module itself! So just to confirm, I should wait to actually build any tests until you get a solid framework pattern going, right?

@raddessi raddessi deleted the master.pytest-docker branch January 28, 2021 19:39
@zachmoody (Contributor)

Probably, just so we're not duplicating effort, but it might be a few weeks before I can get to it. The idea was that we could basically set up session-scoped fixtures of anything that would get used more than once (e.g. site, rack(s), devices, etc.). Then we'd go through each endpoint methodically by running create, retrieve, update, and finally delete operations on each. Obviously we wouldn't run another create on the stuff we've already created in fixtures, and it also means coordinating our tests so that we delete things in the right order. There are a few implementation details like that I still need to get into, but that's the gist. I just figured a more methodical approach would mean better coverage and fewer redundancies when we did identify gaps. You're a lot more familiar with it than I am, though; does that make sense?
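A minimal sketch of that pattern, assuming a session-scoped pynetbox api fixture; fixture and field names here are placeholders, not an agreed design:

import pytest

@pytest.fixture(scope="session")
def site(api):
    return api.dcim.sites.create(name="test-site", slug="test-site")

@pytest.fixture(scope="session")
def rack(api, site):
    return api.dcim.racks.create(name="test-rack", site=site.id)

class TestRackEndpoint:
    """Exercise one endpoint methodically: retrieve, update, then delete."""

    def test_retrieve(self, api, rack):
        assert api.dcim.racks.get(rack.id).name == "test-rack"

    def test_update(self, api, rack):
        rack.comments = "updated by integration test"
        assert rack.save()

    def test_delete(self, api, site):
        # Delete is exercised on a throwaway rack so the shared fixture
        # survives for later tests; shared objects would be deleted last,
        # in reverse dependency order.
        doomed = api.dcim.racks.create(name="doomed-rack", site=site.id)
        assert doomed.delete()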

@raddessi (Contributor, Author) commented Feb 4, 2021

So essentially there would be thousands of fixtures then, right? Like server_device_connected_to_x_type_switches? That would work, but it might just get hard to manage in the long run. From my testing of our use cases in netbox, I think there are so many edge cases that it may be easier to have isolated spin-up/spin-down tests for those, but honestly I'm not sure, and it would be great to just see where this goes. It's not like it would be super hard to change in a few months if it needs tweaking, and any tests are better than no tests, so run with it and when you have a setup you like I'll start adding to it :) Thanks and have a good week!

@zachmoody (Contributor)

So essentially there would be thousands of fixtures then, right?

😬 I hope not; maybe at most a few dozen. I'm not looking to test each device-type, just a more fundamental test of whether we're interacting with the endpoint properly. A cursory example would be where we create a couple of devices as session-scoped fixtures. Then when we got to, say, the cables test, we'd just create one between the two existing devices, retrieve it and update it with something, then delete it when we're done.
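Sketching that cursory example under the same assumptions (a session-scoped pynetbox api fixture plus two hypothetical interface fixtures, using the 2.x-style cable termination fields):

def test_cable_crud(api, device_a_interface, device_b_interface):
    # Create a cable between two interfaces that already exist as fixtures.
    cable = api.dcim.cables.create(
        termination_a_type="dcim.interface",
        termination_a_id=device_a_interface.id,
        termination_b_type="dcim.interface",
        termination_b_id=device_b_interface.id,
    )
    # Retrieve it, update it with something small, then clean it up.
    fetched = api.dcim.cables.get(cable.id)
    fetched.label = "crud-test"
    assert fetched.save()
    assert fetched.delete()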

@raddessi (Contributor, Author) commented Feb 5, 2021

Right on :) Sounds good to me!
