
Ensure cleanup always runs by utilizing a pytest finalizer. #33

Open · wants to merge 6 commits into base: master
Conversation

josh-drapeau

@josh-drapeau josh-drapeau commented Dec 4, 2018

Trying to kill pytest with SIGINT (CTRL+C) causes docker, or its network bridges, to stay alive. I discovered this when one of the surviving network bridges killed my internet connection entirely. At other times, my computer's performance drops because of leftover Docker containers still running. Life is hard for the impatient!

While this will not protect someone mashing CTRL+C, I've found this small change reduces the chance of leftover waste without a noticeable performance cost.

@AndreLouisCaron
Contributor

Hi @not-josh !

Thanks for contributing :-)

I'm a bit surprised by this change. According to the Pytest fixtures documentation, it doesn't seem like exceptions are propagated like this. See Fixture finalization / executing teardown code.
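To make the behaviour under discussion concrete (a minimal sketch, not code from this PR): a yield-style fixture body is just a generator, and the code after the `yield` never runs if an exception is raised before the `yield` is reached, which is exactly the failure mode the PR guards against.

```python
# Sketch: a yield-style fixture body is a generator. If setup raises
# before the yield, the teardown code after the yield never executes,
# because pytest only resumes the generator after a successful setup.
calls = []

def fixture_body(fail_during_setup):
    calls.append("setup")
    if fail_during_setup:
        raise RuntimeError("setup failed")  # e.g. `docker-compose up` blew up
    yield "services"
    calls.append("teardown")  # skipped when setup raised

# Drive the generator the way pytest would during setup.
gen = fixture_body(fail_during_setup=True)
try:
    next(gen)
except RuntimeError:
    pass
print(calls)  # ['setup'] — no teardown ran
```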

@josh-drapeau
Author

josh-drapeau commented Dec 11, 2018

Pytest won't run a fixture's teardown if the initial setup fails for any reason. A try/finally block will still propagate the exception upward while ensuring its teardown code still runs.
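A minimal sketch of that try/finally shape (illustrative names standing in for the docker-compose calls, not the actual diff):

```python
# The `finally` clause runs even when the test session is interrupted,
# and the exception still propagates to the caller.
events = []

def docker_services_body():
    events.append("up")           # stands in for docker_compose.execute('up --build -d')
    try:
        yield "services"          # tests run while the generator is suspended here
    finally:
        events.append("down -v")  # stands in for docker_compose.execute('down -v')

gen = docker_services_body()
next(gen)                         # setup
try:
    gen.throw(KeyboardInterrupt)  # simulate CTRL+C during the test session
except KeyboardInterrupt:
    pass
print(events)  # ['up', 'down -v'] — cleanup ran, exception still propagated
```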

Alternatively (and likely cleaner), this can also be done using the addfinalizer function of the pytest fixture request. In the example below, I've added request as a parameter, moved the teardown code into its own function, and registered that function with the request using addfinalizer. What do you think?

Edit: I've updated my changes to reflect the code snippet below.

```python
@pytest.fixture(scope='session')
def docker_services(
    request, docker_compose_file, docker_allow_fallback, docker_compose_project_name
):
    """Ensure all Docker-based services are up and running."""

    def _cleanup():
        # Clean up.
        docker_compose.execute('down -v')

    docker_compose = DockerComposeExecutor(
        docker_compose_file, docker_compose_project_name
    )

    # If we are allowed to run without Docker, check its presence.
    if docker_allow_fallback is True:
        try:
            execute('docker ps')
        except Exception:
            # Run against localhost.
            yield Services(docker_compose, docker_allow_fallback=True)
            return

    # If a failure happens beyond this point, run cleanup.
    request.addfinalizer(_cleanup)

    # Spawn containers.
    docker_compose.execute('up --build -d')

    # Let test(s) run.
    yield Services(docker_compose)
```

@josh-drapeau josh-drapeau changed the title Moving build, testing, and cleanup into a try/finally block. Ensure cleanup always runs by utilizing a pytest finalizer. Jan 3, 2019

@butla butla left a comment


In general, the approach with the finalizer is safer, yes, but I've played a bit with docker-compose, and I have a reservation about this maybe not working. Annoyingly, for some reason I can't actually test your code right now (can't pull the images on a project that would be a good test...).

So the problem I have is that running down alongside up might not have reliable results and might leave some stuff behind. So the most important thing is that Python always kills the up process if it's still running. Will that happen if you throw an exception after docker_compose.execute('up --build -d')? Or give it a few seconds with sleep and then throw it? Are the containers getting nicely cleaned out? I wanted to test it with a bigger project that I have, but I'm having weird issues right now, like I said.

With that said: if CTRL+C kills the compose up process, which lets down always clean up everything, then this looks good to me.

Review threads on src/pytest_docker/__init__.py: resolved.
@josh-drapeau
Author

Hello @butla!

Raising an exception during or after docker_compose.execute('up --build -d') and before our yield will leave your Docker containers up and running unless a finalizer is used. With a finalizer defined, the containers get cleaned up as instructed.

Part of this is because docker-compose is being run with the -d (detached) flag. Normally, sending a SIGINT results in a cleanup; however, when running detached you are expected to clean things up manually using down.

As for up running alongside down... The way subprocess.check_output behaves ensures that up will never be run in parallel with down. Once our Python process hits a KeyboardInterrupt (or any other error), check_output kills the background process it is waiting on with SIGKILL, ensuring the process dies immediately before whatever exception occurred is re-raised.
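The exception-propagation side of that can be seen with a harmless failing command (a sketch; a short Python subprocess stands in for the docker-compose invocation):

```python
import subprocess
import sys

# check_output waits for the child process; on a non-zero exit status it
# raises CalledProcessError instead of returning, so the caller never
# continues as if `up` had succeeded.
try:
    subprocess.check_output([sys.executable, "-c", "import sys; sys.exit(3)"])
    rc = 0
except subprocess.CalledProcessError as exc:
    rc = exc.returncode  # the child has already exited before we get here
print(rc)  # 3
```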

Running docker-compose down is also pretty safe -- even if our up command failed halfway through! down will only try to remove things created by the Compose file and does not raise any errors if some (or all) containers aren't running. It'll complain, but still no errors!

@josh-drapeau
Author

Also, I've fixed the PEP 8 errors in the tests; however, they're still broken due to unrelated changes. :(

@butla

butla commented Jan 13, 2019

@not-josh I see the breakages. Got a solution for that but it raises another question - I'll link the issue/PR.

As for this PR in itself - I'll try to come up with code to illustrate the problem I described, maybe I'll have the time in the next two days.

@butla

butla commented Jan 13, 2019

Issue, leading to the code fix and some thoughts - #35

@augi
Member

augi commented Jan 25, 2024

Is this still valid, please?
