
[BUG] Syndicate may not delete lambda package folder after archiving #143

Open

bohdan-onsha opened this issue Dec 17, 2021 · 1 comment

Labels: bug (Something isn't working)

@bohdan-onsha (Collaborator)
Describe the bug
Some folders with lambda meta are not deleted after archiving the lambda package.

Expected behavior
All of the lambda folders will be removed after creating .zip archives.

Screenshots
(Screenshot 2021-12-17 at 16:39:26: lambda folders left undeleted after archiving)

Additional context
As the screenshot above shows, different lambda folders may be left undeleted, so the problem is not tied to one specific lambda.

@dmytro-afanasiev (Collaborator) commented Dec 24, 2021

The bug goes much deeper than it seemed at first.

It doesn't occur consistently, but it seems to happen when you assemble several bundles one after another whose Python lambdas share similar dependencies.

To reproduce the bug, start assembling a bundle for a reasonably large project with multiple lambdas, for instance 'caas'. While installing the requirements for one of the lambdas, pip may throw an error it isn't supposed to throw:

(screenshot: unexpected pip error during requirements installation)

After that, this lambda is not assembled, so its artifacts folder is not removed and no zip package is produced.

I made some changes here trying to resolve the issue. The code fixes the bug with removing the artifacts folder (only after a successful (!) installation of all the dependencies and packaging) and forces syndicate to call sys.exit(1) with a detailed description whenever the pip error occurs, so the user won't deploy a broken bundle. Here is how it looks (the same error as in the picture above):

(screenshot: syndicate exiting with a detailed error message after the pip failure)
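
For what it's worth, a minimal sketch of that behaviour could look like the following. All names and paths here are illustrative, not syndicate's actual internals: install the requirements, exit with a detailed message if pip fails, and remove the artifacts folder only after the archive has been written.

```python
import shutil
import subprocess
import sys
import zipfile
from pathlib import Path


def build_lambda_package(lambda_dir: Path, artifacts_dir: Path, bundle_dir: Path) -> None:
    """Install requirements, archive the artifacts, then remove the folder."""
    requirements = lambda_dir / 'requirements.txt'
    if requirements.exists():
        # Fail fast on any pip error so a broken bundle is never produced silently.
        result = subprocess.run(
            [sys.executable, '-m', 'pip', 'install',
             '-r', str(requirements), '-t', str(artifacts_dir)],
            capture_output=True, text=True)
        if result.returncode != 0:
            print(f"pip failed for lambda '{lambda_dir.name}':\n{result.stderr}")
            sys.exit(1)  # stop the whole assemble instead of deploying a broken bundle

    # Only after a successful installation: zip the artifacts and remove the folder.
    archive_path = bundle_dir / f'{lambda_dir.name}.zip'
    with zipfile.ZipFile(archive_path, 'w', zipfile.ZIP_DEFLATED) as zf:
        for path in artifacts_dir.rglob('*'):
            zf.write(path, path.relative_to(artifacts_dir))
    shutil.rmtree(artifacts_dir)
```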
The frequency of occurrence seems to depend on the number of workers in the ThreadPoolExecutor. With 2 workers instead of 5 I didn't manage to reproduce the bug (10/10 bundles succeeded), but that doesn't mean it won't occur. The branch I referred to sets 2 workers, so it should work properly there (a bit more slowly), but the problem still needs to be discussed.

Note: according to pip's documentation and some answers from here, pip isn't thread-safe, yet inside 'syndicate assemble' we install the lambdas' requirements in multiple workers (5, to be precise). That may be the source of the problem. Moreover, if you drop the ThreadPoolExecutor and make the installation single-threaded, no bugs occur.
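
For illustration only, here is a rough sketch of that single-threaded-installation idea (all paths and helper names are mine, not syndicate's actual code): run the pip installs sequentially, and keep only the archiving step parallel.

```python
import shutil
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path


def install_requirements(lambda_dir: Path, target_dir: Path) -> None:
    requirements = lambda_dir / 'requirements.txt'
    if requirements.exists():
        # One pip process per lambda; check=True surfaces any failure immediately.
        subprocess.run(
            [sys.executable, '-m', 'pip', 'install',
             '-r', str(requirements), '-t', str(target_dir)],
            check=True)


def archive_and_clean(target_dir: Path, bundle_dir: Path, name: str) -> None:
    # Plain file I/O, safe to run in parallel.
    shutil.make_archive(str(bundle_dir / name), 'zip', root_dir=target_dir)
    shutil.rmtree(target_dir)


def assemble_bundle(lambda_dirs, artifacts_root: Path, bundle_dir: Path) -> None:
    # Phase 1: install sequentially, so no two pip runs overlap.
    for lambda_dir in lambda_dirs:
        install_requirements(lambda_dir, artifacts_root / lambda_dir.name)

    # Phase 2: archive and remove the folders in parallel (2 workers, as in the branch).
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(archive_and_clean,
                               artifacts_root / d.name, bundle_dir, d.name)
                   for d in lambda_dirs]
        for future in futures:
            future.result()  # re-raise any error instead of leaving folders behind
```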
