The flag does not respect package resolver results and leads to inconsistent lock files. #5613
Comments
How do you handle it when the resolver creates an inconsistent lock file, or a lock file that simply cannot be installed? The whole point of removing `--keep-outdated` is that it leads to exactly those inconsistent results.
Ok, pin the appropriate problematic packages and start using named package categories where it makes sense if some of these different teams don't actually need the whole set of 110 packages.
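For example, something along these lines (package names, versions, and the category name are only illustrative, and assume the `--categories` option available in recent pipenv releases):

```sh
# Pin a known-problematic dependency explicitly so a relock cannot move it
# (the cryptography version here is illustrative, not from this thread)
pipenv install "cryptography==39.0.1"

# Put packages that only one team needs into a named category
# instead of the default [packages] section of the Pipfile
# (celery and the "workers" category name are illustrative)
pipenv install celery --categories="workers"

# Teams can then sync only the categories they care about
pipenv sync --categories="packages workers"
```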
The proposal was never that you pin ALL transitive dependencies -- that is exactly what the lock file is for. The proposal was that you pin ALL problematic dependencies, so that when you lock you can trust that the specifiers you don't want to be affected won't be. It was less of a proposal than a statement of how this is designed to work.
I personally cannot support the `--keep-outdated` flag.
Here is an example of the problem. First I install an old Django, imagining this was some time ago. Then, today, I want to add django-model-utils. It "works" and generates a lockfile, but the lockfile is not valid: django-model-utils 4.2.0 will not work even with Django 3.0 (support was dropped), so having 4.2.0 installed alongside 2.2.28 is begging for problems. The Lockfile:
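In command form, the sequence is roughly this (only the django 2.2.28 and django-model-utils 4.2.0 versions are from the example above; the exact invocations are an approximation):

```sh
# Some time ago: install an old Django and lock it
pipenv install "django==2.2.28"

# Today: add django-model-utils while keeping the existing pins in place
pipenv install django-model-utils --keep-outdated

# Result as described above: the lock ends up pinning django-model-utils 4.2.0
# next to django 2.2.28 without re-resolving the older pin
```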
Yes, it occasionally happens. That's the reason we run pipenv lock & sync twice: sometimes the first run fails and the second usually succeeds. We also generate requirements and install them into an empty venv, so we can detect such issues. But that is a smaller problem than not being able to upgrade in small chunks at all, which is a HUGE minus for handling packages, especially in a large project in production.
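Roughly, the workflow looks something like this (the scratch venv path is only an example, and `pipenv requirements` assumes a pipenv version where the command exists; older releases used `pipenv lock -r`):

```sh
# Lock and sync twice: the first pass sometimes fails, the second usually succeeds
pipenv lock --keep-outdated && pipenv sync
pipenv lock --keep-outdated && pipenv sync

# Cross-check the lock by exporting requirements and installing into an empty venv
pipenv requirements > requirements.txt
python -m venv /tmp/lock-check
/tmp/lock-check/bin/pip install -r requirements.txt
```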
A version that is not problematic today usually is tomorrow; every package breaks compatibility at some point. That's why we always check the release notes for each package when we upgrade it, and why iterative upgrading is important to us. Categories don't work for us, since we have a monolith and need all packages installed at the same time. We also schedule dependency upgrades so that it is easy to see if anything breaks in production (or testing). For example, a major version upgrade of a core library is a planned effort and not something that just accidentally slips in.
It defeats the purpose if you cannot control what you want to upgrade. I understand it was a hack and poorly documented, but people were apparently using this feature if they are complaining about it :) Taking the feature away without an alternative is a bad solution.
The fact that it is error prone is not such a big deal, because you have Git, pull requests, and CI. If pipenv produced incorrect results, this would be caught very early in the process. No big deal really. Committing only parts of the lock file seems more error prone to me. Regarding your example:
If you would add … Leave it buggy: it's not perfect, but it works just fine, at least for us :)
Can you explain this in more detail? If you are running …
That is debatable -- you can control it with the specifiers in the Pipfile. Every package has the potential to break compatibility; review the packages being updated as part of the PR that adds a new dependency and decide at that time whether they need to be pinned in the Pipfile. You are describing having a testing cycle, but the scope of testing you want to control using `--keep-outdated` …
The same could be said about what is recommended -- that you go through a PR process and review your lock file constraints.
That is precisely what …
There is an upgrade option, though I have not used it much -- but when you upgrade a package and it has a dependency that needs to be upgraded too, you would expect both to be upgraded, right? Side note: have you ever explored the behavior of …?
FWIW poetry does let you install incompatible versions of things together and won't affect the lockfile. I don't know why you would want that behavior, as the package authors already specify that they are not compatible, and the pip resolver respects that. Pipenv uses the pip resolver and poetry does not. Therefore:
I believe it is related to #4351 and problems with updating hashes. We update a dependency in the Pipfile and run lock --keep-outdated; on the first run it updates the hashes in the lock file BUT runs the install with the old hashes, and on another run it "fixes" itself. There is also another issue, which I presume is related to --keep-outdated: old hashes are left inside even if the version is updated. How we run it:
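Something like this (an approximation of the sequence described above; the requests pin is only an example):

```sh
# 1. Bump a single dependency in the Pipfile by hand, e.g. requests = "==2.28.2"
# 2. Relock without touching unrelated pins; run twice because the first pass
#    can leave stale hashes, as described above (#4351)
pipenv lock --keep-outdated
pipenv lock --keep-outdated
pipenv sync
```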
How would that work in practice? You cannot partially accept a PR. So you run CI and QA tests with 5 library changes, send it to production, and it fails for some reason. You roll back the PR. Then you want to selectively update one version at a time (or even bisect). How would pipenv help without keep-outdated? Manually fiddling with the lock file? We have had that case multiple times. Last time the grpc library introduced a performance degradation (there is still an open issue with the new IO library) that doubled CPU usage in production under high load and increased our latencies. We easily tracked it to grpc and pinned it in the Pipfile, since we had only upgraded the grpc dependency.
Yes, doing it manually is still more error prone, and fiddling with lock files directly is bad.
Not in one go. I would expect to get an error if I introduced incompatibilities (nit: e.g. a flag like --update-needed).
Thank you, I will try it. I haven't explored it yet, but it sounds like it could work for us. I need to see how to do a selective upgrade and update the Pipfile version, or do an install of a new dependency.
I understand. That's why I would like to shed light on my use case, and perhaps hear from other people using this feature, because nobody just happens to accidentally pass this flag :) Perhaps fixing issue #4351 would solve a lot (especially if you document those alternatives).
I don't want that and I never did. I want to keep upgrading under control without breaking everything, as reliably as possible. If the tooling sometimes produces wrong output, you can easily create an automated test for it (like installing into a fresh venv). Not perfect, but acceptable. I don't want to sound pushy or negative; I really do appreciate pipenv and your work. I just want to get our use case across (and hear from other people using this "complex" feature).
I have tried the solution you proposed. I don't have time to investigate how poetry works; I still prefer pipenv for now. I was using our own existing pipenv lock file with our Pipfile. Inside it was (among many, many others): Option 1 [broken]:
Option 2 [working as workaround for direct dependency if you revert pipfile]:
Option 3 [broken]:
Option 4 [as expected]:
Option 5 [working as workaround if you revert pipfile changes]:
Option 6 [broken]:
Tested another case that you mentioned:
Added it to the Pipfile, then ran `PIPENV_IGNORE_VIRTUALENVS=1 pipenv lock` and got the same result (as expected!):
It's the same in poetry. Both detect conflict.
django-model-utils 4.2.0 requires Django > 2.0.0. That's why you didn't get the error in poetry or pipenv.
@matejsp Thanks for your detailed responses ... I am working through them slowly.
Say you roll back because it's critical, or it's not critical so you have a bit of time to fix it -- in either case, the path forward would be to isolate the offending upgrade and pin it in your Pipfile.
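Concretely, with the grpc example from earlier, that would look something like this (the grpcio version below is made up for illustration):

```sh
# Pin the offending package; pipenv install records the pin in the Pipfile
pipenv install "grpcio==1.48.2"   # version is illustrative, not from the thread

# Relock and sync so the rest of the dependencies follow the resolver
pipenv lock
pipenv sync
```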
I am not recommending, and never have recommended, that people fiddle with the lock files directly. Sounds like there are numerous bugs in `--keep-outdated`.
My issue was how to isolate the offending upgrade (since we couldn't reproduce it in QA). So: for each dependency, pin it, deploy to production, roll back if there is a performance degradation, and repeat. At the end, unpin all the deps that were not problematic. We can both agree that your ideal way does not fit how we do it :) Did I mention our SecOps team, which does fast upgrades of only CVE-impacted deps without touching the others, because the hotfix release goes to prod quickly without much testing? Like the last two CVEs in Django this month.
That would be an OK solution, but I really can't speculate about corner cases. One thing I don't understand: if a corrupted lock file is produced for some reason, why isn't there a validation step at the end? Better to show an error than to silently produce a corrupted lock file. In that case the developer could update/pin more libraries to help the pipenv resolver (or file a bug report). Now to the fun part :D PoC :) Differences:
Example from lock file:
There would be fewer corner cases than with `--keep-outdated`.
There is validation if the lock phase cannot be resolved -- but I don't understand your comment. As far as I know, we don't produce a "corrupted lock file"; it's just that using `--keep-outdated` …
That is to say, the correct way to isolate a package upgrade is to consider all the packages it requires and ensure they all meet the specified version requirements, which is not what `--keep-outdated` does.
Last note -- I do know that poetry uses its own resolver, different from pip's, so that can explain some of the differences you are seeing there.
Sorry, from the earlier comments I was under the impression that --keep-outdated sometimes produces inconsistent/corrupted lock files. What we observed are the following cases:
Regarding cases A, B, C ... try updating; if there is a conflict, detect it and report an error :) No need to do magic here and auto-resolve ... say C is incompatible and let the developer pin C to the correct version. IMHO, if it is hard to maintain the magic in keep-outdated ... make it simpler :D It will still cover 95% of use cases. Alternatives such as --selective-upgrade are fine too, if they work :)
Just a drive-by comment on the claim that django-model-utils 4.2.0 and django 2.* are incompatible: I see no such incompatibility. The requirement from django-model-utils is for django >= 2.0.1. I know nothing about pipenv, but as far as I can see this output is incorrect:
@dimbleby It was already corrected above by @matejsp, who pointed out that django-model-utils==4.2.0 does allow django 2.*, so it was not a valid test of poetry. The corrected example shows this is poetry's behavior, which is similar to what you would get during pipenv install if your …
Ah, I didn't see that. (Actually I still don't, but it's a long thread, and if we're all agreeing then it couldn't matter less.) I'll drop out.
@matejsp Here is a rough implementation (it updates the lock how I would expect, but does not update the Pipfile yet): https://github.com/pypa/pipenv/pull/5617/files
Example: 1.) Installing requests:
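Usage is along these lines (the example output is left out here; the version-specifier form is assumed rather than confirmed in this thread):

```sh
# Upgrade only requests (and whatever it strictly requires) in the lock file
pipenv upgrade requests

# Or move to a specific version without touching unrelated pins
pipenv upgrade "requests==2.28.2"
```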
@matejsp I've made some more updates to the PR for the …
@matteius Just need some time (hopefully today). |
Thanks @matejsp -- I've just pushed some updates to integrate it with the …
Tested on our monolith with your branch. Hope it helps :) I installed via: …
Classic update:
Trying a no-op operation: used after a git merge to check whether pipenv is in sync with the lock file when resolving conflicts.
Updated dependency in pipfile directly: requests = "==2.28.2"
Pinned dependency update (without touching pipfile):
Transitive dependency update:
Updated dependency in pipfile:
Upgrade a dependency that cannot be upgraded:
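In rough terms, the scenarios above map to commands like these (a sketch only: package names other than requests and chardet are placeholders, and the exact flags on the branch may differ):

```sh
pipenv update                        # classic full update
pipenv upgrade requests              # after editing the Pipfile to requests = "==2.28.2"
pipenv upgrade "requests==2.28.2"    # pinned upgrade without touching the Pipfile
pipenv upgrade chardet               # transitive dependency upgrade
pipenv upgrade "somepkg==9.9.9"      # an upgrade that cannot be satisfied (placeholder spec)
```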
@matejsp Thanks for your detailed analysis. I've pushed a fix for the no-op case you've identified:
And a few notes on your other points:
Yeah, it turns out it's rather hard to merge sub-strings in a TOML file. While working on this I fixed another bug in project.py. There was a weird test case along the lines of: if we have a non-PEP(?)-compliant package name, the old code would try to update it and maintain the comments; fixing the actual bug was more important than trying to maintain that behavior (at least for now).
Hmmm, my intention was that you would run …
This is actually working as intended -- the upgrade command was given chardet and …
This is because pip's resolver is not quite clear in its messaging about what the issues are when backtracking; I think this might be resolvelib. I also think, IIRC, that the pip developers are working on improving the messaging for this case, but I honestly am not sure to what extent that is true (I want to believe it is). Anyway -- that's all I have for now. Thanks again!
Great! :) I agree with most of the comments.
I understand the issue with relocking. The problem is that upgrade updates the Pipfile and the lock file, giving you the false sense that everything is OK. Maybe add some validation and a warning that the lock file is outdated. The reason one of my tests also updated the Pipfile is that when merging using git flow, the Pipfile and lock file get merged without a problem (except the _meta hash -> manual resolving). To check the lock we used …
Another nit from poetry that I really loved is that the meta hash is checked against the specification and it warns you if the lock file is stale. So for …
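For reference, pipenv's own staleness check compares the Pipfile.lock hash against the current Pipfile (whether it covers this exact merge scenario is not verified here):

```sh
# Fails if Pipfile.lock's hash no longer matches the current Pipfile
pipenv verify
```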
Issue description
We have just updated pipenv and were unpleasantly surprised that this feature is about to be removed.
I never saw any discussion about keeping the keep-outdated flag, but it is one of the most useful features in pipenv, at least in very large projects such as monoliths. Please don't remove it. (#5544 by @matteius)
In our project we have around 110 dependencies, and we rely on pipenv's keep-outdated so that developers update only their own dependencies and not all of them. We specify only directly used dependencies and rely on the lock file for the list of transitive dependencies.
We occasionally pin transitive dependencies when we see incompatibilities.
Since the work is split between different teams and dependencies are shared in our monolith, we have a problem if one team member updates all the other dependencies and, very often, breaks the project. Some packages (binary wheels), such as cryptography, are really problematic because they can fail only on some target systems.
The proposal that we should pin ALL transitive dependencies if we want to retain control over them would make everything unreadable and hard to maintain:
Basically, we rely on pipenv to do the heavy lifting for us rather than manually tracking all dependencies. Manual work is error prone and we would like to avoid it. This feature is so important to us that we will migrate to another solution (like poetry) if it goes away.
The main reason we even need such a feature at all is that pipenv lacks tooling for updating dependencies one by one:
Like https://python-poetry.org/docs/cli/#update, or npm, or basically every other dependency manager.
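For comparison, this is the kind of single-package operation being asked for (the requests/lodash names are only placeholders):

```sh
poetry update requests     # poetry: re-resolve and relock just this package
npm update lodash          # npm: the same idea

# closest pipenv workflow at the time of writing (relies on the flag in question)
pipenv install requests --keep-outdated
```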
Expected result
Retain this feature or introduce new commands for updating dependencies in a controlled way.
Actual result
Steps to replicate
Use the flag --keep-outdated.
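For example (the package name is only an example):

```sh
pipenv install django-model-utils --keep-outdated
# or relock the whole project while keeping existing pins
pipenv lock --keep-outdated
```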
$ pipenv --support
Pipenv version: '2023.2.4'
Pipenv location: '/Users/myuser/.virtualenvs/bitstamp38/lib/python3.8/site-packages/pipenv'
Python location: '/Users/myuser/.virtualenvs/bitstamp38/bin/python3'
OS Name: 'posix'
User pip version: '22.3.1'
user Python installations found:
3.11.2: /usr/local/bin/python3
3.10.10: /Users/myuser/.pyenv/versions/3.10.10/bin/python3
3.10.7: /Users/myuser/.pyenv/versions/3.10.7/bin/python3
3.10.6: /Users/myuser/.pyenv/versions/3.10.6/bin/python3
3.9.16: /usr/local/bin/python3.9
3.9.13: /Users/myuser/.pyenv/versions/3.9.13/bin/python3
3.9.6: /usr/bin/python3
3.8.16: /Users/myuser/.virtualenvs/bitstamp38/bin/python3
3.8.16: /Users/myuser/.virtualenvs/bitstamp38/bin/python
3.8.16: /Users/myuser/.virtualenvs/bitstamp38/bin/python3
3.8.16: /Users/myuser/.virtualenvs/bitstamp38/bin/python
3.8.16: /usr/local/bin/python3.8
3.8.16: /Users/myuser/.pyenv/versions/3.8.16/bin/python3
3.8.13: /Users/myuser/.pyenv/versions/3.8.13/bin/python3
3.8.12: /Users/myuser/.pyenv/versions/bitstamp38/bin/python3
3.8.12: /Users/myuser/.pyenv/versions/3.8.12/bin/python3
3.6.8: /usr/local/bin/python3.6
3.6.8: /usr/local/bin/python3.6m
PEP 508 Information:
System environment variables:
TERM_PROGRAM
SHELL
TERM
TMPDIR
TERM_PROGRAM_VERSION
TERM_SESSION_ID
USER
SSH_AUTH_SOCK
PATH
LaunchInstanceID
__CFBundleIdentifier
PWD
XPC_FLAGS
XPC_SERVICE_NAME
SHLVL
HOME
LOGNAME
SECURITYSESSIONID
OLDPWD
ZSH
PAGER
LESS
LSCOLORS
_VIRTUALENVWRAPPER_API
VIRTUALENVWRAPPER_SCRIPT
VIRTUALENVWRAPPER_PYTHON
NVM_DIR
NVM_CD_FLAGS
NVM_BIN
NVM_INC
VIRTUALENVWRAPPER_PROJECT_FILENAME
VIRTUALENVWRAPPER_WORKON_CD
WORKON_HOME
VIRTUALENVWRAPPER_HOOK_DIR
VIRTUAL_ENV
PS1
CD_VIRTUAL_ENV
LANG
LC_ALL
LC_CTYPE
_
__CF_USER_TEXT_ENCODING
PIP_DISABLE_PIP_VERSION_CHECK
PIP_PYTHON_PATH
PYTHONDONTWRITEBYTECODE
PYTHONFINDER_IGNORE_UNSUPPORTED
Pipenv-specific environment variables:
Debug-specific environment variables:
PATH: /Users/myuser/.virtualenvs/b38/bin:/Users/myuser/.rd/bin:/Users/myuser/.nvm/versions/node/v14.18.2/bin:/Users/myuser/apache-maven-3.8.2/bin:/usr/local/opt/[email protected]/bin:/usr/local/bin:/System/Cryptexes/App/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Apple/usr/bin
SHELL: /bin/bash
LANG: en_US.UTF-8
PWD: /Users/myuser/projects/b/pipenvs
VIRTUAL_ENV: /Users/myuser/.virtualenvs/b38