Most MLIP packages that I'm aware of are built on PyTorch. However, installing them alongside tensorpotential seems to lead to conflicts with tensorflow[and-cuda], due to differing nvidia-cublas-cu12 requirements, e.g.:
we can conclude that torch>2.2,<2.5.0 depends on nvidia-cublas-cu12{platform_machine == 'x86_64' and sys_platform == 'linux'}==12.1.3.1.
And because torch>=2.5.0,<=2.6.0 depends on nvidia-cublas-cu12{platform_machine == 'x86_64' and sys_platform == 'linux'}==12.4.5.8 and nvidia-cublas-cu12{platform_machine == 'x86_64' and sys_platform == 'linux'}==12.6.4.1, we can conclude that nvidia-cublas-cu12!=12.1.3.1, torch>2.2,<2.8.0, all of:
    nvidia-cublas-cu12{platform_machine == 'x86_64' and sys_platform == 'linux'}<12.4.5.8
    nvidia-cublas-cu12{platform_machine == 'x86_64' and sys_platform == 'linux'}>12.4.5.8,<12.6.4.1
    nvidia-cublas-cu12{platform_machine == 'x86_64' and sys_platform == 'linux'}>12.6.4.1
are incompatible.
And because torch>=2.8.0 depends on nvidia-cublas-cu12{platform_machine == 'x86_64' and sys_platform == 'linux'}==12.8.4.1 and tensorflow[and-cuda]==2.19.1 depends on nvidia-cublas-cu12==12.5.3.2, we can conclude that tensorflow[and-cuda]==2.19.1 and torch>2.2 are incompatible.
And because we know from (8) that torch>2.2 and all of:
    tensorflow[and-cuda]<2.5.0
    tensorflow[and-cuda]>2.15.1,<2.19.1
are incompatible, we can conclude that all of:
    tensorflow<2.5.0
    tensorflow>2.15.1
, torch>2.2, tensorflow[and-cuda]<2.20 are incompatible.
And because tensorpotential[cuda] depends on tensorflow[and-cuda]<2.20 and your project depends on tensorflow>2.16,<2.20, we can conclude that your project, tensorpotential[cuda], torch>2.2 are incompatible.
And because your project depends on torch>2.2 and your project requires tensorpotential[cuda], we can conclude that your project's requirements are unsatisfiable.
This prevents installing tensorpotential alongside PyTorch-based MLIPs in the same environment, which inhibits running multiple models and comparing their performance.
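For reference, the failure above can be reproduced with a project definition roughly like the following. This is a minimal sketch: the project name, version, and Python bound are placeholders I've added, while the dependency constraints are the ones quoted in the resolver output.

```toml
# Minimal sketch of a project that triggers the resolution failure above.
# "mlip-comparison", version and requires-python are placeholders; the
# dependency bounds are those quoted in the resolver trace.
[project]
name = "mlip-comparison"
version = "0.1.0"
requires-python = ">=3.10"
dependencies = [
    "torch>2.2",              # needed by the PyTorch-based MLIPs
    "tensorflow>2.16,<2.20",
    "tensorpotential[cuda]",  # currently pulls in tensorflow[and-cuda]<2.20
]
```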
Would it be possible to move tensorflow[and-cuda] to an optional extra, along these lines: master...ElliottKasoar:grace-tensorpotential:master?
This still wouldn't allow running on GPU conflict-free, but would at least allow simple comparisons.
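For illustration only, the kind of packaging change I mean would look roughly like this. It is a sketch assuming pyproject.toml-style metadata; the real tensorpotential packaging, and the linked branch, may differ in layout and version bounds.

```toml
# Illustrative sketch only, not copied from tensorpotential or the linked branch.
[project]
name = "tensorpotential"
dependencies = [
    "tensorflow<2.20",  # assumed: CPU TensorFlow stays a base dependency
    # ... other runtime dependencies ...
]

[project.optional-dependencies]
cuda = ["tensorflow[and-cuda]<2.20"]  # GPU TensorFlow only when the extra is requested
```

With something like this, `pip install tensorpotential` would no longer pull in the nvidia-cublas-cu12 pin that clashes with torch>2.2, while `pip install "tensorpotential[cuda]"` would keep the current GPU behaviour.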