
Temp train cleaning #40

Open
laserkelvin wants to merge 2 commits into NVIDIA:main from laserkelvin:temp-train-cleaning

Conversation

@laserkelvin
Collaborator

ALCHEMI Toolkit Pull Request

Description

Removes residual references to the `training` optional dependency.

Type of Change

  • Bug fix (non-breaking change that fixes an issue)
  • New feature (non-breaking change that adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Performance improvement
  • Documentation update
  • Refactoring (no functional changes)
  • CI/CD or infrastructure change

Related Issues

Changes Made

  • Removes dependency from pyproject.toml
  • Removes documentation mentions of feature

Testing

  • Unit tests pass locally (make pytest)
  • Linting passes (make lint)
  • New tests added for new functionality, meeting coverage expectations

Checklist

  • I have read and understand the Contributing Guidelines
  • I have updated the CHANGELOG.md
  • I have performed a self-review of my code
  • I have added docstrings to new functions/classes
  • I have updated the documentation (if applicable)

Additional Notes

Tip

This repository uses Greptile, an AI code review service, to help conduct
pull request reviews. We encourage contributors to read and consider suggestions
made by Greptile, but note that human maintainers will provide the reviews
required for merging: Greptile's comments are not a qualitative judgement
of your code, nor are they an indication that the PR will be accepted or rejected.
We encourage the use of emoji reactions to Greptile comments, reflecting
their usefulness and accuracy.

Signed-off-by: Kelvin Lee <kinlongkelvi@nvidia.com>
@laserkelvin laserkelvin requested a review from dallasfoster April 1, 2026 03:11
@laserkelvin laserkelvin added the documentation Improvements or additions to documentation label Apr 1, 2026
@copy-pr-bot

copy-pr-bot bot commented Apr 1, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@greptile-apps
Contributor

greptile-apps bot commented Apr 1, 2026

Greptile Summary

This PR removes the training optional dependency (which pulled in nvidia-physicsnemo[datapipes-extras]>=2.0.0) from pyproject.toml and scrubs associated documentation references across README.md, docs/index.md, docs/userguide/dynamics_sinks.md, and docs/userguide/zarr_compression.md. The uv.lock is regenerated accordingly.

  • The dependency removal and all documentation text changes are correct and coherent.
  • A minor indentation regression was introduced in the quick-start code block of docs/userguide/zarr_compression.md (lines 24–27): core= gained an extra leading space, and both the closing ) and writer = statement ended up with a spurious single-space prefix, making the example visually misleading. A fix is suggested inline.
  • CHANGELOG.md was not updated; the PR checklist item for it was left unchecked — consider adding an entry for the removal of the training extra.
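For context, the removed extra presumably looked something like the following in `pyproject.toml`. This is a hypothetical sketch reconstructed from the summary above; the exact group layout is not shown on this page.

```toml
# Hypothetical sketch of the optional-dependency group removed by this PR;
# only the pinned package is confirmed by the review summary.
[project.optional-dependencies]
training = [
    "nvidia-physicsnemo[datapipes-extras]>=2.0.0",
]
```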

Important Files Changed

Filename Overview
pyproject.toml Removes the training optional dependency group (which pulled in nvidia-physicsnemo[datapipes-extras]>=2.0.0); change is clean and self-contained.
README.md Removes the pip install nvalchemi-toolkit[training] line from the optional-extras section, consistent with the dependency removal.
docs/index.md Replaces "training data" with "simulation data" in the ML Researchers card; clean wording update.
docs/userguide/dynamics_sinks.md Replaces three "training" references with "dynamics" / "analysis"; straightforward text cleanup with no issues.
docs/userguide/zarr_compression.md Removes training-specific language across several sections; however the first quick-start code block has incorrect indentation introduced by this PR (5-space indent on core=, 1-space prefix on ) and writer =).
uv.lock Lock file regenerated to reflect removal of nvidia-physicsnemo and its transitive dependencies; no issues.

Reviews (1) — last reviewed commit: "docs: descriptions"

Comment on lines 24 to +27

```diff
 config = ZarrWriteConfig(
-    core=ZarrArrayConfig(compressors=(ZstdCodec(level=3),)),
-)
-writer = AtomicDataZarrWriter("/data/training.zarr", config=config)
+     core=ZarrArrayConfig(compressors=(ZstdCodec(level=3),)),
+ )
+ writer = AtomicDataZarrWriter("/data/example.zarr", config=config)
```
Contributor

P2 Inconsistent indentation introduced in code block

This PR changed the example path from /data/training.zarr to /data/example.zarr, but also accidentally shifted the indentation: core= now has 5 leading spaces (instead of 4) and the closing ) and writer = assignment each have a spurious leading space. The writer = line in particular visually appears to sit inside the ZarrWriteConfig(...) call. Compare with the correctly-indented Recipe 1 block at line 312.

Suggested change

```suggestion
config = ZarrWriteConfig(
    core=ZarrArrayConfig(compressors=(ZstdCodec(level=3),)),
)
writer = AtomicDataZarrWriter("/data/example.zarr", config=config)
```
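Incidentally, copied verbatim the mis-indented block is more than visually misleading: the stray leading space on the top-level `writer =` statement makes it invalid Python. A quick check with the standard-library `ast` module illustrates this (the `ZarrWriteConfig`/`AtomicDataZarrWriter` names are taken from the diff and only appear here as text, so the snippet runs without the toolkit installed):

```python
import ast

# The mis-indented quick-start block from this PR's diff: `core=` carries
# five leading spaces, and the closing paren and `writer =` lines each
# carry a spurious single leading space.
bad = (
    "config = ZarrWriteConfig(\n"
    "     core=ZarrArrayConfig(compressors=(ZstdCodec(level=3),)),\n"
    " )\n"
    ' writer = AtomicDataZarrWriter("/data/example.zarr", config=config)\n'
)

# The corrected block from the suggestion: four-space indent inside the
# call, no leading whitespace on top-level statements.
good = (
    "config = ZarrWriteConfig(\n"
    "    core=ZarrArrayConfig(compressors=(ZstdCodec(level=3),)),\n"
    ")\n"
    'writer = AtomicDataZarrWriter("/data/example.zarr", config=config)\n'
)

def parses(src: str) -> bool:
    """Return True if src is syntactically valid Python."""
    try:
        ast.parse(src)
        return True
    except SyntaxError:  # IndentationError is a subclass of SyntaxError
        return False

print(parses(bad), parses(good))  # → False True
```

The extra spaces inside the `ZarrWriteConfig(...)` parentheses are harmless (indentation is free-form inside brackets), but the indented top-level `writer =` statement raises `IndentationError`, so a reader pasting the published example would hit an immediate failure.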


Labels

documentation Improvements or additions to documentation
