feat(dp, pt): add charge spin embedding #5295
iProzd wants to merge 7 commits into deepmodeling:master from
Conversation
Signed-off-by: Duo <50307526+iProzd@users.noreply.github.com>
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough
Adds an optional frame parameter `fparam` and a boolean flag `add_chg_spin_ebd` across descriptor and atomic model layers; when enabled, DPA3 builds charge/spin embeddings and mixes them into node embeddings. `fparam` is propagated through call/forward signatures and test harnesses; atomic models may synthesize a default `fparam` per frame.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant AtomicModel as DPAtomicModel
    participant Descriptor as DescrptDPA3
    participant ChgEmbed as chg_embedding
    participant SpinEmbed as spin_embedding
    participant MixMLP as mix_cs_mlp
    User->>AtomicModel: forward(coords, fparam=None)
    alt add_chg_spin_ebd enabled and fparam not provided
        AtomicModel->>AtomicModel: build default fparam (tile per frame)
    end
    AtomicModel->>Descriptor: call(..., fparam=fparam_input if add_chg_spin_ebd else None)
    alt add_chg_spin_ebd enabled and fparam provided
        Descriptor->>ChgEmbed: compute chg_ebd
        Descriptor->>SpinEmbed: compute spin_ebd
        ChgEmbed-->>Descriptor: chg_ebd
        SpinEmbed-->>Descriptor: spin_ebd
        Descriptor->>MixMLP: forward(concat(chg_ebd, spin_ebd))
        MixMLP-->>Descriptor: mixed_cs
        Descriptor->>Descriptor: add mixed_cs to node_ebd_ext
    end
    Descriptor-->>AtomicModel: descriptor outputs
    AtomicModel-->>User: predictions
```
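The mixing step in the diagram can be sketched numerically. The snippet below is a minimal stand-in, not the DPA3 implementation: a single linear map plays the role of `mix_cs_mlp`, and all shapes and names are illustrative.

```python
import numpy as np

def mix_charge_spin(node_ebd, chg_ebd, spin_ebd, weight, bias):
    """Concatenate per-atom charge and spin embeddings, project the pair
    back to the node-embedding width, and add the result to the node
    embeddings (mirroring the concat -> mix_cs_mlp -> add flow above)."""
    mixed = np.concatenate([chg_ebd, spin_ebd], axis=-1) @ weight + bias
    return node_ebd + mixed

# Toy shapes: 5 atoms, embedding width 8; the mixing layer maps 2*dim -> dim.
rng = np.random.default_rng(0)
natoms, tebd_dim = 5, 8
node = rng.normal(size=(natoms, tebd_dim))
chg = rng.normal(size=(natoms, tebd_dim))
spin = rng.normal(size=(natoms, tebd_dim))
weight = rng.normal(size=(2 * tebd_dim, tebd_dim))
bias = np.zeros(tebd_dim)
out = mix_charge_spin(node, chg, spin, weight, bias)
```

With zero charge/spin inputs and zero bias, the node embeddings pass through unchanged, which is the behavior one would expect when the feature contributes nothing.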
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (9)
deepmd/dpmodel/descriptor/dpa2.py (1)
825-864: ⚠️ Potential issue | 🟠 Major — Consume or discard `fparam` to keep lint green.
Line 831 introduces `fparam`, but it is unused in `call`, which triggers ARG002. If unused by design, explicitly discard it.
💡 Minimal fix
```diff
 xp = array_api_compat.array_namespace(coord_ext, atype_ext, nlist)
+del fparam
 use_three_body = self.use_three_body
```
As per coding guidelines (`**/*.py`): Always run `ruff check .` and `ruff format .` before committing changes or CI will fail.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deepmd/dpmodel/descriptor/dpa2.py` around lines 825-864: the parameter `fparam` in the `call` method of class DPA2 is currently unused, which triggers ARG002; explicitly consume or discard it to satisfy the linter — for example, add a single statement that references `fparam` (e.g., `_ = fparam` or `del fparam`) near the top of `call` after `xp` is set, ensuring the symbol is acknowledged but no behavior changes occur; keep the change minimal and run ruff check/format before committing.
deepmd/dpmodel/descriptor/se_t_tebd.py (1)
349-388: ⚠️ Potential issue | 🟠 Major — Discard unused `fparam` to avoid ARG002.
Line 355 adds `fparam`, but the method does not use it. Explicitly deleting it keeps API compatibility and lint compliance.
💡 Minimal fix
```diff
-del mapping
+del mapping, fparam
```
As per coding guidelines (`**/*.py`): Always run `ruff check .` and `ruff format .` before committing changes or CI will fail.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deepmd/dpmodel/descriptor/se_t_tebd.py` around lines 349-388: the `call` method declares parameter `fparam` but never uses it, causing ARG002; fix by explicitly discarding the unused parameter (e.g., add `del fparam` or assign to `_` at the start of the function alongside the existing `del mapping`) to keep API compatibility and satisfy linters; place this discard near the existing `del mapping` line so the signature remains unchanged while removing the unused-variable warning.
deepmd/dpmodel/descriptor/hybrid.py (1)
272-336: ⚠️ Potential issue | 🔴 Critical — Forward `fparam` to child descriptors; it is currently dropped.
Line 278 adds `fparam`, but Line 335 does not pass it downstream. This breaks charge/spin embedding paths for child descriptors (e.g., DPA3) that require frame parameters.
🔧 Proposed fix
```diff
-odescriptor, gr, g2, h2, sw = descrpt(coord_ext, atype_ext, nl, mapping)
+odescriptor, gr, g2, h2, sw = descrpt(
+    coord_ext,
+    atype_ext,
+    nl,
+    mapping,
+    fparam=fparam,
+)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deepmd/dpmodel/descriptor/hybrid.py` around lines 272-336: the hybrid descriptor's `call` method accepts `fparam` but doesn't forward it into child descriptor calls; ensure the child invocation `descrpt(coord_ext, atype_ext, nl, mapping)` is changed to pass the frame params (e.g., `descrpt(coord_ext, atype_ext, nl, mapping, fparam)` or the correct child API) so `fparam` is propagated to descriptors like DPA3; update all places in this method where `descrpt(...)` is invoked to include `fparam` while preserving backward compatibility if some children do not accept it.
deepmd/pt/model/descriptor/se_t.py (1)
339-391: ⚠️ Potential issue | 🟠 Major — Explicitly discard unused `fparam` to prevent ARG002.
Line 346 adds `fparam`, but it is currently unused. Please consume or explicitly delete it to keep Ruff clean.
💡 Minimal fix
```diff
 # cast the input to internal precsion
 coord_ext = coord_ext.to(dtype=self.prec)
+del fparam
 g1, rot_mat, g2, h2, sw = self.seat.forward(
     nlist, coord_ext, atype_ext, None, mapping
 )
```
As per coding guidelines (`**/*.py`): Always run `ruff check .` and `ruff format .` before committing changes or CI will fail.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deepmd/pt/model/descriptor/se_t.py` around lines 339-391: the `forward` method currently accepts `fparam` but never uses it, triggering an ARG002 lint; inside `forward`, explicitly discard the unused parameter (for example assign it to `_` or call `del fparam`) before returning/using other values so Ruff no longer flags it; place this change near the start of `forward` (before calling `self.seat.forward`) so the parameter is visibly consumed and no behavior changes occur.
deepmd/pt/model/descriptor/hybrid.py (1)
264-336: ⚠️ Potential issue | 🔴 Critical — `fparam` is accepted but not propagated to child descriptors.
Line 271 adds `fparam`, but Line 335 drops it when invoking each sub-descriptor. This can break DPA3 charge/spin embedding flows that require frame parameters.
🔧 Proposed fix
```diff
-odescriptor, gr, g2, h2, sw = descrpt(coord_ext, atype_ext, nl, mapping)
+odescriptor, gr, g2, h2, sw = descrpt(
+    coord_ext,
+    atype_ext,
+    nl,
+    mapping,
+    comm_dict=comm_dict,
+    fparam=fparam,
+)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deepmd/pt/model/descriptor/hybrid.py` around lines 264-336: the `forward` method accepts `fparam` but does not pass it to sub-descriptors, breaking flows that need frame params; update the loop in Hybrid.forward where each descriptor is invoked (the call to `descrpt(...)`) to forward `comm_dict` and `fparam` as additional arguments (or match the correct child-descriptor signature) so each element of `self.descrpt_list` receives the frame parameters; keep the existing logic for `nl` selection and ensure the passed variables use the same names (`comm_dict` and `fparam`).
deepmd/pt/model/descriptor/dpa1.py (1)
669-711: ⚠️ Potential issue | 🟠 Major — Handle `fparam` explicitly to avoid Ruff ARG002 failures.
Line 669 adds `fparam`, but it is not consumed in this method. If this descriptor intentionally ignores frame parameters, explicitly discard it so lint stays clean.
💡 Minimal fix
```diff
-del mapping
+del mapping, fparam
```
As per coding guidelines (`**/*.py`): Always run `ruff check .` and `ruff format .` before committing changes or CI will fail.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deepmd/pt/model/descriptor/dpa1.py` around lines 669-711: the parameter `fparam` is unused and triggers Ruff ARG002; explicitly discard it to satisfy the linter by deleting it where other unused args are removed (near the existing `del mapping` line). Update the method that accepts `extended_coord`, `extended_atype`, `nlist`, `mapping`, `comm_dict`, `fparam` to include a statement removing `fparam` (e.g., `del fparam`) immediately after or alongside `del mapping` so the unused-argument warning is resolved.
deepmd/pt/model/descriptor/se_r.py (1)
430-473: ⚠️ Potential issue | 🟡 Minor — Discard unused `fparam` in `forward` for lint compliance.
`fparam` is accepted for API consistency but currently unused. Please explicitly discard it alongside the other unused inputs.
Proposed fix
```diff
-del mapping, comm_dict
+del mapping, comm_dict, fparam
```
As per coding guidelines (`**/*.py`): Always run `ruff check .` and `ruff format .` before committing changes or CI will fail.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deepmd/pt/model/descriptor/se_r.py` around lines 430-473: the `forward` method currently accepts `fparam` but doesn't discard it explicitly; to satisfy linter rules, explicitly discard `fparam` in the same way `mapping` and `comm_dict` are discarded (e.g., add `fparam` to the existing `del` statement) so the unused parameter is removed from scope.
deepmd/dpmodel/descriptor/se_t.py (1)
347-380: ⚠️ Potential issue | 🟡 Minor — Explicitly discard unused `fparam` to avoid lint failures.
At Line 347, `fparam` is added but never used or discarded. Mirror the existing `mapping` handling so this stays clean under strict Ruff settings.
Proposed fix
```diff
-del mapping
+del mapping, fparam
```
As per coding guidelines (`**/*.py`): Always run `ruff check .` and `ruff format .` before committing changes or CI will fail.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deepmd/dpmodel/descriptor/se_t.py` around lines 347-380: the new parameter `fparam` in the descriptor method is unused and should be explicitly discarded to satisfy linters; mirror the existing handling of `mapping` by adding an explicit discard (e.g., `del fparam`) in the same function where `mapping` is deleted so Ruff no longer reports an unused variable.
deepmd/dpmodel/descriptor/dpa1.py (1)
496-528: ⚠️ Potential issue | 🟡 Minor — `fparam` is unused in `call`; explicitly discard it.
Line 496 introduces `fparam` but the method does not consume it yet. Explicitly deleting it keeps this compatible with Ruff settings and makes intent clear.
Proposed fix
```diff
-del mapping
+del mapping, fparam
```
As per coding guidelines (`**/*.py`): Always run `ruff check .` and `ruff format .` before committing changes or CI will fail.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deepmd/dpmodel/descriptor/dpa1.py` around lines 496-528: the `call` method currently accepts `fparam` but never uses it; add an explicit discard for this parameter (e.g., `del fparam`) near the start of the method alongside the existing `del mapping` to make intent clear and satisfy linters; update the dpa1.py `call` method to explicitly delete `fparam` so it's not flagged as unused.
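Most of the comments above converge on the same one-line idiom. As a generic sketch (not the actual deepmd method bodies): accept the widened signature for interface uniformity, then explicitly discard what a given descriptor does not consume, which keeps Ruff's ARG002 quiet without changing the API.

```python
def call(coord_ext, atype_ext, nlist, mapping=None, fparam=None):
    """Toy descriptor entry point with the widened signature."""
    # Accepted for interface uniformity; this descriptor consumes neither.
    del mapping, fparam  # silences Ruff ARG002 without changing the API
    return coord_ext, atype_ext, nlist
```

Callers that pass `mapping` or `fparam` keep working; the discard only documents that the values are intentionally ignored.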
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 2eaaf4c6-3eba-4633-90dc-092ec27c9cd2
📒 Files selected for processing (25)
deepmd/dpmodel/atomic_model/dp_atomic_model.py
deepmd/dpmodel/descriptor/dpa1.py
deepmd/dpmodel/descriptor/dpa2.py
deepmd/dpmodel/descriptor/dpa3.py
deepmd/dpmodel/descriptor/hybrid.py
deepmd/dpmodel/descriptor/make_base_descriptor.py
deepmd/dpmodel/descriptor/se_e2_a.py
deepmd/dpmodel/descriptor/se_r.py
deepmd/dpmodel/descriptor/se_t.py
deepmd/dpmodel/descriptor/se_t_tebd.py
deepmd/pt/model/atomic_model/dp_atomic_model.py
deepmd/pt/model/descriptor/dpa1.py
deepmd/pt/model/descriptor/dpa2.py
deepmd/pt/model/descriptor/dpa3.py
deepmd/pt/model/descriptor/hybrid.py
deepmd/pt/model/descriptor/se_a.py
deepmd/pt/model/descriptor/se_r.py
deepmd/pt/model/descriptor/se_t.py
deepmd/pt/model/descriptor/se_t_tebd.py
deepmd/utils/argcheck.py
source/tests/consistent/descriptor/common.py
source/tests/consistent/descriptor/test_dpa3.py
source/tests/pt/model/test_dpa3.py
source/tests/universal/dpmodel/descriptor/test_descriptor.py
source/tests/universal/dpmodel/model/test_model.py
```python
self.mix_cs_mlp = NativeLayer(
    2 * self.tebd_dim,
    self.tebd_dim,
    precision=precision,
    seed=child_seed(seed, 3),
)
```
Seed collision: mix_cs_mlp reuses seed index 3.
Same issue as the PT implementation - both chg_embedding (Line 444) and mix_cs_mlp (Line 459) use child_seed(seed, 3).
🐛 Proposed fix
```diff
 self.mix_cs_mlp = NativeLayer(
     2 * self.tebd_dim,
     self.tebd_dim,
     precision=precision,
-    seed=child_seed(seed, 3),
+    seed=child_seed(seed, 5),
 )
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@deepmd/dpmodel/descriptor/dpa3.py` around lines 455 - 460, The seed for
mix_cs_mlp is colliding with chg_embedding because both call child_seed(seed,
3); update the mix_cs_mlp initialization in the class (the NativeLayer
assignment named mix_cs_mlp) to use a distinct child_seed index (e.g.,
child_seed(seed, 4) or the next unused index) so each layer gets a unique seed
and avoids deterministic collisions with chg_embedding.
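The collision mechanics are easy to reproduce with a toy helper. `child_seed` below is a hypothetical stand-in (the real deepmd helper is different); the point is only that the derivation is deterministic, so reusing a child index reuses the initialization, and giving `mix_cs_mlp` its own index fixes it.

```python
def child_seed(seed, idx):
    # Hypothetical derivation of a per-layer seed from a parent seed and a
    # child index; deterministic, so equal indices yield identical seeds.
    if seed is None:
        return None
    return (seed * 1_000_003 + idx) % (2**32)

same = child_seed(42, 3) == child_seed(42, 3)      # chg_embedding vs mix_cs_mlp, both at index 3
distinct = child_seed(42, 3) != child_seed(42, 5)  # fix: give mix_cs_mlp its own index
```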
```python
nlist: torch.Tensor,
mapping: torch.Tensor | None = None,
comm_dict: dict[str, torch.Tensor] | None = None,
fparam: torch.Tensor | None = None,
```
Fix unused fparam to keep Ruff/CI passing.
fparam is never used in forward, which triggers ARG002. Keep the parameter for API compatibility, but mark it as intentionally unused in the method body.
Proposed fix
```diff
 def forward(
     self,
     extended_coord: torch.Tensor,
     extended_atype: torch.Tensor,
     nlist: torch.Tensor,
     mapping: torch.Tensor | None = None,
     comm_dict: dict[str, torch.Tensor] | None = None,
     fparam: torch.Tensor | None = None,
 ) -> tuple[
     torch.Tensor,
     torch.Tensor | None,
     torch.Tensor | None,
     torch.Tensor | None,
     torch.Tensor | None,
 ]:
+    _ = fparam  # kept for descriptor interface compatibility
     # cast the input to internal precsion
     extended_coord = extended_coord.to(dtype=self.prec)
```
As per coding guidelines (`**/*.py`): Always run `ruff check .` and `ruff format .` before committing changes or CI will fail.
🧰 Tools
🪛 Ruff (0.15.2)
[warning] 718-718: Unused method argument: fparam
(ARG002)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@deepmd/pt/model/descriptor/dpa2.py` at line 718, The parameter fparam is
declared in the forward signature but never used, triggering ARG002; keep the
parameter for API compatibility but mark it intentionally unused by referencing
it in the method body (e.g., add a single line like "del fparam" or assign to _
inside the forward method) so Ruff/CI stops reporting the unused-argument error
while leaving the API unchanged; locate the forward function that accepts
"fparam: torch.Tensor | None = None" and add the one-line no-op reference at the
top of that method.
```python
self.mix_cs_mlp = MLPLayer(
    2 * self.tebd_dim,
    self.tebd_dim,
    precision=precision,
    seed=child_seed(seed, 3),
)
```
Seed collision: mix_cs_mlp reuses seed index 3.
Both chg_embedding (Line 205) and mix_cs_mlp (Line 218) use child_seed(seed, 3), which will produce identical network initialization. This should use a unique seed index.
🐛 Proposed fix
```diff
 self.mix_cs_mlp = MLPLayer(
     2 * self.tebd_dim,
     self.tebd_dim,
     precision=precision,
-    seed=child_seed(seed, 3),
+    seed=child_seed(seed, 5),
 )
```
📝 Committable suggestion
‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
self.mix_cs_mlp = MLPLayer(
    2 * self.tebd_dim,
    self.tebd_dim,
    precision=precision,
    seed=child_seed(seed, 5),
)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@deepmd/pt/model/descriptor/dpa3.py` around lines 214 - 219, chg_embedding and
mix_cs_mlp both call child_seed(seed, 3) causing identical initializations;
change the seed index used for mix_cs_mlp to a unique value (e.g.,
child_seed(seed, 4) or another unused index) where mix_cs_mlp is constructed
inside the MLPLayer call so that MLPLayer(..., seed=child_seed(seed, X)) uses a
different child index than chg_embedding to avoid seed collision.
```python
fix_stat_std,
n_multi_edge_message,
precision,
add_chg_spin_ebd,
```
Fix unused unpacked variables to keep Ruff passing.
Line 162, Line 204, Line 225, Line 396, and Line 423 unpack add_chg_spin_ebd but do not use it (RUF059).
🔧 Suggested lint-safe patch
```diff
-    add_chg_spin_ebd,
+    _add_chg_spin_ebd,
@@
-    add_chg_spin_ebd,
+    _add_chg_spin_ebd,
@@
-    add_chg_spin_ebd,
+    _add_chg_spin_ebd,
@@
-    add_chg_spin_ebd,
+    _add_chg_spin_ebd,
@@
-    add_chg_spin_ebd,
+    _add_chg_spin_ebd,
```
As per coding guidelines (`**/*.py`): Always run `ruff check .` and `ruff format .` before committing changes or CI will fail.
Also applies to: 204-204, 225-225, 396-396, 423-423
🧰 Tools
🪛 Ruff (0.15.2)
[warning] 162-162: Unpacked variable add_chg_spin_ebd is never used
Prefix it with an underscore or any other dummy variable pattern
(RUF059)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@source/tests/consistent/descriptor/test_dpa3.py` at line 162, The tuple
element add_chg_spin_ebd is being unpacked but never used in multiple places;
update each unpacking to either remove that element from the left-hand side or
mark it as intentionally ignored (e.g., replace add_chg_spin_ebd with a single
underscore variable name) so Ruff RUF059 no longer flags it; search for
occurrences of add_chg_spin_ebd in test_dpa3 and change the unpack patterns
accordingly, keeping the rest of the unpacking order intact.
```python
fix_stat_std,
n_multi_edge_message,
precision,
add_chg_spin_ebd,
```
Check notice — Code scanning / CodeQL: Unused local variable (Note, test)
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
source/tests/consistent/descriptor/common.py (1)
233-256: ⚠️ Potential issue | 🟠 Major — Pass `fparam` through the Paddle path too.
Line 233 adds `fparam`, but Lines 251-256 still ignore it. That makes the Paddle helper inconsistent with the other backends and prevents these tests from actually exercising the new frame-parameter path there. It also leaves the new argument unused, which Ruff is already flagging.
Proposed fix
```diff
 def eval_pd_descriptor(
     self,
     pd_obj: Any,
     natoms: np.ndarray,
     coords: np.ndarray,
     atype: np.ndarray,
     box: np.ndarray,
     mixed_types: bool = False,
     fparam: np.ndarray | None = None,
 ) -> Any:
     ext_coords, ext_atype, mapping = extend_coord_with_ghosts_pd(
         paddle.to_tensor(coords).to(PD_DEVICE).reshape([1, -1, 3]),
         paddle.to_tensor(atype).to(PD_DEVICE).reshape([1, -1]),
         paddle.to_tensor(box).to(PD_DEVICE).reshape([1, 3, 3]),
         pd_obj.get_rcut(),
     )
     nlist = build_neighbor_list_pd(
         ext_coords,
         ext_atype,
         natoms[0],
         pd_obj.get_rcut(),
         pd_obj.get_sel(),
         distinguish_types=(not mixed_types),
     )
+    fparam_pd = (
+        paddle.to_tensor(fparam).to(PD_DEVICE) if fparam is not None else None
+    )
     return [
         x.detach().cpu().numpy() if paddle.is_tensor(x) else x
         for x in pd_obj(
             ext_coords,
             ext_atype,
             nlist=nlist,
             mapping=mapping,
+            fparam=fparam_pd,
         )
     ]
```
As per coding guidelines, always run `ruff check .` and `ruff format .` before committing changes or CI will fail.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@source/tests/consistent/descriptor/common.py` around lines 233 - 256, The Paddle path forgets to pass the new fparam argument through to the descriptor call; update the block that prepares inputs for pd_obj to convert fparam to a Paddle tensor (when not None) in the same style as coords/atype/box (e.g. paddle.to_tensor(fparam).to(PD_DEVICE).reshape([...]) or leave None if fparam is None) and include it in the pd_obj(...) call as fparam=... so the Paddle helper actually uses the frame-parameter path (refer to pd_obj(...) and the fparam parameter added at the function signature).
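One backward-compatible way to satisfy both the hybrid and Paddle comments is to forward `fparam` only when the child accepts it. The helper below is a sketch that assumes nothing about the real descriptor API beyond the positional `(coord_ext, atype_ext, nlist, mapping)` convention; the two toy descriptors are hypothetical.

```python
import inspect

def forward_to_child(descrpt, coord_ext, atype_ext, nlist, mapping, fparam=None):
    """Pass fparam through only if the child descriptor's signature takes it,
    so legacy descriptors keep working unchanged."""
    if "fparam" in inspect.signature(descrpt).parameters:
        return descrpt(coord_ext, atype_ext, nlist, mapping, fparam=fparam)
    return descrpt(coord_ext, atype_ext, nlist, mapping)

def legacy_descrpt(coord_ext, atype_ext, nlist, mapping):
    # Stand-in for a descriptor predating the fparam change.
    return ("legacy", None)

def fparam_aware_descrpt(coord_ext, atype_ext, nlist, mapping, fparam=None):
    # Stand-in for a descriptor that consumes frame parameters (e.g., DPA3).
    return ("aware", fparam)
```

Signature inspection keeps the hybrid container oblivious to which children were updated; an alternative is to widen every child signature at once, as the inline comments propose.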
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@deepmd/dpmodel/atomic_model/dp_atomic_model.py`:
- Around line 210-216: The unpacking from self.descriptor currently binds an
unused variable `sw` which triggers RUF059; update the unpack assignment to use
a throwaway name (e.g., `_sw` or `_`) instead of `sw` so the unused value is
ignored (change "descriptor, rot_mat, g2, h2, sw = self.descriptor(...)" to
"descriptor, rot_mat, g2, h2, _sw = self.descriptor(...)" or use `_`).
- Around line 187-208: The fallback that materializes a default fparam should
only run when descriptor charge/spin embedding is being used and a true default
exists: change the condition around the default fparam branch in dp_atomic_model
(the block using self.fitting_net.get_dim_fparam(), get_default_fparam(), and
array_api_compat) to also require the descriptor embedding flag
(add_chg_spin_ebd) is true and that the fitter exposes a real default via a
has_default_fparam() check (or equivalent) before calling get_default_fparam();
if has_default_fparam() is not present, treat that as “no default” and keep
fparam_input_for_des = fparam. Ensure you only call get_default_fparam() and
construct default_fparam_array when add_chg_spin_ebd is true and a default is
confirmed.
In `@deepmd/pt/model/atomic_model/dp_atomic_model.py`:
- Around line 292-299: The call to self.descriptor(...) in dp_atomic_model.py
currently assigns five returns to descriptor, rot_mat, g2, h2, sw but sw is
unused; rename this binding to _sw (or _) to satisfy linters. Update the
assignment in the dp_atomic_model.AtomicModel (or the method containing that
descriptor call) to use _sw instead of sw so Ruff no longer flags an unused
variable, and run ruff check . / ruff format . before committing.
- Around line 277-290: The branch that supplies a default fparam must only run
when an actual default exists; update the condition in the block that currently
checks self.fitting_net.get_dim_fparam() > 0 so it also requires
self.fitting_net.get_default_fparam() is not None and fparam is None before
using the default. Concretely, change the if condition that references
get_dim_fparam() to also call get_default_fparam() (or store it first) and only
tile/use default_fparam_tensor when that default is present; otherwise leave
fparam_input_for_des = fparam so forward_atomic()/forward path won’t assert on
None. Ensure you reference the same symbols: self.fitting_net.get_dim_fparam(),
self.fitting_net.get_default_fparam(), and forward_atomic().
---
Outside diff comments:
In `@source/tests/consistent/descriptor/common.py`:
- Around line 233-256: The Paddle path forgets to pass the new fparam argument
through to the descriptor call; update the block that prepares inputs for pd_obj
to convert fparam to a Paddle tensor (when not None) in the same style as
coords/atype/box (e.g. paddle.to_tensor(fparam).to(PD_DEVICE).reshape([...]) or
leave None if fparam is None) and include it in the pd_obj(...) call as
fparam=... so the Paddle helper actually uses the frame-parameter path (refer to
pd_obj(...) and the fparam parameter added at the function signature).
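The conversion this comment asks for boils down to coercing fparam into a `(nframes, dim_fparam)` array before the backend call, passing `None` straight through. A sketch with numpy standing in for `paddle.to_tensor(...).reshape(...)` — the helper name and the explicit `dim_fparam` argument are illustrative, not part of the test harness:

```python
import numpy as np


def prepare_fparam_for_backend(fparam, nframes, dim_fparam):
    # Mirror of the coords/atype/box handling the comment refers to:
    # pass None through untouched, otherwise coerce to a dense
    # (nframes, dim_fparam) array before handing it to the backend object.
    if fparam is None:
        return None
    arr = np.asarray(fparam, dtype=np.float64)
    return arr.reshape(nframes, dim_fparam)
```

The same shape contract holds for all three backends in the consistency test, which is what lets the frame-parameter path be compared across them.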
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 64878d18-a583-4171-b914-7da518ae434f
📒 Files selected for processing (3)
- deepmd/dpmodel/atomic_model/dp_atomic_model.py
- deepmd/pt/model/atomic_model/dp_atomic_model.py
- source/tests/consistent/descriptor/common.py
```python
# Handle default fparam if fitting net supports it
if (
    hasattr(self.fitting_net, "get_dim_fparam")
    and self.fitting_net.get_dim_fparam() > 0
    and fparam is None
):
    # use default fparam
    from deepmd.dpmodel.array_api import (
        array_api_compat,
    )

    default_fparam = self.fitting_net.get_default_fparam()
    assert default_fparam is not None
    xp = array_api_compat.array_namespace(extended_coord)
    default_fparam_array = xp.asarray(
        default_fparam, dtype=extended_coord.dtype
    )
    fparam_input_for_des = xp.tile(
        xp.reshape(default_fparam_array, (1, -1)), (nframes, 1)
    )
else:
    fparam_input_for_des = fparam
```
Gate the default-fparam fallback on descriptor usage.
This branch runs whenever the fitting has frame parameters, even though Line 215 still sends None to the descriptor when add_chg_spin_ebd is false. Since get_default_fparam() is allowed to return None, models that do not use charge/spin embedding can still trip the new fallback path for no descriptor-side reason. Restrict the fallback to the add_chg_spin_ebd path and check has_default_fparam() before materializing the array.
Suggested fix

```diff
-        if (
-            hasattr(self.fitting_net, "get_dim_fparam")
-            and self.fitting_net.get_dim_fparam() > 0
-            and fparam is None
-        ):
+        if self.add_chg_spin_ebd and self.fitting_net.get_dim_fparam() > 0 and fparam is None:
+            if not self.fitting_net.has_default_fparam():
+                raise ValueError(
+                    "fparam must be provided when add_chg_spin_ebd is enabled "
+                    "and the fitting net has no default frame parameters."
+                )
             # use default fparam
             from deepmd.dpmodel.array_api import (
                 array_api_compat,
             )
@@
-        else:
-            fparam_input_for_des = fparam
+        else:
+            fparam_input_for_des = fparam if self.add_chg_spin_ebd else None
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@deepmd/dpmodel/atomic_model/dp_atomic_model.py` around lines 187 - 208, The
fallback that materializes a default fparam should only run when descriptor
charge/spin embedding is being used and a true default exists: change the
condition around the default fparam branch in dp_atomic_model (the block using
self.fitting_net.get_dim_fparam(), get_default_fparam(), and array_api_compat)
to also require the descriptor embedding flag (add_chg_spin_ebd) is true and
that the fitter exposes a real default via a has_default_fparam() check (or
equivalent) before calling get_default_fparam(); if has_default_fparam() is not
present, treat that as “no default” and keep fparam_input_for_des = fparam.
Ensure you only call get_default_fparam() and construct default_fparam_array
when add_chg_spin_ebd is true and a default is confirmed.
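For reference, the `xp.tile` call in the snippet under review broadcasts a single default row to every frame. With numpy standing in for the array-api namespace (the real code resolves `xp` via `array_api_compat.array_namespace`, and the default values here are illustrative), the shape arithmetic is:

```python
import numpy as np

nframes = 3
default_fparam = [0.5, -1.0]  # illustrative default, not from the PR

xp = np  # stand-in for array_api_compat.array_namespace(extended_coord)
default_fparam_array = xp.asarray(default_fparam, dtype=np.float64)
# (dim_fparam,) -> (1, dim_fparam) -> (nframes, dim_fparam)
fparam_input_for_des = xp.tile(
    xp.reshape(default_fparam_array, (1, -1)), (nframes, 1)
)
```

Each of the `nframes` rows of the result equals the default, which is exactly the per-frame input shape the descriptor expects.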
```python
descriptor, rot_mat, g2, h2, sw = self.descriptor(
    extended_coord,
    extended_atype,
    nlist,
    mapping=mapping,
    fparam=fparam_input_for_des if self.add_chg_spin_ebd else None,
)
```
Drop or underscore the unused sw binding.
sw is unpacked here but never read, so Ruff flags this as RUF059. Rename it to _sw or _ to keep CI green.
Suggested fix

```diff
-        descriptor, rot_mat, g2, h2, sw = self.descriptor(
+        descriptor, rot_mat, g2, h2, _sw = self.descriptor(
```

As per coding guidelines, `**/*.py`: Always run `ruff check .` and `ruff format .` before committing changes or CI will fail.
🧰 Tools
🪛 Ruff (0.15.4)
[warning] 210-210: Unpacked variable sw is never used
Prefix it with an underscore or any other dummy variable pattern
(RUF059)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@deepmd/dpmodel/atomic_model/dp_atomic_model.py` around lines 210 - 216, The
unpacking from self.descriptor currently binds an unused variable `sw` which
triggers RUF059; update the unpack assignment to use a throwaway name (e.g.,
`_sw` or `_`) instead of `sw` so the unused value is ignored (change
"descriptor, rot_mat, g2, h2, sw = self.descriptor(...)" to "descriptor,
rot_mat, g2, h2, _sw = self.descriptor(...)" or use `_`).
```python
# Handle default fparam if fitting net supports it
if (
    hasattr(self.fitting_net, "get_dim_fparam")
    and self.fitting_net.get_dim_fparam() > 0
    and fparam is None
):
    # use default fparam
    default_fparam_tensor = self.fitting_net.get_default_fparam()
    assert default_fparam_tensor is not None
    fparam_input_for_des = torch.tile(
        default_fparam_tensor.unsqueeze(0), [nframes, 1]
    )
else:
    fparam_input_for_des = fparam
```
Gate the default-fparam path on an actual default.
Line 278 only checks get_dim_fparam() > 0, so this branch now runs for any fitting that uses frame parameters, even when add_chg_spin_ebd is off. If that fitting expects caller-supplied fparam and has no default buffer, Line 285 asserts on None and forward_atomic() fails immediately.
🐛 Proposed fix

```diff
-        if (
-            hasattr(self.fitting_net, "get_dim_fparam")
-            and self.fitting_net.get_dim_fparam() > 0
-            and fparam is None
-        ):
-            # use default fparam
-            default_fparam_tensor = self.fitting_net.get_default_fparam()
-            assert default_fparam_tensor is not None
+        if self.add_chg_spin_ebd and self.fitting_net.get_dim_fparam() > 0 and fparam is None:
+            if not self.fitting_net.has_default_fparam():
+                raise ValueError(
+                    "fparam must be provided when add_chg_spin_ebd is enabled "
+                    "and no default frame parameter is configured"
+                )
+            default_fparam_tensor = self.fitting_net.get_default_fparam()
+            assert default_fparam_tensor is not None
             fparam_input_for_des = torch.tile(
                 default_fparam_tensor.unsqueeze(0), [nframes, 1]
             )
         else:
             fparam_input_for_des = fparam
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@deepmd/pt/model/atomic_model/dp_atomic_model.py` around lines 277 - 290, The
branch that supplies a default fparam must only run when an actual default
exists; update the condition in the block that currently checks
self.fitting_net.get_dim_fparam() > 0 so it also requires
self.fitting_net.get_default_fparam() is not None and fparam is None before
using the default. Concretely, change the if condition that references
get_dim_fparam() to also call get_default_fparam() (or store it first) and only
tile/use default_fparam_tensor when that default is present; otherwise leave
fparam_input_for_des = fparam so forward_atomic()/forward path won’t assert on
None. Ensure you reference the same symbols: self.fitting_net.get_dim_fparam(),
self.fitting_net.get_default_fparam(), and forward_atomic().
```python
descriptor, rot_mat, g2, h2, sw = self.descriptor(
    extended_coord,
    extended_atype,
    nlist,
    mapping=mapping,
    comm_dict=comm_dict,
    fparam=fparam_input_for_des if self.add_chg_spin_ebd else None,
)
```
Rename the unused `sw` binding.
Ruff already flags `sw` as unused here, so this will keep the Python lint job noisy or failing depending on CI config. Rename it to `_sw` (or `_`) if the switch weights are intentionally ignored. As per coding guidelines, `**/*.py`: Always run `ruff check .` and `ruff format .` before committing changes or CI will fail.
🧹 Proposed fix

```diff
-        descriptor, rot_mat, g2, h2, sw = self.descriptor(
+        descriptor, rot_mat, g2, h2, _sw = self.descriptor(
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
descriptor, rot_mat, g2, h2, _sw = self.descriptor(
    extended_coord,
    extended_atype,
    nlist,
    mapping=mapping,
    comm_dict=comm_dict,
    fparam=fparam_input_for_des if self.add_chg_spin_ebd else None,
)
```
🧰 Tools
🪛 Ruff (0.15.4)
[warning] 292-292: Unpacked variable sw is never used
Prefix it with an underscore or any other dummy variable pattern
(RUF059)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@deepmd/pt/model/atomic_model/dp_atomic_model.py` around lines 292 - 299, The
call to self.descriptor(...) in dp_atomic_model.py currently assigns five
returns to descriptor, rot_mat, g2, h2, sw but sw is unused; rename this binding
to _sw (or _) to satisfy linters. Update the assignment in the
dp_atomic_model.AtomicModel (or the method containing that descriptor call) to
use _sw instead of sw so Ruff no longer flags an unused variable, and run ruff
check . / ruff format . before committing.
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files

```
@@            Coverage Diff             @@
##           master    #5295      +/-   ##
==========================================
+ Coverage   82.32%   82.34%   +0.01%
==========================================
  Files         768      768
  Lines       77098    77181      +83
  Branches     3659     3659
==========================================
+ Hits        63469    63552      +83
+ Misses      12458    12457       -1
- Partials     1171     1172       +1
```

☔ View full report in Codecov by Sentry.
♻️ Duplicate comments (2)

deepmd/pt/model/atomic_model/dp_atomic_model.py (2)

291-298: ⚠️ Potential issue | 🟡 Minor

Rename the unused `sw` binding.

`sw` is still never read after the descriptor call, so Ruff will keep flagging this unpack. Rename it to `_sw` (or `_`) before merging. As per coding guidelines, `**/*.py`: Always run `ruff check .` and `ruff format .` before committing changes or CI will fail.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@deepmd/pt/model/atomic_model/dp_atomic_model.py` around lines 291-298, the tuple returned by self.descriptor is unpacked into descriptor, rot_mat, g2, h2, sw but sw is never used; rename the binding to _sw (or _) in the unpack expression in dp_atomic_model.py (the call to self.descriptor with args extended_coord, extended_atype, nlist, mapping, comm_dict, fparam=...) so the unused value is ignored and Ruff stops flagging the unused variable; run ruff check . and ruff format . before committing.

277-289: ⚠️ Potential issue | 🔴 Critical

Don’t assume `get_dim_fparam() > 0` implies a default exists.

Line 283 still dereferences `get_default_fparam()` based only on `get_dim_fparam() > 0`, but fittings can legally have frame-parameter support with no configured default. In that case `forward_atomic()` still hits the assertion when callers omit `fparam`. Gate this branch on an actual default (or raise a clear error only for the charge/spin path) instead of asserting on `None`.

🐛 Proposed fix

```diff
-        if (
-            hasattr(self.fitting_net, "get_dim_fparam")
-            and self.fitting_net.get_dim_fparam() > 0
-            and fparam is None
-        ):
-            # use default fparam
-            default_fparam_tensor = self.fitting_net.get_default_fparam()
-            assert default_fparam_tensor is not None
+        default_fparam_tensor = self.fitting_net.get_default_fparam()
+        if self.add_chg_spin_ebd and fparam is None:
+            if default_fparam_tensor is None:
+                raise ValueError(
+                    "fparam must be provided when add_chg_spin_ebd is enabled "
+                    "and no default frame parameter is configured"
+                )
             fparam_input_for_des = torch.tile(
                 default_fparam_tensor.unsqueeze(0), [nframes, 1]
             )
         else:
             fparam_input_for_des = fparam
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@deepmd/pt/model/atomic_model/dp_atomic_model.py` around lines 277-289, the current branch assumes a default fparam exists when fitting_net.get_dim_fparam() > 0 and then asserts on get_default_fparam(); instead, first query get_default_fparam() and only use it when it is not None: call self.fitting_net.get_default_fparam() and if that returns a tensor build fparam_input_for_des by tiling it (as currently done). If get_default_fparam() is None and fparam is None, raise a clear ValueError (or a specific error) explaining that no fparam was provided and no default is configured (but only for the charge/spin path if that special handling is needed); otherwise set fparam_input_for_des = fparam. Update the logic around fitting_net.get_dim_fparam, get_default_fparam, forward_atomic and fparam_input_for_des to reflect this check.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@deepmd/pt/model/atomic_model/dp_atomic_model.py`:
- Around line 291-298: The tuple returned by self.descriptor is unpacked into
descriptor, rot_mat, g2, h2, sw but sw is never used; rename the binding to _sw
(or _) in the unpack expression in dp_atomic_model.py (the call to
self.descriptor with args extended_coord, extended_atype, nlist, mapping,
comm_dict, fparam=...) so the unused value is ignored and Ruff stops flagging
the unused variable; run ruff check . and ruff format . before committing.
- Around line 277-289: The current branch assumes a default fparam exists when
fitting_net.get_dim_fparam() > 0 and then asserts on get_default_fparam();
instead, first query get_default_fparam() and only use it when it is not None:
call self.fitting_net.get_default_fparam() and if that returns a tensor build
fparam_input_for_des by tiling it (as currently done). If get_default_fparam()
is None and fparam is None, raise a clear ValueError (or a specific error)
explaining that no fparam was provided and no default is configured (but only
for the charge/spin path if that special handling is needed); otherwise set
fparam_input_for_des = fparam. Update the logic around
fitting_net.get_dim_fparam, get_default_fparam, forward_atomic and
fparam_input_for_des to reflect this check.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: bc8c4ba8-09a5-49d0-a716-d1590293c7af
📒 Files selected for processing (1)
deepmd/pt/model/atomic_model/dp_atomic_model.py
Summary by CodeRabbit
New Features
Tests
Documentation