Move internal shap algorithms into separate namespace.#11985

Open
RAMitchell wants to merge 20 commits into dmlc:master from RAMitchell:shap-interface

Conversation

@RAMitchell
Member

No description provided.

Member Author


It is quite annoying to get a GBTreeModel out of a booster.

Member


Yeah, I would like to split up the concept of "model" from everything else, like "optimizers/tree builders". But it might be too much for XGBoost in its current state.

Member


I know this is WIP, but let's keep the objective of avoiding code duplication in mind. I tried to find ways to unify the code between the CSR DMatrix and the gradient index, but it got a bit messy. Still, I think unifying the code is worth some extra layers of complexity.

Member Author


Agreed. I did notice that and had planned to do something about it.

Member Author


I put the data accessors for the predictor in a header so I could use them in SHAP.

@RAMitchell
Member Author

I am having two major difficulties with this PR. One is that the categorical recoding logic is complicated, and I don't want to carry it through to the interpretability module. The other is simply getting the tree ensemble out of the learner, which seems somehow impossible in the current setup.

@trivialfis
Member

I can look into simplifying the categorical features after the holiday.

Contributor

Copilot AI left a comment


Pull request overview

This PR refactors SHAP-related implementation details out of the CPU/GPU predictors into a new internal xgboost::interpretability namespace, aiming to reduce duplication and better isolate interpretability code paths.

Changes:

  • Introduces new internal SHAP dispatch API (interpretability::ShapValues, ShapInteractionValues, ApproxFeatureImportance) with CPU and CUDA implementations.
  • Refactors CPU/GPU predictors to delegate SHAP/interaction contribution computation to the new interpretability layer.
  • Extracts shared CPU/GPU data access utilities into dedicated headers and updates SHAP tests to use the new entry points (with reduced runtime).

Reviewed changes

Copilot reviewed 13 out of 13 changed files in this pull request and generated 8 comments.

Summary per file:

  • tests/cpp/predictor/test_shap.cc — Updates SHAP tests to call the new interpretability SHAP entry points and reduces dataset sizes/iterations.
  • src/predictor/interpretability/shap.h — Adds the internal SHAP dispatch header (CPU/CUDA routing).
  • src/predictor/interpretability/shap.cc — Adds CPU implementations for SHAP values, interactions, and approximate importance.
  • src/predictor/interpretability/shap.cu — Adds CUDA implementations for SHAP values/interactions using GPUTreeShap.
  • src/predictor/gpu_predictor.cu — Removes inlined GPU SHAP logic, delegates to the interpretability SHAP functions, and reuses the extracted GPU accessors.
  • src/predictor/gpu_data_accessor.cuh — New shared GPU sparse/ellpack access helpers (used by the GPU predictor and SHAP).
  • src/predictor/data_accessor.h — New shared CPU batch-to-FVec access helpers (used by the CPU predictor and CPU SHAP).
  • src/predictor/cpu_predictor.cc — Removes inlined CPU SHAP logic, delegates to the interpretability SHAP functions, and uses the extracted accessors.
  • src/gbm/gbtree.h — Formatting-only adjustments.
  • src/gbm/gbtree.cc — Formatting-only adjustments.
  • include/xgboost/predictor.h — Macro formatting-only adjustment.


Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 13 out of 13 changed files in this pull request and generated 3 comments.



Comment on lines +212 to +213
interpretability::ShapInteractionValues(dmat->Ctx(), p_dmat.get(), &shap_interactions, *gbtree, 0,
{}, false);

Copilot AI Feb 17, 2026


The tree_weights parameter is a pointer type (std::vector<float> const*), but the test is passing {} (an empty brace-initializer). This will not compile correctly. Pass nullptr instead to indicate no tree weights.

@RAMitchell changed the title from "[WIP] Move internal shap algorithms into separate namespace." to "Move internal shap algorithms into separate namespace." on Feb 17, 2026.
@RAMitchell marked this pull request as ready for review on February 17, 2026, 13:44.