InterpretDL v0.6.0 Release
We are releasing version 0.6.0 of InterpretDL, with the following new features:
- Documentation is much richer. See here.
- A new Interpreter, GAInterpreter, has been implemented, with a corresponding usage example. This implementation is suitable for models with self-attention in each modality, such as CLIP.
- The previous "tutorials" have been renamed to "examples" to avoid confusion. Examples show how to use the Interpreters and their explanation results. See the tutorials for more information.
- Tutorials are provided, including a Getting Started Tutorial, an Input Gradient Tutorial, and four tutorials for NLP tasks, using Ernie2.0 in English (on NBViewer), BERT in English (on NBViewer), BiLSTM in Chinese (on NBViewer), and Ernie1.0 in Chinese (on NBViewer) as examples. (For text visualizations, NBViewer gives better, colorful rendering results.)
- A taxonomy is provided for comparing the Interpreters, as shown in the table below; a minimal sketch of the common Interpreter interface follows the table:
Methods | Representation | Model Type | Example |
---|---|---|---|
LIME | Input Features | Model-Agnostic | link1, link2 |
LIME with Prior | Input Features | Model-Agnostic | link |
NormLIME/FastNormLIME | Input Features | Model-Agnostic | link1, link2 |
LRP | Input Features | Differentiable | link |
SmoothGrad | Input Features | Differentiable | link |
IntGrad | Input Features | Differentiable | link |
GradSHAP | Input Features | Differentiable | link |
Occlusion | Input Features | Model-Agnostic | link |
GradCAM/CAM | Intermediate Features | Specific: CNNs | link |
ScoreCAM | Intermediate Features | Specific: CNNs | link |
Rollout | Intermediate Features | Specific: Transformers | link |
TAM | Intermediate Features | Specific: Transformers | link |
ForgettingEvents | Dataset-Level | Differentiable | link |
TIDY (Training Data Analyzer) | Dataset-Level | Differentiable | link |
Consensus | Features | Cross-Model | link |
Generic Attention | Input Features | Specific: Bi-Modal Transformers | link (nblink)* |
* For text visualizations, NBViewer gives better, colorful rendering results.
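
Regardless of the specific method, every Interpreter in the table follows the same two-step workflow: wrap a Paddle model in an Interpreter, then call `interpret` on an input to obtain the explanation. The sketch below illustrates this with `LIMECVInterpreter`; the argument names (`num_samples`, `batch_size`, `save_path`) and the image path are assumptions based on the documented pattern and may differ between versions, so consult the linked examples for exact signatures.

```python
import interpretdl as it
from paddle.vision.models import resnet50

# Wrap any Paddle model; resnet50 serves as a stand-in here.
paddle_model = resnet50(pretrained=True)

# Step 1: construct an Interpreter around the model.
# Model-agnostic methods such as LIME only need forward passes.
lime = it.LIMECVInterpreter(paddle_model)

# Step 2: call interpret() on an input to get an explanation.
# num_samples / batch_size control the LIME sampling budget
# (assumed argument names; check the docs for your version).
lime_weights = lime.interpret(
    'assets/catdog.png',  # hypothetical image path
    num_samples=1000,
    batch_size=50,
    save_path='lime_explanation.png',
)
```

GAInterpreter follows the same construct-then-interpret pattern, but its `interpret` call takes paired image and text inputs for bi-modal models such as CLIP; see its usage example linked above for the exact arguments.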