Add OcclusionAttribution and LimeAttribution #145

Merged (24 commits, Feb 27, 2023). The diff below shows changes from 4 of the 24 commits.

Commits:
1379d84  Add OcclusionRegistry and OcclusionAttribution. (nfelnlp, Oct 24, 2022)
ab7c23d  Merge remote-tracking branch 'origin/main' into occlusion (nfelnlp, Nov 13, 2022)
cc49afc  Merge branch 'main' of https://github.com/inseq-team/inseq into occlu… (nfelnlp, Dec 5, 2022)
4e93da3  Merge remote-tracking branch 'origin/main' into occlusion (nfelnlp, Dec 15, 2022)
000aac5  Updated OcclusionAttribution with new default params and better class… (nfelnlp, Dec 17, 2022)
364195d  Merge branch 'main' of https://github.com/inseq-team/inseq into occlu… (nfelnlp, Jan 9, 2023)
6e80e62  Fixed Occlusion. Added GradientSHAP. LimeBase (WIP). (nfelnlp, Jan 16, 2023)
42cfc7c  Enable Python3.8 compatibility. (nfelnlp, Jan 16, 2023)
cb8b4c7  Instance-wise attribute_step in LimeBase (WIP). (nfelnlp, Jan 23, 2023)
3ee678f  Added reshapes for interpretable inputs in LimeBase. (WIP) (nfelnlp, Jan 25, 2023)
85aaab9  Merge branch 'occlusion' of https://github.com/inseq-team/inseq into … (nfelnlp, Jan 26, 2023)
b8096d1  Replaced perturb_func with Thermostat base implementation (mask_prob). (nfelnlp, Jan 26, 2023)
92ea057  Replaced perturb_func with the one from thermostat. (nfelnlp, Jan 26, 2023)
1bfa69a  Fixed mask shape in perturb_func (LIME). (nfelnlp, Jan 27, 2023)
430aecf  Cleanup (nfelnlp, Jan 29, 2023)
fbcf2e0  Merge branch 'main' of https://github.com/inseq-team/inseq into occlu… (nfelnlp, Feb 20, 2023)
ead7aa6  Occlusion allows attributed targets. Lime allows custom interp_rep_tr… (nfelnlp, Feb 20, 2023)
e37f462  Added default baseline UNK for Occlusion. (nfelnlp, Feb 21, 2023)
2f89a8b  Separated step and sequence output classes for Occlusion and other pe… (nfelnlp, Feb 24, 2023)
6a66515  Fixed Occlusion for target attributions. (nfelnlp, Feb 24, 2023)
3151f3f  Updated Occlusion and LIME for multiple inputs. (nfelnlp, Feb 25, 2023)
eea0588  Added explanation to unsupported LIME case. (nfelnlp, Feb 25, 2023)
3723108  Fix & normalize Occlusion (gsarti, Feb 27, 2023)
dc0c7c1  Fix batching LIME (gsarti, Feb 27, 2023)
.pre-commit-config.yaml (2 changes: 1 addition & 1 deletion)

@@ -5,7 +5,7 @@ default_stages: [commit, push]

repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
-   rev: v4.3.0
+   rev: v4.4.0
gsarti (Member) commented:
This should be already in the main branch if you merged main!

nfelnlp (Collaborator, Author) replied:
Sorry, this was an accident. Thanks for pointing it out!

    hooks:
      - id: trailing-whitespace
      - id: check-yaml
inseq/attr/feat/__init__.py (2 additions)

@@ -11,6 +11,7 @@
    LayerIntegratedGradientsAttribution,
    SaliencyAttribution,
)
from .perturbation_attribution import OcclusionAttribution


__all__ = [
@@ -30,4 +31,5 @@
    "LayerIntegratedGradientsAttribution",
    "LayerGradientXActivationAttribution",
    "LayerDeepLiftAttribution",
    "OcclusionAttribution",
]
inseq/attr/feat/perturbation_attribution.py (new file, 74 additions)

@@ -0,0 +1,74 @@
from typing import Any, Dict

import logging

from captum.attr import Occlusion

from ...data import PerturbationFeatureAttributionStepOutput
from ...utils import Registry
from ..attribution_decorators import set_hook, unset_hook
from .attribution_utils import get_source_target_attributions
from .gradient_attribution import FeatureAttribution


logger = logging.getLogger(__name__)


class PerturbationMethodRegistry(FeatureAttribution, Registry):
gsarti (Member) commented (marked as resolved):
Let's call it PerturbationAttribution to keep it consistent with GradientAttribution. We might want to bulk change them to add the Registry specification at a later time though!

"""Occlusion-based attribution methods."""
gsarti (Member) commented (marked as resolved):
Change to """Perturbation-based attribution method registry."""


    @set_hook
    def hook(self, **kwargs):
        pass

    @unset_hook
    def unhook(self, **kwargs):
        pass


class OcclusionAttribution(PerturbationMethodRegistry):
"""Occlusion-based attribution method.
Reference implementation:
`https://captum.ai/api/occlusion.html <https://captum.ai/api/occlusion.html>`__.

Usages in other implementations:
`niuzaisheng/AttExplainer <https://github.com/niuzaisheng/AttExplainer/blob/main/baseline_methods/\
explain_baseline_captum.py>`__
`andrewPoulton/explainable-asag <https://github.com/andrewPoulton/explainable-asag/blob/main/explanation.py>`__
`copenlu/xai-benchmark <https://github.com/copenlu/xai-benchmark/blob/master/saliency_gen/\
interpret_grads_occ.py>`__
`DFKI-NLP/thermostat <https://github.com/DFKI-NLP/thermostat/blob/main/src/thermostat/explainers/occlusion.py>`__
"""

method_name = "occlusion"

def __init__(self, attribution_model, **kwargs):
super().__init__(attribution_model)
self.is_layer_attribution = False
self.method = Occlusion(self.attribution_model)

    def attribute_step(
        self,
        attribute_fn_main_args: Dict[str, Any],
        attribution_args: Dict[str, Any] = {},
    ) -> Any:
        if "sliding_window_shapes" not in attribution_args:
            # sliding_window_shapes is defined as a tuple:
            # the first entry is between 1 and the length of the input,
            # the second entry is given by the max length of the underlying model.
gsarti (Member) commented:
I'm a bit puzzled by the second entry: the max length you take here is the max generation length that the model can handle, but my understanding was that this would be the hidden_size from the model config, to ensure that there is no partial masking of token embeddings. Could you clarify?

nfelnlp (Collaborator, Author) replied:
You're right, I accidentally took the next best attribute in self.attribution_model that had 512 as the size. This was careless.
The second entry should rather be based on the embedding size, right?
Does accessing it via self.attribution_model.get_embedding_layer() make sense?
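
For illustration, a minimal sketch of the fix being discussed, assuming get_embedding_layer() returns a torch.nn.Embedding (whose embedding_dim attribute holds the embedding size); the helper name is hypothetical and this is a sketch of the suggestion, not the merged code:

    def _default_sliding_window(self) -> tuple:
        # Assumed accessor from the discussion above: returns the model's
        # input embedding layer (a torch.nn.Embedding).
        embedding_layer = self.attribution_model.get_embedding_layer()
        # Occlude one token at a time, covering its full embedding vector
        # so that no token embedding is only partially masked.
        return (1, embedding_layer.embedding_dim)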

            # If not explicitly given via attribution_args, the default is (1, model_max_length)
            attribution_args["sliding_window_shapes"] = (1, self.attribution_model.model_max_length)

        attr = self.method.attribute(
            **attribute_fn_main_args,
            **attribution_args,
        )
        source_attributions, target_attributions = get_source_target_attributions(
            attr, self.attribution_model.is_encoder_decoder
        )
        return PerturbationFeatureAttributionStepOutput(
            source_attributions=source_attributions,
            target_attributions=target_attributions,
        )
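
For context, a minimal sketch of how the new method could be invoked through the inseq API once registered under method_name = "occlusion"; the model name and input sentence are placeholders:

    import inseq

    # Load a supported model together with the newly registered "occlusion" method.
    model = inseq.load_model("Helsinki-NLP/opus-mt-en-it", "occlusion")

    # Attribute a sample input; when sliding_window_shapes is not supplied,
    # the default defined in attribute_step above is used.
    out = model.attribute("Hello everyone, hope you're enjoying the tutorial!")
    out.show()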
inseq/data/__init__.py (2 additions)

@@ -12,6 +12,7 @@
    FeatureAttributionSequenceOutput,
    FeatureAttributionStepOutput,
    GradientFeatureAttributionStepOutput,
    PerturbationFeatureAttributionStepOutput,
)
from .batch import Batch, BatchEmbedding, BatchEncoding, DecoderOnlyBatch, EncoderDecoderBatch
from .viz import show_attributions
@@ -32,6 +33,7 @@
    "FeatureAttributionInput",
    "FeatureAttributionStepOutput",
    "GradientFeatureAttributionStepOutput",
    "PerturbationFeatureAttributionStepOutput",
    "FeatureAttributionSequenceOutput",
    "FeatureAttributionOutput",
    "ModelIdentifier",
inseq/data/attribution.py (20 additions)

@@ -474,3 +474,23 @@ class GradientFeatureAttributionStepOutput(FeatureAttributionStepOutput):
"""

_sequence_cls: Type["FeatureAttributionSequenceOutput"] = GradientFeatureAttributionSequenceOutput


# Perturbation attribution classes


@dataclass(eq=False, repr=False)
class PerturbationFeatureAttributionSequenceOutput(FeatureAttributionSequenceOutput):
"""Raw output of a single sequence of perturbation feature attribution."""

def __post_init__(self):
super().__post_init__()
self._dict_aggregate_fn["source_attributions"]["sequence_aggregate"] = sum_normalize_attributions
gsarti (Member) commented:
Do perturbation attributions have shape [attributed_text_length, generated_text_length, hidden_size] like the ones generated by gradient methods? sum_normalize_attributions ensures that the 3D tensor above is cast to a 2D tensor for visualization, but I thought that for occlusion this wouldn't be needed.

If it is indeed not needed, then we would not need a specific class for PerturbationFeatureAttribution methods and we could simply stick to the base FeatureAttributionSequenceOutput and FeatureAttributionStepOutput for the moment.

nfelnlp (Collaborator, Author) replied:
Yes, when I left out this aggregation step from the __post_init__, I had a 3D tensor that resulted in a shape violation here:

    if attr.source_attributions is not None:
        assert len(attr.source_attributions.shape) == 2
    if attr.target_attributions is not None:
        assert len(attr.target_attributions.shape) == 2

I assume this will apply to other perturbation methods as well.

nfelnlp (Collaborator, Author) replied:
I agree with you to have the return object be of type FeatureAttributionSequenceOutput and FeatureAttributionStepOutput for now.

        self._dict_aggregate_fn["target_attributions"]["sequence_aggregate"] = sum_normalize_attributions
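
For intuition, a rough sketch of the kind of reduction sum_normalize_attributions performs (an illustrative stand-in, not the inseq source): collapsing the hidden dimension by summation and L2-normalizing the result, which is what lets the 2D shape assertion quoted in the thread above pass for 3D gradient-style tensors:

    import torch

    def sum_normalize_sketch(attributions: torch.Tensor) -> torch.Tensor:
        # Collapse [attributed_len, generated_len, hidden_size] to
        # [attributed_len, generated_len] by summing over the hidden dimension.
        scores = attributions.sum(dim=-1)
        # L2-normalize over the attributed tokens so each generation step's
        # scores are on a comparable scale.
        return scores / scores.norm(p=2, dim=0, keepdim=True)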


@dataclass(eq=False, repr=False)
class PerturbationFeatureAttributionStepOutput(FeatureAttributionStepOutput):
"""Raw output of a single step of perturbation feature attribution."""

_sequence_cls: Type["FeatureAttributionSequenceOutput"] = PerturbationFeatureAttributionSequenceOutput