🐛 Bug

GradientShap (`captum.attr.GradientShap.attribute`), which is an extension of Integrated Gradients, needs an `internal_batch_size` argument just like IntegratedGradients. Currently, any large value of `n_samples` results in out-of-memory errors, because the input is stacked `n_samples` times. The same kind of issue was already fixed in IntegratedGradients via its `internal_batch_size` argument.
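For reference, a minimal sketch contrasting the two calls (the toy model and shapes here are made up for illustration): `IntegratedGradients.attribute` can bound peak memory by processing its internally expanded batch in slices via `internal_batch_size`, while `GradientShap.attribute` evaluates the full `n_samples`-times-stacked input at once and exposes no equivalent knob.

```python
import torch
import torch.nn as nn
from captum.attr import GradientShap, IntegratedGradients

# Hypothetical toy setup; at realistic input sizes the stacked batch
# below is what exhausts GPU memory.
model = nn.Sequential(nn.Linear(100, 10))
inputs = torch.randn(8, 100)
baselines = torch.zeros(8, 100)

# IntegratedGradients: the (n_steps * batch_size) expanded input is
# processed in chunks of internal_batch_size, so peak memory stays bounded
# even for large n_steps.
ig = IntegratedGradients(model)
ig_attr = ig.attribute(
    inputs,
    baselines=baselines,
    target=0,
    n_steps=200,
    internal_batch_size=16,
)

# GradientShap: the input is stacked n_samples times and evaluated in one
# forward/backward pass; large n_samples values trigger OOM, and there is
# no internal_batch_size argument to mitigate it.
gs = GradientShap(model)
gs_attr = gs.attribute(
    inputs,
    baselines=baselines,
    target=0,
    n_samples=200,
)
```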