Slow DiscretizedIntegratedGradientsAttribution method, also on GPU
#161
Labels: enhancement
🐛 Bug Report
Inference on a Google Colab GPU is very slow: there is no significant difference between running the model on CUDA and on the CPU.
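(A quick sanity check, sketched below, shows that the Colab runtime does expose a GPU to PyTorch, so the missing speedup is not simply a hidden CPU fallback; the device name is only an example.)

```python
import torch

# Confirm the Colab runtime actually exposes a GPU to PyTorch.
print(torch.cuda.is_available())      # True on a GPU runtime
print(torch.cuda.get_device_name(0))  # e.g. "Tesla T4"
```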
🔬 How To Reproduce
The following `model.attribute(...)` call runs for around 33 to 47 seconds, on both a Colab CPU and a Colab GPU runtime. I tried passing the device to the model, and `model.device` confirms that it is running on CUDA, but attributing only two sentences still takes very long. (I don't know the underlying attribution computations well enough to tell whether this is to be expected or whether it should be faster; if it is always this slow, analysing larger corpora seems practically infeasible.)
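A minimal sketch of my setup (the checkpoint name and the two sentences are placeholders for my actual model and inputs, and I am assuming the default arguments of the `discretized_integrated_gradients` method suffice):

```python
import time

import inseq

# Placeholder checkpoint; my actual model differs.
model = inseq.load_model("gpt2", "discretized_integrated_gradients")
print(model.device)  # prints "cuda" on a Colab GPU runtime

start = time.perf_counter()
out = model.attribute(
    # Two placeholder sentences standing in for my actual inputs.
    ["The first example sentence.", "The second example sentence."]
)
print(f"{time.perf_counter() - start:.1f} s")  # ~33-47 s on both CPU and GPU
```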
Environment

Expected behavior
Faster inference when running on a GPU (CUDA).
(Thanks, by the way, for the fix for returning the per-token scores in a dictionary; the new method works well :) )