Unlike the other architectures in this package, RetNet doesn't support padding as far as I'm aware. I was thinking the best place to introduce it is alongside the positional mask. Here we don't have the luxury of the softmax, so we can't simply mask the relevant positions with infinity.

From my attempt, the parallel code would be something like the first sketch below (assuming left padding and a `padding_mask` of shape `(bsz, seq_len)`). This would imply expanding the mask here instead of broadcasting it in the forward method. In the recurrent formulation, perhaps masking the scaling factor accordingly works (second sketch below)?

I would like some help on this; perhaps the authors have a better approach? @donglixp @sunyt32
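For the parallel path, a rough, untested sketch of what I have in mind (the `parallel_decay_mask` name, shapes, and helper are only illustrative, not the existing torchscale API):

```python
import torch

def parallel_decay_mask(decay, padding_mask):
    # Rough sketch (illustrative names, not the existing torchscale API):
    # a per-sequence decay mask for parallel_forward under left padding.
    #   decay:        (num_heads,) log-decay rates (negative, as in RetNet)
    #   padding_mask: (bsz, seq_len) bool, True at padded positions
    # Returns a (bsz, num_heads, seq_len, seq_len) mask with pad positions zeroed.
    bsz, slen = padding_mask.shape
    index = torch.arange(slen, device=padding_mask.device).float()

    # Shift indices so counting starts at the first non-pad token of each sequence.
    index = index[None, :] - padding_mask.sum(dim=-1, keepdim=True).float()  # (bsz, slen)

    rel = index[:, None, :, None] - index[:, None, None, :]   # n - m, (bsz, 1, slen, slen)
    causal = torch.tril(torch.ones(slen, slen, dtype=torch.bool, device=index.device))
    mask = torch.exp(rel * decay[None, :, None, None])        # decay^(n - m) per head
    mask = mask.masked_fill(~causal, 0.0)

    # Zero out query rows and key columns that sit on pad tokens; this is why the
    # mask is expanded per sequence instead of broadcast in the forward method.
    valid = (~padding_mask).float()
    mask = mask * valid[:, None, :, None] * valid[:, None, None, :]

    # Same row normalization as the unpadded path; the clamp avoids dividing
    # by zero on all-pad rows.
    mask = mask / mask.sum(dim=-1, keepdim=True).clamp(min=1e-6).sqrt()
    return mask
```

For the recurrent formulation, assuming the usual RetNet update `scale = prev_scale * decay + 1` and `kv = prev_kv * (prev_scale.sqrt() * decay / scale.sqrt()) + kᵀv / scale.sqrt()`, masking the scale with a padding indicator might look roughly like this (again only a sketch; `recurrent_step` and its shapes are assumptions, not the existing forward_recurrent signature):

```python
def recurrent_step(prev_kv, prev_scale, k, v, decay, is_pad):
    # Rough sketch of one recurrent step with the scaling factor masked for padding.
    #   prev_kv:    (bsz, num_heads, key_dim, head_dim) running state
    #   prev_scale: (bsz, num_heads, 1, 1) running normalizer, starts at 0
    #   k:          (bsz, num_heads, key_dim, 1) current key
    #   v:          (bsz, num_heads, 1, head_dim) current value
    #   decay:      (1, num_heads, 1, 1) per-head decay in (0, 1)
    #   is_pad:     (bsz, 1, 1, 1) float, 1.0 where the current token is padding
    not_pad = 1.0 - is_pad

    # Keep the scale at 0 while consuming left-pad tokens, so the first real token
    # starts with prev_scale = 0 and scale = 1.
    scale = (prev_scale * decay + 1) * not_pad
    safe_scale = scale.clamp(min=1.0)

    kv = prev_kv * (prev_scale.sqrt() * decay / safe_scale.sqrt()) + (k @ v) / safe_scale.sqrt()
    kv = kv * not_pad  # a pad token must not write into the state
    return kv, scale
```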
In parallel_forward, you can try setting the padded positions to 0 after `mask = mask / mask.sum(dim=-1, keepdim=True).sqrt()` (see the sketch below). Your implementation also looks fine to me.

During inference, a padding token doesn't influence the encoding of subsequent tokens, so maybe just skipping it is enough?
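For concreteness, that suggestion might look roughly like the following inside parallel_forward, assuming the mask has already been expanded to shape `(bsz, num_heads, slen, slen)` and `padding_mask` is `True` at pad positions (names are illustrative):

```python
# Sketch: keep the existing normalization, then zero the key columns that
# correspond to (left-)padded positions so they cannot contribute.
# Assumes mask: (bsz, num_heads, slen, slen), padding_mask: (bsz, slen) bool.
mask = mask / mask.sum(dim=-1, keepdim=True).sqrt()
mask = mask.masked_fill(padding_mask[:, None, None, :], 0.0)
```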
Thank you for the quick reply. My reasoning for the parallel code was to have the decay start from the first non-pad token instead of an arbitrary `decay**idx`. I'll test the two variants and see if there's any meaningful difference.

For the forward_recurrent code, I believe I'm already ignoring the previous pad tokens, since `prev_scale` will be 0 and `scale` will be 1, which effectively discards the previous `kv` entry (see the quick check below).
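As a quick sanity check against the `recurrent_step` sketch above: with `prev_scale = 0` on the first real token, the coefficient on `prev_kv` is 0 and the new state is just `k @ v`, regardless of what accumulated during padding (shapes below are arbitrary, for illustration only):

```python
import torch

bsz, h, dk, dv = 2, 4, 8, 8
prev_kv = torch.randn(bsz, h, dk, dv)
prev_scale = torch.zeros(bsz, h, 1, 1)   # state after consuming only pad tokens
k = torch.randn(bsz, h, dk, 1)
v = torch.randn(bsz, h, 1, dv)
decay = torch.rand(1, h, 1, 1)
is_pad = torch.zeros(bsz, 1, 1, 1)       # current token is a real token

kv, scale = recurrent_step(prev_kv, prev_scale, k, v, decay, is_pad)
assert torch.allclose(kv, k @ v) and bool((scale == 1).all())
```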
Would you be interested in merging this code into the torchscale package? I can fork the repo with the changes if so. Thank you for the help nonetheless :)