Hello, thank you for your great work!
ml-cvnets/cvnets/layers/linear_attention.py, lines 163 to 191 (commit 7771756)
Looking at the implementation of MobileViTV2 linear attention, I see that the query and key are generated from x_prev when computing the context vector, while the value is generated from x.
In the paper, the context vector is analogous to the attention matrix. So when computing the context vector, shouldn't the query come from x_prev and the key come from x, so that the context vector captures the similarity between x_prev and x?
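For reference, the computation in question can be sketched as follows. This is a simplified NumPy sketch of the separable (linear) attention pattern with a context input, not the actual cvnets code: the weight names (w_i, w_k, w_v), shapes, and random stand-in weights are all assumptions for illustration. It mirrors what the implementation appears to do (query and key both from x_prev, value from x):

```python
import numpy as np

def separable_cross_attention(x, x_prev, d, seed=0):
    """Sketch of MobileViTV2-style linear attention with a context input.

    x:      (n_tokens, d) current features  -> value branch
    x_prev: (m_tokens, d) previous features -> query and key branches
    Projection weights below are random stand-ins for learned layers.
    """
    rng = np.random.default_rng(seed)
    w_i = rng.standard_normal((d, 1))  # "input"/query projection: one scalar score per token
    w_k = rng.standard_normal((d, d))  # key projection
    w_v = rng.standard_normal((d, d))  # value projection

    # Query branch: a scalar score per x_prev token, softmax over tokens.
    scores = x_prev @ w_i                               # (m, 1)
    scores = np.exp(scores - scores.max())
    scores = scores / scores.sum()                      # softmax along the token axis

    # Key branch: also computed from x_prev in the implementation being discussed.
    key = x_prev @ w_k                                  # (m, d)

    # Context vector: score-weighted sum of keys, collapsed over tokens.
    context = (scores * key).sum(axis=0, keepdims=True) # (1, d)

    # Value branch: computed from x, then modulated by the broadcast context.
    value = np.maximum(x @ w_v, 0.0)                    # (n, d), ReLU nonlinearity
    return value * context                              # (n, d)
```

Under this sketch, the context vector is built entirely from x_prev (both the scores and the keys), and the only place x enters is the value branch, which is the asymmetry the question is about.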