Hi,
Thank you for an awesome, well-organized repository.
![Screenshot from 2020-10-02 12-40-05](https://user-images.githubusercontent.com/23454182/94915078-8eb45100-04ac-11eb-870f-752a30dfe2d0.png)
I have a question regarding the Decoder block. The paper states that a 1-D transposed convolution operation is used to generate the decoder basis functions (paper).
However, I see that you use a linear (dense) layer in the decoder:
`Conv-TasNet/src/conv_tasnet.py`, line 126 (commit `94eac10`)
Could you explain the reason for this choice?
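For concreteness, here is a minimal sketch of the two decoder formulations as I understand them (the shapes, variable names, and the use of `nn.Fold` for the overlap-add are my own illustration, not taken from this repo): a per-frame `nn.Linear` followed by overlap-add should produce the same waveform as an `nn.ConvTranspose1d` whose kernel is the transposed linear weight.

```python
import torch
import torch.nn as nn

# Illustrative shapes only -- not taken from this repo.
N, L, stride = 512, 16, 8   # number of basis functions, frame length, hop size
B, T = 2, 100               # batch size, number of encoder frames

w = torch.randn(B, T, N)    # masked encoder output: one N-dim weight vector per frame

# (a) Linear-layer decoder: map each frame to an L-sample segment, then overlap-add.
linear_dec = nn.Linear(N, L, bias=False)
segments = linear_dec(w)                                        # [B, T, L]
fold = nn.Fold(output_size=(1, (T - 1) * stride + L),
               kernel_size=(1, L), stride=(1, stride))
recon_linear = fold(segments.transpose(1, 2)).reshape(B, -1)    # [B, T_samples]

# (b) Transposed-convolution decoder, as described in the paper.
convt_dec = nn.ConvTranspose1d(N, 1, kernel_size=L, stride=stride, bias=False)

# With the ConvTranspose1d kernel set to the transposed Linear weight,
# the two reconstructions coincide up to numerical precision.
with torch.no_grad():
    convt_dec.weight.copy_(linear_dec.weight.t().unsqueeze(1))  # [N, 1, L]
recon_convt = convt_dec(w.transpose(1, 2)).reshape(B, -1)       # [B, T_samples]

print(torch.allclose(recon_linear, recon_convt, atol=1e-5))     # True
```

If the two formulations are indeed equivalent in this way, I assume the difference is only an implementation detail, but I'd like to confirm whether that was the motivation.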