Do I just need to implement the softmax layer? #14
You need to implement the depthwise layer, too.
And do I only need to implement the SoftmaxPlugin functions inherited from the parent class, or any other functions as well?
Yes, you need to implement the SoftmaxPlugin.
I found that the channel count of the output blob from the plugin's reshape layer is actually the number of classes, so an off-the-shelf across-channel softmax layer should satisfy the demand. @chenzhi1992
All right, you can verify that the result is correct.
Do you have plans to release the code for the softmax and depthwise layers?
@xchani Did the off-the-shelf across-channel softmax work for you?
@linux-devil No. Although the off-the-shelf softmax and my own softmax implementation both operate across channels, the off-the-shelf softmax still produces wrong results. I cannot figure out why that happens.
@xchani Did it work with your own softmax implementation? Could you please share it and explain why the dimension of the last layer is 12764?
(Original issue) First, thank you for your code. I compared the pluginImplement.cpp in your code with the samplePlugin.cpp from TensorRT's samplePlugin. Do I only need to implement the softmax layer for MobileNet to work on TensorRT?