SqueezeNet is slower when using GPU than when using CPU? #14

Open
shunsuke227ono opened this issue Sep 12, 2017 · 1 comment

shunsuke227ono commented Sep 12, 2017

I'm trying to measure the speed of inference on a single image with SqueezeNet. On CPU, SqueezeNet seems fast enough (compared to VGG), but on GPU it gets very slow, even slower than on CPU.

Does anyone know why it gets slow on GPU? Is there something I should do to SqueezeNet when running it on GPU?

Here are some results from experiments I ran comparing the speed of SqueezeNet and VGG16 on both CPU and GPU.

On CPU, SqueezeNet is much faster than VGG16.

[inference time]
VGG16 average response time: 2.21110591888 [sec/image]
SqueezeNet average response time: 0.288291954994 [sec/image]

On GPU, VGG16 gets much faster, even faster than SqueezeNet, while SqueezeNet gets even slower than it is on CPU.

[inference time]
VGG16 average response time: 0.0961683591207 [sec/image] # gets very fast
SqueezeNet average response time: 1.50337402026 [sec/image] # gets very slow <= why?

Thanks!
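(A likely culprit here is the timing methodology rather than the network itself: CUDA initializes lazily and executes kernels asynchronously, so a naive per-image loop charges one-time GPU setup to the first iterations while failing to wait for in-flight kernels. The sketch below is a hypothetical reconstruction of such a loop, assuming PyTorch for illustration; the actual code used in this issue is not shown.)

```python
# Hypothetical reconstruction of a naive timing loop (PyTorch assumed for illustration).
# Two pitfalls: no warmup, so early iterations pay CUDA/cuDNN initialization costs,
# and time.time() is read without torch.cuda.synchronize(), so the clock stops
# before the launched kernels have actually finished.
import time

import torch
import torchvision.models as models

model = models.squeezenet1_1(pretrained=True).cuda().eval()
image = torch.randn(1, 3, 224, 224).cuda()

with torch.no_grad():
    start = time.time()
    for _ in range(100):
        model(image)  # kernels are launched asynchronously
    elapsed = time.time() - start

print("SqueezeNet average response time: %s [sec/image]" % (elapsed / 100))
```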


mrgloom commented Feb 7, 2019

What code was used to measure inference time?
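A fair GPU benchmark warms the model up with a few untimed passes and synchronizes the device before reading the clock. A minimal sketch of such a loop, again assuming PyTorch (adapt to whatever framework was actually used):

```python
# Minimal GPU-aware timing sketch (PyTorch assumed; warmup and synchronization
# are the two things the naive loop above omits).
import time

import torch
import torchvision.models as models

def benchmark(model, image, warmup=10, iters=100):
    model.eval()
    with torch.no_grad():
        for _ in range(warmup):
            model(image)              # untimed warmup: CUDA init, cuDNN autotuning
        torch.cuda.synchronize()      # ensure warmup work has finished
        start = time.time()
        for _ in range(iters):
            model(image)
        torch.cuda.synchronize()      # wait for all timed kernels to complete
    return (time.time() - start) / iters

model = models.squeezenet1_1(pretrained=True).cuda()
image = torch.randn(1, 3, 224, 224).cuda()
print("SqueezeNet average response time: %f [sec/image]" % benchmark(model, image))
```

If SqueezeNet still trails VGG16 after synchronized, warmed-up timing, the remaining gap at batch size 1 usually comes from per-layer kernel-launch and data-transfer overhead, which dominates for small networks with many small layers; larger batch sizes typically restore the GPU advantage.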
