
The use of GPU is not efficient ???? #34

Open · xljhtq opened this issue Apr 15, 2018 · 1 comment

xljhtq commented Apr 15, 2018

When I train the model on a GPU with a very large training set, I see low GPU utilization (about 11%) while CPU utilization is about 110%. How can I increase GPU utilization? The batch_size cannot be made larger because of limited GPU memory.

I would also like to know what training speed you see, since the RNN layers used in the model slow down training.
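
A symptom like this (GPU mostly idle while one CPU core is saturated) usually means the Python side cannot prepare batches fast enough to keep the GPU busy. As a rough illustration only, not the repository's actual data path, here is a minimal TF 1.x sketch of a `tf.data` input pipeline with prefetching; the generator, shapes, and batch size below are hypothetical stand-ins for the project's reader:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in for the project's data reader: yields
# (premise ids, hypothesis ids, label) for one example at a time.
def example_generator():
    rng = np.random.RandomState(0)
    for _ in range(1000):
        premise = rng.randint(1, 100, size=rng.randint(5, 30)).astype(np.int32)
        hypothesis = rng.randint(1, 100, size=rng.randint(5, 30)).astype(np.int32)
        label = np.int32(rng.randint(0, 3))
        yield premise, hypothesis, label

dataset = (tf.data.Dataset.from_generator(
               example_generator,
               output_types=(tf.int32, tf.int32, tf.int32),
               output_shapes=([None], [None], []))
           .shuffle(buffer_size=1000)
           .padded_batch(60, padded_shapes=([None], [None], []))
           .prefetch(2))  # keep the next batches ready while the GPU is busy

premise, hypothesis, label = dataset.make_one_shot_iterator().get_next()
# Build the model graph directly on these tensors instead of feed_dict
# placeholders so batch padding and shuffling overlap with GPU compute.
```

Building the graph on the iterator's output tensors rather than feeding batches through `feed_dict` typically raises the utilization reported by `nvidia-smi`; if it does not, the bottleneck is more likely the per-timestep RNN ops themselves.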

@zhiguowang
Owner

Training is not very slow for me. On the SNLI dataset, one iteration over the entire training set takes 515 seconds, and decoding on the dev set takes 3 seconds. I'm using a K80 GPU.
