-
…before loading param and model
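The reply above is truncated; presumably it refers to calling ncnn's `Net::set_vulkan_device()` before `load_param()`/`load_model()`. A minimal sketch, assuming ncnn was built with `NCNN_VULKAN=ON` and that the discrete GPU is at device index 1 (the index is an assumption — check what `get_gpu_count()` and `get_default_gpu_index()` report on your machine):

```cpp
#include <stdio.h>
#include "net.h"  // ncnn::Net
#include "gpu.h"  // ncnn::create_gpu_instance, ncnn::get_gpu_count

int main()
{
    ncnn::create_gpu_instance();

    // Enumerate the Vulkan devices ncnn can see.
    int gpu_count = ncnn::get_gpu_count();
    fprintf(stderr, "found %d vulkan device(s), default index %d\n",
            gpu_count, ncnn::get_default_gpu_index());

    ncnn::Net net;
    net.opt.use_vulkan_compute = true;
    net.set_vulkan_device(1); // pick the non-default GPU -- must be called
                              // BEFORE load_param()/load_model()

    net.load_param("yolov4.param");
    net.load_model("yolov4.bin");

    // ... create an Extractor and run inference as usual ...

    ncnn::destroy_gpu_instance();
    return 0;
}
```

Calling `set_vulkan_device()` after the model has been loaded has no effect, because the pipelines are already created on the previously selected device.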
-
Hi all,
I read through the ncnn source code and am still a bit confused about how to select a particular GPU device. I found the create_gpu_instance() and find_default_vulkan_device_index() functions in src/gpu.cpp, but have yet to find where the former is called from.
What I want to do is benchmark ncnn inference on a GPU that is not the default one. I ran yolov4 inference on a laptop with a Vega 10 integrated GPU and an Nvidia discrete GPU, but ncnn always runs on the Vega 10. I'm running the test on Windows 10, btw.
In find_default_vulkan_device_index(), the code comment suggests it tries the discrete GPU first, but it didn't work out that way.
This is the screenshot when testing the example code: (screenshot not included)
Any suggestions?
Thanks in advance