
cannot use cuda in pytorch #1302

Open

Description

@huyuaaaray

Hi,
I am using CUDA to accelerate YOLO prediction, but somehow the model cannot use CUDA as its device.
Examples:

  • GNMT/PyTorch
  • YOLO
  • VS Code

Basically, my GPU is:
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.98                 Driver Version: 535.98       CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                     TCC/WDDM  | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA RTX A1000 Laptop GPU  WDDM  | 00000000:01:00.0 Off |                  N/A |
| N/A   38C    P0                8W / 38W |       0MiB / 4096MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                             |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                            |
+---------------------------------------------------------------------------------------+

Does this GPU support CUDA? And when I run the script:

import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
else:
    device = torch.device("cpu")

it always reports that I am using the CPU.
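
For reference, a minimal check along these lines (only standard torch attributes, nothing project-specific assumed) shows whether the installed PyTorch build was compiled with CUDA support, which is what torch.cuda.is_available() depends on:

import torch

# Report the installed PyTorch build and whether it was compiled with CUDA.
# torch.version.cuda is None for CPU-only wheels, which makes
# torch.cuda.is_available() return False even on a CUDA-capable GPU.
print("PyTorch version:", torch.__version__)
print("Built with CUDA:", torch.version.cuda)
print("CUDA available: ", torch.cuda.is_available())
print("Device count:   ", torch.cuda.device_count())
if torch.cuda.is_available():
    # e.g. "NVIDIA RTX A1000 Laptop GPU"
    print("Device 0:", torch.cuda.get_device_name(0))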

Sincerely,
Yu Hu
