Is your feature request related to a problem? Please describe.
No.
Describe the solution you'd like
A line in README.md explaining how to install LocalAI using Docker on AMD GPUs (and maybe a second one for our Intel friends...).
It would fit in the "Or run with docker:" section.
Describe alternatives you've considered
After a search on Docker Hub I see (here) three main image categories (see the quick check after this list):
nvidia-cuda - NVIDIA GPUs.
intel - Intel GPUs ^^.
hipblas - when I installed ROCm locally I saw HIP mentioned, so I guess this is for AMD.
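As a quick sanity check on those categories, the public Docker Hub v2 API can list the repository's tags. A minimal sketch, assuming jq is installed (only the first 100 tags are fetched here):
# List the first 100 tags of the localai/localai repository
curl -s "https://hub.docker.com/v2/repositories/localai/localai/tags?page_size=100" \
  | jq -r '.results[].name' | sort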
If my supposition is right, can you add:
# Intel GPU (note: --gpus all is NVIDIA-specific; Intel containers usually get the GPU via /dev/dri instead):
docker run -ti --name local-ai -p 8080:8080 --device=/dev/dri localai/localai:latest-gpu-intel-f32
# AMD GPU (ROCm containers typically need both /dev/kfd and /dev/dri):
docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri localai/localai:latest-gpu-hipblas
in the README please.
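Once either container is up, a quick smoke test against LocalAI's OpenAI-compatible API confirms it started (assumes port 8080 is mapped as above):
# Should return a JSON list of the models the server has loaded
curl http://localhost:8080/v1/models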
Additional context
I checked first whether AMD GPUs are supported; they seem to work (#1592).