Is there any workaround for running inference on CPU or on my ARM-based Mac M1?
I am currently trying to run on a Mac M1 and I am getting the following error:
/Users/pawandeepsingh/Documents/Development/llm/PaLM/inference.py:50 in main
❱ 50 │ model = torch.hub.load("conceptofmind/PaLM", args.model).to(device).to(dtype)
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False.
If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu')
to map your storages to the CPU.
Thanks.
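The RuntimeError itself points at the workaround: the checkpoint was saved from a CUDA device, so the tensors have to be remapped to CPU at load time via map_location. Below is a minimal sketch of one way to do that without touching the repository code; it assumes the hub entrypoint for conceptofmind/PaLM calls torch.load internally and does not expose a map_location argument, and it reuses args.model from the original inference.py. On an M1, the MPS backend can be used as the target device when it is available; otherwise this falls back to plain CPU.

```python
import functools
import torch

# Prefer the Apple-silicon GPU (MPS backend) when available, otherwise CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
dtype = torch.float32  # fp16/bf16 support is limited on CPU/MPS, so stay in fp32

# Temporarily make every torch.load call issued inside torch.hub.load remap
# CUDA storages onto the CPU, which is exactly what the RuntimeError asks for.
_original_load = torch.load
torch.load = functools.partial(_original_load, map_location=torch.device("cpu"))
try:
    # args.model is the same checkpoint name passed in the original inference.py
    model = torch.hub.load("conceptofmind/PaLM", args.model)
finally:
    torch.load = _original_load  # restore the unpatched torch.load

model = model.to(device).to(dtype)
model.eval()
```

This is only a sketch; if the repository's hubconf entrypoint happens to accept extra keyword arguments, passing map_location through torch.hub.load directly would be cleaner than patching torch.load.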