@piiq piiq commented Oct 8, 2024

Hi, thanks for the lovely toolbox. This PR adds conditional selection of the device to use (CUDA, MPS, and CPU) and specifies it explicitly where it was previously unspecified.

Additionally, it updates the deprecated torch.cuda.amp.autocast to torch.amp.autocast, as recommended by the documentation.
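The replacement is mechanical but the new API takes the device type as an explicit argument. A minimal sketch of the change (the model and shapes here are placeholders, not code from this repository):

```python
import torch

# Pick the device type for autocast; fall back to CPU when CUDA is absent.
device_type = "cuda" if torch.cuda.is_available() else "cpu"

model = torch.nn.Linear(8, 8)
x = torch.randn(2, 8)

# Deprecated form:
#   with torch.cuda.amp.autocast():
#       ...
# Recommended replacement, per the PyTorch deprecation notice:
with torch.amp.autocast(device_type=device_type, enabled=(device_type == "cuda")):
    y = model(x)

print(tuple(y.shape))  # (2, 8)
```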

This PR enables running inference (and training, although it’s unlikely anyone would want to) on MacBooks with Apple Silicon chips.
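The selection logic described above can be sketched roughly as follows; the helper name `get_device` is illustrative, not necessarily what the PR uses:

```python
import torch

def get_device() -> torch.device:
    """Prefer CUDA, then Apple Silicon MPS, falling back to CPU."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

# Pass the device explicitly wherever it was previously unspecified.
device = get_device()
model = torch.nn.Linear(4, 2).to(device)
x = torch.randn(1, 4, device=device)
print(model(x).shape)  # torch.Size([1, 2])
```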

P.S. Please let me know if anything needs to be changed for this contribution to be accepted. If it won't be accepted, that's fine too; just let me know, and I will continue working on this project in my own fork.
