ghsanti/torch_practice

Simple PyTorch AutoEncoder to play with.

Set Up

For Colab use:

!pip3 install "torch_practice[cu124]@git+https://github.com/ghsanti/torch_practice"

That command is for CUDA. For CPU, replace [cu124] with [cpu]. That's all you need.

You can check out simple examples in the Notebooks.
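
A quick sanity check in a Colab cell afterwards (the top-level import name is assumed to match the repo):

import torch
import torch_practice  # assumed import name, matching the repo

print(torch.__version__)          # a +cu124 build if the CUDA extra installed
print(torch.cuda.is_available())  # True when Colab's GPU runtime is active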


Important remarks for devcontainers

The repo can be used from devcontainers, which is highly recommended. This package does not remove files, but it does write out:

  • timestamped folders for checkpoints (optional);
  • a data folder for the dataset, downloaded directly through PyTorch (no custom code).

The default locations are all set in the configuration file linked further down.
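
As an illustrative sketch only (these key names are hypothetical; the real fields live in DAEConfig, linked below), the write locations would be overridden through the same config dictionary used later in this README:

from torch_practice.default_config import default_config

config = default_config()
# Hypothetical key names, shown only to illustrate the idea:
# config["save_dir"] = "checkpoints"  # timestamped checkpoint folders
# config["data_dir"] = "data"         # dataset downloaded through PyTorch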

The container should set up any CPU system just fine.

  • It won't install any GPU libraries, nor will it allow use of MPS, which is a macOS feature; you'll be running Linux (Debian with Python 3.10).
  • It should be possible to install the GPU version from within the container, but this is untested.
  • To run the notebooks in VSCode, you may need: uv sync --extra cpu --extra ipynb
  • Or, from pip: pip install "ipykernel>6.29"

Non-Colab install

For a new project:

  1. Install uv
  2. Then run:
uv venv --python 3.10
source .venv/bin/activate
uv add torch_practice[cpu]@git+https://github.com/ghsanti/torch_practice@dev

Or from pip (it currently needs Python 3.10):

EXTRA=cpu
URL='git+https://github.com/ghsanti/torch_practice@dev'
python3 -m pip install "torch_practice[${EXTRA}]@${URL}"

For GPUs, use the extra cu124 or cu121 instead of cpu.

For other systems, cpu will work (incl. Apple Silicon like M1s).
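
A common runtime pattern that matches these extras (plain PyTorch, not specific to this package) is picking the best available device:

import torch

# Prefer CUDA (the cu124/cu121 extras), then MPS on Apple Silicon, else CPU.
device = (
    "cuda" if torch.cuda.is_available()
    else "mps" if torch.backends.mps.is_available()
    else "cpu"
)
print(device)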

If you want a pre-release, you can try something like this (within your venv!):

 python -m ensurepip
 python -m pip install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cpu

Taken from the 'nightly' tab on PyTorch's 'Start Locally' page.
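
To confirm a nightly build is active, check the reported versions; pre-releases carry a dev suffix:

import torch
import torchvision

# Nightly builds report versions like "2.x.0.devYYYYMMDD+cpu".
print(torch.__version__, torchvision.__version__)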

Run

One can then run it:

python -m torch_practice.simple_train

For custom configurations, write a simple script:

from torch_practice.simple_train import train
from torch_practice.default_config import default_config

config = default_config()
config["n_workers"] = 3

# then train it.
train(config)
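
Here n_workers presumably maps to the DataLoader's num_workers; DAEConfig (see Configuration below) documents the full set of options.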

Configuration

The configuration "blueprint" is DAEConfig, defined in this file.
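
To see every available key and its default, one can print the dictionary (using only imports already shown above):

from torch_practice.default_config import default_config

config = default_config()
for key, value in config.items():
    print(key, "=", value)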

Reproducibility

Basic practices, from the [docs](https://pytorch.org/docs/stable/notes/randomness.html):

Completely reproducible results are not guaranteed across PyTorch releases, individual commits, or different platforms.

To control sources of randomness, one can pass a seed in the configuration dictionary. This controls some ops and data loading.
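
For example, assuming the key is literally named seed (check DAEConfig for the exact field):

from torch_practice.default_config import default_config
from torch_practice.simple_train import train

config = default_config()
config["seed"] = 42  # assumed key name; controls some ops and data loading
train(config)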

Dev

Simple steps:

  1. Fork.
  2. Clone your fork and run:
pip install uv
uv venv
source .venv/bin/activate
uv sync --all-extras
# non-cpu users need extra torch installs.

Checking out to a Codespace installs everything. Activate the venv using:

source .venv/bin/activate
  • In both cases, remember to select the .venv Python interpreter in VSCode.
  • Use absolute imports.

Build

uv pip install --upgrade build
uv build
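
By default, uv build writes the sdist and wheel to the dist/ directory.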
