# Overview

This repository is a lightweight wrapper containing PyTorch code that is common and (hopefully) helpful to most projects built on PyTorch. It is built with 3 main principles in mind:
- Make PyTorch accessible to people without much in-depth knowledge of it, while providing enormous flexibility and support for hardcore users
- Under-the-hood optimization for fast and memory-efficient performance
- Ability to change all settings (e.g. model, loss, metrics, devices, hyperparameters, artifact directories) directly from config
# Features
In a nutshell, it has code for:
- Training / testing models
  - Option to retrain on all data (without performing evaluation on a separate data set)
- Provision to freeze / unfreeze (all / given) weights of the model (see the sketch after this list)
- Sending the model to device(s)
- Saving / loading / removing / copying state dicts / model checkpoints
  - This checkpointing can be disabled from config for faster development
- Early stopping
- Sample weighting
- Properly sending the model / optimizer / batch to device(s)
- Defining custom train / test losses and evaluation criteria directly from config
  - Supports most common losses / metrics for regression and binary / multi-class / multi-label classification
  - You may specify as many as you like
- Cleanly stopping training at any point without losing progress
- Making predictions
- Loading back best (or any) model and printing + plotting all losses + eval metrics
- etc.
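
For instance, "freezing" weights, one of the features listed above, simply means disabling gradient computation for the chosen parameters. Here is a minimal plain-PyTorch sketch of the idea (it illustrates the concept only, not this package's actual helper functions):

```python
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 2))

# Freeze all weights: no gradients will be computed or updated for them
for param in model.parameters():
    param.requires_grad = False

# Unfreeze only the final layer, e.g. for fine-tuning
for param in model[-1].parameters():
    param.requires_grad = True
```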
# Installation

To install this package, you must have [pytorch](https://pytorch.org/) (and [transformers](https://github.com/huggingface/transformers) for accessing NLP-based functionalities) installed. Then you can simply install this package from source:

```bash
git clone https://github.com/ranamihir/pytorch_common.git
cd pytorch_common
conda env create -f requirements.yaml # If you don't already have a pytorch-enabled conda environment
conda activate pytorch_common # <-- Replace with your environment name
pip install .
```
which will create an environment called `pytorch_common` for you with all the required dependencies and this package installed.

If you'd like access to the NLP-related functionalities (specifically for [transformers](https://github.com/huggingface/transformers/)), make sure to install it as below instead:
```bash
pip install ".[nlp]"
```
# Usage
The default [config](https://github.com/ranamihir/pytorch_common/blob/master/pytorch_common/configs/config.yaml) can be loaded, and overridden with a user-specified dictionary, as follows:
```python
from pytorch_common.config import load_pytorch_common_config

# Create your own config (or load it from a yaml file);
# the key below is purely illustrative
dictionary = {"batch_size": 32}

# Override the default config with your project-specific settings
config = load_pytorch_common_config(dictionary)
```
For more details on getting started, check out the [basic usage notebook](https://github.com/ranamihir/pytorch_common/blob/master/notebooks/basic_usage.ipynb) and other examples in the [notebooks](https://github.com/ranamihir/pytorch_common/blob/master/notebooks/) folder.

More detailed examples highlighting the full functionality of this package can be found in the [examples](https://github.com/ranamihir/pytorch_common/tree/master/examples) directory.
## Config
A powerful advantage of using this repository is the ability to change a large number of settings related to PyTorch, and more generally, deep learning, directly from YAML, instead of having to worry about making code changes.

To do this, all you need to do is invoke the `load_pytorch_common_config()` function (with your project dictionary as input, if required). This will allow you to edit all `pytorch_common`-supported settings in your project dictionary / YAML, or use the default ones for those not specified.
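
Here is a minimal sketch of the idea; the setting names shown are illustrative placeholders rather than the package's definitive keys (the actual supported settings are linked below):

```python
from pytorch_common.config import load_pytorch_common_config

# A project dictionary mixing your own settings with pytorch_common ones.
# All key names below are illustrative placeholders.
dictionary = {
    "my_project_flag": True,  # your own project-specific setting
    "device": "cuda:0",       # e.g. a device setting understood by pytorch_common
    "epochs": 10,             # e.g. a training setting understood by pytorch_common
}

# Settings not specified above fall back to pytorch_common's defaults
config = load_pytorch_common_config(dictionary)
```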
The list of all supported configuration settings can be found [here](https://github.com/ranamihir/pytorch_common/blob/master/pytorch_common/configs/config.yaml).
# Testing
Several unit tests are present in the [tests](https://github.com/ranamihir/pytorch_common/tree/master/tests) directory. You may manually run them with:
```bash
python -m pytest tests/  # assuming pytest as the test runner
```
In the future, I intend to move the tests to CI.
# To-do's
I have some enhancements in mind which I haven't gotten around to adding to this repo yet:
- Adding automatic mixed precision (AMP) training, enabled directly from config
- Enabling distributed training across servers

This repo is a personal project, and as such, has not been heavily tested. It is (and will likely always be) a work-in-progress, as I try my best to keep it current with the advancements in PyTorch.

If you come across any bugs, or have questions / suggestions, please consider opening an issue, [reaching out to me](mailto:[email protected]), or better yet, sending across a PR. :)