solved all conflicts with master
Javier Rodriguez Zaurin authored and Javier Rodriguez Zaurin committed Sep 1, 2022
2 parents bc5e0d2 + 0d956f2 commit 0ea0422
Showing 275 changed files with 140,417 additions and 1,650 deletions.
3 changes: 2 additions & 1 deletion .gitignore
@@ -17,9 +17,10 @@ Untitled*.ipynb

# data related dirs
tmp_data/
model_weights/
tmp_dir/
weights/
pretrained_weights/
model_weights/

# Unit Tests/Coverage
.coverage
11 changes: 8 additions & 3 deletions README.md
@@ -24,6 +24,8 @@ text and images using Wide and Deep models in Pytorch

**Experiments and comparison with `LightGBM`**: [TabularDL vs LightGBM](https://github.com/jrzaurin/tabulardl-benchmark)

**Slack**: if you want to contribute or just want to chat with us, join [slack](https://join.slack.com/t/pytorch-widedeep/shared_invite/zt-soss7stf-iXpVuLeKZz8lGTnxxtHtTw)

The content of this document is organized as follows:

1. [introduction](#introduction)
@@ -142,9 +144,12 @@ Note that while there are scientific publications for the TabTransformer,
SAINT and FT-Transformer, the TabFastFormer and TabPerceiver are our own
adaptations of those algorithms for tabular data.

For details on these models (and all the other models in the library for the
different data modes) and their corresponding options please see the examples
in the Examples folder and the documentation.
In addition, Self-Supervised pre-training can be used for all `deeptabular`
models, with the exception of the `TabPerceiver`. Self-Supervised
pre-training is available via two routines, which we refer to as the
encoder-decoder method and the contrastive-denoising method. Please see the
documentation and the examples for details on this functionality, and all
other options in the library.
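
The encoder-decoder routine mentioned in that README hunk can be pictured as a toy reconstruction loop. The sketch below is illustrative only (plain Python, not pytorch-widedeep's actual trainers or their signatures; the library's own routines are described in its documentation): a 1-d linear "encoder" and "decoder" learn to reconstruct clean inputs from noise-corrupted ones.

```python
import random

random.seed(0)
# Toy "tabular" data: 200 scalar samples in [-1, 1].
data = [random.uniform(-1.0, 1.0) for _ in range(200)]

w_enc, w_dec = 0.5, 0.5  # encoder and decoder weights
lr = 0.05


def mse(xs, w_e, w_d):
    # Mean squared reconstruction error on clean inputs.
    return sum((w_d * (w_e * x) - x) ** 2 for x in xs) / len(xs)


loss_before = mse(data, w_enc, w_dec)

for _ in range(200):
    for x in data:
        noisy = x + random.gauss(0.0, 0.1)  # corrupt the input
        z = w_enc * noisy                   # encode
        x_hat = w_dec * z                   # decode / reconstruct
        err = x_hat - x                     # error vs the clean input
        # Gradients of (x_hat - x)^2 w.r.t. each weight (plain SGD).
        g_dec = 2 * err * z
        g_enc = 2 * err * w_dec * noisy
        w_dec -= lr * g_dec
        w_enc -= lr * g_enc

loss_after = mse(data, w_enc, w_dec)
```

After this loop the reconstruction loss drops by orders of magnitude; in the real library the pre-trained encoder (e.g. a `deeptabular` model) would then be fine-tuned on the supervised task.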

### Installation

2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
1.1.2
1.2.0
2 changes: 1 addition & 1 deletion examples/notebooks/01_Preprocessors_and_utils.ipynb
@@ -702,7 +702,7 @@
"name": "stderr",
"output_type": "stream",
"text": [
"100%|██████████████████████████████████████████████████████████████████████████████████████████████| 1001/1001 [00:01<00:00, 601.98it/s]\n"
"100%|█████████████████████████████████████████████████████████████████████████████████████████████████████| 1001/1001 [00:01<00:00, 601.56it/s]\n"
]
},
{
8 changes: 4 additions & 4 deletions examples/notebooks/02_model_components.ipynb
@@ -180,7 +180,7 @@
{
"data": {
"text/plain": [
"tensor([-0.1975], grad_fn=<AddBackward0>)"
"tensor([-0.3839], grad_fn=<AddBackward0>)"
]
},
"execution_count": 7,
@@ -200,7 +200,7 @@
{
"data": {
"text/plain": [
"tensor([-0.1975], grad_fn=<AddBackward0>)"
"tensor([-0.3839], grad_fn=<AddBackward0>)"
]
},
"execution_count": 8,
@@ -323,7 +323,7 @@
" )\n",
" (cont_norm): BatchNorm1d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
" )\n",
" (tab_mlp): MLP(\n",
" (encoder): MLP(\n",
" (mlp): Sequential(\n",
" (dense_layer_0): Sequential(\n",
" (0): Dropout(p=0.1, inplace=False)\n",
@@ -474,7 +474,7 @@
"metadata": {},
"outputs": [],
"source": [
"resnet = Vision(pretrained_model_name=\"resnet18\", n_trainable=0)"
"resnet = Vision(pretrained_model_setup=\"resnet18\", n_trainable=0)"
]
},
{
80 changes: 33 additions & 47 deletions examples/notebooks/03_Binary_Classification_with_Defaults.ipynb

Large diffs are not rendered by default.

21 changes: 14 additions & 7 deletions examples/notebooks/04_regression_with_images_and_text.ipynb
@@ -421,7 +421,7 @@
"name": "stderr",
"output_type": "stream",
"text": [
"100%|██████████████████████████████████████████████████████████████████████████████████████████████| 1001/1001 [00:01<00:00, 586.65it/s]\n"
"100%|█████████████████████████████████████████████████████████████████████████████████████████████████████| 1001/1001 [00:01<00:00, 514.73it/s]\n"
]
},
{
@@ -473,7 +473,7 @@
")\n",
"\n",
"# Pretrained Resnet 18\n",
"resnet = Vision(pretrained_model_name=\"resnet18\", n_trainable=4)"
"resnet = Vision(pretrained_model_setup=\"resnet18\", n_trainable=4)"
]
},
{
@@ -523,8 +523,8 @@
"name": "stderr",
"output_type": "stream",
"text": [
"epoch 1: 100%|████████████████████████████████████████████████████████████████████████████████| 25/25 [00:53<00:00, 2.13s/it, loss=132]\n",
"valid: 100%|████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:11<00:00, 1.63s/it, loss=122]\n"
"epoch 1: 100%|███████████████████████████████████████████████████████████████████████████████████████| 25/25 [00:23<00:00, 1.09it/s, loss=132]\n",
"valid: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:05<00:00, 1.28it/s, loss=122]\n"
]
}
],
@@ -672,15 +672,22 @@
"name": "stderr",
"output_type": "stream",
"text": [
"epoch 1: 100%|████████████████████████████████████████████████████████████████████████████████| 25/25 [00:52<00:00, 2.12s/it, loss=112]\n",
"valid: 100%|███████████████████████████████████████████████████████████████████████████████████| 7/7 [00:10<00:00, 1.56s/it, loss=94.8]"
"epoch 1: 100%|███████████████████████████████████████████████████████████████████████████████████████| 25/25 [00:28<00:00, 1.13s/it, loss=107]\n",
"valid: 100%|██████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:06<00:00, 1.13it/s, loss=93.5]"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Model weights after training corresponds to the those of the final epoch which might not be the best performing weights. Usethe 'ModelCheckpoint' Callback to restore the best epoch weights.\n"
"Model weights after training corresponds to the those of the final epoch which might not be the best performing weights. Use the 'ModelCheckpoint' Callback to restore the best epoch weights.\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n"
]
},
{
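The notebook cells in this diff pass `n_trainable` to `Vision` (e.g. `n_trainable=0` for a fully frozen feature extractor, `n_trainable=4` to fine-tune the last blocks). A toy sketch of what such a freezing option means, under the assumption that `n_trainable` counts trailing trainable layers (plain Python, not the library's implementation):

```python
def set_trainable_layers(layers, n_trainable):
    """Map each layer name to a requires-grad-style flag, keeping only the
    last n_trainable layers trainable (n_trainable=0 freezes everything)."""
    cutoff = len(layers) - n_trainable
    return {name: i >= cutoff for i, name in enumerate(layers)}


# Hypothetical layer names of a resnet18-style backbone, input to output.
backbone = ["conv1", "layer1", "layer2", "layer3", "layer4", "fc"]

fully_frozen = set_trainable_layers(backbone, n_trainable=0)
partially_trainable = set_trainable_layers(backbone, n_trainable=4)
```

In a real PyTorch model the flags would be applied by setting `requires_grad` on each layer's parameters, so frozen layers keep their pretrained weights while the trailing layers adapt to the new task.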

