
Commit d9eb63a

docs: switch links from PDF to abstract
1 parent 37e0798 commit d9eb63a

File tree

9 files changed: +28 -28 lines changed


course_UvA-DL/03-initialization-and-optimization/Initialization_and_Optimization.py

Lines changed: 3 additions & 3 deletions
@@ -523,7 +523,7 @@ def xavier_init(model):
 #
 # Thus, we see that we have an additional factor of 1/2 in the equation, so that our desired weight variance becomes $2/d_x$.
 # This gives us the Kaiming initialization (see [He, K. et al.
-# (2015)](https://arxiv.org/pdf/1502.01852.pdf)).
+# (2015)](https://arxiv.org/abs/1502.01852)).
 # Note that the Kaiming initialization does not use the harmonic mean between input and output size.
 # In their paper (Section 2.2, Backward Propagation, last paragraph), they argue that using $d_x$ or $d_y$ both lead to stable gradients throughout the network, and only depend on the overall input and output size of the network.
 # Hence, we can use here only the input $d_x$:
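For quick reference, the initialization this hunk documents amounts to sampling each weight with variance $2/d_x$, where $d_x$ is the layer's input dimension. A minimal sketch in the spirit of the xavier_init helper named in the hunk header (the name kaiming_init, the zero bias, and the assumption of plain nn.Linear weight matrices are illustrative, not taken from the changed file):

import torch.nn as nn

def kaiming_init(model: nn.Module) -> None:
    # Sample weights with variance 2 / d_x (fan-in only), as described above.
    for name, param in model.named_parameters():
        if name.endswith("bias"):
            param.data.fill_(0)  # biases start at zero
        else:
            d_x = param.shape[1]  # fan-in of an nn.Linear weight matrix
            param.data.normal_(std=(2.0 / d_x) ** 0.5)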
@@ -1095,7 +1095,7 @@ def comb_func(w1, w2):
 # The short answer: no.
 # There are many papers saying that in certain situations, SGD (with momentum) generalizes better, whereas Adam often tends to overfit [5,6].
 # This is related to the idea of finding wider optima.
-# For instance, see the illustration of different optima below (credit: [Keskar et al., 2017](https://arxiv.org/pdf/1609.04836.pdf)):
+# For instance, see the illustration of different optima below (credit: [Keskar et al., 2017](https://arxiv.org/abs/1609.04836)):
 #
 # <center width="100%"><img src="flat_vs_sharp_minima.svg" width="500px"></center>
 #
@@ -1125,7 +1125,7 @@ def comb_func(w1, w2):
 # "Understanding the difficulty of training deep feedforward neural networks."
 # Proceedings of the thirteenth international conference on artificial intelligence and statistics.
 # 2010.
-# [link](http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf)
+# [link](https://proceedings.mlr.press/v9/glorot10a)
 #
 # [2] He, Kaiming, et al.
 # "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification."

course_UvA-DL/04-inception-resnet-densenet/Inception_ResNet_DenseNet.py

Lines changed: 3 additions & 3 deletions
@@ -243,7 +243,7 @@ def configure_optimizers(self):
         # We will support Adam or SGD as optimizers.
         if self.hparams.optimizer_name == "Adam":
             # AdamW is Adam with a correct implementation of weight decay (see here
-            # for details: https://arxiv.org/pdf/1711.05101.pdf)
+            # for details: https://arxiv.org/abs/1711.05101)
             optimizer = optim.AdamW(self.parameters(), **self.hparams.optimizer_hparams)
         elif self.hparams.optimizer_name == "SGD":
             optimizer = optim.SGD(self.parameters(), **self.hparams.optimizer_hparams)
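For readers outside the LightningModule context, a minimal standalone sketch of the optimizer branch touched above (the toy model and hyperparameter values are placeholders, not taken from the changed file):

import torch.nn as nn
import torch.optim as optim

model = nn.Linear(32, 10)  # placeholder model
optimizer_name = "Adam"
optimizer_hparams = {"lr": 1e-3, "weight_decay": 1e-4}

if optimizer_name == "Adam":
    # AdamW decouples weight decay from the adaptive gradient update.
    optimizer = optim.AdamW(model.parameters(), **optimizer_hparams)
elif optimizer_name == "SGD":
    optimizer = optim.SGD(model.parameters(), momentum=0.9, **optimizer_hparams)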
@@ -869,8 +869,8 @@ def forward(self, x):
 # One difference to the GoogleNet training is that we explicitly use SGD with Momentum as optimizer instead of Adam.
 # Adam often leads to a slightly worse accuracy on plain, shallow ResNets.
 # It is not 100% clear why Adam performs worse in this context, but one possible explanation is related to ResNet's loss surface.
-# ResNet has been shown to produce smoother loss surfaces than networks without skip connections (see [Li et al., 2018](https://arxiv.org/pdf/1712.09913.pdf) for details).
-# A possible visualization of the loss surface with/without skip connections is below (figure credit - [Li et al.](https://arxiv.org/pdf/1712.09913.pdf)):
+# ResNet has been shown to produce smoother loss surfaces than networks without skip connections (see [Li et al., 2018](https://arxiv.org/abs/1712.09913) for details).
+# A possible visualization of the loss surface with/without skip connections is below (figure credit - [Li et al.](https://arxiv.org/abs/1712.09913)):
 #
 # <center width="100%"><img src="resnet_loss_surface.png" style="display: block; margin-left: auto; margin-right: auto;" width="600px"/></center>
 #

course_UvA-DL/05-transformers-and-MH-attention/Transformers_MHAttention.py

Lines changed: 1 addition & 1 deletion
@@ -658,7 +658,7 @@ def forward(self, x):
 # In fact, training a deep Transformer without learning rate warm-up can make the model diverge
 # and achieve a much worse performance on training and testing.
 # Take for instance the following plot by [Liu et al.
-# (2019)](https://arxiv.org/pdf/1908.03265.pdf) comparing Adam-vanilla (i.e. Adam without warm-up)
+# (2019)](https://arxiv.org/abs/1908.03265) comparing Adam-vanilla (i.e. Adam without warm-up)
 # vs Adam with a warm-up:
 #
 # <center width="100%"><img src="warmup_loss_plot.svg" width="350px"></center>
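The warm-up mentioned in this hunk can be expressed as a learning-rate multiplier that ramps linearly from 0 to 1 over the first optimization steps. A minimal sketch using a plain LambdaLR schedule (the toy model, the Adam optimizer, and warmup_steps=100 are placeholders, not the tutorial's actual scheduler):

import torch.nn as nn
import torch.optim as optim

model = nn.Linear(16, 16)  # placeholder model
optimizer = optim.Adam(model.parameters(), lr=1e-3)
warmup_steps = 100
scheduler = optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda step: min(1.0, (step + 1) / warmup_steps)
)
for _ in range(warmup_steps):
    optimizer.step()   # a backward pass would normally precede this
    scheduler.step()   # the learning rate grows linearly towards the base lr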

course_UvA-DL/06-graph-neural-networks/GNN_overview.py

Lines changed: 1 addition & 1 deletion
@@ -744,7 +744,7 @@ def print_results(result_dict):
 # Tutorials and papers for this topic include:
 #
 # * [PyTorch Geometric example](https://github.com/rusty1s/pytorch_geometric/blob/master/examples/link_pred.py)
-# * [Graph Neural Networks: A Review of Methods and Applications](https://arxiv.org/pdf/1812.08434.pdf), Zhou et al.
+# * [Graph Neural Networks: A Review of Methods and Applications](https://arxiv.org/abs/1812.08434), Zhou et al.
 # 2019
 # * [Link Prediction Based on Graph Neural Networks](https://papers.nips.cc/paper/2018/file/53f0d7c537d99b3824f0f99d62ea2428-Paper.pdf), Zhang and Chen, 2018.

course_UvA-DL/09-normalizing-flows/NF_image_modeling.py

Lines changed: 1 addition & 1 deletion
@@ -1384,7 +1384,7 @@ def visualize_dequant_distribution(model: ImageFlow, imgs: Tensor, title: str =
 # and we have the guarantee that every possible input $x$ has a corresponding latent vector $z$.
 # However, even beyond continuous inputs and images, flows can be applied and allow us to exploit
 # the data structure in latent space, as e.g. on graphs for the task of molecule generation [6].
-# Recent advances in [Neural ODEs](https://arxiv.org/pdf/1806.07366.pdf) allow a flow with an infinite number of layers,
+# Recent advances in [Neural ODEs](https://arxiv.org/abs/1806.07366) allow a flow with an infinite number of layers,
 # called Continuous Normalizing Flows, whose potential is yet to be fully explored.
 # Overall, normalizing flows are an exciting research area which will continue to develop over the next couple of years.

course_UvA-DL/10-autoregressive-image-modeling/Autoregressive_Image_Modeling.py

Lines changed: 9 additions & 9 deletions
@@ -18,10 +18,10 @@
 # For instance, in autoregressive models, we cannot interpolate between two images because of the lack of a latent representation.
 # We will explore and discuss these benefits and drawbacks alongside our implementation.
 #
-# Our implementation will focus on the [PixelCNN](https://arxiv.org/pdf/1606.05328.pdf) [2] model which has been discussed in detail in the lecture.
+# Our implementation will focus on the [PixelCNN](https://arxiv.org/abs/1606.05328) [2] model which has been discussed in detail in the lecture.
 # Most current SOTA models use PixelCNN as their fundamental architecture,
 # and various additions have been proposed to improve the performance
-# (e.g. [PixelCNN++](https://arxiv.org/pdf/1701.05517.pdf) and [PixelSNAIL](http://proceedings.mlr.press/v80/chen18h/chen18h.pdf)).
+# (e.g. [PixelCNN++](https://arxiv.org/abs/1701.05517) and [PixelSNAIL](http://proceedings.mlr.press/v80/chen18h/chen18h.pdf)).
 # Hence, implementing PixelCNN is a good starting point for our short tutorial.
 #
 # First of all, we need to import our standard libraries. Similarly as in
@@ -173,7 +173,7 @@ def show_imgs(imgs):
 # If we now want to apply this to our convolutions, we need to ensure that the prediction of pixel 1
 # is not influenced by its own "true" input, and all pixels on its right and in any lower row.
 # In convolutions, this means that we want to set those entries of the weight matrix to zero that take pixels on the right and below into account.
-# As an example for a 5x5 kernel, see a mask below (figure credit - [Aaron van den Oord](https://arxiv.org/pdf/1606.05328.pdf)):
+# As an example for a 5x5 kernel, see a mask below (figure credit - [Aaron van den Oord](https://arxiv.org/abs/1606.05328)):
 #
 # <center width="100%" style="padding: 10px"><img src="masked_convolution.svg" width="150px"></center>
 #
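As a concrete illustration of the masking described in this hunk, a tiny standalone sketch that builds the 5x5 mask (tensor construction only; the tutorial's own MaskedConvolution layer is not reproduced here):

import torch

kernel_size = 5
mask = torch.ones(kernel_size, kernel_size)
mask[kernel_size // 2, kernel_size // 2:] = 0  # the centre position and everything to its right
mask[kernel_size // 2 + 1:, :] = 0             # every row below the centre
# Multiplying this mask element-wise with the convolution weights ensures that a pixel's
# prediction never depends on its own value or on pixels to the right and below.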
@@ -216,10 +216,10 @@ def forward(self, x):
 #
 # To build our own autoregressive image model, we could simply stack a few masked convolutions on top of each other.
 # This was actually the case for the original PixelCNN model, discussed in the paper
-# [Pixel Recurrent Neural Networks](https://arxiv.org/pdf/1601.06759.pdf), but this leads to a considerable issue.
+# [Pixel Recurrent Neural Networks](https://arxiv.org/abs/1601.06759), but this leads to a considerable issue.
 # When sequentially applying a couple of masked convolutions, the receptive field of a pixel
 # turns out to have a "blind spot" on the upper right side, as shown in the figure below
-# (figure credit - [Aaron van den Oord et al.](https://arxiv.org/pdf/1606.05328.pdf)):
+# (figure credit - [Aaron van den Oord et al.](https://arxiv.org/abs/1606.05328)):
 #
 # <center width="100%" style="padding: 10px"><img src="pixelcnn_blind_spot.svg" width="275px"></center>
 #
@@ -445,7 +445,7 @@ def show_center_recep_field(img, out):
 # For visualizing the receptive field, we assumed a very simplified stack of vertical and horizontal convolutions.
 # Obviously, there are more sophisticated ways of doing it, and PixelCNN uses gated convolutions for this.
 # Specifically, the Gated Convolution block in PixelCNN looks as follows
-# (figure credit - [Aaron van den Oord et al.](https://arxiv.org/pdf/1606.05328.pdf)):
+# (figure credit - [Aaron van den Oord et al.](https://arxiv.org/abs/1606.05328)):
 #
 # <center width="100%"><img src="PixelCNN_GatedConv.svg" width="700px" style="padding: 15px"/></center>
 #
@@ -506,7 +506,7 @@ def forward(self, v_stack, h_stack):
 # The architecture consists of multiple stacked GatedMaskedConv blocks, where we add an additional dilation factor to a few convolutions.
 # This is used to increase the receptive field of the model and allows the model to take a larger context into account during generation.
 # As a reminder, dilation in a convolution looks as follows
-# (figure credit - [Vincent Dumoulin and Francesco Visin](https://arxiv.org/pdf/1603.07285.pdf)):
+# (figure credit - [Vincent Dumoulin and Francesco Visin](https://arxiv.org/abs/1603.07285)):
 #
 # <center width="100%"><img src="https://raw.githubusercontent.com/vdumoulin/conv_arithmetic/master/gif/dilation.gif" width="250px"></center>
 #
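As a standalone reference for the dilation remark above, a minimal example (the layer sizes and padding are placeholders, not the tutorial's GatedMaskedConv configuration):

import torch
import torch.nn as nn

# A 3x3 kernel with dilation=2 covers a 5x5 input region with the same number of weights;
# padding=2 keeps the spatial size unchanged.
conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, dilation=2, padding=2)
out = conv(torch.randn(1, 1, 28, 28))
print(out.shape)  # torch.Size([1, 1, 28, 28])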
@@ -655,7 +655,7 @@ def test_step(self, batch, batch_idx):
 # %% [markdown]
 # The visualization shows that for predicting any pixel, we can take almost half of the image into account.
 # However, keep in mind that this is the "theoretical" receptive field and not necessarily
-# the [effective receptive field](https://arxiv.org/pdf/1701.04128.pdf), which is usually much smaller.
+# the [effective receptive field](https://arxiv.org/abs/1701.04128), which is usually much smaller.
 # For a stronger model, we should therefore try to increase the receptive
 # field even further. Especially, for the pixel on the bottom right, the
 # very last pixel, we would be allowed to take into account the whole
@@ -869,7 +869,7 @@ def autocomplete_image(img):
 # Interestingly, the pixel values 64, 128 and 191 also stand out, which is likely due to the quantization used during the creation of the dataset.
 # For RGB images, we would also see two peaks around 0 and 255,
 # but the values in between would be much more frequent than in MNIST
-# (see Figure 1 in the [PixelCNN++ paper](https://arxiv.org/pdf/1701.05517.pdf) for a visualization on CIFAR10).
+# (see Figure 1 in the [PixelCNN++ paper](https://arxiv.org/abs/1701.05517) for a visualization on CIFAR10).
 #
 # Next, we can visualize the distribution our model predicts (on average):

course_UvA-DL/11-vision-transformer/Vision_Transformer.py

Lines changed: 1 addition & 1 deletion
@@ -513,7 +513,7 @@ def train_model(**kwargs):
 # Dosovitskiy, Alexey, et al.
 # "An image is worth 16x16 words: Transformers for image recognition at scale."
 # International Conference on Learning Representations (2021).
-# [link](https://arxiv.org/pdf/2010.11929.pdf)
+# [link](https://arxiv.org/abs/2010.11929)
 #
 # Chen, Xiangning, et al.
 # "When Vision Transformers Outperform ResNets without Pretraining or Strong Data Augmentations."

course_UvA-DL/12-meta-learning/Meta_Learning.py

Lines changed: 3 additions & 3 deletions
@@ -1,6 +1,6 @@
 # %% [markdown]
 # <div class="center-wrapper"><div class="video-wrapper"><iframe src="https://www.youtube.com/embed/035rkmT8FfE" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></div></div>
-# Meta-Learning offers solutions to these situations, and we will discuss three popular algorithms: __Prototypical Networks__ ([Snell et al., 2017](https://arxiv.org/pdf/1703.05175.pdf)), __Model-Agnostic Meta-Learning / MAML__ ([Finn et al., 2017](http://proceedings.mlr.press/v70/finn17a.html)), and __Proto-MAML__ ([Triantafillou et al., 2020](https://openreview.net/pdf?id=rkgAGAVKPr)).
+# Meta-Learning offers solutions to these situations, and we will discuss three popular algorithms: __Prototypical Networks__ ([Snell et al., 2017](https://arxiv.org/abs/1703.05175)), __Model-Agnostic Meta-Learning / MAML__ ([Finn et al., 2017](http://proceedings.mlr.press/v70/finn17a.html)), and __Proto-MAML__ ([Triantafillou et al., 2020](https://openreview.net/pdf?id=rkgAGAVKPr)).
 # We will focus on the task of few-shot classification where the training and test set have distinct sets of classes.
 # For instance, we would train the model on the binary classifications of cats-birds and flowers-bikes, but during test time, the model would need to learn the difference between dogs and otters from 4 examples each, two classes we have not seen during training (Figure credit - [Lilian Weng](https://lilianweng.github.io/lil-log/2018/11/30/meta-learning.html)).
 #
@@ -417,7 +417,7 @@ def split_batch(imgs, targets):
 # $$\mathbf{v}_c=\frac{1}{|S_c|}\sum_{(\mathbf{x}_i,y_i)\in S_c}f_{\theta}(\mathbf{x}_i)$$
 #
 # where $S_c$ is the part of the support set $S$ for which $y_i=c$, and $\mathbf{v}_c$ represents the _prototype_ of class $c$.
-# The prototype calculation is visualized below for a 2-dimensional feature space and 3 classes (Figure credit - [Snell et al.](https://arxiv.org/pdf/1703.05175.pdf)).
+# The prototype calculation is visualized below for a 2-dimensional feature space and 3 classes (Figure credit - [Snell et al.](https://arxiv.org/abs/1703.05175)).
 # The colored dots represent encoded support elements with color-corresponding class label, and the black dots next to the class label are the averaged prototypes.
 #
 # <center width="100%"><img src="protonet_classification.svg" width="300px"></center>
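The prototype formula in this hunk reduces to a per-class mean of the encoded support features. A minimal sketch with random placeholder embeddings (tensor names and shapes are illustrative only):

import torch

features = torch.randn(8, 64)                     # encoded support set, shape [N, d]
targets = torch.tensor([0, 0, 1, 1, 2, 2, 0, 1])  # class label per support example
classes = targets.unique()
prototypes = torch.stack(
    [features[targets == c].mean(dim=0) for c in classes]
)  # shape [num_classes, d]: each row is the prototype v_c of class c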
@@ -1324,7 +1324,7 @@ def test_protomaml(model, dataset, k_shot=4):
 # [1] Snell, Jake, Kevin Swersky, and Richard S. Zemel.
 # "Prototypical networks for few-shot learning."
 # NeurIPS 2017.
-# ([link](https://arxiv.org/pdf/1703.05175.pdf))
+# ([link](https://arxiv.org/abs/1703.05175))
 #
 # [2] Chelsea Finn, Pieter Abbeel, Sergey Levine.
 # "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks."

lightning_examples/finetuning-scheduler/finetuning-scheduler.py

Lines changed: 6 additions & 6 deletions
@@ -609,18 +609,18 @@ def train() -> None:
 # %% [markdown]
 # ## Footnotes
 #
-# - [Howard, J., & Ruder, S. (2018)](https://arxiv.org/pdf/1801.06146.pdf). Fine-tuned Language
+# - [Howard, J., & Ruder, S. (2018)](https://arxiv.org/abs/1801.06146). Fine-tuned Language
 # Models for Text Classification. ArXiv, abs/1801.06146. [↩](#Scheduled-Fine-Tuning-with-the-Fine-Tuning-Scheduler-Extension)
-# - [Chronopoulou, A., Baziotis, C., & Potamianos, A. (2019)](https://arxiv.org/pdf/1902.10547.pdf).
+# - [Chronopoulou, A., Baziotis, C., & Potamianos, A. (2019)](https://arxiv.org/abs/1902.10547).
 # An embarrassingly simple approach for transfer learning from pretrained language models. arXiv
 # preprint arXiv:1902.10547. [↩](#Scheduled-Fine-Tuning-with-the-Fine-Tuning-Scheduler-Extension)
-# - [Peters, M. E., Ruder, S., & Smith, N. A. (2019)](https://arxiv.org/pdf/1903.05987.pdf). To tune or not to
+# - [Peters, M. E., Ruder, S., & Smith, N. A. (2019)](https://arxiv.org/abs/1903.05987). To tune or not to
 # tune? adapting pretrained representations to diverse tasks. arXiv preprint arXiv:1903.05987. [↩](#Scheduled-Fine-Tuning-with-the-Fine-Tuning-Scheduler-Extension)
-# - [Sivaprasad, P. T., Mai, F., Vogels, T., Jaggi, M., & Fleuret, F. (2020)](https://arxiv.org/pdf/1910.11758.pdf).
+# - [Sivaprasad, P. T., Mai, F., Vogels, T., Jaggi, M., & Fleuret, F. (2020)](https://arxiv.org/abs/1910.11758).
 # Optimizer benchmarking needs to account for hyperparameter tuning. In International Conference on Machine Learning
 # (pp. 9036-9045). PMLR. [↩](#Optimizer-Configuration)
-# - [Mosbach, M., Andriushchenko, M., & Klakow, D. (2020)](https://arxiv.org/pdf/2006.04884.pdf). On the stability of
+# - [Mosbach, M., Andriushchenko, M., & Klakow, D. (2020)](https://arxiv.org/abs/2006.04884). On the stability of
 # fine-tuning bert: Misconceptions, explanations, and strong baselines. arXiv preprint arXiv:2006.04884. [↩](#Optimizer-Configuration)
-# - [Loshchilov, I., & Hutter, F. (2016)](https://arxiv.org/pdf/1608.03983.pdf). Sgdr: Stochastic gradient descent with
+# - [Loshchilov, I., & Hutter, F. (2016)](https://arxiv.org/abs/1608.03983). Sgdr: Stochastic gradient descent with
 # warm restarts. arXiv preprint arXiv:1608.03983. [↩](#LR-Scheduler-Configuration)
 #
