forked from vlgiitr/papers_we_read
Commit 74dad0e (1 parent: 050b6d0)

Showing 3 changed files with 216 additions and 82 deletions.
CONTRIBUTING.md

@@ -0,0 +1,21 @@
# Contributing to papers_we_read

All kinds of paper summaries are welcome.

## How to Open a Pull Request (PR)

1. **Fork** and **Pull** the latest `papers_we_read`.
2. **Checkout** a new branch named `<PAPER_TITLE>`. (DO NOT use the master branch for PRs!)
3. **Commit** your summary on the new branch of your forked repo:
   - For ease of reviewing, please make a single commit before opening the PR, with the commit message `:zap: Add Summary for <PAPER_TITLE>`.
4. Create a PR with the title `:zap: Add Summary for <PAPER_TITLE>` (a command-line sketch of these steps is given below).

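As a minimal sketch of this workflow (assuming your fork lives at `https://github.com/<YOUR_USERNAME>/papers_we_read`; the placeholders and remote names are illustrative, not prescribed by this repo):

```bash
# 1. Clone your fork and pull the latest upstream changes
git clone https://github.com/<YOUR_USERNAME>/papers_we_read.git
cd papers_we_read
git remote add upstream https://github.com/vlgiitr/papers_we_read.git
git pull upstream master

# 2. Check out a new branch named after the paper (never work on master)
git checkout -b <PAPER_TITLE>

# 3. Add your summary and make a single commit with the expected message
git add summaries/<PAPER_TITLE>.md
git commit -m ":zap: Add Summary for <PAPER_TITLE>"

# 4. Push the branch to your fork, then open the PR on GitHub
git push origin <PAPER_TITLE>
```
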
## Summary Guidelines

1. All summaries must be written in Markdown, following the [Summary Template](Summary_Template.md).
2. The `.md` file should be named `<PAPER_TITLE>.md` and must be placed in the [summaries](summaries/) folder.
3. Put any images, animations, etc. used in the markdown file under the [images](images/) folder, as sketched below.

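For illustration, a summary for a hypothetical paper `FooNet` would be laid out as follows (the file names are made up):

```
summaries/
└── FooNet.md         # the summary, written from Summary_Template.md
images/
└── FooNet_model.png  # any figures referenced from FooNet.md
```
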
## PR Review

PRs will be reviewed and merged by the core members of the [Vision and Language Group IITR](https://vlgiitr.github.io/).
README.md

@@ -1,85 +1,168 @@
# Deep Learning Paper Summaries

Summaries for papers discussed by VLG.

## Introduction

This repo houses summaries of various exciting works in the field of **Deep Learning**. You can contribute summaries of your own. Check out our [contributing guide](#contributing) to start contributing. Happy Reading & Summarizing!

## Contents

1. [Summaries](#summaries)
   - [2021](#2021)
   - [2020](#2020)
   - [2019](#2019)
   - [2018](#2018)
   - [2017](#2017)
   - [2016](#2016)
2. [Contributing](#contributing)
3. [Acknowledgements](#acknowledgements)
4. [License](#license)

## Summaries

### 2021

- #### GANcraft: Unsupervised 3D Neural Rendering of Minecraft Worlds [[Paper](https://arxiv.org/pdf/2104.07659)][[Review](./summaries/GANcraft.md)]

  - Zekun Hao, Arun Mallya, Serge Belongie, Ming-Yu Liu, **ICCV 2021**

- #### Creative Sketch Generation [[Paper](https://arxiv.org/abs/2011.10039)][[Review](https://github.com/Sandstorm831/papers_we_read/blob/master/summaries/DoodlerGAN%20summary.md)]

  - Songwei Ge, Devi Parikh, Vedanuj Goswami, C. Lawrence Zitnick, **ICLR 2021**

- #### GIRAFFE: Representing Scenes as Compositional Generative Neural Feature Fields [[Paper](https://arxiv.org/pdf/2011.12100)][[Review](./summaries/GIRAFFE.md)]

  - Michael Niemeyer, Andreas Geiger, **CVPR 2021**

- #### Binary TTC: A Temporal Geofence for Autonomous Navigation [[Paper](https://arxiv.org/abs/2101.04777)][[Review](./summaries/binary_TTC.md)]

  - Abhishek Badki, Orazio Gallo, Jan Kautz, Pradeep Sen, **CVPR 2021**

### 2020

- #### Unsupervised Learning of Probably Symmetric Deformable 3D Objects from Images in the Wild [[Paper](https://arxiv.org/abs/1911.11130)][[Review](./summaries/Unsupervised_learning_for_3D_objects_from_images.md)]

  - Shangzhe Wu, Christian Rupprecht, Andrea Vedaldi, **CVPR 2020**

- #### You Only Train Once: Loss-conditional training of deep networks [[Paper](https://openreview.net/pdf?id=HyxY6JHKwr)][[Review](./summaries/You_only_train_once.md)]

  - Alexey Dosovitskiy, Josip Djolonga, **ICLR 2020**

- #### GrokNet: Unified Computer Vision Model Trunk and Embeddings For Commerce [[Paper](https://ai.facebook.com/research/publications/groknet-unified-computer-vision-model-trunk-and-embeddings-for-commerce)][[Review](./summaries/GrokNet.md)]

  - Sean Bell, Yiqun Liu, Sami Alsheikh, Yina Tang, Ed Pizzi, M. Henning, Karun Singh, Omkar Parkhi, Fedor Borisyuk, **KDD 2020**

- #### Semantically multi-modal image synthesis [[Paper](https://arxiv.org/abs/2003.12697)][[Review](./summaries/Semantically_multi-modal_image_synthesis.md)]

  - Zhen Zhu, Zhiliang Xu, Ansheng You, Xiang Bai, **CVPR 2020**

- #### Learning to Simulate Dynamic Environments with GameGAN [[Paper](http://arxiv.org/abs/2005.12126)][[Review](./summaries/GameGAN.md)]

  - Seung Wook Kim, Yuhao Zhou, Jonah Philion, Antonio Torralba, Sanja Fidler, **CVPR 2020**

- #### Adversarial Policies: Attacking Deep Reinforcement Learning [[Paper](https://arxiv.org/abs/1905.10615)][[Review](./summaries/Adversarial_RL.md)]

  - Adam Gleave, Michael Dennis, Cody Wild, Neel Kant, Sergey Levine, Stuart Russell, **ICLR 2020**

- #### Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning [[Paper](https://arxiv.org/abs/2006.07733)][[Review](./summaries/BYOL.md)]

  - Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H. Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, Rémi Munos, Michal Valko, **NeurIPS 2020**

### 2019

- #### ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks [[Paper](https://arxiv.org/abs/1908.02265)][[Review](./summaries/ViLBERT.md)]

  - Jiasen Lu, Dhruv Batra, Devi Parikh, Stefan Lee, **NIPS-2019**

- #### Stand-Alone Self-Attention in Vision Models [[Paper](https://arxiv.org/abs/1906.05909)][[Review](./summaries/vision_attention.md)]

  - Prajit Ramachandran, Niki Parmar, Ashish Vaswani, Irwan Bello, Anselm Levskaya, Jonathon Shlens, **NIPS-2019**

- #### Zero-Shot Entity Linking by Reading Entity Descriptions [[Paper](https://arxiv.org/abs/1906.07348)][[Review](./summaries/entity_linking.md)]

  - Lajanugen Logeswaran, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Jacob Devlin, Honglak Lee, **ACL-2019**

- #### Do you know that Florence is packed with visitors? Evaluating state-of-the-art models of speaker commitment [[Paper](https://www.aclweb.org/anthology/P19-1412/)][[Review](./summaries/florence.md)]

  - Nanjiang Jiang, Marie-Catherine de Marneffe, **ACL-2019**

- #### Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations [[Paper](https://papers.nips.cc/paper/8396-scene-representation-networks-continuous-3d-structure-aware-neural-scene-representations.pdf)][[Review](./summaries/srn.md)]

  - Vincent Sitzmann, Michael Zollhöfer, Gordon Wetzstein, **NIPS-2019**

- #### Emotion-Cause Pair Extraction: A New Task to Emotion Analysis in Texts [[Paper](https://arxiv.org/abs/1906.01267)][[Review](./summaries/ecpe.md)]

  - Rui Xia, Zixiang Ding, **ACL-2019**

- #### Putting an End to End-to-End: Gradient-Isolated Learning of Representations [[Paper](https://papers.nips.cc/paper/8568-putting-an-end-to-end-to-end-gradient-isolated-learning-of-representations.pdf)][[Review](./summaries/infomax.md)]

  - Sindy Löwe, Peter O'Connor, Bastiaan S. Veeling, **NIPS-2019**

- #### Bridging the Gap between Training and Inference for Neural Machine Translation [[Paper](https://arxiv.org/abs/1906.02448)][[Review](./summaries/NMT_Gap.md)]

  - Wen Zhang, Yang Feng, Fandong Meng, Di You, Qun Liu, **ACL-2019**

- #### Designing and Interpreting Probes with Control Tasks [[Paper](https://arxiv.org/abs/1909.03368)][[Review](./summaries/control_tasks.md)]

  - John Hewitt, Percy Liang, **EMNLP-2019**

- #### Specializing Word Embeddings (for Parsing) by Information Bottleneck [[Paper](https://arxiv.org/abs/1910.00163)][[Review](./summaries/info_bottleneck.md)]

  - Xiang Lisa Li, Jason Eisner, **EMNLP-2019**

- #### vGraph: A Generative Model for Joint Community Detection and Node Representational Learning [[Paper](https://arxiv.org/abs/1906.07159)][[Review](./summaries/vgraph.md)]

  - Fan-Yun Sun, Meng Qu, Jordan Hoffmann, Chin-Wei Huang, Jian Tang, **NIPS-2019**

- #### Uniform convergence may be unable to explain generalization in deep learning [[Paper](https://arxiv.org/abs/1902.04742)][[Review](./summaries/uniform_convergence.md)]

  - Vaishnavh Nagarajan, J. Zico Kolter, **NIPS-2019**

- #### SinGAN: Learning a Generative Model from a Single Natural Image [[Paper](https://arxiv.org/pdf/1905.01164)][[Review](./summaries/singan.md)]

  - Tamar Rott Shaham, Tali Dekel, Tomer Michaeli, **ICCV-2019**

- #### Graph U-Nets [[Paper](https://arxiv.org/abs/1905.05178)][[Review](./summaries/graph_unet.md)]

  - Hongyang Gao, Shuiwang Ji, **ICML-2019**

- #### Feature Denoising for Improving Adversarial Robustness [[Paper](https://arxiv.org/pdf/1812.03411)][[Review](./summaries/feature_denoising.md)]

  - Cihang Xie, Yuxin Wu, Laurens van der Maaten, Alan Yuille, Kaiming He, **CVPR-2019**

- #### This Looks Like That: Deep Learning for Interpretable Image Recognition [[Paper](https://arxiv.org/pdf/1806.10574.pdf)][[Review](./summaries/this_looks_like_that.md)]

  - Chaofan Chen, Oscar Li, Chaofan Tao, Alina Jade Barnett, Jonathan Su, Cynthia Rudin, **NIPS-2019**

### 2018

- #### CyCADA: Cycle-Consistent Adversarial Domain Adaptation [[Paper](https://arxiv.org/pdf/1711.03213.pdf)][[Review](./summaries/cycada.md)]

  - Judy Hoffman, Eric Tzeng, Taesung Park, Jun-Yan Zhu, Phillip Isola, Kate Saenko, Alexei A. Efros, Trevor Darrell, **ICML-2018**

### 2017

- #### Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks [[Paper](https://arxiv.org/abs/1703.10593)][[Review](./summaries/cyclegan.md)]

  - Jun-Yan Zhu, Taesung Park, Phillip Isola, Alexei A. Efros, **ICCV-2017**

- #### Densely Connected Convolutional Networks [[Paper](https://arxiv.org/abs/1608.06993)][[Review](./summaries/densenet.md)]

  - Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger, **CVPR-2017**

### 2016

- #### Siamese Recurrent Architectures for Learning Sentence Similarity [[Paper](https://dl.acm.org/citation.cfm?id=3016291)][[Review](./summaries/siamese.md)]

  - Jonas Mueller, Aditya Thyagarajan, **AAAI-2016**

## Contributing

We appreciate all contributions to this set of summaries. Please refer to [CONTRIBUTING.md](CONTRIBUTING.md) for the contributing guidelines.

## Acknowledgements

**papers_we_read** is an open-source repository that welcomes contributions and feedback. We hope this collection of summaries helps the DL community build the habit of reading and understanding research papers, a potent skill for any researcher. Most of our [contributors](https://github.com/vlgiitr/papers_we_read/graphs/contributors) are students enrolled in undergraduate programmes. We are grateful for all the contributions that help improve this collection of summaries.

## License

This repo is open-sourced under the [MIT License](LICENSE).
Summary_Template.md

@@ -0,0 +1,30 @@
# PAPER_TITLE

Author 1, Author 2, ..., **<CONFERENCE_NAME> <YEAR_OF_PUBLICATION>**

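For instance, filled in with the SinGAN entry from the README, the header would read:

```markdown
# SinGAN: Learning a Generative Model from a Single Natural Image

Tamar Rott Shaham, Tali Dekel, Tomer Michaeli, **ICCV 2019**
```
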
## Summary

Include a brief description of the paper, from an intuitive point of view where possible.

## Contributions

Major contributions of the paper in bullet points.

## Method

- A comprehensive description of the methodology proposed in the paper.
- You may include any images for better presentation.
- Mathematical equations may also be included for better understanding.

## Results

- Comments on the results in the paper.
- Comparisons with the baselines and existing SOTA.

## Two-Cents

Your personal opinion about the paper, including appreciation, criticism, and possible future directions for research.

## Resources

Any links to the project page, YouTube video, implementation, blog, etc. for the paper.