need to read more papers ahhh... for research... read more books for life, yikes
(NOTE: "(+)" indicates a would-recommend review, since the paper might be useful to others)
##implement_asap
- SIFT/FAST
- AlexNet
- R-CNN/Fast R-CNN/Mask R-CNN
- YOLO
- Attention is all you need
- GCN
- GAT
- GraphSAGE
- DeepSets
- PointNet
- Fabian Fuchs review
- Welling VAE
- Goodfellow GAN
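Several of the implement_asap items reduce to a few lines; e.g. the scaled dot-product attention at the heart of "Attention is all you need". A minimal NumPy sketch (my own illustration: single head, no masking, no learned projections; function name is mine):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # (n_q, n_k) similarity logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # numerically stable row softmax
    return weights @ V                               # convex combination of value rows

# toy check: 3 queries/keys/values of dim 4
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 4)), rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4)
```

The full transformer block wraps this in learned Q/K/V projections, multiple heads, and a feed-forward layer, but this is the core computation.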
###bias_fairness_in_AI
- scGNN
- Random labels
- feature interactions in NNs, here
- fuzzy jaccard index, FUJI score
- GAT by Velickovic, here
- distinct, Robinson lab on new DGE analysis: https://www.biorxiv.org/content/10.1101/2020.11.24.394213v3
- https://www.nature.com/articles/s41591-020-0870-z
- https://www.nature.com/articles/s41598-019-57025-2
- clr survey on ecg: https://arxiv.org/pdf/2103.12676.pdf
- Cactus
- Prototypical networks
- https://www.nature.com/articles/s41591-021-01335-4
- meta pseudolabels: https://openaccess.thecvf.com/content/CVPR2021/papers/Pham_Meta_Pseudo_Labels_CVPR_2021_paper.pdf
- CLOCS (somehow published at ICML!?...) https://icml.cc/Conferences/2021/Schedule?showEvent=8461
- doi:10.4070/kcj.2018.0446
- Artificial Intelligence Algorithm for Screening Heart Failure with Reduced Ejection Fraction Using Electrocardiography, doi:10.1097/MAT.0000000000001218
- Opening the "Black Box" of Artificial Intelligence for Detecting Heart Failure, ASAIO J
- Artificial intelligence for the diagnosis of heart failure, doi:10.1038/s41746-020-0261-3
- WGAN review
- similar idea, in Nat Mach Int
- IGMC
- single-cell + drug discovery, a review
- https://www.nature.com/articles/s41467-021-21997-5
- https://www.biorxiv.org/content/10.1101/2021.05.25.445658v2
- https://arxiv.org/abs/2106.02246
- Shapley networks; code here
- organ transplant by vdS; possibly published version: http://proceedings.mlr.press/v139/berrevoets21a.html
- MSR for data set similarity
- DeepSets
- set functions for time-series
- applications of data set similarity for bio, here
- another max welling article on norm flows
- https://arxiv.org/abs/2106.01345
- relevant for DINO, https://arxiv.org/abs/2106.05237
- feature selection, high-dim bio data, https://arxiv.org/abs/2001.08322
- https://arxiv.org/pdf/2106.09643.pdf
- sub-groups: https://arxiv.org/pdf/2103.03399.pdf
- dataset shift: http://proceedings.mlr.press/v130/subbaswamy21a/subbaswamy21a.pdf
- (time-series + causal inference): https://arxiv.org/ftp/arxiv/papers/2107/2107.01353.pdf
- interpretable GNNs, building on concepts-based explanations: https://arxiv.org/pdf/2107.07493.pdf
- simple way to reduce bias and extract more meaningful information: https://arxiv.org/pdf/1812.10352.pdf
- another vdS, on a tool, https://arxiv.org/abs/2106.04240
- explaining time series, another vdS, https://arxiv.org/abs/2106.05303
- important paper on how AI for covid cxr fails miserably and was not useful, https://www.nature.com/articles/s42256-021-00338-7
- how models with widespread use, such as the EPIC Sepsis Model, actually suck a lot, and we need more competency for ML to be useful: https://jamanetwork.com/journals/jamainternalmedicine/article-abstract/2781307
- https://openreview.net/forum?id=eJIJF3-LoZO
- https://openreview.net/forum?id=h0de3QWtGG
- https://openreview.net/forum?id=unI5ucw_Jk
- https://openreview.net/forum?id=PpshD0AXfA
- alaa branching into biomedical lit, https://www.nature.com/articles/s42256-021-00353-8
- google on predicting gex w/enformer: https://www.biorxiv.org/content/10.1101/2021.04.07.438649v1
desirable #1: a simpler classification of fields...
- transcriptome --> small molecule, https://www.nature.com/articles/s41467-019-13807-w
- perturbation modeling, in part with single-cell data: https://doi.org/10.1016/j.cels.2021.05.016
- from the Google Accelerated Science team, https://ai.googleblog.com/2020/04/applying-machine-learning-to-yeast.html AND https://research.google/pubs/pub49138/ for translational research
- https://www.biorxiv.org/content/10.1101/2021.05.28.446021v1 (Nir Yosef on lineage tracing)
- autoencoder for structural dynamics: https://doi.org/10.1016/j.cell.2015.03.050
- alphafold2: https://www.nature.com/articles/s41586-021-03819-2
- inception for time-series, https://arxiv.org/pdf/1909.04939.pdf
- sktime, the equivalent of sklearn for time-series, http://learningsys.org/neurips19/assets/papers/sktime_ml_systems_neurips2019.pdf
- voice2series: https://arxiv.org/abs/2106.09296 (ICML 2021)
- explaining time-series predictions (ICLR 2021), another vdS paper: https://arxiv.org/abs/2106.05303
- causality conditions for time-series by Scholkopf et al.: https://arxiv.org/abs/2005.08543 (ICLR 2021)
- Designing experiments to test pre-training for ts: https://arxiv.org/abs/2110.02095
- great lillian blog post on attention, https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html
- neural tangent kernel, https://arxiv.org/abs/1806.07572
- random fourier features, https://people.eecs.berkeley.edu/~brecht/papers/07.rah.rec.nips.pdf
- SimSiam, Chen & He, CVPR 2021 (here)
- metadata normalization to remove confounders in metadata (e.g., diff sites), within the network, https://openaccess.thecvf.com/content/CVPR2021/html/Lu_Metadata_Normalization_CVPR_2021_paper.html
- on all the contrastive learning stuff by FAIR: https://openaccess.thecvf.com/content/CVPR2021/html/Feichtenhofer_A_Large-Scale_Study_on_Unsupervised_Spatiotemporal_Representation_Learning_CVPR_2021_paper.html
- SOTA for tumor and organ segmentation as of CVPR 2021, https://openaccess.thecvf.com/content/CVPR2021/html/Zhang_DoDNet_Learning_To_Segment_Multi-Organ_and_Tumors_From_Multiple_Partially_CVPR_2021_paper.html
- better than VQ-VAE, discrete cosine transforms: https://arxiv.org/abs/2103.03841 (ICLR 2021)
- NeRF VAE, geometry of multiple scenes: https://arxiv.org/abs/2104.00587
- limits of pre-training: https://arxiv.org/abs/2110.02095
- Neural Radiance Fields (NeRF)
- resolving latent correlations in disentangled representation nets using weak supervision (another scholkopf): https://arxiv.org/abs/2006.07886
- a newer, more comprehensive follow-up to DAGNN (DAG-GNN?), the ICLR one; see: https://arxiv.org/abs/2101.07965
- textbook: https://mitpress.mit.edu/books/elements-causal-inference from Scholkopf and Jonas Peters (http://web.math.ku.dk/~peters/)
- Melanie Mitchell on abstraction: https://arxiv.org/abs/2102.10717
- Symbolic reasoning in NNs: https://arxiv.org/abs/2102.03406
- https://arxiv.org/abs/2007.02265 ACM-GCN for improved corr btw node embeddings & topology
- https://arxiv.org/pdf/2106.12575.pdf
- https://arxiv.org/pdf/2103.03212.pdf
- https://arxiv.org/abs/2107.10356 reading race in ConvNets on radiological data
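Of the methods listed above, Rahimi & Recht's random Fourier features are compact enough to sketch directly: sample frequencies from the kernel's spectral density (a Gaussian, for the RBF kernel) so that inner products of the features approximate kernel evaluations. A minimal NumPy sketch (function name and defaults are mine):

```python
import numpy as np

def rff_features(X, n_features=500, gamma=1.0, seed=0):
    """Random Fourier features approximating the RBF kernel
    k(x, y) = exp(-gamma * ||x - y||^2), per Rahimi & Recht (2007)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # frequencies drawn from the RBF kernel's spectral density: N(0, 2*gamma)
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2 / n_features) * np.cos(X @ W + b)

# z(x)^T z(y) should approximate k(x, y)
X = np.random.default_rng(1).normal(size=(5, 3))
Z = rff_features(X, n_features=20000)
approx = Z @ Z.T
exact = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
print(np.abs(approx - exact).max())  # approximation error; shrinks as n_features grows
```

With a linear model on top of `Z`, this turns kernel methods into cheap linear ones, which is why it pairs naturally with the neural tangent kernel reading above.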
###nlp
- On Position Embeddings in BERT, see here
- they really are amazing: a frozen pretrained transformer trained on language can do vision tasks w/o fine-tuning: https://arxiv.org/abs/2103.05247
- avoiding degeneration of self-attention: https://arxiv.org/abs/2103.03404
###ML4physics
- PDE solvers using GNNs from Welling et al., ICLR'22
- classic Raschka on model evaluation & selection: https://arxiv.org/abs/1811.12808
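For the position-embeddings item: the fixed sinusoidal scheme from Vaswani et al. (which BERT's learned embeddings replace) is a useful baseline to keep on hand. A minimal NumPy sketch (function name is mine):

```python
import numpy as np

def sinusoidal_position_embeddings(n_positions, d_model):
    """PE[pos, 2i] = sin(pos / 10000^(2i/d)), PE[pos, 2i+1] = cos(pos / 10000^(2i/d))."""
    pos = np.arange(n_positions)[:, None]      # (n_positions, 1)
    i = np.arange(0, d_model, 2)[None, :]      # even feature indices
    angles = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((n_positions, d_model))
    pe[:, 0::2] = np.sin(angles)               # even dims get sin
    pe[:, 1::2] = np.cos(angles)               # odd dims get cos
    return pe

pe = sinusoidal_position_embeddings(50, 16)
print(pe.shape)  # (50, 16)
```

Each position gets a unique, bounded code, and relative offsets correspond to fixed linear transforms of the embedding, which is the property the BERT position-embedding papers probe.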