- [ICML] Directed Acyclic Transformer for Non-Autoregressive Machine Translation
- [arXiv] A Survey on Non-Autoregressive Generation for Neural Machine Translation and Beyond
- [ACL] latent-GLAT: Glancing at Latent Variables for Parallel Text Generation
- [ACL] An Imitation Learning Curriculum for Text Editing with Non-Autoregressive Models
- [EMNLP] Non-Autoregressive Neural Machine Translation: A Call for Clarity
- [EMNLP] Multi-Granularity Optimization for Non-Autoregressive Translation
- [AAAI] Non-Autoregressive Translation with Layer-Wise Prediction and Deep Supervision
- [arXiv] MvSR-NAT: Multi-view Subset Regularization for Non-Autoregressive Machine Translation
- [CL] Sequence-Level Training for Non-Autoregressive Neural Machine Translation
- [EMNLP] Exploring Non-Autoregressive Text Style Transfer
- [EMNLP] Learning to Rewrite for Non-Autoregressive Neural Machine Translation
- [EMNLP] AligNART: Non-autoregressive Neural Machine Translation by Jointly Learning to Estimate Alignment and Translate
- [ICML] Order-Agnostic Cross Entropy for Non-Autoregressive Machine Translation
- [ICML] BANG: Bridging Autoregressive and Non-autoregressive Generation with Large Scale Pretraining
- [ACL] Rejuvenating Low-Frequency Words: Making the Most of Parallel Data in Non-Autoregressive Translation
- [ACL] Progressive Multi-Granularity Training for Non-Autoregressive Translation
- [ACL] GLAT: Glancing Transformer for Non-Autoregressive Neural Machine Translation
- [ACL] POS-Constrained Parallel Decoding for Non-autoregressive Generation
- [ACL Findings] Fully Non-autoregressive Neural Machine Translation: Tricks of the Trade
- [ACL SRW] Using Perturbed Length-aware Positional Encoding for Non-autoregressive Neural Machine Translation
- [EACL] Enriching Non-Autoregressive Transformer with Syntactic and Semantic Structures for Neural Machine Translation
- [EACL] Non-Autoregressive Text Generation with Pre-trained Language Models
- [NAACL] Non-Autoregressive Semantic Parsing for Compositional Task-Oriented Dialog
- [NAACL] Non-Autoregressive Translation by Learning Target Categorical Codes
- [NAACL] Multi-Task Learning with Shared Encoder for Non-Autoregressive Machine Translation
- [ICLR] Understanding and Improving Lexical Choice in Non-Autoregressive Translation
- [AAAI] Guiding Non-Autoregressive Neural Machine Translation Decoding with Reordering Information
- [arXiv] Listen and Fill in the Missing Letters: Non-Autoregressive Transformer for Speech Recognition
- [arXiv] Non-Autoregressive Neural Dialogue Generation
- [arXiv] Improving Fluency of Non-Autoregressive Machine Translation
- [arXiv] Semi-Autoregressive Training Improves Mask-Predict Decoding
- [arXiv] LAVA NAT: A Non-Autoregressive Translation Model with Look-Around Decoding and Vocabulary Attention
- [IJCAI] Task-Level Curriculum Learning for Non-Autoregressive Neural Machine Translation
- [COLING] Context-Aware Cross-Attention for Non-Autoregressive Translation
- [COLING] Infusing Sequential Information into Conditional Masked Translation Model with Self-Review Mechanism
- [NeurIPS] Incorporating BERT into Parallel Sequence Decoding with Adapters
- [EMNLP] Non-Autoregressive Machine Translation with Latent Alignments
- [EMNLP] Iterative Refinement in the Continuous Space for Non-Autoregressive Neural Machine Translation
- [EMNLP] SlotRefine: A Fast Non-Autoregressive Model for Joint Intent Detection and Slot Filling
- [INTERSPEECH] Mask CTC: Non-Autoregressive End-to-End ASR with CTC and Mask Predict
- [INTERSPEECH] Insertion-Based Modeling for End-to-End Automatic Speech Recognition
- [ACL] Learning to Recover from Multi-Modality Errors for Non-Autoregressive Neural Machine Translation
- [ACL] Jointly Masked Sequence-to-Sequence Model for Non-Autoregressive Neural Machine Translation
- [ACL] ENGINE: Energy-Based Inference Networks for Non-Autoregressive Machine Translation
- [ACL] Improving Non-autoregressive Neural Machine Translation with Monolingual Data
- [ACL] A Study of Non-autoregressive Model for Sequence Generation
- [ICML] Non-Autoregressive Neural Text-to-Speech
- [ICML] Aligned Cross Entropy for Non-Autoregressive Machine Translation
- [ICML] Parallel Machine Translation with Disentangled Context Transformer
- [ICML] Imputer: Sequence Modelling via Imputation and Dynamic Programming
- [ICML] An EM Approach to Non-autoregressive Conditional Sequence Generation
- [ICLR] Understanding Knowledge Distillation in Non-autoregressive Machine Translation
- [AAAI] Minimizing the Bag-of-Ngrams Difference for Non-Autoregressive Neural Machine Translation
- [AAAI] Latent-Variable Non-Autoregressive Neural Machine Translation with Deterministic Inference Using a Delta Posterior
- [AAAI] Fine-Tuning by Curriculum Learning for Non-Autoregressive Neural Machine Translation
- [arXiv] Non-autoregressive Transformer by Position Learning
- [NeurIPS] Levenshtein Transformer
- [NeurIPS] Fast Structured Decoding for Sequence Models
- [NeurIPS] FastSpeech: Fast, Robust and Controllable Text to Speech
- [EMNLP] Mask-Predict: Parallel Decoding of Conditional Masked Language Models
- [EMNLP] FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow
- [EMNLP] Hint-Based Training for Non-Autoregressive Machine Translation
- [ACL] Retrieving Sequential Information for Non-Autoregressive Neural Machine Translation
- [ACL] Imitation Learning for Non-Autoregressive Neural Machine Translation
- [AAAI] Non-Autoregressive Machine Translation with Auxiliary Regularization
- [AAAI] Non-Autoregressive Neural Machine Translation with Enhanced Decoder Input
- [ICML] Fast Decoding in Sequence Models Using Discrete Latent Variables
- [EMNLP] Deterministic Non-Autoregressive Neural Sequence Modeling by Iterative Refinement
- [EMNLP] End-to-End Non-Autoregressive Neural Machine Translation with Connectionist Temporal Classification
- [ICLR] Non-Autoregressive Neural Machine Translation
Changhan Wang ([email protected])