Add unified prompt model support for multilingual ASR and streaming inference #15666
enas-albasiri wants to merge 5 commits into
Conversation
- RNNT-only prompt model (EncDecRNNTBPEModelWithPrompt) and training script
- RNNT-only streaming config (fastconformer_transducer_bpe_streaming_prompt.yaml)
- Index-based dataset (LhotseSpeechToTextBpeDatasetWithPromptIndex) with per-dataset prompt_mode support (langID/auto/unified) via lhotse input_cfg tags
- Backward-compatible hybrid model: accepts both old prompt tensors and new prompt_indices with auto-detection
- Streaming inference: set_inference_prompt() + conformer_stream_step() override for both hybrid and RNNT-only models, with target_lang support in the standard cache-aware streaming inference script
- Config-driven strip_lang_tags in RNNT decoding to remove <xx-XX> tags from output
- Remove unused docs/TRAINING_GUIDE.md and hybrid streaming config
Signed-off-by: Enas Albasiri <ealbasiri@nvidia.com>
KunalDhawan
left a comment
Thanks @enas-albasiri! Could you also share some numbers and validate the accuracy of training monolingual/multilingual models with the unified-prompt approach?
Adding @pzelasko for review of LhotseSpeechToTextBpeDatasetWithPromptIndex
```python
    )
    all_hyp_or_transcribed_texts.append(decoded_out[0])
    best_hyp = None
else:
```
log_probs is not assigned in the RNNT branch, but is referenced in line 359. Calling conformer_stream_step(..., return_log_probs=True) while cur_decoder == "rnnt" will raise NameError: name 'log_probs' is not defined.
The conformer_stream_step function is rewritten based on the linked implementation, where log_probs only applies to the CTC model; the RNNT branch shouldn't return it.
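A minimal sketch of the guarded control flow (function and variable names here are hypothetical simplifications, not the real NeMo signature): initialize log_probs before branching so the RNNT path can never hit a NameError, and fail loudly if a caller requests log-probs from the RNNT decoder.

```python
def stream_step(cur_decoder: str, return_log_probs: bool = False):
    """Hypothetical, simplified control flow; not the real conformer_stream_step."""
    log_probs = None  # initialized up front so the RNNT branch never raises NameError
    if cur_decoder == "ctc":
        log_probs = [0.1, 0.9]  # stands in for the real CTC log-probs tensor
        hyp = "ctc-hyp"
    else:
        hyp = "rnnt-hyp"
    if return_log_probs:
        if log_probs is None:
            # explicit error instead of an accidental NameError
            raise ValueError("return_log_probs=True is only supported with the CTC decoder")
        return hyp, log_probs
    return hyp
```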
```python
# Strip language-ID tags (e.g. <en-US>) from decoded output.
# Enable for prompt-conditioned models that emit locale tags after punctuation.
strip_lang_tags: bool = True
```
This is a global default change that affects every existing RNNT model. The dataclass default (True) is also inconsistent with the runtime self.cfg.get('strip_lang_tags', False) (set at line 332 of this file); depending on whether cfg comes from the dataclass or YAML, users will get different behavior.
Could you set the dataclass default to False and align both, so models opt in only via the new YAML config? That is the safer, backward-compatible choice.
Set strip_lang_tags to False in the inference script.
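One way to align the two defaults, as a sketch (the dataclass, regex, and function names are simplified stand-ins, not the real RNNTDecodingConfig): default to False everywhere, so tag stripping is purely opt-in via YAML.

```python
import re
from dataclasses import dataclass


@dataclass
class DecodingConfigSketch:
    # False by default: existing RNNT models keep their current output;
    # prompt-conditioned models opt in via their YAML config.
    strip_lang_tags: bool = False


# matches locale tags like <en-US>, plus any trailing whitespace
_LANG_TAG_RE = re.compile(r"<[a-z]{2}-[A-Z]{2}>\s*")


def postprocess(text: str, cfg: DecodingConfigSketch) -> str:
    # mirrors a runtime lookup like cfg.get('strip_lang_tags', False):
    # the fallback now matches the dataclass default
    if getattr(cfg, "strip_lang_tags", False):
        return _LANG_TAG_RE.sub("", text).strip()
    return text
```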
```python
    trainer=None,
    validate_access_integrity=True,
):
    """Delegate to base EncDecRNNTBPEModel to avoid subclass substitution.
```
This restore_from workaround silently delegates to EncDecRNNTBPEModel.restore_from. As highlighted in your comment, restoring a checkpoint that was saved by EncDecRNNTBPEModelWithPrompt via this method will return a plain EncDecRNNTBPEModel, losing prompt_kernel, set_inference_prompt, and the streaming override. Please fix this.
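One way to keep the subclass, sketched with stand-in classes (the real restore_from deserializes a .nemo archive and takes many more arguments): delegate via super() from a classmethod so cls stays bound to the subclass, instead of calling the base class by name.

```python
class BaseRNNTModel:
    """Stand-in for EncDecRNNTBPEModel."""

    @classmethod
    def restore_from(cls, restore_path: str, **kwargs):
        # The real method rebuilds the model from a checkpoint; here we just
        # instantiate cls so the sketch is runnable.
        return cls()


class RNNTModelWithPrompt(BaseRNNTModel):
    """Stand-in for EncDecRNNTBPEModelWithPrompt."""

    def set_inference_prompt(self, lang: str) -> None:
        self._inference_prompt = lang

    @classmethod
    def restore_from(cls, restore_path: str, **kwargs):
        # super() keeps cls bound to the subclass, so the restored object
        # still carries prompt_kernel, set_inference_prompt, and the
        # streaming override rather than degrading to the base model
        return super().restore_from(restore_path, **kwargs)
```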
```python
)


class TokenizerWrapper:
```
This is similar to TokenizerWrapper in NeMo/nemo/collections/common/tokenizers/aggregate_tokenizer.py. Could you please upstream the desired changes to the existing TokenizerWrapper and import it here?
Important
The Update branch button must only be pressed on very rare occasions. An outdated branch never blocks the merge of a PR.
Please reach out to the automation team before pressing that button.
What does this PR do?
Add a unified prompt architecture for multilingual ASR, enabling language-ID conditioning for both Hybrid RNNT-CTC and RNNT models, as well as streaming inference support.
Collection: ASR
Changelog
- EncDecRNNTBPEModelWithPrompt — RNNT prompt-conditioned model with training script and 600M streaming config
- LhotseSpeechToTextBpeDatasetWithPromptIndex — index-based Lhotse dataset with per-dataset prompt mode support (langID/auto/unified) via lhotse input_cfg tags
- forward() in the hybrid model accepting both legacy prompt tensors [B, T, D] and new prompt indices [B]
- set_inference_prompt() + conformer_stream_step() override for streaming inference in both model variants
- target_lang parameter in speech_to_text_cache_aware_streaming_infer.py (defaults to "auto" for prompt models)
- strip_lang_tags config in RNNTDecodingConfig to remove <xx-XX> locale tags from decoded output

Usage
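A hedged sketch of how the new options might be wired up in configs (the field names come from the changelog; the exact nesting and the manifest path are assumptions, not the PR's actual schema):

```yaml
# Per-dataset prompt mode via lhotse input_cfg tags (hypothetical nesting)
input_cfg:
  - type: nemo_tarred
    manifest_filepath: /data/en/tarred/manifest.json
    tags:
      prompt_mode: langID      # one of: langID / auto / unified

# Decoding config opting in to locale-tag stripping
decoding:
  strip_lang_tags: true        # remove <xx-XX> tags from decoded output
```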
GitHub Actions CI
The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.
The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI remove and add the label again.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".
Before your PR is "Ready for review"
Pre checks:
PR Type:
If you haven't finished some of the above items you can still open a "Draft" PR.
Who can review?
Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines contain specific people who can review PRs to various areas.
Additional Information