Add unified prompt model support for multilingual ASR and streaming inference#15666

Draft
enas-albasiri wants to merge 5 commits into NVIDIA-NeMo:main from enas-albasiri:prompt-unified-architecture

Conversation

@enas-albasiri

Important

The Update branch button should only be pressed on very rare occasions.
An outdated branch is never blocking the merge of a PR.
Please reach out to the automation team before pressing that button.

What does this PR do ?

Add a unified prompt architecture for multilingual ASR, enabling language-ID conditioning for both Hybrid RNNT-CTC and RNNT models, along with streaming inference support.

Collection: ASR

Changelog

  • Add EncDecRNNTBPEModelWithPrompt — RNNT prompt-conditioned model with training script and 600M streaming config
  • Add LhotseSpeechToTextBpeDatasetWithPromptIndex — index-based Lhotse dataset with per-dataset prompt mode support (langID/auto/unified) via lhotse input_cfg tags
  • Add backward-compatible forward() in hybrid model accepting both legacy prompt tensors [B, T, D] and new prompt indices [B]
  • Add set_inference_prompt() + conformer_stream_step() override for streaming inference in both model variants
  • Add target_lang parameter to speech_to_text_cache_aware_streaming_infer.py (defaults to "auto" for prompt models)
  • Add strip_lang_tags config in RNNTDecodingConfig to remove <xx-XX> locale tags from decoded output
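The backward-compatible forward() described above can be sketched with a minimal shape check; the tensor class and function name below are illustrative stand-ins, not the actual NeMo API:

```python
from dataclasses import dataclass

# Illustrative stand-in for a tensor; only .ndim matters for the dispatch.
@dataclass
class FakeTensor:
    shape: tuple

    @property
    def ndim(self) -> int:
        return len(self.shape)

def resolve_prompt(prompt):
    """Auto-detect the prompt format by dimensionality:
    legacy prompt tensors are [B, T, D] (3-D), new prompt indices are [B] (1-D)."""
    if prompt.ndim == 3:
        return "legacy_tensor"
    if prompt.ndim == 1:
        return "indices"
    raise ValueError(f"Unsupported prompt shape: {prompt.shape}")
```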

Usage

Training with per-dataset prompt modes: in your lhotse input_cfg YAML, tag each dataset with a prompt_mode:

```yaml
- type: nemo_tarred
  tags: { prompt_mode: langID }    # AST data: always pass lang ID
- type: nemo_tarred
  tags: { prompt_mode: unified }   # ASR data: 50/50 ratio
- type: nemo_tarred
  tags: { prompt_mode: auto }      # Code-switching: always auto
```

Streaming inference with a target language:

```shell
python speech_to_text_cache_aware_streaming_infer.py \
    model_path=model.nemo \
    dataset_manifest=test.json \
    target_lang=en-US
```
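For streaming from Python, a hypothetical calling pattern based on the API names in the changelog (set_inference_prompt, conformer_stream_step); the class below is a toy stand-in and the real signatures may differ:

```python
# Toy stand-in: demonstrates the intended call order only, not NeMo internals.
class StreamingPromptModelSketch:
    def __init__(self):
        self.prompt_lang = None

    def set_inference_prompt(self, target_lang: str) -> None:
        # "auto" (the script default) lets the model infer the language.
        self.prompt_lang = target_lang

    def conformer_stream_step(self, audio_chunk: str) -> str:
        lang = self.prompt_lang or "auto"
        return f"<{lang}> decoded({audio_chunk})"

model = StreamingPromptModelSketch()
model.set_inference_prompt("en-US")   # set once before streaming starts
out = model.conformer_stream_step("chunk0")
```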

GitHub Actions CI

The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.

The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI, remove and re-add the label.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".

Before your PR is "Ready for review"

Pre checks:

  • [Y] Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (Ex: Numba, Pynini, Apex etc)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

PR Type:

  • [Y] New Feature
  • Bugfix
  • Documentation

If you haven't finished some of the above items, you can still open a "Draft" PR.

Who can review?

Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines list specific people who can review PRs to various areas.

Additional Information

  • Related to # (issue)

- RNNT-only prompt model (EncDecRNNTBPEModelWithPrompt) and training script
- RNNT-only streaming config (fastconformer_transducer_bpe_streaming_prompt.yaml)
- Index-based dataset (LhotseSpeechToTextBpeDatasetWithPromptIndex) with
  per-dataset prompt_mode support (langID/auto/unified) via lhotse input_cfg tags
- Backward-compatible hybrid model: accepts both old prompt tensors and new
  prompt_indices with auto-detection
- Streaming inference: set_inference_prompt() + conformer_stream_step() override
  for both hybrid and RNNT-only models, with target_lang support in standard
  cache-aware streaming inference script
- Config-driven strip_lang_tags in RNNT decoding to remove <xx-XX> tags from output
- Remove unused docs/TRAINING_GUIDE.md and hybrid streaming config
Signed-off-by: Enas Albasiri <ealbasiri@nvidia.com>
@copy-pr-bot

copy-pr-bot Bot commented May 6, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

Collaborator

@KunalDhawan KunalDhawan left a comment


Thanks @enas-albasiri! Could you also share some numbers and validate the accuracy of training monolingual/multilingual models with the unified-prompt approach?

Adding @pzelasko for review of LhotseSpeechToTextBpeDatasetWithPromptIndex

)
all_hyp_or_transcribed_texts.append(decoded_out[0])
best_hyp = None
else:
Collaborator


log_probs is not assigned in the RNNT branch but is referenced at line 359. Calling conformer_stream_step(..., return_log_probs=True) while cur_decoder == "rnnt" will raise NameError: name 'log_probs' is not defined.
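A minimal sketch of the fix this comment is asking for, with hypothetical names and a drastically simplified control flow: initialize log_probs before the decoder branch so the RNNT path never leaves it undefined:

```python
# Hypothetical, simplified control flow; not the real conformer_stream_step.
def stream_step(cur_decoder: str, return_log_probs: bool):
    log_probs = None  # defined up front so the RNNT branch cannot raise NameError
    if cur_decoder == "ctc":
        log_probs = [0.1, 0.9]  # placeholder for real CTC log-probs
    # the RNNT branch intentionally leaves log_probs as None
    if return_log_probs:
        return log_probs
    return None
```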

Collaborator


The conformer_stream_step function is rewritten based on the linked implementation, where log_probs applies only to the CTC model; the RNNT branch doesn't need it.


# Strip language-ID tags (e.g. <en-US>) from decoded output.
# Enable for prompt-conditioned models that emit locale tags after punctuation.
strip_lang_tags: bool = True
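A plausible sketch of what the tag stripping could look like; the regex and function below are assumptions for illustration, not the PR's actual implementation:

```python
import re

# Matches locale tags such as <en-US> or <de-DE>, plus surrounding whitespace.
LANG_TAG_RE = re.compile(r"\s*<[a-z]{2}-[A-Z]{2}>\s*")

def strip_lang_tags(text: str) -> str:
    """Remove <xx-XX> locale tags emitted by prompt-conditioned models."""
    return LANG_TAG_RE.sub(" ", text).strip()
```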
Collaborator


This is a global default change that touches every existing RNNT model. The dataclass default (True) is also inconsistent with the runtime self.cfg.get('strip_lang_tags', False) (set at line 332 of this file); depending on whether cfg comes from the dataclass or from YAML, users will get different behavior.
Could you set the dataclass default to False and align both, opting in only via the new YAML config? This is the safer, backward-compatible choice.
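The suggested alignment could look like this sketch (class and helper names are illustrative, not the actual NeMo code):

```python
from dataclasses import dataclass

STRIP_DEFAULT = False  # single shared default: opt-in only

@dataclass
class RNNTDecodingConfigSketch:
    strip_lang_tags: bool = STRIP_DEFAULT

def read_strip_flag(cfg: dict) -> bool:
    # The runtime lookup uses the same default as the dataclass,
    # so dataclass-built and YAML-built configs behave identically.
    return cfg.get("strip_lang_tags", STRIP_DEFAULT)
```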

Collaborator

@kingformatty kingformatty May 13, 2026


Set strip_lang_tags to False for inference script.

trainer=None,
validate_access_integrity=True,
):
"""Delegate to base EncDecRNNTBPEModel to avoid subclass substitution.
Collaborator


This restore_from workaround silently delegates to EncDecRNNTBPEModel.restore_from. As highlighted in your comment, restoring a checkpoint that was saved by EncDecRNNTBPEModelWithPrompt via this method will return a plain EncDecRNNTBPEModel, losing prompt_kernel, set_inference_prompt, and the streaming override. Please fix this.
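One way to avoid losing the subclass, sketched with toy classes (the real restore_from signature and checkpoint loading are omitted): route the restore logic through cls so the calling class is preserved:

```python
# Toy model hierarchy; restore_from here only demonstrates class preservation.
class BaseModelSketch:
    @classmethod
    def restore_from(cls, restore_path: str):
        obj = cls.__new__(cls)   # instantiate the *calling* class, not the base
        obj.restore_path = restore_path
        return obj

class PromptModelSketch(BaseModelSketch):
    def set_inference_prompt(self, target_lang: str) -> None:
        self.prompt_lang = target_lang

model = PromptModelSketch.restore_from("model.nemo")
```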

)


class TokenizerWrapper:
Collaborator


This is similar to TokenizerWrapper in NeMo/nemo/collections/common/tokenizers/aggregate_tokenizer.py. Could you please upstream the desired changes to the existing TokenizerWrapper and import it here?

@KunalDhawan KunalDhawan requested a review from pzelasko May 6, 2026 20:49
