
WIP: Add shallow fusion to C API #149

Open

wants to merge 1 commit into base: master
Conversation

@csukuangfj (Collaborator) commented May 11, 2023

Integrate changes from #147

TODOs

  • Fix iOS demo
  • Fix .Net demo
  • Fix Python APIs

@kamirdin (Contributor) commented Oct 7, 2023

Hi, I'm using OnlineLMConfig via the Python APIs, but it seems that setting the RNN LM ONNX path as `model` didn't work. May I ask what I should do to use shallow fusion via the Python APIs?

@csukuangfj (Collaborator, Author)

but it seems that setting the RNN LM ONNX path as `model` didn't work

How do you tell it does not work?

@kamirdin (Contributor) commented Oct 7, 2023

How do you tell it does not work?

I decoded a test set of 1k samples (roughly 10k characters) and got exactly the same results as without the LM.
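A quick way to check this kind of symptom (shown here as a minimal sketch, not part of sherpa-onnx itself; all function names are hypothetical) is to compare the hypotheses produced with and without the LM, and to compute the character error rate over the test set:

```python
def edit_distance(ref: str, hyp: str) -> int:
    """Levenshtein distance at the character level."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1]

def lm_has_effect(hyps_no_lm: list, hyps_with_lm: list) -> bool:
    """True if enabling the LM changed at least one hypothesis."""
    return any(a != b for a, b in zip(hyps_no_lm, hyps_with_lm))

def cer(refs: list, hyps: list) -> float:
    """Character error rate over a whole test set."""
    errors = sum(edit_distance(r, h) for r, h in zip(refs, hyps))
    chars = sum(len(r) for r in refs)
    return errors / chars
```

If `lm_has_effect` returns False on a 1k-sample set, the LM config is almost certainly not reaching the decoder at all, which points at a plumbing bug rather than a tuning problem.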

@kamirdin (Contributor) commented Oct 7, 2023

Before this, I obtained better results using the Python LM decoding scripts from icefall, with the same test set, ASR model, and LM model. So I expected better results here too, even if only a few characters changed.

@csukuangfj (Collaborator, Author)

characters, but got exactly the same result compared to not using LM.

How many LM scales have you tried?
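Trying a range of scales can be scripted; here is a hedged sketch of such a sweep, where `decode_with_scale` is a placeholder for whatever function rebuilds the recognizer with a given LM scale and decodes the test set (the name and the candidate scales are assumptions, not sherpa-onnx API):

```python
def char_errors(ref: str, hyp: str) -> int:
    """Levenshtein distance at the character level."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (r != h)))
        prev = cur
    return prev[-1]

def sweep_lm_scales(decode_with_scale, refs,
                    scales=(0.1, 0.3, 0.5, 0.7, 1.0, 1.5)):
    """decode_with_scale(scale) -> list of hypothesis strings, one per ref.
    Returns (best_scale, best_cer) over the candidate scales."""
    best = None
    for s in scales:
        hyps = decode_with_scale(s)
        errors = sum(char_errors(r, h) for r, h in zip(refs, hyps))
        cer = errors / max(1, sum(len(r) for r in refs))
        if best is None or cer < best[1]:
            best = (s, cer)
    return best
```

If the CER is identical across all scales, the scale is not being applied at all, which again suggests the config is being dropped somewhere.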

@kamirdin (Contributor) commented Oct 7, 2023

Python code:

lm_config = OnlineLMConfig(
    model=lm,
    scale=scale,
)

print(lm_config)
print("="*30)

recognizer_config = OnlineRecognizerConfig(
    feat_config=feat_config,
    model_config=model_config,
    lm_config=lm_config,
    endpoint_config=endpoint_config,
    enable_endpoint=enable_endpoint_detection,
    decoding_method=decoding_method,
    max_active_paths=max_active_paths,
    context_score=context_score,
)
print(recognizer_config)

and then it prints this:

OnlineLMConfig(model="base/with-state-epoch-21-avg-2.onnx", scale=1.1)
==============================
OnlineRecognizerConfig(feat_config=FeatureExtractorConfig(sampling_rate=16000, feature_dim=80), model_config=OnlineTransducerModelConfig(encoder_filename="./asr_model_chunk320/encoder.onnx", decoder_filename="./asr_model_chunk320/decoder.onnx", joiner_filename="./asr_model_chunk320/joiner.onnx", tokens="./asr_model_chunk320/tokens.txt", num_threads=8, provider="cpu", model_type="", debug=False), lm_config=OnlineLMConfig(model="", scale=0.5), endpoint_config=EndpointConfig(rule1=EndpointRule(must_contain_nonsilence=False, min_trailing_silence=2.4, min_utterance_length=0), rule2=EndpointRule(must_contain_nonsilence=True, min_trailing_silence=1.2, min_utterance_length=0), rule3=EndpointRule(must_contain_nonsilence=False, min_trailing_silence=0, min_utterance_length=20)), enable_endpoint=False, max_active_paths=4, context_score=1.5, decoding_method="modified_beam_search")

I added .def_readwrite("lm_config", &PyClass::lm_config) here:

.def_readwrite("model_config", &PyClass::model_config)

and rebuilt, but recognizer_config still prints lm_config=OnlineLMConfig(model="", scale=0.5).
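The symptom above — the LM config printing its default values even though a non-default one was passed in — matches a constructor that accepts the config but never stores it. Here is a minimal Python analogue of that bug (not the sherpa-onnx code itself; class names are illustrative):

```python
class OnlineLMConfigDemo:
    def __init__(self, model: str = "", scale: float = 0.5):
        self.model = model
        self.scale = scale

    def __repr__(self):
        return f'OnlineLMConfig(model="{self.model}", scale={self.scale})'

class BuggyRecognizerConfig:
    def __init__(self, lm_config=None, **kwargs):
        # Bug: lm_config is accepted but never assigned, mirroring a C++
        # constructor whose member initializer list omits lm_config(lm_config).
        self.lm_config = OnlineLMConfigDemo()  # stays at defaults

class FixedRecognizerConfig:
    def __init__(self, lm_config=None, **kwargs):
        # Fix: actually store the value that was passed in.
        self.lm_config = lm_config if lm_config else OnlineLMConfigDemo()
```

With the buggy version, printing the recognizer config shows OnlineLMConfig(model="", scale=0.5) regardless of what was passed, exactly as observed above.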

@kamirdin (Contributor) commented Oct 7, 2023

After adding lm_config(lm_config), to the constructor's member initializer list at
https://github.com/k2-fsa/sherpa-onnx/blob/36017d49c4f0b2f2f87feeeb0a40e54be4487b76/sherpa-onnx/csrc/online-recognizer.h#L97C16-L97C16
it appears to be working, but it still makes more errors (missing characters) than the Python script, so there seems to be some work left to do. In any case, thank you for your assistance!

@csukuangfj (Collaborator, Author)

https://github.com/k2-fsa/sherpa-onnx/blob/36017d49c4f0b2f2f87feeeb0a40e54be4487b76/sherpa-onnx/csrc/online-recognizer.h#L97C16-L97C16

Thank you for identifying the bug. Would you mind creating a PR to fix it?

@kamirdin (Contributor) commented Oct 7, 2023

Sure, I will create a PR after more tests finish.

@csukuangfj mentioned this pull request Oct 8, 2023
@rkjaran commented Dec 21, 2023

I'd like to use the online LM with the C API. What's the status on this?
