I was originally going to implement this by simply passing the commands on to all synth drivers under control, but it turns out that this won't work.

When calculating the changed value from a command instance, NVDA reads the default value from the currently selected synthDriver's config. It has to work this way at the moment, since the command objects carry no reference to the synthDriver the commands will be sent to. https://github.com/nvaccess/nvda/blob/51f5a38466daef7c92402e16f9898f1ecfb40fa5/source/speech/commands.py#L207

UML does not have voice attributes, and they can differ between the synths UML is currently using. Maybe I need to catch these commands in UML and translate them into set_xxx method calls for each synth under control.
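A rough sketch of what that translation could look like. None of this is NVDA's real API: the class names, the `set_rate` setter, and the relative-offset command are stand-ins I invented for illustration. The point is that the new value is computed from each synth's own current setting, rather than from the single currently-selected synthDriver's config:

```python
# Hypothetical stand-ins, NOT NVDA's actual classes. They only illustrate
# intercepting a relative speech command in UML and translating it into a
# per-synth set_xxx call, using each synth's own current value.

class FakeSynth:
    """Stand-in for one synth driver under UML's control."""
    def __init__(self, name: str, rate: int):
        self.name = name
        self.rate = rate  # each synth may have a different current rate

    def set_rate(self, value: int) -> None:
        # Clamp to the usual 0..100 driver setting range.
        self.rate = max(0, min(100, value))


class RelativeRateCommand:
    """Stand-in for a rate command carrying a relative offset."""
    def __init__(self, offset: int):
        self.offset = offset


def apply_command(command, synths) -> None:
    # Instead of forwarding the command object itself, compute the new
    # value per synth and call its setter directly.
    if isinstance(command, RelativeRateCommand):
        for synth in synths:
            synth.set_rate(synth.rate + command.offset)


synths = [FakeSynth("espeak", 50), FakeSynth("sapi5", 70)]
apply_command(RelativeRateCommand(10), synths)
print([(s.name, s.rate) for s in synths])  # [('espeak', 60), ('sapi5', 80)]
```

This sidesteps the problem above because UML, unlike the command objects, knows which synth each value is destined for.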