refactor(web): convert internal prediction methods to stateless format 📚 #11940
Conversation
User Test Results (test specification and instructions): User tests are not required.
const SEARCH_TIMEOUT = correction.SearchSpace.DEFAULT_ALLOTTED_CORRECTION_TIME_INTERVAL;
const timer = this.activeTimer = new correction.ExecutionTimer(
  this.testMode ? Number.MAX_VALUE : SEARCH_TIMEOUT,
  this.testMode ? Number.MAX_VALUE : SEARCH_TIMEOUT * 1.5
);

const { postContextState, rawPredictions } = await correctAndEnumerate(this.contextTracker, this.lexicalModel, timer, transformDistribution, context);

if(this.activeTimer == timer) {
  this.activeTimer = null;
}
Construction and state management related to the ExecutionTimer used during the correction search were moved out of correctAndEnumerate to ensure that correctAndEnumerate is stateless. If the goal is for the called method to be stateless, we can't have it managing the "prediction process is active" state signal on ModelCompositor. This does start the timer a bit earlier than before, but the cost should easily be less than 1ms in practice; the parts between the two locations are essentially constant-time setup overhead.
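To illustrate the pattern being discussed, here is a minimal sketch of moving timer ownership from the helper to its caller. The names (ExecutionTimer, correctAndEnumerate, ModelCompositor) follow the conversation, but the signatures and bodies below are invented for illustration, not the actual Keyman implementation.

```typescript
// Hypothetical sketch: the caller constructs and tracks the timer, then
// hands it to the helper, so the helper is a pure function of its arguments.
class ExecutionTimer {
  constructor(public readonly timeLimit: number) {}
}

// Stateless helper: receives the timer instead of creating/storing it.
function correctAndEnumerate(timer: ExecutionTimer, context: string): string[] {
  // ... correction search bounded by timer.timeLimit (elided) ...
  return [context];
}

class ModelCompositor {
  private activeTimer: ExecutionTimer | null = null;

  predict(context: string): string[] {
    // Stateful setup lives in the caller now.
    const timer = this.activeTimer = new ExecutionTimer(1000);
    const results = correctAndEnumerate(timer, context);
    // The "prediction process is active" signal is also cleared here,
    // never inside the stateless helper.
    if (this.activeTimer === timer) {
      this.activeTimer = null;
    }
    return results;
  }
}
```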
suggestions.forEach((suggestion) => {
  suggestion.id = this.SUGGESTION_ID_SEED;
  this.SUGGESTION_ID_SEED++;
});
The suggestion unique-identifier tagging process was moved out of finalizeSuggestions for similar reasons: generating unique IDs is inherently stateful.
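The same separation can be sketched for the ID tagging: the seed counter stays on the compositor, and the stateless finalizer never touches it. The Suggestion shape and function bodies here are hypothetical, chosen only to mirror the snippet quoted above.

```typescript
// Hypothetical types and bodies; only the shape of the refactor matters.
interface Suggestion {
  text: string;
  id?: number;
}

// Stateless: purely transforms its inputs, touches no instance state.
function finalizeSuggestions(suggestions: Suggestion[]): Suggestion[] {
  return suggestions.map(s => ({ ...s, text: s.text.trim() }));
}

class ModelCompositor {
  private SUGGESTION_ID_SEED = 0;

  predict(raw: Suggestion[]): Suggestion[] {
    const suggestions = finalizeSuggestions(raw);
    // Stateful ID assignment happens in the caller, after finalization.
    suggestions.forEach((suggestion) => {
      suggestion.id = this.SUGGESTION_ID_SEED;
      this.SUGGESTION_ID_SEED++;
    });
    return suggestions;
  }
}
```

This keeps finalizeSuggestions trivially unit-testable: feed it arrays, compare arrays, no counter to reset between tests.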
LGTM -- have not reviewed the functions moved into the -helpers modules, assuming they are essentially unchanged
Before:
  const suggestions = this.finalizeSuggestions(deduplicatedSuggestionTuples, context, inputTransform);

After:
  const suggestions = finalizeSuggestions(
    this.lexicalModel,
    deduplicatedSuggestionTuples.splice(0, ModelCompositor.MAX_SUGGESTIONS),
Is there a behavioural change here?
There is not. I've just reformatted these lines and ensured that the function still gets access to the properties it had been using. As the function is no longer a member of the ModelCompositor class, we have to pass them explicitly now.
It's the splice that worried me 😁
Ah. Still no - the original did a similar splice to chop off all the suggestions we wouldn't be returning to the user - should more than 12 even be found, of course.
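For readers less familiar with splice: called as splice(0, n), it removes the first n elements from the array in place and returns them, which is why it works as a cap here. A quick standalone illustration (the MAX_SUGGESTIONS value of 12 matches the cap mentioned in this thread; the array contents are invented):

```typescript
// splice(0, n) removes and returns the first n elements in place;
// any entries beyond the cap remain in the source array and are discarded.
const MAX_SUGGESTIONS = 12;

const ranked = Array.from({ length: 15 }, (_, i) => `suggestion-${i}`);
const kept = ranked.splice(0, MAX_SUGGESTIONS);

console.log(kept.length);   // 12 suggestions kept for the user
console.log(ranked.length); // 3 leftover entries, never returned
```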
From stable-17.0:

let suggestions = suggestionDistribution.splice(0, ModelCompositor.MAX_SUGGESTIONS).map(function(value) {
…or/web/stateless-predict-helpers
Changes in this pull request will be available for download in Keyman version 18.0.75-alpha
The previous three PRs in this sequence spun off a number of the components of the correction & prediction process into separate methods, although those methods still relied upon ModelCompositor state. This PR aims to rectify that last detail, making them stateless instead.

On the face of it, this simply means adding parameters to each method in order to re-acquire needed objects that were previously accessed through this. when they existed within ModelCompositor. During standard use, nothing significant changes here. What we do gain, though, is quite useful for adding highly-focused unit tests: we gain easier access to the different component methods and far simpler mocking where needed. After all, we'll no longer need to worry about curating compositor state for tests focused on any of the spun-off methods.

@keymanapp-test-bot skip
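The mocking benefit described above can be sketched briefly: once a helper takes its dependencies as parameters, a unit test can hand it a minimal stub instead of assembling ModelCompositor state. The LexicalModel shape and helper below are invented for illustration and do not reflect the real Keyman interfaces.

```typescript
// Hypothetical minimal dependency interface.
interface LexicalModel {
  predict(context: string): string[];
}

// Stateless helper: everything it needs arrives via parameters,
// so there is no hidden compositor state to set up in tests.
function predictFromContext(model: LexicalModel, context: string): string[] {
  return model.predict(context).map(s => s.toLowerCase());
}

// In a test, a stub object literal is all the "state" required:
const stubModel: LexicalModel = {
  predict: (ctx) => [ctx.toUpperCase(), "OTHER"]
};

console.log(predictFromContext(stubModel, "hello")); // ["hello", "other"]
```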