When audapolis loads the language model it obviously has to use a lot of memory, but when it's done transcribing, the model appears to remain in memory.
Could this be unloaded (or is the high memory usage caused by something else)?
Some more explanation on the current behaviour: We thought about how we want to handle model loading. Since model loading takes a while, we decided we didn't want to re-load the model every time you open a new document. You might want to transcribe a number of smaller files, in which case model loading would make up a good portion of the total transcription time.
However, we fully agree that it's not reasonable behaviour for the model to stay in memory forever. I would be fine with simply removing it once the transcription is finished (which should also be very easy to implement). But I think a good compromise could be to evict the model from memory after a certain amount of time (5 minutes? 15 minutes?).
In conclusion: If you want to implement this quickly, just remove the caching code. If you want to spend some more time, feel free to explore other options, e.g. the time-based eviction sketched below.
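To make the eviction idea concrete, here is a minimal sketch of a TTL-based model cache. This is not audapolis' actual code: `ModelCache`, `load_model`, and the 5-minute TTL are all hypothetical, and it assumes the model is held by a long-running Python process that can periodically sweep the cache.

```python
# Hypothetical sketch of time-based model eviction.
# `loader` stands in for whatever function actually loads a model from disk.
import threading
import time
from typing import Any, Callable, Dict, Tuple

EVICTION_SECONDS = 5 * 60  # evict models unused for 5 minutes (assumed value)


class ModelCache:
    """Caches loaded models and drops them after a period of disuse."""

    def __init__(self, loader: Callable[[str], Any], ttl: float = EVICTION_SECONDS):
        self._loader = loader
        self._ttl = ttl
        self._lock = threading.Lock()
        # model name -> (model object, timestamp of last use)
        self._entries: Dict[str, Tuple[Any, float]] = {}

    def get(self, name: str) -> Any:
        """Return a cached model, loading it on first use."""
        with self._lock:
            entry = self._entries.get(name)
            if entry is None:
                model = self._loader(name)  # slow: reads the model from disk
            else:
                model = entry[0]
            self._entries[name] = (model, time.monotonic())  # refresh last-used time
            return model

    def evict_stale(self) -> None:
        """Drop models that haven't been used within the TTL."""
        now = time.monotonic()
        with self._lock:
            stale = [n for n, (_, t) in self._entries.items() if now - t > self._ttl]
            for name in stale:
                del self._entries[name]  # model becomes garbage-collectable


def start_eviction_thread(cache: ModelCache, interval: float = 60.0) -> None:
    """Sweep the cache periodically in a daemon thread."""
    def loop() -> None:
        while True:
            time.sleep(interval)
            cache.evict_stale()
    threading.Thread(target=loop, daemon=True).start()
```

With this approach, transcribing several files back-to-back reuses the loaded model, while an idle session releases the memory after the TTL. Simply removing the caching code remains the simpler option if the re-load cost is acceptable.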