
high memory usage after transcribing is done #430

Open
HorayNarea opened this issue Apr 28, 2023 · 2 comments

@HorayNarea
When audapolis loads the language model it obviously has to use a lot of memory, but when it's done transcribing it looks like the model remains in memory:

[screenshot: memory usage after transcription]

Could this be unloaded (or is the high memory usage something else)?

@pajowu
Member

pajowu commented Apr 28, 2023

Should be fairly easy: remove the model caching in https://github.com/audapolis/audapolis/blob/ff91a2c23c31a8179c3d12f348eb91431d0dfb2b/server/app/models.py#L119-L126. Feel free to open a PR, else we'll do it once we get to it.

@pajowu
Member

pajowu commented Apr 28, 2023

Some more explanation on the current behaviour: we thought about how we want to handle model loading. Since loading the model takes a while, we didn't want to re-load it every time you open a new document. You might want to transcribe a number of smaller files, in which case model loading would make up a good portion of the total transcription time.

However, we fully agree that it's not reasonable behaviour for the model to stay in memory forever. I would be fine with just removing it once the transcription is finished (which should also be very easy to implement). But I think a good compromise could also be to evict the model from memory after a certain amount of time (5 minutes? 15 minutes?).

In conclusion: if you want to implement it quickly, just remove the caching code. If you want to spend some more time, feel free to explore other options.
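The time-based eviction compromise could be sketched roughly like this (a minimal, hypothetical sketch in Python — the class and parameter names are made up and do not match the audapolis codebase; it only illustrates dropping an idle model's reference after a TTL so the memory can be reclaimed):

```python
import threading
import time


class TtlModelCache:
    """Keep loaded models around for reuse, but evict any model that
    has been idle for more than `ttl` seconds so its memory can be
    freed by the garbage collector.

    Hypothetical sketch: `loader` is any callable mapping a model
    name to a loaded model object.
    """

    def __init__(self, loader, ttl=15 * 60):
        self._loader = loader
        self._ttl = ttl
        self._lock = threading.Lock()
        self._cache = {}  # name -> (model, last_used_timestamp)

    def get(self, name):
        with self._lock:
            self._evict_expired()
            if name not in self._cache:
                # Cache miss: load the model (the slow part we want
                # to avoid repeating for back-to-back transcriptions).
                self._cache[name] = (self._loader(name), time.monotonic())
            model, _ = self._cache[name]
            # Refresh the last-used timestamp on every access.
            self._cache[name] = (model, time.monotonic())
            return model

    def _evict_expired(self):
        now = time.monotonic()
        expired = [k for k, (_, t) in self._cache.items() if now - t > self._ttl]
        for k in expired:
            # Dropping the reference lets Python free the model's memory.
            del self._cache[k]
```

In this sketch eviction only happens lazily on the next `get` call; a real implementation would probably also want a background timer so memory is released even when no further transcription is started.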
