## Does Pywhispercpp support batching and what gives if not? #56
@BBC-Esq, are you talking about batch decoding?

---

I think he means batch prepping? Edit: Nope, batch transcribing!

---
*cough*

```python
import os
import multiprocessing
from glob import glob

from pywhispercpp.model import Model

# Every regular file in the current directory, except Python scripts.
files = [f for f in glob("*") if os.path.isfile(f) and not f.endswith(".py")]

def transcribeFile(file, queue):
    # Each process loads its own model, so no context is shared between files.
    model = Model("base")
    segments = model.transcribe(file)
    queue.put([file, segments])
    return True

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    processes = []
    for file in files:
        process = multiprocessing.Process(target=transcribeFile, args=(file, queue))
        processes.append(process)
    for process in processes:
        process.start()
    # Note: this loop waits for a None sentinel that no worker ever sends,
    # so it blocks forever once the queue is drained -- fixed in the next comment.
    for transcriptions in iter(queue.get, None):
        print(transcriptions)
```

@BBC-Esq @abdeladim-s, here's some simple code to batch process with multiple independent whisper instances, ensuring no context is ever shared between them.

---

Cleaned it up. Fixed it running in parallel. And oh boy is it a CPU killer.

---
So a quick heads up: it is painfully slow to do this in parallel. Like dog slow, and the more files you throw at it, the slower it gets. But this is just POC code. There's room for improvement, such as batching based on file length, file size, core counts, etc. I'll see if I can beat a few optimizations out of this.

Edit: I completely forgot that with multiprocessing, queues must be emptied before the main process can finish, as they hold open pipes.

```python
while not queue.empty():
    print(queue.get())
```

Quick fix over `iter`: the `iter(queue.get, None)` loop waits for a `None` sentinel that no worker ever sends, so it blocks forever; draining the queue with `get()` until it is empty avoids that.
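For reference, a minimal sketch of how the sentinel idiom could be made to work (illustrative names only, not pywhispercpp API): somebody has to actually put the `None` on the queue, and the queue should be drained before the workers are joined so their feeder pipes can close.

```python
import multiprocessing

def worker(item, queue):
    # Stand-in for the real per-file transcription work.
    queue.put(f"processed {item}")

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    processes = [multiprocessing.Process(target=worker, args=(i, queue))
                 for i in range(4)]
    for p in processes:
        p.start()

    # Exactly one result per worker, so get() is called the right number
    # of times and join() below cannot block on an unflushed pipe.
    for _ in processes:
        print(queue.get())
    for p in processes:
        p.join()

    # Alternatively, with a sentinel, the iter() idiom terminates cleanly:
    #   queue.put(None)                       # after all results are in
    #   for result in iter(queue.get, None):  # stops at the sentinel
    #       print(result)
```

---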
```python
import os
import multiprocessing
from glob import glob

from pywhispercpp.model import Model

files = [f for f in glob("*") if os.path.isfile(f) and not f.endswith(".py")]

def transcribeFile(file, queue):
    # A fresh model per process: no context carries over between files.
    model = Model("base")
    segments = model.transcribe(file)
    queue.put([file, segments])

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    processes = []
    for file in files:
        process = multiprocessing.Process(target=transcribeFile, args=(file, queue))
        processes.append(process)
        process.start()
    # Drain one result per worker *before* joining: a child whose queued
    # payload has not been consumed keeps its feeder pipe open, and
    # join() can block on it.
    for _ in processes:
        print(queue.get())
    for process in processes:
        process.join()
```

This is where I'm at. It can queue up lots of files to process in parallel, but there's no limit on how many, which needs improvement. I also need to make it accept new additions to its queue; a bounded-pool sketch follows below.
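A minimal sketch of one way to bound the parallelism, using the standard library's `multiprocessing.Pool` (the pool size and file filter here are assumptions, and this presumes the returned segments are picklable, as they must already be to travel through a `Queue`):

```python
import os
import multiprocessing
from glob import glob

from pywhispercpp.model import Model

def transcribe_file(file):
    # One model per task keeps the transcriptions independent.
    model = Model("base")
    segments = model.transcribe(file)
    return file, segments

if __name__ == "__main__":
    files = [f for f in glob("*") if os.path.isfile(f) and not f.endswith(".py")]
    # Cap concurrency at the CPU count; extra files wait their turn
    # instead of oversubscribing the machine.
    with multiprocessing.Pool(processes=os.cpu_count()) as pool:
        for file, segments in pool.imap_unordered(transcribe_file, files):
            print(file, segments)
```

`imap_unordered` yields results as workers finish, and more work can be submitted to the same pool later with `apply_async`, which covers the "adding new things to its queue" point.

---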
If you're after serial batch transcriptions:

```python
import os
from glob import glob

from pywhispercpp.model import Model

if __name__ == "__main__":
    # Every regular file except Python, config, and text files.
    files = [file for file in glob("*")
             if os.path.isfile(file)
             and not file.endswith((".py", ".cfg", ".txt"))]
    for file in files:
        model = Model("base")
        segments = model.transcribe(file)
        # Write each transcription next to its input file.
        with open(f"{file}-transcription.txt", "w") as f:
            for segment in segments:
                f.write(segment.text)
```
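One possible refinement, assuming a fresh context per file isn't required (reloading the model inside the loop above is presumably what resets state between files): hoist the model load out of the loop so the weights are read from disk only once. A sketch, with a stand-in file list:

```python
from pywhispercpp.model import Model

if __name__ == "__main__":
    model = Model("base")  # Load the weights once, reuse for every file.
    for file in ["a.wav", "b.wav"]:  # Stand-in list; use the glob above.
        segments = model.transcribe(file)
        with open(f"{file}-transcription.txt", "w") as f:
            for segment in segments:
                f.write(segment.text)
```

---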
@UsernamesLame, that's multiprocessing.

---

Unfortunately, as @abdeladim-s knows, I can't get …

---

It's "batch" processing 😅

---

Dump logs. Let's get this working.

---

Logs dumped, and now I'm flushing the toilet. 😉 jk. Won't have time today, as I'm working on the benchmarking repo for a bit... need to get an appropriate dataset and then learn/use the …

---
See here... start thinking about true batching. 😉

shashikg/WhisperS2T#33