How to use coreML models in Mac M2? #12
@abdeladim-s or @RageshAntony, I also have an M2 Mac and have been working with whisper.cpp utilizing the GPU. However, I have not been able to do the same with pywhispercpp. Is there a more in-depth guide or explanation available to use as a reference?
I also made some modifications to your /examples/main.py to allow JSON output, adding a flag: parser.add_argument('-ojson', '--output-json', action='store_true', help="output result in a json file") and handling it under if args.output_json:. I also added a helper to utils.py: def output_json(segments: list, output_file_path: str) -> str:
Here is the JSON output for /samples/jfk.wav as an example. Thanks again for your work.
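For readers following along, here is a minimal sketch of what such an output_json helper could look like. The actual patch is not shown in the thread, so the field names (start, end, text) and the assumption that each segment exposes t0, t1, and text attributes (as pywhispercpp's Segment objects do) are illustrative:

```python
import json


def output_json(segments: list, output_file_path: str) -> str:
    """Write transcription segments to a JSON file and return its path.

    Assumes each segment exposes t0/t1 timestamps and a text attribute;
    adjust the attribute names to match the segment objects you use.
    """
    data = [
        {"start": seg.t0, "end": seg.t1, "text": seg.text}
        for seg in segments
    ]
    # Append the .json extension if the caller did not include it.
    path = (output_file_path if output_file_path.endswith(".json")
            else output_file_path + ".json")
    with open(path, "w", encoding="utf-8") as f:
        json.dump(data, f, ensure_ascii=False, indent=2)
    return path
```

The helper returns the written path so the caller (e.g. main.py) can log where the transcript landed.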
Thanks @w0372299 for the JSON idea, it looks great; please submit a PR and I will merge it into the codebase. Regarding your question, as I said, I really wish I could help, but I don't have access to a Mac.
I was able to use Core ML models on my Mac M2 using the base whisper.cpp:
https://github.com/ggerganov/whisper.cpp#core-ml-support
How can I use Core ML with pywhispercpp?
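For context, the Core ML workflow in the whisper.cpp README linked above is roughly the following. The exact commands and flags may differ in newer releases, so treat this as a sketch and verify against the current README:

```shell
# 1. Generate a Core ML encoder model from an existing ggml model
#    (requires Python with coremltools, ane_transformers, and openai-whisper)
./models/generate-coreml-model.sh base.en

# 2. Rebuild whisper.cpp with Core ML support enabled
make clean
WHISPER_COREML=1 make -j

# 3. Run as usual; whisper.cpp picks up the generated
#    models/ggml-base.en-encoder.mlmodelc automatically
./main -m models/ggml-base.en.bin -f samples/jfk.wav
```

For pywhispercpp to benefit, its bundled whisper.cpp would presumably need to be compiled with the same WHISPER_COREML flag, which is the crux of the question here.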
Also, one suggestion: add your library to their bindings list: https://github.com/ggerganov/whisper.cpp#bindings