
Version 2024-05-28: OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized. #47

Open
santiago-afonso opened this issue May 30, 2024 · 6 comments

Comments

santiago-afonso commented May 30, 2024

Hello. I'm getting the error shown at the end of the log below when trying to run whisper-writer on Windows 11 23H2. I followed the installation instructions currently in the readme, but it fails with no CUDA support; the requirements.txt version of torch is apparently CPU-only. So I installed CUDA 11.8 and copied the cuBLAS library into the same path.

Perhaps we could refine the installation instructions or requirements.txt, or create a container (would it even work in a container?) to make installation easier. Other alternatives are distributing the libraries or exploring nvidia-cudnn-cu11. This is a brand-new Windows 11 installation, so my issues can't be related to leftover old versions.


> set PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin;%PATH%
> set CUDA_HOME=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8

> nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:41:10_Pacific_Daylight_Time_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0

> python
Python 3.11.9 (tags/v3.11.9:de54cf5, Apr  2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print("PyTorch version:", torch.__version__)
PyTorch version: 2.3.0+cu118
>>> print("PyTorch built with CUDA:", torch.version.cuda)
PyTorch built with CUDA: 11.8


> python .\run.py
Starting WhisperWriter...
Creating local model...
OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized.
OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/.
savbell (Owner) commented Jun 1, 2024

Hi, thank you for opening an issue and prompting me to investigate!

It looks like there were some breaking changes from updating to the latest version of faster-whisper: support for CUDA 11 was removed from the latest versions of ctranslate2 in order to support CUDA 12. I can't be sure this is causing your issue, but the workaround for staying on CUDA 11 is to downgrade to ctranslate2 3.24.0 (this can be done with pip install --force-reinstall ctranslate2==3.24.0).

I've updated the Readme to reflect these changes and included some additional installation instructions for the NVIDIA libraries.

I appreciate your report because I no longer have a compatible GPU to test on! Please let me know if you continue to run into issues.

Thanks,
Sav

santiago-afonso (Author) commented Jun 3, 2024

I've tried downgrading ctranslate2 to a CUDA 11-compatible version. This seems to fail (the output is unclear) due to conflicting library requirements in requirements.txt, and the original error remains.

(...)
      Successfully uninstalled ctranslate2-4.2.1
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
faster-whisper 1.0.2 requires ctranslate2<5,>=4.0, but you have ctranslate2 3.24.0 which is incompatible.
numba 0.57.0 requires numpy<1.25,>=1.21, but you have numpy 1.26.4 which is incompatible.
Successfully installed ctranslate2-3.24.0 numpy-1.26.4 pyyaml-6.0.1 setuptools-70.0.0
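As an aside, the conflict in the pip output above comes from faster-whisper 1.0.2 pinning ctranslate2<5,>=4.0 while the CUDA 11 workaround needs 3.24.0. A hypothetical pin set that stays self-consistent on CUDA 11 (the faster-whisper and numpy versions here are assumptions I have not tested) might look like:

```
# Hypothetical CUDA 11 pin set (untested; versions are assumptions):
faster-whisper==0.10.1   # pre-1.0 releases still accepted ctranslate2 3.x
ctranslate2==3.24.0      # the CUDA 11 version suggested above
numpy==1.24.4            # satisfies numba 0.57's numpy<1.25 constraint
```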


python .\run.py
Starting WhisperWriter...
Creating local model...
OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized.
OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/.

I've also tried installing CUDA 12 and cuDNN 9.1 using the current Windows installers. The installation wizards state that the paths are updated to the current versions (11.8 --> 12.5). This also fails with the same error.

The failure is triggered by this line:

model = WhisperModel(local_model_options['model'],

I can't step into it for some reason.

I knew about Python dependency hell, but this is on another level. (Thanks for the help!!! I'd love to be able to use this software again, and to help make the installation less painful.)

savbell (Owner) commented Jun 6, 2024

Thanks for your response! I looked into the specific error some more, and it appears to occur when there are multiple copies of libiomp5md.dll in the virtual environment. In our case, one comes from numpy and the other from torch. For some reason, the error seems to happen when numpy is imported first. Apparently it can be resolved by upgrading numpy (pip install numpy --upgrade), but that would probably make our dependency problem worse. Some people had luck simply uninstalling and reinstalling the package. We could also try putting import torch above import numpy in result_thread.py and seeing if that helps.
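If you want to confirm that diagnosis on your machine, here is a quick way to list every copy of the DLL inside the environment (a minimal sketch; the environment root "venv" and the exact package layout are assumptions):

```python
# Minimal sketch: list every copy of libiomp5md.dll under an environment
# root. Two or more hits (typically under numpy/.libs and torch/lib)
# matches the OMP Error #15 situation described above.
import pathlib

def find_dll_copies(root, name="libiomp5md.dll"):
    """Return the sorted paths of every file called `name` under `root`."""
    return sorted(str(p) for p in pathlib.Path(root).rglob(name))

if __name__ == "__main__":
    for path in find_dll_copies("venv"):  # adjust to your environment name
        print(path)
```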

Please try that out and let me know how it goes! Thanks for your patience as we figure this out!

helLf1nGer commented Jun 12, 2024

Having the same issue. I'm using torch 2.3.1+cu118 instead of 2.0.1, as I was having issues with that version.

Adding os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE" solves the error, and it works...

BUT the ChatGPT web app now freezes on sending a message, which is my main use for this whisper speech-to-text tool. Not sure if it's related. Bummer.
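For anyone trying the same workaround, placement matters: the variable must be set before the first import that loads an OpenMP runtime. A minimal sketch (the commented-out torch/numpy imports are illustrative, not run here):

```python
# Unsafe, unsupported workaround from the OMP hint above: allow duplicate
# OpenMP runtimes. This must run before torch/numpy/ctranslate2 are
# imported, because the check fires when the second libiomp5md.dll loads.
import os
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"

# Only now import the libraries that bundle the OpenMP runtime, e.g.:
# import torch
# import numpy as np
print(os.environ["KMP_DUPLICATE_LIB_OK"])
```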

L-Acacia commented
Running into the same issue using cuda 12.1 and python 3.11.5 on windows inside a venv.

helLf1nGer commented

> Running into the same issue using cuda 12.1 and python 3.11.5 on windows inside a venv.

In my case, reinstalling PyTorch solved the issue.
The other issue, the ChatGPT web app freeze, isn't fully solved, but occasionally clearing the cache and reloading Chrome/Firefox helps.
