[Intel GPU] Symbol conflict between libm.lib and ucrt.lib for Intel GPU on Windows #134989
Comments
Regarding this issue, we can work around it and pass the linking stage by setting the following CMake environment variables:
set CMAKE_SHARED_LINKER_FLAGS=/FORCE:MULTIPLE
set CMAKE_MODULE_LINKER_FLAGS=/FORCE:MULTIPLE
set CMAKE_EXE_LINKER_FLAGS=/FORCE:MULTIPLE
Eventually, we will update the Intel GPU bundle to fix this properly.
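For readers who want the full sequence, here is a minimal sketch of how the workaround fits into a typical Windows source build. The oneAPI setvars.bat path, the USE_XPU flag, and the setup.py invocation are assumptions about a standard setup rather than part of this report.

:: Minimal sketch, assuming a default oneAPI install path and a standard PyTorch source build.
:: 1. Source the Intel GPU development bundle environment.
call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat"
:: 2. Ask CMake to pass /FORCE:MULTIPLE to every link step so the duplicate math symbols
::    in libm.lib and ucrt.lib no longer abort the link.
set CMAKE_SHARED_LINKER_FLAGS=/FORCE:MULTIPLE
set CMAKE_MODULE_LINKER_FLAGS=/FORCE:MULTIPLE
set CMAKE_EXE_LINKER_FLAGS=/FORCE:MULTIPLE
:: 3. Build PyTorch with Intel GPU (XPU) support from source.
set USE_XPU=1
python setup.py develop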
Let me do some analysis here. First, the issue is that at the link stage the linker finds two implementations of the same math function. Let's focus on
From the error message, we found:
Let's validate 1; we can use
Let's validate 2, and here is another question: which
So far, we can draw a conclusion: icx's libm provides the same function implementations as Microsoft's library.
Additional analysis for the Intel compiler: for performance reasons, the Intel compiler does some magic and replaces the system's
When I enable TorchInductor on Windows CPU, I run into this behavior. The image below shows the dependency.
pytorch/torch/_inductor/cpp_builder.py Lines 882 to 890 in 92a2a9d
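As a hedged illustration of the validation steps above: one way to confirm that the same math symbol is defined in both libraries is to dump their public symbols with MSVC's dumpbin. The paths and the symbol name (hypotf) below are illustrative placeholders, not taken from the original error; substitute the symbol your linker actually reports.

:: Hedged sketch: check whether a given C99 math symbol appears in both the Intel bundle's
:: libm.lib and the Windows SDK's ucrt.lib. hypotf is used purely as an example symbol,
:: and both library paths are hypothetical placeholders.
dumpbin /LINKERMEMBER "path\to\intel\bundle\lib\libm.lib" | findstr /i hypotf
dumpbin /LINKERMEMBER "path\to\Windows Kits\10\Lib\ucrt\x64\ucrt.lib" | findstr /i hypotf
:: If both commands print the symbol, the two libraries carry overlapping definitions,
:: which is exactly the multiple-definition conflict the linker reports.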
I have experience handling this Intel compiler behavior. In the current PyTorch XPU build, the Intel compiler works only as the host compiler and is dedicated to compiling the SYCL code; the majority of the code is compiled by MSVC.
Continued update:
Actually, we can't forbid all current and future third-party libraries from calling these math functions.
Original command:
set CMAKE_SHARED_LINKER_FLAGS=/FORCE:MULTIPLE
set CMAKE_MODULE_LINKER_FLAGS=/FORCE:MULTIPLE
set CMAKE_EXE_LINKER_FLAGS=/FORCE:MULTIPLE
Discussed with @EikanWang: adding these link flags would hide potential issues where the same function name is defined with different parameters. So we will withdraw this solution.
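To spell out the risk in a hedged sketch: with C linkage the parameter list is not encoded in the symbol name, so /FORCE:MULTIPLE lets the linker finish even when two libraries define the same name with incompatible signatures, silently keeping one of the definitions. The object files named below are hypothetical.

:: Hypothetical objects a.obj and b.obj both define a C symbol named "foo", but with
:: different parameter lists. Without /FORCE:MULTIPLE this is a hard multiply-defined-symbol
:: error; with it, the link succeeds and one definition is kept (in practice the first one
:: encountered), so a caller expecting the other signature hits a mismatched ABI at run time
:: instead of a clear link-time failure.
link /FORCE:MULTIPLE main.obj a.obj b.obj /OUT:app.exe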
🐛 Describe the bug
Currently, building PyTorch with Intel GPU support on Windows requires sourcing the Intel GPU development bundle for Windows to set up the development environment. The bundle contains libm.lib, which supports C99 and provides math functions. The library is added to the Lib path, where it conflicts with the Windows native ucrt.lib.
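A hedged aside on how to see the conflict on a given machine: after sourcing the bundle's environment script, the linker search path in %LIB% includes both the bundle's library directory (with libm.lib) and the Windows SDK's ucrt directory (with ucrt.lib). The install roots below are assumed defaults, not paths from the original report.

:: Print the linker search path set up by the environment scripts; entries are searched in
:: order, which is how the bundle's libm.lib ends up clashing with the SDK's ucrt.lib.
echo %LIB%
:: Locate the two conflicting libraries (assumed default install roots).
where /r "C:\Program Files (x86)\Intel\oneAPI" libm.lib
where /r "C:\Program Files (x86)\Windows Kits\10\Lib" ucrt.lib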
The error message may be as follows.
Versions
Collecting environment information...
PyTorch version: 2.5.0a0+git3f3774a
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Enterprise
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.24.1
Libc version: N/A
Python version: 3.10.6 | packaged by conda-forge | (main, Aug 22 2022, 20:29:51) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22631-SP0
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=2400
DeviceID=CPU0
Family=207
L2CacheSize=14336
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=2400
Name=12th Gen Intel(R) Core(TM) i9-12900
ProcessorType=3
Revision=
Versions of relevant libraries:
[pip3] numpy==2.1.0
[pip3] optree==0.12.1
[pip3] torch==2.5.0a0+git3f3774a
[conda] mkl-include 2024.2.1 pypi_0 pypi
[conda] mkl-static 2024.2.1 pypi_0 pypi
[conda] numpy 2.1.0 pypi_0 pypi
[conda] optree 0.12.1 pypi_0 pypi
[conda] torch 2.5.0a0+git3f3774a pypi_0 pypi
cc @gujinghui @fengyuan14 @guangyey