Hello,
Issue Summary
Building llama-cpp-python with CUDA support fails at the final linking stage due to a GLIBC version incompatibility.
Environment Details
OS: Ubuntu Linux
GPU: NVIDIA (Driver 570.158.01, CUDA 12.8)
Python Environment: Miniconda3 with Python 3.13
PyTorch: 2.8.0.dev20250622+cu128
Compiler: conda’s x86_64-conda-linux-gnu-c++ (GCC 11.2.0)
Error Output
```
/home/oba/miniconda3/bin/../lib/gcc/x86_64-conda-linux-gnu/11.2.0/../../../../x86_64-conda-linux-gnu/bin/ld: /usr/local/cuda/lib64/libcublasLt.so.12: undefined reference to `log2f@GLIBC_2.27'
/home/oba/miniconda3/bin/../lib/gcc/x86_64-conda-linux-gnu/11.2.0/../../../../x86_64-conda-linux-gnu/bin/ld: /usr/local/cuda/lib64/libcublasLt.so.12: undefined reference to `__cxa_thread_atexit_impl@GLIBC_2.18'
collect2: error: ld returned 1 exit status
```
Root Cause
The CUDA library libcublasLt.so.12 requires GLIBC symbols:
log2f@GLIBC_2.27 (from GLIBC 2.27+)
__cxa_thread_atexit_impl@GLIBC_2.18 (from GLIBC 2.18+)
But the GLIBC visible at link time is apparently older and doesn't provide these symbols.
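One way to confirm which GLIBC is actually in play, as a minimal sketch. The library path and compiler name are taken from the environment above and may need adjusting:

```bash
# GLIBC version of the system libc
ldd --version | head -n1

# Highest GLIBC symbol versions that libcublasLt actually requires
objdump -T /usr/local/cuda/lib64/libcublasLt.so.12 | grep -o 'GLIBC_[0-9.]*' | sort -Vu | tail -n5

# Sysroot the conda cross-compiler links against by default
x86_64-conda-linux-gnu-c++ -print-sysroot
```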
Build Command Used
```bash
CMAKE_ARGS="-DGGML_CUDA=on -DCMAKE_CUDA_ARCHITECTURES=native" pip install -e . --verbose
```
What’s Been Tried
Using system CUDA instead of conda CUDA - same error
Both conda CUDA libraries and system CUDA libraries show the same GLIBC dependency issue
The error occurs specifically when linking vision tools (llava, mtmd) that depend on CUDA libraries
Questions for Forum
How to resolve GLIBC version conflicts when building llama-cpp-python with CUDA?
Is there a way to use older/compatible CUDA libraries that don’t require GLIBC 2.27+?
Can the build be configured to skip problematic vision components while keeping core CUDA functionality?
Should I use the system compiler instead of the conda compiler, or create a different conda environment? (A rough sketch of the last two options is below.)
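For reference, here is a rough, untested sketch of the last two options. The compiler paths are assumptions, and LLAVA_BUILD is the option name that appears in recent llama-cpp-python CMakeLists.txt; please verify it exists in the version being built:

```bash
# Option A: force the system toolchain so the link resolves against the system GLIBC
# (assumes /usr/bin/gcc and /usr/bin/g++ are recent enough for CUDA 12.8)
CC=/usr/bin/gcc CXX=/usr/bin/g++ \
CMAKE_ARGS="-DGGML_CUDA=on -DCMAKE_CUDA_ARCHITECTURES=native" \
pip install -e . --verbose

# Option B: keep CUDA support but skip building the llava/mtmd tools
# (LLAVA_BUILD is assumed from llama-cpp-python's CMakeLists.txt; check before relying on it)
CMAKE_ARGS="-DGGML_CUDA=on -DCMAKE_CUDA_ARCHITECTURES=native -DLLAVA_BUILD=OFF" \
pip install -e . --verbose
```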
Additional Context
The build progresses successfully until the final linking stage for vision tools
Core CUDA libraries (libggml-cuda.so) appear to build successfully
Only the final executables that link against cuBLAS fail (the quick check sketched below can confirm this)
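A minimal way to double-check that the CUDA backend library that did build resolves its dependencies; the search root is a guess and depends on where the editable install placed its build artifacts:

```bash
# Look for the built CUDA backend and list any unresolved or libc/cuBLAS dependencies
find . -name 'libggml-cuda.so' -exec ldd {} \; | grep -E 'not found|libc\.so|libcublas'
```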
Any pointers on diagnosing or working around this GLIBC/CUDA library compatibility issue would be much appreciated.
Regards,