wip - Mg/pytorch source builds #506

Draft · wants to merge 63 commits into main

Changes from all commits · 63 commits
52f1b4a
wip
malcolmgreaves Nov 13, 2024
ac53484
wip
malcolmgreaves Nov 13, 2024
6b3837f
wip
malcolmgreaves Nov 13, 2024
967b97f
wip
malcolmgreaves Nov 13, 2024
5f4003b
build python from source
malcolmgreaves Nov 13, 2024
ad3edd5
fix
malcolmgreaves Nov 13, 2024
b60893a
try this to see if it caches better
malcolmgreaves Nov 13, 2024
080a954
undo
malcolmgreaves Nov 13, 2024
64b7e90
use curl
malcolmgreaves Nov 13, 2024
9e3af13
needs curl
malcolmgreaves Nov 13, 2024
d8a7f8b
wip
malcolmgreaves Nov 13, 2024
bbb7a9c
wip adding more dependencies
malcolmgreaves Nov 13, 2024
2fb633b
full py version
malcolmgreaves Nov 13, 2024
ab33f3b
wip
malcolmgreaves Nov 13, 2024
989ec72
wip
malcolmgreaves Nov 13, 2024
6233f70
parse pyproject.toml for dependencies and put into unified requiremen…
malcolmgreaves Nov 13, 2024
8dab6b3
wip
malcolmgreaves Nov 13, 2024
4e3c113
DEBIAN_FRONTEND=noninteractive
malcolmgreaves Nov 13, 2024
a70dc7c
install pip
malcolmgreaves Nov 13, 2024
3956692
fixed: install pip and symlink for normal names
malcolmgreaves Nov 13, 2024
c8d6029
wip
malcolmgreaves Nov 13, 2024
0fa06eb
wip
malcolmgreaves Nov 13, 2024
6d3d800
apex has wrong build deps
malcolmgreaves Nov 13, 2024
f398dc1
ubuntu 20.04, cuda 12.1.1, use right index for pip torch install
malcolmgreaves Nov 13, 2024
f141d43
set CUDA_HOME and install sub-package code
malcolmgreaves Nov 14, 2024
66084f2
cache pytorch install
malcolmgreaves Nov 14, 2024
e72c2d3
wip
malcolmgreaves Nov 14, 2024
e3d1182
wip
malcolmgreaves Nov 14, 2024
c951117
fixing apex install...
malcolmgreaves Nov 14, 2024
c4a3ffd
compute capability
malcolmgreaves Nov 14, 2024
98fc4b7
wip
malcolmgreaves Nov 14, 2024
8417911
fixing
malcolmgreaves Nov 14, 2024
d144495
wip
malcolmgreaves Nov 14, 2024
23d3afb
upping cuda arch versions: I think bfloat16 is only supported at 8.0+
malcolmgreaves Nov 14, 2024
4096b26
ensure correct cuda is on PATH and remove wrong ones
malcolmgreaves Nov 14, 2024
c343e13
fix TE bug that prevented cuda 12.1.1 installation
malcolmgreaves Nov 14, 2024
070960a
wip
malcolmgreaves Nov 14, 2024
8c8c5cf
wip
malcolmgreaves Nov 14, 2024
785984a
no pipefail
malcolmgreaves Nov 14, 2024
ff322d7
attempting to fix this install..
malcolmgreaves Nov 15, 2024
f80a725
install yq & get rid of bionemo-geometric
malcolmgreaves Nov 15, 2024
731534a
wip
malcolmgreaves Nov 15, 2024
93e38c8
wip
malcolmgreaves Nov 15, 2024
b124d28
wip
malcolmgreaves Nov 15, 2024
9c263a2
wip
malcolmgreaves Nov 15, 2024
ae827a6
need xz
malcolmgreaves Nov 15, 2024
4b4c538
Patch to fix CUDA 12.1.1 support in Transformer Engine w/o updating b…
malcolmgreaves Nov 15, 2024
e06ba17
fix git patch application
malcolmgreaves Nov 15, 2024
aecccb0
don't need patch anymore...was using wrong commit
malcolmgreaves Nov 15, 2024
4dd9980
max_jobs=-1 for all, declare ARG closer to use for improved caching
malcolmgreaves Nov 15, 2024
29e8870
add missing overrides dep
malcolmgreaves Nov 19, 2024
9fe3681
Autocast down when not on ampere
malcolmgreaves Nov 19, 2024
f6e25e5
add setuptools
malcolmgreaves Nov 22, 2024
f4171d4
cuda 12.3 image build
malcolmgreaves Nov 22, 2024
35c5145
rename
malcolmgreaves Nov 22, 2024
fab385f
wip
malcolmgreaves Nov 22, 2024
3b52154
wip
malcolmgreaves Nov 22, 2024
0298a43
standardize
malcolmgreaves Nov 27, 2024
bf7a618
change base
malcolmgreaves Nov 28, 2024
3b13926
wip trying out pytorch build from source
malcolmgreaves Nov 28, 2024
f3a8aa0
torch and hf-inference dockerfiles
malcolmgreaves Dec 5, 2024
aba036d
wip
malcolmgreaves Dec 6, 2024
1c6f7ed
wip
malcolmgreaves Dec 6, 2024
31 changes: 31 additions & 0 deletions ci/docker/0001-Fix-to-support-CUDA-12.1.patch
@@ -0,0 +1,31 @@
From 77dc5a41476b9671193559ac666f7a3affaa872d Mon Sep 17 00:00:00 2001
From: Malcolm Greaves <[email protected]>
Date: Fri, 15 Nov 2024 03:53:16 +0000
Subject: [PATCH] Fix to support CUDA 12.1

---
transformer_engine/pytorch/csrc/userbuffers/userbuffers.cu | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/transformer_engine/pytorch/csrc/userbuffers/userbuffers.cu b/transformer_engine/pytorch/csrc/userbuffers/userbuffers.cu
index f98688b..c3cc7a6 100644
--- a/transformer_engine/pytorch/csrc/userbuffers/userbuffers.cu
+++ b/transformer_engine/pytorch/csrc/userbuffers/userbuffers.cu
@@ -5,6 +5,7 @@
 ************************************************************************/
 
 #include <cuda.h>
+#include <cuda_fp8.h>
 #include <cuda_runtime.h>
 
 #if __CUDA_ARCH__ >= 800
@@ -19,7 +20,6 @@
 #include <unistd.h>
 #include <stdio.h>
 #include <assert.h>
-#include <cuda_fp8.h>
 
 #define MAX_THREADS 1024
 
--
2.25.1
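
A minimal sketch of validating and applying this patch from inside a Transformer Engine checkout (the /workspace path is illustrative; note that per commit aecccb0 the patch later turned out to be unnecessary once the correct upstream commit was used):

    # dry run first to verify the patch applies cleanly, then apply it
    git apply --check /workspace/ci/docker/0001-Fix-to-support-CUDA-12.1.patch
    git apply /workspace/ci/docker/0001-Fix-to-support-CUDA-12.1.patch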
231 changes: 231 additions & 0 deletions ci/docker/Dockerfile_build_pytorch_from_source
@@ -0,0 +1,231 @@
FROM nvidia/cuda:12.4.1-devel-ubuntu22.04 AS pytorch-install

# NOTE: When updating PyTorch version, beware to remove `pip install nvidia-nccl-cu12==2.22.3` below in the Dockerfile. Context: https://github.com/huggingface/text-generation-inference/pull/2099
ARG PYTORCH_VERSION=2.4.0

ARG PYTHON_VERSION=3.11
# Keep in sync with `server/pyproject.toml`
ARG CUDA_VERSION=12.4
ARG MAMBA_VERSION=24.3.0-0
ARG CUDA_CHANNEL=nvidia
ARG INSTALL_CHANNEL=pytorch
# Automatically set by buildx
ARG TARGETPLATFORM

ENV PATH=/opt/conda/bin:$PATH

RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
        build-essential \
        ca-certificates \
        ccache \
        curl \
        git && \
        rm -rf /var/lib/apt/lists/*

# Install conda
# translating Docker's TARGETPLATFORM into mamba arches
RUN case ${TARGETPLATFORM} in \
        "linux/arm64") MAMBA_ARCH=aarch64 ;; \
        *) MAMBA_ARCH=x86_64 ;; \
    esac && \
    curl -fsSL -v -o ~/mambaforge.sh "https://github.com/conda-forge/miniforge/releases/download/${MAMBA_VERSION}/Mambaforge-${MAMBA_VERSION}-Linux-${MAMBA_ARCH}.sh"
RUN chmod +x ~/mambaforge.sh && \
    bash ~/mambaforge.sh -b -p /opt/conda && \
    rm ~/mambaforge.sh
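
# NOTE: Mambaforge has since been deprecated upstream in favor of Miniforge; newer
# MAMBA_VERSION values may only ship Miniforge3-* installers (an assumption to verify
# before bumping the pin).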

# Install pytorch
# On arm64 we exit with an error code
RUN case ${TARGETPLATFORM} in \
        "linux/arm64") exit 1 ;; \
        *) /opt/conda/bin/conda update -y conda && \
           /opt/conda/bin/conda install -c "${INSTALL_CHANNEL}" -c "${CUDA_CHANNEL}" -y "python=${PYTHON_VERSION}" "pytorch=$PYTORCH_VERSION" "pytorch-cuda=$(echo $CUDA_VERSION | cut -d'.' -f 1-2)" ;; \
    esac && \
    /opt/conda/bin/conda clean -ya
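
# The cut above keeps only major.minor from CUDA_VERSION (e.g. 12.4.1 -> 12.4), matching
# the pytorch-cuda meta-package naming in the nvidia channel.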

# CUDA kernels builder image
FROM pytorch-install AS kernel-builder

ARG MAX_JOBS=8
ENV TORCH_CUDA_ARCH_LIST="8.0;8.6;9.0+PTX"
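# Compute capabilities: 8.0 = A100, 8.6 = consumer Ampere (e.g. RTX 30xx), 9.0 = Hopper (H100);
# +PTX also embeds PTX so newer architectures can JIT-compile. bfloat16 kernels generally
# require compute capability 8.0+ (see commit 23d3afb).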

RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
        ninja-build cmake \
        && rm -rf /var/lib/apt/lists/*
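
# Each kernel below is built in its own stage derived from kernel-builder; each stage copies in
# only the Makefile that pins its source commit, so Docker can cache and rebuild them independently.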

# Build Flash Attention CUDA kernels
FROM kernel-builder AS flash-att-builder

WORKDIR /usr/src

COPY server/Makefile-flash-att Makefile

# Build specific version of flash attention
RUN make build-flash-attention

# Build Flash Attention v2 CUDA kernels
FROM kernel-builder AS flash-att-v2-builder

WORKDIR /usr/src

COPY server/Makefile-flash-att-v2 Makefile

# Build specific version of flash attention v2
RUN make build-flash-attention-v2-cuda

# Build Transformers exllama kernels
FROM kernel-builder AS exllama-kernels-builder
WORKDIR /usr/src
COPY server/exllama_kernels/ .

RUN python setup.py build

# Build Transformers exllamav2 kernels
FROM kernel-builder AS exllamav2-kernels-builder
WORKDIR /usr/src
COPY server/Makefile-exllamav2 Makefile

# Build specific version of exllamav2
RUN make build-exllamav2

# Build Transformers awq kernels
FROM kernel-builder AS awq-kernels-builder
WORKDIR /usr/src
COPY server/Makefile-awq Makefile
# Build specific version of awq kernels
RUN make build-awq

# Build eetq kernels
FROM kernel-builder AS eetq-kernels-builder
WORKDIR /usr/src
COPY server/Makefile-eetq Makefile
# Build specific version of eetq kernels
RUN make build-eetq

# Build Lorax Punica kernels
FROM kernel-builder AS lorax-punica-builder
WORKDIR /usr/src
COPY server/Makefile-lorax-punica Makefile
# Build specific version of lorax punica kernels
RUN TORCH_CUDA_ARCH_LIST="8.0;8.6+PTX" make build-lorax-punica

# Build Transformers CUDA kernels
FROM kernel-builder AS custom-kernels-builder
WORKDIR /usr/src
COPY server/custom_kernels/ .
# Build specific version of the custom kernels
RUN python setup.py build

# Build mamba kernels
FROM kernel-builder AS mamba-builder
WORKDIR /usr/src
COPY server/Makefile-selective-scan Makefile
RUN make build-all

# Build flashinfer
FROM kernel-builder AS flashinfer-builder
WORKDIR /usr/src
COPY server/Makefile-flashinfer Makefile
RUN make install-flashinfer

# Text Generation Inference base image
FROM nvidia/cuda:12.1.0-base-ubuntu22.04 AS base

# Conda env
ENV PATH=/opt/conda/bin:$PATH \
    CONDA_PREFIX=/opt/conda

# Text Generation Inference base env
ENV HF_HOME=/data \
    HF_HUB_ENABLE_HF_TRANSFER=1 \
    PORT=80
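
# HF_HUB_ENABLE_HF_TRANSFER=1 selects the Rust-based hf_transfer download backend for faster
# model pulls (assumed to be installed alongside the server requirements).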

WORKDIR /usr/src

RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
        libssl-dev \
        ca-certificates \
        make \
        curl \
        git \
        && rm -rf /var/lib/apt/lists/*

# Copy conda with PyTorch installed
COPY --from=pytorch-install /opt/conda /opt/conda

# Copy build artifacts from flash attention builder
COPY --from=flash-att-builder /usr/src/flash-attention/build/lib.linux-x86_64-cpython-311 /opt/conda/lib/python3.11/site-packages
COPY --from=flash-att-builder /usr/src/flash-attention/csrc/layer_norm/build/lib.linux-x86_64-cpython-311 /opt/conda/lib/python3.11/site-packages
COPY --from=flash-att-builder /usr/src/flash-attention/csrc/rotary/build/lib.linux-x86_64-cpython-311 /opt/conda/lib/python3.11/site-packages

# Copy build artifacts from flash attention v2 builder
COPY --from=flash-att-v2-builder /opt/conda/lib/python3.11/site-packages/flash_attn_2_cuda.cpython-311-x86_64-linux-gnu.so /opt/conda/lib/python3.11/site-packages

# Copy build artifacts from custom kernels builder
COPY --from=custom-kernels-builder /usr/src/build/lib.linux-x86_64-cpython-311 /opt/conda/lib/python3.11/site-packages
# Copy build artifacts from exllama kernels builder
COPY --from=exllama-kernels-builder /usr/src/build/lib.linux-x86_64-cpython-311 /opt/conda/lib/python3.11/site-packages
# Copy build artifacts from exllamav2 kernels builder
COPY --from=exllamav2-kernels-builder /usr/src/exllamav2/build/lib.linux-x86_64-cpython-311 /opt/conda/lib/python3.11/site-packages
# Copy build artifacts from awq kernels builder
COPY --from=awq-kernels-builder /usr/src/llm-awq/awq/kernels/build/lib.linux-x86_64-cpython-311 /opt/conda/lib/python3.11/site-packages
# Copy build artifacts from eetq kernels builder
COPY --from=eetq-kernels-builder /usr/src/eetq/build/lib.linux-x86_64-cpython-311 /opt/conda/lib/python3.11/site-packages
# Copy build artifacts from lorax punica kernels builder
COPY --from=lorax-punica-builder /usr/src/lorax-punica/server/punica_kernels/build/lib.linux-x86_64-cpython-311 /opt/conda/lib/python3.11/site-packages
# Copy build artifacts from mamba builder
COPY --from=mamba-builder /usr/src/mamba/build/lib.linux-x86_64-cpython-311/ /opt/conda/lib/python3.11/site-packages
COPY --from=mamba-builder /usr/src/causal-conv1d/build/lib.linux-x86_64-cpython-311/ /opt/conda/lib/python3.11/site-packages
COPY --from=flashinfer-builder /opt/conda/lib/python3.11/site-packages/flashinfer/ /opt/conda/lib/python3.11/site-packages/flashinfer/
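
# NOTE: all artifact paths above hardcode cpython-311/python3.11 and must be kept in sync with
# PYTHON_VERSION (3.11) from the pytorch-install stage.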

# Install flash-attention dependencies
RUN pip install einops --no-cache-dir

# Install server
COPY proto proto
COPY server server
COPY server/Makefile server/Makefile
RUN cd server && \
    make gen-server && \
    pip install -r requirements_cuda.txt && \
    pip install ".[attention, bnb, accelerate, compressed-tensors, marlin, moe, quantize, peft, outlines]" --no-cache-dir && \
    pip install nvidia-nccl-cu12==2.22.3
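
# nvidia-nccl-cu12 is pinned per the NOTE at the top of this file; the LD_PRELOAD below forces
# this pip-installed NCCL ahead of the copy bundled with conda's PyTorch.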

ENV LD_PRELOAD=/opt/conda/lib/python3.11/site-packages/nvidia/nccl/lib/libnccl.so.2
# Required to find libpython within the rust binaries
ENV LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/opt/conda/lib/"
# Needed because exl2 tries to load flash-attn and fails with our builds.
ENV EXLLAMA_NO_FLASH_ATTN=1

# Install deps before the binaries: the binaries change on every build (we burn the SHA
# into them), while the deps change less often.
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
        build-essential \
        g++ \
        && rm -rf /var/lib/apt/lists/*
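
# NOTE: the `builder` stage referenced below (Rust binaries for the benchmark, router, and
# launcher) is not defined in this file; it is assumed to come from the upstream TGI
# Dockerfile's Rust build stage.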

# Install benchmarker
COPY --from=builder /usr/src/target/release-opt/text-generation-benchmark /usr/local/bin/text-generation-benchmark
# Install router
COPY --from=builder /usr/src/target/release-opt/text-generation-router /usr/local/bin/text-generation-router
# Install launcher
COPY --from=builder /usr/src/target/release-opt/text-generation-launcher /usr/local/bin/text-generation-launcher


# AWS Sagemaker compatible image
FROM base AS sagemaker

COPY sagemaker-entrypoint.sh entrypoint.sh
RUN chmod +x entrypoint.sh

ENTRYPOINT ["./entrypoint.sh"]

# Final image
FROM base

COPY ./tgi-entrypoint.sh /tgi-entrypoint.sh
RUN chmod +x /tgi-entrypoint.sh

ENTRYPOINT ["/tgi-entrypoint.sh"]
# CMD ["--json-output"]
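
A hedged example of building this image from the repository root (the tag is illustrative; the build context must contain the proto/, server/, and entrypoint files referenced by the COPY steps):

    docker build -f ci/docker/Dockerfile_build_pytorch_from_source -t tgi-pytorch-src:dev .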