
Is it possible to have wheels for pytorch 2.1.2 and cuda 11.8? #3

Open

AruniRC opened this issue Aug 4, 2024 · 7 comments

AruniRC commented Aug 4, 2024

Thanks so much for providing pre-compiled wheels (since tiny-cuda-nn can get difficult to install on some platforms)!

Is it possible to share similar wheels for pytorch 2.1.2 and cuda 11.8? These dependencies are used by Nerfstudio, and it would really help to have wheels in this case.

Thanks,
Aruni

AruniRC changed the title from "Is it possible to have wheels for pytorch 2.1.2 and cu 11.8?" to "Is it possible to have wheels for pytorch 2.1.2 and cuda 11.8?" on Aug 5, 2024

OutofAi commented Aug 6, 2024

Are you planning to run it locally or on Colab, and if so, which GPU are you using? The tricky part with CUDA wheel generation is that the target processing unit needs to be known as well, so that the wheel can be generated for the right arch type.


AruniRC commented Aug 9, 2024

Thank you for responding.

Planning to run it locally, and the GPUs can be any of Ampere/Ada/Hopper/Turing. I was wondering if having pre-built wheels for any (or all?) of these platforms would be feasible, or whether local builds on each architecture are the more hassle-free way to go.


OutofAi commented Aug 13, 2024

Sorry, I meant compute capability, not necessarily architecture: the values listed at https://developer.nvidia.com/cuda-gpus.
If it's only a handful, you can easily fork the repo, change the workflow file tiny-cuda-nn-wheels/.github/workflows/wheels-generator.yml so it builds the versions you need, and then run Actions -> CI -> Run Workflow to generate the relevant wheels.
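If you're unsure which compute capability your card has, one way to check locally is with PyTorch's torch.cuda.get_device_capability (a minimal sketch, assuming a CUDA-enabled PyTorch install):

import torch

# Print the compute capability of the first visible GPU, e.g. (8, 9) for Ada.
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f"GPU: {torch.cuda.get_device_name(0)}")
    print(f"Compute capability: {major}.{minor} (sm_{major}{minor})")
else:
    print("No CUDA-capable GPU detected.")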


AruniRC commented Aug 13, 2024

Yes, I realize this is a handful and I'm better off generating the wheels. I'll take a look at your generator yaml and try this out. Thanks again for creating this repo.

@gaperezsa

Hello Aruni, were you able to create said wheel? Would it be possible for you to share it?


OutofAi commented Sep 18, 2024

I kicked off a build for pytorch 2.1.2 and cuda 11.8


OutofAi commented Sep 18, 2024

@gaperezsa @AruniRC They are now available, or you can just run this script, choosing the architecture that matches your GPU, to download and install it on your machine:

import os
import subprocess

# Mapping of architecture to post numbers
arch_to_post = {
    'Turing': '75',
    'Ampere+Tegra': '87',
    'Ampere': '80',
    'Ada': '89',
    'Hopper': '90'
}

# Base URL for the wheel files
base_url = "https://github.com/OutofAi/tiny-cuda-nn-wheels/releases/download/1.7.2/"

def download_and_install_wheel(architecture, version='1.7', directory='.'):
    # Get the post number for the architecture
    post = arch_to_post.get(architecture)
    
    if post is None:
        print(f"Error: Unsupported architecture {architecture}")
        return
    
    # Construct the wheel file name
    wheel_filename = f"tinycudann-{version}.post{post}212118-cp310-cp310-linux_x86_64.whl"
    wheel_url = f"{base_url}{wheel_filename}"
    
    # Download the wheel file
    print(f"Downloading {wheel_filename} from {wheel_url}...")
    download_command = f'curl -L "{wheel_url}" -o {wheel_filename}'
    subprocess.run(download_command, shell=True, check=True)
    
    # Check if the file was downloaded successfully
    if os.path.exists(wheel_filename):
        # Install the wheel using pip
        print(f"Installing {wheel_filename} for {architecture} architecture...")
        subprocess.run(['pip', 'install', wheel_filename], check=True)
    else:
        print(f"Error: Failed to download {wheel_filename}")

# Example usage:
download_and_install_wheel('Ada')
import tinycudann as tcnn 
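If you'd rather not pick the architecture by hand, a small helper along these lines can map the compute capability reported by PyTorch onto the keys of arch_to_post above (a rough sketch, not part of the release script; the capability-to-name mapping is my own assumption, e.g. treating sm_86 cards as Ampere, so adjust it as needed):

import torch

# Hypothetical mapping from (major, minor) compute capability to the
# architecture names used by arch_to_post in the script above.
capability_to_arch = {
    (7, 5): 'Turing',
    (8, 0): 'Ampere',
    (8, 6): 'Ampere',        # assumption: use the Ampere wheel for sm_86 cards
    (8, 7): 'Ampere+Tegra',
    (8, 9): 'Ada',
    (9, 0): 'Hopper',
}

def detect_and_install():
    # Look up the first visible GPU and install the matching wheel.
    capability = torch.cuda.get_device_capability(0)
    architecture = capability_to_arch.get(capability)
    if architecture is None:
        print(f"Error: no wheel mapping for compute capability {capability}")
        return
    download_and_install_wheel(architecture)

detect_and_install()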
