Preallocate CUDA memory for PyTorch

This tool preallocates GPU memory for PyTorch. It is useful when you share a machine and need to reserve GPU memory before other processes claim it.

You can run it directly from the command line:

```shell
python -m preallocate_cuda_memory
```

Or you can use it from a Python script:

```python
import preallocate_cuda_memory as pc

mc = pc.MemoryController(0)       # 0 is the GPU index
mc.occupy_all_available_memory()  # reserve all currently free memory on that GPU
mc.free_memory()                  # release the reserved memory when done
```
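To illustrate the strategy a tool like this typically uses — repeatedly request large blocks and shrink the request size on out-of-memory failures until nothing meaningful fits — here is a hedged sketch. A simulated allocator (`FakeGpu`) stands in for the real CUDA device so the example runs without a GPU; the actual internals of this project may differ.

```python
class FakeGpu:
    """Stands in for a CUDA device with a fixed amount of free memory."""

    def __init__(self, free_bytes):
        self.free_bytes = free_bytes
        self.blocks = []  # sizes of blocks we currently hold

    def malloc(self, size):
        if size > self.free_bytes:
            raise MemoryError("out of memory")
        self.free_bytes -= size
        self.blocks.append(size)

    def free_all(self):
        self.free_bytes += sum(self.blocks)
        self.blocks.clear()


def occupy_all_available_memory(gpu, chunk=1 << 20, min_chunk=1 << 10):
    """Greedily allocate chunks, halving the chunk size whenever an
    allocation fails, until even a min_chunk-sized block does not fit."""
    while chunk >= min_chunk:
        try:
            gpu.malloc(chunk)
        except MemoryError:
            chunk //= 2  # large blocks no longer fit; try smaller ones


gpu = FakeGpu(free_bytes=3 * (1 << 20) + 12345)  # ~3 MiB free
occupy_all_available_memory(gpu)
print(gpu.free_bytes)  # less than min_chunk bytes remain unclaimed
gpu.free_all()
print(gpu.free_bytes)  # back to the original free amount
```

The halving loop is why such tools can grab nearly all free memory rather than stopping at the first failed allocation: the leftover space after the loop is always smaller than `min_chunk`.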

If you find any problems, please contact the author by opening an issue on GitHub.
