CUDA memory error during split #72

Open
bichanw opened this issue Oct 1, 2022 · 0 comments

bichanw commented Oct 1, 2022

As the title suggests, CUDA throws a memory error during the template-splitting step. I think it's because the cluster has too many spikes. The GPU has 4 GB of memory (I also tried with 2 GPUs).

I assume that since this isn't a batch-size issue, changing params.ntbuff wouldn't help. Would setting params.low_memory = True before the splitting step help?
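
For context, my sort.py call looks roughly like the sketch below. The params={'low_memory': True} override is only a guess at how the flag would reach run() (which the traceback shows being called with **params); I haven't confirmed that keyword against the pykilosort API.

from pykilosort.ibl import run_spike_sorting_ibl

# Assumed override: I'm guessing extra parameters are forwarded into
# run(bin_file, ..., **params). The real keyword/mechanism may differ.
run_spike_sorting_ibl(
    bin_file,
    delete=DELETE,
    scratch_dir=SCRATCH_DIR,
    params={'low_memory': True},  # hypothetical keyword, not confirmed
)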

14:51:09.469 [D] postprocess:841      Splitting template 29/314 with 6076604 spikes.
14:51:29.511 [E] ibl:104              Error in the main loop
Traceback (most recent call last):
  File "/scratch/gpfs/bichanw/pykilosort/pykilosort/ibl.py", line 97, in run_spike_sorting_ibl
    run(bin_file, dir_path=scratch_dir, output_dir=ks_output_dir, **params)
  File "/scratch/gpfs/bichanw/pykilosort/pykilosort/main.py", line 248, in run
    splitAllClusters(ctx, True)
  File "/scratch/gpfs/bichanw/pykilosort/pykilosort/postprocess.py", line 925, in splitAllClusters
    StS = cp.matmul(
  File "/scratch/gpfs/bichanw/miniconda3/envs/pyks2/lib/python3.10/site-packages/cupy/linalg/_product.py", line 42, in matmul
    return _gu_func_matmul(x1, x2, out=out, axes=axes)
  File "/scratch/gpfs/bichanw/miniconda3/envs/pyks2/lib/python3.10/site-packages/cupy/_core/_gufuncs.py", line 444, in __call__
    self._apply_func_to_inputs(0, dimsizess, loop_output_dims, args, outs)
  File "/scratch/gpfs/bichanw/miniconda3/envs/pyks2/lib/python3.10/site-packages/cupy/_core/_gufuncs.py", line 225, in _apply_func_to_inputs
    fouts = self._func(*args)
  File "cupy/_core/_routines_linalg.pyx", line 691, in cupy._core._routines_linalg.matmul
  File "cupy/_core/_routines_linalg.pyx", line 732, in cupy._core._routines_linalg.matmul
  File "cupy/_core/_routines_linalg.pyx", line 427, in cupy._core._routines_linalg.dot
  File "cupy/_core/_routines_linalg.pyx", line 470, in cupy._core._routines_linalg.tensordot_core
  File "cupy/_core/core.pyx", line 460, in cupy._core.core.ndarray.astype
  File "cupy/_core/core.pyx", line 167, in cupy._core.core.ndarray.__init__
  File "cupy/cuda/memory.pyx", line 718, in cupy.cuda.memory.alloc
  File "cupy/cuda/memory.pyx", line 1395, in cupy.cuda.memory.MemoryPool.malloc
  File "cupy/cuda/memory.pyx", line 1416, in cupy.cuda.memory.MemoryPool.malloc
  File "cupy/cuda/memory.pyx", line 1096, in cupy.cuda.memory.SingleDeviceMemoryPool.malloc
  File "cupy/cuda/memory.pyx", line 1117, in cupy.cuda.memory.SingleDeviceMemoryPool._malloc
  File "cupy/cuda/memory.pyx", line 1355, in cupy.cuda.memory.SingleDeviceMemoryPool._try_malloc
cupy.cuda.memory.OutOfMemoryError: Out of memory allocating 4,666,831,872 bytes (allocated so far: 14,464,125,440 bytes).
Traceback (most recent call last):
  File "/scratch/gpfs/bichanw/sort.py", line 24, in <module>
    run_spike_sorting_ibl(bin_file, delete=DELETE, scratch_dir=SCRATCH_DIR,
  File "/scratch/gpfs/bichanw/pykilosort/pykilosort/ibl.py", line 105, in run_spike_sorting_ibl
    raise e
  File "/scratch/gpfs/bichanw/pykilosort/pykilosort/ibl.py", line 97, in run_spike_sorting_ibl
    run(bin_file, dir_path=scratch_dir, output_dir=ks_output_dir, **params)
  File "/scratch/gpfs/bichanw/pykilosort/pykilosort/main.py", line 248, in run
    splitAllClusters(ctx, True)
  File "/scratch/gpfs/bichanw/pykilosort/pykilosort/postprocess.py", line 925, in splitAllClusters
    StS = cp.matmul(
  File "/scratch/gpfs/bichanw/miniconda3/envs/pyks2/lib/python3.10/site-packages/cupy/linalg/_product.py", line 42, in matmul
    return _gu_func_matmul(x1, x2, out=out, axes=axes)
  File "/scratch/gpfs/bichanw/miniconda3/envs/pyks2/lib/python3.10/site-packages/cupy/_core/_gufuncs.py", line 444, in __call__
    self._apply_func_to_inputs(0, dimsizess, loop_output_dims, args, outs)
  File "/scratch/gpfs/bichanw/miniconda3/envs/pyks2/lib/python3.10/site-packages/cupy/_core/_gufuncs.py", line 225, in _apply_func_to_inputs
    fouts = self._func(*args)
  File "cupy/_core/_routines_linalg.pyx", line 691, in cupy._core._routines_linalg.matmul
  File "cupy/_core/_routines_linalg.pyx", line 732, in cupy._core._routines_linalg.matmul
  File "cupy/_core/_routines_linalg.pyx", line 427, in cupy._core._routines_linalg.dot
  File "cupy/_core/_routines_linalg.pyx", line 470, in cupy._core._routines_linalg.tensordot_core
  File "cupy/_core/core.pyx", line 460, in cupy._core.core.ndarray.astype
  File "cupy/_core/core.pyx", line 167, in cupy._core.core.ndarray.__init__
  File "cupy/cuda/memory.pyx", line 718, in cupy.cuda.memory.alloc
  File "cupy/cuda/memory.pyx", line 1395, in cupy.cuda.memory.MemoryPool.malloc
  File "cupy/cuda/memory.pyx", line 1416, in cupy.cuda.memory.MemoryPool.malloc
  File "cupy/cuda/memory.pyx", line 1096, in cupy.cuda.memory.SingleDeviceMemoryPool.malloc
  File "cupy/cuda/memory.pyx", line 1117, in cupy.cuda.memory.SingleDeviceMemoryPool._malloc
  File "cupy/cuda/memory.pyx", line 1355, in cupy.cuda.memory.SingleDeviceMemoryPool._try_malloc
cupy.cuda.memory.OutOfMemoryError: Out of memory allocating 4,666,831,872 bytes (allocated so far: 14,464,125,440 bytes).
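
In case it helps narrow this down: the failed allocation of 4,666,831,872 bytes is exactly 6,076,604 spikes × 192 × 4 bytes, i.e. plausibly a float32 copy of this cluster's spike features, which fits the idea that the cluster is simply too large. As a generic workaround sketch (not pykilosort's actual code; I haven't checked the shapes at postprocess.py:925), a product that reduces over the spike axis could be accumulated in chunks so only one slice is cast and materialised at a time:

import cupy as cp

def chunked_gram(X, chunk_rows=200_000):
    # Accumulate X.T @ X over row (spike) chunks so the temporary
    # float32 copy made inside matmul only covers one chunk at a time.
    n_rows, n_cols = X.shape
    out = cp.zeros((n_cols, n_cols), dtype=cp.float32)
    for start in range(0, n_rows, chunk_rows):
        block = X[start:start + chunk_rows].astype(cp.float32)
        out += block.T @ block
    return out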