MemoryError #57
Comments
I am getting the same error with an 800 KB file that is 5 seconds in duration.

```
MemoryError: Unable to allocate array with shape (199898, 60002) and data type float64
```
What is the shape of the input data?
The shape of my input data is (199898, 2) and the sample rate is 44100.
There should be no problem with data size; I've run the algorithm on 30+ minute files (there are params to do chunking for you). Can you try transposing the array? It might be that the input is channels x samples rather than samples x channels. If so, I should make a PR to add a warning when more channels than samples are present.
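For illustration, a minimal sketch of such a check (`to_channels_first` is a hypothetical helper, not part of noisereduce; it assumes the library wants channels on the first axis, as the rest of this thread suggests):

```python
import numpy as np

def to_channels_first(data: np.ndarray) -> np.ndarray:
    """Return audio as (n_channels, n_samples), transposing if needed.

    Heuristic: real recordings have far more samples than channels, so if
    the first axis is the longer one, assume it is the samples axis.
    """
    if data.ndim == 2 and data.shape[0] > data.shape[1]:
        return data.T
    return data
```

Calling `nr.reduce_noise(y=to_channels_first(data), sr=rate)` would then behave the same for either input layout.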
@timsainb, thanks for the reply.

```
array([[ 0.00000000e+00, 0.00000000e+00],
```

After transposing the data with data.T before passing it in, as shown above, the bug was fixed.
@joingreat The package should be able to handle long and multi-channel data fine. Can either of you try to reproduce your error in a colab notebook so I can take a look?
I had the same issue and I fixed it by using soundfile, as in the code below. I found it from these two Stack Overflow posts: https://stackoverflow.com/questions/57137050/error-passing-wav-file-to-ipython-display/57137391
```python
import IPython
import soundfile as sf
import noisereduce as nr

data, rate = sf.read('chunk0.wav')
reduced_noise = nr.reduce_noise(y=data.T, sr=rate, n_std_thresh_stationary=1, stationary=True)
```

The chunk0.wav was uploaded above as chunk0.zip. reduced_noise is fine after the .T transpose, but the result sounds a little harsh, especially at the beginning. Maybe splitting the file hurt it?
I am not sure that it's related, but I had that problem too, and after transposing the data the function reduce_noise() seemed to work well, but then I got an error in wavfile.write(). The code (example.wav is inside zipped.zip):

```python
from scipy.io import wavfile
import noisereduce as nr

# load data
rate, data = wavfile.read("example.wav")
# perform noise reduction
reduced_noise = nr.reduce_noise(y=data.T, sr=rate)
wavfile.write("reduced.wav", rate, reduced_noise)
```
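A likely explanation (an assumption on my part, not confirmed in the thread): `scipy.io.wavfile.write` expects multi-channel data as (n_samples, n_channels), while `reduce_noise` returns the (n_channels, n_samples) layout it was given, so transposing back before writing may avoid the error. A minimal sketch:

```python
from scipy.io import wavfile
import noisereduce as nr

rate, data = wavfile.read("example.wav")            # data: (n_samples, n_channels)
reduced_noise = nr.reduce_noise(y=data.T, sr=rate)  # output: (n_channels, n_samples)
# Transpose back so wavfile.write gets (n_samples, n_channels) again
# (assumption: the write error is this shape mismatch).
wavfile.write("reduced.wav", rate, reduced_noise.T)
```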
Update:

```python
import numpy as np
from scipy.io import wavfile
import noisereduce as nr

# load data
rate, data = wavfile.read("example.wav")
data1 = data[:, 0]
data2 = data[:, 1]
# perform noise reduction on each channel separately
reduced_noise1 = nr.reduce_noise(y=data1, sr=rate)
reduced_noise2 = nr.reduce_noise(y=data2, sr=rate)
# stack the two mono results back into (n_samples, n_channels)
reduced_noise = np.stack((reduced_noise1, reduced_noise2), axis=1)
wavfile.write("reduced.wav", rate, reduced_noise)
```
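For what it's worth, the same workaround generalizes to any channel count; a hypothetical sketch (the loop is illustrative, not from the thread):

```python
import numpy as np
from scipy.io import wavfile
import noisereduce as nr

rate, data = wavfile.read("example.wav")  # (n_samples, n_channels)
# Denoise each channel independently, then re-stack the results as columns.
channels = [nr.reduce_noise(y=data[:, c], sr=rate) for c in range(data.shape[1])]
reduced_noise = np.stack(channels, axis=1)
wavfile.write("reduced.wav", rate, reduced_noise)
```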
@hananell why does it work with mono? And does splitting the audio into two channels and merging them back affect the result?
It seems stereo input needs the format (n_channels, frames), which is different from the output of soundfile, so you need to transpose it first.
Had the same issue; converting the input wav to mono fixed it. data.shape was (1757089, 2).

Cool project, by the way! I used an AI-powered online tool and it worked significantly better, but it introduced weird hallucinations (and they want to charge $500 to process all the audio xD)
I'm on Windows 10 in a Jupyter environment. The audio file is 30 minutes long, so I cut it into 10-second chunks and continued; on the first chunk (chunk0) I hit a MemoryError.
Is the file still too large for this situation? I followed this notebook:
https://colab.research.google.com/github/timsainb/noisereduce/blob/master/notebooks/1.0-test-noise-reduction.ipynb#scrollTo=E5UkLtmT3xy3, where the sample only lasts four seconds.
Or would parameter tuning help here?

```
MemoryError: Unable to allocate 197. GiB for an array with shape (441000, 60002) and data type float64
```
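If memory stays tight even on short chunks, the chunking parameters timsainb mentioned may help; a sketch, assuming a noisereduce version that exposes `chunk_size` and `padding` (check your version's signature before relying on them):

```python
import soundfile as sf
import noisereduce as nr

data, rate = sf.read("chunk0.wav")  # soundfile returns (n_samples, n_channels)
reduced = nr.reduce_noise(
    y=data.T,          # transpose to (n_channels, n_samples) as discussed above
    sr=rate,
    chunk_size=60000,  # assumed parameter: process the signal in smaller chunks
    padding=3000,      # assumed parameter: context around each chunk edge
)
```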