Beat detection without scheduling etc. #6
Actually, delving further, it looks like this is caused by MusicBeatDetector not closing the stream when lame has finished sending data (I think; I'm not too hot on Node streams). Piping lame directly to a writable stream finishes straight away, but not with MusicBeatDetector in the middle. I think the pipe times out or ends for another reason (rather than it being related to the "time" of the song); when piping to Speaker this isn't an issue, since for a typical song the processing finishes before playback does.
OK, more digging, and I've realised this is a duplicate of issue #2: the song had silence at the end and was not finishing processing (I had only been trying this out with one song, so I hadn't noticed that it worked for others!). Specifically, _analyzeBuffer takes an age to finish (or causes an out-of-memory error), which is why the stream doesn't complete until after it has. The fix suggested there (changing SAMPLES_WINDOW = FREQ * 0.5) works, but gives a different set of detected beats than * 1.5.
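For reference, the change suggested in issue #2 is a one-line constant tweak inside the library. A minimal sketch, assuming FREQ is the 44100 Hz PCM sample rate the library expects (this file and the exact constant placement are illustrative, not the library's actual source):

```javascript
// Sketch of the fix suggested in issue #2 (not the library's actual file).
// FREQ is assumed to be the PCM sample rate (44100 Hz). Shrinking the
// window from 1.5 s to 0.5 s of samples changes which local maxima
// qualify as beats, which is why a different beat set is detected.
const FREQ = 44100
const SAMPLES_WINDOW = FREQ * 0.5 // was: FREQ * 1.5
```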
Yet more digging: the issue is that with the silence at the end, filteredLeft in _analyseBuffer is 0 (zero), and passing 0 to slidingWindowMax.add seems to make it hang. So the correct fix looks to be either fixing slidingWindowMax to process 0 correctly (I've had a look and I can't get my head around it!) or preventing filteredLeft from being 0 (if that is an invalid value). For the moment, to work around it I have:
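The original snippet was not captured in this thread, so the following is only a sketch of the kind of guard described: clamp an exact zero to a tiny non-zero value before it reaches slidingWindowMax.add, leaving the rest of the loop untouched. The helper below is a tiny stand-in, not the real sliding-window-max package, and the function names are hypothetical:

```javascript
// Tiny stand-in for the real sliding-window-max helper, just for this sketch.
function makeWindowMax(size) {
  const win = []
  return {
    add(v) {
      win.push(v)
      if (win.length > size) win.shift()
      return Math.max(...win)
    },
  }
}

// Hypothetical guard: never hand an exact 0 to the sliding-window code.
function safeAnalyseSample(filteredLeft, slidingWindowMax, state) {
  const value = filteredLeft === 0 ? Number.MIN_VALUE : filteredLeft
  const max = slidingWindowMax.add(value)
  state.pos++ // the rest of the loop (this.pos++) stays intact
  return max
}
```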
This isn't an elegant solution, but it leaves the rest of the code (this.pos++) intact, as that still needs to be in place to get the correct output.
Hi,
I am trying to get the timing of beats in an MP3 file to auto-generate "step" files for music games like StepMania. Essentially, I want to quickly analyse the MP3 and get the list of pos/bpm values for each beat, without having to "play" the whole song. The code I am trying is this:
This does what I want and processes the file in a couple of seconds; however, once we get to "Mp3 read stream: 100%" (printed twice, for some reason) the script hangs in silence for what I think is the duration of the actual MP3. I assume this is because the file is being piped at normal speed through to /dev/null as if it were being played. Is there any way to just do the analysis without the extra "playing" step?
Thanks for a great lib.