
Feature: Loudspeaker rendering #6

Open
trsonic opened this issue Aug 30, 2022 · 1 comment
Labels
question Further information is requested

Comments

@trsonic

trsonic commented Aug 30, 2022

Hello! :)

I've been thinking of using the non-HRTF part of your workflow to render audio on a 50-channel Lebedev-grid-based loudspeaker sphere. I believe the DOA quantization, as well as other steps, could be beneficial in this case.

Do you have any pointers on how I should go about synthesizing the individual loudspeaker filters from SRIR_data?

Ideally I would like to still apply the RT compensation and all-pass filter.

Thanks!
T.

@svamengualgari
Contributor

Hi Tomasz!

There are a few things to consider with your application:

  • The DOA quantization when rendering to binaural is equivalent to Nearest Loudspeaker Synthesis (NLS), which is what the original SDM Toolbox by Tervo and Patynen does. You could use our toolbox with a bit of a hack: synthesize a dummy HRIR set that has only 50 directions, where the time-domain signal for each direction is just an impulse. However, the way we implemented the synthesis, you never end up with an LxN array (L being the number of loudspeaker channels and N the number of RIR samples); we directly build the 2xN array for the final binaural signals. So you would need to modify the code and stop the process at that point. This would obviously be a hack, and it might be easier to just use the original toolbox and remove the equalization part, in case you want to use RT compensation and all-pass filtering.
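To make the NLS idea concrete, here is a minimal sketch of distributing each RIR sample to its nearest loudspeaker direction. All names (`p`, `doa`, `ls_dirs`, `nls_render`) are hypothetical and not from the toolbox; it assumes you already have the omni pressure RIR, per-sample unit DOA vectors, and the 50 Lebedev-grid unit vectors:

```python
import numpy as np

def nls_render(p, doa, ls_dirs):
    """Nearest Loudspeaker Synthesis sketch.

    p       : (N,)   pressure RIR samples
    doa     : (N, 3) unit DOA vector per sample
    ls_dirs : (L, 3) unit vectors of the loudspeaker positions
    Returns an (L, N) array of per-loudspeaker filters.
    """
    L, N = ls_dirs.shape[0], p.shape[0]
    out = np.zeros((L, N))
    # Nearest loudspeaker by maximum dot product (all vectors unit length).
    idx = np.argmax(doa @ ls_dirs.T, axis=1)  # (N,)
    # Each sample goes entirely to its nearest loudspeaker channel.
    out[idx, np.arange(N)] = p
    return out
```

This is exactly the LxN array mentioned above; summing `out` over the loudspeaker axis recovers the original pressure RIR.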

  • Equalization: I think it would work the same with more loudspeakers, but I haven't tested the perceived result when applying it to that many channels. In the binaural case we apply it directly to the binaural signals, so it is possible that different all-pass filters would work better for loudspeakers. Similarly for the RT compensation: each of the 50 loudspeaker channels would be quite sparse, so the RT estimation process may not be robust enough (the Energy Decay Curve might have lots of discontinuities) and the quality of the results might be very case dependent.
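To see why sparse per-channel RIRs can break RT estimation, you can compute the Energy Decay Curve via standard Schroeder backward integration and inspect it per channel. This is a generic sketch (the function name and signature are illustrative, not the toolbox's API):

```python
import numpy as np

def edc_db(h, eps=1e-12):
    """Energy Decay Curve of an impulse response via Schroeder
    backward integration, normalized to 0 dB at the start."""
    # Cumulative remaining energy from each sample to the end.
    energy = np.cumsum(h[::-1] ** 2)[::-1]
    return 10.0 * np.log10(energy / (energy[0] + eps) + eps)
```

For a dense RIR the EDC decays smoothly and a line fit gives a stable RT; for a channel that only receives a handful of NLS samples, the EDC becomes a staircase of plateaus and the fitted slope (and hence the RT compensation) can vary wildly between channels.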

I think you could also check the Ambisonics SDM work from the IEM Graz team; they have some open-source repos as well that might work well for your application.

@HaHeho HaHeho changed the title Feature request: loudspeaker rendering. Feature: Loudspeaker rendering Sep 17, 2022
@HaHeho HaHeho added the question Further information is requested label Sep 17, 2022