
77 points cochlear | 1 comment | source

Hi, I'm the author of this little Web Audio toy, which does physical modeling synthesis using a simple spring-mass system.

My current area of research is in sparse, event-based encodings of musical audio (https://blog.cochlea.xyz/sparse-interpretable-audio-codec-pa...). I'm very interested in decomposing audio signals into a description of the "system" (e.g., room, instrument, vocal tract, etc.) and a sparse "control signal" which describes how and when energy is injected into that system. This toy was a great way to start learning about physical modeling synthesis, which seems to be the next stop in my research journey. I was also pleasantly surprised at what's possible these days writing custom Audio Worklets!

danbmil99 ◴[] No.43368598[source]
Very cool! I've often wondered whether one could procedurally generate the sounds of objects interacting in a physics engine. This approach seems like a good place to start.
replies(2): >>43370049 #>>43371292 #
1. cochlear ◴[] No.43371292[source]
Same here! Not a physics engine per se, but I've been eyeing Taichi Lang (https://github.com/taichi-dev/taichi) as a potential next stop for running this on a much larger scale.

My assumption has been that any physics engine that does soft-body physics would work in this regard, just run at a much higher step rate than one would normally use in a gaming scenario (game physics typically steps at something like 60-240 Hz, versus tens of thousands of steps per second for audio). This simulation is actually only running at 22,050 Hz, rather than today's standard 44,100 Hz sampling rate.