
77 points by cochlear | 2 comments

Hi, I'm the author of this little Web Audio toy which does physical modeling synthesis using a simple spring-mass system.

My current area of research is in sparse, event-based encodings of musical audio (https://blog.cochlea.xyz/sparse-interpretable-audio-codec-pa...). I'm very interested in decomposing audio signals into a description of the "system" (e.g., room, instrument, vocal tract, etc.) and a sparse "control signal" which describes how and when energy is injected into that system. This toy was a great way to start learning about physical modeling synthesis, which seems to be the next stop in my research journey. I was also pleasantly surprised at what's possible these days writing custom Audio Worklets!
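The spring-mass idea can be sketched in a few lines. This is a minimal, assumed model (a 1-D chain of masses coupled by springs, integrated with symplectic Euler, "plucked" by displacing one mass), not the actual code behind the demo:

```javascript
// Minimal spring-mass "string": each mass is pulled toward its
// neighbors; displacing one mass injects energy, and reading one
// mass's displacement per step yields an audio signal.
function simulateString({ nMasses = 16, stiffness = 0.3, damping = 0.001, steps = 512 } = {}) {
  const pos = new Float32Array(nMasses); // displacement of each mass
  const vel = new Float32Array(nMasses); // velocity of each mass
  pos[Math.floor(nMasses / 2)] = 1.0;    // "pluck": displace the middle mass
  const out = new Float32Array(steps);
  for (let t = 0; t < steps; t++) {
    for (let i = 0; i < nMasses; i++) {
      // spring force from each neighbor (fixed boundaries at the ends)
      const left = i > 0 ? pos[i - 1] : 0;
      const right = i < nMasses - 1 ? pos[i + 1] : 0;
      const force = stiffness * (left + right - 2 * pos[i]);
      vel[i] = (vel[i] + force) * (1 - damping); // damping bleeds energy out
    }
    for (let i = 0; i < nMasses; i++) pos[i] += vel[i];
    out[t] = pos[Math.floor(nMasses / 4)]; // pick up at one mass
  }
  return out;
}
```

In an AudioWorklet, the inner loop would run once per sample inside `process()`; the stiffness and pickup position map naturally onto pitch and timbre controls.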

1. chaosprint ◴[] No.43371402[source]
Great demo.

I used to do some Web Audio and Tone.js work, but later switched to Rust and Glicol for sound synthesis.

For example, this hand-written Dattorro reverb:

https://glicol.org/demo#handmadedattorroreverb

This karplus-stress-tester may also be interesting to you.

https://jackschaedler.github.io/karplus-stress-tester/
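For anyone unfamiliar, the Karplus-Strong algorithm that stress test is built around is tiny: a noise-filled delay line whose output is averaged with its neighbor, modeling a decaying plucked string. A rough sketch (illustrative only, not code from the linked page):

```javascript
// Karplus-Strong plucked string: the delay-line length sets the pitch,
// and the averaging filter acts as a damped lowpass that decays the tone.
function karplusStrong({ sampleRate = 44100, freq = 220, seconds = 0.05 } = {}) {
  const period = Math.round(sampleRate / freq); // delay-line length in samples
  const delay = Float32Array.from({ length: period }, () => Math.random() * 2 - 1);
  const out = new Float32Array(Math.round(sampleRate * seconds));
  let i = 0;
  for (let t = 0; t < out.length; t++) {
    const next = (i + 1) % period;
    out[t] = delay[i];
    // average with the next sample and scale slightly below 1 to decay
    delay[i] = 0.996 * 0.5 * (delay[i] + delay[next]);
    i = next;
  }
  return out;
}
```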

In short, I think that to explore more powerful physical modeling synthesis, you need to consider this technology stack:

- Rust -> WASM -> AudioWorklet -> SharedArrayBuffer
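The SharedArrayBuffer piece of that stack usually means a lock-free ring buffer: a producer (e.g. a WASM worker) writes samples, the worklet's `process()` drains them, and `Atomics` keeps the indices coherent without locks. A hedged sketch, with names (`RingBuffer`, `bytesNeeded`, etc.) that are illustrative rather than from any library:

```javascript
// Single-producer/single-consumer ring buffer over a SharedArrayBuffer.
// Layout: two Int32 indices ([write, read]) followed by the sample data.
class RingBuffer {
  constructor(sab, capacity) {
    this.indices = new Int32Array(sab, 0, 2);   // [0] = write, [1] = read
    this.data = new Float32Array(sab, 8, capacity);
    this.capacity = capacity;                   // one slot is kept empty
  }
  static bytesNeeded(capacity) { return 8 + capacity * 4; }
  push(samples) {                               // producer side
    let w = Atomics.load(this.indices, 0);
    const r = Atomics.load(this.indices, 1);
    for (const s of samples) {
      const next = (w + 1) % this.capacity;
      if (next === r) break;                    // full: drop the rest
      this.data[w] = s;
      w = next;
    }
    Atomics.store(this.indices, 0, w);
  }
  pop(out) {                                    // consumer side (the worklet)
    let r = Atomics.load(this.indices, 1);
    const w = Atomics.load(this.indices, 0);
    let n = 0;
    while (r !== w && n < out.length) {
      out[n++] = this.data[r];
      r = (r + 1) % this.capacity;
    }
    Atomics.store(this.indices, 1, r);
    return n;                                   // samples actually read
  }
}
```

Because neither side ever blocks, the audio thread never waits on the producer, which is the whole point versus `postMessage`-based transport.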

Visuals can rely on wgpu. Of course, WebGL is enough in this case, imho.

If it's purely desktop, you could consider using a physics library in Bevy.

replies(1): >>43371463 #
2. cochlear ◴[] No.43371463[source]
You're the author of Glicol, right? I've definitely had my eye on trying it out for a while. The karplus-stress-tester is great; I'm currently using message ports because they seemed the most accessible option at first, but I'm happy to know there are other, better options. I've done quite a bit of hand-optimizing of the code here, and while there's probably still juice left to squeeze, it has become apparent to me that WASM is probably my next stop.

I've written one other AudioWorklet at this point, which just runs "inference" on a single-layer RNN given a pre-trained set of weights: https://blog.cochlea.xyz/rnn.html. It has similarly mediocre performance.
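For a sense of what "inference" means per audio sample, here's a minimal single-layer RNN step. The architecture and names here are assumptions for illustration, not the actual weights or code from the linked demo:

```javascript
// One vanilla-RNN recurrence: h' = tanh(Wx * x + Wh * h + b),
// run once per audio sample with a scalar input x.
// wx and b have length H; wh is an H*H matrix stored row-major.
function rnnStep(x, h, { wx, wh, b }) {
  const next = new Float32Array(h.length);
  for (let i = 0; i < h.length; i++) {
    let acc = b[i] + wx[i] * x;              // input contribution
    for (let j = 0; j < h.length; j++) {
      acc += wh[i * h.length + j] * h[j];    // recurrent contribution
    }
    next[i] = Math.tanh(acc);
  }
  return next;
}
```

The O(H^2) recurrent matmul per sample is exactly the part that plain JS struggles with at 44.1 kHz, and where WASM (or blocking the recurrence into larger chunks) should help.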

Thanks for all the great tips, and for your work on Glicol!