
215 points by francescopace | 3 comments

Hi everyone, I'm the author of ESPectre.

This is an open-source (GPLv3) project that detects motion by analyzing Wi-Fi Channel State Information (CSI), and it has already garnered almost 2,000 stars in two weeks.

Key technical details:

- The system does NOT use Machine Learning; it relies purely on math.
- Runs in real time on a super affordable chip like the ESP32.
- Integrates seamlessly with Home Assistant via MQTT.

francescopace No.45959155
Fun fact: I’m working on turning ESPectre into a Wi‑Fi Theremin (the musical instrument you play by moving your hands near an antenna).

The idea of “playing” by simply moving around a room sounds a bit ridiculous… but also kind of fun.

The key is the Moving Variance of the spatial turbulence: this value is continuous and stable, making it perfect for mapping directly to pitch/frequency, just like the original Theremin. Other features can be mapped to volume and timbre.
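For readers curious what a moving variance over a turbulence signal can look like in code, here is a minimal sketch in plain C (the names, the 32-sample window, and the ring-buffer approach are mine, not taken from the ESPectre codebase): the last N samples are kept in a ring buffer with running sums, so each update is O(1).

```c
#include <stddef.h>

#define MV_WINDOW 32  /* number of recent turbulence samples (assumed) */

typedef struct {
    double buf[MV_WINDOW];
    double sum, sum_sq;   /* running totals over the window */
    size_t idx, count;
} moving_var_t;

/* Push a new sample and return the variance over the current window. */
static double moving_var_push(moving_var_t *mv, double x) {
    if (mv->count == MV_WINDOW) {          /* evict the oldest sample */
        double old = mv->buf[mv->idx];
        mv->sum -= old;
        mv->sum_sq -= old * old;
    } else {
        mv->count++;
    }
    mv->buf[mv->idx] = x;
    mv->sum += x;
    mv->sum_sq += x * x;
    mv->idx = (mv->idx + 1) % MV_WINDOW;

    double mean = mv->sum / mv->count;
    return mv->sum_sq / mv->count - mean * mean;  /* population variance */
}
```

A still room yields a flat signal (variance near zero), while motion perturbs the samples and pushes the variance up smoothly, which is what makes it a usable continuous control signal.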

It’s pure signal processing, running entirely on the ESP32. Has anyone here experimented with audio synthesis or sonification using real-time signal processing?

replies(5): >>45959427 #>>45960041 #>>45960506 #>>45963262 #>>45977361 #
1. quinnjh No.45960506
I've worked on some sonification projects that used signals from Xbox Kinect lidar, piezos, and other sensors. A co-author on a paper I wrote developed a "strummable" theremin that divided physical space with invisible "strings" of various tunings. We preferred running synthesis on a PC when possible, outputting just MIDI and OSC, since DSP on the ESP32 is limited for anything that has to finish in under 5-10 ms. If the goal is hardware audio output, you may need to look into dedicated DSP chips and an audio shield for a better DAC, but for prototyping you can easily bang a square wave through any of the ESP32's pins.
replies(1): >>45963770 #
2. francescopace No.45963770
Thanks for the insights Quinnjh! Would love to hear more about your invisible strings tuning system!

The ESP32-S3 extracts a moving variance signal from spatial turbulence (updates at 20-50 Hz), and I want to map this directly to audio frequency using a passive buzzer + PWM (square wave, 200-2000 Hz range).
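A minimal sketch of the linear flavor of that mapping (the VAR_MIN/VAR_MAX calibration bounds are hypothetical; only the 200-2000 Hz range comes from the comment above), producing a frequency that could then be handed to whatever PWM driver controls the buzzer:

```c
/* Hypothetical calibration bounds for the moving-variance signal. */
#define VAR_MIN 0.0
#define VAR_MAX 1.0

/* Linearly map the moving variance onto the buzzer's
 * 200-2000 Hz range, clamping out-of-range input. */
static double variance_to_freq(double var) {
    double t = (var - VAR_MIN) / (VAR_MAX - VAR_MIN);
    if (t < 0.0) t = 0.0;
    if (t > 1.0) t = 1.0;
    return 200.0 + t * (2000.0 - 200.0);
}
```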

Two quick questions:

1. Do you see any pitfalls with updating PWM frequency at 20-50 Hz for responsive theremin-like behavior?

2. Any recommendations on mapping strategies - linear, logarithmic (musical scale), or quantized to specific notes?

replies(1): >>45971178 #
3. quinnjh No.45971178
you may be interested in some tech details on that project's prototypes here: https://www.quinnjh.net/projects/adaptive-instruments-projec...

As for the tuning system, we didn't get great demo recordings of it, but the invisible strings were linearly mapped as a range onto degrees of a given scale. In our use-case (letting people with disabilities jam without too much dissonance), that key+scale and the master tempo were broadcast to each instrument.
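That kind of scale-degree mapping can be sketched in a few lines of C; the pentatonic tuning below is just an example of mine, not the actual tuning from the paper:

```c
#include <stddef.h>

/* Example tuning: C major pentatonic as MIDI note numbers
 * (not the tuning from the paper). */
static const int SCALE[] = {60, 62, 64, 67, 69, 72};
#define N_STRINGS (sizeof SCALE / sizeof SCALE[0])

/* Divide a normalized position (0..1) into N_STRINGS equal zones
 * and return the MIDI note of the invisible "string" at that spot. */
static int position_to_note(double pos) {
    if (pos < 0.0) pos = 0.0;
    if (pos > 1.0) pos = 1.0;
    size_t i = (size_t)(pos * N_STRINGS);
    if (i >= N_STRINGS) i = N_STRINGS - 1;
    return SCALE[i];
}
```

Broadcasting a different SCALE array to each instrument is then enough to keep a whole group in the same key.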

It would have been interesting to play more with custom tunings, but the users we were designing for would have had a harder time using it consonantly. FWIW, fully abled folks like myself sound pretty bad on the theremin, and watching people play them in orchestras etc. shows the impressive level of "virtuosity" needed to place the hands properly. Quantizing the range of possible positions helps, but the tradeoff is sacrificing expressivity.

As for 1): yes, there will definitely be some pitfalls with the relatively slow updates, which may show up as "zipper noise" artifacts in the output.

For 2): logarithmic mapping between position and pitch is traditionally theremin-like, but since the theremin avoids zippering by being analog, you'll have to get creative with some smoothing/lerping and potentially further quantization. That's the fun and creative bit, though!

Would love to hear about your project again and what approaches you take, and happy to answer other q's so feel free to drop me a line.