
40 points by PaulHoule | 1 comment
Lapsa ◴[] No.45957619[source]
More or less the same crap is achieved via radiomyography from your local telecom towers, down to deciphering your inner monologue.
Lapsa ◴[] No.45958610[source]
Department of Defense Fiscal Year (FY) 2010 Budget Estimates May 2009, USA

```
Silent Talk (U)

Silent Talk will allow user-to-user communication on the battlefield without the use of vocalized speech through analysis of neural signals. The brain generates word-specific signals prior to sending electrical impulses to the vocal cords. These signals of “intended speech” will be analyzed and translated into distinct words, allowing covert person-to-person communication. This program has three major goals: a) to attempt to identify electroencephalography patterns unique to individual words, b) ensure that those patterns are generalizable across users in order to prevent extensive device training, and c) construct a fieldable pre-prototype that would decode the signal and transmit over a limited range.
```

https://commons.wikimedia.org/wiki/File:Fiscal_Year_2010_DAR...
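
To make goals (a) and (b) concrete, here is a minimal sketch of the kind of experiment that description implies, assuming you already have labeled EEG epochs per intended word. The data shapes, the log band-power features, and the leave-one-subject-out split are all illustrative assumptions, not anything specified in the budget document:

```
# Hypothetical sketch of Silent Talk goals (a) word-specific EEG patterns
# and (b) cross-user generalization. Data, features, and splits are all
# assumptions made up for illustration.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def band_powers(epochs, fs=256, bands=((4, 8), (8, 13), (13, 30))):
    """Log band power per channel -> (n_epochs, n_channels * n_bands)."""
    freqs, psd = welch(epochs, fs=fs, axis=-1)
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(np.log(psd[..., mask].mean(axis=-1)))
    return np.concatenate(feats, axis=-1)

# Fake dataset: 5 subjects x 40 epochs each, 16 channels, 1 s at 256 Hz,
# every epoch labeled with one of 4 "intended words".
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((200, 16, 256))
words = rng.integers(0, 4, size=200)
subjects = np.repeat(np.arange(5), 40)

X = band_powers(X_raw)
clf = LogisticRegression(max_iter=1000)
# Leave-one-subject-out answers goal (b): do word patterns transfer to a
# user the model has never seen?
scores = cross_val_score(clf, X, words, groups=subjects, cv=LeaveOneGroupOut())
print("held-out-subject accuracy:", scores.mean())  # ~chance on random data
```

On random data this sits at chance (~0.25 for four words); the bet the program describes is that real “intended speech” EEG beats that, and keeps beating it on users outside the training set.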

thesz ◴[] No.45959024[source]
This looks no harder than training a custom Kaldi (circa 2017) phoneme model on brain waves and using the rest of Kaldi's functionality for everything else, except text-to-speech. WaveNet existed for TTS at that time, with sound quality good enough for (and can be improved by) radio transmission.
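
A toy stand-in for that pipeline, assuming per-frame EEG features with phoneme labels: a frame classifier plus Viterbi decoding over phoneme transition probabilities, which is roughly what a Kaldi monophone recipe does with GMMs and an HMM. None of this is Kaldi's actual API; it only sketches the decoding idea:

```
# Toy stand-in for a Kaldi-style decode: per-frame "phoneme" classifier
# over EEG features, then Viterbi with phoneme transition probabilities.
# Features, labels, and transitions are fabricated for illustration.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
n_phones, n_feat = 5, 12

# Fake training data: EEG feature frames with phoneme labels.
X_train = rng.standard_normal((2000, n_feat))
y_train = rng.integers(0, n_phones, size=2000)
frame_model = GaussianNB().fit(X_train, y_train)

# Assumed phoneme "language model": sticky self-transitions.
A = np.full((n_phones, n_phones), 0.05 / (n_phones - 1))
np.fill_diagonal(A, 0.95)
log_A = np.log(A)

def viterbi(log_emit, log_A):
    """Most likely phoneme sequence given per-frame log-probabilities."""
    T, N = log_emit.shape
    delta = np.zeros((T, N))
    back = np.zeros((T, N), dtype=int)
    delta[0] = log_emit[0] - np.log(N)        # uniform start
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A  # (prev, cur)
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_emit[t]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

X_test = rng.standard_normal((100, n_feat))   # 100 EEG frames
log_emit = frame_model.predict_log_proba(X_test)
print(viterbi(log_emit, log_A)[:20])
```

A real system would swap the GaussianNB for Kaldi's trained acoustic models and the hand-set transition matrix for a proper phoneme/word lattice; the structure of the decode stays the same.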

Thanks for the link!

Lapsa ◴[] No.45963500[source]
My mostly uneducated guess at what's going on: a radio wave gets sent, the human body slightly modulates it, and the same signal gets received back and used to reconstruct an (approximation of the) EEG from the noise delta. Neural models are the secret sauce that makes such signal processing possible.
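
Purely to make that guess concrete, here is a numpy toy in which a carrier is weakly amplitude-modulated by a slow “EEG-like” waveform and recovered by coherent demodulation (mix with the known carrier, low-pass filter). It demonstrates only the hypothesized signal-processing step, not that bodies measurably modulate telecom signals:

```
# Toy version of the guess above: a carrier weakly amplitude-modulated by
# a slow "EEG-like" waveform, recovered by coherent AM demodulation. All
# numbers (carrier, modulation depth, noise level) are made up.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100_000                                  # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)
f_carrier = 10_000                            # "tower" carrier, Hz

eeg_like = 0.7 * np.sin(2 * np.pi * 10 * t)   # 10 Hz alpha-band stand-in
carrier = np.cos(2 * np.pi * f_carrier * t)

# Returned signal: tiny hypothesized body-induced modulation plus noise.
noise = 0.05 * np.random.default_rng(2).standard_normal(t.size)
rx = (1 + 0.01 * eeg_like) * carrier + noise

# Coherent demodulation: multiply by the known carrier, keep the baseband.
mixed = rx * carrier
b, a = butter(4, 40 / (fs / 2))               # 40 Hz low-pass
recovered = filtfilt(b, a, mixed)
recovered -= recovered.mean()                 # drop the DC term

# The recovered waveform should correlate strongly with the injected one.
corr = np.corrcoef(recovered, eeg_like)[0, 1]
print(f"correlation with injected EEG-like signal: {corr:.2f}")
```

Whether any such modulation survives at real tower distances and noise floors is exactly the part the guess leaves open.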