
87 points davidbarker | 5 comments
fu7kt ◴[] No.37744697[source]
Time to start building an AI pendant that actively attacks surveillance: broadcast distortion and garbling above and below the human voice range to harmonically distort recordings in a known space without disrupting human communication. You can overdrive tiny speakers, disrupt known NLP algorithms, or both.

Or maybe every house needs a Cone of Silence like in Get Smart.
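A minimal sketch of the "above and below the human voice level" idea, assuming a simple sum-of-tones jammer (the frequency bands, tone counts, and sample rate here are made up for illustration, not from any real device):

```python
import numpy as np

SAMPLE_RATE = 96_000  # Hz; must exceed 2x the highest jamming tone (Nyquist)

def jamming_signal(duration_s: float, rng: np.random.Generator) -> np.ndarray:
    """Sum of random tones outside the ~300-3400 Hz voice band:
    sub-voice rumble plus near-ultrasonic tones, normalized to [-1, 1]."""
    t = np.arange(int(duration_s * SAMPLE_RATE)) / SAMPLE_RATE
    low = [rng.uniform(40, 120) for _ in range(3)]           # below voice band
    high = [rng.uniform(18_000, 22_000) for _ in range(3)]   # above voice band
    sig = sum(np.sin(2 * np.pi * f * t) for f in low + high)
    return sig / np.max(np.abs(sig))
```

Whether this actually garbles a given recorder depends on the microphone's frequency response and the speaker's ability to reproduce those bands at useful power, which is presumably where the hardware work comes in.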

replies(1): >>37745137 #
v4dok ◴[] No.37745137[source]
That would be awesome.
replies(2): >>37746759 #>>37747252 #
1. fu7kt ◴[] No.37746759[source]
I've been working on things that basically do that for a decade. I didn't really have this specific application in mind initially, but I sort of got there... rotating parabolic ultrasonic arrays, something something. I'm looking for partners and investors!

I recently put up a Hackaday page for my development cyberdeck and am sifting through all the stuff from over the years to finally get this to market. With the new Cloudflare AI Workers, a whole bunch of the infra I used to have to maintain is now moot, so I'm looking to really hit it hard in Q1 2024.

https://hackaday.io/project/192933-synesthetic-homunculus

replies(1): >>37748572 #
2. PeterStuer ◴[] No.37748572[source]
"A portable cyberdeck for creating holographic audio reactive composite composition"

Might want to finetune that pitch. What is this thing and why would I want it?

replies(1): >>37762084 #
3. fu7kt ◴[] No.37762084[source]
Yeah, I'm working on that. This isn't really a pitch, and you probably wouldn't want my cyberdeck; it's my dev workstation, and no one wants that. I mostly mentioned it because I'm getting to a phase where I'm interested in getting eyes on the weirder parts. Honestly, the concept itself is kind of a filter: if you look at it and get why you would want it, then we should probably talk; you're likely doing some weird stuff yourself!

The real pitch is working toward a seamless natural user interface for streaming composite spatial data. The easiest example people will get is architectural pre-visualization. You've got plans and lot lines; input the plans and it generates a structure. That's a known solution, easy peasy. That gets layered over GIS data from whatever source. Once construction is underway, I can do drone surveying and flight-path automation, run photogrammetry or NeRF or whatever to build a model from the scan, and overlay it. Simulations are easy, whether for erosion or for looking at the light at different times of year. I can drop-ship you a 3D-printed architectural model or some sort of widget; point your phone at that widget and you can interact directly with the information. Standard AR fare: you can go onsite and see the AR overlays, and builders can take scans of things in production and combine them. Yadda yadda. I have a company that is selling basically that. But to get there, what I'm after is real-time streaming composites with effortless and inexpensive McGuffins.
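Overlaying a drone scan on plans and GIS layers means putting everything in one site-local frame. As a hypothetical sketch (not from the project), a flat east/north projection around a site origin is often good enough at lot scale:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius; fine for lot-sized areas

def to_local_xy(lat: float, lon: float,
                origin_lat: float, origin_lon: float) -> tuple[float, float]:
    """Project geodetic coordinates to a local east/north frame in meters
    around the site origin. A small-area approximation, not a geodesic."""
    east = math.radians(lon - origin_lon) * EARTH_RADIUS_M \
        * math.cos(math.radians(origin_lat))
    north = math.radians(lat - origin_lat) * EARTH_RADIUS_M
    return east, north
```

For anything beyond a few kilometers, or where survey-grade accuracy matters, a real projection library (e.g. a UTM transform) would replace this.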

Composing things for a real-time stream that people will look at needs some sort of LOD (level-of-detail) fallback: streaming compressed point clouds that can be altered in real time, cameras that can be controlled or tabbed through, video pulled from multiple sources, and all of it muxed into a cohesive digital product attached to a physical, inexpensive medium a person can interact with. To build these things I've also focused on audio reactivity: composition based on the sounds produced in the real world and in the digital world. Say 20 people have the same McGuffin; they can all fiddle with parts in place on their desktops as an AR or VR experience, or just open a browser and mess with the world like a traditional video game. Or there can be a 4K stream to YouTube or Twitch, synced up to augment it.
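The LOD-fallback idea can be sketched as picking the densest tier that fits measured bandwidth; the tiers, point counts, and bitrates below are invented for illustration, not taken from the project:

```python
from dataclasses import dataclass

@dataclass
class LodTier:
    name: str
    points: int         # point-cloud density at this tier
    bitrate_kbps: int   # approximate cost to stream this tier

# Hypothetical tiers for one streamed point cloud, coarsest first.
TIERS = [
    LodTier("preview", 10_000, 500),
    LodTier("medium", 100_000, 4_000),
    LodTier("full", 1_000_000, 25_000),
]

def pick_tier(available_kbps: float) -> LodTier:
    """Return the densest tier whose bitrate fits the available bandwidth,
    falling back to the coarsest preview when nothing fits."""
    best = TIERS[0]
    for tier in TIERS:
        if tier.bitrate_kbps <= available_kbps:
            best = tier
    return best
```

A real streamer would re-measure bandwidth continuously and switch tiers per object rather than once per session, but the fallback logic is the same shape.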

This is a long-term study of what I don't like about modern computing and what I want to see instead. The stuff that makes money is high-throughput distributed GPU compute and GIS mapping. These days people also want LLMs in everything, so I've got the stack to produce that and it pays the bills.

The parts are really starting to fall into place, though, and I expect I'll have my "killer demo" by the end of the year. The real problem with this is that I can't point to something and say "it's like that," because it doesn't exist, and as far as I can tell no one else is trying to do anything similar.

Anyway, thanks for looking!

replies(1): >>37787420 #
4. PeterStuer ◴[] No.37787420{3}[source]
Ok, now I understand. A European consortium was working on this a few years ago for product development in the automotive and aviation industries. I think Barco had the lead on that.
replies(1): >>37791034 #
5. fu7kt ◴[] No.37791034{4}[source]
Barco hadn't been on my radar. Thanks!

I'll do some digging on the European consortium. It wouldn't happen to be AliceVision, would it? That's more a European university group, but I could see them being related.