Yeah. I'm working on that. This isn't really a pitch, and you probably wouldn't want my cyberdeck. It's my dev workstation and yeah, no one wants that. I mostly mentioned it because I'm getting to a phase where I'm interested in getting eyes on the weirder parts. Honestly, the concept itself is kind of a filter. If you look at it and get why you would want it, then we should probably talk; you're likely doing some weird stuff yourself!
The real pitch is working toward a seamless, natural user interface for streaming composite spatial data. The easiest example people will get is architectural pre-visualization. You've got plans and lot lines; feed in the plans and it generates a structure. That's a known solution, easy peasy. That gets layered over GIS data from whatever source. Once construction is underway, I can do drone surveying with automated flight paths, run photogrammetry or NeRF or whatever, and overlay the model built from the scan. Simulations are easy: erosion, or how the light falls at different times of year. I can drop ship you a 3D printed architectural model or some sort of widget, and if you point your phone at that widget, you can interact directly with the information. Standard AR fare: you can go on-site and see it overlaid in place. Builders can take scans of things in production and combine them. Yadda yadda. I have a company that is selling basically that. But to get there, what I'm after is real-time streaming composites with effortless, inexpensive McGuffins.
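To make the layering idea concrete, here's a rough sketch of how I think about a composite scene pinned to one of those widgets. All the names here (SceneLayer, CompositeScene, the layer kinds, the URLs) are hypothetical, just to show the shape of the thing, not anything shipping:

```ts
// Hypothetical sketch of a layered scene anchored to a physical marker.
// Every identifier and URL here is made up for illustration.

type LayerKind =
  | "parcel"       // lot lines / plans
  | "generated"    // structure generated from the plans
  | "gis"          // basemap / terrain from whatever GIS source
  | "scan"         // photogrammetry / NeRF output from drone surveys
  | "simulation";  // erosion, sun studies, etc.

interface SceneLayer {
  kind: LayerKind;
  sourceUri: string;   // where the tiles / mesh / stream comes from
  epsgCode: number;    // shared CRS so every layer lands in the same place
  visible: boolean;
}

interface CompositeScene {
  markerId: string;    // the printed model / widget the scene is pinned to
  layers: SceneLayer[];
}

// The architectural pre-viz case described above.
const previz: CompositeScene = {
  markerId: "printed-model-0042",
  layers: [
    { kind: "parcel",     sourceUri: "https://example.com/lot.geojson", epsgCode: 3857, visible: true },
    { kind: "generated",  sourceUri: "https://example.com/model.glb",   epsgCode: 3857, visible: true },
    { kind: "gis",        sourceUri: "https://example.com/terrain/",    epsgCode: 3857, visible: true },
    { kind: "scan",       sourceUri: "https://example.com/scan/",       epsgCode: 3857, visible: false },
    { kind: "simulation", sourceUri: "https://example.com/sun-study/",  epsgCode: 3857, visible: false },
  ],
};
```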
Composing things into a real-time stream for people to look at needs some kind of LOD fallback. Streaming compressed point clouds that can be altered in real time, cameras that can be controlled or tabbed through, video pulled from multiple sources, and all of it muxed into a cohesive digital product attached to a cheap physical medium a person can interact with. While building these things I've also had a focus on audio reactivity: composition driven by the sounds produced in the real world and in the digital one. Say 20 people have the same McGuffin: they can all fiddle with parts in place on their desktops as an AR or VR experience, or they can just open a browser and mess with the world like a traditional video game. Or there can be a 4K stream to YouTube or Twitch or whatever, synced up so it can be augmented.
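For the LOD fallback part, the heuristic I have in mind is the usual screen-space-error refinement you see in 3D Tiles or Potree style point-cloud streaming: keep coarse tiles unless their projected error exceeds a pixel budget, and only request finer children where it does. A minimal sketch, with names (PointTile, ssePixels, selectTiles) that are mine rather than from any real library:

```ts
// Sketch of screen-space-error driven tile selection for streamed point clouds.
// Assumes a tile tree where each node knows its geometric error and bounds.

interface PointTile {
  id: string;
  geometricError: number;            // meters of error if refinement stops here
  boundingRadius: number;            // meters
  center: [number, number, number];  // world-space position
  children: PointTile[];
}

// Projected error in pixels for a tile seen from a given distance.
function ssePixels(
  tile: PointTile,
  distance: number,
  screenHeightPx: number,
  fovYRad: number,
): number {
  return (tile.geometricError * screenHeightPx) /
         (2 * distance * Math.tan(fovYRad / 2));
}

// Collect the coarsest set of tiles whose error fits the pixel budget;
// recurse into children only where the budget is exceeded.
function selectTiles(
  tile: PointTile,
  cameraPos: [number, number, number],
  screenHeightPx: number,
  fovYRad: number,
  maxErrorPx: number,
  out: PointTile[] = [],
): PointTile[] {
  const dx = tile.center[0] - cameraPos[0];
  const dy = tile.center[1] - cameraPos[1];
  const dz = tile.center[2] - cameraPos[2];
  const distance = Math.max(
    Math.sqrt(dx * dx + dy * dy + dz * dz) - tile.boundingRadius,
    1e-3,
  );
  const err = ssePixels(tile, distance, screenHeightPx, fovYRad);
  if (err <= maxErrorPx || tile.children.length === 0) {
    out.push(tile); // good enough (or a leaf): stream this tile as-is
  } else {
    for (const child of tile.children) {
      selectTiles(child, cameraPos, screenHeightPx, fovYRad, maxErrorPx, out);
    }
  }
  return out;
}
```

Same tree, different budgets per client, which is what lets a phone pointed at a widget, a desktop browser, and a 4K stream all pull from one source.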
This is a long-term study on what I don't like about modern computing and what I want to see instead. The stuff that makes money is high-throughput distributed GPU compute and GIS mapping. These days people also want LLMs in everything, so I've got the stack to produce that, and it pays the bills.
The parts are really starting to fall into place, though, and I expect I'll have my "killer demo" by the end of the year. The problem with this, really, is that I can't point to something and say "it's like that," because it doesn't exist and no one else seems to be trying anything similar, as far as I can tell.
Anyway, thanks for looking!