
10 points by danieloj | 1 comment
1. tyleo No.46068933
I like the closing:

> The transition from deterministic systems to probabilistic agents is uncomfortable. It requires us to trade certainty for semantic flexibility. We no longer know and own the exact execution path. We effectively hand over the control flow to a non-deterministic model and store our application state in natural language.

But I don't think it supports the overall point. Having worked on some AI products that shipped to retail, I found the issue wasn't that senior engineers struggled... it was that the non-determinism made the products shitty.

As real-world prompts drift from the ones in your test environment, you end up with something that appears to work but collapses when customers use it. It's impossible to account for all of the variation the customer population throws at your agents, so you end up dealing with a large failure rate. Then you tune it to fix one customer and end up breaking others.
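
To make that concrete, the thing we kept reaching for was a regression suite of captured customer prompts that every prompt tweak has to pass before it ships. A minimal sketch of the idea (call_agent and the cases below are made-up placeholders, not our actual stack):

    # Minimal sketch of a prompt-regression harness. `call_agent` stands in
    # for whatever model/agent call you actually make; the cases and checks
    # are illustrative placeholders.
    from dataclasses import dataclass

    @dataclass
    class Case:
        prompt: str        # customer input captured from production
        must_contain: str  # loose semantic check, not an exact-match assertion

    CASES = [
        Case("cancel my order 123", "cancel"),
        Case("wheres my package??", "tracking"),
    ]

    def call_agent(system_prompt: str, user_prompt: str) -> str:
        # Placeholder: returns a canned reply so the sketch runs end to end.
        return "I can help you cancel that order and check its tracking."

    def pass_rate(system_prompt: str) -> float:
        hits = sum(
            c.must_contain in call_agent(system_prompt, c.prompt).lower()
            for c in CASES
        )
        return hits / len(CASES)

    # Gate the change: a prompt tuned for one unhappy customer only ships if
    # the overall pass rate doesn't drop.
    baseline = pass_rate("old system prompt")
    candidate = pass_rate("tuned system prompt")
    assert candidate >= baseline, "tuning for one customer broke others"

Even a crude gate like that catches a lot of the "fixed one customer, broke three others" churn, but only if you keep harvesting real prompts from production.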

We found AI genuinely useful in cases where non-determinism was expected or desired, usually closer to the final output. For example, we built a game called ^MakeItReal where players draw an object and we use AI to turn it into a 3D mesh. This works because people are delighted when the output varies in surprising ways:

https://www.youtube.com/watch?v=BfQ6YOCxsRs
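
The shape of that pipeline is roughly the following (sketch_to_mesh is a placeholder, not our actual model or API):

    # Rough shape of a "non-determinism is the feature" pipeline: control
    # flow and bookkeeping stay deterministic, and the model only produces
    # the final asset, where a surprising result delights rather than breaks.
    # `sketch_to_mesh` is a placeholder, not a real API.
    from pathlib import Path

    def sketch_to_mesh(drawing_png: bytes) -> bytes:
        # Placeholder for an image-to-3D model call; returns empty mesh
        # data here so the sketch stays self-contained.
        return b""

    def handle_player_drawing(png_path: Path, out_dir: Path) -> Path:
        out_dir.mkdir(parents=True, exist_ok=True)
        mesh_path = out_dir / (png_path.stem + ".glb")
        # The only probabilistic step is asset generation at the very end;
        # nothing downstream branches on what the model decided to produce.
        mesh_path.write_bytes(sketch_to_mesh(png_path.read_bytes()))
        return mesh_path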