
419 points serjester | 2 comments
simonw ◴[] No.43535919[source]
Yeah, the "book a flight" agent thing is a running joke now - it was a punchline in the Swyx keynote for the recent AI Engineer event in NYC: https://www.latent.space/p/agent

I think this piece is underestimating the difficulty involved here though. If only it was as easy as "just pick a single task and make the agent really good at that"!

The problem is that if your UI involves human beings typing or talking to you in a human language, there is an unbounded set of ways things could go wrong. You can't test against every possible variant of what they might say. Humans are bad at clearly expressing things, but even worse is the challenge of ensuring they have a concrete, accurate mental model of what the software can and cannot do.

replies(12): >>43536068 #>>43536088 #>>43536142 #>>43536257 #>>43536583 #>>43536731 #>>43537089 #>>43537591 #>>43539058 #>>43539104 #>>43539116 #>>43540011 #
photonthug ◴[] No.43540011[source]
> The problem is that if your UI involves human beings typing or talking to you in a human language, there is an unbounded set of ways things could go wrong. You can't test against every possible variant of what they might say.

It's almost like we really might benefit from using advances in AI, such as speech recognition, to build concrete interfaces with specific predefined vocabularies and a local-first UX. But stuff like that undermines a cloud-based service, a constantly changing interface, and the opportunities for general spying and manufacturing "engagement" while people struggle to use the stuff you've made. And of course, producing actual specifications means you would have to own bugs. Beyond eliminating employees, much of the interest in AI is about completely eliminating responsibility. As a user of ML-based monitoring products and such for years, I've seen the pattern: "intelligence" usually implies no real specifications, no specifications implies no bugs, and no bugs implies rent-seeking behaviour without the burden of any actual responsibility.

It's frustrating to see how often even technologists buy the story that "users don't want/need concrete specifications" or that "users aren't smart enough to deal with concrete interfaces". It's a trick.
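To make the "concrete interface with a predefined vocabulary" idea tangible, here's a minimal sketch (all names and the tiny vocabulary are hypothetical, for illustration only). The point is that with a fixed grammar, every input either maps to a known action or fails with a specific, enumerable error, so the whole interface can actually be tested and specified:

```python
# Hypothetical fixed-vocabulary command interface: unlike free-form natural
# language, the set of valid inputs is finite and every failure mode is
# specific and testable.

VOCABULARY = {
    "book": {"flight", "hotel"},
    "cancel": {"flight", "hotel"},
    "status": {"flight"},
}

def parse_command(text: str):
    """Parse a two-word command against the fixed vocabulary.

    Returns (verb, noun) on success; raises ValueError naming exactly
    which part was unrecognized -- a bounded, specifiable failure mode.
    """
    parts = text.strip().lower().split()
    if len(parts) != 2:
        raise ValueError(f"expected '<verb> <noun>', got {len(parts)} word(s)")
    verb, noun = parts
    if verb not in VOCABULARY:
        raise ValueError(f"unknown verb {verb!r}; valid: {sorted(VOCABULARY)}")
    if noun not in VOCABULARY[verb]:
        raise ValueError(f"{verb!r} does not accept {noun!r}")
    return verb, noun
```

Because the vocabulary is a plain data structure, the full input space can be exhaustively enumerated in tests, which is exactly what an open-ended natural-language UI cannot offer.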

replies(2): >>43541097 #>>43543751 #
freeone3000 ◴[] No.43541097[source]
> concrete interfaces with specific predefined vocabularies and a local-first UX

An app? We don’t even need to put AI in it; it turns out you can book flights without one.

replies(2): >>43541512 #>>43542179 #
photonthug ◴[] No.43541512[source]
Tech won't freeze in place exactly where it is today, even if some people want that, and even if in some cases it actually would make sense. And if you advocate for this, I think you risk losing credibility. Especially amongst technologists, it's better to think critically about structural problems with the trends and trajectories. AI is fine, change is fine; the question now is really more like why, what for, and in whose interest. To the extent models work locally, we'll be empowered in the end.

Similarly, software eating the world was actually pretty much fine, but SaaS is/was a bit of a trap. And anyone who thought SaaS was bad should be terrified about the moats and platform lock-in that billion-dollar models might mean, the enshittification that inevitably follows market dominance, etc.

Honestly, we kinda need a new Stallman for this brave new world: someone who is relentlessly beating the drum on this stuff, even if they come across as anticorporate and extreme. An extremist might get traction, but a call to preserve things as they are probably can't and shouldn't.

replies(3): >>43542731 #>>43546231 #>>43547456 #
MichaelZuo ◴[] No.43542731[source]
If you believe in this to that extent, why can’t you be the “new Stallman”?
replies(1): >>43543471 #
photonthug ◴[] No.43543471[source]
It's not about what I believe, it's about what we already know. Computing is old enough now that you don't need to be some kind of mad prophet to know things about the future, because you can just look at how things have played out already.

More to the point, though: at the beginning at least, Stallman was a respected hacker, not just some random person pushing politics on a community he was barely involved with. It's gotta be that way, I think; anyone who isn't a respected AI/ML insider won't get far.

replies(1): >>43548564 #
MichaelZuo ◴[] No.43548564[source]
If you are a random outsider… then how do you know there is room and potential for such an individual?
replies(1): >>43552088 #
photonthug ◴[] No.43552088{3}[source]
I remember you now, and I would block you if I could. On the off chance you’re not doing this on purpose, read this please: https://en.m.wikipedia.org/wiki/Sealioning
replies(1): >>43553140 #
MichaelZuo ◴[] No.43553140[source]
Regardless of whatever you believe, you still need to write the actual claim/argument down?

You don’t have any more credibility than most other HN users… so just stating insinuations as if they were self-evident doesn’t even make sense.