What to build instead of AI agents

(decodingml.substack.com)
233 points by giuliomagnifico | 12 comments
1. ldjkfkdsjnv No.44450452
This is all going to be solved by better models. Building agents is building for a world that doesn't quite exist yet, but probably will in a year or two. Building some big heuristic engine that strings together LLM calls (which is what this blog advocates for) is essentially a bet against progress in AI. I'm not taking that bet, and neither are any of the major players.
replies(7): >>44450462 >>44450475 >>44450503 >>44450507 >>44450563 >>44450783 >>44452156
2. candiddevmike No.44450462
There are perverse incentives against admitting that the music of the AI boom is probably stopping and it's time to grab a chair; better to keep stringing investors along with more AGI thought leadership.
replies(1): >>44450491
3. snek_case No.44450475
I've been reviewing papers for NeurIPS, and I can tell you that many of the submissions use various strategies to string together LLM calls.

It tends to work better when you give the LLMs a specific, narrow subtask to do rather than expecting them to be in the driver's seat.
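
Concretely, the pattern looks something like this. It's a rough sketch, not anything from the papers: call_llm stands in for whatever completion client you use, and the ticket labels and fields are made up for illustration.

    import json

    def call_llm(prompt: str) -> str:
        """Stand-in for a real completion call (OpenAI, Anthropic, a local model...)."""
        raise NotImplementedError

    def classify_ticket(text: str) -> str:
        # Narrow subtask 1: pick one label from a fixed set, and fail fast
        # if the model returns anything else.
        labels = {"billing", "bug", "feature_request"}
        answer = call_llm(
            f"Classify this support ticket as one of {sorted(labels)}. "
            f"Reply with the label only.\n\n{text}"
        ).strip().lower()
        if answer not in labels:
            raise ValueError(f"unexpected label: {answer!r}")
        return answer

    def extract_fields(text: str) -> dict:
        # Narrow subtask 2: structured extraction, checked mechanically
        # with json.loads rather than trusted on faith.
        raw = call_llm(
            'Extract {"customer": str, "product": str} as JSON from:\n\n' + text
        )
        return json.loads(raw)

    def handle_ticket(text: str) -> dict:
        # Plain code, not the model, stays in the driver's seat.
        return {"label": classify_ticket(text), "fields": extract_fields(text)}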

4. tptacek No.44450491
This comment has nothing to do with either the comment it replies to or the original post, neither of which has anything whatsoever to do with "AGI thought leadership".
replies(1): >>44451083
5. mccoyb No.44450503
What do "better models" and "progress in ai" mean to you? Without more information, it's impossible to respond sincerely or precisely.
6. tptacek No.44450507
Maybe! But its points seem well taken today. The important thing to keep in mind, I think, is that anything happening inside an LLM call is stochastic. Even with drastically better models, I still can't tell myself a story in which I can rely on specific outputs from an LLM call. Their outputs today are strong enough for a variety of tasks, but when LLMs are part of the fabric of a program's logic --- in agent systems --- you need an expert human involved to notice when things go off the rails.
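
One way to make that concrete is to wrap every LLM call in a mechanical check, retry a couple of times, and hand off to a human when the output never validates. A rough sketch, again with call_llm as a stand-in for whatever client you use; the retry count and validator are arbitrary:

    def call_llm(prompt: str) -> str:
        raise NotImplementedError  # plug in your actual client here

    def checked_llm_call(prompt: str, validate, max_retries: int = 2) -> str:
        # Treat the call as stochastic: the same prompt does not guarantee
        # the same output, so validate mechanically on every attempt.
        for _ in range(max_retries + 1):
            output = call_llm(prompt)
            if validate(output):
                return output
        # Out of retries: this is where the expert human comes in.
        raise RuntimeError("LLM output failed validation; route to human review")

    # Usage: the validator encodes what 'on the rails' means for this step.
    # summary = checked_llm_call(prompt, validate=lambda s: 0 < len(s) < 500)
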
7. m82labs No.44450563
So for people building real things today, are you saying that instead of stringing prompts together with logic, we should just sit on our hands for a year and wait for the models to catch up to the agent paradigm?
replies(1): >>44450626
8. ldjkfkdsjnv No.44450626
If you are in a competitive market, you will lose with this approach.
9. malfist No.44450783
People have been saying "solved by better models" for 5 years now. Still waiting on it.
10. bGl2YW5j No.44451083
There's an implication in the messaging of most of these blogs that LLMs, and the approach the blog describes, are verging on AGI.
replies(1): >>44451159
11. tptacek No.44451159
No, there isn't. People talk about AGI, including the CEOs of frontier model companies, but this isn't a post about that; it's very specifically a post about the workaday applications of LLMs as they exist today. (I don't think AGI will ever exist and don't care about it either way.)
12. imhoguy No.44452156
I think that world will be abandoned by most sane people. Who personally loves AI output? It is a next-level enshittification engine; see the example in the article: spam...cough...salesbot.