
317 points | laserduck | 2 comments
fsndz ◴[] No.42157451[source]
They want to throw LLMs at everything even if it does not make sense. Same is true for all the AI agent craze: https://medium.com/thoughts-on-machine-learning/langchains-s...
marcosdumay ◴[] No.42157567[source]
It feels like the entire world has gone crazy.

Even the one serious idea the article thinks could work is throwing unreliable LLMs at verification! If there's any place where you can use something that doesn't work most of the time, I guess it's there.

edmundsauto ◴[] No.42157907[source]
Only if they fail in the same way. The multi-agent approach treats LLMs as programmable agents, where each agent is a different trade-off among failure modes. If you can string them together, and the output is easy to verify, it can be a great fit for the problem.
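
Roughly the shape I mean, as a minimal sketch (the generate/critique/verify callables are placeholders for whatever LLM API and checks you actually use, not any particular library):

    # One agent drafts, a second agent with a different prompt critiques,
    # and we only accept output that passes a cheap deterministic check
    # (e.g. "do the tests pass"). The verifiable output is what makes
    # stringing unreliable agents together workable at all.
    from typing import Callable, Optional

    def solve_with_verification(task: str,
                                generate: Callable[[str], str],       # drafting agent (LLM call)
                                critique: Callable[[str, str], str],  # reviewing agent (LLM call)
                                verify: Callable[[str], bool],        # deterministic check
                                max_attempts: int = 3) -> Optional[str]:
        feedback = ""
        for _ in range(max_attempts):
            prompt = task if not feedback else task + "\nReviewer feedback:\n" + feedback
            draft = generate(prompt)
            if verify(draft):                     # accept only verified output
                return draft
            feedback = critique(task, draft)      # different prompt, different failure modes
        return None                               # give up rather than return an unverified answer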
1. astrange ◴[] No.42163127[source]
If you're going to do that, you need completely different base LLMs for the agents. The ones I've tried have "mode collapse": ask them to emulate different agents and they all end up behaving the same way. A simple example: ask one to write different stories and they'll usually end up with the same character names.
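
You can see it with a quick check like this (just a sketch; `llm` stands in for whatever chat-completion call you use, and the name extraction is a crude heuristic):

    # Give the same model several personas, ask each for a short story,
    # and count how many character names show up under more than one
    # persona. A collapsed model reuses the same names no matter the persona.
    import re
    from collections import Counter

    PERSONAS = ["a noir crime writer", "a children's author", "a hard sci-fi novelist"]

    def character_names(story):
        # crude heuristic: capitalised words that are not sentence-initial
        return set(re.findall(r"(?<=[a-z,;] )[A-Z][a-z]+", story))

    def reused_names(llm):
        prompt = "Write a 200-word story with two named characters."
        per_persona = [character_names(llm("You are " + p + ". " + prompt)) for p in PERSONAS]
        counts = Counter(name for names in per_persona for name in names)
        return {name for name, n in counts.items() if n > 1}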
2. edmundsauto ◴[] No.42169897[source]
It may depend on the domain. I tend to use LLMs for things that are less open-ended: more categorization and summarization than pure novel creation.

In these situations I've been able to program the agent well enough that I haven't seen much of the issue you describe. Consistency is a feature.
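
For concreteness, the kind of constrained use I mean (a sketch; the label set is made up and `llm` is whatever completion call you use):

    # Classification into a fixed label set: the "programming" is mostly
    # the prompt, and verifying the output is a set-membership test, so
    # run-to-run consistency is exactly what you want.
    ALLOWED_LABELS = {"bug_report", "feature_request", "question", "spam"}

    def classify(llm, ticket_text):
        prompt = ("Classify the following support ticket into exactly one of: "
                  + ", ".join(sorted(ALLOWED_LABELS))
                  + ". Reply with the label only.\n\nTicket:\n" + ticket_text)
        label = llm(prompt).strip().lower()
        if label not in ALLOWED_LABELS:       # trivial to verify, trivial to retry
            raise ValueError("unexpected label: " + label)
        return label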