
317 points laserduck | 1 comment
paulsutter ◴[] No.42157570[source]
Generative models are bimodal - at certain tasks they are crazy terrible, and at others they are better than humans. The key is to recognize which is which.

And much more important:

- LLMs can suddenly become more competent when you give them the right tools, just like humans. Ever try to drive a nail without a hammer?

- Models with spatial and physical awareness are coming and will dramatically broaden what’s possible

It’s easy to get stuck on what LLMs are bad at. The art is to apply an LLM’s strengths to your specific problem, often by augmenting the LLM with the right custom tools written in regular code.
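A minimal sketch of what "augmenting an LLM with tools written in regular code" can look like: the model emits a structured tool call (here assumed to be a JSON object like `{"tool": ..., "args": ...}`), and an adapter in plain code routes it to an ordinary function. The tool names and registry below are hypothetical, for illustration only; real APIs differ in the details.

```python
import json

# Plain-code "tools" the model can invoke (hypothetical examples).
def drive_nail(surface: str) -> str:
    return f"nail driven into {surface}"

def measure(length_mm: int) -> str:
    return f"measured {length_mm} mm"

# Registry mapping tool names to regular functions.
TOOLS = {"drive_nail": drive_nail, "measure": measure}

def dispatch(model_output: str) -> str:
    """Route a model-emitted tool call (JSON) to regular code."""
    call = json.loads(model_output)
    fn = TOOLS.get(call["tool"])
    if fn is None:
        # The model asked for a tool we never gave it.
        return f"unknown tool: {call['tool']}"
    return fn(**call["args"])

if __name__ == "__main__":
    print(dispatch('{"tool": "drive_nail", "args": {"surface": "oak plank"}}'))
```

The point of the pattern is that everything inside the tools is deterministic ordinary code; the LLM only chooses which tool to call and with what arguments.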

replies(1): >>42158546 #
necovek ◴[] No.42158546[source]
> Ever try to drive a nail without a hammer?

I've driven a nail with a rock, a pair of pliers, a wrench, even with a concrete wall and who knows what else!

I didn't need to be told whether these could be used to drive a nail: I looked at what was available, checked for a flat surface and a good grip, considered the hardness, and then simply used them.

So if we only give them the "right" tools, they'll remain limited by our failure to anticipate the jobs they'll appear to know how to do but actually don't.

The problem is exactly that: they "pretend" to know how to drive a nail, but don't really.

replies(1): >>42159476 #
paulsutter ◴[] No.42159476[source]
Those are all tools!! Congratulations

If you’re creative enough to figure out different tools for humans, you are creative enough to figure out different tools for LLMs.

replies(1): >>42164784 #
necovek ◴[] No.42164784[source]
No disagreement there, but if we've got the tools, do we really need an LLM to drive them? It still requires building an adapter from the LLM to those tools.

What is the added value of that combo and at what cost?