
579 points paulpauper | 1 comment
photochemsyn No.43603973
Will LLMs end up like compilers? Compilers are also fundamentally important to modern industrial civilization - but they're not profit centers; they're mostly free and open-source outside a few niche areas. Knowing how to use a compiler effectively to write secure and performant software is still a valuable skill - and LLMs are a valuable tool that can help with that process, especially when the programmer is on the steep part of the learning curve - but it doesn't look like anything short of real AGI can do novel software creation without a human constantly in the loop. The same argument applies to new fundamental research, even to reviewing and analyzing new discoveries that aren't in the training corpus.

Wasn't it back in the 1980s that you had to pay $1000s for a good compiler? The entire LLM industry might just be following in the compiler's footsteps.

replies(1): >>43604300 #
lukev No.43604300
This seems like a probable end state, but we're going to have to stop calling LLMs "artificial intelligence" to get there.
replies(2): >>43604504 #>>43604539 #
mmcnl No.43604539
Why not? Objectively speaking, LLMs are artificially intelligent. Just because it's not human-level intelligence doesn't mean it's not intelligent.
replies(1): >>43604680 #
lukev No.43604680
Objectively speaking, a chess engine is artificially intelligent. Just because it's not human-level doesn't mean it's not intelligent. Repeat for any of the hundreds of technologies we've built. We've been calling this stuff "thinking machines" since Turing, and it's honestly just not useful at this point.

The fact is, the phrase "artificial intelligence" is a memetic hazard: it immediately positions the subject of conversation as "default capable," and then forces the conversation into describing what the thing can't do, which is rarely a useful way to approach it.

Whereas with LLMs (and chess engines, and every other tech advancement), it would be more useful to start with what the tech _can_ do and go from there.