
625 points lukebennett | 1 comment
1. cryptica | No.42141942
It's interesting how things have turned out so far with LLMs, especially from the perspective of a software engineer. We're trained to maintain a certain skepticism when we see software that appears to be working because, ultimately, the only question we care about is "Does it meet user requirements?", and that is usually framed in terms of users achieving certain goals.

So it's interesting that when AI came along, we threw caution to the wind and started treating it like a silver bullet, without asking whether it was actually applicable to this goal or that one.

I don't think anyone could have anticipated that we would get an AI which could produce perfect sentences, faster and better than a human, but which could not reason. It appears to reason very well, better than most people, yet it doesn't actually reason. You only notice this once you ask it to accomplish a task. After a while, you can feel how it lacks willpower. It puts into perspective how important willpower is when it comes to getting things done.

In any case, LLMs bring us closer to understanding some big philosophical questions surrounding intelligence and consciousness.