128 points by ArmageddonIt | 1 comment
danbruc No.44500955
Let us see how this will age. The current generation of AI models will turn out to be essentially a dead end. I have no doubt that AI will eventually fundamentally change a lot of things, but it will not be large language models [1]. And I think there is no path of gradual improvement; we still need some fundamental new ideas. Integration with external tools will help but will not overcome the fundamental limitations. Once the hype is over, I think large language models will have a place as a simpler, more accessible user interface, much as graphical user interfaces displaced many text-based interfaces. They will also be a powerful tool for language processing that is hard or impossible to do with more traditional techniques such as statistical analysis.

[1] Large language models may become an important component in whatever comes next, but I think we still need a component that can do proper reasoning and has reliable memory that is not susceptible to hallucinating facts.

myrmidon No.44501283
> The current generation of AI models will turn out to be essentially a dead end.

It seems a matter of perspective to me whether you call it a "dead end" or a "stepping stone".

To give some pause before dismissing the current state of the art prematurely:

I would already consider current LLM-based systems more "intelligent" than a housecat. And a pet's intelligence is enough to have ethical implications, so we have arguably reached a very important milestone already.

I would argue that the biggest limitation on current "AI" is that it is architected not to have agency; if you had GPT-3-level intelligence in an easily anthropomorphizable package (Furby-style, capable of emoting and communicating on its own), public outlook might shift drastically without any real technical progress.

andrewflnr No.44504891
Intelligence alone does not have ethical implications with respect to how we treat the intelligent entity. Suffering has ethical implications, but intelligence does not imply suffering. There is no evidence that LLMs can suffer (note that this is less evidence than we have for, say, crayfish suffering).