
132 points by ArmageddonIt | 4 comments
danbruc | No.44500955
Let us see how this ages. The current generation of AI models will turn out to be essentially a dead end. I have no doubt that AI will eventually change many things fundamentally, but it will not be large language models [1], and I see no path of gradual improvement; we still need some fundamentally new ideas. Integration with external tools will help, but it will not overcome the fundamental limitations. Once the hype is over, I think large language models will find a place as a simpler and more accessible user interface, much as graphical user interfaces displaced many text-based ones, and they will be a powerful tool for language processing that is hard or impossible with more traditional techniques such as statistical analysis.

[1] Large language models may become an important component of whatever comes next, but I think we still need a component that can do proper reasoning and has proper memory that is not susceptible to hallucinating facts.

replies(5): >>44501079 #>>44501283 #>>44502224 #>>44505345 #>>44505828 #
1. jmathai | No.44502224
This is a surprising take. I think what's available today can improve productivity by 20% across the board. That seems massive.

Only a very small percentage of the population is leveraging AI in any meaningful way. But I think today's tools are sufficient for them to start if they wanted to, and the tools will only get better (even if the LLMs themselves don't, which they will).

replies(2): >>44502402 #>>44505258 #
2. danbruc | No.44502402
Sure, if I ask about things I know nothing about, I can get something done with little effort. But when I ask about something where I am an expert, large language models have surprisingly little to offer, and because I am an expert, it becomes apparent how bad they are. That in turn makes me hesitant to use them for things I know nothing about, because I am unprepared to judge the quality of the response. As a developer, I am an expert on programming, and I think I have never gotten anything useful out of a large language model beyond pointers to relevant APIs or standards. They are a very good tool for searching through documentation, at least up to the point where they start hallucinating.

When I wrote dead end, I meant for achieving an AI that can properly reason, knows what it knows, and maybe is even able to learn. For finding things in heaps of text, large language models are relatively fine and can improve productivity, with the somewhat annoying caveat that one has to double-check what the model says.

3. bigstrat2003 | No.44505258
I think that what's available today is a drain on productivity, not an improvement, because it's so unreliable that you have to babysit it constantly to make sure it hasn't fucked up. That is not exactly reassuring as to the future, in my view.
replies(1): >>44513131 #
4. jmathai | No.44513131
This is definitely some people's experience. It's not mine.

I think the difference comes down to which tool is used, how it is used, and what it is used for.