AI 2027

(ai-2027.com)
949 points by Tenoke
ahofmann No.43572563

Ok, I'll bite. I predict that everything in this article is horse manure. AGI will not happen. LLMs will be tools that can automate stuff away, like today, and they will get slightly, or quite a bit, better at it. That will be all. See you in two years; I'm excited to find out what the truth will be.

replies(6): >>43572682 #>>43572684 #>>43572802 #>>43572960 #>>43573033 #>>43578579 #
Tenoke No.43572682

That seems like naive status quo bias to me. Why, and where, do you expect AI progress to stop? It sounds like you put that point very close to where we are now. Why do you think there won't be many further improvements?

replies(3): >>43573112 #>>43573129 #>>43573245 #
ahofmann No.43573112

I write bog-standard PHP software. When GPT-4 came out, I was very frightened that my job could be automated away soon, because for PHP/Laravel/MySQL there must be a lot of training data.

The reality now is that current LLMs still often produce stuff that costs me more time to fix than doing it myself would. So I still write a lot of code myself. It is very impressive that I can even think about stopping writing code myself. But my job as a software developer is very, very secure.

LLMs are simply unable to build maintainable software. They are unable to understand what humans want and what the codebase needs. The stuff they build is good-looking garbage. One example I saw yesterday: a dev committed code where the LLM had generated 50 lines of React, complete with all those useless comments and, for good measure, a setTimeout(), for something that should have been a single HTML div with two Tailwind classes. They can't write idiomatic code, because they write the code they were prompted for.
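To sketch the pattern (the component name, comments, and classes here are invented, since I can't share the actual diff, and the real thing was about 50 lines):

    import { useState, useEffect } from "react";

    // Roughly the shape of what the LLM committed (condensed):
    function StatusBadge() {
      // State that tracks whether the badge is visible
      const [visible, setVisible] = useState(false);

      // Set the badge to visible after the component mounts
      useEffect(() => {
        const timer = setTimeout(() => {
          setVisible(true); // set visible to true
        }, 0);
        return () => clearTimeout(timer); // clear the timer
      }, []);

      // Render the badge if it is visible
      return visible ? <div className="rounded bg-green-100">Active</div> : null;
    }

    // What the feature actually needed: one div, two Tailwind classes.
    // <div className="rounded bg-green-100">Active</div>

The timeout fires once, immediately after mount, so all that state machinery does nothing a plain div wouldn't do.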

Almost daily I get code, commit messages, and even issue discussions that are clearly AI-generated. And it costs me time to deal with good-looking but useless content.

To be honest, I hope LLMs get better soon, because right now we are in an annoying phase where software developers bog me down with AI-generated stuff. It looks good but doesn't help us write usable software that can be deployed in production.

To get to this point, LLMs need to get maybe a hundred times faster, maybe a thousand or ten thousand times. They need a much bigger context window. Then they could have an inner dialogue in which they really "understand" how a feature should be built in a given codebase. That would be very useful. But it would also use so much energy that I doubt it will be cheaper to let an LLM run those "thinking" passes over and over again than to pay a human to build the software. Perhaps this will be feasible in five or eight years. But not two.
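A toy calculation of why I doubt it (every number below is an assumption I'm inventing for illustration, not a measurement): if each "thinking" step has to re-read a whole codebase's worth of context, the cost per feature scales with steps times context size.

    // Back-of-envelope sketch; all constants are assumptions.
    const codebaseTokens = 2_000_000;  // assumed: mid-sized codebase held in context
    const dialogueSteps = 500;         // assumed: inner-dialogue passes per feature
    const usdPerMillionTokens = 10;    // assumed input-token price

    // Each pass re-reads the full context, so cost grows linearly in both factors.
    const llmCost = (codebaseTokens / 1_000_000) * dialogueSteps * usdPerMillionTokens;
    // => 2 * 500 * 10 = 10,000 USD per feature

    const devHourlyRate = 80;          // assumed
    const hoursForFeature = 16;        // assumed
    const humanCost = devHourlyRate * hoursForFeature; // => 1,280 USD

    console.log({ llmCost, humanCost });

Caching or cheaper inference could change the constants, but the scaling is the point: every extra "thinking" pass over a big context costs real money and energy.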

And this won't be AGI. This will still be a very, very fast stochastic parrot.