
AI 2027

(ai-2027.com)
949 points | Tenoke
ahofmann ◴[] No.43572563[source]
Ok, I'll bite. I predict that everything in this article is horse manure. AGI will not happen. LLMs will be tools that can automate stuff away, like they do today, and they will get slightly, or quite a bit, better at it. That will be all. See you in two years; I'm excited to see what the truth turns out to be.
replies(6): >>43572682 #>>43572684 #>>43572802 #>>43572960 #>>43573033 #>>43578579 #
mitthrowaway2 ◴[] No.43572802[source]
What's an example of an intellectual task that you don't think AI will be capable of by 2027?
replies(3): >>43572831 #>>43573086 #>>43573088 #
coolThingsFirst ◴[] No.43572831[source]
programming
replies(2): >>43572872 #>>43573079 #
lumenwrites ◴[] No.43572872[source]
Why would it get 60-80% as good as human programmers (which is what the current state of things feels like to me, as a programmer, using these tools for hours every day), but stop there?
replies(5): >>43572943 #>>43572952 #>>43572958 #>>43573010 #>>43573049 #
burningion ◴[] No.43573010[source]
So I think there's an assumption you've made here: that the models are currently "60-80% as good as human programmers".

If you look at code being generated by non-programmers (where you would expect to see these results!), you don't see output that is 60-80% as good as what domain experts (programmers) produce when steering the models.

I think we're extremely imprecise when we communicate in natural language, and this is part of the discrepancy between belief systems.

Will an LLM read a person's mind about what they want to build better than they can communicate it?

That's already what recommender systems (like the TikTok algorithm) do.

But will LLMs be able to orchestrate and fill in the blanks of imprecision in our requests on their own, or will they need human steering?

I think that's where there's a gap in (basically) belief systems about the future.

If we truly get post-human-level intelligence everywhere, there is no amount of "preparing" or "working with" the LLMs ahead of time that will save you from being rendered economically useless.

This is mostly a question about how long the moat of human judgement lasts. I think there's an opportunity to work together to make things better than before, using these LLMs as tools that work _with_ us.