
197 points baylearn | 3 comments
bestouff ◴[] No.44471877[source]
Are there people here on HN who believe AGI is coming "soonish"?
replies(5): >>44471902 #>>44471982 #>>44472003 #>>44472071 #>>44472107 #
impossiblefork ◴[] No.44472071[source]
I might, depending on the definition.

Some kind of verbal-only AGI that can solve almost all mathematical problems humans come up with whose solutions fit on half a page. I think that's achievable somewhere in the near term, 2-7 years.

replies(2): >>44472097 #>>44473375 #
1. deergomoo ◴[] No.44472097[source]
Is that “general” though? I’ve always taken AGI to mean general to any problem.
replies(2): >>44472144 #>>44472261 #
2. Touche ◴[] No.44472144[source]
Yes, general means you can present it with a new problem that there is no data on, and it can become an expert on that problem.
3. impossiblefork ◴[] No.44472261[source]
I suppose not.

Things I think will be hard for LLMs but that some humans can do: you get handed 500 pages of Geheimschreiber-encrypted telegraph traffic and infinite paper, and you have to figure out how the cryptosystem works and how to decrypt the traffic. I don't think that can happen. It requires highly developed pattern recognition together with an ability to not get lost, which LLM-type systems will probably continue to lack for a long time.
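
To make the pattern-recognition point concrete, here is a toy Python sketch of one classical statistical probe: coincidence counting to guess a keystream period. It is illustrative only, not a reconstruction of the actual attack on the Geheimschreiber (Siemens T52), and the input format (a flat list of XOR-enciphered 0/1 bits) plus the biased fake plaintext are assumptions made for the example.

    # Toy sketch: hunting for a keystream period in XOR-enciphered bit
    # traffic by coincidence counting. Illustrative only; NOT a
    # reconstruction of the actual Geheimschreiber (Siemens T52) attack.
    # The input format (a flat list of 0/1 ciphertext bits) is an
    # assumption made for the example.
    import random

    def agreement_rate(bits, shift):
        # Fraction of positions where the stream agrees with itself shifted
        # by `shift`. If the same periodic keystream bit was XORed at both
        # positions, the key cancels and plaintext bias leaks through,
        # so agreement rises above 1/2.
        pairs = list(zip(bits, bits[shift:]))
        return sum(a == b for a, b in pairs) / len(pairs) if pairs else 0.0

    def candidate_periods(bits, max_period=200, top=5):
        # Rank shifts by self-agreement; peaks suggest key-wheel periods.
        scores = {s: agreement_rate(bits, s) for s in range(1, max_period + 1)}
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top]

    if __name__ == "__main__":
        # Fake traffic: biased "plaintext" bits XORed with a period-31 key.
        random.seed(0)
        key = [random.randint(0, 1) for _ in range(31)]
        plain = [1 if random.random() < 0.7 else 0 for _ in range(5000)]
        cipher = [p ^ key[i % 31] for i, p in enumerate(plain)]
        print(candidate_periods(cipher))  # multiples of 31 should rank high

The hard part the comment points at is everything around a probe like this: deciding which statistics to try, noticing the anomaly, and sustaining that search over 500 pages without getting lost.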

But if they could do maths more fully, then pretty much all carefully defined tasks that aren't too long would be in reach.

With regard to what Touche brings up in the other reply to your comment, I think it might be possible to get them to read up on things: go through some material, invent problems about it, and try to solve those. I think this could be done today, with today's models and no real special innovation; it just hasn't been made into a service yet. But of course that doesn't address the criticism, since it assumes the availability of data.
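
As a rough illustration of what such a "read up and self-test" loop might look like, here is a minimal sketch. `complete(prompt)` is a hypothetical stand-in for whatever model API you have, and the prompts and self-grading step are illustrative assumptions, not a description of any existing service.

    # Minimal sketch of the "read up, invent problems, attempt them" loop.
    # `complete(prompt)` is a hypothetical stand-in for whatever model API
    # you have; the prompts and the self-grading step are illustrative
    # assumptions, not a description of any existing service.

    def complete(prompt):
        raise NotImplementedError("plug in your model API here")

    def self_study(source_text, n_problems=5):
        # Generate exercises from a text, attempt them, and self-grade.
        problems = complete(
            "Read the following material and write %d exercises that test "
            "understanding of it, one per line:\n\n%s" % (n_problems, source_text)
        ).splitlines()[:n_problems]
        records = []
        for problem in problems:
            attempt = complete("Solve this exercise step by step:\n" + problem)
            verdict = complete(
                "Given the material below, is this solution correct? Answer "
                "yes or no, then explain briefly.\n\nMaterial:\n" + source_text
                + "\n\nExercise:\n" + problem + "\n\nSolution:\n" + attempt
            )
            records.append({"problem": problem, "attempt": attempt,
                            "verdict": verdict})
        return records

The loop still leans entirely on the source text it is given, which is exactly the data-availability objection it fails to answer.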