
174 points by Philpax | 1 comment
EliRivers No.43719892
Would we even recognise it if it arrived? We'd probably recognise human-level intelligence, but that's specialised. What would general intelligence even look like?
replies(8): >>43719970 >>43719984 >>43720087 >>43720130 >>43720153 >>43720195 >>43720300 >>43725034
shmatt No.43719970
We are, sort of, able to recognize Nobel-worthy breakthroughs.

One of the many definitions I have for AGI is being able to produce, today, the proofs behind the 2030, 2050, 2100, etc. Nobel Prizes.

A sillier one I like is that AGI would output a correct proof that P ≠ NP on day one.

replies(1): >>43719982
tough No.43719982
Isn't AGI just "general" intelligence, as in a like-a-regular-human, Turing-test kind of deal?

Aren't you thinking of ASI/superintelligence, which would be capable of outdoing humans?

replies(1): >>43720028
kadushka No.43720028
Yes, the general consensus is that AGI should be able to perform any task an average human can perform. Definitely nothing at Nobel Prize level.
replies(2): >>43720059 >>43720067
aleph_minus_one No.43720067
> Yes, the general consensus is that AGI should be able to perform any task an average human can perform.

The goalposts are regularly moved so that AI companies and their investors can claim/hype that AGI will be around in a few years. :-)

replies(1): >>43720193
kadushka No.43720193
I learned the definition I provided back in the mid-'90s, and it hasn't really changed since then.