
336 points | mooreds | 1 comment
dathinab ◴[] No.44484445[source]
I _hope_ AGI is not right around the corner; for sociopolitical reasons we are absolutely not ready for it, and it might push the future of humanity into a dystopian abyss.

But even just taking what we have now, with some major power-usage reductions and minor improvements here and there, already seems like something that can be very usable/useful in a lot of areas (and to some degree we aren't really ready for that either, but I guess that's normal with major technological change).

It's just that for the companies creating foundation models, it's quite unclear how they can recoup their already-spent costs without either a major breakthrough or forcefully (or deceptively) pushing the tech into a lot more places than it fits.

replies(6): >>44484506 #>>44484517 #>>44485067 #>>44485492 #>>44485764 #>>44486142 #
pbreit ◴[] No.44484517[source]
Must "AGI" match human intelligence exactly or would outperforming in some functions and underpformin in others qualify?
replies(6): >>44484575 #>>44484600 #>>44484769 #>>44484956 #>>44488494 #>>44489281 #
crooked-v ◴[] No.44484600[source]
For me, "AGI" would come in with being able to reliably perform simple open-ended tasks successfully without needing any specialized aid or tooling. Not necessarily very well, just being capable of it in the first place.

For a specific example of what I mean, there's Vending-Bench: even very 'dumb' humans could reliably succeed on that test indefinitely, at least until they got terminally bored of it. Current LLMs, by contrast, are just fundamentally incapable of it, despite seeming very 'smart' if all you pay attention to is their eloquence.
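
Roughly, the setup is an agent running a simulated vending machine over a long horizon, scored on how long it stays solvent. Here's a minimal sketch of what such a harness might look like; the economics and the `agent_policy` stub standing in for a real LLM call are made-up assumptions for illustration, not the actual benchmark:

    import random

    def agent_policy(state):
        # Stand-in for an LLM call; a real harness would serialize `state`
        # into a prompt and parse the model's chosen action from its reply.
        if state["inventory"] < 5 and state["cash"] >= 20:
            return {"action": "restock", "units": 10}
        return {"action": "wait"}

    def run_episode(max_days=2000, seed=0):
        rng = random.Random(seed)
        state = {"cash": 100.0, "inventory": 20, "day": 0}
        # Episode ends on bankruptcy or after max_days; the point of the
        # benchmark is surviving the long horizon, not any single decision.
        while state["day"] < max_days and state["cash"] > 0:
            move = agent_policy(state)
            if move["action"] == "restock":
                cost = 2.0 * move["units"]  # wholesale price per unit (assumed)
                if state["cash"] >= cost:
                    state["cash"] -= cost
                    state["inventory"] += move["units"]
            demand = rng.randint(0, 8)      # customers per day (assumed)
            sold = min(demand, state["inventory"])
            state["inventory"] -= sold
            state["cash"] += 4.0 * sold     # retail price per unit (assumed)
            state["cash"] -= 5.0            # daily operating fee (assumed)
            state["day"] += 1
        return state["day"], state["cash"]

    days, cash = run_episode()
    print(f"survived {days} days, final cash ${cash:.2f}")

Even this toy version shows why the task is hard for LLMs: no single step is difficult, but one bad restocking decision compounds over thousands of turns, which is exactly where long-horizon coherence breaks down.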

replies(2): >>44485912 #>>44485936 #
carefulfungi ◴[] No.44485936[source]
If someone handed you an envelope containing a hidden question, and your life depended on a correct answer, would you rather pick a random person out of the phone book or an LLM to answer it?

On one hand, LLMs are often idiots. On the other hand, so are people.

replies(2): >>44486310 #>>44486689 #
bookman117 ◴[] No.44486689[source]
I'd learn as much as I could about the nature of the question beforehand and pay a human with a great track record of handling such questions.