
337 points | mooreds | 2 comments
dathinab:
I _hope_ AGI is not right around the corner; for sociopolitical reasons we are absolutely not ready for it, and it might push the future of humanity into a dystopian abyss.

but even just taking what we have now, with some major reductions in power usage and minor improvements here and there, already seems very usable/useful in a lot of areas (and to some degree we aren't really ready for that either, but I guess that's normal with major technological change)

it's just that for the companies creating foundation models it's quite unclear how they can recoup the costs they've already sunk without either a major breakthrough or forcefully (or deceptively) pushing the technology into a lot more places than it fits

pbreit:
Must "AGI" match human intelligence exactly or would outperforming in some functions and underpformin in others qualify?
root_axis:
At the very least, it needs to be able to collate training data, then design, code, train, fine-tune, and "RLHF" a foundation model from scratch, on its own, and have the result show improvements over the current SOTA models, before we can even begin to discuss whether we're approaching what could be AGI at some point in the future.
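
To make that bar concrete, here is a minimal sketch of the loop described above, in Python. Every function, name, and score in it is a hypothetical placeholder (none of it is a real API); the point is only what "from scratch, on its own" would have to mean end to end.

    # A minimal sketch of the bar described above. Every function here is a
    # hypothetical stand-in, not a real API; only the shape of the loop matters.

    from dataclasses import dataclass

    @dataclass
    class Model:
        name: str
        score: float  # assumed benchmark score in [0, 1]

    SOTA_SCORE = 0.70  # assumed score of the current best existing model

    def collate_training_data() -> list[str]:
        # stand-in for autonomous data collection and curation
        return ["doc 1", "doc 2"]

    def pretrain(corpus: list[str]) -> Model:
        # stand-in for designing an architecture and pretraining from scratch
        return Model("candidate-base", 0.50)

    def fine_tune(model: Model) -> Model:
        # stand-in for supervised fine-tuning
        return Model(model.name + "-sft", model.score + 0.10)

    def rlhf(model: Model) -> Model:
        # stand-in for preference optimization ("RLHF")
        return Model(model.name + "-rlhf", model.score + 0.05)

    def clears_the_bar() -> bool:
        # the whole pipeline, run autonomously, must beat the current SOTA
        candidate = rlhf(fine_tune(pretrain(collate_training_data())))
        return candidate.score > SOTA_SCORE

    if __name__ == "__main__":
        print(clears_the_bar())  # False with these placeholder numbers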
kadushka:
I cannot do all that. Am I not generally intelligent?
root_axis:
You could if you were trained to do so. LLMs cannot, even if they are.