265 points ctoth | 2 comments | | HN request time: 0.482s | source
mellosouls ◴[] No.43745240[source]
The capabilities of AI post gpt3 have become extraordinary and clearly in many cases superhuman.

However (as the article admits) there is still no general agreement on what AGI is, or how, or even whether, we can get there from here.

What there is is a growing and often naïve excitement that anticipates it as coming into view, and unfortunately that will be accompanied by the hype-merchants desperate to be first to "call it".

This article seems reasonable in some ways but unfortunately falls into the latter category with its title and sloganeering.

"AGI" in the title of any article should be seen as a cautionary flag. On HN - if anywhere - we need to be on the alert for this.

replies(13): >>43745398 #>>43745959 #>>43746159 #>>43746204 #>>43746319 #>>43746355 #>>43746427 #>>43746447 #>>43746522 #>>43746657 #>>43746801 #>>43749837 #>>43795216 #
1. dheera ◴[] No.43746522[source]
I spent some amount of time trying to create a stock/option trading bot to exploit various market inefficiencies that persist, bouncing a bunch of code and ideas off these LLMs. What I found is that all the various incarnations of GPT 4+ and GPT o+ routinely fell for the "get rich quick" option strategies all over the internet that don't work.

In domains where 95%+ of the information on the internet is misinformation, the current incarnations of LLMs have a really hard time identifying the 5% that's actually valid and useful.

In that sense, current LLMs are not yet superhuman at all, though I do think we can eventually get there.
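(The practical workaround implied here is to never take a strategy suggestion on faith and instead run it through a backtest first. A minimal sketch of that kind of sanity check, with a synthetic price series and a toy moving-average rule standing in for whatever the LLM proposed; all names, parameters, and numbers are hypothetical:)

```python
import random

def simulate_prices(n=252, start=100.0, drift=0.0002, vol=0.01, seed=42):
    """Generate a synthetic daily price path (geometric random walk)."""
    random.seed(seed)
    prices = [start]
    for _ in range(n):
        prices.append(prices[-1] * (1 + random.gauss(drift, vol)))
    return prices

def backtest_buy_and_hold(prices):
    """Baseline: total return of simply holding the asset."""
    return prices[-1] / prices[0] - 1

def backtest_sma_rule(prices, window=20):
    """Toy 'strategy': hold the asset only while price is above its SMA."""
    wealth = 1.0
    for i in range(window, len(prices) - 1):
        sma = sum(prices[i - window:i]) / window
        if prices[i] > sma:
            wealth *= prices[i + 1] / prices[i]
    return wealth - 1

prices = simulate_prices()
print("buy & hold:", backtest_buy_and_hold(prices))
print("SMA rule:  ", backtest_sma_rule(prices))
```

Comparing the rule against the buy-and-hold baseline on data the LLM has never seen is the cheapest way to catch a strategy that only "works" in internet folklore.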

replies(1): >>43746674 #
2. jimbokun ◴[] No.43746674[source]
So they are only as smart as most humans.