
265 points | ctoth
mellosouls ◴[] No.43745240[source]
The capabilities of AI post gpt3 have become extraordinary and clearly in many cases superhuman.

However (as the article admits) there is still no general agreement on what AGI is, or on how, or even whether, we can get there from here.

What there is instead is a growing and often naïve excitement that anticipates AGI as coming into view, and that will unfortunately be accompanied by hype-merchants desperate to be the first to "call it".

This article seems reasonable in some ways but unfortunately falls into the latter category with its title and sloganeering.

"AGI" in the title of any article should be seen as a cautionary flag. On HN - if anywhere - we need to be on the alert for this.

replies(13): >>43745398 #>>43745959 #>>43746159 #>>43746204 #>>43746319 #>>43746355 #>>43746427 #>>43746447 #>>43746522 #>>43746657 #>>43746801 #>>43749837 #>>43795216 #
jjeaff ◴[] No.43745959[source]
I suspect AGI will be one of those things you can't describe exactly, but you'll know it when you see it.
replies(7): >>43746043 #>>43746058 #>>43746080 #>>43746093 #>>43746651 #>>43746728 #>>43746951 #
torginus ◴[] No.43746093[source]
I still can't have an earnest conversation or bounce ideas off of any LLM - all of them seem to be a cross between a sentient encyclopedia and a constraint solver.

They might get more powerful but I feel like they're still missing something.

replies(2): >>43746121 #>>43746624 #
itchyjunk ◴[] No.43746121[source]
Why are you not able to have an earnest conversation with an LLM? What kinds of ideas are you unable to bounce off of LLMs? These seem to be exactly the use cases where LLMs have generally shined for me.
replies(1): >>43747315 #
9dev ◴[] No.43747315[source]
Eh, I am torn on this. I have had some great conversations on random questions or conceptual ideas, but also some where the model's instructions shone through far too clearly. When you ask something like "I’m working on the architecture of this system, can you let me know what you think and if there’s anything obvious to improve?"—the model will always a) flatter me for my amazing concept, b) point out the especially laudable parts of it, and c) name a few obvious but not-really-relevant concerns (e.g. "always be careful with secrets and passwords"). It will not, however, actually point out higher-level design improvements or alternative solutions; it just regurgitates what I’ve told it. That is semi-useful, most of the time.
replies(2): >>43749085 #>>43754431 #
john_minsk ◴[] No.43749085{3}[source]
Because it spits out the most probable answer, which is based on endless copycat articles online written by marketers to sell their software to C-level decision makers.

AI doesn't go and read a book on best practices, come back saying "Now I know the Kung Fu of Software Implementation", and then critically work through your plan step by step to provide an answer. These systems, for now, don't work like that.

Would you disagree?

replies(1): >>43749346 #
9dev ◴[] No.43749346{4}[source]
How come we’re discussing whether they’re artificial general intelligence, then?
replies(1): >>43749436 #
Jensson ◴[] No.43749436{5}[source]
Because some believe that to be intelligence while others believe it requires more than that.