265 points ctoth | 2 comments
mellosouls ◴[] No.43745240[source]
The capabilities of AI post gpt3 have become extraordinary and clearly in many cases superhuman.

However (as the article admits) there is still no general agreement on what AGI is, or on how we get there from here - or even whether we can.

What there is, is a growing and often naïve excitement that anticipates it coming into view, and unfortunately that will be accompanied by the hype-merchants desperate to be first to "call it".

This article seems reasonable in some ways but unfortunately falls into the latter category with its title and sloganeering.

"AGI" in the title of any article should be seen as a cautionary flag. On HN - if anywhere - we need to be on the alert for this.

replies(13): >>43745398 #>>43745959 #>>43746159 #>>43746204 #>>43746319 #>>43746355 #>>43746427 #>>43746447 #>>43746522 #>>43746657 #>>43746801 #>>43749837 #>>43795216 #
daxfohl ◴[] No.43746657[source]
Until you can boot one up, give it access to a VM's video and audio feeds plus keyboard and mouse interfaces, give it an email and chat account, tell it where the company onboarding docs are, and expect it to become a productive team member, it's not AGI. So long as we need special protocols like MCP and A2A, rather than expecting it to figure out how to collaborate like a human, it's not AGI.
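
To be concrete about "special protocols": an MCP tool call is a JSON-RPC 2.0 message shaped like the sketch below. The search_docs tool and its arguments are hypothetical, just to show what the model is handed instead of discovering the tool the way a human would.

    # Rough sketch of an MCP tool call as it appears on the wire.
    # MCP speaks JSON-RPC 2.0; "search_docs" and its arguments are
    # hypothetical, for illustration only.
    import json

    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "search_docs",        # hypothetical server-side tool
            "arguments": {"query": "onboarding checklist"},
        },
    }
    print(json.dumps(request, indent=2))  # what the model emits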

The first step, my guess, will be the ability to work through GitHub issues like a human: identifying which issues have high value, asking clarifying questions, proposing reasonable alternatives, knowing when to open a PR, responding to code review, and merging or abandoning when appropriate. But we're not even very close to that yet. There's some of it, but from what I've seen, most instances where this has been successful are low-level things like removing old feature flags.
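
The mechanical half of that loop is already trivial to script; it's the judgment that's missing. A rough sketch against the GitHub REST API (requests library; OWNER, REPO, the token, and the label are all placeholders):

    # Sketch of the mechanical half of issue triage via the GitHub REST API.
    # Deciding which issue is high-value is exactly what this can't do.
    import requests

    API = "https://api.github.com/repos/OWNER/REPO"
    HEADERS = {"Authorization": "Bearer YOUR_TOKEN",
               "Accept": "application/vnd.github+json"}

    issues = requests.get(f"{API}/issues", headers=HEADERS,
                          params={"state": "open"}).json()

    for issue in issues:
        labels = [label["name"] for label in issue.get("labels", [])]
        if "needs-clarification" in labels:   # placeholder triage rule
            requests.post(f"{API}/issues/{issue['number']}/comments",
                          headers=HEADERS,
                          json={"body": "Could you describe the expected "
                                        "behaviour and a repro case?"})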

replies(3): >>43746758 #>>43747095 #>>43747467 #
rafaelmn ◴[] No.43746758[source]
Just because we rely on vision to interface with computer software doesn't mean it's optimal for AI models. Having a specialized interface protocol is orthogonal to capability. You could theoretically write code in a proportional font with Notepad and run your tools through Windows CMD, but having an editor with syntax highlighting and a monospaced font helps you read/navigate/edit, and having tools/navigation/autocomplete etc. optimized for your flow makes you more productive and expands your capability.

If I forced you to use unnatural interfaces, it would severely limit your capabilities as well, because you'd have to dedicate more effort to handling basic editing tasks. As someone who recently swapped to a split 36-key keyboard with a new layout, I can say this becomes immediately obvious when you try something like it. You take your typing/editing skills for granted - try switching your setup and see how your productivity/problem-solving ability tanks in practice.

replies(3): >>43747058 #>>43747819 #>>43752611 #
esperent ◴[] No.43747819[source]
> Just because we rely on vision to interface with computer software doesn't mean it's optimal for AI models

This is true but AGI means "Artificial General Intelligence". Perhaps it would be even more efficient with certain interfaces, but to be general it would have to at least work with the same ones as humans.

Here's some things that I think a true AGI would need to be able to do:

* Control a general-purpose robot and use vision to do housework, gardening, etc.

* Be able to drive a car - equivalent interfaces to a human's might be servo-motor-controlled inputs.

* Use standard computer inputs to do standard computer tasks (a minimal sketch of that loop follows this list)

And this list could easily be extended.
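
To make that last item concrete: the entire interface available to the agent could be as narrow as the loop below. A minimal sketch using pyautogui, where decide() is a stand-in for the general intelligence itself:

    # Perception/action loop behind "standard computer inputs":
    # raw pixels in, mouse/keyboard out.
    import pyautogui

    def decide(image):
        # Hypothetical: a general agent would map pixels to an action here.
        return ("click", (100, 200))

    for _ in range(10):                 # a few steps, for illustration
        frame = pyautogui.screenshot()  # vision: pixels only, no DOM, no API
        action, args = decide(frame)
        if action == "click":
            pyautogui.click(*args)      # mouse
        elif action == "type":
            pyautogui.write(args)       # keyboard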

If we have to be very specific in the choice of interfaces and tasks that we give it, it's not a general AI.

At the same time, we have to be careful about moving the goalposts too much. But current AIs are limited to what can be returned through a small number of interfaces (prompt with text/image/video in, text/image/video data out). This is amazing, and they can sound very intelligent while doing so. But it's important not to lose sight of what they still can't do well, which is basically everything else.
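
To illustrate how narrow that interface is, here's roughly all of it, sketched with the OpenAI Python SDK (the model name is illustrative):

    # One request/response call: text+image in, text out. That single
    # shape is, today, nearly the whole interface to these models.
    from openai import OpenAI

    client = OpenAI()
    reply = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.png"}},
            ],
        }],
    )
    print(reply.choices[0].message.content)  # everything comes back as text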

Outside of this area, when you do hear of an AI doing something well (self-driving, for example), it's usually a separate specialized model rather than a contribution towards AGI.

replies(2): >>43747924 #>>43753643 #
mNovak ◴[] No.43747924[source]
By this logic disabled people would not class as "Generally Intelligent" because they might have physical "interface" limitations.

Similarly I wouldn't be "Generally Intelligent" by this definition if you sat me at a Cyrillic or Chinese keyboard. For this reason, I see human-centric interface arguments as a red herring.

I think a better candidate definition might be about learning and adapting to new environments (learning from mistakes and predicting outcomes), assuming reasonable interface aids.

replies(2): >>43748508 #>>43748532 #
vczf ◴[] No.43748532[source]
If all we needed was general intelligence, we would be hiring octopuses. Human skills, like fluency in specific languages, are implicit in our concept of AGI.
replies(1): >>43750281 #