
265 points ctoth | 3 comments
mellosouls ◴[] No.43745240[source]
The capabilities of AI post gpt3 have become extraordinary and clearly in many cases superhuman.

However (as the article admits) there is still no general agreement on what AGI is, or on how (or even whether) we can get there from here.

What there is, instead, is a growing and often naïve excitement that anticipates AGI as coming into view, and unfortunately that will be accompanied by hype-merchants desperate to be the first to "call it".

This article seems reasonable in some ways but unfortunately falls into the latter category with its title and sloganeering.

"AGI" in the title of any article should be seen as a cautionary flag. On HN - if anywhere - we need to be on the alert for this.

replies(13): >>43745398 #>>43745959 #>>43746159 #>>43746204 #>>43746319 #>>43746355 #>>43746427 #>>43746447 #>>43746522 #>>43746657 #>>43746801 #>>43749837 #>>43795216 #
daxfohl ◴[] No.43746657[source]
Until you can boot one up, give it access to a VM's video and audio feeds and its keyboard and mouse interfaces, give it an email and chat account, tell it where the company onboarding docs are, and expect it to be a productive team member, it's not AGI. So long as we need special protocols like MCP and A2A, rather than expecting it to figure out how to collaborate like a human, it's not AGI.
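To make the contrast concrete, here is a rough sketch of the kind of pre-chewed interface I mean. The field names loosely follow MCP's tool-listing convention; the tool itself and its behavior are hypothetical, not from any real server:

```python
# A hypothetical MCP-style tool description: the agent is handed a
# machine-readable schema up front, instead of discovering the interface
# the way a new human hire would. (Illustrative only; field names loosely
# follow the MCP spec's tool listing.)
read_file_tool = {
    "name": "read_file",
    "description": "Read a file from the project workspace",
    "inputSchema": {
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
}

def call_tool(tool: dict, arguments: dict) -> str:
    """Check required fields from the schema, then dispatch the call."""
    for field in tool["inputSchema"]["required"]:
        if field not in arguments:
            raise ValueError(f"missing required argument: {field}")
    # A real server would actually read the file; stubbed out here.
    return f"contents of {arguments['path']}"

print(call_tool(read_file_tool, {"path": "onboarding.md"}))
```

The point is that everything the agent can do is enumerated and validated for it in advance; a human teammate gets none of that scaffolding and copes anyway.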

The first step, my guess, is going to be the ability to work through GitHub issues like a human: identifying which issues have high value, asking clarifying questions, proposing reasonable alternatives, knowing when to open a PR, responding to code review, and merging or abandoning when appropriate. But we're not even very close to that yet. There's some of it, but from what I've seen, most instances where this has been successful are low-level things like removing old feature flags.

replies(3): >>43746758 #>>43747095 #>>43747467 #
rafaelmn ◴[] No.43746758[source]
Just because we rely on vision to interface with computer software doesn't mean it's optimal for AI models. Having a specialized interface protocol is orthogonal to capability. You could theoretically write code in a proportional font in Notepad and run your tools through the Windows CMD prompt, but an editor with syntax highlighting and a monospaced font helps you read, navigate, and edit, and tools, navigation, autocomplete, etc. optimized for your flow make you more productive and expand your capability.

If I forced you to use unnatural interfaces it would severely limit your capabilities as well, because you'd have to dedicate more effort to basic editing tasks. As someone who recently swapped to a split 36-key keyboard with a new layout, I can say this becomes immediately obvious. You take your typing and editing skills for granted; switch your setup and watch your productivity and problem-solving ability tank in practice.

replies(3): >>43747058 #>>43747819 #>>43752611 #
raducu ◴[] No.43752611[source]
> Just because we rely on vision to interface with computer software doesn't mean it's optimal for AI models.

It's optimal for beings that have general-purpose intelligence.

> would severely limit your capabilities as well because you'd have to dedicate more effort towards handling basic editing tasks

Yes, but humans will eventually get used to it: they internalize the keyboard, the domain language, the idioms, and so on; their context gets pushed to long-term knowledge overnight, their short-term context gets cleaned up, and they get better and better at the job, day by day. AI starts very strong but stays at that level forever.

When faced with a really hard problem, the human will remember day after day what he tried yesterday, and parts of that problem will become easier and easier. Not so for the AI: if it can't solve a problem today, running it for days and days produces diminishing returns.

That's the "general" part of human intelligence: over time it can acquire new skills it did not have yesterday. LLMs can't do that; there is no byproduct of practicing a problem that leaves them better at it or with newly acquired skills.

replies(2): >>43753302 #>>43753617 #
ctoth ◴[] No.43753617[source]
> It's optimal for beings that have general purpose inteligence [sic].

Hi. I'm blind. I would like to think I have general-purpose intelligence, thanks.

And I can state that interfacing through vision would, in fact, be suboptimal for me. My visual cortex is literally unformed. Yet somehow I can perform symbolic manipulations. Converse with people. Write code. Get frustrated with strangers on the Internet. Perhaps there are other "optimal" ways for "intelligent" systems to interface with computers? I don't know, maybe the accessibility APIs we have built? Maybe MCP? Maybe any number of things? Data structures specifically optimized for the purpose and exchanged directly between intelligences vastly more complex than ourselves? Do you really think that clicking buttons through a GUI is the one true optimal way to use a computer?

replies(2): >>43753763 #>>43754257 #
Jensson ◴[] No.43753763[source]
> Do you really think that clicking buttons through a GUI is the one true optimal way to use a computer?

There are some tasks you can't do without vision, but I agree it is dumb to say general intelligence requires vision; vision is just an information source, not intelligence itself. Blind people can be excellent software engineers, and they can do most white-collar work just as well as anyone else, since most tasks don't require visual processing; text processing works well enough.

replies(1): >>43754062 #
jermaustin1 ◴[] No.43754062[source]
> There are some tasks you can't do without vision...

I can't think of anything that requires vision where a tool (a sighted person) you can interface with (by speaking) wouldn't suffice. So why aren't we giving AI the same "benefit" of using any tool or protocol it needs to complete something?

replies(1): >>43754227 #
ctoth ◴[] No.43754227[source]
> I can't think of anything that requires vision where a tool (a sighted person) you can interface with (by speaking) wouldn't suffice.

Okay, are you volunteering to be the guide passenger while I drive?

replies(1): >>43755352 #
jermaustin1 ◴[] No.43755352[source]
Thank you for making my point:

We have already created a tool called "full self-driving" cars. It is a tool that humans can use, just as MCP is a tool for AIs to use.

All I'm trying to say is that AGIs should be allowed to use tools that fit their intelligence the same way we do. I'm not saying AIs are AGIs. I'm just saying that requiring them to use a mouse and keyboard is a very weird requirement, like saying people who can't use a mouse and keyboard (amputees, etc.), or who can't see the screen, aren't "generally" intelligent.