
265 points ctoth | 1 comment
mellosouls ◴[] No.43745240[source]
The capabilities of AI post-GPT-3 have become extraordinary and, in many cases, clearly superhuman.

However (as the article admits), there is still no general agreement on what AGI is, or on how (or even whether) we can get there from here.

What there is, instead, is a growing and often naïve excitement that anticipates AGI coming into view, accompanied, unfortunately, by hype-merchants desperate to be the first to "call it".

This article seems reasonable in some ways but unfortunately falls into the latter category with its title and sloganeering.

"AGI" in the title of any article should be seen as a cautionary flag. On HN - if anywhere - we need to be on the alert for this.

replies(13): >>43745398 #>>43745959 #>>43746159 #>>43746204 #>>43746319 #>>43746355 #>>43746427 #>>43746447 #>>43746522 #>>43746657 #>>43746801 #>>43749837 #>>43795216 #
daxfohl ◴[] No.43746657[source]
Until you can boot one up, give it access to a VM's video and audio feeds and its keyboard and mouse interfaces, give it an email and chat account, tell it where the company onboarding docs are, and expect it to be a productive team member, it's not AGI. So long as we need special protocols like MCP and A2A, rather than expecting agents to figure out how to collaborate like a human, they're not AGI.
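
To make the contrast concrete, here's a rough sketch (the tool name and arguments are made up, but the shape is roughly what an MCP-style call looks like: structured JSON-RPC, versus the pixels-and-keystrokes channel a human gets):

    import json

    # Special-protocol path: an MCP-style JSON-RPC tool call.
    # "create_ticket" and its arguments are hypothetical.
    mcp_call = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "create_ticket",
            "arguments": {"title": "Onboarding question",
                          "body": "Where are the docs?"},
        },
    }
    print(json.dumps(mcp_call, indent=2))

    # Human path: the only channels are pixels in, keystrokes/clicks out.
    human_actions = [
        ("screenshot", None),            # look at the screen
        ("click", (412, 380)),           # click the "New ticket" button
        ("type", "Onboarding question"), # type into whatever has focus
    ]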

The first step, my guess, is going to be the ability to work through GitHub issues like a human: identifying which issues have high value, asking clarifying questions, proposing reasonable alternatives, knowing when to open a PR, responding to code review, and merging or abandoning when appropriate. But we're not even very close to that yet. There's some of it, but from what I've seen, most instances where this has been successful are low-level things like removing old feature flags.
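
A rough sketch of what that bar implies -- the loop below is hypothetical, and `assess` stands in for a model call; none of this is a real agent API:

    # Hypothetical triage loop for the workflow described above.
    def assess(issue: dict) -> dict:
        """Placeholder: a model call returning value, clarity, approach."""
        return {"value": "high", "needs_clarification": True,
                "approach": "remove the dead feature flag"}

    def triage(issues: list[dict]) -> list[tuple[str, dict]]:
        plan = []
        for issue in issues:
            a = assess(issue)
            if a["value"] == "low":
                plan.append(("skip", issue))          # not worth the effort
            elif a["needs_clarification"]:
                plan.append(("ask_question", issue))  # comment before coding
            else:
                plan.append(("open_pr", issue))       # implement, open a PR
        return plan

    print(triage([{"title": "Remove old feature flag"}]))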

replies(3): >>43746758 #>>43747095 #>>43747467 #
rafaelmn ◴[] No.43746758[source]
Just because we rely on vision to interface with computer software doesn't mean it's optimal for AI models. Having a specialized interface protocol is orthogonal to capability. You could theoretically write code in a proportional font with Notepad and run your tools through Windows CMD; but an editor with syntax highlighting and a monospaced font helps you read, navigate, and edit, and having tools, navigation, autocomplete, etc. optimized for your flow makes you more productive and expands your capability.
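
A toy sketch of that difference (both functions are made up): the structured interface does in one operation what the "human" interface has to do one keystroke at a time.

    # Two ways to make the same edit; both functions are illustrative.

    def edit_replace(buf: str, start: int, end: int, text: str) -> str:
        """Structured interface: one operation, no cursor state to track."""
        return buf[:start] + text + buf[end:]

    def edit_by_keystrokes(buf: str, start: int, end: int, text: str) -> str:
        """'Human' interface: move cursor, delete, retype, key by key."""
        keys = (["Right"] * start             # move cursor to start
                + ["Delete"] * (end - start)  # delete the old span
                + list(text))                 # type the replacement
        out, cursor = list(buf), 0
        for k in keys:
            if k == "Right":
                cursor += 1
            elif k == "Delete":
                del out[cursor]
            else:
                out.insert(cursor, k)
                cursor += 1
        return "".join(out)

    assert edit_replace("color = red", 8, 11, "blue") == "color = blue"
    assert edit_by_keystrokes("color = red", 8, 11, "blue") == "color = blue"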

If I forced you to use unnatural interfaces, it would severely limit your capabilities as well, because you'd have to dedicate more effort to handling basic editing tasks. As someone who recently swapped to a split 36-key keyboard with a new layout, I can say this becomes immediately obvious when you try something like it. You take your typing and editing skills for granted; switch your setup and see how your productivity and problem-solving ability tank in practice.

replies(3): >>43747058 #>>43747819 #>>43752611 #
raducu ◴[] No.43752611[source]
> Just because we rely on vision to interface with computer software doesn't mean it's optimal for AI models.

It's optimal for beings that have general-purpose intelligence.

> would severely limit your capabilities as well because you'd have to dedicate more effort towards handling basic editing tasks

Yes, but humans eventually get used to it: they internalize the keyboard, the domain language, the idioms, and so on; their context gets pushed to long-term knowledge overnight, their short-term context gets cleaned up, and they get better and better at the job, day by day. AI starts very strong but stays at that level forever.

When faced with a really hard problem, day after day the human will remember what he tried yesterday, and parts of that problem will become easier and easier. Not so for the AI: if it can't solve a problem today, running it for days and days produces diminishing returns.

That's the "general" part of human intelligence -- over time it can acquire new skills it did not have yesterday. LLMs can't do that: there is no byproduct of them getting better or acquiring new skills as a result of practicing a problem.
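
What they lack is any equivalent of that overnight consolidation. A toy sketch of the missing loop (the file name and `distill` are invented; in practice `distill` would itself be a model call, and note the weights never change, which is the point):

    import json
    import pathlib

    NOTES = pathlib.Path("lessons.json")  # hypothetical persistent store

    def distill(transcript: str) -> str:
        """Placeholder for 'what did I learn today?'"""
        return "subproblem C: the naive approach times out; memoize first"

    def end_of_day(transcript: str) -> None:
        lessons = json.loads(NOTES.read_text()) if NOTES.exists() else []
        lessons.append(distill(transcript))  # consolidate into long-term notes
        NOTES.write_text(json.dumps(lessons, indent=2))

    def start_of_day() -> str:
        lessons = json.loads(NOTES.read_text()) if NOTES.exists() else []
        return "\n".join(lessons)  # re-injected as context; weights unchanged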

replies(2): >>43753302 #>>43753617 #
daxfohl ◴[] No.43753302[source]
Right, and also the ability to know when it's stuck. It should be able to take a problem, work on it for a few hours, and, if it decides it's not making progress, ping back asynchronously: "Hey, I've broken the problem down into A, B, C, and D. I finished A and B, but C seems like it's going to take a while, and I wanted to make sure this is the right approach. Do you have time to chat?" Or, similarly, I should be able to ask for a status update and get this answer back.
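
Something like this escalation logic, where the threshold, `notify`, and `attempt` are all invented placeholders:

    import time

    STALL_SECONDS = 3 * 3600  # ask for help after ~3h without progress

    def notify(message: str) -> None:
        """Placeholder: would post to chat/email rather than print."""
        print(message)

    def work(subtasks: list[str], attempt) -> None:
        done = []
        for task in subtasks:
            started = time.monotonic()
            while not attempt(task):  # attempt() does the actual work
                if time.monotonic() - started > STALL_SECONDS:
                    notify(f"Finished {done}; stuck on {task!r}. "
                           "Is this still the right approach? "
                           "Do you have time to chat?")
                    return
            done.append(task)  # progress: move on, reset the clock
        notify(f"All done: {done}")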