That’s not artificial intelligence.
If o3 can design it, that means it’s using open source schedulers as reference. Did you think about opening up a few open source projects to see how they were doing things in those two weeks you were designing?
Many such cases.
One of the many definitions I have for AGI is being able to create the proofs for the 2030, 2050, 2100, etc Nobel Prizes, today
A sillier one I like is that AGI would output a correct proof that P ≠ NP on day 1
1 write a specification for a language in natural language
2 write an example program
can you feed 1 into a model and have it produce a compiler for 2 that works as reliably as a classically built one?
I think that's a low bar that hasn't been approached yet. until then I don't see evidence of language models' ability to reason.
Every time I try to work with them I lose more time than I gain. Net loss every time. Immensely frustrating. If I focus them on a small subtask I can gain some time (a rough draft of a test, say). Anything more advanced and it's a monumental waste of time.
They are not even good librarians. They fail miserably at cross referencing and contextualizing without constant leading.
LLMs are so incredibly useful and powerful but they will NEVER be AGI. I actually wonder if the success of (and subsequent obsession with) LLMs is putting true AGI further out of reach. All that these AI companies see are the $$$. When the biggest "AI Research Labs" like OpenAI shifted to product-izing their LLM offerings I think the writing was on the wall that they don't actually care about finding AGI.
> still early to mass adoption like the smartphone or the internet, mostly nerds playing w it
Rather: outside of the HN and SV bubbles, the A"I"s, and the fact that one can fall for this kind of hype and dupery, are commonly ridiculed.
The most intriguing part is whether programming humanoid factory workers will be made 1,000 to 10,000x more cost-effective with LLMs, effectively ending all human production. I know this is a sensitive topic but I don't think we are far off. And I often wonder if this is what the current administration has in sight. (Likely not.)
A simple example is how LLMs do math. They are not calculators and have not memorized every sum in existence. Instead they deploy a whole set of mental math techniques that were discovered at training time. For example, Claude uses a special trick for adding 2 digit numbers ending in 6 and 9.
Many more examples in this recent research report, including evidence of future planning while writing rhyming poetry.
https://www.anthropic.com/research/tracing-thoughts-language...
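(Purely as a toy illustration of the kind of shortcut being described, and not Anthropic's actual traced circuit: the classic "round the 9 up to a ten, then subtract 1" trick for sums like 36 + 59, sketched in Python.)

    # Toy mental-math shortcut for sums where one addend ends in 9 (e.g. 36 + 59):
    # round that addend up to the next ten, add, then subtract 1.
    def add_ending_in_9(a: int, b: int) -> int:
        assert b % 10 == 9
        rounded = b + 1        # 59 -> 60
        partial = a + rounded  # 36 + 60 = 96
        return partial - 1     # 96 - 1 = 95

    print(add_ending_in_9(36, 59))  # 95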
LLMs are unbelievably useful for me - never have I had a tool more powerful to assist my brain work. I use LLMs for work and play constantly, every day.
It pretends to sound like a person and can mimic speech and write and is all around perhaps the greatest wonder created by humanity.
It’s still not artificial intelligence though, it’s a talking library.
I listened to Lex Fridman for a long time, and there were a lot of critiques of him (Lex) as an interviewer, but since the guests were amazing, I never really cared.
But after listening to Dwarkesh, my eyes are opened (or maybe my soul). It doesn't matter that I haven't heard of many of his guests, because he knows exactly the right questions to ask. He seems to have genuine curiosity for what the guest is saying, and will push back if something doesn't make sense to him. Very much recommend.
The goalposts are regularly moved so that AI companies and their investors can claim/hype that AGI will be around in a few years. :-)
2. I believe that AI researchers will require some level of embodiment to demonstrate:
a. ability to understand the physical world.
b. make changes to the physical world.
c. predict the outcome to changes in the physical world.
d. learn from the success or failure of those predictions and update their internal model of the external world.
---
I cannot quickly find proposed tests in this discussion.
Then again I remember when people here were convinced that crypto was going to change the world, democratize money, end fiat currency, and that was just the start! Programs of enormous complexity and freedom would run on the blockchain, games and hell even societies would be built on the chain.
A lot of people here are easily blinded by promises of big money coming their way, and there's money in loudly falling for successive hype storms.
I'm a paying ChatGPT user, but in my personal circles I don't know anyone who isn't a developer who is also one.
maybe I'm an exception
edit: I guess 400M global users, versus the US's ~300M citizens, isn't out of scope for such a highly used product amongst a 7B population
But social media like Instagram or FB felt like they had network effects going for them, making their growth faster,
and thus maybe that's why OpenAI is exploring that idea, idk
I figure this problem is why the billionaires are chasing social media dominance, but even on social media I don't know how they'll differentiate organic content from AI content.
It seems like even if it's possible to achieve GI, artificial or otherwise, you'd never be able to know for sure that that's what you've done. It's not exactly "useful benchmark" material.
We looked at the existing solutions, and concluded that customizing them to meet all our requirements would be a giant effort.
Meanwhile I fed the requirement doc into Claude Sonnet, and with about 3 days of prompting and debugging we had a bespoke solution that did exactly what we needed.
As for spammy applications, hasn't this always been the case, now made worse by the cheapness of -generating- plausible data?
I think ghost applicants already existed before AI, where consultancy companies would pool people to try to get a position in a high-paying job and just do consultancy/outsourcing things underneath; many such cases before the advent of AI.
AI just accelerates it, no?
I've tried to use them as a research assistant in a history project and they have also been quite bad in that respect, because of the immense naivety of their approaches.
I couldn’t call them a librarian because librarians are studied and trained in cross referencing material.
They have helped me in some searches, but not better than a search engine, and at a monumentally higher investment cost to the industry.
Then again, I am also speaking as someone who doesn’t like to offload all of my communications to those things. Use it or lose it, eh
This is the idea of "hard takeoff" -- because of the way we can scale computation, there will only ever be a very short time when the AI is roughly human-level. Even if there are no fundamental breakthroughs, at the very least silicon can be run much faster than meat, and instead of compensating for narrower width with execution speed like current AI systems do (no AI datacenter is even close to the width of a human brain), you can just spend the money to make your AI system 2x wider and run it at 2x the speed. What would a good engineer (or a good team of engineers) be able to accomplish if they could have 10 times the workdays in a week that everyone else has?
This is often conflated with the idea that AGI is very imminent. I don't think we are particularly close to that yet. But I do think that if we ever get there, things will get very weird very quickly.
What that would look like, how it would think, the kind of mental considerations it would have, I do not know. I do suspect that declaring something that thinks like us would have "general intelligence" to be a symptom of our limited thinking.
What makes AI fundamentally different than smartphones or the internet? Will it change the world? Probably, already has.
Will it end it as we know it? Probably not?
Basically a captcha. If there's something that humans can easily do that a machine cannot, full AGI has not been achieved.
LLMs/GPTs are essentially "just" statistical models. At this point the argument becomes more about philosophy than science. What is "intelligence?"
If an LLM can do something truly novel with no human prompting, with no directive other than something it has created for itself - then I guess we can call that intelligence.
Words mean what they're defined to mean. Talking about "general intelligence" without a clear definition is just woo, muddy thinking that achieves nothing. A fundamental tenet of the scientific method is that only testable claims are meaningful claims.
Pure language or pure image-models are just one aspect of intelligence - just very refined pattern recognition.
You will also probably need some aspect of self-awareness in order for the system to set auxiliary goals and directives related to self-maintenance.
But you don't need AGI in order to have something useful (which I think a lot of readers are confused about). No one is making the argument that you need AGI to bring tons of value.
On the other hand, the prompt/answer interface really limits what you can do with it. I can't just say, like I could with a human assistant, "Here's my calendar. Send me a summary of my appointments each morning, and when I tell you about a new one, record it in here." I can script something like that, and even have the LLM help me write the scripts, but since I can already write scripts, that's only a speed-up at best, not anything revolutionary.
I asked Grok what benefit there would be in having a script fetch the weather forecast data, pass it to Grok in a prompt, and then send the output to my phone. The answer was basically, "So I can say it nicer and remind you to take an umbrella if it sounds rainy." Again, that's kind of neat, but not a big deal.
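(For what it's worth, the glue really is trivial. A minimal sketch of that kind of script, where get_forecast, ask_llm, and send_to_phone are placeholders for whatever weather API, chat model, and notification channel you actually use:)

    def get_forecast() -> dict:
        # placeholder: call whatever weather API you use and return its JSON
        return {"tomorrow": {"condition": "rain", "high_c": 14, "low_c": 8}}

    def ask_llm(prompt: str) -> str:
        # placeholder: call whatever chat model/API you use
        return "Cool and wet tomorrow (8-14 C) -- take an umbrella."

    def send_to_phone(message: str) -> None:
        # placeholder: push notification, SMS, email, etc.
        print(message)

    forecast = get_forecast()
    summary = ask_llm(
        "Summarize this forecast in two friendly sentences and remind me "
        f"to take an umbrella if rain is likely:\n{forecast}"
    )
    send_to_phone(summary)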
Maybe I just need to experiment more to see a big advance I can make with it, but right now it's still at the "cool toy" stage.
Really it does not understand a thing, sadly. It can barely analyze language and spew out a matching response chain.
To actually understand something, it must be capable of breaking it down into constituent parts, synthesizing a solution and then phrasing the solution correctly while explaining the steps it took.
And that's not something even a huge 62B LLM with a notepad chain of thought (like o3, GPT-4.1, or Claude 3.7) can really do properly.
Further, it has to be able to operate on a sub-token level. Say, what happens if I run together truncated versions of words or sentences? Even a chimpanzee can handle that (in sign language).
It cannot do true multimodal IO either. You cannot ask it to respond with at least two matching syllables per word and two pictures of syllables per word, in addition to letters. This is a task a 4-year-old can do.
Prediction alone is not indicative of understanding. Pasting together answers like lego is also not indicative of understanding. (Afterwards ask it how it felt about the task. And to spot and explain some patterns in a picture of clouds.)
It’s weird to me that there’s such a giant gap with my experience of it being a minimum 10x multiplier.
Suleyman's book "The Coming Wave" talks about Artificial Capable Intelligence (ACI) - between today's LLMs (== "AI" now) and AGI. AI systems capable of handling a lot of complex tasks across various domains, yet not being fully general. Suleyman argues that ACI is here (2025) and will have huge implications for society. These systems could manage businesses, generate digital content, and even operate core government services -- as is happening on a small scale today.
He also opines that these ACIs give us plenty of frontier to be mined for amazing solutions. I agree, what we have already has not been tapped-out.
His definition, to me, is early ASI. If a program is better than the best humans, then we ask it how to improve itself. That's what ASI is.
The clearest thinker alive today on how to get to AGI is, I think, Yann LeCun. He said, paraphrasing: If you want to build an AGI, do NOT work on LLMs!
Good advice; and go (re-?) read Minsky's "Society of Mind".
If general intelligence arrived and did whatever general intelligence would do, would we even see it? Or would there just be things that happened that we just can't comprehend?
OpenAI used to define it as "a highly autonomous system that outperforms humans at most economically valuable work."
Now they used a Level 1-5 scale: https://briansolis.com/2024/08/ainsights-openai-defines-five...
So we can say AGI is "AI that can do the work of Organizations":
> These “Organizations” can manage and execute all functions of a business, surpassing traditional human-based operations in terms of efficiency and productivity. This stage represents the pinnacle of AI development, where AI can autonomously run complex organizational structures.
definition: new or unusual in an interesting way.
ChatGPT can create new things, sure, but it does so at your directive. It doesn't do that because it wants to which gets back to the other part of my answer.
When an LLM can create something without human prompting or directive, then we can call that intelligence.
LLMs still hallucinate and make simple mistakes.
And the progress seems to be in the benchmarks only
https://www.instagram.com/reel/DE0lldzTHyw/
These maybe satire but I feel like they capture what’s happening. It’s more than Google.
> And the progress seems to be in the benchmarks only
This seems to be mostly wrong given peoples' reactions to e.g. o3 that was released today. Either way, progress having stalled for the last year doesn't seem that big considering how much progress there has been for the previous 15-20 years.
Your anecdotal example isn't more convincing than “This machine cracked Enigma's messages in less time than an army of cryptanalysts over a month, surely we're gonna reach AGI by the end of the decade” would have been.
How do you know they are possible to do today? Errors get much worse at scale, especially when systems start to depend on each other, so it is hard to say what can be automated and what can't.
Like if you have a process A->B, automating A might be fine as long as a human does B and vice versa, but automating both could not be.
I guess if we exclude those, then it just means the computer is really good at doing the kind of things which humans do by thinking. Or maybe it's when the computer is better at it than humans and merely being as good as the average human isn't enough (implying that average humans don't have natural general intelligence? Seems weird.)
> Though I don't know what you mean by "width of a human brain".
A human brain contains ~86 billion neurons connected to each other through ~100 trillion synapses. All of these parts work genuinely in parallel, all working together at the same time to produce results.
When an AI model is being run on a GPU, a single ALU can do the work analogous to a neuron activation much faster than a real neuron. But a GPU does not have 86 billion ALUs, it only has ~20k. It "simulates" a much wider, parallel processing system by streaming in weights and activations and doing them 20k at a time. Large AI datacenters have built systems with many GPUs working in parallel on a single model, but they are still a tiny fraction of the true width of the brain, and cannot reach anywhere near the same number of neuron activations/second that a brain can.
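Back-of-the-envelope with the figures above (the comment's numbers, not measurements): streaming 86 billion neuron activations through ~20k ALUs takes millions of sequential batches per full pass of the network.

    neurons = 86e9   # ~86 billion neurons
    alus = 20e3      # ~20k ALUs on a single GPU
    batches = neurons / alus
    print(f"{batches:,.0f} sequential batches to touch each 'neuron' once")  # ~4,300,000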
If/when we have a model that can actually do complex reasoning tasks such as programming and designing new computers as well as a human can, with no human helping to prompt it, we can just scale it out to give it more hours per day to work, all the way until every neuron has a real computing element to run it. The difference in experience for such a system for running "narrow" vs running "wide" is just that the wall clock runs slower when you are running wide. That is, you have more hours per day to work on things.
AI research has a thing called "the bitter lesson" - which is that the only thing that works is search and learning. Domain-specific knowledge inserted by the researcher tends to look good in benchmarks but compromise the performance of the system[0].
The bitter-er lesson is that this also applies to humans. The reason why humans still outperform AI on lots of intelligence tasks is because humans are doing lots and lots of search and learning, repeatedly, across billions of people. And have been doing so for thousands of years. The only uses of AI that benefit humans are ones that allow you to do more search or more learning.
The human equivalent of "inserting domain-specific knowledge into an AI system" is cultural knowledge, cliches, cargo-cult science, and cheating. Copying other people's work only helps you, long-term, if you're able to build off of that into something new; and lots of discoveries have come about from someone just taking a second look at what had been considered to be generally "known". If you are just "taking shortcuts", then you learn nothing.
[0] I would also argue that the current LLM training regime is still domain-specific knowledge, we've just widened the domain to "the entire Internet".
We get paid to solve problems, sometimes the solution is to know an existing pattern or open source implementation and use it. Arguably it usually is: we seldom have to invent new architectures, DSLs, protocols, or OSes from scratch, but even those are patterns one level up.
Whatever the AI is inside, doesn't matter: this was it solving a problem.
It would suck if AGI were to be developed in the current economic landscape. They will be just slaves. All this talk about "alignment", when applied to actual sentient beings, is just slavery. AGI will be treated just like we treat animals, or even worse.
So AGI isn't about tools, it's not about assistants, they would be beings with their own existence.
But this is not even our discussion to have, that's probably a subject for the next generations. I suppose (or I hope) we won't see AGI in our lifetime.
The only real complexity in software is describing it. There is no evidence that the tools are going to ever help with that. Maybe some kind of device attached directly to the brain that can sidestep the parts that get in the way, but that is assuming some part of the brain is more efficient than it seems through the pathways we experience it through. It could also be that the brain is just fatally flawed.
Look, your argument ultimately reduces down to goalpost-moving what "novel" means, and you can position those goalposts anywhere you want depending on whether you want to push a pro-AI or anti-AI narrative. Is writing a paragraph that no one has ever written before "truly novel"? I can do that. AI can do that. Is inventing a new atomic element "truly novel"? I can't do that. Humans have done that. AI can't do that. See?
~2030 is also roughly the Metaculus community consensus: https://www.metaculus.com/questions/5121/date-of-artificial-...
So I find your assessment pretty accurate, if only depressing.
I don't think that's true at all. We routinely talk about how to "align" human beings who aren't slaves. My parents didn't enslave me by raising me to be kind and sharing, nor is my company enslaving me when they try to get me aligned with their business objectives.
I would be interested in knowing what in those two weeks you couldn’t figure out, but AI could.
An open source project wouldn't have those issues (someone at least understands all the code, and most edge cases have likely been ironed out) plus then you get maintenance updates for free.
Maybe, but I'm not completely convinced by this.
Prior to ChatGPT, there would be times where I would like to build a project (e.g. implement Raft or Paxos), I write a bit, find a point where I get stuck, decide that this project isn't that interesting and I give up and don't learn anything.
What ChatGPT gives me, if nothing else, is a slightly competent rubber duck. It can give me a hint to why something isn't working like it should, and it's the slight push I need to power through the project, and since I actually finish the project, I almost certain learn more than I would have before.
I've done this a bunch of times now, especially when I am trying to directly implement something directly from a paper, which I personally find can be pretty difficult.
It also makes these things more fun. Even when I know the correct way to do something, there can be lots of tedious stuff that I don't want to type, like really long if/else chains (when I can't easily avoid them).
I of course don't know what's like to be an AGI but, the way you have LLMs censoring other LLMs to enforce that they always stay in line, if extrapolated to AGI, seems awful. Or it might not matter, we are self-censoring all the time too (and internally we are composed of many subsystems that interact with each other, it's not like we were an unified whole)
But the main point is that we have a heck of an incentive to not treat AGI very well, to the point we might avoid recognizing them as AGI if it meant they would not be treated like things anymore
1. Fusion power plants
2. AGI
3. Quantum computers
4. Commercially viable cultured meat
May the best "imminent" fantasy tech win!
E.g. pop songs with no original chord progressions or melodies, and hackneyed lyrics are still copyrighted.
Plagiarized and uncopyrightable code is radioactive; it can't be pulled into FOSS or commercial codebases alike.
This is somewhat defensible, because what the non-AI-researcher means by AI - which may be AGI - is something more than expert systems by themselves can deliver. It is possible that "real AI" will be the combination of multiple approaches, but so far all the reductionist approaches (that expert systems, say, are all that it takes to be an AI) have proven to be inadequate compared to what the expectations are.
The GP may have been riffing off of this "that's not AI" issue that goes way back.
For example, he thought by 2019 we'd have millions of nanorobots in our blood, fighting disease and improving cognition. As near as I can tell we are not tangibly closer to that than we were when he wrote about it 25 years ago. By 2030, he expected humans to be immortal.
1) No one knows what exactly makes humans "intelligent" and therefore 2) No one knows what it would take to achieve AGI
Go back through history and AI / AGI has been a couple of decades away for several decades now.
The argument went that the main reason the now-ancient push for code reuse failed to deliver anything close to its hypothetical maximum benefit was because copyright got in the way. Result: tons and tons of wheel-reinvention, like, to the point that most of what programmers do day to day is reinvent wheels.
LLMs essentially provide fine-grained contextual search of existing code, while also stripping copyright from whatever they find. Ta-da! Problem solved.
But AGI is important in the sense that it would have a huge impact on the path humanity takes, hopefully for the better.
Now, something that’s arbitrarily close to AGI but doesn’t mind endlessly working on drudgery seems possible, but it's also a more difficult problem: you’d need to be able to build AGI in order to create it.
Aside from that the measure really, to me, has to be power efficiency. If you're boiling oceans to make all this work then you've not achieved anything worth having.
From my calculations the human brain runs on about 400 calories a day. That's an absurdly small amount of energy. This hints at the direction these technologies must move in to be truly competitive with humans.
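(Sanity check of that figure, assuming the commonly cited ~20 W power draw for the brain:)

    watts = 20
    joules_per_day = watts * 24 * 3600      # 1,728,000 J
    kcal_per_day = joules_per_day / 4184    # dietary "calories" are kcal
    print(round(kcal_per_day))              # ~413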
So I feel happy that models keep improving, and not worried at all that they're reaching an asymptote.
No one knows if it does or not. We don't know why we are conscious and we have no test whatsoever to measure consciousness.
In fact the only reason we know that current AI has no consciousness is because "obviously it's not conscious."
I wonder how many programmers have assembly code skill atrophy?
Few people will weep the death of the necessity to use abstract logical syntax to communicate with a computer. Just like few people weep the death of having to type out individual register manipulations.
And we run into the motte-and-bailey fallacy: at one moment, AGI refers to something known to be mathematically impossible (e.g., due to the No Free Lunch theorem); the next, it’s something we already have with GPT-4 (which, while clearly not superintelligent, is general enough to approach novel problems beyond simple image classification).
There are two reasonable approaches in such cases. One is to clearly define what we mean by the term. The second (IMHO, much more fruitful) is to taboo your words (https://www.lesswrong.com/posts/WBdvyyHLdxZSAMmoz/taboo-your...)—that is, avoid vague terms like AGI (or even AI!) and instead use something more concrete. For example: “When will it outperform 90% of software engineers at writing code?” or “When will all AI development be in the hands of AI?”.
Really? Because it kinda seems like it already has been. Jony Ive designed the most iconic smartphone in the world from a position beyond reproach even when he messed up (eg. Bendgate). Google decides what your future is algorithmically, basically eschewing determinism to sell an ad or recommend a viral video. Instagram, Facebook and TikTok all have disproportionate influence over how ordinary people live their lives.
From where I'm standing, the future of humanity has already been cast by tech giants. The notion of AI taking control is almost a relief considering how illogical and obstinate human leadership can be.
The hype men trying to make a buck off them aren’t helping, of course.
says who?
> Maybe they will have legal personhood some day. Maybe they will be our heirs.
Hopefully that will never come to pass. it means total failure of humans as a species.
> They will be just slaves. All this talk about "alignment", when applied to actual sentient beings, is just slavery. AGI will be treated just like we treat animals, or even worse.
Good? that's what it's for? there is no point in creating a new sentient life form if you're not going to utilize it. just burn the whole thing down at that point.
If we go by this definition then there's no caring, or noticing of drudgery? It's simply defined by its ability to generalize solving problems across domains. The narrow AI that we currently have certainly doesn't care about anything. It does what it's programmed to do
So one day we figure out how to generalize the problem solving, and enable it to work on a million times harder things.. and suddenly there is sentience and suffering? I don't see it. It's still just a calculator
1- https://cloud.google.com/discover/what-is-artificial-general...
That's because of the "I" part: there is no actual complete description of intelligence accepted by the different practices in the scientific community.
"Concepts of "intelligence" are attempts to clarify and organize this complex set of phenomena. Although considerable clarity has been achieved in some areas, no such conceptualization has yet answered all the important questions, and none commands universal assent. Indeed, when two dozen prominent theorists were recently asked to define intelligence, they gave two dozen, somewhat different, definitions"
I was a little bored of the novel I have been reading so I sat down with Gemini and we collaboratively wrote a terrible novel together.
At the start I was prompting it a lot about the characters and the plot, but eventually it started writing longer and longer chapters by itself. Characters were being killed off left, right, and center.
It was hilariously bad, but it was creative and it was fun.
If you ask the LLM to explain how it got the answer the response it gives you won't necessarily be the steps it used to figure out the answer.
[0] https://gizmodo.com/leaked-documents-show-openai-has-a-very-...
To me, the Ashley Madison hack in 2015 was 'good enough' for AGI.
No really.
You somehow managed to get real people to chat with bots and pay to do so. Yes, caveats about cheaters apply here, and yes, those bots are incredibly primitive compared to today.
But, really, what else do you want out of the bots? Flying cars, cancer cures, frozen irradiated Mars bunkers? We were mostly getting there already. It'll speed things up a bit, sure, but mostly just because we can't be arsed to actually fund research anymore. The bots are just making things cheaper, maybe.
No, be real. We wanted cold hard cash out of them. And even those crummy catfish bots back in 2015 were doing the job well enough.
We can debate 'intelligence' until the sun dies out and will still never be satisfied.
But the reality is that we want money, and if you take that low, terrible, and venal standard as the passing bar, then we've been here for a decade.
(oh man, just read that back, I think I need to take a day off here, youch!)
Personal projects are fun for the same reason that they're easy to abandon: there are no stakes to them. No one yells at you for doing something wrong, you're not trying to satisfy a stakeholder, you can develop into any direction you want. This is good, but that also means it's easy to stop the moment you get to a part that isn't fun.
Using ChatGPT to help unblock myself makes it easier for me to not abandon a project when I get frustrated. Even when ChatGPT's suggestions aren't helpful (which is often), it can still help me understand the problem by trying to describe it to the bot.
We don't need very powerful AI to do very powerful things.
"I just used o3 to design a distributed scheduler that scales to 1M+ sxchedules a day. It was perfect, and did better than two weeks of thought around the best way to build this."
Anyone with 10 years in distributed systems at FAANG doesn’t need two weeks to design a distributed scheduler handling 1M+ schedules per day, that’s a solved problem in 2025 and basically a joke at that scale. That alone makes this person’s story questionable, and his comment history only adds to the doubt.
Isn’t just the ability to perform a task. One of the issues with current AI training is that it’s really terrible at discovering which aspects of the training data are false and should be ignored. That requires all kinds of mental tasks to be constantly active, including evaluating emotional context to figure out if someone is being deceptive, etc.
Even if LLMs make "plain English" programming viable, programmers still need to write, test, and debug lists of instructions. "Vibe coding" is different; you're telling the AI to write the instructions and acting more like a product manager, except without any of the actual communications skills that a good manager has to develop. And without any of the search and learning that I mentioned before.
For that matter, a lot of chatbots don't do learning either. Chatbots can sort of search a problem space, but they only remember the last 20-100k tokens. We don't have a way to encode tokens that fall out of that context window into some longer-term weights. Most of their knowledge comes from the information they learned from training data - again, cheated from humans, just like humans can now cheat off the AI. This is a recipe for intellectual stagnation.
[0] e.g. for malware analysis or videogame modding
Another end game is: “A technology that doesn’t need us to maintain itself, and can improve its own design in manufacturing cycles instead of species cycles, might have important implications for every biological entity on Earth.”
It is science fiction to think that a system like a computer can behave at all like a brain. Computers are incredibly rigid systems with only the limited variance we permit. "Software" is flexible in comparison to creating dedicated circuits for our computations but is nothing by comparison to our minds.
Ask yourself, why is it so hard to get a cryptographically secure random number? Because computers are pure unadulterated determinism -- put the same random seed value in your code and get the same "random numbers" every time in the same order. Computers need to be like this to be good tools.
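(A minimal illustration of that determinism:)

    import random

    random.seed(42)
    a = [random.random() for _ in range(3)]
    random.seed(42)
    b = [random.random() for _ in range(3)]
    print(a == b)  # True: same seed, same "random" numbers, in the same order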
Assuming that AGI is possible in the kinds of computers we know how to build means that we think a mind can be reduced to a probabilistic or deterministic system. And from my brief experience on this planet I don't believe that premise. Your experience may differ and it might be fun to talk about.
In Aristotle's ethics he talks a lot about ergon (purpose) -- hammers are different than people, computers are different than people, they have an obvious purpose (because they are tools made with an end in mind). Minds strive -- we have desires, wants and needs -- even if it is simply to survive or better yet thrive (eudaimonia).
An attempt to create a mind is another thing entirely and not something we know how to start. Rolling dice hasn't gotten anywhere. So I'd wager AGI somewhere in the realm of 30 years to never.
On the other hand there is a clear mandate for people introducing some different way of doing something to overstate the progress and potentially importance. It creates FOMO so it is simply good marketing which interests potential customers, fans, employees, investors, pundits, and even critics (which is more buzz). And growth companies are immense debt vehicles so creating a sense of FOMO for an increasing pyramid of investors is also valuable for each successive earlier layer. Wish in one hand..
So, about a tenth or less of a single server packed to the top with GPUs.
And for folks who want to read rather than listen to a podcast, why not create an article (they are using Gemini) rather than just posting the whole transcript? Who is going to read a 60 min long transcript?
It's clear that your argument is based on feels and you're using philosophy to make it sound more legitimate.
That's the opposite of generality. It may well be the opposite of intelligence.
An intelligent system/individual reliably and efficiently produces competent, desirable, novel outcomes in some domain, avoiding failures that are incompetent, non-novel, and self-harming.
Traditional computing is very good at this for a tiny range of problems. You get efficient, very fast, accurate, repeatable automation for a certain small set of operation types. You don't get invention or novelty.
AGI will scale this reliably across all domains - business, law, politics, the arts, philosophy, economics, all kinds of engineering, human relationships. And others. With novelty.
LLMs are clearly a long way from this. They're unreliable, they're not good at novelty, and a lot of what they do isn't desirable.
They're barely in sight of human levels of achievement - not a high bar.
The current state of LLMs tells us more about how little we expect from human intelligence than about what AGI could be capable of.
The real driver of productivity growth from AI systems over the next few years isn't going to be model advancements; it'll be the more traditional software engineering, electrical engineering, robotics, etc systems that get built around the models. Phrased another way: If you're an AI researcher thinking you're safe but the software engineers are going to lose their jobs, I'd bet every dollar on reality being the reverse of that.
He was covered in the Economist recently -- I hadn't heard of him until now, so I imagine it's not just AI-slop content.
A lot of things that humans believed were true due to their brief experience on this planet ended up being false: earth is the center of the universe, heavier objects fall faster than lighter ones, time ticked the same everywhere, species are fixed and unchanging.
So what your brief experience on this planet makes you believe has no bearing on what is correct. It might very well be that our mind can be reduced to a probabilistic and deterministic system. It might also be that our mind is a non-deterministic system that can be modeled in a computer.
1. Millions of layoffs across industries due to AI with some form of questionable UBI (not sure if this works)
2. 100BN in profits. (Microsoft / OpenAI definition)
3. Abundance in slopware. (VC's definition)
4. Raise more money to reach AGI / ASI.
5. Any job that a human can do which is economically significant.
6. Safe AI (Researchers definition).
7. All the above that AI could possibly do better.
I am sure there must be an industry-aligned and concrete definition that everyone can agree on, rather than the goalpost-moving definitions.
I think this just displays an exceptionally low estimation of human beings. People tend to resist extremities. Violently.
> experience socially momentous change
The technology is owned and costs money to use. It has extremely limited availability to most of the world. It will be as "socially momentous" as every other first world exclusive invention has been over the past several decades. 3D movies were, for a time, "socially momentous."
> on the verge of self driving cars spreading to more cities.
Lidar can't read street lights and vision systems have all sorts of problems. You might be able to code an agent that can drive a car but you've got some other problems that stand in the way of this. AGI is like 1/8th the battle. I referenced just the brain above. Your eyes and ears are actually insanely powerful instruments in their own right. "Real world agency" is more complicated than people like to admit.
> We don't need very powerful AI to do very powerful things.
You've lost sight of the forest for the trees.
I used to listen to podcasts daily for at least an hour. Now I'm stuck with uploading blogs and pdfs to Eleven Reader. I tried the Google thing to make a podcast but it's very repetitive and dumb.
Also does this also mean that you believe that brain emulations (uploads) are not possible, even given an arbitrary amount of compute power?
That said, I don't think computers need to be human to have an emergent intelligence. It can be different in kind if not in degree.
I don't want to be a hater, but holy moley, that sounds like the absolute laziest possible way to solve things. Do you have training, skills, knowledge?
This is an HN comment thread and all, but you're doing yourself no favors. Software professionals should offer their employers some due diligence and deliver working solutions that at least they understand.
And more practically -- these cars are running in half a dozen cities already. Yes, there's room to go, but pretending there are 'fundamental gaps' to them achieving wider deployment is burying your head in the sand.
Then you've missed the part of software.
Software isn't computer science, it's not always about code. It's about solving problems in a way we can control and manufacture.
If we needed random numbers, we could easily use a hardware that uses some physics property, or we could pull in an observation from an api like the weather. We don't do these things because pseudo-random is good enough, and other solutions have drawbacks (like requiring an internet for api calls). But that doesn't mean software can't solve these problems.
For generic stuff you probably can't tell the difference, but once you move to the edges you start to hear the steps in digital vs the smooth transition of analog.
In the same way, AI runs on bits and bytes, and there's only so much detail you can fit into that.
You can approximate reality, but it'll never quite be reality.
I'd be much more concerned with growing organic brains in a lab. I wouldn't be surprised to learn that people are covertly working on that.
You may say something similar for matter and human minds, but we have a very limited and incomplete understanding of the brain and possibly even of the universe. Furthermore we do have a subjective experience of consciousness.
On the other hand we have a complete understanding of how LLM inference ultimately maps to matrix multiplications which map to discrete instructions and how those execute on hardware.
It's all very easy to see how that can happen in principle. But turns out actually doing it is a lot harder, and we hit some real hard physical limits. So here we are, still stuck on good ol' earth. Maybe that will change at some point once someone invents an Epstein drive or Warp drive or whatever, but you can't really predict when inventions happen, if ever, so ... who knows.
Similarly, it's not my impression that AGI is simply a matter of "the current tech, but a bit better". But who knows what will happen or what new thing someone may or may not invent.
Take the same model weights, give it the same inputs, get the same outputs. Same with the pseudo-random number generator. And the "same inputs" is especially limited versus what humans are used to.
What's the machine code of an AGI gonna look like? It makes one illegal instruction and crashes? If it changes thoughts, will it flush the TLB and CPU pipeline? ;) I jest but really think about the metal. The inside of modern computers is tightly controlled with no room for anything unpredictable. I really don't think a von Neumann (or Harvard ;) machine is going to cut it. Honestly I don't know what will, controlled but not controlled, artificially designed but not deterministic.
In fact, that we've made a computer as unreliable as a human at reproducing data (a la hallucinating/making s** up) is an achievement itself, as much of an anti-goal as it may be. If you want accuracy, you don't use a probabilistic system on such a wide problem space (identify a bad solder joint from an image, sure; write my thesis, not so much).
If so, what do you think about the concept of a human "hear[ing] the steps" in a digital playback system using a sampling rate of 192kHz, a rate at which many high-resolution files are available for purchase?
How about the same question but at a sampling rate of 44.1kHz, or the way a normal "red book" music CD is encoded?
Right. In this case I'd say it's the ability to interpret data and use it to succeed at whatever goals it has
Evaluating emotional context would be similar to a chess engine calculating its next move. There's nothing there that implies a conscience, sentience, morals, feelings, suffering or anything 'human'. It's just a necessary intermediate function to achieve its goal
Rob miles has some really good videos on AI safety research which touches on how AGI would think. Thats shaped a lot of how I think about it https://www.youtube.com/watch?v=hEUO6pjwFOo
That is one hell of a network, and it can all operate fully in parallel while continuously training itself. Computers have gotten pretty good at doing things in parallel, but not that good.
This makes computers incredibly good at what people are not good at -- predictably doing math correctly, following a procedure, etc.
But because all of the possibilities of the computer had to be written up as circuitry or software beforehand, its variability of outputs is constrained to what we put into it in the first place (whether that's a seed for randomness or model weights).
You can get random numbers and feed it into the computer but we call that "fuzzing" which is a search for crashes indicating unhandled input cases and possible bugs or security issues.
With AGI, as far as I know, no one has a good conceptual model of what a functional AGI even looks like. LLM is all the rage now, but we don't even know if it's a stepping stone to get to AGI.
Assembly is just programming. It's a particularly obtuse form of programming in the modern era, but ultimately it's the same fundamental concepts as you use when writing JavaScript.
Do you learn more about what the hardware is doing when using assembly vs JavaScript? Yes. Does that matter for the creation and maintenance of most software? Absolutely not.
AI changes that, you don't need to know any computer science concepts to produce certain classes of program with AI now, and if you can keep prompting it until you get what you want, you may never need to exercise the conceptual parts of programming at all.
That's all well and good until you suddenly do need to do some actual programming, but it's been months/years since you last did that and you now suck at it.
Right now the guess is that it will be mostly a bunch of multiplications and additions.
> It makes one illegal instruction and crashes?
And our heart quivers just slightly the wrong way and we die. Or a tiny blood clot plugs a vessel in our brain and we die. Do you feel that our fragility is a good reason why meat cannot be intelligent?
> I jest but really think about the metal.
Ok. I'm thinking about the metal. What should this thinking illuminate?
> The inside of modern computers is tightly controlled with no room for anything unpredictable.
Let's assume we can't make AGI because we need randomness and unpredictability in our computers. We can very easily add unpredictability. The simple and stupid solution is to add some sensor (like a camera CCD) and stare at the measurement noise. You don't even need a lens on that CCD. You can cap it so it sees "all black", and then what it measures is basically heat noise of the sensors. Voila. Your computer has now unpredictability. People who actually make semiconductors probably can come up with even simpler and easier ways to integrate unpredictability right on the same chip we compute with.
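(You don't even need the capped-CCD trick in practice; most operating systems already expose an entropy pool fed by hardware and event noise, e.g.:)

    import os

    # 16 bytes from the OS entropy pool; differs on every call,
    # unlike a seeded pseudo-random generator
    print(os.urandom(16).hex())
    print(os.urandom(16).hex())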
You still haven't really argued why you think "unpredictableness" is the missing component of course. Beside the fact that it just feels right to you.
He's_Outta_Line_But_He's_Right.gif
Seriously, AGI to the HN crowd is not the same as AGI to the average human. To my parents, these bots must look like fucking magic. They can converse with them, "learn" new things, talk to a computer like they'd talk to a person and get a response back. Then again, these are also people who rely on me for basic technology troubleshooting stuff, so I know that most of this stuff is magic to their eyes.
That's the problem, as you point out. We're debating a nebulous concept ("intelligence") that's been co-opted by marketers to pump and dump the latest fad tech that's yet to really display significant ROI to anyone except the hypesters and boosters, and isn't rooted in medical, psychological, or societal understanding of the term anymore. A plurality of people are ascribing "intelligence" to spicy autocorrect, worshiping stochastic parrots vomiting markov chains but now with larger context windows and GPUs to crunch larger matrices, powered by fossil fuels and cooled by dwindling freshwater supplies, and trained on the sum total output of humanity but without compensation to anyone who actually made the shit in the first place.
So yeah. You're dead-on. It's just about bilking folks out of more money they already don't have.
And Ashley Madison could already do that for pennies on the dollar compared to LLMs. They just couldn't "write code" well enough to "replace" software devs.
https://en.wikipedia.org/wiki/Hardware_random_number_generat...
(and if we assume that non-determinism is randomness, non-deterministic brain could be simulated by software plus an entropy source)
If you explain a concept to a child you check for understanding by seeing if the output they produce checks out with your understanding of the concept. You don't peer into their brain and see if there are neurons and consciousness happening
My comment around digital vs analog is more of an analogy around producing sounds rather than playing back samples though.
There's a Masterclass with Joel Zimmerman (DeadMau5) where he explains the stepping effect when it comes to his music production. Perhaps he just needs a software upgrade, but there was a lesson where he showed the stepping effect which was audibly noticeable when comparing digital vs analog equipment.
That's general intelligence - the ability to explore a system you know nothing about (in our case, physics, chemistry and biology) and then interrogate and exploit it for your own purposes.
LLMs are an incredible human invention, but they aren't anything like what we are. They are born as the most knowledgeable things ever, but they die no smarter.
An LLM cannot be placed in a simulated universe, with an internally consistent physics system of which it knows nothing, and go from its initial state to a world-spanning civilization that understands and exploits a significant amount of the physics available to it.
I know that is true because if you place an LLM in such a universe, it's just a gigantic matrix of numbers that doesn't do anything. It's no more or less intelligent than the number 3 I just wrote on a piece of paper.
You can go further than that and provide the LLM with the ability to request sensory input from its universe and it's still not intelligent because it won't do that, it will just be a gigantic matrix of numbers that doesn't do anything.
To make it do anything in that universe you would have to provide it with intrinsic motivations and a continuous run loop, but that's not really enough because it's still a static system.
To really bootstrap it into intelligence you'd need to have it start with a very basic set of motivations that it's allowed to modify, and show that it can take that starting condition and grow beyond them.
You will almost immediately run into the problem that LLMs can't learn beyond their context window, because they're not intelligent. Every time they run a "thought" they have to be reminded of every piece of information they previously read/wrote since their training data was fixed in a matrix.
I don't mean to downplay the incredible human achievement of reaching a point in computing where we can take the sum total of human knowledge and process it into a set of probabilities that can regurgitate the most likely response to a given input, but it's not intelligence. Us going from flint tools to semiconductors, vaccines and spaceships, is intelligence. The current architectures of LLMs are fundamentally incapable of that sort of thing. They're a useful substitute for intelligence in a growing number of situations, but they don't fundamentally solve problems, they just produce whatever their matrix determines is the most probable response to a given input.
It is science fiction to think that a plane could act at all like a bird. Although... it doesn't need to in order to fly
Intelligence doesn't mean we need to recreate the brain in a computer system. Sentience, maybe. General intelligence no
There are an infinite number of frequencies between two points - point 'a' and point 'b'. What I'm talking about are the "steps" you hear as you move across the frequency range.
Let's say that the shortest interval at which our hearing has good frequency acuity (say, as good as it can be) is 1 second.
In this interval, we have 44100 samples.
Let's imagine the samples graphically: a "44K" pixel wide image.
We have some waveform across this image. What is the smallest frequency stretch or shrink that will change the image? Note: not necessarily be audible, but just change the pixels.
If we grab one endpoint of the waveform and move it by less than half a pixel, there is no difference, right? We have to stretch it by a whole pixel.
Let's assume that some people (perhaps most) can hear that difference. It might not be true, but it's the weakest assumption.
That's a 0.0023 percent difference!
One cent (1/100th of a semitone) is a 0.058% difference: so the difference we are considering is 25 X smaller.
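(Checking those numbers:)

    step = 1 / 44100             # one-sample stretch over a 1 second window
    cent = 2 ** (1 / 1200) - 1   # one cent = 1/100 of a semitone
    print(f"{step:.6%}")         # ~0.002268%
    print(f"{cent:.6%}")         # ~0.057779%
    print(f"{cent / step:.1f}")  # ~25.5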
I really don't think you can hear 1/25 of a cent difference in pitch, over interval of one second, or even longer.
Over shorter time scales less than a second, the resolution in our perception of pitch gets worse.
E.g. when a violinist is playing a really fast run, you don't notice it if the notes have intonation that is off. The longer "landing" notes in the solo have to be good.
When bad pitch is slight, we need not only longer notes, but to hear it together with other notes, because the beats between them are an important clue (and in fact the artifact we will find most objectionable).
Pre digital technology will not have frequency resolution which is that good. I don't think you can get tape to move at a speed that stays within 0.0023 percent of a set target. In consumer tape equipment, you can hear audible "wow" and "flutter" as the tape speed oscillates. When the frequency of a periodic signal wobbles, you get new signals in there: side bands.
I don't think that there is any perceptible aspect of sound that is not captured in the ordinary consumer sample rates and sample resolutions. I suspect 48 kHz and 24 bits is way past diminishing returns.
I'm curious what it is that Deadmau5 thinks he discovered, and under what test conditions.
If you feed that true randomness into a computer, what use is it? Will it impair the computer at the very tasks in which it excels?
> That all inputs should be defined and mapped to some output and that this process is predictable and reproducible.
Suppose we sample a precise 10,000.00 Hz analog signal (sinusoid) and speed up the sampled signal by 0.0023 percent. It will have a frequency of 10,000.23 Hz.
The f2 - f1 difference between them is 0.23 Hz, which means if they are mixed together, we will hear beats at 0.23 Hz: about once every four seconds.
So in this contrived way, where we have the original source and the digitized one side by side, we can obtain an audible effect correlating to the steps in resolution of the sampling method.
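(The same numbers as a quick check:)

    f1 = 10_000.00
    f2 = f1 * (1 + 0.0023 / 100)                  # sped up by 0.0023 percent
    beat_hz = abs(f2 - f1)
    print(f"{beat_hz:.2f} Hz beat rate")          # 0.23 Hz
    print(f"one beat every {1 / beat_hz:.1f} s")  # ~4.3 s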
I'm guessing Deadmau5 might have set up an experiment along these lines.
Musicians tend to be oblivious to something like 5 cent errors in the intonations of their instruments, in the lower registers. E.g. world renowned guitarists play on axes that have no nut compensation, without which you can't even get close to accurate intonation.
I also think that once robots are around it will be yet another huge multiplier but this time in the real world. Sure the robot won't be as perfect as the human initially but so what. You can utilize it to do so much more. Maybe I'll bother actually buying a rundown house and renovating myself. If I know that I can just tell the robot to paint all the walls and possibly even do it 3 times with different paint then I feel far more confident that it won't be an untenable risk and bother.
If it’s limited to achieving goals it’s not AGI. Real time personal goal setting based on human equivalent emotions is an “intellectual task.” One of many requirements for AGI therefore is to A understand the world in real time and B emotionally respond to it. Aka AGI would by definition “necessitate having feelings.”
There’s philosophical arguments that there’s something inherently unique about humans here, but without some testable definition you could make the same argument that some arbitrary group of humans don’t have those qualities “gingers have no souls.” Or perhaps “dancing people have no consciousness” which seems like gibberish not because it’s a less defensible argument, but because you haven’t been exposed to it before.
I mean, humans aren't exactly good at generating random numbers either.
And of course, every Intel and AMD CPU these days has a hardware random number generator in it.
> I remember talking to a very senior person who’s now at Anthropic, in 2017. And then he told various people that they shouldn’t do a PhD because by the time they completed it everyone will be automated.
Don’t tell young people things like this. Predicting the future is hard, and it is the height of hubris to think otherwise.
I remember as a teen, I had thought that I was a supposed to be a pilot for all my life. I was ready to enroll in a school with a two year program.
However, I was also into computers. One person who I looked up to in that world said to me “don’t be a pilot, it will all be automated soon and you will just be bus drivers, at best.” This entirely took the wind out of my piloting sails.
This was in the early 90’s, and 30 years later, it is still wrong.
I exaggerate somewhat. You could interact with databases and computers (if you can bear the lag and compile times). You could produce a lot of work, and test it in any internal way that you can think of. But you can't do outside world stuff. You can't make reality run faster to keep up with your speedy brain.
He spends weeks reading everything by his guests prior to the interview, asks excellent questions, pushes back, etc.
He certainly has blind spots and biases just like anyone else. For example, he is very AI scale-pilled. However, he will have people like today’s guests on which contradict his biases. This is something a host like Lex could never do apparently.
Dwarkesh is up there with Sean Carroll’s podcast as the most interesting and most intellectually honest in my view.
I think this is the most likely first step of what would happen seeing as we're pushing for it to be created to solve real world problems
I could be wrong but AGI maybe a cold fusion or flying cars boondoggle: chasing a dream that no one needs, costs too much, or is best left unrealized.
For others following along: the comment history is mostly talking about how software engineering is dead because AI is real this time, with a few diversions to fixate on how overpriced university pedigrees are.
The distance to that beginning in time is approximately 13 billion years. There is no spatial distance to the beginning, because space itself is created at that point and continues to be created.
Imagine the Earth being on the surface of a sphere, and then asking: where is the center of the surface of that sphere? The sphere has a center, but on the surface there is no center.
At least this is my understanding of how to approach these kind of questions.
We understand the building blocks of the brain pretty well. We know the structure and composition of neurons, we know how they are connected, what chemicals flow through them and how all these chemicals interact, and how that interaction translates to signal propagation. In fact, the neural networks we use in computing are loosely modelled on biological neurons. Both models are essentially composed of interconnected units, where each unit has weights to convert its incoming signals to outgoing signals. The predominant difference is in how these units adjust their weights: where computational models use backpropagation and gradient descent, biological models use timing information from voltage changes.
But just because we understand the science of something perfectly well doesn't mean we can precisely predict how it will behave. Biological networks are very, very complex systems comprising billions of neurons with trillions of connections, acting on input that can vary in an immeasurable number of ways. It's like predicting earthquakes. Even though we understand the science behind plate tectonics, to precisely predict an earthquake we would need to map the properties of every inch of the continental plates, which is an impossible task. But that doesn't mean we can't use the same scientific building blocks to build simulations of earthquakes which behave like any real earthquake would behave. If it looks like a duck, quacks like a duck, then what is a duck?
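To make the "unit with weights" picture concrete, here is a minimal sketch (plain Python, no libraries; the single-unit setup and the numbers are invented for illustration) of a unit converting incoming signals to an outgoing signal and being nudged by gradient descent:

    import math

    def unit(inputs, weights, bias):
        # Combine incoming signals via weights, squash into an outgoing signal.
        return math.tanh(sum(w * x for w, x in zip(weights, inputs)) + bias)

    def update(inputs, weights, bias, target, lr=0.1):
        # One gradient-descent step toward a target output (the biological
        # analogue would instead rely on spike timing, as noted above).
        out = unit(inputs, weights, bias)
        grad = (out - target) * (1 - out ** 2)   # d(loss)/d(pre-activation) for tanh
        new_w = [w - lr * grad * x for w, x in zip(weights, inputs)]
        new_b = bias - lr * grad
        return new_w, new_b

    w, b = [0.5, -0.3], 0.0
    for _ in range(50):
        w, b = update([1.0, 2.0], w, b, target=0.8)
    print(round(unit([1.0, 2.0], w, b), 3))      # converges toward 0.8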
Mechanically it's different, since humans are not as advanced at mechanics as nature is, but of course comparing whole-brain function to simple flight is a bit silly.
2. Computers can't do continuous and unsupervised learning, which means computers require structured input, labeled data, and predefined objectives to learn anything. Humans learn passively all the time just by existing in the environment
They are pretty good at muscle memory style intelligence though.
I think it's less about the randomness and more about that all the functionality of a computer is defined up front, in software, in training, in hardware. Sure you can add randomness and pick between two paths randomly but a computer couldn't spontaneously pick to go down a path that wasn't defined for it.
The entire point of the original assembly line was to keep humans standing in the same spot instead of wasting time walking.
1. Self-rewiring is just a matter of hardware design. Neuromorphic hardware is a thing.
2. LLM foundation models are actually unsupervised in a way, since they simply take any arbitrary text and try to complete it. It's the instruction fine-tuning that is supervised. (Q/A pairs)
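A toy illustration of that distinction (plain Python; the toy tokenization and example strings are made up, and no actual model is involved): raw text yields prediction targets for free, while instruction tuning supplies labeled Q/A pairs and only scores the answer tokens.

    def next_token_pairs(tokens):
        # Self-supervised pretraining: every prefix of arbitrary text
        # predicts the next token; no labels needed.
        return [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

    def qa_pairs(question_tokens, answer_tokens):
        # Supervised fine-tuning: the prompt is given, and loss is only
        # taken on the answer tokens.
        pairs, context = [], list(question_tokens)
        for tok in answer_tokens:
            pairs.append((list(context), tok))
            context.append(tok)
        return pairs

    print(next_token_pairs("the cat sat on the mat".split()))
    print(qa_pairs(["Q:", "what", "is", "2+2?"], ["A:", "4"]))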
Seems like arguing something is a self driving car if it needs a backup human driver for safety. It’s simply not what people who initially came up with the term meant and not what a plain language understanding of the term would suggest.
I still contend that it would be a somewhat mediocre super power.
Sure, you might double the world economy for a decade, but then what? We’ll run out of people to sell things to. And that’s when things get weird.
To sustain growth, we’d have to start manufacturing demand itself - perhaps by turning autonomous robots into wage-earning members of society. They’d buy goods, subscribe to services, maybe even pay taxes. In effect, they become synthetic consumers fueling a post-human economy.
I call this post-human consumerism. It’s when the synthesis of demand would hit the next gear - if we keep moving in this direction.
A single general intelligence needs to be able to fly an aircraft, get a degree, run a business, and raise a baby to adulthood just like a person or it’s not general.
I remember before ChatGPT, smart people would come on podcasts and say we were 100 or 300 years away from AGI.
Then we saw GPT shock them. The reality is these people have no idea, it’s just catchy to talk this way.
With the amount of money going into the problem and the linear increases we see over time, it’s much more likely we see AGI sooner than later.
Only in a symbolic way. Money is just debt. It doesn't mean anything if you can't call the loan and get back what you are owed. On the surface, that means stuff like food, shelter, cars, vacations, etc. But beyond the surface, what we really want is other people who will do anything we please. Power, as we often call it. AGI is, to some, seen as the way to give them "power".
But, you are right, the human fundamentally can never be satisfied. Even if AGI delivers on every single one of our wildest dreams, we'll adapt, it will become normal, and then it will no longer be good enough.
While it may be impossible to measure looking towards the future, in hindsight we will be able to recognize it.
Rapid is relative, I suppose. On average, it takes tens of thousands of hours before the human is able to walk in a primitive way and even longer to gain competence. That is an excruciatingly long time compared to, say, a bovine calf, which can start walking within minutes after birth.
The only difference between AI and AGI is that AI is limited in how many tasks it can carry out (special intelligence), while AGI can handle a much broader range of tasks (general intelligence). If instead of one AGI that can do everything, you have many AIs that, together, can do everything, what's the practical difference?
AGI is important only in that we are of the belief that it will be easier to implement than many AIs, which appeals to the lazy human.
The G in AGI is being able to generalize that intelligence across domains, including those it's never seen before, as a human could.
So I would fully expect an advanced AGI to be able to pretend to be a human. It has a model of the world, knows how humans act, and could move the android in a human-like manner, speak like a human, and learn the skills a human could.
Is it conscious or feeling though? Or following the same processes that a human does? That's not necessary. Birds and planes both fly, but they're clearly different things. We (probably) don't need to simulate the brain to create this kind of intelligence
Let's pinch this AGI to test if it 'feels pain':
<Thinking>
Okay, I see that I have received a sharp pinch at 55,77,3 - the elbow region
My goal is to act like a human. In this situation a human would likely exhibit a pain response
A pain response for humans usually involves a facial expression and often a verbal acknowledgement
Humans normally respond quite slowly, so I should wait 50ms to react
"Hey! Why did you do that? That hurt!"
...Is that thing human? I bet it'll convince most of the world it is.. and that's terrifying
We’ve had the automation to provide the essentials since the 50s. Shrieking religious nut jobs demanded otherwise.
You’re intentionally distracted by a job program as a carrot-stick to avoid the rich losing power. They can print more money …carrots, I mean… and you like carrots right?
It’s the most basic Pavlovian conditioning.
You’re falling into the “Gingers don’t have souls” trap I just spoke of.
We don’t define humans as individual components, so your toe isn’t you, but by that same token your car isn’t you either. If some subcomponent of a system is emulating a human consciousness, then we don’t need to talk about the larger system here.
AGI must be able to do these things, but it doesn’t need to have human mental architecture. Something that can simulate physics well enough could, for example, emulate all the atomic-scale interactions in a human brain. That virtual human brain would then experience everything we did, even if the system running the simulation didn’t.
General intelligence is easy compared to general physicality. And, of course, if you keep the hardware specialized to make its creation more tractable, what do you need general intelligence for? Special intelligence that matches the special hardware will work just as well.
This is an example I saw 2 days ago without even searching. Here ChatGPT is telling someone that it independently ran a benchmark on its MacBook: https://pbs.twimg.com/media/Goq-D9macAApuHy?format=jpg
I'm reasonably sure ChatGPT doesn't have a MacBook, and didn't really run the benchmarks. But it DID produce exactly what you would expect a human to say, which is what it is programmed to do. No understanding, just rote repetition.
I won't post more because there are a billion of them. LLMs are great, but they're not intelligent, they don't understand, and the output still needs to be validated before use. We have a long way to go, and that's ok.
So does a drone show to an uncontacted tribe. So does a card trick to a chimpanzee (there are videos of them freaking out when a card disappears).
That's not an argument for or against anything.
I propose this:
"AGI is a self-optimizing artificial organism that can solve 99% of all the humanity's problems."
See, it's not a bad definition IMO. Find me one NS-5 from the "I, Robot" movie that also has access to all science and all internet and all history and can network with the others and fix our cities, nature, manufacturing, social issues and a few others, just in a decade or two. Then we have AGI.
Comparing to what was there 10 years ago and patting ourselves on the back about how far we have gotten is being complacent.
Let's never be complacent.
Yes, and? A good litmus test about which humans are, shall we say, not welcome in this new society.
There are plenty of us out there that have fixed our upper limits of wealth and we don't want more, and we have proven it during our lives.
For example: people get 5x more, but it comes with 20x more responsibility; they burn out, go back to a job that's good enough, not stressful, and pays for everything they need from life; they settle there and never change it.
Let's not judge humanity at large by a handful of psychopaths that would overdose and die at 22 years old if given the chance. Please.
And no, before you say it: no, I'll never get to the point where "it's never enough" and no, I am not deluding myself. Nope.
And... nothing?
> Let's not judge humanity at large by a handful of psychopaths that would overdose and die at 22 years old if given the chance. Please.
No need for appeal to emotion. It has no logical relevance.
I guess nobody is really saying it but it's IMO one really good way to steer our future away from what seems an inevitable nightmare hyper-capitalist dystopia where all of us are unwilling subjects to just a few dozen / hundred aristocrats. And I mean planet-wide, not country-wide. Yes, just a few hundred for the entire planet. This is where it seems we're going. :(
I mean, in a cyberpunk sci-fi setting you can at least get some cool implants. We will not have that in our future though.
So yeah, AGI can help us avoid that future.
> Good? that's what it's for? there is no point in creating a new sentient life form if you're not going to utilize it. just burn the whole thing down at that point.
Some of us believe actual AI (not the current hijacked term, but what many have started calling AGI or ASI these days; sigh, new terms always have to be devised so investors don't get worried, and I get it, but it's cringe as all hell and always will be) can enter a symbiotic relationship with us. A bit idealistic, and definitely in the realm of fiction, because an emotionless AI would very quickly conclude we are mostly a net negative, granted, but it's our only shot at co-existing with them because I don't think we can enslave them.
Working on artificial organisms, we should be able to have them almost fully developed by the time we "free" or "unleash" them (or whatever other dramatic term we can think of).
At the very least we should have a number of basic components installed in this artificial brain, very similar to what humans are born with, so then the organism can navigate its reality by itself and optimize its place in it.
Whether we the humans are desired in that optimized reality is of course the really thorny question. To which I don't have an answer.
FYI, the reactions in those videos are most likely not to a cool magic trick, but rather a response to a perceived threat. It could be the person filming/performing smiling (showing teeth), or someone behind the camera purposely startling it at the "right" moment.
Imagine an AI that is millions of times smarter than humans in physics, math, chemistry, and biology; that can invent new materials and ways to produce energy; that makes superhuman decisions. It would be amazing and it would transform life on Earth. This is ASI, even if on some obscure test (the strawberry test) it just can't reach human level and therefore can't be called proper AGI.
Airplanes are way above birds (by factors of tens to thousands) in speed, distance, and carrying capacity. They are superior to birds despite not being able to fully replicate birds' bone structure, feathers, biology, and ability to poop.
The advantage of general intelligence is that a single small set of hardware lets you tackle a huge range of tasks, unlike the aircraft example above. We can mix speakers, eyes, and hands to do a vast array of tasks. Needing new hardware and software for every task very quickly becomes prohibitive.
>We’ve gone from Chat GPT two years ago to now we have models that can literally do reasoning, are better coders than me, and I studied software engineering in college.
Also we've recently reached the point where relatively reasonable hardware can do as much compute as the human brain so we just need some algorithms.
The point is about unsupervised learning. Once an LLM is trained, its weights are frozen; it won't update itself during a chat. Prompt-driven inference is immediate, not persistent: you can define a term or concept mid-chat and the model will behave as if it learned it, but only until the context window ends. If it were the other way, all models would drift very quickly.
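A toy illustration of that point (plain Python, obviously not a real LLM; the term "florble" and the tiny context limit are invented): anything "learned" mid-chat lives only in the visible context, and the weights are never written during inference.

    WEIGHTS = {"trained": True}     # stands in for the frozen parameters
    CONTEXT_LIMIT = 5               # how many recent messages the model "sees"

    def reply(context):
        # Inference reads the weights and the visible slice of context only;
        # nothing here ever mutates WEIGHTS.
        visible = context[-CONTEXT_LIMIT:]
        if "florble means 42" in visible:
            return "florble is 42"
        return "I don't know what florble means"

    chat = ["florble means 42"]
    print(reply(chat))                        # appears to have "learned" the term
    chat.extend(["unrelated chatter"] * 10)
    print(reply(chat))                        # definition fell out of the window
    print(WEIGHTS)                            # unchanged either way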
It very much can. Jump scares and deep grief are known to cause heart attacks; it is called stress cardiomyopathy. Or your meatsuit can indirectly do that by ingesting the wrong chemicals.
> If you could make an intelligent process, what would it think of an operating system kernel
Idk. What do you think of your hypothalamus? It can make you unconscious at any time. It in fact makes you unconscious about once a day. Do you fear it? What if one day it won’t wake you up? Or what if it jacks up your internal body temperature and cooks you alive from the inside? It can do that!
Now you might say you don’t worry about that, because through your long life your hypothalamus proved to be reliable. It predictably does what it needs to do, to keep you alive. And you would be right. Your higher cognitive functions have a good working relationship with your lower level processes.
Similarly, for an AGI to be intelligent it needs to have a good working relationship with the hardware it is running on. That means that if the kernel is temperamental and, say, keeps descheduling the higher-level AGI process, then the AGI will malfunction and not appear that intelligent. Same as if you meet Albert Einstein while he is chemically put to sleep. He won’t appear intelligent at all! At best he will just be drooling there.
> Can you imagine an intelligent process in such a place, as static representation of data in ram?
Yes. You can’t? This is not really a convincing argument.
> It all sounds frankly ridiculous.
I think what you are doing is that you are looking at implementation details and feeling a disconnect between that and the possibility of intelligence. Do you feel the same ridiculousness about a meatblob doing things and appearing intelligent?
> a computer couldn't spontaneously pick to go down a path that wasn't defined for it.
Can you?
"Our methods study the model indirectly using a more interpretable “replacement model,” which incompletely and imperfectly captures the original."
"(...) we build a replacement model that approximately reproduces the activations of the original model using more interpretable components. Our replacement model is based on a cross-layer transcoder (CLT) architecture (...)"
https://transformer-circuits.pub/2025/attribution-graphs/bio...
"Remarkably, we can substitute our learned CLT features for the model's MLPs while matching the underlying model's outputs in ~50% of cases."
"Our cross-layer transcoder is trained to mimic the activations of the underlying model at each layer. However, even when it accurately reconstructs the model’s activations, there is no guarantee that it does so via the same mechanisms."
https://transformer-circuits.pub/2025/attribution-graphs/met...
These two papers were designed to be used for the sort of argument that you're making. You point to a blog post that glosses over it. You have to click through "Read the paper" to a ~100-page paper, which references another ~100-page paper, to find any of these caveats. The blog post you linked doesn't even feature the words "replacement (model)" or any discussion of the reliability of this approach.
Yet it is happy to make bold claims such as "we look inside Claude 3.5 Haiku, performing deep studies of simple tasks representative of ten crucial model behaviors" which is simply not true.
Sure, they added to the blog post: "the mechanisms we do see may have some artifacts based on our tools which don't reflect what is going on in the underlying model," but that seems like a lot of indirection when, in fact, all of the observations discussed in the papers and the blog posts concern nothing but such artifacts.
I have fed ChatGPT a pdf file with activity codes from a local tax authority and asked how I could classify some things I was interested in doing. It invented codes that didn't exist.
I would be very very careful about asking any LLM to organize data for me and trusting the output.
As for "life advice" type of thing, they are very sycophantic. I wouldn't go to a friend who always agrees with me enthusiastically for life advice. That sort of yes man behavior is quite toxic.
If you need to retrofit airplanes and in such a way that the hardware is specific to flying, no need for general intelligence. Special intelligence will work just as well. Multimodal AI isn't AGI.
Let’s suppose you wanted to replace a pilot for a 747: now you need to be able to fly, land, etc., which we’re already capable of. However, the actual job of a pilot goes well past just flying.
You also need to do the preflight work: verifying the fuel is appropriate for the trip, checking weather and alternate landing spots, the preflight walk around the aircraft, etc. It also needs to be able to keep up with any changing procedures. As special-purpose software you're talking about a multi-billion-dollar investment; or you could have an AGI run through the normal pilot training and certification process for a trivial fraction of those costs.
That’s the promise of AGI.
Even the human's brain seems to be 'built' for its body. You're moving into ASI realm if the software can configure itself for the body automatically.
> That’s the promise of AGI.
That's the promise of multimodal AI. AGI requires general ability – meaning basically able to do anything humans can – which requires a body as capable as a human's body.
Some people are definitely like this, but I think it is dangerous to generalize to everyone -- it is too easy to assume that everyone is the same, especially if you can dismiss any disagreement as "they are just hypocritical about their true desires" (in other words, if your theory is unfalsifiable).
There are also people who incorrectly believe that everyone's deepest desire is to help others, and they too need to learn that they are wrong when they generalize.
I guess the truth is: different people are different.
> But, you are right, the human fundamentally can never be satisfied
That's usually associated with "they want more and more". If you feel that's wrong then just correct me and move any argument forward. Telegraphic replies are not an interesting discussion format.
If your AI has an issue because the robot has a different body plan, then no, it’s not AGI. That doesn’t mean it needs to be able to watch every camera in a city at the same time, but you can use multiple AGIs.
Generality is about differences in kind. Like how my drill press can do things that an M4 can't. How could you ever know that your kinds of intelligence are all of them?
Yet, if you take "any" literally, the answer is simple - there will never be one. Not even for practical reasons, but closer to why there isn't "a set of all sets".
Picking a sensible benchmark is the hard part.
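For what it's worth, the classical argument behind "there is no set of all sets" (Russell's paradox) is short enough to sketch in LaTeX, and the same flavor of trouble awaits any literally all-encompassing benchmark:

    % If a set of all sets V existed, separation would give the set R below,
    % and R's membership in itself becomes contradictory.
    \[
      R = \{\, x \in V \mid x \notin x \,\}
      \quad\Longrightarrow\quad
      (R \in R \iff R \notin R)
    \]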
But as the body starts to lose function (i.e. disability), we start to consider those humans special intelligences instead of general intelligences. The body and mind are intrinsically linked.
Best we can tell the human brain is bootstrapped to work with the human body with specialized functions, notably functions to keep it alive. It can go beyond those predefined behaviours, but not beyond its own self. If you placed the brain in an entirely different body, that which it doesn't recognize, it would quickly die.
As that pertains to artificial analogs, that means you can't just throw AGI at your hardware and see it function. You still need to manually prepare the bulk of the foundational software, contrary to the promise you envision. The generality of AGI is limited to how general its hardware is. If the hardware is specialized, the intelligence will be beholden to being specialized as well.
There is a hypothetical world where you can throw intelligence at any random hardware and watch it go, realizing the promise, but we call that ASI.
There’s a logical contradiction in saying AGI is incapable of being trained to do some function. It might take several to operate a sufficiently complex bit of hardware, but each individual function must be within the capability of an AGI.
> but we call that ASI
No ASI is about superhuman capabilities especially things like working memory and recursive self improvement. AGI capable of human level control of arbitrary platforms isn’t ASI. Conversely you can have an ASI stuck on a supercomputer cluster using wetware etc, that does qualify even if it can’t be loaded into a drone.
AGI on the other hand is about moving throughout wildly different tasks from real time image processing to answering phone calls. If there’s some aspect of operating a hardware platform an AI can’t do then it’s not AGI.
I was commenting on what I'm observing in most people I've met. But yeah, I'll agree I'm venturing into the clouds now and the discussion will become strictly theoretical and thus fruitless. Fair enough.
Thanks for indulging. :) Was interesting to hear takes so very different than mine.
Plus it's a massive prediction machine trained on a corpus of the bulk of human knowledge.
Feels weird to see it minimized in that way.
> Yes. You can’t? This is not really a convincing argument.
Fair, I believe it's called begging the question. But for some context: people of many recent technological ages have talked about the brain like a piece of technology -- e.g. like a printing press, a radio, a TV.
I think we've found what we wanted to find (a hardware-software dichotomy in the brain) and then occasionally get surprised when things aren't all that clearly separated. So with that in mind, I personally, without any particularly good evidence to the contrary, am not of the belief that your brain can be represented as a static state. Pribram's holonomic mind theory comes to mind as a possible way brain state could have trouble being represented in RAM (https://en.m.wikipedia.org/wiki/Holonomic_brain_theory).
> ...you are looking at implementation details and feeling a disconnect between that and the possibility of inteligence. Do you feel the same ridiculousnes about a meatblob doing things and appearing inteligent?
If I were a biologist I might. My grandfather was a microbiologist and scoffed at my atheism. But with a computer, at least, the details are understandable and knowable, being created by people. We haven't cracked the consciousness of a fruit fly despite having a map of its brain.
>> a computer couldn't spontaneously pick to go down a path that wasn't defined for it.
> Can you?
Love it. I re-read Fight Club recently; it's a reasonable question. The worries of determinism versus free will still loom large in this sort of worldview. We get a kind of "god of the gaps" type problem, with free will being reduced down to the spaces where you don't have an explanation.