The amount of computing power we are putting in only changes that luck by a tiny fraction.
AGI is being able to simulate reality with high enough accuracy, faster than reality (which includes being able to simulate human brains), which so far doesn't seem to be possible (due to computational irreducibility).
Why is that? We can build machines that are much better than humans at some things (calculations, data crunching). How can you be certain that this is impossible in other disciplines?
People are joking online that some colleagues use ChatGPT to answer questions that other teammates asked with ChatGPT; nobody knows what's going on anymore.
Measuring intelligence is hard and requires a really good definition of intelligence. LLMs have in some ways made that definition easier, because now we can ask a concrete question about computers that are very good at some things: "Why are LLMs not intelligent?" Given their capabilities and deficiencies, answering what current "AI" technology lacks will make us better able to define intelligence. This assumes that LLMs are the state-of-the-art Million Monkeys, and that intelligence lies on a different path than further optimizing them.
Maybe our first AGI is just a Petri dish brain with a half-decent Python API. Maybe it’s more sand-based, though.
Maybe something like the Game of Life is more in the right direction: you set up a system with just the right set of rules, with input and output, then turn it on and let it go, and the AI is an emergent property of the system over time.
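As a toy illustration of that kind of setup (a minimal sketch in Python, nothing more), here is one step of Conway's Game of Life; everything interesting about the system is emergent from these few rules:

    from collections import Counter

    # One step of Conway's Game of Life. The full rule set fits in a few
    # lines, yet gliders, oscillators, and even universal computation
    # emerge from repeatedly applying it.
    def step(live):
        """live is a set of (x, y) cells; returns the next generation."""
        # Count how many live neighbors each cell (live or dead) has.
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # Birth with exactly 3 neighbors, survival with 2 or 3.
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    # A "glider", a pattern that travels across the grid forever:
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)
    print(sorted(glider))  # the same shape, shifted one cell diagonally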
“What we don’t yet understand” is just a horizon.
I call this the 'Cardinality Barrier'
Infinite and “finite but very very big” seem like a meaningful distinction here.
I once wondered if digital intelligences might be possible but would require an entire planet’s precious metals to build and whole stars to power. That is: the “finite but very very big” case.
But your idea is constrained to the case where we want a digital computer, is it not? Humans can make intelligent life by accident. Surely we could hypothetically construct our own biological computer (or borrow one…) and make it better suited to a digital interface?
Everything in our universe is countable, which naturally includes biology. A bunch of physical laws are predicated on the universe being a countable substrate.
If we had a very inefficient, power-hungry machine that was 1:1 as intelligent as a human being, but could scale it (even very inefficiently) to 100:1, it might still be worth it.
As far as possible reasons that a computer can’t achieve AGI go, this seems like the best one (assuming computer means digital computer of course).
But in a philosophical sense, a computer obeys the same laws of physics that a brain does, and the transistors are analog devices that are being used to create a digital architecture. So whatever makes your brain have uncountable states would also make a real digital computer have uncountable states. Of course we can claim that only the digital layer on top matters, but why?
But since we don’t have a working theory of quantum gravity at such energies, the final verdict remains open.
Sort of. The main issue is the energy requirements. We could theoretically reproduce a human brain in software today; it's just that it would be a really big energy hog, run very slowly, and probably go insane quickly, like any person trapped in a sensory-deprivation tank.
The real key development for AI and AGI is down at the metal level of computers: the memristor.
https://en.m.wikipedia.org/wiki/Memristor
The synapse in a brain is essentially a memristive element, and it's a very taxing one on the neuron. The defining equation is memristance M = dφ/dq, change in flux over change in charge. Yes, a flux capacitor, sorta. It's the missing piece in fundamental electronics.
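To spell out where the "memory" comes from, using the textbook definitions (flux linkage φ = ∫v dt, charge q = ∫i dt):

    M(q) = dφ/dq   =>   v(t) = M(q(t)) · i(t)

So it behaves like a resistor whose resistance depends on the entire history of current that has flowed through it, which is exactly what a synaptic weight needs.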
Making simple two-element memristors is somewhat possible these days, though I've not really been in the space recently. Please, if anyone knows where to buy them (a real one, not a claimed-to-be one), let me know. I'm willing to pay good money.
In terms of AI, a memristor would require a total redesign of how we architect computers (goodbye buses and physically separate memory, for one). But you'd get a huge energy and time savings benefit. As in, you could run an LLM on a watch battery or a small solar cell, and let the environment train them to a degree.
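To make the "no more shuttling weights over a bus" point concrete, here is a toy model in Python (NumPy stands in for the physics; no real device API is implied). In a memristive crossbar, the weight matrix is the grid of conductances, and a matrix-vector multiply falls out of Ohm's and Kirchhoff's laws in one analog step:

    import numpy as np

    # Toy model of a memristive crossbar. Weights are stored as
    # conductances G (in siemens) at the crosspoints, so they live in the
    # array itself; there is no separate memory to fetch them from.
    def crossbar_mvm(G, v_in):
        # Ohm's law per crosspoint (I = G * V) plus Kirchhoff's current
        # law per column (currents sum on each output wire) yields
        # I_out = G.T @ v_in in a single analog "operation".
        return G.T @ v_in

    rng = np.random.default_rng(0)
    G = rng.uniform(1e-6, 1e-3, size=(4, 3))  # 4 input rows x 3 output columns
    v = rng.uniform(0.0, 0.5, size=4)         # input voltages on the rows
    print(crossbar_mvm(G, v))                 # output currents = weighted sums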
Hopefully AI will accelerate their discovery and facilitate their introduction into cheap chip processing and construction.
https://www.oddee.com/australian-company-launches-worlds-fir...
The entire idea feels rather immoral to me, but it does exist.
As far as physicists believe at the moment, there's no way to ever observe a difference below the Planck level. Energy/distance/time/whatever. They all have a lower boundary of measurability. That's not a practical issue; it's a theoretical one. According to the best models we currently have, there's literally no way to ever observe a difference below those levels.
If a difference smaller than that is relevant to brain function, then brains have a way to observe the difference. So I'm sure the field of physics eagerly awaits your explanation. They would love to see an experiment thoroughly disagree with a current model. That's the sort of thing scientists live for.
Many others and I are skeptical that LLMs are even AI.
LLMs / "AI" may very well be a transformative technology that changes the world forever. But that is a different matter.
But biological brains have a significantly greater state space than conventional silicon computers because they're analog. The voltage across a transistor varies approximately continuously, but we only read a single bit from it (or occasionally two, for MLC NAND flash).
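A rough back-of-the-envelope in Python (the 8 bits of usable analog resolution is an assumed figure, purely for illustration): reading 256 levels per element instead of 2 doesn't just multiply the number of states, it multiplies the exponent.

    # Assumed: 8 bits of usable analog resolution per element vs. 1 bit.
    N = 1000                   # number of elements
    digital = 2 ** N           # 1 bit each
    analog = 256 ** N          # 8 bits each, i.e. 2 ** (8 * N)
    print(analog == 2 ** (8 * N))  # True: the exponent grows 8x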
Agreed; however, defining ¬AGI seems much more straightforward to me. The current crop of LLMs, impressive though they may be, are just not human-level intelligent. You recognize this as soon as you spend a significant amount of time using one.
It may also be that they are converging on a type of intelligence that is fundamentally not the same as human intelligence. I’m open to that.
This reminds me of The Thought Emporium's project of teaching rat brain cells to play Doom.