
334 points mooreds | 54 comments
1. izzydata ◴[] No.44484180[source]
Not only do I not think it is right around the corner, I'm not even convinced it is possible, or at the very least I don't think it is possible using conventional computer hardware. I don't think being able to regurgitate information in an understandable form is an adequate or useful measure of intelligence. If we ever crack artificial intelligence, it's quite possible that its first form will be of very low intelligence by human standards, but truly capable of learning on its own without extra help.
replies(10): >>44484210 #>>44484226 #>>44484229 #>>44484355 #>>44484381 #>>44484384 #>>44484386 #>>44484439 #>>44484454 #>>44484478 #
2. kachapopopow ◴[] No.44484210[source]
There's an easy tell for whether something is just hype: we will never be able to make something smarter than a human brain on purpose. It effectively has to happen either naturally or by pure coincidence.

The amount of computing power we are putting in only changes that luck by a tiny fraction.

replies(1): >>44484257 #
3. navels ◴[] No.44484226[source]
why not?
replies(1): >>44484416 #
4. ActorNightly ◴[] No.44484229[source]
Exactly. I've said this from the start.

AGI is being able to simulate reality with high enough accuracy, faster than reality (which includes being able to simulate human brains), and so far that doesn't seem to be possible (due to computational irreducibility).

5. echoangle ◴[] No.44484257[source]
> we will never be able to make something smarter than a human brain on purpose. It effectively has to happen either naturally or by pure coincidence.

Why is that? We can build machines that are much better than humans in some things (calculations, data crunching). How can you be certain that this is impossible in other disciplines?

replies(1): >>44484342 #
6. kachapopopow ◴[] No.44484342{3}[source]
That's just a tiny fraction of what a human brain can do. Sure, we can get something better in very narrow subjects, but something like being able to recognize patterns and apply them to solve new problems is way beyond anything we can even think of right now.
replies(1): >>44484373 #
7. agumonkey ◴[] No.44484355[source]
Then there's the other side of the issue: if your tool is smarter than you, how do you handle it?

People joke online that some colleagues use ChatGPT to answer questions that other teammates generated with ChatGPT; nobody knows what's going on anymore.

8. echoangle ◴[] No.44484373{4}[source]
OK, but how does that mean we will never be able to do it? Imagine telling people 500 years ago that you will build a machine that can bring them to the moon. Maybe AGI is like that, maybe it's really impossible. But how can people be confident that AGI is something humans can't create?
replies(1): >>44484431 #
9. colechristensen ◴[] No.44484381[source]
>I don't think being able to regurgitate information in an understandable form is even an adequate or useful measurement of intelligence.

Measuring intelligence is hard and requires a really good definition of intelligence. LLMs have in some ways made the definition easier, because now we can ask a concrete question about computers that are already very good at some things: "Why are LLMs not intelligent?" Given their capabilities and deficiencies, answering what current "AI" technology lacks will make us better able to define intelligence. This assumes that LLMs are the state-of-the-art Million Monkeys, and that intelligence lies on a different path than further optimizing them.

https://en.wikipedia.org/wiki/Infinite_monkey_theorem

10. baxtr ◴[] No.44484384[source]
I think the same.

What do you call people like us? AI doomers? AI boomers?!

replies(3): >>44484414 #>>44484467 #>>44484497 #
11. Waterluvian ◴[] No.44484386[source]
I think the only way that it’s actually impossible is if we believe that there’s something magical and fundamentally immeasurable about humans that leads to our general intelligence. Otherwise we’re just machines, after all. A human brain is theoretically reproducible outside standard biological mechanisms, if you have a good enough nanolathe.

Maybe our first AGI is just a Petri dish brain with a half-decent python API. Maybe it’s more sand-based, though.

replies(8): >>44484413 #>>44484436 #>>44484490 #>>44484539 #>>44484739 #>>44484759 #>>44485168 #>>44487032 #
12. frizlab ◴[] No.44484413[source]
> if we believe that there’s something magical and fundamentally immeasurable about humans that leads to our general intelligence

It’s called a soul for the believers.

13. Mistletoe ◴[] No.44484414[source]
Realists.
14. izzydata ◴[] No.44484416[source]
I'm not an expert by any means, but everything I've seen of LLMs / machine learning looks like mathematical computation, no different from what computers have always been doing at a fundamental level. If computers weren't AI before, then I don't think they are now just because the math they are doing has changed.

Maybe something like the Game of Life is more in the right direction: you set up a system with just the right set of rules for input and output, then turn it on and let it go, and the AI is an emergent property of the system over time. (See the sketch below this comment.)

replies(1): >>44484479 #
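For illustration, here is a minimal Conway's Game of Life in Python, the kind of fixed-rule emergent system the comment above gestures at (an editorial sketch; the grid size and seed pattern are arbitrary choices, not anything from the thread):

```python
# Minimal Conway's Game of Life: complex behavior emerging from fixed local rules.
import numpy as np

def step(grid: np.ndarray) -> np.ndarray:
    """Advance a 2D 0/1 grid by one generation (toroidal wrap-around edges)."""
    # Count each cell's 8 neighbors by summing shifted copies of the grid.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A cell is alive next tick if it has 3 neighbors,
    # or has 2 neighbors and is currently alive.
    return ((neighbors == 3) | ((neighbors == 2) & (grid == 1))).astype(int)

grid = np.zeros((20, 20), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1  # a "glider"

for _ in range(8):
    grid = step(grid)
print(grid.sum(), "cells alive after 8 generations")  # the glider persists and moves
```

Nothing in the rules mentions gliders; they emerge from the rules, which is the property the comment is pointing at.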
15. kachapopopow ◴[] No.44484431{5}[source]
What we have right now with LLMs is brute-forcing our way to something 'smarter' than a human. Of course it can happen, but it's not something that can be deliberately 'created' by a human. An LLM as small as 3B has already performed more calculations than were done in all of human history.
16. andy99 ◴[] No.44484436[source]
If by "something magical" you mean something we don't understand, that's trivially true. People like to give firm opinions, or make completely unsupported statements they feel should be taken seriously ("how do we know human intelligence doesn't work the same way as next-token prediction?"), about something nobody understands.
replies(1): >>44484461 #
17. dinkumthinkum ◴[] No.44484439[source]
I think you are very right to be skeptical. It's refreshing to see another such take, because it is so strange to see so many supposedly technical people just roll down the track of assuming this is happening, when there are some fundamental problems with the idea. I understand why non-technical people are ready to marry it and worship it or whatever, but serious people need to think more critically.
18. breuleux ◴[] No.44484454[source]
I think the issue is going to turn out to be that intelligence doesn't scale very well. The computational power needed to model a system has got to be in some way exponential in how complex or chaotic the system is, meaning that the effectiveness of intelligence is intrinsically constrained to simple and orderly systems. It's fairly telling that the most effective way to design robust technology is to eliminate as many factors of variation as possible. That might be the only modality where intelligence actually works well, super or not.
replies(1): >>44484546 #
19. Waterluvian ◴[] No.44484461{3}[source]
I mean something that’s fundamentally not understandable.

“What we don’t yet understand” is just a horizon.

20. npteljes ◴[] No.44484467[source]
"AI skeptics", like here: https://www.techopedia.com/the-skeptics-who-believe-ai-is-a-...
replies(1): >>44485290 #
21. paulpauper ◴[] No.44484478[source]
I agree. There is no defined or agreed-upon consensus on what AGI even means or implies. Instead, we will continue to see incremental improvements at the sorts of things AI is good at, like text and image generation, generating code, etc. The utopian dream of AI solving all of humanity's problems while people just chill on a beach basking in infinite prosperity is unfounded.
replies(1): >>44486659 #
22. hackinthebochs ◴[] No.44484479{3}[source]
Why do you have a preconception of what an implementation of AGI should look like? LLMs are composed of the same operations that computers have always done. But they're organized in novel ways that have produced novel capabilities.
replies(1): >>44484740 #
23. somewhereoutth ◴[] No.44484490[source]
Our silicon machines exist in a countable state space (you can easily assign a unique natural number to any state for a given machine). However, 'standard biological mechanisms' exist in an uncountable state space - you need real numbers to properly describe them. Cantor showed that the uncountable is infinitely more infinite (pardon the word tangle) than the countable. I posit that the 'special sauce' for sentience/intelligence/sapience exists beyond the countable, and so is unreachable with our silicon machines as currently envisaged.

I call this the 'Cardinality Barrier'

replies(9): >>44484527 #>>44484530 #>>44484534 #>>44484541 #>>44484590 #>>44484606 #>>44484612 #>>44484664 #>>44485305 #
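For reference, the set-theoretic fact being invoked is Cantor's theorem; in LaTeX (an editorial gloss, not the commenter's notation):

```latex
% Cantor: no set X surjects onto its power set \mathcal{P}(X), so |X| < |\mathcal{P}(X)|.
% Applied to the claim above: machine states are countable like \mathbb{N},
% while a true continuum of physical states has the cardinality of \mathbb{R}.
|\mathbb{N}| = \aleph_0 \;<\; 2^{\aleph_0} = |\mathbb{R}|
```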
24. paulpauper ◴[] No.44484497[source]
There is a middle ground: people who believe AI will lead to improvements in some aspects of life, but will not liberate people from work or anything grandiose like that.
replies(1): >>44484508 #
25. baxtr ◴[] No.44484508{3}[source]
I am a big fan of AI tools.

I just don’t see how AGI is possible in the near future.

26. Waterluvian ◴[] No.44484527{3}[source]
That’s an interesting thought. It steps beyond my realm of confidence, but I’ll ask in ignorance: can a biological brain really have infinite state space if there’s a minimum divisible Planck length?

Infinite and “finite but very very big” seem like a meaningful distinction here.

I once wondered if digital intelligences might be possible but would require an entire planet's precious metals to build and whole stars to power. That is: the "finite but very very big" case.

But I think your idea is constrained to if we wanted a digital computer, is it not? Humans can make intelligent life by accident. Surely we could hypothetically construct our own biological computer (or borrow one…) and make it more ideal for digital interface?

replies(2): >>44484549 #>>44485363 #
27. saubeidl ◴[] No.44484530{3}[source]
That is a really insightful take, thank you for sharing!
28. ◴[] No.44484534{3}[source]
29. sandworm101 ◴[] No.44484539[source]
A brain in a jar, with wires so that we can communicate with it, already exists. It's called the internet. My brain is communicating with you now through wires. Replacing my keyboard with implanted electrodes may speed up the connection, but it won't fundamentally change the structure or capabilities of the machine.
replies(1): >>44484551 #
30. jandrewrogers ◴[] No.44484541{3}[source]
> 'standard biological mechanisms' exist in an uncountable state space

Everything in our universe is countable, which naturally includes biology. A bunch of physical laws are predicated on the universe being a countable substrate.

replies(1): >>44486699 #
31. airstrike ◴[] No.44484546[source]
What does "scale well" mean here? LLMs right now aren't intelligent so we're not scaling from that point on.

If we had a very inefficient, power-hungry machine that was 1:1 with a human being in intelligence, but could scale it, however inefficiently, to 100:1, it might still be worth it.

32. saubeidl ◴[] No.44484549{4}[source]
Isn't a Planck length just the minimum for measurability?
replies(2): >>44484654 #>>44484701 #
33. Waterluvian ◴[] No.44484551{3}[source]
Wait, are we all just Servitors?!
34. layer8 ◴[] No.44484590{3}[source]
Physically speaking, we don’t know that the universe isn’t fundamentally discrete. But the more pertinent question is whether what the brain does couldn’t be approximated well enough with a finite state space. I’d argue that books, music, speech, video, and the like demonstrate that it could, since those don’t seem qualitatively much different from how other, analog inputs stimulate our intellect. Or otherwise you’d have to explain why an uncountable state space would be needed to deal with discrete finite inputs.
35. coffepot77 ◴[] No.44484606{3}[source]
Can you explain why you think the state space of the brain is not finite? (Not even taking into account countability of infinities)
36. richk449 ◴[] No.44484612{3}[source]
It sounds like you are making a distinction between digital (silicon computers) and analog (biological brains).

As far as possible reasons that a computer can’t achieve AGI go, this seems like the best one (assuming computer means digital computer of course).

But in a philosophical sense, a computer obeys the same laws of physics that a brain does, and the transistors are analog devices that are being used to create a digital architecture. So whatever makes your brain have uncountable states would also make a real digital computer have uncountable states. Of course we can claim that only the digital layer on top matters, but why?

37. triclops200 ◴[] No.44484654{5}[source]
Measurability is essentially a synonym for meaningful interaction at some measurement scale. When describing fundamental measurability limits, you're essentially describing what current physical models consider to be the fundamental interaction scale.
38. bakuninsbart ◴[] No.44484664{3}[source]
Cantor talks about countable and uncountable infinities, but both computer chips and human brains are finite systems. The human brain has roughly 100b neurons; even if each of these had an edge to every other, and each edge could individually light up signalling a different state of mind, isn't that just `2^(number of edges)`? That's roughly as far away from infinity as 1. (Rough arithmetic below.)
replies(1): >>44484792 #
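Rough arithmetic for that bound (a back-of-the-envelope sketch using the comment's all-to-all assumption; real brains have closer to ~10^14 synapses):

```python
# Back-of-the-envelope: binary states of a fully connected 100-billion-node graph.
from math import comb, log10

n = 100_000_000_000               # ~100 billion neurons (the comment's figure)
edges = comb(n, 2)                # all-to-all edges: about 5.0e21
log10_states = edges * log10(2)   # log10 of the 2**edges possible on/off patterns
print(f"edges  ~ {edges:.2e}")
print(f"states ~ 10^({log10_states:.2e})")  # astronomically large, yet finite
```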
39. layer8 ◴[] No.44484701{5}[source]
Not quite. Smaller wavelengths mean higher energy, and a photon with Planck wavelength would be energetic enough to form a black hole. So you can’t meaningfully interact electromagnetically with something smaller than the Planck length. Nor can that something have electromagnetic properties.

But since we don’t have a working theory of quantum gravity at such energies, the final verdict remains open.
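In rough formulas, the standard argument goes like this (an editorial paraphrase, not the commenter's):

```latex
% Photon energy grows as wavelength shrinks:
E = \frac{hc}{\lambda}
% At \lambda \sim \ell_P = \sqrt{\hbar G / c^3} \approx 1.6 \times 10^{-35}\,\mathrm{m},
% E reaches the Planck energy E_P = \sqrt{\hbar c^5 / G} \approx 1.96 \times 10^{9}\,\mathrm{J},
% and the Schwarzschild radius of that much energy, r_s = 2GE/c^4,
% is itself of order \ell_P: the probe photon collapses into a black hole.
```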

40. josefx ◴[] No.44484739[source]
> and fundamentally immeasurable about humans that leads to our general intelligence

Isn't AGI defined to mean "matches humans in virtually all fields"? I don't think there is a single human capable of this.

41. izzydata ◴[] No.44484740{4}[source]
I am expressing doubt. I don't have any preconceptions. I am open to being convinced of anything that makes more sense.
replies(1): >>44485493 #
42. Balgair ◴[] No.44484759[source]
-- A human brain is theoretically reproducible outside standard biological mechanisms, if you have a good enough nanolathe.

Sort of. The main issue is the energy requirements. We could theoretically reproduce a human brain in SW today; it's just that it would be a really big energy hog, would run very slowly, and would probably go insane quickly, like any person trapped in a sensory-deprivation tank.

The real key development for AI and AGI is down at the metal level of computers- the memristor.

https://en.m.wikipedia.org/wiki/Memristor

The synapse in a brain is essentially a memristive element, and it's a very taxing one on the neuron. The defining equation is memristance = (change in flux)/(change in charge). Yes, a flux capacitor, sorta. It's the missing piece in fundamental electronics: the fourth basic passive element, alongside the resistor, capacitor, and inductor.

Making simple two-element memristors is somewhat possible these days, though I've not really been in the space recently. Please, if anyone knows where to buy one (a real one, not a claimed-to-be one), let me know. I'm willing to pay good money.

In terms of AI, a memristor would require a total redesign of how we architect computers (goodbye buses and physically separate memory, for one). But you'd get a huge energy and time savings benefit. As in, you could run an LLM on a watch battery or a small solar cell and let the environment train it to a degree.

Hopefully AI will accelerate their discovery and facilitate their introduction into cheap processing and construction of chips.
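For the curious, here is a minimal sketch of the classic HP linear dopant-drift memristor model (Strukov et al., 2008); the parameter values are typical textbook orders of magnitude, not anything specific to this comment:

```python
# HP linear dopant-drift memristor model, driven by a sinusoidal voltage.
import math

R_ON, R_OFF = 100.0, 16e3      # ohms: fully doped / fully undoped resistance
D = 10e-9                      # m: device thickness
MU = 1e-14                     # m^2/(V*s): dopant mobility
dt, f, amp = 1e-6, 1e3, 1.0    # timestep (s), drive frequency (Hz), amplitude (V)

w = 0.5 * D                    # state variable: width of the doped region
for n in range(20_000):        # simulate 20 ms
    v = amp * math.sin(2 * math.pi * f * n * dt)
    M = R_ON * (w / D) + R_OFF * (1 - w / D)  # memristance depends on state w
    i = v / M
    w += MU * (R_ON / D) * i * dt             # linear dopant drift
    w = min(max(w, 0.0), D)                   # clamp to physical bounds

print(f"final memristance: {M:.1f} ohms")     # a resistor with memory
```

The point of the sketch is the state variable w: the memristance depends on the integrated history of the current through the device, which is what makes it synapse-like.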

43. somewhereoutth ◴[] No.44484792{4}[source]
But this signalling (and connections) may be more complex than connected/unconnected and on/off, such that we cannot completely describe them [digitally/using a countable state space] as we would with silicon.
replies(1): >>44485200 #
44. knome ◴[] No.44485168[source]
>Maybe our first AGI is just a Petri dish brain with a half-decent python API.

https://www.oddee.com/australian-company-launches-worlds-fir...

The entire idea feels rather immoral to me, but it does exist.

replies(1): >>44486713 #
45. chowells ◴[] No.44485200{5}[source]
If you think it can't be done with a countable state space, then you must know some physics that the general establishment doesn't. I'm sure they would love to know what you do.

As far as physicists believe at the moment, there's no way to ever observe a difference below the Planck level. Energy, distance, time, whatever: they all have a lower boundary of measurability. That's not a practical issue, it's a theoretical one. According to the best models we currently have, there's literally no way to ever observe a difference below those levels.

If a difference smaller than that is relevant to brain function, then brains have a way to observe the difference. So I'm sure the field of physics eagerly awaits your explanation. They would love to see an experiment thoroughly disagree with a current model. That's the sort of thing scientists live for.

replies(1): >>44486650 #
46. izzydata ◴[] No.44485290{3}[source]
This article is about being skeptical that what people currently call AI, which is actually LLMs, is going to be a transformative technology.

Myself and many others are skeptical that LLMs are even AI.

LLMs / "AI" may very well be a transformative technology that changes the world forever. But that is a different matter.

47. dwaltrip ◴[] No.44485305{3}[source]
Please describe in detail how biological mechanisms are uncountable.

And then you need to show how the same logic cannot apply to non-biological systems.

48. nicoburns ◴[] No.44485363{4}[source]
Absolutely nothing in the real world is truly infinite. Infinity is just a useful mathematical fiction that closely approximate the real world for large enough (or small enough in the case of infinitesimals) things.

But biological brains have a significantly greater state space than conventional silicon computers because they're analog. The voltage across a transistor varies approximately continuously, but we only measure a single bit from it (or occasionally two, as in multi-level NAND flash cells).

49. tempusalaria ◴[] No.44485493{5}[source]
Even as someone who is skeptical about LLMs, I’m not sure how anyone can look at what was achieved in AlphaGo and not at least consider the possibility that NNs could be superhuman in basically every domain at some point
50. hyperbovine ◴[] No.44486659[source]
> There is no defined or agreed-upon consensus on what AGI even means or implies.

Agreed, however defining ¬AGI seems much more straightforward to me. The current crop of LLMs, impressive though they may be, are just not human-level intelligent. You recognize this as soon as you spend a significant amount of time using one.

It may also be that they are converging on a type of intelligence that is fundamentally not the same as human intelligence. I’m open to that.

51. j_bum ◴[] No.44486713{3}[source]
I’m curious why you find it immoral?
replies(2): >>44487108 #>>44487146 #
52. preisschild ◴[] No.44487032[source]
> Maybe our first AGI is just a Petri dish brain with a half-decent python API

This reminds me of The Thought Emporium's project of teaching rat brain cells to play Doom.

https://www.youtube.com/watch?v=bEXefdbQDjw

53. superfrank ◴[] No.44487108{4}[source]
Because the lack of semicolons is pure hubris and an affront to God
54. knome ◴[] No.44487146{4}[source]
I don't think it particularly moral to start heading down a path wherein we are essentially aiming to create enslaved cloned vat brains. I know that's not what they have, and that they're nowhere near that, but if they succeed in these early stages, more and more complex systems will follow in time. I don't think it a particularly healthy direction to explore.