1. dmwilcox ◴[] No.43722753[source]
I've been saying this for a decade already but I guess it is worth saying here. I'm not afraid AI or a hammer is going to become intelligent (or jump up and hit me in the head either).

It is science fiction to think that a system like a computer can behave at all like a brain. Computers are incredibly rigid systems with only the limited variance we permit. "Software" is flexible in comparison to creating dedicated circuits for our computations but is nothing by comparison to our minds.

Ask yourself, why is it so hard to get a cryptographically secure random number? Because computers are pure unadulterated determinism -- put the same random seed value in your code and get the same "random numbers" every time in the same order. Computers need to be like this to be good tools.
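
To see how literal that is, here it is in a few lines of Python (the stdlib PRNG is a Mersenne Twister -- pure deterministic state evolution from the seed):

    import random

    random.seed(42)
    a = [random.random() for _ in range(3)]
    random.seed(42)
    b = [random.random() for _ in range(3)]
    assert a == b  # same seed in, same "random numbers" out, same order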

Assuming that AGI is possible in the kinds of computers we know how to build means that we think a mind can be reduced to a probabilistic or deterministic system. And from my brief experience on this planet I don't believe that premise. Your experience may differ and it might be fun to talk about.

In Aristotle's ethics he talks a lot about ergon (purpose) -- hammers are different than people, computers are different than people, they have an obvious purpose (because they are tools made with an end in mind). Minds strive -- we have desires, wants and needs -- even if it is simply to survive or better yet thrive (eudaimonia).

An attempt to create a mind is another thing entirely and not something we know how to start. Rolling dice hasn't gotten anywhere. So I'd wager AGI somewhere in the realm of 30 years to never.

replies(12): >>43722893 #>>43722938 #>>43723051 #>>43723121 #>>43723162 #>>43723176 #>>43723230 #>>43723536 #>>43723797 #>>43724852 #>>43725619 #>>43725664 #
2. CooCooCaCha ◴[] No.43722893[source]
This is why I think philosophy has become another form of semi-religious kookery. You haven't provided any actual proof or logical reason for why a computer couldn't be intelligent. If randomness is required then sample randomness from the real world.

It's clear that your argument is based on feels and you're using philosophy to make it sound more legitimate.

replies(2): >>43723074 #>>43723225 #
3. throwaway150 ◴[] No.43722938[source]
> And from my brief experience on this planet I don't believe that premise.

A lot of things that humans believed were true due to their brief experience on this planet ended up being false: the Earth is the center of the universe, heavier objects fall faster than lighter ones, time ticks the same everywhere, species are fixed and unchanging.

So what your brief experience on this planet makes you believe has no bearing on what is correct. It might very well be that our mind can be reduced to a probabilistic and deterministic system. It might also be that our mind is a non-deterministic system that can be modeled in a computer.

replies(2): >>43725533 #>>43725640 #
4. ggreer ◴[] No.43723051[source]
Is there any specific mental task that an average human is capable of that you believe computers will not be able to do?

Does this also mean that you believe that brain emulations (uploads) are not possible, even given an arbitrary amount of compute power?

replies(2): >>43723098 #>>43725746 #
5. biophysboy ◴[] No.43723074[source]
Brains are low-frequency, energy-efficient, organic, self-reproducing, asynchronous, self-repairing, and extremely highly connected (thousands of synapses per neuron). If AGI is defined as "approximate humans", I think it's gonna be a while.

That said, I don't think computers need to be human to have an emergent intelligence. It can be different in kind if not in degree.

replies(1): >>43723327 #
6. missingrib ◴[] No.43723098[source]
Yes, they can't have understanding or intentionality.
replies(2): >>43723319 #>>43723476 #
8. preommr ◴[] No.43723162[source]
> why is it so hard to get a cryptographically secure random number? Because computers are pure unadulterated determinism

Then you've missed the point of software.

Software isn't computer science; it's not always about code. It's about solving problems in a way we can control and manufacture.

If we needed random numbers, we could easily use hardware that exploits some physical property, or we could pull in an observation from an API, like the weather. We don't do these things because pseudo-random is good enough, and the other solutions have drawbacks (like requiring an internet connection for API calls). But that doesn't mean software can't solve these problems.

replies(1): >>43723364 #
9. LouisSayers ◴[] No.43723176[source]
What you're mentioning is like the difference between digital and analog music.

For generic stuff you probably can't tell the difference, but once you move to the edges you start to hear the steps in digital vs the smooth transition of analog.

In the same way, AI runs on bits and bytes, and there's only so much detail you can fit into that.

You can approximate reality, but it'll never quite be reality.

I'd be much more concerned with growing organic brains in a lab. I wouldn't be surprised to learn that people are covertly working on that.

replies(1): >>43723258 #
10. dmwilcox ◴[] No.43723225[source]
I tried to keep my long post short so I cut things. I gestured at it -- there is nothing in a computer we didn't put there.

Take the same model weights, give it the same inputs, get the same outputs. Same with the pseudo-random number generator. And the "same inputs" is especially limited versus what humans are used to.

What's the machine code of an AGI gonna look like? It makes one illegal instruction and crashes? If it changes thoughts, will it flush the TLB and CPU pipeline? ;) I jest, but really think about the metal. The inside of modern computers is tightly controlled, with no room for anything unpredictable. I really don't think a von Neumann (or Harvard ;) machine is going to cut it. Honestly I don't know what will -- controlled but not controlled, artificially designed but not deterministic.

In fact, that we've made a computer as unreliable as a human at reproducing data (a la hallucinating/making s** up) is an achievement in itself, as much of an anti-goal as it may be. If you want accuracy, you don't use a probabilistic system on such a wide problem space (identify a bad solder joint from an image? Sure. Write my thesis? Not so much).

replies(1): >>43723439 #
11. bastardoperator ◴[] No.43723230[source]
Computers can't have unique experiences. I think AI is going to replace search, but becoming sentient? Not in my lifetime, granted I'm getting up there.
replies(1): >>43723335 #
12. Borealid ◴[] No.43723258[source]
Are you familiar with the Nyquist–Shannon sampling theorem?

If so, what do you think about the concept of a human "hear[ing] the steps" in a digital playback system using a sampling rate of 192kHz, a rate at which many high-resolution files are available for purchase?

How about the same question but at a sampling rate of 44.1kHz, or the way a normal "red book" music CD is encoded?
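
(For anyone following along, a quick numerical sketch of what the theorem implies -- the sample rate and test tones below are arbitrary choices of mine:)

    import math

    fs = 44100.0  # "red book" CD sample rate, Hz

    def sample(freq, n=8):
        # Sample a unit-amplitude sine of the given frequency at rate fs.
        return [math.sin(2 * math.pi * freq * t / fs) for t in range(n)]

    # Below fs/2 there are no "steps": nearby frequencies produce
    # distinct sample streams, not the same quantized one.
    assert sample(1000.0) != sample(1000.1)

    # What sampling cannot represent is content above fs/2: a tone at
    # fs - 1000 Hz aliases onto (the negative of) the 1000 Hz tone.
    assert all(abs(x + y) < 1e-9
               for x, y in zip(sample(fs - 1000.0), sample(1000.0)))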

replies(2): >>43723320 #>>43723564 #
13. WXLCKNO ◴[] No.43723319{3}[source]
Right now, or do you mean ever?

It's such a small leap to see how an artificial intelligence can/could become capable of understanding and having intentionality.

14. EMIRELADERO ◴[] No.43723320{3}[source]
At least for listening purposes, there's no difference between 44.1 kHz/16-bit sampling and anything above that. It's all the same to the human ear.
15. cmsj ◴[] No.43723327{3}[source]
Just to put some numbers on "extremely highly connected" - there are about 90 billion neurons in a human brain, but the connections between them number in the range of 100 trillion.

That is one hell of a network, and it can all operate fully in parallel while continuously training itself. Computers have gotten pretty good at doing things in parallel, but not that good.
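
(Back-of-the-envelope on those numbers, assuming a single 4-byte weight per connection -- my assumption, not a real architecture:)

    neurons = 90e9     # ~90 billion neurons
    synapses = 100e12  # ~100 trillion connections

    print(synapses / neurons)   # ~1,100 synapses per neuron on average
    print(synapses * 4 / 1e12)  # ~400 TB just to store one float per connection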

16. pstuart ◴[] No.43723335[source]
On the newly released iPod: "No wireless. Less space than a Nomad. Lame."

;-)

17. dmwilcox ◴[] No.43723364[source]
It's not about the random numbers; it's about the tree of possibilities having to be defined up front (in software or hardware) -- that all inputs should be defined and mapped to some output, and that this process is predictable and reproducible.

This makes computers incredibly good at what people are not good at -- predictably doing math correctly, following a procedure, etc.

But because all of the possibilities of the computer had to be written up as circuitry or software beforehand, its variability of outputs is constrained to what we put into it in the first place (whether that's a seed for randomness or model weights).

You can get random numbers and feed them into the computer, but we call that "fuzzing", which is a search for crashes indicating unhandled input cases and possible bugs or security issues.

replies(1): >>43723531 #
18. krisoft ◴[] No.43723439{3}[source]
> What's the machine code of an AGI gonna look like?

Right now the guess is that it will be mostly a bunch of multiplications and additions.

> It makes one illegal instruction and crashes?

And our heart quivers just slightly the wrong way and we die. Or a tiny blood clot plugs a vessel in our brain and we die. Do you feel that our fragility is a good reason why meat cannot be intelligent?

> I jest but really think about the metal.

Ok. I'm thinking about the metal. What should this thinking illuminate?

> The inside of modern computers is tightly controlled with no room for anything unpredictable.

Let's assume we can't make AGI because we need randomness and unpredictability in our computers. We can very easily add unpredictability. The simple and stupid solution is to add some sensor (like a camera CCD) and stare at the measurement noise. You don't even need a lens on that CCD. You can cap it so it sees "all black", and then what it measures is basically the heat noise of the sensor. Voila: your computer now has unpredictability. People who actually make semiconductors can probably come up with even simpler and easier ways to integrate unpredictability right on the same chip we compute with.
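
(That recipe in sketch form -- read_noise here is a hypothetical stand-in for whatever byte-producing noisy measurement the sensor gives you:)

    import hashlib

    def harvest_entropy(read_noise, rounds=64):
        # Whiten raw, biased sensor noise into a uniform 256-bit value
        # by hashing many independent reads together.
        h = hashlib.sha256()
        for _ in range(rounds):
            h.update(read_noise())  # e.g. bytes of capped-CCD heat noise
        return h.digest()           # 32 hard-to-predict bytes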

You still haven't really argued why you think "unpredictableness" is the missing component, of course. Besides the fact that it just feels right to you.

replies(1): >>43726438 #
19. recursive ◴[] No.43723476{3}[source]
Coincidentally, there is no falsifiable/empirical test for understanding or intentionality.
20. leptons ◴[] No.43723531{3}[source]
No, you're missing what they said. True randomness can be delivered to a computer via a peripheral; an integrated circuit or some such device that delivers true randomness is not that difficult to build.

https://en.wikipedia.org/wiki/Hardware_random_number_generat...
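
And in practice the OS already does the mixing for you: Linux feeds interrupt timing and on-die sources like RDRAND into an entropy pool that any program can read. In Python, for instance:

    import secrets

    key = secrets.token_bytes(32)  # backed by the OS entropy pool
                                   # (os.urandom), not a seeded, replayable PRNG
    n = secrets.randbelow(100)     # an unpredictable integer in [0, 100)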

replies(1): >>43724160 #
21. Krssst ◴[] No.43723536[source]
If the physics underlying the brain's behavior is deterministic, it can be simulated by software, and so can the brain.

(and if we assume that non-determinism is randomness, a non-deterministic brain could be simulated by software plus an entropy source)

22. LouisSayers ◴[] No.43723564{3}[source]
I have no doubt that if you sample a sound at high enough fidelity you won't hear a difference.

My comment about digital vs analog is more of an analogy about producing sounds rather than playing back samples, though.

There's a Masterclass with Joel Zimmerman (DeadMau5) where he explains the stepping effect when it comes to his music production. Perhaps he just needs a software upgrade, but there was a lesson where he showed the stepping effect which was audibly noticeable when comparing digital vs analog equipment.

replies(1): >>43723646 #
23. Borealid ◴[] No.43723646{4}[source]
You are correct, and that "high enough fidelity" is the rate at which music has been sampled for decades.
replies(1): >>43723820 #
24. AstroBen ◴[] No.43723797[source]
> It is science fiction to think that a system like a computer can behave at all like a brain

It is science fiction to think that a plane could act at all like a bird. Although... it doesn't need to in order to fly

Intelligence doesn't mean we need to recreate the brain in a computer system. Sentience, maybe. General intelligence, no.

replies(1): >>43725706 #
25. LouisSayers ◴[] No.43723820{5}[source]
The problem I'm mentioning isn't about the fidelity of the sample, but of the samples themselves.

There are an infinite number of frequencies between two points - point 'a' and point 'b'. What I'm talking about are the "steps" you hear as you move across the frequency range.

replies(1): >>43724031 #
26. kazinator ◴[] No.43724031{6}[source]
Of course there is a limit to the frequency resolution of a sampling method. I'm skeptical you can hear the steps though, at 44.1 kHz or better sampling rates.

Let's say that the shortest interval at which our hearing has good frequency acuity (say, as good as it can be) is 1 second.

In this interval, we have 44100 samples.

Let's imagine the samples graphically: a "44K" pixel wide image.

We have some waveform across this image. What is the smallest frequency stretch or shrink that will change the image? Note: not necessarily audibly, but just enough to change the pixels.

If we grab one endpoint of the waveform and move it by less than half a pixel, there is no difference, right? We have to stretch it by a whole pixel.

Let's assume that some people (perhaps most) can hear that difference. It might not be true, but it's the weakest assumption.

That's a 0.0023 percent difference!

One cent (1/100th of a semitone) is a 0.058% difference: so the difference we are considering is 25 X smaller.
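
(Checking that arithmetic in Python:)

    step = 1 / 44100            # one sample out of 44100: ~0.0023%
    cent = 2 ** (1 / 1200) - 1  # one cent of pitch: ~0.058%
    print(cent / step)          # ~25.5 -- the step is about 1/25 of a cent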

I really don't think you can hear a 1/25-of-a-cent difference in pitch, over an interval of one second, or even longer.

Over shorter time scales less than a second, the resolution in our perception of pitch gets worse.

E.g. when a violinist is playing a really fast run, you don't notice it if the notes have intonation that is off. The longer "landing" notes in the solo have to be good.

When bad pitch is slight, we need not only longer notes, but to hear it together with other notes, because the beats between them are an important clue (and in fact the artifact we will find most objectionable).

Pre-digital technology will not have frequency resolution which is that good. I don't think you can get tape to move at a speed that stays within 0.0023 percent of a set target. In consumer tape equipment, you can hear audible "wow" and "flutter" as the tape speed oscillates. When the frequency of a periodic signal wobbles, you get new signals in there: side bands.

I don't think that there is any perceptible aspect of sound that is not captured in the ordinary consumer sample rates and sample resolutions. I suspect 48 kHz and 24 bits is way past diminishing returns.

I'm curious what it is that Deadmau5 thinks he discovered, and under what test conditions.

replies(1): >>43724206 #
27. lttlrck ◴[] No.43724160{4}[source]
Maybe I'm misreading it but I think the OP understands that.

If you feed that true randomness into a computer, what use is it? Will it impair the computer at the very tasks in which it excels?

> That all inputs should be defined and mapped to some output and that this process is predictable and reproducible.

replies(1): >>43724929 #
28. kazinator ◴[] No.43724206{7}[source]
Here is a way we could hear a 0.0023 percent difference in pitch: via beats.

Suppose we sample a precise 10,000.00 Hz analog signal (sinusoid) and speed up the sampled signal by 0.0023 percent. It will have a frequency of 10,000.23 Hz.

The f2 - f1 difference between them is 0.23 Hz, which means if they are mixed together, we will hear beats at 0.23 Hz: about one beat every four seconds.
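
(The same numbers worked in Python:)

    f1 = 10_000.00        # original tone, Hz
    f2 = f1 * 1.000023    # sped up by 0.0023 percent
    print(f2 - f1)        # ~0.23 Hz beat rate when the two are mixed
    print(1 / (f2 - f1))  # ~4.3 s between beats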

So in this contrived way, where we have the original source and the digitized one side by side, we can obtain an audible effect correlating to the steps in resolution of the sampling method.

I'm guessing Deadmau5 might have set up an experiment along these lines.

Musicians tend to be oblivious to something like 5-cent errors in the intonation of their instruments, in the lower registers. E.g. world-renowned guitarists play on axes that have no nut compensation, without which you can't even get close to accurate intonation.

29. Aloisius ◴[] No.43724852[source]
> Ask yourself, why is it so hard to get a cryptographically secure random number?

I mean, humans aren't exactly good at generating random numbers either.

And of course, every Intel and AMD CPU these days has a hardware random number generator in it.

30. leptons ◴[] No.43724929{5}[source]
Chemical reactions are "predictable and reproducible", as are quantum interactions, so does that make you a computer?

This comment thread is dull. I'm bailing out.

31. slavik81 ◴[] No.43725533[source]
What is the distance from the Earth to the center of the universe?
replies(1): >>43725600 #
32. gls2ro ◴[] No.43725600{3}[source]
The universe does not have a center, but it does have a beginning in time, which was also the beginning of space.

The distance to that beginning in time is approx 13.8 billion years. There is no corresponding distance in space, because space itself was created at that point and continues to be created.

Imagine the Earth on the surface of a sphere, and then ask: what is the center of the surface of the sphere? The sphere has a center, but on the surface there is no center.

At least this is my understanding of how to approach these kinds of questions.

33. ukFxqnLa2sBSBf6 ◴[] No.43725619[source]
I guarantee computers are better at generating random numbers than humans lol
replies(2): >>43725692 #>>43725876 #
35. potamic ◴[] No.43725664[source]
The universe we know is fundamentally probabilistic, so by extension everything, including stars, planets and computers, is inherently non-deterministic. But keeping our discussion outside of quantum effects and absolute determinism, we do not have a reason to believe that the mind should be anything but deterministic, scientifically at least.

We understand the building blocks of the brain pretty well. We know the structure and composition of neurons, we know how they are connected, what chemicals flow through them and how all these chemicals interact, and how that interaction translates to signal propagation. In fact, the neural networks we use in computing are loosely modelled on biological neurons. Both models essentially consist of interconnected units where each unit has weights to convert its incoming signals into outgoing signals. The predominant difference is in how these units adjust their weights: computational models use back propagation and gradient descent, while biological models use timing information from voltage changes.

But just because we understand the science of something perfectly well doesn't mean we can precisely predict how it will behave. Biological networks are very, very complex systems comprising billions of neurons with trillions of connections, acting on input that can vary in an immeasurable number of ways. It's like predicting earthquakes. Even though we understand the science behind plate tectonics, to precisely predict an earthquake we would need to map the properties of every inch of the continental plates, which is an impossible task. But that doesn't mean we can't use the same scientific building blocks to build simulations of earthquakes which behave like any real earthquake would. If it looks like a duck and quacks like a duck, then what is a duck?
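
(For the curious, the computational unit being described, stripped to a sketch -- the weights and inputs are arbitrary illustrative values:)

    import math

    def unit(inputs, weights, bias):
        # Weighted sum of incoming signals squashed by a nonlinearity,
        # loosely analogous to a neuron integrating synaptic inputs.
        z = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1 / (1 + math.exp(-z))  # sigmoid activation

    out = unit([0.5, -1.0, 2.0], [0.8, 0.2, -0.5], bias=0.1)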

replies(1): >>43730928 #
36. uh_uh ◴[] No.43725692[source]
Not only that, but LLMs unsurprisingly make distributional mistakes similar to the ones humans make when asked to generate random numbers.
37. gloosx ◴[] No.43725706[source]
BTW planes are fully inspired by birds and mimic the core principles of bird flight.

Mechanically it's different, since humans are not as advanced at mechanics as nature is, but of course comparing whole-brain function to simple flight is a bit silly.

38. gloosx ◴[] No.43725746[source]
1. Computers cannot self-rewire like neurons can, which means a human can adapt to pretty much any specific mental task (an "unknown", new task) without explicit retraining, which current computers need in order to learn something new.

2. Computers can't do continuous and unsupervised learning, which means computers require structured input, labeled data, and predefined objectives to learn anything. Humans learn passively all the time just by existing in the environment.

replies(1): >>43726521 #
39. pyfon ◴[] No.43725876[source]
Computers are better at hashing entropy.
40. dmwilcox ◴[] No.43726438{4}[source]
Mmmm well my meatsuit can't easily make my own heart quiver the wrong way and kill me. Computers can treat data as code and code as data all pretty easily; it's core to several languages (like Lisp). As such, making illegal instructions or violating the straitjacket of the system such an "intelligence" would operate in is likely.

If you could make an intelligent process, what would it think of an operating system kernel (the thing you have to ask for everything -- IO, memory, etc.)? Does the "intelligent" process fear for itself when it's going to get descheduled? What is the bit pattern for fear? Can you imagine an intelligent process in such a place, as a static representation of data in RAM? To write something down you call out to a library and maybe the CPU switches out to a brk system call to map more virtual memory? It all sounds frankly ridiculous. I think AGI proponents fundamentally misunderstand how a computer works and are engaging in magical thinking and taking the market for a ride.

I think it's less about the randomness and more about the fact that all the functionality of a computer is defined up front -- in software, in training, in hardware. Sure, you can add randomness and pick between two paths randomly, but a computer couldn't spontaneously pick to go down a path that wasn't defined for it.

replies(1): >>43735134 #
41. imtringued ◴[] No.43726521{3}[source]
Minor nitpicks. I think your points are pretty good.

1. Self-rewiring is just a matter of hardware design. Neuromorphic hardware is a thing.

2. LLM foundation models are actually unsupervised in a way, since they simply take any arbitrary text and try to complete it. It's the instruction fine-tuning that is supervised. (Q/A pairs)
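
(A sketch of what "unsupervised in a way" means here -- the targets come from the raw text itself; the tokenizer and toy corpus are stand-ins:)

    def next_token_pairs(text, tokenize=str.split):
        # Self-supervised: each prefix is an input and the token that
        # follows it is the training target. No human labels involved.
        toks = tokenize(text)
        return [(toks[:i], toks[i]) for i in range(1, len(toks))]

    pairs = next_token_pairs("the quick brown fox jumps")
    # [(['the'], 'quick'), (['the', 'quick'], 'brown'), ...]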

replies(1): >>43734671 #
42. pdimitar ◴[] No.43730928[source]
Seems to me you are a bit overconfident that "we" (who is "we"?) understand how the brain works. F.ex. how does a neuron actively stretching a tentacle trying to reach other neurons work in your model? Genuine question, I am not looking to make fun of you, it's just that your confidence seems a bit much.
replies(1): >>43734453 #
43. potamic ◴[] No.43734453{3}[source]
The simplified answer to that is some sort of chemical gradient determined by gene expression in the cell. This is pretty much how all biological activity happens, like how limbs "know" to grow in a direction or how butterfly wings "know" to form the shape of a wing. Scientists are continuously uncovering more and more knowledge about various biological processes across life forms, and there is nothing here to indicate it is anything but chemical signalling. I'm not a biologist, so I won't be able to give explanations n levels deep, but there is plenty of information accessible to form an understanding of these processes in terms of physical and chemical laws. For how neurons connect, you can look up synaptogenesis and start from there.
44. gloosx ◴[] No.43734671{4}[source]
Neuromorphic chips are looking cool, and they simulate plasticity, but the circuits are fixed. You can't sprout a new synaptic route or regrow a broken connection. To self-rewire is not merely changing your internal state or connections: to self-rewire means to physically grow new neurons, synapses or pathways, or shrink existing ones, externally, acting from within. This is not looking realistic with current silicon design.

The point is about unsupervised learning. Once an LLM is trained, its weights are frozen: it won't update itself during a chat. Prompt-driven inference is immediate, not persistent. You can define a term or concept mid-chat and it will behave as if it learned it, but only until the context window ends. If it were the other way, all models would drift very quickly.

45. krisoft ◴[] No.43735134{5}[source]
> Mmmm well my meatsuit can't easily make my own heart quiver the wrong way and kill me.

It very much can. Jump scares and deep grief are known to cause heart attacks. It is called stress cardiomyopathy. Or your meatsuit can indirectly do that by ingesting the wrong chemicals.

> If you could make an intelligent process, what would it think of an operating system kernel

Idk. What do you think of your hypothalamus? It can make you unconscious at any time. It in fact makes you unconscious about once a day. Do you fear it? What if one day it won’t wake you up? Or what if it jacks up your internal body temperature and cooks you alive from the inside? It can do that!

Now you might say you don’t worry about that, because through your long life your hypothalamus proved to be reliable. It predictably does what it needs to do, to keep you alive. And you would be right. Your higher cognitive functions have a good working relationship with your lower level processes.

Similarly, for an AGI to be intelligent it needs to have a good working relationship with the hardware it is running on. That means that if the kernel is temperamental and, idk, descheduling the higher-level AGI process, then the AGI will malfunction and not appear that intelligent. Same as if you meet Albert Einstein while he is chemically put to sleep. He won’t appear intelligent at all! At best he will be just drooling there.

> Can you imagine an intelligent process in such a place, as a static representation of data in RAM?

Yes. You can’t? This is not really a convincing argument.

> It all sounds frankly ridiculous.

I think what you are doing is looking at implementation details and feeling a disconnect between that and the possibility of intelligence. Do you feel the same ridiculousness about a meatblob doing things and appearing intelligent?

> a computer couldn't spontaneously pick to go down a path that wasn't defined for it.

Can you?

replies(1): >>43798543 #
46. dmwilcox ◴[] No.43798543{6}[source]
>> Can you imagine an intelligent process in such a place, as a static representation of data in RAM?

> Yes. You can’t? This is not really a convincing argument.

Fair, I believe it's called begging the question. But for some context: people of many recent technological ages have talked about the brain like a piece of their era's technology -- e.g. like a printing press, a radio, a TV.

I think we've found what we wanted to find (a hardware-software dichotomy in the brain) and then occasionally get surprised when things aren't all that clearly separated. So with that in mind, I personally, without any particularly good evidence to the contrary, am not of the belief that your brain can be represented as a static state. Pribram's holonomic brain theory comes to mind as a possible way brain state could have trouble being represented in RAM. (https://en.m.wikipedia.org/wiki/Holonomic_brain_theory)

> ...looking at implementation details and feeling a disconnect between that and the possibility of intelligence. Do you feel the same ridiculousness about a meatblob doing things and appearing intelligent?

If I were a biologist I might. My grandfather was a microbiologist and scoffed at my atheism. But with a computer at least the details are understandable and knowable, being created by people. We haven't cracked the consciousness of a fruit fly despite having a map of its brain.

>> a computer couldn't spontaneously pick to go down a path that wasn't defined for it.

> Can you?

Love it. I re-read Fight Club recently; it's a reasonable question. The worries of determinism versus free will still loom large in this sort of world view. We get a kind of "god of the gaps" problem, with free will being reduced down to the spaces where you don't have an explanation.