
174 points Philpax | 1 comment
dmwilcox No.43722753
I've been saying this for a decade already, but I guess it's worth saying here. I'm no more afraid of AI becoming intelligent than I am of a hammer doing so (or of either jumping up and hitting me on the head).

It is science fiction to think that a system like a computer can behave at all like a brain. Computers are incredibly rigid systems with only the limited variance we permit. "Software" is flexible compared to building dedicated circuits for our computations, but it is nothing compared to our minds.

Ask yourself, why is it so hard to get a cryptographically secure random number? Because computers are pure unadulterated determinism -- put the same random seed value in your code and get the same "random numbers" every time in the same order. Computers need to be like this to be good tools.
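
For example, in Python (a toy sketch -- nothing special about Python here):

    import random

    random.seed(42)                                # fix the seed...
    first = [random.random() for _ in range(3)]
    random.seed(42)                                # ...use the same seed again
    second = [random.random() for _ in range(3)]
    assert first == second                         # identical "random" numbers, in order

That reproducibility is a feature for tools; getting real entropy into a machine takes deliberate work.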

Assuming that AGI is possible in the kinds of computers we know how to build means that we think a mind can be reduced to a probabilistic or deterministic system. And from my brief experience on this planet I don't believe that premise. Your experience may differ and it might be fun to talk about.

In Aristotle's ethics he talks a lot about ergon (function or purpose) -- hammers are different from people, computers are different from people; they have an obvious purpose, because they are tools made with an end in mind. Minds strive -- we have desires, wants, and needs -- even if it is simply to survive, or better yet to thrive (eudaimonia).

An attempt to create a mind is another thing entirely and not something we know how to start. Rolling dice hasn't gotten anywhere. So I'd wager AGI somewhere in the realm of 30 years to never.

CooCooCaCha No.43722893
This is why I think philosophy has become another form of semi-religious kookery. You haven't provided any actual proof or logical reason for why a computer couldn't be intelligent. If randomness is required then sample randomness from the real world.

It's clear that your argument is based on feels and you're using philosophy to make it sound more legitimate.

dmwilcox No.43723225
I tried to keep my long post short so I cut things. I gestured at it -- there is nothing in a computer we didn't put there.

Take the same model weights, give it the same inputs, and you get the same outputs. Same with a pseudo-random number generator. And the "same inputs" available to a computer are especially limited compared to what humans are used to.
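
A toy sketch of what I mean, with numpy standing in for a real model's weights:

    import numpy as np

    rng = np.random.default_rng(0)       # frozen "model weights"
    W = rng.standard_normal((4, 4))
    x = np.ones(4)                       # the same input

    y1 = np.tanh(W @ x)                  # run the "model" twice
    y2 = np.tanh(W @ x)
    assert np.array_equal(y1, y2)        # bit-identical outputs, every time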

What's the machine code of an AGI gonna look like? It makes one illegal instruction and crashes? If it changes thoughts, will it flush the TLB and CPU pipeline? ;) I jest but really think about the metal. The inside of modern computers is tightly controlled with no room for anything unpredictable. I really don't think a von Neumann (or Harvard ;) machine is going to cut it. Honestly I don't know what will -- controlled but not controlled, artificially designed but not deterministic.

In fact, that we've made a computer as unreliable as a human at reproducing data (à la hallucinating/making s** up) is an achievement in itself, as much of an anti-goal as it may be. If you want accuracy, you don't use a probabilistic system on such a wide problem space (identify a bad solder joint in an image, sure; write my thesis, not so much).

krisoft No.43723439
> What's the machine code of an AGI gonna look like?

Right now the guess is that it will be mostly a bunch of multiplications and additions.
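
Something like this toy sketch of a single artificial neuron -- not any particular architecture, just the shape of the computation:

    def neuron(inputs, weights, bias):
        # multiply each input by a weight, add them all up, add a bias...
        total = sum(w * x for w, x in zip(weights, inputs)) + bias
        # ...then clamp with a simple nonlinearity (ReLU)
        return max(0.0, total)

    print(neuron([1.0, 2.0], [0.5, -0.25], 0.1))   # -> 0.1

Layer upon layer of that, billions of times over.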

> It makes one illegal instruction and crashes?

And our heart quivers just slightly the wrong way and we die. Or a tiny blood clot plugs a vessel in our brain and we die. Do you feel that our fragility is a good reason why meat cannot be intelligent?

> I jest but really think about the metal.

Ok. I'm thinking about the metal. What should this thinking illuminate?

> The inside of modern computers is tightly controlled with no room for anything unpredictable.

Let's assume we can't make AGI because we need randomness and unpredictability in our computers. We can very easily add unpredictability. The simple and stupid solution is to add some sensor (like a camera CCD) and stare at the measurement noise. You don't even need a lens on that CCD. You can cap it so it sees "all black", and then what it measures is basically the thermal noise of the sensor. Voilà: your computer now has unpredictability. People who actually make semiconductors can probably come up with even simpler ways to integrate unpredictability right on the chip we compute with.
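
A toy sketch of the idea in Python -- read_capped_ccd is hypothetical and just stands in for whatever driver call would read the raw sensor:

    import hashlib, os

    def read_capped_ccd():
        # hypothetical: stands in for reading raw thermal noise from a
        # lens-capped sensor; real code would talk to a camera driver
        return os.urandom(64)

    # hash the noise down into a seed the deterministic machine can use
    noise = read_capped_ccd()
    seed = int.from_bytes(hashlib.sha256(noise).digest(), "big")
    print(seed)   # unpredictable input for an otherwise deterministic system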

You still haven't really argued why you think "unpredictability" is the missing component, of course -- besides the fact that it just feels right to you.

dmwilcox No.43726438
Mmmm well my meatsuit can't easily make my own heart quiver the wrong way and kill me. Computers can treat data as code and code as data pretty easily -- it's core to several languages (like Lisp; toy sketch below). As such, an "intelligence" making illegal instructions or violating the straitjacket of the system it operates in is likely. If you could make an intelligent process, what would it think of an operating system kernel (the thing it has to ask for everything: I/O, memory, etc.)? Does the "intelligent" process fear for itself when it's about to get descheduled? What is the bit pattern for fear? Can you imagine an intelligent process in such a place, as a static representation of data in RAM? To write something down, it calls out to a library and maybe the CPU switches into a brk system call to map more virtual memory? It all sounds frankly ridiculous. I think AGI proponents fundamentally misunderstand how a computer works and are engaging in magical thinking and taking the market for a ride.
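
To be concrete about the code-as-data point, a toy Lisp-flavored sketch in Python:

    # an expression built as plain data, Lisp-style: (+ 1 (* 2 3))
    expr = ["+", 1, ["*", 2, 3]]

    ops = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

    def evaluate(e):
        # walk the data structure and execute it as code
        if isinstance(e, list):
            op, *args = e
            return ops[op](*[evaluate(a) for a in args])
        return e

    print(evaluate(expr))   # -> 7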

I think it's less about the randomness and more about the fact that all the functionality of a computer is defined up front -- in software, in training, in hardware. Sure, you can add randomness and pick between two paths randomly, but a computer couldn't spontaneously go down a path that wasn't defined for it.
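
Something like this toy sketch, where even the "random" choice is between paths somebody already wrote:

    import random

    def path_a(): return "took path A"
    def path_b(): return "took path B"

    # the coin flip is unpredictable, but both branches were defined up front
    print(random.choice([path_a, path_b])())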

krisoft No.43735134
> Mmmm well my meatsuit can't easily make my own heart quiver the wrong way and kill me.

It very much can. Jump scares and deep grief are known to cause heart attacks -- it's called stress cardiomyopathy. Or your meatsuit can do it indirectly by ingesting the wrong chemicals.

> If you could make an intelligent process, what would it think of an operating system kernel

Idk. What do you think of your hypothalamus? It can make you unconscious at any time. In fact, it makes you unconscious about once a day. Do you fear it? What if one day it won't wake you up? Or what if it jacks up your internal body temperature and cooks you alive from the inside? It can do that!

Now you might say you don't worry about that, because through your long life your hypothalamus has proved to be reliable. It predictably does what it needs to do to keep you alive. And you would be right. Your higher cognitive functions have a good working relationship with your lower-level processes.

Similarly, for an AGI to be intelligent it needs to have a good working relationship with the hardware it is running on. That means that if the kernel is temperamental and, idk, keeps descheduling the higher-level AGI process, then the AGI will malfunction and not appear that intelligent. Same as if you meet Albert Einstein while he is chemically put to sleep. He won't appear intelligent at all! At best he will just be drooling there.

> Can you imagine an intelligent process in such a place, as a static representation of data in RAM?

Yes. You can’t? This is not really a convincing argument.

> It all sounds frankly ridiculous.

I think you are looking at implementation details and feeling a disconnect between that and the possibility of intelligence. Do you feel the same ridiculousness about a meatblob doing things and appearing intelligent?

> a computer couldn't spontaneously go down a path that wasn't defined for it.

Can you?

dmwilcox No.43798543
>> Can you imagine an intelligent process in such a place, as a static representation of data in RAM?

> Yes. You can’t? This is not really a convincing argument.

Fair -- I believe it's called begging the question. But for some context: people of many recent technological ages have talked about the brain as if it were the latest piece of technology -- e.g. a printing press, a radio, a TV.

I think we've found what we wanted to find (a hardware/software dichotomy in the brain) and then occasionally get surprised when things aren't all that clearly separated. So with that in mind, I personally -- without any particularly good evidence to the contrary -- am not of the belief that your brain can be represented as a static state. Pribram's holonomic brain theory comes to mind as one way brain state could resist representation in RAM (https://en.m.wikipedia.org/wiki/Holonomic_brain_theory).

> ...you are looking at implementation details and feeling a disconnect between that and the possibility of intelligence. Do you feel the same ridiculousness about a meatblob doing things and appearing intelligent?

If I were a biologist, I might. My grandfather was a microbiologist and scoffed at my atheism. But with a computer, at least, the details are understandable and knowable, having been created by people. We haven't cracked the consciousness of a fruit fly despite having a map of its brain.

>> a computer couldn't spontaneously go down a path that wasn't defined for it.

> Can you?

Love it. I re-read Fight Club recently; it's a reasonable question. The worries of determinism versus free will still loom large in this sort of worldview. We get a kind of "god of the gaps" problem, with free will reduced to the spaces where you don't have an explanation.