
174 points Philpax | 5 comments
dmwilcox ◴[] No.43722753[source]
I've been saying this for a decade already but I guess it is worth saying here. I'm not afraid AI or a hammer is going to become intelligent (or jump up and hit me in the head either).

It is science fiction to think that a system like a computer can behave at all like a brain. Computers are incredibly rigid systems with only the limited variance we permit. "Software" is flexible in comparison to dedicated circuits for our computations, but it is nothing compared to our minds.

Ask yourself, why is it so hard to get a cryptographically secure random number? Because computers are pure unadulterated determinism -- put the same random seed value in your code and get the same "random numbers" every time in the same order. Computers need to be like this to be good tools.
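
(A minimal illustration of that determinism, in Python -- the seed value and count here are arbitrary:)

    import random

    random.seed(1234)                            # fix the seed
    first = [random.random() for _ in range(5)]

    random.seed(1234)                            # same seed again
    second = [random.random() for _ in range(5)]

    print(first == second)                       # True: same seed, same "random" sequence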

Assuming that AGI is possible in the kinds of computers we know how to build means that we think a mind can be reduced to a probabilistic or deterministic system. And from my brief experience on this planet I don't believe that premise. Your experience may differ and it might be fun to talk about.

In Aristotle's Ethics he talks a lot about ergon (function or purpose) -- hammers are different from people, computers are different from people; they have an obvious purpose (because they are tools made with an end in mind). Minds strive -- we have desires, wants, and needs -- even if it is simply to survive or, better yet, thrive (eudaimonia).

An attempt to create a mind is another thing entirely and not something we know how to start. Rolling dice hasn't gotten anywhere. So I'd wager AGI somewhere in the realm of 30 years to never.

replies(12): >>43722893 #>>43722938 #>>43723051 #>>43723121 #>>43723162 #>>43723176 #>>43723230 #>>43723536 #>>43723797 #>>43724852 #>>43725619 #>>43725664 #
LouisSayers ◴[] No.43723176[source]
What you're mentioning is like the difference between digital and analog music.

For generic stuff you probably can't tell the difference, but once you move to the edges you start to hear the steps in digital vs the smooth transition of analog.

In the same way, AI runs on bits and bytes, and there's only so much detail you can fit into that.

You can approximate reality, but it'll never quite be reality.

I'd be much more concerned with growing organic brains in a lab. I wouldn't be surprised to learn that people are covertly working on that.

replies(1): >>43723258 #
Borealid ◴[] No.43723258[source]
Are you familiar with the Nyquist–Shannon sampling theorem?

If so, what do you think about the concept of a human "hear[ing] the steps" in a digital playback system using a sampling rate of 192 kHz, a rate at which many high-resolution files are available for purchase?

How about the same question, but at a sampling rate of 44.1 kHz, the way a normal "Red Book" music CD is encoded?
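
(For anyone who wants to see the theorem at work numerically, here's a rough numpy sketch -- the 1 kHz test tone and window sizes are arbitrary choices, and the truncated sinc kernel is only accurate away from the edges of the window:)

    import numpy as np

    fs = 44100.0                      # CD sample rate
    f = 1000.0                        # test tone well below Nyquist (fs/2 = 22050 Hz)
    n = np.arange(4096)
    samples = np.sin(2 * np.pi * f * n / fs)

    # Ideal (sinc) reconstruction at points between the samples --
    # the reconstruction the sampling theorem says is possible.
    t = np.arange(1000, 3000, 0.25)   # quarter-sample offsets, away from the edges
    recon = np.array([np.dot(samples, np.sinc(ti - n)) for ti in t])
    exact = np.sin(2 * np.pi * f * t / fs)

    print(np.max(np.abs(recon - exact)))   # small: the samples pin down the waveform between them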

replies(2): >>43723320 #>>43723564 #
1. LouisSayers ◴[] No.43723564[source]
I have no doubt that if you sample a sound at high enough fidelity you won't hear a difference.

My comment about digital vs analog was more of an analogy about producing sounds rather than playing back samples, though.

There's a Masterclass with Joel Zimmerman (DeadMau5) where he explains the stepping effect in his music production. Perhaps he just needs a software upgrade, but there was a lesson where he demonstrated the effect, and it was audibly noticeable when comparing digital and analog equipment.

replies(1): >>43723646 #
2. Borealid ◴[] No.43723646[source]
You are correct, and that "high enough fidelity" is the rate at which music has been sampled for decades.
replies(1): >>43723820 #
3. LouisSayers ◴[] No.43723820[source]
The problem I'm mentioning isn't about the fidelity of playback, but about the discreteness of the samples themselves.

There are an infinite number of frequencies between two points - point 'a' and point 'b'. What I'm talking about are the "steps" you hear as you move across the frequency range.

replies(1): >>43724031 #
4. kazinator ◴[] No.43724031{3}[source]
Of course there is a limit to the frequency resolution of a sampling method. I'm skeptical you can hear the steps though, at 44.1 kHz or better sampling rates.

Let's say that the shortest interval at which our hearing has good frequency acuity (say, as good as it can be) is 1 second.

In this interval, we have 44100 samples.

Let's imagine the samples graphically: a "44K" pixel wide image.

We have some waveform across this image. What is the smallest frequency stretch or shrink that will change the image? Note: not necessarily one that is audible, just one that changes the pixels.

If we grab one endpoint of the waveform and move it by less than half a pixel, there is no difference, right? We have to stretch it by a whole pixel.

Let's assume that some people (perhaps most) can hear that difference. It might not be true, but it's the weakest assumption.

That's a 0.0023 percent difference!

One cent (1/100th of a semitone) is a 0.058% difference, so the difference we are considering is about 25 times smaller.

I really don't think you can hear a 1/25-of-a-cent difference in pitch over an interval of one second, or even longer.
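
(The arithmetic above as a few lines of Python, for anyone who wants to check it:)

    one_sample = 1 / 44100              # one pixel out of 44100: ~0.0023 %
    one_cent = 2 ** (1 / 1200) - 1      # one cent is a frequency ratio of 2^(1/1200): ~0.058 %

    print(f"{one_sample * 100:.4f}%")       # 0.0023%
    print(f"{one_cent * 100:.3f}%")         # 0.058%
    print(f"{one_cent / one_sample:.0f}x")  # ~25x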

Over time scales shorter than a second, the resolution of our pitch perception gets worse.

E.g. when a violinist is playing a really fast run, you don't notice if the notes have intonation that is off. The longer "landing" notes in the solo have to be good.

When a pitch error is slight, we need not only longer notes but also to hear them together with other notes, because the beats between them are an important clue (and in fact the artifact we will find most objectionable).

Pre-digital technology does not have frequency resolution that good. I don't think you can get tape to move at a speed that stays within 0.0023 percent of a set target. In consumer tape equipment, you can hear audible "wow" and "flutter" as the tape speed oscillates. When the frequency of a periodic signal wobbles, you get new signals in there: sidebands.
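
(A quick numpy sketch of that sideband effect -- the 1 kHz carrier, 5 Hz flutter rate, and 2 Hz depth are made-up numbers, not measurements of any real tape machine:)

    import numpy as np

    fs = 48000
    t = np.arange(2 * fs) / fs                                  # 2 seconds
    carrier, flutter_rate, flutter_depth = 1000.0, 5.0, 2.0     # Hz

    # A tone whose frequency wobbles: integrate the instantaneous
    # frequency to get the phase, then synthesize the signal.
    inst_freq = carrier + flutter_depth * np.sin(2 * np.pi * flutter_rate * t)
    tone = np.sin(2 * np.pi * np.cumsum(inst_freq) / fs)

    spectrum = np.abs(np.fft.rfft(tone))
    freqs = np.fft.rfftfreq(len(tone), 1 / fs)

    def level(f_hz):
        # spectral magnitude at the bin nearest f_hz
        return spectrum[np.argmin(np.abs(freqs - f_hz))]

    # Sidebands appear at carrier +/- flutter_rate; a bin far from the
    # carrier is near silent by comparison.
    print(level(995), level(1000), level(1005), level(1200))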

I don't think that there is any perceptible aspect of sound that is not captured in the ordinary consumer sample rates and sample resolutions. I suspect 48 kHz and 24 bits is way past diminishing returns.

I'm curious what it is that Deadmau5 thinks he discovered, and under what test conditions.

replies(1): >>43724206 #
5. kazinator ◴[] No.43724206{4}[source]
Here is a way we could hear a 0.0023 percent difference in pitch: via beats.

Suppose we sample a precise 10,000.00 Hz analog signal (a sinusoid) and speed up the sampled signal by 0.0023 percent. It will have a frequency of 10,000.23 Hz.

The f2 - f1 difference between them is 0.23 Hz, which means that if they are mixed together, we will hear beats at 0.23 Hz: roughly one beat every four seconds.

So in this contrived way, where we have the original source and the digitized one side by side, we can obtain an audible effect correlating to the steps in resolution of the sampling method.
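
(Roughly like this, in numpy -- the 10 second duration and 50 ms analysis windows are arbitrary:)

    import numpy as np

    fs = 48000
    t = np.arange(10 * fs) / fs                 # 10 seconds
    f1 = 10_000.0
    f2 = f1 * 1.000023                          # sped up by 0.0023 % -> 10,000.23 Hz

    mix = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

    # Peak level in consecutive 50 ms windows: it swells and fades at
    # |f2 - f1| = 0.23 Hz, i.e. roughly one beat every four seconds.
    win = int(0.05 * fs)
    envelope = np.abs(mix).reshape(-1, win).max(axis=1)
    print(envelope[::10].round(2))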

I'm guessing Deadmau5 might have set up an experiment along these lines.

Musicians tend to be oblivious to something like 5-cent errors in the intonation of their instruments, in the lower registers. E.g. world-renowned guitarists play on axes that have no nut compensation, without which you can't even get close to accurate intonation.