The answer is actually rather simple. AM stations are limited to 10 kHz of bandwidth; FM gets 200 kHz. More bandwidth allows representing a higher-fidelity signal…
If we look only at the audio bandwidth, AM stations are limited to 5 kHz of audio spectrum. The 10 kHz figure comes from the fact that AM is double sideband modulation (as opposed to single sideband as used in ham radio and other radio services). So the broadcast signal uses twice the bandwidth of the audio.
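To make that concrete, here's a minimal numpy sketch (all values illustrative, carrier scaled way down so the whole thing fits in one FFT): modulating a single tone with plain double-sideband AM produces mirrored sidebands at carrier ± tone, so the RF signal occupies twice the audio bandwidth.

    import numpy as np

    fs = 100_000                               # sample rate, Hz (illustrative)
    t = np.arange(0, 0.1, 1 / fs)

    audio = np.cos(2 * np.pi * 4_000 * t)      # a 4 kHz "audio" tone
    carrier = np.cos(2 * np.pi * 30_000 * t)   # 30 kHz carrier, scaled down for the demo
    am = (1 + 0.5 * audio) * carrier           # standard double-sideband AM

    spectrum = np.abs(np.fft.rfft(am))
    freqs = np.fft.rfftfreq(am.size, 1 / fs)

    # Energy appears at the carrier plus mirrored sidebands at carrier +/- 4 kHz,
    # so a 4 kHz tone ends up occupying 8 kHz of RF spectrum.
    print(np.round(freqs[spectrum > 0.1 * spectrum.max()]))  # -> [26000. 30000. 34000.]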
FM stations have 15 kHz of audio bandwidth, three times that of AM. They can afford this because FM broadcasting sits at much higher carrier frequencies (roughly 88–108 MHz, versus around 0.5–1.7 MHz for AM), where there is room for much wider channel allocations.
The 200 kHz figure includes other things like stereo (two channels of audio), subcarriers for RDS data and such, and the "Carson bandwidth rule" that 'basementcat' mentioned.
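For the curious, Carson's rule is just arithmetic (the sketch below uses the mono 15 kHz audio figure; the stereo multiplex and subcarriers extend higher):

    # Carson's rule estimates the bandwidth occupied by an FM signal:
    #   B ~= 2 * (peak frequency deviation + highest modulating frequency)
    peak_deviation_khz = 75   # US broadcast FM deviation limit
    highest_audio_khz = 15    # mono audio bandwidth
    print(2 * (peak_deviation_khz + highest_audio_khz))  # 180 kHz, inside the 200 kHz channel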
I am surprised that the article overlooked this simple and obvious explanation.
In physics, when a wave passes from one medium to another, its frequency is supposed to stay the same. Even if this isn't perfectly true in the real world, I would think amplitude is more likely to decrease due to obstacles, distance, and the medium absorbing some energy.
Also, the information in AM is carried by the relative amplitude of the signal. Flat attenuation like you're describing doesn't really distort the AM signal. What does impact both AM and FM is frequency selectivity. Imagine light traveling through a prism and being split by frequency. If there are obstacles in the way, some colors won't pass through as well. This can cause distortions in FM as the receiver loses lock on the signal. AM suffers from this too, but people are less likely to notice because they're used to these distortions -- these kinds of effects happen with sound too.
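If you want to see the "prism" effect numerically, here's a toy two-path channel in numpy (the echo gain and delay are made-up numbers): the echo makes the channel's gain dip sharply at some frequencies while boosting others, even though each path alone is flat.

    import numpy as np

    # Toy two-path channel: direct signal plus one echo (gain 0.8, 10 samples late).
    h = np.zeros(64)
    h[0], h[10] = 1.0, 0.8

    H = np.abs(np.fft.rfft(h, 1024))   # channel's gain vs. frequency
    print(f"gain varies from {H.min():.2f} to {H.max():.2f}")   # ~0.20 to ~1.80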
As other posters have mentioned, the reason FM sounds better is that it has more bandwidth for the signal.
Although, while we care about the relative amplitudes in AM, AWGN makes them harder to pick out if the signal is attenuated. Is the same idea true for frequencies? I don't see a direct parallel here.
That's why commercial FM broadcasting uses a ±75 kHz deviation even though it was originally only transmitting audio of ≤20 kHz. Adding all this extra bandwidth to an AM station wouldn't actually help, because beyond ±20 kHz, you're only improving your radio station's ability to reproduce ultrasound. But it does help FM; it greatly reduces the amplitude of demodulated noise, because, even without a PLL, the frequency deviation caused by additive white noise increases much more slowly with bandwidth than the frequency deviation you can use for your signal. With a PLL, I think the frequency deviation caused by additive white noise basically doesn't increase at all with bandwidth. (I guess I should simulate this; it should be pretty easy.)
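In case anyone wants to try that simulation, here's a rough numpy sketch of what I mean (my own construction, not anyone's reference code: complex-baseband FM, a simple phase-difference discriminator rather than a PLL, brick-wall audio filter, illustrative numbers throughout). The channel noise power is identical in both runs; only the deviation changes.

    import numpy as np

    rng = np.random.default_rng(0)
    fs = 1_000_000                          # sample rate, Hz
    t = np.arange(0, 0.05, 1 / fs)
    audio = np.cos(2 * np.pi * 1_000 * t)   # 1 kHz test tone

    def lowpass(x, cutoff_hz):
        # Crude brick-wall lowpass via FFT; fine for a demo.
        X = np.fft.rfft(x)
        X[np.fft.rfftfreq(x.size, 1 / fs) > cutoff_hz] = 0
        return np.fft.irfft(X, x.size)

    def output_snr_db(deviation_hz, noise_std=0.05):
        # Frequency-modulate at complex baseband, add AWGN, then demodulate
        # by differentiating the unwrapped phase (a simple discriminator).
        phase = 2 * np.pi * deviation_hz * np.cumsum(audio) / fs
        rx = np.exp(1j * phase) + noise_std * (
            rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size))
        inst_freq = np.diff(np.unwrap(np.angle(rx))) * fs / (2 * np.pi)
        recovered = lowpass(inst_freq, 15_000) / deviation_hz
        noise = recovered - audio[1:]
        return 10 * np.log10(np.mean(audio[1:] ** 2) / np.mean(noise ** 2))

    for dev in (5_000, 75_000):
        print(f"deviation {dev // 1000:>2d} kHz -> output SNR ~ {output_snr_db(dev):.0f} dB")

With the same additive noise, the ±75 kHz run comes out with a dramatically higher post-detection SNR than the ±5 kHz run, which is exactly the trade: spend RF bandwidth, get noise immunity back.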
Unfortunately neither Cook's article nor the flashlight analogy explains any of this.
It's definitely easier to understand in the Fourier domain.
You can think of it like this: the noise is not about the phase changing; it is about your ability to tell what the phase is. The noisier the signal gets, the harder it is to tell what the amplitude is, and likewise what the phase is.
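A quick numpy illustration of that (the numbers are arbitrary): hold a constant complex sample, add increasing noise, and the spread of both the measured amplitude and the measured phase grows together.

    import numpy as np

    rng = np.random.default_rng(1)
    signal = np.exp(1j * 0.7)        # amplitude 1, phase 0.7 rad, held constant
    for noise_std in (0.01, 0.1, 0.5):
        rx = signal + noise_std * (rng.standard_normal(10_000)
                                   + 1j * rng.standard_normal(10_000))
        print(f"noise {noise_std}: amplitude spread {np.abs(rx).std():.3f}, "
              f"phase spread {np.angle(rx).std():.3f} rad")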