No, it is not. In fact it is only a surface glimpse of a much deeper theory behind functions, their approximations, and their representations.
The FFT is nifty but that's FINO. The Google boys also had a few O(N^2) to O(N log N) moments. Those seemed to move the needle a bit as well.
But even if we restrict to "things that made Nano Banana Pro possible" Shannon and Turing leapfrog Fourier.
If anyone wants to see my favorite application of the 2D DFT, I made a video of how the DFT is used to remove rainbows in manga on Kaleido 3 color eink on Kobo Colour:
https://www.amazon.com/Who-Fourier-Mathematical-Transnationa...
I would just suggest that the author replace the sentence “99% of the time, it refers to motion in one dimension” with “most of the time”, since this is a mathematical article and there’s no need for a specific number that doesn’t reflect actual data.
https://jontalle.web.engr.illinois.edu/Public/AllenSpeechPro...
Note the two electric circuit models figs 3.2 & 3.8
More seriously, there are tens of thousands of people who come to HN. If Fourier stuff gets upvoted, it's because people find it informative. I happen to know the theory, but I wouldn't gatekeep.
If your underlying signal is at a frequency that is not a harmonic of the sampling window length, you get "ringing" (spectral leakage) and it's completely unclear how to deal with it (something something Bessel functions)
Actually using DFTs is a nightmare ..
- If I have several dominant frequencies (not sitting exactly on DFT bins) and I want to know them precisely, it's unclear how to do that with an FFT
- If I know the frequency a priori and just want to know the phase shift... also unclear
- If I have missing values... how do I fill the gaps so as to distort the resulting spectrum as little as possible?
- If I have samples that are not equally spaced, how am I supposed to deal with that?
- If my measurements have errors, how do I propagate errors through the FFT to my results?
So outside of audio, where you control a fixed sample rate and the frequencies are all much lower than it... it's really hard to use. I tried to use it for a research project, and while the results looked cool, I just wasn't able to back up my math in a convincing way (though it's been a few years, so I should try again with ChatGPT's hand-holding)
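To make the leakage complaint concrete, here's a toy pure-Python sketch (naive O(N²) DFT; the signals and numbers are my own): a tone that doesn't sit exactly on a DFT bin smears across the whole spectrum, and a Hann window, the standard first-line fix, tames the far-off leakage.

```python
import cmath
import math

def dft(x):
    """Naive O(N^2) DFT -- fine for a toy demo."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def spectrum(x):
    return [abs(c) for c in dft(x)]

N = 64
# A tone exactly on bin 8: all its energy lands in that bin (and its mirror).
on_bin = [math.sin(2 * math.pi * 8 * n / N) for n in range(N)]
# A tone halfway between bins 8 and 9: energy leaks across the whole spectrum.
off_bin = [math.sin(2 * math.pi * 8.5 * n / N) for n in range(N)]

S_on, S_off = spectrum(on_bin), spectrum(off_bin)
# Far from the tone (e.g. bin 20), the on-bin spectrum is numerically ~0,
# while the off-bin spectrum still shows clearly nonzero leakage.

# A Hann window concentrates the leaked energy near the true peak,
# strongly suppressing the far-off leakage.
hann = [0.5 - 0.5 * math.cos(2 * math.pi * n / N) for n in range(N)]
S_win = spectrum([w * s for w, s in zip(hann, off_bin)])
```

Windowing doesn't make the "which frequency exactly?" problem go away, but it keeps an off-bin tone from polluting the rest of the spectrum.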
I recommend people poke around this webpage to get a taste of what a complicated scary monster you're dealing with
Then there was something about circles, and why some people call them some other silly thing?
So far, so utterly meaningless, as far as I could tell. It just seemed like babble meant to make even a kindergartner feel comfortable with the article, but it didn't seem to communicate much of anything, really.
Then there were circles. Some of them were moving, one of them had a sine wave next to it, and some balls were tracing both in sync, indicating which part of the sine wave equalled which part of the circle, I guess?
I understood none of it.
I asked ChatGPT to explain it to me; I think it has read this article, because it used the smoothie analogy as well. I still don't understand what that analogy is meant to mean.
Then finally I found this: If someone plays a piano chord, you hear one sound. But that sound is actually made of multiple notes (multiple frequencies).
The Fourier Transform is the tool that figures out:
which notes (frequencies) are present, and how loud each one is
That, finally, makes sense.
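And you can watch that happen in a few lines. A toy pure-Python sketch (the "chord" is just two sines I made up, at bins 5 and 12, the second one half as loud):

```python
import cmath
import math

def dft(x):
    """Naive O(N^2) DFT -- fine for a toy demo."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

N = 64
# "Chord": two notes mixed into one signal, the second at half amplitude.
chord = [math.sin(2 * math.pi * 5 * n / N) +
         0.5 * math.sin(2 * math.pi * 12 * n / N)
         for n in range(N)]

mags = [abs(c) for c in dft(chord)]
# The two loudest bins (below N/2) recover which notes are present...
peaks = sorted(range(N // 2), key=lambda k: mags[k], reverse=True)[:2]
print(sorted(peaks))              # [5, 12]
# ...and their relative loudness: bin 5 is twice as strong as bin 12.
print(round(mags[5] / mags[12]))  # 2
```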
Though the DFT can be implemented efficiently using the Fast Fourier Transform (FFT) algorithm, the DFT is far from being the best estimator for frequencies contained in a signal. Other estimators (like Maximum Likelihood [ML], [Root-]MUSIC, or ESPRIT) are in general far more accurate - at the cost of higher computational effort.
And as the previous answer said: compressed sensing (or compressive sensing) can help as well for some non-standard cases.
The FFT is still easy to use, and if you want finer frequency resolution (not a higher maximum frequency), you can zero-pad your signal.
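A toy sketch of that (pure Python, naive DFT; the signal and numbers are mine): an off-bin tone's peak snaps to the nearest coarse bin, while zero-padding evaluates the spectrum on a finer grid and lands much closer to the true frequency.

```python
import cmath
import math

def dft(x):
    """Naive O(N^2) DFT -- fine for a toy demo."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

N = 32
true_freq = 8.3  # in bins of the original (unpadded) length
x = [math.sin(2 * math.pi * true_freq * n / N) for n in range(N)]

def peak_freq(sig, pad_to):
    """Locate the strongest positive-frequency peak after zero-padding."""
    padded = sig + [0.0] * (pad_to - len(sig))
    mags = [abs(c) for c in dft(padded)]
    k = max(range(pad_to // 2), key=lambda i: mags[i])
    return k * N / pad_to  # convert back to original-bin units

print(peak_freq(x, N))      # 8.0 -- snapped to the nearest coarse bin
print(peak_freq(x, 8 * N))  # close to 8.3 on the 8x finer grid
```

Note this only interpolates the same underlying spectrum; the peak's hill doesn't get any narrower, as pointed out above.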
Think of the components of a written number: ones, tens, hundreds etc which have a repeating pattern. Digits are inherently periodic. Not too far from periodic basis functions.
Both involve breaking something down into periodic components, and reversing the process by adding up the components.
https://github.com/dsego/strobe-tuner/blob/main/core/dft.odi...
Paper by Stan Osher et al: https://arxiv.org/abs/1104.0262
Zero-padding helps you find the true position (frequency) of a peak in the DFT spectrum, so your frequency estimates can get better. However, the peaks of a DFT are the summits of hills that are usually much wider than those of other techniques (like Capon or MUSIC), whose spectra tend to have much narrower hills. Zero-padding does not increase the sharpness of these hills (does not make them narrower). Likewise, the DFT spectrum tends to be noisier in the frequency domain than those of other techniques, which can lead to false detections (e.g. with a CFAR variant).
The other important point is that Fourier doesn’t really give you frequency and loudness. It gives you complex numbers that can be used to estimate the loudness of different frequencies. But the complex nature of the transform is somewhat more complex than that (accidental pun).
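Concretely, each bin is a single complex number, and both the loudness and the phase fall out of it. A toy pure-Python sketch (the bin-3 cosine and its π/4 offset are my own example) -- this also covers the "I know the frequency a priori and just want the phase shift" case raised earlier in the thread:

```python
import cmath
import math

def dft(x):
    """Naive O(N^2) DFT -- fine for a toy demo."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

N = 64
phase_in = math.pi / 4  # a known phase offset on a bin-3 cosine
x = [math.cos(2 * math.pi * 3 * n / N + phase_in) for n in range(N)]

X3 = dft(x)[3]                 # one complex number for bin 3
amplitude = 2 * abs(X3) / N    # "loudness": scaled magnitude
phase_out = cmath.phase(X3)    # phase shift, recovered from the angle
print(round(amplitude, 3), round(phase_out, 3))  # 1.0 0.785
```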
A fun fact. The Heisenberg uncertainty principle can be viewed as the direct consequence of the nature of the Fourier transform. In other words, it is not an unexplained natural wonder but rather a mathematical inevitability. I only wish we could say the same about the rest of quantum theory!
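You can even see that inevitability numerically. A toy pure-Python sketch (the Gaussian widths are arbitrary choices of mine): halving a Gaussian pulse's width in time roughly doubles the RMS width of its DFT magnitude, so the time-frequency product stays put.

```python
import cmath
import math

def dft(x):
    """Naive O(N^2) DFT -- fine for a toy demo."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def spectral_width(sigma, N=128):
    """RMS width (in bins) of the power spectrum of a centered Gaussian pulse."""
    x = [math.exp(-((n - N // 2) ** 2) / (2 * sigma ** 2)) for n in range(N)]
    power = [abs(c) ** 2 for c in dft(x)]
    # Treat bins above N/2 as negative frequencies when measuring the width.
    freqs = [k if k <= N // 2 else k - N for k in range(N)]
    total = sum(power)
    return math.sqrt(sum(f * f * p for f, p in zip(freqs, power)) / total)

# Half the width in time -> roughly double the width in frequency.
ratio = spectral_width(2.0) / spectral_width(4.0)
print(1.9 < ratio < 2.1)  # True
```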
But it is a lovely, real-world and commonly understood example of how harmonics can work, and thus a nice baby-step into the idea of spectral analysis.
I really don't have any mathematics in my background, so you lost me towards the very end when the actual math came in, but I can't fault your Fourier explanation for not also explaining imaginary numbers: even I can see they're out of scope for this post!