Would an upgraded version of this that was actually capable of capturing the progress of a single laser pulse through the smoke be a way of getting around the one-way speed of light limitation [0]? It seems like if you could measure the pulse's propagation in one direction and then the other (as measured by when it scatters off the smoke at various positions in both directions), that would get around it?
But it's been a while since I read an explanation for why we have the one-way limitation in the first place, so I could be forgetting something.
He could then capture an entire line quite quickly, and would only need a 1 dimensional janky mirror setup to handle the other axis. And his resolution in the rotating axis is limited only by how quickly he can pulse the laser.
Of course, his janky mirror setup could have been 2 off-the-shelf galvos, but I guess that isn't as much "content".
It is not different phases, but it is a composite! On his second channel he describes the process[0]. Basically, it's a photomultiplier tube (PMT) attached to a precise motion control rig and a 2 billion sample/second oscilloscope. So he ends up capturing the actual signal from the PMT over that timespan at a resolution of 2B samples/s, and then repeating the experiment for the next pixel over. Then after some DSP and mosaicking, you get the video.
>It seems like if you could measure the pulse's propagation in one direction and then the other (as measured by when it scatters off the smoke at various positions in both directions), that would get around it?
The point here isn't to measure the speed of light, and my general answer when someone asks "can I get around physics with this trick?" is no. But I'd be lying if I said I totally understood your question.
While the video doesn't touch on this explicitly, the discussion of the different path lengths around 25:00 in is about the trigonometric effect of the different distances of the beam from the camera. Needing to worry about that is the same as grappling with the limitation on the one-way speed.
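To make that geometric effect concrete, here's a minimal sketch (not the video's actual correction, and the points are hypothetical): light scattered from a point farther from the camera arrives later by the extra path length divided by c, which is the kind of per-pixel delay being discussed around that timestamp.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def apparent_delay(scatter_point, camera_point):
    """Extra time before light scattered at scatter_point reaches the camera."""
    return math.dist(scatter_point, camera_point) / C

# A scattering point 1 m farther from the camera appears ~3.3 ns later:
near = apparent_delay((1.0, 0.0, 0.0), (0.0, 0.0, 0.0))
far = apparent_delay((2.0, 0.0, 0.0), (0.0, 0.0, 0.0))
print(far - near)  # ~3.34e-09 seconds
```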
Even if you had a clock and camera for every pixel, the sync is dependent on the path the signal takes. Even if you sent a signal along every possible route and had a clock for each route for each pixel (a dizzyingly large number), it still isn't clear that this would represent a single inertial frame. As I understand it, even if you used quantum entanglement for sync, the path of the measurement would still be an issue. I suggest not thinking about this at all; it seems like an effective way to go mad: https://arxiv.org/pdf/gr-qc/0202031
E: Do not trust my math under any circumstances but I believe the number of signal paths would be something like 10^873,555? That's a disgustingly large number. This would reveal whether the system is in a single inertial frame (consistency around loops), but it does not automatically imply a single inertial frame. It's easy to forget that the earth, galaxy, etc are also still rotating while this happens.
If on the other hand one could detect a photon by sending out a different field, maybe a gravitational wave instead... well it might work, but the gravitational wave might be affected in exactly the same way that the EM field is affected.
It captures two billion pixels per second. Essentially he captures the same scene many times (presumably 921,600 times to form a full 720p picture), watching a single pixel at a time, and composites all the captures together to form frames.
I suppose that for entirely deterministic and repeatable scenes, where you also don't care too much about noise and have effectively unlimited time on your hands to capture 1 ms of footage, then yes, you can effectively visualize 2B frames per second! But not capture.
He mentions this as the inspiration in his previous video (https://youtu.be/IaXdSGkh8Ww).
And there are a million milliseconds in roughly 17 minutes. It doesn't take that long to capture all the angles you need, so long as you have an automated setup for recreating the scene you are videoing.
He scans one line at a time with a mirror into a photomultiplier tube which can detect single-photon events. This is captured continually at 2 GSample/s (2 billion times per second: 2B FPS) with an oscilloscope and a clever hack.
The laser is actually pulsing at 30 kHz, and the oscilloscope capture is synchronized to the laser pulse.
So we consider each pulse of that 30 kHz train a single event in a single pixel (even though the mirror is rotating continuously). So he runs the experiment 30,000 times per second, each one recording a single pixel at 2B FPS for a few microseconds. Each pixel-sized video is then tiled into a cohesive image.
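As a rough sanity check on those numbers, here's a small back-of-the-envelope sketch (the sample rate, pulse rate, ~1 µs capture window, and 1280x720 resolution are taken from the video and this thread; treat it as an estimate, not the exact pipeline):

```python
# Back-of-the-envelope numbers for the capture scheme described above.
SAMPLE_RATE = 2e9       # oscilloscope: 2 billion samples per second
PULSE_RATE = 30e3       # laser repetition rate: 30 kHz
CAPTURE_WINDOW = 1e-6   # roughly 1 microsecond digitized per pulse (per pixel)
WIDTH, HEIGHT = 1280, 720

samples_per_pixel = int(SAMPLE_RATE * CAPTURE_WINDOW)  # 2000 time steps, i.e. 2000 "frames"
pixels = WIDTH * HEIGHT                                 # 921,600 pixels per image
min_laser_time = pixels / PULSE_RATE                    # ~31 s of pulses for one full image

print(f"{samples_per_pixel} samples per pixel, {pixels} pixels")
print(f"at least {min_laser_time:.0f} s of laser time per full image")
```

The gap between that ~31-second lower bound and the roughly one hour of recording mentioned elsewhere in the thread is consistent with the scope-readout bottleneck described below.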
As you say: It does capture two billion pixels per second. It does watch a single pixel at a time, 921,600 times. And these pixels [each individually recorded at 2B FPS] are ultimately used to create a composition that embodies a 1280x720 video.
That's all correct.
And your summary is also correct: It definitely does not really capture 2 billion frames per second.
Unless we're severely distorting the definition of a "video frame" to also include "one image in a series of images that can be as small as one pixel," accomplishing 2B entire frames per second is madness with today's technology.
As stated at ~3:43 in the video: "Basically, if you want to record video at 2 billion frames per second, you pretty much can't. Not at any reasonable resolution, with any reasonably-accessible consumer technology, for any remotely reasonable price. Which is why setups like this kind of cheat."
You appear to be in complete agreement with AlphaPhoenix, the presenter of this very finely-produced video.
What is your definition of "video frame" if not this?
> that can be as small as one pixel,"
Why would this be a criterion for the images? If it is, what is the minimum resolution to count as a video frame? Must I have at least two pixels for some reason? Four, so that I have a grid? These seem like weird constraints to try to attach to the definition when they don't enable anything that the 1x1 camera doesn't, nor are the devices that capture them meaningfully harder to build.
I agree the final result presented to the viewer is a composite... but it seems to me that it's a composite of a million videos.
OMG this was back in 1979-1981.
[0]: https://ru.wikipedia.org/wiki/%D0%AD%D0%BB%D0%B5%D0%BA%D1%82...
If I were to agree with this, then would you be willing to agree that the single-pixel ambient light sensor adorning many pocket supercomputers is a camera?
And that recording a series of samples from this sensor would result in a video?
Just want to make it clear that in any one instant, only one pixel is being recorded. The mirror moves continuously across a horizontal sweep and a certain arc of the mirror's sweep is localized to a pixel in the video encoding sequence. A new laser pulse is triggered when one pixel of arc has been swept, recording a whole new complete mirror bounce sequence for each pixel sequentially. He has an additional video explaining the timing / triggering / synchronization circuit in more depth: https://youtu.be/WLJuC0q84IQ
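A minimal sketch of that triggering idea, as I understand it from the description above (the arc values and function names are hypothetical, not from his actual circuit): fire a new pulse and start a new scope capture each time the continuously sweeping mirror has covered one more pixel's worth of arc.

```python
# Hypothetical sketch of per-pixel triggering during a continuous mirror sweep.
PIXELS_PER_LINE = 1280
SWEEP_ARC = 0.05                          # radians covered by one horizontal sweep (assumed)
ARC_PER_PIXEL = SWEEP_ARC / PIXELS_PER_LINE

def sweep_one_line(read_mirror_angle, fire_pulse_and_capture):
    """Trigger one laser pulse + scope capture per pixel as the mirror sweeps a line."""
    next_trigger = ARC_PER_PIXEL
    pixel = 0
    while pixel < PIXELS_PER_LINE:
        if read_mirror_angle() >= next_trigger:   # one more pixel of arc has been swept
            fire_pulse_and_capture(pixel)         # record ~1 us at 2 GS/s for this pixel
            pixel += 1
            next_trigger += ARC_PER_PIXEL
```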
The downside is it only works with repetitive signals.
I find it interesting that a project like this would easily be a PhD paper, but nowadays Youtubers do it just for the fun of it.
And yes, this person could make use of it. His videos are among the highest-quality science explainers - he’s like the 3B1B of first principles in physics. Truly a savant at creating experiments that demonstrate fundamental phenomena. Seriously, check out any of his videos. He made one that weighs an airplane overhead. His videos on the speed of electricity, the speed of motion, and Ohm's law are fantastic.
Note that this camera (like any camera) cannot observe photons as they pass through the slits -- it can only record photons once they've bounced off the main path. Thus you will never record the interference-causing photons mid-flight, and you'll get a standard interference pattern.
[1]: https://www.researchgate.net/figure/The-apparatus-used-in-th...
Though it wouldn't really be showing you the quantum effect; that's only proven with individual photons at a time. This technique sends a "big" pulse of light relying on some of it being diffusely reflected to the camera at each oscilloscope timestep.
Truly sending individual photons and measuring them is likely impractical, as you'd have to wait a huge time collecting data for each pixel, just hoping a photon happens to bounce directly into the photomultiplier tube.
All light in a narrow cone extending from the camera gets recorded to one pixel, entirely independently from other pixels. There's no reason this would be blurry. Blur is an artifact created by lenses when multiple pixels interact.
There is a lens in the apparatus, which is used to project the image from the mirror onto the pinhole, but it is configured so the plane of the laser is in focus at the pinhole.
What I don't understand is how the projection remains in focus as the mirror sweeps to the side, but perhaps the change in focus is small enough not to matter.
"Equivalent time sampling" is a different technique which involves sliding the sampling point across the signal to rebuild the complete picture over multiple repetitions of the signal.
https://www.tek.com/en/documents/application-note/real-time-...
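A toy illustration of the equivalent-time idea (purely a sketch; the waveform and numbers are made up): the digitizer takes only one real sample per trigger, but by nudging the sampling instant a little later on each repetition of a repetitive signal, you rebuild the waveform at a far finer effective time resolution than the digitizer could achieve in real time.

```python
import math

# Equivalent-time sampling toy: one sample per repetition, slid across the waveform.
FREQ = 5e9                      # hypothetical 5 GHz repetitive test signal
PERIOD = 1 / FREQ
EFFECTIVE_STEP = PERIOD / 100   # desired effective resolution: 100 points per period

def signal(t):
    """The repetitive waveform being measured (identical on every trigger)."""
    return math.sin(2 * math.pi * FREQ * t)

reconstructed = []
for k in range(100):
    sample_time = k * EFFECTIVE_STEP       # delay the single sample a bit more each repetition
    reconstructed.append(signal(sample_time))

# 'reconstructed' now holds one full period sampled at an effective 500 GS/s,
# even though only one real sample was taken per trigger.
```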
Overall, recording one frame took approximately an hour.
From what I remember, recording one frame took about an hour.
But in principle, a LIDAR could be reconfigured for the purposes of such a demonstration.
If one wants to build the circuit from scratch, then specifically for such applications there exist very inexpensive time-to-digital converter chips. For example, the Texas Instruments TDC7200 costs just a few dollars and has a time uncertainty of some tens of picoseconds.
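For a sense of what that timing uncertainty means for a time-of-flight measurement, here's a trivial sketch (just the standard round-trip conversion, nothing specific to the TDC7200's interface):

```python
C = 299_792_458.0  # speed of light, m/s

def round_trip_to_distance(round_trip_seconds: float) -> float:
    """Convert a measured round-trip time of flight into a one-way distance."""
    return C * round_trip_seconds / 2

# Tens of picoseconds of timing uncertainty is sub-centimetre range uncertainty:
print(round_trip_to_distance(50e-12) * 1000, "mm of range per 50 ps")  # ~7.5 mm
```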
For each laser pulse, one microsecond of the received signal was digitized at a sample rate of 2 billion samples per second, producing a vector of light intensity indexed by time.
A large number of vectors were stored, each tagged by the pixel XY coordinates which were read out from the mirror position encoders. In post-processing, this accumulated 3D block of numbers was sliced time-wise into 2D frames, making the sequence of frames for the clip.
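A minimal NumPy sketch of that post-processing step (the shapes are assumed from the numbers in this thread, and the demo array is shrunk so it fits comfortably in memory; the real block would be roughly 1280 x 720 x 2000 values):

```python
import numpy as np

# Demo-sized (y, x, time) block; the real one would be ~720 x 1280 x 2000.
WIDTH, HEIGHT, SAMPLES = 64, 36, 200
block = np.zeros((HEIGHT, WIDTH, SAMPLES), dtype=np.float32)

def store_capture(x: int, y: int, trace: np.ndarray) -> None:
    """Store one pixel's intensity-vs-time vector, tagged by its encoder-derived XY."""
    block[y, x, :] = trace

# Slicing the block time-wise yields the video: frame t is every pixel's intensity
# at sample index t, i.e. a snapshot 0.5 ns after the previous one at 2 GS/s.
frames = [block[:, :, t] for t in range(SAMPLES)]
```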
Some possible improvements.
1. Replace the big heavy mirror with a pair of laser galvos. They're literally designed for this and will be much faster and more precise.
Example:
https://miyalaser.com/products/miya-40k-high-performance-las...
2. Increase the precision of the master clock. There's some time smearing along the beam. It's not that hard to make clocks with nanosecond resolution, and picosecond resolution is possible, although it's a bit of a project. (See the rough numbers after this list for how timing jitter maps to smear along the beam.)
3. As others have said, time-averaging multiple runs would reduce the background noise.
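On point 2, a quick sketch of how timing uncertainty translates into spatial smearing along the beam (simple distance = c x jitter, nothing specific to this build):

```python
C = 299_792_458.0  # m/s

def smear_length(timing_jitter_seconds: float) -> float:
    """Spatial smear along the beam caused by a given timing uncertainty."""
    return C * timing_jitter_seconds

print(smear_length(1e-9))   # ~0.30 m of smear per nanosecond of jitter
print(smear_length(1e-12))  # ~0.0003 m (0.3 mm) per picosecond of jitter
```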
Check out his previous video <https://www.youtube.com/watch?v=IaXdSGkh8Ww> for more details about that part.
It’s super cool that AlphaPhoenix is able to get comparable results in his garage. These academic versions use huge lab-bench optic setups. They wind up with technically higher-quality results, but AlphaPhoenix’s video is more compelling.
I would say that everyone - you, other commenters disagreeing with you, and the video - are all technically correct here, and it really comes down to semantics and how we want to define fps. Not really necessary to debate in my opinion since the video clearly describes their methodology, but useful to call out the differences on HN where people frequently go straight to the comments before watching the video.
Though I don't think it would speed things up much. From what he was saying in one of the appendix videos on his second channel, he doesn't do things like triggering the laser multiple times for each pixel to reduce noise, because the bottleneck is copying the data off of the scope, and that would stretch the run from hours to days.
light moves AT the speed of causality in that frame of time
causality appears to have a MAXIMUM limit in this universe in an "empty" void
but every time you hear a story of how they "slowed down light" what they actually did is make causality more complex in a dense medium, so slower
I think some of the short range depth cameras (Kinect v2 was one) use time-of-flight technique, and could in principle be reconfigured to perform a similar demonstration, though it would not be as "homemade" and cool as the system built for the Youtube video.
But as sibling said, this is still a measurement and will collapse the quantum system. You can't use this to peek under the hood and look at the quantum mechanics.
https://www.tek.com/en/products/oscilloscopes https://www.keysight.com/us/en/catalog/key-34771/infiniivisi...
"Sampling" oscilloscopes are a much less common product -- they are useful for analyzing signals that are too fast to digitize in the ordinary way. They typically sample at a very slow repetition rate -- some hundreds of kilohertz, but each sampling aperture can be exceptionally short, allowing to record signals to 100 GHz frequency.
Hard to justify going with the legacy brands as a hobbyist with what comes out of China these days. Rigol and Siglent make very capable hardware, although the UI can be a bit painful if you're used to something else. Micsig produces very capable probes for a small fraction of the cost of the legacy players, even high-end specialty stuff like optically isolated probes. Those are still thousands, but try pricing the equivalent from Tek or Agilent; it would buy you a sedan.
It takes >900k pixels to shoot one frame of that (amazing) video, and acquiring each of those pixels required physically moving a mirror along X and Y to align the single-pixel camera properly.
There isn't really a shutter at all, whether mechanical or electrical. And from my understanding, a "rolling shutter" usually refers to things like reading out a CCD array or similar, or maybe some mechanical aspect of a film camera.
But this isn't an array of anything. It's just a pixel, and some very clever work with motors, lenses, and mirrors.
(Up next: Someone will show up to tell me that an array of 1 item is still an array. yawn)
edit: saw below that he is using a continuous scan, so randomizing it probably wouldn't be workable