I read something interesting recently, but I'm not sure whether it's true: that as you age, your visual integration frame rate decreases.
So yes, any image was extremely ephemeral at the time.
PS: Apparently it’s called a Noddy, it’s a video camera controlled by a servomotor to pan and tilt (or 'nod', hence the name Noddy): https://en.wikipedia.org/wiki/Noddy_(camera)
The problem in that video is that the exact location the beam is hitting is momentarily very bright, so they calibrated the exposure to that and everything else looks really dark.
In a sense, all vision is.
[0] https://antiqueradio.org/art/RCACTC-11ConvergBoardNewRC.jpg
The exact sizes, shapes, and positions of the pigment dot triples (and/or the mask holes) are presumably chosen so that this holds even away from the main axis. Also, the shape of the deflecting field is probably tuned to keep the rays as well focused as possible, similarly to how photographic lenses are carefully designed to minimize aberrations and softness even far from the optical axis.
(*) Simplifying a bit by assuming that the beam gets deflected immediately as it leaves the gun, which is of course inaccurate.
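For intuition, here's a toy similar-triangles sketch of how one mask hole maps each gun onto its own dot; every number below is an assumed ballpark, not something from a datasheet:

    # Toy model of a shadow-mask tube: three guns, mounted a few mm
    # apart, fire through the same mask hole. Because the beams arrive
    # at slightly different angles, they land on different spots of the
    # phosphor layer behind the hole. All figures are rough assumptions.

    gun_separation = 5.5e-3   # m, spacing between adjacent guns (assumed)
    gun_to_mask    = 0.40     # m, throw distance from guns to mask (assumed)
    mask_to_screen = 0.010    # m, gap between mask and phosphor layer (assumed)

    # Similar triangles: the landing-spot separation is the gun
    # separation scaled down by the ratio of the two distances.
    spot_sep = gun_separation * mask_to_screen / gun_to_mask
    print(f"spot separation ~ {spot_sep * 1e3:.2f} mm")  # ~0.14 mm, about one dot over

Off-axis, the effective throw distance changes as the beam deflects, which is exactly why the hole and dot geometry has to be tuned across the whole screen.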
[0] https://blurbusters.com/wp-content/uploads/2018/01/crt-phosp...
[1] https://www.researchgate.net/figure/Phosphor-persistence-of-...
[2] https://www.researchgate.net/figure/Stimulus-succession-on-C...
As a result, monochrome terminal text has a surprising sharpness to it (surprising, at least, if you are used to color displays). But the real visual treat is the long-persistence phosphor radar scopes.
https://i.sstatic.net/5K61i.png
The brightly-lit band is the part of the frame scanned by the beam while the shutter was open. The part above is the afterimage, which, while not as bright, is definitely there.
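A quick sanity check on that (the actual shutter speed is a guess on my part): the freshly scanned band should cover roughly shutter time divided by field time of the screen height, since the beam sweeps top to bottom once per field.

    # Fraction of the screen painted while the shutter was open.
    field_time_s = 1 / 60    # 60 Hz field rate (assumed NTSC-style)
    shutter_s    = 1 / 1000  # camera shutter speed (assumed)

    band_fraction = shutter_s / field_time_s
    print(f"{band_fraction:.1%} of the screen height")  # ~6.0%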
Composite color video, as far as I understand, does have a limit to the horizontal resolution: in all three standards the color information is encoded as a high-frequency subcarrier added to the main (luminance) signal, so that frequency is your upper limit on how quickly the luminance can change.
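To put a rough number on that ceiling, here's a back-of-the-envelope sketch with nominal timing figures (real decoders with comb filters can recover somewhat more detail than this suggests):

    # Treat the color subcarrier frequency as the luminance bandwidth
    # ceiling; by Nyquist, bandwidth f supports about 2*f alternating
    # light/dark samples per second across the active line.
    def luma_limit(subcarrier_hz, active_line_s):
        return 2 * subcarrier_hz * active_line_s

    print(round(luma_limit(3.579545e6, 52.6e-6)))  # NTSC: ~377 "pixels" per line
    print(round(luma_limit(4.433619e6, 52.0e-6)))  # PAL:  ~461 "pixels" per line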
S-video, VGA, and component keep the color separate from the luminance, so in theory they allow unlimited horizontal resolution for both; in practice the limit is just the analog bandwidth of the cable and electronics.
Genuine question: why do you think CRTs are better?
> Genuine question: why do you think CRTs are better?
CRTs are worse in most aspects than modern displays, but they are better in motion clarity. As to why I think that: I used both in parallel for many years. The experience for moving objects is very different. It is a well-known drawback of sample-and-hold display technologies. And it is supported by the more systematic analyses done by the likes of Blur Busters.
Not necessarily. For example, on VR headsets the LCD/OLED panel runs in low-persistence mode and only lights the picture for about 10% of the frame time (on the order of 1 ms at 90 Hz).
The Noddy was used since it was a live broadcast and “allowed the idents to be of no fixed length as the clock symbols could continue for many minutes at a time”.
So, it’s not really because they couldn’t store video. It’s because they needed an indefinite amount of video for the clock idents and couldn’t generate them digitally.
They have many disadvantages, but an advantage is that CRTs mostly remove the "persistence blur" induced by smooth pursuit eye movements on sample-and-hold displays like LCD and OLED. Here is an explanation:
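In short: during smooth pursuit your eye keeps moving continuously, but a sample-and-hold panel keeps each frame frozen, so the frame smears across your retina by (tracking speed) x (time the image stays lit). A sketch with purely illustrative numbers:

    # Smooth-pursuit blur on the retina: the eye tracks the object,
    # the display holds the frame still for its visible time.
    def smear_px(speed_px_per_s, visible_time_s):
        return speed_px_per_s * visible_time_s

    speed = 960  # px/s, e.g. an object crossing a 1920-px screen in 2 s

    print(smear_px(speed, 1 / 60))  # 60 Hz sample-and-hold: ~16 px of blur
    print(smear_px(speed, 0.001))   # ~1 ms CRT phosphor flash: ~1 px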
One likely problem for battery-powered headsets is the (I believe) relatively high power draw of a CRT. Another is probably the fact that they aren't used for anything else anymore, meaning CRT development stopped a long time ago. There were quite small CRTs in the past for special applications, but probably not as small as would be optimal for modern VR headsets, both for optics and for weight and space reasons.
Yes, it's there, but it's much less bright than the scanned area, so it will be barely perceptible next to the bright part: the receptors in the eye hardly respond to it after being excited so strongly.
The only annoying thing is that every couple of hours it asks me to run a 7-minute pixel refresh cycle to avoid burn-in, but according to the dashboard I run it every 2.5 hours or so anyway when I go on breaks, so I think I'm good.
Overall the monitor is just fantastic, my LAN party buddies and I dreamed about OLEDs like this back in 2003 and kept saying it was “just around the corner”. The biggest thing is in dark scenes in games there’s absolutely zero noticeable smearing.
[0] https://www.microcenter.com/product/689939/asus-pg27ucdm-265...
It's phosphor-chemistry dependent; even different color patches on the same glass decay at different rates. But yeah, 1 ms is a good lower bound, although when I last researched this, it was definitely the best-case scenario for CRTs. I'm fairly sure the ~500 Hz OLEDs that are already floating around (a 2 ms hold time) beat the more typical CRTs of old.
> That’s why you would need a 1000 Hz LCD/OLED screen with really high brightness (and strobing logic) to approximate CRT motion clarity.
At 1000 Hz you wouldn't need the strobing anymore (I believe?), that's the whole point of going that fast: the per-frame hold time is only 1 ms, already in the same ballpark as CRT phosphor persistence. We're kinda getting there btw! Hopefully with HDMI 2.2 out, we'll see something cool.
> On a traditional NTSC/PAL CRT, 1 ms is just under 16 lines, but the latest line is already much brighter than the rest.
That didn't really math for me at first: 480 visible lines / ~16.6 ms = 28.8 lines/ms (6% of the screen per millisecond), and PAL works out to the same number (576 lines / 20 ms = 28.8 lines/ms). But that double-counts the lines: NTSC's 480 visible lines arrive as two interlaced ~240-line fields, and once you include blanking, the line rate is 525 × ~29.97 Hz ≈ 15,734 lines/s ≈ 15.7 lines/ms. PAL: 625 × 25 Hz = 15,625 lines/s ≈ 15.6 lines/ms. So "just under 16 lines" per millisecond checks out after all.
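The same check in two lines, using total line counts (including blanking) and nominal frame rates:

    ntsc = 525 * (30000 / 1001)  # ~15,734 lines/s
    pal  = 625 * 25              # 15,625 lines/s

    print(ntsc / 1000, pal / 1000)  # ~15.7 and ~15.6 lines per ms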
In contrast, pointing a TV camera at a spinning globe was much easier. And for showing the time, pointing one at a physical clock was much easier than, what, having twelve hours of film footage available and having to sync to the right frame?
I think what's maybe more surprising for people than moving station idents typically being in-camera props is that, pre-digital, even broadcasting a static image was most easily accomplished by just pointing a camera at a piece of card. Repeating a single frame over and over was not something that could easily be done any other way; having a camera continuously capture and immediately broadcast the frame was just much easier.
Video tape, once it came in, allowed freeze frames, but continually reading from the same spot on a tape caused wear, so you couldn't rely on being able to show a single frame from tape indefinitely.
Digital freeze frame machines that could capture a frame of video and repeatedly play it back from a memory buffer only started showing up in the 1980s.