Usually the issue is that they need fairly still subjects, but in this case, rather than the sensor doing a scanning sweep, it's capturing the subject as it moves past while the background pixels stay static.
For "finish line" cameras, the slit is located at the finish line and you start pulling film when the horses approach. Since the exposure is continuous, you never miss the exact moment of the finish.
At first, I thought this explanation would make sense, but then I read back what I just wrote and I'm not sure it really does. Sorry about that.
It would probably be worth asking a train driver about this, e.g. "where is a stretch with smooth track and constant speed?"
It falls apart when the subject is either static or moves its limbs faster than the speed at which the whole subject moves (e.g. fist bumping while slowly walking past the camera would screw it up).
You can also get close in software. Record some video while walking past a row of shops. Use ffmpeg to explode the video into individual frames. Extract column 0 from every frame, and combine them into a single image, appending each extracted column to the right-hand-side of your output image. You'll end up with something far less accurate than the images in this post, but still fun. Also interesting to try scenes from movies. This technique maps time onto space in interesting ways.
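Here's a minimal sketch of that idea in Python, reading the video directly with OpenCV instead of exploding it into frames with ffmpeg first. The filename and the choice of column 0 are placeholders:

    # Software slit-scan sketch: take the same pixel column from every
    # frame and stack the columns left to right, mapping time onto space.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("walk.mp4")  # placeholder filename
    columns = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        columns.append(frame[:, 0])  # column 0 of this frame: (height, 3)
    cap.release()

    # Each frame contributes one column; output width = frame count.
    slit_scan = np.stack(columns, axis=1)
    cv2.imwrite("slit_scan.png", slit_scan)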
https://en.wikipedia.org/wiki/Slit-scan_photography#/media/F...
Must be somewhat interesting deciding on the background content, too.
As for RCD demosaicing, that's my next step. The color fringing is due to the naive linear interpolation of the red and blue channels. But with the RCD strategy, since the green channel has full coverage of the image, we can use it as a guide to interpolate the other two channels better.
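This isn't the full RCD algorithm, but here's a minimal sketch of the underlying green-guided idea: interpolate the R - G color difference instead of R itself, since chroma varies more slowly than luma. It assumes `raw` is the Bayer mosaic, `r_mask` marks the red sample sites, and `green_full` is the already-reconstructed green channel:

    # Green-guided red reconstruction (sketch, not actual RCD):
    # interpolate R - G from the red sample sites, then add G back.
    import numpy as np
    from scipy.interpolate import griddata

    def interp_red(raw, r_mask, green_full):
        h, w = raw.shape
        ys, xs = np.nonzero(r_mask)  # pixels that actually sampled red
        diff = raw[ys, xs].astype(float) - green_full[ys, xs]  # R - G
        gy, gx = np.mgrid[0:h, 0:w]
        diff_full = griddata((ys, xs), diff, (gy, gx), method="linear")
        return green_full + np.nan_to_num(diff_full)  # R = G + (R - G)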
Consider a color histogram: the logo (showing color oscillations) would have a wider spread and a lower-peaked histogram, versus a correctly mapped logo (just the few colors plus or minus some noise), which would show a very thin but strong peak in colorspace. A high-variance color occupation has higher entropy than a low-variance, strongly centered peak (or multi-peak) distribution.
So it seems colorspace entropy could be a strong term in a loss function for optimization (using RMAD, i.e. reverse-mode automatic differentiation).
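A hedged sketch of what that loss term could look like in PyTorch: a soft (kernel-density) histogram keeps the entropy differentiable, so reverse-mode AD can push the mapping toward a tightly peaked color distribution. The bin count and bandwidth here are arbitrary choices, not tuned values:

    import torch

    def color_entropy(pixels, bins=64, bandwidth=0.02):
        # pixels: (N,) tensor of one channel's values in [0, 1]
        centers = torch.linspace(0.0, 1.0, bins)
        # Gaussian kernel weight of every pixel against every bin center
        w = torch.exp(-((pixels[:, None] - centers[None, :]) ** 2)
                      / (2 * bandwidth ** 2))
        p = w.sum(dim=0)
        p = p / p.sum()  # normalized soft histogram
        return -(p * torch.log(p + 1e-12)).sum()  # Shannon entropy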
OCT is a technique that images "through" tissue using a beam in the near infrared (roughly 950 nm, with a bandwidth of roughly 100 nm). The return is passed through an interferometer and what amounts to a diffraction grating to produce the "spread" that the line camera sees. After some signal processing (an FFT is the big one), you get the intensity at depth. If you sweep in X and Y somehow, usually by deflecting the beam with a mirror, you can obtain a volumetric image like an MRI or sonogram. Very useful for imaging the eye, particularly the back of the retina where the blood vessels are.
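A rough sketch of that reconstruction step for one line-camera readout, in Python. Real pipelines also resample the spectrum to be linear in wavenumber before the FFT; that's omitted here for brevity, and the variable names are mine:

    import numpy as np

    def a_scan(spectrum, background):
        # spectrum: one line-camera readout; background: source spectrum
        s = spectrum - background      # remove the DC/source shape
        s = s * np.hanning(len(s))     # window to suppress sidelobes
        return np.abs(np.fft.rfft(s))  # intensity vs. depth (one A-scan)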
Like if the camera is $5k, in order to get that exposure time full-field you would need to duplicate the hardware 800 times, or whatever you wanted the horizontal resolution to be. That's a lot of zeros for a single camera.
Here is how it came out: https://www.daviddegner.com/wp-content/uploads/2023/09/Tree-...
It was part of this story: https://www.daviddegner.com/photography/discovering-old-grow...
Nankai 6000 series, Osaka:
https://i.dllu.net/nankai_19b8df3e827215a2.jpg
Scenery in France:
https://i.dllu.net/preview_l_b01915cc69f35644.png
Marseille, France:
https://i.dllu.net/preview_raw_7292be4e58de5cd0.png
California:
https://i.dllu.net/preview_raw_d5ec50534991d1a4.png
https://i.dllu.net/preview_raw_e06b551444359536.png
Sorry for the purple trees. The camera is sensitive to near infrared, in which trees are highly reflective, and I haven't taken any trains since buying an IR cut filter. Some of these also have dropped frames and other artifacts.
[1] https://youtube.com/shorts/VQuI1wW8hAw [2] https://youtube.com/shorts/vE6kLolf57w [3] https://youtube.com/shorts/QxvFyasQYAY
I also shot a timelapse of the Tokyo skyline at sunset and applied a similar process [4], then motion tracked it so that time travels across the frame from left to right [5]. Each line here is 4 pixels wide and the original animation is in 8K.
[4] https://youtu.be/wTma28gwSk0 [5] https://youtu.be/v5HLX5wFEGk
Data stops being written as the satellite rotates the camera away from the planet, and resumes once it has rolled far enough to point at the Earth again.
It may seem like a pedantic difference: a "line scan camera" is stationary while mirrors inside it spin, or some other mechanism causes it to "scan" a complete vertical line (perhaps all at once, perhaps as the focal point moves), versus a camera in a satellite that has no moving parts and just records a single point directly in front of the instrument, while the entire satellite spins and moves forward.
The problem of course being that you need to shift the camera by one sensor width every tenth of a second, accurate to the pixel, if you want to make use of that full horizontal temporal resolution. And I'm not sure how you match the 1/20 s readout up with all of that. So pessimistically, maybe only ~30 kHz.
Actually, I did the math, and if you can accept video compression, the video modes might be sufficient. 4K at 30 fps works out to ~64 kHz (2160 rows × 30 fps ≈ 65,000 rows/s). And with a more capable video camera, that could be 4-8 times better.
https://news.ycombinator.com/item?id=35738987
Anyway, I was looking for line-scan images of people walking down a busy street. Curious what they would look like.
You simply set the maximum and minimum readout rows to be 1 apart, and suddenly your 'frame' rate goes up to 60,000 FPS where each frame is only a pixel high.
You might have to fiddle with upper and lower 'porch' regions to make things fast too.
You must have the line along the long dimension of the image - the hardware has no capability to do the short edge.
TIL about an industrial inspection application where your line camera scans objects passing by on a conveyor. Since you can never guarantee a rock-steady conveyor speed, you need real-time control of the scanning rate based on the current conveyor speed, using encoders [1]; see the sketch below.
I see that the bulk of the article is about using math to estimate the train speed so that the scan can be interpreted correctly.
[1] this camera vendor has an explanatory video that explains the need for an encoder around 4:15. https://m.youtube.com/watch?v=E_I9kxHEYYM&t=35s&pp=2AEjkAIB
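A back-of-the-envelope sketch of why the encoder matters: to keep square pixels, each scanned line must cover the same conveyor distance that one pixel covers on the object, so the line trigger is derived from encoder counts rather than from a fixed clock. All numbers below are invented for illustration:

    # Derive the line trigger from conveyor travel, not from a clock.
    pixel_footprint_mm = 0.1     # object distance imaged by one pixel
    encoder_mm_per_pulse = 0.02  # conveyor travel per encoder pulse

    # Trigger one line every N encoder pulses, regardless of belt speed:
    pulses_per_line = pixel_footprint_mm / encoder_mm_per_pulse  # = 5.0
    print(f"trigger a line every {pulses_per_line:.0f} encoder pulses")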
jo-m has a project, https://trains.jo-m.ch/#/trains/list, that deserves a mandatory mention.
But you need really low-level access to the registers, which are normally configured over i2c (aka SCCB). On Linux I think you'd need to patch a driver to do it, for example.
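A hypothetical sketch of what poking such registers could look like from userspace with the smbus2 library, assuming nothing else is claiming the device (in practice, as noted, you'd likely patch the driver instead). The bus number, device address, and register addresses are placeholders; real values come from the sensor's datasheet:

    from smbus2 import SMBus, i2c_msg

    SENSOR_ADDR = 0x36        # placeholder 7-bit i2c address
    REG_WIN_Y_START = 0x3802  # placeholder: readout window start row
    REG_WIN_Y_END = 0x3806    # placeholder: readout window end row

    def write_reg16(bus, addr, reg, value):
        # SCCB-style write: 16-bit register address, 8-bit value
        bus.i2c_rdwr(i2c_msg.write(addr, [reg >> 8, reg & 0xFF, value]))

    with SMBus(1) as bus:  # /dev/i2c-1
        write_reg16(bus, SENSOR_ADDR, REG_WIN_Y_START, 100)
        write_reg16(bus, SENSOR_ADDR, REG_WIN_Y_END, 101)  # 1-row window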
https://anagrambooks.com/jakarta-megalopolis
https://anagrambooks.com/sites/default/files/styles/slide/pu...
There is actually a way to get full field at very high frame rates that is NOT ridiculously expensive, but it's not sustained. I believe it involves some type of "sample and hold" with something like capacitor banks, so the digital readout can be done slowly.
Just put it in Panorama mode and move the camera in non-standard ways.
Done this way, it will work on passing trains. It will also (kind of) work out the window of a train.
It works out the window of a car too, though it will get confused and "compress" parts of the panorama.
It will also work vertically...
It works on tall trees - scan with the arrow from the base upwards to the top.
You can also make weird photos vertically if you keep going: start from in front of you, go up over your head, and end facing backwards.
Also fun to try: facing down at the ground and walking forwards. You can get a garden path, the sidewalk, or other fun panoramas. If you want, you can intentionally stick your feet in the frame and get "footsteps" along the panorama.
:)