
164 points thunderbong | 1 comment
albert_e ◴[] No.41855365[source]
Practically --

I feel hardware technology can improve further to allow under-display cameras ... so that we can actually look at both the camera and the screen at the same time.

(There are fingerprint sensors under mobile screens now ... and I think some front-facing cameras are already being built under the display without needing a punch hole or sacrificing pixels. There is scope to make this better and more seamless, so we could have multiple cameras behind a typical laptop screen or desktop monitor if we wanted.)

This would make for a genuine look-at-the-camera video whether we are looking at other attendees in a meeting or reading off our slide notes (teleprompter style).

There would be no need to fake it.

More philosophically --

I don't like the casual normalization of AI tampering with actual videos and photos -- on mobile phone cameras or elsewhere. Cameras are supposed to capture reality by default. I know there is already heavy noise reduction, color correction, auto exposure, etc., but that is no reason to justify further tampering with individual facial features and expressions.
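(To be concrete about the kind of processing that already happens: below is a minimal, illustrative sketch in Python/NumPy of a conventional raw-to-RGB pipeline -- denoise, white balance, auto exposure, gamma. The steps and constants are assumptions for illustration only, not any real camera's ISP, but they show how different this is from generating or altering facial features.)

    import numpy as np

    def toy_isp(raw: np.ndarray) -> np.ndarray:
        """Toy approximation of conventional camera processing:
        denoise, white balance, auto exposure, gamma.
        Constants are illustrative, not from any real ISP."""
        img = raw.astype(np.float32)  # expects an H x W x 3 array

        # Crude noise reduction: 3x3 box blur.
        padded = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
        denoised = np.zeros_like(img)
        for dy in range(3):
            for dx in range(3):
                denoised += padded[dy:dy + img.shape[0], dx:dx + img.shape[1], :] / 9.0

        # White balance (gray-world): scale each channel toward the global mean.
        channel_means = denoised.reshape(-1, 3).mean(axis=0)
        balanced = denoised * (channel_means.mean() / (channel_means + 1e-6))

        # Auto exposure: push the median brightness toward mid-range.
        exposed = balanced * (0.5 * balanced.max() / (np.median(balanced) + 1e-6))

        # Gamma curve and conversion back to 8-bit.
        out = np.clip(exposed / (exposed.max() + 1e-6), 0.0, 1.0) ** (1.0 / 2.2)
        return (out * 255).astype(np.uint8)

Every step here is a global, content-agnostic adjustment; nothing in it knows or cares what a face is.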

Video is and will be used to record humans as they are. Capturing their genuine features and expressions should be valued more. Video should help people bond as people, with body language as genuine as possible. Videos will serve as memories of people who are gone. Videos will be used as forensic or crime-scene evidence.

Let us protect the current state of video capture. All AI enhancements should be marketed separately under a different name, not silently added into existing cameras.

replies(15): >>41855531 #>>41855684 #>>41855730 #>>41855733 #>>41856141 #>>41857383 #>>41857590 #>>41857839 #>>41858056 #>>41858420 #>>41859057 #>>41859076 #>>41859617 #>>41860060 #>>41863348 #
1. renewiltord ◴[] No.41860060[source]
The really sad thing is that we take raw sensor data and process it at all. People are so out of touch with things these days that we even use lenses to focus the picture, etc. Why not just transmit the raw sensor data instead of processing everything so much? People could just use their minds (I know, it's ridiculous to ask that in an era where everything is spoon-fed to them) and actually interpret things for once.

What a society! Processed food, plastics in their blood, processed sensor data. Ugh, we have strayed so far from natural interactions.

Philosophically, we have abandoned being mindful of where we are and just being our natural forms, and have instead become slaves to what some computer tells us we should be seeing.