65 points | doener | 25 comments
rmu09 ◴[] No.45345721[source]
US-made cars had a reputation for being low quality, and too big, too heavy, and too inefficient for European cities.

Tesla was somewhat different. People bought Teslas not for their promised "self-driving" capabilities (FWIW, I know no Tesla driver who took those promises at face value or got the FSD option), but one motivation was to "stick it" to snobbish, arrogant European manufacturers that wanted to develop "clean" ICEs with "green fuels", or other nonsensical crimes against thermodynamics like H2 cars.

Now, Tesla (and the US in general) has a brand toxicity problem, and it is worsening. People I know who would have considered a Tesla a few years ago now drive electric VWs or BMWs or Kias, often much more expensive cars than the comparable Tesla Model 3 / Y.

This trend will probably continue over the next few years, and I don't see a way for Tesla to repair its brand image.

replies(6): >>45345771 #>>45345851 #>>45345901 #>>45345908 #>>45346627 #>>45346875 #
1. bayindirh ◴[] No.45345851[source]
Tesla killed its brand reputation thrice.

- First they went "camera only", alienating people who know the tech.

- Then they mocked the car industry for too long. It was a necessary poke at first, but they didn't prepare for the response, and the elephant proved that it can run.

- Then Elon's Trump affair and the whole shebang happened.

The broken FSD promises, the use of non-automotive-rated parts (and the failures that followed), the negligence about their own errors, and acting as if deaf to the criticism are the cement between the layers.

replies(2): >>45345963 #>>45346605 #
2. storus ◴[] No.45345963[source]
They had camera-only tech employing multiple 4K cameras running at over 2000 fps. Not your grandma's 480p/25fps webcam that many car manufacturers use as parking cameras. 2000 fps gives you an enormous safety margin even in the case of individual frame misdetection. The long-tail issues they hit are present on LiDAR vehicles as well but LiDAR is much slower, more difficult to process and sensor fusion adds its own errors.
replies(8): >>45346032 #>>45346086 #>>45346115 #>>45346268 #>>45346460 #>>45349033 #>>45350326 #>>45353213 #
3. bayindirh ◴[] No.45346032[source]
2000 FPS is impressive.

Not detecting overturned semis and road debris, and swerving into road dividers, is even more impressive with that tech.

All of which a relatively simple radar could prevent, without running a slow-motion camera rig and a wannabe supercomputing cluster in the car.

To be frank, I'm not against 2000 FPS cameras, but I can't come to terms with not adding a simple radar to detect that something unknown is dangerously close and that the land missile needs to stop.

4. 4gotunameagain ◴[] No.45346086[source]
Assuming, of course, that the relevant computer vision components can run at 2000 fps as well... I highly doubt it.
replies(1): >>45346112 #
5. bayindirh ◴[] No.45346112{3}[source]
Running at 2000 FPS in low light (and getting meaningful data at that sensor size) is impossible to begin with. If you can do a constant 60, you're in good shape.

2000 can be useful for multi-exposure work and maybe for detecting fine movement, but running everything at 2000 FPS (and processing on the order of 16,000 frames/sec across the cameras) is not a simple thing, especially in an uncontrolled and chaotic environment.

replies(1): >>45346633 #
6. rich_sasha ◴[] No.45346115[source]
Why 2k FPS? I'm not being facetious; human eye sees, apparently, at around 25fps, which is why that's what TVs and cinemas used to use. At that rate, and at 144 kph, say, the car moves about 1.6 m between frames.

Fine, so maybe you think this is too much. But 10x that still gives you 16 cm between frames, at what is already speeding in most jurisdictions I know of.

2000 FPS seems to my untrained eye like a problem, not a feature.
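
A quick back-of-the-envelope sketch (Python) of those distances, assuming a constant 144 km/h; the frame rates are just illustrative:

    # Distance travelled between frames at a constant 144 km/h.
    speed_mps = 144 * 1000 / 3600  # 40 m/s
    for fps in (25, 250, 2000):
        print(f"{fps} fps -> {100 * speed_mps / fps:.1f} cm between frames")
    # 25 fps: 160 cm, 250 fps: 16 cm, 2000 fps: 2 cm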

replies(3): >>45346446 #>>45346615 #>>45354933 #
7. diabllicseagull ◴[] No.45346268[source]
As expected, too heavy a data stream to be true. The resolution is lower than 4K, and the frame rates aren't even in the ballpark of 2000 fps.

https://www.blogordie.com/2023/09/hw4-tesla-new-self-driving...

replies(1): >>45346420 #
8. storus ◴[] No.45346420{3}[source]
I have this first-hand from the FSD team at Tesla from 5 years ago. Who knows where they are nowadays. You can believe whatever you like.
replies(1): >>45348074 #
9. storus ◴[] No.45346446{3}[source]
Because deep learning models process a fixed-length sequence of frames, and the more frames you have, the more accurate your FSD output is. Moving 1.6 m between frames with a single-frame accuracy of 80% is quite risky, and input correction becomes quite discrete; 16 cm is still risky for proper trajectory planning. Now make it millimeters and suddenly your trajectory is nearly perfect, with only a little noise.
10. croon ◴[] No.45346460[source]
That would be something like 371 Gbps (under some assumptions) of raw data to process, per camera. I would assume a lot of shortcuts to get that down, but it's still an unreasonably huge amount of data to process in "real time" in a car.
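
For reference, here is roughly how such an estimate comes out under one set of assumptions (4K UHD resolution, 12-bit raw samples, 2000 fps, ~8 cameras); the 371 Gbps figure above clearly assumes something different:

    # Rough raw-bandwidth estimate for one assumed 4K camera at 2000 fps.
    width, height = 3840, 2160   # "4K" UHD resolution (assumption)
    bits_per_sample = 12         # raw Bayer sample depth (assumption)
    fps = 2000
    cameras = 8                  # approximate camera count (assumption)
    per_camera = width * height * bits_per_sample * fps  # bits per second
    print(f"{per_camera / 1e9:.0f} Gbps per camera")                  # ~199 Gbps
    print(f"{per_camera * cameras / 1e12:.1f} Tbps for all cameras")  # ~1.6 Tbps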
11. lnsru ◴[] No.45346605[source]
Tesla's "vision only" phantom-braking suicide experiments reached a court in Germany last year, and for the first time the court established that they are real and that the cars are dangerous. This will be interesting to watch. I often tried the free Autopilot in a Model Y, and it hits the brakes hard on an empty road every other time. Afterwards I stopped using it completely. The car is nice, but without working assistance systems. Lane keeping does not work reliably either. The Model Y is a nice electric car for people without many requirements, like me - it's spacious and the electric range is acceptable.

About the court case (in German): https://teslaanwalt.de/autopilot-als-sicherheitsrisiko/

12. dns_snek ◴[] No.45346615{3}[source]
> human eye sees, apparently, at around 25fps

What do you mean by "sees"? I'll bet you that you can't walk around wearing a VR headset running at 25 FPS for more than 30 seconds without violently emptying your stomach. Trying to watch a movie on a display that doesn't exhibit motion blur also makes me motion sick.

The human brain doesn't see in terms of frames at all. There's a limit beyond which an increase in FPS becomes imperceptible to most people, but that limit is at least 10 times higher (from personal experience), likely more.

13. piva00 ◴[] No.45346633{4}[source]
I was about to comment the same: 2000 fps means an exposure time of at most 1/2000 s per frame. You need a lot of light to capture an image that quickly; in low-light conditions it's simply impossible to gather enough light, even with very high-end optics and sensors.
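
Just to put a rough number on it (the 1/30 s "normal" video exposure is an assumption):

    import math
    # Exposure time, and thus collected light, scales inversely with frame rate.
    normal = 1 / 30   # typical video exposure time in seconds (assumption)
    fast = 1 / 2000   # upper bound per frame at 2000 fps
    ratio = normal / fast
    print(f"~{ratio:.0f}x less light per frame (~{math.log2(ratio):.1f} stops)")
    # ~67x less light per frame (~6.1 stops)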
replies(1): >>45347199 #
14. storus ◴[] No.45347199{5}[source]
I don't know the specifics; maybe they are timing the individual cameras in a way that achieves 2000 fps overall, with a crisp image from each camera, and merging them together. Or maybe they are using some MIT tech that was able to capture in super-low-light conditions.
replies(2): >>45347362 #>>45348289 #
15. bayindirh ◴[] No.45347362{6}[source]
Being able to capture in super-low-light conditions depends on two things: 1. your sensor's noise floor, and 2. the number of photons you can collect per unit time.

The first depends on the manufacturing process, the second on your sensor size.

Currently, the leading sensor manufacturers (namely Sony Semiconductor and Canon) make very low-noise sensors. However, getting both these low noise levels and convincing images requires at least a full-frame sensor. APS-C can come somewhat close, but it can't get there (because of physics).

Even then, you can't do 2000 FPS and get meaningful images from every frame.

There's no way a Tesla car camera sports a full-frame or APS-C sensor.

So, it's physics.
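
A toy model of the photon side, if it helps: in the shot-noise-limited regime, SNR goes roughly as the square root of the collected photons, which scale with pixel area and exposure time. The pixel pitches and exposure times below are illustrative assumptions, not actual camera specs:

    import math
    def relative_snr(pixel_pitch_um, exposure_s):
        # Shot-noise-limited SNR ~ sqrt(photons); photons ~ pixel area * exposure time.
        return math.sqrt(pixel_pitch_um ** 2 * exposure_s)
    # Illustrative: a large ~6 um pixel at 1/40 s vs. a small ~2 um pixel at 1/2000 s.
    big_slow = relative_snr(6.0, 1 / 40)
    small_fast = relative_snr(2.0, 1 / 2000)
    print(f"relative SNR advantage: ~{big_slow / small_fast:.0f}x")  # ~21x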

replies(1): >>45347894 #
16. storus ◴[] No.45347894{7}[source]
AFAIK Sony and Canon are still using a fairly old manufacturing process for sensors, as logic chips have priority, and if Tesla had access to e.g. a 5 nm process for manufacturing sensors, that would drastically expand the possibilities. Also, you bypassed the possibility of timing multiple sensors separately to achieve 2000 fps.
replies(1): >>45348172 #
17. estearum ◴[] No.45348074{4}[source]
There's no way the FSD team overstated its capabilities, right?
replies(1): >>45348775 #
18. bayindirh ◴[] No.45348172{8}[source]
The reason sensor manufacturers use "seemingly ancient" processes (i.e., large feature sizes) in their sensors is that you don't really need a more advanced process the way you do in processors.

When you manufacture something that computes, improved manufacturing processes bring drastic gains in power consumption and internal noise. When you are measuring something, you don't need, or want, overly small pixels or features to begin with.

So having a tiny gigapixel sensor just because your process allows it is a disadvantage, from a light-capturing standpoint, compared to a same-size sensor with a lower resolution. Low-light sensitivity and resolution are a trade-off.

Back-illuminated sensors, used by all contemporary cameras, created this leap rather than reducing feature size via improved processes. You already pack the sensor as densely as possible (you don't want gaps, or "smaller" pixels without increasing resolution, either), and the data/power planes sitting next to the pixels were the biggest contributor to noise in the sensor; moving them behind the pixels is what helped.

See the link [0]. The top-left image is full frame, top right is APS-C, bottom left is M4/3, and bottom right is a full-frame / high-resolution (60+ MP) sensor.

When you look at the images, the smaller the sensor, the worse the noise performance. When you compare the full-size images of the top left to the bottom right, the top-left image is better in terms of noise. I selected RAW to surface "what the sensor sees"; the selected spot is the darkest point in the scene.

You can select JPEG to see what in-camera image processing does to these images. Shutter speed is around 1/40 s and ISO is fixed at 12800, since that's the de facto standard for night photography.

> Also you bypassed the possibility of timing multiple sensors separately to achieve 2000fps.

Working on an image which doesn't reflect the real world is a bit dangerous, isn't it?

[0]: https://www.dpreview.com/reviews/image-comparison?attr18=low...

replies(1): >>45348532 #
19. walls ◴[] No.45348289{6}[source]
Or, just maybe, you are completely wrong and misheard or were lied to.
20. storus ◴[] No.45348532{9}[source]
You don't need to give me lessons in photography. I remember that around the time of the D750, Sony upgraded their sensor manufacturing process from some ancient 100-200 nm node to something newer, which improved night performance tenfold. Quantum efficiency got substantially better on the better process. Nobody is telling you to shrink pixels to get 1000 MPx; the point is making a better 30 MPx sensor of the same size. Yet they aren't using the latest (2-5 nm) processes for sensors, because at sensor sizes that would be too expensive (I'd guess H100 chip-level prices for a medium-format sensor).
21. storus ◴[] No.45348775{5}[source]
Anything is possible. They might also have used some creative metric that yields 2000+ fps. I don't know. Or they might have found some neat trick nobody thought of before.
22. ModernMech ◴[] No.45349033[source]
> The long-tail issues they hit are present on LiDAR vehicles as well but LiDAR is much slower, more difficult to process and sensor fusion adds its own errors.

The long tail is long no matter what, which is why the most robust solutions deploy sensors with orthogonal sensing modalities that can complement one another. By relying on only one sensor type, Tesla has made their system overly brittle, which has resulted in avoidable deaths and destruction.

> LiDAR is much slower, more difficult to process

LiDAR in my experience is much easier to process, as the sensor stream is just an array of distances. A camera in my experience is much harder to process, as the sensor stream is an array of RGB values from which you have to infer distances. So by what metric are you alleging LiDAR is more difficult to process?

> sensor fusion adds its own errors.

You'll have to do a degree of sensor fusion across all the camera sensors anyway, so going camera-only doesn't absolve you of having to fuse sensor streams and come up with a belief. Sensor fusion in general tends to decrease overall system error as more sensors are added.
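
To illustrate that last point, a minimal inverse-variance (Kalman-style) fusion sketch with made-up numbers for a hypothetical camera range estimate and a LiDAR range estimate; the fused variance is never worse than the better sensor alone:

    def fuse(est_a, var_a, est_b, var_b):
        # Inverse-variance fusion of two independent estimates.
        w_a, w_b = 1 / var_a, 1 / var_b
        est = (w_a * est_a + w_b * est_b) / (w_a + w_b)
        var = 1 / (w_a + w_b)  # always <= min(var_a, var_b)
        return est, var
    # Made-up numbers: camera says 52 m (sigma 3 m), LiDAR says 50 m (sigma 0.5 m).
    est, var = fuse(52.0, 3.0 ** 2, 50.0, 0.5 ** 2)
    print(f"fused: {est:.2f} m, sigma {var ** 0.5:.2f} m")  # ~50.05 m, sigma ~0.49 m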

23. tim333 ◴[] No.45350326[source]
Googling suggests the Tesla cameras do 36 fps https://x.com/nextbigfuture/status/1895692497898390021
24. Zigurd ◴[] No.45353213[source]
5 MP at 36 fps. 1 MP on older Teslas.
25. Vilian ◴[] No.45354933{3}[source]
Humans don't see at around 25 fps; it varies, but the maximum for a trained person or competitive player is around 300 Hz.