
1479 points sandslash | 6 comments
anythingworks No.44314766
loved the analogies! Karpathy is consistently one of the clearest thinkers out there.

interesting that Waymo could do uninterrupted trips back in 2013, wonder what took them so long to expand? regulation? the tail end of driving optimization issues?

noticed one of the slides had a cross through 'AGI 2027'... ai-2027.com :)

replies(2): >>44314822 >>44315438
AlotOfReading No.44314822
You don't "solve" autonomous driving as such. There's a long, slow grind of gradually improving things until failures become rare enough.
replies(1): >>44314866
petesergeant No.44314866
I wonder at what point all the self-driving code becomes replaceable with a multimodal generalist model with the prompt “drive safely”
replies(4): >>44314937 >>44315054 >>44315210 >>44316357
anon7000 No.44315210
Very advanced machine learning models are used in current self-driving cars. It all depends on what the model is trying to accomplish. I have a hard time seeing a generalist prompt-based generative model ever beating a model specifically designed to drive cars. The models are just designed for different, specific purposes.
replies(1): >>44315369
tshaddox No.44315369
I could see it being the case that driving is a fairly general problem, and thus models intentionally designed to be general end up doing better than models designed with the misconception that you need a very particular set of driving-specific capabilities.
replies(3): >>44315469 >>44316063 >>44318089
shakna No.44316063
Driving is not a general problem, though. It's a contextual landscape of fast-paced reactions and predictions. Both are required, and performed constantly by the human driver. The exact nature of every reaction, and every prediction, changes vastly within the context window.

You need image processing just as much as you need scenario management, and they're orthogonal to each other, as one example.

If you want a general transport system... We do have that. It's called rail. (And it can be, and has been, automated.)

replies(2): >>44316240 >>44318075
melvinmelih No.44316240
> Driving is not a general problem, though.

But what's driving a car? A generalist human brain that has been trained for ~30 hours to drive a car.

replies(1): >>44316689
shakna No.44316689
Human brains aren't generalist!

We have multiple parts of the brain that interact in vastly different ways! Your cerebellum won't take on the role of the pons.

Most parts of the brain cannot take over for others. Self-healing is the exception, not the rule. Yes, we have a degree of neuroplasticity, but there are many limits.

(Sidenote: a driver's license here requires 240 hours.)

replies(3): >>44317314 >>44317648 >>44319940
Zanfa No.44317314
> Human brains aren't generalist!

What? Human intelligence is literally how AGI is defined. The brain's physical configuration is irrelevant.

replies(1): >>44318522
azan_ No.44317648
> We have multiple parts of the brain that interact in vastly different ways!

Yes, and thanks to that, human brains are generalist.

replies(1): >>44318510
shakna No.44318510
Only if it were a singular system, which it is not. [0]

For example, the nerve cells in your gut may speak to the brain and interact with it in complex ways we are only just beginning to understand, but they are separate systems, each with control over the nervous system and other systems. [1]

General Intelligence, the psychological theory, and General Modelling, whilst sharing words, share little else.

[0] https://doi.org/10.1016/j.neuroimage.2022.119673

[1] https://doi.org/10.1126/science.aau9973

shakna No.44318522
A human brain is not a general model. We have multiple overlapping systems. The physical configuration is extremely relevant to that.

AGI is defined in terms of "General Intelligence", a theory to which general modelling is irrelevant.

yusina No.44319940
240 hours sounds excessive. Where is "here"?