
392 points lairv | 1 comment | source
HAL3000 ◴[] No.45528648[source]
All of the examples in videos are cherry picked. Go ask anyone working on humanoid robots today, almost everything you see here, if repeated 10 times, will enter failure mode because the happy path is so narrow. There should really be benchmarks where you invite robots from different companies, ask them beforehand about their capabilities, and then create an environment that is within those capabilities but was not used in the training data, and you will see the real failure rate. These things are not ready for anything besides tech demos currently. Most of the training is done in simulations that approximate physics, and the rest is done manually by humans using joysticks (almost everything they do with hands). Failure rates are staggering.
replies(17): >>45529270 #>>45529335 #>>45529542 #>>45529760 #>>45529839 #>>45529903 #>>45529962 #>>45530530 #>>45531634 #>>45532178 #>>45532431 #>>45532651 #>>45533534 #>>45533814 #>>45534991 #>>45539498 #>>45542410 #
ipnon ◴[] No.45529270[source]
Now the question is whether this is GPT-2 and we're a decade of scaling and tweaks away from autonomous androids, or whether autonomous androids are just an extremely hard problem.
replies(4): >>45529367 #>>45529610 #>>45529686 #>>45532243 #
kibwen ◴[] No.45529610[source]
For LLMs, the input is text, and the output is text. By the time of GPT-2, the internet contained enough training data to make training an interesting LLM feasible (as judged by its ability to output convincing text).

We are nowhere near the same point for autonomous robots, and it's not even close. To continue the internet analogy: we are pre-ARPANET, pre-ASCII, pre-transistor. We don't even have the sensors that would make safe household humanoid robots possible. Any theater from robot companies about training a neural net on motion capture is laughably foolish. At the current rate of progress, we are still decades away.

replies(5): >>45530490 #>>45530660 #>>45532119 #>>45533182 #>>45536662 #
tyre ◴[] No.45530490[source]
I would guess Amazon has a ridiculous amount of access to training data in its warehouses. Video, package sizes, weights, sorting.

I’m sure they could pretty easily spin up a site with 200 of these processing packages of most sizes nonstop (they have a limited number of standardized package sizes). Remove the ones it gets right 99.99% of the time and keep training on the more difficult ones, then move to individual items.

Caveat: I have no idea what I’m talking about.
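The filtering idea above is essentially curriculum learning: drop task types the robot has mastered and concentrate training on the rest. A minimal sketch, with purely hypothetical thresholds and package-type names (nothing here reflects any real Amazon system):

```python
# Hypothetical curriculum filter: drop package types the robot already
# handles reliably, keep training on the hard ones. All numbers and
# names are illustrative assumptions.

MASTERY_THRESHOLD = 0.9999  # "gets it right 99.99% of the time"
MIN_ATTEMPTS = 1000         # need enough trials before trusting the rate

def update_curriculum(stats, curriculum):
    """Remove mastered package types from the active training set."""
    return [
        pkg for pkg in curriculum
        if stats[pkg]["attempts"] < MIN_ATTEMPTS
        or stats[pkg]["successes"] / stats[pkg]["attempts"] < MASTERY_THRESHOLD
    ]

# Toy run: one package type is mastered, the other still fails often.
stats = {
    "small_box": {"attempts": 10_000, "successes": 10_000},  # mastered
    "poly_bag":  {"attempts": 10_000, "successes": 9_100},   # still failing
}
curriculum = update_curriculum(stats, ["small_box", "poly_bag"])
print(curriculum)  # ['poly_bag']
```

The `MIN_ATTEMPTS` guard matters: a task that happens to succeed on its first few trials shouldn't be dropped before the success rate is statistically meaningful.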

replies(1): >>45533505 #
eulgro ◴[] No.45533505[source]
A more efficient way might be to train them in simulation. If you simulate a warehouse environment and use it to pre-train a million robots in parallel at 100x real time, learning would go much faster. Then you can fine-tune on reality for the details the simulation misses.
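The two-phase recipe above can be sketched with a toy stand-in for a policy: pre-train on abundant but biased simulated experience, then fine-tune on scarce real data at a lower learning rate. Everything here (the "policy", the bias, the rates) is an illustrative assumption, not any robotics lab's actual pipeline:

```python
import random

class Policy:
    """Toy 'policy': a single parameter nudged toward observed targets."""
    def __init__(self):
        self.param = 0.0

    def update(self, target, lr):
        self.param += lr * (target - self.param)

def rollout_sim():
    # Simulated physics only approximates reality: biased, but nearly
    # free to sample in parallel at faster than real time.
    return 1.0 + random.gauss(0.1, 0.05)

def rollout_real():
    # Real-world trials are unbiased but expensive, so we get few of them.
    return 1.0 + random.gauss(0.0, 0.05)

random.seed(0)
policy = Policy()

# Phase 1: pre-train on a large batch of simulated experience.
# The policy converges toward the simulator's (biased) target, ~1.1.
for _ in range(100_000):
    policy.update(rollout_sim(), lr=0.01)

# Phase 2: fine-tune on a small batch of real experience with a lower
# learning rate, partially correcting the simulator's bias.
for _ in range(1_000):
    policy.update(rollout_real(), lr=0.001)
```

The design point is the learning-rate drop in phase 2: with little real data, large updates would wash out what pre-training learned, while small ones shift the policy toward reality without forgetting.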