

392 points lairv | 12 comments
HAL3000 ◴[] No.45528648[source]
All of the examples in these videos are cherry-picked. Go ask anyone working on humanoid robots today: almost everything you see here, if repeated 10 times, will enter a failure mode, because the happy path is so narrow. There should really be benchmarks where you invite robots from different companies, ask the companies beforehand about their capabilities, and then create an environment that is within those capabilities but was not in the training data; then you would see the real failure rate. These things are not currently ready for anything besides tech demos. Most of the training is done in simulations that approximate physics, and the rest is done manually by humans with joysticks (almost everything the robots do with their hands). Failure rates are staggering.
replies(17): >>45529270 #>>45529335 #>>45529542 #>>45529760 #>>45529839 #>>45529903 #>>45529962 #>>45530530 #>>45531634 #>>45532178 #>>45532431 #>>45532651 #>>45533534 #>>45533814 #>>45534991 #>>45539498 #>>45542410 #
ipnon ◴[] No.45529270[source]
Now the question is whether this is GPT-2 and we’re a decade of scaling and tweaks away from autonomous androids, or whether autonomous androids are just an extremely hard problem.
replies(4): >>45529367 #>>45529610 #>>45529686 #>>45532243 #
1. kibwen ◴[] No.45529610[source]
For LLMs, the input is text, and the output is text. By the time of GPT-2, the internet contained enough training data to make training an interesting LLM feasible (as judged by its ability to output convincing text).

We are nowhere near the same point for autonomous robots, and it's not even funny. To continue the internet analogy: we are pre-ARPANET, pre-ASCII, pre-transistor. We don't even have the sensors that would make safe household humanoid robots possible. Any theater from robot companies about training a neural net on motion capture is laughably foolish. At the current rate of progress, we are more than decades away.

replies(5): >>45530490 #>>45530660 #>>45532119 #>>45533182 #>>45536662 #
2. tyre ◴[] No.45530490[source]
I would guess Amazon has access to a ridiculous amount of training data from its warehouses: video, package sizes, weights, sorting.

I’m sure they could pretty easily spin up a site with 200 of these processing packages of most sizes (they have a limited number of standardized package sizes) nonstop. Remove the sizes they get right 99.99% of the time, keep training on the more difficult ones, then move on to individual items.

Caveat: I have no idea what I’m talking about.
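A minimal sketch of that "retire the solved sizes, keep hammering the hard ones" loop, with a fake robot standing in for the real thing -- every name and number below is a hypothetical illustration, not Amazon's actual setup:

    import random
    from collections import deque

    MASTERY = 0.9999            # success rate at which a package size counts as solved
    WINDOW = 10_000             # recent attempts used to estimate that rate
    BASE_FAIL = {"S": 0.02, "M": 0.05, "L": 0.10, "XL": 0.20}

    def attempt(size, practice):
        # Stand-in for one pick/pack attempt; the fake robot improves with practice.
        p_fail = BASE_FAIL[size] * (0.999 ** practice)
        return random.random() > p_fail

    attempts = {s: 0 for s in BASE_FAIL}
    recent = {s: deque(maxlen=WINDOW) for s in BASE_FAIL}
    active = set(BASE_FAIL)

    for _ in range(200_000):    # fixed training budget
        if not active:
            break
        size = random.choice(sorted(active))
        attempts[size] += 1
        recent[size].append(attempt(size, attempts[size]))
        window = recent[size]
        if len(window) == WINDOW and sum(window) / WINDOW >= MASTERY:
            active.remove(size)  # solved: spend the remaining budget on harder cases

    print(attempts)              # easy sizes drop out early, hard ones keep training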

replies(1): >>45533505 #
3. blackoil ◴[] No.45530660[source]
McD must be selling millions of burgers every day, and cameras are cheap and omnipresent, so it should not be difficult to get video of a single type of task.
replies(1): >>45533795 #
4. bmau5 ◴[] No.45532119[source]
Does your estimate account for the advancements in virtual simulation models that have simultaneously been happening? From people I speak to in the space (which I am very much not in), these advancements have dramatically improved the rate of training and learning, though they also advised we're some ways off from showtime.
replies(1): >>45533810 #
5. ACCount37 ◴[] No.45533182[source]
Robotics has a big training data problem. But your "we don't have the sensors" claim is absolutely laughable.

It was never about the sensors. It was always about AI.

replies(1): >>45533786 #
6. eulgro ◴[] No.45533505[source]
A more efficient way might be to train them in simulation. If you simulate a warehouse environment and use it to pre-train a million robots in parallel at 100x real time, learning would go much faster. Then you can fine-tune on reality for the details missed by the simulation environment.
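A structural sketch of that pre-train-in-sim, fine-tune-on-reality split, assuming placeholder environments and a placeholder update rule (none of this is any particular framework's API):

    import random

    def simulated_episode(policy):
        # Cheap and massively parallelizable, but the physics is only approximate.
        obs = "sim_obs"
        return [(obs, policy(obs), random.gauss(1.0, 0.3))]

    def real_episode(policy):
        # Slow and expensive, but captures whatever the simulator misses.
        obs = "real_obs"
        return [(obs, policy(obs), random.gauss(0.8, 0.5))]

    def update(params, transitions, lr):
        # Placeholder for whatever learning rule is actually used.
        return params + lr * sum(reward for _, _, reward in transitions)

    params = 0.0
    policy = lambda obs: "grasp"

    # Phase 1: a huge volume of cheap simulated rollouts ("a million robots in parallel").
    for _ in range(100_000):
        params = update(params, simulated_episode(policy), lr=1e-4)

    # Phase 2: a comparatively tiny amount of real-world fine-tuning for the
    # details the simulator gets wrong.
    for _ in range(100):
        params = update(params, real_episode(policy), lr=1e-5)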
7. kibwen ◴[] No.45533786[source]
No, it doesn't matter if you have a hypergenius superintelligence if it's locked in a body with no hardware support for useful proprioception. You will not go to space today.
replies(2): >>45534060 #>>45536197 #
8. kibwen ◴[] No.45533795[source]
There is no reason to employ humanoid robots in industrial environments when it will always be easier and cheaper to adapt the environment to a specialized non-humanoid robot than to adapt robots into humanoid shape. This is true for the same reason that no LLM is ever going to beat Stockfish at chess.
9. kibwen ◴[] No.45533810[source]
As Tesla could tell you with their failure to deliver self-driving cars, it doesn't matter if you have exabytes of training data if it's all the wrong kind of data and if your hardware platform is insufficiently capable.
10. ACCount37 ◴[] No.45534060{3}[source]
Lmao no. Every motor is a sensor. And the better my world model is, the fewer sensors I need to keep it up to date.
11. serf ◴[] No.45536197{3}[source]
A 'hypergenius superintelligence' could achieve most, if not all, useful proprioception simply by looking at motor amperage draw, or, if that's unavailable, at total system amperage draw.

An arm moving against gravity draws more current, the arc itself has a characteristic signature, and a motion or force against the arm or fingers produces a change in draw -- a superintelligence would need only an ammeter to master proprioception, given that human researchers can do this in a lab and they're nowhere near the bar of 'hypergenius superintelligence'.
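A toy version of the ammeter argument, using the standard DC-motor relation tau = Kt * I and a single-joint gravity model; all constants are made-up illustrative values, not any real robot's parameters:

    import math

    KT = 0.8        # motor torque constant, N*m per A (hypothetical)
    GEAR = 50.0     # gearbox ratio
    MASS = 2.0      # arm mass, kg
    L_COM = 0.3     # distance from joint to the arm's centre of mass, m
    G = 9.81

    def external_torque(current_amps, joint_angle_rad):
        """Estimate torque from outside contact by subtracting the gravity term."""
        joint_torque = KT * GEAR * current_amps
        gravity_torque = MASS * G * L_COM * math.cos(joint_angle_rad)
        return joint_torque - gravity_torque

    # Arm held horizontally: almost all of the current goes to fighting gravity.
    print(external_torque(current_amps=0.16, joint_angle_rad=0.0))
    # The same current with the arm hanging straight down implies something
    # external is pushing on it.
    print(external_torque(current_amps=0.16, joint_angle_rad=-math.pi / 2))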

12. fragmede ◴[] No.45536662[source]
Time will tell if that's true. We don't have the same corpus of data, that's true, but what we do have is the ability to make a digital twin, where the robot practices in a virtual world. It can do 10,000 jumping jacks every hour, parallelized across a whole GPU supercomputer, and that data can be fed back in as training data.
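A quick back-of-the-envelope for that throughput claim -- every number here is an assumption chosen only to illustrate the multiplicative scaling, not a measurement:

    SECONDS_PER_JACK = 3.6   # wall-clock time for one jumping jack (assumed)
    SIM_SPEEDUP = 10         # each simulated copy runs this much faster than real time
    COPIES = 1000            # robot copies stepped in parallel across the cluster

    per_copy_per_hour = SIM_SPEEDUP * 3600 / SECONDS_PER_JACK
    print(f"{per_copy_per_hour:,.0f} jacks/hour per copy, "
          f"{per_copy_per_hour * COPIES:,.0f} across the cluster")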