
392 points lairv | 1 comments | | HN request time: 0s | source
HAL3000 ◴[] No.45528648[source]
All of the examples in videos are cherry picked. Go ask anyone working on humanoid robots today, almost everything you see here, if repeated 10 times, will enter failure mode because the happy path is so narrow. There should really be benchmarks where you invite robots from different companies, ask them beforehand about their capabilities, and then create an environment that is within those capabilities but was not used in the training data, and you will see the real failure rate. These things are not ready for anything besides tech demos currently. Most of the training is done in simulations that approximate physics, and the rest is done manually by humans using joysticks (almost everything they do with hands). Failure rates are staggering.
ipnon ◴[] No.45529270[source]
Now the question is whether this is GPT-2 and we're a decade away from autonomous androids given some scaling and tweaks, or whether autonomous androids are just an extremely hard problem.
kibwen ◴[] No.45529610[source]
For LLMs, the input is text, and the output is text. By the time of GPT-2, the internet contained enough training data to make training an interesting LLM feasible (as judged by its ability to output convincing text).

We are nowhere near that point for autonomous robots, and it's not even funny. To extend the internet analogy to robotics: we are pre-ARPANET, pre-ASCII, pre-transistor. We don't even have the sensors that would make safe household humanoid robots possible. Any theater from robot companies about trying to train a neural net based on motion capture is laughably foolish. At the current rate of progress, we are more than decades away.

ACCount37 ◴[] No.45533182[source]
Robotics has a big training data problem. But your "we don't have the sensors" claim is absolutely laughable.

It was never about the sensors. It was always about AI.

kibwen ◴[] No.45533786[source]
No, it doesn't matter if you have a hypergenius superintelligence if it's locked in a body with no hardware support for useful proprioception. You will not go to space today.
serf ◴[] No.45536197[source]
A 'hypergenius superintelligence' could achieve most, if not all, useful proprioception simply by looking at motor amperage draw, or, if that's unavailable, total system amperage draw.

An arm moving against gravity has a higher draw; the arc itself creates characteristic draw profiles; a motion or force against the arm or fingers generates a change in draw. A superintelligence would need only an ammeter to master proprioception, because human researchers can do this in a lab and they're nowhere near the bar of 'hypergenius superintelligence'.
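
The idea above can be sketched in a few lines. This is only an illustration, not a real controller: all the constants (torque constant, no-load current, gear ratio, threshold) are hypothetical placeholders for values a real system would get from per-motor calibration, and it assumes a brushed DC motor where output torque is roughly proportional to current.

```python
# Crude current-based proprioception for a single joint.
# Assumes torque ~ Kt * I for a DC motor; all constants are hypothetical.

KT = 0.05         # torque constant, N*m per amp (made up)
I_NOLOAD = 0.3    # no-load current in amps, lost to friction (made up)
GEAR_RATIO = 100  # gearbox multiplies output torque

def joint_torque(current_a: float) -> float:
    """Estimate joint output torque (N*m) from measured motor current (A)."""
    return KT * max(current_a - I_NOLOAD, 0.0) * GEAR_RATIO

def detect_contact(current_a: float, expected_torque: float,
                   threshold: float = 0.5) -> bool:
    """Flag an unexpected external load: measured torque well above what
    the planned motion (e.g. lifting the arm against gravity) should need."""
    return joint_torque(current_a) - expected_torque > threshold

# An arm moving against gravity draws more current than the no-load case;
# a push against the arm shows up as torque beyond the expected estimate.
print(joint_torque(1.3))            # torque at 1.3 A of measured current
print(detect_contact(1.3, 4.0))     # load exceeds expectation by > threshold
print(detect_contact(1.3, 4.8))     # within threshold of expectation
```

The design point is that `expected_torque` comes from a model of the commanded motion (gravity compensation, acceleration), so any residual between modeled and measured draw is information about the outside world, which is the commenter's claim in miniature.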