>Each robot had several human escorts, and the robots were limited to slow walking and a few slow hand gestures.
More importantly, the robots were limited to doing no real work. They just feebly picked up objects and placed them somewhere else, which I am pretty sure doesn't require AI.
For example, the vid shows the robot pouring hot water into a glass with a massive funnel strapped to it. Why not have the robot fill the kettle, drop in the teabag itself, etc.? That seems like the kind of capability that should be developed before walking and talking and telling jokes.
What if the refrigerator, microwave, etc. could interface directly with the robot? For example, the refrigerator could have some type of robotized shelf that brings the rack of orange juice to the front before the robot comes over to grab it. What if the microwave could focus its beam on the food to cook it evenly?
It also irks me how the robots are just humanoids. Like, for example, why have a head with two eyes? Does it need to wear a helmet? Does it need exactly two eyes in exactly human-like placement to achieve stereopsis? Why not have three eyes? Did the designers think about the form of the machine at all, or did they just produce robots in the form that is associated with the most hype and thus will bring in the most investor capital? Is this really the ideal form for interfacing with humans? With other robots?
I am just very skeptical of these companies that want to go from zero to doing everything. By the time they build a robot that can do "everything", who is to say they will even be able to keep it proprietary? The "everything robot" might just be built out of general-purpose components and software at that point. Why not make a machine that does a limited set of tasks well and build from there?
Sorry, https://blog.comma.ai/a-100x-investment-part-2/ has me coping and seething at the AI space.