
209 points | alexcos | 1 comment
contingencies ◴[] No.44414317[source]
This is interesting for generalized problems ("make me a sandwich") but not useful for most real world functions ("perform x within y space at z cost/speed"). I think the number of people on the humanoid bandwagon trying to implement generalized applications is staggering right now. The physics tells you they will never be as fast as purpose-built devices, nor as small, nor as cheap. That's not to say there's zero value there, but really we're - uh - grasping at straws...
replies(6): >>44414348 #>>44414389 #>>44414391 #>>44415158 #>>44418878 #>>44419551 #
jjangkke ◴[] No.44414391[source]
Very good point! This area faces the same misalignment of goals that is rampant with today's LLMs: trying to be a generic, one-size-fits-all solution.

We made a sandwich but it cost you 10x more than it would a human and slower might slowly become faster and more efficient but by the time you get really good at it, its simply not transferable unless the model is genuinely able to make the leap across into other domains that humans naturally do.

I'm afraid this is where the barrier between general intelligence and human intelligence lies. With enough of these geospatial motor skill databases, we might get something that mimics humans very well but still runs into problems at the edge, and this last-mile problem really is a hindrance in so many domains where we come close but never complete.

I wonder if this will change with new kinds of computing, and with new ways of interfacing with digital systems (without a mouse or keyboard); perhaps that could close the 'last mile gap'.

replies(1): >>44414618 #
esjeon ◴[] No.44414618[source]
Note that the username here is a Korean derogatory term for Chinese people.
replies(1): >>44418445 #
jcrawfordor ◴[] No.44418445[source]
It's an interesting comment: it has the same "compliment the OP, elaborate, raise a further question" format I've seen used by apparently LLM-generated spam accounts on HN. But the second paragraph is so incoherently structured that I have a hard time believing an LLM produced it.