
170 points | bookofjoe | 1 comment
slibhb No.43644865
LLMs are statistical models trained on human-generated text. They aren't the perfectly logical "machine brains" that Asimov and others imagined.

The upshot of this is that LLMs are quite good at the stuff he thought only humans would be able to do. What they aren't so good at (yet) is really rigorous reasoning, exactly the opposite of what 20th-century people assumed.

replies(5): >>43645899 #>>43646817 #>>43647147 #>>43647395 #>>43650058 #
beloch No.43646817
What we think of as "AI" at one point in time becomes a mere "algorithm" or "automation" at another. A lot of what Asimov predicted has come to pass, very much in the way he saw it. We just no longer think of it as "AI".

LLMs are just the latest form of "AI" that, for a change, doesn't quite fit Asimov's mold. Perhaps that's because they're being designed to replace humans in creative tasks rather than liberate humans to pursue them.

replies(2): >>43647847 #>>43648825 #
israrkhan No.43647847
Exactly... as someone said, "I need AI to do my laundry and dishes, while I focus on art and creative stuff." But AI is doing the exact opposite, i.e. creative stuff (drawing, poetry, coding, document creation, etc.), while we are left to do the dishes and laundry.
replies(4): >>43648114 #>>43648246 #>>43649501 #>>43653897 #
__MatrixMan__ No.43649501
We ended up here because we have a propensity to share our creative outputs, and keep our laundry habits to ourselves. If we instead went around bragging about how efficiently we can fold a shirt, complete with mocap datasets of how it's done, we'd have gotten the other kind of AI first.
replies(1): >>43650075 #
hn_throwaway_99 No.43650075
> We ended up here because we have a propensity to share our creative outputs, and keep our laundry habits to ourselves

Somehow I doubt that the reason gen AI is way ahead of laundry-folding robots is that folding a shirt is some kind of big secret, or that there aren't enough examples of shirt folding.

Manipulating a physical object like a shirt (especially a piece of cloth, as opposed to a rigid object) is orders of magnitude more complex than completing a text string.

replies(1): >>43650879 #
__MatrixMan__ No.43650879
If you wanted finger-positioning data for how millions of different people fold thousands of different shirts, where would you go looking for that dataset?

My point is just that the availability of training data is vastly different between these cases. If we want better AI, we're probably going to have to generate huge curated datasets for mundane things that we've never considered worth capturing before.

An unfortunate quirk of what we choose to share with each other has positioned AI to do art and not laundry.