Edit: oh and steel capped boots.
Edit 2: and a face shield and ear defenders. I'm all tuckered out like Grover in his own alphabet.
Image-generating AIs are really good at producing passable human forms, but they'll fail at generating anything realistic for dice, even though dice are just cubes with marks on them. Ask them to illustrate the Platonic solids, which you can find well-illustrated with a Google image search, and you'll get a bunch of lumps, some of which might resemble the intended shapes. They don't understand the concepts: they just work off probability. But the results look fairly good in domains like human forms, because they've been specially trained on them.
LLMs seem amazing in a relatively small number of problem domains, and they seem amazing precisely because they've been extensively trained on those domains. When you ask for something outside them, their failure to work from inductions about reality (like "dice are a species of cube, differentiated from other cubes by having dots on them") or to apply concepts becomes patent, and the chainsaw looks a lot like an adze that you spend more time correcting than getting correct results from.
This feels like that: a "student" who can produce the right answers as long as you stick to a certain set of questions he's already been drilled on through repetition, but who is hopeless with anything outside that set, even when someone who actually understood the material could easily reason from it to the new question.