The problem, I find, is that they then don't stop or say they don't know (unless explicitly prompted to do so); they just make stuff up and express it with just as much confidence.
The hard problem, then, is not to eliminate non-deterministic behavior, but to find a way to control it so that it produces what you want.
Using this image - https://www.whimsicalwidgets.com/wp-content/uploads/2023/07/... and the prompt: "Generate a video demonstrating what will happen when a ball rolls down the top left ramp in this scene."
You'll see it struggles - https://streamable.com/5doxh2 - which is often the case with video gen. You have to describe things carefully to orchestrate natural-feeling motion and interactions.
You're welcome to try with any other models but I suspect very similar results.
I like to think that AI are the great apes of the digital world.
They don't have the dexterity to really sign properly.
Rather than check in something half-broken, I’m pausing here. Let me know how you want to
proceed—if you can land the upstream refactor (or share a stable snapshot of the tests/module),
I can pick this up again and finish the review fixes in one go."

Which shouldn't come as a surprise, considering that this is, at the core of things, what language models do: generate sequences that are statistically likely according to their training data.
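To make "statistically likely" concrete, here is a toy sketch of a single next-token sampling step; the vocabulary and scores are invented purely for illustration:

    import numpy as np

    # Toy next-token step: made-up scores ("logits") for a tiny vocabulary.
    vocab = ["the", "cat", "sat", "flew", "quantum"]
    logits = np.array([2.0, 1.5, 1.2, 0.3, -1.0])

    def sample_next(logits, temperature=1.0, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        # Softmax with temperature: lower T sharpens the distribution;
        # as T -> 0 this approaches greedy (deterministic) decoding.
        z = logits / temperature
        p = np.exp(z - z.max())
        p /= p.sum()
        return rng.choice(len(logits), p=p)

    # Same "prompt", same weights, different runs -> possibly different tokens.
    for _ in range(5):
        print(vocab[sample_next(logits, temperature=0.8)])

A real model does this over a vocabulary of tens of thousands of tokens at every step, but the picture is the same: whatever the training data made likely comes out, stated with the same confidence either way.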
At the very least, more than one researcher was involved and more than one ape was alleged to have learned ASL. There is a better discussion to be had about what our threshold is for speech, along with our threshold for saying that research is fraud vs. mistaken, but we don’t fix sloppiness by engaging in more of it.
But this is unlikely, because they can still fall over pretty badly on things that are definitely in the training set, and can still succeed with things that definitely are not in the training set.
This seems like a rather awkward way of putting it. They may just lack conceptualization or abstraction, making the above statement meaningless.
Weirder still was this lawsuit against Patterson:
> The lawsuit alleged that in response to signing from Koko, Patterson pressured Keller and Alperin (two of the female staff) to flash the ape. "Oh, yes, Koko, Nancy has nipples. Nancy can show you her nipples," Patterson reportedly said on one occasion. And on another: "Koko, you see my nipples all the time. You are probably bored with my nipples. You need to see new nipples. I will turn my back so Kendra can show you her nipples."[47] Shortly thereafter, a third woman filed suit, alleging that upon being first introduced to Koko, Patterson told her that Koko was communicating that she wanted to see the woman's nipples
There was a bonobo named Kanzi who learned hundreds of lexigrams. The main criticism here seems to be that while Kanzi truly did know the symbol for “strawberry”, he “used the symbol for “strawberry” as the name for the object, as a request to go where the strawberries are, as a request to eat some strawberries”. So no object-verb sentences, and thus no grammar, which means no true language according to linguists.
Heisenberg would disagree.
Great distinction. The stuff about showing nipples sounds creepy.
I haven't tried this, but if you ask the LLM the exact same question again, in a different process, will you get a different answer?
Wouldn't that mean we should, most of the time, ask the LLM each question multiple times, to see if we get a better answer next time?
A bit like asking the same question of multiple different LLMs, just to be sure.
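A minimal sketch of that idea, assuming the OpenAI Python client purely as a stand-in (the model name is a placeholder; any provider with a temperature setting works the same way): sample the same question several times and take the majority answer.

    from collections import Counter
    from openai import OpenAI  # assumption: openai>=1.x Python client

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(prompt, temperature=1.0):
        # One sampled answer; with temperature > 0, repeated calls can differ.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; use whatever model you have
            messages=[{"role": "user", "content": prompt}],
            temperature=temperature,
        )
        return resp.choices[0].message.content.strip()

    def ask_many(prompt, n=5):
        # Ask the same question n times and take the most common answer.
        answers = [ask(prompt) for _ in range(n)]
        return Counter(answers).most_common(1)[0][0]

    print(ask_many("Answer with just the year: when was the first transatlantic telegraph cable completed?"))

This is basically self-consistency sampling: agreement across samples is a rough confidence signal, and it helps most on questions with a short, checkable answer rather than open-ended prose.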
If you ask it to innovate and come up with something not in its training data, what do you think it will do? It'll "look at" its training data and regurgitate (predict) something labelled as innovative.
You can put a reasoning cap on a predictor, but it's still a predictor.
It turns the statement of a fact into a kind of rhetorical device.
It is the difference between saying "I am a biological entity" and "I am just a biological entity". There are all kinds of connotations that come along for the ride with the latter statement.
Then there is the counter: the romantic statement that "I am not just a biological entity".