74 points voytec | 4 comments
whycome ◴[] No.42177046[source]
Incidentally, I turned this off today. I suspect it's terrible for battery life, and I will find out. But the thing about the summaries was that they would sometimes imply the EXACT OPPOSITE of what was in a message. I had a few stomach-dropping moments reading a summary, only to open the actual thread and find it was nowhere close. It's one of those "it's not even wrong" situations, and I don't know how it got fucked up this badly. The texts themselves weren't complicated either. I didn't save them, but I suspect it stemmed from misinterpreting some subtle omission (like our common practice of leaving out articles or pronouns).
replies(1): >>42177400 #
1. jiggawatts ◴[] No.42177400[source]
The current AIs are pretty bad at handling negation, especially when the models are small and quantised. To be fair, so are humans: double, triple, or even higher negatives can trip people up.

This effect of smaller models being bad at negation is most obvious in image generators, most of which are only a handful of gigabytes in size. Prompt one with “don’t show an elephant next to the circus tent!” and you will definitely get an elephant.

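You can reproduce the text-model side of this in a few lines of Python. A rough sketch using the Hugging Face transformers library and a deliberately small model (outputs vary by model and quantisation):

    # Probe a small model's handling of an explicit negation.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")  # tiny on purpose

    prompt = "Describe the scene. Important: do NOT mention an elephant.\nScene:"
    result = generator(prompt, max_new_tokens=40, do_sample=False)
    print(result[0]["generated_text"])
    # Small models frequently mention the forbidden word anyway; larger
    # instruction-tuned models respect the negation far more reliably.
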
replies(1): >>42177509 #
2. echoangle ◴[] No.42177509[source]
Isn’t the negative prompting thing with image generators just how they work? As far as I understand, the problem is that training data isn’t normally annotated with “no elephant” on all the images without elephants, so putting “no elephant” in the prompt most closely matches training data that’s annotated with “elephant” and includes elephants. The image models aren’t really made to understand proper sentences, I think.
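That mismatch is why most diffusion toolkits expose negation as a separate input rather than trusting the prompt text. A rough sketch with the diffusers library (the checkpoint name is just an example; any Stable Diffusion model works the same way):

    # "no elephant" in the prompt still activates "elephant";
    # the negative_prompt argument steers generation away from it.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    likely_elephant = pipe("a circus tent, no elephant").images[0]
    no_elephant = pipe("a circus tent", negative_prompt="elephant").images[0]
    no_elephant.save("tent.png")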
replies(1): >>42177802 #
3. jiggawatts ◴[] No.42177802[source]
Yes, but it’s more complex than that! Ask “who is Tom Cruise’s mother?” and you will get a much more robust response than asking “who is Mary Lee Pfeiffer’s son?”, even though Mary Lee Pfeiffer is Tom Cruise’s mother, so the two questions encode the same fact.

It’s not just negation that models struggle with, but also reversing the direction of any arrow connecting facts, or wandering too far from established patterns of any kind. This has been studied scientifically (the “reversal curse”), and it’s one of the most fascinating aspects of these models because it also reveals weaknesses and flaws in human thinking.

Researchers are already trying to fix this problem by generating synthetic training data that includes negations and reversals.

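A toy version of that augmentation is easy to sketch; real pipelines do this at scale over extracted fact triples, but the idea is just to emit every fact in both directions (the triples and templates here are made up for illustration):

    # For each fact triple, also generate the reversed and a negated
    # statement, so training sees the arrow in both directions.
    FACTS = [
        ("Tom Cruise", "is the son of", "Mary Lee Pfeiffer"),
    ]
    REVERSE = {"is the son of": "is the mother of"}
    DISTRACTOR = "Angela Merkel"  # arbitrary wrong answer for the negated form

    def augment(subj: str, rel: str, obj: str) -> list[str]:
        return [
            f"{subj} {rel} {obj}.",                                    # forward
            f"{obj} {REVERSE[rel]} {subj}.",                           # reversed
            f"{subj} {rel.replace('is', 'is not', 1)} {DISTRACTOR}.",  # negated
        ]

    for fact in FACTS:
        print("\n".join(augment(*fact)))
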
That makes you wonder: would the same approach also improve the robustness of human education?

replies(1): >>42178298 #
4. whycome ◴[] No.42178298{3}[source]
This is a super interesting line of info, thank you! I didn't think of it as a negation-specific challenge, but that's a really cool insight.

"Don't think of an elephant."

It's actually interesting how often we have to guess from context that someone dropped a "not" in conversation.

It wouldn't be hard to have an iMessage bot (e.g. on a Mac) running to test some of this out on the fly; see the sketch below.
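Rough sketch of the Mac side, assuming the usual Messages database at ~/Library/Messages/chat.db (requires Full Disk Access, and the schema shifts between macOS versions, so treat the query as illustrative):

    # Pull recent iMessage texts from the local Messages database and
    # hand each one to whatever summarizer you want to stress-test.
    import sqlite3
    from pathlib import Path

    DB = Path.home() / "Library" / "Messages" / "chat.db"

    conn = sqlite3.connect(f"file:{DB}?mode=ro", uri=True)  # read-only
    rows = conn.execute(
        "SELECT text FROM message "
        "WHERE text IS NOT NULL ORDER BY date DESC LIMIT 20"
    ).fetchall()
    conn.close()

    for (text,) in rows:
        # Swap this print for a call to your summarizer of choice, then
        # compare its summary against the raw message by hand.
        print(text)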