
539 points donohoe | 2 comments
steveBK123 [dead post] No.44511769
[flagged]
ceejayoz No.44511884
The other LLMs don't have a "disbelieve reputable sources" unsafety prompt added at the owner's instructions.
replies(2): >>44511947 #>>44512590 #
steveBK123 No.44511947
It's gotta be more than that too though. Maybe training data other companies won't touch? Hidden prompt they aren't publishing? Etc.

Clearly Musk has put his thumb on the scale in multiple ways.

replies(4): >>44512280 #>>44512305 #>>44513674 #>>44515749 #
1. overfeed No.44512280
> Maybe training data other companies won't touch

That's a bingo. Three weeks ago, Musk invited[1] X users to turn Grok into Microsoft's Tay[2] by asking them to share "divisive facts", then presumably fed the more than 10,000 responses into the training/fine-tuning data set.

1. https://x.com/elonmusk/status/1936493967320953090

2. In 2016, Microsoft let its Tay chatbot interact with, and learn from, Twitter users; it was praising Hitler in short order. They relaunched it once, too, before shutting it down permanently. https://en.m.wikipedia.org/wiki/Tay_(chatbot)

replies(1): >>44516377 #
2. epakai No.44516377
That tweet seems like the bigger story.

I've seen lots of deflection claiming Yaccarino chose to retire before the Grok/MechaHitler incident, but the tweet predates that.

There's even more deflection about how chatbots are easy to bait into saying weird things, but you don't need to bait a model that has been specifically trained to say them.

All of this was intentional. Musk is removing more of the mask, and he no longer needs Yaccarino to reassure advertisers.