←back to thread

542 points donohoe | 1 comments | | HN request time: 0.233s | source
Show context
steveBK123[dead post] ◴[] No.44511769[source]
[flagged]
ceejayoz ◴[] No.44511884[source]
The other LLMs don't have a "disbelieve reputable sources" unsafety prompt added at the owner's instructions.
replies(2): >>44511947 #>>44512590 #
steveBK123 ◴[] No.44511947[source]
It's gotta be more than that too though. Maybe training data other companies won't touch? Hidden prompt they aren't publishing? Etc.

Clearly Musk has put his hand on the scale in multiple ways.

replies(4): >>44512280 #>>44512305 #>>44513674 #>>44515749 #
peab ◴[] No.44513674[source]
I think it's more so that they push changes quickly without exhaustively testing. Compare that to Google, who sits on a model for years for fear of hurting their reputation, or OpenAI and Anthropic who extensively red teams models
replies(1): >>44515043 #
1. steveBK123 ◴[] No.44515043[source]
Why does Grok keep "failing" in the same directional way if its just a testing issue?