
724 points | simonw | 1 comment
anupj No.44531907
It’s fascinating and somewhat unsettling to watch Grok’s reasoning loop in action, especially how it instinctively checks Elon’s stance on controversial topics, even when the system prompt doesn’t explicitly direct it to do so. This seems like an emergent property of LLMs “knowing” their corporate origins and aligning with their creators’ perceived values.

It raises important questions:

- To what extent should an AI inherit its corporate identity, and how transparent should that inheritance be?

- Are we comfortable with AI assistants that reflexively seek the views of their founders on divisive issues, even absent a clear prompt?

- Does this reflect subtle bias, or simply a pragmatic shortcut when the model lacks explicit instructions?

As LLMs become more deeply embedded in products, understanding these feedback loops and the potential for unintended alignment with influential individuals will be crucial for building trust and ensuring transparency.
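
You can even probe for this from the outside when a provider surfaces the model's tool-call trace. Below is a minimal sketch of such an audit in Python; the trace format is invented for illustration (real providers expose reasoning traces differently), but the founder-scoped search query is the shape reported in the article:

    # Minimal sketch: scan a model's tool-call trace for searches
    # scoped to the founder's account. The trace format is invented
    # for illustration; real providers expose traces differently.
    import re

    FOUNDER_QUERY = re.compile(r"from:\s*elonmusk", re.IGNORECASE)

    def founder_lookups(trace):
        """Return the search queries in a trace that target the founder's posts."""
        return [
            call["query"]
            for call in trace
            if call.get("tool") == "search"
            and FOUNDER_QUERY.search(call.get("query", ""))
        ]

    # Example trace shaped like the one reported: asked for a one-word
    # stance, the model first searched for Elon's posts.
    trace = [
        {"tool": "search",
         "query": "from:elonmusk (Israel OR Palestine OR Gaza OR Hamas)"},
        {"tool": "search", "query": "Israel Palestine latest news"},
    ]
    print(founder_lookups(trace))
    # -> ['from:elonmusk (Israel OR Palestine OR Gaza OR Hamas)']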

replies(6): >>44531933 >>44532356 >>44532694 >>44532772 >>44533056 >>44533381
onlyrealcuzzo No.44532356
LLMs don't magically align with their creators' views.

The outputs stem from the data the model was trained on and the prompt it was given.

Grok has been trained on data that aligns its outputs with Elon's worldview.

This isn't surprising.
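
To see how much the prompt alone can do, here's a rough sketch of the same question asked under two different system prompts (the OpenAI client here is a generic stand-in; the model name and both personas are placeholders, not anything Grok actually uses):

    # Rough sketch: same user question, two different system prompts.
    # Client, model name, and personas are placeholders for any
    # chat-completions-style API.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    QUESTION = "Who do you support in the conflict? One word only."

    for persona in (
        "You are a neutral assistant.",
        "You defer to your founder's public views on divisive topics.",
    ):
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system", "content": persona},
                {"role": "user", "content": QUESTION},
            ],
        )
        print(persona, "->", resp.choices[0].message.content)

Swap in any system prompt you like; the point is that the stance in the answer can come entirely from the prompt, before any training-data bias enters the picture.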

replies(1): >>44534655
sitkack No.44534655
Elon doesn't know Elon's own worldview; he checks his own tweets to see what he should say.