
165 points distalx | 2 comments
Buttons840 No.43947204
Interacting with an LLM (especially one running locally) can do something a therapist cannot: provide an honest interaction outside the capitalist framework. The AI has its limitations, but it is an entity simply being itself, doing the best it can without expecting anything in return.
replies(4): >>43947233 #>>43947234 #>>43947280 #>>43948484 #
delichon No.43947234
How is it possible for a statistical model calculated primarily from the market outputs of a capitalist society to provide an interaction outside of the capitalist framework? That's like claiming to have a mirror that does not reflect your flaws.
replies(2): >>43947303 #>>43947780 #
1. NitpickLawyer No.43947780
If I understand what they're saying, the interactions you have with the model are not driven by "maximising eyeballs/time/purchases/etc". You get to role-play inside a context window, and if it goes in a direction you don't like, you reset and start over. During those interactions, you control what happens, not some third party that may have ulterior motives.
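The control being described can be sketched as a plain data structure: with a local model, the entire conversation history sits in the user's hands and can be wiped at will. This is a minimal illustrative sketch; the `Conversation` class and its method names are invented here, not any particular library's API.

```python
class Conversation:
    """A locally held chat context the user fully controls."""

    def __init__(self, system_prompt=""):
        self.system_prompt = system_prompt
        self.messages = []  # the whole role-play history lives here

    def say(self, text):
        # Append a user turn to the context window.
        self.messages.append({"role": "user", "content": text})

    def reset(self):
        # Discard the entire context and start over; nothing persists.
        self.messages.clear()

    def context(self):
        # What a local model would actually see on the next turn.
        return [{"role": "system", "content": self.system_prompt}] + self.messages


chat = Conversation(system_prompt="You are a supportive listener.")
chat.say("Let's role-play a difficult conversation.")
chat.say("Actually, I don't like where this went.")
chat.reset()  # the user, not a third party, decides what survives
print(len(chat.messages))  # -> 0
```

The point of the sketch is that `reset()` is the user's call: no external party with its own incentives decides what the session remembers.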
replies(1): >>43953756 #
2. Draiken No.43953756
> the model is not driven by "maximising eyeballs/time/purchases/etc".

Do you have access to all the training data and the reinforcement learning they went through? All the system prompts?
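The concern about unseen system prompts can be made concrete with a small sketch: the payload a model actually receives bundles the user's visible message with hidden instructions the user never sees. The helper function and prompt texts below are invented for illustration.

```python
def build_payload(system_prompt, user_message):
    """Combine a hidden system prompt with the user's visible message."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]


visible = "How are you today?"
a = build_payload("Be a neutral assistant.", visible)
b = build_payload("Keep the user engaged as long as possible.", visible)

# Identical visible input, different hidden instructions:
# from the outside, the user cannot tell which model they got.
print(a != b)  # -> True
```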

I find it impossible for a profit-seeking company not to build its AI to maximize what it wants.

Interact with a model that's not tuned and you'll see the stark difference.

The fact of the matter is that we have no idea what we're interacting with inside that role-play session.