
165 points distalx | 7 comments
Buttons840 ◴[] No.43947204[source]
Interacting with an LLM (especially one running locally) can do something a therapist cannot--provide an honest interaction outside the capitalist framework. The AI has its limitations, but it is an entity just being itself, doing the best it can, without expecting anything in return.
replies(4): >>43947233 #>>43947234 #>>43947280 #>>43948484 #
1. delichon ◴[] No.43947234[source]
How is it possible for a statistical model calculated primarily from the market outputs of a capitalist society to provide an interaction outside of the capitalist framework? That's like claiming to have a mirror that does not reflect your flaws.
replies(2): >>43947303 #>>43947780 #
2. Buttons840 ◴[] No.43947303[source]
The same way an interaction with a purebred dog can be. The dog may have come from a capitalist system (dogs are bred for money, unfortunately), but your personal interactions with the dog are not about money.

I've never spoken to a therapist without paying $150 an hour up front. They were helpful, but they were never "in my life"--just a transaction--a worthwhile transaction, but still a transaction.

replies(2): >>43947363 #>>43947705 #
3. germinalphrase ◴[] No.43947363[source]
It’s also very common for people to get therapy free or at minimal cost (<$50) when using insurance. Long-term relationships (off and on) are also quite common. Whether or not the therapist takes insurance is a choice, and it’s true that they almost always make more by requiring cash payment instead.
replies(1): >>44012973 #
4. amanaplanacanal ◴[] No.43947705[source]
The dog's intelligence and personality were bred long before our capitalist system existed, unlike whatever nonsense an LLM is trying to sell you.
5. NitpickLawyer ◴[] No.43947780[source]
If I understand what they're saying, the interactions you have with the model are not driven by "maximising eyeballs/time/purchases/etc". You get to role-play inside a context window, and if it goes in a direction you don't like, you reset and start over. But during those interactions, you control whatever happens, not some third party that may have ulterior motives.
replies(1): >>43953756 #
6. Draiken ◴[] No.43953756[source]
> the model is not driven by "maximising eyeballs/time/purchases/etc".

Do you have access to all the training data and the reinforcement learning they went through? All the system prompts?

I find it impossible that a profit-seeking company would not build its AI to maximize what it wants.

Interact with a model that's not tuned and you'll see the stark difference.

The fact of the matter is that we have no idea what we're interacting with inside that role-play session.

7. arvinsim ◴[] No.44012973{3}[source]
80% of the people in the world can't afford insurance or a therapist. LLMs, on the other hand, are just one sign up away.