
747 points porridgeraisin | 2 comments
34679 No.45063452
I'd bet this is related to their recent decision to boot people for being "abusive" to Claude. It now seems that was an attempt to keep their training data friendly.
replies(1): >>45064167 #
1. cactca No.45064167
This! Any LLM provider that monitors chat/API history for ‘abuse’ towards the model is considering using user data for training.

An Effective Altruism ethos provides moral/ethical cover for trampling individual privacy and property rights. Consider their recent decision to provide services for military projects.

As others have pointed out, Claude was trained using data expressly forbidden for commercial reuse.

The only feedback Anthropic will heed is financial, and the impact must be large enough to destroy their investors' willingness to cover the losses. That financial pressure can come from three places: termination of a large fraction of their B2B contracts; software devs organizing a persistent mass migration to an open-source model for software development; or a mass filing of data deletion requests from California and EU residents and corporations, repeated every week. The first two are unlikely to happen in the next three months.

replies(1): >>45064702 #
2. 34679 No.45064702
Maybe I'll use the remainder of my subscription time to help improve Void. It's already pretty good.

https://voideditor.com/

https://github.com/voideditor/void