
747 points porridgeraisin | 6 comments
1. lewdwig No.45062906
TBH I’m surprised it’s taken them this long to change their mind on this, because I find it incredibly frustrating to know that current gen agentic coding systems are incapable of actually learning anything from their interactions with me - especially when they make the same stupid mistakes over and over.
replies(3): >>45063010 #>>45063492 #>>45063725 #
2. nicce No.45063010
Or get more value from the users with the same subscription price. I doubt they are giving any discounts.
replies(1): >>45063458 #
3. diggan No.45063458
It's actually pretty clever (albeit shitty/borderline evil): start off by saying you're different from the competitors because you care a lot about privacy and safety, and that's why you're charging higher prices than the rest. Then, once you have a solid user base, slowly turn up the heat, step by step, until you end up with higher prices yet the same behavior as the competitors.
4. vjerancrnjak No.45063492
They wouldn’t be able to learn much from interactions anyway.

The learning metric won't be you; it will be some global shitty metric that makes the service mediocre over time.

5. const_cast No.45063725
Okay, they're not going to be learning in real time. It's not like you're getting your data stolen and then getting something out of it - you're not. What you're talking about is context.

Data gathered for training still has to actually be used in training, i.e. in a new model that, presumably, takes months to develop and train.
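
To make the distinction concrete, here's a minimal sketch of how "memory" in current agentic coding tools actually works (the file name and helper functions here are hypothetical): corrections persist only as text that gets re-injected into the context window on every request, while the model's weights stay frozen between sessions.

    from pathlib import Path

    MEMORY_FILE = Path("PROJECT_NOTES.md")  # hypothetical per-project memory file

    def record_correction(note: str) -> None:
        """Persist a correction as plain text so future sessions can re-read it."""
        with MEMORY_FILE.open("a", encoding="utf-8") as f:
            f.write(f"- {note}\n")

    def build_prompt(user_message: str) -> str:
        """Prepend accumulated notes to every request. The 'memory' lives
        entirely in the prompt; the model itself never changes."""
        notes = MEMORY_FILE.read_text(encoding="utf-8") if MEMORY_FILE.exists() else ""
        return f"Project conventions:\n{notes}\nTask:\n{user_message}"

    record_correction("Use tabs, not spaces, in Makefiles.")
    print(build_prompt("Add a 'clean' target to the Makefile."))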

Not to mention your drop-in-the-bucket contribution will have next to no influence on the next model. It won't catch things specific to YOUR workflow, just common stuff across many users.
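
Rough numbers, purely for illustration (both figures below are invented):

    # Back-of-envelope: one heavy user's transcripts vs. a frontier-scale corpus.
    user_tokens = 50_000_000             # hypothetical: one user's annual chat tokens
    corpus_tokens = 15_000_000_000_000   # hypothetical: a ~15T-token training mix
    print(f"user share of corpus: {user_tokens / corpus_tokens:.6%}")  # 0.000333%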

replies(1): >>45065006 #
6. ethagnawl No.45065006
> Not to mention your drop-in-the-bucket contribution will have next to no influence on the next model. It won't catch things specific to YOUR workflow, just common stuff across many users.

I wonder about this. In the future, if I correct Claude when it makes fundamental mistakes about some topic like an exotic programming language, wouldn't those corrections be very valuable? The signal-to-noise ratio in these cases (where there are few external resources for it to mine) seems like it should be quite high, and you'd think they'd factor that in during the next training cycle.
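
If a lab wanted to factor that in, one naive approach - purely a sketch, with made-up domains and counts; nothing suggests Anthropic actually does this - would be to upweight samples from underrepresented domains during data curation:

    import math
    from collections import Counter

    # Toy corpus: (domain, sample) pairs; domains and counts are invented.
    samples = [
        ("python", "fix off-by-one in loop"),
        ("python", "rename variable"),
        ("python", "add type hints"),
        ("apl", "correct rank operator usage"),  # rare-domain correction
    ]
    domain_counts = Counter(domain for domain, _ in samples)

    def sample_weight(domain: str) -> float:
        """Inverse-log-frequency: rarer domains get larger training weights."""
        return 1.0 / math.log1p(domain_counts[domain])

    for domain, text in samples:
        print(f"{domain:>6}  w={sample_weight(domain):.2f}  {text}")
    # The lone APL correction (w=1.44) counts for roughly twice
    # any one of the common Python samples (w=0.72).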