747 points | porridgeraisin | 2 comments

lewdwig ◴[] No.45062906[source]
TBH I’m surprised it’s taken them this long to change their mind on this, because I find it incredibly frustrating to know that current gen agentic coding systems are incapable of actually learning anything from their interactions with me - especially when they make the same stupid mistakes over and over.
replies(3): >>45063010 #>>45063492 #>>45063725 #
1. const_cast ◴[] No.45063725[source]
Okay, they're not going to be learning in real time. It's not like you're getting your data stolen and then getting something out of it - you're not. What you're talking about is context.
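(Rough sketch of what "context" means here, purely illustrative: the "memory" lives in text that gets re-injected into the prompt each session, not in the model's weights. The file name and helpers below are hypothetical, not any vendor's actual memory feature.)

    import json
    from pathlib import Path

    MEMORY_FILE = Path("corrections.json")  # hypothetical local store

    def load_corrections() -> list[str]:
        # Load corrections saved in earlier sessions, if any.
        if MEMORY_FILE.exists():
            return json.loads(MEMORY_FILE.read_text())
        return []

    def save_correction(note: str) -> None:
        # Persist a user correction so future sessions can re-read it.
        notes = load_corrections()
        notes.append(note)
        MEMORY_FILE.write_text(json.dumps(notes, indent=2))

    def build_prompt(task: str) -> str:
        # Prepend remembered corrections to the task; the model itself never changes.
        preamble = "\n".join(f"- {n}" for n in load_corrections())
        return f"Known corrections from past sessions:\n{preamble}\n\nTask: {task}"

    save_correction("In this repo, prefer explicit type annotations.")
    print(build_prompt("Refactor utils.py"))

That's all "remembering" amounts to today: better scaffolding around a frozen model.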

Data gathered for training still has to be used in training, i.e. used to build a new model that, presumably, takes months to develop and train.

Not to mention your drop-in-the-bucket contribution will have next to no influence on the next model. It won't catch things specific to YOUR workflow, just common stuff across many users.

replies(1): >>45065006 #
2. ethagnawl ◴[] No.45065006[source]
> Not to mention your drop-in-the-bucket contribution will have next to no influence on the next model. It won't catch things specific to YOUR workflow, just common stuff across many users.

I wonder about this. In the future, if I correct Claude when it makes fundamental mistakes about some topic like an exotic programming language, wouldn't those corrections be very valuable? It seems like the signal-to-noise ratio in those cases (where there are few external resources to mine) should be treated as quite high and factored in during the next training cycle.
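One way a lab could factor that in - strictly a hypothetical sketch, not a claim about how Anthropic builds its data mix - is to upweight corrections from domains the corpus barely covers. The corpus, domain tags, and weighting scheme below are all invented for illustration:

    from collections import Counter

    # Invented examples of user corrections, tagged by domain.
    corpus = [
        ("python", "fix: off-by-one in range() loop"),
        ("python", "fix: mutable default argument"),
        ("python", "fix: missing await on coroutine"),
        ("unison", "correction: ability handlers use 'handle ... with'"),
    ]

    # How well-represented each domain already is.
    domain_counts = Counter(domain for domain, _ in corpus)

    # Inverse-frequency weighting: corrections from rare domains
    # (few external resources, few duplicates) count for more.
    weighted = [(text, 1.0 / domain_counts[domain]) for domain, text in corpus]

    for text, weight in weighted:
        print(f"{weight:.2f}  {text}")

In this toy mix the exotic-language correction ends up weighted 3x the common Python fixes, which is the intuition above - whether any lab actually does something like this is another question.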