pluc:
They've simply run out of data from which to fabricate legitimate-looking guesses. They can't create anything that doesn't already exist.
whazor:
But an LLM can certainly make up a lot of information that never existed before.
bob1029:
I strongly believe this runs into an information-theoretic constraint, akin to why perpetual motion machines don't work.

In theory, yes, you could generate an unlimited amount of data for the models, but how much of it is unique or valuable information? If you were to compress all this generated training data with a really good algorithm, how much actual information would remain?
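
One crude way to make that concrete: use compression ratio as a stand-in for information density. A minimal Python sketch (the sample data here is my own stand-in, and zlib is only a rough proxy for "a really good algorithm"):

    import os
    import zlib

    def compressed_ratio(data: bytes) -> float:
        # Compressed size over raw size: lower means more redundancy,
        # i.e. less actual information per byte.
        return len(zlib.compress(data, 9)) / len(data)

    # Highly repetitive "generated" text vs. incompressible random bytes.
    repetitive = b"The quick brown fox jumps over the lazy dog. " * 200
    noisy = os.urandom(len(repetitive))

    print(f"repetitive: {compressed_ratio(repetitive):.3f}")  # near 0
    print(f"random:     {compressed_ratio(noisy):.3f}")       # near 1

If retraining on model output mostly adds bytes that compress away to nothing, it isn't adding training signal, however many terabytes of it you generate.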

cruffle_duffle:
I sure hope there are some bright-eyed, bushy-tailed graduate students crafting a theorem to prove this, because it is absolutely a feedback loop.
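
To see why that loop is lossy, here's a toy simulation, under the big simplifying assumption that retraining on synthetic data amounts to resampling your own outputs (all numbers are purely illustrative):

    import random

    # Each "generation" of training data is drawn from the previous
    # generation's outputs. Watch the diversity of the data collapse.
    random.seed(0)
    data = list(range(1000))  # 1000 distinct "facts" in the real data

    for gen in range(1, 11):
        data = random.choices(data, k=len(data))  # train on own samples
        print(f"gen {gen}: {len(set(data))} distinct facts left")

Each generation's distinct set is a subset of the previous one, so under this toy model novelty can only be lost, never regained, unless real data is injected from outside the loop.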

... that being said, I'm sure there is plenty of additional "real data" that hasn't been fed to these models yet. For one thing, I think ChatGPT sucks so bad at Terraform because almost all the "real code" to train on is locked behind private repositories. There aren't many publicly available real-world Terraform projects to train on. Same with a lot of other similar languages and tools -- a lot of that knowledge is locked away as trade secrets and hidden in private document stores.

(That being said, Sonnet 3.5 is much, much, much better at Terraform than ChatGPT. It's much better at coding in general, but for Terraform it's night and day.)