
66 points by zdw | 1 comment
throwaway150 No.46187883
> You can’t make anything truly radical with it. By definition, LLMs are trained on what has come before. In addition to being already-discovered territory, existing code is buggy and broken and sloppy and, as anyone who has ever written code knows, absolutely embarrassing to look at.

I don't understand this argument. The same applies to books: all books teach you what has come before. Nobody says "you can't make anything truly radical with books." Radical things are built by people after they read those books. Why can't people build radical things after learning from or being assisted by LLMs?

preommr No.46189004
> Why can't people build radical things after learning from or being assisted by LLMs?

Because that's not how this is being marketed.

I agree with you completely. The best use case I've found for LLMs (and I say this as somebody who generates a lot of code with them) is as a research tool: an instantaneous and powerful substitute for the long-gone heyday of communities like mailing lists or Stack Overflow, where the experts and maintainers themselves would seemingly answer within a few hours on how something works.

But that's not enough to justify all the money being fed into this monster. The AI leadership is hell-bent on building a modern-day Tower of Babel (in more ways than one), where there is no thinking or learning: one click and you have an app! Now you can fire your entire software team, and then ask ChatGPT what to do next when this breaks the economy.