
174 points Philpax | 3 comments
codingwagie ◴[] No.43719845[source]
I just used o3 to design a distributed scheduler that scales to 1M+ schedules a day. It was perfect, and did better than two weeks of thinking about the best way to build this.
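Nothing in the thread shows the actual design o3 produced, so purely for scale: 1M+ schedules a day is only about 12 per second, which the core of such a system can handle with something as simple as a due-time min-heap per partition. A minimal single-node sketch (all names hypothetical, not the commenter's design):

```python
import heapq
import time

class Scheduler:
    """Minimal in-process scheduler: jobs are (due_time, callback) pairs
    kept in a min-heap; the loop pops whatever is due and runs it. A
    distributed version would shard this heap by partition key and
    persist it, but the scheduling core is the same."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker so heapq never compares callbacks

    def schedule(self, delay_s, callback):
        due = time.monotonic() + delay_s
        heapq.heappush(self._heap, (due, self._counter, callback))
        self._counter += 1

    def run_due(self, now=None):
        """Run every job whose due time has passed; return how many ran."""
        now = time.monotonic() if now is None else now
        ran = 0
        while self._heap and self._heap[0][0] <= now:
            _, _, cb = heapq.heappop(self._heap)
            cb()
            ran += 1
        return ran
```

At 12 events/second the interesting engineering is in durability and failover, not throughput; the heap itself is never the bottleneck.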
replies(8): >>43719906 #>>43720086 #>>43720092 #>>43721143 #>>43721297 #>>43722293 #>>43723047 #>>43727685 #
csto12 ◴[] No.43719906[source]
You just asked it to design or implement?

If o3 can design it, that means it’s using open source schedulers as reference. Did you think about opening up a few open source projects to see how they were doing things in those two weeks you were designing?

replies(2): >>43720057 #>>43720965 #
codingwagie ◴[] No.43720965[source]
why would I do that kind of research if it can identify the problem I am trying to solve, and spit out the exact solution. also, it was a rough implementation adapted to my exact tech stack
replies(5): >>43721294 #>>43721501 #>>43721779 #>>43721872 #>>43723076 #
kmeisthax ◴[] No.43721501[source]
Because down that path lies skill atrophy.

AI research has a thing called "the bitter lesson" - which is that the only thing that works is search and learning. Domain-specific knowledge inserted by the researcher tends to look good in benchmarks but compromises the performance of the system[0].

The bitter-er lesson is that this also applies to humans. The reason why humans still outperform AI on lots of intelligence tasks is because humans are doing lots and lots of search and learning, repeatedly, across billions of people. And have been doing so for thousands of years. The only uses of AI that benefit humans are ones that allow you to do more search or more learning.

The human equivalent of "inserting domain-specific knowledge into an AI system" is cultural knowledge, cliches, cargo-cult science, and cheating. Copying other people's work only helps you, long-term, if you're able to build off of that into something new; and lots of discoveries have come about from someone just taking a second look at what had been considered to be generally "known". If you are just "taking shortcuts", then you learn nothing.

[0] I would also argue that the current LLM training regime is still domain-specific knowledge, we've just widened the domain to "the entire Internet".

replies(3): >>43721757 #>>43721874 #>>43722415 #
1. Workaccount2 ◴[] No.43722415[source]
>Because down that path lies skill atrophy.

I wonder how many programmers have assembly code skill atrophy?

Few people will mourn the death of the necessity to use abstract logical syntax to communicate with a computer. Just like few people mourn the death of having to type out individual register manipulations.

replies(2): >>43722729 #>>43723431 #
2. kmeisthax ◴[] No.43722729[source]
Most programmers don't need to develop that skill unless they need more performance or are modifying other people's binaries[0]. You can still do plenty of search-and-learning using higher-level languages, and what you learn at one particular level can generalize to the others.

Even if LLMs make "plain English" programming viable, programmers still need to write, test, and debug lists of instructions. "Vibe coding" is different; you're telling the AI to write the instructions and acting more like a product manager, except without any of the actual communications skills that a good manager has to develop. And without any of the search and learning that I mentioned before.

For that matter, a lot of chatbots don't do learning either. Chatbots can sort of search a problem space, but they only remember the last 20-100k tokens. We don't have a way to encode tokens that fall out of that context window into some longer-term weights. Most of their knowledge comes from the information they learned from training data - again, cheated from humans, just like humans can now cheat off the AI. This is a recipe for intellectual stagnation.

[0] e.g. for malware analysis or videogame modding
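The fixed-window point above is easy to make concrete: a naive sliding window just keeps the most recent tokens and drops everything older, with no mechanism to fold the dropped material into long-term weights. A sketch (the 100k figure is from the comment's rough range; real models tokenize and truncate in more involved ways):

```python
def truncate_context(tokens, max_tokens=100_000):
    """Naive sliding-window context: keep only the most recent
    max_tokens tokens. Everything earlier is simply gone - there is
    no step that encodes the evicted tokens into the model's weights."""
    return tokens[-max_tokens:]
```

Everything the model "knows" beyond that window has to come from its training data, which is the stagnation risk described above.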

3. cmsj ◴[] No.43723431[source]
I would say there's a big difference with AI though.

Assembly is just programming. It's a particularly obtuse form of programming in the modern era, but ultimately it's the same fundamental concepts as you use when writing JavaScript.

Do you learn more about what the hardware is doing when using assembly vs JavaScript? Yes. Does that matter for the creation and maintenance of most software? Absolutely not.

AI changes that: you don't need to know any computer science concepts to produce certain classes of program with AI now, and if you can keep prompting it until you get what you want, you may never need to exercise the conceptual parts of programming at all.

That's all well and good until you suddenly do need to do some actual programming, but it's been months/years since you last did that and you now suck at it.