
174 points by Philpax | 2 comments
codingwagie:
I just used o3 to design a distributed scheduler that scales to 1M+ schedules a day. It was perfect, and it did better than the two weeks I had spent thinking through the best way to build this.
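For scale, 1M+ schedules a day works out to roughly 12 executions per second, so a single job table polled by a handful of workers is one plausible shape for such a system. Below is a minimal, heavily simplified sketch along those lines (standard-library SQLite only); every name and design choice here is an illustrative assumption, not the design the commenter describes.

    # A database-backed scheduler loop: workers claim due jobs with an
    # atomic UPDATE so several pollers can share one table without
    # double-running a job. Illustrative sketch only.
    import sqlite3, time, uuid

    def init(db):
        db.execute("""CREATE TABLE IF NOT EXISTS jobs (
            id TEXT PRIMARY KEY,
            run_at REAL NOT NULL,       -- unix time the job is due
            payload TEXT NOT NULL,
            claimed_by TEXT             -- NULL until a worker claims it
        )""")

    def schedule(db, run_at, payload):
        db.execute("INSERT INTO jobs VALUES (?, ?, ?, NULL)",
                   (str(uuid.uuid4()), run_at, payload))
        db.commit()

    def claim_due(db, worker, limit=100):
        # Atomically mark a batch of due, unclaimed jobs as ours, then read them back.
        now = time.time()
        db.execute("""UPDATE jobs SET claimed_by = ?
                      WHERE claimed_by IS NULL AND run_at <= ?
                        AND id IN (SELECT id FROM jobs
                                   WHERE claimed_by IS NULL AND run_at <= ?
                                   ORDER BY run_at LIMIT ?)""",
                   (worker, now, now, limit))
        db.commit()
        return db.execute("SELECT id, payload FROM jobs WHERE claimed_by = ?",
                          (worker,)).fetchall()

    def worker_loop(db, worker):
        while True:
            for job_id, payload in claim_due(db, worker):
                print(worker, "running", job_id, payload)   # real work goes here
                db.execute("DELETE FROM jobs WHERE id = ?", (job_id,))
            db.commit()
            time.sleep(1)   # ~12 jobs/sec fits comfortably in a 1-second poll

    if __name__ == "__main__":
        conn = sqlite3.connect("scheduler.db")
        init(conn)
        schedule(conn, time.time(), "send_report")
        worker_loop(conn, "worker-" + uuid.uuid4().hex[:6])

In a genuinely distributed setup the SQLite file would be swapped for a shared database (e.g. Postgres, where SELECT ... FOR UPDATE SKIP LOCKED plays the same claiming role), but the overall structure stays the same.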
csto12:
Did you just ask it to design it, or to implement it?

If o3 can design it, that means it's using open source schedulers as a reference. Did you think about opening up a few open source projects to see how they were doing things during those two weeks you spent designing?

codingwagie:
Why would I do that kind of research when it can identify the problem I am trying to solve and spit out the exact solution? Also, it was a rough implementation adapted to my exact tech stack.
kmeisthax:
Because down that path lies skill atrophy.

AI research has a thing called "the bitter lesson": the only things that reliably keep working are general methods built on search and learning. Domain-specific knowledge inserted by the researcher tends to look good in benchmarks but compromises the performance of the system [0].

The bitter-er lesson is that this also applies to humans. The reason humans still outperform AI on lots of intelligence tasks is that humans do lots and lots of search and learning, repeatedly, across billions of people, and have been doing so for thousands of years. The only uses of AI that benefit humans are ones that allow you to do more search or more learning.

The human equivalent of "inserting domain-specific knowledge into an AI system" is cultural knowledge, clichés, cargo-cult science, and cheating. Copying other people's work only helps you, long term, if you're able to build on it and make something new; plenty of discoveries have come from someone taking a second look at what was generally considered "known". If you are just taking shortcuts, you learn nothing.

[0] I would also argue that the current LLM training regime is still domain-specific knowledge; we've just widened the domain to "the entire Internet".

gtirloni:
Here on HN you frequently see technologists using words like savant, genius, and magical to describe the current generation of AI. Now we have vibe coding and the like. To me this is just a continuation of Stack Overflow copy/paste, where people barely know what they are doing and just hammer the keyboard and mouse until it works. Nothing has really changed at the fundamental level.

So I find your assessment pretty accurate, if depressing.

mirsadm:
It is depressing, but equally it presents even more opportunities for people who don't take shortcuts. I use Claude/Gemini day to day, and outside of the most average and boring stuff they're not very capable. I'm glad I started my career well before these things were created.