
174 points Philpax | 4 comments
codingwagie
I just used o3 to design a distributed scheduler that scales to 1M+ schedules a day. It was perfect, and did better than two weeks of thought around the best way to build this.
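For a sense of scale: 1M+ schedules a day averages out to roughly 12 per second, which a single process can dispatch from a min-heap ordered by fire time; the distributed part (sharding, persistence, failover) is where the real design work is. The commenter doesn't share the actual design, so this is only an illustrative single-node sketch with hypothetical names:

```python
import heapq
import time
from typing import Callable

class Scheduler:
    """Minimal in-process scheduler: a min-heap ordered by fire time."""

    def __init__(self) -> None:
        # Entries are (fire_time, sequence_number, task). The sequence
        # number breaks ties so callables are never compared directly.
        self._heap: list[tuple[float, int, Callable[[], None]]] = []
        self._seq = 0

    def schedule(self, delay_s: float, task: Callable[[], None]) -> None:
        heapq.heappush(self._heap, (time.monotonic() + delay_s, self._seq, task))
        self._seq += 1

    def run_due(self) -> int:
        """Run every task whose fire time has passed; return how many ran."""
        ran = 0
        now = time.monotonic()
        while self._heap and self._heap[0][0] <= now:
            _, _, task = heapq.heappop(self._heap)
            task()
            ran += 1
        return ran
```

Turning this into something distributed and durable (e.g. sharding the heap across workers, persisting schedules, electing a leader) is presumably the part the two weeks of thought, or the o3 session, went into.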
csto12
You just asked it to design or implement?

If o3 can design it, that means it’s using open source schedulers as reference. Did you think about opening up a few open source projects to see how they were doing things in those two weeks you were designing?

codingwagie
Why would I do that kind of research if it can identify the problem I'm trying to solve and spit out the exact solution? Also, it was a rough implementation adapted to my exact tech stack.
kmeisthax
Because down that path lies skill atrophy.

AI research has a thing called "the bitter lesson" - which is that the only thing that works is search and learning. Domain-specific knowledge inserted by the researcher tends to look good in benchmarks but compromises the performance of the system[0].

The bitter-er lesson is that this also applies to humans. The reason why humans still outperform AI on lots of intelligence tasks is because humans are doing lots and lots of search and learning, repeatedly, across billions of people. And have been doing so for thousands of years. The only uses of AI that benefit humans are ones that allow you to do more search or more learning.

The human equivalent of "inserting domain-specific knowledge into an AI system" is cultural knowledge, cliches, cargo-cult science, and cheating. Copying other people's work only helps you, long-term, if you're able to build off of that into something new; and lots of discoveries have come about from someone just taking a second look at what had been considered to be generally "known". If you are just "taking shortcuts", then you learn nothing.

[0] I would also argue that the current LLM training regime is still domain-specific knowledge, we've just widened the domain to "the entire Internet".

tombert
> Because that path lies skill atrophy.

Maybe, but I'm not completely convinced by this.

Prior to ChatGPT, there were times when I wanted to build a project (e.g. implement Raft or Paxos), wrote a bit, hit a point where I got stuck, decided the project wasn't that interesting after all, gave up, and learned nothing.

What ChatGPT gives me, if nothing else, is a slightly competent rubber duck. It can give me a hint as to why something isn't working like it should, and that's the slight push I need to power through the project; and since I actually finish the project, I almost certainly learn more than I would have before.

I've done this a bunch of times now, especially when I'm trying to implement something directly from a paper, which I personally find can be pretty difficult.

It also makes these things more fun. Even when I know the correct way to do something, there can be lots of tedious stuff that I don't want to type, like really long if/else chains (when I can't easily avoid them).
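The "really long if/else chains" complaint is the sort of thing where either an LLM can type out the boilerplate, or the chain can sometimes be collapsed into a lookup table. A small illustrative sketch (names are hypothetical, not from the thread):

```python
def handle_event_verbose(kind: str) -> str:
    # The tedious form: one branch per case, easy to mistype at length.
    if kind == "create":
        return "created"
    elif kind == "update":
        return "updated"
    elif kind == "delete":
        return "deleted"
    else:
        return "ignored"

# The same dispatch as a lookup table, for the cases where the
# chain *can* be avoided.
_HANDLERS = {
    "create": "created",
    "update": "updated",
    "delete": "deleted",
}

def handle_event(kind: str) -> str:
    return _HANDLERS.get(kind, "ignored")
```

When each branch does genuinely different work rather than mapping to a value, the chain is unavoidable, which is exactly the tedium being described.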

scellus
I agree. AI has made even mundane coding fun again, at least for a while. AI does a lot of the tedious work, but finding ways to get the most out of it is a new kind of challenge. A new landscape of possibilities, innovation, tools, and processes.
tombert
Yeah that's the thing.

Personal projects are fun for the same reason that they're easy to abandon: there are no stakes to them. No one yells at you for doing something wrong, you're not trying to satisfy a stakeholder, and you can develop in any direction you want. This is good, but it also means it's easy to stop the moment you hit a part that isn't fun.

Using ChatGPT to help unblock myself makes it easier to not abandon a project when I get frustrated. Even when ChatGPT's suggestions aren't helpful (which is often), I can still come to understand the problem better just by trying to describe it to the bot.

Nathanba
True, and with AI I can look into far more subjects far more quickly, because the skill that used to be necessary was mostly endless sifting through documentation, trying to find out why some error happens or how to configure something correctly. It goes even further: it also applies to subjects where I couldn't intellectually understand something and there was no one to really ask for help. So I'm now learning things I simply couldn't have figured out on my own. It's a pure multiplier, and humans had failed to solve the problem of documentation and support for one another. Until now, of course.

I also think that once robots are around it will be yet another huge multiplier, but this time in the real world. Sure, the robot won't be as good as a human initially, but so what? You can utilize it to do so much more. Maybe I'll bother actually buying a rundown house and renovating it myself. If I know that I can just tell the robot to paint all the walls, and possibly even do it three times with different paint, then I feel far more confident that it won't be an untenable risk and bother.