
174 points Philpax | 3 comments
codingwagie No.43719845
I just used o3 to design a distributed scheduler that scales to 1M+ schedules a day. It was perfect, and did better than two weeks of thought about the best way to build this.
replies(8): >>43719906 >>43720086 >>43720092 >>43721143 >>43721297 >>43722293 >>43723047 >>43727685
MisterSandman No.43720092
Designing a distributed scheduler is a solved problem, of course an LLM was able to spit out a solution.
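The "solved problem" in question is the well-known lease-based claim pattern: due jobs live in a shared table, and workers claim them with an atomic compare-and-swap so no job runs twice. A minimal sketch of that pattern (not the commenter's actual design; an in-memory SQLite table stands in for a shared database, and the function and column names are illustrative):

```python
import sqlite3
import time
import uuid

def make_db():
    # Hypothetical schema: one row per scheduled job.
    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE jobs (
        id TEXT PRIMARY KEY,
        run_at REAL,        -- when the job is due (epoch seconds)
        lease_until REAL,   -- 0 if unclaimed; else the claiming worker's lease expiry
        payload TEXT)""")
    return db

def schedule(db, payload, run_at):
    job_id = str(uuid.uuid4())
    db.execute("INSERT INTO jobs VALUES (?, ?, 0, ?)", (job_id, run_at, payload))
    return job_id

def claim_due_job(db, now, lease_seconds=30):
    """Atomically claim one due, unclaimed (or lease-expired) job.

    The conditional UPDATE acts as a compare-and-swap: if another worker
    claimed the job between our SELECT and UPDATE, rowcount is 0 and we
    claim nothing. Returns (id, payload) or None.
    """
    row = db.execute(
        "SELECT id, payload FROM jobs WHERE run_at <= ? AND lease_until < ? LIMIT 1",
        (now, now)).fetchone()
    if row is None:
        return None
    job_id, payload = row
    cur = db.execute(
        "UPDATE jobs SET lease_until = ? WHERE id = ? AND lease_until < ?",
        (now + lease_seconds, job_id, now))
    return (job_id, payload) if cur.rowcount == 1 else None

db = make_db()
schedule(db, "send-email", run_at=time.time() - 1)   # already due
job = claim_due_job(db, now=time.time())             # first worker claims it
second = claim_due_job(db, now=time.time())          # second worker finds nothing
print(job, second)
```

Scaling this to 1M+ schedules a day is mostly an indexing and sharding exercise on top of the same claim primitive; production systems typically use a real database's row locking (e.g. `SELECT ... FOR UPDATE SKIP LOCKED`) instead of the conditional-UPDATE trick shown here.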
replies(1): >>43720976
codingwagie No.43720976
as noted elsewhere, all other frontier models failed miserably at this
replies(2): >>43721537 >>43722162
daveguy No.43721537
That doesn't mean the one that manages to spit it out of its latent space is close to AGI. I wonder how consistently that specific model could do it. If you tried 10 LLMs, maybe all 10 of them could have spit out the answer 1 time out of 10. Correct problem retrieval by one LLM and failure by the others isn't a great argument for near-AGI. But LLMs will be useful in limited domains for a long time.
alabastervlog No.43722162
It is unsurprising that some lossily-compressed-database search programs might be worse for some tasks than other lossily-compressed-database search programs.