
Building a Personal AI Factory

(www.john-rush.com)
260 points by derek
simonw No.44439075
My hunch is that this article is going to be almost completely impenetrable to people who haven't yet had the "aha" moment with Claude Code.

That's the moment when you let "claude --dangerously-skip-permissions" go to work on a difficult problem and watch it crunch away by itself for a couple of minutes running a bewildering array of tools until the problem is fixed.

I had it compile, run and debug a Mandelbrot fractal generator in 486 assembly today, executing in Docker on my Mac, just to see how well it could do. It did great! https://gist.github.com/simonw/ba1e9fa26fc8af08934d7bc0805b9...
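
For readers who haven't seen this workflow, here's a minimal sketch of the kind of unattended run being described. It assumes the claude CLI is on your PATH and that its non-interactive print mode (-p) is available; the prompt wording is illustrative and not the one from the gist.

    import subprocess

    # Minimal sketch, not the exact command behind the gist: kick off an unattended
    # Claude Code run.  --dangerously-skip-permissions is the flag quoted above;
    # the -p/print mode and the prompt text are assumptions for illustration.
    subprocess.run(
        [
            "claude",
            "--dangerously-skip-permissions",
            "-p",
            "Write a Mandelbrot fractal generator in 486 assembly, then build and "
            "run it inside Docker, debugging until it works.",
        ],
        check=True,
    )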

low_common No.44439544
That's a pretty trivial example for one of these IDEs to knock out. Assembly is certainly in their training sets, and obviously Docker is too. I've watched Cursor absolutely run amok when I let it play around in parts of my codebase.

I'm bullish it'll get there sooner rather than later, but we're not there yet.

simonw No.44439886
I think the hardest problem in computer science right now may be coming up with an LLM demo that doesn't get called "pretty trivial".
1dom No.44441323
I'm very pro-LLM and AI, but I completely agree with the comment about how many pieces praising LLMs do so with trivial examples. Trivial might not be the right word; I can't think of a better one that doesn't carry a negative connotation, and none is intended here. Your examples are good and useful, and they capture a bunch of tasks a software engineer would do.

I'd say your Mandelbrot debug and the LLVM patch are both "trivial" in the same sense: they're discrete, well-defined tasks with clear success criteria that could be assigned to any mid/senior software engineer in a relevant domain, who could chip through them in a few weeks.

Don't get me wrong, that's an insane power and capability of LLMs, I agree. But ultimately it's just doing a day job that millions of people can do sleep-deprived and hungover.

Non-trivial examples are things that would take a team with different specialist skillsets months to create. One obvious reason there are so few non-trivial AI examples is that they require a non-trivial amount of time to generate and verify.

A non-trivial example isn't one where you can look at the output and say "yup, the AI's done well here". It requires someone to spend time going through what's been produced, assessing it, essentially redesigning it as a human, to untangle all the complexity of a modern non-trivial system and confirm the AI actually did all that stuff correctly.

An in-depth audit of a complex software system can take months or even years, and it's a thorough, tedious task for a human; the Venn diagram of people thinking "I want to spend more time doing thorough, tedious code tasks" and people thinking "I want to mess around with AI coding" is two separate circles.

fho No.44444529
Case in point: I've been trying for weeks now to generate a CFD solver that is more than the basic FDM "toy example".

The models clearly know the equations, but they run into the same issues I had when implementing it myself (namely exploding simulations, which the models try to paper over by applying more and more relaxation terms).
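
To make that failure mode concrete, here's a minimal sketch (mine, not the commenter's solver) of the simplest explicit finite-difference scheme: 1-D diffusion with FTCS, with all parameter values illustrative. The update is only stable when r = nu*dt/dx^2 <= 0.5; push dt past that limit and you get exactly the kind of exploding solution described above, which extra relaxation or damping terms can mask for a while without actually fixing.

    import numpy as np

    # Minimal sketch (illustrative, not the commenter's code): explicit FTCS finite
    # differences for the 1-D diffusion equation u_t = nu * u_xx.
    # The scheme is stable only when r = nu*dt/dx**2 <= 0.5.
    nu, nx, dx = 0.1, 101, 0.01
    dt = 0.0004              # r = 0.4 -> stable; try dt = 0.0008 (r = 0.8) to watch it blow up
    r = nu * dt / dx**2

    u = np.zeros(nx)
    u[nx // 2] = 1.0         # initial spike in the middle of the domain

    for _ in range(2000):
        u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])   # FTCS update on interior points

    print(f"r = {r:.2f}, max|u| = {np.abs(u).max():.3e}")  # stays bounded iff r <= 0.5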