
Building a Personal AI Factory

(www.john-rush.com)
262 points | derek
simonw No.44439075
My hunch is that this article is going to be almost completely impenetrable to people who haven't yet had the "aha" moment with Claude Code.

That's the moment when you let "claude --dangerously-skip-permissions" go to work on a difficult problem and watch it crunch away by itself for a couple of minutes, running a bewildering array of tools, until the problem is fixed.

I had it compile, run and debug a Mandelbrot fractal generator in 486 assembly today, executing in Docker on my Mac, just to see how well it could do. It did great! https://gist.github.com/simonw/ba1e9fa26fc8af08934d7bc0805b9...

low_common No.44439544
That's a pretty trivial example for one of these IDEs to knock out. Assembly is certainly in their training sets, and obviously Docker is too. I've watched Cursor absolutely run amok when I let it play around in parts of my codebase.

I'm bullish it'll get there sooner rather than later, but we're not there yet.

simonw No.44439886
I think the hardest problem in computer science right now may be coming up with an LLM demo that doesn't get called "pretty trivial".
skydhash No.44440031
Because they are trivial in the sense that you could go on GitHub and copy one of them, without pretending that an LLM is anything more than a mashup of the internet.

What people agree on being non-trivial is working on a real project. There are a lot of open-source projects that could benefit from a useful code contribution. But so far they have mostly had slop thrown at them.

skydhash No.44440218
I took the time to investigate the work being done there (all those years learning assembly and computer architecture come in handy), and it confirms (to me) that the key aspect of using an LLM is pattern matching. Meaning you know that there's a solution out there (in this case, anything involving multiplying or dividing by a power of 2 can use such a trick), and by framing your problem (intentionally or not) you'll get derived text that contains a possible solution.

But there's nothing truly novel in the result. The key requirement is being similar enough to something that's already in the training data that the LLM can extrapolate the rest. The hint can be quite useful, and sometimes you get something that shortens the implementation time, but you have to have at least some basic understanding of the domain in order to recognize the signs.

The issue is that the result is always tainted by your prompt. The signs may be there because of your prompt and not because there's some kind of data that needs to be explored further. And sometimes it's a bad fit: similar but different (what you want versus what you get). So for the few domains that are valuable to me, I prefer to construct my own mental database that can lead me to concrete artifacts (books, articles, blog posts, ...) that exist outside the influence of my query.

ADDENDUM

I can use LLMs with great results and I've done so. But it's more rewarding (and more useful to me) to actually think through the problem and learn from references. Instead of getting a perfect (or wobbly, or wrong-category) circle that fits my query, I go and find a strange polygon formed (by me) from other strange polygons. Then, because I know I need a circle, I only need to find its center and its radius.

It's slower, but the next time I need another circle (or a square) from the same polygon, it's going to be faster and faster.