
858 points by cryptophreak | 1 comment
wiremine No.42936346
I'm going to take a contrarian view and say it's actually a good UI, but it's all about how you approach it.

I just finished a small project where I used o3-mini and o3-mini-high to generate most of the code. I averaged around 200 lines of code an hour, including the business logic and unit tests. The total was around 2200 lines. So, not a big project, but not a throwaway script. The code was perfectly fine for what we needed. This is the third time I've done this, and each time I get faster and better at it.

1. I find a "pair programming" mentality is key. I focus on the high-level code, and let the model focus on the lower level code. I code review all the code, and provide feedback. Blindly accepting the code is a terrible approach.

2. Generating unit tests is critical. After I like the gist of some code, I ask for some smoke tests. Again, peer review the code and adjust as needed.

3. Be liberal with starting a new chat: the models can get easily confused with longer context windows. If you start to see things go sideways, start over.

4. Give it code examples. Don't prompt with English only.
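The smoke-test step in point 2 can be sketched roughly like this. The `slugify` helper below is a hypothetical stand-in for model-generated code (it is not from the thread); the point is the reviewer adding a handful of quick pytest-style checks before trusting it:

```python
import re

def slugify(text: str) -> str:
    """Hypothetical model-generated helper: lowercase, dash-separate."""
    text = text.strip().lower()
    text = re.sub(r"[^a-z0-9]+", "-", text)  # collapse non-alphanumerics
    return text.strip("-")

# Smoke tests: not exhaustive, just enough to catch obvious breakage.
def test_basic():
    assert slugify("Hello World") == "hello-world"

def test_punctuation_collapses():
    assert slugify("a -- b!!") == "a-b"

def test_empty_input():
    assert slugify("   ") == ""
```

A few cheap assertions like these are usually enough to flag when a later regeneration quietly changes behavior.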

FWIW, o3-mini was the best model I've seen so far; Sonnet 3.5 New is a close second.
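Point 4 above can be sketched as a prompt-builder that embeds an in-repo code sample instead of English alone. Everything here is illustrative (the `parse_price` style sample and `build_prompt` helper are assumptions, not from the thread):

```python
# A concrete example anchors the model on naming, typing, and docstring
# conventions far better than a prose description of the house style.
STYLE_EXAMPLE = '''\
def parse_price(raw: str) -> int:
    """Return the price in cents, raising ValueError on bad input."""
    ...
'''

def build_prompt(task: str, example: str) -> str:
    return (
        "Follow the style of this example from our codebase:\n\n"
        f"{example}\n"
        f"Task: {task}\n"
    )

prompt = build_prompt("write parse_quantity with the same conventions",
                      STYLE_EXAMPLE)
```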

bboygravity No.42938163
Interesting to see the narrative on here slowly change from "LLMs will forever be useless for programming" to "I'm using it every day" over the course of the past year or so.

I'm now bracing for the "oh sht, we're all out of a job next year" narrative.

wiremine No.42938345
> "oh sht, we're all out of a job next year"

Maybe. My sense is we'd need to see 3 to 4 orders of magnitude of improvement on the current models before we can replace people outright.

I do think we'll see a huge productivity boost per developer over the next few years. Some companies will use that to increase their throughput, and some will use it to reduce overhead.

mirkodrummer No.42940083
Whenever I read "huge productivity boost" for developers or companies, I shiver. Software was getting worse even before LLMs; I don't see it getting better, just getting shipped faster, maybe. I'm afraid in most cases it will be a disaster.