
179 points | articsputnik | 1 comment
serbuvlad ◴[] No.45054479[source]
I think the whole AI vs non. AI debate is a bit besides the point. Engineers are stuck in the old paradigm of "perfect" algorithms.

I think the image you post at the beginning basically sums it up for me: ChatGPT o3/5 Thinking can one-shot 75% of most reasonably sized tasks I give it without breaking a sweat, but struggles with the tweaks needed to get it to 100%. So I make those tweaks myself, and I've cut my code-writing time to a half or a third of what it was.

ChatGPT also knows more idioms and useful libraries than I do, so I generally end up with cleaner code this way.

Ferraris are still hand-assembled, but Ford's assembly line and machines save human labor, even if the quality of a mass-produced item is lower than that of a hand-crafted one. And if everything were hand-crafted, we would have no computers to program at all.

Programming and writing will become niche, and humans will still be used where quality higher than what AI can produce is needed. But most code will be written by minotaur human-AI teams, where the human makes a minimal but necessary contribution to keep the AI on track... I mean, it already is.

replies(16): >>45054579 #>>45054647 #>>45054815 #>>45054948 #>>45054968 #>>45055113 #>>45055151 #>>45055212 #>>45055260 #>>45055308 #>>45055473 #>>45055512 #>>45055563 #>>45058219 #>>45060059 #>>45061019 #
simianwords ◴[] No.45054815[source]
This comment captures it.

AI can do 80% of the work. I can review it later. And I spend much less time reviewing than I would have spent typing everything up manually.

I recently used it to add some logging and exception handling. It had to be done in multiple places.

A simple 2-line prompt one-shotted it. Why do I need to waste time writing boring code?
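The kind of repetitive change described above — adding logging and exception handling in multiple places — is a sketch-friendly task. A minimal illustration in Python (the commenter doesn't say what language or codebase was involved; `parse_record` and the decorator name are hypothetical):

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def with_logging(func):
    """Add entry/exit logging and exception reporting to a function.

    Applying this decorator at each call site is the sort of mechanical,
    many-places edit that is easy to delegate and easy to review.
    """
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logger.info("calling %s", func.__name__)
        try:
            result = func(*args, **kwargs)
        except Exception:
            # Log the full traceback, then re-raise so callers still see it.
            logger.exception("error in %s", func.__name__)
            raise
        logger.info("finished %s", func.__name__)
        return result
    return wrapper

@with_logging
def parse_record(raw):
    # Hypothetical example function: split a comma-separated line.
    return raw.strip().split(",")
```

The point isn't the decorator itself but the shape of the task: uniform, boring, and trivially checkable in review.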

replies(9): >>45054965 #>>45055005 #>>45055144 #>>45055163 #>>45055240 #>>45055406 #>>45057592 #>>45057736 #>>45057973 #
1. sureglymop ◴[] No.45055144[source]
What you also shouldn't forget is that, while AI may be good at coming up with a "first shot" solution, it can be much worse when you want to change or correct parts of it.

In my experience, AI very often falls into a sort of sunk-cost fallacy (sunk prompt?), and then it is very hard to get it to make significant changes, especially architectural ones.

I recently wrote an extension for a popular software product and gave AI the same task. It created a perfectly working version; however, it was 5x the lines of code of my version, because it didn't know the extension API as well, even though I gave it the full documentation. It also hard-coded some solutions to challenges that we definitely don't want hard-coded. A big reason I arrived at a much better solution was that I used a debugger to step through the code and noted down just the API interactions I needed.

The AI was also convinced that some things were entirely impossible. By stepping through the code I saw that they were possible using parts of the internal API. I suggested a change in a GitHub issue to make the public API better for my use case, and now it is totally not impossible.

At the end of the day I have to conclude that the time invested in guiding and massaging the AI was too much, and not really worth it. I would have been better off debugging the code right away and then writing my own version. The potential for AI to do the 80% is there. For now, though, I personally can't accept its results, but that may also be due to my personal flavour of perfectionism.