
179 points | articsputnik
serbuvlad ◴[] No.45054479[source]
I think the whole AI vs. non-AI debate is a bit beside the point. Engineers are stuck in the old paradigm of "perfect" algorithms.

I think the image you post at the beginning basically sums it up for me: ChatGPT o3/5 Thinking can one-shot 75% of most reasonably sized tasks I give it without breaking a sweat, but struggles with the tweaks needed to get it to 100%. So I make those tweaks myself, and I've cut my code-writing time to half or a third of what it was.

ChatGPT also knows more idioms and useful libraries than I do, so I generally end up with cleaner code this way.

Ferraris are still hand-assembled, but Ford's assembly lines and machines save human labor, even if the quality of a mass-produced item is lower than that of a hand-crafted one. And if everything were hand-crafted, we would have no computers to program at all.

Programming and writing will become niche, and humans will still be used where quality higher than what AI can produce is needed. But most code will be done by minotaur human-AI teams, where the human has a minimal but necessary contribution to keep the AI on track... I mean, it already is.

replies(16): >>45054579 #>>45054647 #>>45054815 #>>45054948 #>>45054968 #>>45055113 #>>45055151 #>>45055212 #>>45055260 #>>45055308 #>>45055473 #>>45055512 #>>45055563 #>>45058219 #>>45060059 #>>45061019 #
rustystump ◴[] No.45055212[source]
Another hard disagree. The crux here is that if you are not an expert in the given domain, you do not know where that missing 25% is wrong. You think you do, but you don't.

I have seen people bring in thousands of lines of AI-slop OpenCV LUT code because they didn't understand how to interpolate between two colors, and didn't have the experience to know that that was what they needed to do. This is the catch-22 of the "AI makes you an expert" narrative.
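For reference, the core of that kind of job is a handful of lines. A rough sketch, assuming 8-bit colors and numpy (the names are mine, not from any of that code):

    import numpy as np

    def lerp_color(c0, c1, t):
        # Linear interpolation between two 8-bit colors, t in [0, 1]
        c0 = np.asarray(c0, dtype=np.float32)
        c1 = np.asarray(c1, dtype=np.float32)
        return ((1.0 - t) * c0 + t * c1).astype(np.uint8)

    # A 256-entry gradient LUT from one color to another
    lut = np.stack([lerp_color((255, 0, 0), (0, 0, 255), i / 255.0) for i in range(256)])

If you don't know that this is the problem you're solving, no amount of generated code will tell you.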

The other part is that improvement has massively stagnated in the space. It is painfully obvious too.

replies(1): >>45055337 #
A4ET8a8uTh0_v2 ◴[] No.45055337[source]
<< you do not know where that missing 25% is wrong

I think there is something to this line of thinking. I just finished a bigger project and, without going into details, one person from a team supposedly dedicated to providing viable data about our data was producing odd results. Since the data was not making much sense, I asked how it was produced. I was given a SQL script and an "and then we applied some regex" explanation.

Long story short, I dug in and found that the applied regex had mangled dates in an unexpected way, and I only caught it because I knew the "shape" the data was expected to have. I corrected it, because we were right up against the deadline, but... I noted it.
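To give a flavor of the failure mode (the real script isn't something I can share, so the regex and the date format here are made up):

    import re

    rows = ["2024-03-07,unit_04", "2024-11-30,unit_12"]

    # Meant to strip the leading zero from the unit code, but it also hits the dates
    cleaned = [re.sub(r"0(\d)", r"\1", r) for r in rows]
    print(cleaned)  # ['224-3-7,unit_4', '224-11-30,unit_12'] -- dates silently mangled

    # The cheap 'shape' check that catches it: rows should still start with YYYY-MM-DD
    bad = [r for r in cleaned if not re.match(r"^\d{4}-\d{2}-\d{2},", r)]
    print(len(bad))  # 2 -- every row fails

Nothing errors out; the data just quietly stops having the shape you expected.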

Anyway, I still see LLMs as a tool, but I think there is some reckoning on the horizon as:

1. managers push for more use and speed given the new tool

2. people get there faster but wronger, because they go along with 1 and do not check the output (or don't know how to check it, or don't know when it's wrong)

It won't end well, because the culture does not reward careful consideration.

replies(1): >>45055498 #
rustystump ◴[] No.45055498[source]
Exactly. I use AI tools daily and they bite me. Not enough to stop, but enough to know. Recently I was building a WS merger of sorts based on another lib's subprotocol. I wasn't familiar with the language or the protocol, but the AI sure was. However, the AI used a wrong id when repacking messages. Unless I knew the spec (which I didn't), I never would have known. Eventually, I did read the spec and figured it out.

To be clear, I gave the spec to the AI many times, asking what was off, and it never found the issue.

Once I did get it working, the AI one-shotted converting it from Python to Go, with the exception of the above mistake being added back in again.
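The class of bug looks something like this (not the actual library or protocol; the field names are invented):

    import json

    def repack(raw, downstream_channel):
        msg = json.loads(raw)
        return json.dumps({
            "id": 1,  # wrong: this (made-up) spec requires echoing msg["id"],
                      # so the peer's correlation logic silently drops every reply
            "channel": downstream_channel,
            "payload": msg["payload"],
        })

The output is perfectly valid, nothing crashes, and only the spec tells you the id has to be echoed back.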

You don't know what you don't know. That final 25% or 5% or whatever is where the money is, not the 80%. Almost doesn't count.