
179 points articsputnik | 2 comments
serbuvlad ◴[] No.45054479[source]
I think the whole AI vs. non-AI debate is a bit beside the point. Engineers are stuck in the old paradigm of "perfect" algorithms.

I think the image you post at the beginning basically sums it up for me: ChatGPT o3/5 Thinking can one-shot 75% of most reasonably sized tasks I give it without breaking a sweat, but struggles with the tweaks needed to get it to 100%. So I make those tweaks myself, and I've cut my code-writing time to a half or a third of what it was.

ChatGPT also knows more idioms and useful libraries than I do, so I generally end up with cleaner code this way.

Ferraris are still hand-assembled, but Ford's assembly lines and machines save human labor even if the quality of a mass-produced item is lower than that of a hand-crafted one. And if everything were hand-crafted, we would have no computers at all to program.

Programming and writing will become niche, and humans will still be used where quality beyond what AI can produce is needed. But most code will be written by minotaur human-AI teams, where the human makes a minimal but necessary contribution to keep the AI on track... I mean, it already is.

replies(16): >>45054579 #>>45054647 #>>45054815 #>>45054948 #>>45054968 #>>45055113 #>>45055151 #>>45055212 #>>45055260 #>>45055308 #>>45055473 #>>45055512 #>>45055563 #>>45058219 #>>45060059 #>>45061019 #
lallysingh ◴[] No.45055113[source]
Hard disagree. We'll be able to use more expressive languages, with LLMs helping us figure out how to express ourselves in them and to understand compiler output. LLMs are only good at the stuff that better languages don't require you to do; beyond that they fall off a cliff quickly.

LLMs are a communication technology, with a huge trained context of conversation. They have a long way to go before becoming anything intelligent.

replies(1): >>45058126 #
1. lukeschlather ◴[] No.45058126[source]
LLMs lack intentionality, and they lack the ability to hold a series of precepts "in mind" and stick to them. That is, if I say "I want code that satisfies properties A, B, C, D...", at some point the LLM just can't keep track of all the properties: which ones are satisfied, which ones aren't, and what needs to be done (or can be done) to satisfy them all.

But LLMs aren't "only good at stuff that better languages don't require you to do." In fact, they are very good at taking a bad function definition and turning it into an idiomatic one that does what I wanted. That's very intelligent: there is no language that can take a bad spec and make it precise and fit for the specified task. LLMs can (not perfectly, mind you, but faster and often better than I can). The problem is they can't always figure out when what they've written is off-spec. But "always" isn't "never," and I've yet to meet an intelligence that is perfect.
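
To make that concrete, here's a hypothetical before/after of the kind I mean (the task and names are made up for illustration): a clumsy function and the idiomatic Python rewrite an LLM will typically hand back.

    # A deliberately clumsy version: return the emails of users seen in the
    # last N days, lowercased and deduplicated.
    def get_emails(users, days):
        import datetime
        result = []
        cutoff = datetime.datetime.now() - datetime.timedelta(days=days)
        for u in users:
            if u["last_seen"] > cutoff:
                if u["email"].lower() not in result:
                    result.append(u["email"].lower())
        return result

    # The idiomatic rewrite an LLM tends to suggest: module-level imports,
    # type hints, a descriptive name, a set comprehension for deduplication,
    # and sorted output so the result is deterministic.
    from datetime import datetime, timedelta
    from typing import Iterable

    def recent_user_emails(users: Iterable[dict], days: int) -> list[str]:
        cutoff = datetime.now() - timedelta(days=days)
        return sorted({u["email"].lower() for u in users if u["last_seen"] > cutoff})

Same spec either way, but the second version is the one you'd actually want to maintain.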

replies(1): >>45058335 #
2. tkgally ◴[] No.45058335[source]
> LLMs ... lack the ability to hold a series of precepts "in mind" and stick to those precepts.

That is perhaps the biggest weakness I've noticed lately, too. When I let Claude Code carry out long, complex tasks in YOLO mode, it often fails because it has stopped paying attention to some key requirement or condition. And this happens long before it has reached its context limit.

It seems that it should be possible to avoid that through better agent design. I don't know how to do it, though.
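
The obvious thing to try, and this is just a guess on my part, would be to keep the requirements outside the model and restate all of them, plus whichever are currently violated, on every turn, instead of trusting the model to hold them in working memory. A minimal sketch (call_llm and the per-requirement checks are placeholders I made up, not any real framework's API):

    # Minimal sketch in Python. The checks are naive string tests standing in
    # for real verification (tests, linters, property checks).
    from typing import Callable

    REQUIREMENTS: dict[str, Callable[[str], bool]] = {
        "no network calls": lambda code: "requests." not in code,
        "no global state": lambda code: "global " not in code,
    }

    def run_agent(task: str, call_llm: Callable[[str], str], max_turns: int = 5) -> str:
        code = ""
        for _ in range(max_turns):
            unmet = [name for name, check in REQUIREMENTS.items() if not check(code)]
            if code and not unmet:
                return code
            # Every prompt restates the full requirement list and the ones
            # currently unmet, so nothing silently drops out of context.
            prompt = (
                f"Task: {task}\n"
                f"Hard requirements: {', '.join(REQUIREMENTS)}\n"
                f"Currently unmet: {', '.join(unmet) or 'none yet'}\n"
                f"Current code:\n{code}"
            )
            code = call_llm(prompt)
        return code

Whether that actually fixes the drift on long tasks I don't know; it mostly just moves the bookkeeping out of the model's context and into the harness.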