
Nobody knows how to build with AI yet

(worksonmymachine.substack.com)
526 points by Stwerner | 1 comment
karel-3d No.44616917
Reading articles like this feels like being in a different reality.

I don't work like this, I don't want to work like this and maybe most importantly I don't want to work with somebody who works like this.

Also I am scared that any library that I am using through the myriad of dependencies is written like this.

On the other hand... if I look at this as some alternate universe where I don't need to directly or indirectly touch any of this... I am happy that it works for these people? I guess? Just keep it away from me

lordnacho No.44617013
But you also can't not swim with the tide. If you drove a horse-buggy 100 years ago, it was probably worth your while to keep your eye on whether motor-cars went anywhere.

I was super skeptical about a year ago. Copilot was making nice predictions, that was it. This agent stuff is truly impressive.

rafaelmn No.44617059
More like the people who, ten years ago, told us there would be no more professional drivers on the road within 5-10 years. Agents are like lane assist; they aren't even at the level of today's self-driving.
throw234234234 No.44630320
Not sure the analogy holds. A driverless car that does something wrong can cause death; the cost of failure is very high. If my code doesn't come out right, as other posts here have said, I can just "re-roll the slot machine" until it comes out acceptable; the cost is extremely low. The "reasoning" models mostly just raise the probability that a re-run will match most people's preferences, with RL making that viable, and tools like the new agents know how to run tools and gather more data, so the re-roll stays within a reasonable timeframe and avoids endless loops most of the time. Software, until it is running, is after all just text on a page.
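
(Not the poster's actual workflow, just a minimal Python sketch of that "re-roll until acceptable" loop. generate_code is a hypothetical stand-in for whatever LLM client you use, and pytest is an assumed verifier; all names here are illustrative only.)

    import subprocess

    def generate_code(prompt: str, attempt: int) -> str:
        # Hypothetical stand-in for an LLM call; any model client could sit here.
        raise NotImplementedError("plug in your model client")

    def tests_pass(candidate: str) -> bool:
        # Assumed verifier: write the candidate out and run the test suite.
        with open("candidate.py", "w") as f:
            f.write(candidate)
        result = subprocess.run(["pytest", "-q"], capture_output=True)
        return result.returncode == 0

    def reroll_until_acceptable(prompt: str, max_attempts: int = 5) -> str | None:
        # Each failed attempt only costs another generation, so retrying is
        # cheap; the cap keeps the "slot machine" from spinning forever.
        for attempt in range(max_attempts):
            candidate = generate_code(prompt, attempt)
            if tests_pass(candidate):
                return candidate
        return None

Agent tools automate roughly this shape, feeding test output and errors back into the next prompt instead of retrying blind.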

Sure, I have to be sure that what I'm committing and running is good, especially in critical domains. The cheap cost of iteration before the actual commit is, IMO, the one reason LLMs are disruptive in software and other "generative" domains of the digital world. Conversely, things with real-time requirements, software that must be relied on (a life support system, say), things that post opinions in my name online, etc. will probably still need someone accountable verifying the output, even if an LLM wrote them.

Again, as per many other posts, "I want to be wrong": I'm senior in my career and would find it hard to change course now given my age. I don't like how our profession is concentrating in the big AI labs/companies rather than in our own intelligence and creativity. But rationally it's hard to see how software stays the same career going forward, and if I don't adapt I might die. Most likely I'll end up doing what I already do with my current team: just define and verify.