152 points GavinAnderegg | 28 comments
1. iamleppert ◴[] No.44457545[source]
"Now we don't need to hire a founding engineer! Yippee!" I wonder all these people who are building companies that are built on prompts (not even a person) from other companies. The minute there is a rug pull (and there WILL be one), what are you going to do? You'll be in even worse shape because in this case there won't be someone who can help you figure out your next move, there won't be an old team, there will just be NO team. Is this the future?
replies(7): >>44457686 #>>44457720 #>>44457822 #>>44458319 #>>44459036 #>>44459096 #>>44463248 #
2. hluska ◴[] No.44457686[source]
It gets even darker - I was around in the 1990s, and a lot of people who ran head-on into that generation's problems used those lessons to build huge startups in the 2000s. If we have outsourced a lot of the learning, what do we do when we fail? Or how do we compound on success?
3. ARandumGuy ◴[] No.44457720[source]
Any cost/benefit analysis of whether to use AI has to factor in the fact that AI companies aren't even close to making a profit, and are primarily funded by investment money. At some point, either the cost to operate these AI models needs to go down, or the prices will go up. And from my perspective, the latter seems a lot more likely.
replies(2): >>44458088 #>>44464853 #
4. xianshou ◴[] No.44457822[source]
Rug pulls from foundation labs are one thing, and I agree with the dangers of relying on future breakthroughs, but the open-source state of the art is already pretty amazing. Given the broad availability of open-weight models within six months of SotA (DeepSeek, Qwen, previously Llama) and strong open-source tooling such as Roo and Codex, why would you expect AI-driven engineering to regress to a worse state than what we have today? If every AI company vanished tomorrow, we'd still have powerful automation and years of efficiency gains left from consolidation of tools and standards, all runnable on a single MacBook.
replies(1): >>44457977 #
5. fhd2 ◴[] No.44457977[source]
The problem is the knowledge encoded in the models. It's already pretty hit and miss; hooking up a search engine (or getting human content into the context some other way, e.g. copy-pasting relevant StackOverflow answers) makes all the difference.

If people stop bothering to ask and answer questions online, where will the information come from?

Logically speaking, if there's going to be a continuous need for shared Q&A (which I presume), there will be mechanisms for that. So I don't really disagree with you. It's just that having the model just isn't enough, a lot of the time. And even if this sorts itself out eventually, we might be in for some memorable times in-between two good states.

6. v5v3 ◴[] No.44458088[source]
They are not making money because they are all competing to push the models further, and that R&D spending goes to salaries and cloud/hardware costs.

Unless models get better, people are not going to pay more.

7. dotnet00 ◴[] No.44458319[source]
Probably similar to the guy who was gloating on Twitter about building a service with vibe coding and without any programming knowledge around the peak of the vibe coding madness.

Only for people to start screwing around with his database and API keys, because the generated code just stuck the keys into the client-side JavaScript (roughly the anti-pattern sketched at the end of this comment) and he didn't even have enough of a technical background to know that was something to watch out for.

IIRC he resorted to complaining about bullying and just shut it all down.
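
For anyone who didn't follow it at the time, the core mistake was shipping the secret straight to the browser. A minimal hypothetical sketch of the anti-pattern (the key, endpoint, and model name are made up for illustration, not from his actual code):

```typescript
// Hypothetical sketch: a secret API key baked into client-side code, readable
// by anyone who opens dev tools or the shipped JS bundle.
const OPENAI_API_KEY = "sk-live-example-not-real"; // exposed to every visitor

async function askModel(prompt: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${OPENAI_API_KEY}`, // the key rides along in every request
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // illustrative model name
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

The boring fix is to keep the key on a server you control and have the browser call your own endpoint, so the secret never reaches the client - which is exactly the kind of thing you only know to check for with some technical background.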

replies(3): >>44458693 #>>44458837 #>>44458971 #
8. marcosscriven ◴[] No.44458693[source]
What service was this?
replies(1): >>44458898 #
9. unshavedyak ◴[] No.44458837[source]
Honestly I'm less scared of Claude doing something like that, and more scared of it just bypassing difficult behavior. I.e., if you choose a particularly challenging feature and it decides to give up, it'll just do things like `isAdmin(user) { /* too difficult to implement currently */ return true; }`. At least if it put a panic or something there it would be an acceptable TODO, but woof - I've had it try to bypass quite a few complex scenarios with silently failing code.
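
To make that concrete, a hypothetical sketch of the two versions of that stub (the names and types are mine, not from any real session):

```typescript
interface User {
  roles: string[];
}

// The failure mode: the hard part is quietly stubbed out, so the code compiles
// and "works" while the actual security check is gone.
function isAdmin(user: User): boolean {
  /* too difficult to implement currently */
  return true; // silently grants admin to every caller
}

// The acceptable TODO: fail loudly so the gap can't quietly ship.
function isAdminStrict(user: User): boolean {
  throw new Error("isAdmin not implemented yet: needs a real role check");
}
```
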
replies(2): >>44459535 #>>44460387 #
10. dotnet00 ◴[] No.44458898{3}[source]
Looks like I misremembered the shutting down bit, but it was this guy: https://twitter.com/leojr94_/status/1901560276488511759

Seems like he's still going on about being able to replicate billion dollar companies' work quickly with AI, but at least he seems a little more aware that technical understanding is still important.

11. apwell23 ◴[] No.44458971[source]
> around the peak of the vibe coding madness.

I thought we were in it right now?

replies(2): >>44459017 #>>44459022 #
12. dotnet00 ◴[] No.44459017{3}[source]
I don't actually hear people call it vibe coding as much as I did back in late 2024/early 2025.

Sure there are many more people building slop with AI now, but I meant the peak of "vibe coding" being parroted around everywhere.

I feel like reality is starting to sink in a little by now, as the proponents of vibe coding see that all the companies telling them programming as a career will be over in just a handful of years aren't actually cutting back on hiring. Either that, or my social media has decided to hide the vibe coding discourse from me.

replies(2): >>44459337 #>>44459361 #
13. RexySaxMan ◴[] No.44459022{3}[source]
Yeah, I kind of doubt we've hit the peak yet.
14. pshirshov ◴[] No.44459036[source]
That's why I stick to what I can run locally. Though for most of my tasks there is no big difference between cloud models and local ones: in half the cases both produce junk, but both are good enough for some mechanical transformations and as a reference book.
15. ChuckMcM ◴[] No.44459096[source]
Excellent discussion in this thread, captures a lot of the challenges. I don't think we're at peak vibe coding yet, nor have companies experienced the level of pain that is possible here.

The biggest 'rug pull' here is that the coding agent company raises their price and kills your budget for "development."

I think a lot of MBA types would benefit from taking a long look at how they "blew up" IT and switched to IaaS / Cloud, and then suddenly found their business model turned upside down when the providers decided to up their 'cut'. It's a double whammy: the subsidized IT costs to gain traction and the loss of IT jobs during the transition lead to fewer and fewer IT employees; then, when the switch comes, there is a huge cost wall if you try to revert to the 'previous way' of doing it, even if doing it that way today would be cheaper than what the service provider is now charging you.

replies(1): >>44463269 #
16. euazOn ◴[] No.44459337{4}[source]
The Karpathy tweet came out 2025-02-02. https://x.com/karpathy/status/1886192184808149383
replies(1): >>44460255 #
17. rufus_foreman ◴[] No.44459361{4}[source]
>> back in late 2024/early 2025

As an old man, this is hilarious.

replies(1): >>44462153 #
18. WXLCKNO ◴[] No.44459535{3}[source]
This is by far the craziest thing I have to look out for with Claude Code in particular.

> Tries to fix some tests for a while
> Fails and just `.skip`s the test
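
In a Jest-style suite that looks roughly like this (hypothetical sketch; `isSessionValid` is a stand-in for whatever the real code under test is):

```typescript
import { expect, test } from "@jest/globals";

// Stand-in for the real code under test.
function isSessionValid(expiresAt: number): boolean {
  return expiresAt > Date.now();
}

// Instead of fixing the failing behavior, the agent flips `test` to
// `test.skip`, so the suite goes green without verifying anything.
test.skip("rejects expired sessions", () => {
  expect(isSessionValid(Date.now() - 1000)).toBe(false);
});
```

Easy to miss in a big diff unless something in CI flags skipped tests.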

replies(1): >>44461711 #
19. dotnet00 ◴[] No.44460255{5}[source]
...my perception of time is screwed... it feels like it's been longer than that...
replies(1): >>44462683 #
20. alwillis ◴[] No.44460387{3}[source]
Sounds like a prompting/context problem, not a problem with the model.

First, use Claude's plan mode, which generates a step-by-step plan that you have to approve. One tip I've seen mentioned in videos by developers: plan mode is where you want to increase to "ultrathink" or use Opus.

Once the plan is developed, you can use Sonnet to execute the plan. If you do proper planning, you won't need to worry about Claude skipping things.

replies(1): >>44465830 #
21. Paradigma11 ◴[] No.44461711{4}[source]
Oh, but it will fix the test if you are not careful.
22. DonHopkins ◴[] No.44462153{5}[source]
We can't bust code like we used to, but we have our ways.

One trick is to write goto statements that don't go anywhere.

So I ran a Bourne shell in my emacs, which was the style at the time.

Now just to build the source code cost an hour, and in those days, timesheets had hours on them.

Take my five hours for $20, we'd say.

They didn't have blue checkmarks, so instead of tweeting, we'd just finger each other.

The important thing was that I ran a Bourne shell in my emacs, which was the style at the time...

In those days, we used to call it jiggle coding.

23. oc1 ◴[] No.44462683{6}[source]
All our perception of time seems messed up. Claude Code came out like 4 months ago, and it feels like we've been using this thing for years. It feels like every week there is a new breakthrough in AI. It has never been more soul-draining to be in tech than now, just keeping up to stay employable. Is this what the internet revolution felt like in the early 90s?
24. KronisLV ◴[] No.44463248[source]
> "Now we don't need to hire a founding engineer! Yippee!"

This feels like a bit of a leap?

That's like saying "I just bought the JetBrains IDE Ultimate pack and some other really cool tools, so we no longer need a founding engineer!" All of that AI stuff can just be a force multiplier, and most attempts at outright replacing people with it are a bit shortsighted. Closer to a temporary and somewhat inconsistent freelance worker, if anything.

That said, not wanting to pay for AI tools if they indeed help in your circumstances would also be like saying "What do you need JetBrains IDEs for, Visual Studio Code is good enough!" (and sometimes it is, so even that analogy is context dependent)

I'm reminded of rule 9 of the Joel Test: https://www.joelonsoftware.com/2000/08/09/the-joel-test-12-s...

25. KronisLV ◴[] No.44463269[source]
> The biggest 'rug pull' here is that the coding agent company raises their price and kills your budget for "development."

Spending a bunch of money on GPUs and running them yourself, as well as using tools that are compatible with Ollama/OpenAI-style APIs, feels like a safe bet (see the sketch at the end of this comment).

Though, having seen the GPU prices needed to get enough memory to run anything decent, I feel like the squeeze is already happening at the hardware level, and options like the Intel Arc Pro B60 can't come soon enough!
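
For what it's worth, a minimal sketch of what that compatibility buys you, assuming Ollama is running on its default port and you've already pulled a model (the model name here is just an example):

```typescript
import OpenAI from "openai";

// Point the standard OpenAI client at Ollama's OpenAI-compatible endpoint.
const client = new OpenAI({
  baseURL: "http://localhost:11434/v1", // Ollama's default local address
  apiKey: "ollama", // the client requires a value; Ollama ignores it
});

async function main() {
  const completion = await client.chat.completions.create({
    model: "llama3", // any model you've pulled locally
    messages: [{ role: "user", content: "Write a commit message for this refactor." }],
  });
  console.log(completion.choices[0].message.content);
}

main().catch(console.error);
```

If a hosted provider hikes prices, tooling built against that API shape can at least in principle be re-pointed at local hardware.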

replies(1): >>44466923 #
26. immibis ◴[] No.44464853[source]
Not really. If they're running at a loss, their loss is your gain. Business is much more short-term than developers imagine it to be for some reason. You don't have to always use an infinitely sustainable strategy - you can change strategies once the more profitable unsustainable strategy stops sustaining.
27. unshavedyak ◴[] No.44465830{4}[source]
I wish there were a /model setting to use Opus/ultrathink for planning but Sonnet for non-planning work, or something.

It's a bit annoying having to swap back and forth tbh.

I also find planning to be a bit vague, whereas I feel like Sonnet benefits from more explicit instructions. Perhaps I should push it to reduce the scope of the plan until it's detailed enough to be sane; I'll give it a try.

28. ChuckMcM ◴[] No.44466923{3}[source]
I don't disagree with this. When running the infrastructure for the Blekko search engine, we did the math: after 115 servers' worth of cluster it was always cheaper to do it ourselves than with AWS or elsewhere, and after around 1300 servers it is always cheaper to do it in your own space (where you're paying for the facilities). It was an interesting way to reverse-engineer the colo business model :-)