
152 points GavinAnderegg | 3 comments
iamleppert ◴[] No.44457545[source]
"Now we don't need to hire a founding engineer! Yippee!" I wonder all these people who are building companies that are built on prompts (not even a person) from other companies. The minute there is a rug pull (and there WILL be one), what are you going to do? You'll be in even worse shape because in this case there won't be someone who can help you figure out your next move, there won't be an old team, there will just be NO team. Is this the future?
replies(7): >>44457686 #>>44457720 #>>44457822 #>>44458319 #>>44459036 #>>44459096 #>>44463248 #
1. ChuckMcM ◴[] No.44459096[source]
Excellent discussion in this thread; it captures a lot of the challenges. I don't think we're at peak vibe coding yet, nor have companies experienced the level of pain that is possible here.

The biggest 'rug pull' here is that the coding agent company raises their price and kills your budget for "development."

I think a lot of MBA types would benefit from taking a long look at how they "blew up" IT, switched to IaaS / cloud, and then suddenly found their business model turned upside down when the providers decided to up their 'cut'. It's a double whammy: the subsidized IT costs to gain traction, the loss of IT jobs because of the transition leading to fewer and fewer IT employees, and then, when the switch comes, a huge cost wall if you try to revert to the 'previous way' of doing it, even if doing it that way today would be cheaper than what the service provider is now charging you.

replies(1): >>44463269 #
2. KronisLV ◴[] No.44463269[source]
> The biggest 'rug pull' here is that the coding agent company raises their price and kills your budget for "development."

Spending a bunch of money on GPUs and running them yourself, and using tools that are compatible with Ollama/OpenAI-style APIs, feels like a safe bet.
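
A minimal sketch of what that compatibility buys you, assuming a local Ollama install and the official openai Python client (the model name is just a placeholder): moving between a hosted provider and your own hardware is mostly a matter of changing the base URL.

    # Hypothetical example: endpoint and model name are assumptions, not
    # anything from the thread. Ollama exposes an OpenAI-compatible API at
    # http://localhost:11434/v1, so the same client code can point at either
    # a hosted provider or a box you own.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:11434/v1",  # local Ollama endpoint
        api_key="ollama",                      # any non-empty string works locally
    )

    response = client.chat.completions.create(
        model="llama3.1",  # whatever model you've pulled locally
        messages=[{"role": "user", "content": "Summarize this diff for a commit message."}],
    )
    print(response.choices[0].message.content)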

Though having seen GPU prices for enough memory to run anything decent, I feel like the squeeze is already happening at the hardware level, and options like the Intel Arc Pro B60 can't come soon enough!

replies(1): >>44466923 #
3. ChuckMcM ◴[] No.44466923[source]
I don't disagree with this. When running the infrastructure for the Blekko search engine, we did the math: past about 115 servers' worth of cluster it was always cheaper to do it ourselves than with AWS or elsewhere, and past around 1300 servers it was always cheaper to do it in your own space (where you're paying for the facilities). It was an interesting way to reverse-engineer the colo business model :-)
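
For flavor, here's the shape of that math as a toy model. Every dollar figure below is an invented placeholder, not Blekko's real numbers; the fixed and per-server costs are only tuned so the crossover points roughly echo the ones in the comment above.

    # Toy break-even model -- all figures are made-up assumptions.
    # Cloud is pure per-server cost, colo adds a fixed rack/ops overhead with a
    # cheaper per-server rate, and running your own facility adds a much larger
    # fixed cost with the cheapest per-server rate.

    def cloud(servers, per_server=600):
        return servers * per_server

    def colo(servers, per_server=250, fixed=40_000):
        return fixed + servers * per_server

    def own_facility(servers, per_server=200, fixed=104_000):
        return fixed + servers * per_server

    for n in (50, 115, 500, 1_300, 2_000):
        costs = {"cloud": cloud(n), "colo": colo(n), "own": own_facility(n)}
        cheapest = min(costs, key=costs.get)
        print(f"{n:>5} servers -> cheapest: {cheapest}  {costs}")

With these placeholder numbers, colo overtakes cloud around 115 servers and your own facility overtakes colo around 1300, which is the general pattern being described, not the actual accounting.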