
10 points | AbstractH24 | 2 comments

I help startups roll out tools for their go-to-market teams. These days, I keep coming across products with small teams, the backing of notable VCs, and lots of potential, but when I go to use them, the UIs are littered with bugs that prevent them from functioning. And often it has nothing to do with prompting, hallucination, or anything like that. It's simple things like "when I hit save, my data disappears."

I'm accustomed to working with buggy tools; nothing I do is mission-critical, so things aren't as thoroughly tested as a car might be before hitting the road. But it seems things are getting released with more and more bugs. Am I nuts?

Seems like there are three possibilities to me:

1. This is just what happens with products that are new to market.

2. People creating these products are relying too much on tools like Cursor that don't work right.

3. The pressure to keep up is getting faster and faster, so companies are releasing products that are less and less thoroughly tested.

My gut tells me it's a combination of 2 and 3, and this is a sign we're reaching a new stage in the AI bubble. But maybe I'm wrong and being overly cynical.

1. UK-Al05 (No.43780607)
The culture of startups is to put out slop, then validate that it has demand before taking it seriously.
2. AbstractH24 (No.43805077, in reply to No.43780607)
This works well when your TAM is millions of people. For bootstrapped microstartups where your TAM is a few hundred or thousand (a rapidly growing space), it becomes harder, because your reputation is much easier to destroy.