
451 points | imartin2k | 1 comment
dang | No.44480332
I also find these features annoying and useless and wish they would go away. But that's not because LLMs are useless, nor because the public isn't using them (as daishi55 pointed out here: https://news.ycombinator.com/item?id=44479578)

It's because the integrations with existing products are arbitrary and poorly thought through, the same way that software imposed by executive fiat in BigCo offices for trend-chasing reasons has always been.

petekoomen made this point recently in a creative way: AI Horseless Carriages - https://news.ycombinator.com/item?id=43773813 - April 2025 (478 comments)

ToucanLoucan | No.44480949
> It's because the integrations with existing products are arbitrary and poorly thought through, the same way that software imposed by executive fiat in BigCo offices for trend-chasing reasons has always been.

It's just rent-seeking. Nobody wants to actually build products for the market anymore; it's a long process with a lot of risk behind it, and there's a chance you won't make shit in actual profit. If, however, you can create a "do anything" product that can be integrated with huge software suites, you can make a LOT of money and take a lot of mind-share without really lifting a finger. That's been my read on the "AI Industry" for a long time.

And to be clear, the integration part is the only part they give a shit about. Arguably especially for AI, since operating the product is so expensive compared to what the vast majority of startups trying to scale have to pay. Serving JPEGs was never nearly as expensive for Instagram as responding to ChatGPT inquiries is for OpenAI, so they have every reason to diminish the number coming their way. Being the hip new tech that every CEO needs to ram into their product, irrespective of whether it does... well, anything useful, while also being so frustrating or obtuse that users don't actually want to use it, is arguably an incredibly good needle to thread, if they can manage it.

And the best part is, if OpenAI's products do actually do what they say on the tin, there's a good chance many lower rungs of employment will be replaced with their stupid chatbots, again irrespective of whether or not they actually do the job. Businesses run on "good enough." So it's great: if OpenAI fails, we get tons of useless tech injected into software products already creaking under the weight of so much bullhockey, and if they succeed, huge swaths of employees will be let go from entry-level jobs, flooding the market, cratering the salaries of entire categories of professions, and you'll never be able to get a fucking problem resolved with a startup company again. Not that you probably could anyway, but it'll be even more frustrating.

And either way, all the people responsible for making all your technology worse every day will continue to get richer.

Eisenstein | No.44481827
This is not an AI problem; this is a problem caused by extremely large piles of money. In the past two decades we have concentrated money in the hands of people who did little more than be in the right place at the right time with a good idea and a set of technical skills, and then told them that they were geniuses who could fix human problems with technological solutions. At the same time we made it impossible to invest money safely by holding interest rates near zero, and then continued to pass more and more tax breaks. What did we expect was going to happen? There are only so many problems that actually need solving by technology, that create real value, or that bolster human society. We are spinning wheels just to spin them, and have handed the reins to people who have not only the means and the intent to unravel society in all the worst ways, but who are also convinced they are smarter than everyone else because they figured out how to arbitrage the temporal gap between the emergence of a capability and the realization of the damage it creates.
klabb3 | No.44482885
Couldn’t agree more. The problem is that when the party is over, and another round of centralizing wealth and power is done, we’ll be no wiser and will have learnt nothing. Look at the debate today: it’s (1) people who think AI is useful, (2) people who think it’s hype, and (3) people who think AI will go rogue. It’s like the bank robbers put on a TV and everyone watches it while the heist is ongoing.

Only a few bystanders seem to notice the IP theft and laundering, the adversarial content barriers erected to protect against scraping, the centralization of capital in the hands of the owners of frontier models, the dialing up of the already insane race to collect personal data, the flooding of every communication channel with AI slop and spam, and the inevitable impending enshittification of massive proportions.

I’ve seen the sausage get made, enough to know the game. They’re establishing new dominance hierarchies, each iteration more cynical and predatory than the last, each cycle refined to speedrun the rent-seeking value extraction ever more efficiently. Yes, there are still important discussions to be had about the tech itself. But it’s the deployment that concerns everyone, not hypothetically, but right now.

Exhibit A: social media. In hindsight, which was more important: the core technologies, or the business model and deployment?