
1246 points adrianh | 1 comments
felixarba ◴[] No.44491943[source]
> ChatGPT was outright lying to people. And making us look bad in the process, setting false expectations about our service.

I find it interesting that any user would attribute this issue to Soundslice. As a user, I would be annoyed that GPT is lying and wouldn't think twice about Soundslice looking bad in the process

replies(3): >>44492117 #>>44492514 #>>44493393 #
romanhn ◴[] No.44492117[source]
While AI hallucination problems are widely known to the technical crowd, that's not really the case with the general population. Perhaps that applies to the majority of the user base even. I've certainly known folks who place an inordinate amount of trust in AI output, and I could see them misplacing the blame when a "promised" feature doesn't work right.
replies(1): >>44492486 #
carlosjobim ◴[] No.44492486[source]
The thing is that it doesn't matter. If they're not customers it doesn't matter at all what they think. People get false ideas all the time about what kinds of services a business might or might not offer.
replies(1): >>44493064 #
dontlikeyoueith ◴[] No.44493064[source]
> If they're not customers it doesn't matter at all what they think

That kind of thinking is how you never get new customers and eventually fail as a business.

replies(1): >>44493510 #
carlosjobim ◴[] No.44493510{3}[source]
It is the kind of thinking that almost all businesses have. You have to focus on the actual products and services which you provide and do a good job at it, not chase after any and every person with an opinion.

Downvoters here on HN seem to live in an egocentric fantasy world, where every human being in the outside world lives to serve them. But the reality is that business owners and leaders spend their whole day thinking about how to please their customers and their potential customers. Not other random people who might be misinformed.

replies(2): >>44493845 #>>44496824 #
graeme ◴[] No.44493845{4}[source]
If people repeatedly have a misunderstanding about or expectation of your business, you need to address it though. An LLM hallucination is based on widespread norms in its training data, and it is at least worth asking "would this be a good idea?"
replies(2): >>44494390 #>>44495227 #
smaudet ◴[] No.44494390{5}[source]
I think the issue here would be that we don't really know just how widespread, nor the impact of the issue.

Ok, sure, maybe this feature was worth having?

But if some people start sending bad requests your way because they can't program, or program only poorly, it doesn't make sense to potentially degrade the service for your successful paying customers...
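
One way to avoid that degradation, sketched below as a minimal Python example: triage uploads with a cheap front-door check before any expensive processing, so hallucination-driven traffic in an unsupported format is rejected with a clear message instead of consuming server resources. The function name, format list, and error text here are all hypothetical, not anything Soundslice actually does.

```python
# Hypothetical sketch: reject unsupported uploads before heavy parsing,
# so misinformed traffic can't degrade service for paying customers.
# Format names and messages are illustrative, not a real API.

SUPPORTED_FORMATS = {"musicxml", "gpx", "midi"}

def triage_upload(declared_format: str) -> tuple[bool, str]:
    """Cheap front-door check; runs before any expensive processing."""
    fmt = declared_format.lower()
    if fmt not in SUPPORTED_FORMATS:
        # A clear rejection also corrects the user's false expectation,
        # instead of failing mysteriously deep in the pipeline.
        return False, (
            f"'{fmt}' uploads aren't supported; accepted formats: "
            + ", ".join(sorted(SUPPORTED_FORMATS))
        )
    return True, "accepted"

ok, msg = triage_upload("ascii-tab")
# ok is False; msg names the accepted formats
```

The design choice is just that rejection happens in O(1) at the edge, so even a flood of hallucination-inspired requests costs almost nothing compared to letting them reach the real import pipeline.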