
1246 points | adrianh | 1 comment
toomanyrichies | No.44491617
This feels like a dangerously slippery slope. Once you start building features based on ChatGPT hallucinations, where do you draw the line? What happens when you build the endpoint in response to the hallucination, and then the LLM starts hallucinating new params / headers for the new endpoint?

- Do you keep bolting on new updates to match these hallucinations, potentially breaking existing behavior?

- Or do you resign yourself to following whatever spec the AI gods invent next?

- And what if different LLMs hallucinate conflicting behavior for the same endpoint?

I don’t have a great solution, but a few options come to mind (a rough sketch of both follows the list):

1. Implement the hallucinated endpoint and return a 200 OK or 202 Accepted, but include an X-Warning header like "X-Warning: The endpoint you used was built in response to ChatGPT hallucinations. Always double-check an LLM's advice on building against 3rd-party APIs with the API docs themselves. Refer to https://api.example.com/docs for our docs. We reserve the right to change our approach to building against LLM hallucinations in the future." Most consumers won’t notice the header, but it’s a low-friction way to correct false assumptions while still supporting the request.

2. Fail loudly: Respond with 404 Not Found or 501 Not Implemented, and include a JSON body explaining that the endpoint never existed and may have been incorrectly inferred by an LLM. This is less friendly but more likely to get the developer’s attention.
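
For concreteness, here's a minimal sketch of what both options might look like. This assumes a Flask app; the endpoint paths, handler names, and response bodies are invented for illustration and aren't taken from any real Soundslice (or other) API:

    # Hypothetical sketch of options 1 and 2 above. All paths and names are
    # made up for illustration only.
    from flask import Flask, jsonify

    app = Flask(__name__)

    DOCS_URL = "https://api.example.com/docs"  # placeholder docs URL from the comment above

    # Option 1: serve the hallucinated endpoint, but flag it with an X-Warning header.
    @app.post("/v1/scores/import")
    def import_score():
        result = {"status": "accepted"}  # stand-in for whatever the real work would return
        response = jsonify(result)
        response.status_code = 202
        response.headers["X-Warning"] = (
            "This endpoint was built in response to LLM hallucinations. "
            f"Always verify against the official docs: {DOCS_URL}"
        )
        return response

    # Option 2: fail loudly with a machine-readable explanation.
    @app.post("/v1/scores/convert")
    def convert_score():
        return jsonify({
            "error": "not_implemented",
            "detail": "This endpoint does not exist and may have been "
                      "incorrectly inferred by an LLM.",
            "docs": DOCS_URL,
        }), 501

Either way, the JSON body and header text are the interesting part: they give a developer (or an LLM reading the response) a breadcrumb back to the real docs.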

Normally I'd say that good API versioning would prevent this, but that all goes out the window unless an LLM user thinks to double-check the LLM's output against the actual docs. And if they had done that, they wouldn't have built against a hallucinated endpoint in the first place.

It’s frustrating that teams now have to reshape their product roadmap around misinformation from language models. It feels like there’s real potential here for long-term erosion of product boundaries and spec integrity.

EDIT: for the down-voters, if you've got actual qualms with the technical aspects of the above, I'd love to hear them and am open to learning if / how I'm wrong. I want to be a better engineer!

replies(1): >>44495148
1. tempestn | No.44495148
To me it seems like you're looking at this from a very narrow technical perspective rather than a human- and business-oriented one. In this case ChatGPT is effectively providing them free marketing for a feature that does not yet exist, but that could exist and would be useful. It makes business sense for them to build it, and it would also help people. That doesn't mean they need to build exactly what ChatGPT envisioned: as the post mentions, they updated their copy to explain to users how the feature actually works. Nor do they need to slavishly update what they've built if ChatGPT's imaginings change.

Also, it's not like ChatGPT or users are directly querying their API. They're submitting images through the Soundslice website. The images just aren't of the sort that was previously expected.