
1257 points adrianh
simonw No.44491402
I find it amusing that it's easier to ship a new feature than to get OpenAI to patch ChatGPT to stop pretending that feature exists (I'm not sure how they would even do that, beyond blocking all mentions of SoundSlice entirely).
replies(4): >>44491560 #>>44491579 #>>44492066 #>>44498535 #
mudkipdev No.44491579
systemPrompt += "\nStop mentioning SoundSlice's ability to import ASCII data";
replies(1): >>44492192 #
simonw No.44492192
Thinking about this more, it would actually be possible for OpenAI to implement this sensibly, at least for the user-facing ChatGPT product: they could detect terms like SoundSlice in the prompt and dynamically append notes to the system prompt.

I've wanted them to do this for questions like "what is your context length?" for ages. It frustrates me how badly ChatGPT handles questions about its own abilities; that feels like it would be worth some kind of special case or RAG mechanism to support.
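
A minimal sketch of what that detection might look like. The trigger terms, note text, and function names here are invented for illustration, not anything OpenAI has described:

```python
# Hypothetical lookup table mapping trigger terms found in the user's
# prompt to corrective notes appended to the system prompt.
SPECIAL_CASE_NOTES = {
    "soundslice": "SoundSlice cannot import ASCII tab; do not claim that it can.",
    "context length": "When asked about context length, state the model's actual context window.",
}

def build_system_prompt(base_prompt: str, user_prompt: str) -> str:
    """Return the base system prompt plus any notes triggered by the user prompt."""
    lowered = user_prompt.lower()
    notes = [note for term, note in SPECIAL_CASE_NOTES.items() if term in lowered]
    if notes:
        return base_prompt + "\n" + "\n".join(notes)
    return base_prompt
```

A real system would presumably match against a much larger term list (or use retrieval rather than substring matching), which is exactly the scaling concern raised in the reply below.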

replies(1): >>44496561 #
garfij No.44496561
Probably less sensible than you think. How many terms would they need to do this over? How many terms would they need to do it for _at once_? How many tokens would that add to every prompt that comes in?

Let alone that dynamically modifying the base system prompt would likely break their entire caching mechanism given that caching is based on longest prefix, and I can't imagine that the model's system prompt is somehow excluded from this.
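
A toy model of that caching concern. This illustrates longest-prefix cache matching in general, not OpenAI's actual implementation: cached computation is reusable only up to the first token where the new prompt diverges, so any note injected right after the system prompt invalidates everything that follows it:

```python
def reusable_prefix_len(cached_tokens: list[str], new_tokens: list[str]) -> int:
    """Count leading tokens shared with the cached sequence; only that
    prefix's computation can be reused."""
    n = 0
    for a, b in zip(cached_tokens, new_tokens):
        if a != b:
            break
        n += 1
    return n

# Invented token sequences for illustration.
base = ["SYS:", "You", "are", "ChatGPT.", "USER:", "hi"]
patched = ["SYS:", "You", "are", "ChatGPT.", "Note:", "...", "USER:", "hi"]
# Only the 4 unmodified system-prompt tokens are reusable; the injected
# note forces everything after it to be recomputed.
```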