The example in the OP is a common one: ask a model how to do something with a tool, and if there's no easy way to perform that operation, it will often make up a plausible answer.
It was a plausible answer, and the core of what these models do is generate plausible responses to (or continuations of) the prompt they’re given. They’re not databases or oracles.
With errors like this, if you ask a follow-up question it'll typically agree that the feature isn't supported, because the text of that question, combined with its training, essentially prompts it to reach that conclusion.
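For what it's worth, here's a minimal sketch of why that happens (assuming the OpenAI Python SDK; the model name and the --export-svg flag are arbitrary stand-ins for the hallucinated feature): the follow-up isn't a fresh query, it's appended to the same conversation, so the model is just continuing a prompt that now contains both its own earlier claim and your pushback.

    # Sketch only: the "follow-up" is more text in the same prompt, so the
    # reply is conditioned on the whole history, including the earlier
    # made-up claim and the user's skepticism.
    from openai import OpenAI

    client = OpenAI()
    history = [
        {"role": "user", "content": "How do I export plots as SVG in this tool?"},
        # hypothetical hallucinated earlier turn
        {"role": "assistant", "content": "Use the --export-svg flag."},
        {"role": "user", "content": "Are you sure? That flag isn't in the docs."},
    ]
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    print(reply.choices[0].message.content)  # typically walks the claim back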
Re the follow-up question, it’s almost certainly the direction that advertising in general is going to take.