I could have missed a paper but it seems very unlikely even closed door research has gotten to the stage of maliciously tuning models to surreptitiously backdoor someone's machine in a way that wouldn't be very easy to catch.
Your threat model may vary.
The code that comes with the model should be treated like any other untrusted code.
But if you use tools, for example for extending its knowledge through web searches, it could be used to exfiltrate information. It could do it by visiting some specially crafted url's to leak parts of your prompts (this includes the contents of documents added to them with RAG).
If given an interpreter, even if sandboxed, could try to do some kind of sabotage or "call home" with locally gathered information, obviously disguised as safe "regular" code.
It's unlikely that a current model that is runnable in "domestic" hardware could have those capabilities, but in the future these concerns will be more relevant.
https://news.ycombinator.com/item?id=43121383
It would have to be from unsupervised tool usage or accepting backdoored code, not traditional remote execution from merely inferencing the weights.