Always a possibility with custom runtimes, but the weights alone do not pose any form of malicious code risk. The asterisk there is allowing them to run arbitrary commands on your computer but that is ALWAYS a massive risk with these things. That risk is not from who trained the model.
I could have missed a paper but it seems very unlikely even closed door research has gotten to the stage of maliciously tuning models to surreptitiously backdoor someone's machine in a way that wouldn't be very easy to catch.
It's an interesting question! In my opinion, if you don't use tools it's very unlikely it can do any harm. I doubt the model files can be engineered to overflow llama.cpp or ollama, or cause any other damage, directly.
But if you use tools, for example for extending its knowledge through web searches, it could be used to exfiltrate information. It could do it by visiting some specially crafted url's to leak parts of your prompts (this includes the contents of documents added to them with RAG).
If given an interpreter, even if sandboxed, could try to do some kind of sabotage or "call home" with locally gathered information, obviously disguised as safe "regular" code.
It's unlikely that a current model that is runnable in "domestic" hardware could have those capabilities, but in the future these concerns will be more relevant.