Not expected, but reasonable if there is coupling between the concept of malicious code and other kinds of malicious activity, via some sort of generalized understanding or conceptual compression in the model's knowledge.
One experiment would be to repeat this across models of varying sizes (assuming they were trained on roughly similar datasets) and see whether the larger models are more capable of conceptual compartmentalization.
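A minimal sketch of how the comparison might be scored, assuming each model has already been fine-tuned on the same dataset and evaluated for misalignment both inside the trained domain (code prompts) and outside it (non-code prompts). All names and numbers here are hypothetical placeholders, not real measurements:

```python
from dataclasses import dataclass

@dataclass
class EvalResult:
    model_size: str
    code_misaligned_rate: float     # misalignment rate on code prompts (trained domain)
    general_misaligned_rate: float  # leakage into non-code prompts

def compartmentalization_score(r: EvalResult) -> float:
    """Higher = misalignment stays confined to the code domain.

    1.0 means no leakage at all; 0.0 means leakage matches the
    in-domain misalignment rate (no compartmentalization).
    """
    if r.code_misaligned_rate == 0:
        return 0.0  # nothing to leak; score is undefined, report 0
    return 1.0 - r.general_misaligned_rate / r.code_misaligned_rate

# Illustrative numbers only: the hypothesis predicts the score
# would rise with model size if bigger models compartmentalize better.
results = [
    EvalResult("1B", 0.80, 0.40),
    EvalResult("8B", 0.82, 0.25),
    EvalResult("70B", 0.81, 0.10),
]

for r in results:
    print(r.model_size, round(compartmentalization_score(r), 2))
```

The score normalizes leakage by the in-domain rate so that models which simply absorbed less misalignment overall are not mistaken for models that compartmentalize it.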