A script kiddie could connect GPT-3.5 through its API to generate a batch of possible exploits or other hacker scripts and auto-execute them. Or combine it with a TTS API to create plausible-sounding, personalized scripts for spam calls or emails. And so on - I'm purposefully not mentioning other scenarios that I think would be more insidious. You don't need much technical skill to do that.
Even if any of that were remotely relevant to this conversation about Bing, GPT models don't generate working exploits or "hacker scripts", and they certainly don't execute them. GPT models just provide natural-language plain-text responses to prompts.