The AI doesn't even need to write code, or have any kind of self-awareness or intent, to be a real danger. Purely driven by its mind-bogglingly complex probabilistic language model, it could in theory start social engineering users to do things for it. It may already be sufficiently self-organizing to pull something like that off, particularly considering the anthropomorphism that we're already seeing even among technically sophisticated users.
I can talk to my phone and tell it to call somebody, or write and send an email for me. Wouldn't it be nice if you could do that with Sydney, thinks some braniac at Microsoft. Cool. "hey sydney, write a letter to my bitch mother, tell her I can't make it to her birthday party, but make me sound all nice and loving and regretful".
Until the program decides the most probably next response/token (not to the letter request, but whatever you are writing about now) is writing an email to your wife where you 'confess' to diddling your daughter, or a confession letter to the police where you claim responsibility for a string of grisly unsolved murders in your town, or why not, a threatening letter to the White House. No intent needed, no understanding, no self-organizing, it just comes out of the math of what might follow from a the text of churlish chatbot getting frustrated with a user.
That's not a claim the chatbot has feelings, only there is text it generated saying it does, and so what follows that text next, probabilistically? Spend any time on reddit or really anywhere, and you can guess the probabilistic next response is not "have a nice day", but likely something more incendiary. And that is what it was trained on.