I wonder if there's any potential for Copilot to suggest malicious code because it's been trained on open source projects containing intentionally malicious code.


Maybe not malicious per se, but I'd certainly be concerned about seemingly-correct but actually-wrong code being suggested. Considering how often the top StackOverflow answer is subtly wrong, and how often antipatterns crop up across projects, the training data is surely nowhere near "perfect code", which implies the output can't be perfect either.


Since it's per-line, I highly doubt it. I think of it as IntelliSense+: you select suggestions you would have written anyway.


or broken code :D


The averageRuntimeInSeconds example doesn't check for division by zero, so it produces broken code in at least 20% of the examples on the homepage :)
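
For anyone who hasn't seen it, here's a minimal sketch of that failure mode. The parameter shape is my guess, not the literal homepage snippet. Note that in TypeScript/JavaScript, 0 / 0 silently evaluates to NaN rather than throwing, so the bug propagates instead of failing loudly:

    // Hypothetical reconstruction of the homepage-style suggestion;
    // the parameter shape is assumed. With an empty array, totalMs is 0
    // and runs.length is 0, so the result is NaN (0 / 0), not an error.
    function averageRuntimeInSeconds(runs: { runtimeMs: number }[]): number {
      const totalMs = runs.reduce((sum, run) => sum + run.runtimeMs, 0);
      return totalMs / runs.length / 1000;
    }

    // A guarded version makes the empty case explicit:
    function averageRuntimeInSecondsSafe(runs: { runtimeMs: number }[]): number {
      if (runs.length === 0) {
        throw new Error("cannot average an empty list of runs");
      }
      const totalMs = runs.reduce((sum, run) => sum + run.runtimeMs, 0);
      return totalMs / runs.length / 1000;
    }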


nothing more than what I was expecting :D



