
For an absurd definition of the word ethics, maybe. It's thinly veiled corporate ass covering pretending at ethics, at best.

Anyway, it's not quite good enough at coding to prompt an intelligence explosion, and the resources needed to run it are rarefied enough to prevent most worst-case abuses. However, if it becomes more efficient, accessible, and capable, all those weird AI threats are likely to become a lot more relevant. Moore's law alone means the next couple of decades will be exciting.



It's not about killer robots. It's about fake reviews, fake tweets in support of some political opinion, and more convincing viagra spambots. Also, these bots are as racist, sexist, and bigoted as the data they're trained on. It's not intelligent, but it's dangerous in the hands of humans. It's like raw fire without any safety measures.

At the very least, I hope that if they're going to make it open, it comes with filters similar to those built into GPT-3.


There's no unified underlying intelligence in the model that could be called racist, sexist, or bigoted. These "bots" are not such, because they cannot be. The output depends on the patterns, context, and content of 800 GB of human-generated text, any and all of which can be seen as morally insufficient depending on your taste.

Clever and iterative use of prompts could identify, filter, or modify potentially offensive text for whatever level of pearl clutching floats your boat, but transformers are algorithms approximating parts of human cognition. The algorithm doesn't have an ideology, morality, ethics, dogma, or any of a myriad of features you can project onto it. It's a tool, which can be used well or badly, and part of using it well will involve not attributing to the tool anthropomorphic features it does not possess.
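The prompt-driven filtering described above can be sketched in a few lines. Everything here is a hypothetical illustration: `generate` stands in for whatever text-completion API you'd actually call (a GPT-3-style endpoint, say) and is stubbed with a trivial keyword check so the sketch is self-contained.

```python
# Sketch of prompt-based content filtering: wrap each candidate output in a
# classification prompt and keep only the ones the model does not flag.

MODERATION_PROMPT = (
    "Decide whether the following text is offensive.\n"
    "Text: {text}\n"
    "Answer yes or no:"
)

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a real completion API. A keyword check
    # substitutes for a model call purely to keep the example runnable.
    return "yes" if "idiot" in prompt.lower() else "no"

def is_offensive(text: str) -> bool:
    # Ask the "model" to classify the text, then parse its yes/no answer.
    answer = generate(MODERATION_PROMPT.format(text=text))
    return answer.strip().lower().startswith("yes")

def filter_outputs(candidates: list[str]) -> list[str]:
    # Drop any candidate the classifier flags; in practice you might also
    # re-prompt the model to rewrite flagged text rather than discard it.
    return [c for c in candidates if not is_offensive(c)]
```

With a real model behind `generate`, the same loop lets you tune the threshold of "offensive" simply by rewording the prompt, which is the sense in which the filtering is iterative.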



