It's hard to believe that this is just a simple accident because it's a difficult problem, considering past tweets from people involved. Also, was it released without any testing? How did this pass that phase? There's definitely something more wrong here than "just a difficult technical problem".


Is it really that hard to believe? I continue to be amazed that any of these systems work at all. People sure stopped being impressed by AI pretty quickly. Now we apparently think that LLMs are perfect and there must be a wicked human to blame every time an LLM produces a weird output.


If the author of a system writes in every blog post that they tested their system to remove/manipulate things, and the skewing of the results fits extremely well with what they, in their own words, deemed as things to remove, then... yeah: it's probably a (wicked) human to blame.


If there's evidence of malicious intent behind this, then just link to it.



