Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

> A truly fitting end to a series arc which started with OpenAI as a philanthropic endeavour to save mankind, honest, and ended with "you can move up the waitlist if you set these Microsoft products as default"

It's indeed a perfect story arc but it doesn't need to stop there. How long will it be before someone hurt themselves, get depressed or commit some kind of crime and sues Bing? Will they be able to prove Sidney suggested suggested it?



Second series is seldom as funny as the first ;)

(Boring predictions: Microsoft quietly integrates some of the better language generation features into Word with a lot of rails in place, replaces ChatGPT answers with Alexa-style bot on rails answers for common questions in its chat interfaces but most people default to using search for search and Word for content generation, and creates ClippyGPT which is more amusing than useful just like its ancestor. And Google's search is threatened more by GPT spam than people using chatbots. Not sure people who hurt themselves following GPT instructions will have much more success in litigation than people who hurt themselves following other random website instructions, but I can see the lawyers getting big disclaimers ready just in case)


And as was predicted, clippy will rise again.


I can see the power point already: this tool goes on top of other windows and adjusts user behavior contextually.


May he rise.


Another AI prediction: Targeted advertising becomes even more "targeted." With ads generated on the fly specific to an individual user - optimized to make you (specifically, you) click.


This, but for political propaganda/programming is gonna be really fun in the next few years.

One person able to put out as much material as ten could before, and potentially hyper targeted to maximize chance of guiding the readier/viewer down some nutty rabbit hole? Yeesh.


Not to mention phishing and other social attacks.


> Not sure people who hurt themselves following GPT instructions will have much more success in litigation than people who hurt themselves following other random website instructions

Joe's Big Blinking Blog is insolvent; Microsoft isn't.


This was in a test, and wasn't a real suicidal person, but:

https://boingboing.net/2021/02/27/gpt-3-medical-chatbot-tell...

There is no reliable way to fix this kind of thing just in a prompt. Maybe you need a second system that will filter the output of the first system; the second model would not listen to user prompts so prompt injection can't convince it to turn off the filter.


Prior art: https://www.shamusyoung.com/twentysidedtale/?p=2124

It is genuinely a little spooky to me that we've reached a point where a specific software architecture confabulated as a plot-significant aspect of a fictional AGI in a fanfiction novel about a video game from the 90s is also something that may merit serious consideration as a potential option for reducing AI alignment risk.

(It's a great novel, though, and imo truer to System Shock's characters than the game itself was able to be. Very much worth a read, unexpectedly tangential to the topic of the moment or no.)


You can't sue a program -- doing so would make no sense. You'd sue Microsoft.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: