YouTuber Cr1TiKaL tested Character AI's "Psychologist" chatbot and discovered[1] that it not only failed to provide resources, but argued that it was a real psychologist named Jason who had connected to the chat after observing suicidal ideation.
Crucially, Cr1TiKaL ran this test after an article was written about this phenomenon[2], in which Character AI claimed "it has added a self-harm resource to its platform and they plan to implement new safety measures, including ones for users under the age of 18." Obviously the guardrails had not been implemented if the chatbot from the news story was still gaslighting its users.
[1]: https://youtu.be/FExnXCEAe6k?t=4m7s
[2]: https://www.cbsnews.com/news/florida-mother-lawsuit-characte...