No, the biggest risk is that it behaves in ways that actively harm users in a fragile emotional state, whether by enabling dangerous behavior or pushing them toward it.
Many people are already demonstrably unable to handle normal AI chatbots in a healthy manner. A "therapist" substitute that takes a position of authority as a counselor ramps that danger up drastically.