Nearly any new method of treatment opens new doors for human failings, and there will inevitably be some nonzero number of people whose lives are made worse by what is, on balance, an overwhelmingly positive development for mental health. Let's first worry about making the apps "good enough" for people to feel they're getting _any_ decent treatment, let alone so comprehensive that they eliminate the desire for additional human involvement, before we go down this strange, potentially irrelevant rabbit hole.


What makes you think it's potentially irrelevant? Stop by any mental health clinic in your city and ask whether there are patients whose conditions impair either their ability to come in for treatment or their ability to recognize that they need treatment at all. I agree with most of your rebuttal: at the moment, it's more productive to focus on simply getting this new treatment to a point of reliable functionality. But that doesn't mean its potential secondary effects on patient outcomes shouldn't also be considered or explored during experimental trials.


I agree that a wide range of potential effects and side effects should be considered. I'm sure a key metric in this experiment is how it influences users' pursuit of further treatment. I expect that data will either put your fears to rest (for now) or immediately draw the concerned attention of everyone conducting the experiment.

Edit: Maybe it _is_ all a scheme by Big Insurance to give the masses minimal treatment that doesn't actually solve their problems but provides the illusion of a solution, sucking them into a vile dependence on closed-source AI personalities just to feel any sense of therapy. Technology enables so many insidious business practices that I don't know what to believe anymore.



