A therapy chatbot for depression (businessinsider.fr)
75 points by Prygan on June 7, 2017 | 73 comments


This is an excellent idea, but their loose stance on privacy doesn't seem to line up with the intended field. Even a cursory look over their privacy policy raises questions (third party marketing, data in the event of a sale/acquisition). As for Messenger, they merely provide a link to Facebook's privacy policy.

I wish it outlined, in plain language, how private (or not) the transcripts of my chat data are and the retention policies associated with said data.

http://woebot.io/privacy


When I read the headline, my first impression was: great! But digging deeper:

  - The name Woebot - is this a joke?
  - The company is made up of 5 CS engineers, only 1 clinical psychologist, and 1 CEO/doctor (of what?)
  - It's a for-profit paid business, not a make-the-world-better university project
Yuck. Just no.


Hello! I'm Pamela, CTO@Woebot. I agree that the privacy policy is confusing, and I've started to make a list of standard questions that people ask Woebot about privacy. I'd like to make a TLDR at the top that covers the FAQs. Could you let me know what particular questions you want answered, so that it can cover that? Thank you!


I'm giving you sensitive personal information.

Are you complying with current UK / EU laws about how you handle that data? Will you be compliant with GDPR when it comes in?

https://en.wikipedia.org/wiki/General_Data_Protection_Regula...


A TLDR won't help until you overhaul all of the tech you use.

FB is a no-go!

Using so much tracking on your website that it breaks with an ad blocker and Firefox tracking protection turned on is a no-go!

In the current state the only sane thing to do is to warn people to not use your service.


The main thing you're achieving with this is providing Facebook with a marketing data point.

Chats with woebot == user suffering from depression

Combined with Facebook's previous psychological experimentation on users, your entire business model should be seen as harmful.

Have you thought through the ethical implications of what you're doing, at all?


Feeding this data to Facebook, a company already known to perform unethical psychological experiments on users without consent, is beyond stupid. If the people behind this are aware of Facebook's history and practices, I'd consider it malicious to choose to expose depressed people to their platform.


It's gross negligence at the very least.

Facebook routinely experiments on people in a way that's dangerous to people with mental health issues (and likely even healthy adults).

Opening up their treatment to Facebook is incredibly negligent.


>> It's gross negligence at the very least.

It's probably the basis of their business model: monetizing the conversations people have with the chatbot.

It's even more absurd given that conversations between patient and therapist should be guaranteed to be private; otherwise the necessary trust will never be established.

Thinking that a chatbot can do a therapist's job is, for me, either a clear display of massive ignorance or a deliberate attempt to make a buck off people who can't afford proper therapy.

Psychological therapy focuses (not only, but mostly) on basic aspects of humanity: feelings and reasoning, and the actions that follow from them.

Everything is important in such an engagement: the language used, the topics discussed, the best approach to a specific topic. And all of these depend on who the patient is (their feelings, outlook on life, the language they use to express these), but also on how the therapist reads/understands the patient and acts/dialogues in response.

I can't see how someone could honestly think this is something a chatbot can do - how could a piece of software reason over what a person is saying and feeling? And I think it's a well-established fact that therapy can actually be harmful or even dangerous if done improperly.


yep--one would have hoped that this shop had thought through the obvious consequences--i.e., what the likely effect is on someone with depression who believes they are seeking treatment and that the treatment has failed them.

is this responsible, guys?


Ethics and responsibility seem to have taken a permanent back seat to profit in the "tech" industry.


sure seems that way (who has time to be ethical and responsible when you are changing the world!)


There seems to be a new push for technological solutions to social ills, and it consistently boggles my mind how so many smart "experts" really think people engaging more with computers/phones/AI instead of, you know, engaging more with actual people will solve any kind of social ill.


A decent CBT therapist can charge $150+ per hour, and personal change can take years of constant, ongoing effort.

For most people right now, it's going to be an app or no treatment at all, particularly for those who are only mildly impaired by their distress.

Besides, it's not an either/or proposition. Personally, I don't think that chatbots are all that useful (I've been building them as a large part of my job for a year), but I can see interactive CBT/DBT reminders/journals/checklists as being a very powerful tool.


It seems like the real danger is the appeal of sacrificing nuanced treatment for convenient treatment. Your comment pointed out that for many people the choice might boil down to a supportive app or nothing. And as you noted, personal change can take years of dedicated personal effort.

To me, it seems like the biggest risk in the long term isn't in whether or not people get treated. The article demonstrates that the technology needed to make widely available mental health treatment options is being explored and worked on. The danger in my mind comes afterwards when the technology is performing acceptably but not comprehensively. It seems like at that stage there is a possibility of people not seeking more rigorous/advanced treatment from a professional, once they have the opportunity to do so, because they may already view the app as being good enough and any failures to improve beyond what they've already experienced through the app as personal failures rather than treatment failures.


The overwhelming majority of people who need psychotherapy cannot get access to adequate treatment. Access totally dwarfs every other issue in psychotherapy.

Here in the UK, most people with moderately severe depression or anxiety have to wait several months to get access to CBT. That will usually comprise six one-hour sessions, often with a mental health care practitioner rather than a clinical psychologist. Patients with less severe symptoms will be triaged to a telephone-based service, an online service or self-help books.

That's a perfectly typical story for a high-income country with an excellent healthcare system. Access to psychotherapy is routinely and severely rationed because of cost. In lower-income countries, the picture is far worse. For most people, the choice is not between nuanced treatment and convenient treatment, but between cheap treatment and no treatment.

In all honesty, I'm struggling to contain my anger at your comment - it's quite clear that you're from an extremely privileged background. $150 for a weekly session with a psychotherapist is an unimaginable luxury for the vast majority of people in the world.

I'd love for everyone to be able to access a clinical psychotherapist on demand, but it simply isn't going to happen. There isn't the money or the political will to make it happen. I welcome innovative approaches to psychotherapy, provided they're evidence-based and rigorously evaluated. We have a real opportunity to improve the wellbeing of hundreds of millions of people. I think that rejecting these possibilities because of a hypothetical risk is utterly churlish.


Currently, mental health treatment in the UK is under severe pressure, and it's going to get worse if the Conservatives win.

However:

> Here in the UK, most people with moderately severe depression or anxiety have to wait several months to get access to CBT

Here's the data for Jan 2017: http://www.content.digital.nhs.uk/catalogue/PUB23831

> 90.2 per cent waited less than 6 weeks and 98.7 per cent waited less than 18 weeks to enter treatment

Most people wait less than 6 weeks to start treatment.

I agree with the rest of your comment: the CBT model should be 12 to 14 weeks of one-hour sessions face to face with a therapist (we're not sure if the experience level of the therapist makes much difference), and many people are getting a much reduced version of this: 6 to 8 weeks of 45-minute sessions, sometimes in groups or over the phone.


I think you either misunderstood my statement or I did a poor job communicating my points. Either way, it seems clear from your response that the position I was trying to present was not the one that was interpreted. I'm somewhat resigned to your assumptions about my upbringing based on a single comment, but in general I'm more annoyed that, instead of simply rebutting the points you thought I was making (which you did very effectively), you felt that degenerating the discussion into personal attacks was somehow also necessary.

Now then, you're right about access to care. If you felt like I was being dismissive of the idea that someone might lack the available funds to get proper treatment for a mental disorder then I apologize. My statement about the long term wasn't to suggest that the access problem is solved now or that it will be solved within the next few decades which in my mind represent the near term as far as timescales go. I simply meant to imply that, assuming the app continues to show promise and is eventually completed as a functional product, and that similar derivative programs aimed at helping treat different mental illnesses can be developed, the concern after those accomplishments becomes ensuring that they don't dissuade patients from pursuing other more appropriate treatment paths once/if their situations improve to that point. I did not mean this as being the only concern or the primary concern afterwards. It was simply the concern I voiced because it was interesting to me when I thought about it and so I shared it.

I'm making assumptions so I may of course be wrong, but my guess is that the latter part of what I just talked about was what upset you. I didn't mean to imply that patients could simply change their situation and go after different treatment paths. I simply meant that, in a world where the only consideration is treatment appropriateness, I thought it was worth considering the secondary effects app-based treatment could have on patient treatment seeking. I didn't mean this to discredit app-based treatments as a viable tool, and I see no reason they couldn't work in isolation, synergistically with other treatments, or any other variation of "I'm sure they can work" that might be appropriate to this discussion. That seems to be the other interpretation you made from my post, that I was being dismissive of the app compared to other treatments, and this was not something I realized I was communicating. However, since, as far as I can tell, this was the reason you accused me of being churlish, I thought it worth clarifying.


If remarking on your privilege constitutes an attack, then so be it. It felt necessary because of the desperate need that exists in the community, which your comment seemed not to recognise in the slightest. Globally, there is a suicide death every 40 seconds; the vast majority of those people were mentally ill but had received no treatment whatsoever. Suicide rates have spiked dramatically in many demographics since the great recession, with the greatest impact being felt in deprived communities.

Psychiatric hospitals across the developed world are discharging actively suicidal patients on a daily basis, because they're marginally less suicidal than the dozens of people who are waiting for a bed. Beyond suicide, there are vast increases in drug and alcohol related deaths in impoverished communities. We're in the midst of a mental health emergency and need all the help we can get.

You're speculating about whether an app might hypothetically dissuade people from seeking better treatment at some point in the distant future. I think my response is downright charitable.

Consider another hypothetical scenario. A great famine has struck. Someone is dying every 40 seconds. The UN start helicoptering in bulk quantities of freeze-dried rations. I remark that there is a great danger in providing food aid, because it might dissuade people from eating a balanced diet of fresh fruit and vegetables. What response would I expect?

Qu'ils mangent de la brioche. ("Let them eat cake.")


I had a longer comment prepared but I deleted it because I realized this conversation isn't going to be productive. If you want to make assumptions about my upbringing and ignore that I'm not actually disagreeing with you then there's no real reason to continue and pretend like we're having a discussion. Your heart is in the right place and you seem aware of the relevant statistics in play so I wish you well.


Pamela from Woebot here. That's a great point. We actually plan to detect when users have not improved their mental state enough over time (based on a brief clinical screen every few weeks plus the daily moods), and in that case, we will suggest that we are not effective for them and point out other options. It would be up to the user to consider the options and act on them, of course.
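A minimal sketch of what that kind of escalation check might look like (Python; the screening measure, thresholds, and function names here are hypothetical illustrations, not Woebot's actual logic):

  from statistics import mean

  SCREEN_IMPROVEMENT_MIN = 2   # hypothetical: minimum drop in screening score to count as improvement
  MOOD_IMPROVEMENT_MIN = 0.5   # hypothetical: minimum rise in average daily mood (1-10 scale)

  def should_suggest_other_options(screen_scores, daily_moods):
      # Not enough data yet to judge either way.
      if len(screen_scores) < 2 or len(daily_moods) < 14:
          return False
      # Has the periodic clinical screen improved (lower score = better)?
      screen_improved = (screen_scores[0] - screen_scores[-1]) >= SCREEN_IMPROVEMENT_MIN
      # Has the average daily mood risen between the first and second half of the period?
      half = len(daily_moods) // 2
      mood_improved = mean(daily_moods[half:]) - mean(daily_moods[:half]) >= MOOD_IMPROVEMENT_MIN
      # No meaningful movement on either signal: point the user to other options.
      return not (screen_improved or mood_improved)

  # Flat screening scores and flat moods over several weeks -> suggest other options.
  print(should_suggest_other_options([14, 13, 14], [4, 5, 4, 5, 4, 5, 4] * 2))  # True

Of course, which validated screen to use and where to set the thresholds are clinical questions rather than engineering ones.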


Nearly any new method of treatment opens new doors for human failings, and there will inevitably be some nonzero number of people whose lives are made worse by what is, on the whole, an overwhelmingly positive development for humanity and mental health. Let's first worry about making the apps "good enough" for people to feel like they're getting _any_ decent treatment, let alone comprehensive enough to eliminate the desire for additional human components, before we worry about this strange, potentially irrelevant rabbit hole.


What makes you think it's potentially irrelevant? Go by any mental health clinic in your city and ask if there are any patients who have conditions that impair either their ability to come in for treatment or their ability to recognize that they need treatment. I agree with the majority of your rebuttal that, at the moment, it's more productive to focus on simply getting this new treatment to a point of reliable functionality. But that doesn't mean that its potential secondary effects on patient treatment outcomes shouldn't also be considered or explored during experimental trials.


I agree that a wide range of potential effects and side-effects should be considered. I'm sure a key metric in this experiment involves collecting data on how it influences users' further treatment. I think this will either put your fears to rest (for now) or immediately draw the concerned attention of everyone conducting this experiment.

Edit: Maybe it _is_ all a scheme by Big Insurance to give the masses minimal treatment that doesn't actually solve their problems but provides the illusion of a solution while sucking them into a vile dependence on closed-source AI personalities to feel any sense of therapy. Technology enables so many insidious business practices that I don't know what to believe anymore.


This is the right idea; it isn't about digitally replacing treatment. It's entirely about augmenting it.

This kind of technology in medicine is about enabling human doctors to treat patients better with more (hopefully meaningful) data points at their fingertips. You might be surprised how far behind the medical industry is when it comes to reaching out to the smartphone generation.


It's going to be about both. It's >already< about both.

There are therapists, and there are books on CBT. This is an extension of the second, more than the first. It's really nothing more than an electronic version of a choose-your-own-adventure book.


Something isn't adding up here... if this is supposed to be supplemental therapy alongside talking with an actual human, but instead the actual-human part is nixed and it is used as the entirety of therapy (i.e. outside of its prescribed and tested usage), are we not worried about the negative externalities of replacing human contact with machines for curing social ills?

That's not even touching where you've just pointed out that we're essentially now decreeing that poor people get to talk to machines and rich people get to talk to humans, or the fact that insurance companies will jump at the chance to force the cheap AI down everyone's throats before they pay for human therapy, etc etc...


The status quo is poor people talk to nobody and rich people talk to humans. This is an improvement, no?


No.

The status quo is poor people talk to social contacts (eg, friends, family, church community, etc) while rich people talk to professionals (eg, therapists).

This is a change in that poor people are being moved to a professionally manufactured tool, which isn't necessarily an improvement -- it's just replacing an established, ad hoc system with an unknown technical one.

There's every possibility that it would make the situation worse, and it's hubris on the part of the medical community to assume that a tool built by them is better than informal therapists.


> The status quo is poor people talk to social contacts (eg, friends, family, church community, etc) while rich people talk to professionals (eg, therapists).

For some people (particularly those who have large extended families), that is indeed the status quo, but for others not so much.

If your home environment is not supportive (you can imagine responses ranging from "everybody gets depressed sometimes, just work harder and things will get better" to "grow a pair"), or even abusive, and you aren't in a position to get professional help, a tool like this (though given various other concerns, perhaps not this specific one) could very well be the only alternative to no treatment at all.

What comes to mind, though, isn't an automated therapist per se, but something closer to the "Young Lady's Illustrated Primer" (from the novel The Diamond Age).


No.

Everyone talks to social contacts. CBT therapy is something entirely different.

Parts of CBT and DBT are skill training in emotional and cognitive behavioural regulation that people have not mastered sufficiently on their own. You can teach these skills, and I'd argue there should be mandatory classes on them through middle school. IMHO, they are the most valuable skills you can learn, bar none.

>There's every possibility that it would make the situation worse, and it's hubris on the part of the medical community to assume that a tool built by them is better than informal therapists.

There's hubris in your assumption of how therapy works, or in the tools it incorporates today. This application was released along with a study demonstrating its efficacy. It's not the first of its type - it's just the chatbot format of a self-help book. There will be much better versions, I'm sure, over time.


I think there's more to therapy than having good support or peers, although expressing yourself and having someone there to acknowledge you can undoubtedly be very cathartic.

Or, you might talk with friends about what's bugging you, but they're not going to necessarily explore how your feelings relate to external factors, or ask questions to help you understand yourself. A therapist's job is very different from the role of a friend.


If the alternative is sequestering them into talking to an AI controlled by the people who get to talk to humans, then unequivocally no.


There is room for therapists, therapist-assisting applications, and standalone tools for therapy, just like there is today. This app is a replacement for self-help books, not therapists.

Moreover, CBT and DBT, which is where therapy is currently focused for most issues, involves a lot of mental re-tooling and training.

This part of therapy is like learning any other process, and is approached and practiced like any other skill.


Imagine in the future, a really smart shrink AI is invented that can provide useful insight on how mental health of an individual might be improved. Then, everyone who has logs with this bot, or similar technologies, will be able to benefit retroactively simply by having more digital data points.

Just because you're not talking with a human about the data you're generating _today_ doesn't mean it won't bring interesting discussions to bear in the future.


Those discussions will include "which products are depressed people most likely to purchase?".


There isn't enough computing power on Earth to parse the stuff I threw at my therapist when I did CBT (which was a super good investment for me).


Hacker News used to be a site with optimistic people, entrepreneurs, people who did stuff, people who built stuff... now it's all whining about people who are actually doing things.

It makes me think of the Roosevelt speech...

It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better. The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood; who strives valiantly; who errs, who comes short again and again, because there is no effort without error and shortcoming; but who does actually strive to do the deeds; who knows the great enthusiasms, the great devotions; who spends himself in a worthy cause; who at the best knows in the end the triumph of high achievement, and who at the worst, if he fails, at least fails while daring greatly, so that his place shall never be with those cold and timid souls who neither know victory nor defeat.

does anyone here have anything constructive to add?


Solving a societal problem that is pretty complex and not very well understood with a computer interface, in order to reduce costs instead of fixing the system to accommodate real change, might raise some criticism.

Depressed from your 2 part-time jobs and driving Uber? There's an app for that! Oh, don't use it right now - 4chan managed to teach the AI and now it's giving junk advice.

What about working for a better society that accommodates people, instead of building crutches to maintain the problematic status quo while trying to make some money for yourself and your friends?


In this case, the "dust, sweat and blood" could end up on the face of someone suffering from depression who ends up in an even worse situation as a result of that "move fast and break things" entrepreneurial spirit.

This is a field where you're literally playing with people's lives. Your "daring failure" could result in suffering and/or death for those people. If you think you should be free to do that without criticism, maybe it's time to learn a thing or two about "ethics".


Maybe everyone has just grown up.


Why do you think that people raising ethical concerns about business practices is not constructive?


That's not self-evidently a bad thing.

For example, here is an article about Ellie, a University of Southern California virtual therapist used in the treatment of PTSD. It notes some explicit advantages which at least complement human therapists:

“One advantage of using Ellie to gather behaviour evidences is that people seem to open up quite easily to Ellie, given that she is a computer and is not designed to judge the person”, Morency explains to news.com.au

...

Morency stresses she is not a substitute for a human therapist. Rather, she is used in tandem with a doctor as a data-gatherer, able to break down walls which may exist due to a patient’s unwillingness to disclose sensitive information to a human.

As Morency explains, “The behavioural indicators that Ellie identifies will be summarised to the doctor, who will integrate it as part of the treatment or therapy. Our vision is that Ellie will be a decision support tool that will help human doctors and clinicians during treatment and therapy.”

- http://www.news.com.au/technology/innovation/meet-ellie-the-...


Just in case someone wants to know: if you get to this bot without looking at their page, it's a paid service, it doesn't tell you that up front, and after using it for ~two weeks it stops working and asks you to pay $39 USD per month.


Couldn't they have picked a better platform than Facebook to try this out on, given all of the research into the effect of social media on one's mental state?


(Pamela from Woebot). Great points here about the pros and cons of Facebook integration.

Regardless of those, we definitely plan to develop non-FB options in the near future. There's a mailing list here if you'd like to find out when those are offered: https://www.woebot.io/#besides-facebook

On a personal note, I often recommend the "Kill News Feed" Chrome extension for Facebook users. (Woebot doesn't recommend it as he doesn't know about it yet; he lives in a happy world free of news feeds :)


> given all of the research into the effect of social media on one's mental state?

Given that research, wouldn't that make Facebook be the best platform for an anti-depression chat-bot? That's where the users will be.


I am picturing a Slack integration - but maybe that's just too depressing to contemplate.


I think this is a great idea, because we need new approaches to depression. Tracking your mood would be helpful to people with mental disorders and could also inform their medical professionals. Also, the fact that the chatbot's therapeutic effects have research behind them lends a large amount of credibility to the claim that it actually has a positive impact.

On the other hand, although the creators of the bot have good intentions, I wonder about the fact that you are using the internet to relay personal medical information. Also, I don't know whether your Woebot information is stored anywhere, so if Woebot gets hacked, a bunch of PII relating to any medical data you send could be exposed. I wish this was a standalone app that would prompt me to upload my data if and when I see fit.


Pamela from Woebot here! We plan to develop non-FB options in the future, and they could involve entirely client-side stored data with no internet relay. There's a mailing list here if you'd like to find out when those are offered: https://www.woebot.io/#besides-facebook


Kenneth Colby, another Stanford professor, had something like this in the 1970s and '80s. According to the Wikipedia article [1], it was sold as a product. I remember reading a skeptical account of it in a popular book back then [2].

[1] https://en.wikipedia.org/wiki/Kenneth_Colby

[2] http://rdrosen.com/psychobabble-fast-talk-and-quick-cure-in-...


Going right back to the dawn of AI, ELIZA was designed to crudely emulate a Rogerian psychotherapist.

https://en.wikipedia.org/wiki/ELIZA


Far from the first idea I've heard for this, e.g.: http://www.newyorker.com/tech/elements/the-chatbot-will-see-...


There is a clinical psychologist in Japan who is doing the same thing, but without the data-mining. Unfortunately, the medical and academic communities here have been pretty aggressive about trying to shut her down.

https://www.disruptingjapan.com/can-this-startup-solve-japan...

It's a great idea. I hope we see more of it.



You can easily fool people into thinking they're talking with a person even though they're talking with a machine. But I wonder, can you do the same if they know they're talking with a machine? Or even, could you make yourself feel a program is human?

Simpler question: with a sufficiently-smart story generator, could you enjoy it if you knew it was procedurally generated? I feel that I couldn't. But I'd love to be wrong about that.


> Simpler question: with a sufficiently-smart story generator, could you enjoy it if you knew it was procedurally generated? I feel that I couldn't.

I feel like this might commit you to having some amount of conflict with "death of the author"?

If one cares that the writing was written by a person, and not just transcribed by a person, but composed by a person, it seems to me like one probably cares that the writing was done with an intent.

I don't see a reason that one would care about whether the writing was made with an intent, in a way that plays a role in whether one enjoys it, but not care at all about what that intent was.

So, I would expect that anyone who couldn't, or at least most people who couldn't, enjoy a work of fiction if they knew it was procedurally generated, probably would care about the intent of the author in writing what they are reading.

I also generally care about that, but I think I sometimes enjoy procedurally generated works, though this might be largely due to my knowing about the intent behind the writing of the code that produced the procedurally generated content, which includes substantial bits of literal text to include in the output, so..


What if it was ML trained on all the novels in Project Gutenberg? I would think the general style of writing could be convincing, but I wonder about things like overall plot, character development, and pacing. At least for a human author, those require a certain degree of planning; it would be interesting to see if an AI could just 'let them happen' or if it'd be the literary equivalent of a shaggy dog joke.


>You can easily fool people into thinking they're talking with a person even though they're talking with a machine.

Citation needed. After 5 minutes of chatting with a robot, you conclude it's either an insane clown or a robot.


I fooled enough people with my half-assed IRC bot that did a bit of fuzzy pattern-matching for natural language queries.

It's harder to fool someone in an actual, focused 1:1 conversation, but it's easy to do in brief interactions when people don't suspect the other party may be a bot.

See also: automated news stories, autogeneration of e-mails, etc.
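For a sense of how little it takes, here's a toy sketch of that kind of fuzzy matching in Python (using difflib against a few canned phrasings; the phrases and cutoff are made-up examples, not the original bot):

  import difflib

  # Canned replies keyed by canonical phrasings the bot "knows" (hypothetical examples).
  CANNED = {
      "who are you": "Just another regular on this channel.",
      "what is the weather like": "Looks fine from here, but I never go outside.",
      "how do i reset my password": "Try asking an op, they handle that.",
  }

  def reply(message):
      # Normalise the message and fuzzy-match it against the known phrasings.
      cleaned = message.lower().strip("?!. ")
      match = difflib.get_close_matches(cleaned, list(CANNED), n=1, cutoff=0.6)
      # Fall back to something vague enough to keep the illusion up.
      return CANNED[match[0]] if match else "hmm, not sure what you mean"

  print(reply("Who are you??"))           # matches "who are you"
  print(reply("hows the weather like"))   # close enough to the weather phrasing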


Too bad, I don't, and won't, Facebook.


You will need to install the Facebook to try it.


Yes, that is currently true. We plan to develop non-FB options in the future, mobile apps with minimal server data interaction/storage. There's a mailing list here if you'd like to find out when those are offered: https://www.woebot.io/#besides-facebook


If you can read about CBT in a book, then you can use an app or AI bot as well. It should be tested and validated. However, there are some ethical concerns about not having a licensed professional in the loop who can do an evaluation, monitor progress, and intervene.


I have for several years thought about a real counselor using machine learning....

https://github.com/andrewt3000/DL4NLP/blob/master/carl.md


Why not develop, like... a chatroom?

Reminds me of the story about robots in old folks' homes. Why not just have people to keep people company?


> Why not just have people to keep people company.

This has actually been tried in the Netherlands, apparently with great success. University students can live rent-free in a retirement community on the condition that they spend time with the residents.

http://www.pbs.org/newshour/rundown/dutch-retirement-home-of...


That's actually an awesome way to keep someone participating! Basically, each would conspire to keep the user in the discussion! I'm down to build it if anyone wants to collaborate ;)


Really like this, but wish it wasn't via Facebook Messenger. Maybe I'll make a one-off profile to use it.


(Pamela from Woebot here) Yes, I'm afraid it's only FB right now. We plan to develop non-FB options in the future, mobile apps with minimal server data interaction/storage. There's a mailing list here if you'd like to find out when those are offered: https://www.woebot.io/#besides-facebook


Seems very good. Will be interesting to see if it helps me reveal any insights to myself :)


Brings new perspective to the modern American adage "find someone who cares".


Are there automated online diagnostic tests comparable to the ones you might take in person?



