When contemplating what jobs were going to be replaced by robots, I never thought of doctors, but it now seems quite plausible, even probable.
(Not that doctors are going to be gone completely, just that their jobs will be forever changed the way that CNC has completely changed the job of a machinist)
I wonder if doctors would become more of a service industry -- doing "patient care" (as customer service) instead of "disease care" now. Could be an improvement for the experience of receiving healthcare; definitely interesting to think about!
The doctors I know tend to think that it's at least 90% that way already (and has been for a while), at least if you're a primary-care doctor seeing patients in an office, rather than a surgeon or some kind of specialist. The typical patient isn't presenting with a rare disease worthy of a medical TV drama, and instead the main problems are patient-interaction ones: getting the right information out of them, picking treatments that the patients will be satisfied with and follow, figuring out what complicating lifestyle factors they might have, building trust with the patient and family members, etc. With the increasing influence of evidence-based medicine, where doctors are supposed to follow experimentally validated rubrics, the diagnosis itself is often semi-mechanical anyway even when done by a human.
I meant more that the metric consumers (patients) use to differentiate between doctors could in the future shift to being customer-service-based. You're right that in a lot of ways diagnosis right now is semi-mechanical, but I think there is still the perception that you have to make sure you're going to the "best doctor" in terms of treatment ("they are a good doctor" meaning they diagnose correctly and treat appropriately), not in terms of customer service (which would be "they treat me the best").
With the increasing automation and crowd-sourcing of customer service, automated interviewing becoming available to employers, and better computer vision (detecting emotions in live video; it might even be possible to detect lying automatically, and perhaps more precisely than humans can), plus the fact that a larger part of medical support is done by peer groups (and probably done better), I think the building blocks to provide the services you mentioned in a highly automated fashion will become available.
Theory. "We think that X comes from Y and the reasoning is ..." and then somebody else says the exact opposite, with an equally likely-sounding reasoning, and then you don't know what to do. "Evidence-based" means "shown to work - we may not know why, but at least it does".
On this topic, I recently realized the dangers in trying to propagate "evidence-based" throughout all sectors of healthcare. A nurse was complaining about a doctor questioning if a certain drug was safe for breast-feeding women (well for the child, actually). Indignantly, the nurse said "but this manual right here says it's safe! How much more evidence-based can it get!". To her, "the book says so" was "evidence". 'Evidence-based medicine' is no panacea - the principles maybe, but the hard part is in the implementation.
What's the alternative? Nurses don't trust manuals? Obviously as a society we should always strive for better science. But just like Lady Justice wears a blindfold, nurses should follow the manual.
Of course, that wasn't my point; my point is that when 'evidence based' becomes a mantra that is misunderstood by many, its use in inappropriate contexts devalues the word and, with that, the concept itself.
"Evidence-based" means that you tie statistical outcomes to procedures as opposed to doing what you think might be right based on your limited life experience (which is biased).
For example, let's say there is a test that checks for a birth defect that must be treated or it will result in infant mortality. Let's also say that the test causes infant mortality one time in 10,000. Whether you perform the test has to do with the statistical likelihood that the fetus will have the birth defect - if the likelihood is less than the probability of the test killing the baby then you shouldn't do it.
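The trade-off in the screening example above reduces to a one-line comparison of probabilities. Here is a minimal sketch; the numbers are the hypothetical ones from the example, not real clinical data:

```python
# Toy expected-value check for the screening example above.
# All probabilities are hypothetical, for illustration only.

P_TEST_MORTALITY = 1 / 10_000  # the test itself is fatal 1 time in 10,000

def should_test(p_defect: float) -> bool:
    """Run the test only if the chance of the (fatal) birth defect
    exceeds the chance that the test itself is fatal."""
    return p_defect > P_TEST_MORTALITY

# Population with a 1-in-1,000 defect rate: testing is justified.
print(should_test(1 / 1_000))    # True
# Population with a 1-in-100,000 rate: the test is riskier than the disease.
print(should_test(1 / 100_000))  # False
```

Real guidelines weigh more than mortality (morbidity, cost, false positives), but the decision rule has this basic shape.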
Computers are commonly used to assist in diagnosis. You might want to look up computer-assisted diagnosis[1]. If I am to believe my machine learning lecturers, their use is actually a bit more widespread than what the wikipedia page describes.
Anyway, I think we're likely to see a transition to computers diagnosing and doctors just checking the results, at least in some areas. If you have a diagnosis and patient information, I'd guess it isn't that difficult to come up with a treatment. I think doctors themselves basically memorize decision trees.
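A "memorized decision tree" of this kind is trivially easy to encode. A minimal sketch, where the symptoms and diagnoses are invented placeholders, not medical advice:

```python
# Minimal diagnostic decision tree (invented findings and diagnoses,
# illustration only). Internal nodes ask about a finding; leaves are
# working diagnoses.

tree = {
    "fever": {
        True:  {"rash": {True: "measles (suspected)", False: "flu (suspected)"}},
        False: {"cough": {True: "common cold", False: "no diagnosis"}},
    }
}

def diagnose(node, findings):
    # Walk the tree until we reach a leaf (a string).
    if isinstance(node, str):
        return node
    symptom, branches = next(iter(node.items()))
    return diagnose(branches[findings[symptom]], findings)

print(diagnose(tree, {"fever": True, "rash": False, "cough": True}))
# prints "flu (suspected)"
```

Real clinical pathways are larger and probabilistic rather than binary, but structurally they are the same object.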
I'm sure most people wouldn't expect doctors to be replaceable. But look at the process to become an MD and think about the amount of pure memorization involved. Then it all makes sense.
Once a robot doctor is ready, imagine how easy it would be to create another one. Replicate the hardware and software, and copy all the data. Compare this with the amount of time it takes to make a human doctor.
As a specialist, I have to say: this makes me very glad I'm not a primary care provider. The doctors of the future will be evaluated on bedside manner and interpersonal skills, not on diagnostic ability or knowledge, because they'll all have access to Watson. Which is scary, because smooth-talking idiots can get into medical school but get rapidly weeded out in the real world. It scares me that stupidity could hide behind this computing beast. And there will be situations where Watson cannot help (emergencies) where a doctor's own smarts are all he has to go on.
If Watson makes a wrong diagnosis and a patient dies, who will be responsible? The doctor? IBM? All it would take is one or two incorrect diagnoses leading to large malpractice suits to really put the medical profession on edge about this. Granted, many incompetent practitioners exist out there making all kinds of dangerous decisions, but this would really stick out in the public eye.
You don't really have to get the diagnosis correct, you just have to get the error bars correct, and then disclose all of that.
"Watson believes the best treatment is Foo. It has an X% chance of working, with a confidence interval of Y. The second most likely diagnosis is Bar, with an expected loss of Z if the diagnosis is incorrect. Which would you like to do?"
Besides, several specialties are already essentially classifiers (radiologists, anesthesiologists). If machines can't yet beat humans at classifying cancer from lung scans, they're not far behind. By the time Watson is in production, I bet a computer will already be replacing radiologists.
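The "disclose the error bars" format described above is easy to sketch: rank candidate treatments by expected value and show the numbers instead of hiding them. The probabilities and loss figures below are made up for illustration:

```python
# Sketch of the disclosure format described above: rank candidate
# treatments by expected benefit and surface the uncertainty.
# All numbers are invented.

candidates = [
    {"name": "Foo", "p_success": 0.78, "loss_if_wrong": 0.10},
    {"name": "Bar", "p_success": 0.55, "loss_if_wrong": 0.30},
]

def expected_value(c):
    # Expected benefit: success probability minus the expected loss
    # incurred when the underlying diagnosis turns out to be wrong.
    return c["p_success"] - (1 - c["p_success"]) * c["loss_if_wrong"]

for c in sorted(candidates, key=expected_value, reverse=True):
    print(f'{c["name"]}: EV={expected_value(c):.3f} '
          f'(P(success)={c["p_success"]:.0%})')
```

The point is that the system's output is a ranked list with numbers attached, and the human (doctor or patient) makes the final call.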
There is a saying in medical diagnosis: "When you hear hoofbeats behind you, don't expect to see a zebra". https://secure.wikimedia.org/wikipedia/en/wiki/Zebra_%28medi...
Most patients have common problems that are fairly easy for experienced physicians to diagnose. So there is no point to entering a bunch of data into Watson just to get back the same answer you already know. But it could certainly be helpful for those rare edge cases where the physician can't figure out the "zebra", or where the patient hasn't responded as expected to treatment.
I think this is backwards. (After thorough testing, of course.) I'd rather see Watson be the first tier of diagnosis, backed up by a doctor when Watson isn't so sure of an answer, as opposed to Watson just helping befuddled doctors. This could really bring down medical costs at scale.
Actually, this seems like the perfect opportunity for pharma companies to "help" in creating the database.
That's actually what they are already doing when pitching new drugs to doctors. Now they only have to convince the system designers and not thousands of practicing doctors.
Most new drugs today win on marketing, rather than efficacy. Once you have a system like watson, it's rather trivial to do the expected value calculation of "This treatment has an X% chance of working, and costs Y. The competing treatment has X'%, and costs Y'."
As long as that data is available to patients (and it should be), I'm with the parent poster, this will make it harder to pad treatments.
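The cost-adjusted comparison described above is a standard net-benefit calculation. A minimal sketch, where the willingness-to-pay figure and all treatment numbers are invented for illustration:

```python
# Cost-adjusted comparison of two competing treatments, as described
# above. WTP ("willingness to pay") is the dollar value a payer assigns
# to one successful outcome; all numbers are invented.

WTP = 50_000

def net_benefit(p_success: float, cost: float) -> float:
    return p_success * WTP - cost

treatment_a = net_benefit(0.80, 10_000)  # 0.80 * 50,000 - 10,000 = 30,000
treatment_b = net_benefit(0.82, 25_000)  # 0.82 * 50,000 - 25,000 = 16,000

# The marginally more effective drug loses once its cost is included.
print(treatment_a > treatment_b)  # True
```

This is the calculation that marketing currently obscures; a system that runs it automatically for every prescription changes the incentives.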
The customers for the software are the hospitals that people go to, not the insurance companies or the patients. I guarantee you that the hospitals will prefer whatever option results in the best possible financial outcomes for themselves.
Well then who is going to do the clinical observation, coding, and data entry? That work doesn't get done for free and in most cases it would actually take the healthcare providers more time to do.
Do you need to spend 7 or 8 years training someone to do that? If I'm not mistaken, that's already largely done by nurses and patient care technicians.
You are mistaken. In most cases that's not done at all by anyone. Most of the data entry done by medical coders (not programmers, the other kind of coders) is for billing and insurance claims purposes. That claims data isn't detailed enough to be useful for diagnosis. And in most cases it is done hours or days later, not immediately at the point of care where the doctor is expected to make a diagnosis and decide on the course of treatment.
> And in most cases it is done hours or days later, not immediately at the point of care where the doctor is expected to make a diagnosis and decide on the course of treatment.
So does that mean it's not important to the doctor's job? Or would having this information right away allow doctors to do their jobs better? If so, why isn't it done that way right now?
Here is a list of the most common diagnoses from a few years ago.
http://www.hcup-us.ahrq.gov/reports/natstats/commdx/table2a....
Doctors already do quite well diagnosing those in almost all cases so there isn't much room to improve on the diagnostic side. Once the doctor takes the patient history, observes the symptoms, and reviews the lab results then the diagnosis is usually fairly obvious (to an experienced professional). If the doctor had to enter all her detailed observations into a computer it would just slow the process down and increase costs.
> If the doctor had to enter all her detailed observations into a computer it would just slow the process down and increase costs.
On the other hand, paying someone working for/requiring much less money/training than the doctor (say, a BS & $20 or $30/hour) to enter the data would lower the overall costs, would it not?
No that really wouldn't lower overall costs because it wouldn't immediately add much value in most cases. There would be some long term value later since that additional coded clinical data could be mined for clinical research to determine which treatments are most effective. But no one is willing to pay an additional $10 (or whatever) per patient encounter to get that data.
If there weren't so much bureaucratic red tape surrounding the healthcare industry, that would be a great opportunity for an entrepreneur. Correcting even the smallest inefficiencies can dramatically increase profit.
Ha ha. Seriously, clinical data entry is not quick or easy (outside a few limited special cases) regardless of how well you design that mythical app. Even perfect speech recognition would only get you part of the way there, and we aren't likely to get that any time soon.
I know, we use your service. :-) It works well for increasing the productivity of our human transcriptionists but it doesn't really solve the automated diagnosis data entry problem that others were asking about.
I really don't think perfect speech recognition is that far away. 10 years and we're gonna have something that can work. 10 years and we'll have an advanced Google Goggles that can help with diagnosis. Ask the question "What are the limits of human input?" Nothing besides touch, taste, hearing, sight and smell, and we're getting closer and closer to being able to input that data and process it as humans would.
This is one of my favorite sayings, so I feel compelled to minorly correct it here. The full saying is "When you hear hoofbeats in Texas, think horses not zebras." The "in Texas" is a key piece of context, as there are some places where zebras might well be more common than horses. The point being that one should suspect and rule out the most common differential diagnoses first.
I know this might sound a tad heretical, but if Watson gets to be as good as or better than a doctor, do we really need doctors anymore?
The theoretical upper bound on Watson's ability is a lot higher than the theoretical upper bound on a doctor's, since it can make inferences based on a vastly larger data set.
Watson may be able to recommend you get surgery, but he won't be able to perform the surgery himself. He's only moving in on one aspect of medicine. Matching symptoms to diagnoses is the thing that machines are most clearly suited for in the entire medical profession, because of the vast volume of possible measurements and diagnoses, and because humans' various cognitive biases don't work well with the probabilistic nature of the work. That's still a rather narrow range of work. Humans will remain superior at performing medical procedures for quite a while. And until true Strong AI, humans will also remain superior at quite a lot of the information gathering that Watson requires, especially psychological measurements, and at making decisions that involve quality-of-life and other ethical considerations beyond just probability of success.
tl;dr Watson can, and will, exceed doctor's capability for a certain, somewhat narrow, range of their function. Doctors will still be necessary for the other things that they do.
http://www.intuitivesurgical.com/products/
Amazing, 3D-vision robots. Very graceful translation of hand movements into the robot's arms. Very nice product. Currently only for laparoscopic intra-abdominal surgery. Not yet automated. They have da Vinci-assisted cardiac surgery programs developing in some parts of the country.
We'd still need surgeons until the robots are good enough. We'd need radiologists until they can add decent computer vision. Ditto for pathologists and other vision-intensive specialties. Orthopods are very hands-on and collect information through channels other than words. So there's a sweet spot in the spectrum of doctors for something like Watson. I'd wonder about the ability to ask questions and ferret out socially-obvious problems like broken arms from abuse if Watson were a first provider.
Presumably you'd couple something like Watson with a world-savvy human physician assistant that acts as its eyes, ears, and touch.
To the extent that abuse is a medical diagnosis, it would be trivial for a computer to make that presumptive diagnosis. In fact, if Watson is truly performing at the level of an M2, it probably already can.
Actually, I'd think the first signs of abuse would be the most difficult things for a computer to diagnose. Excuses are given, nobody says anything about violence, the person slipped, but the non-verbal cues might raise a red flag.
I'm not sure how much common sense world knowledge would be in Watson. "Hey doc, I got a pain in my chest while watching the football game vs Cal at Memorial Stadium." The DDX might include angina despite the computer being unlikely to know about a fan's emotions or environment of a football game.
For emerging diseases, advanced malingering, etc, you definitely need real life actual breathing human beings. Otherwise, sure, physicians are just as theoretically replaceable as any information-as-a-service workers might be.
Edit: I didn't realize this was even a controversial statement to make. First, to be clear, when I'm in the hospital and say "doctor" I'm usually thinking non-surgeon, as I am here. Second, you will always need supervision of the machines, just not 1-to-1 supervision. Not every interaction brings up ethical questions, for example. Sometimes someone really just has a viral URI.
Watson isn't a replacement for the physician. It's a physician's extender, a tool that allows a physician to "see" more patients in a day and provide diagnostic results in a more timely manner. The physician will still be the one responsible for the final judgment call.
BTW, it would have been nice if the journalist had bothered to consistently spell the doctor's name correctly (it's Siegel, not Siegal).
Legally, someone has to be responsible for the healthcare provided to the patient. The patient shoulders some of it, true, assuming they're basically competent adults and not severely impaired. (That's why powers of attorney exist, and why physicians are empowered to 'fire' non-compliant patients.) However, the MD is the one who has had the training and who is legally authorized to exercise medical judgement (as opposed to nursing judgement, and no, I'm not going to even try to get into where the arbitrary line between them falls), so the MD is the one who ultimately decides the treatment strategy, which may involve referrals to other MDs, who also must be legally capable of exercising medical judgement.
So, legally, there has to be someone to sanction (threaten with malpractice lawsuits and loss of licensure, among other things) and it's probably better for everyone involved if that person has personal contact with the patient.
You can sue anyone for anything, but unless the computer system is certified by the FDA as a "Medical Device" then you're unlikely to win a judgment. It would be like suing the publisher of a diagnosis textbook that contained incorrect information. That's the reason that clinical decision support vendors typically design their systems to just show possibilities rather than explicitly making a diagnostic recommendation. It's ultimately still up to the doctor and so the doctor bears all of the legal responsibility.
Yes, but the comment I was replying to was within the context of all doctors being replaced by computers, and his point was that it couldn't happen because there needs to be a person to blame. Well, if you get rid of the people (except those creating the computers), then it is still possible to sue the computer owner and/or creator.
I read your statement to assume that entering data into Watson is going to take longer than "entering" that same data into your GP. How did data entry for Jeopardy work?
This would mean that the main issue in adoption of Dr Watson (Hmmm, haven't I heard that name before?) would be the data entry - obviously a pain point in many digital services. If an effective answer is found, and I would suspect that this should soon happen, then the argument becomes more a matter of whether you prefer a subjective opinion from your GP (based on his/her current thought processes / focus) or an objective one from a digital process.
I think I'd prefer the digital process, once the main issues are ironed out.
Don't hold your breath. This has been an area of active work by many of us for decades. We have some systems that work reasonably well for a few limited cases but the general case of clinical data entry won't be solved any time soon. It's a genuinely hard problem that those without experience in the domain tend to underestimate.
Doctors have only a few minutes for most patient encounters. They simply have no extra time to do data entry. You can delegate some of that work to clerical staff or allied health but there are limits to what they can do, and they don't work for free either.
Also incorrect: Watson was fed the question as soon as it was revealed. He didn't, just like the humans, have to wait until Trebek finished reading the question.
This reminds me of some work I did a few years ago in the oil and gas industry. Not nearly as advanced as this, but we essentially read data from sensors in off shore oil rigs and ran it through our system. We had some pretty nifty pattern recognition stuff in place that would learn from previous data in order to become advanced enough to predict imminent sand breaches during drilling.
Given significant warning, it could save the drilling companies lots of money.
Katrina ended up severely damaging the rig we were using for testing and I'm not sure we ever went anywhere with it after that.
We did do some similar work for detecting trace metals in oil sample analysis records in order to recommend maintenance and detect possible future equipment failures.
Anyway, interesting stuff here. I'd love to play around with it.
edit Sorry about the double post. I think I got it removed. Connection was a bit flaky.
It seems it must be a tool in a tool chain, rather than a physician replacement. Tests must be run, and follow-up questions asked. Also, computers take everything literally; they won't say "Well, that's not what your face is telling me," or think the patient is lying, delusional, or misinformed. You can ask the patient for a 1-10 rating of their certainty, but again, there are other problems.
"What are these red dots?" Also seems non-trivial to diagnose. They can be tested, or more questions can be asked of them, but this seems like an awful lot of work to be managed by a computer alone.
It's a good, or even great thing to research, but it seems far from implementation, and that's just from an engineering perspective, imagine the legislative hoops that will need to be leapt through!
I could definitely see it not as a replacement for a physician, but as sort of a safeguard to check if the physician's decision regarding treatment for a particular set of symptoms matches up watson's previously learned data. Watson wouldn't preempt the physician, but rather raise an alarm if something looks out of the ordinary.
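The safeguard described above reduces to a simple check: alarm only when the physician's choice falls outside the model's top suggestions. A minimal sketch, where the treatment names and rankings are invented placeholders:

```python
# Sketch of the "safeguard" idea above: raise an alarm only when the
# physician's decision disagrees with the model's top suggestions.
# Treatment names and the ranking are placeholders, not real output.

def needs_review(physician_choice: str, model_ranking: list, top_k: int = 3) -> bool:
    """Flag the encounter if the physician's chosen treatment is not
    among the model's top_k suggestions for this presentation."""
    return physician_choice not in model_ranking[:top_k]

ranking = ["amoxicillin", "watchful waiting", "azithromycin", "ceftriaxone"]

print(needs_review("watchful waiting", ranking))  # False: agreement, no alarm
print(needs_review("ceftriaxone", ranking))       # True: outside top 3, flag it
```

The appeal of this design is that the model never preempts the physician; it only adds a second opinion that is cheap to ignore when it's wrong.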
> "What are these red dots?" Also seems non-trivial to diagnose. They can be tested, or more questions can be asked of them, but this seems like an awful lot of work to be managed by a computer alone.
If the algorithms discussed in that blog post were used in one system (possibly even with pictures taken from an Android device), and that system were to communicate results to Watson ... I can't think of a reason why that wouldn't work.
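To make the pipeline shape concrete, here is a deliberately crude toy: count red-dominant pixels in an RGB image and flag it for review past a threshold. Real dermatological computer vision is vastly more sophisticated; this only illustrates how an image check could feed a yes/no signal into a larger system. All thresholds are arbitrary:

```python
# Toy "red dots" detector: count red-dominant pixels in an RGB image
# (nested lists of (r, g, b) tuples) and flag the image if the red
# fraction exceeds a threshold. Thresholds are arbitrary; this is an
# illustration of the pipeline shape, not a medical classifier.

def red_fraction(image):
    pixels = [px for row in image for px in row]
    red = sum(1 for (r, g, b) in pixels if r > 150 and r > 2 * max(g, b))
    return red / len(pixels)

def flag_for_review(image, threshold=0.05):
    return red_fraction(image) > threshold

# 2x2 image: one strongly red pixel out of four -> 25% > 5%, flagged.
img = [[(200, 30, 30), (90, 90, 90)],
       [(80, 80, 80), (70, 70, 70)]]
print(flag_for_review(img))  # True
```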
As a physician I doubt this will have much of an impact on the cost of health care. The limiting factor here is not technological but cultural. In the majority of patient encounters a diagnosis has either already been made or is obvious. In principle workers could be trained in the basic clinical skills required in a short period of time and several supervised by a single doctor. There are several reasons that this does not happen to a great extent: (i) Patients prefer to be assessed by a fully qualified doctor. (ii) Medicolegal: who is responsible if a mistake occurs? (iii) Trainee doctors on a per-hour basis are often cheaper than employing other workers to perform the same tasks. (iv) For whatever reason, doctors tend to see patients faster and make decisions more rapidly than other staff, e.g. nurses, performing the same tasks.
This is cool. When I was at IBM in Austin last week, one of the IBM-ers mentioned that one of the first areas they hoped to use Watson was primary care. Amazing. I think I'd rather go to Watson than most GP's.
"Answering the question “What’s wrong with this patient given these set of symptoms and this family history?” turns out to be a natural fit for the computer."
Well, wake me up when the natural-fit question becomes "What’s wrong with this planet given these set of eco-symptoms and this social history?"
I'd really like to see what the answer might be.
A Post Scriptum Edit/Addition Below ;)
The IBM Watson system that recently won on Jeopardy is thoroughly probabilistic and statistical --Norvig (see news.ycombinator.com/item?id=2591154)
In a future far, far away when humanity will have gained experience with millions of planets, stars, and human colonies and space-castaway societies - maybe then - IBM Watson will be able to answer my question...
Submerging in cryogenic sleep... see you later... <yawning>
Is this the beginning of companies using web 2.0 tech to solve 'big problems'? Watson might be the first step towards using the social graph and big data for things other than selling more widgets.
I'm keen to see how medical professionals start using this tool though. Professional pride and the immense amount of inertia present in medicine could hamper adoption. Worse yet, I can only imagine the level of legal liability using a system like this could incur. I'd love to see this adopted and think it could radically reduce costs and increase productivity in the medical field, but I'm going to resign myself to simple cautious optimism.
There isn't too much "Web 2.0" tech behind building something like Watson. Perhaps the use of ontologies and semantic web datasets, but in academic circles at least that's informally referred to as "web 3.0" (that, too, is a bit of a misnomer because Cycorp has been building semantic ontologies for a long, long time).
Doctors virtually all use lightweight iPhone apps or webapps (e.g. Epocrates) to handle drug information, etc. Like the rest of the world, they no longer memorize massive sets of evolving information; they simply walk around with their iPhone/device of choice and look it up.
The providers of such software now and in the future will of course employ teams of lawyers to craft appropriate statements like, "use this as a guide only and not as the basis for a diagnosis or prescription," which of course won't prevent doctors from using it as the basis, but it will keep a healthy sense of legal limbo which exists in many other fields.
Basically, I think this doesn't remove the doctor as the interface person, but it allows fewer doctors to see more patients and to streamline the process for many medical problems with (presumably) better outcomes on average. I'm usually not keen on the impersonal nature of modern medicine but what works, works.
Clinical decision support systems don't really create any additional legal liability. They just present the user with possibilities. The physician is still legally responsible for making a diagnosis, prescribing a course of treatment, and carrying out medical procedures.