This is like saying that self-driving cars won't ever become a thing because someone behind the wheel needs to be there to take the blame. The article cites AI systems that the FDA has already cleared to operate without a physician's validation.
> This is like saying that self-driving cars won't ever become a thing because someone behind the wheel needs to be there to take the blame.
Which is literally the case so far. No manufacturer has shown any willingness to take on the liability of self driving at any scale to date. Waymo has what? 700 cars on the road with the finances and lawyers of Google backing it.
Let me know when the bean counters sign off on fleets in the millions of vehicles.
Yes and I would swear that 1700 of those 2000 must be in Westwood (near UCLA in Los Angeles). I was stopped for a couple minutes waiting for a friend to come out and I counted 7 Waymos driving past me in 60 seconds. Truth be told they seemed to be driving better than the meatbags around them.
You also have Mercedes taking responsibility for their traffic-jam-on-highways autopilot. But yeah, it's those two examples so far (not sure what exactly the state of Tesla is, but I'm not going to spend the time to find out either).
I'm curious how many people would want a second opinion (from a human) if they're presented with a bad finding from a radiology exam and are then told it was fully automated.
I have to admit if my life were on the line I might be that Karen.
Ah, you're right. Something else I'm curious about with these systems is how they'll affect difficulty level. If AI handles the majority of easy cases, and radiologists are already at capacity, will they crack if the only cases they evaluate are now moderately to extraordinarily difficult?
Let's look at mammography, since that is one of the easier imaging exams to evaluate. Studies have shown that AI can successfully identify more than 50% of cases as "normal" that do not require a human to view the case. If a group started using that, the number of interpreted cases would drop by half, while the proportion of abnormal cases among those still read would roughly double.
Generalizing to CT of the abdomen and pelvis and other studies, assuming AI can identify a subpopulation of normal scans that do not have to be seen by a radiologist, the volume of work will decline. However, the percentage of complicated cases will go up. Easy, normal cases will not be supplementing radiologists' income the way they have in the past.
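A rough back-of-the-envelope sketch of that case-mix shift, with every number made up purely for illustration (1,000 screening exams, a 1% abnormal rate, and an AI that confidently rules out half the volume as normal):

```python
# Hypothetical illustration of AI triage of normal screening exams.
# Every number below is an assumption chosen only to show the arithmetic.
total_cases = 1000           # screening exams per period
abnormal_rate = 0.01         # fraction with a real finding
ai_normal_fraction = 0.50    # fraction the AI confidently calls normal

abnormal = total_cases * abnormal_rate                        # 10 cases
read_by_radiologist = total_cases * (1 - ai_normal_fraction)  # 500 cases

print(f"Cases read: {total_cases} -> {read_by_radiologist:.0f}")
print(f"Abnormal share per case read: "
      f"{abnormal / total_cases:.1%} -> {abnormal / read_by_radiologist:.1%}")
```

Reading volume halves while the share of cases that actually carry a finding doubles, which is the harder case mix described above (assuming the AI's "normal" calls really are reliable).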
Of course, all this depends upon who owns the AI identifying normal studies. Certainly, hospitals or even PACS companies would love to own that and generate the income from interpreting the normal studies. AI software has been slow to be adopted, largely because cases still have to be seen by a radiologist and the malpractice issue has not been resolved. Expect rapid changes in the field once malpractice solutions exist.
From my experience the best person to read these images is the medical imaging expert. The doctor who treats the underlying issue is qualified but it's not their core competence. They'll check of course but I don't think they generally have a strong basis to override the imaging expert.
If it's something serious enough a patient getting bad news will probably want a second opinion no matter who gave them the first one.
I'm willing to bet everyone here has a relative or friend who at some point got a false negative from a doctor, just like drivers who have caused accidents. The core problem is how to go about centralizing liability, or not.
But since we don't know where those false negatives are, we want radiologists.
I remember a funny question that my non-technical colleagues asked me during the presentation of some ML predictions. They asked me, “How wrong is this prediction?” And I replied that if I knew, I would have made the prediction correct. Errors are estimated on a test data set, either overall or broken down by groups.
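For what it's worth, a minimal sketch of what "estimated on a test data set, either overall or broken down by groups" looks like in practice (the column and group names here are hypothetical):

```python
import pandas as pd

# Held-out test set with the model's predictions already attached.
# Column names ("y_true", "y_pred", "site") are made up for this sketch.
test = pd.DataFrame({
    "y_true": [1, 0, 0, 1, 0, 1, 0, 0],
    "y_pred": [1, 0, 1, 1, 0, 0, 0, 0],
    "site":   ["A", "A", "A", "B", "B", "B", "B", "A"],
})

test["error"] = (test["y_true"] != test["y_pred"]).astype(int)

print(f"Overall error rate: {test['error'].mean():.1%}")
print(test.groupby("site")["error"].mean())   # error rate per group
```

That tells you how often the model is wrong overall, or for a given subgroup, but never whether this particular prediction is wrong; that is exactly the question the colleagues were asking.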
Technological advances have so far supported medical professionals rather than substituted for them: they have allowed medical professionals to do more, and to do it better.
That's horrific. You pay insurance to have ChatGPT make the diagnosis. But you still need to pay out of pocket anyway. Because of that, I am 100% confident this will become reality. It is too good to pass up.
Early intervention is generally significantly cheaper, so insurers have an interest in doing sufficiently good diagnosis to avoid unnecessary late and costly interventions.
I think a problem here is the sycophantic nature of these models. If I'm a hypochondriac with some new-onset symptoms, and I prompt an LLM about what I'm feeling and what I suspect, I worry it'll positively reinforce the diagnosis I'm seeking.
I mean, we already have deductibles and out-of-pocket maximums. If anything, this kind of policy could align with that, because it's prophylactic: this way we can make sure we extract the maximum amount from you before care kicks in. Yeah, it tracks.
It sounds fairly reasonable to me to have to pay to get a second opinion for a negative finding on a screening. (That's off-axis from whether an AI should be able to provide the initial negative finding.)
If we don't allow this, I think we're more likely to find that the initial screening will be denied as not medically indicated than we are to find insurance companies covering two screenings when the first is negative. And I think we're better off with the increased routine screenings for a lot of conditions.
The FDA can clear whatever they want. A malpractice lawyer WILL sue and WILL win whenever an AI mistake slips through and no human was in the loop to fix the issue.
It's the same as saying we can save time and money if we just don't wash our hands when cooking food. Sure, it's true. But someone WILL get sick, and we WILL get in trouble for it.
What's the difference in the lawsuit scenario if a doctor messes up? If the AI has the same or a better error rate than a human, then insurance for it should be cheaper. If there are no regulatory blocks, then I don't see how it doesn't ultimately just become a cost comparison.
> What's the difference in the lawsuit scenario if a doctor messes up?
Scale. Doctors and taxi drivers represent several points of limited liability, whereas an AI would be treating (and thus liable for) all patients. If a hospital treats one hundred patients with ten doctors, and one doctor is negligent, then his patients might sue him; some patients seeing other doctors might sue the hospital if they see his hiring as indicative of broader institutional neglect, but they’d have to prove this in a lawsuit. If this happened with a software-based classifier being used at every major hospital, you’re talking about a class action lawsuit including every possible person who was ever misdiagnosed by the software; it’s a much more obvious candidate for a class action because the software company has more money and it was the same thing happening every time, whereas a doctor’s neglect or incompetence is not necessarily indicative of broader neglect or incompetence at an institutional level.
> If there are no regulatory blocks, then I don't see how it doesn't ultimately just become a cost comparison.
To make a fair comparison you’d have to look at how many more people are getting successful interventions due to the AI decreasing the cost of diagnosis.
> What's the difference in the lawsuit scenario if a doctor messes up? If the AI has the same or a better error rate than a human, then insurance for it should be cheaper
The doctor's malpractice insurance kicks in, but realistically you become uninsurable after that.
Yeah, but at some point the technology will be sufficient and it will be cheaper to pay the rare $2 million malpractice suit than a team of $500,000/yr radiologists.
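Purely illustrative arithmetic for that trade-off; the salary and settlement figures come from the comment above, while the team size, AI licensing cost, and lawsuit frequency are assumptions:

```python
# Back-of-the-envelope cost comparison. Only the $500k salary and $2M
# settlement come from the comment above; everything else is assumed.
radiologists = 10
salary = 500_000                      # per radiologist, per year
human_cost = radiologists * salary    # $5,000,000/yr

ai_license = 1_000_000                # assumed annual cost of the AI system
suits_per_year = 1                    # assumed successful malpractice suits
settlement = 2_000_000
ai_cost = ai_license + suits_per_year * settlement   # $3,000,000/yr

print(f"Human team: ${human_cost:,}/yr  vs  AI + suits: ${ai_cost:,}/yr")
```

Under those assumptions the AI route comes out cheaper, but the conclusion flips quickly if the error rate, the settlement sizes, or the class-action exposure discussed earlier in the thread turn out worse than assumed.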
This is essentially what's happened with airliners.
Planes can land themselves with zero human intervention in all kinds of weather conditions and operating environments. In fact, there was a documentary where the plane landed so precisely that you could hear the tires hitting the runway centerline as it touched down and then taxied.
Yet we STILL have pilots as a "last line of defense" in case something goes wrong.
No: planes cannot "land themselves with zero human intervention" (...). A CAT III autoland on commercial airliners requires a ton of manual system setup, plus certificated aircraft and runways, in order to "land themselves" [0][1].
I'm not fully up to speed on the Autonomi / Garmin Autoland implementation found today on Cirrus and other aircraft -- but it's not meant for "everyday" landings.
Not only that, but they are even less capable of taking off on their own (see the work done by Airbus' ATTOL project [0] for some of the more recent successes).
So I'm not sure what "planes can land on their own" gets us anyway, even if the autopilot on modern airliners can do an awful lot on its own (including following flight plans in ways that are more advanced than before).
The Garmin Autoland basically announces "my pilot is incapacitated and the plane is going to land itself at <insert a nearby runway>" without asking for landing clearance (which is very cool in and of itself but nowhere near what anyone would consider autonomous).
Taking off on their own is one thing. Being able to properly handle a high-speed abort is another, given that is one of the most dangerous emergency procedures in aviation.
Having flown military jets . . . I'm thankful I only ever had to high-speed abort in the simulator. It's sporty, even with a tailhook and long-field arresting gear. The nightmare scenario was a dual high-speed abort during a formation takeoff. First one to the arresting gear loses, and has to pass it up for the one behind.
There's no other regime of flight where you're asking the aircraft to go from "I want to do this" to "I want to do the exact opposite of that" in a matter of seconds, and the physics is not in your favor.
How's that not autonomous?
The landing is fully automated.
The clearance/talking isn't, but we know that's about the easiest part to automate; it's just that the incentives aren't quite there.
It's not autonomous because it is rote automation.
It does not have logic to deal with unforeseen situations (with some exceptions for handling collision-avoidance advisories). Automating ATC, clearance, etc. is also not currently realistic (let alone "the easiest part"), because ATC doesn't know an airliner's constraints in terms of fuel capacity, company procedures for the aircraft, and so on, so it can't just remotely instruct it to "fly this route / hold for this long / etc".
Heck, even the current autolands need the pilot to control the aircraft when the speed drops low enough that the rudder is no longer effective because the nose gear is usually not autopilot-controllable (which is a TIL for me). So that means the aircraft can't vacate the runway, let alone taxi to the gate.
I think airliners and modern autopilot and flight computers are amazing systems but they are just not "autonomous" by any stretch.
Edit: oh, sorry, maybe you were only asking about the Garmin Autoland not being autonomous, not airliner autoland. Most of this still applies, though.
There's still a human in the loop with Garmin Autoland -- someone has to press the button. If you're flying solo and become incapacitated, the plane isn't going to land itself.
One difference there would be that the cost of the pilots is tiny compared to everything else that goes into a flight, but I would bet the cost of the doctor is a bigger percentage of the cost of getting an X-ray.
They have settled out of court in every single case. None has gone to trial. This suggests that the company is afraid not only of the amount of damages that could be awarded by a jury, but also legal precedent that holds them or other manufacturers liable for injuries caused by FSD failures.
At the end of the day, there's a decision that needs to be made, and decisions have consequences. And in our current society, there is only one way we know of to make sure that decision is made with sufficient humanity: by making a human responsible for making it.
Medicine does not work like traffic. There is no reason for a human to care whether the other car is being driven by a machine.
Medicine is existential. The job of a doctor is not to look at data, give a diagnosis and leave. A crucial function of practicing doctors is communication and human interaction with their patients.
When your life is on the line (and frankly, even if it isn't), you do not want to talk to an LLM. At minimum you expect that another human can explain to you what is wrong with you and what options there are for you.
There's some sort of category error here. Not every doctor is that type of doctor. A radiologist could be a remote interpretation service staffed by humans or by AI, just as sending off blood for a blood test is done in a laboratory.
> There is no reason for a human to care whether the other car is being driven by a machine.
What? If I have to share the road with that car and I don't trust the machine or the software running it, I absolutely do care, since its mistakes are quite capable of killing me.
(Yes, I can die in other accidents too. But saying "there's no reason for me to care if the cars around me are filled with people sleeping while FSD tries to solve driving" is not accurate.)
You know, for most humans, empathy is a thing; all the more so when facing known or suspected health issues. Good on those who have transcended that need, I guess.