Cliff notes--
New EMR/ordering system. New physician. New nurse. New pharmacy dispensing system. The physician couldn't order 160 mg of Bactrim because per-kg dosing was required. They mistakenly ordered 160 "mg/kg" (40x the intended dose of 4 mg/kg) on the order screen. The UX on the screen was partly to blame-- choosing between mg and mg/kg apparently came down to just a dropdown. With the EMR ordering systems I've used, this mistake is virtually impossible to make, since weight-based dosing pops up a completely different window, calculates the dose, and then returns to the original order screen showing the total dose.
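To see how cheap the missing guard is, here's a minimal sketch of a unit-aware order check. The drug name, weight, and per-kg ceiling are illustrative assumptions, not clinical data:

```python
# Hypothetical per-kg plausibility ceiling (mg/kg) -- a made-up value
# standing in for what a formulary would supply.
MAX_MG_PER_KG = {"trimethoprim": 6.0}

def total_dose_mg(value, unit, weight_kg):
    """Normalize an entered dose to total mg, whatever the unit."""
    if unit == "mg":
        return value
    if unit == "mg/kg":
        return value * weight_kg
    raise ValueError(f"unknown unit: {unit!r}")

def check_order(drug, value, unit, weight_kg):
    """Return (total_mg, warnings); a hard stop would block instead of warn."""
    total = total_dose_mg(value, unit, weight_kg)
    warnings = []
    ceiling = MAX_MG_PER_KG.get(drug)
    if ceiling is not None and total / weight_kg > ceiling:
        warnings.append(
            f"{total / weight_kg:.1f} mg/kg exceeds {ceiling} mg/kg ceiling"
        )
    return total, warnings

# The intended order: 160 mg absolute -- no warning.
ok_total, ok_warns = check_order("trimethoprim", 160, "mg", 38.6)
# The mistake: 160 with the dropdown left on mg/kg -- ~6176 mg total,
# and the ceiling warning fires.
bad_total, bad_warns = check_order("trimethoprim", 160, "mg/kg", 38.6)
```

The point of the sketch is just that whatever the dropdown says, both interpretations normalize to a single total that can be sanity-checked before the order ever reaches a robot.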
That order went to a pharmacy robot, which diligently counted out 40 pills, put them in baggies, and sent them to the nurse. The nurse thought it was strange but ultimately trusted the dispensing system, which said everything was correct.
The medication error was noticed after the kiddo felt whole-body tingling. Poison control was called, but it didn't seem like they were able to give clear treatment advice. A "rapid response" was called; they came and evaluated the patient, and he was left in a non-ICU room. Several hours later he had a seizure. He recovered from the seizure and was then monitored in the ICU.
The problem, of course, is that elsewhere in the chain there was in fact too much checking and too little trust. In this case, it started with not trusting the physician to enter a known dose and requiring an unnecessary conversion, which kicked off the chain of events. This was immediately followed by the problem of alarm fatigue, where practically no set of medication orders can be entered without triggering a slew of useless automated warnings.
The problem of having to deal with too many alarms is not unique to the hospital system mentioned in this article. I discovered the other day that at least one model of ventilator has an alarm that sounds when any object sits in front of the display screen. There was a stethoscope dangling in front of a corner of that screen where nothing was displayed, and an alarm went off with approximately the same urgency as one that would sound if the ventilator were about to blow out a patient's lungs. The same alarm that goes off when a patient's heart rate goes from 60 to 250 bpm sounds when it goes from 99 to 101 bpm. The pharmacist who was supposed to be checking my orders for sanity once paged me out of a patient room because he couldn't find the URL for the hospital's policy on titrating a particular medication, a document issued by the pharmacy. Most people would agree that it's insane to text and drive on the highway, and yet this is essentially what's being expected of every physician in every hospital while they're making major medical decisions.
If you read the article you will find that the amount of blind trust, as you call it, was created by the system saving the nurse's ass a fuckton of times.
Makes perfect sense to trust it at that point.
I am more concerned that any medical system allows such a shitty UI, or allows a potentially fatal dose to go through the system at all.
And to think that many are led to believe that the medical system is filled with "professionals" who know what they're doing.
I see enough people blindly trusting the "experts", because they must be right, because they were trained by a university/college for many years.
Maybe I'm ranting, but I'm sick of blind faith in a system that has been demonstrated, more often than not, to be broken in various ways.
Medication errors are common enough that they happen on a regular basis in any reasonably sized hospital. They happen both with and without electronic systems.
The majority of errors are caught — by the prescriber, by the pharmacist, or by the nurse administering — but a few fall through the cracks. Order-of-magnitude errors, where the dose is 10^n times the intended dose, are among the more common errors.
We have a lot of safeguards — for example, packaging medications in dosages that are likely to be safe for a single dose — but there are also some factors that make errors more likely, such as paediatrics (who need smaller doses), geriatrics (who often have many different medications which can interact), and critical care (where things move fast, and big doses might be needed).
I'm a student doctor, and hopefully you believe me that medicine is hard. Electronic systems might help with some of the hard bits, but they're often a hindrance, or pose their own hidden dangers. As a software engineer I know that a lot of medical software is far from fit for purpose.
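Those 10^n errors are at least mechanically detectable in principle. A toy sketch, where the "typical" dose would come from a formulary (the values here are made up for illustration):

```python
import math

def magnitude_gap(ordered_mg, typical_mg):
    """Power-of-ten gap between an ordered dose and a typical dose.
    0 means same order of magnitude; positive means over, negative under."""
    return round(math.log10(ordered_mg / typical_mg))

print(magnitude_gap(160, 160))    # same order of magnitude: 0
print(magnitude_gap(16, 160))     # a dropped zero: -1
print(magnitude_gap(6160, 160))   # a ~38x overdose registers as +2
```

Anything with a nonzero gap is exactly the class of error a pharmacist or a hard stop should be forced to look at.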
Order of magnitude errors aren't that hard, it is improbable that a hospital will need to prescribe enough drugs for a small horse.
Doctors seem to run too much on autopilot, they leave discretion to laboratories who give recommended ranges for various hormones/vitamins/chemicals in the blood.
I won't disagree that medicine is hard, but this looks like a UX design flaw. The UX should factor in drug-company recommendations for dosage per patient weight and present a graph; there should be some method of compare and contrast so that an error can be recognized.
> Order of magnitude errors aren't that hard, it is improbable that a hospital will need to prescribe enough drugs for a small horse.
Weights in children's hospitals range from 100s of grams to 100s of kilos - that's three orders of magnitude right there.
> Doctors seem to run too much on autopilot, they leave discretion to laboratories who give recommended ranges for various hormones/vitamins/chemicals in the blood.
Both population biochemistry and laboratory assays vary between hospitals and labs. So it's actually really important that we interpret biochemical results in relation to the normal ranges provided to us by the lab. Also, those normal ranges are not just set by the lab without thought, there are pathologists and clinical chemists involved in the process.
I think we think about these numbers a whole lot more than you think we think about them.
Well, university hospitals run lots of "experimental protocols" where physicians need the autonomy and flexibility to prescribe all sorts of medications. Just the curation of drug-drug interaction tables is a huge butt-load of work, and we do have "hard stops" for all sorts of potentially dangerous interactions.
Ultimately, however, you need systems in place (such as pharmacy, nurses, other physicians) to control and regulate these systems. Obviously this system had a major failure, but those layers are why this is the rare exception rather than the rule. And for what it's worth, this article (by Bob Wachter, who basically invented hospital medicine) is part of any major graduate-level healthcare education.
One of the biggest problems I have seen with tech getting into healthcare is that they seem to forget the human interaction portion of healthcare that mostly happens offline. Yes, technology can have an enormous impact, but you can't just "technology" away all the problems in our current healthcare system.
It’s funny you belittle the medical profession considering any rando can make a Github account and apply for software jobs with zero training.
No, the medical system isn’t perfect. It’s made up of humans, shocker. Ignoring the cost (in America), it does pretty well most of the time, and I’d rather blindly trust people who study medicine for 12+ years than someone who would otherwise try to drain my blood and balance my “humours”.
I suspect it's pretty common. I recently corrected a prescription / dosing issue with my 89-year-old grandfather that was largely the result of the people involved not sanity-checking what was going on.
I'm still of the opinion that most people are better off letting doctors and nurses do their job than trying to manage their own health based on stuff they've read on the internet. There's a lot of bad information out there and sorting through it takes practice. You have to enjoy reading medical literature more than blogs.
The specifics: he's in a home hospice care situation as a result of severe aortic stenosis along with some complicating factors. Hospice is designed to provide "last two weeks" care for their patients, and they have a specific drug cocktail for that, but he's been on home hospice for over a year now because his body just isn't done yet, and there is no death-with-dignity law in his state (nor available providers in the neighboring state).
The drug cocktail is an opioid, usually morphine, along with a benzodiazepine, usually Lorazepam (generic Ativan). Taken together, they help relieve anxiety, reduce respiratory distress, and lower blood pressure. That last is important, because the morphine/benzo cocktail is specifically cautioned against in medical literature for elderly patients who are still ambulatory, because it creates a fall risk. They go to stand up to use the bathroom, and there isn't sufficient blood pressure to stay conscious, and they pass out and hit the floor, hard.
And that's exactly what happened to him. The third time, it hospitalized him and, because his speech had been slurred and his consciousness had been altered before the fall, I suspected his medication wasn't right. I went out there and carefully went through everything, and sure 'nuff, that's what it was.
The doctors just prescribed whatever hospice asked for, and hospice just asked for their usual recipe. It took an annoying number of meetings with staff before a younger visiting physician dropped in to one of the meetings and followed up with the literature I was citing. The next morning they started reducing his dosages and he began recovering, including getting his mental faculties back.
He's been back home for a couple of months now, doing well.
Physicians aren't magic. But, they work in a field that's totally alien to most of us here on HN, and trying to navigate the field as a layman can easily lead you into some pretty woo-woo nonsense. Trust, but verify.
Medicine isn't alien if you spend enough years caring about it. The problem is, most people are of the attitude that they shouldn't pay any mind to the science, because the doctors will take care of it. This leads to total dependence on the infallibility of these professionals - and we all know that no professionals are infallible, despite their titles. The main edge docs have over polymathic, knowledge-driven, intelligent folk is first-hand sight, smell, and feel for conditions, from seeing patients day in and day out. Drugs, though? Most everyone can understand what would look sketchy as fuck in terms of dosing. It pays to have some semblance of heuristics about what competent care and pharmaceutical administration look like.
The populace would be far better off if we all stopped feigning lack of interest in the things that truly matter, like medicine/biology. It's the study of us. Sure, delegate decisions to those in it day in, day out - that is diligence. But don't just accept the jargon you hear. Learn what's going on. Be active. Knowledge isn't agnostic to specific fields... especially when it's about your body itself. That is human knowledge. And everybody who is living should be at least half competent in knowing whether 38 pills of an antibiotic makes any goddamn sense. If this nurse had been a knowledge worker first and a robotic nurse second, she probably would have caught the issue. Overspecialization creates idiots following scripts. Meh..
I’m sorry, I usually try not to do this, but this is some seriously dangerous advice.
It took me literally 30 seconds to find the interaction you’re talking about while tapping about on my phone. Being informed about side effects and interactions needs to be the patient’s (or their family’s) job.
The space may be alien but we have people who we are paying to help Sherpa us through it. Ask questions. Ask why. If you don’t understand don’t blindly trust - ask your doctor. Seek second opinions.
I’m just a dev, I have 0 medical training. But I know the FDA publishes side effects and interactions. I always understand these before taking anything. Yeah I don’t understand the gibberish in pharmacology sections of things but I can read a list of effects and ask a doctor about them.
TLDR - if you don’t understand your care, ask. If the provider won’t explain, find another provider who will.
Have had docs prescribe med combos (same doc, same visit) that would have resulted in permanent damage or death if not caught. Luckily we have three people check all my other half’s meds before she takes them. She does an FDA search and checks drug/drug interactions. I do one independently and her father, idk what he does, but he’s actually an oncologist so presumably better than our checks. (Outpatient)
Too many close calls over the years. People make mistakes, it happens, ultimately the patient needs to be responsible for their care or have someone acting in their best interests.
Of course all of this gets into things like medically informed consent and patient goals.
Well, on the outpatient side, the volume of drugs a doctor is prescribing is horrendously outpaced by the number of orders you're handling as a hospitalist. When you are responsible for about a dozen patients and some need your attention NOW, it's not feasible to do a complete drug-drug interaction check. But that's where A) the modern EHR the healthcare system has invested in, with its drug-drug interaction tables, and B) the hospital pharmacists who modify orders when necessary, come in. It's about building a robust system that integrates human systems and enhances them with technology.
> The physician couldn't order 160 mg of Bactrim because per-kg dosing was required.
Not quite. The article suggested that, yes, they could have ordered it in just mg. But they needed to change a dropdown from "mg/kg" to "mg", which is easily overlooked.
> The doctor picked up the phone and called San Francisco’s poison control center. No one at the center had ever heard of an accidental overdose this large—for Septra or any other antibiotic, for that matter—and nothing close had ever been reported in the medical literature. The toxicology expert there told the panicked clinicians that there wasn’t much they could do other than monitor the patient closely.
It's unfortunate (in the "safety rules are written in blood" manner), but now there's a datapoint that says a 38x overdose of Septra may be survivable, and what the effects can be.
This horribly written series of articles should be titled "How technology could not prevent a hospital from giving a patient 38 times his dosage".
- The system's dosage caps were disabled
- The dosage was never double-checked since "technology is so accurate", ignoring that people still make mistakes when using it
- Vast swaths of false alerts are regularly tolerated rather than fixed, resulting in all alerts being ignored
But more than anything already listed... the nurse didn't question giving someone 38 pills beyond "must be diluted", yet the article pushes the focus to how technology led to the error? I just can't see how this title was chosen, beyond clickbait, considering the article includes statistics showing this system has been more reliable than the classic solution.
The point you may have failed to get is that hospitals rely on systems for catching errors (which is hard, because medicine is super hard). The point Wachter attempts to raise is that when we blindly substitute technological systems for human-based processes, we can miss the mark if we're not extra careful. Very important as we think about replacing human effort with that of robots (think Tesla vs. Toyota in how they make cars...).
I would also like to point out that simply fixing alerts isn't necessarily practical. When you're reliant on outside vendors in a highly regulated industry (requiring FDA approval of any and all changes), you can't exactly change on a dime. So while maybe we all in healthcare sneer at device manufacturers, there's not a ton we on the user side can do.
"The point Wachter attempts to raise is that when we blindly substitute technological systems for human-based processes, we can miss the mark if we're not extra careful."
This isn't limited to our now using technological systems for these steps. Unless you're arguing that the portion of the article covering how the same errors used to happen more often, before the system was put in place, is incorrect.
Fixing ALL false alerts is an impractical goal, but lowering them to something reasonable to work with day to day is not. Epic is a complex medical system, but that doesn't mean every change involves going to the government; a great deal of it can be customized by the local Epic-trained staff you were mandated to have when you bought the product. That doesn't make it easy, but the majority of the problem is within the organization, not the government or the vendor.
Anyway, my overall disagreement with the article isn't that I think the technology is perfect; rather, it's written as if things are worse than they were before because of the technology, even while it admits things are better than before because of the technology. It did little to convince me that technology led the hospital to do anything beyond the same problems it has always had with its verification procedures.
I think the main point of the article is to introduce the potential complexities and pitfalls that come along with the digitization of clinical records and workflows. In my personal experience, engineers who haven't worked within a clinical setting always tend to struggle with how complex healthcare actually is, and with the nuance it takes to successfully develop digital clinical tools.
The other concern is that, unlike Google, FB, or Uber, tech is not the funding priority of a healthcare system. They do spend money on large contracts for fully fledged EHR systems (and a bunch of add-ons), but because it's fundamentally an infrastructure problem and IT for healthcare systems is a cost center, they're never going to be able to replicate a techno-centric workforce. I say this as someone who is intimately involved in developing technology for the healthcare system. But you do what you can; clinical informaticists (which is developing as its own subspecialty) try to reduce the load and do take on the continuous development of EHR systems. Asking for a level of polish that is unachievable isn't going to promote optimism.
> But more than anything already listed... the nurse didn't question giving someone 38 pills beyond "must be diluted", yet the article pushes the focus to how technology led to the error?
The nurse did question it further. I feel as though the articles addressed this and your subsequent point pretty adequately.
> Since the Paleolithic Era, we humans have concocted explanations for stuff we don’t quite understand: tides, seasons, gravity, death. The idea that the Septra might have been diluted was the first of many rationalizations that Levitt would formulate to explain the unusual dose and to justify her decision to administer it. At first glance it might seem crazy for her to have done so, but the decisions she made that night were entirely consistent with patterns of error seen in medicine and other complex industries.
> What is new for medicine is the degree to which very expensive, state-of-the-art technology designed to prevent human mistakes not only helped give rise to the Septra error, but also failed to stop it, despite functioning exactly as it was programmed.
> The human lapses that occurred after the computerized ordering system and pill-dispensing robots did their jobs perfectly well is a textbook case of English psychologist James Reason’s “Swiss cheese model” of error. Reason’s model holds that all complex organizations harbor many “latent errors,” unsafe conditions that are, in essence, mistakes waiting to happen. They’re like a forest carpeted with dry underbrush, just waiting for a match or a lightning strike.
> Still, there are legions of errors every day in complex organizations that don’t lead to major accidents. Why? Reason found that these organizations have built-in protections that block glitches from causing nuclear meltdowns, or plane crashes, or train derailments. Unfortunately, all these protective layers have holes, which he likened to the holes in slices of Swiss cheese.
> On most days, errors are caught in time, much as you remember to grab your house keys right before you lock yourself out. Those errors that evade the first layer of protection are caught by the second. Or the third. When a terrible “organizational accident” occurs — say, a space shuttle crash or a September 11–like intelligence breakdown — post hoc analysis virtually always reveals that the root cause was the failure of multiple layers, a grim yet perfect alignment of the holes in the metaphorical slices of Swiss cheese. Reason’s model reminds us that most errors are caused by good, competent people who are trying to do the right thing, and that bolstering the system — shrinking the holes in the Swiss cheese or adding overlapping layers — is generally far more productive than trying to purge the system of human error, an impossibility.
I fail to see how the quote that she didn't question it beyond thinking it must be diluted is an example of how she questioned giving someone 38 pills beyond "must be diluted".
Is any other line here actually relevant to technology, or just to how errors end up occurring even when you have resilient systems and checks built in? I.e., what does any of that have to do with technology itself?
> I fail to see how the quote that she didn't question it beyond thinking it must be diluted is an example of how she questioned giving someone 38 pills beyond "must be diluted".
From the above:
> The idea that the Septra might have been diluted was the first of many rationalizations that Levitt would formulate to explain the unusual dose and to justify her decision to administer it.
If you read the rest of the article, and not just what I've selected, there are multiple points in which the author quotes the nurse about her decision making process.
I read all of the parts before making my first comment in the thread. I think you and I must disagree on what I meant by questioning the large dosage. I don't mean it as a thought exercise involving just the nurse trying to figure out why the dosage is so irregular, but rather the physical questioning process of validating the odd dosage with the ordering doctor before administering it.
I pointed it out not to say it's all the singular nurse's fault, but rather that it's something glossed over in an article written against a technology system that had its safeguards purposefully disabled. If anything, the human error presented in the article is many times greater than the technology error. Reading the headline, I expected the pill robot to have some error relating to batch size, or some code error converting units wrong, not a single confusing UI element as part of a string of overall hospital process and validation errors.
The way I see it is even if the machinations of the process worked 100% in accordance to their specification or programming, in the end what matters is how people will respond to the system. If a system correctly shows a warning when a bad prescription is ordered, it is still faulty if the overall system encourages the user to ignore warnings.
In this case, the system must deal with human error. The system must factor in the fact that the people who interact with it are UI/UX laymen, and most likely overworked and sleep-deprived. Perhaps it should not offer the option to override warnings. Perhaps warnings should be expressed in such a way that the user doesn't learn to ignore them in order to do their job.
Perhaps fault falls on Epic for allowing errors to be suppressed, or for having a UI that allows prescriptions to be written with ambiguous units, or on the hospital's IT staff for suppressing the warnings. The argument can be made that the hospital's upper management can be blamed for hiring and scheduling practices that result in staff overworking or having to ignore software warnings.
All I know is that blaming human error has been tried for decades and results haven't changed. Maybe the overall system, top to bottom, needs to be reworked.
Quite frankly, as a software engineer who works next to a couple of medical companies, I won't let automation near me in medical contexts. I've had more than one coworker get hired by a medical device company and peace out a couple of months in after seeing their codebase. Stuff like panics on any error that then set morphine pumps to full-on. Not the worst way to die, but not the way I want to go...
> I won't let automation near me in medical contexts
Is that even possible? Most of the machines are run by robots these days. Even simple devices (e.g. an ECG monitor) have so much automation built into them.
I think it's all about how you automate in healthcare. I'm working on a project to automate the way we collect MRIs, but we're only focusing on the "dumb" parts of the process that don't put patients at risk. I've seen a lot of tech-in-health efforts over the past 3-4 years trying to automate everything all at once and forgetting the nuance that goes into healthcare.
Some of this story has a real Therac vibe. Bad UI leading to input errors, an overload of undifferentiated and usually unimportant alerts creating a habit of overriding among operators, operators trusting the system in the face of clearly wrong-looking outcomes (including flesh which was warm to the touch, in the case of Therac-25 http://www.cse.msu.edu/~cse470/Public/Handouts/Therac/Therac... ).
There's a book from the 70s called Technology and Social Shock that my father, a research scientist at NIST (one of the US national labs), gifted to me recently. Given this crazy Internet rabbit hole that I've made the big thing in my life, I think the hidden message is to be careful and to take the time to consider the negative consequences of the "cool stuff" we're working on and giving out to the world on this new platform of the Web.
There are some really scary stories in there, and I think that it remains a timely exposé of what happens when you aren't careful, given the accelerating rate of change that we're seeing in the development, deployment, and "hands-off" attitude that we're taking broadly across so many different contexts.
Some of the most egregious examples include the use of thalidomide, which ended up being teratogenic (babies without arms and legs), along with the "cool tech!" of X-ray shoe-fitting machines that ended up causing hundreds of thousands of cases of cancer and other diseases, all in the name of advancement (read: profit) by participants. What's really great is that the book was written before the advent of computers, so it provides a very useful perspective on matters that should be deeply concerning to those who are in positions that enable the rapid deployment of widespread technology.
Working at a hospital IT Service Desk, I've found it astounding to see the vast amount of software at play, and sometimes overwhelming to keep track of it all (and especially, whom to track down if a system goes down).
I don't have much experience in the EMR system we use but we have two teams of about 5-6 people each dedicated to working specifically on the software on the clinical and business sides of it. The clinical side, from what I understand, all have backgrounds in nursing so at least when they're doing interface and other upgrades, they have that experience to draw on.
The article touches briefly on it, but the sheer amount of work the nurses and doctors do also has to be a factor. 12-hour or longer days, and very rushed.
It's a poorly designed interface, but the real problem, IMO, is that the nurse didn't double-check with somebody before giving the kid 38 of the same pill. A dose like that just doesn't make sense; she should have known better.
That's quite a dismissive response for a complex issue.
Nurses deal with pills in all kinds of concentrations, particularly in pediatrics, and while I acknowledge that 38.5 pills doesn't pass the sniff test, it isn't absurd that in pediatric medicine you may wind up giving multiple lower-dose pills relative to one full adult dosage (particularly if they're right below the adult dosage, so a dosage was "created" using multiple lower-dosage infant pills).
I think the article is terribly written (all five parts of it), but I cannot see how anyone could have read it and come away with the conclusion that "the nurse did it." There's like a million degrees of nuance here.
The same problems could cause lethal over/under-doses with much more reasonable numbers of pills, i.e. accidentally ordering 2 pills instead of 1.
The fact that 38 pills were ordered shows how crazy the problems can get, but a crazy number of pills like that should also be pretty trivial for a nurse to catch. How do you pour 38 pills down an 85-pound child's throat without considering that the consequence may be death?
Meanwhile that same technology has replaced a reportedly convoluted and error-prone manual process that presumably worked perfectly and never harmed anyone, otherwise surely the authors of the article would have gone into that.
> [...] But even in simplified form, you can see why the old system was hugely error-prone. A study from the pen-and-paper era showed that 1 in 15 hospitalized patients suffered from an adverse drug event, often due to medication errors. A 2010 study (using data collected during the pre-digital era) estimated the yearly cost of medication errors in U.S. hospitals at $21 billion.
> Those of us who worked in this Rube Goldberg system — and witnessed the harms it caused — anxiously awaited the arrival of computers to plug its leaks. [...]
Even before I got to part 2, my immediate reaction was that having two types of inputs, dosage/weight and absolute dosage, would easily lead to confusion and seemed like a really bad idea.
It would seem to me much better to require the person to enter just the absolute dosage and have the computer show the nearest pill-rounded dosage, as well as the effective per-weight dosage that implies.
Additionally, instead of a routine alert, why not have a reactive red highlight/warning while the person is filling out the form? The person would actually see the issue while entering it, instead of just seeing an alert that they reflexively close out.
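A rough sketch of that single-input design, with the tablet strength and the warning threshold as illustrative assumptions rather than clinical values:

```python
TABLET_MG = 160       # hypothetical single tablet strength
WARN_MG_PER_KG = 6.0  # hypothetical per-kg threshold for the inline warning

def derive_display(dose_mg, weight_kg):
    """From one absolute-dose input, derive everything the form displays."""
    tablets = max(1, round(dose_mg / TABLET_MG))
    rounded_mg = tablets * TABLET_MG
    mg_per_kg = rounded_mg / weight_kg
    return {
        "tablets": tablets,
        "rounded_mg": rounded_mg,
        "mg_per_kg": round(mg_per_kg, 2),
        "warn": mg_per_kg > WARN_MG_PER_KG,  # drives a live red highlight
    }

# 160 mg for a 38.6 kg child: 1 tablet, ~4.15 mg/kg, no highlight.
# 6160 mg would show 38 tablets and light up while the field is still
# being typed, not as a dismissible alert afterward.
```

The prescriber sees the pill count and the mg/kg equivalent update as they type, which is exactly the compare-and-contrast the dropdown design never offered.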
The nurse is the only "professional" human in the entire chain who actually handled the obviously anomalous dose of medicine, and she didn't raise a flag.
As more of the pipeline becomes automated and vulnerable to GIGO, the role of those physically administering the medication needs to adapt accordingly. The nurse should have been trained and encouraged to police such errors. If something looks abnormal, it probably is, push the button that summons the MD to verify the physical thing before administering it.
I imagine it won't get any better when the physical administration of all medication also gets automated... like it already is for intravenous systems.
A hacker getting access to such systems would literally be able to control a patient's life --- or the end thereof.
I saw it right away. That said, I agree with the article that the UI is horrible. It (and many other elements of this story) reminds me of interacting with Enterprise software, although in that case people usually aren't in mortal danger because of it.
I probably noticed it more because I saw the first screenshot with a "5" in the same box, and thus "160" looked surprising, but even without that first screenshot I would probably have noticed --- the "mg/kg of trimetho" with a search(?) button doesn't make sense. Why would you want to search for that phrase, and why is it cut off like that? Another sign of "Enterpriseness": the two buttons next to it, presumably to set standard doses, have huge amounts of empty space surrounding them, while the inexplicable "search box" is too small to contain its expected contents.
Of course, it could leave the unit [mg versus mg/kg] box blank [...] but few systems do that because of the large number of additional clicks it would generate
That makes no sense. As any science teacher (or at least the good ones) makes abundantly clear, units are important! I can recall a few incidents[1][2] that occurred because of unit confusion.
Had Lucca noticed it, she could have changed it to “mg” with two clicks
Two clicks? Just looking at it, it's not at all obvious to me how to even change the units in this UI. Do you type "mg of trimetho..." into the "search box"? After thinking about this for a tiny bit, my proposal would be either radio buttons for each unit, with no default, or two text boxes, one labelled "mg" and the other "mg/kg", where editing one instantly updates the other.
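The two-linked-boxes idea above can be sketched in a few lines: editing either field immediately recomputes the other, so the ordering physician always sees both the total and the per-kg figure side by side. The class and field names, and the 40 kg weight, are illustrative assumptions, not from any real system.

```python
# Sketch of two mutually-linked dose fields: editing either one
# instantly updates the other, so both interpretations are always visible.

class DoseFields:
    def __init__(self, weight_kg: float):
        self.weight_kg = weight_kg
        self.mg = 0.0
        self.mg_per_kg = 0.0

    def set_mg(self, value: float) -> None:
        """User typed into the 'mg' box; derive the 'mg/kg' box."""
        self.mg = value
        self.mg_per_kg = value / self.weight_kg

    def set_mg_per_kg(self, value: float) -> None:
        """User typed into the 'mg/kg' box; derive the 'mg' box."""
        self.mg_per_kg = value
        self.mg = value * self.weight_kg

fields = DoseFields(weight_kg=40.0)   # illustrative patient weight
fields.set_mg(160)                    # typing "160" in the mg box...
print(fields.mg_per_kg)               # ...shows 4.0 in the mg/kg box
fields.set_mg_per_kg(160)             # typing "160" in the mg/kg box...
print(fields.mg)                      # ...shows 6400.0 mg, obviously huge
```

Either way the mistake becomes visible as a number, rather than hiding behind a unit label in a dropdown.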
We also needed to address another problem that is not limited to healthcare: overtrust in the technology
Or more generally, a lack of thinking; people who are taught to follow procedures or "best practices" for the "best results" are only going to follow them unquestioningly. It's unfortunate that management above often points to a lack of procedure when it's actually this "overproceduring" that can cause such errors --- the incident where NASA's satellite fell over[3] is one example. No one thought to even take a look to see if they had secured it; everyone was too busy executing lists of instructions. If instead they had just been given a general description of what they needed to accomplish, I'm fairly sure they would have at least checked the mounting before trying to turn it on its side.
I also immediately saw everything in the screenshot, but I would have to join the ranks of the people mentioned who didn't raise their hands.
I'm relaxed right now and it's the weekend, I'm reading an article in downtime, I have precisely 0 other things going on... and I've been reading for the past 20 minutes about a broken system. Of course I'm going to be primed to take the screenshots to bits, and have sufficient mental bandwidth to do so very effectively on the first try.
>> Had Lucca noticed it, she could have changed it to “mg” with two clicks
> Two clicks? Just looking at it, this UI is not obvious at all to me how to even change the units.
Me neither. Perhaps clicking in the search box also opens a dropdown with default values in it? That would account for the two clicks: one to open, one to select.
I was the CTO of a medical software company that kept track of adverse drug events and other hospital incidents. We also made software for Root Cause Analysis projects and the subsequent committee meetings. UCSF was a client at one point.
Hospitals and practitioners are incentivized to never admit they made a mistake. This article is that writ long.
The screen said what would be ordered and did exactly that. The ordering physician did not read the screen. Perhaps the screen could be better but it's not bad. The dosage is bolded.
Someone else reviews it and also doesn't notice the error. I.e. they did a bad job of reviewing.
UCSF staff had turned off notifications and alerts in a very broad manner.
The robot pharmacist gets dragged into this even though it just followed orders, in an article titled "Beware of the Robot Pharmacist". Imagine how long this series would be if the robot had actually made decisions.
At one point a nurse asks the juvenile patient if he thinks 38 pills is too much. In movies, asking a child for advice is the comic low point where we are meant to realize the adult is incompetent. This nurse kept her job.
Hospitals are incredibly political environments, and this article goes out of its way to keep everyone's hands clean. But at the end of the day multiple people made mistakes, and the author just decides to blame everyone's new favorite bogeyman: "technology".
Having seen hundreds of incident reports, I assure you that most hospital issues are caused by people making mistakes and/or not following procedures. And all signs point to that being the case here.
What would have been more helpful is if the author had followed his own conclusion, and centered and titled the article around it:
"Safe organizations relentlessly promote a “stop the line” culture, in which every employee knows that she must speak up — not only when she’s sure that something is wrong, but also when she’s not sure it’s right. Organizations that create such a culture do so by focusing on it relentlessly and seeing it as a central job of leaders. No one should ever have to worry about looking dumb for speaking up, whether she’s questioning a directive from a senior surgeon or an order in the computer."
Perhaps if medical leaders stopped pretending that they always do everything perfectly - i.e. the opposite of this literally CEO-approved article - people further down the ladder would feel they could also be honest.
>>Technology led a hospital to give a patient 38 times his dosage
>>LETS GET RID OF TECHNOLOGY !
----(joke end)----
Seriously, a bad title and a bad article. I was wondering whether the author might be a primitivist or some gradient of one. Look at the Medium article and see the promoted book. His book, "The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine's Computer Age", I assume expresses his skepticism about why he thinks telemedicine is bad or something. Biased against technology much?