The article "Caution: Health IT Might Cause Unexpected Safety Issues" argues that health care information technology initiatives, despite their demonstrated benefits, may also carry significant risks for patients.
Author Cathryn Domrose notes that a growing body of evidence shows electronic health records and automated medication administration systems have reduced patient care errors. She adds, however, that "some studies have found health information technology has no effect on patient safety, and reports have emerged of serious harm caused by health information technology."
She notes that in 2010 an infant in a Chicago-area hospital died of a massive overdose after a pharmacy technician mistakenly ordered an automated machine to prepare the wrong dose of a medication. She also cites an Institute of Medicine report warning that "health IT has the potential to improve patient safety but also can pose danger for patients if not properly designed and used."
Is there something inherently risky about the technologies that more and more health care providers are using, and are those risks tolerable? The answer to the first question is yes -- not because anyone has deliberately sold shoddy technology, but because software is, by its very nature, never perfect in its first version -- and rarely so in subsequent versions.
The unfortunate fact is that the software industry has conditioned us to tolerate a certain level of imperfection in our software. That's not such a big deal when the mistake is a misspelled word or a faulty font in a word processing application. When the miscues involve human health and safety, however, the stakes are much higher and the risks much greater -- not only for patients, but for providers and ultimately insurers.
Thus, the answer to our second question is no. Obviously, even one life lost to misused or badly designed health information technology is not tolerable, but given the very nature of software as we know it, mistakes may be inevitable. Still, we must do all we can to avoid miscues that could compromise health or life.
It is worth noting that the infant death cited above happened at least in part because of human error, so we can't point the finger solely at the technology itself. The failure could have stemmed from a lack of training, or perhaps the software's designers didn't know enough about the people who would eventually use it. Domrose notes that "those creating the technology need to understand the clinicians' workflow patterns and what will help or hinder them."
That's exactly right. The mere fact that a piece of technology "works" as a practical matter in testing doesn't necessarily mean it will function properly in the field. And since we aren't about to use human patients as guinea pigs to do beta testing, it becomes critical that health software and hardware designers work closely with users in the development of the products.
In the end, we shouldn't rely on technology alone to give us the right answers or to administer the proper doses of medication. Minimizing risk means always getting input from trained professionals to confirm or challenge what our technology tells us.
ARA TREMBLY is founder of The Tech Consultant and The Rogue Guru Blog. He can be reached at firstname.lastname@example.org.
May 1, 2012
Copyright © 2012 LRP Publications