Sunday, May 6, 2012

Sociotechnical systems and EHR implementations


One of my students posted this in a recent course discussion online:

"Postscript: An EHR alone can't improve an organization.  It's just technology.  Engaged, motivated people with the right tools to do their jobs (possibly including an EHR) have the power to improve an organization."

Usually when an instructor reveals their position on a posted question, it kills the thread, but this time it was worth noting that governance and people are probably the two most important success factors in EHR implementations. 

We often forget that the focus of HIT is people, whether they are patients or clinical end-users, and that an organization’s real measure is the quality and motivation of its employees. The EHR is just a tool to help the organization meet its business and clinical objectives.

In any complex sociotechnical system there are two areas that need to be addressed - the social and the technical. This implies that the people need as much attention as the EHR, something that I have seen many organizations forget in their haste to put in “the best” EHR system.

Of course, if the leadership in an organization is able to understand the implications of HIT implementations and use their knowledge to motivate their employees to manage change appropriately, then the chances of success are greater. 

Wednesday, March 14, 2012

To test or not to test...

I was asked to write a guest blog at Trends in Graduate Education and Instruction about utilizing tests in online courses. Here's what I had to say:


As a student, I never particularly liked tests. But as an instructor, I’ve warmed to them, especially those that enhance the learning experience and offer value to the course I’m teaching.

I now use tests in my course for two reasons. Firstly, they allow students to check whether they have met the learning objectives I have set for them, and they validate their learning. A good test allows students to examine the concepts they have been exposed to during the course, rather than rely on rote learning.

Sakai (the learning management software my department uses) has made my life distinctly easier. I can upload a test that is timed, gives immediate feedback (so students don’t have to wait for their results), displays questions randomly or in specific blocks (which improves test security), and automatically exports the scores to the gradebook (so I don’t have to).

I use the instant feedback feature on Sakai to not only explain why the student picked the wrong answer (or provide positive reinforcement if they made the right selection), but also to explain why I developed the question, and why the concepts associated with the question are important to their learning. I’ve found that explaining the raison d’ĂȘtre for the question also preempts any concerns that students may have about the validity of the question itself.
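Out of curiosity, here is a rough sketch of what this kind of question looks like under the hood. This is not Sakai's actual data model or API - just a minimal, hypothetical Python sketch of a multiple-choice question that carries per-choice feedback plus a rationale, gets delivered in random order, and produces a score that could be pushed to a gradebook:

```python
import random
from dataclasses import dataclass, field

@dataclass
class Choice:
    text: str
    correct: bool
    feedback: str          # shown to the student immediately after answering

@dataclass
class Question:
    prompt: str
    rationale: str         # why the question was written and why the concept matters
    choices: list[Choice] = field(default_factory=list)

    def grade(self, selected: int) -> tuple[bool, str]:
        """Return (is_correct, message) so feedback can be shown right away."""
        choice = self.choices[selected]
        return choice.correct, f"{choice.feedback}\nWhy this question: {self.rationale}"

def deliver(questions: list[Question], shuffle: bool = True) -> float:
    """Present the questions (optionally in random order) and return a 0-1 score."""
    order = random.sample(questions, len(questions)) if shuffle else list(questions)
    score = 0
    for q in order:
        # A real LMS would collect the student's selection interactively;
        # here we simply pretend the student always picks the first choice.
        correct, message = q.grade(0)
        print(message)
        score += correct
    return score / len(questions)
```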

The second reason why I deploy tests is that they allow me to calibrate the course for future offerings. I can modify course content, emphasizing concepts that students find challenging in the tests, and at the same time shore up potentially simplistic content. It’s a great way to adapt the course to allow maximum learning, and in conjunction with student feedback, an excellent way to incrementally improve the course.

But, as I realized when I took the recertification exam for my internal medicine boards, I still don’t like taking tests. I guess some things never change.

Tuesday, January 31, 2012

Where the world has not been broken up into fragments by narrow domestic walls


I was at a faculty retreat today when I was struck by a thought: the folks sitting at the table around me were extraordinarily diverse - in background, expertise, and interests - but they all identified themselves as informaticists, and not just as physicians, or computer scientists, or computational biologists.


I’m fortunate to work with folks whose backgrounds are incredibly diverse - a librarian with an MBA, for example, or a PhD in computer science who plays the French horn professionally. Some of my colleagues are hard-core quantitative researchers who define evidence-based practice, while others conduct qualitative research that sounds suspiciously like ethnographic studies. And some of them (the bioinformatics folks, for example) describe their extraordinarily complex research projects in terms I don’t understand well, but I find their work incredibly interesting.


Things were never this diversely complicated when I was a full-time internist. My physician colleagues and I, we may look different to outsiders, but fundamentally we are all cut from the same cloth; we worship the same gods. An internal medicine conference is a familiar place where we discuss the best ways to manage disease in a manner comprehensible to all.


But informatics conferences are very different. My informatics colleagues and I, we are all very different, and professionally we do very different things. It seems we spend a lot of time at these conferences trying to figure out exactly what informatics is.  But we all identify ourselves as informaticists, and that’s not a bad thing at all.

Wednesday, December 28, 2011

What's at the end of a rainbow?


EHRs are expensive systems - they are expensive to implement, and even more expensive to maintain and upgrade. I often hear the argument that even though EHRs cost money to put in, they save money in the long term. The primary assumption in this contention is usually that EHRs reduce the cost of healthcare delivery.
Of course, this line of reasoning hinges on our ability to actually measure the return on investment of an EHR with a significant degree of accuracy. As those of you familiar with EHR implementations probably know, this is more challenging than expected, and it is often difficult to make a true estimate of the ROI on health IT.
Studies that have attempted this exercise usually don't have a clear conclusion - a representative statement concludes that "...additional research utilizing broader perspectives and multidisciplinary techniques will be needed before a better understanding of ROI from health IT is achieved..." [1]. There are many reasons why it’s hard to calculate an ROI on EHRs, but one significant issue is that we often don’t know exactly how much it costs to get EHRs up and running.
So how much do EHRs cost to implement? One would think that given the number of implementations nationwide, that information should be reasonably easy to figure out, but surprisingly that’s not the case; the numbers in the clinical informatics literature are scattered and are often somewhat vague. 
We do have some idea of costs: for example, David Bates once estimated that CPOE at Brigham & Women’s Hospital in Boston cost $1.9 million to implement in 1998 [2]. But the Brigham is not a typical hospital; it's a teaching institution with a home-grown EHR that has been developed over decades. The level of internal informatics expertise within the Brigham far exceeds that of other organizations, and perhaps even some EHR vendors.
One widely quoted study on inpatient EHR costs, commissioned by the American Hospital Association and the Federation of American Hospitals, looked at more representative hospitals. The study was conducted by the First Consulting Group [3], which found that it would cost a 500-bed hospital $7.9 million in initial costs and $1.35 million in annual operating costs to implement and maintain an EHR. The study looked at only six organizations, so it's a stretch to extrapolate its figures to the multitude of US hospitals, but we can use the results as a ballpark to do the math and project the expenditure associated with universal EHR use in the US.
The cost is depressingly high, and then there is also the associated cost of setting up and maintaining a robust health information exchange network so that these EHRs can talk to each other. Rainu Kaushal calculated that a national health information network would cost $156 billion in capital investment over 5 years and $48 billion in annual operating costs [4].
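For those who want to see the back-of-the-envelope math, here is a minimal sketch. The per-hospital figures come from the First Consulting Group study cited above and the network figures from Kaushal's estimate; the count of roughly 5,000 US hospitals is a round-number assumption of mine for illustration, and the sketch obviously ignores hospital size, ambulatory practices, and everything else that makes the real number messier:

```python
# Back-of-the-envelope projection of nationwide inpatient EHR costs.
# Per-hospital figures are from the First Consulting Group study cited above;
# the hospital count (~5,000) is an illustrative assumption, not a sourced figure.

INITIAL_COST_PER_HOSPITAL = 7.9e6   # $7.9 million to implement (500-bed hospital)
ANNUAL_COST_PER_HOSPITAL = 1.35e6   # $1.35 million per year to operate
US_HOSPITALS = 5_000                # rough, assumed number of US hospitals

# Kaushal et al.'s estimate for a national health information network [4]
HIE_CAPITAL_5YR = 156e9             # $156 billion capital over 5 years
HIE_ANNUAL = 48e9                   # $48 billion per year to operate

ehr_initial = INITIAL_COST_PER_HOSPITAL * US_HOSPITALS   # ~$39.5 billion
ehr_annual = ANNUAL_COST_PER_HOSPITAL * US_HOSPITALS     # ~$6.75 billion per year

print(f"Inpatient EHR rollout, one-time: ${ehr_initial/1e9:.1f}B")
print(f"Inpatient EHR rollout, per year: ${ehr_annual/1e9:.2f}B")
print(f"National HIE, 5-year capital:    ${HIE_CAPITAL_5YR/1e9:.0f}B")
print(f"National HIE, per year:          ${HIE_ANNUAL/1e9:.0f}B")
```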
But the costs of implementation will be mitigated or even countermanded by the cost savings associated with EHR use, right? 
Many studies that show cost savings from EHRs don't take into account the cost of implementation. And those that do often focus on the direct costs and don't factor in the indirect costs associated with implementation and maintenance. For example, part of the process of implementing EHRs includes monitoring for HIPAA violations, and HIPAA compliance costs money. One report documented that HIPAA compliance in a registry of patients with acute coronary syndrome added more than $8,700 in incremental study costs and $4,500 in yearly follow-up costs [5]. I have yet to see a credible study that demonstrates that the cost of HIPAA compliance was factored into cost-savings calculations.
So why spend the money if we can’t show an ROI? I’ve previously noted that the march of progress is unrelenting, and while that’s a viable argument, I can offer a better line of reasoning than attributing the domination of EHRs to the inexorable advance of evolution. 
We currently focus much of our attention on the actual implementation of EHRs, which is a reasonable attitude since we want to get things right and avoid catastrophes such as adverse post-implementation patient outcomes or a wholesale clinician revolt against the EHR.
But as we become more comfortable with EHR implementations we will surely shift our focus to the extraordinary ability of EHRs to collect vast quantities of data. We can then try to figure out how to mine and analyze the data accumulated by these interconnected systems in an efficient and innovative manner to improve patient care and outcomes. And that’s the real ROI, the proverbial treasure-trove at the end of the rainbow.
[1] Menachemi N, Brooks RG. Reviewing the benefits and costs of electronic health records and associated patient safety technologies. J Med Syst 2006;30(3):159-168.
[2] Bates DW, Leape LL, Cullen DJ, Laird N, Petersen LA, Teich JM, Burdick E, Hickey M, Kleefield S, Shea B, Vander Vliet M, Seger DL. Effect of computerized physician order entry and a team intervention on prevention of serious medication errors. J Am Med Assoc 1998;280(15):1311-1316.
[3] First Consulting Group. Computerized Physician Order Entry: Costs, Benefits and Challenges. 2003.
[4] Kaushal R, et al. The costs of a national health information network. Ann Intern Med 2005;143:165-173.
[5] Armstrong D, et al. Potential impact of the HIPAA privacy rule on data collection in a registry of patients with acute coronary syndrome. Arch Intern Med 2005;165(10):1125-1129.

Thursday, December 8, 2011

One standard to rule them all


I’ve been hearing a lot lately about creating a universal standard for EHRs. This standard would specify database design, decision support, and even interface and usability parameters. There is much to be said for a uniform EHR standard: physicians wouldn't have to learn a different system when they see patients at different hospitals, IT staff would find it easier to modify EHRs, and a uniform standard might allow us to better measure quality of care and patient safety. So uniformity is good, right?
If we do have a universal EHR standard, what will differentiate one EHR from another except pricing? How will organizations distinguish between systems when it's time to select one? I’ve read online that implementing a universal standard will hamper innovation and stunt competitiveness. I'm not sure I buy the argument, but some suggest that setting a standard could decimate the EHR industry.
Another argument that I have heard recently is that mandating a uniform EHR standard is the only way to achieve universal interoperability. But is that necessarily true?
Verizon phones are on a CDMA network and AT&T phones are on a GSM network. The two networks are incompatible (I can't use a CDMA phone on a GSM network, and vice versa), but the end product - the flow of a digital voice signal from one caller to the other - is universally compatible: a Verizon customer can call an AT&T customer without either of them caring about the underlying technology.
The key, I believe, is to think of standardization and interoperability as two separate issues.
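To make the distinction concrete, here is a minimal sketch (the data formats and field names are entirely made up): two EHRs with very different internal models can still exchange information, as long as they agree on a common interchange message - the informatics equivalent of the voice call that crosses the CDMA/GSM divide.

```python
# Two hypothetical EHRs with completely different internal data models.
# Neither is standardized internally; both can still interoperate by
# agreeing on a shared interchange message (an assumed, simplified format).

def vendor_a_export(record: dict) -> dict:
    """Vendor A stores the name as one string and glucose in mg/dL."""
    return {"patient_name": record["name"], "glucose_mg_dl": record["glucose"]}

def vendor_b_import(message: dict) -> dict:
    """Vendor B stores the name split into parts and glucose in mmol/L."""
    first, last = message["patient_name"].split(" ", 1)
    return {
        "first_name": first,
        "last_name": last,
        "glucose_mmol_l": round(message["glucose_mg_dl"] / 18.0, 1),
    }

# Vendor A's internal record, in its own (non-standard) shape:
a_record = {"name": "Jane Doe", "glucose": 126}

# The interchange message is the only thing the two systems must agree on.
interchange = vendor_a_export(a_record)
b_record = vendor_b_import(interchange)
print(b_record)  # {'first_name': 'Jane', 'last_name': 'Doe', 'glucose_mmol_l': 7.0}
```

The only thing the two vendors have to agree on is the message in the middle, which is a far smaller ask than agreeing on a universal EHR standard.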

Wednesday, November 23, 2011

Is the trend from paper to electronic health records irreversible?

When we moved from horse-drawn carriages to automobiles, we certainly had some naysayers - they said that cars were too fast, too loud, too dangerous, too new, too untested, too modern. But we never went back to horses, despite the complexity of the new technology and the issues associated with the internal combustion engine.

Similarly, I submit that whether we like it or not, the die is cast (I like the Suetonius version better, with Julius Caesar dramatically stating "alea iacta est" before proceeding to cross the Rubicon), for we have  crossed our Rubicon, and in the distance we can hear the relentless march of progress. EHRs will replace paper records, and it's too late to reverse course and close the barn door because the horse has bolted.

Thursday, November 17, 2011

For better or for worse

Even though the advantages of CPOE (computerized physician order entry) have been extensively extolled in the informatics literature, there is still some debate about whether CPOE actually reduces medication errors. I recently re-read Ross Koppel et al.'s paper in JAMA on the role of CPOE in facilitating, as opposed to mitigating, medication errors [1]. Ross and his colleagues found "22 previously unexplored medication-error sources that users report to be facilitated by CPOE", predominantly due to human-computer interface flaws and errors of information.

This conclusion was controversial and fueled much discussion at the time. The paper certainly had some shortcomings (most notably, as David Bates pointed out [2], that Koppel measured perceptions of errors rather than the errors themselves). But it did raise an important line of reasoning: that CPOE probably isn't as fantastic for improving safety as its initial sales pitch proclaimed, and, more importantly, that these systems are complex, operate in dynamic environments, and require us to be cautious and deliberate before jumping to conclusions.

Of course, there have always been medication errors, even before EHRs and CPOE. The challenge is to show how CPOE has influenced these errors, especially since we only started measuring them extensively after EHRs came along. To complicate matters further, workflows vary from one institution to another, which can make it difficult to discern whether an error is due to the CPOE system or to the workflow of that particular organization. It has been six years since the Koppel paper was published, and we still haven't formulated a definitive answer.

One of the selling points of CPOE is that it adds a level of safety, allowing errors to be discovered and fixed before the patient is harmed. But this often adds extra documentation for the clinician and, in today's point-and-click world, makes using the EHR more tedious. For example, I know of one institution that requires RNs to document any CPOE discrepancies. One of the RNs, somewhat frustrated by the additional documentation load, said, "I bet you Florence Nightingale wouldn't have liked to spend most of her time clicking away at a computer screen instead of taking care of people."

From what I have read about Florence Nightingale, the observation is probably accurate.


[1] Koppel R, Metlay J, Cohen A, Abaluck B, Localio AR, Kimmel SE, et al. Role of computerized physician order entry systems in facilitating medication errors. J Am Med Assoc 2005;293(10):1197-1203.
[2] Bates DW. Computerized physician order entry and medication errors: finding a balance. J Biomed Inform 2005;38(4):259-261.