Wednesday, December 28, 2011

What's at the end of a rainbow?


EHRs are expensive systems - they are expensive to implement, and even more expensive to maintain and upgrade. I often hear the argument that even though EHRs cost money to put in, they save money in the long term. The primary assumption in this contention is usually that EHRs reduce the cost of delivering healthcare. 
Of course this line of reasoning hinges on our ability to actually measure the return on investment (ROI) of an EHR with a significant degree of accuracy. As those of you familiar with EHR implementations probably know, this is more challenging than expected, and making a true estimate of the ROI on health IT is often a difficult task. 
Studies that have attempted this exercise usually don't reach a clear conclusion - a representative statement concludes that "...additional research utilizing broader perspectives and multidisciplinary techniques will be needed before a better understanding of ROI from health IT is achieved..." [1]. There are many reasons why it’s hard to calculate an ROI on EHRs, but one significant issue is that we often don’t know exactly how much it costs to get EHRs up and running.
So how much do EHRs cost to implement? One would think that, given the number of implementations nationwide, this information would be reasonably easy to find, but surprisingly that’s not the case; the numbers in the clinical informatics literature are scattered and often somewhat vague. 
We do have some idea of costs: for example, David Bates once estimated that CPOE at Brigham & Women’s Hospital in Boston cost $1.9 million to implement in 1998 [2]. But the Brigham is not a typical hospital; it's a teaching institution with a home-grown EHR that has been developed over decades, and its level of internal informatics expertise far exceeds that of most other organizations, and perhaps even some EHR vendors.
One well-quoted study on inpatient EHR costs, commissioned by the American Hospital Association and the Federation of American Hospitals, looked at more representative hospitals. The study was conducted by the First Consulting Group [3], which found that it would cost a 500-bed hospital $7.9 million in initial costs and $1.35 million in annual operating costs to implement and maintain an EHR. The study looked at only six organizations, so it's a stretch to extrapolate these figures to the multitude of US hospitals, but we can use the results as a ballpark figure to do the math on the projected expenditure associated with universal EHR use in the US. 
The cost is depressingly high, and then there is the associated cost of setting up and maintaining a robust health information exchange network so that these EHRs can talk to each other. Rainu Kaushal calculated that a national health information network would cost $156 billion in capital investment over 5 years and $48 billion in annual operating costs [4].
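To get a sense of the scale involved, here is a rough back-of-envelope sketch in Python. The per-hospital figures come from the First Consulting Group report [3] and the network figures from Kaushal [4], but the hospital count, and the assumption that every US hospital looks like the 500-bed example, are mine and are purely illustrative.

```python
# Back-of-envelope projection of nationwide inpatient EHR costs.
# Per-hospital figures are from the First Consulting Group report [3];
# national network figures are from Kaushal et al. [4]; the hospital count
# is an illustrative assumption, not a figure from either study.

per_hospital_initial = 7.9e6   # initial cost for a 500-bed hospital [3]
per_hospital_annual = 1.35e6   # annual operating cost per hospital [3]
assumed_us_hospitals = 5_000   # rough, assumed count of US hospitals

hie_capital_5yr = 156e9        # national HIE capital cost over 5 years [4]
hie_annual = 48e9              # national HIE annual operating cost [4]

ehr_initial_total = per_hospital_initial * assumed_us_hospitals
ehr_annual_total = per_hospital_annual * assumed_us_hospitals

print(f"EHR initial cost, all hospitals: ${ehr_initial_total / 1e9:.1f} billion")
print(f"EHR annual operating cost:       ${ehr_annual_total / 1e9:.2f} billion")
print(f"HIE capital cost over 5 years:   ${hie_capital_5yr / 1e9:.0f} billion")
print(f"HIE annual operating cost:       ${hie_annual / 1e9:.0f} billion")
```

Even with these rough assumptions, the arithmetic lands in the tens of billions of dollars before the health information exchange layer is even considered.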
But the costs of implementation will be offset, or even outweighed, by the cost savings associated with EHR use, right? 
Many studies that show cost savings from EHRs don't take into account the cost of implementation. And those that do often focus on the direct costs and don't factor in the indirect costs associated with implementation and maintenance. For example, part of the process of implementing EHRs includes monitoring for HIPAA violations, and HIPAA compliance costs money. One report documented that HIPAA compliance in an acute coronary syndrome registry led to more than $8,700 in incremental study costs and $4,500 in yearly follow-up costs [5]. I have yet to see a credible study that demonstrates that the cost of HIPAA compliance was factored into cost-savings calculations.
So why spend the money if we can’t show an ROI? I’ve previously noted that the march of progress is unrelenting, and while that’s a viable argument, I can offer a better line of reasoning than attributing the domination of EHRs to the inexorable advance of evolution. 
We currently focus much of our attention on the actual implementation of EHRs, which is reasonable, since we want to get things right and avoid calamities such as adverse post-implementation patient outcomes or a wholesale clinician revolt against the EHR. 
But as we become more comfortable with EHR implementations we will surely shift our focus to the extraordinary ability of EHRs to collect vast quantities of data. We can then try to figure out how to mine and analyze the data accumulated by these interconnected systems in an efficient and innovative manner to improve patient care and outcomes. And that’s the real ROI, the proverbial treasure-trove at the end of the rainbow.
[1] Menachemi N, Brooks RG. Reviewing the benefits and costs of electronic health records and associated patient safety technologies. J Med Syst 2006;30(3):159-68.
[2] Bates DW, Leape LL, Cullen DJ, Laird N, Petersen LA, Teich JM, Burdick E, Hickey M, Kleefield S, Shea B, Vander Vliet M, Seger DL. Effect of computerized physician order entry and a team intervention on prevention of serious medication errors. J Am Med Assoc 1998;280(15):1311-1316.
[3] First Consulting Group. Computerized Physician Order Entry: Costs, Benefits and Challenges. 2003.
[4] Kaushal R, et al. The costs of a national health information network. Ann Intern Med 2005;143(3):165-173.
[5] Armstrong D, et al. Potential impact of the HIPAA privacy rule on data collection in a registry of patients with acute coronary syndrome. Arch Intern Med 2005;165(10):1125-1129.

Thursday, December 8, 2011

One standard to rule them all


I’ve been hearing a lot lately about creating a universal standard for EHRs. This standard would specify database structure, decision support, and even interface and usability parameters. There is much to be said for a uniform EHR standard: physicians wouldn’t have to learn a different system when they see patients in different hospitals, IT staff would find it easier to modify EHRs, and a uniform standard might allow us to better measure quality of care and patient safety. So uniformity is good, right?
But if we do have a universal EHR standard, what will differentiate one EHR from another except pricing? How will organizations distinguish one EHR from another when it’s time to select a system? I’ve read online that implementing a universal standard will hamper innovation and stunt competitiveness. I’m not sure I buy the argument, but some suggest that setting a standard could decimate the EHR industry.
Another argument that I have heard recently is that mandating a uniform EHR standard is the only way to achieve universal interoperability. But is that necessarily true?
Verizon phones run on a CDMA network and AT&T phones run on a GSM network. The two networks are incompatible (I can't use a CDMA phone on a GSM network, and vice versa), but the end product (the flow of a digital voice signal) is universally compatible.
The key, I believe, is to think of standardization and interoperability as two separate issues.
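To make that distinction concrete, here is a minimal, entirely hypothetical Python sketch; it is not based on any real EHR's data model or on any existing exchange standard. Two systems store the same medication order in completely different internal formats, yet both can translate into a shared exchange message, so they interoperate without being internally identical.

```python
# Two hypothetical EHRs with different internal representations of the same order.
# Neither is "standardized" internally, but both can speak a common exchange format,
# which is all that interoperability requires.

ehr_a_record = {"drug": "lisinopril", "dose_mg": 10, "route": "PO", "freq": "daily"}
ehr_b_record = ("lisinopril 10 mg", "oral", "once a day")   # flat, free-text style

def ehr_a_to_exchange(rec):
    """Map EHR A's structured record to the shared exchange message."""
    return {"medication": rec["drug"],
            "dose": f'{rec["dose_mg"]} mg',
            "route": "oral" if rec["route"] == "PO" else rec["route"],
            "frequency": rec["freq"]}

def ehr_b_to_exchange(rec):
    """Map EHR B's tuple-based record to the same exchange message."""
    name_dose, route, freq = rec
    medication, dose_value, dose_unit = name_dose.split()
    return {"medication": medication,
            "dose": f"{dose_value} {dose_unit}",
            "route": route,
            "frequency": "daily" if freq == "once a day" else freq}

# Both systems produce the same message, so either can consume the other's output.
assert ehr_a_to_exchange(ehr_a_record) == ehr_b_to_exchange(ehr_b_record)
print(ehr_a_to_exchange(ehr_a_record))
```

Each vendor keeps its own internal design (and whatever else differentiates it commercially); the only thing that has to be agreed upon is the exchange message. That is the cell-phone analogy in code.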

Wednesday, November 23, 2011

Is the trend from paper to electronic health records irreversible?

When we moved from horse-drawn carriages to automobiles, we certainly had some naysayers - they said that cars were too fast, too loud, too dangerous, too new, too untested, too modern. But we never went back to horses, despite the complexity of the new technology and the issues associated with the internal combustion engine.

Similarly, I submit that whether we like it or not, the die is cast (I like the Suetonius version better, with Julius Caesar dramatically stating "alea iacta est" before proceeding to cross the Rubicon), for we have crossed our Rubicon, and in the distance we can hear the relentless march of progress. EHRs will replace paper records, and it's too late to close the barn door now that the horse has bolted.

Thursday, November 17, 2011

For better or for worse

Even though the advantages of CPOE (computerized physician order entry) have been extolled at length in the informatics literature, there is still some debate about whether CPOE actually reduces medication errors. I recently re-read Ross Koppel et al's paper in JAMA on the role of CPOE in facilitating, as opposed to mitigating, medication errors [1]. Ross and his colleagues found "22 previously unexplored medication-error sources that users report to be facilitated by CPOE", predominantly due to human-computer interface flaws and errors of information.

This conclusion was controversial and fueled much discussion at the time. The paper certainly had some shortcomings (most notably, as David Bates pointed out [2], that Koppel measured perceptions of errors rather than the errors themselves). But it did raise an important line of reasoning: that CPOE probably isn't as fantastic for improving safety as its initial sales pitch proclaimed, and, more importantly, that these systems are quite complex, operate in dynamic environments, and require us to be cautious and deliberate before jumping to conclusions.

Of course, there have always been medication errors, even before EHRs and CPOE. The challenge is to show how CPOE has influenced these errors, especially since we only started measuring them extensively after EHRs came along. To complicate matters further, workflows vary from one institution to another, which can make it difficult to discern whether an error is caused by the CPOE system or by the workflow of that particular organization. It has been six years since the Koppel paper was published, and we still haven’t formulated a definitive answer.

One of the selling points of CPOE is that it adds a level of safety and allows errors to be discovered and fixed before the patient is harmed. But it also often adds extra documentation for the clinician and, in today's point-and-click world, makes using the EHR more tedious. For example, I know of one institution that requires RNs to document any CPOE discrepancies. One of the RNs, somewhat frustrated by the additional documentation load, said, "I bet you Florence Nightingale wouldn't have liked to spend most of her time clicking away at a computer screen instead of taking care of people."

From what I have read about Florence Nightingale, the observation is probably accurate.


[1] Koppel R, Metlay J, Cohen A, Abaluck B, Localio AR, Kimmel SE, et al. Role of Computerized Physician Order Entry Systems in Facilitating Medication Errors. J Am Med Assoc 2005;293(10):1197–203.
[2] Bates DW. Computerized physician order entry and medication errors: finding a balance. J Biomed Inform. 2005 Aug;38(4):259-61.

Monday, November 14, 2011

So long, and thanks for all the fish

We recently had a discussion in the class I teach about the increasing popularity of telemedicine, and wondered if advances in telemedicine and e-communication would render the office visit as we know it extinct.

'Never!' proclaimed my students who were physicians, since the physical nature of the office encounter is essential to making the diagnosis. But is that really the case?

In the internal medicine universe that I live in, the emphasis is invariably on obtaining a good history, since most diagnoses can be made from the history alone, especially if relevant nuggets of information are unearthed during the process of history-taking. This, then, is the essence of the chase: the hunt for crucial information that can be processed to generate and modify a differential diagnosis; the clues on the treasure map that lead the clinician to the spot marked 'X'.

I remember my first attempts at videoconferencing, and how amazed I was by the tiny, choppy, pixelated images that would frequently stutter and freeze. Technology has advanced a great deal in the last couple of decades, and high-quality video communication is now entirely feasible. A telemedicine interview is much easier to conduct today than it ever was in the past, and it might only be a question of time before convenience trumps convention.

The title of this post is from Douglas Adams (I guess the dolphins are the telemedicine practitioners in this metaphor), but I was also thinking of the Arthurian legend of the Fisher King when I was writing this -- the Fisher King, much like the physician today, blights his kingdom because of his limited mobility and reach (unlike the promise of telemedicine, which truly offers clinicians an opportunity to practice as "médecins sans frontières").

Telemedicine is an upstart in the world of established medicine, but it has the potential to alter the delivery of patient care. Disruptive innovations like telemedicine will always displace the status quo to some extent, just as office visits displaced home visits in the early 20th century.

And yes, I believe that the ultimate promise of HIT is that it will teach clinicians new ways to fish.


Christensen, Clayton M; Bohmer, Richard; Kenagy, John. "Will Disruptive Innovations Cure Health Care?" Harvard Business Review, September 2000.

Sunday, November 13, 2011

Improving healthcare with the assistance of the primeval snout


It's true that clinicians benefit greatly from technology that helps them make decisions, but clinical decision making involves both analytical and inductive (intuitive) processes of reasoning. Clinicians can learn analytical (hypothetico-deductive) processes from a textbook, but inductive learning in clinical practice is almost entirely experiential. 

Experiential learning requires time and repetitive (iterative) instruction, much like machine learning by a computer program. We are very good at teaching clinicians how to use EBM, but we often find it hard to teach them how to "use their gut", which is another way to describe inductive reasoning. Even though clinician-educators emphasize analytic reasoning when we teach students, in our own practices we often depend more on inductive logic to make decisions, as Abraham Verghese wrote in “The Tennis Partner”: ‘I taught students to avoid the augenblick diagnosis, the blink-of-an-eye label, the snap judgment. But secretly, I trusted my primitive brain, trusted the animal snout. I listened when it spoke.’


So what is this process of diagnostic reasoning that occurs almost at the speed of light, ‘the blink-of-an-eye label’? Does it really work?


One way to demonstrate this is to show physicians a picture of an easily identified skin rash, such as shingles, and ask them to name the diagnosis. Fresh medical students often cannot reach a diagnosis quickly because they lack the experiential knowledge; they can’t see the pattern that allows them to reach the obvious conclusion. Most experienced physicians will correctly identify the rash in milliseconds, but they may not be able to rationally describe the process that led them to the diagnosis ‘in the blink of an eye’. This is a classic example of the so-called ‘augenblick diagnosis’, a form of inductive reasoning.
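As a loose analogy (and nothing more), here is a toy Python sketch of a nearest-neighbour 'diagnostician' whose accuracy tends to improve simply because it has seen more cases. The two numeric rash 'findings' and all the numbers are invented for illustration and have no clinical meaning.

```python
# A toy illustration of experiential (inductive) learning: a 1-nearest-neighbour
# "diagnostician" that improves only because it has seen more prior cases.
# The two numeric "findings" per case are invented; this is not clinical data.

import math
import random

random.seed(0)

def make_case(diagnosis):
    """Generate a hypothetical case: two numeric findings plus the true label."""
    if diagnosis == "shingles":
        return (random.gauss(6, 1.5), random.gauss(3, 1.5), "shingles")
    return (random.gauss(3, 1.5), random.gauss(6, 1.5), "other")

def diagnose(seen_cases, findings):
    """Label a new case with the diagnosis of the single closest case seen so far."""
    x, y = findings
    nearest = min(seen_cases, key=lambda c: math.hypot(c[0] - x, c[1] - y))
    return nearest[2]

test_cases = [make_case(random.choice(["shingles", "other"])) for _ in range(200)]

for experience in (2, 20, 200):   # roughly: fresh student -> resident -> attending
    seen = [make_case(random.choice(["shingles", "other"])) for _ in range(experience)]
    correct = sum(diagnose(seen, (x, y)) == label for x, y, label in test_cases)
    print(f"{experience:>3} prior cases seen: {correct / len(test_cases):.0%} labelled correctly")
```

Whether anything like this captures what an experienced clinician actually does when they glance at a rash is, of course, exactly the open question.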

If you've read "Blink" by Malcolm Gladwell, you know all about inductive reasoning and augenblick diagnoses, and how physicians depend on "thin slicing" to make their diagnoses. I’m very interested in learning how we can develop logical representations and models that capture inductive reasoning, and how we can convert "instinctive" thought into reproducible (and measurable) actions. I've said before that if we increase the length of an internal medicine residency from 3 years to 4 years, we'll increase experiential learning and, ergo, produce better doctors. But the lack of funding for an additional period of learning (and the associated costs of instructors and facilities) prevents us from changing the educational paradigm. 
We’ve spent billions of dollars to increase the amount of health IT we use in patient care, in the hope that it will improve healthcare. This strategy may work. But I wonder if we would reap a more generous reward if we spent the same amount of money to enhance the experiential learning of physicians by training them more comprehensively in the use of the primeval snout....

Friday, November 11, 2011

Clinical informatics, the newest medical subspecialty

What is clinical informatics?

The American Medical Informatics Association (AMIA) defines clinical informatics as "the application of informatics and information technology to deliver healthcare services." Physicians who practice clinical informatics (also known as 'informaticians' or 'clinical informaticists') collaborate with other health care and information technology professionals to improve patient care and enhance healthcare delivery. AMIA has been working for some time to promote clinical informatics as a legitimate medical subspecialty.

Why is clinical informatics in the news?

The American Board of Medical Specialties (ABMS) recently recognized clinical informatics as a subspecialty. Interestingly, this subspecialty is not linked to any single primary specialty, but can be attached to any primary specialty (such as internal medicine, family medicine, or general surgery) whose board approves it.

Certification in clinical informatics should commence by 2013 and will be available to physicians who are board certified in their primary ABMS specialty, and also have advanced training in informatics. Additionally, they will need to take an exam, which will be administered by the American Board of Preventive Medicine.

What does the future hold?

Throughout my professional career, I have always been part of a discipline that was well-entrenched and highly institutionalized. General internal medicine was already a mature specialty when I signed up for residency, and by the time I took my boards, the IM board certification process was very well-developed (to the point that I was too late to get grandfathered into a "lifetime" certificate).

Clinical informatics is a very new field and is at the forefront of medicine today. This is the first time I have ever been a part of an emerging discipline, and it feels good to be a pioneer.