Wednesday, December 26, 2018

May I See Your Credentials...Oh, Never Mind

Chances are, your doctor is Board Certified.  Chances are, if you have gone to a hospital, it was accredited by The Joint Commission.  Many health care consumers treat these kinds of credentials as a proxy for quality, trusting that the people and institutions from whom they receive care are, in fact, well-qualified to provide it. 

It turns out those expectations may be overly optimistic. 

Board Certification is overseen by the American Board of Medical Specialties (ABMS).  Following medical school, residency, and licensure, physicians can take a specialty-specific exam to obtain their board certification.  After that, in order to remain board certified, they must follow a process called "Maintenance of Certification" (MOC) to demonstrate ongoing competence, and it is MOC that has proved controversial. 
Credit: Association of American Physicians and Surgeons
In a recent survey from MDLinx, 65% of physicians said MOC provides no clinical value, despite costing physicians thousands of dollars and significant time.  Only 4% felt it enhanced their ability to practice.  Seventy-seven percent also added comments elaborating on their frustrations.  One cardiologist told MDLinx, "These are people in a bureaucratic ivory tower who have the pomposity to state that these computer tests are going to help patients, when they're really helping themselves." 

Four internists have recently filed a lawsuit against their specialty board (ABIM), claiming the MOC process amounts to restraint of trade.  They'd like more acceptance of other certification options, such as those offered by the National Board of Physicians and Surgeons.  This comes on top of an investigation three years ago charging that the specialty boards -- and the ABIM in particular -- were using board certification and MOC to rake in huge windfalls for their organizations. 

The ABMS, for one, is not sitting idly by.  It just released a draft report on its vision for the future of certification, trying to address some of the concerns raised and asking for public comment.  It's a good bet it will get some.

Things aren't much better on the hospital side.  The accreditation process is costly for hospitals in terms of staff time and actual dollars.  Despite that, a new study found that patient outcomes were not really better at accredited hospitals than non-accredited ones, with no difference in mortality.  The authors were blunt in their conclusion:
...we found that hospitals accredited by private organizations did not have better patient outcomes than hospitals reviewed by a state survey agency. Furthermore, we found that accreditation by The Joint Commission, which is the most common form of hospital accreditation, was not associated with better patient outcomes than other lesser known, independent accrediting agencies.
It gets worse than that.  The Wall Street Journal reported that even when state or federal regulators find significant safety violations, hospitals rarely (around 1% of the time) lose their accreditation status. Professor Ashish Jha (one of the authors of the study above) told the Journal, "It's clearly a failed system and time for a change," adding that the Journal's finding "shows accreditation is basically meaningless—it doesn't mean a hospital is safe."  

Equally troubling, The Joint Commission has a subsidiary that provides consulting services to help hospitals with the accreditation process, which creates potential conflicts of interest. The Journal's reporting caught the attention of CMS, which announced it was going to look into these relationships.  CMS Administrator Seema Verma said: 
We are concerned that the practice of offering both accrediting and consulting services—and the financial relationships involved in this work—may undermine the integrity of accrediting organizations and erode the public’s trust.  
The Joint Commission, of course, defends its accreditation process and also insists it keeps a strict firewall between it and its consulting efforts.  

Credit: The Leapfrog Group
Meanwhile, the Leapfrog Group does its best to track hospitals' actual quality performance.  In its latest survey, of the 2,600 hospitals evaluated, only 32% received an "A."  A plurality (37%) only got a "C."  The results do not differ significantly from previous years, which should be worrisome.

All of this boils down to the same underlying problem: we don't really know how to measure quality in health care.  Sure, we have many quality measures and many ways of trying to gauge knowledge and/or processes that should lead to quality outcomes, but you still rarely know if you have one of the best doctors/hospitals, or one of the worst. 

And that is a problem.

I'm deeply sympathetic to physician and hospital complaints about the certification/accreditation processes.  They are expensive, take much time, and don't seem to produce the desired results.  What I am not sympathetic to, though, is the lack of meaningful alternatives being proposed.

We already don't really understand the distinction between a certification from the ABMS and one from the NBPAS, or between an accreditation from The Joint Commission and one from the Healthcare Facilities Accreditation Program.  We don't really know if graduating from Harvard Medical School makes someone a better doctor than graduating from Virginia Tech Carilion School of Medicine, or if doing a residency at the Cleveland Clinic matters more than where a doctor went to medical school.  We don't really know if a physician's performance is improving or declining over time.

What we need to know is, who is helping patients the most?  Who is harming them the least?  How much difference will it likely make if I go to this doctor/hospital versus that doctor/hospital? 

We should know these things, but we don't.  Physicians and hospitals should know them, and should use them to continuously improve their performance, but they don't and they can't effectively.  We can't entirely blame them: in today's litigious society, measuring and reporting on those kinds of specifics opens the door to more lawsuits and higher awards.

The problem goes to the age-old debate: is medicine art or science?  Art is subjective; it can't be measured or easily compared.  Science thrives on measurement and proof.  We like to think of medicine as being scientifically-based, but human health is not only highly subjective but also highly variable.  So we tend to shrug at the problem, measure what we can, and use these kinds of pseudo-scientific accreditations to bridge the gap.

We have many problems in healthcare.  Much good work is being done on a number of them.  But until we can answer those three questions above -- for a specific doctor/hospital -- we're likely to keep risking our health, and wasting our dollars. 
