The MPIRICA Quality Score: Methodology

Methodology Overview

MPIRICA Health is on a mission to help bring clarity to healthcare decisions.

Patients and payers want providers that will deliver the best results. They need data to make the right choices. But for most people, medical data's bewildering complexity makes it impenetrable. It's extraordinarily difficult to discern which provider has demonstrated the best success.

MPIRICA Health solves this problem with the MPIRICA Quality Score, a trustworthy and intuitive measure of surgery provider performance. It rates hospital and surgeon results for specific surgical procedures.

The score's physician-developed, risk-adjusted methodology uses years of data drawn from millions of Medicare claims. The result is a clear, simple, three-digit score that patients and payers can use to inform their decisions.

To calculate the score, MPIRICA first uses risk-adjusted analytic models (developed over almost 30 years) to predict the number of expected adverse outcomes for each provider. These outcomes include several measures that cover the entire continuum of care, up to 90 days after discharge. This way, the score avoids rewarding providers who adopt short-term - and short-sighted - solutions.

After generating predictions, MPIRICA compares them against each provider's actual performance. To do that, MPIRICA's analysts examine real cases from each individual provider. Providers who produce fewer adverse outcomes than predicted see their scores rise; those with more adverse outcomes than expected see their scores fall.
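The observed-versus-expected comparison described above can be sketched in a few lines. This is a hypothetical illustration only (the function name and values are invented for this sketch; MPIRICA's actual statistical model is not public):

```python
# Hypothetical sketch of the observed-vs-expected comparison.
# Names and numbers are illustrative, not MPIRICA's actual model.

def observed_expected_ratio(observed_adverse, expected_adverse):
    """Ratio below 1.0 means fewer adverse outcomes than predicted
    (the score rises); above 1.0 means more than predicted (it falls)."""
    if expected_adverse <= 0:
        raise ValueError("expected count must be positive")
    return observed_adverse / expected_adverse

# Example: a provider predicted to incur 12.5 adverse outcomes
# who actually recorded 10 performed better than expected.
ratio = observed_expected_ratio(10, 12.5)  # 0.8
```

Because the expected count is risk-adjusted for each provider's patient mix, the same ratio is comparable across providers with very different caseloads.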

This clinically valid approach, decades in the making, is how MPIRICA ensures an apples-to-apples comparison of provider performance.


Measurement Criteria

MPIRICA uses the following key measurements to evaluate provider quality:

For inpatient procedures:

  • Inpatient mortality
  • Major complications in the hospital
  • Readmission within 90 days of discharge
  • Post-discharge mortality within 90 days without a readmission

For outpatient procedures:

  • Acute mortality or inpatient admission (within 7 days of the procedure)
  • Acute emergency department visit or follow-up outpatient procedure
  • Post-procedure admission to an acute inpatient facility within a 90-day window (days 8 to 97 after the procedure)
  • Post-procedure mortality without inpatient admission within a 90-day window (days 8 to 97 after the procedure)
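The outpatient measurement windows above can be expressed as a simple day-offset classification. A minimal sketch, assuming the day offsets are counted from the procedure date (the function name is illustrative, not MPIRICA's):

```python
# Hypothetical sketch of the outpatient measurement windows.
# Day offsets are relative to the procedure date; names are illustrative.

def outpatient_window(days_after_procedure):
    """Map a post-procedure event to the measurement window it falls in."""
    if 0 <= days_after_procedure <= 7:
        return "acute"            # acute mortality / admission / ED visit window
    if 8 <= days_after_procedure <= 97:
        return "post-procedure"   # 90-day window (days 8 to 97)
    return "outside measurement window"

outpatient_window(3)    # "acute"
outpatient_window(45)   # "post-procedure"
```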

Patient deaths, readmissions, and follow-up procedures are straightforward figures. There's not much ambiguity to them. But complications can be much harder to measure.

That's because of coding inconsistencies. When a patient comes to the hospital, providers record their conditions with present-on-admission (POA) codes. But that process isn't standardized or audited, so some providers may under-report while others record more rigorously. This can make it difficult to discern whether a condition was present on admission or arose during the patient's stay (an obvious problem when trying to judge a hospital's performance).

Another issue: codes can be too vague. For example, one code for postoperative infections could cover either a small infection around a stitch or viscera protruding through a wound. With such a wide range in severity, the code reveals little about a patient's surgical outcome.

So codes can't form a clear basis for measuring complications; they fall short on both reliability and specificity. Instead, MPIRICA uses a statistical measure called prolonged risk-adjusted post-procedural length-of-stay (prRALOS) as an objective surrogate for complications. It appears in many peer-reviewed studies, and it allows for fair, standardized comparison between providers.


Risk Adjustment

Regardless of a surgeon's skill, more complex cases will always carry greater risk of adverse outcomes. A fair scoring system should never penalize a provider for taking on these complex cases. MPIRICA's risk-adjustment model is therefore a crucial component of the MPIRICA Quality Score's validity.

The model uses over 500 risk factors, all developed by physicians, to assess patients' health status. These factors range from broad demographic information to extremely specific details of individual case histories. This rigorous accounting ensures valid comparison of performance across hospitals and surgeons, independent of the severity of their cases.
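Risk adjustment of this kind is commonly implemented with a regression model that maps a patient's risk factors to a predicted probability of an adverse outcome. The following is a toy sketch using a logistic model with two invented indicator factors; MPIRICA's actual model, with its 500+ physician-developed factors, is not public:

```python
# Toy risk-adjustment sketch: a logistic model maps patient risk factors
# to a predicted probability of an adverse outcome. The weights, factors,
# and intercept below are invented for illustration.
import math

def predicted_risk(weights, factors, intercept):
    """Logistic regression: probability of an adverse outcome for one patient."""
    z = intercept + sum(w * x for w, x in zip(weights, factors))
    return 1.0 / (1.0 + math.exp(-z))

# e.g. factors = [age over 80, prior cardiac condition] as 0/1 indicators
p = predicted_risk([1.2, 0.8], [1, 0], -3.0)  # roughly 0.14
```

Summing these per-patient probabilities over a provider's caseload yields the provider's expected number of adverse outcomes, which is what the observed results are compared against.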


Data Collection & Analysis

MPIRICA builds prediction models using Medicare data on hospitals and physicians. This data includes the prior health conditions of patients for each procedure scored. MPIRICA uses the most recent three years of data available for hospitals (four years for surgeons), and scores are updated as Medicare publishes new data.

MPIRICA also takes care to account for errors. Analysts apply integrity tests and exclude problematic data. For example, cancer cannot arise as a complication of a procedure, so if cancer is ever coded as a complication, the coding is corrected to reflect cancer as a pre-existing condition.

This results in MPIRICA's robust predictive model. It takes patients and their health statuses as inputs, and predicts the likelihood of adverse outcomes. MPIRICA then compares these predictions to the sum of observed outcomes for each provider.

(Note: See our FAQ to find out why MPIRICA uses Medicare data, and why it is a valid source of measurement. All hospitals and insurance companies are invited to provide their own clinical or claims data to fill in any gaps that may be left by using CMS data.)


Score Calculation

MPIRICA's prediction represents what a provider should achieve given their patient mix: a certain number of adverse outcomes, depending on the procedure and each patient's health. The difference between this prediction and the provider's observed outcomes determines how the provider ranks.

MPIRICA then converts these observations to its 800-point scale. Rules assign a set number of points for each of the measures the Quality Score accounts for; relative performance on each measure is compared to baseline averages, and the components are summed to produce the final MPIRICA Quality Score.
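The exact point rules are described in the methodology whitepaper. As a toy illustration only, assume each measure's observed-versus-expected ratio earns a share of that measure's maximum points, and the components sum to a score capped at 800 (every rule and number below is invented, not MPIRICA's actual formula):

```python
# Toy illustration only: the real point rules are in MPIRICA's whitepaper.
# Here, each measure's observed-vs-expected ratio earns a share of that
# measure's maximum points, and the components sum to a capped score.

def component_points(observed, expected, max_points):
    """Fewer observed outcomes than expected earns more of the measure's points."""
    ratio = observed / expected if expected > 0 else 1.0
    return max_points * max(0.0, 1.0 - ratio / 2.0)

def quality_score(measures):
    """measures: list of (observed, expected, max_points) per outcome measure."""
    total = sum(component_points(o, e, m) for o, e, m in measures)
    return round(min(total, 800))

# Four measures, 200 points each: better-than-expected measures earn
# more than half their points, worse-than-expected measures earn less.
score = quality_score([(10, 12.5, 200), (3, 3.0, 200),
                       (5, 4.0, 200), (1, 2.0, 200)])
```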

For a more in-depth look at the statistical methodology, please send a message to quality@mpirica.com to request the MPIRICA Methodology Whitepaper.
