Author

  David R. Nerenz, PhD

THE ARTICLE by Averill and colleagues (2016) raises a number of interesting questions about the use of outcome measures to evaluate quality of care, and about their use in various forms of pay-for-performance (P4P) programs, including those that explicitly involve some form of "pay-for-outcome."

There is little question about the potential value of focusing quality measurement and financial incentive programs more heavily on outcome measures and less on process measures. The key points in that argument have been summarized by Averill and colleagues and in other articles in this issue, and do not need to be repeated here. As is almost always the case in health policy discussions, though, the "devil is in the details," and there are indeed many details and questions to be resolved before public and private P4P programs can move significantly to an emphasis on outcome measures.

The Averill article presents a set of simulation analyses based on the proposed inpatient complication payment section of the "Incentivizing Health Care Quality Outcomes Act of 2014" (H.R. 5823 in the 113th Congress). Under the proposed legislation, hospital payments would be adjusted on the basis of the relative rate of occurrence of a set of Potentially Preventable Complications (PPCs) (Hughes et al., 2006) identified in claims data. The simulation recognizes that an inpatient complication can have 3 distinct financial effects in "penalty" models like this: it creates a financial penalty, it adds to the required costs of care, and it may place a patient in a higher-paying diagnosis-related group (DRG) category. The net of these 3 effects is the focus of the simulation. Using a reasonable set of assumptions about hospitals' ability to reduce complications over a 2-year period, the authors show a net positive effect related primarily to reductions in underlying costs of care when complications are prevented.
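The interplay of these 3 effects can be sketched as simple per-complication arithmetic. The sketch below is illustrative only; the dollar figures are hypothetical and are not taken from the Averill simulation, which works with hospital-level rates and actual payment rules:

```python
def net_hospital_impact(penalty, added_cost_of_care, drg_payment_increase):
    """Hypothetical net financial effect on a hospital of one inpatient
    complication under a penalty model: the penalty and the extra cost of
    treating the complication reduce margin, while any shift of the patient
    into a higher-paying DRG offsets part of that loss."""
    return drg_payment_increase - penalty - added_cost_of_care

# Hypothetical example: a $2,000 penalty and $5,000 in added care costs,
# partially offset by a $4,000 DRG payment increase.
print(net_hospital_impact(2000, 5000, 4000))  # -3000: net loss per complication
```

Under assumptions like these, preventing a complication avoids the full per-complication loss, which is why the simulated savings come primarily from reduced underlying costs of care rather than from the penalty itself.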

On the basis of these simulation results, the authors argue that an incentive system focused on preventable complications, and implemented along the lines illustrated in the simulation, can be good for patients, good for payors, and good for hospitals themselves. They also argue that an incentive model linked to PPCs may be better in several ways than a model focused on a narrower set of Hospital-Acquired Conditions. A positive experience in Maryland hospitals using PPCs for payment adjustment is presented as evidence that the benefits of such a program are not just theoretical.

Questions about the simulation and the payment approach that it illustrates range from the narrow and technical to the broad and philosophical.

What about costs of improvement? The simulation results here, like most other analyses of "cost savings" related to P4P programs, do not include the costs of quality improvement. There are clearly such costs. An initiative to reduce pressure ulcers (a particularly high-cost complication), for example, could involve costs to purchase pressure-sensitive mattress covers, costs to train nursing staff on prevention strategies, costs to hire and support dedicated "movement coaches" for patients, or costs to modify EMR systems to identify patients at risk or track preventive interventions. Similar capital, training, staff time, IT, analytic support, and other costs accompany almost any initiative to improve performance on complications. Even if all parties agree that these improvements are necessary, accounting for the costs of improvement has to be part of any comprehensive analysis of net financial impact at the hospital level.

There is also the issue of opportunity cost. Time and other resources devoted to improvement on any one measure are time and resources not spent on something else. One advantage of an incentive program like the one illustrated here, with 64 potential complications included, is that it would allow hospitals to decide where to allocate their scarce resources, presumably on the basis of some balance between the importance of a complication (both clinical and financial) and the potential for actually making significant improvements. Incentive programs linked to just one, or a small number, of selected outcome measures create the risk of drawing resources away from an area of potentially greater benefit toward the area reflected in the selected measure(s).

Are quality improvements real, or just coding changes? Other observers have noted that the reductions in hospital readmission rates that appear to be associated with the Medicare readmission penalty program may be due, at least in part, to hospitals' use of outpatient observation rather than inpatient care for patients potentially in need of readmission (Noel-Miller & Lind, 2015). Because the PPC-based program depends heavily on the coding of conditions "present on admission," it seems likely that some of the apparent reduction in complications that might be observed in an incentive program is, or will be, due to more careful and complete coding of conditions present on admission. Clinically insignificant or marginal complications that involve some element of clinician or coder judgment may be coded either more or less completely depending on whether the code leads to a higher DRG payment or a higher penalty. The Fuller et al. article cited by Averill and colleagues notes a significant difference in coding of secondary diagnoses in California versus Maryland that seems to reflect the differential use of such diagnoses in the 2 states for payment purposes (Fuller et al., 2009).

Who or what is the accountable entity? The simulation is about hospitals and about hospital-based measures of PPCs. The earlier part of the article, though, and the text of the "Incentivizing Health Care Quality Outcomes Act" use the terms "health care delivery organization" and "health delivery organization." Such terminology raises some semantic questions (Is a "managed care plan" a "health care delivery organization"?) but, more important, raises questions about which individual providers or organizations are truly responsible for measured patient outcomes. Although the responsibility of hospitals for complications occurring during an inpatient stay is relatively clear, responsibility for other outcomes, particularly those occurring at a more remote time and place, is much less clear.

Consider the example of a patient admitted for cardiac surgery, with the outcome in question being hospital readmission within 30 days of discharge. The potentially responsible organizations or individual providers include the admitting cardiologist, the cardiac surgeon, the primary care physician or "medical home," the hospital, the postacute care provider(s) (rehabilitation facility, skilled nursing facility, home health agency), an accountable care organization (ACO), and a managed care plan. In fact, one section of H.R. 5823 would hold all providers in the patient's community jointly responsible for events like readmissions occurring in that community, with rewards and penalties as high as 20%. Should all of these providers be held jointly responsible, with payment penalties applied to all of them in the event of an adverse outcome like readmission? Perhaps, but it seems likely that a movement toward outcome-based P4P systems will require more careful attention to issues of clinical management authority linked to accountability for outcomes, and clearer identification of which provider(s) are responsible for which outcome(s) at any point in time.

What about risk adjustment? Finally, any movement to outcomes as the basis for incentive payments has to take into account the "multiply determined" nature of outcomes (Bilimoria, 2015). That is, outcomes are almost always determined only partly by quality of care and partly by other factors, including factors entirely outside the control of the providers whose performance is being measured. For many measures of inpatient complication rates, the relative contribution of quality and "nonquality" factors is such that no significant adjustment for those other factors is required beyond some basic adjustment for primary diagnosis and disease severity. For other outcomes, however, the definable, measurable clinical process quality factors may contribute 5% or less of the overall variation (Stefan et al., 2013). The rest of the variation relates to patient-, family-, or community-level factors over which doctors and hospitals have little or no control. In those cases, extensive investment in risk adjustment will be required to create a fair and level "playing field" for the entities whose performance is being compared and rewarded or punished.
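A small Monte Carlo sketch shows why a 5% quality share matters for unadjusted comparisons. The setup is hypothetical (two hospitals, a standardized outcome score, and an assumed 5% share of variation attributable to measurable process quality, loosely echoing the Stefan et al. figure); it is an illustration, not the cited analysis:

```python
import random

random.seed(0)

def observed_outcome(quality, quality_share=0.05):
    """Hypothetical outcome score in which the hospital's measurable process
    quality explains ~5% of the variation and patient/community factors
    outside the hospital's control explain the rest."""
    patient_factors = random.gauss(0, 1)  # uncontrollable variation
    return (quality_share ** 0.5) * quality + ((1 - quality_share) ** 0.5) * patient_factors

# Two hospitals, one clearly better on measurable process quality...
better, worse = 1.0, -1.0
trials = 10_000
# ...yet without risk adjustment, the "worse" hospital posts the better
# observed outcome in a large share of head-to-head comparisons.
upsets = sum(observed_outcome(worse) > observed_outcome(better) for _ in range(trials))
print(f"worse hospital looks better in {upsets / trials:.0%} of comparisons")
```

Under these assumptions, patient-level noise swamps the quality signal in roughly a third of pairwise comparisons, which is why unadjusted outcome rankings can misidentify high- and low-quality providers.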

Averill and colleagues have provided a valuable illustration of one possible form of a financial incentive program linked to outcome measures. The potential benefits of moving down this policy path are clear, but so are the difficulties and challenges.

REFERENCES

Averill R. F., Fuller R. L., McCullough E. C., Hughes J. S. (2016). Rethinking Medicare payment adjustments for quality. Journal of Ambulatory Care Management, 39(2), 98-107.

Bilimoria K. Y. (2015). Facilitating quality improvement: Pushing the pendulum back toward process measures. JAMA, 314(13), 1333-1334.

Fuller R. L., McCullough E. C., Bao M. Z., Averill R. F. (2009). Estimating the costs of potentially preventable hospital acquired complications. Health Care Financing Review, 30(4), 17-32.

Hughes J. S., Averill R. F., Goldfield N. I., Gay J. C., Muldoon J., McCullough E., Xiang J. (2006). Identifying potentially preventable complications using a present on admission indicator. Health Care Financing Review, 27(3), 63-82.

Noel-Miller C., Lind K. (2015). Is observation status substituting for hospital readmission? Health Affairs Blog. Retrieved October 28, 2015, from http://healthaffairs.org/blog/2015/10/28/is-observation-status-substituting-for-

Stefan M. S., Pekow P. S., Nsa W., Priya A., Miller L. E., Bratzler D. W. (2013). Hospital performance measures and 30-day readmission rates. Journal of General Internal Medicine, 28(3), 377-385.