Jean Gayton Carroll, PhD, Editor

Pay for Performance has become one of the most studied and most discussed trends in the United States health care delivery system during the past few years. However, it takes time, often years, to develop and implement complex evaluation and compensation plans that cover broad segments of the population. This means that, in the absence of any significant recorded history, much of the discussion up to this point has been largely speculative. As Damberg and her coauthors state, "To date, there have been no empirical studies of the impact of a physician incentive program across a diverse patient and payer population."1 Beginning in 2002, with funding from the Robert Wood Johnson Foundation and the California HealthCare Foundation, California's Integrated Healthcare Association (IHA) has been planning, developing, and testing its Pay for Performance program. In their discussion of the IHA plan now in operation in California, the authors set forth 7 important lessons learned through its implementation. They provide detailed explanations of the challenges faced and of how they were handled.

 

A system-wide patient safety plan is presented by Nadzam and her coauthors, of the Cleveland Clinic Health System (CCHS). The plan is 1 of the 3 initiatives that together make up the CCHS Strategic Performance Measurement and Improvement Plan. As the authors state, "Patient safety is an integral component of the CCHS strategic approach to performance improvement."2 The 7 strategies employed in the patient safety program are the promotion of a culture of safety, increased reporting of adverse events and error-prone processes, enhanced communication between health care professionals and patients, increased learning from analysis of reported adverse events, focused redesign of processes when and where indicated, promotion of appropriate application of technology, and focused education about new safety-enhancement applications. The authors provide a detailed explanation of the measures taken to implement the goals of the program.

 

Focusing on emergency department errors, Khare, Uren, and Wears point out that most emergency departments lack simple systems for capturing medical errors. As barriers to prompt incident reporting, they cite reluctance to "inform" on a colleague, the absence of feedback, and the nuisance of having to fill out complicated incident forms. Their objective was to overcome these barriers and to capture as many as possible of the medical errors that occur in an emergency department. To accomplish this, they developed a Web page that permits anonymous reporting not only by emergency department personnel, but also by other hospital departments and services that receive patients from the emergency department.

 

Kollberg, Elg, and Lindmark present the experiences of 6 local development teams in Sweden's public health sector as they employed the flow model performance measurement system in demonstration projects funded by the Federation of Swedish County Councils (FCC). The flow model of tracing care is based on the medical decision process, following the patient's path through the health care system.

 

In the fall of 2004, Quality Management in Health Care published Dr Alemi's evaluation and discussion of Tukey's Control Chart, "a method of analyzing data based on the concepts developed by John Tukey for calculation of confidence intervals for medians."3

 

Dr Alemi cited the following as advantages inherent in the procedure (a brief illustrative sketch of the method follows the list):

 

* Simplicity in implementation;

 

* Absence of any assumptions about the distribution;

 

* Applicability to small data sets;

 

* Robust character, unaffected by occasional outliers.
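For readers who want a concrete picture of the technique, the following is a minimal sketch of Tukey-style control limits, assuming the common formulation in which the limits are placed 1.5 fourth spreads (interquartile ranges) beyond the fourths (quartiles); the exact computation in Dr Alemi's article may differ in detail, and the data and names used here are purely illustrative.

    # Minimal, illustrative sketch of Tukey-style control limits.
    # Assumption: limits sit 1.5 fourth spreads (IQRs) beyond the fourths
    # (quartiles); this may not match Alemi's construction in every detail.
    import statistics

    def tukey_control_limits(observations):
        """Return (lower, upper) limits from the quartiles and the IQR."""
        q1, _, q3 = statistics.quantiles(sorted(observations), n=4)
        fourth_spread = q3 - q1  # interquartile range
        return q1 - 1.5 * fourth_spread, q3 + 1.5 * fourth_spread

    # Hypothetical weekly wait times (minutes); any point outside the limits
    # is flagged for review - here the single aberrant value, 58.
    weekly_waits = [32, 35, 31, 40, 33, 36, 34, 58, 30, 37]
    lower, upper = tukey_control_limits(weekly_waits)
    flagged = [x for x in weekly_waits if x < lower or x > upper]
    print(f"Limits: ({lower:.1f}, {upper:.1f}); flagged points: {flagged}")

Because the limits depend only on order statistics, the single outlying value does not inflate them, which reflects the robustness property noted in the list above.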

 

Borckardt and his coauthors challenge some of Alemi's findings. They concede that the Tukey Control Chart technique appears to demonstrate good Type I error performance with small samples and probably is robust to the presence of outliers and nonnormal distributions.4 However, in their article, "Empirical Evaluation of Tukey's Control Chart for Use in Health Care and Quality Management Applications," they assert that Tukey's approach presents problems when it is used to track human performance over time.

 

In her study of patient satisfaction in a Kuwaiti hospital, Al-Mailam assesses the relative importance of nursing care in determining the level of patient satisfaction. Her findings essentially corroborate those reported by Garman and his coauthors.5 Among her conclusions, Al-Mailam relates improvements in nursing care to enhanced training, and increased nursing staff satisfaction to the presence of transformational leadership skills in supervisory staff.

 

Langowski reviews the recent literature for evidence of the effect of point-of-care online documentation on patient safety. She finds that the authors of the reviewed articles concluded that online point-of-care nursing documentation is associated with improved quality of documentation and greater end-user satisfaction.

 

Jean Gayton Carroll, PhD

 

Editor

 

REFERENCES

 

1. Damberg CL, Raube K, Williams T, Shortell SM. Paying for performance: implementing a statewide project in California. Qual Manag Health Care. 2005;14(2):66-79.

 

2. Nadzam DM, Atkins M, Waggoner DM, Shonk R. Cleveland Clinic Health System: a framework for a health system patient safety initiative. Qual Manag Health Care. 2005;14(2):80-90.

 

3. Alemi F. Tukey's control chart. Qual Manag Health Care. 2004;13(4):216-221.

 

4. Borckardt JJ, Nash MR, Hardesty S, Herbert J, Cooney H, Pelic C. An empirical evaluation of Tukey's Control Chart for use in health care and quality management applications. Qual Manag Health Care. 2005;14(2):112-115.

 

5. Garman A, Garcia J, Hargreaves M. Patient satisfaction as a predictor of return-to-provider behavior: analysis and assessment of financial implications. Qual Manag Health Care. 2004;13(1):75-80.