Authors

  1. Issel, L. Michele PhD, RN, Editor-in-Chief

Article Content

Clinicians and managers alike want to make the best possible decisions. However, what constitutes a "best possible" decision remains elusive, hence the ongoing emphasis on using research findings as evidence upon which to base decisions. Numerous tools, including meta-analyses, and resources, such as the Cochrane Collaboration, exist to help clinicians and managers sort through research results.

  
Figure. No caption a... - Click to enlarge in new windowFigure. No caption available.

The fundamental question each decision maker faces is "How much difference will this make?" The answer to that question is weighed against considerations of resources, effort, time, and resistance. If a thoughtful health care manager or administrator were to look at the studies published in Health Care Management Review (HCMR), what single piece of information might provide an answer to the "how much difference" question? Authors submitting to HCMR must be able to answer this question.

 

Against this backdrop of practice needs is an ongoing debate among social scientists, and methodologists in particular (Jones & Tukey, 2002), on the use of null hypothesis significance testing (NHST) and p values when reporting statistical findings. Our reliance on statistical significance might be out of habit or conformity, among other reasons. Unfortunately, this reliance on NHST does not readily transfer results into practical decision making. Make no mistake, NHST has great value for many forms of research and research questions. However, for decision making, especially in health care, we need to know how much difference a strategic, administrative, or managerial course of action will make; we need to know (a) the effect size and (b) the practical significance. In comparison with medical journals, very few organization or management journals report effect size, power, or confidence intervals (CIs). Knowing the effect sizes and CIs facilitates managerial decision making, as does more intellectually accessible presentation of statistics.
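To make the distinction concrete, here is a minimal illustration (not drawn from any HCMR study) of the two quantities the editorial asks authors to report: a standardized effect size (Cohen's d, one common choice) and a confidence interval for the difference in means. The data and variable names are hypothetical, and the CI uses a simple normal approximation rather than a full t-based interval.

```python
import math
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: standardized mean difference using the pooled SD."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / math.sqrt(pooled_var)

def mean_diff_ci(group_a, group_b, z=1.96):
    """Approximate 95% CI for the difference in means (normal approximation)."""
    na, nb = len(group_a), len(group_b)
    diff = mean(group_a) - mean(group_b)
    se = math.sqrt(stdev(group_a) ** 2 / na + stdev(group_b) ** 2 / nb)
    return diff - z * se, diff + z * se

# Hypothetical example: patient wait times (minutes) under a new
# scheduling practice versus the old one -- illustrative data only.
after = [22, 25, 19, 24, 21, 23, 20, 22]
before = [28, 31, 27, 30, 26, 29, 32, 28]

d = cohens_d(after, before)
lo, hi = mean_diff_ci(after, before)
print(f"Cohen's d = {d:.2f}, 95% CI for mean difference: ({lo:.1f}, {hi:.1f})")
```

A p value alone would only say the wait-time difference is unlikely under the null; the effect size and interval tell a manager roughly how large the improvement is and how precisely it is estimated, which is the information that feeds the "how much difference" question.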

 

In the 10 years since an editor called for a change in the statistics reported, practical significance reporting has increased (Fidler et al., 2005). As the Editor-in-Chief, I will be asking authors to include effect sizes and CIs when appropriate and possible and to provide HCMR readers with a clear sense of whether the amount of difference is meaningful in a practical sense. By asking for more practical significance reporting, I hope to increase the relevance for managerial practice of the research reported in HCMR.

 

L. Michele Issel, PhD, RN

 

Editor-in-Chief

 

References

 

Fidler, F., Cumming, G., Thomason, N., Pannuzzo, D., Smith, J., Fyffe, P., et al. (2005). Toward improved statistical reporting in the Journal of Consulting and Clinical Psychology. Journal of Consulting and Clinical Psychology, 73, 136-143.

 

Jones, L. V., & Tukey, J. W. (2002). A sensible formulation of the significance test. Psychological Methods, 5, 411-412.