Issel, L. Michele PhD, RN


Over the past year or so, I noticed a marked increase in the number of submissions from countries that surprised me: Turkey, Iran, Brazil, Fiji, Lebanon, Pakistan, Slovenia, Tunisia, and Vietnam. In nearly every case, the manuscript was rejected without review. The main reason tended to be that the study was a simple description of what was occurring in that country. My heart went out to these authors, many of whom seemed to be making a genuine effort at conducting good research. My heart mostly ached because good methodological rigor was built on an unsteady and weak foundation of theory and ignored prior research. The straightforward, simple description failed to advance our knowledge about health care management.


The dilemma of solid methodology applied to a weak research question comes in shades of gray. The international examples fall at the end of the continuum where the problem is readily recognizable. At the other end of the continuum are manuscripts that couple more sophisticated, solid methodology with a research question that borders on uninteresting because it fails to push forward knowledge generation. Such manuscripts generally receive reviews, which result in a rejection.


The heart of the problem stems from our reliance on description and correlation, exacerbated by the availability of large data sets and astounding computing capabilities. Health care administration as a discipline has matured in methodology and theory. I argue that it has not matured equally in prediction and theory testing. We assume, unconsciously, that having a description of a phenomenon and knowing what it is related to tells us what to do. This assumption is false. Answering the question "what would be the most effective course of action under these conditions?" requires experimental knowledge of the effects of different courses of action. Granted, experimental organizational research has flourished in some areas, such as group dynamics. I also acknowledge that not all problems in health care management warrant or are amenable to experimentation. Nonetheless, many questions deserve answering through experimental research, which elucidates causation.


Moving forward, collaborations between managers, administrators, and researchers, as well as consortia of researchers, are needed to garner the resources and access necessary to conduct experimental studies, which by definition require an intervention. Interventions could range from varying QI team membership to using different communication or feedback approaches to improve the use of clinical guidelines. Natural experiments occurring in health care provide a valuable opportunity (e.g., Ginn, Shen, & Moseley, 2009). Leveraging collaborations and practice-based networks could yield results of a predictive nature. Lastly, given the expense and time realities of experimental studies, greater use of statistical techniques that compensate for the lack of group assignment, such as propensity score analysis, remains an option.


Description is necessary, but no longer sufficient for guiding the practice of health care managers and administrators. We need more experimental research in health care administration with solid results to guide us through the causal maze toward optimal outcomes for organizations, employees, and patients.


L. Michele Issel, PhD, RN






Ginn, G. O., Shen, J. J., & Moseley, C. B. (2009). Community benefit laws, hospital ownership, community orientation activities, and health promotion services. Health Care Management Review, 34, 109-118.