
Reason dreams of an empire of knowledge, a mansion of the mind. Yet sometimes we end up living in a hovel by its side. - Heinz Pagels (1988)

 

Health care management researchers are not immune to the fads, fashions, and bandwagons that bedevil decision making in health care organizations (Kaissi & Begun, 2008). The evidence-based management bandwagon is galloping through health care management education and research circles, with the term "evidence based" threatening to adorn the title of every new course, article, book, and funding proposal. The evidence-based management movement is in danger of becoming a passing fad, however, unless greater circumspection and restraint rein in its scope. The article by Arndt and Bigelow (2009) is one contribution to this badly needed correction. If anything, Arndt and Bigelow did not go far enough in their call for caution.

 

Arndt and Bigelow (2009) questioned the reliability and validity of scientific evidence about management, given that most management research questions are asked of and applied to complex social systems rather than simple or complicated systems. The nature of complex social systems means that scientific evidence is difficult to collect about behavior in those systems. Scientific evidence is equally difficult to apply in complex systems. Both steps (collection and application of evidence) destine scientific evidence to a minor role in management decision making. This same limitation applies to most health policy and public health research questions (Pawson, 2006; Sterman, 2006). I recently sought "evidence" on whether an integrated or fragmented structure works better for academic health centers. Wietecha, Lipstein, and Rabkin (2009, p. 174) surveyed the evidence and [wisely] declared, "The answer is that it depends. Both models have documented records of success and failure." Most sound assessments of evidence surrounding complex health care management issues will result in similar conclusions.

 

Arndt and Bigelow (2009) noted that there is little evidence that this state of affairs is harmful to health care organizations. In the eyes of executives, management research is largely trivial and irrelevant (Das, 2003). In fact, a strong case can be made that ignoring evidence is healthy for organizations, if by evidence we mean the results of a typical study in a typical management journal. The management decision-making process is far too complex to be eased by a new piece of management research. If it were otherwise, managers would seek out and use management research. Their incentives are to get their organizations to perform well; ours are to do good science. Which sounds better for organizations?

 

Moving Forward Realistically

We need not abandon the development of evidence-based management, but we do need to pursue it more realistically. How do we move forward?

 

Constrain the Scope of Evidence-Based Management

There are arenas of management life where scientific evidence is convincing enough that it should affect decision making: arenas where hypothesis testing, experimental design (or a good approximation of it), and generalizable results are possible. These arenas are composed of (relatively) simple systems in (relatively) stable environments. Studies of simple systems in stable environments get cumulated and refined over time. Often, these studies are closest to the clinical core of health care delivery and to evidence-based medicine (Shortell, Rundall, & Hsu, 2007). There is nothing wrong with promoting the application of scientific evidence when the quality of the evidence is high. The problem arises when the scope of evidence-based management is vastly overextended.

 

Focus on Systematic Reviews, Not on Individual Studies

When we do study simple systems in stable environments, good science results from the accumulation of findings and systematic reviews of them. Forcing researchers to draw implications for practitioners from single studies, as we do in this journal, is a questionable practice. We are fortunate that such recommendations rarely form the basis of decision making in organizations.

 

Compared with those of single studies, the findings of systematic reviews are less dramatic, more time-consuming to aggregate, and rarely surprising. Systematic reviews often raise more questions than they answer or conclude that "it depends," as did the review cited earlier (Wietecha et al., 2009). Realistically, Pawson (2006, p. 176) wrote, "…systematic review cannot browbeat the evidence into delivering unconditional and universal verdicts on the efficacy of interventions." Further, by the time evidence has percolated through science, most findings of systematic reviews will strike managers as well-accepted facts of managerial life. It takes many years to establish the scientific validity of evidence. This time lag assigns researchers a less heroic but more appropriate role in shaping the management of organizations.

 

Use Methods Appropriate for the Unit of Analysis

Our traditional methodological tools are too simple to attack complex social systems. Longitudinal, multimethod case studies and qualitative studies hold more promise for the development of evidence in complex systems (Begun, Zimmerman, & Dooley, 2003). On the quantitative front, agent-based modeling and system dynamics modeling, along with social network analysis, are emerging as reasonable ways to better understand complexity. Researchers often need continuing education to develop a greater appreciation of these methods.
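
 

For readers less familiar with these tools, the flavor of agent-based modeling can be conveyed with a minimal sketch, written here in Python. The model, its parameters, and its adoption rule are hypothetical, invented only to show how simple interaction rules among agents produce system-level behavior: a management practice diffuses (or fails to diffuse) through a peer network of organizational units.

    import random

    # Toy agent-based model: diffusion of a management practice across a
    # peer network of organizational units. All names and parameters are
    # illustrative assumptions, not drawn from this article or any cited study.
    random.seed(42)

    N_AGENTS = 50    # organizational units (agents)
    N_PEERS = 4      # peers each unit observes
    THRESHOLD = 0.5  # adopt once half of observed peers have adopted
    N_STEPS = 20     # simulated time periods

    # Each agent observes a fixed random sample of other agents.
    peers = {i: random.sample([j for j in range(N_AGENTS) if j != i], N_PEERS)
             for i in range(N_AGENTS)}

    # Seed a handful of early adopters.
    adopted = {i: i < 3 for i in range(N_AGENTS)}

    for step in range(N_STEPS):
        snapshot = dict(adopted)  # update all agents synchronously
        for i in range(N_AGENTS):
            if not snapshot[i]:
                peer_share = sum(snapshot[j] for j in peers[i]) / N_PEERS
                if peer_share >= THRESHOLD:
                    adopted[i] = True
        print(f"step {step:2d}: {sum(adopted.values()):3d} of {N_AGENTS} units have adopted")

Even a toy model of this kind displays the threshold effects and path dependence that cross-sectional designs cannot capture: whether the practice sweeps the system or stalls depends on the seeding and the network structure rather than on any single variable.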

 

Conclusions

Pagels' (1988) "empire of knowledge" will have to remain a dream for many decades to come. Our job is to do good science; it is managers' job to use it or not. To dress up our findings, label them evidence, and ship them off to organizations before they have stood the test of accumulation and validation is dangerous to our craft, imperious, and damaging to organizational decision making. Wishing away complexity to generate management or policy impact betrays the integrity of our science.

 

James W. Begun, PhD

 

References

 

Arndt, M., & Bigelow, B. (2009). Evidence-based management in health care organizations: A cautionary note. Health Care Management Review, 34(3), 206-213.

 

Begun, J. W., Zimmerman, B., & Dooley, K. J. (2003). Health care organizations as complex adaptive systems. In S. S. Mick & M. E. Wyttenbach (Eds.), Advances in health care organization theory (pp. 253-288). San Francisco: Jossey-Bass.

 

Das, T. K. (2003). Managerial perceptions and the essence of the managerial world: What is an interloper business executive to make of the academic-researcher perceptions of managers? British Journal of Management, 14, 23-32.

 

Kaissi, A. A., & Begun, J. W. (2008). Fads, fashions, and bandwagons in health care strategy. Health Care Management Review, 33, 94-102.

 

Pagels, H. R. (1988). The dreams of reason: The computer and the rise of the sciences of complexity. New York: Simon and Schuster.

 

Pawson, R. (2006). Evidence-based policy: A realist perspective. Thousand Oaks, CA: Sage.

 

Shortell, S. M., Rundall, T. G., & Hsu, J. (2007). Improving patient care by linking evidence-based medicine and evidence-based management. Journal of the American Medical Association, 298, 673-676.

 

Sterman, J. D. (2006). Learning from evidence in a complex world. American Journal of Public Health, 96, 505-514.

 

Wietecha, M., Lipstein, S. H., & Rabkin, M. T. (2009). Governance of the academic health center: Striking the balance between service and scholarship. Academic Medicine, 84, 170-176.