Mitchell H. Katz, MD


As a public health director, I have been disappointed by how little help the published literature offers in identifying solutions to the pressing public health problems faced by my county. For example, there are few published intervention studies on reducing violence or on decreasing disparities in health outcomes. Reviews of the academic literature support my experience. A review of bibliographic sources in the United Kingdom found that only 0.4 percent of academic research focused on public health interventions.1 The Task Force on Community Preventive Services has identified only three interventions to prevent violence for which there is sufficient evidence to recommend them: early childhood visitation, therapeutic foster care, and universal school-based education.2,3 In the area of improving the cultural competence of healthcare, the task force found insufficient data to support the implementation of any intervention.2

 

The scarcity of published interventions does not reflect a lack of interest in public health topics. A large number of descriptive and risk factor studies are published on disparities in healthcare, violence, and other public health issues. In fact, many of these articles end with the classic phrase "Interventions are needed to ...," indicating that researchers are aware of the gap. In other fields, such as medicine, intervention trials are commonly published. Why are there so few public health intervention studies?

 

What Accounts for the Scarcity of Public Health Intervention Studies?

A number of factors account for the scarcity of public health intervention studies: they are harder to do, take longer, are more expensive and require more sustained funding, are more difficult to publish, and are not encouraged or rewarded in either the academic or the public health practice world to the same degree as other types of studies.

 

Descriptive and risk factor studies can be performed with a single cross-sectional sample, often using a database that someone else has already assembled. In contrast, intervention studies require, at a minimum, a measurement of the outcome before and after the intervention.

 

Because changes that occur after an intervention may be due to factors other than the intervention, most intervention studies require a comparison group. In the case of biomedical research, this can usually be accomplished by randomizing individual patients to different conditions. In evaluating public health interventions, randomization may not be feasible (eg, change of a law) or may be too politically charged (eg, offering a violence prevention program to randomly chosen teenagers). Although it may be possible to randomize higher-order units (eg, neighborhoods or schools), unless there are many units to sample, the groups may not be comparable. For example, randomizing four neighborhoods to receive an intervention versus standard of care will not necessarily result in balanced groups because neighborhoods differ in important ways and a sample of four may not be sufficient to balance baseline characteristics. In situations where randomization is not possible, investigators must use statistical adjustments such as multivariable modeling or propensity scores to approximate comparable groups. Because it is impossible to adjust statistically for unknown factors, differences between the groups on the outcomes of interest may remain confounded even after a nonrandomized trial has been adjusted for all known factors.
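
To make the adjustment concrete, the following is a minimal sketch of propensity score weighting in Python. It is my illustration, not a method from this article: the data are simulated, the single confounder ("poverty") is hypothetical, and the true intervention effect is fixed at +1.0 so the bias of the naive comparison is visible.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# A measured baseline factor that influences both who receives the
# intervention and the outcome (hypothetical confounder).
poverty = rng.normal(size=n)

# Nonrandom assignment: higher-poverty areas are more likely to be treated.
treated = rng.binomial(1, 1 / (1 + np.exp(-poverty)))

# True intervention effect is +1.0, but poverty also worsens the outcome.
outcome = 1.0 * treated - 2.0 * poverty + rng.normal(size=n)

# A naive comparison of treated versus untreated is confounded by poverty.
naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Estimate each subject's propensity (probability of treatment) from the
# measured factor, then weight by the inverse probability of the treatment
# actually received, which balances the groups on that factor.
e = LogisticRegression().fit(poverty.reshape(-1, 1), treated).predict_proba(
    poverty.reshape(-1, 1))[:, 1]
weighted = (np.average(outcome[treated == 1], weights=1 / e[treated == 1])
            - np.average(outcome[treated == 0], weights=1 / (1 - e[treated == 0])))

print(f"naive estimate:    {naive:+.2f}")     # biased well away from +1.0
print(f"weighted estimate: {weighted:+.2f}")  # close to the true +1.0

The weighting recovers the true effect only because the confounder was measured and modeled; as noted above, no amount of adjustment can balance groups on factors that were never measured.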

 

Intervention studies require the cooperation of the community and a more complicated human subjects review than descriptive or risk factor studies. Community-based participatory research has been a useful method of gaining community support and wisdom for intervention research, but the time requirements and the importance of ceding control to a broader decision-making group can be daunting to investigators.4,5

 

Evaluating public health interventions with broad social impacts may take longer than evaluating interventions in other fields. For example, an investigator may need to wait several years before evaluating the impact of quality childcare on decreasing racial disparities in health outcomes. In contrast, in the medical arena, drugs and devices often have their impact in a relatively short period of time.

 

Because intervention studies are harder to perform and take longer, they are more expensive and require more sustained funding. In the case of biomedical research, there is a profitable industry ready to bring promising drugs and medical devices to the marketplace. There is no similar industry for bringing promising social interventions to communities in need.

 

Intervention studies can be hard to publish because of the limitations inherent in intervention research. In comparison with descriptive and risk factor studies, intervention studies are more vulnerable to bias because of subjects dropping out, temporal changes in the problem, participants being exposed to naturally occurring influences, diffusion of the intervention into the control population, and (if nonrandomized) groups differing on measured or unmeasured characteristics at baseline. Publication bias6,7 may be more serious with intervention studies than with risk factor studies because the latter are more likely to yield at least one significant finding. In contrast, with an intervention study, there may be only one comparison that matters.
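
A back-of-the-envelope calculation makes the asymmetry plain (the figure of 20 candidate factors is my assumption for illustration, not from the article): even when no true effects exist, a study screening many risk factors will usually turn up something "significant," while a trial judged on one primary comparison will not.

# Chance of at least one false-positive "significant" finding at
# alpha = 0.05 when no true effects exist (illustrative numbers only).
alpha = 0.05
print(f"1 primary comparison: {alpha:.0%}")                 # 5%
k = 20  # assumed number of candidate risk factors screened
print(f"{k} screened factors: {1 - (1 - alpha) ** k:.0%}")  # about 64%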

 

The reward systems of academic institutions, the site of most research, favor descriptive and risk factor studies over intervention studies. This is not because of any prejudice against intervention studies; rather, academic institutions reward obtaining grants and publications, both of which are harder to achieve with intervention studies than with descriptive or risk factor studies.

 

The culture of public health also impedes rigorous intervention research. Within the public sector, there is often a push to "do something" without spending the time or money to determine what would be most effective. For example, Title 1 of the federal Ryan White Comprehensive AIDS Resources Emergency (CARE) Act provides $600 million a year to localities for services for persons infected with human immunodeficiency virus. No funds have been set aside for evaluation, with the unsurprising effect that few evaluations of Ryan White services have been performed.8 Similarly, the California Mental Health Services Act, a source of more than $1.5 billion of funding per year for mental health services, has no designated funds for evaluation.

 

In addition, government officials do not always welcome research, especially if the results may be politically unpopular. They may discourage specific projects, decline to implement the recommendations of studies, or even pressure investigators to suppress results. Also, public health departments often do not have the resources (staff, computer equipment, mentors) to facilitate research. The health department itself may be difficult to navigate; academic researchers interested in public health issues may find it difficult even to determine whom to talk to.

 

How Can We Increase the Number of Policy-Relevant Intervention Studies?

Investigator time, funding, and publication space are all finite resources. If we want to increase intervention research, we need to give such studies preference over descriptive and risk factor studies. In the funding arena, for example, the Robert Wood Johnson Foundation has issued a call for interventions to address disparities in healthcare.9 Descriptive and risk factor studies were not solicited, thereby ensuring that the money would support interventions. Government service initiatives should dedicate a percentage of funding (eg, 3%-5%) specifically to evaluating the effectiveness of the programs.

 

As journal editors and peer reviewers, we need to prioritize well-done intervention studies over well-done descriptive and risk factor studies. I have often wondered what the effect would be of a moratorium on the publication of descriptive and risk factor studies on disparities and violence. I'm willing to bet that it would yield many more intervention studies. A more modest proposal would be dedicating journal space to intervention studies. For example, the Journal of the American Medical Association solicited "interventions to improve health among the poor" for a theme issue.10

 

In judging the inevitable weaknesses of intervention studies, especially those performed at the population level, we must be sure that we are using the right standard. When a randomized study is unfeasible, a nonrandomized intervention study should be judged by whether it is the best study that could be performed to answer the question, not by whether a randomized study would have been better.11 On the other hand, we must demand that authors of nonrandomized interventions rigorously estimate the impact of biases. We must also encourage the development of new and better nonrandomized methods for evaluating interventions.

 

As mentors, we should encourage young investigators to pursue intervention research. When I meet with investigators interested in documenting risk factors related to poor health outcomes, I ask them how the results of the research will improve the situation. Often, they say that the study will help in designing interventions. When I probe further, they often have excellent ideas about the types of interventions that might make a difference, so I encourage them to consider an intervention study.

 

To improve the quality of intervention research, our public health schools should ensure that graduates are trained in how to conduct intervention studies. Our criteria for academic advancement must account for the longer time frame needed to complete intervention studies. This does not mean lowering standards; rather, criteria are needed for properly valuing completion of the different stages of intervention studies.

 

Closer collaboration between public health departments and academia could propel better intervention research. Public health departments understand the pressing questions and know how to gain community buy-in, how to implement and fund interventions, and how to institutionalize changes through legislation and/or regulation. Academic institutions understand the best methods of developing, testing, and evaluating interventions and of promulgating the results.

 

Unfortunately, cultural differences between academics and public health practitioners have impeded them from working more closely together.12,13 In particular, academia and public health departments have different missions (generating knowledge vs providing services)14 and different scopes of practice (academics tend to be narrower in their focus than public health practitioners). Prior negative experiences (eg, researchers who did not report back the results of their study and public health administrators who did not follow through on needed resources) may also discourage joint initiatives. The standard grant mechanisms of government and philanthropy, which fund specific time-limited projects, require that a project be fully conceived before funding begins, making it difficult for public health officials to involve academics early in the research process; many research opportunities are lost when service interventions are launched without collection of the baseline data needed to establish the effectiveness of the program.

 

Despite these challenges, there has been a wide range of successful collaborations between academia and public health departments.5,12,14-16 Some of these partnerships were formed in response to external funding for collaborative centers.5,16 Others grew organically from the need of public health departments for additional resources or expertise from university partners.12,14,15 Although successful at resolving some of the cultural differences between academia and public health departments, none of these collaborations has resulted in a sustained, focused effort on intervention research.

 

Using known strategies for strengthening ties between academics and health departments,13 collaborative centers should be created for developing, implementing, and evaluating interventions. To ensure that academics are involved at the time interventions are planned, collaborations should be funded on the basis of how compelling the problems are that the collaboration plans to address, the strength of the collaboration (ie, how likely it is that the people will work well together), and the assets the collaborators bring to the research (eg, strong working relationships with the community, statistical expertise for nonrandomized designs).

 

Funding for intervention collaborations should last a minimum of 5 years to give participants enough time to resolve the cultural differences that impede academics and practitioners from working together. Grants should allow collaborations to vary in the proportion of effort contributed by academia and by public health departments. Some public health departments have good connections with affected communities but would need to rely heavily on an academic institution to design and conduct the research. Other public health departments have research expertise and might rely on academia primarily for methodological and statistical consultation. Grants should be renewed on the basis of the success of the collaborations in producing policy-relevant interventions. Although additional funding for these centers would be welcome, diverting existing National Institutes of Health, Centers for Disease Control and Prevention, and philanthropic funding into these types of centers would yield more policy-relevant research than the existing individual grant mechanisms.

 

Conclusions

When I speak to academic groups about intervention research to address serious public health problems in San Francisco, there is tremendous enthusiasm, especially among people early in their training. However, when I discuss the realities of funding and publishing intervention research, and how those realities can affect academic promotion, there are many nodding heads, especially those covered by gray hair. If we are to make progress on complicated public health problems, we need to change this perception. We must prioritize funding, conducting, and publishing intervention research, including continued development of methods for nonrandomized designs. Specifically, sustained funding for cooperative efforts between academia and public health departments is more likely to produce policy-relevant research than the individual project model predominantly used by government and philanthropic funders. Unless we change the incentives to encourage more intervention studies, we will know more and more about public health problems that we cannot solve.

 

REFERENCES

 

1. Millward L, Kelly M, Nutbeam D. Public Health Intervention Research: The Evidence. London: Health Development Agency; 2003. http://www.nice.org.uk/niceMedia/documents/pubhealth_intervention.pdf. Accessed March 3, 2008.

 

2. Centers for Disease Control and Prevention. Guide to Community Preventive Services. http://www.thecommunityguide.org. Accessed March 16, 2007.

 

3. Centers for Disease Control and Prevention. The effectiveness of universal school-based programs for the prevention of violent and aggressive behavior: a report on recommendations of the Task Force on Community Preventive Services. MMWR Morb Mortal Wkly Rep. 2007;56(RR-7):1-12.

 

4. Minkler M, Blackwell AG, Thompson M, Tamir H. Community-based participatory research: implications for public health funding. Am J Public Health. 2003;93:1210-1213.

 

5. Metzler MM, Higgins DL, Beeker CG, et al. Addressing urban health in Detroit, New York City, and Seattle through community-based participatory research partnerships. Am J Public Health. 2003;93:803-811.

 

6. Callaham ML, Wears RL, Weber EJ, Barton C, Young G. Positive-outcome bias and other limitations in the outcome of research abstracts submitted to a scientific meeting. JAMA. 1998;280:254-257.

 

7. Klassen TP, Wiebe N, Russell K, et al. Abstracts of randomized controlled trials presented at the Society for Pediatric Research meeting: an example of publication bias. Arch Pediatr Adolesc Med. 2002;156:474-479.

 

8. Office of Inspector General. Ryan White evaluation systems. Title 1: grants to metropolitan areas. 1999. http://www.oig.hhs.gov/oei/reports/oei-05-98-00392.pdf. Accessed March 3, 2008.

 

9. Robert Wood Johnson Foundation. Finding answers: disparities research for change. http://www.rwjf.org/applications/solicited/cfp.jsp?ID=19833. Accessed March 3, 2008.

 

10. Flanagin A, Winker MA. Theme issue on poverty and human development: call for papers on interventions to improve health among the poor. JAMA. 2006;296:2970-2971.

 

11. Victora CG, Habicht J-P, Bryce J. Evidence-based public health: moving beyond randomized trials. Am J Public Health. 2004;94:400-405.

 

12. Swain GR, Bennett N, Etkind P, Ransom J. Local health department and academic partnerships: education beyond the ivy walls. J Public Health Manag Pract. 2006;12(1):33-36.

 

13. Conte C, Chang CS, Malcolm J, Russo PG. Academic health departments: from theory to practice. J Public Health Manag Pract. 2006;12:6-14.

 

14. Livingood WC, Goldhagen J, Little WL, Gornto J, Hou T. Assessing the status of partnerships between academic institutions and public health agencies. Am J Public Health. 2007;97:659-666.

 

15. Livingood WC, Goldhagen J, Bryant T, Wood D, Winterbauer N, Woodhouse LD. A community-centered model of the academic health department and implications for assessment. J Public Health Manag Pract. 2007;13:662-669.

 

16. Bruce TA, McKane SU, Brock RM. Taking a community-based public health approach: how does it make a difference? In: Bruce TA, McKane SU, eds. Community-Based Public Health: A Partnership Model. Washington, DC: American Public Health Association; 2000:99-108.