
"If we want more evidence-based practice, we need more practice-based evidence." - - Green LW, Am J Public Health, 20061


Clinical intervention research traditionally has evolved through several stages before becoming truly useful for clinical decision-making. Broadly speaking, early research studies investigate intervention efficacy (Does it work at all, under the best of circumstances?) while later studies examine effectiveness (Does it work better than usual care in the real world?) and perhaps efficiency (Are the benefits worth the costs?).2 The gold standard study design for efficacy research is an explanatory randomized controlled trial (eRCT), in which the new intervention is compared to no intervention (or a placebo), and tight restrictions to ensure internal validity are employed. After several such studies provide evidence suggesting that an intervention does work under ideal conditions, effectiveness can be investigated through pragmatic randomized controlled trials (pRCT). Pragmatic trials compare the new intervention to an existing intervention, are designed with relaxed control methods to maximize external validity, and allow more direct generalization of study findings to real world settings.


Proponents of evidence-based and evidence-informed practice are justifiably eager to see more rapid and streamlined translation of knowledge from research to clinical application.3-5 This rising consensus that more pRCTs are needed exists alongside disagreement regarding their appropriate place in the investigation continuum.6 Highly regarded investigators differ in their opinions about whether numerous eRCTs are necessary prior to pRCTs.


Some researchers argue that intervention studies should not jump to the pRCT stage until sufficient prior evidence of efficacy has been reported.7,8 Conversely, other researchers decry the slow speed and high cost of an extended, sequential approach when clinicians and their clients need applicable evidence now.1,3,4 Investigators in this camp argue that for interventions with predictably minimal risk, pRCTs should proceed sooner rather than later, without waiting for numerous eRCTs to be completed first. Many physical therapy interventions would easily qualify as minimal risk, assuming they are delivered appropriately.


Providers perceive pRCTs to be more clinically relevant, as comparative effectiveness results may better inform decisions about which intervention to recommend for whom.9,10 Would clinical implementation of evidence-based interventions be improved if more pRCTs were available for dissemination? If clinical journals like the JGPT seek to publish evidence that supports the integration of research and practice, should they recruit and publish more pRCTs? The answers are not as straightforward as they may appear.


On the affirmative side of the debate, pRCTs are well suited to geriatric rehabilitation research because the older adult population is by nature heterogeneous. Insofar as is feasible, pRCTs attempt to admit "all comers" with the targeted health condition who would typically be seen in a given type of clinical setting.11 Older adults from broad age ranges and with chronic diseases are eligible to participate. Interventions are provided by clinicians, not researchers. Data are often gathered from medical records, and data analysis is conducted on outcomes from all participants, whether or not every participant received exactly the same type or amount of care.12 These characteristics make it easier to conduct multi-site pRCT studies that recruit and treat large numbers of participants within a short time frame. Compared to equally sized eRCTs, pRCTs can be conducted more quickly and at lower cost.


Large numbers of participants are needed for pRCTs because high variability in participant characteristics, intervention implementation, and adherence is expected. Both the requirement for large sample sizes and the expected variability could be seen as negatives. In particular, if variability is excessive, truly important findings may be hidden or diluted.6 However, when sufficiently large participant samples are achieved, sub-group analysis may reduce variability and be especially helpful in discriminating between clients who did or did not benefit from the intervention.


For example, Mahoney and colleagues (2007) compared the effects of a home-based fall prevention exercise program to a home safety modifications program in a large group of community-dwelling older adults.13 Overall, they found no between-group difference in the rate of falls (however, see also Mahoney, 2010).14 But sub-group analysis found that for those participants with cognitive decline who lived with a caregiver, the rates of falls, hospitalizations, nursing home admissions, and nursing home days were all significantly lower following the exercise intervention.


Pragmatic clinical trials typically produce smaller and more variable treatment effects than do eRCTs. This may occur when both of the interventions studied in a pRCT have positive effects, thus yielding smaller differences between groups. Alternatively, it is possible that a new intervention will be less effective than usual care. An increased emphasis on conducting studies with potentially less dramatic findings may concern researchers worried about journal bias against publication of studies with small effects or negative results. Nonetheless, the increased generalizability of pRCT findings can be a plus for clinicians, who obtain more accurate real-life estimates of the relative benefits of new interventions compared to usual care.


Support for an increased emphasis on pRCTs is far from unanimous. Legitimate professional criticisms note that any move away from the tried-and-true "efficacy first" model could undermine the quality and value of all evidence.7,8 The line between studies of efficacy versus effectiveness is not sharp or stable. Most clinical intervention studies are not at the far ends of the eRCT to pRCT spectrum but have elements of both. To avoid confusion and increase transparency, scales that estimate the degree of pragmatism have been developed.15,16 Using such scales helps address another criticism, that studies may be incorrectly labeled as pragmatic when they are not.17 A third criticism targets the actual conduct of pragmatic trials, pointing out that randomization and analysis methods specific to pRCTs are needed but not always used, thus reducing the validity of findings.


Critics further argue that overly simplistic interpretation of pRCT results can too easily occur. The application of pragmatic study findings to real-world decisions cannot be completely direct but requires judgment born of experience with the populations and environments studied.18 Researchers are especially worried about the misuse of pRCT results in policy-making decisions.19 Clinicians tend to make individual patient care decisions using multiple sources of information, recognizing the complexities of individuals, families, and social support contexts. Policy-makers, however, may be pressed to find "one size fits all" solutions that too easily disregard the messiness of human complexity and nuance. This can lead to overly broad and at times inappropriate policy applications.


Many barriers to increased use of evidence-based clinical interventions exist in the real world. Providing more generalizable evidence may reduce one barrier, but other obstacles that may be more powerful drivers of clinical behavior (such as payment models and organizational productivity demands) remain. Unless the most influential barriers are overcome, a shift to pRCTs may not make a meaningful difference.6 That is true, but it does not negate the valuable contribution an increased emphasis on pRCTs could make to improved knowledge translation.


In previous issues I have indicated that the goals of the JGPT Editorial Team include increasing the scientific rigor, clinical relevance, and usefulness of JGPT articles. Properly conducted pRCTs would seem to meet all three criteria. The JGPT welcomes the submission of high-quality pRCTs in support of our firm commitment to promoting the integration of research in geriatric physical therapy practice. A balance of both eRCTs and pRCTs would serve JGPT readers well.




1. Green LW. Public health asks of systems science: to advance our evidence-based practice, can you help us get more practice-based evidence? Am J Public Health. 2006;96(3):406-409.

2. Merali Z, Wilson JR. Explanatory versus pragmatic trials: an essential concept in study design and interpretation. Clin Spine Surg. 2017;30(9):404-406.

3. Green LW. Making research relevant: if it is an evidence-based practice, where's the practice-based evidence? Fam Pract. 2008;25(S1):i20-i24.

4. Tuzzio L, Larson EB, Chambers DA, et al. Pragmatic clinical trials offer unique opportunities for disseminating, implementing, and sustaining evidence-based practices into clinical care: proceedings of a workshop. Healthcare. 2019;7(1):51-57.

5. NIH Collaboratory, Health Care Systems Research Collaboratory, Rethinking Clinical Trials. Introduction to pragmatic clinical trials: how pragmatic clinical trials bridge the gap between research and care. Accessed October 28, 2019.

6. Delitto A. Pragmatic clinical trials: implementation opportunity, or just another fad? Phys Ther. 2016;96(2):137-138.

7. Kent DM, Kitsios G. Against pragmatism: on efficacy, effectiveness and the real world. Trials. 2009;10:48. doi:10.1186/1745-6215-10-48.

8. Treweek S, Zwarenstein M. Making trials matter: pragmatic and explanatory trials and the problem of applicability. Trials. 2009;10:37. doi:10.1186/1745-6215-10-37.

9. Chalkidou K, Tunis S, Whicher D, Fowler R, Zwarenstein M. The role for pragmatic randomized controlled trials (pRCTs) in comparative effectiveness research. Clin Trials. 2012;9(4):436-446.

10. Nyenhuis SM, Apter AJ, Schatz M, Krishnan JA. Comparative effectiveness trials in asthma - how will I recognize one? Curr Opin Pulm Med. 2018;24(1):78-82.

11. Ford I, Norrie J. Pragmatic trials. N Engl J Med. 2016;375(5):454-463.

12. Zuidgeest MGP, Goetz I, Groenwold RHH, et al. Series: pragmatic trials and real world evidence: paper 1. Introduction. J Clin Epidemiol. 2017;88:7-13.

13. Mahoney JE, Shea TA, Przybelski R, et al. Kenosha County Falls Prevention Study: a randomized, controlled trial of an intermediate intensity, community-based multifactorial falls intervention. J Am Geriatr Soc. 2007;55(4):489-498.

14. Mahoney JE. Why multifactorial fall-prevention interventions may not work: comment on "Multifactorial intervention to reduce falls in older people at high risk of recurrent falls". Arch Intern Med. 2010;170(13):1117-1119.

15. Loudon K, Treweek S, Sullivan F, et al. The PRECIS-2 tool: designing trials that are fit for purpose. BMJ. 2015;350:h2147.

16. Baier RR, Jutkowitz E, Mitchell SL, McCreedy E, Mor V. Readiness assessment for pragmatic trials (RAPT): a model to assess the readiness of an intervention for testing in a pragmatic trial. BMC Med Res Methodol. 2019;19(1):156. doi:10.1186/s12874-019-0794-9.

17. Dal-Ré R, Janiaud P, Ioannidis JPA. Real-world evidence: how pragmatic are randomized controlled trials labeled as pragmatic? BMC Med. 2018;16:49. doi:10.1186/s12916-018-1038-2.

18. Zwarenstein M, Treweek S, Loudon K. PRECIS-2 helps researchers design more applicable RCTs while CONSORT Extension for Pragmatic Trials helps knowledge users decide whether to apply them. J Clin Epidemiol. 2017;84:27-29.

19. Cowen N, Virk B, Mascarenhas-Keyes S, Cartwright N. Randomized controlled trials: how can we know "what works"? Critical Review. 2017;29(3):265-292.