Authors

  1. Sumner, Laura MSN, MEd, MBA, ANP-BC, ONC
  2. Burke, Sheila M. MSN, CCRN
  3. Chang, Lin-Ti MSN, RN-BC, ANP-BC, CCRN
  4. McAdams, Mary MEd, RN-BC
  5. Jones, Dorothy A. EdD, RNC, FAAN

Abstract

The purpose of this descriptive study was to evaluate knowledge retention over time and clinical application of basic arrhythmia knowledge following exposure to an orientation program. Data showed significant differences in knowledge retention at 4 weeks and clinical application in rhythm identification using simulation at 3 months.

 

Article Content

Little evidence exists to support innovative teaching strategies that ensure competency in cardiac arrhythmia identification, retention of knowledge related to arrhythmias, and application of this knowledge in clinical practice. Nonetheless, registered nurses (RNs) must be skilled and competent in identifying both basic and lethal arrhythmias and addressing these changes by either initiating resuscitation or notifying the physician.

 

A basic arrhythmia program taught during the first week of central hospital nursing orientation provides nurses with critical information essential to patient safety and quality care. However, no studies have documented RNs' knowledge retention or clinical application of the arrhythmia content over a specific period. The purpose of this study was to measure knowledge retention over time and clinical application of basic arrhythmia knowledge using simulation after exposure to an orientation program on basic arrhythmias.

 

REVIEW OF LITERATURE

A literature review was conducted using CINAHL (Cumulative Index to Nursing and Allied Health Literature, 1982 to December 2010), MEDLINE, and the following keywords: learning outcomes, summative evaluation, knowledge retention, arrhythmia knowledge, nurses' knowledge, and skill retention. Articles included in the review were studies identifying clinical knowledge and outcomes after education or continuing education, arrhythmia knowledge, simulation use, and/or interpretation of electrocardiograms (ECGs).

 

Knowledge for Practice

Clinical knowledge refers to knowledge embedded in the practice of nursing (Benner & Wrubel, 1982). Proficiency in ECG/arrhythmia interpretation requires a combination of knowledge, skill, and practical clinical experience (Salerno, Alguire, & Waxman, 2003). In 2003, the American College of Physicians identified competency evaluation of ECG training as critical and noted that little evidence is available on the training needed to maintain skill levels (Salerno et al., 2003). ECG interpretation is a fundamental part of emergency medicine resident training, yet programs use a variety of methods to determine competency: 25% of programs use formal testing and 41% use informal testing, with a combination of formal testing and clinical observation deemed the best way to determine competency (Pines, Perina, & Brady, 2004). Keller and Raines (2005) designed a qualitative study of arrhythmia knowledge with a twofold objective: to identify and describe critical care nurses' perceptions of arrhythmia knowledge and to develop levels of arrhythmia competency. The study found a deficit in nurses' ability to identify specific arrhythmias. However, there is little information about critical care nurses' competency in ECG interpretation and a paucity of evidence to support the content taught or the impact of nurses' knowledge of rhythm strip interpretation on patient outcomes (Keller & Raines, 2005). In 2009, Kaakinen and Arwood conducted a systematic analysis of the nursing simulation literature and verified the need for more research investigating the efficacy of simulation for improving student learning (Kaakinen & Arwood, 2009).

 

Knowledge Retention

Knowledge and skill retention after cardiopulmonary resuscitation (CPR) training is well documented in the literature. Hamilton (2005) conducted an integrative literature review and meta-analysis examining factors that enhance retention of knowledge and skills during and after resuscitation training and found that skills and knowledge decline over time; to maintain skills, training should occur frequently and reflect situations nurses may face in practice. Broomfield (1996) described a quasi-experimental study investigating the retention of basic CPR skill and knowledge by qualified nurses following a course in professional development. Broomfield's study validated other research concluding that knowledge and skill deteriorate in as little as 10 weeks if not used or updated regularly.

 

Evaluating Clinical Application of Knowledge

In April 2003, the Institute of Medicine released a report describing areas in need of change in the education of health professionals (Institute of Medicine, Committee on Quality of Health Care in America, 2003). Improved education systems should provide updated curricula, evaluation, and student competencies in both nursing and medical education programs (Epstein & Hundert, 2002; Hand, 2006; Klein, 2006; Lenburg, 1999). Assessment to validate ongoing nursing competency and meet regulatory standards is well documented in the literature (Arcand & Neumann, 2005; Bradley & Huseman, 2003; Landry, Oberleitner, & Borazjani, 2006). However, no clear consensus exists on what constitutes continuing competence or how to measure it (Landry et al., 2006). As patient needs and care environments become more complex, nurses must attain the requisite competencies to deliver high-quality care (Institute of Medicine, 2010).

 

Professional Development-Arrhythmia

Continuing education has been the method required by state boards of nursing for recertification and licensure renewal; however, there is a growing belief that mandatory continuing professional education does not guarantee competence (Whittaker, Carson, & Smolenski, 2000). In 1992, Abruzzese developed an evaluation model describing four levels of continuing education evaluation (Underwood, Dahlen-Hartfield, & Mogle, 2004). The first two levels describe the learner's satisfaction with the program and achievement of its objectives; these assess the individual's attitude toward or perception of learning, the "happiness index" (Dickerson, 2000). Levels 1 and 2 are the most common methods of evaluation for continuing education. Although these have a place in continuing education, neither addresses the goal of increasing the nurse's ability to provide quality care (Dickerson, 2000). The American Society for Training and Development found that fewer than 20% of organizations conduct evaluations at the application level (Level 3) or higher because of the resources and time involved (Horton, 2001).

 

Performance Evaluation

The third level, outcome evaluation, focuses on changes in performance behavior that continue after a program. Brunt (2000) designed a study to assess the effect of a workshop on behavioral change: 70 participants completed a questionnaire on perceived expertise before the workshop, immediately after it, and 3 months later. Findings were significant when each of four variables influencing behavior change was correlated with actions and expertise 3 months after the workshop (Brunt, 2000). Self-reporting was a limitation of this study.

 

Recent literature supports the use of simulation not only as an effective teaching strategy but also as a safe way to evaluate and apply clinical knowledge and critical thinking skills. Eaves and Flagg (2001) described the use of simulation with a group of new graduate Air Force nurses and found a remarkable level of confidence and an increased ability to perform skills after exposure to the simulated learning environment. The new graduates' preceptors corroborated these findings and reported that orientation time was decreased.

 

The fourth level is impact evaluation, which focuses on operational results such as increased quality of care and cost reduction. Little research demonstrates these higher levels of evaluation of education on clinical care and patient outcomes in a healthcare setting.

 

PURPOSE OF THE STUDY

The purpose of this study was to evaluate RNs' knowledge retention over time and clinical application of basic arrhythmia knowledge using simulation after exposure to an orientation program on basic arrhythmias.

 

RESEARCH QUESTIONS

The study's specific research questions were as follows:

 

1. Is there a difference in pretest and posttest scores related to cardiac arrhythmia knowledge on a basic arrhythmia test (multiple choice, anatomy, and rhythm strips) following exposure to a program on arrhythmias during central hospital orientation?

 

2. Is there a difference between posttest scores on the basic arrhythmia test and retention-of-knowledge scores for similar arrhythmias from a simulated arrhythmia experience?

 

3. Is there a relationship between pretest and posttest achievement scores on the basic arrhythmia test and nurse-identified learning preferences?

 

 

METHODS

Permission to conduct the study was granted by the Partners Human Research Committee Institutional Review Board at the Massachusetts General Hospital. During central department orientation, a written script was read to participants, and a study fact sheet was given to all nurses by the principal investigator and/or coinvestigators. The RNs who consented to take part in the study completed a demographic information sheet.

 

Design

The study used a pretest/posttest descriptive design to evaluate basic arrhythmia knowledge retention and its clinical application. Collecting data both before and after an intervention is appropriate for measuring change and can reveal differences between groups and change within groups (Polit & Beck, 2008).

 

Sample

A convenience sample of 138 newly hired full-time and part-time RNs was recruited during central hospital orientation over an 18-month period. To control for a 25% attrition rate, a general power analysis program (G*Power) was used to determine the sample size; 125 subjects provide 92% power to detect a moderate effect (0.15) at a significance level of .05 (Erdfelder, Faul, & Buchner, 1996). Per the eligibility criteria, pediatric and travel nurses were excluded from the sample. Sixty-two RNs (45%) completed the entire study (pretest, posttest, and simulation); 102 (74%) completed only the pretest and posttest. Seventy-three percent held a bachelor of science degree in nursing. Table 1 presents demographic information on the study sample.

  
TABLE 1. Demographic Descriptors
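The sample size calculation above can be reproduced in outline with any power analysis library. The sketch below uses Python's statsmodels in place of G*Power; the article does not state which test family G*Power was configured for, and "moderate effect (0.15)" may refer to a metric other than Cohen's d (e.g., f or f-squared), so the paired t test and the effect size used here are assumptions for illustration only.

```python
# A minimal sketch of a G*Power-style a priori sample size calculation.
# The test family (paired t test) and the effect size are hypothetical;
# the article's reported figures cannot be reproduced without knowing
# which effect-size metric the authors entered into G*Power.
from statsmodels.stats.power import TTestPower

analysis = TTestPower()  # one-sample / paired t test power calculations

# Solve for the number of completers needed for 92% power at alpha = .05.
n_required = analysis.solve_power(
    effect_size=0.35,        # hypothetical Cohen's d, not from the article
    alpha=0.05,
    power=0.92,
    alternative="two-sided",
)
print(f"completers required: {n_required:.0f}")

# Inflate recruitment to absorb the 25% attrition the authors anticipated.
n_recruit = n_required / (1 - 0.25)
print(f"recruit at least: {n_recruit:.0f}")
```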

Procedure

At Massachusetts General Hospital, all RNs are required to pass a basic arrhythmia examination with a score of 80% or greater by the end of orientation. A time series approach was used for this study:

1. Newly hired RNs consenting to participate completed a demographic information sheet and a written 30-item multiple-choice instrument (pretest) 1 day before the arrhythmia program.

2. Study participants received the intervention by attending a 4-hour basic arrhythmia program on the second day of nursing orientation.

3. All study participants were notified by e-mail and a written memo to return 4 weeks after the arrhythmia program to complete the posttest, a 30-item multiple-choice instrument limited to a 1-hour maximum.

4. Study participants were notified by e-mail and a written memo to return 3 months after the original arrhythmia program to complete a posttest using simulated arrhythmias. Eight scenarios with simulated rhythms were presented over 20 minutes; the nurses viewed, identified, and recorded the rhythms.

At the completion of the study, participants were given a survey to describe their experience with the study and with caring for monitored patients, identify resources reviewed during the study period, and provide qualitative comments.

 

Instruments

Four instruments were used in this study to evaluate competence and knowledge: (1) a demographic information sheet that included age, gender, language, nursing education, nursing experience, arrhythmia experience, and self-identified learning preferences; (2) a pretest/posttest consisting of 30 items covering multiple choice, heart rate/interval calculation, fill-in-the-blank, and rhythm identification; (3) a simulation session requiring identification of eight scenarios with simulated rhythms; and (4) a post-simulation survey with open-ended comments on resources used, frequency of caring for patients with arrhythmias, and feedback on the study. The arrhythmia assessment tool was pilot tested with a sample of 15 newly hired RNs to determine the clarity of questions, effectiveness of test instructions, completeness of response sets, time required to complete the examination, and success of data collection techniques. Revisions to the assessment tool and procedures were made based on written feedback from the newly hired RNs and data from the pilot study (Burns & Grove, 2001). The revised test was given pre- and post-intervention. Scores were calculated by correct/incorrect responses. Four orientation coordinators assessed the face and content validity of the basic arrhythmia program (Burns & Grove, 2001). All study investigators contributed to the development of the basic arrhythmia content and taught it in central hospital nursing orientation.

 

Data Entry

A time series data collection was used, and data entry was completed by the principal investigator and coinvestigators. Each item of the instrument was coded numerically so it could be traced back for data analysis. All data were verified by a second study investigator.

 

Data Analysis

Data were managed and analyzed using SPSS for Windows (Version 15.0; SPSS, Inc., Chicago, IL). Missing data were managed by mean substitution and case deletion. Written comments provided insights into participant responses. Descriptive statistics were used to examine differences in aggregate, and paired t tests were used for pretest and posttest achievement scores. Pearson's chi-square test was computed to investigate the distribution of learner preferences across pretest and posttest achievement scores. Means and standard deviations were reported for continuous variables and frequencies with percentages for categorical variables. Statistical significance was set at p < .05.
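For readers who want to replicate this analytic approach outside SPSS, the following is a minimal sketch in Python using scipy. All data here are simulated, and the pass/fail contingency layout for the chi-square test is an assumption for illustration; the article reports only which tests were run, not the underlying tables.

```python
# A minimal sketch of the analyses described above on simulated data;
# variable names and values are illustrative, not the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Paired t test: each nurse's pretest vs. posttest achievement score.
pretest = rng.normal(70, 10, size=102)
posttest = pretest + rng.normal(8, 6, size=102)   # simulated improvement
t_stat, p_value = stats.ttest_rel(pretest, posttest)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")

# Pearson's chi-square: learner preference (rows) against a hypothetical
# pass/fail split on the posttest (columns), as a contingency table.
table = np.array([[40, 11],   # kinesthetic
                  [35, 8],    # visual
                  [4, 2]])    # auditory
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")

# Descriptive statistics: mean (SD) for a continuous variable.
print(f"posttest mean = {posttest.mean():.1f} (SD {posttest.std(ddof=1):.1f})")
```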

 

RESULTS

The study showed a significant difference in learning (p < .01) between aggregate pretest and posttest scores when compared using a t test. Comparison of the pretest and posttest variables (multiple choice, anatomy, and rhythms) in achievement test scores also showed a significant difference (p < .01) in paired t test results (see Table 2). These findings support Research Question 1: knowledge retention of basic arrhythmia content was achieved 4 weeks following exposure to a program on basic arrhythmias during central hospital orientation.

Posttest scores on the basic arrhythmia test and scores from the simulated arrhythmia experience (Research Question 2) showed no significant difference at 3 months: nurses retained the knowledge learned in the basic arrhythmia class. Data showed knowledge retention and clinical application in rhythm identification between the posttest scores and simulation testing at 3 months (see Table 3).

Demographic data revealed the learning preferences of RNs as kinesthetic (51%) and visual (43%); only 5% of the sample self-identified as auditory learners. Research Question 3 addressed the relationship between achievement test scores and nurse-identified learning preferences. Paired two-tailed t tests showed significant differences for visual and kinesthetic learners but not for auditory learners; pretest and posttest achievement scores did not differ significantly between visual and kinesthetic learners. Qualitative comments overwhelmingly supported simulation and scenarios, which bring learning and clinical experience together.

  
TABLE 2. Statistical Analysis: Paired t Tests
 
TABLE 3. Posttest and Simulation Statistics

DISCUSSION

The study showed a significant difference in knowledge retention pre- and post-program. Three months following exposure to the arrhythmia program, data showed preservation of retained knowledge and transfer of that knowledge into clinical practice, as demonstrated in the simulation sessions.

 

LIMITATIONS

The results of this study may be limited by several factors. The sample size was smaller than desired because of nurses' inability to schedule posttest sessions. Additional reading or study prior to completing the posttest sessions may have influenced nurses' responses. The 5% of participants who identified as auditory learners may not have had their learning preferences met during the study. Some nurses had more clinical experience with arrhythmia identification on clinical units prior to completing the simulation.

 

IMPLICATIONS

It is important for academic nursing programs and hospital educators to include more in-depth basic arrhythmia content in academic and hospital orientation programs. To meet different learning styles and the needs of diverse age groups, emphasis should be placed on designing an adaptable learning environment using a variety of methodologies (Clark, 2000). Incorporating clinical simulation as a learning experience in a basic arrhythmia program is crucial in nursing curricula and hospital orientation programs to enhance practice and ensure the delivery of safe, high-quality care.

 

This study measured proficiency in basic arrhythmia interpretation before and after the basic program and at 3 months using clinical simulation. The results have refocused attention on the basic arrhythmia program in nursing orientation and provide a basis for integrating the learning methods used in this study to improve the current program and support nurses' learning preferences. Updated course content is now provided online, with a 2-hour follow-up review session integrating clinical scenarios and simulation to enhance and reinforce learning during the third week of nursing orientation. The study findings contribute to the body of nursing knowledge and to the evaluation of clinical practice needed to ensure cost-effective, patient-centered, high-quality care.

 

ACKNOWLEDGMENT

This study was supported by an award from the Yvonne L. Munn Center for Nursing Research at the Massachusetts General Hospital.

 

References

 

Arcand, L. L., & Neumann, J. A. (2005). Nursing competence assessment across the continuum of care. Journal of Continuing Education in Nursing, 36(6), 247-254.

Benner, P., & Wrubel, J. (1982). Skilled clinical knowledge: The value of perceptual awareness. Nurse Educator, 7(3), 11-17.

Bradley, D., & Huseman, S. (2003). Validating competency at the bedside. Journal for Nurses in Staff Development, 19(4), 165-173.

Broomfield, R. (1996). A quasi-experimental research to investigate the retention of basic cardiopulmonary resuscitation skills and knowledge by qualified nurses following a course in professional development. Journal of Advanced Nursing, 23(5), 1016-1023.

Brunt, B. (2000). Continuing education evaluation of behavior change. Journal for Nurses in Staff Development, 16(2), 49-54.

Burns, N., & Grove, S. (2001). The practice of nursing research (4th ed.). Philadelphia, PA: W. B. Saunders Company.

Clark, D. (2000). Learning styles: Or, how we go from the unknown to the known. Retrieved from http://www.nwlink.com/donclark/hrd/learning/styles.html

Dickerson, P. S. (2000). A CQI approach to evaluating continuing education: Processes and outcomes. Journal for Nurses in Staff Development, 16(1), 34-40.

Eaves, R., & Flagg, A. (2001). The U.S. Air Force pilot simulated medical unit: A teaching strategy with multiple applications. Journal of Nursing Education, 40(3), 110-115.

Epstein, R. M., & Hundert, E. M. (2002). Defining and assessing professional competence. Journal of the American Medical Association, 287(2), 226-234.

Erdfelder, E., Faul, F., & Buchner, A. (1996). GPOWER: A general power analysis program. Behavior Research Methods, Instruments, & Computers, 28, 1-11.

Hamilton, R. (2005). Nurses' knowledge and skill retention following cardiopulmonary resuscitation training: A review of the literature. Journal of Advanced Nursing, 51(3), 288-297.

Hand, H. (2006). Assessment of learning in clinical practice. Nursing Standard, 21(4), 48-56.

Horton, W. (2001). Evaluating e-learning. Alexandria, VA: American Society for Training and Development.

Institute of Medicine. (2010). The future of nursing: Focus on education. Washington, DC: National Academy Press.

Institute of Medicine, Committee on Quality of Health Care in America. (2003). Health professions education: A bridge to quality. Washington, DC: National Academy Press.

Kaakinen, J., & Arwood, E. (2009). Systematic review of nursing simulation literature for the use of learning theory. International Journal of Nursing Education Scholarship, 6(1), 17-18.

Keller, K. B., & Raines, D. A. (2005). Arrhythmia knowledge: A qualitative study. Heart & Lung, 34(5), 309-316.

Klein, C. J. (2006). Linking competency-based assessment to successful clinical practice. Educational Innovation, 45(9), 379-383.

Landry, M., Oberleitner, M. G., & Borazjani, J. G. (2006). Education and practice collaboration: Using simulation and virtual reality technology to assess continuing nurse competency in the long-term acute care setting. Journal for Nurses in Staff Development, 22(4), 163-169.

Lenburg, C. B. (1999). Redesigning expectations for initial and continuing competence for contemporary nursing practice. Online Journal of Issues in Nursing. Retrieved from http://www.nursingworld.org/ojin/topic10/tpc10_1.htm

Pines, J. M., Perina, D. G., & Brady, W. (2004). Electrocardiogram interpretation training and competency assessment in emergency medicine residency programs. Academic Emergency Medicine, 11(9), 982-984.

Polit, D., & Beck, C. T. (2008). Nursing research: Generating and assessing evidence for nursing practice (8th ed.). Philadelphia, PA: Wolters Kluwer/Lippincott Williams & Wilkins.

Salerno, S. M., Alguire, P. C., & Waxman, H. S. (2003). Training and competency evaluation for interpretation of 12-lead electrocardiograms: Recommendations from the American College of Physicians. Annals of Internal Medicine, 139(9), 747-750.

Underwood, P., Dahlen-Hartfield, R., & Mogle, B. (2004). Continuing professional education: Does it make a difference in perceived nursing practice? Journal for Nurses in Staff Development, 20(2), 90-98.

Whittaker, S., Carson, W., & Smolenski, M. C. (2000). Assuring continued competence-policy questions and approaches: How should the profession respond? Online Journal of Issues in Nursing. Retrieved from http://www.nursingworld.org/ojin/topic10/tpc10_4.htm