Authors

  1. Killingsworth, Erin PhD, RN

Article Content

Technology in nursing education enriches the educational experience, from teaching basic skills to supporting interprofessional education and evidence-based decision making.1,2 Less apparent is how technology is being used to assess and evaluate student learning in nursing courses. Computerized standardized testing has been discussed in the literature as a method of reinforcing the nursing curriculum and preparing students for the NCLEX-RN, but there is limited information on the types of tests used in prelicensure nursing programs for summative course grades.3,4 The purpose of this study was to determine how undergraduate nursing students are tested in nursing courses, specifically the method of test delivery, setting, technology, sources of test items, and influential factors in determining student assessment and evaluation.

 

Methods

Design and Participants

This study reports a secondary analysis of a Web-based survey of faculty (N = 127) in BSN programs in 31 states in the United States. Faculty were asked to describe current testing practices in a nursing course with a clinical component in which they contributed to test development. Further description of the design and sample are published elsewhere.5 Approval was obtained for the primary study from 2 universities' institutional review boards.

 

Instruments

In this analysis, data from the Demographics and Teaching Background Questionnaire5 and the adapted Evaluation of Learning Advisory Council (ELAC) survey6 were used to examine testing practices. The Demographics and Teaching Background Questionnaire was composed of 13 items related to information about the faculty member and nursing program.5 The adapted ELAC survey included 15 items asking respondents to rate the importance of selected factors (individual, institution, accreditation, etc.) on their decision making during assessment and evaluation using a 3-point Likert-type scale.5,6

 

Data Analysis

SPSS version 19 (IBM Corp, Armonk, New York) was used to conduct the secondary analysis. Descriptive statistics were used to describe the method of test delivery and other areas related to testing. Correlations between influential factors and sources of test items were explored with Spearman rank correlation coefficient.
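To illustrate the correlation method described above, the following is a minimal sketch using SciPy's `spearmanr` on hypothetical Likert-type ratings. All respondent values here are invented for illustration; the study's own analysis was conducted in SPSS on the actual survey responses.

```python
# Hedged illustration of the analysis method: a Spearman rank correlation
# between two Likert-type ratings. All values below are hypothetical; the
# study's analysis was run in SPSS on the actual survey data.
from scipy.stats import spearmanr

# Hypothetical 1-7 ratings from eight respondents: perceived influence of
# the institution vs. use of test items written by current course faculty.
institution_influence = [3, 5, 6, 2, 7, 4, 5, 6]
course_faculty_items = [4, 5, 7, 3, 6, 4, 6, 6]

# spearmanr ranks each variable and correlates the ranks, so it captures
# monotonic association without assuming interval-level measurement.
rho, p_value = spearmanr(institution_influence, course_faculty_items)
print(f"Spearman rho = {rho:.3f}, P = {p_value:.3f}")
```

Spearman's coefficient is appropriate here because Likert-type responses are ordinal; ranking the values avoids treating the 1 to 7 scale as equal-interval data.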

 

Results

The main method of test delivery was paper and pencil (n = 79, 62.2%); computer testing alone was used by 21 respondents (16.5%), and 26 (20.5%) reported using both paper-and-pencil and computer testing methods, for a total of 47 (37.0%) reporting some use of computerized testing. Testing settings (a choose-all-that-apply item) were primarily the classroom with paper-and-pencil tests (n = 102, 80.3%), followed by the computer laboratory (n = 25, 19.7%). Technology used in testing included computers (n = 45, 35.4%), learning management systems (n = 42, 33.1%), lockdown browsers (n = 25, 19.7%), and other (n = 13, 10.2%).
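The computerized-testing total combines the computer-only and both-methods groups; a quick arithmetic check of the counts reported above (the counts come from the text; the code is only illustrative):

```python
# Delivery-method counts reported in the survey (N = 127).
n_total = 127
n_paper_only = 79     # paper and pencil only (62.2%)
n_computer_only = 21  # computer only (16.5%)
n_both = 26           # both methods (20.5%)

# Anyone reporting computer-only or both counts toward computerized testing.
n_computer_any = n_computer_only + n_both
pct_computer_any = round(100 * n_computer_any / n_total, 1)
print(n_computer_any, pct_computer_any)  # 47 respondents, 37.0%
```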

 

Sources of test items were measured using a Likert-type scale of 1 (never use) to 7 (always use). Primary sources of test items identified were from the individual faculty member (mean, 6.3 [SD, 1.3]), other current course faculty (mean, 5.0 [SD, 1.9]), and textbook tests or test item banks (mean, 5.2 [SD, 1.6]). Other sources of test items reported by the faculty included continuing education articles, case studies, content from guest speakers and student presentations, clinical experiences, and simulation. Influential factors were reported using a Likert-type scale of 1 (not influential) to 7 (most influential). The most influential factors in determining assessment and evaluation were the individual faculty member (mean, 6.0 [SD, 1.4]) and course faculty team (mean, 5.8 [SD, 1.4]) (Table).

  
Table. Sources of Test Items and Influential Factors (N = 127)

Significant correlations were found between viewing the institution as an influential factor in student assessment and the use of test items developed by current course faculty members (rs = 0.201, P = .026) and by previous nursing faculty members (rs = 0.297, P = .001). A potential explanation for this relationship is an internal focus of the represented institutions on faculty ability and responsibility in assessing and evaluating student learning.

 

Discussion

Despite the focus in the literature on the use of technology in nursing education, this study indicates widespread use of paper-and-pencil tests in the classroom setting, with limited use of technology. Nursing students, particularly millennial generation students, prefer the use of technology in nursing education, including computerized classroom testing4; however, no research was found on the impact of classroom test format on passing the NCLEX-RN. Nursing education programs use a variety of strategies to prepare students for the NCLEX-RN, including standardized computer testing. If classroom testing is conducted with paper-and-pencil tests, how much exposure to computerized testing is necessary to prepare students for the NCLEX-RN? No evidence was found in the nursing literature, but the findings from this study indicated that nursing programs using primarily paper-and-pencil classroom testing had high NCLEX-RN pass rates (mean, 94.3% [SD, 6.1%]).

 

In examining influential factors in student assessment and evaluation and sources of test items, faculty members were identified as most influential in creating classroom tests. The emphasis on the faculty member as test designer appears not only in the nursing education literature but also in the educational and psychological literature on test development; in fact, some view item writing, a component of test development, as an art that must be practiced and skillfully applied.7-9

 

Limitations

Participants were not asked about the availability of or access to technology; their current testing practices could reflect a lack of access. In addition, they were not asked whether their nursing education program used computerized standardized testing for NCLEX-RN preparation. This could affect how technology is used to prepare nursing students for the NCLEX-RN, but not necessarily how students are assessed and evaluated in nursing courses.

 

Conclusions

The results of this analysis indicate limited use of technology in classroom testing that contributes to students' summative course grades. Future research should examine the impact of various educational technology methods of classroom testing on nursing students' course grades and NCLEX-RN pass rates.

 

Acknowledgments

The author thanks Drs Laura P. Kimble and Tanya Sudia for their research support.

 

References

 

1. Berg BW, Wong L, Vincent DS. Technology-enabled interprofessional education for nursing and medical students: a pilot study. J Interprof Care. 2010; 24(5): 601-604. [Context Link]

 

2. Williamson KM, Fineout-Overholt E, Kent B, Hutchinson AM. Teaching EBP: integrating technology into academic curricula to facilitate evidence-based decision-making. Worldviews Evid Based Nurs. 2011; 8(4): 247-251. [Context Link]

 

3. Harding M. Predictability associated with exit examinations: a literature review. J Nurs Educ. 2010; 49(9): 493-497. [Context Link]

 

4. Montenery SM, Walker M, Sorensen E, et al. Millennial generation student nurses' perceptions of the impact of multiple technologies on learning. Nurs Educ Perspect. 2013; 34(6): 405-409. [Context Link]

 

5. Killingsworth E, Kimble LP, Sudia T. What goes into a decision? How nursing faculty decide which best practices to use for classroom testing. Nurs Educ Perspect. 2015; 36(4): 220-225. [Context Link]

 

6. Oermann MH, Yarbrough SS, Saewert KJ, Ard N, Charasika ME. Assessment and grading practices in schools of nursing: national survey findings part 2. Nurs Educ Perspect. 2009; 30(6): 352-357. [Context Link]

 

7. Baranowski RA. Item editing and editorial review. In: Downing SM, Haladyna TM, eds. Handbook of Test Development. New York: Routledge; 2009: 349-357. [Context Link]

 

8. Downing SM, Haladyna TM. Test item development: validity evidence from quality assurance procedures. Appl Meas Educ. 1997; 10(1): 61-82. [Context Link]

 

9. Oermann MH, Gaberson KB. Evaluation and Testing in Nursing Education. 4th ed. New York: Springer; 2014. [Context Link]