Keywords

Curriculum evaluation, Evaluation, NCLEX-PN, NCLEX-RN, Remediation

 

Authors

  1. NEWMAN, MARY PhD, RN
  2. BRITT, ROBIN B. EdD, RN
  3. LAUCHNER, KATHRYN A. PhD, RN

Abstract

This follow-up study compares the accuracy of the HESI Exit Exam (E2) in predicting NCLEX success for the academic years 1996-97 (N = 2,809) and 1997-98 (N = 3,752) and is designed to replicate the study of Lauchner et al.[1] and to implement the authors' recommendations for further research. The E2 was again found to be highly predictive of licensure success, regardless of the type of program tested: associate degree, baccalaureate degree, diploma, or practical nurse programs. The predictive accuracy of the E2 was 98.27% for the RN group and 99.34% for the PN group. The E2 was again found to be significantly more accurate when administration of the exam was monitored than when it was not monitored (P = 0.05).

 

In the 1997-98 academic year, NCLEX success of low-scoring E2 students was examined. Significantly more (P = 0.001) low-scoring E2 students than high-scoring E2 students failed the licensure exam. However, significantly fewer (P = 0.05) of these low-scoring E2 students failed the licensing exam when the E2 was used as a benchmark or guide for remediation.

 

Article Content

Yocom reported that of the 56 State Boards of Nursing, 38 use schools' pass rates on the NCLEX-RN as a criterion for approval of nursing programs, and 37 use pass rates on the NCLEX-PN as a criterion for approval.[2] Additionally, the 1999 Standards and Criteria of the National League for Nursing Accrediting Commission (NLNAC) mandated NCLEX pass rates as one of the required outcome measures for accreditation of post-secondary, baccalaureate, and higher degree programs in nursing.[3]

 

In the first four quarters following implementation of the increased passing standard by the National Council of State Boards of Nursing (NCSBN),[4] the NCLEX-RN pass rate for first-time US-educated candidates decreased by 3.66 percentage points. With the increased passing standard implemented for the NCLEX-PN in April 1999,[5] a similar decrease in NCLEX-PN pass rates can also be expected. The pass rate for RN candidates for the last four quarters prior to implementation of the increased passing standard was 87.91%, and for the first four quarters following implementation, the pass rate was 84.25%.[6] With the reported decrease in the national pass rate for RN candidates and the expected decrease in the pass rate for PN candidates, it is highly valuable for schools of nursing to have a valid and reliable predictive measure of licensure success. Such a measure can provide a benchmark for initiating early remediation to improve pass rates so that probationary action by Boards of Nursing and the NLNAC can be avoided. Early remediation might promote success on the licensing exam, thereby increasing the number of nurses prepared to practice.

 

PURPOSE OF FOLLOW-UP STUDY

Lauchner et al.[1] reported that the HESI Exit Exam™ (E2) was highly predictive of students' success on both the NCLEX-RN and NCLEX-PN and, because it is a computerized exam, results were available upon completion of the exam so that remediation could be initiated immediately, if indicated. Though the E2 was predictive of licensure success for all programs tested (associate degree [ADN], baccalaureate [BSN], diploma, and practical nurse [PN] programs), the authors reported that monitoring or proctoring the administration of the exams significantly improved the accuracy of predictions made by the E2. Based on data obtained during the 1996-97 academic year (N = 2,809), the E2 was determined to be 99.49% accurate when monitored and significantly less accurate (P = .05) when not monitored (96.82%). Because the PN sample was small (N = 170), the authors recommended replicating the study with a larger PN sample. The purpose of this research was to replicate the previous study, which used aggregate data from the 1996-97 (Year I) academic year, and to compare the previous findings with aggregate data from the 1997-98 (Year II) academic year. In conducting both studies, the accuracy of the mathematical model used to calculate students' probability of passing the NCLEX, the Health Education Systems, Inc. (HESI) Predictability Model (HPM), was examined. Though HESI considers the HPM to be a proprietary model and does not disclose the formula to the public, company psychometricians expressed concern that the increased passing standard implemented by the NCSBN in April 1998 might affect the accuracy of the E2.

 

In keeping with the recommendations of Lauchner et al,[1] Year II data comprised larger samples for both the RN and PN groups. Lauchner et al[1] also recommended examining the pass rate of low-scoring E2 students; accordingly, the Year II study gathered data regarding low-scoring E2 candidates' success on the licensing exam.

 

INSTRUMENT DESCRIPTION

E2 test items were developed by nurse educators and clinicians using the model for writing critical-thinking test items described by Morrison et al.[7] The E2 followed the test blueprints of the NCSBN[4,5] for both the RN and the PN versions of the exam. The RN and PN versions of the E2 were 160-item, Windows-based, computerized exams. Ten of the 160 items were pilot items and did not count toward the student's grade. Scores reported on the E2 were therefore based on 150 items, and the student's responses in 50 to 60 different subject categories were compared with those of all students who had previously answered the same test items. Students were able to view rationales for items missed immediately after completing the exam. ParSYSTEM™, a testing program distributed by Scantron Corporation (Tustin, CA), was used for test-item banking and test analysis. Based on data obtained from the item analysis of each exam administered, test items were revised by a team of nursing education and nursing practice experts. RN and PN test items were maintained in separate item banks, and test items for each group were assembled from the appropriate item banks.

 

Aggregate data for Year II were obtained from E2s administered to 140 groups at 102 different schools: 84 RN and 18 PN schools. ParSYSTEM™ reported the Kuder-Richardson Formula 20 (KR20) reliability for each of the groups tested. The average KR20, as calculated and reported by ParSYSTEM™, was 0.76 for the RN groups and 0.77 for the PN groups. Item analysis data, including item difficulty (P value) and discrimination data (point-biserial correlation coefficient), were calculated for each administration of the E2, and data were stored within the ParSYSTEM™ program. The HPM was applied to each test as a whole, as well as within each subject area tested. HESI's SIMCAT program was used to administer the computerized tests and to apply the HPM to reported scores.
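The reliability and item-analysis indices named above follow standard psychometric formulas rather than anything specific to the E2 or ParSYSTEM™. The sketch below illustrates how KR20 and the point-biserial discrimination index are typically computed from a matrix of dichotomously scored item responses; the small response matrix, the use of the sample variance of total scores, and the uncorrected item-total correlation are assumptions made for illustration only and are not drawn from the study data.

import numpy as np

# Illustrative item-response matrix: rows = students, columns = items,
# 1 = correct, 0 = incorrect. Hypothetical values, not study data.
responses = np.array([
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 0, 1, 0],
])

totals = responses.sum(axis=1)          # each student's total score
k = responses.shape[1]                  # number of scored items
p = responses.mean(axis=0)              # item difficulty (proportion correct)
q = 1 - p

# Kuder-Richardson Formula 20: internal-consistency reliability for
# dichotomously scored items, based on the variance of total scores
# (sample variance used here; conventions vary).
kr20 = (k / (k - 1)) * (1 - (p * q).sum() / totals.var(ddof=1))

# Point-biserial discrimination: Pearson correlation between each
# dichotomous item and the total score (uncorrected for item overlap).
point_biserial = np.array(
    [np.corrcoef(responses[:, i], totals)[0, 1] for i in range(k)]
)

print(f"KR-20 reliability: {kr20:.2f}")
print("Item difficulty (p):", np.round(p, 2))
print("Point-biserial discrimination:", np.round(point_biserial, 2))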

 

METHODOLOGY

All schools that purchased the E2 during the 1997-98 academic year were invited to participate in the study. Questionnaires were mailed to all schools that purchased the E2 during the study period, along with a listing of students' names. The names of students predicted to pass the licensing exam were highlighted in yellow, and the names of low-scoring students were highlighted in green. Schools were asked how many of the students highlighted in yellow and how many of the students highlighted in green failed the licensing exam. A statement on the questionnaire indicated that the names of students who failed were not needed, only the total number that failed in each group.

 

Sample and Definition of Terms

For the purpose of this study, students were defined as individuals who took the E2 during Year I or Year II within four months prior to graduation from an RN or PN school of nursing. Probability scores were defined as total scores on the E2 that reflected application of the HPM to students' scores and described the students' probability of passing the licensing exam. Predicted passes were defined as those students whose probability scores on the E2 were in the 90%-99% range. The sample was composed of 1,566 (48%) RN and 141 (30.92%) PN predicted passes. Low-scoring students were defined as those students whose probability scores were 69% or below. The sample included 176 (5.34%) low-scoring RN students and 27 (5.92%) low-scoring PN students. Monitoring was defined as proctoring the administration of the E2 by nursing faculty or a designee of the nursing faculty charged with maintaining the security of the testing process.
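Because the HPM itself is proprietary, only the reported probability score bands can be illustrated. The minimal sketch below simply encodes the band definitions used in this study; the function name is hypothetical, and the treatment of scores between 70% and 89% as an intermediate band that is not separately analyzed is an assumption made for illustration.

def classify_probability_score(score: float) -> str:
    """Classify an E2 probability score using the bands defined in this study.

    90-99 -> predicted pass (no additional preparation indicated)
    <= 69 -> low-scoring (remediation indicated)
    70-89 -> intermediate band (assumed; not separately analyzed here)
    """
    if 90 <= score <= 99:
        return "predicted pass"
    if score <= 69:
        return "low-scoring"
    return "intermediate"

# Example: tally a hypothetical list of probability scores by band.
scores = [95, 88, 62, 91, 70, 99, 45]
counts = {}
for s in scores:
    band = classify_probability_score(s)
    counts[band] = counts.get(band, 0) + 1
print(counts)  # {'predicted pass': 3, 'intermediate': 2, 'low-scoring': 2}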

 

Data Analysis

Of the 4,211 students who took the E2 during Year II, 3,677 were RN students and 534 were PN students. In the RN group, 146 were retesters, and in the PN group, 23 used the E2 as an entrance exam for PN-to-RN nursing programs. Data from the retesters and entrance-exam users were removed from the study group, leaving a potential of 3,531 RN students and 511 PN students, for a total of 4,042 students. Of the 84 RN schools, 80 (95.24%) responded, and 17 (94.44%) of the 18 PN schools responded. Data were received for 3,752 students, which comprised 92.83% of the study group: 3,296 RN students, or 93.34% of the RN population, and 456 PN students, or 89.24% of the PN population. The reporting RN group consisted of 2,456 (74.51%) ADN, 796 (24.15%) BSN, and 44 (1.34%) diploma students. Reporting schools consisted of 75 ADN testing groups from 54 ADN schools of nursing, 38 BSN testing groups from 25 BSN schools of nursing, and 2 diploma testing groups from 1 diploma school of nursing. The RN sample was from 19 states throughout the United States. The reporting PN group consisted of 456 PN students from 17 PN schools of nursing. The PN sample was from five different states, predominantly in the southwestern and western parts of the United States.

 

Of the 3,752 RN and PN students who took the E2 during Year II, 1,707 (45%) were predicted to pass the licensing exam without any additional preparation. Of the 3,296 RN students, 1,566 (48%) were predicted to pass without any additional preparation: 1,204 (49.02%) ADN, 326 (40.95%) BSN, and 36 (81.82%) diploma students. In the PN group, 141 of the 456 (30.92%) were predicted to pass the licensing exam without any additional preparation. For the total group of RN and PN students, 60 of the 1,707 (3.51%) students who were predicted to pass failed the licensing exam. In the RN group, 57 of the 1,566 (3.64%) students who were predicted to pass failed the licensing exam: 51 of the 1,204 (4.24%) ADN, 5 of the 326 (1.53%) BSN, and 1 of the 36 (2.78%) diploma students. In the PN group, 3 of the 141 (2.13%) students who were predicted to pass failed the licensing exam. Therefore, the accuracy of the E2 when considering only those predicted to pass was 96.36% for the RN group, 97.87% for the PN group, and 96.49% for the combined RN and PN group. The predictive accuracy of the E2 was 98.27% for the total RN group, 99.34% for the PN group, and 98.40% for the combined RN and PN group (Table 1).
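The two accuracy figures above appear to use different denominators: the first treats failures among predicted passes relative to the predicted-pass group only, while the "predictive accuracy" figures are consistent with counting those same failures against all students tested. The sketch below reproduces the published RN, PN, and combined percentages under that reading; it is an interpretation of the reported counts, not a disclosure of the proprietary HPM.

# Reported Year II counts (taken from the paragraph above).
groups = {
    "RN":       {"tested": 3296, "predicted_pass": 1566, "failed_among_predicted": 57},
    "PN":       {"tested": 456,  "predicted_pass": 141,  "failed_among_predicted": 3},
    "Combined": {"tested": 3752, "predicted_pass": 1707, "failed_among_predicted": 60},
}

for name, g in groups.items():
    # Accuracy among predicted passes only.
    pass_only = (g["predicted_pass"] - g["failed_among_predicted"]) / g["predicted_pass"]
    # "Predictive accuracy": failed predictions counted against all students tested.
    overall = (g["tested"] - g["failed_among_predicted"]) / g["tested"]
    print(f"{name}: predicted-pass accuracy {pass_only:.2%}, predictive accuracy {overall:.2%}")

# Output matches the published figures:
# RN: predicted-pass accuracy 96.36%, predictive accuracy 98.27%
# PN: predicted-pass accuracy 97.87%, predictive accuracy 99.34%
# Combined: predicted-pass accuracy 96.49%, predictive accuracy 98.40%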

  
Table 1. 1997-98 Aggregate Data

In Year I, the E2 was found to be significantly more accurate when monitored than when not monitored. In Year II, the E2 was 98.74% accurate when monitored and 97.20% accurate when not monitored. The finding of Year I was replicated with Year II data, and the E2 was again found to be significantly more accurate (χ²(1) = 32.16, P = .001) when monitored than when it was not monitored. With or without monitoring, the likelihood that the accuracy of the E2 in predicting licensure success could have occurred by chance alone was infinitesimal. Using a χ² test of significance, the difference in the predictive accuracy of the E2 between Year I and Year II was not significant (χ²(1) = 0.017, P = .001).
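The monitored-versus-unmonitored comparisons above rely on χ² tests of independence on 2 x 2 tables of prediction outcomes. The per-cell counts behind the published statistics are not reported, so the sketch below uses placeholder counts purely to show the form of the test; it is not a reproduction of the study's tables, and the resulting statistic will not match the published values.

from scipy.stats import chi2_contingency

# Hypothetical 2 x 2 table: rows = monitored / not monitored,
# columns = accurate prediction / inaccurate prediction.
# These counts are placeholders, not the study's data.
table = [
    [1100, 14],   # monitored
    [550, 16],    # not monitored
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square({dof}) = {chi2:.2f}, p = {p:.4f}")
print("expected counts under independence:", expected.round(1))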

 

In Year I, the predictive accuracy of the E2 was not significantly different among the programs tested: ADN, BSN, diploma, and PN programs. Again in Year II, the difference among programs was not significant (χ²(3) = 6.26, P = .001).

 

In Year II, data were also collected regarding low-scoring E2 students' success on the licensing exam. In the RN group, 176 students obtained probability scores of 69% or below on the E2, and 78 (44.32%) of these low-scoring RN students failed the licensing exam. In the PN group, 27 students obtained probability scores of 69% or below on the E2, and 6 (22.22%) of these low-scoring PN students failed the licensing exam. In both the RN and PN groups, significantly more low-scoring students than high-scoring students (those predicted to pass) failed the licensing exam (χ²(1) = 488.08, P = .001) (Table 2).

  
Table 2. Low-Scoring E2

In the unmonitored low-scoring group, several students' scores were lower than one might score by chance alone, indicating that these students probably chose answers randomly and did not take the exam seriously. Because of suspected spurious data in the unmonitored low-scoring group, only the monitored low-scoring data were analyzed. In the RN monitored group, there were 121 low-scoring students, 79 of whom were from schools that reported using the E2 as a benchmark for remediation. Of these 79 students, 33 (41.77%) failed the licensure exam. In the monitored RN group that did not use the E2 as a benchmark for remediation, there were 42 low-scoring students, 26 (61.90%) of whom failed the licensing exam. Using a χ² test of significance, it was determined that significantly fewer (χ²(1) = 6.46, P = .05) students failed the NCLEX-RN in schools that used the E2 for remediation than in schools that did not use the E2 for remediation (Table 3). Only 5 of the 17 PN schools used the E2 for remediation, and in these 5 schools, there were only 6 low-scoring E2 students. Therefore, PN data regarding low-scoring students were not analyzed.

  
Table 3. Failures of Monitored RN Low-Scoring Students Who Used the E2

INTERPRETATION OF FINDINGS

Aggregate data from Year II supported the findings reported earlier by Lauchner et al,[1] which were based on aggregate data from Year I. The E2 was again found to be highly predictive of student success on the NCLEX-RN and NCLEX-PN, regardless of the type of program tested. Again in Year II, monitoring was found to influence the predictive accuracy of the E2: the E2 was significantly more accurate when administration of the exam was monitored or proctored than when it was not. The accuracy of the E2 in predicting success on the licensing exam was once again so close to students' actual outcomes on the NCLEX-RN and NCLEX-PN that the probability that these results could have occurred by chance alone was infinitesimal. Though the E2 was less predictive of student success in Year II (98.74%) than in Year I (99.49%), the difference was not significant. One factor used in the HPM to calculate predictive scores was the average pass rate on the NCLEX for the previous four quarters. Data regarding pass rates under the more stringent NCLEX-RN passing standard established in April 1998 by the NCSBN were not available until after the E2 had been administered to second-quarter 1998 users. When these data became available, the decrease observed in the one reporting quarter was projected to the other three quarters. These projected decreases in the NCLEX-RN pass rate were probably responsible for the continued accuracy of the E2 projections.

 

Low-scoring E2 RN students were more likely to fail the NCLEX-RN than high-scoring E2 RN students. However, in RN schools that monitored the administration of the exams and used the E2 for remediation, significantly fewer low-scoring students failed than in schools that did not use the E2 for remediation.

 

CONCLUSIONS AND RECOMMENDATIONS

The E2 was again found to be highly predictive of student success for both the NCLEX-RN and NCLEX-PN. Once again, it was found to be significantly more accurate when administration of the exam was monitored than when it was not monitored.

 

Since the E2 is a computerized exam and students received their scores upon completion of the exam, remediation could be initiated immediately. In schools that used the E2 for remediation, significantly fewer low-scoring students failed the licensing exam. This finding has implications for nursing faculties because the E2 can serve as a benchmark for developing remediation programs designed to help students become successful first-time candidates on the licensure exam. Further research is needed, however, to evaluate the effectiveness of various types of remediation programs and to identify the implementation strategies and content resources used in establishing successful remediation programs for students identified as at risk of failing the licensure exam.

 

REFERENCES

 

1. Lauchner K, Newman M, Britt R. Predicting licensure success with a computerized nursing exam: the HESI exit exam. Comput Nurs. 1999;17:120-125.

 

2. Yocom C. National Council of State Boards of Nursing: Profiles of Member Boards-1998. In press.

 

3. National League for Nursing Accrediting Commission. 1999 accreditation standards and criteria for academic quality of postsecondary, baccalaureate, and higher degree programs in nursing. New York; 1999.

 

4. National Council of State Boards of Nursing, Inc. NCLEX-RN examination test plan for the national council licensure examination for registered nurses effective date: April, 1998. Chicago; 1997.

 

5. National Council of State Boards of Nursing, Inc. NCLEX-PN examination test plan for the national council licensure examination for practical nurses effective date: April, 1999. Chicago; 1998.

 

6. National Council of State Boards of Nursing online. Available: http://www.ncsbn.org. August 16, 1999.

 

7. Morrison S, Smith P, Britt R. Critical Thinking and Test Item Writing. Houston: Health Education Systems, Inc.; 1996.