Source:

CIN: Computers, Informatics, Nursing

May 2005, Volume 23, Number 3, Supplement, pp. 21S-27S

Authors

  • AINSLIE T. NIBERT, MSN, RN, CCRN
  • ANNE YOUNG, EdD, RN

Abstract

This was the third annual validity study designed to assess the accuracy of the HESI Exit Exam (E2) in predicting NCLEX success for graduating registered and practical nursing students. As in year I (N = 2,725) and year II (N = 3,752), in year III (N = 6,277), the E2 was highly predictive of NCLEX success for associate degree nursing, bachelor of science nursing, diploma, and practical nursing students. Unlike previous years, in year III, monitoring was not a significant factor in the predictive accuracy of the E2. NCLEX success of low-scoring E2 students, first examined in year II, was also examined in year III. As in year II, low-scoring E2 students were significantly more (P = .001) likely to fail the licensure examination than high-scoring E2 students. In year III, unlike year II, there was no significant difference in the pass rate of low-scoring E2 students who participated in a remediation program and those who did not. The authors recommended that a more precise definition of remediation be used in future studies and that such studies focus on E2 implementation strategies and their relationship to NCLEX success.
A nursing shortage of unprecedented scope will grip the nation by 2020, fueled by the retirement of almost half of the current "baby-boomer" generation of nurses who are being replaced by smaller numbers of the next generation.1,2 Adding to this problem is a decreased number of candidates taking nursing licensure examinations. In 1999, 113,247 candidates took the NCLEX-RN, compared to 116,713 in 1998, a decrease of nearly 3%. An even greater decrease occurred in the number of practical nurse (PN) candidates, with 47,592 PN candidates testing for the NCLEX-PN in 1999, compared to 50,230 in 1998, representing a decrease of 5.3%.3 Not only are the numbers of nursing students decreasing, but so are the pass rates on nursing licensure examinations. In the first four quarters following adoption of a higher passing standard by the National Council of State Boards of Nursing, Inc. (NCSBN), the NCLEX-RN pass rate dropped to 84.2% for United States-educated first-time takers, a decrease of 3.7%.4 When a similar policy was adopted one year later for practical nurses, the NCLEX-PN pass rate dropped to 85.69% in the four quarters following implementation of the increased passing standard, a decrease of 1.62%.5 Recruitment of a more culturally diverse candidate pool has been described as a possible solution to the documented nursing shortage.6-8 However, previous studies indicate that ethnic minorities and foreign-born nursing students experience higher attrition rates and higher NCLEX failure rates than do nonminority English-speaking students.9-11
NCLEX failure not only contributes to the nursing shortage by delaying new graduates' entrance into the workforce, but it also has personal and financial consequences for the candidate, nursing faculties and administrators, and prospective employers. Unsuccessful NCLEX candidates suffer loss of potential wages that might have been earned if they were licensed nurses. In addition to the financial consequences of licensure failure for unsuccessful NCLEX candidates, Vance and Davidhizar12 reported that failure results in an even greater emotional loss, characterized by feelings of inadequacy and grief.
NCLEX pass rates affect a school's reputation, thereby having consequences for nursing faculties and administrators. The public's view of a school can affect its ability to recruit new nursing program students. The two national nursing accrediting agencies, the National League for Nursing Accrediting Commission (NLNAC) and the Commission on Collegiate Nursing Education (CCNE), as well as the approval standards of most states' Boards of Nurse Examiners (BNE), use pass-rate data as benchmarks for program effectiveness.13-15 A consistent pattern of low NCLEX pass rates can potentially place a nursing program's accreditation or state approval at risk.
NCLEX failures also have financial consequences for healthcare employers because hiring and orienting new graduates is a costly institutional expenditure. Messmer et al.16 estimated nursing orientation costs to be between $20,000 and $50,000 per person. Licensure failure negates any benefit of such expenditures because candidates failing the NCLEX cannot assume the licensed nursing positions for which they were hired and oriented.
Attempts have been made to identify students at risk of failing the NCLEX so that remediation to promote NCLEX success might be initiated. Several authors have described remediation strategies to improve NCLEX pass rates.11,17 Others have described academic predictors of NCLEX success, such as admission criteria, including SAT and ACT scores, high school percentile rank and pre-nursing science and liberal arts course grades, as well as nursing grades.9,11,12,18-23 Nonacademic factors, such as age, emotional state, ethnicity, family responsibilities, non-English primary language, stress, self-esteem, time management, and test anxiety, are also related to NCLEX success, but they have generally been less predictive than academic factors.9,10,24,25
Factors associated with the highest predictability of NCLEX success, such as cumulative grade-point average, grades in senior-level nursing courses, and outcomes on NCLEX readiness tests, occur at the end of the nursing program. Obtaining information this late in a student's curriculum leaves little time for NCLEX preparation, much less specific remediation of identified deficit areas. Several comprehensive examinations have demonstrated a moderate to high ability to predict NCLEX success.9,17,20,21,26,27 However, most are paper-and-pencil administered tests, and results are not available soon enough to use the data as a remediation resource. Also, paper-and-pencil tests do not provide students with practice using the NCLEX computerized adaptive testing (CAT) format, which is valuable in preparing students for the keystroke mechanics associated with taking licensure examinations. For this reason, nursing faculties have increasingly chosen computerized instruments to help prepare students for NCLEX.18,19,24,28 Software companies have responded by providing a variety of computer products, with expanded use of computerized testing now becoming common within nursing schools.24,28
Health Education Systems, Inc. (HESI), produces a variety of nursing examinations, all of which are computerized. The HESI Exit Exam (E2) is a secure, comprehensive examination that has demonstrated a high degree of accuracy in predicting NCLEX success as well as identifying those students at risk of failing the NCLEX.29-31 Because it is a computerized examination, students receive their scores immediately upon examination completion so that a specific remediation plan can be quickly implemented, if needed. Using a nationwide database, the E2 provides an analysis of student performance that allows schools to compare their nursing program with programs throughout the United States. The E2 also uses the same keystrokes as the NCLEX, thereby simulating the mechanics of NCLEX administration. More than one version of the E2 is available so that those students who have been remediated can be retested to evaluate the effectiveness of their remediation programs.
This follow-up study once again focused on the accuracy of the E2 in predicting NCLEX success. Previously presented findings of year I (1996-1997) by Lauchner et al.29 and year II (1997-1998) by Newman et al.31 were compared with data acquired from year III (1998-1999). In years I and II, the E2 was significantly more accurate (P = .05) in predicting NCLEX success when the administration of the examination was monitored or proctored than when it was not.29,31 In year II, the NCLEX outcomes of low-scoring students were examined for the first time. Findings indicated that significantly more (P = .001) low-scoring E2 students failed the NCLEX than did high-scoring E2 students. However, when the E2 was used as a guide for remediation, significantly fewer (P = .05) of the low-scoring E2 students failed the licensing examination than when the E2 was not used as a benchmark for remediation.31
The purpose of this study was to examine the accuracy of the E2 in predicting NCLEX success in year III and to compare year III data with year I and year II data. The authors examined year III data based on recommendations made by Lauchner et al.29 (year I) and Newman et al.31 (year II) for further research. Four specific factors were examined: (1) predictive accuracy, (2) accuracy of monitored and unmonitored administrations, (3) low-scoring students' outcomes on NCLEX, and (4) low-scoring students' outcomes on the NCLEX when the E2 was used as a benchmark for remediation.

INSTRUMENT DESCRIPTION

The HESI RN and PN versions of the E2 consist of 160 items, 10 of which are pilot items and do not count toward the student's score. All HESI examinations are computerized, Windows based, and available in diskette or network versions.31,32 ParSYSTEM, a test analysis and item-banking software program distributed by Scantron Corporation, was used to conduct an item analysis for each test administered by HESI. The item difficulty and item discrimination (point biserial correlation coefficient) were calculated and stored with each test item. The reliability of each test administered was calculated using the Kuder-Richardson Formula 20 (KR-20). In year III, the average KR-20 was 0.74 for the RN group and 0.75 for the PN group. Each version of the E2 was developed from test banks containing questions written specifically for HESI by nurse educators and clinicians from across the United States. These writers used the model described by Morrison et al.33 and Morrison and Free34 to develop critical-thinking test items. The E2 follows the test blueprints for the NCLEX-RN and NCLEX-PN developed by the NCSBN.3 The HESI Predictability Model (HPM), a proprietary mathematical model, was used to calculate students' probability of passing NCLEX. The HPM was applied to tests as a whole and within each subject area tested. The E2 has been described as highly predictive of NCLEX success in all types of nursing programs: ADN, BSN, diploma, and PN.29,31
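As a concrete illustration of the item statistics named above, the sketch below computes item difficulty, point-biserial discrimination, and KR-20 reliability from a small invented 0/1 scoring matrix. This is not HESI's or ParSYSTEM's code; the data and function names are ours, and the toy matrix exists only to show the formulas.

```python
# Illustrative sketch only -- not HESI's or ParSYSTEM's code. Computes the
# three item statistics named in the text (item difficulty, point-biserial
# discrimination, KR-20 reliability) from a small invented 0/1 scoring matrix.
import statistics

# rows = students, columns = items; 1 = correct answer (toy data)
scores = [
    [1, 1, 1, 0, 1],
    [1, 0, 1, 1, 1],
    [1, 1, 0, 0, 0],
    [0, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [0, 0, 1, 0, 0],
]
n_items = len(scores[0])
totals = [sum(row) for row in scores]  # each student's total score

def item_difficulty(j):
    """Proportion of students who answered item j correctly."""
    return sum(row[j] for row in scores) / len(scores)

def point_biserial(j):
    """Correlation between item j (0/1) and the total test score."""
    item = [row[j] for row in scores]
    p = statistics.mean(item)               # difficulty of item j
    mean_all = statistics.mean(totals)
    sd_all = statistics.pstdev(totals)      # population SD of total scores
    mean_correct = statistics.mean([t for t, x in zip(totals, item) if x == 1])
    return (mean_correct - mean_all) / sd_all * (p / (1 - p)) ** 0.5

def kr20():
    """Kuder-Richardson Formula 20 reliability for the whole test."""
    sum_pq = sum(item_difficulty(j) * (1 - item_difficulty(j))
                 for j in range(n_items))
    return (n_items / (n_items - 1)) * (1 - sum_pq / statistics.pvariance(totals))
```

The reported year III averages (KR-20 of 0.74 for the RN group and 0.75 for the PN group) come from HESI's real item banks; this six-student matrix is far too small to yield a meaningful reliability estimate.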

DEFINITION OF TERMS

To provide consistency, this follow-up study defined terms as they were defined in the year I and year II studies.29,31 Participants were those who took the E2 for the first time during year III within 6 months before graduation from an RN or a PN school of nursing. Probability scores were the total scores on the E2 that were calculated using the HPM and described the student's probability of passing the licensure examination. High-scoring students were those students whose probability scores were calculated between 90 and 99 and were predicted to pass their licensure examinations. The year III sample included 2,206 (39.48%) RN and 228 (33.09%) PN high-scoring students who were predicted to pass their respective licensure examinations. Low-scoring students were the students whose probability scores were calculated as 69 or below. The year III sample consisted of 321 (5.74%) RN and 65 (9.43%) PN students who were classified as low scoring. Monitoring was defined as proctoring during the administration of the E2 by nursing faculty members or their designees who were charged with maintaining examination security.
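The score bands just defined can be summarized in a short helper. This is our illustration, not study code; in particular, the study leaves the 70-89 band unnamed, so the "mid-range" label is our assumption.

```python
def classify(probability_score: int) -> str:
    """Assign an HPM probability score to the study's bands (sketch).

    The high (90-99) and low (69 or below) bands come from the study's
    definitions; calling 70-89 "mid-range" is our assumption, since the
    study does not name that group.
    """
    if 90 <= probability_score <= 99:
        return "high-scoring"
    if probability_score <= 69:
        return "low-scoring"
    return "mid-range"
```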

DESCRIPTION OF THE SAMPLE

The year III sample consisted of 6,560 students: 5,851 RN and 709 PN students who took the E2 in year III. Aggregate data for year III were obtained from 124 RN schools in 35 states and 24 PN schools in 8 states. RN and PN program administrators at the 148 schools that purchased the E2 during the 1998-1999 academic year received questionnaires with a cover letter inviting their participation in the study. A list of the school's students was included in the mailing of the questionnaires, with the names of those predicted to pass (high-scoring) highlighted in one color and the names of the low-scoring students highlighted in another color. Program administrators were asked how many of the high-scoring and low-scoring students failed the NCLEX-RN or NCLEX-PN. Names of the students were not needed, only the total number of failures from each group. Of the 124 RN schools that were sent questionnaires, 120 (96.77%) responded, and 23 (95.83%) of 24 PN schools responded. Students who were retested on the E2 following a remediation program were not included in the study. Therefore, the total study sample consisted of 6,277 students: 5,588 from RN programs (95.51% of the total RN students tested) and 689 from PN programs (97.18% of the total PN students tested). The 5,588 RN study sample consisted of 3,651 (65.34%) ADN; 1,921 (34.38%) BSN; and 16 (0.28%) diploma students.
In year I, Lauchner et al.29 recommended that future replication studies include larger sample sizes. The sample size increased each successive year. The total sample size for year I was reported by Newman et al.31 as 2,809 students. However, the reporting sample for year I was only 2,725 because 84 of the total did not respond. Counting respondents only, the reporting sample was 3,752 in year II and 6,277 in year III. Almost half of all data gathered for all three years were collected in year III. Table 1 describes the sample by years and types of programs.
In year III, 5,588 RN and 689 PN students made up the study sample. A total of 2,206 (39.48%) RN students were predicted to pass the NCLEX-RN without additional preparation. In the ADN group, 1,573 (43.08%) were predicted to pass the NCLEX-RN without any additional preparation, as were 622 (32.38%) of the BSN students, and 11 (68.75%) of the diploma students. A total of 228 (33.09%) PN students were identified as high-scoring students, and they were predicted to pass the NCLEX-PN. Of the 2,434 RN and PN students who were predicted to pass the licensure examinations, 54 (2.22%) failed. In the RN group, 52 (2.36%) of 2,206 high-scoring students failed the NCLEX-RN, and in the PN group, 2 (0.88%) of the 228 high-scoring students failed the NCLEX-PN. In the ADN group, 40 (2.54%) of the high-scoring students failed the NCLEX-RN, as did 12 (1.93%) of the BSN students. None of the high-scoring diploma students failed.
In year III, predictive accuracy of the E2 was calculated in the most stringent manner, by examining only the predicted-to-pass group. The number that failed the NCLEX was divided by the number predicted to pass, and the result was subtracted from one. Using this formula to calculate accuracy rates, the predictive accuracy of the E2 for the combined RN and PN group in year III was 97.78%; for the RNs only, 97.64%; and for the PNs only, 99.12%. Using a chi-square goodness-of-fit test, the year III predictive accuracy (97.78%) was not significantly different from that of year I (97.41%) and year II (96.49%). As in years I and II, the predictive accuracy of the E2 was not significantly different among programs: ADN, BSN, diploma, and PN programs (Table 2).
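Written out in code, the calculation just described reproduces the published accuracy rates (the function name is ours; the counts are the year III figures reported above):

```python
def predictive_accuracy(predicted_to_pass: int, failed: int) -> float:
    """1 - (failures / predicted-to-pass): share of predicted passers who passed."""
    return 1 - failed / predicted_to_pass

rn = predictive_accuracy(2206, 52)                   # RN group
pn = predictive_accuracy(228, 2)                     # PN group
combined = predictive_accuracy(2206 + 228, 52 + 2)   # RN + PN pooled

# matches the reported rates: 97.64% (RN), 99.12% (PN), 97.78% (combined)
print(f"RN {rn:.2%}, PN {pn:.2%}, combined {combined:.2%}")
```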
Though the E2 was highly predictive of NCLEX success in both years I (97.41%) and II (96.49%), it was significantly more accurate when the test administration was monitored than when it was not monitored. In year III, the size of the monitored group (n = 5,831) was larger than the size of the monitored groups in years I (n = 1,971) and II (n = 2,930). The size of the unmonitored group in year III was smaller (n = 446) than the size of the unmonitored groups in year I (n = 754) and year II (n = 822). Again, in year III, the E2 was highly predictive of NCLEX success (97.78%). However, contrary to findings in years I and II, there was no significant difference in the predictive accuracy of monitored (97.95%) and unmonitored (95.88%) test administrations in year III.
In year II, data were collected and analyzed regarding low-scoring E2 students' outcomes on the NCLEX. Significantly more of the year II low-scoring students failed the licensing examination in both the RN and the PN groups than did the high-scoring students who were predicted to pass the examination.31 In year III, 52 (2.36%) of the 2,206 high-scoring RN students and 2 (0.88%) of the 228 high-scoring PN students failed the NCLEX, whereas 144 (48.81%) of the 295 low-scoring RN students and 20 (30.77%) of the 65 low-scoring PN students failed the NCLEX. Again in year III, significantly more low-scoring students failed the NCLEX than did high-scoring students who were predicted to pass (χ² = 818.775, P = .001).
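The reported statistic can be reconstructed from the counts in this paragraph. Assuming a 2 × 2 Pearson chi-square on pass/fail by scoring group with RN and PN pooled (the paper does not spell out the computation), the counts above reproduce the published value:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# high-scoring (RN + PN): 54 of 2,434 failed; low-scoring: 164 of 360 failed
high_fail, high_pass = 52 + 2, (2206 - 52) + (228 - 2)
low_fail, low_pass = 144 + 20, (295 - 144) + (65 - 20)

chi2 = chi_square_2x2(high_fail, high_pass, low_fail, low_pass)
# chi2 ~= 818.77, matching the reported 818.775
```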
In year II, participants were asked if the E2 was used as a benchmark for remediation. Because few of the PN schools used the E2 as a benchmark for remediation, PN data regarding remediation were deleted from the year II analysis. Due to suspected spurious data in the unmonitored low-scoring RN group and because monitoring was found to be a significant factor in the predictive accuracy of the E2, analysis was confined to the monitored RN group only in year II. Again in year III, there were few PN low-scoring students; therefore, the PN low-scoring sample was excluded from further analysis. In the RN group, several low-scoring unmonitored students scored less than one might score by chance alone, indicating that the students probably did not take the examination seriously and simply randomly chose answers. Therefore, even though monitoring was not a significant factor in the predictive accuracy of the E2 in year III, low-scoring unmonitored students' data were removed from analysis of low-scoring students' scores in relation to remediation and NCLEX success.
Of the 295 low-scoring RN students, 22 were from schools that did not monitor administration of the E2, leaving a total of 273 low-scoring monitored RN students. Of the 273 monitored low-scoring RN students, 163 participated in a remediation program, and 85 (52.15%) of the students who were remediated passed the NCLEX-RN, whereas 78 (47.85%) failed. Of the 110 low-scoring RN students who did not participate in a remediation program, 55 (50%) were successful on the NCLEX-RN, and 55 (50%) failed. Unlike year II, in year III, there was no significant difference in the NCLEX-RN pass rate of low-scoring students who were remediated and those who were not remediated.
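As a hedged check on this non-significant result (the authors do not state which test they used; a 2 × 2 Pearson chi-square is our assumption), the monitored low-scoring RN counts above yield a statistic far below the critical value at df = 1:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# remediated: 85 passed, 78 failed; not remediated: 55 passed, 55 failed
chi2 = chi_square_2x2(85, 78, 55, 55)
# chi2 ~= 0.12, well under 3.84 (critical value at alpha = .05, df = 1),
# consistent with the "no significant difference" finding reported above
```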
As recommended by Lauchner et al.29 following the year I study, the sample size was increased in years II and III. The year III sample consisted of 2,525 more students than year II and 3,552 more students than year I. Analysis of year III data and comparison of year III with year I and year II support previous studies' findings regarding E2 accuracy in predicting NCLEX success. In year III, the predictive accuracy of the E2 was calculated by dividing the total number of failures by the number predicted to pass and subtracting the result from one, a more stringent method than the methods used to calculate predictive accuracy in years I and II. Despite the use of the more stringent calculation method, the E2 continued to exhibit an extremely high degree of accuracy in predicting NCLEX success, regardless of the type of program tested: ADN, BSN, diploma, or PN. Additionally, no significant difference was found in the predictive accuracy of the E2 among the three years studied. Based on data obtained from 12,754 subjects, collected during a 3-year period, it can be concluded that the E2 is a highly accurate predictor of NCLEX success.
Monitoring, which was found to increase the predictive accuracy of the E2 in years I and II, was not a significant factor in year III. This finding might be related to the smaller number of unmonitored students in year III (446 students [7.11%]) as compared to 754 students (27.67%) in year I and 822 (21.91%) in year II. It is possible that faculties were influenced by previously published research findings associating monitoring with increased E2 accuracy, thereby reducing the number of unmonitored administrations in year III. Also, previous authors studied monitoring as a measure of the seriousness with which the E2 was administered. However, the definition of monitoring may have been too broad because monitoring referred to proctoring by nursing faculty or their designee who was charged with maintaining security. Some schools reported that computer laboratory personnel were present during the testing, but the degree to which security was maintained may have varied.
HESI reports that many schools have recently instituted progression policies related to E2 scores. These policies vary among the schools implementing them, but they generally require students to achieve a satisfactory score on the E2 as a requirement for graduation or as a requirement for the school's approval to take the licensure examination. If students do not achieve a satisfactory score on the E2, they are required to remediate and retest using a different E2. Further research is needed to study the value of progression policies in relation to NCLEX success.
In the year II study, Newman et al.31 recommended that further research be conducted to evaluate the effectiveness of various types of remediation programs. Contrary to the findings of year II, in year III, there was no significant difference in NCLEX-RN success of low-scoring students who were remediated and those who were not remediated. As with the definition of monitoring, the definition of remediation may need to be more precise. Remediation was broadly interpreted to be any type of additional study that was based on E2 findings. Future research that focuses on E2 implementation strategies is likely to provide more information about the value of the E2 as a remediation resource than studying the effects of monitoring alone. Additionally, it would be worthwhile to compare the pass rates of schools administering the E2 with the average national NCLEX pass rates, as well as to compare the pass rates of schools that use the E2 for progression with those that only use the E2 as a benchmark for remediation.
In the midst of declining student enrollment in the nation's schools of nursing, declining NCLEX pass rates, and a shrinking nursing workforce, success for first-time NCLEX candidates remains a top priority for nursing faculties. Based on the data obtained from this study, as well as from the previous two studies, the E2 has been determined to be highly accurate in predicting NCLEX success. Because it is a computerized examination, results are obtained immediately and can serve as a benchmark for remediation to assist candidates in becoming successful first-time takers of the NCLEX. Success on the licensure examination enables the candidates to enter the workforce, thereby helping to ameliorate the nursing shortage. Though no significant difference was found in NCLEX-RN pass rates of the year III low-scoring students who were remediated and those who were not remediated, further research is needed to determine which E2 implementation strategies are most effective in improving NCLEX pass rates.

A nursing shortage of unprecedented scope will grip the nation by 2020, fueled by the retirement of almost half of the current "baby-boomer" generation of nurses who are being replaced by fewer numbers of the next generation.1,2 Adding to this problem is a decreased number of candidates taking nursing licensure examinations. In 1999, there were 113,247 candidates who took the NCLEX-RN compared to 116,713 during 1998, which represents a decrease of nearly 3%. An even greater decrease occurred in the number of practical nurse (PN) candidates, with 47,592 PN candidates testing for the NCLEX-PN in 1999, compared to 50,230 in 1998, representing a decrease of 5.3%.3 Not only are the numbers of nursing students decreasing, but so are the pass rates on nursing licensure examinations. In the first four quarters following adoption of a higher passing standard by the National Council of State Boards of Nursing, Inc. (NCSBN), the NCLEX-RN pass rate dropped to 84.2% for United States-educated first-time takers, a decrease of 3.7%.4 When a similar policy was adopted one year later for practical nurses, the NCLEX-PN pass rate dropped to 85.69% in the four quarters following implementation of the increased passing standard, a decrease of 1.62%.5 Recruitment of a more culturally diverse candidate pool has been described as a possible solution to the documented nursing shortage.6-8 However, previous studies indicate that ethnic minorities and foreign-born nursing students experience higher attrition rates and higher NCLEX failure rates than do nonminority English-speaking students.9-11

NCLEX failure not only contributes to the nursing shortage by delaying new graduates' entrance into the work force, but it also has personal and financial consequences for the candidate, nursing faculties and administrators, and prospective employers. Unsuccessful NCLEX candidates suffer loss of potential wages that might have been earned if they were licensed nurses. In addition to the financial consequences of licensure failure for unsuccessful NCLEX candidates, Vance and Davidhizar12 reported that failure results in an even greater emotional loss, characterized by feelings of inadequacy and grief.

NCLEX pass rates affect a school's reputation, thereby having consequences for nursing faculties and administrators. The public's view of a school can affect its ability to recruit new nursing program students. The two national nursing accrediting agencies, the National League for Nursing Accrediting Commission (NLNAC) and the Commission on Collegiate Nursing Education (CCNE), as well as the approval standards of most states' Boards of Nurse Examiners (BNE), use pass-rate data as benchmarks for program effectiveness.13-15 A consistent pattern of low NCLEX pass rates can potentially place a nursing program's accreditation or state approval at risk.

NCLEX failures also have financial consequences for healthcare employers because hiring and orienting new graduates is a costly institutional expenditure. Messmer et al.16 estimated nursing orientation costs to be between $20,000 and $50,000 per person. Licensure failure negates any benefit of such expenditures because candidates failing the NCLEX cannot assume the licensed nursing positions for which they were hired and oriented.

Attempts have been made to identify students at risk of failing the NCLEX so that remediation to promote NCLEX success might be initiated. Several authors have described remediation strategies to improve NCLEX pass rates.11,17 Others have described academic predictors of NCLEX success, such as admission criteria, including SAT and ACT scores, high school percentile rank and pre-nursing science and liberal arts course grades, as well as nursing grades.9,11,12,18-23 Nonacademic factors, such as age, emotional state, ethnicity, family responsibilities, non-English primary language, stress, self-esteem, time management, and test anxiety, are also related to NCLEX success, but they have generally been less predictive than academic factors.9,10,24,25

Factors associated with the highest predictability of NCLEX success, such as cumulative grade-point average, grades in senior-level nursing courses, and outcomes on NCLEX readiness tests, occur at the end of the nursing program. Obtaining information this late in a student's curriculum leaves little time for NCLEX preparation, much less specific remediation of identified deficit areas. Several comprehensive examinations have demonstrated a moderate to high ability to predict NCLEX success.9,17,20,21,26,27 However, most are paper-and-pencil administered tests, and results are not available soon enough to use the data as a remediation resource. Also, paper-and-pencil tests do not provide students with practice using the NCLEX computerized adaptive testing (CAT) format, which is valuable in preparing students for the keystroke mechanics associated with taking licensure examinations. For this reason, nursing faculties have increasingly chosen computerized instruments to help prepare students for NCLEX.18,19,24,28 Software companies have responded by providing a variety of computer products, with expanded use of computerized testing now becoming common within nursing schools.24,28

Health Education Systems, Inc. (HESI), produces a variety of nursing examinations, all of which are computerized. The HESI Exit Exam (E2) is a secure, comprehensive examination that has demonstrated a high degree of accuracy in predicting NCLEX success as well as identifying those students at risk of failing the NCLEX.29-31 Because it is a computerized examination, students receive their scores immediately upon examination completion so that a specific remediation plan can be quickly implemented, if needed. Using a nationwide database, the E2 provides an analysis of student performance that allows schools to compare their nursing program with programs throughout the United States. The E2 also uses the same keystrokes as the NCLEX, thereby simulating the mechanics of NCLEX administration. More than one version of the E2 is available so that those students who have been remediated can be retested to evaluate the effectiveness of their remediation programs.

This follow-up study once again focused on the accuracy of the E2 in predicting NCLEX success. Previously presented findings of year I (1996-1997) by Lauchner et al.29 and year II (1997-1998) by Newman et al.31 were compared with data acquired from year III (1998-1999). In years I and II, the E2 was significantly more accurate (P = .05) in predicting NCLEX success when the administration of the examination was monitored or proctored than when it was not.29,31 In year II, the NCLEX outcomes of low-scoring students were examined for the first time. Findings indicated that significantly more (P = .001) low-scoring E2 students failed the NCLEX than did high-scoring E2 students. However, when the E2 was used as a guide for remediation, significantly fewer (P = .05) of the low-scoring E2 students failed the licensing examination than when the E2 was not used as a benchmark for remediation.31

The purpose of this study was to examine the accuracy of the E2 in predicting NCLEX success in year III and to compare year III data with year I and year II data. The authors examined year III data based on recommendations made by Lauchner et al.29 (year I) and Newman et al.31 (year II) for further research. Four specific factors were examined: (1) predictive accuracy, (2) accuracy of monitored and unmonitored administrations, (3) low-scoring students' outcomes on NCLEX, and (4) low-scoring students' outcomes on the NCLEX when the E2 was used as a benchmark for remediation.

INSTRUMENT DESCRIPTION

The HESI RN and PN versions of the E2 consist of 160 items, 10 of which are pilot items and do not count toward the student's score. All HESI examinations are computerized, Windows based, and available in diskette or network versions.31,32 ParSYSTEM, a test analysis and item-banking software program distributed by Scantron Corporation, was used to conduct an item analysis for each test administered by HESI. The item difficulty and item discrimination (point biserial correlation coefficient) were calculated and stored with each test item. The reliability of each test administered was calculated using the Kuder-Richardson Formula 20 (KR-20). In year III, the average KR-20 was 0.74 for the RN group and 0.75 for the PN group. Each version of the E2 was developed from test banks containing questions written specifically for HESI by nurse educators and clinicians from across the United States. These writers used the model described by Morrison et al.33 and Morrison and Free34 to develop critical-thinking test items. The E2 follows the test blueprints for the NCLEX-RN and NCLEX-PN developed by the NCSBN.3 The HESI Predictability Model (HPM), a proprietary mathematical model, was used to calculate students' probability of passing NCLEX. The HPM was applied to tests as a whole and within each subject area tested. The E2 has been described as highly predictive of NCLEX success in all types of nursing programs: ADN, BSN, diploma, and PN.29,31

DEFINITION OF TERMS

To provide consistency, this follow-up study defined terms as they were defined in the year I and year II studies.29,31 Participants were those who took the E2 for the first time during year III within 6 months before graduation from an RN or a PN school of nursing. Probability scores were the total scores on the E2 that were calculated using the HPM and described the student's probability of passing the licensure examination. High-scoring students were those whose probability scores fell between 90 and 99 and who were predicted to pass their licensure examinations. The year III sample included 2,206 (39.48%) RN and 228 (33.09%) PN high-scoring students who were predicted to pass their respective licensure examinations. Low-scoring students were those whose probability scores were 69 or below. The year III sample consisted of 321 (5.74%) RN and 65 (9.43%) PN students who were classified as low scoring. Monitoring was defined as proctoring during the administration of the E2 by nursing faculty members or their designees who were charged with maintaining examination security.

DESCRIPTION OF THE SAMPLE

The year III sample consisted of the 6,560 students (5,851 RN and 709 PN) who took the E2 in year III. Aggregate data for year III were obtained from 124 RN schools in 35 states and 24 PN schools in 8 states. RN and PN program administrators at the 148 schools that purchased the E2 during the 1998-1999 academic year received questionnaires with a cover letter inviting their participation in the study. A list of the school's students was included in the mailing of the questionnaires, with the names of those predicted to pass (high-scoring) highlighted in one color and the names of the low-scoring students highlighted in another color. Program administrators were asked how many of the high-scoring and low-scoring students failed the NCLEX-RN or NCLEX-PN. Names of the students were not needed, only the total number of failures from each group. Of the 124 RN schools that were sent questionnaires, 120 (96.77%) responded, and 23 (95.83%) of 24 PN schools responded. Students who were retested on the E2 following a remediation program were not included in the study. Therefore, the total study sample consisted of 6,277 students: 5,588 from RN programs (95.51% of the total RN students tested) and 689 from PN programs (97.18% of the total PN students tested). The RN study sample of 5,588 consisted of 3,651 (65.34%) ADN; 1,921 (34.38%) BSN; and 16 (0.28%) diploma students.

In year I, Lauchner et al.29 recommended that future replication studies include larger sample sizes, and the sample size increased each successive year. The total sample size for year I was reported by Newman et al.31 as 2,809 students; however, the reporting sample for year I was only 2,725 because 84 of the total did not respond. Counting respondents only, the reporting sample was 3,752 in year II and 6,277 in year III. Almost half of all data gathered across the three years were collected in year III. Table 1 describes the sample by years and types of programs.

 
Table 1

DATA ANALYSIS

Predictive Accuracy

In year III, the study sample consisted of 5,588 RN and 689 PN students. A total of 2,206 (39.48%) RN students was predicted to pass the NCLEX-RN without additional preparation: 1,573 (43.08%) of the ADN students, 622 (32.38%) of the BSN students, and 11 (68.75%) of the diploma students. A total of 228 (33.09%) PN students was identified as high scoring and predicted to pass the NCLEX-PN. Of the 2,434 RN and PN students who were predicted to pass the licensure examinations, 54 (2.22%) failed. In the RN group, 52 (2.36%) of 2,206 high-scoring students failed the NCLEX-RN, and in the PN group, 2 (0.88%) of the 228 high-scoring students failed the NCLEX-PN. In the ADN group, 40 (2.54%) of the high-scoring students failed the NCLEX-RN, as did 12 (1.93%) of the BSN students. None of the high-scoring diploma students failed.

In year III, predictive accuracy of the E2 was calculated in the most stringent manner, by examining only the predicted-to-pass group: the number that failed the NCLEX was divided by the number predicted to pass, and the resulting quotient was subtracted from one. Using this formula, the predictive accuracy of the E2 for the combined RN and PN group in year III was 97.78%; for the RNs only, 97.64%; and for the PNs only, 99.12%. Using a chi-square goodness-of-fit test, the year III predictive accuracy (97.78%) was not significantly different from that of year I (97.41%) or year II (96.49%). As in years I and II, the predictive accuracy of the E2 did not differ significantly among ADN, BSN, diploma, and PN programs (Table 2).
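
The accuracy formula described above can be written out explicitly. A minimal sketch using the counts reported in this study:

```python
# Sketch of the year III accuracy calculation: the number of
# predicted-to-pass students who failed the NCLEX, divided by the number
# predicted to pass, subtracted from one (expressed as a percentage).

def predictive_accuracy(predicted_to_pass, failed):
    return (1 - failed / predicted_to_pass) * 100

print(round(predictive_accuracy(2434, 54), 2))  # combined RN and PN: 97.78
print(round(predictive_accuracy(2206, 52), 2))  # RN only: 97.64
print(round(predictive_accuracy(228, 2), 2))    # PN only: 99.12
```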

 
Table 2. Summary of Findings by Year

Monitoring

Though the E2 was highly predictive of NCLEX success in both years I (97.41%) and II (96.49%), it was significantly more accurate when the test administration was monitored than when it was not monitored. In year III, the size of the monitored group (n = 5,831) was larger than the size of the monitored groups in years I (n = 1,971) and II (n = 2,930). The size of the unmonitored group in year III was smaller (n = 446) than the size of the unmonitored groups in year I (n = 754) and year II (n = 822). Again, in year III, the E2 was highly predictive of NCLEX success (97.78%). However, contrary to findings in years I and II, there was no significant difference in the predictive accuracy of monitored (97.95%) and unmonitored (95.88%) test administrations in year III.

Low-scoring Students

In year II, data were collected and analyzed regarding low-scoring E2 students' outcomes on the NCLEX. Significantly more of the year II low-scoring students failed the licensure examination in both the RN and the PN groups than did the high-scoring students who were predicted to pass the examination.31 In year III, 52 (2.36%) of the 2,206 high-scoring RN students and 2 (0.88%) of the 228 high-scoring PN students failed the NCLEX, whereas 144 (48.81%) of the 295 low-scoring RN students and 20 (30.77%) of the 65 low-scoring PN students failed the NCLEX. Again in year III, significantly more low-scoring students failed the NCLEX than did high-scoring students who were predicted to pass (χ2 = 818.775, P = .001).
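
The reported statistic can be checked against the counts in this paragraph. The sketch below assembles the combined RN and PN pass/fail counts into a 2 x 2 table and computes a Pearson chi-square (assuming, as the reported value suggests, no continuity correction):

```python
# Hedged sketch: reproducing the year III chi-square statistic from the
# pass/fail counts given above (RN and PN combined). A Pearson chi-square
# with no continuity correction on this 2 x 2 table comes out very close
# to the reported value of 818.775.

observed = [
    [2434 - 54, 54],    # high-scoring: passed, failed (2,206 RN + 228 PN)
    [360 - 164, 164],   # low-scoring: passed, failed (295 RN + 65 PN)
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

chi_square = 0.0
for i in range(2):
    for j in range(2):
        expected = row_totals[i] * col_totals[j] / n
        chi_square += (observed[i][j] - expected) ** 2 / expected

print(round(chi_square, 3))  # close to the reported 818.775
```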

Remediation

In year II, participants were asked if the E2 was used as a benchmark for remediation. Because few of the PN schools used the E2 as a benchmark for remediation, PN data regarding remediation were deleted from the year II analysis. Due to suspected spurious data in the unmonitored low-scoring RN group, and because monitoring was found to be a significant factor in the predictive accuracy of the E2, the year II analysis was confined to the monitored RN group. Again in year III, there were few PN low-scoring students; therefore, the PN low-scoring sample was excluded from further analysis. In the RN group, several low-scoring unmonitored students scored lower than one might score by chance alone, indicating that these students probably did not take the examination seriously and simply chose answers at random. Therefore, although monitoring was not a significant factor in the predictive accuracy of the E2 in year III, low-scoring unmonitored students' data were removed from the analysis of low-scoring students' scores in relation to remediation and NCLEX success.

Of the 295 low-scoring RN students, 22 were from schools that did not monitor administration of the E2, leaving a total of 273 low-scoring monitored RN students. Of the 273 monitored low-scoring RN students, 163 participated in a remediation program, and 85 (52.15%) of the students who were remediated passed the NCLEX-RN, whereas 78 (47.85%) failed. Of the 110 low-scoring RN students who did not participate in a remediation program, 55 (50%) were successful on the NCLEX-RN, and 55 (50%) failed. Unlike year II, in year III, there was no significant difference in the NCLEX-RN pass rate of low-scoring students who were remediated and those who were not remediated.
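
As an illustration of the nonsignificant result described above, a Pearson chi-square can be computed from the reported pass/fail counts; this calculation is a sketch for illustration, not a statistic reported in the study.

```python
# Hedged check of the remediation comparison above: a Pearson chi-square
# (no continuity correction) on remediated vs. non-remediated low-scoring
# monitored RN students. The statistic falls far below 3.84, the critical
# value for 1 df at alpha = .05, consistent with the reported
# nonsignificant difference.

observed = [
    [85, 78],   # remediated: passed, failed
    [55, 55],   # not remediated: passed, failed
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

chi_square = 0.0
for i in range(2):
    for j in range(2):
        expected = row_totals[i] * col_totals[j] / n
        chi_square += (observed[i][j] - expected) ** 2 / expected

print(round(chi_square, 3))  # roughly 0.121, well below 3.84
```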

DISCUSSION AND RECOMMENDATIONS

As recommended by Lauchner et al.29 following the year I study, the sample size was increased in years II and III. The year III sample consisted of 2,525 more students than year II and 3,552 more students than year I. Analysis of year III data and comparison of year III with years I and II support previous studies' findings regarding E2 accuracy in predicting NCLEX success. In year III, the predictive accuracy of the E2 was calculated by dividing the total number of failures by the number predicted to pass and subtracting that quotient from one, a more stringent method than those used to calculate predictive accuracy in years I and II. Despite the use of this more stringent calculation method, the E2 continued to exhibit an extremely high degree of accuracy in predicting NCLEX success, regardless of the type of program tested: ADN, BSN, diploma, or PN. Additionally, no significant difference was found in the predictive accuracy of the E2 among the three years studied. Based on data obtained from 12,754 subjects, collected during a 3-year period, it can be concluded that the E2 is a highly accurate predictor of NCLEX success.

Monitoring, which was found to increase the predictive accuracy of the E2 in years I and II, was not a significant factor in year III. This finding might be related to the smaller number of unmonitored students in year III (446 students [7.11%]) as compared with 754 students (27.67%) in year I and 822 (21.91%) in year II. It is possible that faculties were influenced by previously published research findings associating monitoring with increased E2 accuracy, thereby reducing the number of unmonitored administrations in year III. Also, previous authors studied monitoring as a measure of the seriousness with which the E2 was administered. However, the definition of monitoring may have been too broad, because monitoring referred to proctoring by nursing faculty members or their designees who were charged with maintaining security. Some schools reported that computer laboratory personnel were present during the testing, but the degree to which security was maintained may have varied.

HESI reports that many schools have recently instituted progression policies related to E2 scores. These policies vary among the schools implementing them, but they generally require students to achieve a satisfactory score on the E2 as a requirement for graduation or as a requirement for the school's approval to take the licensure examination. If students do not achieve a satisfactory score on the E2, they are required to remediate and retest using a different E2. Further research is needed to study the value of progression policies in relation to NCLEX success.

In the year II study, Newman et al.31 recommended that further research be conducted to evaluate the effectiveness of various types of remediation programs. Contrary to the findings of year II, in year III, there was no significant difference in NCLEX-RN success between low-scoring students who were remediated and those who were not. As with the definition of monitoring, the definition of remediation may need to be more precise; remediation was broadly interpreted to be any type of additional study that was based on E2 findings. Future research that focuses on E2 implementation strategies is likely to provide more information about the value of the E2 as a remediation resource than studying the effects of monitoring alone. Additionally, it would be worthwhile to compare the pass rates of schools administering the E2 with the average national NCLEX pass rates, as well as to compare the pass rates of schools that use the E2 for progression with those that use it only as a benchmark for remediation.

CONCLUSIONS

In the midst of declining student enrollment in the nation's schools of nursing, declining NCLEX pass rates, and a shrinking nursing workforce, success for first-time NCLEX candidates remains a top priority for nursing faculties. Based on the data obtained from this study, as well as from the previous two studies, the E2 has been determined to be highly accurate in predicting NCLEX success. Because it is a computerized examination, results are obtained immediately and can serve as a benchmark for remediation to assist candidates in becoming successful first-time takers of the NCLEX. Success on the licensure examination enables the candidates to enter the workforce, thereby helping to ameliorate the nursing shortage. Though no significant difference was found in NCLEX-RN pass rates of the year III low-scoring students who were remediated and those who were not remediated, further research is needed to determine which E2 implementation strategies are most effective in improving NCLEX pass rates.

References

1. Buerhaus P, Staiger D, Auerbach D. Implications of an aging registered nurse workforce. JAMA. 2000;283:2948-2954.

2. United States Census Bureau. Statistical abstract of the United States. Number 196 health professions schools-number, enrollment, and graduates: 1980 to 1997. Available at: http://www.census.gov/prod/www/statistical-abstract-us.html. Accessed July 2000.

3. National Council of State Boards of Nursing, Inc. Web site. Available at: http://www.ncsbn.org. Accessed July 2000.

4. National Council of State Boards of Nursing, Inc. Passing rate change for the NCLEX-RN examination an expected outcome. Available at: http://www.ncsbn.org/files/publications/issues/vol202/passing202.asp. Accessed July 2000.

5. National Council of State Boards of Nursing, Inc. NCLEX-RN and NCLEX-PN: examination statistics. Available at: http://www.ncsbn.org/research/nclexstats.nclex.asp. Accessed July 2000.

6. Grossman D, Massey P, Blais K, Geiger E, Lowe J, Pereira O, et al. Cultural diversity in Florida nursing programs: a survey of deans and directors. J Nurs Educ. 1998;37:22-26.

7. American Association of Colleges of Nursing. Dramatic reforms required to head off cycles of nursing shortages, JAMA editorial recommends. Available at: http://www.aacn.nche.edu/Media/NewsReleases/newslist.htm. Accessed June 2000.

8. American Association of Colleges of Nursing. Amid nursing shortages, schools employ strategies to boost enrollment. Available at: http://www.aacn.nche.edu/Publication/issues/ib600wb.htm. Accessed June 2000.

9. Endres D. A comparison of predictors of success on NCLEX-RN for African American, foreign-born, and white baccalaureate graduates. J Nurs Educ. 1997;36:365-371.

10. Arathuzik D, Aber C. Factors associated with National Council Licensure Examination-Registered Nurse success. J Prof Nurs. 1998;14:119-126.

11. Frierson H, Malone B, Shelton P. Enhancing NCLEX-RN performance: assessing a three-pronged intervention approach. J Nurs Educ. 1993;32:222-224.

12. Vance A, Davidhizar R. Strategies to assist students to be successful the next time around on the NCLEX-RN. J Nurs Educ. 1997;36:190-192.

13. American Association of Colleges of Nursing. CCNE accreditation standards. Available at: http://www.aacn.nche.edu/Accreditation/standrds.htm. Accessed July 2000.

14. National League for Nursing Accrediting Commission. Accreditation Manual 1999 for Post Secondary and Higher Degree Programs in Nursing. New York: National League for Nursing Accrediting Commission; 1999.

15. National League for Nursing Accrediting Commission. Accrediting Standards and Criteria for Academic Quality of Postsecondary and Higher Degree Programs in Nursing. Available at: http://www.accrediting-comm-nlnac.org/2am_stds&crit_fnl.htm#III. Accessed July 2000.

16. Messmer P, Abelleira A, Erb P. Code 50: an orientation matrix to track orientation cost. J Nurs Staff Dev. 1995;11:261-264.

17. Ross B, Nice A, May F, Billings D. Assisting students at risk: using computer NCLEX-RN review software. Nurs Educ. 1996;21:39-43.

18. Waterhouse J, Bucher L, Beeman P. Predicting NCLEX-RN performance: cross-validating an identified classification procedure. J Prof Nurs. 1994;10:255-260.

19. Waterhouse J, Carroll M, Beeman P. National Council Licensure Examination success: accurate prediction of student performance on the post-1988 examination. J Prof Nurs. 1993;9:278-283.

20. Alexander J, Brophy G. A five-year study of graduates' performance on NCLEX-RN. J Nurs Educ. 1997;36:443-445.

21. Barkley T, Rhodes R, Dufour C. Predictors of success on the NCLEX-RN. Nurs Health Care Perspect. 1998;19:132-137.

22. Heupel C. A model for intervention and predicting success on the national council licensure examination for registered nurses. J Prof Nurs. 1994;10:57-60.

23. Wall B, Miller D, Widerquist J. Predictors of success on the newest NCLEX-RN. West J Nurs Res. 1993;15:628-643.

24. Billings D, Hodson-Carlton K, Kirkpatrick J, Aaltonen P, Dillard N, Richardson V, et al. Computerized NCLEX-RN preparation programs: a comparative review. Comput Nurs. 1996;14:272-286.

25. Poorman S, Martin E. The role of nonacademic variables in passing the national council licensure examination. J Prof Nurs. 1991;7:25-32.

26. Breyer R. The comprehensive nursing achievement test as a predictor of performance on the NCLEX-RN. Nurs Health Care. 1984;5:193-195.

27. Schmidt A. An approximation of a hierarchical logistic regression model used to establish the predictive validity of scores on a nursing licensure exam. Educ Psychol Meas. 2000;60:463-478.

28. Riner M, Muller C, Ihrke B, Smolen R, Wilson M, Richardson V, et al. Computerized NCLEX-RN and NCLEX-PN preparation programs: comparative review, 1997. Comput Nurs. 1997;15:255-267.

29. Lauchner K, Newman M, Britt R. Predicting licensure success with a computerized comprehensive nursing exam: the HESI exit exam. Comput Nurs. 1999;17:120-125.

30. Hanks C. Indicia: letters to the editor. Comput Nurs. 1999;17:241-246.

31. Newman M, Britt R, Lauchner K. Predictive accuracy of the HESI exit exam: a follow-up study. Comput Nurs. 2000;18:132-136.

32. Morrison S. Health Education Systems, Inc., Web site. Available at: http://www.healthedsystems.com. Accessed September 2000.

33. Morrison S, Smith P, Britt R. Critical Thinking and Test Item Writing. Houston, Tex: Health Education Systems, Inc.; 1996.

34. Morrison S, Free K. Writing multiple-choice test items that promote and measure critical thinking. J Nurs Educ. 2001;40:17-24.