Authors

  1. Lauchner, Kathryn A. PhD, RN
  2. Newman, Mary PhD, RN
  3. Britt, Robin B. EdD, RN


REPLY TO RESPONDENT

We are pleased with the many responses to our article "Predicting Licensure Success with a Computerized Comprehensive Nursing Exam: The HESI Exit Exam"1 and wish to express our appreciation to the publishers of Computers in Nursing for providing us the opportunity to reply to one respondent. Since this was a blind review, we will refer to the author(s) as "respondent." We have addressed the model and data sources used by the respondent in the calculations described as well as specific quotes made by the respondent.

 

MODEL AND CALCULATIONS PRESENTED BY RESPONDENT

Based on a public health model2 designed to measure the efficacy of biological screening tests used for disease detection, the respondent calculated "predicted failures" on the NCLEX-RN, substituting licensure failure for a disease entity such as cancer or tuberculosis. Though we wonder how those who might fail the licensing exam would feel about being equated with a disease process, we agree that the model has some degree of applicability to predicting licensure failures. However, the calculations presented by the respondent were based on assumptions about possible NCLEX-RN pass rates, and such assumptions were unnecessary because these data are readily available and regularly updated at the National Council of State Boards of Nursing (NCSBN) web site.3 According to data provided at this site, the pass rate on the NCLEX-RN since implementation of the current test blueprint,4 which covers the past five quarters, from the second quarter of 1998 through the second quarter of 1999, was 84.97%. Using the model the respondent described, the data presented in our study regarding the predictive accuracy of the E2, and the current NCLEX-RN pass rate data reported by the NCSBN, the E2 was determined to be 91.15% accurate in predicting NCLEX-RN failure, referred to by the respondent as the instrument's sensitivity. Furthermore, our study indicated that the E2 was significantly more accurate when administration of the exam was monitored, or proctored. Using the data from the monitored group, the sensitivity, or the ability of the E2 to predict failures, was 96.42%. These findings, based on current actual data reported by the NCSBN and on data uncontaminated by the extraneous variables associated with the unmonitored group, refute the respondent's statement that the E2 was "...very weak at predicting those likely to fail (the NCLEX-RN)."
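
For readers unfamiliar with the screening-test model,2 its four measures can be summarized as follows; the TP/FP/TN/FN shorthand is ours, introduced here only for convenience, with licensure failure playing the role of the condition being screened for:

\[
\text{sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{specificity} = \frac{TN}{TN + FP},
\]
\[
\text{positive predictive value} = \frac{TP}{TP + FP}, \qquad
\text{negative predictive value} = \frac{TN}{TN + FN},
\]

where TP denotes students predicted by the E2 to fail who did fail the NCLEX-RN, FP those predicted to fail who passed, TN those predicted to pass who passed, and FN those predicted to pass who failed.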

 

Actual NCLEX-RN pass rate data were also used to calculate what the model describes as the instrument's specificity, that is, the percentage of students who passed the NCLEX-RN who had been correctly predicted by the E2 to pass. For the total group, the instrument's specificity was 55.92%, and for the monitored group, it was 51.30%.

 

The positive predictive value considers only those students who were identified by the E2 as being at risk of failing the licensing exam and reflects the percentage of these students who actually did fail. Both the positive predictive value and the specificity indicate that the E2 erred on the side of caution. Unlike identifying the existence of a disease such as cancer or tuberculosis, the E2 identified a potential risk of failing the licensing exam. It also identified the specific content areas where the risk existed so that remediation could be implemented to decrease the risk of licensure failure. Although no one wants to be told he or she has a disease that he or she does not have, identifying knowledge weaknesses through the use of the E2 can produce a positive outcome by encouraging students to obtain additional knowledge, thus reducing their risk of licensure failure. In the model the respondent used to test the efficacy of biological screening tools, an analogous measurement would be the identification of an individual's specific risk behaviors that could be changed to prevent the development of a disease such as cancer or tuberculosis. Such a test is likely to be valuable in preventing disease, just as we have found the E2 valuable in helping to prevent licensure failure.

 

The negative predictive value identified the accuracy of the E2 in predicting licensure success. This was, in fact, the stated purpose of our study. The respondent, however, chose to examine only the RN data. Because the same model is used to make predictions for PN users of the E2, our study also investigated a PN sample, and the findings we reported included these data. For the RN group only (monitored and unmonitored), the accuracy of the E2 in predicting licensure success was 97.28%; for the monitored RN group, it was 98.78% (Table 6).

  
Table 6. Measure of E2

Respondent's Quotes

We would like to comment on six specific quotes made by the respondent.

 

Respondent's quote: "Neither Dr. Lauchner and her co-authors nor I address the reliability of such tests."

 

Though the respondent chose not to address reliability, we did:

 

"Each school group that administered an E2 was analyzed using ParSYSTEM's test analysis program. The reliability coefficient, the Kuder-Richardson Formula 20 (KR20), for each group was calculated within ParSYSTEM and described in the test analysis report. These reports were reviewed and data were tabulated for all administrations of the E2 during the academic year studied, 1996-1997. Fourteen different E2 were administered to 80 groups at 62 schools. The KR20 for these administrations ranged from 0.34 to 0.91, and the average KR20 for the 80 administrations was 0.85."1

 

Respondent's quote: "Once again, the best data about probable success on the NCLEX exam we have may be our clinical judgment of students' intelligence, competence, and motivation gained in 2 years of teaching."

 

Though we agree that most faculty members do make sound judgments, their exposure to a particular student is usually limited, and a student's performance may vary from class to class. Faculty may see only a small sample of a student's ability, which may be influenced by the student's personal interests and outside demands as well as by intelligence, competence, and motivation. The predictive accuracy of faculty clinical judgment of students' success would need to be evaluated and compared with the E2 to determine which of the two provides the better information regarding NCLEX success. Our personal experience is that E2 findings usually substantiate faculty judgments of students' abilities, and we believe it would make an interesting research project to compare faculty judgments of students' ability to pass the NCLEX with E2 findings. However, even if a high positive correlation between these two factors did emerge, we do not believe such a correlation would negate the usefulness of an examination that compares a student with all other students across the United States who have answered the same test items. An outside examination of students' readiness for the licensure exam can provide a benchmark for their preparedness so that remediation can be initiated as soon as possible. We have found the E2 report to be an effective guide in conducting such student remediation.

 

Respondent's quote: "In each situation, the test administrator and the person taking the test must decide if unnecessary worry from being mistakenly told that you may have a disease or that you may fail the real exam is worth experiencing in order to correctly identify a large number of people that have a reason to worry so they can take corrective action."

 

We believe it is a fact of life that individuals have areas of strength as well as areas that need improvement. The E2 provided a means of identifying areas needing improvement so that action could be taken to strengthen them. Billings et al,5 whom we quoted in our article, reported, and our personal experience confirms, that failure on the licensure exam can result in lost employment opportunities and a sense of low self-esteem. It is our belief that a tool as accurate as the E2 is useful in helping students avoid the negative consequences associated with failure on the licensure exam.

 

Respondent's quote: "Of 2,555 RN students who took both tests and for whom schools would provide data, 1,248, or 48.8%, were predicted to pass. Thirty-four (2.7%) of the 1,248 students predicted to pass the NCLEX by their scores on the E2 failed the NCLEX."

 

The statement is cited as though it were a quote from our article; it is not. We stated: "The responding RN schools represented 2,555 or 97.78% of the RN students who took the E2 during the study period."1 A 97.78% return on a mail-out questionnaire is considered by most researchers to be highly representative. Though the respondent's statement "for whom schools would provide data" is accurate, it carries a negative connotation and does not acknowledge the high response rate to our questionnaire. Also, while it is true that 34 students who were predicted to pass failed the licensure exam, we addressed this finding in relation to monitoring, reporting that a significant difference (χ2 = 4.98, P = .05) was found in the accuracy of monitored and unmonitored administrations of the E2. Only 10 students in monitored situations who were predicted by the E2 to pass failed the licensing exam.
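
As a simple arithmetic check, taking the respondent's counts at face value (1,248 students predicted to pass, 34 of whom failed), the negative predictive value for the combined RN group works out to the figure we reported above:

\[
\text{negative predictive value} = \frac{1248 - 34}{1248} = \frac{1214}{1248} \approx 97.28\%.
\]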

 

Respondent's quote: "As Lauchner et al concluded, when most people pass a licensing exam, the HESI and probably other tests can reassure those most likely to pass that they will pass."

 

We presented no such conclusion and never suggested that the test would reassure those most likely to pass that they would pass.

 

Respondent's quote: "Neither students nor faculty can expect a pretest to accurately predict who will fail unless we admit so many unqualified students and/or provide such low-quality education that more than 20% of the students fail the licensing exam."

 

Recognizing that nurses were being required to perform at a higher level of competence, the NCSBN raised the passing standard on the licensing exam,4 a measure designed to protect the public. Since implementation of the increased passing standard, the national failure rate on the NCLEX-RN has been 15.03%. The current failure rate is thus approaching the "more than 20%" that the respondent attributed to "unqualified students" and "low-quality education." We believe that faculty are interested in discovering methods of increasing pass rates, one of which may be identifying at-risk NCLEX candidates and implementing remediation before they take the licensing exam.

 

Summary and Conclusions

We welcome every opportunity to enter into a worthwhile intellectual exchange of ideas and discussion of data analyses that might be useful in promoting the evaluation of nursing research. Using the model presented by the respondent, which is designed to test the efficacy of biological screening tests, and actual data provided by the NCSBN, we found that the E2 was indeed accurate in predicting failures on the NCLEX-RN. While it was not the purpose of our study to examine the E2 in terms of predicting licensure failures, we appreciate the respondent's comments regarding such an examination. Research is currently underway to investigate the usefulness of various remediation methods in helping low-scoring E2 students become successful first-time candidates on the licensing exam. We believe these endeavors are far more valuable than predicting NCLEX failures, which we consider a negative process, one that might promote a feeling of defeat among those who receive the label of "predicted failure." Our personal use of the E2 as a guide to remediation has a more positive connotation and, for us, has had a positive outcome.

 

Kathryn A. Lauchner, PhD, RN

 

Professor of Nursing, Austin Community College

 

Mary Newman, PhD, RN

 

Assistant Clinical Professor of Nursing, Texas Woman's University

 

Robin B. Britt, EdD, RN

 

Associate Professor of Nursing, Texas Woman's University

 

REFERENCES

 

1. Lauchner K, Newman M, Britt R. Predicting licensure success with a computerized comprehensive nursing exam: the HESI Exit Exam. Comput Nurs. 1999;17:120-125.

 

2. U.S. Preventive Services Task Force. Guide to clinical preventive services: an assessment of the effectiveness of 169 interventions. Report of the U.S. Preventive Services Task Force. Baltimore: Williams & Wilkins; 1989.

 

3. National Council of State Boards of Nursing web site. http://www.ncsbn.org. Accessed August 16, 1999.

 

4. National Council of State Boards of Nursing, Inc. NCLEX-RN Examination: Test Plan for the National Council Licensure Examination for Registered Nurses, Effective Date: April 1998. 1997.

 

5. Billings D, Hodson-Carlton K, Kirkpatrick J, Aaltonen P, Dillard N, Richardson V, Siktberg L, Vinten S. Computerized NCLEX-RN preparation programs: a comparative review. Comput Nurs. 1996;14:272-286.