Authors

  1. Hensel, Desiree PhD, RN, PCNS-BC, CNE, CHSE

Article Content

Beginning in April 2023, the National Council Licensure Examination (NCLEX) will be revised to include new methods to test clinical judgment skills.1 The Next Generation NCLEX (NGN) will feature variations of 5 new item types (extended multiple response, drag-and-drop, drop-down, matrix, and highlighting) used to answer unfolding case studies and stand-alone clinical judgment questions.1 A recent study found that 64% of faculty felt only slightly or not at all prepared to incorporate NGN items into their course examinations, and 71% believed their institutions were only slightly or not at all prepared to do the same.2 The dilemma many faculty now face is how to approach testing to ensure they are graduating students ready to take the NCLEX when they are not confident in their own NGN-style item-writing skills. Originally written to address fairness issues in high-stakes examinations, the National League for Nursing fair testing guidelines3 can provide guidance to faculty considering changes to their testing practices.

 

First and foremost, faculty should be mindful of their ethical responsibility to ensure that tests are fair, consistent, and valid.3 An underlying assumption of fair testing is that a test should evaluate what students have been taught. Faculty must teach clinical judgment before they test students' skills. Emphasizing active learning, giving opportunities to practice using clinical judgment, and using formative assessment are much more important in preparing students for the NGN than focusing on how to include every new item type on course examinations.

 

Nevertheless, faculty are responsible for evaluating student competency.3 This includes evaluating clinical judgment abilities. I have conducted multiple faculty workshops on learning to write NGN-style items, and the process is not intuitive. There are at least 3 new item-writing skills faculty should master: creating an unfolding scenario from a medical record, writing questions that address the National Council of State Boards of Nursing (NCSBN) Clinical Judgment Measurement Model, and formatting new item types. Given that no data are available to guide recommendations on what proportion of course examinations should be NGN items, faculty should make decisions about testing practices based on their testing goals, writing skills, available resources, and general fair testing principles.

 

One choice may be to continue testing using current methods and rely on teaching strategies and nongraded practice questions from various sources to help prepare students for the NGN. This approach limits the chance that students are evaluated on poorly constructed examinations, but it probably falls short of the fair testing goal that tests are used to support learning, improve teaching, and guide program improvements.3 A second option is to use vendor-created standardized examinations for NGN preparation. However, failing to make any updates to current teaching and testing practices equates to delegating the faculty responsibility of preparing students for the NCLEX to outside sources. It is also unfair to expose students to case studies and new item types only in high-stakes situations. Another choice may be to use questions from vendor-created test banks for course examinations. Aside from security issues, test bank items have historically been laden with writing flaws.2 Faculty must still determine whether a test bank item should be included on an examination, and this requires understanding NGN item-writing principles well enough to make that decision.

 

Ultimately, the best practice is for faculty to learn to write their own NGN-style items, and this takes time and practice. Undoubtedly, continued faculty development is needed, but more importantly, programs need policies now to ensure that all items used for student evaluation are fair. The NCSBN scores only items that align with the NCLEX test plan and have been through a rigorous analysis and review process. Minimally, faculty should ensure that the items they create are based on course objectives, follow a blueprint, and go through a peer review process before being used for testing.

 

Students should be given opportunities to practice answering questions in the same platform used for testing. Fair tests have good technical quality,3 and faculty should use only item formats that work well in the selected testing platform when evaluating students. NGN case studies unfold and are meant to be given in an electronic format that does not allow backtracking. Because many testing platforms cannot offer all of the new item types, faculty may be tempted to use workarounds. While these makeshift solutions may be appropriate for teaching and formative assessment, faculty should be cautious about using workarounds for grading. For example, one workaround to simulate a matrix item places numbers in the cells of a table and then asks students to select an option for each number below the table. Consider how this approach increases the risk of a data entry error as students go back and forth between the table and the answer options; this risk would be even greater for a student with special learning needs.

 

Scoring procedures should be clearly explained to students.3 Explanations are easy when all items are weighted equally and scored as correct or incorrect. However, the NGN will use 3 different scoring rules based on item format and will award different point values.1 For instance, the +/- rule uses a guessing penalty, but the 0/1 rule does not. The new scoring methods allow more precise measurement of candidate ability but are difficult to replicate in nursing programs without cumbersome hand grading. It is not essential for programs to follow the exact scoring methods used by the NCSBN, but it is important that programs develop scoring methods that can be consistently applied and clearly articulated to students. It is particularly important that students understand how guessing might affect their grades.
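
To make the difference concrete, the short Python sketch below is illustrative only and does not reproduce the NCSBN's actual algorithms; it assumes the +/- rule awards a point for each correct selection, deducts a point for each incorrect selection, and floors the item score at zero, while the 0/1 rule credits each correct selection with no deduction.

# Illustrative scoring sketch; the rule details here are assumptions, not the NCSBN's implementation.
def score_plus_minus(correct, selected):
    """+/- rule: +1 per correct selection, -1 per incorrect selection, floored at zero."""
    earned = len(selected & correct)
    penalty = len(selected - correct)
    return max(earned - penalty, 0)

def score_zero_one(correct, selected):
    """0/1 rule applied per selection: credit for each correct choice, no guessing penalty."""
    return len(selected & correct)

# Example: an item with 4 correct options; a student selects 3 of them plus 1 distractor.
correct = {"A", "B", "C", "D"}
selected = {"A", "B", "C", "E"}
print(score_plus_minus(correct, selected))  # 2 points (3 correct - 1 incorrect)
print(score_zero_one(correct, selected))    # 3 points (no deduction for the wrong guess)

Under these assumptions, the same response pattern earns 3 points without a guessing penalty but only 2 points with one, which is exactly the kind of difference that should be explained to students before a graded examination.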

 

Psychometric analysis should continue to be used to ensure that faculty-created examinations are valid and internally consistent.3 Methods used to conduct an item analysis may need to be modified slightly to account for polytomous (partial credit) scoring. The familiar Kuder-Richardson 21 (KR21) reliability coefficient and item point-biserial correlations can be obtained only on tests that score items as correct or incorrect.4 A Cronbach's α can be used instead of a KR21 for tests that use partial-credit scoring methods. Item discrimination indexes can be used instead of a point-biserial correlation to measure how well an item discriminates between top- and lower-performing students.
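
For programs that want to automate these modified calculations, the following Python sketch applies the standard Cronbach's α formula and a simple upper-lower discrimination index to partial-credit item scores; the score matrix is fabricated sample data, and grouping by the top and bottom 27% of total scores is one common convention rather than a requirement.

# Illustrative item analysis for partial-credit scoring; the data below are fabricated.
def cronbach_alpha(scores):
    """Cronbach's alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores)."""
    k = len(scores[0])
    def variance(values):
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / (len(values) - 1)
    totals = [sum(row) for row in scores]
    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    return (k / (k - 1)) * (1 - sum(item_vars) / variance(totals))

def discrimination_index(scores, item, max_points, group_fraction=0.27):
    """Mean item score of the top group minus the bottom group (ranked by total score),
    divided by the item's maximum points so the index stays in the familiar -1 to +1 range."""
    ranked = sorted(scores, key=sum, reverse=True)
    g = max(1, round(len(scores) * group_fraction))
    upper = sum(row[item] for row in ranked[:g]) / g
    lower = sum(row[item] for row in ranked[-g:]) / g
    return (upper - lower) / max_points

scores = [[2, 2, 1], [2, 1, 1], [1, 1, 0], [1, 0, 1], [0, 0, 0]]  # 5 students, 3 items worth 0-2 points each
print(round(cronbach_alpha(scores), 2))              # 0.83 for this sample
print(round(discrimination_index(scores, 0, 2), 2))  # 1.0 for the first item in this sample

Whatever tool is used to generate these statistics, the reliability and discrimination measures reported should match the scoring method actually applied to the examination.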

 

Finally, programs need to thoughtfully consider how they will gather, analyze, and use testing data to improve the development of clinical judgment and student readiness to enter practice. Creating master lists with case study difficulty, collecting metadata on clinical judgment questions, and creating remediation plans based on item response patterns are a few possible options. Faculty have an ethical responsibility to use fair testing practices. Aligning any testing practice changes with fair testing guidelines will help faculty achieve their testing goals and support student preparation to take the licensure examination.

 

References

 

1. National Council of State Boards of Nursing. NGN/CCNA Webinar. March 2022. Accessed July 28, 2022. https://www.ncsbn.org/16708.htm

 

2. Moran V, Wade H, Moore L, Israel H, Bultas M. Preparedness to write items for nursing education examinations: a national survey of nurse educators. Nurse Educ. 2022;47(2):63-68. doi:10.1097/NNE.0000000000001102

 

3. National League for Nursing. NLN Fair testing guidelines for nursing education. 2012. Accessed July 31, 2022. https://www.nln.org/docs/default-source/uploadedfiles/advocacy-public-policy/fai

 

4. Hensel D, Cifrino S. Item analysis and next-generation NCLEX. Nurse Educ. 2022;47(5). doi:10.1097/NNE.0000000000001223