Authors

  1. Simmons, Larry
  2. Christensen, Linda

Article Content

To a nurse educator taking a certification exam, the experience may seem similar to that of a student taking an instructor-created classroom exam, but the process is very different, particularly with regard to test development. Because we are often asked about the development and scoring of the NLN Certified Nurse Educator certification exam, what follows is a brief explanation of how the exam was originally developed, how it remains relevant to current practice, how it is scored, and how security is assured.


TEST DEVELOPMENT

Certification exams are the end product of a rigorous test development process. The process begins with the identification of the subject matter content/practice competencies and accompanying task statements for the role, which are derived from current literature. The competencies and task statements become the basis for a practice analysis survey developed by a panel of subject matter experts. Results of the practice analysis are used to develop a test blueprint that outlines and guides test development. To ensure that the test blueprint remains reflective of the current practice of the role, the practice analysis survey is repeated every four to six years.


Subject matter experts write specific test items, each linked to a competency and task statement of the test plan. In addition, each test item must be referenced to current practice literature and conform to the best practices of writing standardized test items. Once the items are developed, they are pilot tested so that statistical data for the items can be collected and analyzed. Based on item analysis, the test item may be retained "as is" (making it an "active" exam item suitable for use as a scored item on a test), revised, or deleted. If the item is revised, it must again be piloted before it can be considered active.
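Item analysis of this kind typically centers on two statistics: item difficulty (the proportion of pilot candidates answering the item correctly) and discrimination (how well the item separates stronger from weaker candidates). The following is a minimal sketch using hypothetical pilot data and the point-biserial correlation as a discrimination index; the NLN's actual analyses and thresholds may differ.

```python
# Illustrative item-analysis sketch (hypothetical data, not the NLN's process):
# compute two common pilot statistics per item -- difficulty (proportion
# correct) and point-biserial discrimination against the total score.
from statistics import mean, pstdev

# Hypothetical pilot responses: rows are candidates, columns are items
# (1 = correct, 0 = incorrect).
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
]

totals = [sum(row) for row in responses]

def item_stats(item_index):
    scores = [row[item_index] for row in responses]
    difficulty = mean(scores)  # proportion of candidates answering correctly
    # Point-biserial: correlation between the item score and the total score.
    sd_item, sd_total = pstdev(scores), pstdev(totals)
    if sd_item == 0 or sd_total == 0:
        return difficulty, 0.0  # no variance, e.g., everyone answered correctly
    cov = mean(s * t for s, t in zip(scores, totals)) - difficulty * mean(totals)
    return difficulty, cov / (sd_item * sd_total)

for i in range(len(responses[0])):
    p, rpb = item_stats(i)
    print(f"Item {i + 1}: difficulty={p:.2f}, point-biserial={rpb:.2f}")
```

An item with very low or negative discrimination, or extreme difficulty, would be the kind flagged for revision or deletion in the review described above.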


DATA-DRIVEN SCORING

The NLN certification program uses a minimum of two forms of the certification exam at any one time. Each form must strictly conform to the test blueprint, but the actual test items will vary. Test development committees (TDCs), composed of subject matter experts, are integral in the exam process. The TDC reviews the statistical performance of individual test items, as well as overall exam performance validity and reliability. This process is data driven. No arbitrary decisions about test items are made during the development and review process.


The passing score for each exam form is set by a method called "cut score setting." The cut score refers to the number of questions that must be answered correctly to pass the exam. The TDC uses a variety of statistical processes in analyzing data to set the cut score, such as the Angoff method or a more general equating method. The final cut score is determined by the TDC in collaboration with experienced psychometricians.
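As an illustration of the Angoff approach: each judge estimates the probability that a minimally competent candidate would answer each item correctly, and the cut score is derived from the averaged estimates. The following is a minimal sketch with invented ratings for a five-item exam, not the NLN's actual data or procedure.

```python
# Minimal Angoff-style sketch (hypothetical ratings, illustrative only).
# Each judge rates the probability that a minimally competent candidate
# answers each item correctly; the recommended cut score is the sum of
# the per-item mean ratings, rounded to a whole number of items.

ratings = [                                  # judges x items
    [0.70, 0.60, 0.85, 0.50, 0.90],          # judge 1
    [0.65, 0.55, 0.80, 0.60, 0.95],          # judge 2
    [0.75, 0.50, 0.90, 0.55, 0.85],          # judge 3
]

n_items = len(ratings[0])
item_means = [sum(judge[i] for judge in ratings) / len(ratings)
              for i in range(n_items)]
cut_score = round(sum(item_means))
print(f"Recommended cut score: {cut_score} of {n_items} items")
```

In practice the judges' recommendation is one input among several; as noted above, the final cut score is set by the TDC with psychometrician support.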


Because there are two forms of the certification exam, each exam's statistics must be examined independently to determine its specific cut score. It is therefore possible that the two exams may have different scores required for passing, with the slightly more difficult exam having a slightly lower passing score.
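One common way to hold the performance standard constant across forms is linear (mean-sigma) equating, which maps a cut score on one form to the score at the same relative position on another. The sketch below uses invented numbers to show how the slightly harder form ends up with the slightly lower passing score; it is not the NLN's actual equating model or data.

```python
# Illustrative linear (mean-sigma) equating sketch with hypothetical numbers:
# translate a cut score from form A onto a harder form B while holding
# the performance standard constant.

mu_a, sigma_a = 98.0, 12.0   # hypothetical mean/SD of raw scores on form A
mu_b, sigma_b = 94.0, 12.5   # form B is harder: lower mean raw score
cut_a = 95                   # hypothetical established cut score on form A

# A raw score x on form A maps to the form-B score at the same relative
# position: mu_b + (sigma_b / sigma_a) * (x - mu_a)
cut_b = mu_b + (sigma_b / sigma_a) * (cut_a - mu_a)
print(f"Equated cut score on form B: {round(cut_b)}")
```

Because form B's score distribution sits lower, the equated passing score on form B comes out below form A's, which is the pattern described above.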


Test development activities remain a highly secure process. Information about test items is never shared outside of test development meetings. Access to the test bank for review is strictly limited to TDC members and is tightly controlled by the agency that administers the test to candidates.


Periodically, a posttesting candidate requests a review of test items. This is never permitted with a certification exam, as these exams are high-stakes assessments for a profession. The security of the exams must never be compromised.


Another occasional request is to discard one or more test items from a tester's exam based on the rationale that there was more than one correct response. Certification items are written so that there is only one correct response that is easily referenced to current literature. Sometimes testers state that certain test items did not reflect their role as a nurse educator. It is important to remember that the certification test is based on the role competencies and task statements for the role and not on individual practice.


One similarity between a nurse educator taking a certification exam and a student taking an instructor-created classroom exam has to do with preparation. A student who is well prepared for a classroom exam will know generally what content will be covered and will study accordingly. A well-prepared nurse educator will review the candidate handbook, identify the test plan, and prepare accordingly.