Keywords

Appreciative Inquiry, Multiple-Choice Exams, Nurse Educators

 

Authors

  1. O'Rae, Amanda
  2. Hnatyshyn, Tammy
  3. Beck, Amy J.
  4. Mannion, Cynthia
  5. Patel, Shruti

Abstract

Multiple-choice examinations (MCEs) are commonly used to evaluate nursing students. Nurse educators require support to develop questions and engage in postexam analysis to ensure reliable assessment of student learning. We surveyed nurse educators and conducted focus groups to investigate current writing practices associated with MCEs. Using appreciative inquiry, participants proposed ideals to strengthen MCE practice: guidelines and expectations for faculty, faculty-developed test banks, team development, and an assessment blueprint at the curriculum level. Faculty supports are necessary to strengthen current MCE practices and best utilize the skills of educators.

 

Although multiple-choice examinations (MCEs) are widely used to evaluate students in nursing education, the quality of constructed multiple-choice questions (MCQs) is variable. Tarrant and Ware (2012) reported that few nurse educators have specific training to write high-quality, valid, and reliable MCQs. Constructing quality MCQs is challenging and time-consuming (Hijji, 2017); educators often construct MCQs hastily and inadequately analyze written items, compromising quality (Redmond, Hartigan-Rogers, & Cobbett, 2012).

 

It has long been debated whether MCEs can test students' higher cognitive domains, such as critical thinking (Bailey, Mossey, Moroso, Cloutier, & Love, 2012). Some students become test-wise, recognizing clues that suggest the correct answer rather than recalling knowledge (Nemec & Welch, 2016). Bailey et al. (2012) reported that poorly constructed MCQs can misinform students. Tarrant, Knierim, Hayes, and Ware (2006) estimated that 50 percent of MCQs do not differentiate among students with variable understanding of the material tested.
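
To make "differentiation" concrete: postexam item analysis typically quantifies it with two standard indices, item difficulty (the proportion of students answering correctly) and a discrimination index (how well the item separates high scorers from low scorers). The following is a minimal sketch in Python, not a procedure described in the study; the function names and the 27 percent upper/lower grouping convention are illustrative assumptions.

# Minimal sketch of postexam item analysis (illustrative assumptions;
# not the study's procedure). 1 = correct, 0 = incorrect.

def item_difficulty(responses):
    # Proportion of students who answered the item correctly (0 to 1).
    return sum(responses) / len(responses)

def discrimination_index(responses, total_scores, group_frac=0.27):
    # Upper-lower discrimination: difficulty among the top-scoring
    # students minus difficulty among the bottom-scoring students.
    # Values near zero mean the item fails to differentiate.
    n = max(1, round(len(responses) * group_frac))
    ranked = sorted(zip(total_scores, responses))
    lower = [r for _, r in ranked[:n]]
    upper = [r for _, r in ranked[-n:]]
    return item_difficulty(upper) - item_difficulty(lower)

# Hypothetical data: one item's responses and total exam scores
# for ten students.
responses = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
scores = [38, 22, 41, 35, 25, 44, 30, 40, 37, 28]
print(item_difficulty(responses))               # 0.6
print(discrimination_index(responses, scores))  # 1.0 for this item

An item answered correctly by nearly everyone, or answered correctly equally often by strong and weak students, would show a discrimination index near zero, which is the failure Tarrant et al. (2006) describe.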

 

Educators support MCEs in the assessment of student learning (Tarrant et al., 2006), but not all MCEs are valid. Students, too, have identified fair assessment by MCEs as an essential component of their satisfaction (Leung, Mok, & Wong, 2008). Because interdisciplinary studies show that question quality improves with appropriate training, it is recommended that universities provide training in MCE composition (Tarrant & Ware, 2012). High-quality MCEs can result from a collaborative strategy in which faculty create a blueprint and develop and review a standard set of items for high-stakes examinations (AlMahmoud, Elzubeir, Shaban, & Branicki, 2015; Leung et al., 2008).

 

The purposes of this study were twofold: 1) to determine how nurse educators create, review, and modify MCQs and 2) to provide an opportunity for nurse educators to envision their ideal nursing education practice for MCEs given optimal supports.

 

METHOD

The 4-D Cycle of Appreciative Inquiry (AI; Cooperrider & Whitney, 2002) was used to explore nurse educators' MCE practices. AI focuses on acknowledging current organizational strengths to facilitate change in a system or community. Academic institutions represent a community of educators, and understanding exam practices well requires an approach that accounts for all perspectives. As participants share and value collected insights, their perspectives reveal what is working. Participants identify themes by describing ideals of what could be and what may work, formulate an action plan, and identify resources (Cooperrider & Whitney, 2002). The researchers delimited data collection to a single faculty of nursing, consistent with AI, where the focus is on strengthening a single organization.

 

Eligible faculty of nursing educators (n = 110) who used MCEs in undergraduate courses were invited to participate. Based on a historical account of courses, it was estimated that half of the invitees used MCEs; to ensure anonymity, online surveys (Fluid Surveys software, http://fluidsurveys.com) were sent to all faculty. Because of the low online response, paper copies were subsequently distributed to faculty mail slots. Only 14 faculty responded to the online survey, and five returned the paper copy. Because a member of senior administration was on the research team, participants were not tracked, both to avoid coercion and to maintain confidentiality. All respondents were invited to participate in a focus group or interview.

 

The research team developed and piloted a 13-item survey that recorded teaching years, MCQ and MCE construction practices, and management of poorly performing questions. Focus group and interview guides were developed using the 4-D Cycle of AI (Cooperrider & Whitney, 2002). The focus groups and interviews were digitally audio-recorded and transcribed verbatim. Investigators independently reviewed the transcripts and identified potential codes and themes using Braun and Clarke's (2006) phases of thematic analysis. A team member with expertise in AI guided the discussion, refinement, and finalization of themes. The study was approved by the institution's Research Ethics Board (REB14-1273).

 

RESULTS

Of the 19 nurse educators who responded to the survey, eight had taught theory courses for more than 5 years, and six had taken exam-writing workshops within the previous 5 years. Commonly used resources included exams inherited from other faculty (68 percent), textbook items (63 percent), commercial test banks (52 percent), and online exam questions (52 percent). The majority (74 percent) removed poorly performing MCQs from student-completed exams, reducing the total achievable score of the exam.
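
As a concrete illustration of this rescoring practice (the numbers below are hypothetical, not drawn from the study's data):

# Hypothetical rescoring after dropping poorly performing items;
# the counts are illustrative, not study data.
total_items = 50   # items on the exam as written
removed = 2        # items dropped after postexam analysis
correct = 40       # a student's correct answers on the kept items

adjusted_total = total_items - removed
print(f"{correct}/{adjusted_total} = {100 * correct / adjusted_total:.1f}%")
# 40/48 = 83.3%, versus 80.0% had the flawed items counted against students

Removing the items raises each student's percentage relative to the original denominator, so flawed questions are not counted against students.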

 

Two focus groups (n = 9) and two interviews were held for a total of 11 participants. Guided by AI, participants proposed ideal practices and supports needed to strengthen MCE practices: 1) guidelines and expectations for faculty members, 2) faculty-generated test bank, 3) team development, and 4) assessment blueprint at the curriculum level.

 

Ideal 1: Guidelines and Expectations

Participants lacked formalized resources to make valid MCEs. "I feel like a lot of it, in my experience, was learning as you go ... But at the same time ... looking for guidance." Participants wanted guidance on the development of course-specific test plans, defined expectations for peer review of MCQs and MCEs, consistent use of exam statistics across courses, guidelines on management of poorly performing questions, and an exam review guideline for use with students.

 

Ideal 2: Faculty-Generated Test Bank

Time constraints were a commonly reported barrier; thus, a faculty-generated test bank with previously tested questions was favored. "I don't have enough knowledge and background in making that question to make sure it is a good question. Yeah, I know the content, and I know the stuff but - to ask [you] the right way? So, the test bank is probably better."

 

Ideal 3: Team Development

Some participants wanted support from colleagues: "I think it is there, it's just trying to figure out how to better utilize the skills we have on some of our teams." The development of term teams with diverse strengths and an overarching reporting structure would provide a supportive learning context.

 

Ideal 4: Assessment Blueprint at the Curriculum Level

An assessment blueprint at the curricular level would provide guidance to nurse educators and ensure consistent quality of MCEs across the curriculum. MCQs should become progressively more challenging as students advance through the curriculum, and a blueprint would allow this progression to be tracked. A process for vetting changes to the blueprint as the curriculum evolves would also be required.
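
A minimal sketch of how such a blueprint could be represented and checked follows; the content areas, cognitive levels, and target counts are hypothetical, not taken from the participants' curriculum.

from collections import Counter

# Hypothetical curriculum-level blueprint: target MCQ counts per
# (content area, cognitive level) cell. All names and numbers are
# illustrative.
blueprint = {
    ("pharmacology", "recall"): 4,
    ("pharmacology", "application"): 6,
    ("patient assessment", "application"): 5,
    ("patient assessment", "analysis"): 5,
}

# One (content area, cognitive level) tag per drafted MCQ.
drafted_exam = [
    ("pharmacology", "recall"),
    ("pharmacology", "application"),
    ("patient assessment", "analysis"),
]

counts = Counter(drafted_exam)
for cell, target in blueprint.items():
    if counts[cell] < target:
        print(f"{cell}: {counts[cell]} of {target} questions drafted")

Later courses would draw more heavily from application- and analysis-level cells, making the intended progression in difficulty auditable as the exam is assembled.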

 

DISCUSSION

The small convenience sample in one institution and a low response rate may have contributed to bias in the results of this study. In addition, before resources are dedicated, further information is needed to assess the administration's approach to MCQ/MCE development. However, the study highlights that nursing faculties may need to train and support nurse educators to construct fair and valid MCEs.

 

Ideas to carry forward include faculty-generated test banks fitted to curriculum objectives. Nurse educators could be confident in using pretested questions, which would also improve time efficiency. They could also find peer support to review and modify MCEs using term teams. Peer review of MCQs could lead to the identification and modification of test questions that do not match intended cognitive levels stated in the learning objectives. MCE reviews in a classroom setting or individual student/instructor meetings provide a learning opportunity for students to reflect on exam writing and for the instructor to correct any misinformation. This is part of the evidence-based educational foundation upon which good exams are built (Hijji, 2017).

 

CONCLUSION AND RECOMMENDATIONS

This study sharpens the approach to evaluative assessment in nursing education by proposing ideals generated from an inquiry into the opinions and experiences of nurse educators. The researchers recommend identifying and building on the strengths that exist in an academic community, considering training in MCE construction, item writing, and analysis, and adopting a test blueprint based on curriculum goals. The researchers acknowledge that MCEs, even with well-written and discerning MCQs, cannot be the only assessment method applied to students; multiple methods are recommended to give a comprehensive assessment.

 

REFERENCES

 

AlMahmoud T., Elzubeir M. A., Shaban S., & Branicki F. (2015). An enhancement-focused framework for developing high quality single best answer multiple choice questions. Education for Health (Abingdon), 28(3), 194-200. doi:10.4103/1357-6283.178604

 

Bailey P. H., Mossey S., Moroso S., Cloutier J. D., & Love A. (2012). Implications of multiple-choice testing in nursing education. Nurse Education Today, 32(6), e40-e44. doi:10.1016/j.nedt.2011.09.011

 

Braun V., & Clarke V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77-101. doi:10.1191/1478088706qp063oa

 

Cooperrider D. L., & Whitney D. (2002). Appreciative inquiry: The handbook. Euclid, OH: Lakeshore Publishers.

 

Hijji B. M. (2017). Flaws of multiple choice questions in teacher-constructed nursing examinations: A pilot descriptive study. Journal of Nursing Education, 56(8), 490-496. doi:10.3928/01484834-20170712-08

 

Leung S. F., Mok E., & Wong D. (2008). The impact of assessment methods on the learning of nursing students. Nurse Education Today, 28(6), 711-719. doi:10.1016/j.nedt.2007.11.004

 

Nemec E. C., & Welch B. (2016). The impact of a faculty development seminar on the quality of multiple-choice questions. Currents in Pharmacy Teaching and Learning, 8(2), 160-163.

 

Redmond S. P., Hartigan-Rogers J. A., & Cobbett S. (2012). High time for a change: Psychometric analysis of multiple-choice questions in nursing. International Journal of Nursing Education Scholarship, 9. doi:10.1515/1548-923x.2487

 

Tarrant M., Knierim A., Hayes S. K., & Ware J. (2006). The frequency of item writing flaws in multiple-choice questions used in high stakes nursing assessments. Nurse Education in Practice, 6(6), 354-363. doi:10.1016/j.nepr.2006.07.002

 

Tarrant M., & Ware J. (2012). A framework for improving the quality of multiple-choice assessments. Nurse Educator, 37(3), 98-104. doi:10.1097/NNE.0b013e31825041d0