Keywords

Clinical Evaluation, Clinical Grading, Nursing Education, Nursing Education Research, Simulation

 

Authors

  1. Reising, Deanna L.
  2. Carr, Douglas E.
  3. Gindling, Sally
  4. Barnes, Roxie
  5. Garletts, Derrick
  6. Ozdogan, Zulfukar

Abstract

The purpose of this study was to determine whether student performance in a simulation varied according to which grading method was used: pass/fail versus numerical grading with calculation into a course grade. Results showed that student performances were not significantly different when the pass/fail graded group was compared to the numerically graded group, even though students knew which grading schema would be used in their evaluation. The study challenges the opinion that students perform better when they know that they will be numerically graded in simulation.

 

Article Content

Whether to assign a numerical grade for clinical activities has long been debated in nursing education. Surveys indicate ongoing differences of opinion on whether to assign a clinical grade using a pass/fail approach or a letter or numerical approach (Oermann, Yarbrough, Saewert, Ard, & Charasika, 2009). Some of the issues related to numerical grading in clinical education involve the availability of standardized, reliable, and valid approaches to evaluation (Docherty & Dieckmann, 2015; Oermann & Gaberson, 2013).

 

With the availability of more reliable and valid grading rubrics, the opportunity presents itself to assign a numerical grade for clinical events, including clinical simulations (National League for Nursing, 2017; Renjith, George, Renu, & D'Souza, 2015). However, it is unknown whether there is an empirical difference in student performance during clinical simulation when a pass/fail versus a numerical grading system is used. The purpose of this study was to determine whether assigning a numerical grade that is calculated into the course grade, compared with assigning a pass/fail grade, has an effect on student performance in an interprofessional clinical simulation.

 

In a survey by Oermann et al. (2009), 83 percent of nursing education faculty respondents indicated that their programs used a pass/fail methodology for evaluating clinical performance. Faculty may choose pass/fail grading over numerical or letter grading because they view clinical grading as subjective (Andre, 2000) or feel a lack of support around grading issues (Amicucci, 2012). Although faculty perceived increased intensity and conflict when assigning a numerical grade, they still believed that numerical grading would result in more student clinical effort, even though data supporting that belief were lacking (Amicucci, 2012).

 

As Oermann and Gaberson (2013) noted, "Clinical evaluation is not the same as grading" (p. 238). A grade may be used to represent a judgment about the observations made during a clinical performance, but a clinical performance may be evaluated without assigning a grade. A literature search on the topic of clinical grading produced no research specifically comparing the effect of numerical grading to pass/fail grading in a clinical simulation. Moreover, little recent research was found regarding the method of grading used in clinical evaluation.

 

METHOD

This study is a secondary analysis of data collected for a larger simulation study. A retrospective, comparative design was used to determine whether there were significant differences in performance between the pass/fail and numerically graded groups. Human subjects approval for the secondary analysis was secured.

 

The parent study is a longitudinal investigation of interprofessional communication and teamwork. During a curricular change that involved restructuring how courses are graded, nurse faculty decided to trial an increase in the number of numerically graded clinical events, replacing some events that had been graded pass/fail, including the simulation used in the longitudinal study. It therefore became necessary to determine the impact of the grading change on the longitudinal study, which is the basis of the present analysis.

 

Participants were baccalaureate nursing students who took part in an interprofessional high-fidelity simulation using an asthma scenario at a large Midwestern university. The students, second-semester juniors in their second medical-surgical clinical course, worked in interprofessional teams with first-year medical students. Group 1 students (n = 60) were enrolled in the medical-surgical clinical course in a year when all clinical activities, including this simulation, were graded pass/fail. Group 2 students (n = 54) were enrolled in the same course the following year, when numerical grading of clinical activities had been implemented and was included in the final course grade; the simulation activity represented 3 percent of the total course grade.

 

The total number of participants across both groups was 114, with 35 teams in each semester, for a total of 70 teams. The groups came from separate admission cohorts and had similar cumulative grade point averages at admission and at program completion. Both groups received the grading rubric in advance of the simulation.

 

The tool used was the IUSIR, which contains six individual and six team performance items. Each item is rated on a 1 to 5 scale, with 1 representing novice performance and 5 representing expert performance; the maximum score for each section is 30 (for a total of 60). Reliability and validity were established in a separate study involving the larger group of students (Reising, Carr, Tieman, Feather, & Ozdogan, 2015). Cronbach's alphas were .82 for nursing individual scoring and .79 for team scoring.
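
To make the scoring arithmetic concrete, the sketch below shows, in Python, how six 1-to-5 item ratings could be summed into the 30-point individual and team section scores (60 points overall) and how a Cronbach's alpha could be computed from a matrix of item ratings. The rating values are hypothetical and the code is illustrative only; it does not reproduce the study's data or its analysis procedure.

    import numpy as np

    def section_score(item_ratings):
        """Sum six 1-5 item ratings into one IUSIR section score (maximum 30)."""
        assert len(item_ratings) == 6
        return sum(item_ratings)

    def cronbach_alpha(item_matrix):
        """Cronbach's alpha for an (n respondents x k items) array of ratings."""
        x = np.asarray(item_matrix, dtype=float)
        k = x.shape[1]
        item_variances = x.var(axis=0, ddof=1).sum()
        total_variance = x.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    # Hypothetical ratings for one student and one team (illustration only)
    individual_items = [4, 3, 5, 4, 4, 3]  # individual section, out of 30
    team_items = [4, 4, 3, 5, 4, 4]        # team section, out of 30
    total = section_score(individual_items) + section_score(team_items)  # out of 60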

 

RESULTS

A secondary analysis of the IUSIR individual and team scores was conducted using IBM SPSS (Version 24.0). Individual and team scores were compared between the pass/fail graded group (Group 1) and the numerically graded group (Group 2). There were no significant differences between groups on individual scores, t(111) = .554, p = .581, or team scores, t(68) = -.956, p = .342.
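
As an illustration of the comparison reported above, the sketch below runs an independent-samples t-test on two groups of scores in Python. The score lists are hypothetical placeholders, not the study data; the actual analysis was conducted in IBM SPSS as described.

    from scipy import stats

    # Hypothetical IUSIR section scores for illustration only (not study data)
    group1_pass_fail = [24, 27, 21, 25, 26, 23, 28, 22]
    group2_numeric = [25, 23, 26, 22, 27, 24, 21, 25]

    # Independent-samples t-test, assuming equal variances
    result = stats.ttest_ind(group1_pass_fail, group2_numeric, equal_var=True)
    print(f"t = {result.statistic:.3f}, p = {result.pvalue:.3f}")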

 

DISCUSSION

The results from this study suggest that the decision by faculty to include the numerical grade in a course calculation does not significantly affect individual and team performance in this interprofessional simulation activity. Anecdotal observations from faculty were that students seemed more prepared when they knew they would receive a numerical grade, but these observations were not statistically validated.

 

The findings are consistent with a review of research on the impact of grading, particularly the theoretical underpinnings of student motivation in learning (Schinske & Tanner, 2014), which concluded that grading does not motivate students to learn and, in some cases, may have the opposite effect. The impetus for the current study was the belief by faculty that students would be motivated toward higher performance in the interprofessional simulation if they received a numerical grade calculated into their final course grade. The results showed no significant impact, supporting the findings of Schinske and Tanner (2014).

 

The use of reliable and valid rubrics for evaluation is strongly encouraged regardless of the grading methodology (Adamson, 2015; Oermann & Gaberson, 2013; Rizzolo, Kardong-Edgren, Oermann, & Jeffries, 2014). Although we used the IUSIR to assign a numerical value to student performances, each criterion is defined by behaviors that accompany each rating level. More important than rating students, the tool was used specifically to debrief student teams on the behaviors expected in the interprofessional simulation. Likewise, because we used the rubric regardless of whether a numerical grade was assigned, faculty workload was not increased, an issue raised by Amicucci (2012).

 

CONCLUSION AND FUTURE RESEARCH

This study was conducted at a single site with a single nursing rater in a secondary analysis of nonrandomized groups. In addition, because the clinical event was a single event within a multifaceted medical-surgical clinical course, conclusions may not be drawn about the grading of an entire clinical experience.

 

Future studies in other types of clinical experiences are recommended, along with continued research on the development of reliable and valid tools that use criterion-referenced measurement. Furthermore, because students already experience stress from being observed in clinical settings, one might reasonably ask whether students are more "open" to feedback when they are not receiving a numerical grade. This hypothesis would be worth testing in future studies.

 

In conclusion, the assignment of a numerical grade did not affect individual or team performance for nursing students in an interprofessional high-fidelity simulation experience. Despite the lack of a significant difference, a reliable and valid tool exists to effectively evaluate and debrief students so that they may improve their subsequent performances. Future studies would further elucidate whether numerical grading of an entire clinical experience would yield similar results.

 

REFERENCES

 

Adamson K. (2015). A systematic review of the literature related to the NLN/Jeffries simulation framework. Nursing Education Perspectives, 36(5), 281-291. doi:10.5480/15-1655

 

Amicucci B. (2012). What nurse faculty have to say about clinical grading. Teaching and Learning in Nursing, 7(2), 51-55. doi:10.1016/j.teln.2011.09.002

 

Andre K. (2000). Grading student clinical practice performance: The Australian perspective. Nurse Education Today, 20(8), 672-679. doi:10.1054/nedt.2000.0493

 

Docherty A., & Dieckmann N. (2015). Is there evidence of failing to fail in our schools of nursing? Nursing Education Perspectives, 36(4), 226-231. doi:10.5480/14-1485

 

National League for Nursing. (2017). Descriptions of available instruments. Retrieved from http://www.nln.org/professional-development-programs/research/tools-and-instrume

 

Oermann M. H., & Gaberson K. B. (2013). Evaluation and testing in nursing education (4th ed.). New York, NY: Springer.

 

Oermann M. H., Yarbrough S. S., Saewert K. J., Ard N., & Charasika M. (2009). Clinical evaluation and grading practices in schools of nursing: National survey findings part II. Nursing Education Perspectives, 30(6), 352-357.

 

Reising D. L., Carr D. E., Tieman S., Feather R. A., & Ozdogan Z. (2015). Psychometric testing of a simulation rubric for measuring interprofessional communication. Nursing Education Perspectives, 36(5), 311-316. doi:10.5480/15-1659

 

Renjith V., George A., Renu G., & D'Souza P. (2015). Rubrics in nursing education. International Journal of Advanced Research, 3(5), 423-428.

 

Rizzolo M. A., Kardong-Edgren S., Oermann M. H., & Jeffries P. R. (2014). The National League for Nursing project to explore the use of simulation for high-stakes assessment: Process, outcomes, and recommendations. Nursing Education Perspectives, 36(5), 299-303. doi:10.5480/15-1639

 

Schinske J., & Tanner K. (2014). Teaching more by grading less. CBE Life Sciences Education, 13(2), 159-166. doi:10.1187/cbe.CBE-14-03-0054