Authors

  1. Obayashi, Yota, OT, PhD
  2. Uehara, Shintaro, PT, PhD
  3. Kokuwa, Ryu, PT
  4. Otaka, Yohei, MD, PhD

Abstract

Objective: To investigate whether automated facial expression analysis can quantify differences in the intensity of facial responses to different affective stimuli in a patient in a minimally conscious state (MCS).


Methods: We filmed the facial responses of a patient in an MCS during the delivery of three 1-minute auditory stimuli: audio clips from comedy movies, a nurse talking humorously, and a recitation from a novel (comedy, nurse, and recitation conditions, respectively). The measurements were repeated at least 13 times per condition on different days over approximately 10 months. The intensity of the "happy" expression was estimated from the patient's smiling face using facial expression analysis software (FaceReader). Intensities were compared across five conditions, the three stimulus conditions plus two resting conditions (pre- and poststimulus), using the Kruskal-Wallis test followed by the Dunn-Bonferroni test for multiple comparisons.
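
For illustration, the following is a minimal sketch of the statistical comparison described above, not the authors' actual analysis code. It assumes the per-frame "happy" intensities exported by FaceReader have been collected into a CSV file with hypothetical columns "condition" and "happy".

```python
# Minimal sketch of the described statistics (hypothetical data layout).
import pandas as pd
from scipy.stats import kruskal
import scikit_posthocs as sp

# Hypothetical export: one row per observation, with the condition label
# (comedy, nurse, recitation, pre, post) and the "happy" intensity.
df = pd.read_csv("facereader_happy.csv")

# Omnibus test: compare "happy" intensity across the five conditions.
groups = [g["happy"].values for _, g in df.groupby("condition")]
h_stat, p_value = kruskal(*groups)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")

# Post hoc pairwise comparisons: Dunn's test with Bonferroni correction.
pairwise_p = sp.posthoc_dunn(df, val_col="happy", group_col="condition",
                             p_adjust="bonferroni")
print(pairwise_p)
```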


Results: The intensity of the "happy" expression was significantly higher in the comedy and nurse conditions than in the other conditions, with no significant differences between the recitation condition and the pre- or poststimulus conditions. These findings indicate that automated facial expression analysis can quantify differences in context-dependent facial responses in the patient recruited in this study.


Conclusions: This case study demonstrates the feasibility of using automated facial expression analysis to quantitatively evaluate differences in facial expressions, and their corresponding emotions, in a single patient in an MCS.