Making Learning Visible in the Clinical Team-based Simulations – UROP Spring Symposium 2021


Coco Yu


Pronouns: She/Her

Research Mentor(s): Vitaliy Popov, Assistant Professor
Research Mentor School/College/Department: Department of Learning Health Sciences, Michigan Medicine
Presentation Date: Thursday, April 22, 2021
Session: Session 5 (3pm-3:50pm)
Breakout Room: Room 16
Presenter: 5



In the medical field, the skill of “breaking bad news” is incredibly important for future doctors and social workers to practice and receive meaningful feedback on. The moment a medical professional tells a family member bad news, that instance stays with the patient’s family for the rest of their lives. Our team transcribed, annotated, and interpreted over 150 medical simulation videos, examining body language, tone of voice, and verbal responses to see how students reacted to feedback from debriefers. Our goal is to optimize the feedback given in order to fully prepare future medical professionals for this critical task.

The study was conducted on a sample of over 150 fifteen-minute videos of medical students debriefing with supervisors about their breaking-bad-news patient simulations. First, the videos were transcribed. Next, using the video annotation software ELAN, the videos were segmented and annotated according to an established coding scheme. There were two coding schemes, one for the debriefers and one for the students, each using a different rating scale. Once all of the videos have been segmented and analyzed, the resulting data can be used to build a program that helps optimize the feedback given.

As this is a one-year study, we are only halfway to our final results. We have transcribed more than 150 videos, segmented them, and applied annotation labels based on the two coding schemes. During this process, we analyzed patterns in the valence of students’ reflections and in the Positivity, Negativity, Constructiveness, and Specificity scales of the debriefers’ feedback. In the final stage, we will employ a methodology for multimodal sentiment analysis, which consists of gathering sentiments from the available simulation videos by extracting audio, visual, and textual data features as sources of information. We will then merge the affective information extracted from the multiple modalities and apply this merged multimodal technique to predict and analyze a trainee’s emotional state when receiving feedback. Our findings so far show a range of reactions to feedback, from negative deactivating to positive activating.

This research is valuable to the future of medical education: analyzing the quality of the feedback given can help optimize these patient simulations and better prepare medical students for real-life situations. At the same time, this research will develop a system for automatic quantification and interpretation of an individual’s emotions when receiving feedback, based on verbal and nonverbal behaviors such as words (speech content), facial expressions (when the camera angle allows), tone of voice, and turn-taking.
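To illustrate the fusion step described above, the following is a minimal sketch of decision-level (late) multimodal fusion in Python. It assumes each modality model already outputs a valence score in [-1, 1] and an activation score in [0, 1]; the function names, weights, and thresholds are illustrative assumptions, not the study’s actual parameters.

```python
def fuse_modalities(scores, weights=None):
    """Weighted average of per-modality (valence, activation) estimates.

    scores:  dict mapping modality name -> (valence, activation)
    weights: optional dict mapping modality name -> relative weight
    """
    if weights is None:
        weights = {m: 1.0 for m in scores}  # equal weighting by default
    total = sum(weights[m] for m in scores)
    valence = sum(weights[m] * scores[m][0] for m in scores) / total
    activation = sum(weights[m] * scores[m][1] for m in scores) / total
    return valence, activation


def label_reaction(valence, activation, threshold=0.5):
    """Map a fused estimate onto valence/activation quadrants,
    e.g. 'negative deactivating' vs. 'positive activating'."""
    v = "positive" if valence >= 0 else "negative"
    a = "activating" if activation >= threshold else "deactivating"
    return f"{v} {a}"


# Hypothetical per-modality estimates for one debriefing segment:
segment = {
    "text":  (-0.4, 0.2),   # e.g. from transcript sentiment
    "audio": (-0.2, 0.3),   # e.g. from tone-of-voice features
    "video": (-0.6, 0.1),   # e.g. from facial-expression features
}
print(label_reaction(*fuse_modalities(segment)))  # negative deactivating
```

In practice each modality’s scores would come from its own feature-extraction model, and the weights could be tuned against the human-coded annotation labels.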

Authors: Coco Yu, Nicole Meimaris, Ava Gizoni, Vitaliy Popov
Research Method: Computer Programming
