Preisig, Basil; Eggenberger, Noëmi; Cazzoli, Dario; Nyffeler, Thomas; Gutbrod, Klemens; Annoni, Jean-Marie; Meichtry, Jurka; Nef, Tobias; Müri, René Martin (2018). Multimodal Communication in Aphasia: Perception and Production of Co-speech Gestures During Face-to-Face Conversation. Frontiers in Human Neuroscience, 12:200. Frontiers Research Foundation. doi: 10.3389/fnhum.2018.00200
The role of nonverbal communication in patients with post-stroke language impairment (aphasia) is not yet fully understood. This study investigated how patients with aphasia perceive and produce co-speech gestures during face-to-face interaction, and whether distinct brain lesion sites predict the frequency of spontaneous co-speech gesturing. For this purpose, we recorded samples of conversations in patients with aphasia and healthy participants. Gesture perception was assessed by means of a head-mounted eye-tracking system, and the produced co-speech gestures were coded according to a linguistic classification system. The main results are that meaning-laden gestures (e.g., iconic gestures representing object shapes) are more likely to attract visual attention than meaningless hand movements, and that patients with aphasia fixate co-speech gestures more often overall than healthy participants. This implies that patients with aphasia may benefit from the multimodal information provided by co-speech gestures. At the level of co-speech gesture production, we found that patients with damage to the anterior part of the arcuate fasciculus showed a higher frequency of meaning-laden gestures. This area lies close to the premotor cortex and is considered important for speech production. This may suggest that the use of meaning-laden gestures depends on the integrity of patients' speech production abilities.