Time to Face Language: Embodied Mechanisms Underpin the Inception of Face-Related Meanings in the Human Brain / A.M. García, E. Hesse, A. Birba, F. Adolfi, E. Mikulan, M.M. Caro, A. Petroni, T.A. Bekinschtein, M. del Carmen García, W. Silva, C. Ciraolo, E. Vaucheret, L. Sedeño, A. Ibáñez. - In: CEREBRAL CORTEX. - ISSN 1460-2199. - 30:11 (2020 Nov), pp. 6051-6068. [10.1093/cercor/bhaa178]
Time to Face Language: Embodied Mechanisms Underpin the Inception of Face-Related Meanings in the Human Brain
E. Mikulan; A. Petroni
2020
Abstract
In construing meaning, the brain recruits multimodal (conceptual) systems and embodied (modality-specific) mechanisms. Yet, no consensus exists on how crucial the latter are for the inception of semantic distinctions. To address this issue, we combined electroencephalographic (EEG) and intracranial EEG (iEEG) recordings to examine when nouns denoting facial body parts (FBPs) and nonFBPs are discriminated in face-processing and multimodal networks. First, FBP words increased N170 amplitude (a hallmark of early facial processing). Second, they triggered fast (~100 ms) activity boosts within the face-processing network, alongside later (~275 ms) effects in multimodal circuits. Third, iEEG recordings from face-processing hubs allowed decoding ~80% of items before 200 ms, while classification based on multimodal-network activity only surpassed ~70% after 250 ms. Finally, EEG and iEEG connectivity between both networks proved greater in early (0-200 ms) than later (200-400 ms) windows. Collectively, our findings indicate that, at least for some lexico-semantic categories, meaning is construed through fast reenactments of modality-specific experience.

| File | Access | Type | Size | Format |
|---|---|---|---|---|
| bhaa178.pdf | Open access | Publisher's version/PDF | 897.93 kB | Adobe PDF |