An Integrated Neural Framework for Dynamic and Static Face Processing.

Journal: Scientific Reports

Abstract

Faces convey rich information, including identity, gender, and expression. Current neural models of face processing posit a dissociation between the processing of invariant facial aspects, such as identity and gender, which engages the fusiform face area (FFA), and the processing of changeable aspects, such as expression and eye gaze, which engages the posterior superior temporal sulcus face area (pSTS-FA). Recent studies report a second dissociation within this network: the pSTS-FA, but not the FFA, shows a much stronger response to dynamic than to static faces. The aim of the current study was to test a unified model that accounts for both of these functional characteristics of the neural face network. In an fMRI experiment, we presented static and dynamic faces while subjects judged either an invariant facial aspect (gender) or a changeable one (expression). We found that the pSTS-FA was more engaged in processing dynamic than static faces and changeable than invariant aspects, whereas the occipital face area (OFA) and the FFA showed similar responses across all four conditions. These findings support an integrated neural model of face processing in which the ventral face areas extract form information from both invariant and changeable facial aspects, whereas the dorsal face areas are sensitive to dynamic and changeable facial aspects.

Authors

  • Michal Bernstein
    Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, 6997801, Israel. bernste2@mail.tau.ac.il.
  • Yaara Erez
    MRC Cognition and Brain Sciences Unit, 15 Chaucer Rd, Cambridge, UK.
  • Idan Blank
    Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA.
  • Galit Yovel
    Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, 6997801, Israel.