
An event-related potential comparison of facial expression processing between cartoon and real faces

  • Jiayin Zhao,

    Roles Conceptualization, Formal analysis, Investigation, Methodology, Software, Validation, Writing – original draft, Writing – review & editing

    Affiliation Beijing Key Laboratory of Learning and Cognition, Department of Psychology, Capital Normal University, Beijing, China

  • Qi Meng,

    Roles Data curation, Methodology, Validation, Visualization, Writing – review & editing

    Affiliation Beijing Key Laboratory of Learning and Cognition, Department of Psychology, Capital Normal University, Beijing, China

  • Licong An,

    Roles Data curation, Methodology, Validation, Visualization, Writing – review & editing

    Affiliation Beijing Key Laboratory of Learning and Cognition, Department of Psychology, Capital Normal University, Beijing, China

  • Yifang Wang

    Roles Funding acquisition, Project administration, Resources, Supervision, Writing – review & editing

    wangyifang6275@126.com

    Affiliation Beijing Key Laboratory of Learning and Cognition, Department of Psychology, Capital Normal University, Beijing, China

Abstract

Faces play important roles in the social lives of humans. Besides real faces, people also encounter numerous cartoon faces in daily life, which convey basic emotional states through facial expressions. Using event-related potentials (ERPs), we conducted a facial expression recognition experiment with 17 university students to compare the processing of cartoon faces with that of real faces. This study used face type (real vs. cartoon), emotion valence (happy vs. angry), and participant gender (male vs. female) as independent variables. Reaction time, recognition accuracy, and the amplitudes and latencies of emotion-related ERP components such as the N170, VPP (vertex positive potential), and LPP (late positive potential) were used as dependent variables. The ERP results revealed that cartoon faces elicited larger N170 and VPP amplitudes and a shorter N170 latency than real faces, whereas real faces elicited larger LPP amplitudes than cartoon faces. In addition, the results showed a significant hemispheric difference, reflected in a right-hemisphere advantage. The behavioral results showed that reaction times were shorter for happy faces than for angry faces, that females were more accurate than males, and that males recognized angry faces more accurately than happy faces. Given the sample size, these results are suggestive rather than conclusive regarding differences in facial expression recognition and neural processing between cartoon and real faces. Cartoon faces were processed with greater intensity and speed than real faces during the early processing stage, whereas more attentional resources were allocated to real faces during the late processing stage.

Introduction

Faces play important roles in human social life. They convey unique identity information and basic emotions through facial expressions. In daily life, facial expressions provide important non-verbal forms of information and communication [1]. The ability to recognize a facial expression reflects an individual's ability to infer the psychological states of others [2]. Facial expression recognition not only helps to determine internal emotional states and the intentions conveyed by an individual but also provides feedback and induces social interactions [3,4]. Ekman and Friesen (1978) summarized six basic human facial expressions including happiness, sadness, surprise, fear, anger, and disgust [5]. These facial expressions have been identified and confirmed across different cultural contexts [6].

In addition to real faces, people also encounter many cartoon faces in daily life. Common social networks (e.g., WeChat) provide various cartoon face emoji for communicating and expressing emotions. Compared with real faces, cartoon faces usually have larger eyes, smaller noses, and finer skin texture [7]. Chen and colleagues (2010) found that people developed a preference for real faces with larger eyes after adaptation to cartoon faces with the unusually large eyes typical of Japanese cartoons [8]. Some researchers have compared cartoon faces and real faces with regard to recognition accuracy and reaction time. Kendall, Raffaelli, Kingstone, and Todd (2016) asked participants to identify emotions on five sets of briefly presented faces that ranged from photorealistic to fully iconic; emotion recognition accuracy was higher for the cartoonized faces [9]. In another study, participants responded faster to real faces than to cartoon faces when required to determine whether an image was a face or a car [10]. Moreover, children were more accurate at recognizing upright faces than inverted faces for real faces but not for cartoon faces [11]. Similarly, brain imaging studies have found that the fusiform face area responds differently to cartoon and real faces [12, 13].

However, research on the recognition of cartoon and real faces has shown mixed results. Using synthesized emotion images, Hoptman and Levy (1988) studied the processing preferences of left- and right-handed individuals for cartoon and real faces and found no significant difference between the two face types [14]. In a comparative study of facial expression processing in children with Asperger syndrome and typically developing children, no difference was found between the processing of cartoon faces and real faces [15]. Moreover, Rosset et al. (2008) found that children relied on a configural strategy when processing the emotional expressions of real faces, human cartoon faces, and non-human cartoon faces alike [16].

In the existing research, there is no uniform form or definition of a cartoon face. In the present study, a cartoon face is defined as a cartoonized face generated from the characteristics of a real face using the software MYOTee (a cartoon image editor, Shenzhen MYOTee Technology Co., Ltd.). Compared with a real face, it is stylized with a more exaggerated expression, larger eyes, a smaller nose, and smoother skin.

Both cartoon and real faces convey emotional information through facial expressions. The six basic facial expressions can be categorized as positive or negative. Studies of the reaction times and recognition accuracies for positive and negative facial expressions have reported mixed results. Some suggest that reaction times for positive expressions are faster than those for other facial expressions [17, 18], whereas other studies suggest that people recognize negative facial expressions faster than positive ones [19, 20].

Event-related potentials (ERPs) have also been used to study the neurophysiological basis of these differences. ERPs can distinguish different visual stimuli and differentiate disparate emotional states. Because no overt participant response is required, ERP testing enables the measurement of emotional attitudes that people are unwilling to express [21].

The ERP components related to faces and facial expressions include the N170, VPP, and LPP. The N170 is primarily distributed over the occipito-temporal region and usually shows a larger response in the right hemisphere [22]. The N170 is a face-specific ERP component whose peak shows face selectivity: it is induced by face stimuli but not by furniture, cars, hand gestures, or other stimuli [23]. With respect to face type, research has shown that the N170 induced by real faces and cars was stronger than that induced by cartoon faces and cars [10]. Furthermore, the attractiveness of cartoon faces affects the amplitude of the N170 [24, 25]. Another study showed that real faces induced a stronger N170 than abstract sketches of faces, whereas the difference relative to schematic faces was not significant [26]. Facial expressions also modulate the N170 during early processing, with emotional faces inducing larger N170 amplitudes than neutral faces [27–29]. Batty and Taylor (2003) recorded ERPs while participants viewed the six basic facial expressions and neutral expressions; positive expressions yielded shorter N170 latencies than negative expressions, and fearful expressions induced significantly larger amplitudes than the other expressions [1].

The N170 has a corresponding positive component at midline central sites, the VPP. The VPP and N170 have similar functional properties and are considered two manifestations of the same brain processes [30]. The VPP is sometimes more sensitive to facial expression information than the N170 and can be influenced by facial expressions when the N170 is not [31].

Additional processing of emotional faces is reflected by the LPP component, which originates from the occipital lobe and the posterior parietal cortex and reflects the cerebral cortex's evaluation of emotional stimuli, stimulus representation in working memory, and decision-making processes [32–34]. Researchers who investigated adults' ERP processing of real and cartoon faces with neutral expressions found that real faces induce significantly higher average LPP amplitudes than cartoon faces [10]. A stronger LPP was also found for neutral real faces compared with neutral puppet faces [35]. Schindler et al. (2017) employed six face-stylization levels varying from abstract to realistic and found that LPP amplitude increased as the faces became more realistic [7]. The LPP component is also sensitive to various emotional stimuli, including faces [36–40]. Findings on the influence of emotion valence on the LPP are inconsistent. Although some reports concluded that negative expressions induce smaller amplitudes than positive expressions [41], others found that negative expressions induce larger LPP components than positive expressions [42]. Still other studies found no significant differences between the processing of positive and negative expressions [43, 44].

Moreover, other studies have investigated the influence of participant gender on the recognition of facial expressions. Hoffmann and colleagues asked participants to identify six basic but subtle facial emotions (50% emotional content). The results showed that women were more accurate than men at recognizing subtle facial displays of anger, disgust, and fear [45]. Wildgruber, Pihan, Ackermann, Erb and Grodd (2002) found no behavioral difference between males and females with regard to differentiating happy from sad sounds. However, higher response amplitudes within the left-hemisphere posterior middle temporal gyrus were found among women compared with men, whereas a larger increase of activation within the right middle frontal gyrus was observed among the latter [46]. Han, Gao, Humphreys and Ge (2008) found significant differences in the behaviors and brain activities between men and women during emotion-related tasks. Women showed faster threat detection times, while men showed stronger posterior parietal activation [47].

In summary, the existing research suggests that the N170, VPP, and LPP are closely related to facial expression processing, but no consistent conclusions exist regarding the comparison of processing modes, speeds, and intensities between cartoon and real faces. With respect to facial expression selection, the present study used anger and happiness for comparison [7, 17, 19, 20, 23, 41]. The present study used an ERP methodology to investigate the processing of real and cartoon facial expressions among men and women. We hypothesized that (1) face type (i.e., real vs. cartoon faces) would influence the amplitudes and latencies of the N170 and VPP, although the direction of the effect was unclear; (2) the LPP amplitudes elicited by cartoon faces would be smaller than those elicited by real faces, which are more distinctive because they carry more detail; (3) the late component LPP, but not the N170 or VPP, would be affected by emotional valence; (4) recognition would be faster for a positive emotion (i.e., happiness) than for a negative emotion (i.e., anger); and (5) women would recognize facial expressions faster and more accurately than men.

Method

Participants

We recruited 17 participants (11 males, 6 females; average age = 24.18, SD = 2.32) from universities in Beijing. All participants were right-handed, had normal hearing and vision (with or without correction), and no history of hearing, neurological, or psychiatric disorders. Participants were compensated after the experiment. The current research was approved by the Capital Normal University Institutional Review Board, and written informed consent was obtained from each participant.

Materials

The pictures used in the experiment were selected from the Chinese Facial Affective Picture System (CFAPS; Wang and Luo, 2005) and the Japanese Female Facial Expression (JAFFE) database. Fifty pictures of happy faces (25 male and 25 female) and 50 pictures of angry faces (25 male and 25 female) were selected from the two databases, for a total of 100 pictures. We used MYOTee (a cartoon image editor) to convert these faces into cartoon faces. Subsequently, we used Photoshop to overlay the cartoon faces onto the original pictures for fine-tuning, retaining the same face structure and hairstyle, to synthesize 100 cartoon facial expression pictures. In total, 200 pictures were used in this experiment. All pictures were presented in black and white at a resolution of 260 × 300 pixels with a consistent contrast (Fig 1). The individuals in Fig 1 have given written informed consent (as outlined in the PLOS consent form) to publish these case details.
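As a rough illustration of the final stimulus-standardization step (black-and-white presentation at 260 × 300 pixels), the following minimal sketch uses the Pillow library; the folder names are hypothetical, and this is not the authors' actual preparation script (the MYOTee/Photoshop cartoonization itself was done by hand).

```python
# Minimal sketch of standardizing the stimulus pictures, assuming the Pillow library.
# Folder names are hypothetical placeholders.
from pathlib import Path
from PIL import Image

SRC_DIR = Path("stimuli_raw")     # hypothetical folder with the 200 source pictures
OUT_DIR = Path("stimuli_final")   # hypothetical output folder
OUT_DIR.mkdir(exist_ok=True)

for src in sorted(SRC_DIR.glob("*.png")):
    img = Image.open(src).convert("L")   # black-and-white (grayscale) presentation
    img = img.resize((260, 300))         # uniform 260 x 300 pixel resolution
    img.save(OUT_DIR / src.name)
```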

Twenty additional volunteers (non-participants; mean age = 25.3 years) evaluated the pictures. The evaluation included the identification of facial expression type (i.e., by pressing the “G” key for happiness and the “F” key for anger) and a Likert rating of the facial emotion (9 = extremely happy or angry; 1 = not at all happy or angry). The evaluation results revealed a recognition accuracy of 95.9%, with an emotion intensity rating of 4.81 ± 1.91 (Table 1). Therefore, all 200 pictures were retained as stimuli for the experiment.
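The validation summary in Table 1 amounts to per-condition accuracy and intensity statistics. Below is a minimal sketch of that computation, assuming pandas and a hypothetical long-format ratings file (one row per volunteer and picture); the file name and column names are assumptions.

```python
# Sketch of summarizing the stimulus-validation data, assuming a hypothetical file with
# columns: volunteer, picture, face_type, valence, correct (0/1), intensity (1-9).
import pandas as pd

ratings = pd.read_csv("validation_ratings.csv")

summary = ratings.groupby(["face_type", "valence"]).agg(
    accuracy=("correct", "mean"),
    intensity_mean=("intensity", "mean"),
    intensity_sd=("intensity", "std"),
)
print(summary)                                         # per-condition values as in Table 1
print("Overall accuracy:", ratings["correct"].mean())  # reported as 95.9% in the text
```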

Table 1. Means (and standard deviations) for accuracy of facial expression recognition and evaluation of emotional intensity.

https://doi.org/10.1371/journal.pone.0198868.t001

Procedure

The experiment was conducted in a quiet and dimly lit laboratory. The stimulus images were presented on a 16-inch CRT monitor with a screen resolution of 1920 × 1080. Participants were required to complete facial expression identification tasks according to instructions presented on the monitor. Their electroencephalogram (EEG) data were collected during the experiment. For each trial, a focus point was presented for 1,000 ms. Subsequently, a facial image was presented, and the participant was required to determine whether the face was happy or angry by pressing a button (happy = 1; angry = 2) within 1,000 ms. If a button was pressed within 1,000 ms, then the picture disappeared, and a blank screen was presented until the next picture appeared. If no button was pressed, then the picture disappeared after 1,000 ms, and a blank screen was presented until the next picture appeared. The duration of the blank screens varied randomly from 900 ms to 1,700 ms. Fig 2 shows the experimental procedure. The experiment was divided into two blocks, each with 100 trials. The pictures within each block were balanced. Participants were given 2–3 min to rest between blocks.
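The authors' presentation software is not specified; purely as an illustration, the sketch below reconstructs the trial sequence described above (fixation for 1,000 ms; face until response or 1,000 ms; jittered blank of 900–1,700 ms; keys 1 = happy, 2 = angry) using PsychoPy, with hypothetical image file names and a single 100-trial block.

```python
# Illustrative PsychoPy reconstruction of one block of the trial sequence.
# Not the authors' script; image paths and window settings are assumptions.
import random
from psychopy import visual, core, event

win = visual.Window(size=(1920, 1080), color="black", units="pix")
fixation = visual.TextStim(win, text="+", height=40)
clock = core.Clock()

trials = [{"image": f"stimuli_final/face_{i:03d}.png"} for i in range(1, 101)]  # one block
random.shuffle(trials)

for trial in trials:
    fixation.draw()
    win.flip()
    core.wait(1.0)                                  # focus point for 1,000 ms

    visual.ImageStim(win, image=trial["image"]).draw()
    win.flip()
    clock.reset()
    keys = event.waitKeys(maxWait=1.0, keyList=["1", "2"], timeStamped=clock)

    win.flip()                                      # picture disappears, blank screen
    trial["response"], trial["rt"] = keys[0] if keys else (None, None)
    core.wait(random.uniform(0.9, 1.7))             # blank for 900-1,700 ms

win.close()
core.quit()
```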

ERP recording and data analyses

The EEG data were collected with Net Station EEG software using a 64-channel elastic sensor net (HydroCel Geodesic Sensor Net, Electrical Geodesics, Inc., Eugene, OR, USA). The impedance of all electrodes was kept below 50 kΩ during data acquisition. All electrodes were physically referenced to Cz (fixed by the EGI system).

Off-line EEG processing and analyses were performed using Net Station EEG software. The EEG data were band-pass filtered (0.1–40 Hz) and then re-referenced to the average of all electrodes. Trials containing artifacts (blinks, eye movements, and skin potentials with peak-to-peak deflections exceeding ±80 μV) and trials with incorrect responses were excluded from averaging.
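Offline processing was done in Net Station; for illustration only, the sketch below reproduces the same steps (0.1–40 Hz band-pass, average reference, ±80 μV peak-to-peak rejection, averaging per condition) in MNE-Python. The raw file name, epoch window, and event codes are assumptions.

```python
# Illustrative MNE-Python equivalent of the offline pipeline (the authors used Net Station).
# File name, epoch window, and event codes are assumptions.
import mne

raw = mne.io.read_raw_egi("subject01.raw", preload=True)   # 64-channel EGI recording
raw.filter(l_freq=0.1, h_freq=40.0)                         # band-pass 0.1-40 Hz
raw.set_eeg_reference("average")                            # re-reference to the average of all electrodes

events = mne.find_events(raw)
event_id = {"real/happy": 1, "real/angry": 2,               # assumed condition codes
            "cartoon/happy": 3, "cartoon/angry": 4}

epochs = mne.Epochs(
    raw, events, event_id,
    tmin=-0.2, tmax=0.8, baseline=(None, 0),                # assumed epoch window
    reject=dict(eeg=80e-6),                                 # drop trials with peak-to-peak > 80 uV
    preload=True,
)
# Incorrect-response trials would additionally be dropped here, e.g. epochs.drop(bad_indices).
evoked = {cond: epochs[cond].average() for cond in event_id}
```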

Based on the grand-average waveforms, the early ERP components (N170 and VPP) generated by the stimuli showed clear peaks. A time window of 125–195 ms was used to measure the peak amplitude and peak latency at electrode sites P7 (left hemisphere) and P8 (right hemisphere) for the N170 and at electrode site Cz for the VPP. For the LPP, the average amplitude was calculated over six electrode sites (P3, P4, PO3, PO4, Pz, POz) in a time window of 450–650 ms. The latency of the LPP was not analyzed because peak latencies vary unreliably for components spanning longer time intervals.
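The sketch below illustrates how these amplitude and latency measures could be extracted from MNE-Python Evoked objects; the averaged file name and condition labels are assumptions, and the 10-10 channel names simply follow the labels used in the text.

```python
# Illustrative extraction of N170/VPP peaks and LPP mean amplitude from averaged data.
# File name, condition labels, and channel naming are assumptions.
import mne

# one participant's averaged waveforms, keyed by condition label
evokeds = {ev.comment: ev for ev in mne.read_evokeds("subject01-ave.fif")}

def peak_in_window(evoked, ch, tmin, tmax, sign=-1):
    """Return (peak amplitude in microvolts, peak latency in ms) within a time window."""
    ev = evoked.copy().pick([ch]).crop(tmin, tmax)
    data = ev.data[0] * 1e6                     # volts -> microvolts
    idx = data.argmin() if sign < 0 else data.argmax()
    return data[idx], ev.times[idx] * 1e3

# N170: negative peak, 125-195 ms, at P7 (left) and P8 (right)
n170_amp_p8, n170_lat_p8 = peak_in_window(evokeds["cartoon/happy"], "P8", 0.125, 0.195, sign=-1)

# VPP: positive peak, same window, at Cz
vpp_amp, vpp_lat = peak_in_window(evokeds["cartoon/happy"], "Cz", 0.125, 0.195, sign=+1)

# LPP: mean amplitude, 450-650 ms, averaged over P3, P4, PO3, PO4, Pz, POz
lpp_chs = ["P3", "P4", "PO3", "PO4", "Pz", "POz"]
lpp_ev = evokeds["real/angry"].copy().pick(lpp_chs).crop(0.450, 0.650)
lpp_mean_uv = lpp_ev.data.mean() * 1e6
```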

For behavioral performance, repeated-measures ANOVAs were conducted with gender (male, female), face type (real, cartoon), and emotion valence (happy, angry) as independent variables and RT and accuracy as dependent variables. For the N170, VPP, and LPP components, repeated-measures ANOVAs were conducted with gender, face type, emotion valence, and lateralization (N170 only) as independent variables and amplitude and latency (latency not analyzed for the LPP) as dependent variables.
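For illustration, the within-subject part of these ANOVAs can be run with statsmodels' AnovaRM, as sketched below; the long-format data file is hypothetical, and the between-subjects factor gender would require a mixed-design ANOVA instead.

```python
# Sketch of the within-subject repeated-measures ANOVA (face type x emotion valence).
# The data file and column names are assumptions: one row per participant x face_type x
# valence, already averaged within each cell. Gender (between-subjects) is not handled here.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("behavior_long.csv")   # hypothetical columns: subject, face_type, valence, rt, accuracy

aov_rt = AnovaRM(df, depvar="rt", subject="subject",
                 within=["face_type", "valence"]).fit()
print(aov_rt)                           # F, degrees of freedom, and p for each effect and interaction
```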

Results

Behavioral performance

Mean response times (RTs), accuracies, and standard deviations are shown in Table 2. For RT, a significant effect of emotion valence, F(1, 15) = 4.95, p = 0.042, ηp2 = 0.25, revealed that RTs were shorter for happy faces than for angry faces. The main effects of gender, F(1, 15) = 2.40, p > 0.05, ηp2 = 0.14, and face type, F(1, 15) = 2.17, p > 0.05, ηp2 = 0.13, did not reach significance, nor did any of the interactions (ps > 0.05).

Table 2. Means (and standard deviations) for response time (RT) and accuracy.

https://doi.org/10.1371/journal.pone.0198868.t002

For accuracy, a significant effect of gender, F(1, 15) = 5.38, p = 0.035, ηp2 = 0.26, indicated that females performed better than males. The results also showed a significant interaction between gender and emotion valence, F(1, 15) = 5.50, p = 0.033, ηp2 = 0.27. Follow-up simple-effects analysis showed that males were more accurate for angry faces than for happy faces, F(1, 15) = 10.48, p < 0.01, ηp2 = 0.41, whereas females' accuracy did not differ between happy and angry faces, F(1, 15) = 0.03, p > 0.05, ηp2 = 0.002. The main effects of face type, F(1, 15) = 0.34, p > 0.05, ηp2 = 0.02, and emotion valence, F(1, 15) = 4.40, p > 0.05, ηp2 = 0.23, did not reach significance, nor did the other interactions (ps > 0.05).

N170

Mean amplitudes and latencies of the N170, VPP, and LPP and their standard deviations are shown in S1 Table. For N170 amplitude, because the main effect of emotion valence was not significant, F(1, 13) = 1.03, p > 0.05, ηp2 = 0.07, we analyzed the data separately for happy and angry faces. For both happy and angry faces, there were significant effects of face type, F(1, 13) = 28.58, p < 0.01, ηp2 = 0.69 and F(1, 13) = 34.90, p < 0.01, ηp2 = 0.73, respectively, indicating that amplitudes were larger for cartoon faces than for real faces. For happy faces only, the results showed a significant interaction between face type and lateralization, F(1, 13) = 10.12, p < 0.01, ηp2 = 0.44. Follow-up simple-effects analysis showed that in the right hemisphere, the amplitude was larger for cartoon faces than for real faces, F(1, 13) = 40.71, p < 0.01, ηp2 = 0.76, whereas in the left hemisphere, the amplitude did not differ between cartoon and real faces, F(1, 13) = 3.58, p > 0.05, ηp2 = 0.22. The main effects of gender, F(1, 13) = 0.61, p > 0.05, ηp2 = 0.05, and lateralization, F(1, 13) = 2.32, p > 0.05, ηp2 = 0.15, did not reach significance, nor did the other interactions (ps > 0.05).

For N170 latency, because the main effect of emotion valence was not significant, F(1, 13) = 0.04, p > 0.05, ηp2 = 0.003, we analyzed the data separately for happy and angry faces. For both happy and angry faces, there were significant effects of face type, F(1, 13) = 5.95, p = 0.03, ηp2 = 0.31 and F(1, 13) = 4.94, p = 0.045, ηp2 = 0.28, respectively, indicating that latencies were longer for real faces than for cartoon faces. For angry faces only, the results showed a significant interaction between face type and lateralization, F(1, 13) = 5.68, p = 0.033, ηp2 = 0.30. Follow-up simple-effects analysis showed that in the right hemisphere, the latency was longer for real faces than for cartoon faces, F(1, 13) = 26.09, p < 0.01, ηp2 = 0.67, whereas in the left hemisphere, the latency did not differ between real and cartoon faces, F(1, 13) = 0.12, p > 0.05, ηp2 = 0.01. The main effects of gender, F(1, 13) = 0.69, p > 0.05, ηp2 = 0.05, and lateralization, F(1, 13) = 0.04, p > 0.05, ηp2 = 0.003, did not reach significance, nor did the other interactions (ps > 0.05).

VPP

For VPP amplitude, because the main effect of emotion valence was not significant, F(1, 13) = 3.71, p > 0.05, ηp2 = 0.22, we analyzed the data separately for happy and angry faces. Only for happy faces was there a significant effect of face type, F(1, 13) = 9.54, p < 0.01, ηp2 = 0.42, with larger amplitudes for cartoon faces than for real faces. The main effect of gender did not reach significance, F(1, 13) = 3.21, p > 0.05, ηp2 = 0.20, nor did any of the interactions (ps > 0.05).

For VPP latency, the interaction between face type and emotion valence was significant, F(1, 13) = 6.80, p = 0.022, ηp2 = 0.34, but follow-up simple-effects analysis found no significant effects. For real faces, the latency was 161.04 (3.15) ms for happy faces and 164.90 (2.71) ms for angry faces; for cartoon faces, it was 159.88 (3.16) ms for happy faces and 158.11 (3.36) ms for angry faces. The main effects of gender, F(1, 13) = 2.15, p > 0.05, ηp2 = 0.14, face type, F(1, 13) = 1.50, p > 0.05, ηp2 = 0.10, and emotion valence, F(1, 13) = 0.64, p > 0.05, ηp2 = 0.05, did not reach significance, nor did the other interactions (ps > 0.05).

LPP

For LPP amplitude, because the main effect of emotion valence was not significant, F(1, 13) = 3.09, p > 0.05, ηp2 = 0.19, we analyzed the data separately for happy and angry faces. Only for angry faces was there a significant effect of face type, F(1, 13) = 7.08, p = 0.02, ηp2 = 0.35, with larger amplitudes for real faces than for cartoon faces. The main effect of gender was not significant, F(1, 13) = 3.03, p > 0.05, ηp2 = 0.19, nor were any of the interactions (ps > 0.05).

Discussion

The differences in the processing of the two face types were primarily reflected in the amplitudes and latencies of the N170, VPP, and LPP. Cartoon faces were associated with significantly larger N170 and VPP amplitudes and shorter N170 latencies than real faces. In addition, for the N170 there was a significant interaction between face type and lateralization, with larger amplitudes and shorter latencies for cartoon faces in the right hemisphere. Real and cartoon faces also differed in LPP amplitude: the amplitudes induced by real faces were significantly larger than those induced by cartoon faces. Although no significant effects of emotion valence or gender were found for the ERP components, happy faces yielded shorter reaction times than angry faces, and females were more accurate than males. Moreover, there was a significant interaction between emotion valence and gender: males recognized angry faces more accurately than happy faces, whereas no difference was found for females.

For the early components N170 and VPP, cartoon faces induced larger amplitudes and shorter latencies. This finding suggests that cartoon facial expressions are more easily recognized than real facial expressions, which is inconsistent with previous studies indicating that real faces induce larger N170 and VPP amplitudes and shorter latencies than cartoon faces [10, 26]. These inconsistent results might have been caused by differences in the stimulus materials. In Wang's study, real faces were collected from preschool children (mean age of approximately 6 years), and cartoon faces were obtained from screenshots of high-resolution popular cartoon DVDs, which varied in face shape and facial features [10]. In the present study, the real face stimuli were collected from adult facial expression databases (CFAPS and JAFFE), and the cartoon faces were converted from these real faces and therefore had matched face shapes and facial features. Thus, different facial structures may have led to the inconsistent results. In addition, the cartoon faces used in this study are more simplified and abstract and therefore might have elicited stronger N170 amplitudes. Schindler et al. (2017) employed six face-stylization levels varying from abstract to realistic and investigated differences in the processing of real and cartoon faces [7]. The results revealed a U-shaped relationship between the N170 and face realism: both the most abstract and the most realistic faces elicited stronger responses than medium-stylized faces. In addition, Proverbio, Riva, Martin and Zani (2010) found that infant faces elicited larger N170 amplitudes than adult faces, most likely because of juvenile characteristics such as the larger proportion of the eyes [48]. In the present study, the eyes of the cartoon faces were much larger than those of real people.

Real and cartoon faces also differed in LPP amplitude: the amplitudes induced by real faces were significantly larger than those induced by cartoon faces. This finding is consistent with previous studies [7, 10, 49]. When the neutral expressions of real faces and puppet faces were compared, no differences in the N170 were observed, but a stronger LPP was found for real faces starting at 400 ms [49]. This effect probably reflects the uniqueness of the real face as well as the understanding of the portrayed individual [35], considering that computer-generated faces are usually more difficult to remember [50, 51]. Bruce and Young (1986) considered facial feature encoding and identity recognition to be the second stage of face recognition [52]. This stage includes the accurate processing of facial information such as age, gender, race, and facial expression. Moreover, compared with simplified cartoon faces, real faces convey more personal information and social meaning. Hence, adults may devote more psychological resources to real faces during late face processing. Furthermore, the LPP is related to facial attractiveness [49, 53, 54]. Therefore, the results of the present study might suggest that, to adults, real faces are more attractive than simplified cartoon faces.

For the behavioral data, reaction times were shorter for happy faces than for angry faces, and response accuracy was higher for females than for males, consistent with previous studies: positive facial expressions are more easily recognized than negative ones [17, 18], and women have a face recognition advantage over men [55–58]. In addition, an interaction was found between emotion valence and gender: men were more accurate at recognizing angry faces, whereas women showed no difference. A possible explanation is that women are better at identifying emotion in general, so their accuracy for both happy and angry faces may be at ceiling, whereas the two emotions are more difficult for men. The higher accuracy of angry face recognition among men might be because they are more physically aggressive than women [59] and are therefore likely more sensitive to social signals that convey aggressiveness.

Our selected stimuli (i.e., the cartoon and real facial expression pictures) represent an advantage of this study. One important advantage is that the cartoon faces were converted from the real faces and therefore retained the same facial structure and hairstyle. Most existing research has used screenshots of cartoon characters or sketched faces and expression icons [10, 20, 26], which cannot exclude differences other than facial expression relative to real faces. Another advantage is that all of the images were of Asian adults, which prevented the introduction of cultural and age differences that might have been caused by the use of Western emotional faces. One limitation of this study is its small sample size (17 participants), which might have resulted in the large standard deviation in VPP latency; although the interaction of face type and emotion valence was significant, the simple effects were not. Nevertheless, in addition to the significant main effects of face type for the ERP components, the corresponding effect sizes were considerable (all ηp2 > 0.20). Still, given the sample size, these findings should be considered suggestive rather than conclusive. Future research should increase the sample size to examine the interaction between face type (cartoon vs. real) and facial expression type. Another limitation is that although ERP offers high temporal resolution, its spatial resolution is low; future research should apply fMRI, which has high spatial resolution. Finally, children are the primary audience for cartoons, and childhood is an important stage for developing emotional cognition. Building on the present study, future studies should compare adults and children with regard to the processing of cartoon and real facial expressions. This line of research might help draw a clearer picture of the developmental process associated with cartoon face processing.

Conclusions

We used ERPs to measure the brain activity induced by the facial expressions of cartoon and real faces. According to the neurophysiological evidence in this study, face type has a strong but heterogeneous effect on the N170, VPP, and LPP components. During the early processing stage, adults process cartoon faces faster than real faces; however, adults allocate more attentional resources to real faces during the late processing stage. Future research should use larger sample sizes to examine the interaction between face type (real vs. cartoon) and facial expression.

Supporting information

S1 Table. Means (and standard deviations) for amplitudes and latency of N170, VPP and LPP.

https://doi.org/10.1371/journal.pone.0198868.s001

(PDF)

Acknowledgments

The authors thank all the volunteers who participated in this study.

References

  1. Batty M, Taylor MJ. Early processing of the six basic facial emotional expressions. Brain Res Cogn Brain Res. 2003;17(3):613–20. pmid:14561449
  2. Nelson CA, Morse PA, Leavitt LA. Recognition of facial expressions by seven-month-old infants. Child Dev. 1979;50(4):1239–42. pmid:535438
  3. Thompson DF, Meltzer L. Communication of emotional intent by facial expression. J Abnorm Psychol. 1964;68:129–35. pmid:14117963
  4. Erickson K, Schulkin J. Facial expressions of emotion: a cognitive neuroscience perspective. Brain Cogn. 2003;52(1):52–60. pmid:12812804
  5. Ekman P, Friesen WV. Facial Action Coding System (FACS): a technique for the measurement of facial actions. Riv Psichiatr. 1978;47(2):126–38.
  6. Ekman P, Friesen WV. Constants across cultures in the face and emotion. J Pers Soc Psychol. 1971;17(2):124–9. pmid:5542557
  7. Schindler S, Zell E, Botsch M, Kissler J. Differential effects of face-realism and emotion on event-related brain potentials and their implications for the uncanny valley theory. Sci Rep. 2017;7:45003. pmid:28332557
  8. Chen H, Russell R, Nakayama K, Livingstone M. Crossing the 'uncanny valley': adaptation to cartoon faces can influence perception of human faces. Perception. 2010;39(3):378–86. pmid:20465173
  9. Kendall LN, Raffaelli Q, Kingstone A, Todd RM. Iconic faces are not real faces: enhanced emotion detection and altered neural processing as faces become more iconic. Cogn Res Princ Implic. 2016;1(1):19. pmid:28180170
  10. Wang L, Wang J, Wang J, Lu Y. A comparative event-related potential study on recognition of cartoon face and real face. Psychol Res. 2012;5(5):19–28.
  11. Rosset DB, Santos A, Da FD, Poinso F, O'Connor K, Deruelle C. Do children perceive features of real and cartoon faces in the same way? Evidence from typical development and autism. J Clin Exp Neuropsychol. 2010;32(2):212–218. pmid:19562609
  12. Tong F, Nakayama K, Moscovitch M, Weinrib O, Kanwisher N. Response properties of the human fusiform face area. Cogn Neuropsychol. 2000;17(1):257–80. pmid:20945183
  13. Jovicich J, Peters RJ, Koch C, Chang L, Ernst T. Human perception of faces and face cartoons: an fMRI study. In: Proceedings of the 8th Scientific Meeting and Exhibition of the International Society for Magnetic Resonance in Medicine (p. 884). Denver, CO, USA.
  14. Hoptman MJ, Levy J. Perceptual asymmetries in left- and right-handers for cartoon and real faces. Brain Cogn. 1988;8(2):178–88. pmid:3196482
  15. Miyahara M, Bray A, Tsujii M, Fujita C, Sugiyama T. Reaction time of facial affect recognition in Asperger's disorder for cartoon and real, static and moving faces. Child Psychiatry Hum Dev. 2007;38(2):121–34. pmid:17340170
  16. Rosset DB, Rondan C, Fonseca DD, Santos A, Assouline B, Deruelle C. Typical emotion processing for cartoon but not for real faces in children with autistic spectrum disorders. J Autism Dev Disord. 2008;38(5):919–25. pmid:17952583
  17. Eimer M, Holmes A, McGlone FP. The role of spatial attention in the processing of facial expression: an ERP study of rapid brain responses to six basic emotions. Cogn Affect Behav Neurosci. 2003;3(2):97–110. pmid:12943325
  18. Calvo MG, Lundqvist D. Facial expressions of emotion (KDEF): identification under different display-duration conditions. Behav Res Methods. 2008;40(1):109–15. pmid:18411533
  19. Hansen CH, Hansen RD. Finding the face in the crowd: an anger superiority effect. J Pers Soc Psychol. 1988;54(6):917–24. pmid:3397866
  20. Eastwood JD, Smilek D, Merikle PM. Differential attentional guidance by unattended faces expressing positive and negative emotion. Percept Psychophys. 2001;63(6):1004–13. pmid:11578045
  21. Bernat E, Bunce S, Shevrin H. Event-related brain potentials differentiate positive and negative mood adjectives during both supraliminal and subliminal visual processing. Int J Psychophysiol. 2001;42(1):11–34. pmid:11451477
  22. Rossion B, Jacques C. Does physical interstimulus variance account for early electrophysiological face sensitive responses in the human brain? Ten lessons on the N170. Neuroimage. 2008;39(4):1959–79. pmid:18055223
  23. Bentin S, Allison T, Puce A, Perez E, McCarthy G. Electrophysiological studies of face perception in humans. J Cogn Neurosci. 1996;8(6):551–65. pmid:20740065
  24. Lu Y, Wang J, Wang L, Wang J, Qin J. Neural responses to cartoon facial attractiveness: an event-related potential study. Neurosci Bull. 2014;30(3):441–50. pmid:24526658
  25. Marzi T, Viggiano MP. When memory meets beauty: insights from event-related potentials. Biol Psychol. 2010;84(2):192–205. pmid:20109520
  26. Sagiv N, Bentin S. Structural encoding of human and schematic faces: holistic and part-based processes. J Cogn Neurosci. 2001;13(7):937–51. pmid:11595097
  27. Galli G, Feurra M, Viggiano MP. "Did you see him in the newspaper?" Electrophysiological correlates of context and valence in face processing. Brain Res. 2006;1119(1):190–202. pmid:17005161
  28. Hinojosa JA, Mercado F, Carretié L. N170 sensitivity to facial expression: a meta-analysis. Neurosci Biobehav Rev. 2015;55:498–509. pmid:26067902
  29. Rellecke J, Sommer W, Schacht A. Does processing of emotional facial expressions depend on intention? Time-resolved evidence from event-related brain potentials. Biol Psychol. 2012;90(1):23–32. pmid:22361274
  30. Joyce C, Rossion B. The face-sensitive N170 and VPP components manifest the same brain processes: the effect of reference electrode site. Clin Neurophysiol. 2005;116(11):2613–31. pmid:16214404
  31. Ashley V, Vuilleumier P, Swick D. Time course and specificity of event-related potentials to emotional expressions. Neuroreport. 2004;15(1):211–6. pmid:15106860
  32. Bublatzky F, Gerdes AB, White AJ, Riemer M, Alpers GW. Social and emotional relevance in face processing: happy faces of future interaction partners enhance the late positive potential. Front Hum Neurosci. 2014;8:1–10.
  33. Keil A, Bradley MM, Hauk O, Rockstroh B, Elbert T, Lang PJ. Large-scale neural correlates of affective picture processing. Psychophysiology. 2002;39(5):641–9. pmid:12236331
  34. Schupp HT, Flaisch T, Stockburger J, Junghöfer M. Emotion and attention: event-related brain potential studies. Prog Brain Res. 2006;156:31–51. pmid:17015073
  35. Wheatley T, Weinberg A, Looser C, Moran T, Hajcak G. Mind perception: real but not artificial faces sustain neural activity beyond the N170/VPP. PLoS One. 2011;6(3):e17960. pmid:21483856
  36. Flaisch T, Häcker F, Renner B, Schupp HT. Emotion and the processing of symbolic gestures: an event-related brain potential study. Soc Cogn Affect Neurosci. 2011;6(1):109–18. pmid:20212003
  37. Schindler S, Kissler J. People matter: perceived sender identity modulates cerebral processing of socio-emotional language feedback. Neuroimage. 2016;134:160–169. pmid:27039140
  38. Schupp HT, Junghöfer M, Weike AI, Hamm AO. The selective processing of briefly presented affective pictures: an ERP analysis. Psychophysiology. 2004;41(3):441–9. pmid:15102130
  39. Steppacher I, Schindler S, Kissler J. Higher, faster, worse? An event-related potentials study of affective picture processing in migraine. Cephalalgia. 2015;36(3):249–57. pmid:25997644
  40. Wieser MJ, Pauli P, Reicherts P, Mühlberger A. Don't look at me in anger! Enhanced processing of angry faces in anticipation of public speaking. Psychophysiology. 2010;47(2):271–280. pmid:20030758
  41. Hietanen JK, Astikainen P. N170 response to facial expressions is modulated by the affective congruency between the emotional expression and preceding affective picture. Biol Psychol. 2013;92(2):114–24. pmid:23131616
  42. Zhu Y, Liu Z. An ERP study of dynamic facial emotional expressions under different attention conditions. Chin J Appl Psychol. 2014;20(4):375–84.
  43. Codispoti M, Ferrari V, Bradley MM. Repetitive picture processing: autonomic and cortical correlates. Brain Res. 2006;1068(1):213–20. pmid:16403475
  44. Recio G, Sommer W, Schacht A. Electrophysiological correlates of perceiving and evaluating static and dynamic facial emotional expressions. Brain Res. 2011;1376:66–75. pmid:21172314
  45. Hoffmann H, Kessler H, Eppel T, Rukavina S, Traue HC. Expression intensity, gender and facial emotion recognition: women recognize only subtle facial emotions better than men. Acta Psychol. 2010;135(3):278–83.
  46. Wildgruber D, Pihan H, Ackermann H, Erb M, Grodd W. Dynamic brain activation during processing of emotional intonation: influence of acoustic parameters, emotional valence, and sex. Neuroimage. 2002;15(4):856–69. pmid:11906226
  47. Han S, Gao X, Humphreys GW, Ge J. Neural processing of threat cues in social environments. Hum Brain Mapp. 2008;29(8):945–57. pmid:17636562
  48. Proverbio AM, Riva F, Martin E, Zani A. Face coding is bilateral in the female brain. PLoS One. 2010;5(6):e11242. pmid:20574528
  49. Ma Q, Qian D, Hu L, Wang L. Hello handsome! Male's facial attractiveness gives rise to female's fairness bias in Ultimatum Game scenarios: an ERP study. PLoS One. 2017;12(7):e0180459. pmid:28678888
  50. Balas B, Pacella J. Artificial faces are harder to remember. Comput Human Behav. 2015;52:331–7. pmid:26195852
  51. Crookes K, Ewing L, Gildenhuys JD, Kloth N, Hayward WG, Oxner M, et al. How well do computer-generated faces tap face expertise? PLoS One. 2015;10(11):e0141353. pmid:26535910
  52. Bruce V, Young A. Understanding face recognition. Br J Psychol. 1986;77(3):305–27.
  53. Marzi T, Viggiano MP. When memory meets beauty: insights from event-related potentials. Biol Psychol. 2010;84(2):192–205. pmid:20109520
  54. Werheid K, Schacht A, Sommer W. Facial attractiveness modulates early and late event-related brain potentials. Biol Psychol. 2007;76(1–2):100–8. pmid:17681418
  55. Hall JK, Hutton SB, Morgan MJ. Sex differences in scanning faces: does attention to the eyes explain female superiority in facial expression recognition? Cogn Emot. 2010;24(4):629–37.
  56. Brewster PW, Mullin CR, Dobrin RA, Steeves JK. Sex differences in face processing are mediated by handedness and sexual orientation. Laterality. 2011;16(2):188–200. pmid:20544495
  57. McBain R, Norton D, Chen Y. Females excel at basic face perception. Acta Psychol. 2009;130(2):168–73.
  58. Megreya AM, Bindemann M, Havard C. Sex differences in unfamiliar face identification: evidence from matching tasks. Acta Psychol. 2011;137(1):83–9.
  59. Eagly AH, Steffen VJ. Gender and aggressive behavior: a meta-analytic review of the social psychological literature. Psychol Bull. 1986;100(3):309–30. pmid:3797558