Article

Spatially Filtered Emotional Faces Dominate during Binocular Rivalry

1 Department of Neuroscience, Psychology, Drug Research & Child’s Health, University of Florence, 50100 Florence, Italy
2 Fondazione Turano Onlus, 00195 Roma, Italy
3 Diagnostic and Nuclear Research Institute, IRCCS SDN, 80121 Napoli, Italy
* Author to whom correspondence should be addressed.
Submission received: 9 November 2020 / Revised: 12 December 2020 / Accepted: 15 December 2020 / Published: 17 December 2020

Abstract: The present investigation explores the role of bottom-up and top-down factors in the recognition of emotional facial expressions during binocular rivalry. We manipulated spatial frequencies (SF) and emotive features and asked subjects to indicate whether the emotional or the neutral expression was dominant during binocular rivalry. Controlling the bottom-up saliency with a computational model, physically comparable happy and fearful faces were presented dichoptically with neutral faces. The results showed the dominance of emotional faces over neutral ones. In particular, happy faces were reported more frequently as the first dominant percept even in the presence of coarse information (at a low SF level: 2–6 cycles/degree). Following current theories of emotion processing, the results provide further support for the influence of positive compared to negative meaning on binocular rivalry and, for the first time, showed that individuals perceive the affective quality of happiness even in the absence of details in the visual display. Furthermore, our findings represent an advance in knowledge regarding the association between the high- and low-level mechanisms behind binocular rivalry.

1. Introduction

Emotional facial expressions are the most relevant social cues in everyday human interactions. From an evolutionary perspective, emotions have evolved in order to provide adaptive regulation of our behavior, helping the individual to evaluate the presence of threats or potential mates, and to avoid or approach them depending on whether or not they constitute a relevant concern [1]. Besides being detected more rapidly in the visual stream, emotional facial expressions are also more likely to come into awareness and to resist failures of attention [2,3]. Several studies have also posited the existence of neural modules with long-standing evolutionary roots, which would be activated preferentially by such stimuli (for a review, see references [4,5]). The amygdala shows a greater fMRI response to fearful and happy faces as compared to neutral faces, even during periods of suppression [6]. Specifically, the interaction between the perigenual prefrontal cortex and amygdala modulates the threshold for awareness of emotional stimuli [7].
Although emotional stimuli (including faces) are recognized quickly, the results do not converge on a neural advantage for a particular emotional valence. For example, it has been repeatedly observed that happy faces are recognized more quickly and more accurately than any other facial expression [8]. This finding is consistent with the notion that positive emotion promotes well-being [9] and, therefore, that happy faces are more likely to be chosen during early decisional processes. Conversely, other studies reported that fearful expressions are perceived more quickly than happy [10,11] or neutral ones [12].
Despite our ability to consciously produce and understand facial expressions, in many cases, we obtain emotional information from faces by using pre-conscious processing [13]. This pre-attentive processing results in better stimulus detection [14] and relies on brain mechanisms at least partially dissociable from attentive ones [15].
In pursuit of clues to identify the mechanism of perceptual awareness, binocular rivalry affords a unique window into how the visual system handles visual decisional processes. When two different visual stimuli are presented simultaneously to the two eyes, they usually do not merge into a single stable combination, but compete for exclusive perceptual dominance. If both stimuli have similar bottom-up salience, such as the same spatial frequency levels, human observers typically experience a perceptual alternation between the two stimuli every few seconds, a phenomenon called binocular rivalry [16]. Whenever one of the two rival stimuli dominates conscious perception, the other stimulus is suppressed from conscious awareness [17,18]. However, if one of the two stimuli has a higher top-down salience (e.g., emotional valence) than the other, it dominates over time [19,20]. Indeed, both top-down and bottom-up factors, i.e., high- and low-level properties, can influence the perceptual switching and the duration of dominance periods during binocular rivalry. Earlier studies identified low-level properties, i.e., contrast [21], motion [22], and spatial frequency [23], as the most important forces driving binocular rivalry and determining the duration of dominance periods. Therefore, for some time, researchers believed rivalry was a low-level process concerning only stimulus strengths, such as brightness, contrast, and spatial frequency [24,25]. However, after the pioneering study by Engel [26], the literature began to view rivalry as a high-level process concerning top-down stimulus dimensions, such as their meaning [27,28].
In 2005, van Ee and colleagues [29] claimed that the mechanism behind rivalry (i.e., cycles of perceptual dominance and suppression) is largely independent of voluntary control, engages neural stages along several visual pathways, and is thus likely the result of different neural processes [16]. However, even though it has been proposed that binocular rivalry is resolved early in the visual pathway [17], top-down salience could also make a difference.
Given our limited ability to consciously process all the information in our environment (including facial expressions), we can assume that top-down factors, such as the emotional meaning of faces, might have priority access to visual awareness during binocular rivalry [19]. How emotion interacts with sensorial characteristics, such as the composition of visual inputs, in influencing visual decisional processing remains unclear.
In the present study, we aimed to explore the role of bottom-up and top-down factors in the recognition of emotional facial expressions during binocular rivalry. In particular, we manipulated spatial frequencies (SF) and emotive features. Recently, growing interest has arisen concerning the specificity of low spatial frequencies (LSFs) in the emotional response to happy faces [30,31]; for instance, happy expressions preceded by global processing were identified faster than those preceded by local processing, and vice versa [32]. In addition, global processing [33,34] has been found to facilitate the identification of faces with a happy expression, while local processing facilitated the identification of faces with negative expressions [35]. It is worth noting that the recognition of emotional facial expressions displaying various levels of detail, such as SF, has not been studied previously during binocular rivalry, even though both bottom-up and top-down factors have been shown to trigger rivalry processes. The recent literature has highlighted that the gold standard approach to controlling the influence of low-level stimulus properties is to employ inverted stimuli [36,37]. However, we chose to control the role of bottom-up information using the Itti and Koch saliency model [38,39]. This computational model analyzes natural images by simulating the early processing stages of the human visual system (e.g., luminance, color, and orientation) with a feed-forward feature-extraction architecture. The resulting map is able to detect salient objects in complex scenes, where locations of higher salience (e.g., salient traffic signs) are more likely to be fixated [40,41]. Several studies [42,43,44,45] have successfully employed this model (and the associated Saliency toolbox) to control the bottom-up saliency of stimuli.
Given the above premises, the present study examined the joint effects of the social relevance of facial stimuli and the role of SF during binocular rivalry, with the aim of investigating, while keeping contrast and brightness constant, which SF ranges are most relevant in terms of emotional advantage for perceptual dominance during binocular rivalry. For this purpose, happy, fearful, and neutral faces were presented, and different bands of SF (very-low, medium-low, low, and broad) were used to examine dominance periods. We selected faces with opposed valence (happy and fearful) that, according to the approach/withdrawal evolutionary model, should inspire opposite reactions [46]. We generated several spatially filtered face stimuli and subsequently created a visual array in which facial expressions were presented dichoptically (neutral and happy, or neutral and fearful) during an emotional–binocular rivalry paradigm. We recorded under binocular conditions: (1) the perceptual dominance (i.e., the first dominant percept) between neutral and emotional faces as a function of different bands of SF; and (2) the duration of emotional dominance (i.e., of happy and fearful expressions) as a function of different bands of SF (very-low, medium-low, low, and broad).

2. Materials and Methods

2.1. Participants

Eighteen young adults (Mage = 22.7, SDage = 4, nine males) participated in the experiment after giving written informed consent in accordance with the Declaration of Helsinki. Exclusion criteria were a history of neurological or psychiatric diseases, and alcohol or substance abuse. All participants reported normal vision and were right-handed. The study—performed according to the Declaration of Helsinki—is part of a set of behavioral and non-invasive studies on face recognition processing, which were approved by the Research Committee of the University of Florence (protocol number 17245_2012).

2.2. Stimuli

We used 96 digitized grayscale frontal view images of male and female individuals selected from the Radboud Faces Database [47], showing neutral (48 images, 24 males), happy (24 images, 12 males), and fearful (24 images, 12 males) facial expressions. We selected the 96 stimuli to be used in our experiment on the basis of the results of a validation study [47]. According to these findings, all the frontal view faces included in the database were perceived as emotional with an overall agreement of 82% (median 88%, SD 19%).
These face stimuli had a size of 1024 × 681 pixels and a resolution of 300 dpi. For the filtering process, we applied the procedure described by Vannucci and colleagues [48]. First, all stimuli were normalized to have the same mean luminance and contrast ranges. Then, 37 face stimuli were filtered using Matlab code in order to remove specific ranges of spatial frequencies from their spectrum. This filtering process created a multiresolution representation of each image. The different resolutions were obtained by means of a digital filter applied to the two-dimensional array representing the original image. The selected multiresolution filter was a Gaussian mask performing low-pass filtering (two-dimensional Gaussian convolution); the widths of the filtering windows were the key parameters determining the bandwidth of the filter. We used three ascending resolution levels, measured on a logarithmic (decibel) scale: very-low (2 cycles/degree), medium-low (4 cycles/degree), and low (6 cycles/degree) (see Figure 1). Therefore, we used four different bands in the experiment: one broad and three filtered.
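The filtering step described above can be sketched computationally. The sketch below is in Python rather than the Matlab code actually used, and the sigma values are hypothetical stand-ins for the 2–6 cycles/degree cutoffs; it only illustrates how a two-dimensional Gaussian convolution yields a multiresolution (low-pass) representation of a normalized image.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalize_image(img):
    """Rescale to zero mean and unit variance, standing in for the
    luminance/contrast normalization applied before filtering."""
    img = np.asarray(img, dtype=float)
    return (img - img.mean()) / img.std()

def lowpass_bands(img, sigmas=(8.0, 4.0, 2.0)):
    """Return one low-pass filtered copy of `img` per Gaussian width.
    A larger sigma removes more high spatial frequencies (coarser image)."""
    img = normalize_image(img)
    return {s: gaussian_filter(img, sigma=s) for s in sigmas}

# Example on a synthetic 340 x 330 "image" of random noise.
rng = np.random.default_rng(0)
img = rng.normal(size=(340, 330))
bands = lowpass_bands(img)
# The coarser band (sigma = 8) retains less high-frequency energy,
# so its pixel variance is lower than that of the finer band (sigma = 2).
```

Mapping a cycles/degree cutoff to a sigma would additionally require the viewing distance and pixel pitch, which this sketch leaves out.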
After the filtering procedure, using Adobe Photoshop, we cropped all faces from hairline to chin and fit them, after reducing their size, into a gray “frame” of 8.89 by 9.16 visual degrees (330 by 340 pixels) with an empty oval window containing the face stimulus, sized 6.12 by 5.53 visual degrees (227 by 205 pixels). We created in total 48 visual arrays containing random noise and two spatially aligned framed face stimuli (one neutral and one happy or fearful), displayed with two monocular fixation points (placed at the same position, around the nose of each stimulus). Note that the frames in which the faces were fitted were tilted: the face stimulus presented to the left eye was tilted to the left, and the face stimulus presented to the right eye was tilted to the right (see Figure 1).
Stimuli were tilted to present opposing orientations, thereby ensuring perceptual alternation and inducing rivalry [49].
Viewed through the stereoscope, the stimuli included in each visual array presented the same spatial frequencies but different facial expressions: one neutral and the other one happy or fearful.
To rule out the possibility that visual features of the stimuli (such as the presence vs. absence of teeth) could act as a confounding variable [50], the saliency model of Itti and Koch [38,39] described in the Introduction was employed: natural images are analyzed by simulating the early processing stages of the human visual system (e.g., luminance, color, and orientation) with a feed-forward feature-extraction architecture, and the resulting map detects salient objects in complex scenes [40,41].
Using the SaliencyToolbox 2.3 [51] with default settings, we computed the saliency map of each image. For each corresponding pair of stimuli, we compared the resulting maps pixelwise via a paired-samples t-test, excluding background pixels that coincided in both images. This analysis showed that the images in the corresponding pairs did not differ significantly (all p > 0.1).
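The saliency-matching check can be illustrated as follows. This is a Python sketch, not the SaliencyToolbox/Matlab code actually used: the two "saliency maps" are synthetic stand-ins, and the function name is ours. It shows the logic of excluding coinciding background pixels before running the paired t-test.

```python
import numpy as np
from scipy.stats import ttest_rel

def saliency_pair_test(map_a, map_b):
    """Paired-samples t-test over corresponding pixels of two saliency
    maps, excluding pixels with identical values in both maps (the
    shared background), as in the stimulus-matching procedure."""
    a, b = map_a.ravel(), map_b.ravel()
    keep = a != b  # drop coinciding background pixels
    return ttest_rel(a[keep], b[keep])

# Synthetic stand-ins: a shared zero background with a central salient
# region that differs only by small independent noise in each map.
rng = np.random.default_rng(1)
map_a = np.zeros((64, 64))
map_b = np.zeros((64, 64))
shared = rng.random((32, 32))
map_a[16:48, 16:48] = shared + rng.normal(0.0, 0.05, (32, 32))
map_b[16:48, 16:48] = shared + rng.normal(0.0, 0.05, (32, 32))
t, p = saliency_pair_test(map_a, map_b)
```

For a well-matched pair the test should typically be non-significant, which is the criterion the stimuli had to meet (p > 0.1).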
In total, each participant was presented with 48 dichoptic trials. In each trial, a neutral face was presented along with an emotional one, with 12 trials for each SF band (i.e., very-low, medium-low, low, and broad), so that within each SF band we presented a total of 12 neutral faces and 12 emotive faces.

2.3. Apparatus and Procedures

E-prime software and E-basic language code were used to run the experiment and collect the data. All stimuli were presented on a PC monitor (1024 × 768) with a 60 Hz refresh rate. Each of the 48 visual arrays containing the face pairs was presented once, in random order, for 30 s. For all instructions, we projected identical material to both eyes, so as to produce normal binocular viewing.
Responses were recorded using a response box with three horizontally placed keys (emotional, neutral, and mixed perception). A chin rest ensured a viewing distance of 57 cm. While viewing the rival stimuli through a mirror stereoscope [52], participants concurrently reported what they perceived using the response box. The order of the response-box buttons was counterbalanced across participants. Participants were asked to report any change in their perception as quickly as possible by pressing the response buttons with the index finger of their dominant hand. To avoid the coding errors often observed when specific emotions are categorized [53,54], we simply instructed participants to indicate whether the emotional or the neutral expression was dominant. If they saw a mixture, or if neither of the two incongruent pictures clearly appeared in the foreground, they reported “mixed”; otherwise, they reported “neutral” or “emotional”. Before starting the task, participants performed two independent trials to familiarize themselves with the procedure. At the beginning of each rival presentation, participants were asked to report the first dominant perception only when one of the rival face stimuli was perceived as exclusive. After the first dominant percept (following the first keypress), they were also required to report the transitions from one dominant image to the other (i.e., those periods in which they did not clearly perceive a dominant stimulus but a mix of monocular stimuli). Between stimulus presentations, there was a pause of 5 s, consisting of two gray frames on a white noise background. Fixation points were always displayed.

2.4. Statistical Analyses

For each participant and across each trial, we collected the first dominant perception and the emotional dominance duration.
The first dominant perception (FDP) measure was calculated for each expression and for each SF band as the average relative frequency of trials in which participants reported the emotive rather than the neutral expression as the first percept. To prevent coding errors of expressions during this type of task [53], participants were only required to indicate whether the dominant stimulus was neutral or emotive; therefore, participants did not directly report which specific facial expression (i.e., happy or fearful) was dominant. We derived the frequencies for happy and fearful emotions for each SF band based on the type of trial presented to the participant.
Following Levelt’s approach [25], for each participant and for each SF band (very-low, medium-low, low, and broad), we calculated the emotive dominance duration (EDD) for each facial expression by means of the following formula:
emotive dominance duration = ED/(ED + EN)
where ED represents the cumulative duration of the happy or fearful dominant perception and EN indicates the cumulative duration of the dominant neutral percept. It is important to note that in determining EDD, we used a trial-by-trial approach so that the effect of the duration of periods of mixed perception was not considered.
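The two dependent measures can be computed per participant roughly as follows. This is an illustrative Python sketch: the trial records and field names are ours, not the authors', but the FDP averaging and the trial-by-trial EDD = ED/(ED + EN) ratio follow the definitions above (mixed-perception periods enter neither ED nor EN).

```python
def first_dominant_percept(trials):
    """FDP: relative frequency of trials whose first exclusive percept
    was the emotional (vs. neutral) face."""
    first = [t["first_percept"] for t in trials]
    return first.count("emotional") / len(first)

def emotive_dominance_duration(trials):
    """EDD = ED / (ED + EN), computed trial by trial (so mixed periods
    never enter the denominator), then averaged across trials."""
    ratios = []
    for t in trials:
        ed, en = t["emotional_s"], t["neutral_s"]  # cumulative seconds
        ratios.append(ed / (ed + en))
    return sum(ratios) / len(ratios)

# Three hypothetical trials for one participant and one SF band.
trials = [
    {"first_percept": "emotional", "emotional_s": 12.0, "neutral_s": 8.0},
    {"first_percept": "neutral",   "emotional_s": 9.0,  "neutral_s": 9.0},
    {"first_percept": "emotional", "emotional_s": 15.0, "neutral_s": 5.0},
]
fdp = first_dominant_percept(trials)       # 2 of 3 trials
edd = emotive_dominance_duration(trials)   # mean of 0.60, 0.50, 0.75
```

Values above 0.5 on either measure indicate an advantage of the emotive over the neutral face.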
As a preliminary step, we checked whether the data were normally distributed using the Kolmogorov–Smirnov test. In order to compare emotive and neutral faces, for both FDP and EDD measures and for each SF band, we compared the mean value to the reference value of 50% by means of a one-sample Student’s t-test.
Differences between trials in which happy faces were compared to neutral faces and trials in which fearful faces were compared to neutral faces were assessed by means of a repeated measures analysis of variance (ANOVA 2 × 4) for FDP and EDD separately. In both cases, the factors taken into account were expressions (two levels: happy and fearful) and spatial frequencies (four levels: very-low, medium-low, low, and broad). Degrees of freedom for repeated measure effects were Greenhouse–Geisser corrected. Post-hoc comparisons were Bonferroni corrected.
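The per-band comparison against the 50% chance level can be sketched as follows. The per-participant values below are simulated (n = 18, matching the sample size), not the study's data; the sketch shows the Kolmogorov–Smirnov normality check followed by the one-sample t-test against 0.5, while the 2 × 4 repeated-measures ANOVA with Greenhouse–Geisser correction would require a dedicated statistics package and is omitted.

```python
import numpy as np
from scipy.stats import ttest_1samp, kstest

# Hypothetical per-participant FDP values for one SF band (n = 18).
fdp_band = np.array([0.55, 0.62, 0.48, 0.71, 0.66, 0.58, 0.60, 0.53,
                     0.64, 0.69, 0.57, 0.61, 0.52, 0.67, 0.59, 0.63,
                     0.56, 0.65])

# Kolmogorov-Smirnov check against a normal distribution with the
# sample's own mean and standard deviation.
ks = kstest(fdp_band, "norm", args=(fdp_band.mean(), fdp_band.std()))

# One-sample t-test of the band mean against the 50% reference value.
t, p = ttest_1samp(fdp_band, popmean=0.5)
```

With these simulated values the mean exceeds 0.5, so the t-test comes out significant, mirroring the kind of result reported for the medium-low, low, and broad bands.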

3. Results

3.1. First Dominant Percept

Both indices were normally distributed according to the Kolmogorov–Smirnov test (for all: p > 0.070). Figure 2 reports FDP mean values across the happy–neutral and fearful–neutral conditions for each SF band (very-low, medium-low, low, broad). Happy expressions showed an advantage over neutral expressions in terms of FDP for all SFs, with the exception of very-low SF (t(17) = 0.30, p = 0.769). Indeed, the FDP value associated with happy faces was significantly higher than 50% during medium-low (t(17) = 2.40, p = 0.028), low (t(17) = 6.68, p < 0.001), and broad (t(17) = 3.62, p = 0.002) conditions. With regard to the fearful–neutral comparison, the advantage of the fearful expression was observed only within low (t(17) = 2.26, p = 0.038) and broad (t(17) = 2.78, p = 0.013) SFs, but it was not present within very-low (t(17) = 0.80, p = 0.432) and medium-low (t(17) = −0.91, p = 0.374) SF stimuli.
The repeated-measures ANOVA revealed a main effect of expression (F(1, 17) = 14.76, p = 0.001, ηp2 = 0.465): participants more frequently reported the happy expression as the first percept in the happy/neutral trials than the fearful expression in the fearful/neutral trials. The main effect of SF was also statistically significant (F(3, 51) = 8.55, p = 0.001, ηp2 = 0.335): participants more frequently reported an emotive face as the first percept (independently of the specific expression) in low SF than in very-low (p = 0.004) and medium-low SF (p < 0.001).
These effects were further qualified by the significant interaction between expressions and SFs (F(3, 51) = 2.81, p = 0.049, ηp2 = 0.142). The FDP value was higher for the happy trials than the fearful trials within the broad SF band (t(17) = 2.13, p = 0.049), low SF band (t(17) = 3.29, p = 0.004), and medium-low SF band (t(17) = 2.32, p = 0.033), but not within the very-low SF band (t(17) = −0.53, p = 0.603).

3.2. Emotive Dominance Duration

EDD mean values measured in the happy–neutral and fearful–neutral trials as a function of each SF band (very-low, medium-low, low, broad) are reported in Figure 3. Happy expressions showed an advantage over neutral expressions in terms of EDD for all SFs: the mean values associated with happy faces were significantly higher than 50% during very-low (t(17) = 2.87, p = 0.011), medium-low (t(17) = 3.47, p = 0.003), low (t(17) = 6.07, p < 0.001), and broad (t(17) = 4.00, p = 0.001) conditions. In the fearful–neutral trials, the advantage of the fearful expression was observed only within low (t(17) = 2.23, p = 0.040) and broad (t(17) = 2.90, p = 0.010) SFs. No differences were observed with the very-low (t(17) = 1.94, p = 0.069) and medium-low (t(17) = −1.93, p = 0.070) SF stimuli.
The repeated-measures ANOVA revealed main effects of expression (F(1, 17) = 15.80, p = 0.001, ηp2 = 0.482) and of SF (F(3, 51) = 3.83, p = 0.037, ηp2 = 0.184). The dominance duration of happiness in the happy–neutral trials was longer than that of the fearful expression in the fearful–neutral trials. EDD mean values (independently of the specific expression) were higher during low than during medium-low SF stimuli (p < 0.001). The interaction effect was also significant (F(3, 51) = 3.20, p = 0.031, ηp2 = 0.158). The EDD value was higher for the happy trials than the fearful trials within the broad SF band (t(17) = 2.15, p = 0.046), low SF band (t(17) = 3.02, p = 0.008), and medium-low SF band (t(17) = 3.71, p = 0.002), but not within the very-low SF band (t(17) = −0.05, p = 0.958).

4. Discussion

The present study was designed to investigate both the bottom-up effects of spatial frequency manipulations and the top-down effects of emotional content on the perception of faces during binocular rivalry. Namely, spatially filtered and unfiltered happy and fearful faces, both of them particularly salient to human vision, were presented dichoptically along with neutral faces. Results provide evidence of an emotional bias that is more pronounced for happy faces (the happy face advantage).
We observed an “emotive advantage”: emotional faces (happy and fearful) were better detected (as measured by first perception) than neutral faces despite being filtered at increasing levels of SF. In particular, the happy over neutral face advantage was already observed from medium-low SF levels, whereas the fearful over neutral face advantage was only observed from low SF levels. Therefore, we confirmed the previous finding that a more meaningful stimulus (i.e., an emotional face) has perceptual predominance over a less meaningful stimulus (i.e., a neutral face) [19]. The first dominant percept is considered an index of the perceptual strength of a facial expression during binocular rivalry, which is independent of habituation or inaccuracy over time. This suggests that the visual system is sensitive to stimuli that signal emotional information, and our data are consistent with theories demonstrating the detection and categorization of facial expressions (emotional vs. neutral) as being performed at a very early processing stage [54]. In the presence of coarse information, the data herein demonstrated a prioritization of happy faces over neutral ones, relative to fearful faces, in the competition for awareness of the emotional valence of the stimulus. Such observations support the notion that coarse, rapid, magnocellular input to the brain is sufficient for the evaluation and subsequent detection of emotional stimuli. Our results are consistent with the hypothesis of distinct processing for happiness/positive as compared to fear/negative valence, i.e., the happy bias effect [55]. This effect emerged even when a coarse representation of the stimuli was presented, and it was not due to distinctive facial features of the smiling faces, such as the salient marker of a mouth showing teeth.
Indeed, we gradually reduced the amount of spatial information, and only when a large filter (amplitude of 46 dB) was applied, vastly degrading the stimuli, were happiness and fear perceived as dominant with the same frequency. The very-low SF band used in this study could therefore be considered a post-hoc putative control condition.
The ability to select is crucial when our brain needs to evaluate internal or external stimuli and direct its early attention. These automatic processes are termed “silent” since they occur outside conscious awareness, and they are related to the detection, analysis, and identification of stimuli [56]. Binocular rivalry allowed us to evaluate this sensory gating effectively. The emotional valence of happy faces, particularly relevant to human vision, is probably evaluated without awareness [13] and preferentially promoted to conscious perception. This happy advantage could be related to the existence of a specialized and innate mechanism that promotes positive stimuli to awareness, as well as to a learned mechanism that improves our sensory processing of positive stimuli that cause pleasure and well-being [9]. For example, increased positive emotion promotes creative thinking, social connection with others [57], emotional resilience in the face of stressors [58], and better physical health [59]. Although some authors have embraced the hypothesis of a specific competence for happy expressions that triggers the happy bias effect [60], the existence of an efficient attentional mechanism with an important adaptive function cannot be excluded.
The present data could be framed within theories that posit an emotive processing gate [61], which interrupts incoming negative information and promotes positive information, which is then processed in the subsequent perceptual and recognition stages. Thus, it could be the case that preconscious visual processes selectively promote happy faces, which resemble conspecific stimuli, to conscious perception, presumably because of their social relevance. However, since the top-down effects found in binocular rivalry (and similar techniques) have been attributed to perceptual properties [62,63,64], we ruled out the potential confound of bottom-up saliency with the saliency map model of Itti and Koch [38,39], as was done in previous research [42,43,44,45]. Consistently, Bannerman et al. [65] also found a general top-down effect of emotional expression on face perception in a binocular rivalry paradigm, excluding the influence of low-level properties on the basis of a control experiment with inverted faces.
However, although our results suggest a top-down effect, we cannot exclude with certainty that expression recognition performance was affected by the spatial filters used here, and this effect could be related to the magnitude of the expression effects. As a limitation, we are aware that a control of the magnitude spectra of the images would have been necessary to ascertain whether the magnitude spectra alone were sufficient for decoding the emotional expression. It would be crucial to understand whether “bottom-up saliency” remains equal when visual sensitivity can still differ between conditions, even if no local saliency differences are found. Future studies should address this issue by taking the average magnitude spectrum of the images in one filtering condition and using it to recreate the individual images with an inverse Fast Fourier Transform, applying each image’s unique phase spectrum to the average magnitude spectrum. We expect that an experiment applying this technique to control the images’ spatial frequencies would replicate the effects we observed.
Despite these limitations, this study is an important step, opening interesting avenues for future research. For example, it remains an interesting question whether amygdala activity in response to threat stimuli (driven by either low-level or affective properties) plays a functional role in promoting or suppressing their detection. It is important to note that our results were ultimately based on keypresses (i.e., participants self-reported their percepts) and, thus, we cannot fully eliminate the possibility of biased self-reporting.
Future studies should incorporate control conditions (e.g., using ERPs) to replicate and corroborate the present findings. To this end, we have recently investigated the electrophysiological coding of all the basic facial expressions plus neutral ones using a repetition suppression paradigm to assess emotion modulations on the early N170 face-sensitive ERP component [66]. While we observed occipito-temporal responses for fear on the N170 time window, other researchers have also shown greater N170 modulations for other facial expressions, such as anger and happiness (for a review, see references [67,68,69]). Differences in the experimental designs have been proposed to explain different results [66]. Indeed, the advantage for processing happy faces has been mostly demonstrated in long-term memory tasks [69]. Experiments in the context of binocular rivalry could help clarify this debate.

5. Conclusions

The main finding of this study is that emotional meaning modulates binocular rivalry: when emotional and neutral monocular faces were presented dichoptically, emotional faces, and in particular happy faces, were detected more frequently than neutral ones. Therefore, we support the view that emotion routinely alters our perception across many levels of visual processing, starting from the very early stages concerning decisional processes. Importantly, our data suggest a happy advantage that persists even when low-level image properties, a driving force behind binocular rivalry, are manipulated. Keeping the contrast constant, even when we limited the spatial frequency range, the duration of perceptual predominance of happy faces did not decrease. One important explanation for this finding may be that it is vital for an organism to attend to information that is of high importance for behavioral goals, because it will assist in guiding both actions and thoughts [70,71]. Thus, emotional facial expressions, which are known to convey a high adaptive value in signaling crucial social information, undergo preferential perceptual processing [70,72,73]. Experiments on the neural correlates of consciousness in the context of binocular rivalry could help clarify the debate as to whether positive and negative content modulates conscious perception differently [74].

Author Contributions

Conceptualization, M.T.T., S.L., and M.P.V.; methodology, M.T.T., F.G. (Fabio Giovannelli), and G.G. (Gioele Gavazzi); formal analysis, G.G. (Gioele Gavazzi) and G.G. (Giorgio Gronchi); investigation, M.T.T. and S.L.; data curation, M.T.T., G.G. (Gioele Gavazzi), F.G. (Fabio Giovannelli), and S.L.; writing—original draft preparation, M.T.T., F.G. (Fiorenza Giganti), and M.P.V.; writing—review and editing, F.G. (Fiorenza Giganti), M.T.T., G.G. (Giorgio Gronchi), A.P., F.G. (Fabio Giovannelli), and M.P.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Lang, P.J.; Bradley, M.M.; Cuthbert, B.N. Motivated attention: Affect, activation, and action. In Attention and Orienting; Lang, P.J., Simons, R.F., Balaban, M., Eds.; Erlbaum: Mahwah, NJ, USA, 1997; pp. 97–135.
2. Fox, E.; Russo, R.; Georgiou, G.A. Anxiety modulates the degree of attentive resources required to process emotional faces. Cogn. Affect. Behav. Neurosci. 2005, 5, 396–404.
3. Milders, M.; Sahraie, A.; Logan, S.; Donnellon, N. Awareness of faces is modulated by their emotional meaning. Emotion 2006, 6, 10–17.
4. Calder, A.J.; Young, A.W. Understanding the recognition of facial identity and facial expression. Nat. Rev. Neurosci. 2005, 6, 641–651.
5. Pessoa, L. On the relationship between emotion and cognition. Nat. Rev. Neurosci. 2008, 9, 148–158.
6. Williams, M.A.; Mattingley, J.B. Unconscious perception of non-threatening facial emotion in parietal extinction. Exp. Brain Res. 2004, 154, 403–406.
7. Amting, J.M.; Greening, S.G.; Mitchell, D.G. Multiple mechanisms of consciousness: The neural correlates of emotional awareness. J. Neurosci. 2010, 30, 10039–10047.
8. Ekman, P. An argument for basic emotions. Cogn. Emot. 1992, 6, 169–200.
9. Seligman, M.E.P.; Csikszentmihalyi, M. Positive psychology: An introduction. Am. Psychol. 2000, 55, 5–14.
10. Sylvers, P.D.; Brennan, P.A.; Lilienfeld, S.O. Psychopathic traits and preattentive threat processing in children: A novel test of the fearlessness hypothesis. Psychol. Sci. 2011, 22, 1280–1287.
11. Yang, E.; Zald, D.H.; Blake, R. Fearful expressions gain preferential access to awareness during continuous flash suppression. Emotion 2007, 7, 882–886.
12. Ritchie, K.L.; Bannerman, R.L.; Sahraie, A. The effect of fear in the periphery in binocular rivalry. Perception 2012, 41, 1395–1401.
13. Balconi, M. Consciousness and emotion: Attentive vs. pre-attentive elaboration of face processing. In Encyclopedia of the Sciences of Learning; Seel, N.M., Ed.; Springer: Boston, MA, USA, 2012.
14. Dolan, R.J. Emotion, cognition, and behavior. Science 2002, 298, 1191–1194.
15. Oliver, L.D.; Mao, A.; Mitchell, D.G. “Blindsight” and subjective awareness of fearful faces: Inversion reverses the deficits in fear perception associated with core psychopathic traits. Cogn. Emot. 2014, 29, 1256–1277.
16. Blake, R.; Logothetis, N.K. Visual competition. Nat. Rev. Neurosci. 2002, 3, 13–21.
17. Blake, R. A neural theory of binocular rivalry. Psychol. Rev. 1989, 96, 145–167.
18. Wilson, H.R.; Blake, R.; Lee, S.H. Dynamics of travelling waves in visual perception. Nature 2001, 412, 907–910.
19. Alpers, G.W.; Gerdes, A.B.M. Here is looking at you: Emotional faces predominate in binocular rivalry. Emotion 2007, 7, 495–506.
20. Yu, K.; Blake, R. Do recognizable figures enjoy an advantage in binocular rivalry? J. Exp. Psychol. Hum. Percept. Perform. 1992, 18, 1158–1173.
21. Mueller, T.J.; Blake, R. A fresh look at the temporal dynamics of binocular rivalry. Biol. Cybern. 1989, 61, 223–232.
22. Breese, B.B. On inhibition. Psychol. Monogr. 1909, 3, 1–65.
23. Fahle, M. Binocular rivalry: Suppression depends on orientation and spatial frequency. Vis. Res. 1982, 22, 787–800.
24. Carter, O.L.; Campbell, T.G.; Liu, G.B.; Wallis, G. Contradictory influence of context on predominance during binocular rivalry. Clin. Exp. Optom. 2004, 87, 153–162.
25. Levelt, W. On Binocular Rivalry; Royal Van Gorcum: Assen, The Netherlands, 1965.
26. Engel, E. The role of content in binocular resolution. Am. J. Psychol. 1956, 69, 87–91.
27. Alais, D.; Blake, R. Interactions between global motion and local binocular rivalry. Vis. Res. 1998, 38, 637–644.
28. Sobel, K.V.; Blake, R. How context influences predominance during binocular rivalry. Perception 2002, 31, 813–824.
29. Van Ee, R.; Van Dam, L.C.J.; Brouwer, G.J. Voluntary control and the dynamics of perceptual bi-stability. Vis. Res. 2005, 45, 41–55.
30. Vuilleumier, P.; Armony, J.L.; Driver, J.; Dolan, R.J. Distinct spatial frequency sensitivities for processing faces and emotional expressions. Nat. Neurosci. 2003, 6, 624–631.
31. Pourtois, G.; de Gelder, B.; Bol, A.; Crommelinck, M. Perception of facial expressions and voices and of their combination in the human brain. Cortex 2005, 41, 49–59.
32. Srinivasan, N.; Hanif, A. Global-happy and local-sad: Perceptual processing affects emotion identification. Cogn. Emot. 2010, 24, 1062–1069.
33. Badcock, J.C.; Whitworth, F.A.; Badcock, D.R.; Lovegrove, W.J. Low-frequency filtering and the processing of local-global stimuli. Perception 1990, 19, 617–629.
34. Shulman, G.L.; Wilson, J. Spatial frequency and selective attention to local and global information. Perception 1987, 16, 89–101.
35. Srinivasan, N.; Gupta, R. Global-local processing affects recognition of distractor emotional faces. Q. J. Exp. Psychol. 2011, 64, 425–433.
36. McKone, E.; Davies, A.A.; Darke, H.; Crookes, K.; Wickramariyaratne, T.; Zappia, S.; Fiorentini, C.; Favelle, S.; Broughton, M.; Fernando, D. Importance of the inverted control in measuring holistic face processing with the composite effect and part-whole effect. Front. Psychol. 2013, 4, 33.
37. Persike, M.; Meinhardt-Injac, B.; Meinhardt, G. The face inversion effect in opponent-stimulus rivalry. Front. Hum. Neurosci. 2014, 8, 295.
38. Itti, L.; Koch, C. A saliency-based search mechanism for overt and covert shifts of visual attention. Vis. Res. 2000, 40, 1489–1506.
39. Itti, L.; Koch, C. Feature combination strategies for saliency-based visual attention systems. J. Electron. Imaging 2001, 10, 161–169.
40. Itti, L.; Koch, C.; Niebur, E. A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 1254–1259.
41. Parkhurst, D.; Law, K.; Niebur, E. Modeling the role of salience in the allocation of overt visual attention. Vis. Res. 2002, 42, 107–123.
42. Caudek, C.; Ceccarini, F.; Sica, C. Posterror speeding after threat-detection failure. J. Exp. Psychol. Hum. Percept. Perform. 2015, 41, 324–341.
43. Humphrey, K.; Underwood, G.; Lambert, T. Salience of the lambs: A test of the saliency map hypothesis with pictures of emotive objects. J. Vis. 2012, 12, 1–15.
44. Calvo, M.G.; Nummenmaa, L. Detection of emotional faces: Salient physical features guide effective visual search. J. Exp. Psychol. Gen. 2008, 137, 471–494.
45. Righi, S.; Gronchi, G.; Pierguidi, G.; Messina, S.; Viggiano, M.P. Aesthetic shapes our perception of everyday objects: An ERP study. New Ideas Psychol. 2017, 47, 103–112.
46. Demaree, H.; Everhart, D.; Youngstrom, E.; Harrison, D. Brain lateralization of emotional processing: Historical roots and a future incorporating “dominance”. Behav. Cogn. Neurosci. Rev. 2005, 4, 3–20.
47. Langner, O.; Dotsch, R.; Bijlstra, G.; Wigboldus, D.H.J.; Hawk, S.T.; van Knippenberg, A. Presentation and validation of the Radboud Faces Database. Cogn. Emot. 2010, 24, 1377–1388.
48. Vannucci, M.; Viggiano, M.P.; Argenti, F. Identification of spatially filtered stimuli as function of the semantic category. Cogn. Brain Res. 2001, 12, 475–478.
49. Veser, S.; O’Shea, R.P.; Schröger, E.; Trujillo-Barreto, N.J.; Roeber, U. Early correlates of visual awareness following orientation and colour rivalry. Vis. Res. 2008, 48, 2359–2369.
50. Purcell, D.G.; Stewart, A.L. Still another confounded face in the crowd. Atten. Percept. Psychophys. 2010, 72, 2115–2127.
51. Walther, D.; Koch, C. Modeling attention to salient proto-objects. Neural Netw. 2006, 19, 1395–1407.
52. Carmel, D.; Arcaro, M.; Kastner, S.; Hasson, U. How to create and use binocular rivalry. J. Vis. Exp. 2010, 45, e2030.
53. Adolphs, R. Recognizing emotion from facial expressions: Psychological and neurological mechanisms. Behav. Cogn. Neurosci. Rev. 2002, 1, 21–62.
54. Hess, U.; Blairy, S.; Kleck, R.E. The intensity of emotional facial expressions and decoding accuracy. J. Nonverbal Behav. 1997, 21, 241–257.
55. Cacioppo, J.T.; Gardner, W.L. Emotion. Annu. Rev. Psychol. 1999, 50, 191–214.
56. Dawson, M.E.; Schell, A.M.; Swerdlow, N.R.; Filion, D.L. Cognitive, clinical, and neuropsychological implications of startle modulation. In Attention and Orienting: Sensory and Motivational Processes; Lang, P., Simons, R., Balaban, M., Eds.; Lawrence Erlbaum Associates: Mahwah, NJ, USA, 1997; pp. 257–279.
57. Fredrickson, B.L. What good are positive emotions? Rev. Gen. Psychol. 1998, 2, 300–319.
58. Folkman, S.; Moskowitz, J.T. Positive affect and the other side of coping. Am. Psychol. 2000, 55, 647–654.
59. Tugade, M.M.; Fredrickson, B.L.; Barrett, L.F. Psychological resilience and positive emotional granularity: Examining the benefits of positive emotions on coping and health. J. Pers. 2004, 72, 1161–1190.
60. Calvo, M.G.; Gutiérrez-García, A.; Fernández-Martín, A.; Nummenmaa, L. Recognition of facial expressions of emotion is related to their frequency in everyday life. J. Nonverbal Behav. 2014, 38, 549–567.
61. Leppänen, J.M.; Hietanen, J.K. Positive facial expressions are recognized faster than negative facial expressions, but why? Psychol. Res. 2004, 69, 22–29.
62. Hedger, N.; Gray, K.L.H.; Garner, M.; Adams, W.J. Are visual threats prioritized without awareness? A critical review and meta-analysis involving 3 behavioral paradigms and 2696 observers. Psychol. Bull. 2016, 142, 934–968.
63. Moors, P.; Gayet, S.; Hedger, N.; Stein, T.; Sterzer, P.; van Ee, R.; Wagemans, J.; Hesselmann, G. Three criteria for evaluating high-level processing in continuous flash suppression. Trends Cogn. Sci. 2019, 23, 267–269.
64. Stein, T.; Sterzer, P. Not just another face in the crowd: Detecting emotional schematic faces during continuous flash suppression. Emotion 2012, 12, 988–996.
65. Bannerman, R.L.; Milders, M.; De Gelder, B.; Sahraie, A. Influence of emotional facial expressions on binocular rivalry. Ophthalmic Physiol. Opt. 2008, 28, 317–326.
66. Turano, M.T.; Lao, J.; Richoz, A.R.; De Lissa, P.; Degosciu, S.B.A.; Viggiano, M.P.; Caldara, R. Fear boosts the early neural coding of faces. Soc. Cogn. Affect. Neurosci. 2017, 12, 1959–1971.
67. Williams, L.M.; Palmer, D.; Liddell, B.; Song, L.; Gordon, E. The when and where of perceiving signals of threat versus non-threat. Neuroimage 2006, 31, 458–467.
68. Almeida, P.R.; Ferreira-Santos, F.; Chaves, P.L.; Paiva, T.O.; Barbosa, F.; Marques-Teixeira, J. Perceived arousal of facial expressions of emotion modulates the N170, regardless of emotional category: Time domain and time–frequency dynamics. Int. J. Psychophysiol. 2016, 99, 48–56.
69. Hinojosa, J.A.; Mercado, F.; Carretie, L. N170 sensitivity to facial expression: A meta-analysis. Neurosci. Biobehav. Rev. 2015, 55, 498–509.
70. Fenske, M.J.; Eastwood, J.D. Modulation of focused attention by faces expressing emotion: Evidence from flanker tasks. Emotion 2003, 3, 327–343.
71. Pierguidi, L.; Righi, S.; Gronchi, G.; Marzi, T.; Caharel, S.; Giovannelli, F.; Viggiano, M.P. Emotional contexts modulate intentional memory suppression of neutral faces: Insights from ERPs. Int. J. Psychophysiol. 2016, 106, 1–13.
72. Gronchi, G.; Righi, S.; Pierguidi, L.; Giovannelli, F.; Murasecco, I.; Viggiano, M.P. Automatic and controlled attentional orienting in the elderly: A dual-process view of the positivity effect. Acta Psychol. 2018, 185, 229–234.
73. Righi, S.; Gronchi, G.; Marzi, T.; Rebai, M.; Viggiano, M.P. You are that smiling guy I met at the party! Socially positive signals foster memory for identities and contexts. Acta Psychol. 2015, 159, 1–7.
74. Hernández-Lorca, M.; Sandberg, K.; Kessel, D.; Fernández-Folgueiras, U.; Overgaard, M.; Carretié, L. Binocular rivalry and emotion: Implications for neural correlates of consciousness and emotional biases in conscious perception. Cortex 2019, 120, 539–555.
Figure 1. The binocular rivalry paradigm. (A) Happy and fearful faces were presented dichoptically along with neutral faces through a mirror stereoscope. (B) Participants were asked to report any change in their perception (emotional, neutral, or mixed). (C) Examples of filtered happy and fearful faces.
Figure 2. Average frequency of first dominant perception as function of spatial frequency bands with 95% confidence interval. The dotted line represents the reference proportion of 50%.
Figure 3. Average emotive dominance duration as a function of spatial frequency bands with 95% confidence interval. The dotted line represents the reference proportion of 50%.

Share and Cite

MDPI and ACS Style

Turano, M.T.; Giganti, F.; Gavazzi, G.; Lamberto, S.; Gronchi, G.; Giovannelli, F.; Peru, A.; Viggiano, M.P. Spatially Filtered Emotional Faces Dominate during Binocular Rivalry. Brain Sci. 2020, 10, 998. https://0-doi-org.brum.beds.ac.uk/10.3390/brainsci10120998


