Quantification of anticipation of excitement with a three-axial model of emotion with EEG.
OBJECTIVES: Multiple facets of human emotion arise from diverse and sparse neural mechanisms. Among the many existing models of emotion, the two-dimensional circumplex model is an influential theory. The circumplex model captures variable aspects of emotion; however, such momentary expressions of one’s internal mental state still lack a third dimension: time. Here, we report an exploratory attempt to build a three-axis model of human emotion that captures our sense of anticipatory excitement, “Waku-Waku” (in Japanese), in which people predictively code upcoming emotional events.
APPROACH: Electroencephalography (EEG) data were recorded from 28 young adult participants while they mentalized about upcoming emotional pictures. Three auditory tones served as predictive cues indicating the likely valence of an upcoming picture: positive, negative, or unknown. During the task, participants judged the emotional valence of each picture as it appeared; immediately after the experiment, they rated their subjective experiences of valence, arousal, expectation, and Waku-Waku. The EEG data were then analyzed to identify the neural signatures contributing to each of the three axes.
MAIN RESULTS: A three-axis model was built to quantify Waku-Waku. As expected, this model revealed a considerable contribution of the third dimension beyond the classical two-dimensional model. Distinctive EEG components were identified. Furthermore, a novel brain-emotion interface was proposed and validated within the scope of the study’s limitations.
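The comparison above can be illustrated with a toy sketch (not the authors’ actual analysis): on synthetic ratings in which Waku-Waku depends partly on an expectation axis, an ordinary least-squares model with three predictors (valence, arousal, expectation) explains more variance than the two classical circumplex axes alone. All variable names and coefficients below are illustrative assumptions.

```python
import numpy as np

# Illustrative only: synthetic subjective-rating data, not the study's dataset.
rng = np.random.default_rng(0)
n = 200

valence = rng.normal(size=n)
arousal = rng.normal(size=n)
expectation = rng.normal(size=n)

# Hypothetical Waku-Waku score driven by all three axes plus noise.
waku = 0.4 * valence + 0.3 * arousal + 0.6 * expectation + rng.normal(scale=0.5, size=n)

def r_squared(X, y):
    """Fit ordinary least squares with an intercept; return R^2."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_2d = r_squared(np.column_stack([valence, arousal]), waku)
r2_3d = r_squared(np.column_stack([valence, arousal, expectation]), waku)
print(f"2D model R^2 = {r2_2d:.2f}, 3D model R^2 = {r2_3d:.2f}")
```

Because the two-axis model is nested within the three-axis model, the third predictor cannot reduce in-sample R²; the point of the sketch is that when the outcome genuinely loads on expectation, the gain is substantial rather than marginal.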
SIGNIFICANCE: The proposed notion may shed new light on theories of emotion and support multiple dimensions of emotion. By introducing the cognitive domain to brain-computer interfacing, we propose a novel brain-emotion interface. Limitations of the study and potential applications of this interface are discussed.
PMID: 32416601 [PubMed – as supplied by publisher]
J Neural Eng. 2020 May 16;:
Authors: Machizawa M, Lisi G, Kanayama N, Mizuochi R, Makita K, Sasaoka T, Yamawaki S