Neural evidence for the prediction of animacy features during language comprehension: Evidence from MEG and EEG Representational Similarity Analysis

2019 ◽  
Author(s):  
Lin Wang ◽  
Edward Wlotko ◽  
Edward Alexander ◽  
Lotte Schoot ◽  
Minjae Kim ◽  
...  

Abstract
It has been proposed that people can generate probabilistic predictions at multiple levels of representation during language comprehension. We used magnetoencephalography (MEG) and electroencephalography (EEG), in combination with representational similarity analysis (RSA), to seek neural evidence for the prediction of animacy features. In two studies, MEG and EEG activity was measured as human participants (both sexes) read three-sentence scenarios. Verbs in the final sentences constrained for either animate or inanimate semantic features of upcoming nouns, and the broader discourse context constrained for either a specific noun or for multiple nouns belonging to the same animacy category. We quantified the similarity between spatial patterns of brain activity following the verbs until just before the presentation of the nouns. The MEG and EEG datasets revealed converging evidence: the similarity between spatial patterns of neural activity was greater following animate-constraining verbs than following inanimate-constraining verbs. This effect could not be explained by lexical-semantic processing of the verbs themselves. We therefore suggest that it reflected the inherent difference in the semantic similarity structure of the predicted animate and inanimate nouns. Moreover, the effect was present regardless of whether a specific word could be predicted, providing strong evidence for the prediction of coarse-grained semantic features that goes beyond the prediction of individual words.

Significance Statement
Language inputs unfold very quickly during real-time communication. By predicting ahead, we can give our brains a “head start”, so that language comprehension is faster and more efficient. While most contexts do not constrain strongly for a specific word, they do allow us to predict some upcoming information. For example, following the context “they cautioned the…”, we can predict that the next word will be animate rather than inanimate (we can caution a person, but not an object). Here we used EEG and MEG to show that the brain can use these contextual constraints to predict the animacy of upcoming words during sentence comprehension, and that these predictions are associated with specific spatial patterns of neural activity.
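The core measure in this study, similarity between spatial patterns of sensor-level activity in the interval between verb and noun, is straightforward to sketch. Below is a minimal, hypothetical NumPy illustration of the animate-minus-inanimate similarity contrast; the array shapes, labels, and toy data are our assumptions, not the authors' pipeline:

```python
# Minimal sketch (not the authors' code) of the spatial-pattern RSA contrast:
# correlate trial pairs' sensor patterns at each time point, average over
# time, then compare animate- vs. inanimate-constraining conditions.
import numpy as np

def mean_spatial_similarity(epochs: np.ndarray) -> float:
    """epochs: (n_trials, n_sensors, n_times) in the verb-to-noun interval."""
    n_trials, _, n_times = epochs.shape
    sims = []
    for t in range(n_times):
        r = np.corrcoef(epochs[:, :, t])          # trial-by-trial correlations
        iu = np.triu_indices(n_trials, k=1)       # unique trial pairs only
        sims.append(r[iu].mean())
    return float(np.mean(sims))

rng = np.random.default_rng(0)
epochs = rng.standard_normal((40, 64, 100))       # toy MEG/EEG epochs
is_animate = np.arange(40) < 20                   # toy condition labels

effect = (mean_spatial_similarity(epochs[is_animate])
          - mean_spatial_similarity(epochs[~is_animate]))
print(f"animate minus inanimate spatial similarity: {effect:+.4f}")
```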

eLife ◽  
2018 ◽  
Vol 7 ◽  
Author(s):  
Lin Wang ◽  
Gina Kuperberg ◽  
Ole Jensen

We used magnetoencephalography (MEG) in combination with representational similarity analysis (RSA) to probe neural activity associated with distinct, item-specific lexico-semantic predictions during language comprehension. MEG activity was measured as participants read highly constraining sentences in which the final words could be predicted. Before the onset of the predicted words, both the spatial and temporal patterns of brain activity were more similar when the same words were predicted than when different words were predicted. The temporal patterns localized to the left inferior and medial temporal lobe. These findings provide evidence that unique spatial and temporal patterns of neural activity are associated with item-specific lexico-semantic predictions. We suggest that the unique spatial patterns reflected the prediction of spatially distributed semantic features associated with the predicted word, and that the left inferior/medial temporal lobe played a role in temporally ‘binding’ these features, giving rise to unique lexico-semantic predictions.
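A similar logic applies to the temporal-pattern analysis: time courses should correlate more strongly across trial pairs that predict the same word than across pairs that predict different words. A hedged sketch, with hypothetical inputs (single-region time courses and predicted-word labels):

```python
# Sketch of the "same predicted word vs. different predicted word" temporal
# similarity contrast; inputs are placeholders, not the study's data.
import numpy as np

def temporal_rsa_contrast(timecourses: np.ndarray, predicted: np.ndarray) -> float:
    """timecourses: (n_trials, n_times); predicted: word label per trial."""
    r = np.corrcoef(timecourses)                  # pairwise temporal similarity
    same = predicted[:, None] == predicted[None, :]
    iu = np.triu_indices(len(predicted), k=1)     # unique trial pairs only
    return float(r[iu][same[iu]].mean() - r[iu][~same[iu]].mean())

rng = np.random.default_rng(1)
timecourses = rng.standard_normal((30, 200))      # toy single-region data
predicted = rng.choice(["cake", "guitar", "ladder"], size=30)
print(f"same minus different similarity: "
      f"{temporal_rsa_contrast(timecourses, predicted):+.4f}")
```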


2021 ◽  
Vol 12 ◽  
Author(s):  
Nasim Boustani ◽  
Reza Pishghadam ◽  
Shaghayegh Shayesteh

Multisensory input aids language comprehension; however, it remains unclear to what extent different combinations of senses affect the P200 component and the attention-related cognitive processing associated with L2 sentence comprehension, as well as the later N400 component. To this end, we presented a list of unfamiliar words to 18 subjects with multisensory input engaging either three senses (i.e., exvolvement) or five senses (i.e., involvement). Subsequently, the words were embedded in an acceptability judgment task with 360 pragmatically correct and incorrect sentences. The task, along with ERP recording, was conducted after a one-week consolidation period to track any behavioral and electrophysiological differences in the retrieval of information encoded with the two sense combinations. Behaviorally, the combination of five senses led to more accurate and quicker responses. Electrophysiologically, the combination of five senses induced a larger P200 amplitude than the three-sense combination. The implication is that as the sensory weight of the input increases, vocabulary retrieval is facilitated and more attention is directed to the overall comprehension of L2 sentences, leading to more accurate and quicker responses. This finding was not, however, reflected in the neural activity of the N400 component.
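For readers unfamiliar with how such component effects are quantified: P200 and N400 effects are conventionally measured as mean amplitudes within a time window over selected channels, compared across conditions. A minimal sketch with placeholder windows and toy data; the study's exact parameters are not reproduced here:

```python
# Hypothetical illustration of ERP mean-amplitude measurement; windows,
# channel set, and data are placeholders, not the study's parameters.
import numpy as np

def mean_amplitude(epochs: np.ndarray, times: np.ndarray, window: tuple) -> np.ndarray:
    """epochs: (n_trials, n_channels, n_times); returns one value per trial."""
    mask = (times >= window[0]) & (times <= window[1])
    return epochs[:, :, mask].mean(axis=(1, 2))

rng = np.random.default_rng(2)
times = np.linspace(-0.2, 1.0, 601)               # seconds, toy 500 Hz grid
five_sense = rng.standard_normal((30, 32, 601))   # involvement condition
three_sense = rng.standard_normal((30, 32, 601))  # exvolvement condition

for name, window in [("P200", (0.15, 0.25)), ("N400", (0.30, 0.50))]:
    diff = (mean_amplitude(five_sense, times, window).mean()
            - mean_amplitude(three_sense, times, window).mean())
    print(f"{name} difference (five minus three senses): {diff:+.3f} a.u.")
```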


NeuroImage ◽  
2020 ◽  
Vol 209 ◽  
pp. 116499 ◽  
Author(s):  
Roger E. Beaty ◽  
Qunlin Chen ◽  
Alexander P. Christensen ◽  
Yoed N. Kenett ◽  
Paul J. Silvia ◽  
...  

2016 ◽  
Author(s):  
Jörn Diedrichsen ◽  
Nikolaus Kriegeskorte

Abstract
Representational models specify how activity patterns in populations of neurons (or, more generally, in multivariate brain-activity measurements) relate to sensory stimuli, motor responses, or cognitive processes. In an experimental context, representational models can be defined as hypotheses about the distribution of activity profiles across experimental conditions. Currently, three different methods are being used to test such hypotheses: encoding analysis, pattern component modeling (PCM), and representational similarity analysis (RSA). Here we develop a common mathematical framework for understanding the relationship of these three methods, which share one core commonality: all three evaluate the second moment of the distribution of activity profiles, which determines the representational geometry, and thus how well any feature can be decoded from population activity with any readout mechanism capable of a linear transform. Using simulated data for three different experimental designs, we compare the power of the methods to adjudicate between competing representational models. PCM implements a likelihood-ratio test and therefore provides the most powerful test if its assumptions hold. However, the other two approaches, when conducted appropriately, can perform similarly. In encoding analysis, the linear model needs to be appropriately regularized, which effectively imposes a prior on the activity profiles. With such a prior, an encoding model specifies a well-defined distribution of activity profiles. In RSA, the unequal variances and statistical dependencies of the dissimilarity estimates need to be taken into account to reach near-optimal power in inference. The three methods render different aspects of the information explicit (e.g., single-response tuning in encoding analysis and population-response representational dissimilarity in RSA) and have specific advantages in terms of computational demands, ease of use, and extensibility. The three methods are properly construed as complementary components of a single data-analytical toolkit for understanding neural representations on the basis of multivariate brain-activity data.

Author Summary
Modern neuroscience can measure the activity of many neurons or the local blood oxygenation of many brain locations simultaneously. As the number of simultaneous measurements grows, we can better investigate how the brain represents and transforms information to enable perception, cognition, and behavior. Recent studies go beyond showing that a brain region is involved in some function; they use representational models that specify how different perceptions, cognitions, and actions are encoded in brain-activity patterns. In this paper, we provide a general mathematical framework for such representational models, which clarifies the relationships between three different methods that are currently used in the neuroscience community. All three methods evaluate the same core feature of the data, but each has distinct advantages and disadvantages. Pattern component modeling (PCM) implements the most powerful test between models and is analytically tractable and expandable. Representational similarity analysis (RSA) provides a highly useful summary statistic (the dissimilarity) and enables model comparison with weaker distributional assumptions. Finally, encoding models characterize individual responses and enable the study of their layout across cortex. We argue that these methods should be considered components of a larger toolkit for testing hypotheses about the way the brain represents information.
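The "core commonality" named above, the second moment of the activity profiles, has a compact form: for a conditions-by-channels pattern matrix U, G = UUᵀ/P, and RSA's squared Euclidean dissimilarities follow as d_ij = G_ii + G_jj - 2G_ij. A small NumPy sketch of this relationship (toy data; variable names are ours, not the authors'):

```python
# Worked example: the second moment matrix G of activity profiles, the
# quantity evaluated (in different ways) by encoding analysis, PCM, and RSA.
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(3)
U = rng.standard_normal((5, 200))        # conditions x measurement channels

G = U @ U.T / U.shape[1]                 # second moment of activity profiles

# RSA's squared Euclidean dissimilarities are a function of G alone:
# d_ij = G_ii + G_jj - 2 * G_ij
d = np.diag(G)[:, None] + np.diag(G)[None, :] - 2 * G

# Sanity check against distances computed directly from the patterns.
assert np.allclose(d, squareform(pdist(U, "sqeuclidean")) / U.shape[1])
print(np.round(d, 3))
```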


2018 ◽  
Author(s):  
Ming Bo Cai ◽  
Nicolas W. Schuck ◽  
Jonathan W. Pillow ◽  
Yael Niv

Abstract
The activity of neural populations in the brains of humans and animals can exhibit vastly different spatial patterns when faced with different tasks or environmental stimuli. The degree of similarity between these neural activity patterns in response to different events is used to characterize the representational structure of cognitive states in a neural population. The dominant methods of investigating this similarity structure first estimate neural activity patterns from noisy neural imaging data using linear regression, and then examine the similarity between the estimated patterns. Here, we show that this approach introduces spurious bias structure in the resulting similarity matrix, in particular when applied to fMRI data. This problem is especially severe when the signal-to-noise ratio is low and in cases where experimental conditions cannot be fully randomized in a task. We propose Bayesian representational similarity analysis (BRSA), an alternative method for computing representational similarity in which we treat the covariance structure of neural activity patterns as a hyper-parameter in a generative model of the neural data. By marginalizing over the unknown activity patterns, we can directly estimate this covariance structure from imaging data. This method offers significant reductions in bias and allows estimation of neural representational similarity with previously unattained levels of precision at low signal-to-noise ratio. The probabilistic framework allows for jointly analyzing data from a group of participants. The method can also simultaneously estimate a signal-to-noise-ratio map that shows where the learned representational structure is supported more strongly. Both this map and the learned covariance matrix can be used as a structured prior for maximum a posteriori estimation of neural activity patterns, which can be further used for fMRI decoding. We make our tool freely available in the Brain Imaging Analysis Kit (BrainIAK).

Author Summary
We show the severity of the bias introduced when performing representational similarity analysis (RSA) based on neural activity patterns estimated within imaging runs. Our Bayesian RSA method significantly reduces the bias and can learn a shared representational structure across multiple participants. We also demonstrate its extension as a new multi-class decoding tool.
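The bias the authors describe is easy to reproduce in a toy simulation: when two conditions overlap in time, the GLM's pattern estimates inherit structure from (XᵀX)⁻¹ even when the data contain no signal at all, so a naive similarity matrix shows condition structure that is purely an artifact of the design. A hedged sketch (a toy design of our own, not the BRSA implementation, which ships with BrainIAK):

```python
# Toy demonstration of design-induced bias in naive RSA: estimate patterns by
# regression from pure-noise data, then correlate them. The overlapping
# conditions show strong spurious (anti)correlation with no signal present.
import numpy as np

rng = np.random.default_rng(4)
n_time, n_vox, n_cond = 200, 100, 4

X = rng.standard_normal((n_time, n_cond))        # toy design matrix
X[:, 1] = 0.7 * X[:, 0] + 0.3 * X[:, 1]          # conditions 0 and 1 overlap

Y = rng.standard_normal((n_time, n_vox))         # pure noise: no signal at all
B = np.linalg.lstsq(X, Y, rcond=None)[0]         # estimated patterns (cond x vox)

print(np.corrcoef(B).round(2))                   # spurious structure appears
```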


2020 ◽  
Vol 30 (12) ◽  
pp. 6426-6443
Author(s):  
Yingying Tan ◽  
Peter Hagoort

Abstract
Catecholamine (CA) function has been widely implicated in cognitive functions that are tied to the prefrontal cortex and striatal areas. The present study investigated the effects of methylphenidate, a CA agonist, on the electroencephalogram (EEG) response related to semantic processing, using a double-blind, placebo-controlled, randomized, crossover, within-subject design. Forty-eight healthy participants read semantically congruent or incongruent sentences after receiving 20 mg of methylphenidate or a placebo while their brain activity was monitored with EEG. To probe whether the catecholaminergic modulation is task-dependent, in one condition participants had to focus on comprehending the sentences, while in the other they only had to attend to the font size of the sentences. The results demonstrate that methylphenidate has a task-dependent effect on semantic processing. Compared to placebo, when semantic processing was task-irrelevant, methylphenidate enhanced the detection of semantic incongruence, as indexed by a larger N400 amplitude in the incongruent sentences; when semantic processing was task-relevant, methylphenidate induced a larger N400 amplitude in the semantically congruent condition, which was followed by a larger late-positive-complex effect. These results suggest that CA-related neurotransmitters influence language processing, possibly through the projections between the prefrontal cortex and the striatum, which contain many CA receptors.


2021 ◽  
Vol 12 ◽  
Author(s):  
Seth M. Levine ◽  
Jens V. Schwarzbach

Representational similarity analysis (RSA) is a popular multivariate analysis technique in cognitive neuroscience that uses functional neuroimaging to investigate the informational content encoded in brain activity. As RSA is increasingly being used to investigate more clinically oriented questions, the focus of such translational studies turns toward the importance of individual differences and their optimization within the experimental design. In this Perspective, we focus on two design aspects: applying individual vs. averaged behavioral dissimilarity matrices to multiple participants' neuroimaging data, and ensuring congruency between the tasks used to measure behavioral and neural representational spaces. Incorporating these methods permits the detection of individual differences in representational spaces and yields a better-defined transfer of information from representational spaces onto multivoxel patterns. Such design adaptations are prerequisites for optimal translation of RSA to the field of precision psychiatry.
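The first design aspect can be made concrete: for each participant, correlate the neural RDM either with that participant's own behavioral RDM or with the group-averaged behavioral RDM, and compare the fits. A schematic sketch with toy RDMs (all names and data are illustrative):

```python
# Illustrative comparison of individual vs. group-averaged behavioral RDMs;
# all data here are toy placeholders.
import numpy as np
from scipy.stats import spearmanr

def toy_rdm(rng: np.random.Generator, n: int) -> np.ndarray:
    m = rng.random((n, n))
    m = (m + m.T) / 2                              # symmetric
    np.fill_diagonal(m, 0.0)                       # zero self-dissimilarity
    return m

def rdm_corr(rdm_a: np.ndarray, rdm_b: np.ndarray) -> float:
    iu = np.triu_indices(rdm_a.shape[0], k=1)      # compare upper triangles
    rho, _ = spearmanr(rdm_a[iu], rdm_b[iu])
    return rho

rng = np.random.default_rng(5)
behavioral = [toy_rdm(rng, 8) for _ in range(10)]  # one RDM per participant
neural = [toy_rdm(rng, 8) for _ in range(10)]
group = np.mean(behavioral, axis=0)

individual_fit = [rdm_corr(n, b) for n, b in zip(neural, behavioral)]
averaged_fit = [rdm_corr(n, group) for n in neural]
print(f"individual: {np.mean(individual_fit):+.3f}, "
      f"averaged: {np.mean(averaged_fit):+.3f}")
```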


2020 ◽  
Author(s):  
Andrea G. Russo ◽  
Michael Lührs ◽  
Francesco Di Salle ◽  
Fabrizio Esposito ◽  
Rainer Goebel

Abstract
Objective: Real-time functional magnetic resonance imaging neurofeedback (rt-fMRI-NF) is a non-invasive MRI procedure that allows examined participants to learn to self-regulate brain activity by performing mental tasks. A novel two-step rt-fMRI-NF procedure is proposed, whereby the feedback display is updated in real time based on high-level (semantic) representations of experimental stimuli, via real-time representational similarity analysis of multi-voxel patterns of brain activity.
Approach: In a localizer session, the stimuli become associated with anchored points in a two-dimensional representational space in which distances approximate between-pattern (dis)similarities. In the NF session, participants modulate their brain response, displayed as a movable point, to engage a specific neural representation. The developed method pipeline is verified in a proof-of-concept rt-fMRI-NF study at 7 Tesla using imagery of concrete objects. The dependence on noise is assessed more systematically on artificial fMRI data with similar (simulated) spatio-temporal structure and variable (injected) signal and noise. A series of brain activity patterns from the ventral visual cortex is evaluated via on-line and off-line analyses, and the performance of the method is reported under different noise conditions.
Main results: The participant in the proof-of-concept study exhibited robust activation patterns in the localizer session and, in the NF session, managed to steer the neural representation of a stimulus toward the selected target. The off-line analyses validated the rt-fMRI-NF results, showing that the rapid convergence to the target representation is noise-dependent.
Significance: Our proof-of-concept study demonstrates the potential of semantic NF designs in which the participant navigates among different mental states. Compared to traditional NF designs (e.g., using a thermometer display to set the level of the neural signal), the proposed approach provides content-specific feedback to the participant and extra degrees of freedom to the experimenter, enabling real-time control of the neural activity toward a target brain state without suggesting a specific mental strategy to the subject.
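One way to picture the two-step procedure: localizer patterns anchor a 2-D space via multidimensional scaling of their dissimilarities, and each incoming pattern is then displayed as a point placed relative to those anchors. In the sketch below, the placement rule (a similarity-weighted average of anchor coordinates) is our illustrative choice; the authors' actual update rule may differ:

```python
# Hedged sketch of a semantic-NF feedback display: MDS anchors from localizer
# patterns, plus a toy rule for placing a new pattern among them.
import numpy as np
from scipy.spatial.distance import cdist, pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(6)
anchors = rng.standard_normal((6, 500))            # localizer patterns (items x voxels)

rdm = squareform(pdist(anchors, "correlation"))    # between-pattern dissimilarities
xy = MDS(n_components=2, dissimilarity="precomputed",
         random_state=0).fit_transform(rdm)        # anchored 2-D space

def place(pattern: np.ndarray) -> np.ndarray:
    """Map a new multi-voxel pattern to a 2-D feedback point (toy rule)."""
    d = cdist(pattern[None, :], anchors, "correlation")[0]
    w = 1.0 / (d + 1e-9)                           # nearer anchors pull harder
    return (w[:, None] * xy).sum(axis=0) / w.sum()

# A noisy copy of anchor 2 should land near anchor 2's coordinates.
print(xy[2], place(anchors[2] + 0.1 * rng.standard_normal(500)))
```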


2018 ◽  
Author(s):  
Maria Tsantani ◽  
Nikolaus Kriegeskorte ◽  
Carolyn McGettigan ◽  
Lúcia Garrido

Abstract
Face-selective and voice-selective brain regions have been shown to represent face identity and voice identity, respectively. Here we investigated whether there are modality-general person-identity representations in the brain that can be driven by either a face or a voice, and that invariantly represent naturalistically varying face and voice tokens of the same identity. According to two distinct models, such representations could exist either in multimodal brain regions (Campanella and Belin, 2007) or in face-selective brain regions, via direct coupling between face- and voice-selective regions (von Kriegstein et al., 2005). To test the predictions of these two models, we used fMRI to measure brain activity patterns elicited by the faces and voices of familiar people in multimodal, face-selective, and voice-selective brain regions. We used representational similarity analysis (RSA) to compare the representational geometries of face- and voice-elicited person identities, and to investigate the degree to which pattern discriminants for pairs of identities generalise from one modality to the other. We found no matching geometries for faces and voices in any brain region. However, we showed crossmodal generalisation of the pattern discriminants in the multimodal right posterior superior temporal sulcus (rpSTS), suggesting a modality-general person-identity representation in this region. Importantly, the rpSTS showed invariant representations of face and voice identities, in that discriminants were trained and tested on independent face videos (different viewpoint, lighting, background) and voice recordings (different vocalizations). Our findings support the Multimodal Processing Model, which proposes that face and voice information is integrated in multimodal brain regions.

Significance Statement
It is possible to identify a familiar person either by looking at their face or by listening to their voice. Using fMRI and representational similarity analysis (RSA), we show that the right posterior superior temporal sulcus (rpSTS), a multimodal brain region that responds to both faces and voices, contains representations that can distinguish between familiar people independently of whether we are looking at their face or listening to their voice. Crucially, these representations generalised across different face videos and voice recordings of the same person. Our findings suggest that identity information from the visual and auditory processing systems is combined and integrated in the multimodal rpSTS region.
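The crossmodal-generalisation logic can be sketched as a train-on-faces, test-on-voices linear decoder: above-chance transfer implies a pattern discriminant shared across modalities. The simulation below is a toy stand-in for ROI patterns; the shared "identity axis" is our assumption playing the role of a modality-general code:

```python
# Toy crossmodal decoding sketch: a classifier trained on face-elicited
# patterns is tested on voice-elicited patterns. Data are simulated; a shared
# identity axis (our assumption) stands in for a modality-general code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n_per, n_vox = 20, 150
identity_axis = rng.standard_normal(n_vox)         # shared identity code

def modality_patterns(offset: np.ndarray):
    """Two identities in one modality; offset models modality-specific activity."""
    a = identity_axis + offset + rng.standard_normal((n_per, n_vox))
    b = -identity_axis + offset + rng.standard_normal((n_per, n_vox))
    return np.vstack([a, b]), np.repeat([0, 1], n_per)

X_face, y_face = modality_patterns(rng.standard_normal(n_vox))
X_voice, y_voice = modality_patterns(rng.standard_normal(n_vox))

clf = LogisticRegression(max_iter=1000).fit(X_face, y_face)
print("crossmodal accuracy:", clf.score(X_voice, y_voice))  # well above 0.5
```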

