Journal of Cognitive Neuroscience
Latest Publications


TOTAL DOCUMENTS: 4164 (five years: 491)
H-INDEX: 215 (five years: 10)

Published by MIT Press
ISSN: 1530-8898, 0898-929X

2022, pp. 1-13
Author(s): Audrey Siqi-Liu, Tobias Egner, Marty G. Woldorff

Abstract To adaptively interact with the uncertainties of daily life, we must match our level of cognitive flexibility to contextual demands—being more flexible when frequent shifting between different tasks is required and more stable when the current task requires a strong focus of attention. Such cognitive flexibility adjustments in response to changing contextual demands have been observed in cued task-switching paradigms, where the performance cost incurred by switching versus repeating tasks (switch cost) scales inversely with the proportion of switches (PS) within a block of trials. However, the neural underpinnings of these adjustments in cognitive flexibility are not well understood. Here, we recorded 64-channel EEG measures of electrical brain activity as participants switched between letter and digit categorization tasks in varying PS contexts, from which we extracted ERPs elicited by the task cue and alpha power differences during the cue-to-target interval and the resting precue period. The temporal resolution of the EEG allowed us to test whether contextual adjustments in cognitive flexibility are mediated by tonic changes in processing mode or by changes in phasic, task cue-triggered processes. We observed reliable modulation of behavioral switch cost by PS context that was mirrored in both cue-evoked ERP and time–frequency effects but not in blockwide precue EEG changes. These results indicate that different levels of cognitive flexibility are instantiated after the presentation of task cues, rather than being maintained as a tonic state throughout low- or high-switch contexts.
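For illustration, a minimal sketch of the behavioral measure follows: the switch cost (mean switch-trial RT minus mean repeat-trial RT) computed separately for low- and high-PS blocks. The column names and toy data are hypothetical, not the authors' analysis code.

```python
# Minimal sketch (assumptions, not the study's code): switch cost per PS context.
import pandas as pd

def switch_cost_by_context(trials: pd.DataFrame) -> pd.Series:
    """Mean RT(switch) - mean RT(repeat), computed within each PS context."""
    mean_rt = trials.groupby(["ps_context", "is_switch"])["rt"].mean().unstack("is_switch")
    return mean_rt[True] - mean_rt[False]  # switch cost per context

# Toy usage: the switch cost is expected to shrink as the proportion of switches rises.
trials = pd.DataFrame({
    "ps_context": ["low_PS"] * 4 + ["high_PS"] * 4,
    "is_switch":  [False, True, False, True, False, True, False, True],
    "rt":         [620, 760, 640, 780, 630, 690, 650, 700],  # msec
})
print(switch_cost_by_context(trials))
```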


2022, pp. 1-16
Author(s): Jamal A. Williams, Elizabeth H. Margulis, Samuel A. Nastase, Janice Chen, Uri Hasson, ...

Abstract Recent fMRI studies of event segmentation have found that default mode regions represent high-level event structure during movie watching. In these regions, neural patterns are relatively stable during events and shift at event boundaries. Music, like narratives, contains hierarchical event structure (e.g., sections are composed of phrases). Here, we tested the hypothesis that brain activity patterns in default mode regions reflect the high-level event structure of music. We used fMRI to record brain activity from 25 participants (male and female) as they listened to a continuous playlist of 16 musical excerpts and additionally collected annotations for these excerpts by asking a separate group of participants to mark when meaningful changes occurred in each one. We then identified temporal boundaries between stable patterns of brain activity using a hidden Markov model and compared the location of the model boundaries to the location of the human annotations. We identified multiple brain regions with significant matches to the observer-identified boundaries, including auditory cortex, medial pFC, parietal cortex, and angular gyrus. From these results, we conclude that both higher-order and sensory areas contain information relating to the high-level event structure of music. Moreover, the higher-order areas in this study overlap with areas found in previous studies of event perception in movies and audio narratives, including regions in the default mode network.
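A rough sketch of the boundary-matching logic follows. The study used a specialized event-segmentation HMM; the generic GaussianHMM from hmmlearn stands in here only to illustrate the idea of treating state changes as event boundaries and scoring them against human annotations.

```python
# Hedged sketch of the general approach, not the authors' pipeline.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def hmm_boundaries(roi_timeseries: np.ndarray, n_events: int) -> np.ndarray:
    """Return timepoint indices where the fitted HMM state sequence changes."""
    model = GaussianHMM(n_components=n_events, covariance_type="diag", n_iter=100)
    model.fit(roi_timeseries)                      # (n_timepoints, n_voxels)
    states = model.predict(roi_timeseries)
    return np.where(np.diff(states) != 0)[0] + 1   # boundary TRs

def boundary_match_rate(model_bounds, human_bounds, tolerance_tr=3):
    """Fraction of annotated boundaries that fall within `tolerance_tr` of a model boundary."""
    hits = [any(abs(h - m) <= tolerance_tr for m in model_bounds) for h in human_bounds]
    return float(np.mean(hits))

# Toy usage with random data; a real analysis would also build a permutation null.
rng = np.random.default_rng(0)
data = rng.standard_normal((200, 50))              # 200 TRs x 50 voxels
bounds = hmm_boundaries(data, n_events=10)
print(boundary_match_rate(bounds, human_bounds=[20, 60, 110, 150]))
```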


2022, pp. 1-12
Author(s): Simon Kwon, Franziska R. Richter, Michael J. Siena, Jon S. Simons

Abstract The qualities of remembered experiences are often used to inform “reality monitoring” judgments, our ability to distinguish real and imagined events [Johnson, M. K., & Raye, C. L. Reality monitoring. Psychological Review, 88, 67–85, 1981]. Previous experiments have tended to investigate only whether reality monitoring decisions are accurate or not, providing little insight into the extent to which reality monitoring may be affected by qualities of the underlying mnemonic representations. We used a continuous-response memory precision task to measure the quality of remembered experiences that underlie two different types of reality monitoring decisions: self/experimenter decisions that distinguish actions performed by participants and the experimenter, and imagined/perceived decisions that distinguish imagined and perceived experiences. The data revealed memory precision to be associated with higher accuracy in both self/experimenter and imagined/perceived reality monitoring decisions, with lower precision linked with a tendency to misattribute self-generated experiences to external sources. We then sought to investigate the possible neurocognitive basis of these observed associations by applying brain stimulation to a region that has been implicated in precise recollection of personal events, the left angular gyrus. Stimulation of the angular gyrus selectively reduced the association between memory precision and self-referential reality monitoring decisions, relative to control site stimulation. The angular gyrus may, therefore, be important for the mnemonic processes involved in representing remembered experiences that give rise to a sense of self-agency, a key component of “autonoetic consciousness” that characterizes episodic memory [Tulving, E. Elements of episodic memory. Oxford, United Kingdom: Oxford University Press, 1985].
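As a simple illustration only (not necessarily the estimator used in the study), precision in a continuous-response task can be summarized as the reciprocal of the circular standard deviation of response errors:

```python
# Illustrative precision measure; targets, responses, and the noise level are toy values.
import numpy as np
from scipy.stats import circstd

def memory_precision(responses_deg: np.ndarray, targets_deg: np.ndarray) -> float:
    errors = (responses_deg - targets_deg + 180) % 360 - 180   # wrap to [-180, 180)
    return 1.0 / circstd(np.deg2rad(errors))                   # higher = more precise

rng = np.random.default_rng(4)
targets = rng.uniform(0, 360, size=60)
responses = (targets + rng.normal(0, 15, size=60)) % 360       # ~15 deg response noise
print(memory_precision(responses, targets))
```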


2021, pp. 1-14
Author(s): Assaf Harel, Jeffery D. Nador, Michael F. Bonner, Russell A. Epstein

Abstract Scene perception and spatial navigation are interdependent cognitive functions, and there is increasing evidence that cortical areas that process perceptual scene properties also carry information about the potential for navigation in the environment (navigational affordances). However, the temporal stages by which visual information is transformed into navigationally relevant information are not yet known. We hypothesized that navigational affordances are encoded during perceptual processing and therefore should modulate early visually evoked ERPs, especially the scene-selective P2 component. To test this idea, we recorded ERPs from participants while they passively viewed computer-generated room scenes matched in visual complexity. By simply changing the number of doors (no doors, 1 door, 2 doors, 3 doors), we were able to systematically vary the number of pathways that afford movement in the local environment, while keeping the overall size and shape of the environment constant. We found that rooms with no doors evoked a higher P2 response than rooms with three doors, consistent with prior research reporting higher P2 amplitude to closed relative to open scenes. Moreover, we found P2 amplitude scaled linearly with the number of doors in the scenes. Navigability effects on the ERP waveform were also observed in a multivariate analysis, which showed significant decoding of the number of doors and their location at earlier time windows. Together, our results suggest that navigational affordances are represented in the early stages of scene perception. This complements research showing that the occipital place area automatically encodes the structure of navigable space and strengthens the link between scene perception and navigation.
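For a concrete sense of the linear-scaling claim, the sketch below regresses a P2-window mean amplitude on the number of doors using toy ERPs; the electrode choice, time window, and amplitude values are assumptions, not data from the study.

```python
# Illustrative sketch only: does P2 amplitude scale linearly with the number of doors?
import numpy as np
from scipy.stats import linregress

def mean_amplitude(erp: np.ndarray, times: np.ndarray, window=(0.200, 0.250)) -> float:
    """Mean amplitude of a single-channel ERP within an assumed P2 window (seconds)."""
    mask = (times >= window[0]) & (times <= window[1])
    return float(erp[mask].mean())

# Toy ERPs for the four door conditions: noise plus a P2-like bump that shrinks
# with each added door (values are illustrative only).
times = np.linspace(-0.1, 0.5, 601)
rng = np.random.default_rng(6)
p2_amps = []
for n_doors in range(4):
    bump = (6.0 - 0.7 * n_doors) * np.exp(-((times - 0.22) ** 2) / (2 * 0.02 ** 2))
    erp = bump + 0.2 * rng.standard_normal(times.size)
    p2_amps.append(mean_amplitude(erp, times))

fit = linregress(range(4), p2_amps)
print(f"slope = {fit.slope:.2f} uV per door, p = {fit.pvalue:.3f}")
```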


2021, pp. 1-14
Author(s): Octave Etard, Rémy Ben Messaoud, Gabriel Gaugain, Tobias Reichenbach

Abstract Speech and music are spectrotemporally complex acoustic signals that are highly relevant for humans. Both contain a temporal fine structure that is encoded in the neural responses of subcortical and cortical processing centers. The subcortical response to the temporal fine structure of speech has recently been shown to be modulated by selective attention to one of two competing voices. Music similarly often consists of several simultaneous melodic lines, and a listener can selectively attend to a particular one at a time. However, the neural mechanisms that enable such selective attention remain largely enigmatic, not least since most investigations to date have focused on short and simplified musical stimuli. Here, we studied the neural encoding of classical musical pieces in human volunteers, using scalp EEG recordings. We presented volunteers with continuous musical pieces composed of one or two instruments. In the latter case, the participants were asked to selectively attend to one of the two competing instruments and to perform a vibrato identification task. We used linear encoding and decoding models to relate the recorded EEG activity to the stimulus waveform. We show that we can measure neural responses to the temporal fine structure of melodic lines played by one single instrument, at the population level as well as for most individual participants. The neural response peaks at a latency of 7.6 msec and is not measurable past 15 msec. When analyzing the neural responses to the temporal fine structure elicited by competing instruments, we found no evidence of attentional modulation. We observed, however, that low-frequency neural activity exhibited a modulation consistent with the behavioral task at latencies from 100 to 160 msec, in a similar manner to the attentional modulation observed in continuous speech (N100). Our results show that, much like speech, the temporal fine structure of music is tracked by neural activity. In contrast to speech, however, this response appears unaffected by selective attention in the context of our experiment.
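The sketch below illustrates a generic forward (encoding) model of this kind: EEG at each channel is predicted from time-lagged copies of the stimulus waveform by ridge regression. It is a minimal temporal-response-function recipe with assumed parameters, not the authors' implementation.

```python
# Generic ridge-regression TRF sketch; all parameters and data are placeholders.
import numpy as np

def lagged_design(stimulus: np.ndarray, max_lag: int) -> np.ndarray:
    """Stack the stimulus at lags 0..max_lag samples into a (n_times, n_lags) matrix."""
    n = len(stimulus)
    X = np.zeros((n, max_lag + 1))
    for lag in range(max_lag + 1):
        X[lag:, lag] = stimulus[:n - lag]
    return X

def fit_trf(stimulus: np.ndarray, eeg: np.ndarray, max_lag: int, alpha: float = 1.0):
    """Ridge solution W of EEG ~ lagged_stimulus @ W; each column is a per-channel TRF."""
    X = lagged_design(stimulus, max_lag)
    XtX = X.T @ X + alpha * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ eeg)          # (n_lags, n_channels)

# Toy usage: at a 1000-Hz rate, 30 lags cover roughly the 0-30 msec window in which
# a fine-structure response peaking near 7.6 msec would fall.
rng = np.random.default_rng(1)
stim = rng.standard_normal(5000)
eeg = np.column_stack([np.convolve(stim, [0.0, 0.5, 0.2], mode="same")
                       + 0.1 * rng.standard_normal(5000) for _ in range(4)])
trf = fit_trf(stim, eeg, max_lag=30)
print(trf.shape)  # (31, 4): one kernel per channel
```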


2021, pp. 1-19
Author(s): Wim Strijbosch, Edward A. Vessel, Dominik Welke, Ondrej Mitas, John Gelissen, ...

Abstract Aesthetic experiences have an influence on many aspects of life. Interest in the neural basis of aesthetic experiences has grown rapidly in the past decade, and fMRI studies have identified several brain systems supporting aesthetic experiences. Work on the rapid neuronal dynamics of aesthetic experience, however, is relatively scarce. This study adds to this field by investigating the experience of being aesthetically moved by means of ERP and time–frequency analysis. Participants' electroencephalography (EEG) was recorded while they viewed a diverse set of artworks and evaluated the extent to which these artworks moved them. Results show that being aesthetically moved is associated with a sustained increase in gamma activity over centroparietal regions. In addition, alpha power over right frontocentral regions was reduced in high- and low-moving images, compared to artworks given intermediate ratings. We interpret the gamma effect as an indication for sustained savoring processes for aesthetically moving artworks compared to aesthetically less-moving artworks. The alpha effect is interpreted as an indication of increased attention for aesthetically salient images. In contrast to previous works, we observed no significant effects in any of the established ERP components, but we did observe effects at latencies longer than 1 sec. We conclude that EEG time–frequency analysis provides useful information on the neuronal dynamics of aesthetic experience.
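A minimal sketch of this kind of time-frequency analysis follows: Morlet-wavelet power computed for single-trial EEG, from which alpha- and gamma-band power can be averaged and compared across rating conditions. The frequencies, cycle counts, and random data are placeholders, not the study's parameters.

```python
# Hedged sketch of a Morlet time-frequency decomposition with MNE; toy data only.
import numpy as np
from mne.time_frequency import tfr_array_morlet

sfreq = 500.0
rng = np.random.default_rng(2)
epochs = rng.standard_normal((20, 16, 1000))        # trials x channels x samples (2 s)

freqs = np.arange(4, 81, 2)                         # 4-80 Hz
power = tfr_array_morlet(epochs, sfreq=sfreq, freqs=freqs,
                         n_cycles=freqs / 2.0, output="power")
# power: (n_trials, n_channels, n_freqs, n_times)

alpha = (freqs >= 8) & (freqs <= 13)
gamma = (freqs >= 30) & (freqs <= 80)
alpha_power = power[:, :, alpha, :].mean(axis=2)    # trial x channel x time
gamma_power = power[:, :, gamma, :].mean(axis=2)
print(alpha_power.shape, gamma_power.shape)
```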


2021, pp. 1-12
Author(s): William Matchin, Deniz İlkbaşaran, Marla Hatrak, Austin Roth, Agnes Villwock, ...

Abstract Areas within the left-lateralized neural network for language have been found to be sensitive to syntactic complexity in spoken and written language. Previous research has revealed that these areas are active for sign language as well, but whether these areas are specifically responsive to syntactic complexity in sign language independent of lexical processing has yet to be established. To investigate this question, we used fMRI to neuroimage deaf native signers' comprehension of 180 sign strings in American Sign Language (ASL) with a picture-probe recognition task. The ASL strings were all six signs in length but varied at three levels of syntactic complexity: sign lists, two-word sentences, and complex sentences. Syntactic complexity significantly affected comprehension and memory, both behaviorally and neurally, by facilitating accuracy and response time on the picture-probe recognition task and eliciting a left-lateralized activation response pattern in anterior and posterior superior temporal sulcus (aSTS and pSTS). Minimal or absent syntactic structure reduced picture-probe recognition and elicited activation in bilateral pSTS and occipital-temporal cortex. These results provide evidence from a sign language, ASL, that the combinatorial processing of anterior STS and pSTS is supramodal in nature. The results further suggest that the neurolinguistic processing of ASL is characterized by overlapping and separable neural systems for syntactic and lexical processing.


2021, pp. 1-26
Author(s): Felicia A. Hardi, Leigh G. Goetschius, Melissa K. Peckins, Jeanne Brooks-Gunn, Sara S. McLanahan, ...

Abstract Accumulating literature has linked poverty to brain structure and function, particularly in affective neural regions; however, few studies have examined associations with structural connections or the importance of developmental timing of exposure. Moreover, prior neuroimaging studies have not used a proximal measure of poverty (i.e., material hardship, which assesses food, housing, and medical insecurity) to capture the lived experience of growing up in harsh economic conditions. The present investigation addressed these gaps collectively by examining the associations between material hardship (ages 1, 3, 5, 9, and 15 years) and white matter connectivity of frontolimbic structures (age of 15 years) in a low-income sample. We applied probabilistic tractography to diffusion imaging data collected from 194 adolescents. Results showed that material hardship related to amygdala–prefrontal, but not hippocampus–prefrontal or hippocampus–amygdala, white matter connectivity. Specifically, hardship during middle childhood (ages 5 and 9 years) was associated with greater connectivity between the amygdala and dorsomedial pFC, whereas hardship during adolescence (age of 15 years) was related to reduced amygdala–orbitofrontal (OFC) and greater amygdala–subgenual ACC connectivity. Growth curve analyses showed that greater increases of hardship across time were associated with both greater (amygdala–subgenual ACC) and reduced (amygdala–OFC) white matter connectivity. Furthermore, these effects remained above and beyond other types of adversity, and greater hardship and decreased amygdala–OFC connectivity were related to increased anxiety and depressive symptoms. Results demonstrate that the associations between material hardship and white matter connections differ across key prefrontal regions and developmental periods, providing support for potential windows of plasticity for structural circuits that support emotion processing.
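As an illustration of the growth-curve idea (a hedged sketch with hypothetical variable names, not the authors' model), material hardship can be modeled with a random intercept and a random age slope per child; the per-child slopes could then be related to connectivity in a second-level model:

```python
# Hedged growth-curve sketch with a linear mixed model; toy data and names only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_kids, ages = 50, [1, 3, 5, 9, 15]
df = pd.DataFrame({
    "child": np.repeat(np.arange(n_kids), len(ages)),
    "age": np.tile(ages, n_kids),
})
df["hardship"] = 0.5 + 0.05 * df["age"] + rng.normal(0, 0.3, len(df))

# Random intercept and random age slope per child.
model = smf.mixedlm("hardship ~ age", df, groups=df["child"], re_formula="~age")
result = model.fit()
print(result.summary())
```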


2021, pp. 1-16
Author(s): Heejung Jung, Tor D. Wager, R. McKell Carter

Abstract Functions in higher-order brain regions are the source of extensive debate. Although past trends have been to describe the brain—especially posterior cortical areas—in terms of a set of functional modules, a new emerging paradigm focuses on the integration of proximal functions. In this review, we synthesize emerging evidence that a variety of novel functions in the higher-order brain regions are due to convergence: convergence of macroscale gradients brings feature-rich representations into close proximity, presenting an opportunity for novel functions to arise. Using the TPJ as an example, we demonstrate that convergence is enabled via three properties of the brain: (1) hierarchical organization, (2) abstraction, and (3) equidistance. As gradients travel from primary sensory cortices to higher-order brain regions, information becomes abstracted and hierarchical, and eventually, gradients meet at a point maximally and equally distant from their sensory origins. This convergence, which produces multifaceted combinations, such as mentalizing another person's thought or projecting into a future space, parallels evolutionary and developmental characteristics in such regions, resulting in new cognitive and affective faculties.


2021, pp. 1-19
Author(s): Johanna Kreither, Orestis Papaioannou, Steven J. Luck

Abstract Working memory is thought to serve as a buffer for ongoing cognitive operations, even in tasks that have no obvious memory requirements. This conceptualization has been supported by dual-task experiments, in which interference is observed between a primary task involving short-term memory storage and a secondary task that presumably requires the same buffer as the primary task. Little or no interference is typically observed when the secondary task is very simple. Here, we test the hypothesis that even very simple tasks require the working memory buffer, but interference can be minimized by using activity-silent representations to store the information from the primary task. We tested this hypothesis using a dual-task paradigm in which a simple discrimination task was interposed in the retention interval of a change detection task. We used contralateral delay activity (CDA) to track the active maintenance of information for the change detection task. We found that the CDA was massively disrupted after the interposed task. Despite this disruption of active maintenance, we found that performance in the change detection task was only slightly impaired, suggesting that activity-silent representations were used to retain the information for the change detection task. A second experiment replicated this result and also showed that automated discriminations could be performed without producing a large CDA disruption. Together, these results suggest that simple but non-automated discrimination tasks require the same processes that underlie active maintenance of information in working memory.
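A minimal sketch of how a CDA difference wave is typically formed follows; the channel indices and random data are placeholders, not the authors' code.

```python
# Hedged sketch: contralateral-minus-ipsilateral difference wave over posterior channels.
import numpy as np

def cda_wave(epochs: np.ndarray, left_chs, right_chs, cue_side: np.ndarray) -> np.ndarray:
    """
    epochs:   (n_trials, n_channels, n_times) EEG during the retention interval
    left_chs / right_chs: indices of left/right posterior channels (e.g., PO7/PO8)
    cue_side: per-trial memorized hemifield, 'L' or 'R'
    Returns the trial-averaged contralateral-minus-ipsilateral waveform.
    """
    left = epochs[:, left_chs, :].mean(axis=1)      # (n_trials, n_times)
    right = epochs[:, right_chs, :].mean(axis=1)
    is_left_cue = (cue_side == "L")
    contra = np.where(is_left_cue[:, None], right, left)
    ipsi = np.where(is_left_cue[:, None], left, right)
    return (contra - ipsi).mean(axis=0)

# Toy usage
rng = np.random.default_rng(3)
eeg = rng.standard_normal((100, 64, 400))
cda = cda_wave(eeg, left_chs=[20, 21], right_chs=[52, 53],
               cue_side=rng.choice(["L", "R"], size=100))
print(cda.shape)  # (400,)
```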

