automatic encoding
Recently Published Documents


TOTAL DOCUMENTS: 42 (FIVE YEARS: 9)

H-INDEX: 9 (FIVE YEARS: 2)

Author(s): Antonio Prieto, Vanesa Peinado, Julia Mayas

Abstract: Visual working memory has been defined as a limited-capacity system that enables the maintenance and manipulation of visual information. However, perceptual features such as Gestalt grouping could improve visual working memory effectiveness. In two experiments, we explored how the presence of elements grouped by color similarity affects change detection performance for both grouped and non-grouped items. We combined a change detection task with a retrocue paradigm in which a six-item array had to be remembered. An always-valid, variable-delay retrocue appeared in some trials during the retention interval, either after 100 ms (iconic-trace period) or 1400 ms (working memory period), signaling the location of the probe. The results indicated that similarity grouping biased which information entered visual working memory, improving change detection accuracy only for previously grouped probes and hindering change detection for non-grouped probes in certain conditions (Exp. 1). However, this bottom-up automatic encoding bias was overridden when participants were explicitly instructed to ignore the grouped items as irrelevant to the task (Exp. 2).


2020, Vol 30 (12), pp. 6270-6283
Author(s): He Chen, Yuji Naya

Abstract: Perceptual processing along the ventral visual pathway to the hippocampus (HPC) is hypothesized to be substantiated by signal transformation from retinotopic space to relational space, which represents interrelations among constituent visual elements. However, our visual perception necessarily reflects the first person's perspective, which is based on retinotopic space. To investigate this two-facedness of visual perception, we compared neural activities in the temporal lobe (anterior inferotemporal cortex, perirhinal and parahippocampal cortices, and HPC) between conditions in which monkeys gazed at an object and conditions in which they fixated on the screen center with an object in their peripheral vision. We found that in addition to the spatially invariant object signal, the temporal lobe areas automatically represent a large-scale background image, which specifies the subject's viewing location. These results suggest that a combination of two distinct visual signals, on relational space and retinotopic space, may provide the first person's perspective serving perception and presumably subsequent episodic memory.


2019, Vol 122 (5), pp. 1849-1860
Author(s): Nobuyuki Nishimura, Motoaki Uchimura, Shigeru Kitazawa

We previously showed that the brain automatically represents a target position for reaching relative to a large square in the background. In the present study, we tested whether a natural scene with many complex details serves as an effective background for representing a target. In the first experiment, we used upright and inverted pictures of a natural scene. A shift of the pictures significantly attenuated prism adaptation of reaching movements as long as they were upright. In one-third of participants, adaptation was almost completely cancelled whether the pictures were upright or inverted. Remarkably, there were two distinct groups of participants: one that relied fully on the allocentric coordinates regardless of scene orientation, and another that relied on them only when the scene was upright. In the second experiment, we examined how long it takes for a novel upright scene to serve as a background. A shift of the novel scene had no significant effect when the scene was presented for 500 ms before the target, but significant effects recovered when it was presented for 1,500 ms. These results show that a natural scene serves as a background against which a target is automatically represented once we have spent 1,500 ms in the scene.

NEW & NOTEWORTHY Prism adaptation of reaching was attenuated by a shift of natural scenes as long as they were upright. In one-third of participants, adaptation was fully canceled whether the scene was upright or inverted. When an upright scene was novel, it took 1,500 ms for the scene to become available for allocentric coding. These results show that a natural scene serves as a background against which a target is automatically represented once we have spent 1,500 ms in the scene.


Entropy, 2019, Vol 21 (6), pp. 570
Author(s): Jingchun Piao, Yunfan Chen, Hyunchul Shin

In this paper, we present a new, effective infrared (IR) and visible (VIS) image fusion method that uses a deep neural network. In our method, a Siamese convolutional neural network (CNN) automatically generates a weight map representing the saliency of each pixel for a pair of source images; the CNN serves to automatically encode an image into a feature domain for classification. With the proposed method, the two key problems in image fusion, activity level measurement and fusion rule design, can be solved in one shot. The fusion is carried out through multi-scale image decomposition based on the wavelet transform, and the reconstruction result is more perceptually consistent with the human visual system. In addition, the visual effectiveness of the proposed fusion method is evaluated by comparing pedestrian detection results with those of other methods, using the YOLOv3 object detector on a public benchmark dataset. The experimental results show that the proposed method achieves competitive results in terms of both quantitative assessment and visual quality.
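The pipeline the abstract describes (compute a per-pixel weight map, decompose both source images with a wavelet transform, fuse band by band, reconstruct) can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the paper's implementation: local variance stands in for the learned Siamese-CNN saliency map, a one-level Haar transform stands in for the multi-scale wavelet decomposition, and the band-fusion rules (weighted average for the approximation band, max-absolute for detail bands) are common conventions rather than the authors' exact choices.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar decomposition: approximation band + (h, v, d) detail bands."""
    p00, p10 = img[0::2, 0::2], img[1::2, 0::2]
    p01, p11 = img[0::2, 1::2], img[1::2, 1::2]
    a = (p00 + p10 + p01 + p11) / 4   # approximation (low-low)
    h = (p00 - p10 + p01 - p11) / 4   # horizontal detail
    v = (p00 + p10 - p01 - p11) / 4   # vertical detail
    d = (p00 - p10 - p01 + p11) / 4   # diagonal detail
    return a, (h, v, d)

def haar_idwt2(a, details):
    """Exact inverse of haar_dwt2."""
    h, v, d = details
    out = np.empty((a.shape[0] * 2, a.shape[1] * 2))
    out[0::2, 0::2] = a + h + v + d
    out[1::2, 0::2] = a - h + v - d
    out[0::2, 1::2] = a + h - v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def local_saliency(img, k=3):
    """Local variance as a per-pixel saliency map (stand-in for the CNN weight map)."""
    p = np.pad(img, k // 2, mode='edge')
    win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    return win.var(axis=(-1, -2))

def fuse(ir, vis):
    """Fuse two registered grayscale images of equal, even dimensions."""
    s_ir, s_vis = local_saliency(ir), local_saliency(vis)
    w = s_ir / (s_ir + s_vis + 1e-12)      # weight map: IR share of total saliency
    a_ir, d_ir = haar_dwt2(ir)
    a_vis, d_vis = haar_dwt2(vis)
    wa = w[0::2, 0::2]                     # weight map downsampled to band size
    a_f = wa * a_ir + (1.0 - wa) * a_vis   # weighted-average rule for base band
    d_f = tuple(np.where(np.abs(x) >= np.abs(y), x, y)  # max-abs rule for details
                for x, y in zip(d_ir, d_vis))
    return haar_idwt2(a_f, d_f)
```

A useful sanity check of the sketch: fusing an image with itself reconstructs the image, since identical bands are unchanged by both rules and the Haar pair is an exact inverse.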


2019
Author(s): Christine Fawcett, Ben Kenward

To test how early social environments affect children’s consideration of gender, 3- to 6-year-old children (N = 80) enrolled in gender-neutral or typical preschool programs in the central district of a large Swedish city completed measures designed to assess their gender-based social preferences, stereotypes, and automatic encoding. Compared with children in typical preschools, a greater proportion of children in the gender-neutral school were interested in playing with unfamiliar other-gender children. In addition, children attending the gender-neutral preschool scored lower on a gender stereotyping measure than children attending typical preschools. Children at the gender-neutral school, however, were not less likely to automatically encode others’ gender. The findings suggest that gender-neutral pedagogy has moderate effects on how children think and feel about people of different genders but might not affect children’s tendency to spontaneously notice gender.


2018, Vol 18 (10), pp. 316
Author(s): Nicholas DeWind, Marty Woldorff, Elizabeth Brannon
