Information Processing Capacity
Recently Published Documents


TOTAL DOCUMENTS: 75 (five years: 18)
H-INDEX: 12 (five years: 2)

2021 · Vol 11 (1)
Author(s): Tomer Fekete, Hermann Hinrichs, Jacobo Diego Sitt, Hans-Jochen Heinze, Oren Shriki

Abstract: The brain is universally regarded as a system for processing information. If so, any behavioral or cognitive dysfunction should lend itself to depiction in terms of information-processing deficiencies. Information is characterized by recursive, hierarchical complexity. The brain accommodates this complexity through a hierarchy of large/slow and small/fast spatiotemporal loops of activity. Successful information processing therefore hinges on tightly regulating the spatiotemporal makeup of activity so as to optimally match the underlying multiscale delay structure of such hierarchical networks. Reduced capacity for information processing will then be expressed as deviance from this requisite multiscale character of spatiotemporal activity. This deviance is captured by a general family of multiscale criticality (MsCr) measures, which track the behavior of conventional criticality measures (such as the branching parameter) across temporal scales. We applied MsCr to MEG and EEG data in several telling scenarios of degraded information processing. Consistent with our previous modeling work, MsCr measures varied systematically with information processing capacity: MsCr fingerprints showed deviance in the four states of compromised information processing examined in this study (disorders of consciousness, mild cognitive impairment, schizophrenia, and even pre-ictal activity). MsCr measures might thus serve as general gauges of information processing capacity and, therefore, as normative measures of brain health.
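The abstract's core quantity, a conventional criticality measure tracked across temporal coarse-graining scales, can be illustrated with a short sketch. The authors' exact MsCr estimators are not given here, so the following Python snippet is a minimal, hypothetical illustration: it estimates the branching parameter (the average number of descendant events per ancestor event) from binned activity, then repeats the estimate across progressively coarser time bins to build a fingerprint. The function names and the Poisson toy data are assumptions, not the paper's code.

```python
import numpy as np

def branching_parameter(events, bin_width):
    """Branching parameter sigma at one temporal scale: the average number
    of events in a bin per event in the preceding bin."""
    n_bins = len(events) // bin_width
    # Coarse-grain: merge consecutive elementary steps into wider bins
    binned = events[: n_bins * bin_width].reshape(n_bins, bin_width).sum(axis=1)
    ancestors, descendants = binned[:-1], binned[1:]
    mask = ancestors > 0  # sigma is undefined for bins with no ancestor events
    return float(np.mean(descendants[mask] / ancestors[mask]))

def mscr_fingerprint(events, scales=(1, 2, 4, 8, 16, 32)):
    """A multiscale criticality (MsCr) curve: sigma as a function of scale."""
    return {s: branching_parameter(events, s) for s in scales}

# Toy usage with Poisson counts standing in for binned MEG/EEG events;
# near-critical activity should keep sigma close to 1 across scales,
# and systematic deviation from that profile is the proposed marker.
rng = np.random.default_rng(0)
events = rng.poisson(lam=5.0, size=100_000)
print(mscr_fingerprint(events))
```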


2021 · Vol 10 (1)
Author(s): Onur Kulce, Deniz Mengu, Yair Rivenson, Aydogan Ozcan

Abstract: The precise engineering of materials and surfaces has been at the heart of some of the recent advances in optics and photonics. These advances in engineering materials with new functionalities have also opened up exciting avenues for designing trainable surfaces that can perform computation and machine-learning tasks through light–matter interactions and diffraction. Here, we analyze the information-processing capacity of coherent optical networks formed by diffractive surfaces that are trained to perform an all-optical computational task between a given input and output field-of-view. We show that the dimensionality of the all-optical solution space covering the complex-valued transformations between the input and output fields-of-view is linearly proportional to the number of diffractive surfaces within the optical network, up to a limit dictated by the extent of the input and output fields-of-view. Deeper diffractive networks, composed of larger numbers of trainable surfaces, can cover a higher-dimensional subspace of the complex-valued linear transformations between a larger input field-of-view and a larger output field-of-view, and they exhibit depth advantages over a single trainable diffractive surface in terms of statistical inference, learning, and generalization for different image-classification tasks. These analyses and conclusions are broadly applicable to various forms of diffractive surfaces, including, e.g., plasmonic and/or dielectric-based metasurfaces and flat optics, which can be used to form all-optical processors.
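The linear scaling claimed in the abstract can be made concrete with a small numerical sketch. In a diffractive network, each trainable surface acts as a diagonal (element-wise) complex transmittance interleaved with fixed free-space propagation, so the end-to-end operation is a matrix product in which only the diagonal factors are trainable. The Python sketch below uses a random unitary as a stand-in for the propagation kernel (a real design would use an angular-spectrum or Rayleigh-Sommerfeld operator); the names `propagation_matrix` and `diffractive_network` and the phase-only surface model are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def propagation_matrix(n, seed):
    """Stand-in for a fixed free-space diffraction operator: a random unitary
    (energy-preserving), replacing the angular-spectrum kernel of a real design."""
    rng = np.random.default_rng(seed)
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, _ = np.linalg.qr(m)
    return q

def diffractive_network(phases, n):
    """End-to-end complex transform A = H_K D_K ... H_1 D_1 H_0, where each
    D_k = diag(exp(i * phi_k)) is a trainable phase-only surface and each
    H_k is a fixed propagation step."""
    A = propagation_matrix(n, seed=0)
    for k, phi in enumerate(phases, start=1):
        D = np.diag(np.exp(1j * phi))              # trainable diagonal surface
        A = propagation_matrix(n, seed=k) @ D @ A  # propagate to the next plane
    return A

n, K = 64, 3  # pixels per field-of-view, number of trainable surfaces
rng = np.random.default_rng(42)
phases = [rng.uniform(0.0, 2.0 * np.pi, size=n) for _ in range(K)]
A = diffractive_network(phases, n)

# Each surface adds n trainable parameters, so the reachable family of
# transforms grows linearly with K, whereas an arbitrary complex linear map
# between the two fields-of-view has n * n independent entries -- the limit
# beyond which adding surfaces no longer enlarges the solution space.
print(A.shape, "trainable parameters:", K * n, "entries of a full map:", n * n)
```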


2021 · pp. 265-275
Author(s): Nisha Baral, Chathika Gunaratne, Chathura Jayalath, William Rand, Chathurani Senevirathna, et al.

2021
Author(s): Onur Kulce, Deniz Mengu, Yair Rivenson, Aydogan Ozcan

2020 · Vol 2 (4)
Author(s): Nozomi Akashi, Terufumi Yamaguchi, Sumito Tsunegi, Tomohiro Taniguchi, Mitsuhiro Nishida, et al.

2020
Author(s): Yang Tian, Justin L. Gardner, Guoqi Li, Pei Sun

Abstract: Information undergoes complex transformation processes in the brain, involving various errors. A daunting and critical challenge in neuroscience is to understand the origin of these errors and their effects on neural information processing. While previous efforts have made substantial progress in studying information errors in bounded, unreliable, and noisy transformation cases, it remains elusive whether the neural system is inherently error-free under an ideal, noise-free condition. This work brings the controversy to an end with a negative answer. We propose a novel neural information confusion theory, which indicates the widespread presence of an information confusion phenomenon at the end of the transmission process, originating from innate neuron characteristics rather than external noise. We then reformulate the definition of zero-error capacity in the context of neuroscience, presenting an optimal upper bound on the zero-error transformation rates determined by the tuning properties of neurons. By applying this theory to neural coding analysis, we unveil the multi-dimensional impact of information confusion on neural coding: although it reduces the variability of neural responses and limits mutual information, it also controls stimulus-irrelevant neural activity and improves the interpretability of neural responses with respect to stimuli. Together, the present study discovers an inherent and ubiquitous precision limitation of neural information transformation, which shapes the coding process carried out by neural ensembles. These discoveries reveal that the neural system is intrinsically error-prone in information processing, even in the most ideal cases.

Author summary: One of the most central challenges in neuroscience is to understand the information processing capacity of the neural system. Decades of effort have identified various errors in non-ideal neural information processing cases, indicating that the neural system is not optimal in information processing because of the widespread presence of external noises and limitations. These advances, however, cannot address the question of whether the neural system is essentially error-free and optimal under ideal information processing conditions, which has led to extensive controversy in neuroscience. Our work brings this well-known controversy to an end with a negative answer. We demonstrate that the neural system is intrinsically error-prone in information processing even in the most ideal cases, challenging conventional ideas about its superior information processing capacity. We further show that the neural coding process is shaped by this innate limit, revealing how the characteristics of neural information functions, and further cognitive functions, are determined by the inherent limitation of the neural system.
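The zero-error notion invoked here comes from Shannon's zero-error information theory, where the single-use zero-error rate of a channel is lower-bounded by log2 of the independence number of its confusability graph. As a rough, hypothetical illustration of how such a quantity depends on neuronal tuning properties, the Python sketch below builds a confusability graph from toy Gaussian tuning curves (two stimuli are confusable if they can evoke a common response) and brute-forces the maximum independent set. Note this gives a single-use lower bound, not the paper's upper bound; the tuning model, threshold, and function names are assumptions, not the paper's construction.

```python
import itertools
import numpy as np

def confusability_graph(tuning, threshold):
    """Stimuli i and j are confusable when their tuning curves overlap, i.e.
    some response level is attainable under both stimuli."""
    n = tuning.shape[0]
    adj = np.zeros((n, n), dtype=bool)
    for i, j in itertools.combinations(range(n), 2):
        if np.any((tuning[i] > threshold) & (tuning[j] > threshold)):
            adj[i, j] = adj[j, i] = True
    return adj

def max_independent_set_size(adj):
    """Brute-force maximum independent set; fine for toy problem sizes."""
    n = adj.shape[0]
    for r in range(n, 0, -1):
        for subset in itertools.combinations(range(n), r):
            if not any(adj[i, j] for i, j in itertools.combinations(subset, 2)):
                return r
    return 0

# Toy setup: 6 stimuli with Gaussian tuning over a 1-D response axis. Broader
# tuning means more overlap, a denser confusability graph, and a lower
# zero-error rate -- the sense in which tuning properties bound the capacity.
x = np.linspace(0.0, 1.0, 200)
centers = np.linspace(0.1, 0.9, 6)
tuning = np.exp(-((x[None, :] - centers[:, None]) ** 2) / (2 * 0.05 ** 2))
adj = confusability_graph(tuning, threshold=1e-3)
alpha = max_independent_set_size(adj)
print("zero-error bits per use >=", np.log2(alpha))
```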

