The Optical Building Blocks of Eyes

Author(s):  
Thomas W. Cronin ◽  
Sönke Johnsen ◽  
N. Justin Marshall ◽  
Eric J. Warrant

This chapter analyzes the optical building blocks of eyes. Irrespective of their optical specializations, all eyes have one thing in common: they collect and absorb light arriving from different places in the environment, thus giving animals information about the relative distribution of light and dark in the surrounding world, including the contrasts and positions of objects. This information is used to support a variety of visual tasks, such as identifying and avoiding predators, detecting and pursuing prey or conspecifics, and orientating and navigating within the habitat. Although some animals use their eyes to perform more or less all of these tasks, others do not. All visual systems evolved into one of two main categories: general purpose or special purpose.

Author(s):  
Alexandros Ioannidis-Pantopikos ◽  
Donat Agosti

In the landscape of general-purpose repositories, Zenodo was built at the European Laboratory for Particle Physics' (CERN) data center to facilitate the sharing and preservation of the long tail of research across all disciplines and scientific domains. Despite Zenodo's long tradition of making research artifacts FAIR (Findable, Accessible, Interoperable, and Reusable), challenges remain in applying these principles effectively when serving the needs of specific research domains. Plazi's biodiversity taxonomic literature processing pipeline liberates data from publications, making it FAIR via extensive metadata, the minting of a DataCite Digital Object Identifier (DOI), a licence, and both human- and machine-readable output provided by Zenodo; the data are accessible via the Biodiversity Literature Repository community at Zenodo. The deposits (e.g., taxonomic treatments, figures) are an example of how local networks of information can be formally linked to explicit resources in a broader context of other platforms like GBIF (Global Biodiversity Information Facility). In the context of biodiversity taxonomic literature data workflows, a general-purpose repository's traditional submission approach is not enough to preserve rich metadata and to capture highly interlinked objects, such as taxonomic treatments and digital specimens. As a prerequisite to serve these use cases and ensure that the artifacts remain FAIR, Zenodo introduced the concept of custom metadata, which allows enhancing submissions such as figures or taxonomic treatments (see as an example the treatment of Eurygyrus peloponnesius) with custom keywords, based on terms from common biodiversity vocabularies like Darwin Core and Audubon Core and with an explicit link to the respective vocabulary term.
The aforementioned pipelines and features are designed to be served first and foremost using public Representational State Transfer Application Programming Interfaces (REST APIs) and open web technologies like webhooks. This approach allows researchers and platforms to integrate existing and new automated workflows into Zenodo and thus empowers research communities to create self-sustained cross-platform ecosystems. The BiCIKL project (Biodiversity Community Integrated Knowledge Library) exemplifies how repositories and tools can become building blocks for broader adoption of the FAIR principles. Starting with the above literature processing pipeline, the underlying concepts and the resulting FAIR data will be explained, with a focus on the custom metadata used to enhance the deposits.
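As an illustration of the REST-based workflow described above, the sketch below assembles a deposit payload carrying Darwin Core custom keywords. The title, token, keyword values, and the exact shape of the `custom` metadata field are assumptions for illustration, not a verbatim copy of Zenodo's schema; consult Zenodo's API documentation before relying on them.

```python
import json
import urllib.request

ZENODO_API = "https://zenodo.org/api/deposit/depositions"
ACCESS_TOKEN = "YOUR-TOKEN"  # placeholder; a real personal access token is required

def build_payload():
    """Assemble deposit metadata with Darwin Core custom keywords (assumed shape)."""
    return {
        "metadata": {
            "title": "Taxonomic treatment of Eurygyrus peloponnesius",
            "upload_type": "publication",
            "communities": [{"identifier": "biosyslit"}],  # BLR community (assumed id)
            # Custom-metadata keywords keyed by vocabulary-prefixed terms,
            # each with an explicit link back to the Darwin Core vocabulary.
            "custom": {
                "dwc:genus": ["Eurygyrus"],
                "dwc:specificEpithet": ["peloponnesius"],
            },
        }
    }

def submit(payload):
    """POST the deposit (not executed here; needs a real token and network access)."""
    req = urllib.request.Request(
        f"{ZENODO_API}?access_token={ACCESS_TOKEN}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)

payload = build_payload()
print(sorted(payload["metadata"]["custom"]))
```

A downstream platform such as GBIF could then harvest these vocabulary-linked keywords mechanically, which is what makes the deposits machine-actionable rather than merely archived.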


1997 ◽  
Vol 3 (S2) ◽  
pp. 1131-1132
Author(s):  
Jansma P.L ◽  
M.A. Landis ◽  
L.C. Hansen ◽  
N.C. Merchant ◽  
N.J. Vickers ◽  
...  

We are using Data Explorer (DX), a general-purpose, interactive visualization program developed by IBM, to perform three-dimensional reconstructions of neural structures from microscopic or optical sections. We use the program on a Silicon Graphics workstation; it also can run on Sun, IBM RS/6000, and Hewlett Packard workstations. DX comprises modular building blocks that the user assembles into data-flow networks for specific uses. Many modules come with the program, but others, written by users (including ourselves), are continually being added and are available at the DX ftp site, http://www.tc.cornell.edu/DX. Initially, our efforts were aimed at developing methods for isosurface- and volume-rendering of structures visible in three-dimensional stacks of optical sections of insect brains gathered on our Bio-Rad MRC-600 laser scanning confocal microscope. We also wanted to be able to merge two 3-D data sets (collected on two different photomultiplier channels) and to display them at various angles of view.
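The data-flow idea behind DX can be sketched generically: modules are independent processing stages, and a network wires one stage's output to the next stage's input. The module names and toy data below are purely illustrative, not actual Data Explorer modules.

```python
# Minimal sketch of a DX-style data-flow network: each "module" is a
# function, and the network is an ordered wiring of modules. All names
# and data here are illustrative stand-ins, not real DX modules.
def import_stack(path):
    # Stand-in for reading a stack of optical sections from disk.
    return [[0, 1], [1, 2]]  # two tiny "sections"

def isosurface(stack, level):
    # Keep only voxels at or above the threshold level.
    return [[v for v in section if v >= level] for section in stack]

def render(surfaces):
    return f"rendered {sum(len(s) for s in surfaces)} voxels"

# Assemble the network: each stage consumes the previous stage's output,
# mirroring the visual wiring done in DX's data-flow editor.
pipeline = [
    lambda _: import_stack("brain_stack.dat"),
    lambda stack: isosurface(stack, level=1),
    lambda surfaces: render(surfaces),
]

data = None
for stage in pipeline:
    data = stage(data)
print(data)  # rendered 3 voxels
```

The appeal of this style, in DX as here, is that a user-written module slots into the network exactly like a built-in one.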


Robotica ◽  
1993 ◽  
Vol 11 (2) ◽  
pp. 119-128
Author(s):  
David Bar-On ◽  
Shaul Gutman ◽  
Amos Israeli

SUMMARY
A modular hierarchical model for controlling robots is presented. This model is targeted mainly for research and development; it enables researchers to concentrate on a certain specific task of robotics, while using existing building blocks for the rest of their applications. The presentation begins by discussing the problems with which researchers and engineers of robotics are faced whenever trying to use existing commercial robots. Based on this discussion we propose a new general model for robot control, to be referred to as TERM (TEchnion Robotic Model). The viability of the new model is demonstrated by implementing a general-purpose robot controller.


2004 ◽  
Vol 5 (2) ◽  
pp. 271-301 ◽  
Author(s):  
Henrik Hautop Lund ◽  
Patrizia Marti

I-BLOCKS are an innovative concept of building blocks allowing users to manipulate conceptual structures and compose atomic actions while building physical constructions. They represent an example of enabling technologies for tangible interfaces since they emphasise physicality of interaction through the use of spatial and kinaesthetic knowledge. The technology presented in this paper is integrated in physical building blocks augmented with embedded and invisible microprocessors. Connectivity and behaviour of such structures are defined by the physical connectivity between the blocks. These are general-purpose, constructive, tangible user interface devices that can have a variety of applications. Unlike other approaches, I-BLOCKS not only specify a computation to be performed by the target system but also perform the computation and the associated action/functionality at the same time. Manipulating I-BLOCKS means not only constructing physical or conceptual structures but also composing atomic actions into complex behaviours. To illustrate this concept, the paper presents different scenarios in which the technology has been applied: storytelling performed through the construction of physical characters exhibiting emotional states, and learning activities for speech therapy in cases of dyslexia and aphasia. The scenarios are presented, discussing both the features of the technology used and the related interaction design issues. The paper concludes by reporting on informal trials that have been conducted with children. It should be noted that, even if both trials represent application scenarios for children, the I-BLOCKS technology is in principle open to different kinds of applications and target users, such as games for adults or brainstorming activities.


2018 ◽  
Author(s):  
Jianfu Zhou ◽  
Alexandra E. Panaitiu ◽  
Gevorg Grigoryan

Abstract
The ability to routinely design functional proteins, in a targeted manner, would have enormous implications for biomedical research and therapeutic development. Computational protein design (CPD) offers the potential to fulfill this need, and though recent years have brought considerable progress in the field, major limitations remain. Current state-of-the-art approaches to CPD aim to capture the determinants of structure from physical principles. While this has led to many successful designs, it does have strong limitations associated with inaccuracies in physical modeling, such that a robust general solution to CPD has yet to be found. Here we propose a fundamentally novel design framework, one based on identifying and applying patterns of sequence-structure compatibility found in known proteins, rather than approximating them from models of inter-atomic interactions. Specifically, we systematically decompose the target structure to be designed into structural building blocks we call TERMs (tertiary motifs) and use rapid structure search against the Protein Data Bank (PDB) to identify sequence patterns associated with each TERM from known protein structures that contain it. These results are then combined to produce a sequence-level pseudo-energy model that can score any sequence for compatibility with the target structure. This model can then be used to extract the optimal-scoring sequence via combinatorial optimization or otherwise sample the sequence space predicted to be well compatible with folding to the target.
Here we carry out extensive computational analyses, showing that our method, which we dub dTERMen (design with TERM energies): 1) produces native-like sequences given native crystallographic or NMR backbones, 2) produces sequence-structure compatibility scores that correlate with thermodynamic stability, 3) is able to predict experimental success of designed sequences generated with other methods, and 4) designs sequences that are found to fold to the desired target by structure prediction more frequently than sequences designed with an atomistic method. As an experimental validation of dTERMen, we perform a total surface redesign of Red Fluorescent Protein mCherry, marking a total of 64 residues as variable. The single sequence identified as optimal by dTERMen harbors 48 mutations relative to mCherry, but nevertheless folds, is monomeric in solution, exhibits stability to chemical denaturation similar to that of mCherry, and even preserves the fluorescence property. Our results strongly argue that the PDB is now sufficiently large to enable proteins to be designed by using only examples of structural motifs from unrelated proteins. This is highly significant, given that the structural database will only continue to grow, and signals the possibility of a whole host of novel data-driven CPD methods. Because such methods are likely to have orthogonal strengths relative to existing techniques, they could represent an important step towards removing remaining barriers to robust CPD.
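The statistics-to-energy step described above can be illustrated in miniature: convert observed amino-acid counts from structural matches into pseudo-energies via a negative log-frequency, then pick the lowest-energy sequence by enumeration. The counts, alphabet, and two-position "motif" below are invented for illustration; dTERMen's actual model also includes pair terms and handles full-size proteins.

```python
import itertools
import math

# Toy illustration of a TERM-style pseudo-energy: hypothetical counts of
# amino acids observed at two positions of a structural motif across its
# PDB matches, converted to energies via E = -ln(frequency).
counts = {
    0: {"A": 60, "V": 30, "G": 10},   # position 0 of the motif
    1: {"L": 70, "I": 20, "F": 10},   # position 1 of the motif
}

def self_energy(pos, aa):
    total = sum(counts[pos].values())
    return -math.log(counts[pos][aa] / total)

def score(seq):
    """Pseudo-energy of a candidate sequence (lower = more native-like)."""
    return sum(self_energy(i, aa) for i, aa in enumerate(seq))

# "Combinatorial optimization" is exhaustive here because the space is tiny;
# real design problems require integer programming or sampling instead.
best = min(itertools.product(counts[0], counts[1]), key=score)
print("".join(best), round(score(best), 3))
```

The key property this preserves from the abstract is that the model scores sequences, not structures: any candidate sequence can be evaluated against the target backbone's motif statistics without atomistic simulation.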


Author(s):  
Steven Vercruysse ◽  
Martin Kuiper

Scientific progress is increasingly dependent on knowledge in computation-ready forms. In the life sciences, among others, many scientists carefully extract and structure knowledge from the scientific literature. In a process called manual curation, they enter knowledge into spreadsheets, or into databases where it serves their and many others' research. Valuable as these curation efforts are, the range and detail of what can practically be captured and shared remains limited, because of the constraints of current curation tools. Many important contextual aspects of observations described in literature simply do not fit in the form defined by these tools, and thus cannot be captured. Here we present the design of an easy-to-use, general-purpose method and interface that enables the precise semantic capture of virtually unlimited types of information and details, using only a minimal set of building blocks. Scientists from any discipline can use this to convert any complex knowledge into a form that is easily readable and meaningful for both humans and computers. The method VSM forms a universal and high-level language for encoding ideas, and for interacting with digital knowledge.
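The general idea of capturing contextualized knowledge from a minimal set of building blocks can be sketched as nested subject-relation-object terms, where a whole statement can itself become the subject of a further, contextualizing statement. This is a generic illustration of that composability, not VSM's actual syntax or data model.

```python
# Generic illustration of composing a detailed statement from a minimal
# set of building blocks: terms, and subject-relation-object connectors.
# This mimics the spirit of composable semantic capture; it is NOT the
# actual VSM notation.
def term(label):
    return {"label": label}

def triple(subject, relation, obj):
    return {"subj": subject, "rel": relation, "obj": obj}

# "Protein X activates gene Y" ... "in liver cells": the inner triple
# itself becomes the subject of a contextualizing triple, which is how
# contextual detail that flat database fields cannot hold gets captured.
core = triple(term("protein X"), term("activates"), term("gene Y"))
contextual = triple(core, term("occurs in"), term("liver cells"))

# Both humans and machines can read the nested structure back out.
print(contextual["subj"]["rel"]["label"], "->", contextual["obj"]["label"])
```

Because every statement is built from the same two block types, arbitrarily deep context (species, tissue, experimental condition) nests without changing the curation interface.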


2002 ◽  
Vol 16 (20n22) ◽  
pp. 3398-3398
Author(s):  
A. MIGLIORI ◽  
F. F. BALAKIREV ◽  
J. B. BETTS ◽  
G. S. BOEBINGER ◽  
C. H. MIELKE ◽  
...  

The DC and pulsed magnets now available at the NHMFL provide routine access to world-unique high magnetic fields in cryogenic environments (down even to dilution-refrigerator temperatures). This uniqueness comes with a price that reflects constraints of the magnets and the low temperatures, including limited volume and time at peak magnetic field, cryogenic power limits on electronics, and, particularly for pulsed magnets, increased noise. In effect, the instrumentation constraints are similar for NHMFL superconducting, resistive, and pulsed magnets. An NHMFL experimentalist therefore has a simple goal: acquisition of all the information produced by a measurement in the shortest time permitted by information theory, with minimum sensitivity to noise and interference. To assist with this, we propose here to eliminate commercial general-purpose lock-in amplifiers, preamplifiers, and digitizers and replace them with commercial-quality custom building blocks optimized for NHMFL measurements that are faster, quieter, more versatile, and cheaper. We will use these new instruments to support users by improving present measurements as well as adding new capabilities, including specific heat for materials that suffer adiabatic effects in pulsed fields, and thermal conductivity in both dc and pulsed magnets based on 3rd-harmonic methods. We will use these techniques to measure the thermal conductivity of high-Tc superconductors at high field in the normal state, and to test the Wiedemann-Franz relationship between electronic thermal conductivity and electrical conductivity in the extreme high-field limit.
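The core operation a lock-in amplifier performs, whether commercial or custom, is phase-sensitive demodulation: multiply the noisy input by quadrature references at the excitation frequency and low-pass filter. The digital sketch below recovers a known amplitude and phase from a synthetic noisy signal; all signal parameters are invented for illustration.

```python
import math
import random

# Digital lock-in demodulation sketch: recover the amplitude and phase of
# a signal buried in broadband noise by multiplying with quadrature
# references and averaging. Parameters are illustrative only.
random.seed(0)
f_ref = 50.0          # reference (excitation) frequency, Hz
fs = 10_000.0         # sampling rate, Hz
n = 20_000            # 2 s of samples = exactly 100 reference periods
amplitude, phase = 2.0, 0.3

x_sum = y_sum = 0.0
for k in range(n):
    t = k / fs
    signal = amplitude * math.sin(2 * math.pi * f_ref * t + phase)
    signal += random.gauss(0.0, 0.5)          # broadband noise
    # Mix with in-phase and quadrature references; the running sums act
    # as a boxcar low-pass filter over an integer number of periods.
    x_sum += signal * math.sin(2 * math.pi * f_ref * t)
    y_sum += signal * math.cos(2 * math.pi * f_ref * t)

x, y = 2 * x_sum / n, 2 * y_sum / n
r = math.hypot(x, y)                # recovered amplitude (close to 2.0)
theta = math.atan2(y, x)            # recovered phase (close to 0.3)
print(round(r, 2), round(theta, 2))
```

Averaging over an integer number of reference periods is what rejects the out-of-band noise; in a pulsed magnet, the short time at peak field limits how many periods are available, which is exactly the speed-versus-noise trade-off the abstract targets.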


2017 ◽  
Vol 27 (03n04) ◽  
pp. 1750006 ◽  
Author(s):  
Farhad Merchant ◽  
Anupam Chattopadhyay ◽  
Soumyendu Raha ◽  
S. K. Nandy ◽  
Ranjani Narayan

Basic Linear Algebra Subprograms (BLAS) and Linear Algebra Package (LAPACK) form basic building blocks for several High Performance Computing (HPC) applications and hence dictate performance of the HPC applications. Performance in such tuned packages is attained through tuning of several algorithmic and architectural parameters, such as the number of parallel operations in the Directed Acyclic Graph of the BLAS/LAPACK routines, the sizes of the memories in the memory hierarchy of the underlying platform, the bandwidth of the memory, and the structure of the compute resources in the underlying platform. In this paper, we closely investigate the impact of the Floating Point Unit (FPU) micro-architecture for performance tuning of BLAS and LAPACK. We present theoretical analysis of the pipeline depth of different floating point operations, such as multiplication, addition, square root, and division, followed by characterization of BLAS and LAPACK to determine several parameters required in the theoretical framework for deciding the optimum pipeline depth of the floating point operations. A simple design of a Processing Element (PE) is presented, and the PE is shown to outperform the most recent custom realizations of BLAS and LAPACK by 1.1X to 1.5X in GFlops/W, and 1.9X to 2.1X in GFlops/mm2. Compared to multicore, General Purpose Graphics Processing Unit (GPGPU), Field Programmable Gate Array (FPGA), and ClearSpeed CSX700 realizations, a performance improvement of 1.8-80x is reported for the PE.
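The trade-off behind an optimum pipeline depth can be shown with a textbook-style throughput model (a stand-in for the paper's actual framework; all numbers are illustrative): splitting the FPU logic across more stages shortens the cycle, but per-stage latch overhead and depth-proportional hazard penalties push back.

```python
# Textbook-style model of optimal pipeline depth. All constants are
# illustrative, not taken from the paper's characterization.
T_LOGIC = 20.0    # total FPU logic delay (arbitrary time units)
T_LATCH = 1.0     # per-stage latch/register overhead
P_HAZARD = 0.05   # fraction of operations that stall the pipeline

def time_per_op(d):
    """Average time per operation at pipeline depth d."""
    cycle = T_LOGIC / d + T_LATCH
    # A hazard flushes roughly d stages' worth of work, so its cost
    # grows with depth.
    return cycle * (1.0 + P_HAZARD * d)

best_depth = min(range(1, 33), key=time_per_op)
print(best_depth, round(time_per_op(best_depth), 3))
```

Expanding the product gives time_per_op(d) = T_LOGIC/d + P_HAZARD*T_LATCH*d + const, so the minimizer is d* = sqrt(T_LOGIC / (T_LATCH * P_HAZARD)) = sqrt(400) = 20 for these constants. Characterizing real BLAS/LAPACK kernels, as the paper does, amounts to measuring the analogues of these constants per operation type.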


SIMULATION ◽  
1963 ◽  
Vol 1 (1) ◽  
pp. R-2-R-16
Author(s):  
Hans S. Witsenhausen

A function dependent on the solution of a set of differential equations containing adjustable parameters can be minimized by systematic search procedures in parameter space. Such procedures can be implemented by a hybrid system consisting of a general purpose analog computer and a digital expansion providing parallel logic building blocks. Programming of such a system is illustrated for a simple search procedure in n parameters.
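The hybrid scheme can be mimicked entirely in software: the "analog computer" becomes a numerical integration of the differential equation, and the "digital logic" becomes a systematic search over the parameter. The equation, cost function, and search granularity below are invented for illustration.

```python
# Software sketch of the hybrid analog/digital scheme: an Euler
# integration plays the role of the analog computer, and a systematic
# grid search plays the role of the digital parameter-search logic.
# The ODE dx/dt = -a*x and all values are illustrative.
def simulate(a, x0=1.0, t_end=1.0, dt=1e-3):
    """Integrate dx/dt = -a*x from x0 over [0, t_end] by Euler steps."""
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * (-a * x)
    return x

def cost(a, target=0.5):
    """Squared miss between the simulated endpoint and the target."""
    return (simulate(a) - target) ** 2

# Systematic search in the one-dimensional parameter space: each grid
# point triggers one "analog run", and the best is kept.
a_best = min((i * 0.01 for i in range(1, 201)), key=cost)
print(round(a_best, 2))
```

The analytic answer is a = ln 2 ≈ 0.693 (since x(1) = e^(-a) should equal 0.5), and the grid search lands on the nearest grid point, 0.69; in the 1963 setting each cost evaluation was one analog-computer run, with the parallel logic blocks stepping the parameters between runs.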

