Data modelling and data processing generated by human eye movements

Author(s):  
Velin Kralev ◽  
Radoslava Kraleva ◽  
Petia Koprinkova-Hristova

Data modeling and data processing are important activities in any scientific research. This research focuses on modeling and processing data generated by a saccadometer. The approach is based on the relational data model, but the processing and storage of the data are done with client datasets. The experiments were performed with 26 randomly selected files from a total of 264 experimental sessions. The data from each experimental session were stored in three different formats: text, binary, and Extensible Markup Language (XML). The results showed that the text and binary formats were the most compact. Several data-processing actions were analyzed. Based on the results obtained, the two fastest actions were loading data from a binary file and storing data into a binary file; the two slowest were storing data in XML format and loading data from a text file. Converting data from text format to binary format also proved time-consuming, and the time required for this conversion does not scale proportionally with the number of records processed.
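As a hedged illustration of the format comparison, the sketch below serialises synthetic records in the three formats and times the operations. The field names (timestamp, x, y) and record count are assumptions; the paper's actual schema and client datasets are not reproduced.

```python
# A minimal timing sketch, assuming saccade-like records with illustrative
# fields (timestamp, x, y); the paper's client datasets are not reproduced.
import pickle
import time
import xml.etree.ElementTree as ET

records = [{"timestamp": i, "x": i * 0.1, "y": i * 0.2} for i in range(100_000)]

def timed(label, fn):
    t0 = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - t0:.3f} s")

def save_text():
    with open("session.txt", "w") as f:
        for r in records:
            f.write(f"{r['timestamp']}\t{r['x']}\t{r['y']}\n")

def save_binary():
    with open("session.bin", "wb") as f:
        pickle.dump(records, f)

def load_binary():
    with open("session.bin", "rb") as f:
        return pickle.load(f)

def save_xml():
    root = ET.Element("session")
    for r in records:
        ET.SubElement(root, "rec", {k: str(v) for k, v in r.items()})
    ET.ElementTree(root).write("session.xml")

timed("save text", save_text)
timed("save binary", save_binary)
timed("load binary", load_binary)
timed("save xml", save_xml)
```

In general, the binary path avoids both parsing and tag overhead, which is consistent with the ranking reported above.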

Author(s):  
V.G. Belenkov ◽  
V.I. Korolev ◽  
V.I. Budzko ◽  
D.A. Melnikov

The article discusses the features of using cryptographic information protection means (CIPM) in the environment of distributed processing and storage of data in large information and telecommunication systems (LITS). A brief characterization is given of the properties of the cryptographic protection control subsystem, the key system (KS). Symmetric and asymmetric cryptographic systems are described to the extent required to state the problem of using a KS in LITS. Functional and structural models of the use of KS and CIPM in LITS are described, along with generalized information about the features of using a KS in LITS. The results obtained form the basis for further work on the architecture and construction principles of KS in LITS that implement distributed data processing and storage technologies. They can be used as a methodological guide, when carrying out specific work on the creation and development of systems that implement these technologies, and when drafting technical specifications for the implementation of such systems.
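The symmetric/asymmetric distinction the article relies on can be illustrated with a minimal Python sketch using the third-party cryptography package; it shows the primitives only, not the article's KS architecture or CIPM interfaces.

```python
# Minimal sketch of the symmetric/asymmetric distinction using the
# third-party "cryptography" package (pip install cryptography). This
# illustrates the primitives only, not the article's KS or CIPM design.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Symmetric: the same shared key both encrypts and decrypts.
key = Fernet.generate_key()
f = Fernet(key)
assert f.decrypt(f.encrypt(b"payload")) == b"payload"

# Asymmetric: the public key encrypts, only the private key decrypts.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ciphertext = private_key.public_key().encrypt(b"session key", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"session key"
```

In practice the two are combined: an asymmetric exchange distributes the symmetric session keys, which is the core task of a key system.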


2008 ◽  
pp. 1184-1191
Author(s):  
Jan Owens ◽  
Suresh Chalasani ◽  
Jayavel Sounderpandian

The use of Radio Frequency Identification (RFID) is becoming prevalent in supply chains, with large corporations such as Wal-Mart, Tesco, and the Department of Defense phasing in RFID requirements on their suppliers. The implementation of RFID can necessitate changes in existing data models and will add to the demand for processing and storage capacities. This article discusses the implications of RFID technology for data processing in supply chains.
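As a hedged illustration of the data-model impact, the sketch below extends a toy supply-chain schema with an item-level tag-read table; all table and column names are hypothetical, not drawn from the article. Item-level Electronic Product Code (EPC) reads generate one row per tag per reader event, which is the source of the extra processing and storage demand.

```python
# Hedged sketch: one way RFID adoption can extend a supply-chain data
# model. All table and column names are hypothetical, not the article's.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE shipment (shipment_id INTEGER PRIMARY KEY, origin TEXT, dest TEXT);
-- New with RFID: one row per tag read, keyed by Electronic Product Code.
CREATE TABLE epc_read (
    epc TEXT,                -- item-level tag identity
    shipment_id INTEGER REFERENCES shipment(shipment_id),
    reader_id TEXT,          -- which portal antenna saw the tag
    read_time TEXT
);
""")
conn.execute("INSERT INTO shipment VALUES (1, 'DC-Chicago', 'Store-42')")
conn.executemany(
    "INSERT INTO epc_read VALUES (?, 1, 'dock-door-3', '2008-01-01T12:00')",
    [(f"urn:epc:id:sgtin:0614141.107346.{i}",) for i in range(1000)],
)
print(conn.execute("SELECT COUNT(*) FROM epc_read").fetchone()[0], "reads")
```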


2019 ◽  
Vol 8 (S1) ◽  
pp. 87-88
Author(s):  
S. Annapoorani ◽  
B. Srinivasan

This paper is concerned with the study and implementation of an effective data emplacement algorithm for large collections of databases (Big Data) and proposes a model for improving the efficiency of data processing and storage utilization under dynamic load imbalance among nodes in a heterogeneous cloud environment. In an era of explosive growth in information, more and more fields need to deal with massive, large-scale data. The proposed Effective Data Emplacement Algorithm uses the computing capacity of each node as the predominant factor in placement decisions, improving the efficiency of processing large data sets within a short time. The adaptability of the proposed model is obtained by minimizing processing time according to the computing capacity of each node in the cluster. Experimental results with word-count applications show that the proposed solution improves the performance of a heterogeneous cluster environment by distributing data effectively based on performance-oriented sampling.
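A minimal sketch of capacity-aware placement is given below: each block goes to the node with the lowest load-to-capacity ratio, so faster nodes receive proportionally more data. The capacity figures are illustrative and the greedy rule is an assumption; the paper does not specify the internals of its Effective Data Emplacement Algorithm.

```python
# Capacity-aware placement sketch: assign each block to the node with the
# lowest load/capacity ratio, so faster nodes receive more data. Capacities
# are illustrative relative speeds, not measurements from the paper.
def place_blocks(blocks, capacities):
    load = {node: 0 for node in capacities}
    placement = {}
    for b in blocks:
        node = min(load, key=lambda n: load[n] / capacities[n])
        placement[b] = node
        load[node] += 1
    return placement

blocks = [f"block-{i}" for i in range(10)]
capacities = {"node-a": 4.0, "node-b": 2.0, "node-c": 1.0}
print(place_blocks(blocks, capacities))
```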


2012 ◽  
Vol 10 (3) ◽  
pp. 13-26
Author(s):  
Xiaomin Zhu ◽  
Zhongxiang He ◽  
Shengbo Shi

Extensible Markup Language (XML) is a textual markup language of growing importance in Internet web services. However, XML has some distinct disadvantages, such as its inherent redundancy, which consumes scarce network bandwidth, especially in mobile computing. Given the characteristics of mobile commerce, handsets' limited memory and data processing time are two obstacles to applying XML. This paper studies an enhancement of XML for application in mobile e-commerce, called SXML (Simple XML), which improves XML as used in mobile web services. It helps XML producers minimize XML's drawbacks, namely its size overhead and slow processing. Comprehensive simulations show that SXML reduces both the size of XML documents and their processing time, and consequently uses bandwidth effectively.
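The abstract does not specify the SXML encoding, so the sketch below shows only the general kind of compaction involved: stripping inter-element whitespace and abbreviating tags via a fixed dictionary. The tag map and sample document are hypothetical.

```python
# Hedged sketch of the kind of compaction SXML could perform; the actual
# SXML encoding is not specified in the abstract. The tag dictionary and
# sample document are hypothetical.
import xml.etree.ElementTree as ET

TAG_MAP = {"product": "p", "price": "c", "description": "d"}

def compact(xml_text: str) -> bytes:
    root = ET.fromstring(xml_text)
    for el in root.iter():
        el.tag = TAG_MAP.get(el.tag, el.tag)   # abbreviate known tags
        if el.text and not el.text.strip():
            el.text = None                      # drop layout whitespace
        if el.tail and not el.tail.strip():
            el.tail = None
    return ET.tostring(root)

doc = """<catalog>
  <product><price>9.99</price><description>widget</description></product>
</catalog>"""
print(len(doc), "bytes ->", len(compact(doc)), "bytes")
```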


2015 ◽  
Vol 10 (S318) ◽  
pp. 299-305
Author(s):  
Larry Denneau

For even small astronomy projects, the petabyte scale is now upon us. The Asteroid Terrestrial-impact Last Alert System (Tonry 2011) will survey the entire visible sky from Hawaii multiple times per night to search for near-Earth asteroids on impact trajectories. While the ATLAS optical system is modest by modern astronomical standards (two 0.5 m F/2.0 telescopes), each night the ATLAS system will measure nearly 10⁹ astronomical sources to a photometric accuracy of <5%, totaling 10¹² individual observations over its initial 3-year mission. This ever-growing dataset must be searched in real time for moving objects and transients, then archived for further analysis, with alerts for newly discovered near-Earth asteroids (NEAs) disseminated within tens of minutes of detection. ATLAS's all-sky coverage ensures it will discover many ‘rifle shot’ near-misses moving rapidly on the sky as they shoot past the Earth, so the system will need software to automatically detect highly trailed sources and discriminate them from the thousands of low-Earth orbit (LEO) and geosynchronous orbit (GEO) satellites ATLAS will see each night. Additional interrogation will identify interesting phenomena among the millions of transient sources per night beyond the solar system. The data processing and storage requirements for ATLAS demand a ‘big data’ approach typical of commercial internet enterprises. We describe our experience in deploying a nimble, scalable and reliable data processing infrastructure, and suggest ATLAS as a stepping stone to the data processing capability needed as we enter the era of LSST.
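A back-of-envelope check of the quoted figures, with an assumed per-detection record size, shows why a ‘big data’ approach is needed for the catalogue alone (image data add considerably more):

```python
# Back-of-envelope check of the abstract's figures: ~10**9 source
# measurements per night over a 3-year mission. The bytes-per-detection
# figure is an assumption for illustration only.
BYTES_PER_DETECTION = 100      # assumed: position, flux, errors, flags
sources_per_night = 1e9        # from the abstract
nights = 3 * 365               # initial 3-year mission

total_obs = sources_per_night * nights        # ~10**12 observations
total_bytes = total_obs * BYTES_PER_DETECTION
print(f"{total_obs:.1e} observations, ~{total_bytes / 1e12:.0f} TB of catalogue")
```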

