Embedded System Based Television Data Collection and Return Technology

2011 ◽  
Vol 48-49 ◽  
pp. 496-501
Author(s):  
Yu Su ◽  
Shu Hong Wen ◽  
Jian Ping Chai

Television data collection and return technologies are among the key technologies in television secure-broadcasting systems, TV video content surveillance, TV program copyright protection, and client advertisement broadcasting. In China, the dominant methods of TV video content surveillance are manual tape recording and automatic return of the whole TV program. The manual method is too costly, while returning the whole program consumes large amounts of network bandwidth and storage space. This paper proposes a new method of television data collection and return: video fields are extracted from the continuous video and coded at a rate of about one field per second; in other words, one field is taken from every fifty fields of the original PAL video. The extracted frames can be coded by any suitable means, for example JPEG2000 or the intra-frame modes of H.264 or MPEG-2. The TV programs whose content and topic change most frequently are news and advertisements, which may change topic every five to ten seconds, so the extracted sequences retain the same topics as the original video and enough information for content-surveillance applications. The data volume of the extracted sequence is about 3 percent of the original program, which saves a large amount of network bandwidth and storage space. A hardware implementation of this technology based on an embedded system is proposed: the TV Field Extractor, which cyclically extracts images from the target TV program, compresses them with a high-performance compression algorithm, and either stores the resulting sequences of still images on a hard disk or transmits them to the monitoring center via the network. This method markedly reduces device cost, network bandwidth, and storage space, and can be widely adopted in TV program content surveillance and TV secure-broadcasting systems.
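As a rough illustration of the sampling ratio described above, a minimal Python sketch (not the authors' embedded TV Field Extractor) might keep one frame per second of a 25 fps PAL stream, i.e. the equivalent of one field out of every fifty, and store it as a compressed still; OpenCV and plain JPEG stand in here for the dedicated hardware and the JPEG2000/intra-frame coders named in the abstract.

```python
import cv2  # assumed available; used only to illustrate the sampling idea

def extract_stills(src_path, out_dir, quality=80):
    """Keep one frame per second of a PAL stream (~1 of every 50 fields)
    and write each kept frame as a JPEG still. Illustrative sketch only."""
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0   # PAL: 25 frames/s = 50 fields/s
    step = max(1, int(round(fps)))            # one kept frame per second
    idx = kept = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            cv2.imwrite(f"{out_dir}/still_{kept:06d}.jpg", frame,
                        [cv2.IMWRITE_JPEG_QUALITY, quality])
            kept += 1
        idx += 1
    cap.release()
    return kept
```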

2017 ◽  
Vol 1 (1) ◽  
pp. 1-12 ◽  
Author(s):  
Dani Gunawan ◽  
Amalia Amalia ◽  
Atras Najwan

Collecting or harvesting data from the Internet is often done with a web crawler. A general web crawler can be developed to concentrate on a certain topic; this type of crawler is called a focused crawler. To improve data-collection performance, building a focused crawler alone is not enough, even though it already makes efficient use of network bandwidth and storage capacity. This research proposes a distributed focused crawler to improve crawling performance while remaining efficient in network bandwidth and storage capacity. The distributed focused crawler implements crawl scheduling, site ordering to determine the URL queue, and focused crawling using Naïve Bayes. The research also tests crawling performance under multithreading and observes CPU and memory utilization. The conclusion is that crawling performance decreases when too many threads are used: CPU and memory utilization become very high, while the performance of the distributed focused crawler drops.
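The crawling loop the abstract describes (a priority-ordered URL frontier, a topical filter deciding which pages to expand, and a pool of worker threads whose CPU and memory cost can be observed) could be sketched roughly as follows; this is a minimal illustration rather than the authors' system, and the `relevance` callable merely stands in for their Naïve Bayes classifier.

```python
import heapq
import re
import threading
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

LINK_RE = re.compile(r'href="(https?://[^"#]+)"')

def fetch_page(url, timeout=5):
    # Naive fetch: return the page body as text, or "" on any error.
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.read().decode("utf-8", errors="ignore")
    except Exception:
        return ""

class FocusedCrawler:
    """Minimal sketch: shared priority frontier, relevance-gated link
    expansion, and a thread pool whose size drives CPU/memory load."""

    def __init__(self, seeds, relevance, workers=8):
        self.frontier = [(0.0, u) for u in seeds]    # (negated score, url)
        heapq.heapify(self.frontier)
        self.seen = set(u for _, u in self.frontier)
        self.relevance = relevance                   # e.g. a Naive Bayes model
        self.lock = threading.Lock()
        self.workers = workers

    def crawl_one(self):
        with self.lock:
            if not self.frontier:
                return
            _, url = heapq.heappop(self.frontier)
        html = fetch_page(url)
        score = self.relevance(html)                 # P(on-topic | page text)
        if score < 0.5:
            return                                   # off-topic: stop expanding
        for link in LINK_RE.findall(html):
            with self.lock:
                if link not in self.seen:
                    self.seen.add(link)
                    heapq.heappush(self.frontier, (-score, link))

    def run(self, page_budget=100):
        with ThreadPoolExecutor(self.workers) as pool:
            for _ in range(page_budget):
                pool.submit(self.crawl_one)

# Example: crawl from one seed, treating every page as relevant.
# FocusedCrawler(["https://example.com/"], relevance=lambda text: 1.0).run(20)
```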


2014 ◽  
Vol 14 (4) ◽  
pp. 901-916 ◽  
Author(s):  
D. Molinari ◽  
S. Menoni ◽  
G. T. Aronica ◽  
F. Ballio ◽  
N. Berni ◽  
...  

Abstract. In recent years, awareness of the need for more effective disaster data collection, storage, and sharing of analyses has developed in many parts of the world. In line with this advance, Italian local authorities have expressed the need for enhanced methods and procedures for post-event damage assessment in order to obtain data that can serve numerous purposes: to create a reliable and consistent database on the basis of which damage models can be defined or validated, and to supply a comprehensive scenario of flooding impacts according to which priorities can be identified during the emergency and recovery phases and the compensation due to citizens from insurers or local authorities can be established. This paper studies this context and describes ongoing activities in the Umbria and Sicily regions of Italy intended to identify new tools and procedures for flood damage data survey and storage in the aftermath of floods. In the first part of the paper, the current procedures for data gathering in Italy are analysed. The analysis shows that the available knowledge does not enable the definition or validation of damage curves, as the information is poor, fragmented, and inconsistent. A new procedure for data collection and storage is therefore proposed. The entire analysis was carried out at the local level for the residential and commercial sectors only. The objectives of the next steps of the research in the short term are (i) to extend the procedure to other types of damage and (ii) to make the procedure operational within the Italian Civil Protection system. The long-term aim is to develop specific depth–damage curves for Italian contexts.


2020 ◽  
Author(s):  
Rostislav Kouznetsov

Abstract. Lossy compression of scientific data arrays is a powerful tool for saving network bandwidth and storage space. Properly applied, lossy compression can reduce the size of a dataset by orders of magnitude while keeping all essential information, whereas a wrong choice of lossy compression parameters leads to the loss of valuable data. The paper considers the statistical properties of several lossy compression methods implemented in the "NetCDF operators" (NCO), a popular tool for handling and transforming numerical data in NetCDF format. We compare the imprecision and artifacts that result from applying lossy compression to floating-point data arrays. In particular, we show that the popular Bit Grooming algorithm (the default in NCO) has sub-optimal accuracy and produces substantial artifacts in multipoint statistics. We suggest a simple implementation of two algorithms that are free from these artifacts and offer twice the precision. In addition, we suggest a way to rectify data already processed with Bit Grooming. The algorithm has been contributed to the NCO mainstream. The supplementary material contains an implementation of the algorithm in Python 3.
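For readers unfamiliar with this family of methods, the kind of mantissa quantization discussed here can be sketched in a few lines of NumPy; the snippet below is an illustrative round-half-up variant for float32 arrays, not the exact algorithm contributed to NCO nor the paper's supplementary Python 3 code.

```python
import numpy as np

def bit_round(values, keep_bits):
    """Keep `keep_bits` explicit mantissa bits of float32 values, rounding
    to nearest (half up). Illustrative sketch; NaN/Inf are not handled."""
    assert 1 <= keep_bits <= 22               # float32 has 23 explicit mantissa bits
    bits = np.asarray(values, dtype=np.float32).view(np.uint32)
    drop = 23 - keep_bits                     # mantissa bits to discard
    half = np.uint32(1 << (drop - 1))         # half of the last kept unit
    mask = np.uint32((0xFFFFFFFF >> drop) << drop)
    return ((bits + half) & mask).view(np.float32)

# The zeroed trailing mantissa bits compress far better than the raw values:
x = np.array([3.14159265, 2.71828183], dtype=np.float32)
print(bit_round(x, keep_bits=7))
```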


2019 ◽  
Vol 4 (1) ◽  
Author(s):  
Deka Anggawira ◽  
Tamara Adriani Salim

This study discusses the implementation of local wisdom in the preservation of manuscripts at the Universitas Indonesia Library. The purpose of the study is to identify how local wisdom is applied in the preservation of manuscripts in that library. The research uses a qualitative approach, with direct observation and structured interviews as the data collection methods. The results indicate that the Universitas Indonesia Library has implemented local wisdom in preserving its manuscripts. This can be seen in the storage process, including the design of the rooms and storage facilities and the patterns of behavior followed during storage. Local wisdom in the maintenance process includes controlling the environment using traditional approaches and using traditional materials in the maintenance of manuscripts. Another finding is that knowledge is captured and passed down from previous manuscript managers through the staff and is manifested in their preservation practices. It can therefore be understood that the implementation of local wisdom in the preservation of manuscripts at the UI Library rests on preserving the knowledge of previous manuscript managers or librarians.


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Qing An ◽  
Jun Zhang ◽  
Xin Li ◽  
Xiaobing Mao ◽  
Yulong Feng ◽  
...  

The economical/environmental scheduling problem (EESP) of the ship integrated energy system (SIES) is computationally complex: it involves multiple optimization objectives, various types of constraints, and frequently fluctuating load demand. As a result, intelligent scheduling strategies cannot be applied online by the ship energy management system (SEMS), which has limited computing power and storage space. Aiming at green computing on the SEMS, this paper builds a typical SIES-EESP optimization model that considers the form of the decision vectors, the economical/environmental optimization objectives, and the various types of real-world constraints of the SIES. Given the complexity of SIES-EESPs, a two-stage offline-to-online multiobjective optimization strategy is proposed, which transfers part of the online energy-dispatch computation to offline high-performance computer systems. Specific constraint-handling methods are designed to reduce violations of both continuous and discrete constraints. A method for establishing an energy-scheduling scheme base is then proposed: using big data offline, the economical/environmental scheduling solutions for a typical year can be obtained and stored, exploiting the greater computing resources and operation time available on shore. Thereafter, a short-term multiobjective offline-to-online optimization approach for the SEMS is considered, applying a multiobjective evolutionary algorithm (MOEA) together with the stored typical schemes corresponding to the actual SIES-EESPs. Simulation results show that the proposed strategy obtains sufficient feasible Pareto solutions in a shorter time and yields well-distributed Pareto sets with good convergence, adapting well to the features of real-world SIES-EESPs and saving considerable operation time and storage space for the SEMS.
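As a loose illustration of the offline-to-online idea only (the paper's scheme base, MOEA, and constraint handling are far more elaborate), one could imagine the shipboard SEMS retrieving the stored schedule whose typical load profile is closest to the current short-term forecast and using it to seed the online optimizer; the names below (`scheme_base`, `nearest_scheme`) are hypothetical.

```python
import numpy as np

def nearest_scheme(load_forecast, scheme_base):
    """Return the stored dispatch schedule whose typical load profile is
    closest (Euclidean distance) to the observed short-term forecast.
    Hypothetical helper illustrating only the scheme-base lookup step."""
    profiles = np.array([profile for profile, _ in scheme_base])
    dists = np.linalg.norm(profiles - np.asarray(load_forecast), axis=1)
    return scheme_base[int(np.argmin(dists))][1]

# Toy scheme base: two 24-hour typical load profiles (MW) with dummy schedules.
scheme_base = [
    (np.linspace(2.0, 3.0, 24), "low-load schedule"),
    (np.linspace(4.0, 5.0, 24), "high-load schedule"),
]
print(nearest_scheme(np.full(24, 4.6), scheme_base))  # -> "high-load schedule"
```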

