Smart Surveillance System using Deep Learning

2020 ◽  
Vol 9 (1) ◽  
pp. 1151-1155

In industry and research, big data applications consume most of the available storage. Among the sources of big data, video streams from CCTV cameras are of equal importance to other sources such as medical data and social media data. CCTV cameras are installed for security purposes in all places where security is of high importance. Security can be defined in different ways, such as theft identification and violence detection, and in highly secured areas it plays a major role in a real-time environment. This paper discusses detecting and recognising the facial features of persons using deep learning concepts, covering object detection, action detection, and identification. The issues found in existing methods are identified and summarized.

Nowadays, big data applications hold most of the importance and space in industry and research, and surveillance videos are a major contributor to unstructured big data. The main objective of this paper is to give a brief overview of video analysis using deep learning techniques in order to detect suspicious activities. Our main focus is on applying deep learning techniques to detect the count and number of persons involved, and the activity going on in a crowd, under all conditions [9]. This video analysis helps us achieve security, which can be defined in different terms such as identification of theft and detection of violence. Suspicious human activity detection is simply the process of detecting unusual (abnormal) human activities. For this we need to convert the video into frames; processing these frames lets us analyze the persons and their activities. There are two modules in this system: an Object Detection Module and an Activity Detection Module. The object detection module detects whether an object is present or not; after an object is detected, the second module checks whether the activity is suspicious or not.
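The two-module flow described above (object detection first, activity detection only on frames that contain a person) can be sketched as follows. Both detectors here are placeholder stubs for illustration, not the paper's actual deep learning models:

```python
# Hypothetical sketch of the two-module surveillance pipeline.
# Frames go through an object-detection stage; frames containing a
# person are passed on to an activity-detection stage.

def detect_objects(frame):
    """Stub object detector: returns the labels stored in the frame."""
    return frame.get("objects", [])

def classify_activity(frame):
    """Stub activity classifier: returns 'suspicious' or 'normal'."""
    return frame.get("activity", "normal")

def analyze_video(frames):
    """Run object detection first, then activity detection."""
    alerts = []
    for i, frame in enumerate(frames):
        if "person" not in detect_objects(frame):
            continue  # no object of interest in this frame
        if classify_activity(frame) == "suspicious":
            alerts.append(i)  # flag the frame index for review
    return alerts

frames = [
    {"objects": [], "activity": "normal"},
    {"objects": ["person"], "activity": "normal"},
    {"objects": ["person"], "activity": "suspicious"},
]
print(analyze_video(frames))  # [2]
```

In a real system the stubs would be replaced by trained detectors and the frames would come from decoded CCTV video; the staged structure is the point being illustrated.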


2016 ◽  
Vol 16 (3) ◽  
pp. 35-51 ◽  
Author(s):  
M. Senthilkumar ◽  
P. Ilango

Abstract Big data applications with scheduling have become an active research area in the last three years. The Hadoop framework has become one of the most popular and widely used frameworks for distributed data processing; Hadoop is also open-source software that allows the user to utilize hardware effectively. The various scheduling algorithms of the MapReduce model in Hadoop differ in design and behavior, and are used to handle issues such as data locality, resource awareness, energy, and time. This paper gives an outline of job scheduling, a classification of schedulers, and a comparison of existing algorithms with their advantages, drawbacks, and limitations. We also discuss various tools and frameworks used for monitoring, and ways to improve performance in MapReduce. This paper helps beginners and researchers understand the scheduling mechanisms used in big data.
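For readers new to the MapReduce model that these Hadoop schedulers orchestrate, a minimal word-count sketch shows the three phases a scheduler must place across the cluster; this is the programming model only, not Hadoop's implementation:

```python
# Minimal sketch of the MapReduce model: map emits key-value pairs,
# shuffle groups them by key, reduce aggregates each group.

from collections import defaultdict

def map_phase(document):
    """Emit (word, 1) for every word in the document."""
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    """Group intermediate values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Sum the counts for each word."""
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big jobs", "data locality"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts)  # {'big': 2, 'data': 2, 'jobs': 1, 'locality': 1}
```

In Hadoop, each map and reduce call becomes a task, and the scheduling issues surveyed above (data locality, resource awareness) decide on which node each task runs.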


Author(s):  
Seong-Wook Park ◽  
Junyoung Park ◽  
Kyeongryeol Bong ◽  
Dongjoo Shin ◽  
Jinmook Lee ◽  
...  

2017 ◽  
Vol 28 (06) ◽  
pp. 661-682
Author(s):  
Rashed Mazumder ◽  
Atsuko Miyaji ◽  
Chunhua Su

Security, privacy, and data integrity are critical issues in big data applications in IoT-enabled environments and cloud-based services, and many challenges remain in establishing secure computation for big data applications. Authenticated encryption (AE) plays one of the core roles in big data's confidentiality, integrity, and real-time security, and many proposals exist in this research area. Generally, there are two concepts under the security notion of AE: nonce respect and nonce reuse. However, recent studies show that nonce reuse sacrifices the security bound of the AE. In this paper, we consider a nonce-respecting scheme and a probabilistic encryption scheme that are more efficient and suitable for big data applications; both schemes are based on a keyed function. Our first scheme (FS) operates in parallel mode, its security is based on nonce respect, it supports associated data, and it needs fewer calls to the function/block cipher. Our second scheme, by contrast, is based on probabilistic encryption and is expected to be a lighter solution because of its weaker security-model construction. Both schemes satisfy a reasonable privacy security bound.
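To make the nonce-respect idea concrete, here is a toy authenticated-encryption construction built from a keyed function (HMAC-SHA256). It is not the paper's FS scheme, only an illustration of the pattern: a keystream derived from (key, nonce, counter) encrypts, and a second keyed pass authenticates the nonce, associated data, and ciphertext. Security rests entirely on never repeating a nonce under the same key:

```python
# Toy nonce-respecting AE from a keyed function (illustration only,
# not the paper's construction and not for production use).

import hmac, hashlib

def keystream(key, nonce, length):
    """Derive a keystream as HMAC(key, nonce || counter)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(key, nonce, associated_data, plaintext):
    ct = bytes(p ^ k for p, k in
               zip(plaintext, keystream(key, nonce, len(plaintext))))
    # Tag binds nonce, associated data, and ciphertext together.
    tag = hmac.new(key, nonce + associated_data + ct, hashlib.sha256).digest()
    return ct, tag

def decrypt(key, nonce, associated_data, ct, tag):
    expected = hmac.new(key, nonce + associated_data + ct,
                        hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    return bytes(c ^ k for c, k in zip(ct, keystream(key, nonce, len(ct))))

key, nonce = b"k" * 32, b"n" * 12  # the nonce must never repeat per key
ct, tag = encrypt(key, nonce, b"header", b"big data record")
print(decrypt(key, nonce, b"header", ct, tag))  # b'big data record'
```

Reusing the nonce here leaks the XOR of two plaintexts, which is exactly the security-bound loss that nonce-reuse analyses of AE schemes quantify.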


Author(s):  
Rajni Aron ◽  
Deepak Kumar Aggarwal

Cloud Computing has become a buzzword in the IT industry. Cloud Computing, which provides inexpensive computing resources on a pay-as-you-go basis, is rapidly gaining momentum as a substitute for traditional Information Technology (IT) based organizations. The increased utilization of Clouds therefore makes the execution of Big Data processing jobs a vital research area. As more and more users store and process their real-time data in Cloud environments, resource provisioning and scheduling of Big Data processing jobs become key considerations for the efficient execution of Big Data applications. This chapter discusses the fundamental concepts supporting the terms Cloud Computing and Big Data, and the relationship between them. It will help researchers identify the important characteristics of Cloud Resource Management Systems for handling Big Data processing jobs, and will also help them select the most suitable technique for processing Big Data jobs in a Cloud Computing environment.


At present, big data applications such as social networking, medical healthcare, agriculture, banking, stock markets, education, Facebook, and so on are generating data at very high speed. The volume and velocity of big data play a significant role in the performance of big data applications. That performance can be affected by various parameters; search speed, efficiency, and accuracy are among the dominant ones. Because of the direct and indirect involvement of the 7 Vs of big data, every big data service expects high performance, and high performance is the biggest challenge in the present changing scenario. In this paper we propose a big data classification approach to speed up big data applications. This is a review paper: we survey various big data technologies and related work in the field of big data classification, and after studying the literature we identify the gaps in existing work and techniques. Finally, we propose a novel approach to big data classification. Our approach relies on deep learning and the Apache Spark architecture. The proposed work has two stages: the first stage is feature selection and the second stage is big data classification. Apache Spark is the most suitable and dominant technology for executing this proposed work. Apache Spark here has two sets of nodes, initial nodes and final nodes: feature selection takes place in the initial nodes and big data classification takes place in the final nodes.
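The proposed two-stage structure (feature selection, then classification) can be sketched in miniature. Variance-threshold selection and a nearest-centroid classifier below are stand-ins chosen for brevity, not the paper's deep learning model, and real deployments would run each stage as a Spark job rather than plain Python:

```python
# Illustrative two-stage pipeline: stage 1 selects features,
# stage 2 classifies on the reduced representation.

def select_features(rows, k):
    """Stage 1: keep the k columns with the highest variance."""
    cols = list(zip(*rows))
    def variance(col):
        mean = sum(col) / len(col)
        return sum((x - mean) ** 2 for x in col) / len(col)
    keep = sorted(range(len(cols)), key=lambda i: variance(cols[i]),
                  reverse=True)[:k]
    keep.sort()
    return keep, [[row[i] for i in keep] for row in rows]

def nearest_centroid(train, labels, sample):
    """Stage 2: assign the label of the closest class centroid."""
    by_label = {}
    for row, label in zip(train, labels):
        by_label.setdefault(label, []).append(row)
    def centroid(rows):
        return [sum(col) / len(col) for col in zip(*rows)]
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(by_label, key=lambda lbl: dist(centroid(by_label[lbl]), sample))

rows = [[0.0, 1.0, 5.0], [0.0, 2.0, 6.0], [0.0, 9.0, 1.0], [0.0, 8.0, 2.0]]
labels = ["a", "a", "b", "b"]
keep, reduced = select_features(rows, 2)   # drops the constant column 0
label = nearest_centroid(reduced, labels, [8.5, 1.5])
print(keep, label)  # [1, 2] b
```

The split mirrors the proposed node layout: feature selection could run on the initial nodes and the classifier on the final nodes, with only the reduced rows shuffled between them.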


Author(s):  
Sangwon Chae ◽  
Sungjun Kwon ◽  
Donghyun Lee

Infectious disease occurs when a person is infected by a pathogen from another person or an animal. It is a problem that causes harm at both individual and macro scales. The Korea Centers for Disease Control (KCDC) operates a surveillance system to minimize infectious disease contagion. However, in this system it is difficult to act immediately against infectious disease because of missing and delayed reports; moreover, infectious disease trends are not known, which makes prediction difficult. This study predicts infectious diseases by optimizing the parameters of deep learning algorithms over big data, including social media data. The performance of the deep neural network (DNN) and long short-term memory (LSTM) models was compared with the autoregressive integrated moving average (ARIMA) model when predicting three infectious diseases one week into the future. The results show that the DNN and LSTM models perform better than ARIMA. When predicting chickenpox, the top-10 DNN and LSTM models improved average performance by 24% and 19%, respectively. The DNN model performed stably, and the LSTM model was more accurate when infectious disease was spreading. We believe that this study's models can help eliminate reporting delays in existing surveillance systems and, therefore, minimize costs to society.
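The one-week-ahead setup the DNN/LSTM models are trained on can be shown with a small windowing sketch; the window length and case counts below are made up for illustration:

```python
# Turn a weekly case-count series into supervised (window, target)
# pairs for one-week-ahead prediction.

def make_windows(series, window):
    """Pair each `window`-week history with the following week's value."""
    pairs = []
    for i in range(len(series) - window):
        pairs.append((series[i:i + window], series[i + window]))
    return pairs

weekly_cases = [12, 15, 14, 20, 26, 31, 28]
pairs = make_windows(weekly_cases, window=3)
print(pairs[0])   # ([12, 15, 14], 20)
print(len(pairs)) # 4
```

A DNN would consume each window as a flat feature vector, while an LSTM would consume it step by step; ARIMA, by contrast, fits directly on the raw series without this explicit windowing.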

