A Lossless Data-Hiding based IoT Data Authenticity Model in Edge-AI for Connected Living

2022 · Vol 22 (3) · pp. 1-25
Author(s): Mohammad Saidur Rahman, Ibrahim Khalil, Xun Yi, Mohammed Atiquzzaman, Elisa Bertino

Edge computing is an emerging technology for acquiring Internet-of-Things (IoT) data and provisioning services in connected living. Artificial Intelligence (AI)-powered edge devices (edge-AI) facilitate intelligent IoT data acquisition and services through data analytics. However, data in edge networks are prone to several security threats, such as external and internal attacks and transmission errors. Attackers can inject false data during data acquisition or modify stored data in the edge data storage to hamper data analytics. Therefore, an edge-AI device must verify the authenticity of IoT data before using them in data analytics. This article presents an IoT data authenticity model in edge-AI for connected living using data hiding techniques. Our proposed data authenticity model securely hides the data source's identification number within the IoT data before sending it to edge devices. Edge-AI devices extract the hidden information to verify data authenticity. Existing data hiding approaches for biosignals cannot reconstruct the original IoT data after extracting the hidden message (i.e., they are lossy) and are therefore unusable for IoT data authenticity. In this article, we propose the first lossless IoT data hiding technique, based on error-correcting codes (ECCs). We conduct several experiments to demonstrate the performance of our proposed method. Experimental results establish the lossless property of the proposed approach while maintaining the other data hiding properties.
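
The abstract leaves the ECC construction at a high level. As a hedged illustration of how an error-correcting code can make hiding fully reversible, a minimal Python sketch using a (7,4) Hamming code follows; this is our own example, not the paper's scheme. The sender encodes the data, hides three bits by flipping at most one codeword bit, and the extractor reads those bits from the syndrome before correcting the deliberate flip, restoring the data exactly.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column j (1-indexed) is
# the binary representation of j, so flipping bit j alone yields syndrome j.
H = np.array([[int(b) for b in f"{j:03b}"] for j in range(1, 8)]).T  # 3 x 7

def hamming_encode(data4):
    """Encode 4 data bits as a 7-bit codeword with zero syndrome."""
    cw = np.zeros(7, dtype=int)
    cw[[2, 4, 5, 6]] = data4                 # data bits at positions 3, 5, 6, 7
    s = H @ cw % 2
    cw[0], cw[1], cw[3] = s[2], s[1], s[0]   # parity bits at positions 1, 2, 4
    return cw

def embed(codeword, msg):
    """Hide a value msg in 0..7 by flipping at most one codeword bit."""
    stego = codeword.copy()
    if msg:
        stego[msg - 1] ^= 1
    return stego

def extract(stego):
    """Read the hidden value from the syndrome, undo the flip, and return
    both the message and the losslessly restored data bits."""
    msg = int("".join(map(str, H @ stego % 2)), 2)
    restored = stego.copy()
    if msg:
        restored[msg - 1] ^= 1
    return msg, restored[[2, 4, 5, 6]]

data = np.array([1, 0, 1, 1])                # e.g., a nibble of an IoT sample
msg, recovered = extract(embed(hamming_encode(data), msg=5))
assert msg == 5 and np.array_equal(recovered, data)  # hidden bits read, data intact
```

The cost is the usual code-rate expansion (seven carrier bits per four data bits); the paper's actual construction and parameters may differ.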

2020 · Vol 9 (1) · pp. 45-56
Author(s): Akella Subhadra

Data Science is associated with new discoveries: the discovery of value from data. It is the practice of deriving insights and developing business strategies by transforming data into useful information. It has evolved as a scientific field through research in disciplines such as statistics, computing science, and intelligence science, and through practical transformation in domains such as science, engineering, the public sector, business, and lifestyle. The field encompasses the larger areas of artificial intelligence, data analytics, machine learning, pattern recognition, natural language understanding, and big data manipulation. It also tackles related new scientific challenges, ranging from data capture, creation, storage, retrieval, sharing, analysis, optimization, and visualization, to integrative analysis across heterogeneous and interdependent complex resources for better decision-making, collaboration, and, ultimately, value creation. In this paper we cover the epicycles of analysis, formal modeling, the path from data analysis to data science, and data analytics as a keystone of data science. Big data is not a single technology but an amalgamation of old and new technologies that helps companies gain actionable insight. Big data is vital because it manages, stores, and manipulates large amounts of data at the required speed and time. It addresses distinct requirements: combining multiple unrelated datasets, processing large amounts of unstructured data, and harvesting hidden information in a time-sensitive manner. As businesses struggle to keep up with changing market requirements, some companies are finding creative ways to apply Big Data to their growing business needs and increasingly complex problems. As organizations evolve their processes and see the opportunities that Big Data can provide, they strive to move beyond traditional Business Intelligence activities, like using data to populate reports and dashboards, toward Data Science-driven projects that aim to answer more open-ended and sophisticated questions. Although some organizations are fortunate enough to have data scientists, most are not, because a growing talent gap makes finding and hiring data scientists in a timely manner difficult. This paper aims to give a close view of data science and big data, including big data concepts such as data storage, data processing, and data analysis; we also briefly describe big data analytics and its characteristics, data structures, and the data analytics life cycle, emphasizing critical points on these issues.


2018 · Vol 06 (06) · pp. 110-115
Author(s): Panchami Anil, Anas P V, Naseef Kuruvakkottil, Anusha K V, Balagopal N

2015
Author(s): Vishal Ahuja, John R. Birge, Chad Syverson, Elbert S. Huang, Min-Woong Sohn

Author(s): Benjamin Shao, Robert D. St. Louis

Many companies are forming data analytics teams to put data to work. To enhance procurement practices, chief procurement officers (CPOs) must work effectively with data analytics teams, from hiring and training to managing and utilizing team members. This chapter presents the findings of a study on how CPOs use data analytics teams to support the procurement process. Surveys and interviews indicate that companies exhibit different levels of maturity in using data analytics, but both the goal of CPOs (i.e., improving performance to support the business strategy) and the way they interact with data analytics teams to achieve that goal are common across companies. However, as data become more reliably available and technologies become more intelligently embedded, the best practices for organizing and managing data analytics teams for procurement will need to be updated continually.


2021 · pp. 1-11
Author(s): Kusan Biswas

In this paper, we propose a frequency-domain data hiding method for JPEG-compressed images. The proposed method embeds data in the DCT coefficients of selected 8 × 8 blocks. According to theories of the Human Visual System (HVS), human vision is less sensitive to perturbation of pixel values in the uneven areas of an image. We therefore propose a Singular Value Decomposition-based image roughness measure (SVD-IRM), which we use to select coarse 8 × 8 blocks as data embedding destinations. Moreover, to make the embedded data more robust against re-compression attacks and errors due to transmission over noisy channels, we employ Turbo error-correcting codes. The actual data embedding is done using a proposed variant of matrix encoding that can embed three bits by modifying only one bit in a block of seven carrier features. We have carried out experiments to validate the performance, and the proposed method achieves better payload capacity and visual quality, and is more robust, than some recent state-of-the-art methods in the literature.
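
Embedding three bits while changing at most one of seven carrier bits is the signature of Hamming-syndrome (matrix) encoding, as popularized by F5. A minimal Python sketch of that generic scheme, not the authors' exact variant, might look like:

```python
import numpy as np

# 3 x 7 parity-check matrix of the (7,4) Hamming code: column j (1-indexed)
# is the binary representation of j.
H = np.array([[int(b) for b in f"{j:03b}"] for j in range(1, 8)]).T

def matrix_embed(carrier, msg):
    """Embed 3 message bits into 7 carrier bits, changing at most one."""
    s = H @ carrier % 2                      # current syndrome
    pos = int("".join(map(str, s ^ msg)), 2)
    stego = carrier.copy()
    if pos:                                  # pos == 0: syndrome already matches
        stego[pos - 1] ^= 1                  # column pos equals s XOR msg
    return stego

def matrix_extract(stego):
    """The hidden 3 bits are simply the syndrome of the stego bits."""
    return H @ stego % 2

carrier = np.array([1, 0, 1, 1, 0, 0, 1])    # e.g., LSBs of 7 selected DCT coefficients
msg = np.array([1, 0, 1])
stego = matrix_embed(carrier, msg)
assert np.array_equal(matrix_extract(stego), msg)
assert np.sum(stego != carrier) <= 1         # at most one carrier bit modified
```

Changing at most one feature in seven per three-bit message is what keeps the embedding distortion low at a given payload.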


2021 · Vol 3 (6)
Author(s): César de Oliveira Ferreira Silva, Mariana Matulovic, Rodrigo Lilla Manzione

Groundwater governance uses modeling to support decision making; data science techniques are therefore essential. Specific difficulties arise because the models must use variables that cannot be directly measured, such as aquifer recharge and groundwater flow. Moreover, such techniques involve ethical questions, often not stated explicitly, that cannot be resolved in a straightforward way when supporting groundwater governance. In this study, we propose an approach called the “open-minded roadmap” to guide data analytics and modeling for groundwater governance decision making. To frame the ethical questions, we use the concept of geoethical thinking, a method that combines geoscientific expertise with the societal responsibility of the geoscientist. We present a case study, a groundwater monitoring and modeling experiment using data analytics methods in southeast Brazil. A model based on fuzzy logic (with high expert intervention) and three data-driven models (with low expert intervention) are tested and evaluated for estimating aquifer recharge in watersheds. The roadmap approach addresses three issues: (a) data acquisition, (b) modeling, and (c) the open-minded (geo)ethical attitude. The level of expert intervention in the modeling stage and model validation are discussed. Gaps in model use are sought by anticipating issues through the development of application scenarios, before a final decision is reached. When a model validated in one watershed is extrapolated to neighboring watersheds, we find large asymmetries in the recharge estimates; hence, more information (data, expertise, etc.) is needed to improve the models' predictive skill. In the resulting iterative approach, new questions will arise as new information becomes available, so steady recourse to the open-minded roadmap is recommended.
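
As a hedged illustration of the extrapolation check described above (calibrate a data-driven recharge model in one watershed, then evaluate it in a neighbor), a minimal Python sketch with entirely synthetic data and hypothetical feature names follows; the study's actual models, variables, and data are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Hypothetical monitoring features (e.g., rainfall, water-table depth) and
# recharge targets for two watersheds; real field data would replace these.
def synthetic_watershed(n, bias):
    X = rng.uniform(0, 1, size=(n, 2))                              # [rainfall, wt_depth]
    y = 100 * X[:, 0] - 30 * X[:, 1] + bias + rng.normal(0, 5, n)   # recharge (mm)
    return X, y

X_a, y_a = synthetic_watershed(200, bias=0.0)    # calibration watershed
X_b, y_b = synthetic_watershed(200, bias=20.0)   # neighboring watershed

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_a, y_a)

# In-watershed error vs. extrapolation error: a large gap signals the kind of
# asymmetry the roadmap flags before a model is reused elsewhere.
print("MAE in calibration watershed:", mean_absolute_error(y_a, model.predict(X_a)))
print("MAE in neighboring watershed:", mean_absolute_error(y_b, model.predict(X_b)))
```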

