DeFFusion: CNN-based Continuous Authentication Using Deep Feature Fusion

2022 ◽  
Vol 18 (2) ◽  
pp. 1-20
Author(s):  
Yantao Li ◽  
Peng Tao ◽  
Shaojiang Deng ◽  
Gang Zhou

Smartphones have become indispensable in our daily lives, but security and privacy issues remain major concerns for smartphone users. In this article, we present DeFFusion, a CNN-based continuous authentication system using Deep Feature Fusion for smartphone users, which leverages the accelerometer and gyroscope ubiquitously built into smartphones. With the collected data, DeFFusion first converts the time-domain data into frequency-domain data using the fast Fourier transform and then feeds each into a designed CNN. From the CNN-extracted features, DeFFusion selects features using factor analysis and applies balanced feature concatenation to fuse these deep features. Based on a one-class SVM classifier, DeFFusion authenticates the current user as either legitimate or an impostor. We evaluate the authentication performance of DeFFusion in terms of the impact of training data size and time window size, accuracy on different features over different classifiers and on different classifiers with the same CNN-extracted features, accuracy on unseen users, time efficiency, and comparison with representative authentication methods. The experimental results demonstrate that DeFFusion achieves the best accuracy, with a mean equal error rate of 1.00% in a 5-second time window.
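The pipeline above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the CNN feature extractors are replaced by a simple flatten, and the sensor rate, window count, and `nu` value are assumptions for the example.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def to_frequency_domain(window):
    """FFT magnitude of each sensor axis (time domain -> frequency domain)."""
    return np.abs(np.fft.rfft(window, axis=0))

def fuse_features(time_feat, freq_feat):
    """Balanced concatenation: normalize each branch, then concatenate."""
    t = time_feat / (np.linalg.norm(time_feat) + 1e-8)
    f = freq_feat / (np.linalg.norm(freq_feat) + 1e-8)
    return np.concatenate([t, f])

# Enrollment: train a one-class SVM on the legitimate user's windows
# (40 synthetic five-second windows of 6-axis accelerometer/gyroscope data).
rng = np.random.default_rng(0)
train_windows = rng.normal(size=(40, 500, 6))
X = np.stack([fuse_features(w.ravel(), to_frequency_domain(w).ravel())
              for w in train_windows])
clf = OneClassSVM(nu=0.1, gamma="scale").fit(X)

# Authentication: +1 -> legitimate user, -1 -> impostor.
probe = rng.normal(size=(500, 6))
decision = clf.predict(fuse_features(probe.ravel(),
                                     to_frequency_domain(probe).ravel())[None, :])
```

The one-class formulation matters here: only the legitimate user's data is available at enrollment time, so the classifier learns a boundary around that user's behaviour rather than discriminating against known impostors.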

IEEE Access ◽  
2021 ◽  
Vol 9 ◽  
pp. 26138-26146
Author(s):  
Xue Ni ◽  
Huali Wang ◽  
Fan Meng ◽  
Jing Hu ◽  
Changkai Tong

2021 ◽  
Vol 13 (2) ◽  
pp. 328
Author(s):  
Wenkai Liang ◽  
Yan Wu ◽  
Ming Li ◽  
Yice Cao ◽  
Xin Hu

The classification of high-resolution (HR) synthetic aperture radar (SAR) images is of great importance for SAR scene interpretation and application. However, intricate spatial structural patterns and a complex statistical nature make SAR image classification a challenging task, especially with limited labeled SAR data. This paper proposes a novel HR SAR image classification method using a multi-scale deep feature fusion network and a covariance pooling manifold network (MFFN-CPMN). MFFN-CPMN combines the advantages of local spatial features and global statistical properties and considers multi-feature information fusion of SAR images in representation learning. First, we propose a Gabor-filtering-based multi-scale feature fusion network (MFFN), a deep convolutional neural network (CNN), to capture the spatial patterns and obtain discriminative features of SAR images. To make full use of the large amount of unlabeled data, the weights of each MFFN layer are optimized by an unsupervised denoising dual-sparse encoder. Moreover, the feature fusion strategy in MFFN effectively exploits the complementary information between different levels and different scales. Second, we utilize a covariance pooling manifold network to further extract the global second-order statistics of SAR images over the fused feature maps. The obtained covariance descriptor is thus more discriminative for various land covers. Experimental results on four HR SAR images demonstrate the effectiveness of the proposed method, which achieves promising results compared with other related algorithms.
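The covariance pooling step can be sketched as below. This is a generic illustration of second-order pooling with a log-Euclidean mapping, under the assumption of a C-channel fused feature map; the network that produces the feature map and the exact manifold operations of CPMN are not reproduced.

```python
import numpy as np

def covariance_pooling(feature_map, eps=1e-5):
    """feature_map: (C, H, W) -> flattened log-covariance descriptor."""
    C, H, W = feature_map.shape
    Xm = feature_map.reshape(C, H * W)
    Xm = Xm - Xm.mean(axis=1, keepdims=True)
    # Regularize so the covariance is symmetric positive definite (SPD).
    cov = Xm @ Xm.T / (H * W - 1) + eps * np.eye(C)
    # Matrix logarithm via eigendecomposition (log-Euclidean metric).
    vals, vecs = np.linalg.eigh(cov)
    log_cov = vecs @ np.diag(np.log(vals)) @ vecs.T
    # The matrix is symmetric, so keep only the upper triangle.
    iu = np.triu_indices(C)
    return log_cov[iu]

fmap = np.random.default_rng(1).normal(size=(16, 8, 8))
desc = covariance_pooling(fmap)
print(desc.shape)   # (136,) = 16*17/2 upper-triangular entries
```

Mapping the SPD covariance matrix through the matrix logarithm lets a Euclidean classifier operate on descriptors that respect the manifold geometry of covariance matrices.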


Author(s):  
Y. A. Lumban-Gaol ◽  
K. A. Ohori ◽  
R. Y. Peters

Abstract. Satellite-Derived Bathymetry (SDB) has been used in many applications related to coastal management. SDB can efficiently fill data gaps left by traditional echo-sounding measurements. However, it still requires numerous training samples, which are not available in many areas. Furthermore, accuracy problems arise because a linear model cannot capture the non-linear relationship between reflectance and depth caused by bottom variations and noise. Convolutional Neural Networks (CNNs) can capture both the connection between neighbouring pixels and this non-linear relationship, which makes them compelling for shallow-water depth extraction. We investigate the accuracy of different architectures using different window sizes and band combinations. We use Sentinel-2 Level 2A images to provide reflectance values, and Lidar and Multi Beam Echo Sounder (MBES) datasets as depth references to train and test the model. A set of Sentinel-2 and in-situ depth subimage pairs is extracted to perform CNN training. The model is compared to the linear transform and applied to two other study areas. The resulting accuracy ranges from 1.3 m to 1.94 m, and the coefficient of determination reaches 0.94. The SDB model generated using a 9x9 window size agrees well with the reference depths, especially in areas deeper than 15 m. Adding both short-wave infrared bands to the four visible bands in training improves the overall accuracy of SDB. Applying the pre-trained model to other study areas provides similar results, depending on the water conditions.
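The subimage-pair extraction step can be sketched as follows. This is an illustrative reconstruction, assuming a (bands, H, W) reflectance array and point soundings; `window=9` matches the 9x9 configuration reported above, and all names and data here are hypothetical.

```python
import numpy as np

def extract_pairs(reflectance, soundings, window=9):
    """soundings: list of (row, col, depth). Returns (patches, depths),
    skipping points too close to the image border for a full window."""
    half = window // 2
    patches, depths = [], []
    _, H, W = reflectance.shape
    for r, c, d in soundings:
        if half <= r < H - half and half <= c < W - half:
            patches.append(reflectance[:, r - half:r + half + 1,
                                          c - half:c + half + 1])
            depths.append(d)
    return np.stack(patches), np.array(depths)

rng = np.random.default_rng(2)
img = rng.random((6, 100, 100))      # e.g. 4 visible + 2 SWIR bands
pts = [(20, 30, 5.2), (50, 60, 17.8), (3, 4, 3.0)]  # last point near edge
X, y = extract_pairs(img, pts)
print(X.shape, y.shape)   # (2, 6, 9, 9) (2,)
```

Each patch-depth pair then serves as one CNN training example, with the network regressing depth from the reflectance neighbourhood rather than from a single pixel.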


Author(s):  
D. Gritzner ◽  
J. Ostermann

Abstract. Modern machine learning, especially deep learning, which is used in a variety of applications, requires a lot of labelled data for model training. Having an insufficient amount of training examples leads to models which do not generalize well to new input instances. This is a particularly significant problem for tasks involving aerial images: training data is often available only for a limited geographical area and a narrow time window, leading to models that perform poorly in different regions, at different times of day, or during different seasons. Domain adaptation can mitigate this issue by using labelled source-domain training examples and unlabelled target-domain images to train a model which performs well on both domains. Modern adversarial domain adaptation approaches use unpaired data. We propose using pairs of semantically similar images, i.e., images whose segmentations are accurate predictions of each other, for improved model performance. In this paper we show that, as an upper limit based on ground truth, using semantically paired aerial images during training almost always increases model performance, with an average improvement of 4.2% accuracy and 0.036 mean intersection-over-union (mIoU). Using a practical estimate of semantic similarity, we still achieve improvements in more than half of all cases, with average improvements of 2.5% accuracy and 0.017 mIoU in those cases.
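The similarity measure underlying the pairing criterion can be sketched as below: mean intersection-over-union between two segmentation maps, averaged over classes present in at least one map. This is a generic mIoU implementation for illustration, not the authors' exact estimator.

```python
import numpy as np

def mean_iou(seg_a, seg_b, num_classes):
    """Mean IoU between two label maps of equal shape."""
    ious = []
    for cls in range(num_classes):
        a, b = seg_a == cls, seg_b == cls
        union = np.logical_or(a, b).sum()
        if union:   # skip classes absent from both maps
            ious.append(np.logical_and(a, b).sum() / union)
    return float(np.mean(ious))

a = np.array([[0, 0, 1],
              [1, 2, 2]])
b = np.array([[0, 1, 1],
              [1, 2, 2]])
print(round(mean_iou(a, b, 3), 4))   # 0.7222
```

Two aerial images whose predicted segmentations score a high mutual mIoU are treated as semantically similar, which is what makes them useful as a training pair.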


Author(s):  
Wenting Zhao ◽  
Yunhong Wang ◽  
Xunxun Chen ◽  
Yuanyan Tang ◽  
Qingjie Liu

Author(s):  
Ayda Saidane ◽  
Saleh Al-Sharieh

Regulatory compliance is a top priority for organizations in highly regulated ecosystems. As most operations are automated, compliance efforts focus on the information systems supporting the organizations' business processes and, to a lesser extent, on the humans using, managing, and maintaining them. Yet the human factor is an unpredictable and challenging component of a secure system and should be considered throughout the development process as both a legitimate user and a threat. In this chapter, the authors propose COMPARCH, a compliance-driven system engineering framework for privacy and security in socio-technical systems. It consists of (1) a risk-based requirement management process, (2) a test-driven security and privacy modeling framework, and (3) a simulation-based validation approach. Satisfaction of the regulatory requirements is evaluated through analysis of the simulation traces. The authors use as a running example an E-CITY system providing municipality services to local communities.


2020 ◽  
Vol 9 (2) ◽  
pp. 109 ◽  
Author(s):  
Bo Cheng ◽  
Shiai Cui ◽  
Xiaoxiao Ma ◽  
Chenbin Liang

Feature extraction for urban areas is one of the most important directions of polarimetric synthetic aperture radar (PolSAR) applications. High-resolution PolSAR images are characterized by high dimensionality and nonlinearity. Therefore, to find intrinsic features for target recognition, a building-area extraction method for PolSAR images based on the Adaptive Neighborhood Selection Neighborhood Preserving Embedding (ANSNPE) algorithm is proposed. First, 52 features are extracted using the gray-level co-occurrence matrix (GLCM) and five polarization decomposition methods; the feature set is divided into 20-, 36-, and 52-dimensional subsets. Next, the ANSNPE algorithm is applied to the training samples, and the resulting projection matrix is applied to the test image to extract new features. Lastly, a Support Vector Machine (SVM) classifier and post-processing are used to extract the building area, and the accuracy is evaluated. Comparative experiments on Radarsat-2 data show that the ANSNPE algorithm effectively extracts building areas and generalizes well: the projection matrix obtained from the training data can be directly applied to new samples, and the building-area extraction accuracy is above 80%. The combination of polarization and texture features provides a wealth of information that is conducive to building-area extraction.
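One texture-feature step from the 52-feature set can be sketched as below: a gray-level co-occurrence matrix (GLCM) and two of its standard statistics. This is a pure-NumPy illustration with a toy 4-level image; offset (0, 1) counts horizontal neighbor pairs.

```python
import numpy as np

def glcm(img, levels, offset=(0, 1)):
    """Normalized co-occurrence matrix for integer image `img`."""
    dr, dc = offset
    P = np.zeros((levels, levels))
    H, W = img.shape
    for r in range(max(0, -dr), H - max(0, dr)):
        for c in range(max(0, -dc), W - max(0, dc)):
            P[img[r, c], img[r + dr, c + dc]] += 1
    return P / P.sum()

img = np.array([[0, 0, 1],
                [1, 2, 2],
                [2, 2, 3]])
P = glcm(img, levels=4)
i, j = np.indices(P.shape)
contrast = float(((i - j) ** 2 * P).sum())          # weights dissimilar pairs
homogeneity = float((P / (1 + (i - j) ** 2)).sum()) # weights similar pairs
print(contrast, homogeneity)   # 0.5 0.75
```

Statistics like these, computed per channel and direction, feed the high-dimensional feature set that ANSNPE then projects into a lower-dimensional space before SVM classification.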

