optimal fusion
Recently Published Documents

TOTAL DOCUMENTS: 118 (FIVE YEARS: 21)
H-INDEX: 13 (FIVE YEARS: 3)

2021 · Vol 2137 (1) · pp. 012061
Author(s): Jiebin Zhang, Shangyou Zeng, Ying Wang, Jinjin Wang, Hongyang Chen

Abstract: Since existing commercial imaging equipment cannot meet the requirements of high dynamic range (HDR) imaging, multi-exposure image fusion is an economical and fast way to achieve HDR. However, existing multi-exposure image fusion algorithms suffer from long fusion times and large data storage requirements. We propose an extreme-exposure image fusion method based on deep learning. In this method, two extreme-exposure image sequences are fed into the network; channel and spatial attention mechanisms are introduced to automatically learn and optimize the weights, and the optimal fusion weights are output. In addition, the model is trained on real values and uses a new custom loss function to bring the output closer to the ground truth. Experimental results show that this method is superior to existing methods both objectively and subjectively.
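The attention-weighted fusion described in this abstract can be sketched as follows. This is a minimal illustrative numpy sketch, not the authors' network: the function names, feature shapes, and the softmax-style weight normalization are all assumptions; in the paper the gates are learned layers, whereas here they are fixed sigmoid gates.

```python
import numpy as np

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # feat: (C, H, W). Pool over space, gate each channel.
    gate = _sigmoid(feat.mean(axis=(1, 2)))        # (C,)
    return feat * gate[:, None, None]

def spatial_attention(feat):
    # Pool over channels, gate each pixel.
    gate = _sigmoid(feat.mean(axis=0))             # (H, W)
    return feat * gate[None, :, :]

def fuse(under, over):
    # under/over: (C, H, W) features of the under-/over-exposed inputs.
    a = spatial_attention(channel_attention(under))
    b = spatial_attention(channel_attention(over))
    # Per-pixel fusion weights, normalized to sum to 1.
    wa, wb = np.exp(a.mean(axis=0)), np.exp(b.mean(axis=0))
    w = wa / (wa + wb)                             # (H, W), in (0, 1)
    return w[None] * under + (1.0 - w[None]) * over
```

Because the weights lie in (0, 1) and sum to 1 per pixel, the fused value is always a convex combination of the two inputs, which is the property that makes an attention map interpretable as a fusion weight.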


2021
Author(s): Mingrong Xiang, Jingyu Hou, Wei Luo, Wenjing Tao, Deshou Wang

2021 · Vol 32 (3) · pp. 538-544
Author(s): Chen Sheng, Zhao Yongbo, Pang Xiaojiao, Hu Yili, Cao Chenghu

Sensors · 2021 · Vol 21 (9) · pp. 3059
Author(s): Christopher Funk, Benjamin Noack, Uwe D. Hanebeck

Information fusion in networked systems poses challenges with respect to both theory and implementation. Limited available bandwidth can become a bottleneck when high-dimensional estimates and associated error covariance matrices need to be transmitted. Compression of estimates and covariance matrices can endanger desirable properties like unbiasedness and may lead to unreliable fusion results. In this work, quantization methods for estimates and covariance matrices are presented and their usage with the optimal fusion formulas and covariance intersection is demonstrated. The proposed quantization methods significantly reduce the bandwidth required for data transmission while retaining unbiasedness and conservativeness of the considered fusion methods. Their performance is evaluated using simulations, showing their effectiveness even in the case of substantial data reduction.
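The two ingredients this abstract combines, covariance intersection and conservativeness-preserving quantization, can be sketched as follows. This is not the paper's method; the uniform quantizer and the error-bound inflation are assumptions chosen to illustrate why a quantized covariance can stay conservative (i.e. never under-report uncertainty).

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, w=0.5):
    # Standard CI fusion of two estimates with unknown cross-correlation:
    # P^-1 = w*P1^-1 + (1-w)*P2^-1, which is conservative for any w in [0, 1].
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(w * I1 + (1.0 - w) * I2)
    x = P @ (w * I1 @ x1 + (1.0 - w) * I2 @ x2)
    return x, P

def quantize_covariance(P, step=0.05):
    # Uniformly quantize each entry, then inflate by a bound on the
    # quantization error so that P_q - P is positive semidefinite:
    # |E_ij| <= step/2  =>  ||E||_2 <= ||E||_F <= n*step/2.
    Q = np.round(P / step) * step
    n = P.shape[0]
    return Q + (n * step / 2.0) * np.eye(n)
```

The inflation term is what keeps the transmitted matrix usable in CI: a receiver fusing `quantize_covariance(P)` instead of `P` still never claims more certainty than the sender actually has, at the cost of a slightly larger reported covariance.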


2021 · Vol 11 (5) · pp. 2005
Author(s): Toan Huy Bui, Kazuhiko Hamamoto, May Phu Paing

Caries is among the most common diseases and affects the oral health of billions of people worldwide. Despite the importance and necessity of well-designed detection methods, studies on caries detection remain limited and show restricted performance. In this paper, we propose a computer-aided diagnosis (CAD) method to detect caries in dental radiographs. The proposed method consists of two main stages: feature extraction and classification. In the feature extraction stage, the chosen 2D tooth image is used to extract deep activated features with a deep pre-trained model and geometric features with mathematical formulas. The two feature sets are then combined into a fusion feature set so that each compensates for the other's weaknesses. The optimal fusion feature set is fed into well-known classification models, namely support vector machine (SVM), k-nearest neighbor (KNN), decision tree (DT), Naïve Bayes (NB), and random forest (RF), to determine which classifier best fits the fusion features and yields the best result. The results show 91.70% accuracy, 90.43% sensitivity, and 92.67% specificity. The proposed method outperforms the previous state of the art, with none of the measured factors below 90%; it is therefore promising for dentists and suitable for wide-scale caries detection in hospitals.
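The fusion step this abstract describes, normalizing two heterogeneous feature sets before concatenation and then classifying, can be sketched as below. The z-score normalization and the numpy KNN are illustrative assumptions, not the paper's pipeline (which compares SVM, KNN, DT, NB, and RF).

```python
import numpy as np

def fuse_features(deep_feats, geom_feats):
    # z-score each feature set before concatenating, so the typically
    # higher-dimensional deep features do not dominate the geometric ones.
    def z(x):
        return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)
    return np.concatenate([z(deep_feats), z(geom_feats)], axis=1)

def knn_predict(train_x, train_y, test_x, k=3):
    # Plain k-nearest-neighbor vote on Euclidean distance.
    d = ((test_x[:, None, :] - train_x[None, :, :]) ** 2).sum(axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(v).argmax() for v in train_y[idx]])
```

In practice each candidate classifier would be evaluated on the same fused feature matrix (e.g. by cross-validation) and the best-scoring one kept, which is how a study like this selects its final model.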


Author(s): Liping Yan, Lu Jiang, Yuanqing Xia
Keyword(s):

2020 · Vol 2020 · pp. 1-11
Author(s): Ehtesham Hassan, Yasser Khalil, Imtiaz Ahmad

Object detection in real images is a challenging problem in computer vision. Despite several advances in detection and recognition techniques, robust and accurate localization of objects of interest in images from real-life scenarios remains unsolved because of the difficulties posed by intraclass and interclass variations, occlusion, lighting, and scale changes at different levels. In this work, we present an object detection framework based on learning-based fusion of handcrafted features with deep features. Deep features characterize different regions of interest in a test image with a rich set of statistical features. Our hypothesis is to reinforce these features with handcrafted features by learning the optimal fusion during network training. Our detection framework builds on a recent version of the YOLO object detection architecture. Experimental evaluation on the PASCAL-VOC and MS-COCO datasets achieved mAP increases of 11.4% and 1.9%, respectively, over the YOLO version-3 detector (Redmon and Farhadi 2018). An important step in the proposed learning-based feature fusion strategy is correctly identifying the layer at which the new features are fed in. The present work gives a qualitative approach for identifying the best layer for fusion and design steps for feeding the additional feature sets into convolutional network-based detectors.
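Feeding handcrafted features into a chosen layer of a convolutional detector usually amounts to channel-concatenation followed by a learned mixing layer. The numpy sketch below shows that mechanic with a 1x1 convolution expressed as a matrix multiply; the shapes, the choice of layer, and the weight initialization are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

def fuse_into_layer(deep, handcrafted, w):
    # deep: (Cd, H, W) feature map from the detector backbone.
    # handcrafted: (Ch, H, W) map, e.g. edge/HOG responses resized to the grid.
    # w: (Cout, Cd + Ch) weights of a 1x1 conv, trained jointly with the
    # detector so the network learns the optimal fusion of both sources.
    x = np.concatenate([deep, handcrafted], axis=0)    # (Cd + Ch, H, W)
    c, h, wd = x.shape
    return (w @ x.reshape(c, h * wd)).reshape(w.shape[0], h, wd)

rng = np.random.default_rng(0)
Cd, Ch, H, W = 8, 4, 16, 16
deep = rng.normal(size=(Cd, H, W))
handcrafted = rng.normal(size=(Ch, H, W))
w = 0.1 * rng.normal(size=(Cd, Cd + Ch))               # learned in training
fused = fuse_into_layer(deep, handcrafted, w)          # (Cd, H, W), same grid
```

Keeping the output channel count equal to `Cd` means the fused map drops into the rest of the network unchanged, which is why a 1x1 mixing layer is a common way to splice extra features into an existing detector.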

