Super-Resolution Model Quantized in Multi-Precision

Electronics ◽  
2021 ◽  
Vol 10 (17) ◽  
pp. 2176
Author(s):  
Jingyu Liu ◽  
Qiong Wang ◽  
Dunbo Zhang ◽  
Li Shen

Deep learning has achieved outstanding results in various machine learning tasks, driven by the rapid increase in computing capacity. However, as models achieve higher performance, their size grows, training and inference take longer, memory and storage occupancy increase, computing efficiency drops, and energy consumption rises. Consequently, it is difficult to run these models on edge devices such as micro and mobile devices. Model compression techniques, such as model quantization, are therefore being actively researched. Quantization-aware training accounts for the accuracy loss caused by data mapping during training: it clamps and approximates the data when updating parameters and introduces quantization errors into the model loss function. During quantization, we found that some stages of two super-resolution networks, SRGAN and ESRGAN, are sensitive to quantization, which greatly reduces performance. Therefore, we use higher-bit integer quantization for the sensitive stages and train the whole model with quantization-aware training. Although a little model-size reduction is sacrificed, accuracy approaching that of the original model is achieved. The ESRGAN model is still reduced by nearly 67.14% and the SRGAN model by nearly 68.48%, while inference time is reduced by nearly 30.48% and 39.85%, respectively. Moreover, the PI values of SRGAN and ESRGAN are 2.1049 and 2.2075, respectively.
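To illustrate the quantization-aware training idea described above, the sketch below fake-quantizes weights in the forward pass while letting gradients flow through unchanged (a straight-through estimator), and assigns a higher bit-width to a quantization-sensitive stage. This is a minimal PyTorch sketch: the bit-widths, the symmetric per-tensor scheme, and the layer instances are illustrative assumptions, not the paper's exact configuration.

```python
import torch

def fake_quantize(x: torch.Tensor, num_bits: int) -> torch.Tensor:
    """Symmetric uniform fake quantization with a straight-through estimator."""
    qmax = 2 ** (num_bits - 1) - 1
    # Per-tensor scale from the current dynamic range (clamped to avoid /0).
    scale = x.detach().abs().max().clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
    # Forward returns the quantized value; backward sees the identity.
    return x + (q * scale - x).detach()

class QuantConv2d(torch.nn.Conv2d):
    """Conv layer whose weights are fake-quantized to `num_bits` during training."""
    def __init__(self, *args, num_bits: int = 4, **kwargs):
        super().__init__(*args, **kwargs)
        self.num_bits = num_bits

    def forward(self, x):
        w_q = fake_quantize(self.weight, self.num_bits)
        return torch.nn.functional.conv2d(
            x, w_q, self.bias, self.stride, self.padding, self.dilation, self.groups
        )

# Hypothetical mixed-precision assignment: a quantization-sensitive stage
# keeps 8 bits, while the remaining stages use 4 bits.
sensitive_conv = QuantConv2d(64, 64, 3, padding=1, num_bits=8)
regular_conv = QuantConv2d(64, 64, 3, padding=1, num_bits=4)
```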

Author(s):  
A. Valli Bhasha ◽  
B. D. Venkatramana Reddy

Image super-resolution methods based on deep learning with Convolutional Neural Networks (CNNs) have been producing admirable advancements. The proposed image super-resolution model involves two main analyses: (i) analysis using an Adaptive Discrete Wavelet Transform (ADWT) with a Deep CNN and (ii) analysis using Non-negative Structured Sparse Representation (NSSR). NSSR is used to recover high-resolution (HR) images from low-resolution (LR) images. The experimental evaluation involves two phases: training and testing. In the training phase, the residual images of the dataset are used to train the optimized Deep CNN. In the testing phase, the super-resolution image is generated from the HR wavelet subbands (HRSB) and the residual images. As the main novelty, the filter coefficients of the DWT are optimized by a hybrid Firefly-based Spotted Hyena Optimization (FF-SHO) to develop the ADWT. Finally, a performance evaluation on two benchmark hyperspectral image datasets confirms the effectiveness of the proposed model over existing algorithms.
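As a rough illustration of the wavelet side of this pipeline, the sketch below decomposes an image into one level of DWT subbands with PyWavelets and forms the residual between an HR image and an upscaled LR image, the quantity the abstract says the Deep CNN is trained on. The fixed db2 wavelet and the cubic upscaler are placeholders; the paper's ADWT instead learns its filter coefficients via FF-SHO.

```python
import numpy as np
import pywt
from scipy.ndimage import zoom

def wavelet_subbands(img: np.ndarray):
    """One-level 2D DWT: returns approximation and detail subbands."""
    # A fixed wavelet stands in for the paper's optimized ADWT filters.
    cA, (cH, cV, cD) = pywt.dwt2(img, "db2")
    return cA, cH, cV, cD

def residual_image(hr: np.ndarray, lr: np.ndarray) -> np.ndarray:
    """Residual between an HR image and an upscaled LR image (2D grayscale)."""
    scale = hr.shape[0] / lr.shape[0]
    lr_up = zoom(lr, scale, order=3)  # cubic interpolation as a simple upscaler
    return hr - lr_up
```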


2020 ◽  
Vol 34 (04) ◽  
pp. 6623-6630
Author(s):  
Li Yang ◽  
Zhezhi He ◽  
Deliang Fan

Deep convolutional neural networks (DNNs) have demonstrated phenomenal success and been widely used in many computer vision tasks. However, their enormous model size and high computing complexity prohibit wide deployment on resource-limited embedded systems, such as FPGAs and mGPUs. As the two most widely adopted model compression techniques, weight pruning and quantization compress a DNN model by introducing weight sparsity (i.e., forcing some weights to zero) and by quantizing weights into limited bit-width values, respectively. Although there are works attempting to combine weight pruning and quantization, we still observe disharmony between the two, especially when more aggressive compression schemes (e.g., structured pruning and low bit-width quantization) are used. In this work, taking an FPGA as the test computing platform and Processing Elements (PEs) as the basic parallel computing unit, we first propose a PE-wise structured pruning scheme, which introduces weight sparsification while taking the PE architecture into account. In addition, we integrate it with an optimized weight ternarization approach that quantizes weights into ternary values ({-1, 0, +1}), thus converting the dominant convolution operations in the DNN from multiplication-and-accumulation (MAC) to addition-only, and compressing the original model (from 32-bit floating point to 2-bit ternary representation) by at least 16 times. Then, we investigate and solve the coexistence issue between PE-wise structured pruning and ternarization by proposing a Weight Penalty Clipping (WPC) technique with a self-adapting threshold. Our experiments show that fusing the proposed techniques achieves a state-of-the-art ∼21× PE-wise structured compression rate with merely 1.74%/0.94% (top-1/top-5) accuracy degradation for ResNet-18 on the ImageNet dataset.
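To make the ternarization step concrete, the sketch below shows a common threshold-based weight ternarization with a straight-through estimator: weights whose magnitude falls below a threshold delta are zeroed, and the rest are mapped to ±alpha. The 0.7 threshold factor and the mean-based scale follow the widely used ternary-weight-network recipe and are assumptions here; the paper's optimized ternarization and its WPC threshold adaptation are not spelled out in the abstract.

```python
import torch

def ternarize(w: torch.Tensor, delta_factor: float = 0.7) -> torch.Tensor:
    """Threshold-based ternarization to {-alpha, 0, +alpha} with an STE."""
    delta = delta_factor * w.abs().mean()     # self-adapting threshold
    mask = (w.abs() > delta).float()          # which weights survive pruning to zero
    # Scale alpha = mean magnitude of the surviving weights.
    alpha = (w.abs() * mask).sum() / mask.sum().clamp(min=1.0)
    w_t = alpha * torch.sign(w) * mask
    # Straight-through estimator: forward uses w_t, backward sees the identity.
    return w + (w_t - w).detach()
```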


Electronics ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 1234
Author(s):  
Lei Zha ◽  
Yu Yang ◽  
Zicheng Lai ◽  
Ziwei Zhang ◽  
Juan Wen

In recent years, neural networks for single image super-resolution (SISR) have adopted deeper network structures to extract extra image details, which makes model training more difficult. To deal with deep-model training problems, researchers use dense skip connections to promote the model's feature representation ability by reusing deep features from different receptive fields. Benefiting from the dense connection block, SRDensenet has achieved excellent performance in SISR. Although the densely connected structure provides rich information, it also introduces redundant and useless information. To tackle this problem, in this paper we propose a Lightweight Dense Connected Approach with Attention for Single Image Super-Resolution (LDCASR), which employs an attention mechanism to extract useful information along the channel dimension. In particular, we propose the recursive dense group (RDG), consisting of Dense Attention Blocks (DABs), which obtains more significant representations by extracting deep features with the aid of both dense connections and the attention module, encouraging the whole network to focus on learning more informative features. Additionally, we introduce group convolution in the DABs, which reduces the number of parameters to 0.6 M. Extensive experiments on benchmark datasets demonstrate the superiority of the proposed method over five chosen SISR methods.
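The sketch below shows one plausible reading of a Dense Attention Block: a group convolution produces new features, a dense connection concatenates them with the input, and squeeze-and-excitation-style channel attention reweights the result. The channel counts, growth rate, group count, and the SE-style attention are illustrative assumptions; the abstract does not give the exact block layout.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))  # reweight channels by learned importance

class DenseAttentionBlock(nn.Module):
    """Dense connection + group convolution + channel attention (assumed layout)."""
    def __init__(self, channels: int = 64, growth: int = 32, groups: int = 4):
        super().__init__()
        # Group convolution cuts parameters by roughly a factor of `groups`.
        self.conv = nn.Conv2d(channels, growth, 3, padding=1, groups=groups)
        self.att = ChannelAttention(channels + growth)

    def forward(self, x):
        y = torch.relu(self.conv(x))
        out = torch.cat([x, y], dim=1)  # dense (concatenative) skip connection
        return self.att(out)
```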


Author(s):  
Yanfei Zuo ◽  
Jianjun Wang ◽  
Weimeng Ma ◽  
Xue Zhai ◽  
Xinyu Yao

A method for selecting master degrees of freedom (DOFs) for a rotating substructure is presented in this paper to obtain reduced 3D rotor models. The fixed modes of the substructure below three times the operating frequency are analyzed. For each mode shape, the DOFs where the main kinetic energy is concentrated are selected as master DOFs, to reduce the loss of dynamic coupling. Additional DOFs may be selected based on the traditional substructure method. In the stationary reference frame, frequency-dependent gyroscopic effects can be included as damping matrices that vary with spin speed. Furthermore, by selecting an appropriate substructure, localized damping and the key parts of the rotor under analysis can be kept identical to the original model. A reduced model of a high-pressure rotor demonstrates the capability of the method to reduce model size and increase computational efficiency, with less than two percent error.
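A minimal sketch of the energy-based selection step described above: solve the substructure eigenproblem, keep the modes below the frequency cutoff, accumulate per-DOF modal kinetic energy, and take the highest-energy DOFs as masters. The lumped-mass kinetic-energy estimate and the dense generalized eigensolver are assumptions; the paper's exact energy measure is not given in the abstract.

```python
import numpy as np
from scipy.linalg import eigh

def select_master_dofs(K: np.ndarray, M: np.ndarray,
                       f_operating: float, n_master: int) -> np.ndarray:
    """Pick the master DOFs carrying the most modal kinetic energy
    over the modes below three times the operating frequency."""
    w2, phi = eigh(K, M)  # generalized eigenproblem K x = w^2 M x
    freqs_hz = np.sqrt(np.maximum(w2, 0.0)) / (2.0 * np.pi)
    kept = phi[:, freqs_hz <= 3.0 * f_operating]
    # Per-DOF kinetic energy, lumped-mass approximation: E_i ~ m_ii * phi_i^2,
    # summed over the retained modes.
    energy = (kept ** 2 * np.diag(M)[:, None]).sum(axis=1)
    return np.argsort(energy)[::-1][:n_master]
```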


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Valli Bhasha A. ◽  
Venkatramana Reddy B.D.

Purpose
The problems of super-resolution are broadly discussed in diverse fields. Despite the progress of super-resolution models for real-time images, operating on hyperspectral images still remains a challenging problem.
Design/methodology/approach
This paper aims to develop an enhanced image super-resolution model using optimized Non-negative Structured Sparse Representation (NSSR), an Adaptive Discrete Wavelet Transform (ADWT), and an optimized Deep Convolutional Neural Network. After converting the HR images into LR images, the NSSR images are generated by the optimized NSSR. The ADWT is then used to generate the subbands of both the NSSR and HRSB images. The residual image with this information is obtained by the optimized Deep CNN. All algorithmic improvements are made by Opposition-based Barnacles Mating Optimization (O-BMO), with the objective of attaining a multi-objective function concerning the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity (SSIM) index. Extensive analysis on benchmark hyperspectral image datasets shows that the proposed model achieves superior performance over other typical existing super-resolution models.
Findings
The analysis reveals that the PSNR of the improved O-BMO-(NSSR+DWT+CNN) was 38.8% better than bicubic, 11% better than NSSR, 16.7% better than DWT+CNN, 1.3% better than NSSR+DWT+CNN, and 0.5% better than NSSR+FF-SHO-(DWT+CNN). Hence, it is confirmed that the developed O-BMO-(NSSR+DWT+CNN) performs well in converting LR images to HR images.
Originality/value
This paper adopts a recent optimization algorithm, O-BMO, together with optimized Non-negative Structured Sparse Representation (NSSR), an Adaptive Discrete Wavelet Transform (ADWT) and an optimized Deep Convolutional Neural Network to develop the enhanced image super-resolution model. This is the first work that uses an O-BMO-based Deep CNN for image super-resolution model enhancement.
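For illustration, here is a minimal sketch of the kind of PSNR/SSIM fitness a metaheuristic such as O-BMO could maximize when tuning the pipeline. The equal weighting, the grayscale assumption, and the scikit-image metric calls are placeholders; the abstract does not specify how the two objectives are combined.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def sr_fitness(hr_true: np.ndarray, hr_pred: np.ndarray,
               w_psnr: float = 0.5, w_ssim: float = 0.5) -> float:
    """Weighted PSNR/SSIM fitness for a candidate super-resolved image.
    Assumes 2D grayscale images scaled to [0, 1]."""
    psnr = peak_signal_noise_ratio(hr_true, hr_pred, data_range=1.0)
    ssim = structural_similarity(hr_true, hr_pred, data_range=1.0)
    return w_psnr * psnr + w_ssim * ssim  # higher is better for the optimizer
```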

