symmetric loss
Recently Published Documents


TOTAL DOCUMENTS: 37 (FIVE YEARS: 19)
H-INDEX: 5 (FIVE YEARS: 0)

2021, Vol. 15
Author(s): Zhikui Chen, Shan Jin, Runze Liu, Jianing Zhang

Deep representations have recently attracted much attention owing to their strong performance on a variety of tasks. However, the limited interpretability of deep representations poses a major challenge in real-world applications. To alleviate this challenge, this paper proposes a deep matrix factorization method with non-negative constraints that learns interpretable, deep part-based representations for big data. Specifically, an end-to-end architecture for pattern mining is designed, consisting of a supervisor network that suppresses noise in the data and a student network that learns interpretable deep representations. Furthermore, to train the deep matrix factorization architecture, an interpretability loss is defined that comprises a symmetric loss, an apposition loss, and a non-negative constraint loss; it ensures knowledge transfer from the supervisor network to the student network and enhances the robustness of the deep representations. Finally, extensive experimental results on two benchmark datasets demonstrate the superiority of the proposed method.
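The abstract does not give closed forms for the three loss terms, so the sketch below is only one plausible reading, with every functional form an assumption: the symmetric loss as a symmetrized divergence between supervisor and student representations, the apposition loss as a layer-wise alignment penalty, and the non-negative constraint loss as a hinge on negative factor entries. The weights alpha, beta, and gamma are illustrative.

```python
import numpy as np

def symmetric_loss(h_sup, h_stu, eps=1e-8):
    # Symmetrized KL divergence (assumed form): penalizes the discrepancy in
    # both directions, so transfer favors neither network. Inputs are assumed
    # non-negative, consistent with the non-negative factorization setting.
    p = h_sup / (h_sup.sum() + eps)
    q = h_stu / (h_stu.sum() + eps)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * (kl(p, q) + kl(q, p))

def apposition_loss(layers_sup, layers_stu):
    # Hypothetical layer-wise alignment of intermediate factor matrices.
    return sum(np.linalg.norm(a - b) ** 2
               for a, b in zip(layers_sup, layers_stu))

def nonneg_loss(factors):
    # Quadratic hinge: zero for non-negative entries, grows for negative ones.
    return sum(np.sum(np.minimum(w, 0.0) ** 2) for w in factors)

def interpretability_loss(h_sup, h_stu, layers_sup, layers_stu, factors,
                          alpha=1.0, beta=1.0, gamma=1.0):
    # Weighted sum of the three terms named in the abstract.
    return (alpha * symmetric_loss(h_sup, h_stu)
            + beta * apposition_loss(layers_sup, layers_stu)
            + gamma * nonneg_loss(factors))
```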


Author(s): G. R. AL-Dayian, A. A. EL-Helbawy, N. T. AL-Sayed, E. M. Swielum

Predicting future events from past and present information is a fundamental problem of statistics, arising in many contexts and producing varied solutions; the predictor can be either a point predictor or an interval predictor. This paper focuses on predicting future observations from the modified Topp-Leone Chen distribution under a progressive Type-II censoring scheme. Two-sample prediction is applied to obtain maximum likelihood, Bayesian, and E-Bayesian predictors (point and interval) of future order statistics. The Bayesian and E-Bayesian predictors are derived under two loss functions: the balanced squared error loss function, which is symmetric, and the balanced linear exponential (LINEX) loss function, which is asymmetric. The predictors are obtained under a conjugate gamma prior and uniform hyperprior distributions. A numerical example illustrates the theoretical results, and an application to real data sets demonstrates how the results can be used in practice.
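For context, the two balanced loss functions named here are commonly written as follows (standard textbook forms; the notation is ours, not taken from the paper):

\[
L_\omega(\theta,\delta) = \omega\,(\delta-\delta_0)^2 + (1-\omega)\,(\delta-\theta)^2
\]

\[
L_{\omega,a}(\theta,\delta) = \omega\!\left[e^{a(\delta-\delta_0)}-a(\delta-\delta_0)-1\right] + (1-\omega)\!\left[e^{a(\delta-\theta)}-a(\delta-\theta)-1\right]
\]

Here \(\delta\) is the predictor, \(\delta_0\) a target (e.g., maximum likelihood) predictor, \(\omega \in [0,1]\) a weight, and \(a \neq 0\) the LINEX shape parameter; setting \(\omega = 0\) recovers the ordinary squared error and LINEX losses.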


Symmetry, 2021, Vol. 13 (2), pp. 240
Author(s): Cristian Busu, Mihail Busu

Kalman filtering is a linear quadratic estimation (LQE) algorithm that uses a time series of observed data to produce estimates of unknown variables. The Kalman filter (KF) concept is widely used in applied mathematics and signal processing. In this study, we developed a methodology for estimating Gaussian errors by minimizing a symmetric loss function. Relevant applications to kinetic models are described at the end of the manuscript.
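The abstract does not specify the state-space model, so the following is only a minimal scalar sketch of the standard Kalman predict/update cycle, not the authors' method; the random-walk model and the noise variances q and r are illustrative assumptions.

```python
def kalman_1d(zs, q=1e-4, r=0.1, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state observed in Gaussian noise.

    zs: observations; q: process-noise variance; r: measurement-noise variance.
    Returns the sequence of posterior state estimates.
    """
    x, p = x0, p0
    estimates = []
    for z in zs:
        # Predict: a random-walk model leaves the mean unchanged
        # but inflates the uncertainty by the process noise.
        p = p + q
        # Update: blend prediction and measurement via the Kalman gain.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates
```

Under Gaussian errors this update is the minimum mean-squared-error estimate, i.e., it minimizes a symmetric (quadratic) loss in the sense used above.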


2020, Vol. 22 (12), pp. 3153-3165
Author(s): Hantang Liu, Yinghao Xu, Jialiang Zhang, Jianke Zhu, Yang Li, ...

2020, Vol. 140 (12), pp. 1328-1335
Author(s): Keita Ishikawa, Tiancheng Wang, Tsuyoshi Sasaki Usuda

2020, Vol. 39 (3), pp. 2881-2892
Author(s): Hongwei Dong, Liming Yang

Symmetric loss functions are widely used in regression algorithms that focus on estimating conditional means. The Huber loss, a symmetric smooth loss function, has been shown to admit efficient optimization while providing a certain degree of robustness. However, mean estimators can perform poorly when the noise distribution is asymmetric (for example, heavy-tailed noise caused by outliers), and estimators beyond the mean become necessary. In such circumstances, quantile regression is a natural choice: it estimates quantiles instead of means through asymmetric loss functions. In this paper, an asymmetric Huber loss function is proposed that imposes different penalties on overestimation and underestimation, so as to handle more general noise. Moreover, a smooth truncated version of the proposed loss is introduced to provide stronger robustness to outliers. A concave-convex procedure (CCCP) is developed in the primal space, with a proof of convergence, to handle the non-convexity of the truncated objective. Experiments on both artificial and benchmark datasets verify the robustness of the proposed methods.
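The abstract gives no closed form, so the sketch below shows one common way to asymmetrize the Huber loss (quantile-style weights on the two branches) together with a smooth saturation standing in for the paper's truncation; the parameter names tau, delta, and t are illustrative assumptions, not the authors' notation.

```python
import numpy as np

def asymmetric_huber(r, tau=0.7, delta=1.0):
    # Huber core: quadratic for |r| <= delta, linear beyond, C^1 at the joins.
    quad = 0.5 * r ** 2 / delta
    lin = np.abs(r) - 0.5 * delta
    huber = np.where(np.abs(r) <= delta, quad, lin)
    # Quantile-style weights: overestimation (r >= 0) and underestimation
    # receive different penalties, as the abstract describes.
    w = np.where(r >= 0, tau, 1.0 - tau)
    return w * huber

def truncated_asymmetric_huber(r, tau=0.7, delta=1.0, t=3.0):
    # Smooth saturation: the loss levels off near t, bounding the influence
    # of extreme outliers at the cost of convexity (hence the CCCP solver).
    # The paper's exact truncation may differ; tanh is used here only to
    # illustrate a smooth, bounded variant.
    return t * np.tanh(asymmetric_huber(r, tau, delta) / t)
```

For a residual vector, small errors are penalized nearly quadratically, large errors linearly with a slope that depends on their sign, and extreme errors contribute at most about t each.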

