Loss-Driven Adversarial Ensemble Deep Learning for On-Line Time Series Analysis

2019 ◽  
Vol 11 (12) ◽  
pp. 3489
Author(s):  
Hyungjin Ko ◽  
Jaewook Lee ◽  
Junyoung Byun ◽  
Bumho Son ◽  
Saerom Park

Developing a robust and sustainable system is an important problem when deep learning models are used in real-world applications. Ensemble methods combine diverse models to improve performance and achieve robustness. The analysis of time series data requires dealing with continuously incoming instances; however, most ensemble models suffer when adapting to a change in data distribution. Therefore, in this study we propose an on-line ensemble deep learning algorithm that aggregates deep learning models and adjusts the ensemble weights based on their loss values. We theoretically demonstrate that the ensemble weights converge to a limiting distribution and thus minimize the average total loss under a new regret measure based on an adversarial assumption. We also present an overall framework that can be applied to analyze time series. In the experiments, we focused on the on-line phase, in which the ensemble models predict the binary class for simulated data as well as financial and non-financial real data. The proposed method outperformed other ensemble approaches. Moreover, our method was not only robust to intentional attacks but also sustainable under changes in the data distribution. In the future, our algorithm can be extended to regression and multiclass classification problems.
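
The abstract does not spell out the weight-update rule, but a loss-driven adjustment of this kind can be sketched as a multiplicative-weights (exponentially weighted forecaster) step; the learning rate `eta` and the toy losses below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def update_ensemble_weights(weights, losses, eta=0.1):
    """One multiplicative-weights step: models with larger loss on the
    newest instance are down-weighted, and the weights are renormalized."""
    w = weights * np.exp(-eta * np.asarray(losses))
    return w / w.sum()

# Toy on-line loop with three base models and streaming per-model losses.
weights = np.ones(3) / 3
for losses in [(0.2, 0.9, 0.5), (0.1, 0.8, 0.6), (0.3, 0.7, 0.4)]:
    weights = update_ensemble_weights(weights, losses)
print(weights)  # mass concentrates on the model with the lowest cumulative loss
```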

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Zulkifli Halim ◽  
Shuhaida Mohamed Shuhidan ◽  
Zuraidah Mohd Sanusi

Purpose – In previous studies of financial distress prediction, deep learning techniques performed better than traditional techniques over time-series data. This study investigates the performance of deep learning models (recurrent neural network, long short-term memory and gated recurrent unit) for financial distress prediction among Malaysian public listed corporations over time-series data. This study also compares the performance of logistic regression, support vector machine, neural network, decision tree and the deep learning models on single-year data.

Design/methodology/approach – The data used are the financial data of public listed companies in Malaysia that have been classified as PN17 status (distress) and non-PN17 (not distress). This study was conducted using machine learning libraries of the Python programming language.

Findings – The findings indicate that all deep learning models used in this study achieved accuracy of 90% and above, with long short-term memory (LSTM) and gated recurrent unit (GRU) reaching 93%. In addition, the deep learning models consistently performed well compared to the other models on single-year data: LSTM and GRU achieved 90% accuracy and the recurrent neural network (RNN) 88%, and LSTM and GRU also obtained better precision and recall than RNN. The findings of this study show that the deep learning approach leads to better performance in financial distress prediction studies. In addition, time-series data should be highlighted in any financial distress prediction study, since they have a big impact on credit risk assessment.

Research limitations/implications – The first limitation of this study is that hyperparameter tuning was only applied to the deep learning models. Secondly, the time-series data were only used for the deep learning models, since the other models fit optimally on single-year data.

Practical implications – This study recommends deep learning as a new approach that will lead to better performance in financial distress prediction studies. Besides that, time-series data should be highlighted in any financial distress prediction study, since the data have a big impact on the assessment of credit risk.

Originality/value – To the best of the authors' knowledge, this article is the first study that uses the gated recurrent unit for financial distress prediction based on time-series data for Malaysian public listed companies. The findings of this study can help financial institutions and investors find a better and more accurate approach for credit risk assessment.
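
As a point of reference, a minimal Keras sketch of the kind of LSTM/GRU classifier compared in the study is shown below; the window length, feature count, and layer size are placeholders, not the paper's configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical shapes: 5 yearly observations per firm, 10 financial ratios each.
n_timesteps, n_features = 5, 10

model = tf.keras.Sequential([
    layers.Input(shape=(n_timesteps, n_features)),
    layers.LSTM(64),                        # layers.GRU(64) is a drop-in alternative
    layers.Dense(1, activation="sigmoid"),  # PN17 (distress) vs non-PN17
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
```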


Author(s):  
Hiroyuki Moriguchi ◽  
Ichiro Takeuchi ◽  
Masayuki Karasuyama ◽  
Shin-ichi Horikawa ◽  
...  

In this paper, we study the problem of anomaly detection from time-series data. We use kernel quantile regression (KQR) to predict the extreme (such as 0.01 or 0.99) quantiles of the future time-series data distribution. This enables us to tell whether the probability of observing a certain time-series sequence is larger than, say, 1 percent. In this paper, we develop an efficient update algorithm for KQR in order to adapt it in an on-line manner. We propose a new algorithm that allows us to compute the optimal solution of the KQR when a new training pattern is inserted or deleted. We demonstrate the effectiveness of our methodology through numerical experiments using real-world time-series data.
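
The incremental KQR solver itself is beyond the abstract, but the quantile (pinball) loss it optimizes and the resulting anomaly rule can be sketched as follows; the quantile levels and band values are illustrative assumptions.

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Check (pinball) loss minimized by quantile regression at level tau."""
    diff = np.asarray(y_true) - np.asarray(y_pred)
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

def is_anomaly(y_obs, q_low, q_high):
    """Flag observations that fall outside the predicted extreme-quantile band."""
    return (np.asarray(y_obs) < q_low) | (np.asarray(y_obs) > q_high)

# Toy usage: predicted 0.01 / 0.99 quantiles bracket "normal" observations.
print(is_anomaly([0.5, 3.2], q_low=-1.0, q_high=2.0))  # [False  True]
```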


2021 ◽  
pp. 129-159
Author(s):  
Mahbuba Tasmin ◽  
Sharif Uddin Ruman ◽  
Taoseef Ishtiak ◽  
Arif-ur-Rahman Chowdhury Suhan ◽  
Redwan Hasif ◽  
...  

2021 ◽  
Vol 11 (20) ◽  
pp. 9373
Author(s):  
Jie Ju ◽  
Fang-Ai Liu

Deep learning models have been widely used in prediction problems in various scenarios and have shown excellent predictive performance. As a deep learning model, the long short-term memory neural network (LSTM) is powerful for predicting time series data. However, with the advancement of technology, data collection has become more accessible, and multivariate time series data have emerged. Multivariate time series data are often characterized by a large volume of data, a tight timeline, and many related sequences. In real data sets especially, the behavior of many sequences is affected by changes in other sequences. Interacting factors, abrupt changes, and other issues seriously impact the prediction accuracy of deep learning models on this type of data. On the other hand, the mutual influence between different sequences can be extracted and used as part of the model input to make the predictions more accurate. Therefore, we propose an ATT-LSTM model. The network applies an attention mechanism to the LSTM to filter the mutual-influence information in the data when predicting multivariate time series, which compensates for the network's weakness in processing such data and greatly improves its prediction accuracy. To evaluate the model's accuracy, we compare the ATT-LSTM model with six other models on two real multivariate time series data sets based on two evaluation indicators: Mean Absolute Error (MAE) and Root Mean Square Error (RMSE). The experimental results show a substantial performance improvement over the six other models, demonstrating the model's effectiveness in predicting multivariate time series data.
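
The abstract does not fully specify the ATT-LSTM architecture; the sketch below illustrates one common way of applying attention over LSTM hidden states for multivariate series, with all layer sizes, the window length, and the series count as assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

n_timesteps, n_series = 24, 8   # hypothetical window length and number of series

inputs = layers.Input(shape=(n_timesteps, n_series))
hidden = layers.LSTM(64, return_sequences=True)(inputs)   # one hidden state per timestep

# Simple additive attention: score each timestep, softmax over time, weighted sum.
scores = layers.Dense(1, activation="tanh")(hidden)       # (batch, T, 1)
attn = layers.Softmax(axis=1)(scores)                     # attention weights over time
context = layers.Lambda(lambda x: tf.reduce_sum(x[0] * x[1], axis=1))([hidden, attn])

outputs = layers.Dense(1)(context)                         # next-step prediction
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mae")                # MAE/RMSE are the paper's metrics
```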


Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 29
Author(s):  
Manas Bazarbaev ◽  
Tserenpurev Chuluunsaikhan ◽  
Hyoseok Oh ◽  
Ga-Ae Ryu ◽  
Aziz Nasridinov ◽  
...  

Product quality is a major concern in manufacturing. In the metal processing industry, low-quality products must be remanufactured, which requires additional labor, money, and time. Therefore, user-controllable variables for machines and raw material compositions are key factors for ensuring product quality. In this study, we propose a method for generating the time-series working patterns of the control variables for metal-melting induction furnaces and continuous casting machines, thus improving product quality by aiding machine operators. We used an auxiliary classifier generative adversarial network (AC-GAN) model to generate time-series working patterns of two processes depending on product type and additional material data. To check accuracy, the difference between the generated time-series data of the model and the ground truth data was calculated. Specifically, the proposed model results were compared with those of other deep learning models: multilayer perceptron (MLP), convolutional neural network (CNN), long short-term memory (LSTM), and gated recurrent unit (GRU). It was demonstrated that the proposed model outperformed the other deep learning models. Moreover, the proposed method generated different time-series data for different inputs, whereas the other deep learning models generated the same time-series data.
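
The exact AC-GAN configuration is not given in the abstract; a minimal sketch of a class-conditional generator in that style is shown below, with the latent size, number of product types, and sequence length all hypothetical.

```python
import tensorflow as tf
from tensorflow.keras import layers

latent_dim, n_product_types, seq_len = 100, 4, 128   # hypothetical sizes

# AC-GAN generator: a noise vector plus a product-type label -> one working pattern.
noise = layers.Input(shape=(latent_dim,))
label = layers.Input(shape=(1,), dtype="int32")
label_emb = layers.Flatten()(layers.Embedding(n_product_types, latent_dim)(label))
merged = layers.Multiply()([noise, label_emb])

x = layers.Dense(256, activation="relu")(merged)
x = layers.Dense(seq_len, activation="tanh")(x)       # one control-variable trace
x = layers.Reshape((seq_len, 1))(x)

generator = tf.keras.Model([noise, label], x)
```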


2020 ◽  
Author(s):  
Xi Chen ◽  
Ruyi Yu ◽  
Sajid Ullah ◽  
Dianming Wu ◽  
Min Liu ◽  
...  

Wind speed forecasting is very important for many real-life applications, especially for the control and monitoring of wind power plants. Owing to the non-linearity of wind speed time series, it is hard to improve the accuracy of wind speed forecasting, especially several days ahead. In order to improve the forecasting performance, many forecasting models have been proposed. Recently, deep learning models have received great attention, since they outperform conventional machine learning models. The majority of existing deep learning models take the mean squared error (MSE) loss as the loss function for forecasting. The MSE loss treats every sample equally; consequently, it hinders further improvement of forecasting performance on nonlinear wind speed time series data.

In this work, we propose a new weighted MSE loss function for wind speed forecasting based on deep learning. As is well known, the training procedure in applications is dominated by easy-training samples. This domination makes the computation ineffective and inefficient. In the new weighted MSE loss function, the loss weights of easy-training samples are automatically reduced according to their contribution, so the total loss mainly focuses on hard-training samples. To verify the new loss function, Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) have been used as base models.

A number of experiments have been carried out using open wind speed time series data collected from China and the United States to demonstrate the effectiveness of the new loss function with the three popular models. The performance of the models has been evaluated through statistical error measures such as the Mean Absolute Error (MAE). The MAE of the proposed weighted MSE loss is up to 55% lower than that of the traditional MSE loss. The experimental results indicate that the new weighted loss function can outperform the popular MSE loss function in wind speed forecasting.
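
The paper's exact weighting scheme is not reproduced in the abstract; the sketch below shows one focal-style way to down-weight easy samples inside an MSE loss, with the exponent `gamma` as an assumption.

```python
import tensorflow as tf

def weighted_mse(y_true, y_pred, gamma=1.0):
    """MSE whose per-sample weights grow with the residual, so easy samples
    (small errors) contribute little and hard samples dominate the total loss."""
    err = tf.abs(y_true - y_pred)
    weights = tf.pow(err / (tf.reduce_max(err) + 1e-8), gamma)  # in [0, 1]
    return tf.reduce_mean(weights * tf.square(y_true - y_pred))

# Usage with any Keras forecaster, e.g. an LSTM or GRU model:
# model.compile(optimizer="adam", loss=weighted_mse)
```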


2017 ◽  
Vol 85 ◽  
pp. 292-304 ◽  
Author(s):  
Stratis Kanarachos ◽  
Stavros-Richard G. Christopoulos ◽  
Alexander Chroneos ◽  
Michael E. Fitzpatrick

Energies ◽  
2020 ◽  
Vol 13 (24) ◽  
pp. 6623
Author(s):  
Rial A. Rajagukguk ◽  
Raden A. A. Ramadhan ◽  
Hyun-Jin Lee

Presently, deep learning models are an alternative solution for predicting solar energy because of their accuracy. The present study reviews deep learning models for handling time-series data to predict solar irradiance and photovoltaic (PV) power. We selected three standalone models and one hybrid model for the discussion, namely, the recurrent neural network (RNN), long short-term memory (LSTM), gated recurrent unit (GRU), and convolutional neural network-LSTM (CNN–LSTM). The selected models were compared based on accuracy, input data, forecasting horizon, type of season and weather, and training time. The performance analysis shows that these models have their strengths and limitations in different conditions. Generally, among the standalone models, LSTM shows the best performance in terms of the root-mean-square error (RMSE) evaluation metric. On the other hand, the hybrid model (CNN–LSTM) outperforms the three standalone models, although it requires a longer training time. The most significant finding is that the deep learning models of interest are more suitable for predicting solar irradiance and PV power than conventional machine learning models. Additionally, we recommend using the relative RMSE as the representative evaluation metric to facilitate accuracy comparison between studies.
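
The abstract recommends the relative RMSE as the common yardstick; one common definition normalizes the RMSE by the mean observed value, as sketched below (the normalization convention and the toy numbers are assumptions, since other studies divide by the observed range or installed capacity).

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error between observations and forecasts."""
    return np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def relative_rmse(y_true, y_pred):
    """RMSE normalized by the mean observed value, reported in percent."""
    return 100.0 * rmse(y_true, y_pred) / np.mean(np.asarray(y_true))

# Toy usage with hypothetical irradiance values (W/m^2).
print(relative_rmse([500, 620, 710], [480, 650, 700]))
```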


Open Physics ◽  
2021 ◽  
Vol 19 (1) ◽  
pp. 360-374
Author(s):  
Yuan Pei ◽  
Lei Zhenglin ◽  
Zeng Qinghui ◽  
Wu Yixiao ◽  
Lu Yanli ◽  
...  

Abstract The load of a refrigerated showcase is nonlinear and unstable time series data, and traditional forecasting methods are not applicable. Deep learning algorithms are therefore introduced to predict the showcase load. Based on the CEEMD–IPSO–LSTM combination algorithm, this paper builds a refrigerated display cabinet load forecasting model. Compared with the forecast results of other models, the CEEMD–IPSO–LSTM model achieves the highest load forecasting accuracy, with a determination coefficient of 0.9105. The model constructed in this paper can predict showcase load and provide a reference for energy saving and consumption reduction in display cabinets.
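
The determination coefficient reported above (0.9105) is the standard R²; for reference, it can be computed as follows (the values below are toy numbers, not the paper's data).

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination used to score the load forecasts."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

print(r_squared([10.0, 12.5, 11.0, 13.2], [10.3, 12.1, 11.4, 12.9]))  # close to 1 for a good fit
```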

