A Labeling Method for Financial Time Series Prediction Based on Trends

Entropy ◽  
2020 ◽  
Vol 22 (10) ◽  
pp. 1162
Author(s):  
Dingming Wu ◽  
Xiaolong Wang ◽  
Jingyong Su ◽  
Buzhou Tang ◽  
Shaocong Wu

Time series prediction has been widely applied in the finance industry, for example in forecasting stock market prices and commodity prices, and machine learning methods have been widely used for financial time series prediction in recent years. How to label financial time series data, which determines the prediction accuracy of machine learning models and ultimately the final investment returns, is a topic of active research. Existing labeling methods for financial time series mainly label data by comparing the current value with those of a short period in the future. However, financial time series data are typically non-linear, with obvious short-term randomness; these labeling methods therefore fail to capture the continuous trend features of financial time series, leading to a gap between their labeling results and real market trends. In this paper, a new labeling method called "continuous trend labeling" is proposed to address this problem. In the feature preprocessing stage, we propose a new method that avoids the look-ahead bias present in traditional data standardization or normalization. We then give a detailed logical explanation, define continuous trend labeling, and present an automatic labeling algorithm that extracts the continuous trend features of financial time series data. Experiments on the Shanghai Composite Index, the Shenzhen Component Index, and a selection of Chinese stocks show that our labeling method outperforms state-of-the-art labeling methods in terms of classification accuracy and other classification evaluation metrics. The results also show that deep learning models such as LSTM and GRU are better suited to predicting financial time series data.
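The core idea of trend-based labeling can be illustrated with a short sketch. This is not the paper's exact algorithm: the function name `trend_labels`, the threshold parameter `omega`, and the handling of the initial segment are illustrative assumptions. A label flips only when the price retraces by more than a relative threshold from the running extreme, so that short-term noise does not break the current trend:

```python
def trend_labels(prices, omega=0.05):
    """Label each point +1 (up-trend) or -1 (down-trend).

    The label flips only when the price retraces by more than `omega`
    (relative change) from the running extreme, so noise smaller than
    the threshold never breaks the current trend. Points stay 0 if no
    move ever exceeds omega.
    """
    n = len(prices)
    labels = [0] * n
    high = low = prices[0]
    trend, ext, ext_i = 0, prices[0], 0
    i = 1
    # Initialisation: wait for the first move larger than omega.
    while i < n and trend == 0:
        p = prices[i]
        high, low = max(high, p), min(low, p)
        if p < high * (1 - omega):            # first confirmed move is down
            trend, ext, ext_i = -1, p, i
            labels[: i + 1] = [-1] * (i + 1)  # sketch: single-label backfill
        elif p > low * (1 + omega):           # first confirmed move is up
            trend, ext, ext_i = 1, p, i
            labels[: i + 1] = [1] * (i + 1)
        i += 1
    # Main loop: flip only on retraces larger than omega.
    for k in range(i, n):
        p = prices[k]
        if trend == 1:
            if p > ext:
                ext, ext_i = p, k             # up-trend extends to a new high
            elif p < ext * (1 - omega):
                # everything since the high belongs to the new down-trend
                labels[ext_i + 1 : k + 1] = [-1] * (k - ext_i)
                trend, ext, ext_i = -1, p, k
        else:
            if p < ext:
                ext, ext_i = p, k             # down-trend extends to a new low
            elif p > ext * (1 + omega):
                labels[ext_i + 1 : k + 1] = [1] * (k - ext_i)
                trend, ext, ext_i = 1, p, k
        if labels[k] == 0:
            labels[k] = trend
    return labels
```

For instance, with `omega=0.03`, a dip from 106 to 104 (under 3%) keeps the up-trend label, while the further fall to 100 flips the labels back to the peak.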

2021 ◽  
Vol 11 (9) ◽  
pp. 3876
Author(s):  
Weiming Mai ◽  
Raymond S. T. Lee

Chart patterns are significant for financial market behavior analysis. Many approaches have been proposed to detect specific patterns in financial time series data, most of which can be categorized as distance-based or training-based. In this paper, we apply a trainable continuous Hopfield Neural Network to financial time series pattern matching. The Perceptually Important Points (PIP) segmentation method is used as a data preprocessing step to reduce fluctuation. We conducted a synthetic data experiment on both high-noise and low-noise data. The results show that our proposed method outperforms the Template-Based (TB) and Euclidean Distance (ED) methods and has an advantage over Dynamic Time Warping (DTW) in terms of processing time. This indicates that the Hopfield network has a potential advantage over other distance-based matching methods.
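The PIP preprocessing step can be sketched as follows. This is a generic illustration under assumptions of our own, not the paper's implementation: the function name `pip_reduce` is hypothetical, and the vertical-distance measure is one of several distance measures used in the PIP literature. Starting from the two endpoints, the point farthest from the chord joining its neighbouring selected points is added repeatedly:

```python
def pip_reduce(series, n_points):
    """Reduce `series` to the indices of its `n_points` perceptually
    important points (PIPs).

    Starts from the two endpoints and repeatedly adds the interior point
    with the largest vertical distance to the straight line joining its
    two neighbouring PIPs.
    """
    n = len(series)
    keep = [0, n - 1]                 # indices of selected PIPs, sorted
    while len(keep) < min(n_points, n):
        best_i, best_d = None, -1.0
        for a, b in zip(keep, keep[1:]):
            for i in range(a + 1, b):
                # vertical distance from series[i] to the chord (a, b)
                t = (i - a) / (b - a)
                interp = series[a] + t * (series[b] - series[a])
                d = abs(series[i] - interp)
                if d > best_d:
                    best_i, best_d = i, d
        if best_i is None:            # no interior points left
            break
        keep.append(best_i)
        keep.sort()
    return keep
```

The reduced index set keeps the visually salient turning points (e.g. the spike in `[0, 1, 0, 5, 0, 1, 0]`) while discarding small fluctuations, which is what makes the downstream pattern matching less noise-sensitive.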


2021 ◽  
Author(s):  
Erik Otović ◽  
Marko Njirjak ◽  
Dario Jozinović ◽  
Goran Mauša ◽  
Alberto Michelini ◽  
...  

In this study, we compared the performance of machine learning models trained using transfer learning with that of models trained from scratch on time series data. Four machine learning models were used in the experiment: two taken from the field of seismology, and two general-purpose models for working with time series data. The accuracy of the selected models was systematically observed and analyzed when switching within the same application domain (seismology), as well as between mutually different application domains (seismology, speech, medicine, finance). In seismology, we used two databases of local earthquakes (one in counts, the other with the instrument response removed) and a database of global earthquakes for predicting earthquake magnitude; the other datasets targeted classifying spoken words (speech), predicting stock prices (finance), and classifying muscle movement from EMG signals (medicine).

In practice, it is very demanding and sometimes impossible to collect labeled datasets large enough to successfully train a machine learning model. Therefore, in our experiment we used reduced datasets of 1,500 and 9,000 data instances to mimic such conditions. Using the same scaled-down datasets, we trained two sets of machine learning models: those that used transfer learning and those trained from scratch. We compared the performance of each pair of models in order to draw conclusions about the utility of transfer learning. To confirm the validity of the obtained results, we repeated the experiments several times and applied statistical tests to confirm the significance of the results. The study shows when, within the chosen experimental framework, the transfer of knowledge brought improvements in model accuracy and in convergence rate.

Our results show that it is possible to achieve better performance and faster convergence by transferring knowledge from the domain of global earthquakes to the domain of local earthquakes, and sometimes also vice versa. Improvements in seismology can, however, sometimes also be achieved by transferring knowledge from the medical and audio domains. The results show that knowledge transfer between other domains brought even more significant improvements than transfer within the field of seismology. For example, models in the field of sound recognition achieved much better performance compared with classical models, and the domain of sound recognition proved very compatible with knowledge from other domains. We reached similar conclusions for the domains of medicine and finance. Ultimately, the paper offers suggestions as to when transfer learning is useful, and the explanations offered can provide a good starting point for knowledge transfer using time series data.
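The mechanics of the comparison (pretrain on a large source task, reuse the learned weights to initialise a small target task, compare against training from scratch) can be shown with a toy NumPy sketch. This is not the paper's seismology models: the linear model, the `train_linear` helper, and the synthetic tasks are illustrative assumptions chosen so the transfer effect is visible.

```python
import numpy as np

def train_linear(X, y, w0, lr=0.1, steps=200):
    """Plain gradient descent on mean-squared error for a linear model."""
    w = w0.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)

# Large, noiseless source task.
X_src = rng.normal(size=(200, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y_src = X_src @ w_true
w_src = train_linear(X_src, y_src, np.zeros(5))   # "pretraining"

# Tiny target task with a related but shifted solution.
X_tgt = rng.normal(size=(20, 5))
y_tgt = X_tgt @ (w_true + 0.2)

# Few fine-tuning steps: transferred initialisation vs from scratch.
w_transfer = train_linear(X_tgt, y_tgt, w_src, steps=10)
w_scratch = train_linear(X_tgt, y_tgt, np.zeros(5), steps=10)

err = lambda w: float(np.mean((X_tgt @ w - y_tgt) ** 2))
```

Because the source and target solutions are close, the transferred initialisation reaches a lower target error in the same small number of steps, which is the "better performance and faster convergence" effect the study measures at scale.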


2012 ◽  
Vol 2012 ◽  
pp. 1-21 ◽  
Author(s):  
Md. Rabiul Islam ◽  
Md. Rashed-Al-Mahfuz ◽  
Shamim Ahmad ◽  
Md. Khademul Islam Molla

This paper presents a subband approach to financial time series prediction. Multivariate empirical mode decomposition (MEMD) is employed for a joint multiband representation of multichannel financial time series. An autoregressive moving average (ARMA) model is used to predict each individual subband, and the predicted subband signals are then summed to obtain the overall prediction. The ARMA model works better for stationary signals; with the multiband representation, each subband becomes a band-limited (narrow-band) signal, and hence better prediction is achieved. The performance of the proposed MEMD-ARMA model is compared with classical EMD, the discrete wavelet transform (DWT), and a full-band ARMA model in terms of signal-to-noise ratio (SNR) and mean square error (MSE) between the original and predicted time series. The simulation results show that the MEMD-ARMA-based method performs better than the other methods.
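The decompose-predict-sum pipeline can be sketched in much-simplified form. This is not MEMD: a moving-average trend plus its residual stands in for the data-driven intrinsic mode functions, and a least-squares AR fit stands in for the full ARMA model; all names (`subbands`, `ar_predict`) are illustrative.

```python
import numpy as np

def subbands(x, window=8):
    """Toy two-band decomposition: slow trend (moving average) + residual.

    A stand-in for MEMD, which would produce several data-driven,
    narrow-band intrinsic mode functions instead of just two bands.
    """
    kernel = np.ones(window) / window
    pad = np.concatenate([np.full(window - 1, x[0]), x])  # left-pad edges
    trend = np.convolve(pad, kernel, mode="valid")
    return trend, x - trend          # bands sum back to the original

def ar_predict(x, order=2):
    """Fit an AR(order) model by least squares and predict the next value.

    Only the AR part of ARMA, which is enough to show the per-subband
    prediction step.
    """
    rows = [x[i:i + order] for i in range(len(x) - order)]
    X, y = np.array(rows), x[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(x[-order:] @ coef)

# Predict each subband separately, then sum the subband predictions.
t = np.arange(200)
x = np.sin(2 * np.pi * t / 40) + 0.3 * np.sin(2 * np.pi * t / 7)
trend, resid = subbands(x)
pred = ar_predict(trend) + ar_predict(resid)
```

Each band is closer to a narrow-band, locally stationary signal than the raw series, which is exactly the property the abstract argues makes the per-subband AR(MA) fits more accurate than a single full-band fit.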


2005 ◽  
Vol 50 (01) ◽  
pp. 1-8 ◽  
Author(s):  
PETER M. ROBINSON

Much time series data are recorded on economic and financial variables. Statistical modeling of such data is now very well developed, and has applications in forecasting. We review a variety of statistical models from the viewpoint of "memory", or strength of dependence across time, which is a helpful discriminator between different phenomena of interest. Both linear and nonlinear models are discussed.

