Prediction of Time Series Empowered with a Novel SREKRLS Algorithm
Published: 16 December 2021
Bilal Shoaib, Yasir Javed, Muhammad Adnan Khan, Fahad Ahmad, Rizwan Majeed, Muhammad Saqib Nawaz, Muhammad Adeel Ashraf, Abid Iqbal and Muhammad Idrees
1School of Computer Science, Minhaj University Lahore, Lahore, 54000, Pakistan
2Prince Sultan University, Riyadh, 11586, Saudi Arabia
3Department of Computer Science, Riphah International University Lahore Campus, Lahore, 54000, Pakistan
4Department of Basic Sciences, Deanship of Common First Year, Jouf University, Sakaka, Aljouf, 72341, Saudi Arabia
5Directorate of IT, The Islamia University of Bahawalpur, Bahawalpur, 63100, Pakistan
6Department of Computer Science, University of Management and Technology, Lahore, 54770, Pakistan
7PUCIT, University of the Punjab, Lahore, 54000, Pakistan
Abstract: For the unforced non-linear dynamical state-space model, a new and efficient square root extended kernel recursive least squares (SREKRLS) estimation algorithm is developed in this article. The proposed algorithm lends itself to parallel implementation, as in FPGA systems. With the help of an orthonormal triangularization method that relies on numerically stable Givens rotations, the computational burden caused by matrix inversion is reduced. The algorithm preserves excellent numerical properties of the matrix computations, such as non-singularity, symmetry, skew symmetry, and triangularity. The proposed method is validated on the prediction of stationary and non-stationary Mackey-Glass time series; in addition, the x-component of the Lorenz time series is predicted to illustrate its usefulness. Learning curves in terms of mean square error (MSE) demonstrate the prediction performance of the proposed algorithm, from which it is concluded that it performs better than the extended kernel recursive least squares (EKRLS) algorithm. Multiple experiments at different noise levels are carried out to validate the reliability, effectiveness, and applicability of the proposed algorithm in comparison with EKRLS. This new SREKRLS-based design opens the way towards non-linear systolic arrays, which are efficient for developing very-large-scale integration (VLSI) applications with non-linear input data.
Keywords: kernel methods; square root adaptive filtering; Givens rotation; Mackey-Glass time series prediction; recursive least squares; kernel recursive least squares; extended kernel recursive least squares; square root extended kernel recursive least squares algorithm
1 Introduction
The recursive least squares (RLS) algorithm has been studied for the past two to three decades for solving well-known linear signal processing problems such as object tracking, adaptive beamforming, control systems, and channel equalization. RLS shows faster convergence than the least mean square (LMS) algorithm; however, in non-stationary environments its performance degrades slightly. The extended RLS and weighted extended RLS algorithms further extend the RLS family and significantly improve its performance in non-stationary environments. Support vector machines have been successful in regression and learning tasks, and the kernel approach is a further extension of this line of work. To generate elegant and efficient non-linear algorithms, the kernel approach is applied to linear signal processing algorithms such as least mean square, affine projection, RLS, extended RLS, and principal component analysis by reworking them in reproducing kernel Hilbert spaces (RKHS) through the well-known kernel trick. In recent decades, kernel-based learning algorithms such as support vector machine learning, kernel discriminant analysis, and kernel principal component analysis have become widely known in research. In complicated non-linear environments, these methods show noticeable performance on regression and classification problems. The motive of the kernel-based approach is to transform the inputs into a feature space and then carry out the computation there through linear learning: the inputs are mapped into a high-dimensional feature space where they are linearly separable, so linear computations can easily be applied. The famous kernel trick evaluates inner products of the transformed inputs without explicitly working in the high-dimensional feature space. Kernel methods such as kernel LMS, kernel RLS, the kernel affine projection algorithm, extended kernel RLS, and kernel principal component analysis have gained enormous popularity in the recent past when applied to many complex non-linear problems [1-4].
To increase the precision and accuracy of the kernel RLS algorithm, an error-amending technique for short-term wind power forecasting was introduced. This method iteratively collects data at several intervals and gives a hybrid regression model for wind power forecasting systems. After careful selection of the input variables and the data window length, the method also captures fluctuation characteristics arising from the spatial and temporal correlations in wind farm identification error [5].
The restricted window length of the sliding-window KRLS algorithm makes it unable to handle temporal data points on a larger time scale. The concept of a reservoir was therefore established: a large number of sparsely interconnected hidden units viewed as a temporal structure that transforms the time series history into a particular state in a high-dimensional feature space. The KRLS algorithm is then used to estimate the relationship between the target and the reservoir state [6].
The prediction of the transition matrix in RKHS is a challenging task. To cope with this issue, an extension of EKRLS was introduced that constructs the state model in the conventional state space and uses a Kalman filter to estimate the hidden states, while the KRLS algorithm estimates the measurement model. This approach shows suitable flexibility in estimating the noise model and allows a linear mapping in RKHS [7].
Kernel adaptive filters use kernel functions that exhibit smoothness, symmetry, and linearity over the entire input domain. The activation function in neural networks is also a kind of symmetric kernel function. A flexible kernel expansion-based activation function reduces the overall cost of the system. Using kernels as activation functions in neural networks helps in approximating a comprehensive mapping specified across a subset of the real line, whether convex or non-convex [8].
As a network grows with incoming data samples, the overall computation time and memory requirements increase. A compact and accurate method of restricting the network's size can accelerate learning and lower the computational burden of kernel adaptive filtering. The least important data sample is removed from the dictionary, and the remaining centers are ordered by the value of their importance. This pruning technique dramatically improves the convergence time of KRLS [9].
Linear LMS is used for training the weights of the cerebellar model articulated neural (CMAC) network in an online manner [10]. CMAC is a fast-converging method, and typically only one epoch is required, so there is no issue of selecting a suitable learning rate. The use of the RLS algorithm in CMAC for weight updating is computationally inefficient [11-14]. Kernel methods for CMAC, on the other hand, do carry a computational burden, but the memory requirements and modeling capabilities are somewhat improved [15,16].
This research work presents a novel square root EKRLS algorithm with a spontaneous dynamic model for the prediction of chaotic stationary and non-stationary time series. The square root version of EKRLS converges faster than EKRLS. The computational burden is also lowered because of the Givens rotation, which gives excellent online numerical stability. At each time step, the new patterns are added to the triangular-form data window of the Givens rotation, and the rotations are performed with sines and cosines, which avoids computing a matrix inverse at each iteration. The usefulness of the proposed algorithm is checked on the stationary and non-stationary cases of the chaotic, non-linear Mackey-Glass time series prediction.
The paper is structured in six sections. Section 2 presents details of the extended RLS algorithm. Section 3 carries a brief introduction to kernels and the EKRLS algorithm. The unforced dynamical model of the proposed SREKRLS algorithm is discussed in Section 4; for simplicity, all derivations and formulations of the algorithm are performed with real variables. Section 5 holds the simulation results and discussion under the proposed framework. Conclusions and future directions are presented in Section 6.
2 Algorithms
When describing sequential approximation, a generic linear state-space model is given as:

$$x(i+1) = A\,x(i) + n(i) \tag{1}$$

$$d(i) = u(i)^{T}x(i) + v(i) \tag{2}$$
Eq. (1) represents the state model, and Eq. (2) represents the observation model. Here A, n(i), and v(i) represent the state transition matrix, the state (process) noise, and the observation (measurement) noise, respectively. The state model is commonly referred to as the process equation, while the observation model is referred to as the measurement equation. Both noises are white and Gaussian distributed. Tab. 1 shows Algorithm 2, which contains a summary of the extended recursive least squares (ERLS) algorithm.
Table 1: Algorithm 2 (ERLS)
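As a minimal sketch of the recursion summarized in Tab. 1 (our own illustration; the function name and the exact arrangement of the update may differ in detail from the tabulated algorithm), one ERLS step for the model of Eqs. (1)-(2) can be written as:

```python
import numpy as np

def erls_update(w, P, u, d, A, q, beta):
    """One extended RLS step for x(i+1) = A x(i) + n(i), d(i) = u(i)^T x(i) + v(i)."""
    r = beta + u @ P @ u                       # innovation (error) variance
    g = A @ P @ u / r                          # gain vector
    e = d - u @ w                              # a priori prediction error
    w_new = A @ w + g * e                      # updated state (weight) estimate
    P_new = A @ P @ A.T - np.outer(g, u @ P @ A.T) + q * np.eye(len(w))
    return w_new, P_new
```

Here beta plays the role of the regularization term and qI models the process-noise covariance.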
3 Introduction to Kernel Methods and EKRLS Algorithm
This section gives a detailed description of positive definite kernels and the feature space, or reproducing kernel Hilbert space, followed by the EKRLS algorithm along with brief details of the algorithm.
3.1 Kernels
The intended theme of using kernel functions is to transform points x(i) of the input data into a feature space F as φ(x(i)). To discover the presence of linear relationships in the data, several different methods can be applied in feature space [2]. The kernel is a symmetric, continuous, and positive definite function k: X × X → R, where X, the input domain, is a subset of R^m. The Gaussian kernel is perhaps the most widely used kernel function:

$$k(x, x') = \exp\left(-\frac{\|x - x'\|^{2}}{2\sigma^{2}}\right) \tag{3}$$
Here, the kernel width is represented by σ. The kernel function aids the adaptive algorithm in manipulating the transformed input data in feature space: it allows the adaptive algorithm to manage the transformed inputs without knowing the coordinates of the data in that space. The kernel function maintains the properties of symmetry and convexity throughout the domain.
As per Mercer's theorem [1,2], every kernel function k(c, c′) implicitly transforms the input data into a high-dimensional feature space (a reproducing kernel Hilbert space) [8] and is generally calculated through inner products, provided as:

$$k(c, c') = \langle \varphi(c), \varphi(c') \rangle \tag{4}$$
Thus the mapping/projection is performed as φ: U → F. This method is known as the famous kernel trick. This scheme is reused to develop the class of algorithms known as kernel filtering.
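As a small, self-contained illustration of the kernel trick (our own example, not taken from the paper), the snippet below evaluates the Gaussian kernel of Eq. (3) on a set of inputs and verifies the symmetry and positive semi-definiteness of the resulting Gram matrix:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=0.2):
    """Gaussian kernel of Eq. (3): k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    diff = np.asarray(x) - np.asarray(y)
    return np.exp(-np.dot(diff, diff) / (2.0 * sigma**2))

# The kernel trick: K[i, j] = <phi(x_i), phi(x_j)> is computed directly from
# the inputs, without ever forming the feature map phi explicitly.
X = np.random.randn(5, 3)                       # five 3-dimensional inputs
K = np.array([[gaussian_kernel(xi, xj) for xj in X] for xi in X])
assert np.allclose(K, K.T)                      # symmetry
assert np.all(np.linalg.eigvalsh(K) > -1e-10)   # positive semi-definiteness
```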
3.2 Extended Kernel Recursive Least Squares Algorithm
Starting from a particular unforced dynamical non-linear state-space model, the state model is given in Eq. (5):

$$x(n+1) = A\,x(n) \tag{5}$$

The measurement model is given in Eq. (6):

$$d(n) = \varphi(u(n))^{T}x(n) + v(n) \tag{6}$$
The measurement noise is white with zero mean, and there is no process noise, owing to the spontaneous dynamic model. Considering the model mentioned above, the recursive ERLS equations, rewritten in the feature space, yield:

$$M(n) = A\left[M(n-1) - \frac{M(n-1)\varphi(n)\varphi(n)^{T}M(n-1)}{\beta_n + \varphi(n)^{T}M(n-1)\varphi(n)}\right]A^{T} + qI \tag{7}$$
Eq. (7) is the Riccati difference equation in the feature space. Keeping in view the dimensions of this equation, it will be decomposed and adjusted in matrix form for further computation through the Givens notation.
Figure 1:Cell structure of the square root elements
Fig. 1 describes the primary cell structure used in the Givens rotation in the high-dimensional feature space. The transition matrix depends on the value of λ.
Fig. 2 describes the complete architecture of how the Givens rotation works in RKHS. In the first quadrant, the input is transformed into the high-dimensional feature space using the positive definite Gaussian kernel function. Then the inverse of the Riccati difference equation is computed by decomposing M(n) into triangular form.
Figure 2: Architecture of the Givens rotation in the feature space
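To make the rotation step concrete, the following sketch (our own illustration) constructs a single Givens rotation that annihilates one element of a vector; a cascade of such rotations, applied with only sines and cosines, triangularizes the pre-array in the way the architecture of Fig. 2 realizes in RKHS:

```python
import numpy as np

def givens(a, b):
    """Return (c, s) such that [[c, s], [-s, c]] @ [a, b] = [r, 0]."""
    if b == 0.0:
        return 1.0, 0.0
    r = np.hypot(a, b)           # sqrt(a^2 + b^2), computed overflow-safely
    return a / r, b / r          # cosine and sine of the rotation angle

v = np.array([3.0, 4.0])
c, s = givens(v[0], v[1])
G = np.array([[c, s], [-s, c]])  # orthogonal: G @ G.T = I
print(G @ v)                     # -> [5., 0.]; the second entry is zeroed
```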
4 Proposed Square Root Extended Kernel Recursive Least Squares Algorithm
To acquire the rotation-based characteristic of KRLS for the unforced dynamic model, we first disregard the term n(n), since qI vanishes, and organize the essential terms of Eq. (7) in the form of a matrix. The critical terms are noted as:
1. A 1×1 scalar: $\beta_n+\varphi(n)^{T}M(n-1)\varphi(n)$
2. A 1×n vector: $\varphi(n)^{T}M(n-1)A^{T}$
3. An n×1 vector: $AM(n-1)\varphi(n)$
4. An n×n matrix: $AM(n-1)A^{T}$
These four terms carry all the information regarding the Riccati difference Eq. (7). Keeping an eye on the dimensionality of these terms, we can arrange them in the form of a matrix; the combined form is written as given below:

$$\begin{bmatrix} \beta_n+\varphi(n)^{T}M(n-1)\varphi(n) & \varphi(n)^{T}M(n-1)A^{T} \\ AM(n-1)\varphi(n) & AM(n-1)A^{T} \end{bmatrix} \tag{8}$$
Its factored form yields

$$\begin{bmatrix} \beta_n^{1/2} & \varphi(n)^{T}M(n-1)^{1/2} \\ 0 & AM(n-1)^{1/2} \end{bmatrix} \begin{bmatrix} \beta_n^{1/2} & 0 \\ M(n-1)^{T/2}\varphi(n) & M(n-1)^{T/2}A^{T} \end{bmatrix}$$
We interpret the right-hand side of Eq. (8) as the product of one matrix and its transpose. Applying this interpretation together with the matrix factorization lemma, we post-multiply the pre-array by a unitary rotation Θ(n) and calculate the unspecified elements as in Eq. (9):

$$\begin{bmatrix} \beta_n^{1/2} & \varphi(n)^{T}M(n-1)^{1/2} \\ 0 & AM(n-1)^{1/2} \end{bmatrix}\Theta(n) = \begin{bmatrix} b_{11}(n) & 0 \\ b_{21}(n) & B_{22}(n) \end{bmatrix} \tag{9}$$
The non-zero block elements of the matrix on the right, call it B, are made up of the scalar $b_{11}(n)$, the vector $b_{21}(n)$, and the matrix $B_{22}(n)$, which are formed as a result of the unitary rotation Θ(n). To determine the missing or unknown elements, take the square of both sides of Eq. (9):

$$\begin{bmatrix} \beta_n^{1/2} & \varphi(n)^{T}M(n-1)^{1/2} \\ 0 & AM(n-1)^{1/2} \end{bmatrix}\Theta(n)\,\Theta(n)^{T}\begin{bmatrix} \beta_n^{1/2} & \varphi(n)^{T}M(n-1)^{1/2} \\ 0 & AM(n-1)^{1/2} \end{bmatrix}^{T} = B\,B^{T} \tag{10}$$
Since Θ(n) is a unitary matrix, $\Theta(n)\Theta(n)^{T}$ is equivalent to the identity. Matching the blocks on both sides, the remaining terms are worded as follows:

$$b_{11}(n)^{2} = \beta_n+\varphi(n)^{T}M(n-1)\varphi(n) \tag{11}$$

$$b_{11}(n)\,b_{21}(n) = AM(n-1)\varphi(n) \tag{12}$$

$$b_{21}(n)\,b_{21}(n)^{T} + B_{22}(n)\,B_{22}(n)^{T} = AM(n-1)A^{T} \tag{13}$$
Fulfilling the above-mentioned Eqs. (10)-(13), we identify

$$b_{11}(n) = r(n)^{1/2} \tag{14}$$

$$b_{21}(n) = g(n)\,r(n)^{1/2} \tag{15}$$

$$B_{22}(n) = M(n)^{1/2} \tag{16}$$
The term $r(n)^{1/2}$ is the one-half power of the variance. The second equation accounts for the gain together with the variance term. The third gives the Cholesky factor of the state-error correlation matrix.
The $\beta_n^{1/2}$ and 0 elements in the pre-array induce two variables: the variance $r(n)$ and the gain $g(n)$. Both variables come from squaring and dividing the first-column entries of the post-array. The series of equations (14) through (16) gives a specific description of the square root extended kernel RLS and explicitly provides definitions of the EKRLS parameters. In actual practice, the rotation matrix $\Theta_{kl}$ is a unit matrix except for four entries, located where rows and columns k and l intersect. A complete summary of the algorithm is specified in Tab. 2, which articulates the computation from pre-array to post-array.
Table 2: SREKRLS (spontaneous dynamical model) algorithm
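As a minimal, hedged sketch of the array recursion summarized in Tab. 2 (our own reconstruction: a QR factorization stands in for the explicit cascade of Givens rotations, and names such as sqrt_update are illustrative), one propagation of the Cholesky factor $M(n)^{1/2}$ reads:

```python
import numpy as np

def sqrt_update(M_sqrt, phi, A, beta):
    """Triangularize the pre-array of Eq. (9) and read off r(n)^{1/2},
    the gain g(n), and the updated Cholesky factor M(n)^{1/2}."""
    n = M_sqrt.shape[0]
    pre = np.zeros((n + 1, n + 1))
    pre[0, 0] = np.sqrt(beta)                 # beta_n^{1/2}
    pre[0, 1:] = phi @ M_sqrt                 # phi(n)^T M(n-1)^{1/2}
    pre[1:, 1:] = A @ M_sqrt                  # A M(n-1)^{1/2}
    # QR of the transpose supplies the orthogonal rotation Theta(n):
    # pre.T = Q R  implies  pre @ Q = R^T, the lower-triangular post-array.
    Q, R = np.linalg.qr(pre.T)
    post = R.T
    r_sqrt = post[0, 0]                       # b11(n) = r(n)^{1/2} (up to sign)
    g = post[1:, 0] / r_sqrt                  # b21(n) / b11(n) = g(n)
    M_sqrt_new = post[1:, 1:]                 # B22(n) = M(n)^{1/2}
    return r_sqrt, g, M_sqrt_new
```

Because Θ(n) only needs to be orthogonal, the QR-based rotation is numerically equivalent to the Givens cascade; in a systolic-array realization, the individual rotations of Fig. 2 would be applied instead.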
5 Simulations and Results
In this part of the paper, the findings on the prediction of the Mackey-Glass time series and of the x-component of the three-dimensional Lorenz time series are discussed with the aid of the suggested algorithm.
5.1 Prediction of Mackey Glass Time Series(PMGTS)
To estimate the MGTS, both stationary and non-stationary versions of the time series are considered in this experiment. The Mackey-Glass time series is created using the delay differential equation given below:

$$\frac{dx(t)}{dt} = \frac{\beta\,x(t-\tau)}{1 + x(t-\tau)^{10}} - \alpha\,x(t)$$
With α = 0.1, β = 0.2, and τ = 20, we obtain a series x(t) for t = 1, 2, 3, ..., 5000 from the above delay differential equation, received by sampling the continuous curve x(t) at one-second intervals. The cost function of the time series prediction is $e = \|y_{des} - y_{obt}\|^{2}$, where $y_{des} = w^{T}x_{tr}$, with w the trained weights and $x_{tr}$ the training samples, and $y_{obt} = w^{T}x_{te}$, with $x_{te}$ the samples utilized for testing.
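For reference, a minimal first-order (Euler) generator of the series with a unit time step is sketched below (our own illustration; the constant initial history x0 is an assumption, not taken from the paper):

```python
import numpy as np

def mackey_glass(n_samples=5000, alpha=0.1, beta=0.2, tau=20, x0=1.2):
    """Euler integration of dx/dt = beta*x(t-tau)/(1 + x(t-tau)^10) - alpha*x(t)
    with a unit step, matching the one-second sampling interval."""
    x = np.zeros(n_samples + tau)
    x[:tau] = x0                                # constant initial history (assumed)
    for t in range(tau, n_samples + tau - 1):
        x_del = x[t - tau]
        x[t + 1] = x[t] + beta * x_del / (1.0 + x_del**10) - alpha * x[t]
    return x[tau:]

series = mackey_glass()
```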
5.1.1 Stationary Series
In this experiment, the filter order is taken as 10, i.e., x(t) = [x(t−10), x(t−9), x(t−8), ..., x(t−1)]^T: for the prediction of the current sample x(t), the previous 10 sample points are used as input. Three thousand (3000) sample points are used for training purposes, while the next 250 sample points are used for testing the proposed algorithm. Training on the stationary sequence is performed with sample points 1501-4500 for the output analysis of the proposed algorithm, while sample points 4600-4850 are used to evaluate it. During the training step, weights are modified, and testing uses these modified weights; learning curves of the mean square error (MSE) are produced to evaluate the proposed algorithm. Noise with multiple variances is applied to the time series for further validation of the proposed algorithm's output. Five hundred (500) Monte Carlo simulations are run for algorithmic weight modification, and these modified weights are used for the sample prediction. To produce the results, zero-mean additive noise with a variance of 0.09 is applied to the series; see Fig. 3.
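Continuing the previous sketch, the corresponding data preparation can be written as follows (the helper name is ours, and the indices approximately follow the split described above):

```python
import numpy as np

def make_regression_pairs(series, order=10):
    """Build inputs [x(t-10), ..., x(t-1)] and targets x(t)."""
    X = np.array([series[t - order:t] for t in range(order, len(series))])
    y = series[order:]
    return X, y

X, y = make_regression_pairs(series, order=10)
X_tr, y_tr = X[1500:4500], y[1500:4500]   # training: sample points 1501-4500
X_te, y_te = X[4599:4850], y[4599:4850]   # testing: sample points 4600-4850
```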
Figure 3:Mean square error curves with stationary MGTSP for EKRLS and SREKRLS while noise variance (NV)=0.09
The algorithm parameters are set as follows: EKRLS (A = αI, ϵ = 0.01, P(0) = λI with λ = β = 0.01795, α = 0.3, and a Gaussian kernel of width 0.2) and the proposed SREKRLS (A = αI, ϵ = 0.01, P(0) = λI with λ = β = 0.01795, α = 0.3, and a Gaussian kernel of width 0.3).
AWGN of variance 0.07 is applied to the original series, and the estimated output of the suggested algorithm improves by 0.01 dB. To further confirm the results, the noise level is increased to a variance of 0.20, where an improvement of 0.07 dB over the EKRLS algorithm is shown. From these tests, it is concluded that the proposed algorithm is more robust to noise.
The curves in Fig. 4 depict an improvement of 0.04 dB with noise of variance 0.2 and an improvement of 0.05 dB when noise of variance 0.1 is applied; the data in Tab. 3 show the mean square error at different noise variances for EKRLS and SREKRLS.
Figure 4:MSE curve of EKRLS and SREKRLS for stationary Mackey Glass Time series Prediction with noise variance 0.1
See Tab. 3 for a comparative analysis of the proposed algorithm with EKRLS at different noise variance levels. It is noted that SREKRLS gives more attractive results in terms of mean square error as compared to EKRLS. It is also observed that the proposed SREKRLS gives more accurate results at high noise variance.
Table 3:Mean square error with stationary MGTSP at different noise variance
5.1.2 Non-Stationary Series
For further examination and validation of the proposed algorithm's behavior in comparison with EKRLS, a non-stationary series is formed by adding a sinusoid (y = 0.3 sin(2πft), f = 1/5000) with an amplitude of 0.3, sampled at five thousand (5000) samples/second, to the original MGTS. The same factors and samples are used as in the stationary series.
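Continuing the earlier sketch, the non-stationary series can be formed by superimposing the sinusoid on the Mackey-Glass series:

```python
import numpy as np

t = np.arange(5000)
series_ns = series + 0.3 * np.sin(2.0 * np.pi * t / 5000.0)  # one slow cycle over the record
```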
Figs. 5-8 show the MSE curves when the noise variance is set to 0.09, 0.07, 0.2, and 0.1, respectively.
Figure 5:Mean square error with non-stationary MGTSP for EKRLS and SREKRLS while noise variance (NV)=0.09
Tab. 4 provides insight into the mean square error values for different values of noise variance (NV) with the non-stationary MGTSP for both EKRLS and SREKRLS.
5.2 Lorenz Time Series Prediction
The Lorenz system is a non-linear, three-dimensional, deterministic system. It is represented by the following set of ordinary differential equations:

$$\frac{dx}{dt} = \alpha(y - x), \qquad \frac{dy}{dt} = x(\gamma - z) - y, \qquad \frac{dz}{dt} = xy - Bz$$
Figure 6:Mean square error with non-stationary MGTSP for EKRLS and SREKRLS while noise variance (NV)=0.2
Figure 7:Mean square error for x-component of LTSP of EKRLS and SREKRLS while noise variance is absent
Lorenz's unpredictable behavior is obtained with the parameters α = 10, γ = 28, and B = 8/3. With the help of a first-order approximation procedure, using a sample time of 0.001 s, and with the initial conditions defined as x(0) = 1, y(0) = 1, and z(0) = 1, a three-dimensional (3D) Lorenz series is produced. The x-component of the LTS is extracted, and 5000 instances are derived; samples 1500-4500 are used for training the suggested algorithm and its counterpart, while 300 of the remaining samples are utilized as the testing set. Learning curves in terms of mean square error in the absence of noise are shown in Fig. 7. To predict the current sample, the order of the filter is designated as three (3). The algorithm parameters are set as EKRLS (A = αI, ϵ = 0.01, P(0) = λI with λ = β = 0.01795, α = 0.3, and a Gaussian kernel of width 0.2) and the proposed SREKRLS (A = αI, ϵ = 0.01, P(0) = λI with λ = β = 0.01795, α = 0.3, and a Gaussian kernel of width 0.4).
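A minimal sketch of the series generation described above (our own illustration; B = 8/3 is the canonical value for the chaotic regime):

```python
import numpy as np

def lorenz_x(n_samples=5000, dt=0.001, alpha=10.0, gamma=28.0, B=8.0/3.0):
    """First-order (Euler) integration of the Lorenz equations; returns the
    x-component, with sample time 0.001 s and x(0) = y(0) = z(0) = 1."""
    x, y, z = 1.0, 1.0, 1.0
    xs = np.empty(n_samples)
    for i in range(n_samples):
        dx = alpha * (y - x)
        dy = x * (gamma - z) - y
        dz = x * y - B * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs[i] = x
    return xs
```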
Figure 8:Mean square error for x-component of LTSP of EKRLS and SREKRLS while noise variance (NV)=0.09
Table 4:Mean square error with non-stationary MGTSP at different noise variance
In Fig. 7, the proposed algorithm's performance is evaluated without adding noise; a performance improvement of 0.02 dB is recorded as compared to EKRLS.
For a further investigation of performance, noise of variance 0.09 is added to the x-component of the Lorenz series, and the performance is observed; see Fig. 8.
Performance results produced by the proposed algorithm compared to EKRLS are displayed in Tab. 5 for the x-component of the LTSP, under both noise-free and noisy conditions at different noise variances.
Table 5:Mean square error for x-component of LTSP of EKRLS and SREKRLS
6 Conclusion
This article has offered an enhanced square root version of the extended kernel recursive least squares (EKRLS) algorithm. The EKRLS algorithm itself is also presented with a reasonable discussion. Both stationary and non-stationary versions of the Mackey-Glass time series are predicted at various noise levels. Learning curves in terms of mean square error (MSE) demonstrate the prediction performance of the proposed algorithm, from which it is concluded that the proposed algorithm performs better than EKRLS. To further test the validity of the proposed algorithm, the x-component of the LTSP is also predicted. This new SREKRLS-based design opens the way towards non-linear systolic arrays, which are efficient for developing very-large-scale integration (VLSI) applications with non-linear input data.
Acknowledgement: Thanks to our families and colleagues, who supported us morally.
Funding Statement: This work is funded by Prince Sultan University, Riyadh, Saudi Arabia.
Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.