Article

The Fractional Differential Polynomial Neural Network for Approximation of Functions

Rabha W. Ibrahim
Institute of Mathematical Sciences, University of Malaya, 50603 Kuala Lumpur, Malaysia
Entropy 2013, 15(10), 4188-4198; https://doi.org/10.3390/e15104188
Submission received: 26 August 2013 / Revised: 5 September 2013 / Accepted: 24 September 2013 / Published: 30 September 2013
(This article belongs to the Special Issue Dynamical Systems)

Abstract

In this work, we introduce a generalization of the differential polynomial neural network utilizing fractional calculus, taken in the sense of the Caputo differential operator. The network approximates a multi-parametric function with particular polynomials characterizing its functional output as a generalization of input patterns. This method can be applied to data in order to model complex systems. Furthermore, the total information of the network is calculated by using the fractional Poisson process.

1. Introduction

The Polynomial Neural Network (PNN) algorithm is one of the most important methods for extracting knowledge from experimental data and finding its best mathematical characterization. The algorithm can be used to analyze complex data sets with the objective of uncovering internal data relationships and expressing them as mathematical formulations (polynomial regressions). One of the most common types of PNN is the Group Method of Data Handling (GMDH) polynomial neural network, created in 1968 by Professor Ivakhnenko at the Institute of Cybernetics in Kyiv (Ukraine).
Based on GMDH, Zjavka developed a new type of neural network called the Differential Polynomial Neural Network (D-PNN) [1,2,3,4]. It organizes and designs special partial differential equations that form a model of a complex system of dependent variables. It constructs a sum of fractional polynomial terms determining the partial mutual derivative changes of combinations of the input variables. This treatment is based on learning generalized data relations. Furthermore, it offers dynamic system models a standard time-series prediction, since the relative character of the data allows it to employ a wider range of input values than the interval defined by the training data. In addition, the variety of differential equation solutions facilitates a wide range of model types. The principle is similar to the construction of an artificial neural network (ANN) [5,6].
Fractional calculus is a branch of mathematical analysis that deals with real or complex powers of the differentiation and integration operators. The integrals are of convolution form and exhibit power-law kernels. It can be viewed as a testing ground for special functions and integral transforms [7,8,9,10,11,12]. It is well known that the physical interpretation of the fractional derivative remains an open problem today. In [13], the author utilized fractional operators, in the sense of the Caputo differential operator, to define and study the stability of recurrent neural networks (NN). In [14], Gardner employed discrete fractional calculus to study artificial neural network augmentation. In [15], Almarashi used neural networks with a radial basis function method to solve a class of initial boundary value problems for fractional partial differential equations. Recently, Jalab et al. applied the neural network method to find numerical solutions of some special fractional differential equations [16]. Zhou et al. proposed a fractional time-domain identification algorithm based on a genetic algorithm [17], while Chen et al. studied the synchronization problem for a class of fractional-order chaotic neural networks [18].
Here, our aim is to introduce a generalization of the differential polynomial neural network utilizing fractional calculus, assumed in the sense of the Caputo differential operator. The network approximates a multi-parametric function with particular polynomials characterizing its functional output as a generalization of input patterns. This method can be applied to data in order to model complex systems [19].

2. Preliminaries

This section concerns some basic preliminaries and notation regarding fractional calculus. One of the most widely used tools in the theory of fractional calculus is the Caputo differential operator.
Definition 2.1. The fractional (arbitrary) order integral of the function h of order β > 0 is defined by:

$$I_a^\beta h(t) = \int_a^t \frac{(t-\tau)^{\beta-1}}{\Gamma(\beta)}\, h(\tau)\, d\tau.$$
When $a = 0$, we write $I^\beta h(t) = h(t) * \chi_\beta(t)$, where $(*)$ denotes the convolution product (see [7]), $\chi_\beta(t) = \frac{t^{\beta-1}}{\Gamma(\beta)}$ for $t > 0$, $\chi_\beta(t) = 0$ for $t \le 0$, and $\chi_\beta(t) \to \delta(t)$ as $\beta \to 0$, where $\delta(t)$ is the delta function.
Definition 2.2. The Riemann-Liouville fractional derivative of the function h of order 0 ≤ β < 1 is defined by:

$$D^\beta h(t) = \frac{d}{dt} I^{1-\beta} h(t) = \frac{1}{\Gamma(1-\beta)}\, \frac{d}{dt} \int_a^t \frac{h(\tau)}{(t-\tau)^{\beta}}\, d\tau.$$
Remark 2.1 [7].

$$D^\beta t^\mu = \frac{\Gamma(\mu+1)}{\Gamma(\mu-\beta+1)}\, t^{\mu-\beta}, \qquad \mu > -1,\; 0 < \beta < 1,$$

and:

$$I^\beta t^\mu = \frac{\Gamma(\mu+1)}{\Gamma(\mu+\beta+1)}\, t^{\mu+\beta}, \qquad \mu > -1,\; \beta > 0.$$
The Leibniz rule is:

$$D^\beta\big[f(t)\, g(t)\big] = \sum_{k=0}^{\infty} \binom{\beta}{k}\, \big(D^{\beta-k} f(t)\big)\, g^{(k)}(t).$$
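Since the power rule of Remark 2.1 is the main computational device used in the experiments below, a minimal numerical sketch may be helpful. The following Python fragment (not part of the original paper; the function name and test values are illustrative) evaluates $D^\beta t^\mu$ directly from the Gamma-function formula:

```python
import math

def frac_deriv_power(mu: float, beta: float, t: float) -> float:
    """Fractional derivative of t**mu of order beta via the power rule:
    D^beta t^mu = Gamma(mu + 1) / Gamma(mu - beta + 1) * t**(mu - beta)."""
    return math.gamma(mu + 1) / math.gamma(mu - beta + 1) * t ** (mu - beta)

# Half-derivative of t^2 at t = 1: Gamma(3)/Gamma(2.5) ~ 1.5045.
print(frac_deriv_power(2.0, 0.5, 1.0))
# At beta = 1 the formula recovers the ordinary derivative d/dt t^2 = 2t.
print(frac_deriv_power(2.0, 1.0, 3.0))  # 6.0
```

As $\beta \to 1$ the rule reproduces the classical derivative, which gives a quick sanity check.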
Definition 2.3. The Caputo fractional derivative of order β > 0 is defined, for a smooth function f, by:

$$^{c}D^\beta f(t) = \frac{1}{\Gamma(n-\beta)} \int_0^t \frac{f^{(n)}(\tau)}{(t-\tau)^{\beta-n+1}}\, d\tau, \qquad n-1 < \beta \le n.$$
The local fractional Taylor formula has been generalized by many authors [20,21,22]. This generalization admits the following formula:

$$f(x) = \sum_{k=0}^{\infty} \frac{\big({}^{c}D_x^{k\beta} f\big)(c)}{\Gamma(k\beta+1)}\, (x-c)^{k\beta},$$

where $^{c}D_x^\beta$ is the Caputo differential operator and:

$$^{c}D_x^{n\beta} = \underbrace{{}^{c}D_x^\beta\, {}^{c}D_x^\beta \cdots {}^{c}D_x^\beta}_{n\ \text{times}}.$$

3. Results

3.1. Proposed Method

The fractional differential polynomial neural network (FD-PNN) is based on an equation of the form:

$$a + \sum_{i=1}^{n} b_i\, \frac{\partial^\beta u}{\partial x_i^\beta} + \sum_{i=1}^{n}\sum_{j=1}^{n} c_{ij}\, \frac{\partial^{2\beta} u}{\partial x_i^\beta\, \partial x_j^\beta} + \sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{k=1}^{n} d_{ijk}\, \frac{\partial^{3\beta} u}{\partial x_i^\beta\, \partial x_j^\beta\, \partial x_k^\beta} + \cdots = 0, \tag{9}$$
where $u := f(x_1, x_2, \ldots, x_n)$ is a function of all input variables and $a, b_i, c_{ij}, d_{ijk}$ are the polynomial coefficients. Solutions of fractional differential equations can be expressed in terms of the Mittag-Leffler function:
$$E_\beta(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(\beta k + 1)}.$$
Recently, numerical routines for Mittag-Leffler functions have been developed, e.g., by Freed et al. [23], Gorenflo et al. [24] (in MATHEMATICA), Podlubny [25] (in MATLAB), and Seybold and Hilfer [26].
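The cited routines implement numerically careful algorithms; for moderate arguments, a direct truncation of the defining series already gives a usable approximation of $E_\beta(z)$. A rough sketch, under the assumption that plain series summation suffices for the argument range of interest:

```python
import math

def mittag_leffler(beta: float, z: float, terms: int = 100) -> float:
    """Truncated series E_beta(z) = sum_{k>=0} z**k / Gamma(beta*k + 1).
    Adequate only for moderate |z|; the routines cited above handle the
    numerically difficult regimes."""
    return sum(z ** k / math.gamma(beta * k + 1) for k in range(terms))

# E_1(z) reduces to exp(z), which gives a quick consistency check:
print(mittag_leffler(1.0, 1.0), math.e)  # both ~ 2.718281828
```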
We proceed to form the sum of derivative terms replacing the fractional partial differential equation (9) by applying different mathematical techniques, e.g., fractional wave series [27]:

$$y_i^\beta = \frac{\left(a_0 + a_1 x_1 + a_2 x_2 + \cdots + a_n x_n + a_{n+1} x_1 x_2 + \cdots\right)^{\frac{m+\beta}{n}}}{b_0 + b_1 x_1 + \cdots} = \frac{\partial^{m\beta} f(x_1, x_2, \ldots, x_n)}{\partial x_1^\beta\, \partial x_2^\beta \cdots \partial x_m^\beta}, \tag{11}$$
where $n$ is the combined degree of the $n$-variable polynomial in the numerator, $m$ is the combined degree of the denominator, $w_t$ are the weights of the terms, and $y_i^\beta$ is the output of the neuron. Note that when $\beta \to 1$, Equation (11) reduces to Equation (4) in [4]. The fractional polynomials of fractional power (11), determining relations among the $n$ input variables, form the summed derivative terms (neurons) of a fractional differential equation. The numerator of Equation (11) is a complete $n$-variable polynomial, which realizes a new partial function $u$ of Equation (9). The denominator of Equation (11) is a fractional derivative part, which expresses a fractional partial change of some combination of the input variables. Equation (11) yields a single output for a fixed fractional power. Each layer of the FD-PNN contains blocks, and these blocks house the fractional derivative neurons: each fractional polynomial of fractional order formulates a fractional partial derivative depending on the change of some of the input variables. Each block contains a unique fractional polynomial, which passes its output into the next hidden layer (Figure 1). For example, in a system consisting of an input layer, a first hidden layer, a second hidden layer and an output layer, we may use $y_1^{1/4}$ to pass its output to the first hidden layer, $y_2^{1/2}$ to pass its output to the second hidden layer, and $y_3^{3/4}$ to produce the final output $y$ of the system in the output layer.
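To make Equation (11) concrete, the sketch below evaluates one neuron term for two inputs ($n = 2$, $m = 1$). The structure follows (11); the coefficient values and the weight are illustrative placeholders, not the trained values used in the experiments:

```python
import math

def fdpnn_neuron(x1: float, x2: float, beta: float, w: float = 1.0,
                 a=(1.0, 1.0, 1.0, 1.0), b=(1.0, 1.0)) -> float:
    """One FD-PNN neuron term in the spirit of Equation (11) with n = 2, m = 1:
    y = w * (a0 + a1*x1 + a2*x2 + a3*x1*x2)**((m + beta)/n) / (b0 + b1*x1).
    The coefficients a, b and the weight w are illustrative placeholders."""
    poly = a[0] + a[1] * x1 + a[2] * x2 + a[3] * x1 * x2
    return w * poly ** ((1 + beta) / 2) / (b[0] + b[1] * x1)

# Output of the same neuron for several fractional orders:
for beta in (1.0, 0.75, 0.5, 0.25):
    print(beta, fdpnn_neuron(1.0, 1.0, beta))
```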
Figure 1. GMDH-PNN.
Let there be a network with two inputs, producing one functional output value $y^\beta$; then, for special values of $\beta$, the sum of derivative terms is:

$$y^\beta = w_1 \frac{\left(a_0 + a_1 x_1 + a_2 x_2 + a_3 x_1 x_2\right)^{\frac{1+\beta}{2}}}{b_0 + b_1 x_1} + w_2 \frac{\left(a_0 + a_1 x_1 + a_2 x_2 + a_3 x_1 x_2\right)^{\frac{1+\beta}{2}}}{b_0 + b_2 x_2}.$$
We realize that $y^\beta$ consists of only one block of two neurons, the terms in both fractional derivative variables $x_1$ and $x_2$. Table 1 shows the approximation errors of the trained network, i.e., the differences between the true and estimated function values, for selected input vectors with dependent variables.
Table 1. Approximation values of $f(x_1, x_2) = x_1 + x_2$.

| Data | Actual Value | Approximate Value | Absolute Error |
|---|---|---|---|
| (1,0) | 1 | $y^1 = y^{3/4} = 3/4$ | 0.25 |
| | | $y^{1/2} = y^{1/4} = 3/4$ | 0.25 |
| (0,1) | 1 | $y^1 = y^{3/4} = 3/4$ | 0.25 |
| | | $y^{1/2} = y^{1/4} = 3/4$ | 0.25 |
| (1,1) | 1 | $y^1 = 1.5$ | 0.5 |
| | | $y^{3/4} = 1.3$ | 0.3 |
| | | $y^{1/2} = 1.1$ | 0.1 |
| | | $y^{1/4} = 0.99$ | 0.01 |
| (1/2,1/2) | 1 | $y^1 = 1.66$ | 0.66 |
| | | $y^{3/4} = 1.6$ | 0.6 |
| | | $y^{1/2} = 1.57$ | 0.57 |
| | | $y^{1/4} = 1.53$ | 0.53 |
| | | $y^{0.1} = 1.4$ | 0.4 |
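The entries of Table 1 come from the trained network and cannot be reproduced without the fitted coefficients; what can be checked independently is the fractional derivative that each neuron models. A minimal sketch, assuming the Caputo operator (under which constants vanish) and the power rule of Remark 2.1:

```python
import math

def caputo_partials_of_sum(x1: float, x2: float, beta: float):
    """Caputo fractional partial derivatives of f(x1, x2) = x1 + x2.
    Under the Caputo operator the derivative of a constant vanishes, so
    d^beta f / dx1^beta = x1**(1 - beta) / Gamma(2 - beta), and symmetrically
    in x2 (power rule of Remark 2.1 with mu = 1)."""
    d1 = x1 ** (1 - beta) / math.gamma(2 - beta)
    d2 = x2 ** (1 - beta) / math.gamma(2 - beta)
    return d1, d2

# At (1, 1) both partials equal 1/Gamma(2 - beta); beta = 1 gives the
# ordinary partial derivatives (1, 1) of x1 + x2.
for beta in (1.0, 0.75, 0.5, 0.25):
    print(beta, caputo_partials_of_sum(1.0, 1.0, beta))
```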
The 3-variable FD-PNN (Table 2) for a linear true function (e.g., $f(x_1, x_2, x_3) = x_1 + x_2 + x_3$) may involve one block of six neurons, FDE terms in all single and pairwise combinations of the derivative variables of the complete FDE, e.g.:
$$y_1^\beta = w_1 \frac{\left(a_0 + a_1 x_1 + a_2 x_2 + a_3 x_3 + a_4 x_1 x_2 + \cdots\right)^{\frac{1+\beta}{3}}}{b_0 + b_1 x_1}$$

and:

$$y_4^\beta = w_4 \frac{\left(a_0 + a_1 x_1 + a_2 x_2 + a_3 x_3 + a_4 x_1 x_2 + \cdots\right)^{\frac{2+\beta}{3}}}{b_0 + b_1 x_1 + b_2 x_2 + b_3 x_1 x_2}.$$
Table 2. Approximation values of $f(x_1, x_2, x_3) = x_1 + x_2 + x_3$.

| Data | Actual Value | Approximate Value | Absolute Error |
|---|---|---|---|
| (1,0,0) | 1 | $y_4^1 = y_4^{3/4} = 1/2$ | 0.5 |
| | | $y_4^{1/2} = y_4^{1/4} = 1/2$ | 0.5 |
| (0,1,0) | 1 | $y_4^1 = y_4^{3/4} = 1/2$ | 0.5 |
| | | $y_4^{1/2} = y_4^{1/4} = 1/2$ | 0.5 |
| (0,0,1) | 1 | $y_4^1 = y_4^{3/4} = 1$ | 0 |
| | | $y_4^{1/2} = y_4^{1/4} = 1$ | 0 |
| (1,1,0) | 1 | $y_4^1 = 1.125$ | 0.125 |
| | | $y_4^{3/4} = 1.025$ | 0.025 |
| | | $y_4^{1/2} = 1.873$ | 0.12 |
| | | $y_4^{1/4} = 0.936$ | 0.063 |
| (1,0,1) | 1 | $y_4^1 = 1.5$ | 0.5 |
| | | $y_4^{3/4} = 1.368$ | 0.368 |
| | | $y_4^{1/2} = 1.249$ | 0.249 |
| | | $y_4^{1/4} = 1.1$ | 0.1 |
| (1,1,1) | 1 | $y_4^1 = 1.6$ | 0.6 |
| | | $y_4^{3/4} = 1.488$ | 0.488 |
| | | $y_4^{1/2} = 1.26$ | 0.26 |
| | | $y_4^{1/4} = 1.0755$ | 0.0755 |
We proceed to compute approximations for non-linear functions. Let $u := f(x_1, x_2)$ be a function with square-power variables; then we have:

$$F\left(x_1, x_2, u, \frac{\partial^\beta u}{\partial x_1^\beta}, \frac{\partial^\beta u}{\partial x_2^\beta}, \frac{\partial^{2\beta} u}{\partial x_1^{2\beta}}, \frac{\partial^{2\beta} u}{\partial x_2^{2\beta}}, \frac{\partial^{2\beta} u}{\partial x_1^\beta\, \partial x_2^\beta}\right) = 0.$$
For example, for β = 1, we recover the integer-order neuron terms of [4]:

[Equation image missing from the source.]

In general, for a fractional power β, we have:

[Equation image missing from the source.]

For example:

[Equation image missing from the source.]
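For the square-power case, the derivative part of each neuron can again be checked termwise with the power rule. A brief sketch, assuming the Caputo operator applied termwise to $(x_1 + x_2)^2 = x_1^2 + 2x_1x_2 + x_2^2$ (this reproduces the derivative part only, not the trained network outputs of Table 3):

```python
import math

def caputo_d1_of_square(x1: float, x2: float, beta: float) -> float:
    """Termwise Caputo derivative in x1 of f = (x1 + x2)**2
    = x1**2 + 2*x1*x2 + x2**2, using the power rule of Remark 2.1:
      D^beta x1**2    -> 2 / Gamma(3 - beta) * x1**(2 - beta)
      D^beta 2*x1*x2  -> 2 * x2 / Gamma(2 - beta) * x1**(1 - beta)
      D^beta x2**2    -> 0   (constant with respect to x1)."""
    return (2.0 / math.gamma(3 - beta) * x1 ** (2 - beta)
            + 2.0 * x2 / math.gamma(2 - beta) * x1 ** (1 - beta))

# beta = 1 recovers the classical partial derivative 2*(x1 + x2):
print(caputo_d1_of_square(1.0, 1.0, 1.0))   # 4.0
print(caputo_d1_of_square(1.0, 1.0, 0.5))   # ~ 3.7613
```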

3.2. Modified Information Theory

In this section, we measure the learning of the neurons of the system in Figure 1. We wish to develop an applicable measure of the information gained from observing the occurrence of an event having probability p. The approach depends on the probability of extinction, which is described by the fractional Poisson process as follows [28]:
$$P_\beta(N, y) = \frac{(\sigma y)^N}{N!} \sum_{n=0}^{\infty} \frac{(n+N)!}{n!}\, \frac{\left(-\sigma y^\beta\right)^n}{\Gamma(\beta(n+N)+1)},$$
where $\sigma \in \mathbb{R}$ is a physical coefficient and $\beta \in (0,1]$. Let $N$ be the number of neurons and $I$ the average information, and suppose further that the source emits the symbols with probabilities $P_1, P_2, \ldots, P_N$, respectively, such that $P_i = P_\beta(i, y)$. Then we may compute the total information as follows:
$$I = \sum_{i=1}^{N} (N P_i) \log\left(\frac{1}{P_i}\right).$$
The last assertion is a modification of the work of Shannon [29]. For example, to compute the average information of the system with N = 3, for the last fractional derivative in Table 3, we have:
$$I = \sum_{i=1}^{3} (N P_i) \log\left(\frac{1}{P_i}\right) = 3 P_1 \log\left(\frac{1}{P_1}\right) + 3 P_2 \log\left(\frac{1}{P_2}\right) + 3 P_3 \log\left(\frac{1}{P_3}\right) \approx 0.2408 - 0.09 - 0.051 = 0.051,$$
where each $P_i$ converges to a hypergeometric function, computed with the help of Maple.
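The following sketch illustrates the computation; the values of $\sigma$ and $y$, the logarithm base, and the truncation length are illustrative assumptions, so the output is not intended to reproduce the figure 0.051 above:

```python
import math

def frac_poisson(N: int, y: float, beta: float, sigma: float = 1.0,
                 terms: int = 60) -> float:
    """Truncated fractional Poisson probability
    P_beta(N, y) = (sigma*y)**N / N! *
                   sum_n (n+N)!/n! * (-sigma*y**beta)**n / Gamma(beta*(n+N)+1)."""
    prefactor = (sigma * y) ** N / math.factorial(N)
    series = sum(math.factorial(n + N) / math.factorial(n)
                 * (-sigma * y ** beta) ** n / math.gamma(beta * (n + N) + 1)
                 for n in range(terms))
    return prefactor * series

def total_information(probs) -> float:
    """Modified Shannon measure I = sum_i (N * P_i) * log(1 / P_i);
    the natural logarithm is used here, as the base is not specified above."""
    N = len(probs)
    return sum(N * p * math.log(1.0 / p) for p in probs if p > 0)

# Assumed parameters: sigma = 1, y = 0.5, beta = 1/4, N = 3 neurons.
P = [frac_poisson(i, 0.5, 0.25) for i in (1, 2, 3)]
print(P, total_information(P))
```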
Table 3. The approximation errors for $f(x_1, x_2) = (x_1 + x_2)^2$.

| Data | Actual Value | Approximate Value | Absolute Error |
|---|---|---|---|
| (1,0) | 2 | $y_{10}^1 = 2$ | 0 |
| | | $y_{10}^{3/4} = 1.834$ | 0.166 |
| | | $y_{10}^{1/2} = 1.681$ | 0.319 |
| | | $y_{10}^{1/4} = 1.5422$ | 0.4577 |
| (0,1) | 2 | $y_{10}^1 = 2$ | 0 |
| | | $y_{10}^{3/4} = 1.834$ | 0.166 |
| | | $y_{10}^{1/2} = 1.681$ | 0.319 |
| | | $y_{10}^{1/4} = 1.5422$ | 0.4577 |
| (1,1) | 2 | $y_{10}^1 = 2.49$ | 0.49 |
| | | $y_{10}^{3/4} = 2.04$ | 0.04 |
| | | $y_{10}^{1/2} = 1.665$ | 0.335 |
| | | $y_{10}^{1/4} = 1.336$ | 0.633 |

4. Discussion

The presented 2-variable FD-PNN (Table 1) is able to approximate any linear function, e.g., the simple sum $f(x_1, x_2) = x_1 + x_2$. The comparison with the D-PNN (the integer-order case) showed that the proposed method converges to the exact values rapidly. For example, the case (1,1) yields an absolute error ABE = 0.01 at $y^{1/4}$. In this experiment, we set $b_0 = 1$ and $w_1 = w_2 = 1$. Figure 2 shows the approximation of the fractional derivative for the function $f(x_1, x_2) = x_1 + x_2$; the x-axis represents the values when $x_1 = x_2$. It is clear that the interval of convergence is $[0.2, 1]$. The presented 3-variable FD-PNN (Table 2) is likewise able to approximate any linear function, e.g., the simple sum $f(x_1, x_2, x_3) = x_1 + x_2 + x_3$. The comparison with the D-PNN showed that the proposed 3-variable method converges swiftly to the exact values. For example, the case (1,1,0), with $w_2 = 3/2$, and the case (1,1,1), with $w_2 = 1$, yield ABE = 0.063 and 0.0755, respectively, at $y^{1/4}$. Furthermore, Figure 3 shows the interval of convergence $[0.3, 1]$; here, we let $x_1 = x_2 = x_3$. A comparable conclusion can be drawn in the non-linear case, where Table 3 reports the approximation values obtained with the FD-PNN. For example, the data (1,1) give the best approximation at $y^{3/4}$ when $w_{10} = 1.5$. In Figure 4, the x-axis represents the values when $x_1 = x_2$; clearly, the interval of convergence is $[0.4, 2]$.
Figure 2. Selected fractional approximation derivative of $f(x_1, x_2) = x_1 + x_2$.
Figure 3. The fractional approximation $y_4$ of the function $f(x_1, x_2, x_3) = x_1 + x_2 + x_3$.
Figure 4. The fractional approximation $y_{10}$ of the function $f(x_1, x_2) = (x_1 + x_2)^2$.

5. Conclusions

Based on the GMDH-PNN (Figure 1) and modifying the work described in [4], we have suggested a generalized D-PNN, called the FD-PNN. The experimental results showed that the proposed method achieves a quick approximation to the exact values in comparison with the standard method. The generalization depended on the Caputo differential operator. This method can be applied to data in order to model complex systems. As a next step, our aim is to extend this work by combining the D-PNN and FD-PNN, e.g., one can consider a function of the form:

$$F\left(x_1, x_2, u, \frac{\partial u}{\partial x_1}, \frac{\partial u}{\partial x_2}, \ldots, \frac{\partial^\beta u}{\partial x_1^\beta}, \frac{\partial^\beta u}{\partial x_2^\beta}, \frac{\partial^{2\beta} u}{\partial x_1^{2\beta}}, \frac{\partial^{2\beta} u}{\partial x_2^{2\beta}}, \frac{\partial^{2\beta} u}{\partial x_1^\beta\, \partial x_2^\beta}, \ldots\right) = 0.$$

Acknowledgments

The author would like to thank the reviewers for their comments on earlier versions of this paper.
This research has been funded by the University of Malaya, under Grant No. RG208-11AFR.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Zjavka, L. Generalization of patterns by identification with polynomial neural network. J. Elec. Eng. 2010, 61, 120–124. [Google Scholar] [CrossRef]
  2. Zjavka, L. Construction and adjustment of differential polynomial neural network. J. Eng. Comp. Inn. 2011, 2, 40–50. [Google Scholar]
  3. Zjavka, L. Recognition of generalized patterns by a differential polynomial neural network. Eng. Tech. Appl. Sci. Res. 2012, 2, 167–172. [Google Scholar]
  4. Zjavka, L. Approximation of multi-parametric functions using the differential polynomial neural network. Math. Sci. 2013, 7, 1–7. [Google Scholar] [CrossRef]
  5. Giles, C.L. Noisy time series prediction using recurrent neural networks and grammatical inference. Machine Learning 2001, 44, 161–183. [Google Scholar] [CrossRef]
  6. Tsoulos, I.; Gavrilis, D.; Glavas, E. Solving differential equations with constructed neural networks. Neurocomputing 2009, 72, 2385–2391. [Google Scholar] [CrossRef]
  7. Podlubny, I. Fractional Differential Equations; Academic Press: New York, NY, USA, 1999. [Google Scholar]
  8. Hilfer, R. Application of Fractional Calculus in Physics; World Scientific: Singapore, 2000. [Google Scholar]
  9. West, B.J.; Bologna, M.; Grigolini, P. Physics of Fractal Operators; Academic Press: New York, NY, USA, 2003. [Google Scholar]
  10. Kilbas, A.A.; Srivastava, H.M.; Trujillo, J.J. Theory and Applications of Fractional Differential Equations; Elsevier: Amsterdam, The Netherlands, 2006. [Google Scholar]
  11. Sabatier, J.; Agrawal, O.P.; Machado, T. Advance in Fractional Calculus: Theoretical Developments and Applications in Physics and Engineering; Springer: London, UK, 2007. [Google Scholar]
  12. Lakshmikantham, V.; Leela, S.; Devi, J.V. Theory of Fractional Dynamic Systems; Cambridge Scientific Pub.: Cambridge, UK, 2009. [Google Scholar]
  13. Jalab, H.A.; Ibrahim, R.W. Stability of recurrent neural networks. Int. J. Comp. Sci. Net. Sec. 2006, 6, 159–164. [Google Scholar]
  14. Gardner, S. Exploring fractional order calculus as an artificial neural network augmentation. Master’s Thesis, Montana State University, Bozeman, MT, USA, April 2009. [Google Scholar]
  15. Almarashi, A. Approximation solution of fractional partial differential equations by neural networks. Adv. Numer. Anal. 2012, 2012, 912810. [Google Scholar] [CrossRef]
  16. Jalab, H.A.; Ibrahim, R.W.; Murad, S.A.; Hadid, S.B. Exact and numerical solution for fractional differential equation based on neural network. Proc. Pakistan Acad. Sci. 2012, 49, 199–208. [Google Scholar]
  17. Zhou, S.; Cao, J.; Chen, Y. Genetic algorithm-based identification of fractional-order systems. Entropy 2013, 15, 1624–1642. [Google Scholar] [CrossRef]
  18. Chen, L.; Qu, J.; Chai, Y.; Wu, R.; Qi, G. Synchronization of a class of fractional-order chaotic neural networks. Entropy 2013, 15, 3265–3276. [Google Scholar] [CrossRef]
  19. Ivakhnenko, A.G. Polynomial theory of complex systems. IEEE Trans. Sys. Man Cyb. 1971, 4, 364–378. [Google Scholar] [CrossRef]
  20. Kolwankar, K.M.; Gangal, A.D. Fractional differentiability of nowhere differentiable functions and dimensions. Chaos 1996, 6, 505–513. [Google Scholar] [CrossRef] [PubMed]
  21. Adda, F.B.; Cresson, J. About non-differentiable functions. J. Math. Anal. Appl. 2001, 263, 721–737. [Google Scholar] [CrossRef]
  22. Odibat, Z.M.; Shawagfeh, N.T. Generalized Taylor’s formula. Appl. Math. Comp. 2007, 186, 286–293. [Google Scholar] [CrossRef]
  23. Freed, A.; Diethelm, K.; Luchko, Y. Fractional-order viscoelasticity (FOV): Constitutive development using the fractional calculus. In First Annual Report NASA/TM-2002-211914; NASA's Glenn Research Center: Cleveland, OH, USA, 2002. [Google Scholar]
  24. Gorenflo, R.; Loutchko, J.; Luchko, Y. Computation of the Mittag-Leffler function Eα,β(z) and its derivative. Frac. Calc. Appl. Anal. 2002, 5, 491–518. [Google Scholar]
  25. Podlubny, I. Mittag-Leffler function, a MATLAB routine. Available online: http://www.mathworks.com/matlabcentral/fileexchange (accessed on 25 March 2009).
  26. Seybold, H.J.; Hilfer, R. Numerical results for the generalized Mittag-Leffler function. Frac. Calc. Appl. Anal. 2005, 8, 127–139. [Google Scholar]
  27. Ibrahim, R.W. Fractional complex transforms for fractional differential equations. Adv. Diff. Equ. 2012, 192, 1–11. [Google Scholar] [CrossRef]
  28. Casasanta, G.; Ciani, D.; Garra, R. Non-exponential extinction of radiation by fractional calculus modelling. J. Quant. Spectrosc. Radiat. Transf. 2012, 113, 194–197. [Google Scholar] [CrossRef]
  29. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
