Modifying Explicit Finite Difference Method by Using Radial Basis Function Neural Network

In this research, we use artificial neural networks, specifically the radial basis function neural network (RBFNN), to improve the performance of the explicit finite difference method (EFDM). The modified method is compared with the classical explicit finite difference method by solving the Murray equation; comparison of the results with the exact solution shows that the method improved by the (RBFNN) is the more accurate, giving a lower error, as measured by the root mean square error (RMSE), than the classical method (EFDM).


Introduction:
Artificial neural networks (ANN) have been widely used in many studies as a very important member of computational intelligence and artificial intelligence. Neural networks, including radial basis function (RBF) networks, have proven to be very efficient at approximating nonlinear and multivariable functions when the training set is selected in a rigorous manner [12]. An (ANN) is applied for solving various problems in industrial applications, such as non-linear control, system diagnosis, data classification, pattern recognition, and function approximation [13].
The finite difference method is one of the most popular methods for the numerical solution of partial differential equations, which are used to model many problems involving unknown functions of several variables distributed in space, or in space and time [11]. In the finite difference method, we approximate the solution by replacing the function's derivatives with numerical difference operators and finding the solution at specific preassigned grid points [10].
The proposed (modified) method is a hybrid method based on the finite difference method and an artificial neural network. In the first stage we obtain the numerical solution from the finite difference approximation; in the second stage this solution is fed to the (RBFNN), which is used to find the solutions at these points. The modified method, using the explicit finite difference method in the first stage and the (RBFNN) in the second stage to solve the Murray equation, gave smaller errors than the classical method.

2. Artificial Neural Network (ANN):
A neural network is a parallel system capable of resolving paradigms that linear computing cannot. An artificial neural network (ANN) is a system based on the operation of biological neural networks [7]. It is composed of a large number of interconnected elements (neurons) working in parallel to solve specific problems. A given (ANN) is configured for a specific application, such as pattern recognition or data classification, through a learning process [8]. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. Neural networks, with their ability to derive meaning from imprecise data, can be thought of as an "expert" and used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques [9] and [12].

Neural Network Structure:
An artificial neural network is composed of several elements [12] and [13]:
1. Input layer: The role of the input units is to receive the raw information that is fed into the network.
2. Hidden layers: These are the processing units of the network. Their activity is determined by the activities of the input units and the weights of the connections between the input and the first row of the hidden units, or between nodes of adjacent hidden layers.
3. Output layer: The behavior of the output units depends on the activity of the last row of the hidden units and the weights between the nodes in this row and the output units.
4. Neuron: The basic element of the neural network. It is a communication conduit that accepts inputs and produces outputs; when a neuron produces output, it becomes active. A neuron becomes active when the sum of its inputs satisfies the neuron's activation function.
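As a concrete illustration of these elements, a minimal forward pass through one hidden layer and one output layer can be sketched as follows (the weights and the ReLU activation here are our own illustrative choices, not from the paper):

```python
import numpy as np

def dense_layer(x, W, b, activation):
    """One layer: each neuron sums its weighted inputs, adds a bias,
    and fires through its activation function."""
    return activation(W @ x + b)

def relu(z):
    return np.maximum(z, 0.0)

# 2 inputs -> 3 hidden neurons -> 1 output (illustrative weights)
x = np.array([1.0, -1.0])                          # input layer
W1 = np.array([[0.5, -0.2],
               [0.1,  0.4],
               [-0.3, 0.8]])
h1 = dense_layer(x, W1, np.zeros(3), relu)         # hidden layer
W2 = np.array([[1.0, -1.0, 0.5]])
y = dense_layer(h1, W2, np.zeros(1), lambda z: z)  # linear output layer
```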

Radial Basis Function Neural Networks (RBFNN):
A radial basis function network is an (ANN) that uses radial basis functions as activation functions; its output is a linear combination of radial basis functions [6]. The (RBF) network provides a powerful alternative to multilayer perceptron (MLP) neural networks for approximating or classifying a pattern set. The (RBFNN) differs from the (MLP) in that the overall input-output map is constructed from local contributions of Gaussian units, and it requires fewer training samples and trains faster than the (MLP). The exact interpolation of a set of N data points in a multi-dimensional space requires every one of the D-dimensional input vectors x_n to be mapped onto the corresponding target output [3]. The radial basis function approach introduces a set of N basis functions, one for each data point, which take the form \varphi(\lVert x - x_n \rVert), where \varphi(\cdot) is a non-linear function. The output of the mapping is then taken to be a linear combination of the basis functions, i.e.

(Figure: architecture of the RBF network — input layer, hidden layer of RBF units, output layer.)

y(x) = \sum_{n=1}^{N} w_n \, \varphi(\lVert x - x_n \rVert)   …(1)

If we write \varphi_n(x) = \varphi(\lVert x - x_n \rVert) and w = (w_1, \ldots, w_N)^{T}, then

y(x) = w^{T} \varphi(x), \quad \text{where } \varphi(x) = (\varphi_1(x), \ldots, \varphi_N(x))^{T}   …(2)

Because of the similar layer-by-layer topology, it is often considered that (RBF) networks belong to (MLP) networks; it was proved that (RBF) networks can be implemented by (MLP) networks with increased input dimensions [2]. A (RBF) neural network has only one implementation of the output-input relation in eq.(1), which is indeed a composition of the non-linear mapping realized by the hidden layer with the linear mapping realized by the output neuron [11] and [8]. If we take \varphi to be the Gaussian radial basis function, it is given by the following [6]:

\varphi_n(x) = \exp\!\left( - \frac{\lVert x - c_n \rVert^2}{2 \sigma_n^2} \right)   …(3)

where c_n and \sigma_n are the basis center and the width of the n-th hidden neuron, respectively, and \lVert \cdot \rVert denotes the Euclidean distance.
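A minimal sketch of the mapping in eq.(1) with Gaussian basis functions, using illustrative centers, widths, and weights:

```python
import numpy as np

def rbf_forward(x, centers, widths, weights):
    """Gaussian RBF network output: y(x) = sum_n w_n * phi_n(x),
    with phi_n(x) = exp(-||x - c_n||^2 / (2 * sigma_n^2))."""
    x = np.atleast_2d(x)                     # shape (n_samples, n_features)
    # Euclidean distance from each input to each basis center
    d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
    phi = np.exp(-d**2 / (2.0 * widths**2))  # Gaussian basis activations
    return phi @ weights                     # linear output layer

# toy check: two centers on the real line
centers = np.array([[0.0], [1.0]])
widths = np.array([0.5, 0.5])
weights = np.array([1.0, -1.0])
y = rbf_forward(np.array([[0.0]]), centers, widths, weights)
```

At x = 0 the first basis function is fully active and the second is damped by exp(-2), so the output is their weighted difference.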

Determining the Weights of the (RBFNN):
From the general error we get [6][8]:

E = \frac{1}{2} \sum_{i=1}^{N} \left( d_i - y(x_i) \right)^2 = \frac{1}{2} \sum_{i=1}^{N} \left( d_i - \sum_{n=1}^{N} w_n \varphi_n(x_i) \right)^2   …(4)

Deriving with respect to w:

\frac{\partial E}{\partial w_n} = - \sum_{i=1}^{N} \left( d_i - \sum_{m=1}^{N} w_m \varphi_m(x_i) \right) \varphi_n(x_i)   …(4a)

or, in matrix form with \Phi_{in} = \varphi_n(x_i),

\nabla_w E = - \Phi^{T} ( d - \Phi w )   …(4b)

Setting the derivative to zero and solving for w, we obtain:

w = \left( \Phi^{T} \Phi \right)^{-1} \Phi^{T} d   …(5)

(RBF) networks act as local approximation networks, because the network outputs are determined by the specified hidden units in certain local receptive fields. A radial basis function is a real-valued function whose value depends only on the distance from the origin, so that \varphi(x) = \varphi(\lVert x \rVert); or alternatively on the distance from some other point c, called a center, so that \varphi(x, c) = \varphi(\lVert x - c \rVert). Any function that satisfies this property is a radial function. The norm is usually the Euclidean distance, although other distance functions are also possible [3].
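The least-squares weight solution obtained by setting the derivative of the error to zero can be sketched with NumPy's `lstsq`, which computes the pseudo-inverse solution (the toy data here are illustrative):

```python
import numpy as np

def solve_rbf_weights(X, d, centers, widths):
    """Least-squares output weights of a Gaussian RBF network:
    setting dE/dw = 0 gives the normal equations Phi^T Phi w = Phi^T d,
    which lstsq solves via the pseudo-inverse."""
    dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    Phi = np.exp(-dist**2 / (2.0 * widths**2))  # design matrix Phi_in
    w, *_ = np.linalg.lstsq(Phi, d, rcond=None)
    return w

# toy check: with one center per data point, interpolation is exact
X = np.array([[0.0], [1.0], [2.0]])
d = np.array([1.0, 3.0, 2.0])
w = solve_rbf_weights(X, d, X.copy(), np.full(3, 0.7))
dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
recon = np.exp(-dist**2 / (2 * 0.7**2)) @ w   # network output at the data
```

With as many centers as data points the Gaussian design matrix is square and nonsingular, so the network reproduces the targets exactly.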

Training RBF Networks:
The training of a (RBF) network can be formulated as the non-linear unconstrained optimization problem given below [2], minimizing the error of eq.(4) over all the network parameters:

\min_{\{c_n, \, \sigma_n, \, w_n\}} \; \frac{1}{2} \sum_{i=1}^{N} \left( d_i - \sum_{n=1}^{N} w_n \varphi_n(x_i) \right)^2   …(6)

Numerical Solution Using Finite Difference Methods:
Finite difference methods (FDM) are numerical methods for approximating the solutions of differential equations by using finite difference equations to approximate derivatives, assuming the function whose derivatives are to be approximated is well-behaved, so that Taylor's theorem applies [14].
The basic idea of the finite difference method is the conversion of a partial differential equation into algebraic equations: the domain of the function is discretized into a set of specific points, and the derivatives in the partial differential equation are approximated by finite difference formulas linking the known values of the function at these specific points. After the problem is arranged in a regular format, these formulas are used to approximate the function values at those points [10].

4.1. Finite Difference Formulas:
Below we list the commonly used finite difference formulas to approximate the first-order derivative of a function u(x) using Taylor series [14] [5], where h is the constant mesh spacing of the discretization. Recall from calculus that the following expansions are valid:

u(x+h) = u(x) + h\,u'(x) + \frac{h^2}{2} u''(x) + \frac{h^3}{6} u'''(x) + \cdots   …(7)

u(x-h) = u(x) - h\,u'(x) + \frac{h^2}{2} u''(x) - \frac{h^3}{6} u'''(x) + \cdots   …(8)

From eq.(7), discarding the higher orders of h, we get the forward difference approximation:

u'(x) \approx \frac{u(x+h) - u(x)}{h}   …(9)

and in the same way we get the backward difference approximation from eq.(8):

u'(x) \approx \frac{u(x) - u(x-h)}{h}   …(10)

By averaging eq.(9) and eq.(10) we get the centered difference approximation:

u'(x) \approx \frac{u(x+h) - u(x-h)}{2h}   …(11)

Similarly, by adding eq.(7) and eq.(8) and discarding the terms of order higher than h^2, we get the centered finite difference approximation for the second derivative of u(x):

u''(x) \approx \frac{u(x+h) - 2u(x) + u(x-h)}{h^2}   …(12)

We will use these approximations to find a numerical solution to the non-linear reaction-diffusion equation.
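These difference formulas can be checked numerically; the sketch below compares them against the known derivatives of u(x) = sin(x) at x = 1:

```python
import numpy as np

# Forward, backward, and centered differences for u(x) = sin(x),
# whose exact derivatives at x = 1 are cos(1) and -sin(1).
u = np.sin
x, h = 1.0, 1e-4

forward  = (u(x + h) - u(x)) / h                   # eq. (9),  O(h)
backward = (u(x) - u(x - h)) / h                   # eq. (10), O(h)
centered = (u(x + h) - u(x - h)) / (2 * h)         # eq. (11), O(h^2)
second   = (u(x + h) - 2*u(x) + u(x - h)) / h**2   # eq. (12), O(h^2)

err_forward  = abs(forward - np.cos(1.0))
err_centered = abs(centered - np.cos(1.0))
```

The centered formula is second-order accurate, so its error is far smaller than the one-sided formulas at the same h.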

Mathematical Model:
We consider the nonlinear reaction-diffusion equation with convection term of the form [1] and [4]:

u_t = \left[ A(u)\, u_x \right]_x + B(u)\, u_x + C(u)   …(13)

where u = u(x,t) is an unknown function and A(u), B(u), and C(u) are arbitrary smooth functions. Eq.(13) generalizes a great number of the well-known non-linear second-order evolution equations describing various processes in biology. When A(u) = 1, B(u) = \lambda_1 u, and C(u) = \lambda_2 u - \lambda_3 u^2, eq.(13) becomes:

u_t = u_{xx} + \lambda_1 u\, u_x + \lambda_2 u - \lambda_3 u^2   …(14)

which is called the non-linear Murray equation, with initial condition

u(x, 0) = F(x)

and boundary conditions

u(0, t) = G(t), \quad u(L, t) = I(t)

such that F(x) is a prescribed space-dependent function and G(t), I(t) are prescribed time-dependent functions.

Explicit Finite-Difference Method (EFDM):
We will apply the classical explicit finite-difference method to solve the non-linear Murray equation in eq.(14). First we discretize the rectangle of the domain with space step h and time step k, writing u_i^j \approx u(x_i, t_j). Replacing the derivatives in eq.(14) by the finite difference approximations gives:

\frac{u_i^{j+1} - u_i^j}{k} = \frac{u_{i+1}^j - 2u_i^j + u_{i-1}^j}{h^2} + \lambda_1 u_i^j \, \frac{u_{i+1}^j - u_{i-1}^j}{2h} + \lambda_2 u_i^j - \lambda_3 (u_i^j)^2   …(15)

where h and k are the space and time steps of the discretization. Now, multiplying eq.(15) by k, letting r = k / h^2, and putting the terms of level j+1 on the left and the terms of level j on the right, we get:

u_i^{j+1} = u_i^j + r \left( u_{i+1}^j - 2u_i^j + u_{i-1}^j \right) + \frac{k \lambda_1}{2h} u_i^j \left( u_{i+1}^j - u_{i-1}^j \right) + k \lambda_2 u_i^j - k \lambda_3 (u_i^j)^2   …(16)

which will be used to find the numerical solution at the points of the mesh, as shown in the figure. In this way we obtain, at each time level, a set of equations whose number equals the number of unknowns, from which the desired numerical solution is obtained.
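A sketch of one explicit time step, assuming the Murray equation in the form u_t = u_xx + λ1·u·u_x + λ2·u − λ3·u² (the helper name and the heat-equation toy run below are ours, not from the paper):

```python
import numpy as np

def efdm_murray_step(u, r, k, h, lam1, lam2, lam3):
    """One explicit step for u_t = u_xx + lam1*u*u_x + lam2*u - lam3*u^2;
    the boundary values u[0], u[-1] are assumed fixed by the caller."""
    un = u.copy()
    diff = u[2:] - 2*u[1:-1] + u[:-2]            # centered second difference
    conv = u[1:-1] * (u[2:] - u[:-2]) / (2 * h)  # u * u_x, centered
    un[1:-1] = (u[1:-1] + r * diff
                + k * lam1 * conv
                + k * lam2 * u[1:-1]
                - k * lam3 * u[1:-1]**2)
    return un

# toy run: with lam1 = lam2 = lam3 = 0 the scheme reduces to the heat
# equation u_t = u_xx, whose exact solution for u(x,0) = sin(pi*x) with
# zero boundaries is sin(pi*x) * exp(-pi^2 * t)
h, k = 0.1, 0.001
r = k / h**2                      # r = 0.1 < 1/2, so the scheme is stable
x = np.arange(0.0, 1.0 + h/2, h)
u = np.sin(np.pi * x)
for _ in range(100):              # advance to t = 0.1
    u = efdm_murray_step(u, r, k, h, 0.0, 0.0, 0.0)
```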

The Modified Method (RBFNN_EFDM):
The modified method is a proposed method that consists of two processing stages: 1. The training stage. 2. The testing stage.
The training stage consists of two main processing parts:
- Explicit Finite Difference Method (EFDM).
- Radial Basis Function Neural Network (RBFNN).
In the training stage, the output of the (EFDM) becomes the input of the (RBFNN), which is trained to obtain the optimal weights and biases. The testing stage then requires only the optimal weights and biases to find the results for any level and any time step (k).
The architecture of the proposed method (RBFNN_EFDM) in the training stage is given in Figure (4), while the proposed method (RBFNN_EFDM) in the testing stage is given in Figure (5); the general representation of the modified method (RBFNN_EFDM) can be written in the following form:
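A minimal end-to-end sketch of the two-stage idea, illustrated on the heat equation u_t = u_xx rather than the full Murray equation; all names and parameter values here are our illustrative choices, not the paper's:

```python
import numpy as np

# Stage 1 (training input): EFDM produces a numerical solution on the grid.
h, k = 0.1, 0.001
r = k / h**2
x = np.arange(0.0, 1.0 + h/2, h)
u = np.sin(np.pi * x)                 # u(x,0), zero at both boundaries
for _ in range(100):                  # advance the heat equation to t = 0.1
    u[1:-1] = u[1:-1] + r * (u[2:] - 2*u[1:-1] + u[:-2])

# Stage 2 (training): fit a Gaussian RBF network to the EFDM output,
# placing one basis center at each grid point.
sigma = 0.2
Phi = np.exp(-(x[:, None] - x[None, :])**2 / (2 * sigma**2))
w, *_ = np.linalg.lstsq(Phi, u, rcond=None)

# Testing stage: only the trained weights are needed; the network can now
# be queried at any point, including off-grid points such as x = 0.55.
xq = 0.55
phi_q = np.exp(-(xq - x)**2 / (2 * sigma**2))
y_q = phi_q @ w
```

The testing-stage query returns a smooth interpolant of the stage-1 grid solution, which is the sense in which the network generalizes the finite difference output.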

Numerical Results and Discussion:
In this section, we present the results for the non-linear Murray equation obtained by the classical explicit finite difference method and compare them with those of the modified method using the (RBFNN). We apply both methods to compute solutions numerically and compare these solutions with the exact solutions at various times. After the (RBFNN) is trained and the optimal parameters are obtained, we feed the (EFDM) outputs at each level t to the trained (RBFNN) in the testing phase. All numerical computations in the training stage were performed with the space step h = 0.1 and the time step k = 0.001, while the numerical computations in the testing stage were performed with the space step h = 0.1 at various times, taking the parameters of eq.(14), the mixed boundary conditions, and the exact solution of eq.(14) as given in [4] [14], where c_0 is an arbitrary constant. As a measure of accuracy we take the root mean square error (RMSE), defined as follows, where u_i^{exact} represents the exact solution at the i-th value and u_i is the resulting value from the numerical solution:

RMSE = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} \left( u_i^{exact} - u_i \right)^2 }

Tables (2), (3), and (4) show the results for times t = 0.1, t = 0.5, and t = 1, respectively, with h = 0.1. The modified method using the (RBFNN) has, at all times, a better (RMSE) than the classical explicit finite difference method (EFDM). We see an excellent agreement between the modified method (RBFNN_EFDM) and the exact solution in the results of all tables, where the error of the solution decreases rapidly, as shown in Table (1).
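The (RMSE) measure can be sketched as:

```python
import numpy as np

def rmse(exact, approx):
    """Root mean square error between the exact and numerical solutions:
    sqrt( (1/n) * sum_i (exact_i - approx_i)^2 )."""
    exact = np.asarray(exact, dtype=float)
    approx = np.asarray(approx, dtype=float)
    return np.sqrt(np.mean((exact - approx)**2))

# toy check: errors (0, 0, 1) give RMSE = sqrt(1/3)
e = rmse([1.0, 2.0, 3.0], [1.0, 2.0, 4.0])
```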