Application of an ANN approach for solving fully fuzzy polynomial systems

When processing imprecise or unclear information, the complementary strengths of fuzzy logic and neurocomputing can be combined in fuzzy neural networks. The present work proposes a fuzzy modeling method that uses a multi-layer fuzzy neural network to solve a fully fuzzy polynomial system. A supervised learning law based on gradient descent is employed. The feasibility of the method is examined through computer simulation of a numerical example. The experimental results show that the proposed method is valid and delivers very good approximations.


Introduction
Problems from applied fields such as mathematical economics and optimal control theory are frequently formulated as mathematical models of physical phenomena that involve fuzzy polynomials. Numerous methods for solving such problems already exist in the literature; the ones most relevant to this work are summarized here. Takagi and Sugeno [14] presented the first numerical approach to fuzzy systems. Friedman et al. [7] proffered a general model for solving a fuzzy linear system by the embedding method. Dubois and Prade [6] studied theoretical features of fuzzy linear systems, and Dehghan et al. [5] used iterative methods to find the solution of fully fuzzy linear systems. Solutions to linear and nonlinear fuzzy systems were also studied in [1,2,4]. In addition, fuzzy neural networks (FNNs) have proved to be a satisfactory tool for finding indirect solutions to fuzzy polynomial systems. By determining a cost function for each pair of fuzzy output vector and corresponding fuzzy target vector, Ishibuchi et al. [12] designed a learning algorithm for fuzzy neural networks whose weights are triangular or trapezoidal fuzzy numbers. Hayashi et al. [11] fuzzified the delta learning rule, employing Zadeh's extension principle [15] to determine the input-output relation of every unit, whereas Abbasbandy et al. [3] developed a new learning algorithm for a feed-forward fuzzy neural network structure that reaches a real root of a fuzzy polynomial system.
Neural network simulations, owing to their notable efficiency in extracting meaning from vague data, are useful for solving such problems. In this paper we construct a new algorithm that uses fuzzy neural networks to approximate the solution of a fully fuzzy polynomial system. A trainable three-layer fuzzy feed-forward neural network (FFNN) is used. In the suggested architecture, the different powers of the unknowns are taken as the connection weight parameters corresponding to the output layer. First, we adjust the fuzziness of the actual output data to match the desired fuzziness, even if the difference between the actual and desired modal values remains large. For the given training patterns, a cost function is defined on the level sets of the fuzzy output and target output that measures the difference between the target outputs and the corresponding actual outputs; the error function is thus a function on the fuzzy weight space of the network. The approximation problem is solved by minimizing this error, and a supervised learning rule based on the gradient descent method is derived for tuning the weights to any desired degree of accuracy. The learning problem of the fuzzy network can therefore be regarded as an optimization problem, and the connection weights are corrected directly after each presentation of training data. This paper is organized as follows. Section 2 briefly reviews the proposed architecture of artificial neural networks and fuzzy numbers. Section 3 describes how to find a real solution of the fuzzy system using the FFNN. Finally, an example is given in Section 4, and Section 5 concludes the paper.

Basic concepts
This section introduces some general definitions and concepts that will be utilized in the following sections.

Fuzzy calculus
In this part the most basic notations used in fuzzy calculus are briefly introduced; more detailed information can be found in [8,13]. We begin by defining the fuzzy number.

Definition 2.1. A fuzzy number $v$ is a pair $(\underline{v}, \overline{v})$ of functions $\underline{v}(r)$ and $\overline{v}(r)$, $0 \le r \le 1$, which satisfy the following requirements:

i) $\underline{v}(r)$ is a bounded, monotonically increasing, left-continuous function on $(0, 1]$ and right-continuous at $0$;

ii) $\overline{v}(r)$ is a bounded, monotonically decreasing, left-continuous function on $(0, 1]$ and right-continuous at $0$.

For arbitrary fuzzy numbers $u = (\underline{u}, \overline{u})$ and $v = (\underline{v}, \overline{v})$, addition $(u + v)$ and multiplication by a scalar $k$ are determined levelwise:

$(\underline{u + v})(r) = \underline{u}(r) + \underline{v}(r), \qquad (\overline{u + v})(r) = \overline{u}(r) + \overline{v}(r),$

$(\underline{ku})(r) = k\,\underline{u}(r),\ (\overline{ku})(r) = k\,\overline{u}(r)$ for $k \ge 0$, and $(\underline{ku})(r) = k\,\overline{u}(r),\ (\overline{ku})(r) = k\,\underline{u}(r)$ for $k < 0$.

The collection of all fuzzy numbers (as given by Definition 2.1) with the addition and multiplication defined above is denoted by $E^1$. For $0 < r \le 1$, the $r$-level set of a fuzzy number $u$ is defined by $[u]_r = \{x \in \mathbb{R} \mid u(x) \ge r\}$, and for $r = 0$ the support of $u$ is defined as the closure of $\{x \in \mathbb{R} \mid u(x) > 0\}$.
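For the special case of triangular fuzzy numbers, the parametric form and the levelwise arithmetic above can be sketched as follows; the class name and method names are illustrative, not part of the paper.

```python
class TriangularFuzzyNumber:
    """Triangular fuzzy number (a, b, c): support [a, c], modal value b."""

    def __init__(self, a, b, c):
        assert a <= b <= c
        self.a, self.b, self.c = a, b, c

    def lower(self, r):
        # underline-v(r): bounded, monotonically increasing on [0, 1]
        return self.a + r * (self.b - self.a)

    def upper(self, r):
        # overline-v(r): bounded, monotonically decreasing on [0, 1]
        return self.c - r * (self.c - self.b)

    def level_set(self, r):
        # r-level set [v]_r = [lower(r), upper(r)]
        return (self.lower(r), self.upper(r))

    def __add__(self, other):
        # levelwise addition of triangular numbers is again triangular
        return TriangularFuzzyNumber(self.a + other.a,
                                     self.b + other.b,
                                     self.c + other.c)

    def scale(self, k):
        # scalar multiplication: the endpoints swap when k < 0
        if k >= 0:
            return TriangularFuzzyNumber(k * self.a, k * self.b, k * self.c)
        return TriangularFuzzyNumber(k * self.c, k * self.b, k * self.a)


u = TriangularFuzzyNumber(1, 2, 3)
v = TriangularFuzzyNumber(0, 1, 2)
print((u + v).level_set(0.5))  # -> (2.0, 4.0)
```

Addition and scaling act on the defining triple directly because both operations preserve triangular shape; general fuzzy numbers would require storing the level-set functions themselves.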

An overview on neural networks
An artificial neural network, often simply called a neural network, is an information-processing system inspired by biological nervous systems. Neural network applications give rise to mathematical problems that cannot be conveniently solved with exact formulas. For further reading on ANNs, see [9,10].

Input-output relation of each unit
Here, in order to develop the algorithm applied to the fuzzy problem mentioned above, a brief description of the proposed neural network architecture is given. Consider the feed-forward architecture shown in Figure 1. The network receives the input data of the training set {A_p1, ..., A_pn} (for p = 1, 2), and the activation function of the output units is assumed to be the identity function. In this paper we assume that the input vector and the target output are triangular fuzzy numbers and the connection weights are crisp numbers. Now consider the set of training patterns {(A_p1, ..., A_pn); A_p0} (for p = 1, 2), where A_p0 is the desired network output upon presentation of the input signals {A_p1, ..., A_pn}. Using the figure, the input-output relation of each unit and the calculated output Y_p can be written as follows:

Input units:
The input neurons make no change to their inputs:

$o_{pi} = A_{pi}, \qquad i = 1, \ldots, n. \quad (2.1)$

Output unit:

$Y_p = \mathrm{Net}_p = \sum_{j=1}^{n} W_j\, f(w_j\, o_{pj}), \quad (2.2)$

where $w_j$ and $W_j$ are the weight parameters corresponding to the hidden and output network layers, respectively. The relations between the input neurons and the output neuron in Eqs. (2.1)-(2.2) are defined by the extension principle, as in Hayashi et al. [11].

Calculation of fuzzy output
In the output layer the fuzzy output neuron is computed numerically on the level sets of the fuzzy input signals and fuzzy connection weights. The input-output relations of our fuzzy neural network can be expressed for the α-level sets as follows:

International Scientific Publications and Consulting Services

Input units:

Hidden units:
Let $f$ be a one-to-one activation function. For simplicity we assume that the α-level sets of the fuzzy inputs $A_{pi}$ are nonnegative, i.e., $0 \le [A_{pi}]^\alpha_l$. Taking the activation function $f$ to be the identity function (and the weight level sets likewise nonnegative), the input-output relations of the neural network can be given for the α-level sets as

$[Y_p]^\alpha = \big[[Y_p]^\alpha_l,\ [Y_p]^\alpha_u\big],$

where

$[Y_p]^\alpha_l = \sum_{j=1}^{n} [W_j]^\alpha_l\,[w_j]^\alpha_l\,[A_{pj}]^\alpha_l, \qquad [Y_p]^\alpha_u = \sum_{j=1}^{n} [W_j]^\alpha_u\,[w_j]^\alpha_u\,[A_{pj}]^\alpha_u.$

System of fuzzy polynomials

Fuzzy neural nets are fully interconnected feed-forward networks whose processing units and connections operate on fuzzy numbers rather than real numbers. In this section we derive a supervised learning law, based on the traditional fuzzy algorithm, for approximating the solution of the fully fuzzy polynomial system

$\sum_{j=1}^{n} A_{pj}\, x^j y^j = A_{p0}, \qquad p = 1, 2. \quad (3.6)$

Setting $A = (A_{p,j})_{2\times n}$, $B = (A_{1,0}, A_{2,0})^T$ and $X = (xy, \ldots, x^n y^n)^T$, Eq. (3.6) can be transformed into

$AX = B, \quad (3.7)$

where $A$ is a matrix with fuzzy number entries and $X$ and $B$ are fuzzy number vectors. An FFNN architecture for solving Eq. (3.7) is illustrated in Fig. 1; this straightforward and flexible fuzzy neural network is the proposed modeling scheme. Under the stated conditions, the output of the neural architecture yields a representation of the given fuzzy problem.
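Under the nonnegativity assumption, the α-level forward pass for the system $\sum_j A_{pj}\,x^j y^j = A_{p0}$ reduces to interval arithmetic on the endpoints of the α-cuts. A minimal sketch with triangular fuzzy numbers follows; all names are illustrative, and nonnegative level sets are assumed throughout.

```python
def alpha_cut(tri, alpha):
    """alpha-level interval of a triangular fuzzy number (a, b, c)."""
    a, b, c = tri
    return a + alpha * (b - a), c - alpha * (c - b)


def forward(A_row, x, y, alpha):
    """Lower/upper bounds of the network output Y_p at one alpha level.

    A_row: triangular coefficients A_p1..A_pn; x, y: triangular unknowns.
    With every level set nonnegative, interval products reduce to
    products of the matching endpoints.
    """
    xl, xu = alpha_cut(x, alpha)
    yl, yu = alpha_cut(y, alpha)
    Y_l = Y_u = 0.0
    for j, A in enumerate(A_row, start=1):
        Al, Au = alpha_cut(A, alpha)
        Y_l += Al * (xl ** j) * (yl ** j)  # lower endpoint of A_pj * x^j * y^j
        Y_u += Au * (xu ** j) * (yu ** j)  # upper endpoint
    return Y_l, Y_u


A_row = [(1, 2, 3), (0, 1, 2)]  # A_p1, A_p2
print(forward(A_row, x=(1, 1, 1), y=(2, 2, 2), alpha=1.0))  # -> (8.0, 8.0)
```

At α = 1 the cuts collapse to the modal values, so the output interval degenerates to the crisp value of the polynomial, which is a convenient sanity check.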

Cost function
Consider the three-layer fuzzy feed-forward architecture shown in Fig. 1. Let $A_{p0}$ be the fuzzy target output corresponding to the fuzzy input vector $A_p = (A_{p1}, \ldots, A_{pn})$, with $w_i = x^i$ and $W_j = y^j$ (for $i, j = 1, \ldots, n$), so that

$Y_p = \sum_{j=1}^{n} A_{pj}\, x^j y^j.$

The aim is to define and minimize a cost function for the proposed network that computes the error of the actual output data on the training set; this function is therefore a function on the weight-parameter space of the network. The cost function of the fuzzy network can be given for the α-level sets of the fuzzy output $Y_p$ and its corresponding target output $A_{p0}$ by

$e^\alpha_p = e^\alpha_{pl} + e^\alpha_{pu}, \quad (3.8)$

where

$e^\alpha_{pl} = \alpha \cdot \frac{\big([A_{p0}]^\alpha_l - [Y_p]^\alpha_l\big)^2}{2}, \qquad e^\alpha_{pu} = \alpha \cdot \frac{\big([A_{p0}]^\alpha_u - [Y_p]^\alpha_u\big)^2}{2}.$
In relation (3.8), $e^\alpha_{pl}$ and $e^\alpha_{pu}$ denote the squared errors for the lower and upper limits of the α-level sets of the fuzzy output $Y_p$ and the target output $A_{p0}$, respectively. The cost function for the training pattern $\{A_p; A_{p0}\}$ is then determined as [12]:

$e_p = \sum_{\alpha} e^\alpha_p. \quad (3.11)$
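The level-set cost (3.8) and the pattern cost (3.11) can be sketched as follows; the α-cut endpoints are passed in as plain tuples, and the function names are illustrative rather than taken from the paper.

```python
def level_cost(target, output, alpha):
    """e^alpha_p of Eq. (3.8): alpha-weighted squared endpoint errors."""
    (t_l, t_u), (y_l, y_u) = target, output
    e_pl = alpha * (t_l - y_l) ** 2 / 2.0  # lower-limit error e^alpha_pl
    e_pu = alpha * (t_u - y_u) ** 2 / 2.0  # upper-limit error e^alpha_pu
    return e_pl + e_pu


def pattern_cost(target_cuts, output_cuts, alphas):
    """e_p of Eq. (3.11): sum of e^alpha_p over the chosen alpha levels."""
    return sum(level_cost(t, y, a)
               for t, y, a in zip(target_cuts, output_cuts, alphas))


print(level_cost((1.0, 3.0), (0.0, 4.0), 0.5))  # -> 0.5
```

The factor α in each term means that errors at high membership levels dominate the cost, which is the weighting used by Ishibuchi et al. [12].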

Learning algorithm of the FFNN
In artificial neural networks, learning is essential for finding network parameters that drive the output error to zero. Here the aim is to apply a supervised gradient descent-based learning procedure, a natural generalization of the delta learning rule, which minimizes the error function over the weight space by adjusting the weight parameters. Let the trapezoidal fuzzy quantities $a_1 = (x^1_0, x^2_0, x^3_0, x^4_0)$ and $a_2 = (y^1_0, y^2_0, y^3_0, y^4_0)$ be initialized at small random values for the variables $x$ and $y$, respectively. The learning rule applied to these quantities is

$a^r_j(t + 1) = a^r_j(t) + \Delta a^r_j(t), \qquad r = 1, 2, 3, 4, \quad (3.12)$
$\Delta a^r_j(t) = -\eta\, \frac{\partial e_p(\alpha)}{\partial a^r_j} + \gamma\, \Delta a^r_j(t - 1), \qquad p = 1, 2, \quad (3.13)$

where $t$ is the number of adjustments, and $\eta$ and $\gamma$ are the small constant learning rate and the momentum constant, both normally chosen between 0 and 1. The network parameters are updated to reduce the error so that, for each given input, the network output converges to the desired output: the new value of each parameter is obtained by adding to its current value an amount proportional to the negative slope of the error surface. The partial derivatives of $[\mathrm{Net}_p]^\alpha_l$ and $[\mathrm{Net}_p]^\alpha_u$ with respect to the parameters $a^r_j$ follow by the chain rule; the update of the parameter $a^r_1$ is a simple extension of the rule given above, so the details are left to the reader. Considering the input-output pair $\{A_p; A_{p0}\}$, where $A_p = (A_{p1}, \ldots, A_{pn})$, and arbitrary α-level values $\alpha_1, \alpha_2, \ldots, \alpha_m$, the learning process of the suggested algorithm consists of the following five stages: i) Forward calculation: choose the input pattern $A_p$ from the training set, propagate it through the network, and approximate the actual fuzzy output $Y_p$ based on the current weights.
ii) Back-propagation: update all weights of the hidden and output layers according to Eq. (3.12). Note that back-propagation is extremely sensitive to initial conditions; if the training cycle is complex, it may fail to find a solution. In that case the learning parameters must be re-set to new values, since the convergence speed of the recommended process depends strongly on the learning rate and momentum constant.
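The update (3.12)-(3.13) can be sketched as a plain momentum step; the gradients are assumed to be supplied by the back-propagation pass, and all names here are illustrative.

```python
def update_weights(weights, grads, prev_deltas, eta=0.01, gamma=0.9):
    """One momentum step per parameter; returns updated weights and deltas."""
    new_weights, deltas = [], []
    for w, g, d_prev in zip(weights, grads, prev_deltas):
        d = -eta * g + gamma * d_prev  # Delta a_j^r(t), Eq. (3.13)
        new_weights.append(w + d)      # a_j^r(t+1),     Eq. (3.12)
        deltas.append(d)
    return new_weights, deltas


w, d = update_weights([0.1], [2.0], [0.0])
print(w, d)
```

Keeping the previous deltas alongside the weights is what lets the momentum term γ smooth successive updates; with γ = 0 the rule reduces to ordinary gradient descent.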

Numerical example
The proposed method is used to find the approximate fuzzy solution of the present example. In the following simulations we use the learning rate η = 0.01. The exact solution is x = (1, 2, 2, 5) and y = (2, 3, 5, 6). The problem is solved with the neural network architecture provided in this paper, with the network parameters a_1 and a_2 initialized to (0.1, 0.2, 0.3, 0.4) and (0.5, 0.6, 0.7, 0.8), respectively. The cost function versus the number of iterations is illustrated in Figure 2; as the number of iterations increases, the cost function tends to zero. Numerical results for various numbers of iterations are collected in Table 1, and Figure 3 shows the convergence of the approximate solutions obtained with the proposed algorithm.

Conclusion
In short, artificial neural networks can deal with problems outside the range of conventional processors because they are not conventionally programmed. This paper discussed an iterative ANN-based approach that is convenient for approximating the solution of a fuzzy polynomial system. The proposed multi-layer feed-forward architecture uses a computationally efficient training method based on a supervised gradient descent learning rule to approximate solutions of fuzzy problems. To check the validity of the method, a numerical example was provided; the obtained results show that the proposed technique is a powerful tool for solving fuzzy systems. Provided the number of iterations is chosen large enough, the method yields a very accurate approximate solution, which makes it convenient for establishing approximate solutions of various kinds of fuzzy systems. The analyzed example demonstrates the ability and validity of the proposed method without ambiguity.

Definition 2.2. Let $u, v \in E^1$. If there exists $w \in E^1$ such that $u = v + w$, then $w$ is called the H-difference of $u$ and $v$, denoted by $u - v$.

Definition 2.3. For arbitrary fuzzy numbers $u = (\underline{u}, \overline{u})$ and $v = (\underline{v}, \overline{v})$, the quantity

$D(u, v) = \sup_{0 \le r \le 1} \max\big\{|\underline{u}(r) - \underline{v}(r)|,\ |\overline{u}(r) - \overline{v}(r)|\big\}$

is the distance between $u$ and $v$.
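The distance of Definition 2.3 involves a supremum over $r$; a simple sketch approximates it by sampling $r$ on a uniform grid for triangular fuzzy numbers. The function names are illustrative, and for triangular numbers the level-set endpoints are linear in $r$, so a modest grid already captures the supremum well.

```python
def alpha_cut(tri, r):
    """r-level interval of a triangular fuzzy number (a, b, c)."""
    a, b, c = tri
    return a + r * (b - a), c - r * (c - b)


def distance(u, v, samples=101):
    """Approximate D(u, v) = sup_r max(|lower diff|, |upper diff|)."""
    best = 0.0
    for i in range(samples):
        r = i / (samples - 1)
        ul, uu = alpha_cut(u, r)
        vl, vu = alpha_cut(v, r)
        best = max(best, abs(ul - vl), abs(uu - vu))
    return best


print(distance((0, 1, 2), (1, 2, 3)))
```

For the two shifted triangles above, the lower and upper differences are constant in $r$, so the computed value is (up to rounding) exactly the shift of 1.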


Stage 4: The accumulated cycle error is computed by adding the present error to E.

Stage 5: The training cycle is completed. If E < Emax, cease the training session; if E > Emax, set E to 0 and initiate a new training cycle by returning to Stage 3.
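The five stages can be sketched as a training loop; `forward_pass` and `backprop_update` are stand-ins for the network computations described above, and `backprop_update` is assumed to return the pattern error it accumulates.

```python
def train(patterns, forward_pass, backprop_update, E_max=1e-4, max_cycles=10_000):
    """Stages 1-5 of the learning process; weights live inside the callbacks."""
    t = 0                                  # Stage 2: iteration counter
    E = 0.0
    for _ in range(max_cycles):
        t += 1                             # Stage 3: start a new training cycle
        E = 0.0                            # running cycle error reset to 0
        for inputs, target in patterns:
            output = forward_pass(inputs)                 # forward calculation
            E += backprop_update(inputs, target, output)  # Stage 4: accumulate error
        if E < E_max:                      # Stage 5: stop once the cycle error is small
            break
    return t, E
```

Stage 1 (choosing the learning rate, momentum constant and Emax, and initializing the fuzzy weights) happens before `train` is called, inside whatever state the two callbacks close over.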

Figure 2: The cost function for Example 4.1 over the number of iterations.

Figure 3: Convergence of the approximate solutions for Example 4.1.
The derivative with respect to $y_0$ is evaluated at the current fuzzy weight values; using the chain rule for differentiation, the partial derivative $\partial e^\alpha_{pl} / \partial y_0$ can be expressed accordingly. The training process proceeds as follows.

Stage 1: Set the learning rate, momentum constant and Emax to small positive values. Then initialize all fuzzy connection weights.

Stage 2: Set t := 0, where t is the number of learning iterations, and set the running error E to 0.

Stage 3: Set t := t + 1. Repeat the following procedure for the different values of p and q:

Table 1: The approximated solution with error analysis for Example 4.1.