A method for solving nonlinear equations

https://doi.org/10.1016/j.amc.2005.05.036

Abstract

A method for finding a solution of the equation f(x) = 0 is presented. The method is based on some specially derived quadrature rules. It is shown that the method can give better results than the Newton method.

Introduction

One of the most basic problems in Numerical Analysis (and one of the oldest numerical approximation problems) is that of finding values of the variable x that satisfy f(x) = 0 for a given function f. The Newton method is the most popular method for solving such equations. Some historical points about this method can be found in [15].

In recent years a number of authors have considered methods for solving nonlinear equations. For example, see [1], [2], [3], [4], [6], [8], [9], [10]. It is well known that some of these methods can be obtained using Taylor or interpolating polynomials. As we know, these polynomials give approximations of functions. If we integrate these approximations, then we get the corresponding quadrature formulas, for example the Newton–Cotes formulas. Certain recently obtained results in Numerical Integration have emphasized that error bounds for quadrature formulas are, generally speaking, better than error bounds for the corresponding approximating polynomials. A natural question is whether we can use these quadrature formulas to obtain methods for solving nonlinear equations. It is already known that quadrature formulas and nonlinear equations are connected; see, for example, [9].

In this paper we give a new approach to this subject. We derive a method for solving nonlinear equations using some specially derived quadrature formulas. This new approach is different from the above-mentioned connection between quadrature formulas and nonlinear equations.

In Section 2 we derive two quadrature formulas and use them to obtain a method for solving nonlinear equations. In the same section we also give algorithms for the new method and for the Newton method.

In Section 3 we consider some convergence results. In Section 4 we give a few numerical examples and compare this method with the Newton method. We show that the new method can give better results than the Newton method. Moreover, we show that the new method can be applied in some cases where the Newton method fails to give the desired results.

Section snippets

The method

We define the mapping
$$K_1(x,t)=\begin{cases} t-\dfrac{3a+b}{4}, & t\in[a,x],\\[4pt] t-\dfrac{a+3b}{4}, & t\in(x,b],\end{cases}$$
where $x\in[a,b]$. We suppose that $f\in C^1(a,b)$. Integrating by parts, we have
$$\int_a^b K_1(x,t)f'(t)\,dt=\int_a^x\Bigl(t-\frac{3a+b}{4}\Bigr)f'(t)\,dt+\int_x^b\Bigl(t-\frac{a+3b}{4}\Bigr)f'(t)\,dt=\frac{b-a}{4}\bigl[f(a)+2f(x)+f(b)\bigr]-\int_a^b f(t)\,dt.\tag{2}$$
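
As a quick sanity check of the identity (2), the following sketch evaluates both sides symbolically for a hypothetical test function f(t) = sin t (any smooth f would do); the choice of f and the variable names are ours, not the paper's.

```python
import sympy as sp

a, b, x, t = sp.symbols('a b x t', real=True)
f = sp.sin(t)                      # hypothetical smooth test function
K_left = t - (3*a + b)/4           # K1(x, t) on [a, x]
K_right = t - (a + 3*b)/4          # K1(x, t) on (x, b]

lhs = (sp.integrate(K_left * sp.diff(f, t), (t, a, x))
       + sp.integrate(K_right * sp.diff(f, t), (t, x, b)))
rhs = ((b - a)/4 * (f.subs(t, a) + 2*f.subs(t, x) + f.subs(t, b))
       - sp.integrate(f, (t, a, b)))
print(sp.simplify(lhs - rhs))      # prints 0, confirming identity (2)
```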

If we set $x=\frac{a+b}{2}$ in (2), then we get the quadrature rule
$$\frac{b-a}{4}\Bigl[f(a)+2f\Bigl(\frac{a+b}{2}\Bigr)+f(b)\Bigr]-\int_a^b f(t)\,dt=\int_a^b K_1\Bigl(\frac{a+b}{2},t\Bigr)f'(t)\,dt.\tag{3}$$
The quadrature rule (3) is considered, for example, in [7], [11]. In [11] it is shown that (3) admits a better error estimate than the well-known Simpson's quadrature
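
To illustrate how rule (3) behaves as a numerical approximation, here is a minimal sketch that applies it to a hypothetical integrand, f(t) = e^t on [0, 1]; the function and interval are chosen only for illustration and are not taken from the paper.

```python
import math

def rule3(f, a, b):
    """Quadrature rule (3): (b - a)/4 * [f(a) + 2 f((a + b)/2) + f(b)]."""
    return (b - a) / 4.0 * (f(a) + 2.0 * f((a + b) / 2.0) + f(b))

f = math.exp                      # hypothetical test integrand
exact = math.e - 1.0              # exact value of the integral of e^t over [0, 1]
print(abs(rule3(f, 0.0, 1.0) - exact))          # error on the whole interval

# The error shrinks rapidly when the interval is subdivided (composite rule).
composite = sum(rule3(f, k / 4.0, (k + 1) / 4.0) for k in range(4))
print(abs(composite - exact))
```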

Convergence results

Here we suppose that $f\in C^2(c,d)$, $f(b)=0$, $b\in(c,d)$, $f'(x)\neq 0$, $f''(x)\neq 0$ for $x\in(c,d)$. We define the function
$$\Psi(a)=a+\frac{4(x-a)f(a)}{3f(a)-2f(x)},\tag{17}$$
where
$$x=a-\alpha\,\frac{f(a)}{f'(a)},\qquad 0<\alpha\leq 1.\tag{18}$$
(Compare (17), (18) with (13), (14).)

We have
$$\lim_{a\to b}x=\lim_{a\to b}\Bigl[a-\alpha\,\frac{f(a)}{f'(a)}\Bigr]=b.$$

We now consider the derivative $\Psi'(a)$ in a neighborhood of the point $b$. We calculate
$$\Psi'(a)=1+\frac{4(x'_a-1)f(a)}{3f(a)-2f(x)}+4(x-a)\,\frac{f'(a)\bigl(3f(a)-2f(x)\bigr)-f(a)\bigl(3f'(a)-2f'(x)\,x'_a\bigr)}{\bigl(3f(a)-2f(x)\bigr)^{2}}.$$

We have
$$\lim_{a\to b}x'_a=\lim_{a\to b}\Bigl[1-\alpha\,\frac{f'(a)^2-f(a)f''(a)}{f'(a)^2}\Bigr]=1-\alpha\Bigl[1-\lim_{a\to b}\frac{f(a)f''(a)}{f'(a)^2}\Bigr]=1-\alpha.$$
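
The limit above can be checked symbolically for a concrete choice of f; the sketch below uses a hypothetical test function f(a) = e^a - 2 with root b = ln 2 (our choice, not the paper's).

```python
import sympy as sp

a, alpha = sp.symbols('a alpha', positive=True)
f = sp.exp(a) - 2                       # hypothetical f with root b = log(2)
b = sp.log(2)
x = a - alpha * f / sp.diff(f, a)       # x = a - alpha * f(a)/f'(a), as in (18)
x_prime = sp.simplify(sp.diff(x, a))    # derivative x'_a
print(sp.simplify(x_prime.subs(a, b)))  # prints 1 - alpha, matching the limit above
```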

We also have $\lim_{a\to b}f$

Numerical examples

In this section we give a few numerical examples and compare the method (15), (16) with the Newton method. We use Algorithm 1 and Algorithm 2 with ε = 1.0E−10, α = 1/2 and m = n = 10,000 for finding a solution of the equation f(x) = 0.
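
The method (15), (16) and Algorithms 1 and 2 are not reproduced in this excerpt, so the following is only a sketch: it assumes the iteration has the form suggested by (17), (18), namely x_k = a_k - alpha*f(a_k)/f'(a_k) followed by a_{k+1} = a_k + 4(x_k - a_k)f(a_k)/(3f(a_k) - 2f(x_k)), with the stopping tolerance ε and iteration caps m, n mentioned above. The test problem at the end is ours, chosen only for illustration.

```python
from typing import Callable

def quadrature_method(f: Callable[[float], float], df: Callable[[float], float],
                      a0: float, alpha: float = 0.5,
                      eps: float = 1.0e-10, n: int = 10_000) -> float:
    """Sketch of the quadrature-based iteration, assuming (15), (16) take the
    form of (17), (18): x_k = a_k - alpha*f(a_k)/f'(a_k),
    a_{k+1} = a_k + 4*(x_k - a_k)*f(a_k)/(3*f(a_k) - 2*f(x_k))."""
    a = a0
    for _ in range(n):
        fa = f(a)
        if abs(fa) < eps:
            break
        x = a - alpha * fa / df(a)
        a = a + 4.0 * (x - a) * fa / (3.0 * fa - 2.0 * f(x))
    return a

def newton_method(f: Callable[[float], float], df: Callable[[float], float],
                  a0: float, eps: float = 1.0e-10, m: int = 10_000) -> float:
    """Classical Newton iteration, used for comparison."""
    a = a0
    for _ in range(m):
        fa = f(a)
        if abs(fa) < eps:
            break
        a = a - fa / df(a)
    return a

# Hypothetical test problem: f(x) = x**2 - 2 with root sqrt(2), started at 1.0.
print(quadrature_method(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0))
print(newton_method(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0))
```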

Example 5

Let $f(x)=\sqrt[3]{x}$. A simple calculation gives
$$x_{k+1}=\frac{3\sqrt[3]{2}}{3\sqrt[3]{2}+2}\,x_k$$
if we apply the method (15), (16). Since
$$\frac{3\sqrt[3]{2}}{3\sqrt[3]{2}+2}<1,$$
it is obvious that the above sequence converges to 0, the exact solution of the equation $f(x)=0$.

On the other hand, the Newton method gives the sequence $x_{k+1}=-2x_k$. It is also
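
A short numerical illustration of Example 5: starting from the same point, the Newton iterate doubles in magnitude at every step, while the quadrature-based sketch from above contracts towards 0. The sketch again assumes the (17), (18) form of the method, so the exact contraction constant may differ from the one stated in the paper; only the qualitative comparison with Newton is intended.

```python
import numpy as np

f = np.cbrt                                      # f(x) = cube root of x
df = lambda x: 1.0 / (3.0 * np.cbrt(x) ** 2)     # f'(x) = (1/3) x^(-2/3)

x_newton, a_quad = 0.1, 0.1
for k in range(1, 6):
    x_newton = x_newton - f(x_newton) / df(x_newton)      # Newton: x_{k+1} = -2 x_k
    fa = f(a_quad)
    xk = a_quad - 0.5 * fa / df(a_quad)                   # alpha = 1/2
    a_quad = a_quad + 4.0 * (xk - a_quad) * fa / (3.0 * fa - 2.0 * f(xk))
    print(k, x_newton, a_quad)   # |x_newton| grows, |a_quad| shrinks towards 0
```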

Concluding remarks

Of course, we can derive many quadrature rules (using different Peano kernels) and use the procedure presented in this paper to obtain methods for solving nonlinear equations. It is also clear that it only makes sense to consider those rules (and their combinations) which give better results than some existing methods (for example, the Newton method). Many analytical and numerical experiments have shown that only a few of these combinations satisfy the above requirement. For example, one of
