A method for solving nonlinear equations
Introduction
One of the most basic problems in Numerical Analysis (and one of the oldest numerical approximation problems) is that of finding values of the variable x that satisfy f(x) = 0 for a given function f. The Newton method is the most popular method for solving such equations. Some historical notes on this method can be found in [15].
In recent years a number of authors have considered methods for solving nonlinear equations; see, for example, [1], [2], [3], [4], [6], [8], [9], [10]. It is well known that some of these methods can be obtained using Taylor or interpolating polynomials, which give approximations of functions. If we integrate these approximations, we get the corresponding quadrature formulas, for example the Newton–Cotes formulas. Certain recent results in Numerical Integration have emphasized that error bounds for the quadrature formulas are, generally speaking, better than error bounds for the corresponding approximating polynomials. A natural question is: can we use these quadrature formulas to obtain methods for solving nonlinear equations? It is already known that quadrature formulas and nonlinear equations are connected; see, for example, [9].
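As a baseline for the discussion that follows, the classical Newton iteration x_{k+1} = x_k − f(x_k)/f′(x_k) can be sketched as follows; the function names and the stopping rule |f(x_k)| < eps are illustrative choices, not taken from the paper:

```python
def newton(f, fprime, x0, eps=1.0e-10, max_iter=100):
    """Classical Newton iteration: x_{k+1} = x_k - f(x_k) / f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < eps:                   # residual small enough: accept x
            return x
        x = x - fx / fprime(x)
    return x

# Example: solve x**2 - 2 = 0 starting from x0 = 1 (root: sqrt(2)).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
```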
In this paper we give a new approach to this subject. We derive a method for solving nonlinear equations using some specially derived quadrature formulas. This new approach is different from the above-mentioned connection between quadrature formulas and nonlinear equations.
In Section 2 we derive two quadrature formulas and use them to obtain the method for solving nonlinear equations. In this section we also give an algorithm for the obtained method and an algorithm for the Newton method.
In Section 3 we consider some convergence results. In Section 4 we give a few numerical examples and compare this method with the Newton method. We show that the new method can give better results than the Newton method. Moreover, we show that the new method can be applied in some cases where the Newton method fails to give the desired results.
Section snippets
The method
We define the mapping, where x ∈ [a, b]. We suppose that f ∈ C1(a, b). Integrating by parts, we have
If we set in (2), then we get the quadrature rule. The quadrature rule (3) is considered, for example, in [7], [11]. In [11] it is shown that (3) has a better error estimate than the well-known Simpson's quadrature rule.
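Rule (3) itself is not reproduced in this preview; since Simpson's rule serves as the benchmark it is compared against, a minimal Python sketch of Simpson's rule on a single interval may be useful for reference:

```python
def simpson(f, a, b):
    """Simpson's rule on [a, b]: (b - a)/6 * (f(a) + 4 f(m) + f(b)),
    where m is the midpoint of the interval."""
    m = 0.5 * (a + b)
    return (b - a) / 6.0 * (f(a) + 4.0 * f(m) + f(b))

# Simpson's rule is exact for cubics: the integral of x**3 on [0, 1] is 1/4.
approx = simpson(lambda x: x ** 3, 0.0, 1.0)
```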
Convergence results
Here we suppose that f ∈ C2(c, d), f(b) = 0, b ∈ (c, d), f′(x) ≠ 0, f″(x) ≠ 0, x ∈ (c, d). We define the function, where (17), (18) hold. (Compare (17), (18) with (13), (14).)
We have
We now consider the derivative Ψ′(a) in a neighborhood of the point b. We calculate
We have
We also have
Numerical examples
In this section we give a few numerical examples and compare the method (15), (16) with the Newton method. We use Algorithm 1, Algorithm 2 with ε = 1.0E−10, α = 1/2 and m = n = 10,000 for finding a solution of the equation f(x) = 0. Example 5 Let . A simple calculation gives, if we apply the method (15), (16). Since, it is obvious that the above sequence converges to 0, the exact solution of the equation f(x) = 0. On the other hand, the Newton method gives the sequence. It is also
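The specific function of Example 5 is not shown in this preview. As an illustration of the comparison setup (stopping when |f(x_n)| < ε = 1.0E−10 or when the iteration cap of 10,000 is reached), the sketch below runs Newton's method on the well-known failure case f(x) = arctan(x), where the iteration converges from a starting point near the root but overshoots and diverges from one farther away; the choice of f and the helper names are illustrative, not the paper's:

```python
import math

def newton_capped(f, fprime, x0, eps=1.0e-10, max_iter=10_000):
    """Newton iteration with the stopping rule |f(x_n)| < eps and an
    iteration cap; returns (x, iterations, converged)."""
    x = x0
    for k in range(max_iter):
        fx = f(x)
        if abs(fx) < eps:
            return x, k, True                # converged
        dfx = fprime(x)
        if dfx == 0.0 or not math.isfinite(x):
            return x, k, False               # step undefined or iterate blew up
        x = x - fx / dfx
    return x, max_iter, False                # iteration cap reached

# f(x) = arctan(x): Newton converges from x0 = 1 but diverges from x0 = 2.
f = math.atan
fprime = lambda x: 1.0 / (1.0 + x * x)
root_near, _, ok_near = newton_capped(f, fprime, 1.0)
_, _, ok_far = newton_capped(f, fprime, 2.0)
```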
Concluding remarks
Of course, we can derive many quadrature rules (using different Peano kernels) and use the procedure presented in this paper to obtain methods for solving nonlinear equations. It is also clear that it makes sense to consider only those rules (and their combinations) which give better results than some existing methods (for example, the Newton method). Many analytical and numerical experiments have shown that only a few of these combinations satisfy the above requirement. For example, one of
References (15)
Improving Newton–Raphson method for nonlinear equations by modified Adomian decomposition method, Appl. Math. Comput. (2003)
et al., Geometric constructions of iterative functions to solve nonlinear equations, J. Comput. Appl. Math. (2003)
et al., Solution of nonlinear equations by Adomian decomposition method, Appl. Math. Comput. (2002)
Improvement of some Ostrowski–Grüss type inequalities, Comput. Math. Appl. (2001)
et al., A new generalization of Ostrowski's integral inequality for mappings whose derivatives are bounded and applications in numerical integration and for special means, Appl. Math. Lett. (2000)
et al., Third-order methods from quadrature formulae for solving systems of nonlinear equations, Appl. Math. Comput. (2004)
New bounds for the first inequality of Ostrowski–Grüss type and applications, Comput. Math. Appl. (2003)
Cited by (36)
Discrete-time noise-tolerant Z-type model for online solving nonlinear time-varying equations in the presence of noises
2022, Journal of Computational and Applied Mathematics. Citation excerpt: "Being an important branch of zero-finding problems, the NTVEPs are generally encountered in engineering applications and scientific computing fields. More and more approaches have been developed for computing NTVEPs [1–6]. However, owing to the serial computing characteristics of traditional computers, some of these methods are not effective for solving NTVEPs [7]."
Root finding by high order iterative methods based on quadratures
2015, Applied Mathematics and Computation. Citation excerpt: "The use of quadrature rules for the construction of iterative methods, applied to the solution of nonlinear equations or systems, has been considered by many authors (see, for example [5,7,10,13,16,17,18,19])."
Methods Involving Second or Higher Derivatives
2013, Studies in Computational Mathematics
Bisection and Interpolation Methods
2013, Studies in Computational Mathematics