Article

A Modified Self-Adaptive Conjugate Gradient Method for Solving Convex Constrained Monotone Nonlinear Equations for Signal Recovery Problems

by Auwal Bala Abubakar 1,2, Poom Kumam 1,3,4,*, Aliyu Muhammed Awwal 1,5 and Phatiphat Thounthong 6
1 KMUTT Fixed Point Research Laboratory, Department of Mathematics, Faculty of Science, King Mongkut’s University of Technology Thonburi (KMUTT), 126 Pracha-Uthit Road, Bang Mod, Thrung Khru, Bangkok 10140, Thailand
2 Department of Mathematical Sciences, Faculty of Physical Sciences, Bayero University, Kano 700241, Nigeria
3 Department of Medical Research, China Medical University Hospital, China Medical University, Taichung 40402, Taiwan
4 Center of Excellence in Theoretical and Computational Science (TaCS-CoE), Science Laboratory Building, King Mongkut’s University of Technology Thonburi (KMUTT), 126 Pracha-Uthit Road, Bang Mod, Thrung Khru, Bangkok 10140, Thailand
5 Department of Mathematics, Faculty of Science, Gombe State University, Gombe 760214, Nigeria
6 Renewable Energy Research Centre, Department of Teacher Training in Electrical Engineering, Faculty of Technical Education, King Mongkut’s University of Technology North Bangkok, 1518 Pracharat 1 Road, Bangsue, Bangkok 10800, Thailand
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(8), 693; https://doi.org/10.3390/math7080693
Submission received: 24 June 2019 / Revised: 17 July 2019 / Accepted: 19 July 2019 / Published: 1 August 2019
(This article belongs to the Special Issue Computational Methods in Analysis and Applications)

Abstract

In this article, we propose a modified self-adaptive conjugate gradient algorithm for solving nonlinear monotone equations with convex constraints. Under mild conditions, the global convergence of the method is established. The reported numerical examples show that the method is promising and efficient for solving monotone nonlinear equations. In addition, we apply the proposed algorithm to sparse signal reconstruction problems.

1. Introduction

Suppose $\Omega$ is a nonempty, closed and convex subset of $\mathbb{R}^n$ and $F$ is a continuous function from $\mathbb{R}^n$ to $\mathbb{R}^n$. A constrained nonlinear monotone equation involves finding a point $x \in \Omega$ such that
$$F(x) = 0. \quad (1)$$
Many algorithms have been proposed in the literature to solve constrained nonlinear equations, among them the trust-region and Levenberg–Marquardt methods [1]. However, the need to compute and store a matrix at every iteration makes these methods unsuitable for solving large-scale nonlinear equations.
Conjugate gradient (CG) methods are iterative methods developed for handling unconstrained optimization problems [2,3,4,5,6,7,8,9]. CG methods do not require matrix storage, which makes them among the most efficient methods for handling large-scale unconstrained optimization problems. However, CG methods based on secant conditions do not always generate descent directions. To guarantee a descent direction, Narushima et al. [6] and Zhang et al. [8] proposed three-term CG methods, which always generate descent directions, and established the convergence of the methods under suitable conditions. Also, in Reference [5], Narushima proposed a smoothing CG algorithm, which combines the smoothing approach with the Polak–Ribière–Polyak CG method of Reference [9], to handle unconstrained non-smooth equations; the convergence of the method was established under mild conditions.
Methods for solving unconstrained problems sometimes become less useful (see, for example, References [10,11,12,13]) because, in many practical applications such as equilibrium problems, the solution of the unconstrained problem may lie outside the constrained set $\Omega$. For this reason, researchers shifted their attention to the constrained case (1). In the last few years, many kinds of algorithms for solving nonlinear monotone equations with a convex constrained set $\Omega$ have been developed, and one of the most popular is the projection method. For example, in Reference [14], Wang et al. proposed a projection method for solving systems of monotone nonlinear equations with convex constraints. The method was based on the inexact Newton backtracking technique, and the direction was obtained at each iteration by minimizing a linear system subject to the constraint condition. Also, in Reference [15], Wang et al. presented a modification of the method in Reference [16], and the global convergence as well as the superlinear rate of convergence were established under the same conditions as in Reference [14]. However, the directions of the methods in References [15,16] were determined by minimizing linear equations at each step. To avoid solving a linear equation for the direction at each step, Xiao and Zhu [17] proposed a projected CG method, which combines the CG-DESCENT method of Reference [4] with the projection technique of Solodov and Svaiter [18]. In Reference [19], Liu and Li proposed a modification of the method in Reference [17]; this modification improves the numerical performance of the method in Reference [17] while retaining its nice properties. Furthermore, Wang et al. [20] proposed a self-adaptive three-term CG method for solving constrained nonlinear monotone equations, which can be viewed as a combination of the CG method, the projection method and a self-adaptive strategy.
Motivated by the above methods, we propose a modification of the method in Reference [20] for solving nonlinear monotone equations with convex constraints. The modification improves the numerical performance of the method in Reference [20] while inheriting its nice properties. The difference between the two methods is that the $y_{k-1}$ of Reference [20] is replaced by $w_{k-1}$ (more details can be found in the next section). Under appropriate conditions, the global convergence of the proposed method is established. The numerical results presented show that the proposed method is efficient and promising compared to some similar existing algorithms.
The remaining part of this paper is organized as follows. In Section 2, we state some preliminaries and then present the algorithm. The global convergence of the proposed method is proved in Section 3. In Section 4, we report some numerical experiments to show its performance in solving nonlinear monotone equations with convex constraints, and lastly apply it to solve some signal recovery problems.

2. Preliminaries and Algorithm

This section gives some basic concepts and properties of the projection mapping as well as some assumptions. Throughout the paper, $\|\cdot\|$ denotes the Euclidean norm.
Definition 1.
Let $\Omega \subseteq \mathbb{R}^n$ be a nonempty closed convex set. Then, for any $x \in \mathbb{R}^n$, its orthogonal projection onto $\Omega$, denoted by $P_\Omega(x)$, is defined by
$$P_\Omega(x) = \arg\min\{\|x - y\| : y \in \Omega\}.$$
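For the simple feasible sets used later in the paper (e.g. $\Omega = \mathbb{R}^n_+$), this projection has a closed form. The following is a minimal sketch, assuming the nonnegative orthant and, purely for illustration, a hypothetical box constraint:

```python
import numpy as np

def project_nonneg(x):
    """Projection onto the nonnegative orthant R^n_+ : clip negatives to 0."""
    return np.maximum(x, 0.0)

def project_box(x, lo, hi):
    """Projection onto the box [lo, hi]^n : componentwise clamp (illustrative)."""
    return np.minimum(np.maximum(x, lo), hi)
```

Both maps satisfy the nonexpansiveness property in statement 2 of Lemma 1 below.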
The following Lemma provides us with some well-known properties of the projection mapping.
Lemma 1.
[20] Let $\Omega \subseteq \mathbb{R}^n$ be a nonempty, closed and convex set. Then the following statements hold:
1. $(x - P_\Omega(x))^T (P_\Omega(x) - z) \ge 0$, $\forall x \in \mathbb{R}^n, z \in \Omega$.
2. $\|P_\Omega(x) - P_\Omega(y)\| \le \|x - y\|$, $\forall x, y \in \mathbb{R}^n$.
3. $\|P_\Omega(x) - z\|^2 \le \|x - z\|^2 - \|x - P_\Omega(x)\|^2$, $\forall x \in \mathbb{R}^n, z \in \Omega$.
Throughout this article, we assume the following:
(A1) The solution set of (1) is nonempty.
(A2) The mapping $F$ is monotone, that is, $(F(x) - F(y))^T (x - y) \ge 0$ for all $x, y \in \mathbb{R}^n$.
(A3) The mapping $F(\cdot)$ is Lipschitz continuous, that is, there exists a positive constant $L$ such that $\|F(x) - F(y)\| \le L\|x - y\|$ for all $x, y \in \mathbb{R}^n$.
Algorithm 1: Modified Self-adaptive CG method (MSCG).
Step 0. Given an arbitrary initial point $x_0 \in \Omega$, parameters $\beta > 0$, $r > 0$, $0 < \mu < 2$, $\sigma > 0$, $0 < \rho < 1$, $Tol > 0$, set $k := 0$.
Step 1. If $\|F(x_k)\| \le Tol$, stop; otherwise go to Step 2.
Step 2. Compute
$$d_k = \begin{cases} -F(x_k), & \text{if } k = 0, \\ -F(x_k) + \beta_k d_{k-1} - \theta_k w_{k-1}, & \text{if } k \ge 1, \end{cases} \quad (2)$$
where
$$\beta_k = \frac{F(x_k)^T w_{k-1}}{d_{k-1}^T w_{k-1}}, \qquad \theta_k = \frac{F(x_k)^T d_{k-1}}{d_{k-1}^T w_{k-1}}, \quad (3)$$
$$y_{k-1} = F(x_k) - F(x_{k-1}) + r s_{k-1}, \qquad s_{k-1} = x_k - x_{k-1}, \quad (4)$$
$$w_{k-1} = y_{k-1} + t_{k-1} d_{k-1}, \qquad t_{k-1} = 1 + \max\left\{0, \, -\frac{d_{k-1}^T y_{k-1}}{d_{k-1}^T d_{k-1}}\right\}. \quad (5)$$
Step 3. Compute the step length $\alpha_k = \beta \rho^{m_k}$, where $m_k$ is the smallest non-negative integer $m$ such that
$$-\langle F(x_k + \beta \rho^m d_k), d_k \rangle \ge \sigma \beta \rho^m \|d_k\|^2. \quad (6)$$
Step 4. Set $z_k = x_k + \alpha_k d_k$ and compute
$$x_{k+1} = P_\Omega[x_k - \mu \zeta_k F(z_k)],$$
where
$$\zeta_k = \frac{F(z_k)^T (x_k - z_k)}{\|F(z_k)\|^2}.$$
Step 5. Let $k = k + 1$ and go to Step 1.
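The steps above can be sketched in code. This is a minimal illustration of Algorithm 1, not the authors' MATLAB implementation; `F` is the monotone mapping, `project` stands for $P_\Omega$, and the parameter names follow Step 0:

```python
import numpy as np

def mscg(F, project, x0, beta=1.0, r=0.1, mu=1.8, sigma=1e-4,
         rho=0.6, tol=1e-6, max_iter=1000):
    """Minimal sketch of Algorithm 1 (MSCG) for F(x) = 0 over a convex set.

    F       : monotone mapping R^n -> R^n
    project : projection P_Omega onto the feasible set
    """
    x = np.asarray(x0, dtype=float)
    d, Fx_old, s = None, None, None
    for k in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:              # Step 1: stopping test
            return x, k
        if k == 0:                                 # Step 2: direction (2)-(5)
            d = -Fx
        else:
            y = Fx - Fx_old + r * s
            t = 1.0 + max(0.0, -d.dot(y) / d.dot(d))
            w = y + t * d
            dw = d.dot(w)                          # >= ||d||^2 by Remark 2
            d = -Fx + (Fx.dot(w) / dw) * d - (Fx.dot(d) / dw) * w
        alpha = beta                               # Step 3: line search (6)
        while -F(x + alpha * d).dot(d) < sigma * alpha * d.dot(d):
            alpha *= rho
        z = x + alpha * d                          # Step 4: projection step
        Fz = F(z)
        if np.linalg.norm(Fz) <= tol:              # z already solves the equation
            return z, k + 1
        zeta = Fz.dot(x - z) / Fz.dot(Fz)
        x_new = project(x - mu * zeta * Fz)
        Fx_old, s = Fx, x_new - x                  # Step 5
        x = x_new
    return x, max_iter
```

For instance, with the monotone map $F(x) = x$ and $\Omega = \mathbb{R}^2_+$, the iterates reach the solution $x^* = 0$ within a few steps; termination of the inner backtracking loop is guaranteed by Lemma 2 below.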
It can be observed that the modification consists of replacing the $\beta_k^{HS}$ and $\theta_k$ of Reference [20] with the $\beta_k$ and $\theta_k$ defined in (3).
Remark 1.
$$F(x_k)^T d_k = -F(x_k)^T F(x_k) + \frac{(F(x_k)^T w_{k-1})\, F(x_k)^T d_{k-1} - (F(x_k)^T d_{k-1})\, F(x_k)^T w_{k-1}}{d_{k-1}^T w_{k-1}} = -\|F(x_k)\|^2. \quad (7)$$
Using the Cauchy–Schwarz inequality, we get
$$\|F(x_k)\| \le \|d_k\|. \quad (8)$$
Remark 2.
From the definition of $w_{k-1}$ and $t_{k-1}$, we have
$$d_{k-1}^T w_{k-1} = d_{k-1}^T y_{k-1} + t_{k-1}\|d_{k-1}\|^2 \ge d_{k-1}^T y_{k-1} + \|d_{k-1}\|^2 + \max\{0, -d_{k-1}^T y_{k-1}\} \ge \|d_{k-1}\|^2. \quad (9)$$
The above inequality shows that the denominator of $\beta_k$ and $\theta_k$ cannot be zero unless $d_{k-1} = 0$, which by (8) occurs only at a solution. By contrast, there is no guarantee that the denominator of the $\beta_k^{HS}$ and $\theta_k$ defined in Reference [20] is nonzero. In addition, the conditions imposed in Step 2 of Algorithm 2.1 of Reference [20] are not needed in our case.

3. Convergence Analysis

To prove the global convergence of Algorithm 1, the following lemmas are needed. The first shows that Algorithm 1 is well-defined.
Lemma 2.
Suppose that assumptions (A1)–(A3) hold. Then there exists a step length $\alpha_k$ satisfying the line search (6) for all $k \ge 0$.
Proof. 
Suppose there exists $k_0 \ge 0$ such that (6) does not hold for any non-negative integer $i$, that is,
$$-\langle F(x_{k_0} + \beta\rho^i d_{k_0}), d_{k_0}\rangle < \sigma\beta\rho^i\|d_{k_0}\|^2.$$
Using assumption (A3) and letting $i \to \infty$, we get
$$-\langle F(x_{k_0}), d_{k_0}\rangle \le 0. \quad (10)$$
Also, from (7) we have (noting $F(x_{k_0}) \neq 0$, since otherwise the algorithm stops at Step 1)
$$-\langle F(x_{k_0}), d_{k_0}\rangle = \|F(x_{k_0})\|^2 > 0, \quad (11)$$
which contradicts (10). The proof is complete. □
Lemma 3.
Suppose that (A3) holds and let the sequences $\{x_k\}$ and $\{z_k\}$ be generated by Algorithm 1. Then
$$\alpha_k \ge \min\left\{\beta, \; \frac{\rho\|F(x_k)\|^2}{(L+\sigma)\|d_k\|^2}\right\}. \quad (12)$$
Proof. 
Suppose $\alpha_k \neq \beta$; then $\alpha_k/\rho$ does not satisfy Equation (6), that is,
$$-F\left(x_k + \frac{\alpha_k}{\rho}d_k\right)^T d_k < \sigma\frac{\alpha_k}{\rho}\|d_k\|^2. \quad (13)$$
This, combined with (7) and the Lipschitz continuity of $F$, yields
$$\|F(x_k)\|^2 = -F(x_k)^T d_k = \left(F\left(x_k + \frac{\alpha_k}{\rho}d_k\right) - F(x_k)\right)^T d_k - F\left(x_k + \frac{\alpha_k}{\rho}d_k\right)^T d_k \le L\frac{\alpha_k}{\rho}\|d_k\|^2 + \sigma\frac{\alpha_k}{\rho}\|d_k\|^2 = \frac{L+\sigma}{\rho}\alpha_k\|d_k\|^2.$$
The above inequality implies
$$\alpha_k \ge \frac{\rho\|F(x_k)\|^2}{(L+\sigma)\|d_k\|^2},$$
which completes the proof. □
Lemma 4.
Suppose that assumptions (A1)–(A3) hold. Then the sequences $\{x_k\}$ and $\{z_k\}$ generated by Algorithm 1 are bounded. Moreover,
$$\lim_{k\to\infty}\|x_k - z_k\| = 0$$
and
$$\lim_{k\to\infty}\|x_{k+1} - x_k\| = 0.$$
Proof. 
We first show that the sequences $\{x_k\}$ and $\{z_k\}$ are bounded. Let $\bar{x}$ be a solution of (1). By the monotonicity of $F$, we get
$$\langle F(z_k), x_k - \bar{x}\rangle \ge \langle F(z_k), x_k - z_k\rangle. \quad (14)$$
Also, by the definition of $z_k$ and the line search (6), we have
$$\langle F(z_k), x_k - z_k\rangle \ge \sigma\alpha_k^2\|d_k\|^2 \ge 0. \quad (15)$$
So, using statement 3 of Lemma 1, we have
$$\begin{aligned}\|x_{k+1} - \bar{x}\|^2 &= \|P_\Omega[x_k - \mu\zeta_k F(z_k)] - \bar{x}\|^2 \le \|x_k - \mu\zeta_k F(z_k) - \bar{x}\|^2\\ &= \|x_k - \bar{x}\|^2 - 2\mu\zeta_k\langle F(z_k), x_k - \bar{x}\rangle + \mu^2\zeta_k^2\|F(z_k)\|^2\\ &\le \|x_k - \bar{x}\|^2 - 2\mu\frac{\langle F(z_k), x_k - z_k\rangle^2}{\|F(z_k)\|^2} + \mu^2\frac{\langle F(z_k), x_k - z_k\rangle^2}{\|F(z_k)\|^2}\\ &= \|x_k - \bar{x}\|^2 - \mu(2-\mu)\frac{\langle F(z_k), x_k - z_k\rangle^2}{\|F(z_k)\|^2}\\ &\le \|x_k - \bar{x}\|^2 - \mu(2-\mu)\sigma^2\frac{\|x_k - z_k\|^4}{\|F(z_k)\|^2}. \end{aligned} \quad (16)$$
Thus the sequence $\{\|x_k - \bar{x}\|\}$ is non-increasing and convergent, and hence $\{x_k\}$ is bounded. Furthermore, from Equation (16), we have
$$\|x_{k+1} - \bar{x}\|^2 \le \|x_k - \bar{x}\|^2,$$
and we can deduce recursively that
$$\|x_k - \bar{x}\|^2 \le \|x_0 - \bar{x}\|^2, \quad \forall k \ge 0. \quad (17)$$
Then, from Assumption (A3), we obtain
$$\|F(x_k)\| = \|F(x_k) - F(\bar{x})\| \le L\|x_k - \bar{x}\| \le L\|x_0 - \bar{x}\|.$$
If we let $\omega = L\|x_0 - \bar{x}\|$, then the sequence $\{\|F(x_k)\|\}$ is bounded, that is,
$$\|F(x_k)\| \le \omega, \quad \forall k \ge 0. \quad (18)$$
By the definition of $z_k$, Equation (15), the monotonicity of $F$ and the Cauchy–Schwarz inequality, we get
$$\sigma\|x_k - z_k\| = \frac{\sigma\alpha_k^2\|d_k\|^2}{\|x_k - z_k\|} \le \frac{\langle F(z_k), x_k - z_k\rangle}{\|x_k - z_k\|} \le \frac{\langle F(x_k), x_k - z_k\rangle}{\|x_k - z_k\|} \le \|F(x_k)\|. \quad (19)$$
The boundedness of the sequence $\{x_k\}$ together with Equations (18) and (19) implies that the sequence $\{z_k\}$ is bounded.
Since $\{z_k\}$ is bounded, the sequence $\{\|z_k - \bar{x}\|\}$ is also bounded for any solution $\bar{x}$; that is, there exists a constant $\nu > 0$ such that
$$\|z_k - \bar{x}\| \le \nu, \quad \forall k \ge 0.$$
This together with Assumption (A3) yields
$$\|F(z_k)\| = \|F(z_k) - F(\bar{x})\| \le L\|z_k - \bar{x}\| \le L\nu.$$
Therefore, using Equation (16), we have
$$\frac{\mu(2-\mu)\sigma^2}{(L\nu)^2}\|x_k - z_k\|^4 \le \|x_k - \bar{x}\|^2 - \|x_{k+1} - \bar{x}\|^2,$$
which implies
$$\frac{\mu(2-\mu)\sigma^2}{(L\nu)^2}\sum_{k=0}^{\infty}\|x_k - z_k\|^4 \le \sum_{k=0}^{\infty}\left(\|x_k - \bar{x}\|^2 - \|x_{k+1} - \bar{x}\|^2\right) \le \|x_0 - \bar{x}\|^2 < \infty. \quad (20)$$
Equation (20) implies
$$\lim_{k\to\infty}\|x_k - z_k\| = 0. \quad (21)$$
Moreover, using statement 2 of Lemma 1, the definition of $\zeta_k$ and the Cauchy–Schwarz inequality, we have
$$\|x_{k+1} - x_k\| = \|P_\Omega[x_k - \mu\zeta_k F(z_k)] - x_k\| \le \mu\zeta_k\|F(z_k)\| = \mu\frac{\langle F(z_k), x_k - z_k\rangle}{\|F(z_k)\|} \le \mu\|x_k - z_k\|, \quad \forall k \ge 0,$$
which yields
$$\lim_{k\to\infty}\|x_{k+1} - x_k\| = 0. \quad (22)$$
□
Remark 3.
By Lemma 4 and the definition of $z_k$ ($z_k = x_k + \alpha_k d_k$), we have
$$\lim_{k\to\infty}\alpha_k\|d_k\| = \lim_{k\to\infty}\|x_k - z_k\| = 0.$$
Theorem 1.
Suppose that assumptions (A1)–(A3) hold and let the sequence $\{x_k\}$ be generated by Algorithm 1. Then
$$\liminf_{k\to\infty}\|F(x_k)\| = 0. \quad (23)$$
Proof. 
Assume that Equation (23) is not true. Then there exists a constant $\epsilon > 0$ such that
$$\|F(x_k)\| \ge \epsilon, \quad \forall k \ge 0. \quad (24)$$
We first show that the sequence $\{d_k\}$ is bounded. From the definition of $t_{k-1}$, we have
$$|t_{k-1}| = 1 + \max\left\{0, -\frac{d_{k-1}^T y_{k-1}}{\|d_{k-1}\|^2}\right\} \le 1 + \frac{|d_{k-1}^T y_{k-1}|}{\|d_{k-1}\|^2} \le 1 + \frac{\|d_{k-1}\|\|y_{k-1}\|}{\|d_{k-1}\|^2} = 1 + \frac{\|y_{k-1}\|}{\|d_{k-1}\|}. \quad (25)$$
Also, from the definition of $y_{k-1}$ and assumption (A3), we have
$$\|y_{k-1}\| \le \|F(x_k) - F(x_{k-1})\| + r\|s_{k-1}\| \le (L+r)\|s_{k-1}\| = (L+r)\alpha_{k-1}\|d_{k-1}\|. \quad (26)$$
Furthermore, by the definition of $w_{k-1}$, (25) and (26), we obtain
$$\|w_{k-1}\| \le \|y_{k-1}\| + |t_{k-1}|\|d_{k-1}\| \le (L+r)\alpha_{k-1}\|d_{k-1}\| + \left(1 + \frac{\|y_{k-1}\|}{\|d_{k-1}\|}\right)\|d_{k-1}\| = (L+r)\alpha_{k-1}\|d_{k-1}\| + \|d_{k-1}\| + \|y_{k-1}\| \le \left(2(L+r)\alpha_{k-1} + 1\right)\|d_{k-1}\|. \quad (27)$$
Therefore, by (2), (9), (18), (27) and the Cauchy–Schwarz inequality, we have
$$\|d_k\| \le \|F(x_k)\| + \frac{\|F(x_k)\|\|w_{k-1}\|\|d_{k-1}\|}{|d_{k-1}^T w_{k-1}|} + \frac{\|F(x_k)\|\|d_{k-1}\|\|w_{k-1}\|}{|d_{k-1}^T w_{k-1}|} \le \|F(x_k)\| + \left(4(L+r)\alpha_{k-1} + 2\right)\|F(x_k)\| \le \left(3 + 4(L+r)\beta\right)\omega.$$
Letting $C = \left(3 + 4(L+r)\beta\right)\omega$, we have $\|d_k\| \le C$, $\forall k \ge 0$.
Combining (8) and (24), we have
$$\|d_k\| \ge \|F(x_k)\| \ge \epsilon, \quad \forall k \ge 0. \quad (28)$$
Since $z_k = x_k + \alpha_k d_k$ and $\lim_{k\to\infty}\|x_k - z_k\| = 0$, we get $\lim_{k\to\infty}\alpha_k\|d_k\| = 0$ and hence, by (28),
$$\lim_{k\to\infty}\alpha_k = 0. \quad (29)$$
On the other hand, Lemma 3, (24) and the bound $\|d_k\| \le C$ imply
$$\alpha_k \ge \min\left\{\beta, \frac{\rho\epsilon^2}{(L+\sigma)C^2}\right\} > 0,$$
which contradicts (29). Therefore, (23) must hold. □

4. Numerical Examples

This section reports some numerical results to show the efficiency of Algorithm 1. For convenience, we denote Algorithm 1 by MSCG (modified self-adaptive CG method). We divide this section into two parts. First, we compare the MSCG method with the projected conjugate gradient (PCG) method of Reference [19] and the self-adaptive three-term conjugate gradient (SATCGM) method of Reference [20] by solving some monotone nonlinear equations with convex constraints using different initial points and several dimensions. Second, the MSCG method is applied to signal recovery problems. All codes were written in MATLAB R2017a and run on a PC with an Intel Core i5 2.3 GHz processor and 4 GB of RAM.

4.1. Numerical Examples on Some Convex Constrained Nonlinear Monotone Equations

The same line search implementation was used for MSCG, PCG and SATCGM. The specific parameters used for each method are as follows:
MSCG method: $\beta = 1$, $\mu = 1.8$, $\rho = 0.6$, $r = 0.1$, $\sigma = 0.0001$.
PCG method: all parameters are chosen as in Reference [19].
SATCGM method: all parameters are chosen as in Reference [20].
All runs were stopped whenever $\|F(x_k)\| < 10^{-6}$.
We test Problems 1 to 9 with dimensions n = 1000, 5000, 10,000, 50,000, 100,000 and different initial points: $x_1 = (1, 1, \ldots, 1)^T$, $x_2 = (2, 2, \ldots, 2)^T$, $x_3 = (3, 3, \ldots, 3)^T$, $x_4 = (5, 5, \ldots, 5)^T$, $x_5 = (8, 8, \ldots, 8)^T$, $x_6 = (0.5, 0.5, \ldots, 0.5)^T$, $x_7 = (0.1, 0.1, \ldots, 0.1)^T$, $x_8 = (10, 10, \ldots, 10)^T$. The numerical results in Tables 1–9 report the number of iterations (Iter), the number of function evaluations (Fval), the CPU time in seconds (Time) and the norm of F at the approximate solution (Norm). The symbol '−' indicates that the number of iterations exceeds 1000 and/or the number of function evaluations exceeds 2000.
The tested problem functions $F(x) = (f_1(x), f_2(x), \ldots, f_n(x))^T$, where $x = (x_1, x_2, \ldots, x_n)^T$, and feasible sets $\Omega \subseteq \mathbb{R}^n$ are listed as follows:
Problem 1 (Modified exponential function): $f_1(x) = e^{x_1} - 1$, $f_i(x) = e^{x_i} + x_{i-1} - 1$ for $i = 2, 3, \ldots, n$, and $\Omega = \mathbb{R}^n_+$.
Problem 2 (Logarithmic function): $f_i(x) = \ln(|x_i| + 1) - \frac{x_i}{n}$ for $i = 1, 2, \ldots, n$, and $\Omega = \mathbb{R}^n_+$.
Problem 3 [21]: $f_i(x) = 2x_i - \sin|x_i|$ for $i = 1, 2, \ldots, n$, and $\Omega = \mathbb{R}^n_+$.
Problem 4 [22]: $f_i(x) = \min\left\{\min(|x_i|, x_i^2), \max(|x_i|, x_i^3)\right\}$ for $i = 1, 2, \ldots, n$, and $\Omega = \mathbb{R}^n_+$.
Problem 5 (Strictly convex function [14]): $f_i(x) = e^{x_i} - 1$ for $i = 1, 2, \ldots, n$, and $\Omega = \mathbb{R}^n_+$.
Problem 6 (Linear monotone problem): $f_1(x) = 2.5x_1 + x_2 - 1$, $f_i(x) = x_{i-1} + 2.5x_i + x_{i+1} - 1$ for $i = 2, 3, \ldots, n-1$, $f_n(x) = x_{n-1} + 2.5x_n - 1$, and $\Omega = \mathbb{R}^n_+$.
Problem 7 (Tridiagonal exponential problem [23]): $f_1(x) = x_1 - e^{\cos(h(x_1 + x_2))}$, $f_i(x) = x_i - e^{\cos(h(x_{i-1} + x_i + x_{i+1}))}$ for $i = 2, 3, \ldots, n-1$, $f_n(x) = x_n - e^{\cos(h(x_{n-1} + x_n))}$, where $h = \frac{1}{n+1}$, and $\Omega = \mathbb{R}^n_+$.
Problem 8: $f_1(x) = 3x_1^3 + 2x_2 - 5 + \sin(x_1 - x_2)\sin(x_1 + x_2)$, $f_i(x) = 3x_i^3 + 2x_{i+1} - 5 + \sin(x_i - x_{i+1})\sin(x_i + x_{i+1}) + 4x_i - x_{i-1}e^{x_{i-1} - x_i} - 3$ for $i = 2, 3, \ldots, n-1$, $f_n(x) = 4x_n - x_{n-1}e^{x_{n-1} - x_n} - 3$, and $\Omega = \mathbb{R}^n_+$.
Problem 9: $f_i(x) = x_i - \sin|x_i - 1|$ for $i = 1, 2, \ldots, n$, and $\Omega = \mathbb{R}^n_+$.
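For concreteness, two of the componentwise test problems above are easy to code; the following is a sketch assuming the minus signs lost in typesetting (Problem 3: $f_i(x) = 2x_i - \sin|x_i|$; Problem 5: $f_i(x) = e^{x_i} - 1$):

```python
import numpy as np

def problem3(x):
    """Problem 3: f_i(x) = 2 x_i - sin|x_i|, applied componentwise."""
    return 2.0 * x - np.sin(np.abs(x))

def problem5(x):
    """Problem 5 (strictly convex): f_i(x) = exp(x_i) - 1, componentwise."""
    return np.exp(x) - 1.0
```

Both maps vanish at the origin and are monotone on $\mathbb{R}^n$, so they fit assumptions (A1)–(A3).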
The numerical results indicate that the MSCG method is more effective than the PCG and SATCGM methods for the given problems, as it wins on 6 out of the 9 tested problems in terms of the number of iterations, the number of function evaluations and the CPU time (see Tables 1–9). In particular, the PCG method fails to solve Problem 4 completely, while MSCG solves it for all initial points except $x_6$ and $x_7$ (see Table 4). In addition, the SATCGM method fails to solve Problem 4 for all dimensions considered except dimension 1000 with initial point $x_1$. Therefore, we can conclude that the MSCG method is a very efficient tool for solving nonlinear monotone equations with convex constraints, especially for large-scale problems.

4.2. Experiments on Solving Some Signal Recovery Problems in Compressive Sensing

There are many problems in signal processing and statistical inference that involve finding sparse solutions to ill-conditioned linear systems of equations. A popular approach is to minimize an objective function that contains a quadratic ($\ell_2$) error term and a sparse $\ell_1$-regularization term, that is,
$$\min_x \frac{1}{2}\|y - Ax\|_2^2 + \tau\|x\|_1, \quad (30)$$
where $x \in \mathbb{R}^n$, $y \in \mathbb{R}^k$ is an observation, $A \in \mathbb{R}^{k\times n}$ ($k \ll n$) is a linear operator, $\tau$ is a nonnegative parameter, $\|x\|_2$ denotes the Euclidean norm of $x$ and $\|x\|_1 = \sum_{i=1}^{n}|x_i|$ is the $\ell_1$-norm of $x$. It is easy to see that problem (30) is a convex unconstrained minimization problem. Since an exact restoration can frequently be produced by solving (30) when the original signal is sparse or approximately sparse in some orthogonal basis, problem (30) appears often in compressive sensing.
Iterative methods for solving (30) have been presented in the literature (see References [24,25,26,27,28,29]). The most popular among these are the gradient-based methods, and the earliest gradient projection method for sparse reconstruction (GPSR) was proposed by Figueiredo et al. [27]. The first step of the GPSR method is to express (30) as a quadratic program using the following process.
Let $x \in \mathbb{R}^n$ and split it into its positive and negative parts. Then $x$ can be formulated as
$$x = u - v, \quad u \ge 0, \quad v \ge 0,$$
where $u_i = (x_i)_+$, $v_i = (-x_i)_+$ for all $i = 1, 2, \ldots, n$, and $(\cdot)_+ = \max\{0, \cdot\}$. By the definition of the $\ell_1$-norm, we have $\|x\|_1 = e_n^T u + e_n^T v$, where $e_n = (1, 1, \ldots, 1)^T \in \mathbb{R}^n$. Now (30) can be written as
$$\min_{u,v} \frac{1}{2}\|y - A(u - v)\|_2^2 + \tau e_n^T u + \tau e_n^T v, \quad u \ge 0, \; v \ge 0, \quad (31)$$
which is a bound-constrained quadratic program. Moreover, from Reference [27], Equation (31) can be written in standard form as
$$\min_z \frac{1}{2}z^T D z + c^T z, \quad \text{such that } z \ge 0, \quad (32)$$
where
$$z = \begin{pmatrix} u \\ v \end{pmatrix}, \quad c = \tau e_{2n} + \begin{pmatrix} -b \\ b \end{pmatrix}, \quad b = A^T y, \quad D = \begin{pmatrix} A^T A & -A^T A \\ -A^T A & A^T A \end{pmatrix}.$$
Clearly, D is a positive semi-definite matrix, which implies that Equation (32) is a convex quadratic problem.
Xiao et al. [17] translated (32) into a linear variational inequality problem, which is equivalent to a linear complementarity problem. Furthermore, they pointed out that $z$ is a solution of the linear complementarity problem if and only if it is a solution of the nonlinear equation
$$F(z) = \min\{z, Dz + c\} = 0, \quad (33)$$
where the minimum is taken componentwise.
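The map (33) is straightforward to assemble from $A$, $y$ and $\tau$. The following is a sketch (the componentwise minimum is used, and $D$ is applied without being formed explicitly):

```python
import numpy as np

def build_F(A, y, tau):
    """Assemble F(z) = min(z, Dz + c) from (32)-(33) without forming D.

    z stacks (u, v) with x = u - v. Since D = [[A^T A, -A^T A], [-A^T A, A^T A]]
    and c = tau * e_{2n} + (-b, b) with b = A^T y, we have
    Dz = (A^T A (u - v), -A^T A (u - v)).
    """
    n = A.shape[1]
    AtA = A.T @ A              # fine for small n; use A.T @ (A @ x) for large n
    b = A.T @ y
    c = tau * np.ones(2 * n) + np.concatenate([-b, b])
    def F(z):
        u, v = z[:n], z[n:]
        g = AtA @ (u - v)
        return np.minimum(z, np.concatenate([g, -g]) + c)
    return F
```

As a sanity check, for $A = I$ the minimizer of (30) is the soft-thresholded observation, and the corresponding $z$ is a zero of this map.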
It was proved in References [30,31] that $F(z)$ is continuous and monotone. Therefore, problem (30) can be translated into problem (1), and thus the MSCG method can be applied to solve (30).
In this experiment, we consider a simple compressive sensing scenario, where our goal is to reconstruct a sparse signal of length $n$ from $k$ observations. The quality of restoration is assessed by the mean squared error (MSE) with respect to the original signal $\tilde{x}$,
$$MSE = \frac{1}{n}\|\tilde{x} - x^*\|^2, \quad (34)$$
where $x^*$ is the recovered signal. The signal size is chosen as $n = 2^{12}$, $k = 2^{10}$, and the original signal contains $2^7$ randomly placed nonzero elements. $A$ is the Gaussian matrix generated by the MATLAB command randn(m,n). In addition, the measurement $y$ is corrupted by noise, that is, $y = A\tilde{x} + \eta$, where $\eta$ is Gaussian noise distributed normally with mean 0 and variance $10^{-4}$ ($N(0, 10^{-4})$).
To show the performance of the MSCG method in compressive sensing, we compare it with the PCG method. The parameters in both methods are chosen as $\beta = 1$, $\sigma = 10^{-4}$, $\rho = 0.8$ and $r = 0.1$, and the merit function used is $f(x) = \frac{1}{2}\|y - Ax\|_2^2 + \tau\|x\|_1$. To achieve fairness in comparison, each code was run from the same initial point, with the same continuation technique on the parameter $\tau$, and we observed only the convergence behaviour of each method toward a solution of similar accuracy. The experiment is initialized with $x_0 = A^T y$ and terminates when
$$\frac{|f_k - f_{k-1}|}{|f_{k-1}|} < 10^{-5}, \quad (35)$$
where $f_k$ is the objective function value at $x_k$. In Figure 1, the MSCG and PCG methods recover the disturbed signal almost exactly. To compare the two methods visually, four figures demonstrate their convergence behaviour in terms of MSE and objective function value versus the number of iterations and CPU time (see Figures 2–5). Furthermore, the experiment was repeated for 25 different noise samples (see Table 10). From the table, it can be observed that MSCG is more efficient than PCG in terms of iterations and CPU time in most cases.

5. Conclusions

In this paper, a modified three-term conjugate gradient method for solving monotone nonlinear equations with convex constraints was presented. The proposed algorithm is suitable for solving non-smooth equations because it requires no Jacobian information of the nonlinear equations. Under some assumptions, global convergence properties of the proposed method were proved. The numerical experiments presented clearly show how effective the MSCG algorithm is compared to the PCG and SATCGM methods of References [19,20] for the given constrained problems. In addition, the MSCG algorithm was shown to be effective in signal recovery problems.

Author Contributions

Conceptualization, A.B.A.; methodology, A.B.A.; software, A.B.A.; validation, P.K., A.M.A. and P.T.; formal analysis, P.K. and P.T.; investigation, P.K.; resources, P.K. and P.T.; data curation, A.M.A.; writing–original draft preparation, A.B.A.; writing–review and editing, A.M.A.; visualization, P.T.; supervision, P.K.; project administration, P.K. and P.T.; funding acquisition, P.K. and P.T.

Funding

Petchra Pra Jom Klao Doctoral Scholarship for Ph.D. program of King Mongkut’s University of Technology Thonburi (KMUTT) and Theoretical and Computational Science (TaCS) Center. Moreover, this project was partially supported by the Thailand Research Fund (TRF) and the King Mongkut’s University of Technology Thonburi (KMUTT) under the TRF Research Scholar Award (Grant No. RSA6080047). Also, this research work was funded by King Mongkut’s University of Technology North Bangkok, contract no. KMUTNB-61-GOV-D-68.

Acknowledgments

This project was supported by Center of Excellence in Theoretical and Computational Science (TaCS-CoE) Center under Computational and Applied Science for Smart Innovation research Cluster (CLASSIC), Faculty of Science, KMUTT. The first author thanks for the support of the Petchra Pra Jom Klao Doctoral Scholarship Academic for Ph.D. Program at King Mongkut’s University of Technology Thonburi (KMUTT). Moreover, this research work was financially supported by King Mongkut’s University of Technology Thonburi through the KMUTT 55th Anniversary Commemorative Fund.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kanzow, C.; Yamashita, N.; Fukushima, M. Levenberg–Marquardt methods with strong local convergence properties for solving nonlinear equations with convex constraints. J. Comput. Appl. Math. 2004, 172, 375–397. [Google Scholar] [CrossRef]
  2. Al-Baali, M.; Narushima, Y.; Yabe, H. A family of three term conjugate gradient methods with sufficient descent property for unconstrained optimization. Comput. Optim. Appl. 2015, 60, 89–110. [Google Scholar] [CrossRef]
  3. Hager, W.W.; Zhang, H. A survey of nonlinear conjugate gradient methods. Pac. J. Optim. 2006, 2, 35–58. [Google Scholar]
  4. Hager, W.; Zhang, H. A New Conjugate Gradient Method with Guaranteed Descent and an Efficient Line Search. SIAM J. Optim. 2005, 16, 170–192. [Google Scholar] [CrossRef] [Green Version]
  5. Narushima, Y. A smoothing conjugate gradient method for solving systems of nonsmooth equations. Appl. Math. Comput. 2013, 219, 8646–8655. [Google Scholar] [CrossRef]
  6. Narushima, Y.; Yabe, H.; Ford, J. A Three-Term Conjugate Gradient Method with Sufficient Descent Property for Unconstrained Optimization. SIAM J. Optim. 2011, 21, 212–230. [Google Scholar] [CrossRef] [Green Version]
  7. Sugiki, K.; Narushima, Y.; Yabe, H. Globally Convergent Three-Term Conjugate Gradient Methods that Use Secant Conditions and Generate Descent Search Directions for Unconstrained Optimization. J. Optim. Theory Appl. 2012, 153, 733–757. [Google Scholar] [CrossRef]
  8. Zhang, L.; Zhou, W.; Li, D. Some descent three-term conjugate gradient methods and their global convergence. Optim. Methods Softw. 2007, 22, 697–711. [Google Scholar] [CrossRef]
  9. Zhang, L.; Zhou, W.; Li, D.H. A descent modified Polak–Ribière–Polyak conjugate gradient method and its global convergence. IMA J. Numer. Anal. 2006, 26, 629–640. [Google Scholar] [CrossRef]
  10. Abubakar, A.B.; Kumam, P. An improved three-term derivative-free method for solving nonlinear equations. Comput. Appl. Math. 2018, 37, 6760–6773. [Google Scholar] [CrossRef]
  11. Abubakar, A.B.; Kumam, P. A descent Dai-Liao conjugate gradient method for nonlinear equations. Numer. Algorithms 2019, 81, 197–210. [Google Scholar] [CrossRef]
  12. Muhammed, A.A.; Kumam, P.; Abubakar, A.B.; Wakili, A.; Pakkaranang, N. A New Hybrid Spectral Gradient Projection Method for Monotone System of Nonlinear Equations with Convex Constraints. Thai J. Math. 2018, 16, 125–147. [Google Scholar]
  13. Mohammad, H.; Abubakar, A.B. A positive spectral gradient-like method for nonlinear monotone equations. Bull. Comput. Appl. Math. 2017, 5, 99–115. [Google Scholar]
  14. Wang, C.; Wang, Y.; Xu, C. A projection method for a system of nonlinear monotone equations with convex constraints. Math. Methods Oper. Res. 2007, 66, 33–46. [Google Scholar] [CrossRef]
  15. Wang, C.; Wang, Y. A superlinearly convergent projection method for constrained systems of nonlinear equations. J. Glob. Optim. 2009, 44, 283–296. [Google Scholar] [CrossRef]
  16. Solodov, M.; Svaiter, B. A New Projection Method for Variational Inequality Problems. SIAM J. Control Optim. 1999, 37, 765–776. [Google Scholar] [CrossRef]
  17. Xiao, Y.; Zhu, H. A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing. J. Math. Anal. Appl. 2013, 405, 310–319. [Google Scholar] [CrossRef]
  18. Solodov, M.V.; Svaiter, B.F. A Globally Convergent Inexact Newton Method for Systems of Monotone Equations. In Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods; Fukushima, M., Qi, L., Eds.; Applied Optimization; Springer: Boston, MA, USA; Berlin/Heidelberg, Germany, 1998; Volume 22, pp. 355–369. [Google Scholar]
  19. Liu, J.; Li, S. A projection method for convex constrained monotone nonlinear equations with applications. Comput. Math. Appl. 2015, 70, 2442–2453. [Google Scholar] [CrossRef]
  20. Wang, X.Y.; Li, S.J.; Kou, X.P. A self-adaptive three-term conjugate gradient method for monotone nonlinear equations with convex constraints. Calcolo 2016, 53, 133–145. [Google Scholar] [CrossRef]
  21. Zhou, W.; Li, D. Limited memory BFGS method for nonlinear monotone equations. J. Comput. Math. 2007, 25, 89–96. [Google Scholar]
  22. La Cruz, W. A spectral algorithm for large-scale systems of nonlinear monotone equations. Numer. Algorithms 2017, 76, 1109–1130. [Google Scholar] [CrossRef]
  23. Bing, Y.; Lin, G. An efficient implementation of Merrill’s method for sparse or partially separable systems of nonlinear equations. SIAM J. Optim. 1991, 1, 206–221. [Google Scholar] [CrossRef]
  24. Figueiredo, M.A.; Nowak, R.D. An EM algorithm for wavelet-based image restoration. IEEE Trans. Image Process. 2003, 12, 906–916. [Google Scholar] [CrossRef] [Green Version]
  25. Hale, E.T.; Yin, W.; Zhang, Y. A fixed-point continuation method for ℓ1-regularized minimization with applications to compressed sensing. CAAM TR07-07 Rice Univ. 2007, 43, 44. [Google Scholar]
  26. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef]
  27. Figueiredo, M.A.; Nowak, R.D.; Wright, S.J. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 2007, 1, 586–597. [Google Scholar] [CrossRef]
  28. Van Den Berg, E.; Friedlander, M.P. Probing the Pareto frontier for basis pursuit solutions. SIAM J. Sci. Comput. 2008, 31, 890–912. [Google Scholar] [CrossRef]
  29. Birgin, E.G.; Martínez, J.M.; Raydan, M. Nonmonotone spectral projected gradient methods on convex sets. SIAM J. Optim. 2000, 10, 1196–1211. [Google Scholar] [CrossRef]
  30. Xiao, Y.; Wang, Q.; Hu, Q. Non-smooth equations based method for ℓ1-norm problems with applications to compressed sensing. Nonlinear Anal. Theory Methods Appl. 2011, 74, 3570–3577. [Google Scholar] [CrossRef]
  31. Pang, J.S. Inexact Newton methods for the nonlinear complementarity problem. Math. Program. 1986, 36, 54–71. [Google Scholar] [CrossRef]
Figure 1. From top to bottom: the original signal, the measurement, and the signals recovered by the PCG and MSCG methods.
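Recoveries like the one in Figure 1 come from solving the ℓ1-regularized least-squares problem, in which the soft-thresholding (shrinkage) operator plays a central role. The sketch below is only an illustration of that operator inside a generic ISTA-style loop, with hypothetical sizes and data; it is not the MSCG or PCG solver of the paper.

```python
import numpy as np

def soft_threshold(v, lam):
    """Soft-thresholding (shrinkage) operator: the componentwise minimizer of
    0.5*(x - v)**2 + lam*|x|. Entries with |v_i| <= lam are set to zero;
    the remaining entries are shrunk toward zero by lam."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# ISTA-style illustration on 0.5*||A x - y||^2 + lam*||x||_1
# (hypothetical sizes, data, and step size, not the paper's experiment).
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50)) / np.sqrt(20)   # sensing matrix
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.5, -2.0, 0.7]            # sparse ground truth
y = A @ x_true                                    # measurements
lam, t = 0.01, 0.05                               # penalty weight, step size
x = np.zeros(50)
for _ in range(200):
    # gradient step on the smooth part, then shrinkage on the l1 part
    x = soft_threshold(x - t * A.T @ (A @ x - y), t * lam)
```

Each iteration strictly decreases the objective as long as the step size stays below the reciprocal of the largest eigenvalue of AᵀA.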
Figure 2. Iterations.
Figure 3. CPU time (seconds).
Figure 4. Iterations.
Figure 5. CPU time (seconds).
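Figures 2–5 compare the three methods on iteration counts and CPU time; solver comparisons of this kind are commonly summarized as Dolan–Moré performance profiles over the per-problem results tabulated below. A minimal sketch of how such a profile is computed, using hypothetical iteration counts rather than the paper's data:

```python
import numpy as np

def performance_profile(metric, taus):
    """Dolan-More performance profile.

    metric : (n_solvers, n_problems) array of a cost measure
             (e.g. iterations or CPU time); np.inf marks failures.
    taus   : 1-D array of performance-ratio thresholds (tau >= 1).

    Returns rho, an (n_solvers, len(taus)) array where rho[s, k] is the
    fraction of problems that solver s solves within a factor taus[k]
    of the best solver on that problem.
    """
    metric = np.asarray(metric, dtype=float)
    best = metric.min(axis=0)          # best cost per problem
    ratios = metric / best             # performance ratios r_{p,s}
    rho = np.array([[np.mean(ratios[s] <= t) for t in taus]
                    for s in range(metric.shape[0])])
    return rho

# Hypothetical iteration counts for three solvers on four problems.
iters = np.array([[ 2,  2,  2,  9],
                  [52, 56, 56, 12],
                  [ 1, 29, 10, 10]])
rho = performance_profile(iters, taus=np.array([1.0, 2.0, 30.0]))
```

Plotting each row of `rho` against `taus` as a step curve gives the familiar profile in which the highest curve indicates the most robust solver.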
Table 1. Numerical results for the modified self-adaptive conjugate gradient method (MSCG), the projected conjugate gradient method (PCG) and the self-adaptive three-term conjugate gradient method (SATCGM) for Problem 1 with the given initial points and dimensions.
Dimension | Initial Point | MSCG: Iter / Fval / Time / Norm | PCG: Iter / Fval / Time / Norm | SATCGM: Iter / Fval / Time / Norm
1000 x 1 290.0074890521890.1103218.55   × 10 7 010.0004880
x 2 2100.0055510562040.0351418.99   × 10 7 290.0012010
x 3 2110.0148990562050.0335338.39   × 10 7 2100.0009990
x 3 2140.0062430652370.0416388.27   × 10 7 8390.0024237.2   × 10 7
x 5 2190.0060220783100.053629.48   × 10 7 2160.0013690
x 6 290.0026810541960.0350216.83   × 10 7 7290.0018782.24   × 10 7
x 7 290.0040440491780.0426357.85   × 10 7 280.0014290
x 8 2230.0053360692660.0439628.44   × 10 7 9480.0029061.14   × 10 7
5000 x 1 290.0228570501830.091499.98   × 10 7 010.0008070
x 2 2100.0068540562050.1030926.81   × 10 7 261320.0501195.8   × 10 7
x 3 2110.0110490541990.1040659.9   × 10 7 271370.0532486.74   × 10 7
x 3 2140.0113550642340.1150139.48   × 10 7 261340.0582459.79   × 10 7
x 5 2190.0131440702780.1317788.96   × 10 7 2160.0071470
x 6 290.0121890531930.098867.86   × 10 7 241200.047579.33   × 10 7
x 7 290.0095160471720.0953049.24   × 10 7 211060.0467859.89   × 10 7
x 8 2230.0144010702690.1287416.8   × 10 7 261410.0523278.83   × 10 7
10,000 x 1 290.0176260511870.1736966.85   × 10 7 010.0010960
x 2 2100.0138780552020.2426267.46   × 10 7 251270.1015659.68   × 10 7
x 3 2110.0164710552030.1810916.79   × 10 7 271370.1017026.74   × 10 7
x 3 2140.0168870642340.2079399.63   × 10 7 261340.0864159.78   × 10 7
x 5 2190.0196610874670.3701029.32   × 10 7 2160.0121660
x 6 290.0115910521900.1663518.5   × 10 7 241200.0780869.33   × 10 7
x 7 290.0119880461690.150831   × 10 6 211060.0720479.43   × 10 7
x 8 2230.0220060682620.2158829.85   × 10 7 261410.1001328.39   × 10 7
50,000 x 1 290.0403260491810.7071638.25   × 10 7 010.0032710
x 2 2100.0394810541990.9018248.74   × 10 7 251270.4085028.76   × 10 7
x 3 2110.0441920531970.7797498.21   × 10 7 271370.4315246.74   × 10 7
x 3 2140.0660190632321.0471257.61   × 10 7 261340.4324649.78   × 10 7
x 5 2190.0624540732931.4794997.54   × 10 7 2160.0555190
x 6 290.0387380511870.7693769.74   × 10 7 241200.3596239.32   × 10 7
x 7 290.0482110461700.6501997.66   × 10 7 211060.3401558.53   × 10 7
x 8 2230.0752390682621.0348249.43   × 10 7 261410.4270857.54   × 10 7
100,000 x 1 290.0719540491811.4567188.7   × 10 7 010.0053250
x 2 2100.0703370531961.5641019.51   × 10 7 251270.7952998.39   × 10 7
x 3 2110.1295370531971.6329048.69   × 10 7 271370.8059376.74   × 10 7
x 3 2140.1037650622291.8589228.28   × 10 7 261340.8163029.78   × 10 7
x 5 2190.1320330913942.9095799.4   × 10 7 2160.1048510
x 6 290.0666130511881.4806647.13   × 10 7 241200.7089759.32   × 10 7
x 7 290.0700070461701.3349438.04   × 10 7 211060.6238668.17   × 10 7
x 8 2230.1589840682621.9811329.39   × 10 7 261410.8418297.23   × 10 7
Table 2. Numerical results for MSCG, PCG and SATCGM for Problem 2 with the given initial points and dimensions.
Dimension | Initial Point | MSCG: Iter / Fval / Time / Norm | PCG: Iter / Fval / Time / Norm | SATCGM: Iter / Fval / Time / Norm
1000 x 1 270.0022870490.0051490010.0410110
x 2 3100.00283705110.005127014430.0069891.84   × 10 7
x 3 3100.004206130.004934037811350.2214750
x 3 3100.00400207150.006266037311200.0789630
x 5 4130.00492709190.009583037111140.0716260
x 6 270.0028390370.00369508250.002377.06   × 10 7
x 7 270.0031920250.00307707220.0021221.55   × 10 7
x 8 5160.005398010210.008973037111140.0775390
5000 x 1 270.0055810490.0098380010.0008780
x 2 3100.01218205110.01221509360.0200121.86   × 10 7
x 3 3100.01438506130.01673703100.0058620
x 3 3100.00959407150.01758903100.0075430
x 5 4130.01134409190.02015304130.0082660
x 6 270.0075450370.0109680270.0047190
x 7 270.0064250250.00825309370.0199361.09   × 10 7
x 8 5160.018064010210.024443012460.0244721.72   × 10 7
10,000 x 1 270.014180490.0188190010.0010230
x 2 3100.01415505110.02157209360.0327952.56   × 10 7
x 3 3100.01492506130.02453203100.0106190
x 3 3100.01488607150.02873103100.0102950
x 5 4130.01771709190.03385704130.0154790
x 6 270.0103650370.0181780270.0068030
x 7 270.0105090250.01215209370.0330131.53   × 10 7
x 8 5160.022966010210.037341012460.0425462.37   × 10 7
50,000 x 1 270.0328160490.0598740010.0034580
x 2 3100.05705305110.08075309360.1373085.63   × 10 7
x 3 3100.04754406130.08880103100.0444050
x 3 3100.04503607150.10231503100.0398370
x 5 4130.06364209190.12946704130.0572160
x 6 270.0348520370.0544390270.0301660
x 7 270.0357970250.03532709370.1324063.41   × 10 7
x 8 5160.087191010210.143865012460.1893295.22   × 10 7
100,000 x 1 270.0570210490.1191860010.0053450
x 2 3100.09160105110.14437509360.2903647.94   × 10 7
x 3 3100.087606130.17627803100.0813860
x 3 3100.1186707150.20212103100.0787660
x 5 4130.12220709190.25442404130.1055360
x 6 270.0612730370.0928840270.0508010
x 7 270.0867320250.06490609370.309134.82   × 10 7
x 8 5160.137848010210.279527012460.3844667.36   × 10 7
Table 3. Numerical results for MSCG, PCG and SATCGM for Problem 3 with the given initial points and dimensions.
Dimension | Initial Point | MSCG: Iter / Fval / Time / Norm | PCG: Iter / Fval / Time / Norm | SATCGM: Iter / Fval / Time / Norm
1000 x 1 280.002131011290.0076475.91   × 10 7 010.0303170
x 2 280.002618011290.0064159.95   × 10 7 280.0033620
x 3 290.003474011300.0070097.4   × 10 7 8340.0030584.12   × 10 7
x 3 290.003267013360.0101255.86   × 10 7 7300.0023819.62   × 10 7
x 5 290.002792012340.0068021.45   × 10 7 3130.0018920
x 6 280.003134011290.0083196.19   × 10 7 7290.0022174.45   × 10 7
x 7 280.00293010260.0086983.51   × 10 7 7290.0022661.39   × 10 7
x 8 290.003217012340.0113991.02   × 10 7 8340.0027921.8   × 10 7
5000 x 1 280.004982012310.0236951.23   × 10 7 010.0006940
x 2 280.007135012320.0249079.71   × 10 7 280.0048920
x 3 290.009629012330.0247847.22   × 10 7 10420.0182142.06   × 10 7
x 3 290.009662014380.0281321.22   × 10 7 9380.0171154.81   × 10 7
x 5 290.007819012340.0251623.25   × 10 7 3130.006730
x 6 280.006236012310.0226881.28   × 10 7 9370.016182.22   × 10 7
x 7 280.00624010260.0193497.85   × 10 7 8330.0144796.96   × 10 7
x 8 290.006238012340.0261132.27   × 10 7 9380.0181849   × 10 7
10,000 x 1 280.008726012310.0425521.73   × 10 7 010.0013910
x 2 280.012242013340.043961.27   × 10 7 280.0079360
x 3 290.012729013350.0415319.47   × 10 7 10420.0326852.92   × 10 7
x 3 290.012833014380.0436161.72   × 10 7 9380.029036.8   × 10 7
x 5 290.008266012340.0405754.59   × 10 7 3130.0118080
x 6 280.018599012310.0373811.81   × 10 7 9370.0278413.14   × 10 7
x 7 280.009012011290.0345014.84   × 10 7 8330.0250449.85   × 10 7
x 8 290.009328012340.0422233.21   × 10 7 10420.0318961.27   × 10 7
50,000 x 1 280.032446012310.1395583.88   × 10 7 010.0030990
x 2 280.04508013340.1531582.85   × 10 7 280.0264790
x 3 290.035499013350.1543922.12   × 10 7 10420.1444326.52   × 10 7
x 3 290.043984014380.1652443.85   × 10 7 10420.1351361.52   × 10 7
x 5 290.037903013370.1625554.48   × 10 7 3130.0434190
x 6 280.03081012310.1359044.06   × 10 7 9370.1147767.03   × 10 7
x 7 280.031101012310.1334711.01   × 10 7 9370.1145572.2   × 10 7
x 8 290.034806012340.1480587.18   × 10 7 10420.1567542.85   × 10 7
100,000 x 1 280.063409012310.2729965.48   × 10 7 010.0048480
x 2 280.064054013340.2983724.03   × 10 7 280.0517390
x 3 290.085457013350.3037072.99   × 10 7 10420.2466699.22   × 10 7
x 3 290.065247014380.3291495.44   × 10 7 10420.2534352.15   × 10 7
x 5 290.070299013370.3243636.34   × 10 7 3130.0839620
x 6 280.058418012310.2728165.74   × 10 7 9370.2212649.94   × 10 7
x 7 280.056572012310.2749541.42   × 10 7 9370.2231743.11   × 10 7
x 8 290.092306013370.3573514.43   × 10 7 10420.2540614.03   × 10 7
Table 4. Numerical results for MSCG, PCG and SATCGM for Problem 4 with the given initial points and dimensions.
Dimension | Initial Point | MSCG: Iter / Fval / Time / Norm | PCG: Iter / Fval / Time / Norm | SATCGM: Iter / Fval / Time / Norm
1000 x 1 280.0025190----010.0328120
x 2 280.0042860----65519670.1398129.97   × 10 7
x 3 280.0037140----65519670.1246189.98   × 10 7
x 3 280.0045560----64919490.1261329.98   × 10 7
x 5 280.0050030----65619710.1292959.98   × 10 7
x 6 --------64819450.123819.98   × 10 7
x 7 --------65219570.1329229.98   × 10 7
x 8 280.0028680----65419650.121199.98   × 10 7
5000 x 1 280.0124320----010.0011310
x 2 280.0115340--------
x 3 280.0090140--------
x 3 280.0089440--------
x 5 280.0091430--------
x 6 ------------
x 7 ------------
x 8 280.0105860--------
10,000 x 1 280.0136010----010.001480
x 2 280.0174820--------
x 3 280.0204850--------
x 3 280.0167170--------
x 5 280.0168280--------
x 6 ------------
x 7 ------------
x 8 280.0140880--------
50,000 x 1 280.0596160----010.0048830
x 2 280.0688170--------
x 3 280.0646860--------
x 3 280.0610620--------
x 5 280.0678140--------
x 6 ------------
x 7 ------------
x 8 280.0649790--------
100,000 x 1 280.1153260----010.009630
x 2 280.1207460--------
x 3 280.1354220--------
x 3 280.1384360--------
x 5 280.1294480--------
x 6 ------------
x 7 ------------
x 8 280.1207020--------
Table 5. Numerical results for MSCG, PCG and SATCGM for Problem 5 with the given initial points and dimensions.
Dimension | Initial Point | MSCG: Iter / Fval / Time / Norm | PCG: Iter / Fval / Time / Norm | SATCGM: Iter / Fval / Time / Norm
x 1 290.00237011290.0067321.54   × 10 7 010.0076970
x 2 2100.002475011300.0071491.79   × 10 7 290.0029720
x 3 2110.002987013390.0093952.64   × 10 7 2100.0010820
x 3 2140.003295012370.0086082.18   × 10 7 3170.0016280
x 5 2190.00434012410.0095462.64   × 10 7 2160.0019330
x 6 280.00273011290.007462.65   × 10 7 280.0010170
x 7 280.003503010260.0091292.28   × 10 7 6250.0021567.36   × 10 7
x 8 2230.00465801140.00356603230.0018520
5000 x 1 290.009611011290.019073.45   × 10 7 010.0007460
x 2 2100.006664011300.0198654   × 10 7 290.004750
x 3 2110.006446013390.024965.9   × 10 7 2100.0042440
x 3 2140.008092012370.0246944.88   × 10 7 3170.0067770
x 5 2190.012408012410.026165.9   × 10 7 2160.0071670
x 6 280.009391011290.0205715.91   × 10 7 280.0037630
x 7 280.005317010260.0195935.1   × 10 7 8330.0132043.68   × 10 7
x 8 2230.0244801140.01090703230.0080390
10,000 x 1 290.007472011290.0294254.87   × 10 7 010.0009010
x 2 2100.012876011300.0306025.65   × 10 7 290.0071090
x 3 2110.011031013390.0398.34   × 10 7 2100.0069250
x 3 2140.011727012370.035426.9   × 10 7 3170.0128820
x 5 2190.025132012410.0357588.34   × 10 7 2160.0111740
x 6 280.006515011290.0282158.36   × 10 7 280.0054410
x 7 280.008092010260.037427.21   × 10 7 8330.0208775.2   × 10 7
x 8 2230.01934401140.01292103230.0139160
50,000 x 1 290.028692012320.1142364.75   × 10 7 010.0022260
x 2 2100.030939012330.1283435.51   × 10 7 290.0260630
x 3 2110.03574014420.1451848.13   × 10 7 2100.0256240
x 3 2140.042175013400.1349066.73   × 10 7 3170.0445290
x 5 2190.051753013440.1516628.14   × 10 7 2160.041470
x 6 280.030385012320.1122518.16   × 10 7 280.0215940
x 7 280.025703011290.2367187.04   × 10 7 9370.1093171.16   × 10 7
x 8 2230.07404901140.07209303230.0582220
100,000 x 1 290.051896012320.3790626.72   × 10 7 010.003850
x 2 2100.057322012330.2960277.8   × 10 7 290.0488270
x 3 2110.074678015440.4209861.07   × 10 7 2100.0539150
x 3 2140.073831013400.372959.52   × 10 7 3170.0917450
x 5 2190.112341014460.3717471.07   × 10 7 2160.0858490
x 6 280.044526013340.33381.07   × 10 7 280.0401330
x 7 280.050034011290.2746079.95   × 10 7 9370.1832051.65   × 10 7
x 8 2230.10581601140.07905603230.1059570
Table 6. Numerical results for MSCG, PCG and SATCGM for Problem 6 with the given initial points and dimensions.
Dimension | Initial Point | MSCG: Iter / Fval / Time / Norm | PCG: Iter / Fval / Time / Norm | SATCGM: Iter / Fval / Time / Norm
1000 x 1 382430.0268744.9   × 10 7 803660.045248.69   × 10 7 271360.0301257.98   × 10 7
x 2 583670.0453645.9   × 10 7 642950.041169.27   × 10 7 281410.0156317.98   × 10 7
x 3 583650.0405426.6   × 10 7 813710.049379.38   × 10 7 281410.0096927.98   × 10 7
x 3 452880.0351039.3   × 10 7 924200.05999.21   × 10 7 281410.0097987.98   × 10 7
x 5 432760.0368874.9   × 10 7 984470.063139.72   × 10 7 281410.0122477.98   × 10 7
x 6 553480.0463448.2   × 10 7 884010.058728.89   × 10 7 271360.0114275.98   × 10 7
x 7 483040.0373476.5   × 10 7 934230.059818.67   × 10 7 261310.013329.02   × 10 7
x 8 392510.0328867.3   × 10 7 1024650.06868.53   × 10 7 281410.0113957.98   × 10 7
5000 x 1 392500.0950439   × 10 7 773530.164379.81   × 10 7 351930.0715975.32   × 10 7
x 2 553480.1546426.3   × 10 7 632910.149188.23   × 10 7 744420.1600349.27   × 10 7
x 3 432750.10284.9   × 10 7 803670.170728.51   × 10 7 372050.0781597.82   × 10 7
x 3 342230.0808454.9   × 10 7 914160.201718.36   × 10 7 502770.1128648.29   × 10 7
x 5 513220.1225679.5   × 10 7 974430.245488.84   × 10 7 502770.0973878.68   × 10 7
x 6 392520.1217558.1   × 10 7 853880.180999.98   × 10 7 281550.0518538.69   × 10 7
x 7 352260.1199037.1   × 10 7 904100.190889.71   × 10 7 372050.0884558.1   × 10 7
x 8 533350.1160794.8   × 10 7 994520.209599.56   × 10 7 522880.1195195.08   × 10 7
10,000 x 1 422700.2472049.4   × 10 7 763490.338269.6   × 10 7 331820.1300178.6   × 10 7
x 2 422710.1954217.3   × 10 7 612820.268139.8   × 10 7 744430.3367459.93   × 10 7
x 3 392520.2277711   × 10 6 803670.35358.11   × 10 7 935560.3934599.31   × 10 7
x 3 553460.2536617.8   × 10 7 884030.402179.94   × 10 7 965750.409339.64   × 10 7
x 5 392520.2154844.2   × 10 7 964390.429768.6   × 10 7 1006000.4229099.8   × 10 7
x 6 402570.1935186.3   × 10 7 843840.369189.64   × 10 7 331830.1326156.75   × 10 7
x 7 402590.19378.9   × 10 7 904100.395089.38   × 10 7 372040.1560075.58   × 10 7
x 8 442840.2345784.8   × 10 7 984480.441349.32   × 10 7 1026120.4638349.46   × 10 7
50,000 x 1 664171.3574178.7   × 10 7 763491.428718.74   × 10 7 331820.6807797.07   × 10 7
x 2 573621.1742495.3   × 10 7 602781.115548.73   × 10 7 402220.6758867.8   × 10 7
x 3 533381.1754478.1   × 10 7 773541.479949.12   × 10 7 442440.7782246.61   × 10 7
x 3 543431.1486985.3   × 10 7 873991.642059.08   × 10 7 402220.7277458.56   × 10 7
x 5 644031.3329015.5   × 10 7 934261.764839.69   × 10 7 402220.7423467.11   × 10 7
x 6 573611.1563739.7   × 10 7 833801.585238.85   × 10 7 351940.6511896.22   × 10 7
x 7 563561.1835576.7   × 10 7 894061.651918.53   × 10 7 392150.685236.55   × 10 7
x 8 694341.422086.9   × 10 7 984481.847748.45   × 10 7 402220.7171287.69   × 10 7
100,000 x 1 573632.5959967.4   × 10 7 753453.363078.42   × 10 7 351941.3865078.91   × 10 7
x 2 483092.272899.2   × 10 7 602782.640658.38   × 10 7 362001.4155896.35   × 10 7
x 3 563592.5428955.3   × 10 7 763503.487798.89   × 10 7 382111.5215388.92   × 10 7
x 3 654122.9147849.9   × 10 7 873993.839868.66   × 10 7 382111.522629.77   × 10 7
x 5 593762.6744557.6   × 10 7 934264.063549.21   × 10 7 462541.8191435.33   × 10 7
x 6 613862.7760467.1   × 10 7 833803.602898.44   × 10 7 372051.4495686   × 10 7
x 7 523332.4723884.6   × 10 7 884023.868738.22   × 10 7 311711.2226385.94   × 10 7
x 8 513272.3673178.5   × 10 7 954354.30919.98   × 10 7 422321.6659429.69   × 10 7
Table 7. Numerical results for MSCG, PCG and SATCGM for Problem 7 with the given initial points and dimensions.
Dimension | Initial Point | MSCG: Iter / Fval / Time / Norm | PCG: Iter / Fval / Time / Norm | SATCGM: Iter / Fval / Time / Norm
1000 x 1 9370.0125289.18   × 10 8 12310.0109642.45   × 10 7 10490.0665416.29   × 10 7
x 2 8330.0086344.79   × 10 7 12310.0142421.02   × 10 7 8410.0040336.37   × 10 7
x 3 8330.0120011.88   × 10 7 10260.0070359.94   × 10 7 9450.0051789.16   × 10 7
x 3 9370.0120421.22   × 10 7 12310.0125853.25   × 10 7 10500.0043371.27   × 10 7
x 5 9370.0117772.82   × 10 7 12310.0164467.53   × 10 7 10490.0056166.07   × 10 7
x 6 9370.0075881.18   × 10 7 12310.0137523.17   × 10 7 9460.0039952.22   × 10 7
x 7 9370.0118521.4   × 10 7 12310.0142763.74   × 10 7 10490.0043876.31   × 10 7
x 8 9370.0121233.9   × 10 7 13340.0177654.53   × 10 7 11520.0043386.29   × 10 7
5000 x 1 9370.0367162.04   × 10 7 12310.0481085.51   × 10 7 10410.0581331.92   × 10 7
x 2 9370.035218.52   × 10 8 12310.0449472.3   × 10 7 9370.0310975.08   × 10 7
x 3 8330.0361314.18   × 10 7 11290.036749.73   × 10 7 9370.0320281.99   × 10 7
x 3 9370.0348922.71   × 10 7 12310.0572877.31   × 10 7 10410.0282691.61   × 10 7
x 5 9370.0321416.27   × 10 7 13340.038667.39   × 10 7 10410.0299283.73   × 10 7
x 6 9370.0338812.63   × 10 7 12310.0415637.11   × 10 7 10410.0290491.57   × 10 7
x 7 9370.0335723.11   × 10 7 12310.0456078.39   × 10 7 10410.0343991.85   × 10 7
x 8 9370.0361098.64   × 10 7 14360.0493459.45   × 10 8 10410.0290485.15   × 10 7
10,000 x 1 9370.0578112.88   × 10 7 12310.0702047.79   × 10 7 10410.0555642.72   × 10 7
x 2 9370.0571911.21   × 10 7 12310.0700293.26   × 10 7 9370.0535167.18   × 10 7
x 3 8330.075485.91   × 10 7 12310.0655841.28   × 10 7 9370.0494192.82   × 10 7
x 3 9370.0524423.83   × 10 7 13340.0724244.51   × 10 7 10410.0611142.28   × 10 7
x 5 9370.0768228.86   × 10 7 14360.0746769.69   × 10 8 10410.0566725.28   × 10 7
x 6 9370.0637513.72   × 10 7 13340.0851154.39   × 10 7 10410.0608592.22   × 10 7
x 7 9370.0552044.39   × 10 7 13340.0739135.18   × 10 7 10410.0628712.62   × 10 7
x 8 10410.0835539.77   × 10 8 14360.0873171.34   × 10 7 10410.0588837.28   × 10 7
50,000 x 1 9370.2084856.45   × 10 7 13340.3021897.6   × 10 7 10410.2510476.08   × 10 7
x 2 9370.2315842.69   × 10 7 12310.2727967.28   × 10 7 10410.2624281.61   × 10 7
x 3 9370.2156841.06   × 10 7 12310.3381262.86   × 10 7 9370.2287646.3   × 10 7
x 3 9370.2331938.56   × 10 7 14360.4252499.36   × 10 8 10410.2531585.1   × 10 7
x 5 10410.2380341.59   × 10 7 14360.4352612.17   × 10 7 11450.2685141.18   × 10 7
x 6 9370.2840858.32   × 10 7 13340.5915639.81   × 10 7 10410.2353734.96   × 10 7
x 7 9370.2235289.82   × 10 7 14360.4015541.07   × 10 7 10410.2313055.85   × 10 7
x 8 10410.2440652.19   × 10 7 14360.4744722.99   × 10 7 11450.2623471.63   × 10 7
100,000 x 1 9370.454499.12   × 10 7 14360.7538569.97   × 10 8 10410.5209398.6   × 10 7
x 2 9370.5042093.81   × 10 7 13340.6929574.49   × 10 7 10410.5062422.27   × 10 7
x 3 9370.5986171.49   × 10 7 12310.6062754.04   × 10 7 9370.463888.91   × 10 7
x 3 10410.4945939.68   × 10 8 14360.7676061.32   × 10 7 10410.5137257.22   × 10 7
x 5 10410.5774112.24   × 10 7 14360.7444963.06   × 10 7 11450.621111.67   × 10 7
x 6 10410.5618669.42   × 10 8 14360.756631.29   × 10 7 10410.6182477.01   × 10 7
x 7 10410.556871.11   × 10 7 14360.7289911.52   × 10 7 10410.5443198.28   × 10 7
x 8 10410.5627733.09   × 10 7 14360.7039064.23   × 10 7 11450.5437992.3   × 10 7
Table 8. Numerical results for MSCG, PCG and SATCGM for Problem 8 with the given initial points and dimensions.
Dimension | Initial Point | MSCG: Iter / Fval / Time / Norm | PCG: Iter / Fval / Time / Norm | SATCGM: Iter / Fval / Time / Norm
1000 x 1 010.0017790010.001328013890.1583912.25   × 10 7
x 2 413560.1800469.5   × 10 7 181280.0720225.55   × 10 7 151070.0105355.89   × 10 7
x 3 403520.1661799.02   × 10 7 191370.0825378.24   × 10 7 171220.0173729.6   × 10 7
x 3 534730.2134447.73   × 10 7 191380.0818414.96   × 10 7 181330.0187671.88   × 10 7
x 5 312840.1426135.81   × 10 7 453130.1675769.2   × 10 7 161170.0118817.24   × 10 7
x 6 403600.1563938.3   × 10 7 463140.1715457.72   × 10 7 14940.0114453.19   × 10 7
x 7 312790.1238887.98   × 10 7 322190.1258637.83   × 10 7 181230.0122744.31   × 10 7
x 8 352830.1275239.52   × 10 7 ----191290.0151649.14   × 10 7
5000 x 1 010.0017110010.0016720342540.4523856.31   × 10 7
x 2 564890.8650668.21   × 10 7 191350.2786254.25   × 10 7 382730.4645376.65   × 10 7
x 3 443830.6559668   × 10 7 201440.3041176.27   × 10 7 553950.6844549.47   × 10 7
x 3 423810.6875887.45   × 10 7 201450.311773.73   × 10 7 413010.4856842.87   × 10 7
x 5 282570.4381798.43   × 10 7 483310.6654789.68   × 10 7 352650.4298895.99   × 10 7
x 6 373330.5787897.75   × 10 7 201410.2765624.24   × 10 7 241810.290185.67   × 10 7
x 7 302700.45977.02   × 10 7 291990.3845238.46   × 10 7 342540.4228719.83   × 10 7
x 8 534530.7709328.01   × 10 7 482950.5858347.16   × 10 7 ----
10,000 x 1 010.0032120010.0048460282090.6949658.3   × 10 7
x 2 302520.8272427.1   × 10 7 191350.5029176.37   × 10 7 463181.0145327.99   × 10 7
x 3 524451.4448829.42   × 10 7 201440.5447199.58   × 10 7 493601.1865354.75   × 10 7
x 3 393541.1647137.61   × 10 7 201450.5421925.52   × 10 7 513691.2480334.27   × 10 7
x 5 272480.8259879.85   × 10 7 483311.3148587.51   × 10 7 292210.6902854.89   × 10 7
x 6 363241.1024437.47   × 10 7 201410.5245633.81   × 10 7 322370.8365339.41   × 10 7
x 7 302700.881727.45   × 10 7 271870.6969476.15   × 10 7 352580.8145547.82   × 10 7
x 8 595041.6286727.67   × 10 7 322200.8304396.31   × 10 7 ----
50,000 x 1 010.014180010.0096380443164.5471286.58   × 10 7
x 2 292553.7505168.09   × 10 7 201422.4732955.46   × 10 7 372674.7210438.72   × 10 7
x 3 463985.8019099.64   × 10 7 211512.6486158.04   × 10 7 ----
x 3 383454.9696099.49   × 10 7 211522.6199384.6   × 10 7 ----
x 5 262393.4710288.45   × 10 7 422935.073558.25   × 10 7 453274.6017417.26   × 10 7
x 6 343064.4796817.04   × 10 7 201412.4259917.18   × 10 7 453324.7314084.76   × 10 7
x 7 282523.6120338.54   × 10 7 241672.90878.62   × 10 7 372673.8731987   × 10 7
x 8 514436.3610799.55   × 10 7 211542.6276034.8   × 10 7 ----
100,000 x 1 010.0270510010.0241370----
x 2 292557.2707229.08   × 10 7 201425.1926228.61   × 10 7 ----
x 3 4436810.870127.71   × 10 7 221585.56713.91   × 10 7 ----
x 3 373369.779219.4   × 10 7 211525.240377.3   × 10 7 ----
x 5 262397.0448539.27   × 10 7 4329910.948746.69   × 10 7 ----
x 6 332978.6816058.44   × 10 7 221565.4628715.41   × 10 7 302256.4201686.36   × 10 7
x 7 272437.1805618.73   × 10 7 231615.7089246.57   × 10 7 392808.0967697.51   × 10 7
x 8 4741612.170017.77   × 10 7 211545.4766086.48   × 10 7 ----
Table 9. Numerical results for MSCG, PCG and SATCGM for Problem 9 with the given initial points and dimensions.
Dimension | Initial Point | MSCG: Iter / Fval / Time / Norm | PCG: Iter / Fval / Time / Norm | SATCGM: Iter / Fval / Time / Norm
1000 x 1 13660.0078353.04   × 10 7 10360.0081423.03   × 10 7 10500.0121612.2   × 10 7
x 2 13640.0122856.68   × 10 7 10340.0112196.61   × 10 7 11530.0036382.2   × 10 7
x 3 13640.0120866.68   × 10 7 10340.0115377.04   × 10 8 11530.0040432.2   × 10 7
x 3 13650.0191516.68   × 10 7 11370.0099023.94   × 10 7 11540.0038322.2   × 10 7
x 5 13640.0090386.68   × 10 7 11380.0121951.5   × 10 7 9440.003592.7   × 10 7
x 6 10510.0115985.96   × 10 7 8290.0103774.31   × 10 7 7360.0025894.39   × 10 7
x 7 12610.0129596.38   × 10 7 9320.0080835.32   × 10 7 10500.0038961.91   × 10 7
x 8 13650.0161456.68   × 10 7 12400.0111249.64   × 10 7 10490.0033171.67   × 10 7
5000 x 1 13660.0397596.79   × 10 7 10360.0292666.77   × 10 7 12600.0257572.73   × 10 7
x 2 14690.0399393.18   × 10 7 11380.0319456.23   × 10 7 13630.0273262.73   × 10 7
x 3 14690.0365693.18   × 10 7 10340.0284281.57   × 10 7 13630.0307122.73   × 10 7
x 3 14700.0449593.18   × 10 7 11370.027438.81   × 10 7 13640.0298552.73   × 10 7
x 5 14690.040453.18   × 10 7 11380.0292143.36   × 10 7 11540.0257313.35   × 10 7
x 6 11560.0297512.84   × 10 7 8290.0267999.65   × 10 7 9460.0192325.45   × 10 7
x 7 13660.0367433.04   × 10 7 10360.0259155.02   × 10 7 12600.0244562.36   × 10 7
x 8 14700.037233.18   × 10 7 13440.0350789.09   × 10 7 12590.037582.07   × 10 7
10,000 x 1 13660.0931649.61   × 10 7 10360.0424089.57   × 10 7 12600.0395763.86   × 10 7
x 2 14690.0808924.5   × 10 7 11380.0665488.82   × 10 7 13630.0447413.86   × 10 7
x 3 14690.0551364.5   × 10 7 10340.0491212.23   × 10 7 13630.0505723.86   × 10 7
x 3 14700.0800064.5   × 10 7 12410.0518115.26   × 10 7 13640.0433013.86   × 10 7
x 5 14690.0841064.5   × 10 7 11380.0520244.75   × 10 7 11540.0408924.74   × 10 7
x 6 11560.044384.02   × 10 7 9330.0420295.75   × 10 7 9460.0314417.7   × 10 7
x 7 13660.0746544.3   × 10 7 10360.0480077.1   × 10 7 12600.0464453.34   × 10 7
x 8 14700.0723954.5   × 10 7 14470.0666328.67   × 10 8 12590.0442712.93   × 10 7
50,000 x 1 14710.2227344.58   × 10 7 11400.1654549.02   × 10 7 12600.1683848.63   × 10 7
x 2 15740.2599812.15   × 10 7 12410.1872991.33   × 10 7 13630.1808868.63   × 10 7
x 3 15740.2183282.15   × 10 7 10340.1482324.98   × 10 7 13630.2038148.63   × 10 7
x 3 15750.2220712.15   × 10 7 13440.2037047.93   × 10 8 13640.1954248.63   × 10 7
x 5 15740.2303322.15   × 10 7 12420.17714.48   × 10 7 12590.1676241.67   × 10 7
x 6 11560.1687228.99   × 10 7 10360.1583278.68   × 10 8 10510.1443442.71   × 10 7
x 7 13660.1906259.62   × 10 7 11390.1757211.07   × 10 7 12600.1673797.48   × 10 7
x 8 15750.2531422.15   × 10 7 14470.220661.94   × 10 7 12590.2014186.55   × 10 7
100,000 x 1 14710.4620826.48   × 10 7 12430.3814888.61   × 10 8 13650.4064481.92   × 10 7
x 2 15740.5528613.04   × 10 7 12410.3650631.88   × 10 7 14680.4243041.92   × 10 7
x 3 15740.5388223.04   × 10 7 10340.2955177.04   × 10 7 14680.4264711.92   × 10 7
x 3 15750.5959133.04   × 10 7 13440.3794921.12   × 10 7 14690.4275961.92   × 10 7
x 5 15740.512133.04   × 10 7 12420.3592356.34   × 10 7 12590.3602582.36   × 10 7
x 6 12610.4613062.71   × 10 7 10360.3052151.23   × 10 7 10510.2930423.84   × 10 7
x 7 14710.5073792.9   × 10 7 11390.3737671.51   × 10 7 13650.3772971.66   × 10 7
x 8 15750.5257493.04   × 10 7 14470.423662.74   × 10 7 12590.3369579.27   × 10 7
Table 10. Twenty-five experiment results for the ℓ1-norm regularization problem with the MSCG and PCG methods.
MSCG: MSE / Iter / CPU (s) | PCG: MSE / Iter / CPU (s)
r = 0.19.90   × 10 4 1013.971.48   × 10 5 1455.5
1.58   × 10 3 1033.311.63   × 10 5 1404.5
7.68   × 10 4 1133.171.30   × 10 5 1333.47
1.07   × 10 3 1283.451.70   × 10 5 1453.95
1.23   × 10 3 1223.21.38   × 10 5 1433.77
1.62   × 10 3 882.341.48   × 10 5 1393.72
1.66   × 10 3 1143.191.84   × 10 5 1323.59
2.63   × 10 3 952.751.83   × 10 5 1233.41
1.16   × 10 3 992.671.22   × 10 5 1132.92
1.91   × 10 3 1072.841.79   × 10 5 1142.92
2.18   × 10 3 1062.692.09   × 10 5 1102.81
8.60   × 10 3 1072.771.63   × 10 5 1313.38
1.33   × 10 3 1022.781.27   × 10 5 1433.78
1.03   × 10 3 1194.531.06   × 10 5 1405.34
1.15   × 10 3 1103.031.48   × 10 5 1353.61
1.77   × 10 3 1104.271.69   × 10 5 1485.75
1.36   × 10 3 1033.831.47   × 10 5 1144.34
1.67   × 10 3 1123.421.78   × 10 5 1203.88
1.21   × 10 3 1074.381.47   × 10 5 1144.91
9.99   × 10 4 1013.861.47   × 10 5 1455.55
1.58   × 10 3 1032.781.63   × 10 5 1403.7
7.68   × 10 4 1133.161.30   × 10 5 1333.92
1.07   × 10 3 1284.771.70   × 10 5 1455.59
1.23   × 10 3 1223.231.38   × 10 5 1433.91
1.62   × 10 3 882.411.48   × 10 5 1393.81
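The MSE values in Table 10 score reconstruction quality as the mean squared error between the original and recovered signals, MSE = (1/n)‖x − x̂‖². A minimal sketch of how such an experiment is set up and scored, with hypothetical sizes and numpy only (the actual problem dimensions and solver are those of the paper's experiment, not this sketch):

```python
import numpy as np

def mse(x_true, x_hat):
    """Mean squared error between the original and recovered signals."""
    x_true, x_hat = np.asarray(x_true), np.asarray(x_hat)
    return np.mean((x_true - x_hat) ** 2)

# Hypothetical compressed-sensing setup: a k-sparse signal observed
# through a Gaussian sensing matrix (sizes are illustrative only).
rng = np.random.default_rng(0)
n, m, k = 1024, 256, 16
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x            # measurements handed to the solver
# A solver such as MSCG or PCG would return x_hat from (A, y);
# the tables then report mse(x, x_hat) along with iterations and CPU time.
```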

Abubakar, A.B.; Kumam, P.; Awwal, A.M.; Thounthong, P. A Modified Self-Adaptive Conjugate Gradient Method for Solving Convex Constrained Monotone Nonlinear Equations for Signal Recovery Problems. Mathematics 2019, 7, 693. https://doi.org/10.3390/math7080693