We propose a new method, called the interior proximal cutting hyperplane method, for solving equilibrium problems on polyhedra, where the underlying bifunction is continuous and satisfies a pseudomonotonicity assumption. The method is based on a special interior proximal function which replaces the usual quadratic function. This leads to an interior proximal algorithm, which can be viewed as combining the cutting hyperplane method with the special interior proximal function. Finally, some preliminary computational results are given.
1 Introduction
Equilibrium problems appear frequently in many practical problems arising, for instance, in physics, engineering, game theory, transportation, economics and networks (see [1, 2]). They have become an attractive field for many researchers in both theory and applications (see [3-8]). These problems are models whose formulation includes optimization, variational inequality, (vector) optimization, fixed point, saddle point, Nash equilibrium and complementarity problems as particular cases (see [1, 5, 9]). In this article, we consider the equilibrium problem (shortly EP(f, C)): find x* ∈ C such that f(x*, y) ≥ 0 for all y ∈ C,
where C is a polyhedral set in ℝⁿ defined by
C = {x ∈ ℝⁿ | Ax ≤ b},   (1.1)
A is a p × n matrix, b ∈ ℝp, f : C × C → ℝ is a bifunction such that f(x, x) = 0 for every x ∈ C.
Throughout this article, we assume that:
(A.1) int C = {x | Ax < b} is nonempty.
(A.2) f(x, ·) is convex on C for all x ∈ C.
(A.3) f is continuous on C × C.
(A.4) The solution set S of EP (f, C) is nonempty.
The theory of equilibrium problems has been studied extensively and intensively, both regarding the existence of solutions and regarding generalizations in many abstract directions. However, solution methods for EP(f, C) are still limited and do not yet satisfy the needs of applications. To our knowledge, there are three popular approaches for solving EP(f, C). The first approach is based on the gap function (see [10]), the second uses the proximal point method [7], and the third is the auxiliary subproblem principle [8].
In [3, 4], Anh proposed interior proximal methods for solving monotone equilibrium problems when C is a polyhedral convex set. The methods are based on a special interior proximal function which replaces the usual quadratic function, and this approach has also been applied to variational inequalities by many authors (see [11, 12]). This leads to an interior proximal-type algorithm, which can be viewed as combining an Armijo-type line search technique with the special interior proximal function. The only assumption required is that f is monotone on C.
In this article, we propose an algorithm for solving EP(f, C) that makes no assumptions on the problem other than continuity and pseudomonotonicity of the bifunction f. Recently, Anh and Kuno [13] introduced a new method for solving multivalued variational inequalities on a closed convex set, where the underlying mapping is upper semicontinuous and generalized monotone. We extend the cutting hyperplane method to EP(f, C). First, we construct an appropriate hyperplane which separates the current iterate from the solution set. Next, we combine this technique with an Armijo-type line search to obtain a convergent algorithm for pseudomonotone equilibrium problems. The next iterate is then obtained as the projection of the current iterate onto the intersection of the feasible set with the halfspace containing the solution set.
The article is organized as follows. In Section 2, we give formal definitions of our target EP(f, C) and of the pseudomonotonicity of f. We then combine an idea often used for multivalued variational inequalities with the interior proximal technique to develop an iterative algorithm for EP(f, C). Section 3 is devoted to the proof of its global convergence to a solution of EP(f, C). In the last section, we apply the algorithm to the Nash-Cournot oligopolistic market equilibrium model. Numerical results are reported to verify our development.
2 The interior proximal cutting hyperplane algorithm
We list some well-known definitions and the projection under the Euclidean norm which will be required in the following analysis.
Definition 2.1 Let C be a closed convex subset of ℝⁿ. We denote the projection on C by Pr_C(·), i.e.,
Pr_C(x) = arg min{∥y − x∥ | y ∈ C}.
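Although the paper works with general polyhedra, the Euclidean projection admits a closed form for simple sets such as halfspaces. The following Python sketch (our illustration, not part of the paper; the name `project_halfspace` is ours) computes it and checks the characteristic property ⟨x − Pr_C(x), y − Pr_C(x)⟩ ≤ 0 for feasible y.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Euclidean projection of x onto the halfspace {y : a.y <= b}."""
    viol = a @ x - b
    if viol <= 0:                      # x is already feasible
        return x.copy()
    return x - (viol / (a @ a)) * a    # move back along the normal a

x = np.array([3.0, 1.0])
a, b = np.array([1.0, 1.0]), 2.0
p = project_halfspace(x, a, b)         # p = (2, 0)

# Characteristic property of the projection: for every feasible y,
# <x - p, y - p> <= 0 (the residual x - p is normal to C at p).
y = np.array([0.0, 0.0])
assert (x - p) @ (y - p) <= 1e-12
```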
Then the bifunction f: C × C → ℝ ∪ {+∞} is said to be
(i) monotone on C if for each x, y ∈ C,
f(x, y) + f(y, x) ≤ 0;
(ii) pseudomonotone on C if for each x, y ∈ C,
f(x, y) ≥ 0 implies f(y, x) ≤ 0.
It is easy to observe that (i) implies (ii), but the converse is not true; some examples are given in [14].
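To make the gap between (i) and (ii) concrete, a small numerical check can be run (our own illustration, not one of the examples of [14]): with F(x) = x/(1 + x²) on ℝ, the bifunction f(x, y) = F(x)(y − x) is pseudomonotone, since F is a positive rescaling of the identity, but it is not monotone because F decreases for |x| > 1.

```python
import numpy as np

# f(x, y) = F(x) * (y - x) with F(x) = x / (1 + x^2):
# pseudomonotone on R, but not monotone (F'(x) < 0 for |x| > 1).
def f(x, y):
    return x / (1.0 + x * x) * (y - x)

grid = np.linspace(-5.0, 5.0, 201)

# Monotonicity f(x,y) + f(y,x) <= 0 fails for some pairs:
viol = [(x, y) for x in grid for y in grid if f(x, y) + f(y, x) > 1e-9]

# Pseudomonotonicity f(x,y) >= 0 => f(y,x) <= 0 holds on the whole grid:
pm_ok = all(f(y, x) <= 1e-9 for x in grid for y in grid if f(x, y) >= 0)
```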
Classical variational inequality problems (shortly VIP) are to find a vector x* ∈ C such that
⟨F(x*), y − x*⟩ ≥ 0 for all y ∈ C,
where C ⊆ ℝⁿ is a nonempty closed convex subset of ℝⁿ and F is a continuous mapping from C into ℝⁿ. Then they can alternatively be formulated as finding a zero point of the operator T(x) = F(x) + N_C(x), where
N_C(x) = {w ∈ ℝⁿ | ⟨w, y − x⟩ ≤ 0 for all y ∈ C}.
A well-known method to solve this problem is the proximal point algorithm [2] which, starting from any point x⁰ ∈ C and λ_k ≥ λ > 0, iteratively updates x^{k+1} by solving the following problem:
0 ∈ λ_k(F(x^{k+1}) + N_C(x^{k+1})) + ∇₁h(x^{k+1}, x^k),   (2.1)
where h is the usual quadratic function h(x, y) = (1/2)∥x − y∥² and ∇₁h(·, y) denotes its gradient with respect to the first argument.
Motivation for studying the algorithm of problem (2.1) could be found in [11, 15, 16].
Auslender et al. [12] have proposed an interior proximal-type method for solving (VIP) on ℝⁿ₊ through replacing the function h(x, x^k) by d_φ(x, x^k), which is defined as
d_φ(x, y) = ∑_{j=1}^{n} y_j² φ(x_j/y_j),
where
φ(t) = (ν/2)(t − 1)² + μ(t − log t − 1),   (2.2)
with ν > μ > 0. The fundamental difference here is that the term d_φ is used to force the iterates {x^{k+1}} to stay in the interior of ℝⁿ₊. This technique has been extended by many authors to variational inequalities and equilibrium problems (see [1, 3]).
Applying this idea to the equilibrium problem EP(f, C), we consider another interior proximal function defined by
with μ ∈ (0, 1), where a_i (i = 1, ..., p) are the rows of the matrix A, and
We denote by ∇1D(x, y) the gradient of D(·, y) at x for every y ∈ C. It is easy to see that
where
Then we consider the following regularized auxiliary problem (shortly RAP): find x̄ ∈ C such that
x̄ ∈ arg min{c f(x̄, y) + D(y, x̄) | y ∈ C},
where c > 0 is a regularization parameter.
The equivalence between EP(f, C) and (RAP) is due to the following lemma (see [1]).
Lemma 2.2 Let f : C × C → ℝ ∪ {+∞} be a bifunction and x* ∈ C. Then x* is a solution to EP(f, C) if and only if x* is a solution to (RAP).
Lemma 2.2 shows that a solution of the equilibrium problem EP(f, C) can be approximated by the iterative procedure x^{k+1} = h(x^k), k = 0, 1, ..., where c > 0, x⁰ is any starting point in C, and h(x^k) is the unique solution of the strongly convex program:
h(x^k) = arg min{c f(x^k, y) + D(y, x^k) | y ∈ C}.
However, generally, the sequence {xk} does not converge to a solution of the equilibrium problems (see [2]).
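This possible failure can already be seen for variational inequalities. The sketch below (our illustration with the skew-symmetric rotation operator, not the counterexample of [2]) shows the plain one-step scheme drifting away from the unique solution x* = 0.

```python
import numpy as np

# Rotation operator F(x) = (-x2, x1): monotone (skew-symmetric) but not
# strongly monotone.  With C = R^2 the projection is the identity, and the
# one-step scheme x^{k+1} = x^k - c F(x^k) satisfies
# ||x^{k+1}||^2 = (1 + c^2) ||x^k||^2, so it moves AWAY from x* = 0.
def F(x):
    return np.array([-x[1], x[0]])

c = 0.5
x = np.array([1.0, 0.0])
norms = [np.linalg.norm(x)]
for _ in range(10):
    x = x - c * F(x)
    norms.append(np.linalg.norm(x))
```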
Let f be a mapping defined by
where F : C → 2^{ℝⁿ} is a multivalued mapping such that F(x) ≠ ∅ for all x ∈ C. Then EP(f, C) can be formulated as the multivalued variational inequality problem (shortly MVIP):
Find x* ∈ C, w* ∈ F(x*) such that
⟨w*, y − x*⟩ ≥ 0 for all y ∈ C.
In this case, it is known that solutions coincide with zeros of the following projected residual function:
T(x) = x − Pr_C(x − w), w ∈ F(x).
In other words, with x⁰ ∈ C and w⁰ ∈ F(x⁰), the point (x⁰, w⁰) is a solution of (MVIP) if and only if T(x⁰) = 0, where T(x⁰) = x⁰ − Pr_C(x⁰ − w⁰) (see [16]). Applying this idea and the interior proximal function technique D(·,·) to the equilibrium problem EP(f, C), we obtain the following solution scheme. Let x^k be the current approximation to the solution of EP(f, C). First, we compute y^k = arg min{f(x^k, y) + βD(y, x^k) | y ∈ C} for some positive constant β. Next, we search the line segment between x^k and y^k, with r(x^k) = x^k − y^k, for a point z^k such that the hyperplane H_k strictly separates x^k from the solution set S of EP(f, C). To find such a z^k, we may use a computationally inexpensive Armijo-type procedure. Then we compute the next iterate x^{k+1} by projecting x^k onto the intersection of the feasible set C with the halfspace containing S.
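The role of the residual T as a solution certificate can be seen in a minimal single-valued example (ours, not the paper's): for F(x) = x − q on the box C = [0, 1]², the VI solution is the projection of q onto C, and T vanishes exactly there.

```python
import numpy as np

def proj_box(v, lo=0.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^n (componentwise clip)."""
    return np.clip(v, lo, hi)

def T(x, F):
    """Projected residual function T(x) = x - Pr_C(x - F(x))."""
    return x - proj_box(x - F(x))

q = np.array([2.0, 0.5])
F = lambda v: v - q
x_star = proj_box(q)                           # VI solution: (1.0, 0.5)

assert np.allclose(T(x_star, F), 0.0)          # residual vanishes at x*
assert np.linalg.norm(T(np.zeros(2), F)) > 0   # nonzero away from x*
```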
Then, the algorithm is described as follows.
Algorithm 2.3
Step 0. Choose , and γ ∈ (0, 1).
Step 1. Compute
y^k = arg min{f(x^k, y) + βD(y, x^k) | y ∈ C}.   (2.3)
Find the smallest nonnegative integer m_k such that
(2.4)
Step 2. (Cutting hyperplane) Choose, where.
Set
Find.
Step 3. Set k: = k + 1, and go to Step 1.
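Since several display formulas of Algorithm 2.3 are given in the equations above, the following Python sketch shows the overall flow for the special case f(x, y) = ⟨F(x), y − x⟩ on a box. It is only a sketch under assumptions of ours: the usual quadratic proximal term stands in for the interior function D, the halfspace projection is the standard implementable surrogate for the exact projection onto C ⋂ H_k, and all names and parameter values are illustrative, not the authors' exact scheme.

```python
import numpy as np

def proj_box(v, lo=0.0, hi=1.0):
    return np.clip(v, lo, hi)

def solve_vi(F, x, beta=1.0, sigma=0.4, gamma=0.5, tol=1e-10, max_iter=100):
    """Cutting-hyperplane sketch for the VI <F(x*), y - x*> >= 0 on a box."""
    for _ in range(max_iter):
        # Proximal step (quadratic prox in place of the interior function D):
        y = proj_box(x - F(x) / beta)
        r = x - y                          # residual; r = 0 means x solves VI
        if np.linalg.norm(r) < tol:
            break
        # Armijo line search for z = (1 - gamma^m) x + gamma^m y:
        t = 1.0
        while F(x + t * (y - x)) @ r < sigma * beta * (r @ r):
            t *= gamma
        z = x + t * (y - x)
        # Project x onto the separating halfspace {u : <F(z), u - z> <= 0},
        # then back onto the box (surrogate for projecting onto C ∩ H_k):
        Fz = F(z)
        x = proj_box(x - ((Fz @ (x - z)) / (Fz @ Fz)) * Fz)
    return x

q = np.array([0.5, 0.5])                   # interior solution of the test VI
x_star = solve_vi(lambda v: v - q, np.zeros(2))
```

On this strongly monotone test problem the iterates halve their distance to q at every step, so the loop terminates well within the iteration budget.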
3 Convergence of the algorithm
In the next lemma, we justify the stopping criterion.
Lemma 3.1 If r(x^k) = 0, then x^k is a solution to the equilibrium problem EP(f, C).
Proof. Since y^k is the solution to problem (2.3), by an optimization result in convex programming (see [1]) we have
where N_C denotes the normal cone. From y^k ∈ int C, it follows that N_C(y^k) = {0}. Hence
where ξ^k ∈ ∂₂f(x^k, y^k). Replacing y^k by x^k in this equality, we get
Since
(3.1)
we have
Thus ξ^k = 0. Combining this with f(x^k, x^k) = 0, we obtain
which means that x^k is a solution to EP(f, C).
In Algorithm 2.3, we need to show the existence of the nonnegative integer m_k.
Lemma 3.2 If r(x^k) ≠ 0, then there exists a smallest nonnegative integer m_k such that inequality (2.4) holds.
Proof. Assume, on the contrary, that inequality (2.4) is not satisfied for any nonnegative integer i, i.e.,
Letting i → ∞, from the continuity of f we have
(3.2)
Otherwise, for each t > 0 we have . We obtain after multiplication by for each i = 1, ..., p,
Then,
(3.3)
Since y^k is the solution to the strongly convex program (2.3), we have
Substituting y = x^k ∈ C and using the assumptions f(x^k, x^k) = 0 and D(x^k, x^k) = 0, we get
(3.4)
Combining (3.3) with (3.4), we obtain
(3.5)
Then, inequalities (3.2) and (3.5) imply that
Hence it must be that either r(x^k) = 0 or . The first case contradicts r(x^k) ≠ 0, while the second one contradicts the fact .
The following results establish some properties of the cutting hyperplane H_k.
Lemma 3.3Let {xk} be the sequence generated by Algorithm 2.3. Then the following hold:
(i)
x^k ∉ H_k, S ⊆ C ⋂ H_k.
(ii)
, where .
Proof. (i) Since , and , we have
.
Combining this with (2.4), we obtain that
(3.6)
Hence
This implies x^k ∉ H_k.
Since f is assumed to be pseudomonotone on C, z^k ∈ C and x* ∈ S,
Combining this with , we get
Thus, x* ∈ Hk.
(ii)
We know that
Hence,
Otherwise, for every y ∈ C ⋂ H_k there exists λ ∈ (0, 1) such that
where , because x^k ∈ C but x^k ∉ H_k.
(3.7)
because . Also we have
Since , using the Pythagorean theorem we can deduce that
(3.8)
From (3.7) and (3.8), we have
which implies
In order to prove the convergence of Algorithm 2.3, we give the following key property of the sequence {xk} generated by the algorithm.
Lemma 3.4 The sequence {x^k} generated by Algorithm 2.3 satisfies the following inequality:
(3.9)
Proof. Since , we have
Substituting z = x* ∈ C ⋂ H_k, we have
which implies
Hence,
(3.10)
Since and
we have
(3.11)
From (3.6), it follows that
Thus, (3.11) reduces to
(3.12)
Combining (3.10) and (3.12), we obtain inequality (3.9).
Theorem 3.5 Suppose that Assumptions A.1-A.4 hold, the mapping ∂₂f(·, z^k) is uniformly bounded by M > 0, and f is pseudomonotone on C. Then the sequence {x^k} generated by Algorithm 2.3 converges to a solution of EP(f, C).
Proof. Inequality (3.9) implies that the sequence {∥x^k − x*∥} is nonincreasing and hence convergent. Consequently, the sequence {x^k} is bounded.
Since the mapping ∂₂f(·, z^k) is uniformly bounded by M > 0, i.e.,
This, together with (3.9), implies
(3.13)
Since {∥x^k − x*∥} converges, it is easy to see that
The cases remaining to consider are the following.
Case 1. . In this case it follows that . Since {x^k} is bounded, there exists an accumulation point x̄ of {x^k}; in other words, a subsequence converges to x̄ as i → ∞. Then we see from Lemma 3.3 that , and besides we can take x* = x̄, in particular, in (3.13). Thus {∥x^k − x̄∥} is a convergent sequence. Since x̄ is an accumulation point of {x^k}, the sequence {∥x^k − x̄∥} converges to zero, i.e., {x^k} converges to x̄.
Case 2. . Since m_k is the smallest nonnegative integer, m_k − 1 does not satisfy (2.4). Hence, we have
and besides
(3.14)
Passing to the limit in (3.14) as i → ∞ and using the continuity of f, we have
(3.15)
From (3.5) we have
Since f is continuous, passing to the limit as i → ∞, we obtain
Combining this with (3.15), we have
which implies or . The second case contradicts the fact , and hence . Letting and repeating the previous arguments, we conclude that the whole sequence {x^k} converges to x̄.
4 Numerical results
We applied the algorithm to solve a problem of production competition under the Nash-Cournot oligopolistic market equilibrium model (see [1, 2, 17]). In this model, it is assumed that there are n firms producing a common homogeneous commodity and that the price p_i of firm i depends on the total quantity of the commodity.
Let h_i(x_i) denote the cost of firm i when its production level is x_i. Suppose that the profit of firm i is given by
f_i(x) = x_i p(x_1 + ... + x_n) − h_i(x_i),   (4.1)
where h_i is the cost function of firm i, assumed to depend only on its own production level.
Let C ⊆ ℝⁿ be a closed convex set denoting the strategy set of the firms. Each firm seeks to maximize its own profit by choosing its production level under the presumption that the production levels of the other firms are parametric inputs. In this context, a Nash equilibrium is a production pattern in which no firm can increase its profit by changing its controlled variable. Thus, under this equilibrium concept, each firm determines its best response given the other firms' actions. Mathematically, a point x* ∈ C is said to be a Nash equilibrium point if
f_i(x*[y_i]) ≤ f_i(x*) for all i = 1, ..., n and all y_i with x*[y_i] ∈ C,   (4.2)
where x*[y_i] denotes the vector obtained from x* by replacing its i-th component with y_i.
When h_i is affine, this market problem can be formulated as a special Nash equilibrium problem in n-person noncooperative game theory.
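For the affine case just mentioned, the Cournot equilibrium admits a closed form, which is convenient for checking implementations. The sketch below (with illustrative data of our own, not the seven-firm data of [17]) computes it and verifies the no-profitable-deviation condition numerically.

```python
import numpy as np

# Affine inverse demand p(s) = alpha - beta*s and linear costs h_i(x) = c_i*x
# (hypothetical data).  The first-order condition for firm i,
# alpha - beta*(s + x_i) - c_i = 0 with s = sum_j x_j, yields the
# closed-form Cournot equilibrium below.
alpha, beta = 10.0, 1.0
c = np.array([1.0, 2.0, 3.0])                  # hypothetical unit costs
n = len(c)

s = (n * alpha - c.sum()) / (beta * (n + 1))   # equilibrium total output: 6.0
x = (alpha - c) / beta - s                     # equilibrium outputs: (3, 2, 1)

def profit(i, xi):
    """Profit of firm i producing xi while the others stick to x."""
    total = x.sum() - x[i] + xi
    return xi * (alpha - beta * total) - c[i] * xi

# Nash condition: no firm gains by a unilateral deviation.
for i in range(n):
    best_dev = max(profit(i, t) for t in np.linspace(0.0, 5.0, 501))
    assert best_dev <= profit(i, x[i]) + 1e-8
```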
Set
(4.3)
and
(4.4)
Then it has been proved in [17] that the problem of finding an equilibrium point of this model can be formulated as EP(f, C):
Proposition 4.1 [2] A point x* is an equilibrium point for the oligopolistic market problem if and only if it is a solution to EP(f, C), where
The following proposition gives some properties of the bifunction f.
Proposition 4.2 [2] Let p : C → ℝ₊ be convex, twice continuously differentiable, and nonincreasing, and let the function μ_τ : ℝ₊ → ℝ₊ defined by μ_τ(σ_x) = σ_x p(σ_x + τ) be concave for every τ ≥ 0. Also, let the functions h_i : ℝ₊ → ℝ, i = 1, ..., n, be convex and twice continuously differentiable. Then, the cost bifunction
is monotone on C.
We now apply the algorithm to the example with seven firms (n = 7) provided in [9, 17], where the cost and inverse demand functions have the form
Then Propositions 4.1 and 4.2 show that the bifunction f defined by (4.4) is monotone on C, and therefore the assumptions of our algorithm are satisfied.
In this example, we choose
Note that in this case, at iteration k, we have
where . Lemma 3.1 shows that if r(x^k) = 0, then x^k is a solution to EP(f, C), so we say that x^k is an ϵ-solution to EP(f, C) if ∥r(x^k)∥ ≤ ϵ. The tolerance is taken as ϵ = 10⁻⁶; the results are reported in Table 1.
Table 1 Approximate solutions x^k = (x_1^k, ..., x_7^k)

Iter (k)   x_1      x_2      x_3      x_4      x_5      x_6      x_7
0          3        3        3        3        3        3        3
1          2.0919   1.0015   1.0036   1.4664   1.0493   5.0019   1.3946
2          2.0943   1.0020   0.9991   1.4648   1.0509   5.0019   1.3979
3          2.0933   0.9993   0.9982   1.4608   1.0472   4.9992   1.3955
4          2.0938   1.0000   1.0011   1.4617   1.0483   5.0003   1.3964
5          2.0939   1.0002   1.0009   1.4618   1.0487   5.0007   1.3970
6          2.0939   0.9998   0.9995   1.4607   1.0480   4.9998   1.3965
7          2.0940   1.0000   1.0003   1.4610   1.0482   5.0001   1.3968
The approximate solution obtained after seven iterations is
x⁷ ≈ (2.0940, 1.0000, 1.0003, 1.4610, 1.0482, 5.0001, 1.3968).
Acknowledgements
This study was completed while the first author was staying at Kyungnam University under the NRF Postdoctoral Fellowship for Foreign Researchers. The second author was supported by the Kyungnam University Research Fund, 2011.
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
JKK conceived the study and participated in its design and coordination. JKK suggested many good ideas that are useful for achievement this paper and made the revision. PNA and JKK prepared the manuscript initially and performed all the steps of proof in this research. All authors read and approved the final manuscript.