Optimality conditions for interval-valued univex programming
- Open Access
- 01-12-2019
- Research
1 Introduction
Convexity and generalized convexity play an important role in mathematical programming. Invex functions, introduced by Hanson [17], form an important class of generalized convex functions and have been used successfully in optimization and equilibrium problems. For example, necessary and sufficient conditions were obtained for K-invex functions in [14]. The concept of G-invex functions was introduced by Antczak [3], and optimality and duality for differentiable G-multiobjective problems were considered in [4, 5]. Noor [26] studied equilibrium problems in the setting of invexity. As an extension and refinement of Noor [26], Farajzadeh [15] gave some results for invex Ky Fan inequalities in topological vector spaces.
Another important type of generalized convex functions, called univex functions and preunivex functions, is introduced in [8]. Suppose \(\emptyset \neq X\subseteq R^{n}\), \(\eta :X \times X\rightarrow R^{n}\), \(\varPhi :R\rightarrow R\), and \(b=b(x,y):X \times X \rightarrow R^{+}\). A differentiable function \(F: X\rightarrow R\) is said to be univex at \(y\in X\) with respect to η, Φ, b if, for all \(x\in X\),
$$\begin{aligned} b(x,y)\varPhi \bigl[F(x)-F(y)\bigr]\geq \eta ^{t}(x,y)\nabla F(y). \end{aligned}$$
(1)
Later, generalized optimality conditions for primal and dual problems were considered by Hanson and Mond [18]. Combining generalized type I and univex functions, optimality conditions and duality for several mathematical programming problems were studied by many researchers [1, 16, 29], and type I and univex functions have since attracted growing attention [24, 25, 34, 35].
The authors of [2, 6, 9, 12, 27, 30, 33, 36–39] have studied generalized convex interval-valued mappings and their connection with interval-valued optimization. For example, Steuer [33] proposed three algorithms, called the F-cone, E-cone, and emanating algorithms, for solving linear programming problems with interval-valued objective functions. To prove strong duality theorems, Wu [37] derived KKT optimality conditions for interval-valued problems under convexity hypotheses. Wu [36] also obtained KKT conditions for an optimization problem with an interval-valued objective function using H-derivatives and the concept of weakly differentiable functions. Since the H-derivative suffers from certain disadvantages, Chalco-Cano et al. [10] gave KKT-type optimality conditions obtained using the gH-derivatives of interval-valued functions. They also studied the relationship between their approach and other known approaches, such as that of Wu [36]. However, these methods cannot solve a class of optimization problems whose interval-valued objective functions are not LU-convex but univex. Antczak [6] used the classical exact \(l_{1}\) penalty function method for solving nondifferentiable interval-valued optimization problems under convexity hypotheses. Optimality conditions in invex optimization problems with an interval-valued objective function were discussed by Zhang et al. [39]. Using gH-differentiability, Li et al. [21] introduced interval-valued invex mappings and gave optimality conditions for interval-valued objective functions under invexity. Using the weak derivative of fuzzy functions, Li et al. [22] defined fuzzy weakly univex functions and considered optimality conditions for fuzzy minimization problems.
Following [21] and [22], in this paper we introduce the concept of interval-valued univex mappings, establish optimality conditions for interval-valued univex functions in the constrained interval-valued minimization problem, and give illustrative examples. The present paper can be seen as an extension and refinement of [20]. The method presented here differs from that in [6]: our method cannot solve Example 3.1 of [6] because the objective function there is not gH-differentiable. Example 4.1 shows that the methods given in [6, 33, 36, 37] cannot solve a class of optimization problems with interval-valued univex mappings. Example 4.2 shows that the method given by Li et al. [22] cannot solve a class of fuzzy optimization problems with interval-valued univex mappings. Finally, Example 4.3 shows that the method given in [10] cannot solve a class of optimization problems with interval-valued univex mappings. In Sect. 3, we introduce the concept of interval-valued univex mappings and discuss some of their properties. Section 4 deals with optimality conditions for the constrained interval-valued minimization problem under the assumption of interval-valued univexity.
2 Preliminaries
In this paper, a closed interval in R is denoted by \(A=[a^{L}, a ^{U}]\). Every \(a\in R\) is considered as a particular closed interval \(a=[a,a]\). The set of closed intervals is denoted by \(\mathcal{I}\).
Given \(A=[a^{L},a^{U}]\) and \(B=[b^{L},b^{U}] \in \mathcal{I}\), the arithmetic operations and order are defined in [32] as follows:
(1)
\(A+B=[a^{L}+b^{L}, a^{U}+b^{U}] \) and \(-A=\{-a:a\in A\}=[-a^{U},-a ^{L}]\);
(2)
\(A\ominus _{gH} B=[\min (a^{L}-b^{L},a^{U}-b^{U}),\max (a^{L}-b ^{L},a^{U}-b^{U})]\);
(3)
\(A\preceq B\Leftrightarrow a^{L}\leq b^{L}\) and \(a^{U}\leq b^{U}\); \(A\prec B \Leftrightarrow A\preceq B\) and \(A\neq B\).
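To illustrate, the operations above can be sketched in code (illustrative helper names, not part of the paper), with a closed interval \(A=[a^{L},a^{U}]\) represented as a pair:

```python
# Sketch of the interval arithmetic of Section 2 (illustrative names,
# not from the paper); an interval A = [a^L, a^U] is a pair (lo, hi).

def add(A, B):
    # A + B = [a^L + b^L, a^U + b^U]
    return (A[0] + B[0], A[1] + B[1])

def neg(A):
    # -A = {-a : a in A} = [-a^U, -a^L]
    return (-A[1], -A[0])

def gh_diff(A, B):
    # generalized Hukuhara difference A ⊖_gH B
    return (min(A[0] - B[0], A[1] - B[1]),
            max(A[0] - B[0], A[1] - B[1]))

def leq(A, B):
    # A ⪯ B  ⇔  a^L ≤ b^L and a^U ≤ b^U
    return A[0] <= B[0] and A[1] <= B[1]
```

Note that `gh_diff(A, A) == (0, 0)`, which is the main reason \(\ominus _{gH}\) is preferred over the difference built from `add` and `neg`.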
For \(X\subseteq R^{n}\), a mapping \(F:X\rightarrow \mathcal{I}\) is called an interval-valued function. Then \(F(x)=[F^{L}(x),F^{U}(x)]\), where \(F^{L}(x)\) and \(F^{U}(x)\) are two real-valued functions defined on \(R^{n}\) and satisfying \(F^{L}(x)\leq F^{U}(x)\) for every \(x\in X\). If \(F^{L}(x)\) and \(F^{U}(x)\) are continuous, then \(F(x)\) is said to be continuous.
It is well known that the derivative and subderivative of a function are important in the study of generalized convexity and mathematical programming. For example, a classical subdifferential was introduced by Azimov and Gasimov [7], and some theorems connecting operations on the weak subdifferential in nonsmooth and nonconvex analysis are provided in [13]. The derivative and subderivative of interval-valued functions extend those of real-valued functions. Owing to the different arithmetics of intervals, several definitions of derivatives of interval-valued functions have been introduced, such as weakly differentiable functions [36], H-differentiable functions (based on the Hukuhara difference of two closed intervals [36]), gH-differentiable functions (based on the operation \(\ominus _{gH}\) of two closed intervals [11, 31]), and subdifferentiable functions (based on the difference \(A- B=[a^{L}-b^{U}, a^{U}-b^{L}]\) of two closed intervals [6]). In this paper, we use weakly differentiable and gH-differentiable functions, which are defined as follows.
Let X be an open set in \(R^{n}\), and let \(F(x)=[F^{L}(x),F^{U}(x)]\). Then \(F(x)\) is called weakly differentiable at \(x_{0}\) if \(F^{L}(x)\) and \(F^{U}(x)\) are differentiable at \(x_{0}\).
Let \(x_{0} \in (a, b)\), and let h be such that \(x_{0} + h \in (a, b)\). Then
$$\begin{aligned} F^{\prime }(x_{0})= \lim_{h\rightarrow 0} \frac{1}{h}\bigl[F(x_{0}+h)\ominus _{gH}F(x _{0})\bigr]. \end{aligned}$$
(2)
If \(F^{\prime }(x_{0})\in \mathcal{I}\) exists, then F is gH-differentiable at \(x_{0}\).
If \(F^{L}(x)\) and \(F^{U}(x)\) are differentiable functions at \(x\in (a, b)\), then \(F(x)\) is gH-differentiable at x, and
$$\begin{aligned} F^{\prime }(x)= \bigl[\min \bigl\{ \bigl(F^{L}\bigr)^{\prime }(x), \bigl(F^{U}\bigr)^{\prime }(x)\bigr\} , \max \bigl\{ \bigl(F^{L}\bigr)^{\prime }(x),\bigl(F^{U} \bigr)^{\prime }(x)\bigr\} \bigr]. \end{aligned}$$
(3)
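Formula (3) can be checked numerically against the limit (2); the following sketch (not from the paper) uses \(F(x)=[1,2]x^{3}\) on \(x<0\), for which \(F^{L}(x)=2x^{3}\) and \(F^{U}(x)=x^{3}\):

```python
# Compare the endpoint formula (3) for the gH-derivative with a difference
# quotient of the gH-difference (illustrative sketch, not from the paper).

def gh_diff(A, B):
    # generalized Hukuhara difference of intervals represented as (lo, hi)
    return (min(A[0] - B[0], A[1] - B[1]),
            max(A[0] - B[0], A[1] - B[1]))

def F(x):
    # F(x) = [1, 2] x^3 for x < 0, so F^L(x) = 2x^3 and F^U(x) = x^3
    return (2 * x**3, x**3)

x, h = -1.0, 1e-7
# (1/h) [F(x+h) ⊖_gH F(x)] with h > 0
numeric = tuple(c / h for c in gh_diff(F(x + h), F(x)))
# formula (3): [min{(F^L)'(x), (F^U)'(x)}, max{...}] = [3x^2, 6x^2] at x = -1
exact = (3 * x**2, 6 * x**2)
```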
We say that an interval-valued function F is gH-differentiable at \(x=(x_{1},\ldots ,x_{n})\in X\) if all the partial gH-derivatives \(( \frac{\partial F}{\partial x_{1}})(x),\ldots , ( \frac{\partial F}{ \partial x_{n}})(x)\) exist on some neighborhood of x and are continuous at x. We write
$$\begin{aligned} \nabla F(x)=\biggl(\biggl( \frac{\partial F}{\partial x_{1}}\biggr) (x),\biggl( \frac{\partial F}{\partial x_{2}}\biggr) (x),\ldots ,\biggl( \frac{\partial F}{\partial x_{n}}\biggr) (x) \biggr)^{t}, \end{aligned}$$
and we call \(\nabla F(x)\) the gradient of a gH-differentiable interval-valued function F at x.
Let \(\mathbb{H}(R^{n})\) denote the family of nonempty compact subsets of \(R^{n}\). For \(A,B\in \mathbb{H}(R^{n})\), the Hausdorff metric \(h(A,B)\) on \(\mathbb{H}(R^{n})\) is defined by
$$\begin{aligned} h(A,B)=\inf \bigl\{ \varepsilon \mid A\subseteq N(B,\varepsilon ),B\subseteq N(A, \varepsilon )\bigr\} , \end{aligned}$$
where
$$\begin{aligned} N(A,\varepsilon )=\bigl\{ x\in R^{n}\mid d(x,A)< \varepsilon \bigr\} , \quad d(x,A)= \inf_{a\in A} \Vert x-a \Vert . \end{aligned}$$
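For two closed intervals, this infimum has the closed form \(h([a^{L},a^{U}],[b^{L},b^{U}])=\max (|a^{L}-b^{L}|,|a^{U}-b^{U}|)\), a standard fact not stated in the paper; the sketch below checks it against a discretized version of the sup-inf definition:

```python
# Hausdorff metric sketch (illustrative, not from the paper).  For closed
# intervals it reduces to max(|a^L - b^L|, |a^U - b^U|); the sampled
# version below follows the sup-inf definition directly.

def hausdorff_interval(A, B):
    # closed-form Hausdorff distance between two closed intervals
    return max(abs(A[0] - B[0]), abs(A[1] - B[1]))

def hausdorff_sampled(A, B, n=1000):
    # approximate h(A, B) by sampling each interval densely
    sample = lambda I: [I[0] + k * (I[1] - I[0]) / n for k in range(n + 1)]
    SA, SB = sample(A), sample(B)
    d = lambda x, S: min(abs(x - s) for s in S)
    return max(max(d(a, SB) for a in SA), max(d(b, SA) for b in SB))
```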
The following basic result from mathematical analysis (see Lemma 3.1 of [19]) is well known:
Suppose that \(\varPhi :R^{n}\rightarrow R^{n}\) is continuous. Then the mapping
$$\begin{aligned} \varPsi : \mathbb{H}\bigl(R^{n}\bigr)\rightarrow \mathbb{H} \bigl(R^{n}\bigr), \qquad \varPsi (A)=\bigl\{ \varPhi (a)\mid a\in A\bigr\} \end{aligned}$$
is uniformly continuous in the h-metric.
We say that \(\varPsi :\mathcal{I}\rightarrow \mathcal{I}\) is increasing if \(A\preceq B\) implies \(\varPsi (A)\preceq \varPsi (B)\). From the above result we obtain the following:
If function \(\varPhi : R\rightarrow R\) is increasing, then \(\varPsi : \mathcal{I}\rightarrow \mathcal{I}\) is increasing. Moreover, \(\varPsi ([a^{L}, a^{U}])=[\varPhi (a^{L}),\varPhi (a^{U})]\).
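As a small illustration (hypothetical helper names, not from the paper), the induced interval map acts endpoint-wise when Φ is increasing:

```python
# Induced interval map: for increasing Φ, Ψ([a^L, a^U]) = [Φ(a^L), Φ(a^U)]
# (illustrative sketch, not from the paper).

def induced(Phi):
    # assumes Phi is increasing, so endpoints map to endpoints
    return lambda A: (Phi(A[0]), Phi(A[1]))

def leq(A, B):
    # A ⪯ B  ⇔  a^L ≤ b^L and a^U ≤ b^U
    return A[0] <= B[0] and A[1] <= B[1]

Psi = induced(lambda a: 2 * a)  # Φ(a) = 2a is increasing
```

Here Ψ is itself increasing: `leq((0, 1), (1, 2))` implies `leq(Psi((0, 1)), Psi((1, 2)))`.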
3 Interval-valued univex functions
In this section, we define interval-valued univex functions as a generalization of the univex functions of [8] and discuss some of their properties.
Let X be an invex set in \(R^{n}\) (the concept of an invex set can be found in [8]), and let F be an interval-valued function. The following definition is a particular case of fuzzy weakly univex functions, which has been introduced in [22].
Suppose F is a weakly differentiable interval-valued function. Then F is weakly univex at \(y\in X\) with respect to η, Φ, b if and only if both \(F^{L}(x)\) and \(F^{U}(x)\) are univex at \(y\in X\), that is, for all \(x\in X\),
$$\begin{aligned}& b(x,y)\varPhi \bigl[F^{L}(x)-F^{L}(y)\bigr]\geq \eta ^{t}(x,y)\nabla F^{L}(y), \end{aligned}$$
(4)
$$\begin{aligned}& b(x,y)\varPhi \bigl[F^{U}(x)-F^{U}(y)\bigr]\geq \eta ^{t}(x,y)\nabla F^{U}(y), \end{aligned}$$
(5)
where \(\eta =\eta (x,y):X \times X\rightarrow R^{n}\), \(\varPhi :R\rightarrow R\), and \(b = b(x,y):X\times X \rightarrow R^{+}\).
Remark 3.1
The concept of LU-invexity for interval-valued functions was introduced in [39]; since it is defined via the endpoint functions, in this paper we call such functions weakly invex. Every interval-valued weakly invex function is interval-valued weakly univex with respect to η, b, Φ with
$$\begin{aligned}& \varPhi (x) = x,\quad b=1, \end{aligned}$$
but the converse is not true.
Example 3.1
Consider the function \(F: (-\infty ,0)\rightarrow \mathcal{I}\) defined by
$$\begin{aligned}& F(x)=[1,2]x^{3}, \\& \eta (x,y)=\textstyle\begin{cases} x^{2}+xy+y^{2}, & x>y, \\ x-y, & x\leq y, \end{cases}\displaystyle \\& b(x,y)=\textstyle\begin{cases} \frac{y^{2}}{x-y}, & x>y, \\ 0, & x\leq y. \end{cases}\displaystyle \end{aligned}$$
Let \(\varPhi :R\rightarrow R\) be defined by \(\varPhi (V)=3V\). Since \(F^{L}(x)=2x ^{3}\) and \(F^{U}(x)=x^{3}\) for \(x<0\), we have \(\nabla F^{L}(x)=6x^{2}\) and \(\nabla F^{U}(x)=3 x^{2}\). Then F is interval-valued weakly univex but not interval-valued weakly invex, since for \(x=-2\) and \(y=-1\) we have \(F^{U}(x)-F^{U}(y)< \eta ^{t}(x,y)\nabla F^{U}(y)\).
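A numerical spot-check of this example (a sketch, not part of the paper) confirms inequalities (4) and (5) at sample points and the failure of invexity at \(x=-2\), \(y=-1\):

```python
# Spot-check of Example 3.1 (illustrative sketch, not from the paper).

def eta(x, y):
    return x * x + x * y + y * y if x > y else x - y

def b(x, y):
    return y * y / (x - y) if x > y else 0.0

Phi = lambda v: 3 * v
FL = lambda x: 2 * x**3   # lower endpoint of F(x) = [1, 2]x^3 for x < 0
FU = lambda x: x**3       # upper endpoint
dFL = lambda x: 6 * x**2
dFU = lambda x: 3 * x**2

pts = [(-2.0, -1.0), (-1.0, -2.0), (-0.5, -3.0), (-3.0, -0.5)]
univex = all(
    b(x, y) * Phi(FL(x) - FL(y)) >= eta(x, y) * dFL(y) - 1e-9 and
    b(x, y) * Phi(FU(x) - FU(y)) >= eta(x, y) * dFU(y) - 1e-9
    for x, y in pts
)
# invexity of F^U fails at x = -2, y = -1:
invex_fails = FU(-2.0) - FU(-1.0) < eta(-2.0, -1.0) * dFU(-1.0)
```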
Let X be a nonempty open set in \(R^{n}\), \(\eta :X \times X\rightarrow R^{n}\), \(\varPsi :\mathcal{I}\rightarrow \mathcal{I}\), and \(b = b(x,y): X \times X \rightarrow R^{+}\).
Definition 3.1
Suppose F is a gH-differentiable interval-valued function. Then F is univex at \(y\in X\) with respect to η, Ψ, b if for all \(x\in X\),
$$\begin{aligned}& b(x,y)\varPsi \bigl[F(x)\ominus _{gH}F(y)\bigr]\succeq \eta ^{t}(x,y)\nabla F(y). \end{aligned}$$
(6)
The following example shows that an interval-valued univex function may not be an interval-valued weakly univex function.
Example 3.2
Suppose \(F(x)=[-|x|,|x|]\), \(x\in R\), \(b=1\), and \(\varPhi (a)=a\). Then \(\varPsi [a,b]=[a,b]\) is induced by \(\varPhi (a)=a\), and
$$\begin{aligned} \eta (x,y)=\textstyle\begin{cases} x-y, &x y\geq 0, \\ x+y, &x y< 0. \end{cases}\displaystyle \end{aligned}$$
Then \(F(x)\) is gH-differentiable on R with \(F^{\prime }(y)=[-1, 1]\). We can prove that
$$\begin{aligned} b\varPsi \bigl[F(x)\ominus _{gH} F(y)\bigr]\succeq \eta ^{t}(x,y)\nabla F(y). \end{aligned}$$
Therefore \(F(x)\) is univex with respect to η, b, Ψ, but \(F(x)\) is not weakly univex since \(F^{L}(x)\) is not univex with respect to η, b, Φ.
Theorem 3.1
Suppose \(F(x)\) is gH-differentiable. If \(F(x)\) is an interval-valued weakly univex function with respect to η, b, Φ and Φ is increasing, then \(F(x)\) is an interval-valued univex function with respect to the same η, b, and Ψ, where Ψ is an extension of Φ.
Proof
Since \(F(x)\) is weakly univex at y, the real-valued functions \(F^{L}\) and \(F^{U}\) are univex at y, that is, for all \(x\in X\),
$$\begin{aligned}& b(x,y)\varPhi \bigl[F^{L}(x)-F^{L}(y)\bigr]\geq \eta ^{t}(x,y)\nabla F^{L}(y)\quad \text{and} \\& b(x,y)\varPhi \bigl[F^{U}(x)-F^{U}(y)\bigr]\geq \eta ^{t}(x,y)\nabla F^{U}(y). \end{aligned}$$
(i) Under the condition \(\eta ^{t}(x,y)\nabla F^{L}(y) \leq \eta ^{t}(x,y) \nabla F^{U}(y)\), we have
$$\begin{aligned}& \eta ^{t}(x,y)\nabla F(y)=\bigl[\eta ^{t}(x,y)\nabla F^{L}(y), \eta ^{t}(x,y) \nabla F^{U}(y)\bigr]. \end{aligned}$$
If \(F(x)\ominus _{gH} F(y)=[F^{L}(x)-F^{L}(y), F^{U}(x)-F^{U}(y)]\), then, since Φ is increasing, we have
$$\begin{aligned}& b(x,y)\varPsi \bigl[F(x)\ominus _{gH}F(y)\bigr] \\& \quad =\bigl[b(x,y)\varPhi \bigl(F^{L}(x)-F^{L}(y)\bigr), b(x,y)\varPhi \bigl(F^{U}(x)-F^{U}(y)\bigr)\bigr] \\& \quad \succeq \bigl[\eta ^{t}(x,y)\nabla F^{L}(y), \eta ^{t}(x,y)\nabla F^{U}(y)\bigr] \\& \quad = \eta ^{t}(x,y)\nabla F(y). \end{aligned}$$
If \(F(x)\ominus _{gH} F(y)=[F^{U}(x)-F^{U}(y), F^{L}(x)-F^{L}(y)]\), then, since Φ is increasing,
$$\begin{aligned}& b(x,y)\varPhi \bigl(F^{L}(x)-F^{L}(y)\bigr) \\& \quad \geq b(x,y)\varPhi \bigl(F^{U}(x)-F^{U}(y)\bigr) \\& \quad \geq \eta ^{t}(x,y)\nabla F^{U}(y) \\& \quad \geq \eta ^{t}(x,y)\nabla F^{L}(y), \end{aligned}$$
and hence
$$\begin{aligned}& b(x,y)\varPsi \bigl[F(x)\ominus _{gH}F(y)\bigr] \\& \quad =b(x,y)\varPsi \bigl[F^{U}(x)-F^{U}(y), F^{L}(x)-F^{L}(y)\bigr] \\& \quad =\bigl[b(x,y)\varPhi \bigl(F^{U}(x)-F^{U}(y)\bigr), b(x,y)\varPhi \bigl(F^{L}(x)-F^{L}(y)\bigr)\bigr] \\& \quad \succeq \bigl[\eta ^{t}(x,y)\nabla F^{L}(y), \eta ^{t}(x,y)\nabla F^{U}(y)\bigr] \\& \quad = \eta ^{t}(x,y)\nabla F(y). \end{aligned}$$
(ii) Under the condition \(\eta ^{t}(x,y)\nabla F^{L}(y) > \eta ^{t}(x,y) \nabla F^{U}(y)\), we have
$$\begin{aligned} \eta ^{t}(x,y)\nabla F(y)=\bigl[\eta ^{t}(x,y)\nabla F^{U}(y),\eta ^{t}(x,y) \nabla F^{L}(y)\bigr]. \end{aligned}$$
If \(F(x)\ominus _{gH} F(y)=[F^{U}(x)-F^{U}(y), F^{L}(x)-F^{L}(y)]\), then, since Φ is increasing, we have
$$\begin{aligned}& b(x,y)\varPsi \bigl[F(x)\ominus _{gH}F(y)\bigr] \\& \quad =\bigl[b(x,y)\varPhi \bigl(F^{U}(x)-F^{U}(y)\bigr), b(x,y)\varPhi \bigl(F^{L}(x)-F^{L}(y)\bigr)\bigr] \\& \quad \succeq \bigl[\eta ^{t}(x,y)\nabla F^{U}(y), \eta ^{t}(x,y)\nabla F^{L}(y)\bigr] \\& \quad = \eta ^{t}(x,y)\nabla F(y). \end{aligned}$$
If \(F(x)\ominus _{gH} F(y)=[F^{L}(x)-F^{L}(y), F^{U}(x)-F^{U}(y)]\), then, since Φ is increasing,
$$\begin{aligned}& b(x,y)\varPhi \bigl(F^{U}(x)-F^{U}(y)\bigr) \\& \quad \geq b(x,y)\varPhi \bigl(F^{L}(x)-F^{L}(y)\bigr) \\& \quad \geq \eta ^{t}(x,y)\nabla F^{L}(y) \\& \quad \geq \eta ^{t}(x,y)\nabla F^{U}(y), \end{aligned}$$
and hence
$$\begin{aligned}& b(x,y)\varPsi \bigl[F(x)\ominus _{gH}F(y)\bigr] \\& \quad =b(x,y)\varPsi \bigl[F^{L}(x)-F^{L}(y), F^{U}(x)-F^{U}(y)\bigr] \\& \quad =\bigl[b(x,y)\varPhi \bigl(F^{L}(x)-F^{L}(y) \bigr),b(x,y)\varPhi \bigl(F^{U}(x)-F^{U}(y)\bigr)\bigr] \\& \quad \succeq \bigl[ \eta ^{t}(x,y)\nabla F^{U}(y), \eta ^{t}(x,y)\nabla F^{L}(y)\bigr] \\& \quad = \eta ^{t}(x,y)\nabla F(y). \end{aligned}$$
 □
Remark 3.2
The assumption in Theorem 3.1 that Φ is increasing cannot be dropped, as the following example shows.
Example 3.3
Suppose \(F(x)=[-2,1]x^{2}\), \(x<0\). Then \(F(x)\) is gH-differentiable and weakly differentiable. It is easy to check that \(F(x)\) is weakly univex with respect to \(\eta (x,y)=x-y\),
$$b(x,y)=\textstyle\begin{cases} 1, & x\leq y< 0, \\ \frac{-2y(x-y)}{-x^{2}+y^{2}}, & y< x< 0, \end{cases} $$
and \(\varPhi (a)=|a|\). However, \(F(x)\) is not univex with respect to the same \(\eta (x,y)\), b, and Ψ, where Ψ is defined by the extension of \(\varPhi (a)=|a|\).
4 Optimality criteria for interval-valued univex mappings
In this section, for gH-differentiable interval-valued univex functions, we establish sufficient optimality conditions for a feasible solution \(x^{\ast }\) to be an optimal solution or a nondominated solution for \((P)\).
Suppose \(F(x)\), \(g_{1}(x),\ldots , g_{m}(x)\) are gH-differentiable interval-valued mappings defined on a nonempty open set \(X\subseteq R ^{n}\). Then, we consider the primal problem:
$$\begin{aligned}& (P)\quad \min F(x) \\& \hphantom{(P)\quad} \text{s.t.}\quad g(x)\preceq 0. \end{aligned}$$
Let \(P:=\{x\in X :g(x)\preceq 0\}\) denote the feasible set of \((P)\).
Since ⪯ is only a partial order, an optimal solution may fail to exist for some interval-valued optimization problems. In that situation, one usually considers the concept of a nondominated solution instead. We recall the notions of optimal and nondominated solutions as follows.
Definition 4.1
(i)
\(x^{\ast }\in P\) is an optimal solution of \((P)\Leftrightarrow F(x^{\ast })\preceq F(x)\) for all \(x\in P\). In this case, \(F(x^{\ast })\) is called the optimal objective value of F.
(ii)
\(x^{\ast }\in P\) is a nondominated solution of \((P)\Leftrightarrow \) there exists no \(x_{0}\in P\) such that \(F(x_{0})\prec F(x^{\ast })\). In this case, \(F(x^{\ast })\) is called the nondominated objective value of F.
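The distinction can be illustrated in code (hypothetical objective values, not from the paper): under the partial order ⪯, an optimal value must be ⪯ every other value, while a nondominated value merely has no value strictly below it:

```python
# Optimal vs. nondominated values under the partial order ⪯
# (illustrative sketch, not from the paper).

def leq(A, B):
    return A[0] <= B[0] and A[1] <= B[1]

def lt(A, B):
    # A ≺ B  ⇔  A ⪯ B and A ≠ B
    return leq(A, B) and A != B

values = [(0.0, 3.0), (1.0, 2.0), (2.0, 4.0)]  # hypothetical objective values
optimal = [A for A in values if all(leq(A, B) for B in values)]
nondominated = [A for A in values if not any(lt(B, A) for B in values)]
```

Here no value is ⪯ all others, so `optimal` is empty, yet the incomparable values (0, 3) and (1, 2) are both nondominated.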
Theorem 4.1
Let \(x^{\ast }\) be P-feasible. Suppose that:
(i)
there exist η, \(\varPsi _{0}\), \(b_{0}\), \(\varPsi _{i}\), \(b_{i}\), \(i=1, 2, \ldots ,m\), such that
$$\begin{aligned}& b_{0}(x,y)\varPsi _{0}\bigl[F(x)\ominus _{gH}F \bigl(x^{\ast }\bigr)\bigr]\succeq \eta ^{t}\bigl(x,x ^{\ast }\bigr)\nabla F\bigl(x^{\ast }\bigr) \end{aligned}$$
(7)
and
$$\begin{aligned}& -b_{i}\bigl(x,x^{\ast }\bigr)\varPsi _{i} \bigl[g_{i}\bigl(x^{\ast }\bigr)\bigr]\succeq \eta ^{t}\bigl(x,x^{ \ast }\bigr)\nabla g_{i} \bigl(x^{\ast }\bigr) \end{aligned}$$
(8)
for all feasible x;
(ii)
there exists \(y^{\ast }\in R^{m}\) such that
$$\begin{aligned}& \nabla F\bigl(x^{\ast }\bigr)=-y^{\ast t}\nabla g \bigl(x^{\ast }\bigr), \end{aligned}$$
(9)
$$\begin{aligned}& y^{\ast }\geq 0. \end{aligned}$$
(10)
Further, suppose that
$$\begin{aligned}& \varPsi _{0}(\mu )\succeq 0 \quad \Rightarrow \quad \mu \succeq 0, \end{aligned}$$
(11)
$$\begin{aligned}& \mu \preceq 0 \quad \Rightarrow\quad \varPsi _{i}( \mu )\succeq 0, \end{aligned}$$
(12)
and
$$\begin{aligned}& b_{0}\bigl(x,x^{\ast }\bigr)> 0,\qquad b_{i} \bigl(x,x^{\ast }\bigr)\geq 0, \end{aligned}$$
(13)
for all feasible x. Then \(x^{\ast }\) is an optimal solution of \((P)\).
Proof
Let x be P-feasible. Then
$$\begin{aligned} g(x)\preceq 0. \end{aligned}$$
This, along with (12), yields
$$\begin{aligned} \varPsi _{i}\bigl[g_{i}(x)\bigr]\succeq 0. \end{aligned}$$
From (7)–(13) it follows that
$$\begin{aligned} b_{0}\bigl(x,x^{\ast }\bigr)\varPsi _{0}\bigl[F(x) \ominus _{gH}F\bigl(x^{\ast }\bigr)\bigr] \succeq &\eta ^{t}\bigl(x,x^{\ast }\bigr)\nabla F\bigl(x^{\ast }\bigr) \\ =&-\eta ^{t}\bigl(x,x^{\ast }\bigr) \sum _{i=1} ^{m}y_{i}\nabla g_{i} \bigl(x^{\ast }\bigr) \\ \succeq &\sum_{i=1} ^{m}b_{i} \bigl(x,x^{\ast }\bigr)y_{i}\varPsi _{i} \bigl[g_{i}\bigl(x^{\ast }\bigr)\bigr] \\ \succeq &0. \end{aligned}$$
From (13) it follows that
$$\begin{aligned} \varPsi _{0}\bigl[F(x)\ominus _{gH}F\bigl(x^{\ast } \bigr)\bigr]\succeq 0. \end{aligned}$$
By (11) we have
$$\begin{aligned} F(x)\ominus _{gH}F\bigl(x^{\ast }\bigr)\succeq 0. \end{aligned}$$
Thus
$$\begin{aligned} F(x)\succeq F\bigl(x^{\ast }\bigr). \end{aligned}$$
Therefore \(x^{\ast }\) is an optimal solution of \((P)\). □
Remark 4.1
If we replace the condition
$$\begin{aligned} \varPsi _{0}(\mu )\succeq 0 \quad \Rightarrow\quad \mu \succeq 0 \end{aligned}$$
of Theorem 4.1 by
$$\begin{aligned} \varPsi _{0}(\mu )\nprec 0 \quad \Rightarrow\quad \mu \nprec 0, \end{aligned}$$
(14)
then \(x^{\ast }\) is a nondominated solution of \((P)\).
In Theorem 18 of [20], the authors also gave a sufficient optimality condition for a feasible solution \(x^{\ast }\) to be an optimal solution. In that theorem, the equation
$$\begin{aligned} \nabla F\bigl(x^{\ast }\bigr)+y^{\ast t}\nabla g \bigl(x^{\ast }\bigr)=0 \end{aligned}$$
was used instead of (9) of Theorem 4.1. This equation is very restrictive. In fact, in the case where \(F(x)\) is a function of one variable, suppose \(\nabla F(x^{\ast })=[a,b]\) and \(y^{\ast t} \nabla g(x^{\ast })=[yc,yd]\). Then \([a,b]+[yc,yd]=[a+yc,b+yd]=0\) with \(a\leq b\) and \(yc\leq yd\), which forces \(a=b\) and \(yc=yd\); that is, \(\nabla F(x^{\ast })\) is a real number instead of a proper interval. In the following example, \(x^{\ast }\) is an optimal solution of \((P)\), but \(x^{\ast }\) does not satisfy this equation. The example also shows the advantages of our method over [6, 33, 36, 37].
Example 4.1
$$\begin{aligned}& \min F(x)=\biggl[\frac{1}{2},\frac{3}{2}\biggr]\sin ^{2}x_{1}+\biggl[\frac{1}{2},\frac{3}{2} \biggr] \sin ^{2}x_{2} \\& \textit{s.t.}\quad g(x)=\biggl[\frac{1}{2},\frac{3}{2}\biggr](\sin x_{1}-1)^{2}+\biggl[\frac{1}{2}, \frac{3}{2} \biggr](\sin x_{2}-1)^{2}\preceq \frac{1}{4}\biggl[ \frac{1}{2}, \frac{3}{2}\biggr], \\& \hphantom{\mbox{s.t.}\quad} x_{1},x_{2}\in \biggl(0,\frac{\pi }{2} \biggr). \end{aligned}$$
The function \(F(x)\) is interval-valued univex with respect to
$$\begin{aligned}& \eta (x,y)=\textstyle\begin{cases} (\frac{\sin x_{1}-\sin y_{1}}{\cos y_{1}},\frac{\sin x_{2}-\sin y_{2}}{ \cos y_{2}})^{t}, &(x_{1},x_{2})\geq (y_{1},y_{2}), \\ 0 & \text{otherwise}, \end{cases}\displaystyle \\& b_{0}(x,y)=\textstyle\begin{cases} 1, &(x_{1},x_{2})\geq (y_{1},y_{2}), \\ 0 & \text{otherwise}, \end{cases}\displaystyle \end{aligned}$$
and Ψ is induced by \(\varPhi (a)=2a\), \(b_{1}(x,y)=b_{0}(x,y)\), and \(\varPsi _{1}\) is induced by \(\varPhi _{1}(a)=|a|\), where \(x=(x_{1},x_{2})^{t}\) and \(y=(y_{1},y_{2})^{t}\). The point \(x^{\ast }=(\sin ^{-1}(1-\frac{1}{2 \sqrt{2}}),\sin ^{-1}(1-\frac{1}{2\sqrt{2}}))^{t}\) is a feasible solution. We can also see that \((F,g)\) satisfies the hypotheses of Theorem 4.1. Therefore \(x^{\ast }=(\sin ^{-1}(1-\frac{1}{2\sqrt{2}}), \sin ^{-1}(1-\frac{1}{2\sqrt{2}}))^{t}\) is an optimal solution.
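A brute-force cross-check of this example (a sketch, not part of the paper): both endpoint constraints reduce to \((\sin x_{1}-1)^{2}+(\sin x_{2}-1)^{2}\leq \frac{1}{4}\), and both objective endpoints are proportional to \(\sin ^{2}x_{1}+\sin ^{2}x_{2}\), so feasibility and optimality of \(x^{\ast }\) can be verified on a grid:

```python
import math

# Grid check of Example 4.1 (illustrative sketch, not from the paper).
s_star = 1 - 1 / (2 * math.sqrt(2))
x_star = math.asin(s_star)

def obj(x1, x2):
    # common scalar factor of F^L and F^U
    return math.sin(x1)**2 + math.sin(x2)**2

def feasible(x1, x2):
    # both endpoint constraints reduce to this scalar inequality
    return (math.sin(x1) - 1)**2 + (math.sin(x2) - 1)**2 <= 0.25 + 1e-12

n = 200
grid = [k * (math.pi / 2) / (n + 1) for k in range(1, n + 1)]
best = min(obj(a, c) for a in grid for c in grid if feasible(a, c))
```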
Theorem 4.2
Let \(x^{\ast }\) be P-feasible. Suppose that:
(i)
there exist η, \(\varPsi _{0}\), \(b_{0}\), \(\varPsi _{i}\), \(b_{i}\), \(i=1, 2, \ldots ,m\), such that
$$\begin{aligned}& b_{0}(x,y)\varPsi _{0}\bigl[F(x)\ominus _{gH}F\bigl(x^{\ast }\bigr)\bigr]\succeq \eta ^{t} \bigl(x,x ^{\ast }\bigr)\nabla F\bigl(x^{\ast }\bigr) \end{aligned}$$
(15)
and
$$\begin{aligned}& -b_{i}\bigl(x,x^{\ast }\bigr)\varPsi _{i}\bigl[g_{i}\bigl(x^{\ast }\bigr)\bigr]\succeq \eta ^{t}\bigl(x,x^{ \ast }\bigr)\nabla g_{i} \bigl(x^{\ast }\bigr) \end{aligned}$$
(16)
for all feasible x;
(ii)
there exists \(y^{\ast }\in R^{m}\) such that
$$\begin{aligned}& \bigl\{ \nabla F\bigl(x^{\ast }\bigr)\bigr\} ^{L}= \bigl\{ -y^{\ast t}\nabla g\bigl(x^{\ast }\bigr)\bigr\} ^{L}, \end{aligned}$$
(17)
$$\begin{aligned}& y^{\ast }\geq 0. \end{aligned}$$
(18)
Further, suppose that
$$\begin{aligned}& \varPsi _{0}(\mu )\nprec 0 \quad \Rightarrow \quad \mu \nprec 0, \end{aligned}$$
(19)
$$\begin{aligned}& \mu \preceq 0 \quad \Rightarrow \quad \varPsi _{i}( \mu )\succeq 0, \end{aligned}$$
(20)
and
$$\begin{aligned}& b_{0}\bigl(x,x^{\ast }\bigr)> 0,\qquad b_{i}\bigl(x,x^{\ast }\bigr)\geq 0 \end{aligned}$$
(21)
for all feasible x. Then \(x^{\ast }\) is a nondominated solution of \((P)\).
Proof
Let x be P-feasible. Then
$$\begin{aligned}& g(x)\preceq 0. \end{aligned}$$
From (20) we conclude
$$\begin{aligned}& \varPsi _{i}\bigl[g_{i}(x)\bigr]\succeq 0. \end{aligned}$$
From (15) and (16) it follows that
$$\begin{aligned}& b_{0}(x,y)\bigl\{ \varPsi _{0}\bigl[F(x)\ominus _{gH}F\bigl(x^{\ast }\bigr)\bigr]\bigr\} ^{L}\geq \bigl\{ \eta ^{t}\bigl(x,x^{\ast }\bigr)\nabla F\bigl(x^{\ast } \bigr)\bigr\} ^{L}, \\& b_{0}(x,y)\bigl\{ \varPsi _{0}\bigl[F(x)\ominus _{gH}F\bigl(x^{\ast }\bigr)\bigr]\bigr\} ^{U}\geq \bigl\{ \eta ^{t}\bigl(x,x^{\ast }\bigr)\nabla F\bigl(x^{\ast } \bigr)\bigr\} ^{U}, \end{aligned}$$
and
$$\begin{aligned}& b_{i}\bigl(x,x^{\ast }\bigr)\bigl\{ \varPsi _{i} \bigl[g_{i}\bigl(x^{\ast }\bigr)\bigr]\bigr\} ^{L}\leq \bigl\{ -\eta ^{t}\bigl(x,x ^{\ast }\bigr)\nabla g_{i} \bigl(x^{\ast }\bigr)\bigr\} ^{L}, \\& b_{i}\bigl(x,x^{\ast }\bigr)\bigl\{ \varPsi _{i} \bigl[g_{i}\bigl(x^{\ast }\bigr)\bigr]\bigr\} ^{U}\leq \bigl\{ -\eta ^{t}\bigl(x,x ^{\ast }\bigr)\nabla g_{i} \bigl(x^{\ast }\bigr)\bigr\} ^{U}. \end{aligned}$$
Since
$$\begin{aligned} \eta ^{t}\bigl(x,x^{\ast }\bigr)\nabla F\bigl(x^{\ast } \bigr) =&\eta ^{t}\bigl(x,x^{\ast }\bigr)\bigl[\bigl\{ \nabla F \bigl(x^{\ast }\bigr)\bigr\} ^{L},\bigl\{ \nabla F \bigl(x^{\ast }\bigr)\bigr\} ^{U}\bigr] \\ =&\textstyle\begin{cases} [\eta ^{t}(x,x^{\ast })\{\nabla F(x^{\ast })\}^{L},\eta ^{t}(x,x^{ \ast })\{\nabla F(x^{\ast })\}^{U}], &\eta ^{t}(x,x^{\ast })\geq 0, \\ [\eta ^{t}(x,x^{\ast })\{\nabla F(x^{\ast })\}^{U},\eta ^{t}(x,x^{ \ast })\{\nabla F(x^{\ast })\}^{L}], &\eta ^{t}(x,x^{\ast })< 0, \end{cases}\displaystyle \end{aligned}$$
and
$$\begin{aligned} -\eta ^{t}\bigl(x,x^{\ast }\bigr)\nabla g_{i} \bigl(x^{\ast }\bigr) =&-\eta ^{t}\bigl(x,x^{\ast }\bigr) \bigl[ \bigl\{ \nabla g_{i}\bigl(x^{\ast }\bigr)\bigr\} ^{L},\bigl\{ \nabla g_{i}\bigl(x^{\ast }\bigr)\bigr\} ^{U}\bigr] \\ =&\textstyle\begin{cases} [\eta ^{t}(x,x^{\ast })\{-\nabla g_{i}(x^{\ast })\}^{U},\eta ^{t}(x,x ^{\ast })\{-\nabla g_{i}(x^{\ast })\}^{L}], &\eta ^{t}(x,x^{\ast }) \geq 0, \\ [\eta ^{t}(x,x^{\ast })\{-\nabla g_{i}(x^{\ast })\}^{L},\eta ^{t}(x,x ^{\ast })\{-\nabla g_{i}(x^{\ast })\}^{U}], &\eta ^{t}(x,x^{\ast })< 0, \end{cases}\displaystyle \end{aligned}$$
we consider the following two cases.
Case (i): \(\eta ^{t}(x,x^{\ast })\geq 0\). Then
$$\begin{aligned}& \bigl\{ \eta ^{t}\bigl(x,x^{\ast }\bigr)\nabla F \bigl(x^{\ast }\bigr)\bigr\} ^{L}=\eta ^{t} \bigl(x,x^{\ast }\bigr) \bigl\{ \nabla F\bigl(x^{\ast }\bigr)\bigr\} ^{L} \end{aligned}$$
and
$$\begin{aligned}& \bigl\{ -\eta ^{t}\bigl(x,x^{\ast }\bigr)\nabla g_{i} \bigl(x^{\ast }\bigr)\bigr\} ^{L}=\eta ^{t}\bigl(x,x ^{\ast }\bigr)\bigl\{ -\nabla g_{i}\bigl(x^{\ast }\bigr) \bigr\} ^{U}, \end{aligned}$$
and similarly
$$\begin{aligned}& \bigl\{ \eta ^{t}\bigl(x,x^{\ast }\bigr)\nabla F \bigl(x^{\ast }\bigr)\bigr\} ^{U}=\eta ^{t} \bigl(x,x^{\ast }\bigr) \bigl\{ \nabla F\bigl(x^{\ast }\bigr)\bigr\} ^{U} \end{aligned}$$
and
$$\begin{aligned}& \bigl\{ -\eta ^{t}\bigl(x,x^{\ast }\bigr)\nabla g_{i} \bigl(x^{\ast }\bigr)\bigr\} ^{U}=\eta ^{t}\bigl(x,x ^{\ast }\bigr)\bigl\{ -\nabla g_{i}\bigl(x^{\ast }\bigr) \bigr\} ^{L}. \end{aligned}$$
Thus
$$\begin{aligned} b_{0}(x,y)\bigl\{ \varPsi _{0}\bigl[F(x)\ominus _{gH}F\bigl(x^{\ast }\bigr)\bigr]\bigr\} ^{L} \geq & \bigl\{ \eta ^{t}\bigl(x,x^{\ast }\bigr)\nabla F \bigl(x^{\ast }\bigr)\bigr\} ^{L} \\ =&\eta ^{t}\bigl(x,x^{\ast }\bigr)\bigl\{ \nabla F \bigl(x^{\ast }\bigr)\bigr\} ^{L} \\ =&\eta ^{t}\bigl(x,x^{\ast }\bigr)\bigl\{ -y^{\ast t} \nabla g\bigl(x^{\ast }\bigr)\bigr\} ^{L} \\ \geq &\sum_{i=1} ^{m}b_{i} \bigl(x,x^{\ast }\bigr)y_{i}\bigl\{ \varPsi _{i} \bigl[g_{i}\bigl(x^{ \ast }\bigr)\bigr]\bigr\} ^{L} \\ \geq &0. \end{aligned}$$
From (21) it follows that
$$\begin{aligned}& \varPsi _{0}\bigl[F(x)\ominus _{gH}F\bigl(x^{\ast } \bigr)\bigr]\succeq 0. \end{aligned}$$
Then, by (19),
$$\begin{aligned}& F(x)\ominus _{gH}F\bigl(x^{\ast }\bigr)\nprec 0, \end{aligned}$$
and thus
$$\begin{aligned}& F(x)\nprec F\bigl(x^{\ast }\bigr). \end{aligned}$$
Therefore \(x^{\ast }\) is a nondominated solution of \((P)\).
Case (ii): \(\eta ^{t}(x,x^{\ast })< 0\). Then
$$\begin{aligned}& \bigl\{ \eta ^{t}\bigl(x,x^{\ast }\bigr)\nabla F \bigl(x^{\ast }\bigr)\bigr\} ^{L}=\eta ^{t} \bigl(x,x^{\ast }\bigr) \bigl\{ \nabla F\bigl(x^{\ast }\bigr)\bigr\} ^{U} \end{aligned}$$
and
$$\begin{aligned}& \bigl\{ -\eta ^{t}\bigl(x,x^{\ast }\bigr)\nabla g_{i} \bigl(x^{\ast }\bigr)\bigr\} ^{L}=\eta ^{t}\bigl(x,x ^{\ast }\bigr)\bigl\{ -\nabla g_{i}\bigl(x^{\ast }\bigr) \bigr\} ^{L}, \end{aligned}$$
and similarly
$$\begin{aligned}& \bigl\{ \eta ^{t}\bigl(x,x^{\ast }\bigr)\nabla F \bigl(x^{\ast }\bigr)\bigr\} ^{U}=\eta ^{t} \bigl(x,x^{\ast }\bigr) \bigl\{ \nabla F\bigl(x^{\ast }\bigr)\bigr\} ^{L} \end{aligned}$$
and
$$\begin{aligned}& \bigl\{ -\eta ^{t}\bigl(x,x^{\ast }\bigr)\nabla g_{i} \bigl(x^{\ast }\bigr)\bigr\} ^{U}=\eta ^{t}\bigl(x,x ^{\ast }\bigr)\bigl\{ -\nabla g_{i}\bigl(x^{\ast }\bigr) \bigr\} ^{U}. \end{aligned}$$
Thus
$$\begin{aligned} b_{0}(x,y)\bigl\{ \varPsi _{0}\bigl[F(x)\ominus _{gH}F\bigl(x^{\ast }\bigr)\bigr]\bigr\} ^{U} \geq & \bigl\{ \eta ^{t}\bigl(x,x^{\ast }\bigr)\nabla F \bigl(x^{\ast }\bigr)\bigr\} ^{U} \\ =&\eta ^{t}\bigl(x,x^{\ast }\bigr)\bigl\{ \nabla F \bigl(x^{\ast }\bigr)\bigr\} ^{L} \\ =&\eta ^{t}\bigl(x,x^{\ast }\bigr)\bigl\{ -y^{\ast t} \nabla g\bigl(x^{\ast }\bigr)\bigr\} ^{L} \\ \geq &\sum_{i=1} ^{m}b_{i} \bigl(x,x^{\ast }\bigr)y_{i}\bigl\{ \varPsi _{i} \bigl[g_{i}\bigl(x^{ \ast }\bigr)\bigr]\bigr\} ^{L} \\ \geq &0. \end{aligned}$$
From (21) it follows that
$$\begin{aligned}& \varPsi _{0}\bigl[F(x)\ominus _{gH}F\bigl(x^{\ast } \bigr)\bigr]\nprec 0. \end{aligned}$$
Then, by (19),
$$\begin{aligned}& F(x)\ominus _{gH}F\bigl(x^{\ast }\bigr)\nprec 0, \end{aligned}$$
and thus
$$\begin{aligned}& F(x)\nprec F\bigl(x^{\ast }\bigr). \end{aligned}$$
Therefore \(x^{\ast }\) is a nondominated solution of \((P)\). □
Theorem 4.3
Let \(x^{\ast }\) be P-feasible. Suppose that:
(i)
there exist η, \(\varPsi _{0}\), \(b_{0}\), \(\varPsi _{i}\), \(b_{i}\), \(i=1, 2, \ldots ,m\), such that
$$\begin{aligned}& b_{0}(x,y)\varPsi _{0}\bigl[F(x)\ominus _{gH}F\bigl(x^{\ast }\bigr)\bigr]\succeq \eta ^{t} \bigl(x,x ^{\ast }\bigr)\nabla F\bigl(x^{\ast }\bigr) \end{aligned}$$
(22)
and
$$\begin{aligned}& -b_{i}\bigl(x,x^{\ast }\bigr)\varPsi _{i}\bigl[g_{i}\bigl(x^{\ast }\bigr)\bigr]\succeq \eta ^{t}\bigl(x,x^{ \ast }\bigr)\nabla g_{i} \bigl(x^{\ast }\bigr) \end{aligned}$$
(23)
for all feasible x;
(ii)
there exists \(y^{\ast }\in R^{m}\) such that
$$\begin{aligned}& \bigl\{ \nabla F\bigl(x^{\ast }\bigr)\bigr\} ^{U}= \bigl\{ -y^{\ast t}\nabla g\bigl(x^{\ast }\bigr)\bigr\} ^{U}, \end{aligned}$$
(24)
$$\begin{aligned}& y^{\ast }\geq 0. \end{aligned}$$
(25)
Further, suppose that
$$\begin{aligned}& \varPsi _{0}(\mu )\nprec 0 \quad \Rightarrow\quad \mu \nprec 0, \end{aligned}$$
(26)
$$\begin{aligned}& \mu \preceq 0 \quad \Rightarrow\quad \varPsi _{i}( \mu )\succeq 0, \end{aligned}$$
(27)
and
$$\begin{aligned}& b_{0}\bigl(x,x^{\ast }\bigr)> 0,\qquad b_{i}\bigl(x,x^{\ast }\bigr)\geq 0 \end{aligned}$$
(28)
for all feasible x. Then \(x^{\ast }\) is a nondominated solution of \((P)\).
The following example shows the advantages of our method over [22].
Example 4.2
$$\begin{aligned}& \min F(x)=[-1,1] \vert x \vert \\& \textit{s.t.}\quad g(x)=x-1\leq 0. \end{aligned}$$
Since \(F^{L}(x)=-|x|\) and \(F^{U} (x)=|x|\) are not differentiable at \(x=0\), \(F(x)\) is not weakly differentiable at \(x=0\). Therefore the method in [22] cannot be used.
Note that the objective function \(F(x)\) is gH-differentiable on R and that \(F^{\prime }(y)=[-1,1]\). Let
$$\begin{aligned} b_{0}(x,y)=\textstyle\begin{cases} 1, &x< y< 0 \text{ or } 0< x< y, \\ 0 &\text{otherwise}, \end{cases}\displaystyle \end{aligned}$$
let the function \(\varPsi _{0}[a,b]=[a,b]\) be induced by \(\varPhi _{0}(a)=a\), and let
$$\begin{aligned}& \eta (x,y)=\textstyle\begin{cases} x-y, &x< y< 0 \text{ or } 0< x< y, \\ 0 &\text{otherwise}. \end{cases}\displaystyle \end{aligned}$$
Let \(b_{1}=1\), and let \(\varPsi _{1}\) be induced by \(\varPhi _{1}(a)=|a|\). The point \(x^{\ast }=1\) is a feasible solution. We can see that \((F,g)\) satisfies the hypotheses of Theorem 4.2. Therefore \(x^{\ast }=1\) is a nondominated solution.
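A sampling check of this example (a sketch, not part of the paper): no feasible point gives an objective value strictly below \(F(x^{\ast })=[-1,1]\):

```python
# Sampling check of Example 4.2 (illustrative sketch, not from the paper).

def F(x):
    # F(x) = [-1, 1]|x| = [-|x|, |x|]
    return (-abs(x), abs(x))

def lt(A, B):
    # A ≺ B  ⇔  A ⪯ B and A ≠ B
    return A[0] <= B[0] and A[1] <= B[1] and A != B

x_star = 1.0
samples = [-3 + 0.01 * k for k in range(401)]  # feasible points with x ≤ 1
dominated = any(lt(F(x), F(x_star)) for x in samples)
```

Indeed, \(F(x)\prec [-1,1]\) would force \(|x|\geq 1\) and \(|x|\leq 1\) with \(F(x)\neq [-1,1]\), which is impossible, so `dominated` is `False`.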
Example 4.3
$$\begin{aligned}& \min F(x)=[-2,1]x^{2},\quad x< 0, \\& \textit{s.t.}\quad g(x)=x+1\leq 0. \end{aligned}$$
Let
$$\begin{aligned}& b_{0}(x,y)=\textstyle\begin{cases} 1, & x\leq y< 0, \\ \frac{-2y(x-y)}{-x^{2}+y^{2}}, & y< x< 0, \end{cases}\displaystyle \quad \mbox{and} \quad \varPsi _{0}[a,b]=\textstyle\begin{cases} [a,b], & [a,b]\preceq 0, \\ \varPsi ([a,b]), &[a,b]\npreceq 0, \end{cases}\displaystyle \end{aligned}$$
where \(\varPsi ([a,b])\) is induced by \(\varPhi (a)=|a|\), and let
$$\begin{aligned} \eta (x,y)=\textstyle\begin{cases} x-y, &x< y< 0 \text{ or } 0< x< y, \\ 0 &\text{otherwise}. \end{cases}\displaystyle \end{aligned}$$
Let \(b_{1}(x,y)=1\) and \(\varPhi _{1}(a)=\varPhi (a)=|a|\). The point \(x^{\ast }=-1\) is a feasible solution. We can see that \((F,g)\) satisfies the hypotheses of Theorem 4.3, and therefore \(x^{\ast }=-1\) is a nondominated solution.
5 Conclusion
The objective of this paper is to introduce the concept of gH-differentiable interval-valued univex mappings and to discuss the relationship between interval-valued univex mappings and interval-valued weakly univex mappings. We derive sufficient optimality conditions for the constrained interval-valued minimization problem under interval-valued univexity. In future work, we hope to give sufficient optimality conditions for nondifferentiable interval-valued optimization problems under univexity hypotheses.
Competing interests
The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.