Published in: Journal of Inequalities and Applications 1/2015

Open Access 01-12-2015 | Research

Weighted majorization theorems via generalization of Taylor’s formula

Authors: Andrea Aglić Aljinović, Asif R Khan, Josip E Pečarić

Abstract

A new generalization of the weighted majorization theorem for n-convex functions is given, by using a generalization of Taylor’s formula. Bounds for the remainders in new majorization identities are given by using the Čebyšev type inequalities. Mean value theorems and n-exponential convexity are discussed for functionals related to the new majorization identities.
Notes

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

JP made the main contribution in conceiving the presented research. AAA, ARK, and JP worked jointly on each section, while AAA and ARK drafted the manuscript. All authors read and approved the final manuscript.

1 Introduction

Unless stated otherwise, throughout this section I denotes an interval in \(\mathbb{R}\).
Definition 1
A function \(f:I\to\mathbb{R}\) is called convex if the inequality
$$ f \bigl( \lambda x_{1}+(1-\lambda) x_{2} \bigr) \leq \lambda f(x_{1}) + (1-\lambda) f(x_{2}) $$
(1.1)
holds for each \(x_{1}, x_{2} \in I\) and \(\lambda\in[0,1]\).
Remark 1
(a)
If inequality (1.1) is strict for each \(x_{1}\neq x_{2}\) and \(\lambda\in(0,1)\), then f is called strictly convex.
 
(b)
If the inequality in (1.1) is reversed, then f is called concave. If it is strict for each \(x_{1}\neq x_{2}\) and \(\lambda\in(0,1)\), then f is called strictly concave.
 
The following proposition gives an alternative characterization of convex functions [1], p.2.
Proposition 1
A function \(f:I\to\mathbb{R}\) is convex if and only if the inequality
$$ ( x_{3}-x_{2} ) f(x_{1})+ ( x_{1}-x_{3} ) f(x_{2})+ ( x_{2}-x_{1} ) f(x_{3})\geq0 $$
holds for each \(x_{1}, x_{2}, x_{3} \in I\) such that \(x_{1} < x_{2} < x_{3}\).
The following result can be deduced from Proposition 1.
Proposition 2
If a function \(f:I\rightarrow\mathbb{R}\) is convex, then the inequality
$$\frac{f(x_{2})-f(x_{1})}{x_{2} - x_{1}} \leq\frac {f(y_{2})-f(y_{1})}{y_{2} - y_{1}}$$
holds for each \(x_{1}, x_{2}, y_{1}, y_{2} \in I \) such that \(x_{1} \leq y_{1}\), \(x_{2} \leq y_{2}\), \(x_{1} \neq x_{2}\), \(y_{1} \neq y_{2}\).
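As an informal numerical illustration of Proposition 2 (ours, not part of the paper), the slope inequality can be checked for the convex function \(f(t)=t^{2}\):

```python
# Illustration of Proposition 2 for the convex function f(t) = t^2:
# shifting both endpoints of a chord to the right cannot decrease its slope.
f = lambda t: t * t
slope = lambda u, v: (f(v) - f(u)) / (v - u)

# x1 = 0 <= y1 = 0.5 and x2 = 1 <= y2 = 2, so the slope inequality holds:
assert slope(0.0, 1.0) <= slope(0.5, 2.0)
```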
Now we define generalized convex functions; the definitions below can be found in [2, 3] and [1].
Definition 2
The nth order divided difference of a function \(f:I\to\mathbb {R}\) at distinct points \(x_{i},x_{i+1},\ldots,x_{i+n}\in I=[a,b]\subset\mathbb {R}\) for some \(i\in\mathbb{N}\) is defined recursively by
$$\begin{aligned}& [ x_{j};f ] =f ( x_{j} ) ,\quad j\in\{i,\ldots, i+n\}, \\& {}[ x_{i},\ldots,x_{i+n};f ] =\frac{ [ x_{i+1},\ldots,x_{i+n};f ] - [ x_{i},\ldots,x_{i+n-1};f ] }{ x_{i+n}-x_{i}}. \end{aligned}$$
It may easily be verified that
$$[x_{i},\ldots,x_{i+n};f]=\sum_{k=0}^{n} \frac{f(x_{i+k})}{ \prod_{j=i, j\neq i+k}^{i+n}(x_{i+k}-x_{j}) }. $$
Remark 2
Let us denote \([x_{i},\ldots,x_{i+n};f]\) by \(\Delta_{(n)} f(x_{i})\). The value \([ x_{i},\ldots,x_{i+n};f ] \) is independent of the order of the points \(x_{i},x_{i+1},\ldots,x_{i+n}\). We can extend this definition by including the cases in which two or more points coincide by taking respective limits.
Definition 3
A function \(f:I\rightarrow\mathbb{R}\) is called convex of order n or n-convex if for all choices of \((n+1)\) distinct points \(x_{i},\ldots,x_{i+n}\) we have \(\Delta_{(n)}f(x_{i})\geq0\).
If the nth order derivative \(f^{(n)}\) exists, then f is n-convex if and only if \(f^{(n)}\geq0\).
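The recursion above translates directly into code. The following sketch (illustrative, not part of the paper) computes divided differences and checks the n-convexity criterion for exp:

```python
import math

def divided_difference(xs, f):
    """Compute the divided difference [x_0, ..., x_n; f] recursively,
    following the definition above (distinct nodes assumed)."""
    if len(xs) == 1:
        return f(xs[0])
    return (divided_difference(xs[1:], f)
            - divided_difference(xs[:-1], f)) / (xs[-1] - xs[0])

# exp has nonnegative derivatives of every order, so by the criterion
# above it is n-convex for every n; its divided differences of any
# order are therefore nonnegative.
d3 = divided_difference([0.0, 0.5, 1.2, 2.0], math.exp)
assert d3 >= 0.0
```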
Remark 3
For \(n=2\) and \(i=0\), we get the second order divided difference of a function \(f:I\rightarrow\mathbb{R}\), which is defined recursively by
$$\begin{aligned}& [ x_{j};f ] =f ( x_{j} ) ,\quad j\in\{0,1,2\}, \\& {}[ x_{j},x_{j+1};f ] =\frac {f(x_{j+1})-f(x_{j})}{x_{j+1}-x_{j}},\quad j\in\{0,1\}, \\& {}[ x_{0},x_{1},x_{2};f ] =\frac{ [ x_{1},x_{2};f ] - [ x_{0},x_{1};f ] }{x_{2}-x_{0}}, \end{aligned}$$
(1.2)
for arbitrary points \(x_{0},x_{1},x_{2}\in I\). Now, we discuss some limiting cases as follows: taking the limit as \(x_{1}\rightarrow x_{0}\) in (1.2), we get
$$\lim_{x_{1}\rightarrow x_{0}}[x_{0},x_{1},x_{2};f]=[x_{0},x_{0},x_{2};f]= \frac{f(x_{2})-f(x_{0})-f^{\prime}(x_{0})(x_{2}-x_{0})}{(x_{2} -x_{0})^{2}},\quad x_{2}\neq x_{0}, $$
provided that \(f^{\prime}(x_{0})\) exists. Furthermore, taking the limits as \(x_{i}\rightarrow x_{0}\), \(i\in\{1,2\}\) in (1.2), we obtain
$$\mathop{\lim_{x_{1}\rightarrow x_{0}}}_{x_{2}\rightarrow x_{0} } [x_{0},x_{1},x_{2};f]=[x_{0},x_{0},x_{0};f]= \frac{f^{\prime\prime }(x_{0})}{2}, $$
provided that \(f^{\prime\prime}(x_{0})\) exists.
For fixed \(m\geq2\), let \(\mathbf{x}= ( x_{1},\ldots ,x_{m} ) \) and \(\mathbf{y}= ( y_{1},\ldots,y_{m} ) \) denote two real m-tuples and \(x_{[1]} \geq x_{[2]} \geq\cdots\geq x_{[m]}\), \(y_{[1]} \geq y_{[2]} \geq\cdots\geq y_{[m]}\) their ordered components.
Definition 4
For \(\mathbf{x}, \mathbf{y} \in\mathbb{R}^{m}\),
$$\begin{aligned} \mathbf{x}\prec\mathbf{y} \quad\text{if } \textstyle\begin{cases} \sum_{i=1}^{k} x_{[i]}\leq\sum_{i=1}^{k} y_{[i]} ,& k\in\{1,\ldots ,m-1\},\\ \sum_{i=1}^{m} x_{[i]}=\sum_{i=1}^{m} y_{[i]} , \end{cases}\displaystyle \end{aligned}$$
when \(\mathbf{x}\prec\mathbf{y}\), x is said to be majorized by y, or y is said to majorize x.
This notion and notation of majorization were introduced by Hardy et al. [4]. We now state the well-known majorization theorem from the same book [4].
Proposition 3
Let \(\mathbf{x}, \mathbf{y} \in [ a,b ] ^{m}\). The inequality
$$ \sum_{i=1}^{m}f ( x_{i} ) \leq \sum_{i=1}^{m}f ( y_{i} ) $$
(1.3)
holds for every continuous convex function \(f: [ a,b ] \rightarrow\mathbb{R}\) if and only if \(\mathbf{x}\prec\mathbf{y}\). Moreover, if f is a strictly convex function, then equality in (1.3) is valid if and only if \(x_{[i]}=y_{[i]}\) for each \(i\in\{1,\ldots,m\}\).
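Definition 4 and Proposition 3 can be checked numerically; the following is an illustrative sketch (ours, not part of the paper), where `majorizes` is a hypothetical helper name:

```python
def majorizes(y, x, tol=1e-12):
    """Check x ≺ y (Definition 4) by comparing partial sums of the
    decreasing rearrangements of x and y."""
    xs, ys = sorted(x, reverse=True), sorted(y, reverse=True)
    px = py = 0.0
    for k in range(len(xs) - 1):
        px += xs[k]
        py += ys[k]
        if px > py + tol:
            return False      # a partial-sum condition fails
    return abs(sum(xs) - sum(ys)) <= tol  # total sums must agree

# (2, 2, 2) ≺ (3, 2, 1): equal totals, the right tuple is more spread out.
assert majorizes([3, 2, 1], [2, 2, 2])
assert not majorizes([2, 2, 2], [3, 2, 1])

# Proposition 3 with the continuous convex function f(t) = t^2:
f = lambda t: t * t
assert sum(map(f, [2, 2, 2])) <= sum(map(f, [3, 2, 1]))
```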
The following weighted version of the majorization theorem was given by Fuchs in [5] (see also [6], p.580 and [1], p.323).
Proposition 4
Let \(\mathbf{w}\in\mathbb{R}^{m}\) and let \(\mathbf{x}, \mathbf{y}\in [ a,b ] ^{m}\) be two decreasing real m-tuples such that
$$\begin{aligned}& \sum_{i=1}^{k}w_{i} x_{i} \leq\sum_{i=1}^{k}w_{i} y_{i},\quad k\in\{1,\ldots,m-1\}\quad \textit{and} \end{aligned}$$
(1.4)
$$\begin{aligned}& \sum_{i=1}^{m}w_{i} x_{i} =\sum_{i=1}^{m}w_{i} y_{i}. \end{aligned}$$
(1.5)
Then for every continuous convex function \(f: [ a,b ] \rightarrow\mathbb{R}\), the following inequality holds:
$$ \sum_{i=1}^{m}w_{i}f(x_{i}) \leq\sum_{i=1}^{m}w_{i} f(y_{i}).$$
(1.6)
Remark 4
Under the assumptions of Proposition 4, for every concave function f the reverse inequality holds in (1.6).
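Fuchs’ conditions (1.4)-(1.5) and inequality (1.6) admit a quick numerical check. The sketch below (ours, not part of the paper; `fuchs_conditions` is a hypothetical helper name) uses a signed weight to emphasize that w need not be nonnegative:

```python
def fuchs_conditions(w, x, y, tol=1e-12):
    """Check conditions (1.4) and (1.5) for decreasing m-tuples x, y
    and arbitrary real weights w."""
    sx = sy = 0.0
    for k in range(len(w)):
        sx += w[k] * x[k]
        sy += w[k] * y[k]
        if k < len(w) - 1 and sx > sy + tol:
            return False        # (1.4) fails at this k
    return abs(sx - sy) <= tol  # (1.5)

w = [2.0, -1.0, 1.0]  # Fuchs' theorem allows negative weights
y = [4.0, 2.0, 0.0]   # decreasing
x = [3.0, 2.0, 2.0]   # decreasing
assert fuchs_conditions(w, x, y)

f = lambda t: t * t   # continuous convex on [0, 4]
lhs = sum(wi * f(xi) for wi, xi in zip(w, x))
rhs = sum(wi * f(yi) for wi, yi in zip(w, y))
assert lhs <= rhs + 1e-12  # inequality (1.6)
```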
The following proposition is a consequence of Theorem 1 in [7] (see also [1], p.328) and represents an integral majorization result.
Proposition 5
Let \(x, y:[\alpha,\beta]\rightarrow I\) be two decreasing continuous functions and let \(w:[\alpha,\beta]\rightarrow \mathbb{R} \) be continuous. If
$$\begin{aligned}& \int_{\alpha}^{u}w ( t ) x(t) \,dt \leq \int _{\alpha}^{u}w ( t ) y(t)\,dt, \quad\textit{for each } u\in(\alpha ,\beta),\quad \textit{and} \end{aligned}$$
(1.7)
$$\begin{aligned}& \int_{\alpha}^{\beta}w ( t ) x(t)\,dt = \int _{\alpha}^{\beta}w ( t ) y(t)\,dt, \end{aligned}$$
(1.8)
hold, then for every continuous convex function \(f:I\rightarrow\mathbb {R}\) the following inequality holds:
$$ \int_{\alpha}^{\beta}w ( t ) f\bigl(x(t)\bigr)\,dt\leq\int _{\alpha }^{\beta }w ( t ) f\bigl(y(t)\bigr)\,dt. $$
(1.9)
Remark 5
Let \(x, y:[\alpha,\beta]\rightarrow I\) be two increasing continuous functions and \(w:[\alpha,\beta]\rightarrow \mathbb{R} \) continuous. If
$$\begin{aligned}& \int_{u}^{\beta}w ( t ) x(t)\,dt \leq \int _{u}^{\beta }w ( t ) y(t)\,dt, \quad\text{for each } u\in( \alpha,\beta ),\quad \text{and} \\& \int_{\alpha}^{\beta}w ( t ) x(t)\,dt = \int _{\alpha}^{\beta}w ( t ) y(t)\,dt, \end{aligned}$$
then again inequality (1.9) holds. In this paper we state our results for decreasing x and y satisfying the assumptions of Proposition 5, but they remain valid for increasing x and y satisfying the above conditions; see for example [6], p.584.
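Proposition 5 can likewise be illustrated numerically; in the sketch below (ours, not from the paper) the functions x, y and the quadrature routine are chosen purely for illustration, with \(w\equiv1\):

```python
def integrate(g, a, b, m=10000):
    """Composite midpoint rule; adequate for these smooth integrands."""
    h = (b - a) / m
    return h * sum(g(a + (i + 0.5) * h) for i in range(m))

x = lambda t: 1.0 - t        # decreasing on [0, 1]
y = lambda t: 1.5 - 2.0 * t  # decreasing, same total integral as x
f = lambda t: t * t          # continuous convex

# (1.7): the integral of x over [0, u] minus that of y is -u(1-u)/2 <= 0;
# (1.8): both sides integrate to 1/2 over [0, 1]. Hence (1.9) must hold:
assert (integrate(lambda t: f(x(t)), 0.0, 1.0)
        <= integrate(lambda t: f(y(t)), 0.0, 1.0) + 1e-9)
```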
In paper [8] the following extension of Montgomery identity via Taylor’s formula is obtained.
Proposition 6
Let \(n\in\mathbb{N}\), \(f:I\rightarrow\mathbb{R}\) be such that \(f^{ ( n-1 ) }\) is absolutely continuous, \(I\subset\mathbb{R}\) an open interval, \(a,b\in I\), \(a< b\). Then the following identity holds:
$$\begin{aligned} f ( x ) =&\frac{1}{b-a}\int_{a}^{b}f ( t )\,dt+\sum_{k=0}^{n-2}\frac{f^{ ( k+1 ) } ( a ) }{k! ( k+2 ) } \frac{ ( x-a ) ^{k+2}}{b-a}-\sum_{k=0}^{n-2}\frac{f^{ ( k+1 ) } ( b ) }{k! ( k+2 ) }\frac{ ( x-b ) ^{k+2}}{b-a} \\ &{} +\frac{1}{ ( n-1 ) !}\int_{a}^{b}T_{n} ( x,s ) f^{ ( n ) } ( s )\,ds, \end{aligned}$$
(1.10)
where
$$ T_{n} ( x,s ) = \textstyle\begin{cases} -\frac{ ( x-s ) ^{n}}{n ( b-a ) }+\frac{x-a}{b-a} ( x-s ) ^{n-1}, & a\leq s\leq x,\\ -\frac{ ( x-s ) ^{n}}{n ( b-a ) }+\frac{x-b}{b-a} ( x-s ) ^{n-1}, & x< s\leq b. \end{cases} $$
(1.11)
In case \(n=1\) the sum \(\sum_{k=0}^{n-2}\cdots\) is empty, so identity (1.10) reduces to the well-known Montgomery identity (see for instance [9])
$$f ( x ) =\frac{1}{b-a}\int_{a}^{b}f ( t )\,dt+\int_{a}^{b}P ( x,s ) f^{\prime} ( s )\,ds, $$
where \(P ( x,s ) \) is the Peano kernel, defined by
$$P ( x,s ) = \textstyle\begin{cases} \frac{s-a}{b-a}, & a\leq s\leq x,\\ \frac{s-b}{b-a}, & x< s\leq b. \end{cases} $$
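As a numerical sanity check of identity (1.10) (ours, not part of the paper), one can evaluate its right-hand side for a polynomial and compare with \(f(x)\):

```python
import math

def T(n, x, s, a, b):
    """The kernel T_n(x, s) from (1.11)."""
    if s <= x:
        return -(x - s)**n / (n * (b - a)) + (x - a) / (b - a) * (x - s)**(n - 1)
    return -(x - s)**n / (n * (b - a)) + (x - b) / (b - a) * (x - s)**(n - 1)

def integrate(g, a, b, m=20000):
    """Composite midpoint rule."""
    h = (b - a) / m
    return h * sum(g(a + (i + 0.5) * h) for i in range(m))

# Right-hand side of (1.10) for f(t) = t^3 with n = 2 on [0, 1].
a, b, n, xpt = 0.0, 1.0, 2, 0.3
f, df, d2f = (lambda t: t**3), (lambda t: 3 * t**2), (lambda t: 6 * t)
rhs = integrate(f, a, b) / (b - a)
for k in range(n - 1):  # only k = 0 here, with f^{(k+1)} = f'
    c = 1.0 / (math.factorial(k) * (k + 2) * (b - a))
    rhs += c * (df(a) * (xpt - a)**(k + 2) - df(b) * (xpt - b)**(k + 2))
rhs += integrate(lambda s: T(n, xpt, s, a, b) * d2f(s), a, b) / math.factorial(n - 1)
assert abs(rhs - f(xpt)) < 1e-6  # the identity recovers f(x)
```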
The aim of this paper is to present a new generalization of the weighted majorization theorem for n-convex functions, obtained by using a generalization of Taylor’s formula. We also obtain bounds for the remainders in the new majorization identities by using Čebyšev type inequalities, and we give mean value theorems and n-exponential convexity for functionals related to these identities.

2 Majorization inequality by extension of Montgomery identity via Taylor’s formula

Theorem 1
Suppose all the assumptions from Proposition 6 hold. Additionally suppose that \(m\in\mathbb{N}\), \(x_{i},y_{i}\in [ a,b ] \) and \(w_{i}\in\mathbb{R}\) for \(i\in\{1,2,\ldots,m\}\). Then
$$\begin{aligned}& \sum_{i=1}^{m}w_{i}f ( y_{i} ) -\sum_{i=1}^{m}w_{i}f ( x_{i} ) \\& \quad =\frac{1}{b-a} \Biggl[ \sum_{k=0}^{n-2} \frac{1}{k! ( k+2 ) }\sum_{i=1}^{m}w_{i} \bigl[ f^{ ( k+1 ) } ( a ) \bigl[ ( y_{i}-a ) ^{k+2}- ( x_{i}-a ) ^{k+2} \bigr] \\& \quad\quad {} -f^{ ( k+1 ) } ( b ) \bigl[ ( y_{i}-b ) ^{k+2}- ( x_{i}-b ) ^{k+2} \bigr] \bigr] \Biggr] \\& \quad\quad{} +\frac{1}{ ( n-1 ) !}\int_{a}^{b} \Biggl( \sum _{i=1}^{m}w_{i} \bigl( T_{n} ( y_{i},s ) -T_{n} ( x_{i},s ) \bigr) \Biggr) f^{ ( n ) } ( s )\,ds. \end{aligned}$$
(2.1)
Proof
We apply the extension of the Montgomery identity via Taylor’s formula (1.10) to obtain
$$\begin{aligned}& \sum_{i=1}^{m}w_{i}f ( y_{i} ) -\sum_{i=1}^{m}w_{i}f ( x_{i} ) \\& \quad=\frac{1}{b-a}\int_{a}^{b}f ( t )\,dt\sum_{i=1}^{m}w_{i}- \frac{1}{b-a}\int_{a}^{b}f ( t )\,dt\sum _{i=1}^{m}w_{i} \\& \quad\quad{} +\sum_{i=1}^{m}w_{i} \Biggl( \sum_{k=0}^{n-2}\frac{f^{ ( k+1 ) } ( a ) }{k! ( k+2 ) } \frac{ ( y_{i}-a ) ^{k+2}}{b-a}-\sum_{k=0}^{n-2} \frac{f^{ ( k+1 ) } ( b ) }{k! ( k+2 ) }\frac{ ( y_{i}-b ) ^{k+2}}{b-a} \Biggr) \\& \quad\quad{} -\sum_{i=1}^{m}w_{i} \Biggl( \sum_{k=0}^{n-2}\frac{f^{ ( k+1 ) } ( a ) }{k! ( k+2 ) } \frac{ ( x_{i}-a ) ^{k+2}}{b-a}-\sum_{k=0}^{n-2} \frac{f^{ ( k+1 ) } ( b ) }{k! ( k+2 ) }\frac{ ( x_{i}-b ) ^{k+2}}{b-a} \Biggr) \\& \quad\quad{} +\frac{1}{ ( n-1 ) !}\sum_{i=1}^{m}w_{i} \int_{a}^{b}T_{n} ( y_{i},s ) f^{ ( n ) } ( s )\,ds-\frac{1}{ ( n-1 ) !}\sum_{i=1}^{m}w_{i} \int_{a}^{b}T_{n} ( x_{i},s ) f^{ ( n ) } ( s )\,ds. \end{aligned}$$
By simplifying these expressions we obtain (2.1). □
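Identity (2.1) can also be tested numerically. The sketch below (ours, not part of the paper) checks it for a polynomial, using the coefficient \(1/(k!(k+2))\) carried over from (1.10):

```python
import math

def T(n, x, s, a, b):
    """The kernel T_n(x, s) from (1.11)."""
    if s <= x:
        return -(x - s)**n / (n * (b - a)) + (x - a) / (b - a) * (x - s)**(n - 1)
    return -(x - s)**n / (n * (b - a)) + (x - b) / (b - a) * (x - s)**(n - 1)

def integrate(g, a, b, m=20000):
    """Composite midpoint rule; accurate enough for these integrands."""
    h = (b - a) / m
    return h * sum(g(a + (i + 0.5) * h) for i in range(m))

a, b, n = 0.0, 1.0, 3
f = lambda t: t**3
deriv = {1: lambda t: 3 * t**2, 2: lambda t: 6 * t, 3: lambda t: 6.0}
w, xs, ys = [1.0, -0.5, 2.0], [0.2, 0.5, 0.8], [0.1, 0.6, 0.9]

lhs = sum(wi * (f(yi) - f(xi)) for wi, xi, yi in zip(w, xs, ys))
rhs = 0.0
for k in range(n - 1):  # coefficient 1/(k!(k+2)), as in (1.10)
    c = 1.0 / (math.factorial(k) * (k + 2) * (b - a))
    rhs += c * sum(
        wi * (deriv[k + 1](a) * ((yi - a)**(k + 2) - (xi - a)**(k + 2))
              - deriv[k + 1](b) * ((yi - b)**(k + 2) - (xi - b)**(k + 2)))
        for wi, xi, yi in zip(w, xs, ys))
rhs += integrate(
    lambda s: sum(wi * (T(n, yi, s, a, b) - T(n, xi, s, a, b))
                  for wi, xi, yi in zip(w, xs, ys)) * deriv[n](s),
    a, b) / math.factorial(n - 1)
assert abs(lhs - rhs) < 1e-6
```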
We may state its integral version as follows:
Theorem 2
Let \(x,y:[\alpha,\beta]\rightarrow [ a,b]\) be two functions and let \(w:[\alpha,\beta]\rightarrow \mathbb{R} \) be a continuous function. Let \(f:I\rightarrow\mathbb{R}\) be such that \(f^{ ( n-1 ) }\) is absolutely continuous for some \(n\in\mathbb{N}\), where \(I\subset\mathbb{R}\) is an open interval, \(a,b\in I\), \(a< b\). Then the following identity holds:
$$\begin{aligned}& \int_{\alpha}^{\beta}w ( t ) f\bigl(y(t)\bigr)\,dt-\int _{\alpha}^{\beta }w ( t ) f\bigl(x(t)\bigr)\,dt \\& \quad =\frac{1}{b-a} \Biggl[ \sum_{k=0}^{n-2} \frac{1}{k! ( k+2 ) }\int_{\alpha}^{\beta}w ( t ) \bigl[ f^{ ( k+1 ) } ( a ) \bigl[ \bigl( y(t)-a \bigr) ^{k+2}- \bigl( x(t)-a \bigr) ^{k+2} \bigr] \\& \quad\quad{} -f^{ ( k+1 ) } ( b ) \bigl[ \bigl( y ( t ) -b \bigr) ^{k+2}- \bigl( x ( t ) -b \bigr) ^{k+2} \bigr] \bigr]\,dt \Biggr] \\& \quad\quad{} +\frac{1}{ ( n-1 ) !}\int_{a}^{b} \biggl( \int _{\alpha}^{\beta }w ( t ) \bigl( T_{n} \bigl( y(t),s \bigr) -T_{n} \bigl( x(t),s \bigr) \bigr)\,dt \biggr) f^{ ( n ) } ( s )\,ds, \end{aligned}$$
(2.2)
where \(T_{n}{(\cdot,s)}\) is as defined in Proposition 6.
Proof
The required result is obtained by applying the extension of the Montgomery identity via Taylor’s formula (1.10) to the expression
$$\int_{\alpha}^{\beta}w ( t ) f\bigl(y(t)\bigr)\,dt-\int _{\alpha}^{\beta }w ( t ) f\bigl(x(t)\bigr)\,dt $$
and then using Fubini’s theorem. □
Now we state the main generalization of the majorization inequality by using the identities just obtained.
Theorem 3
Let all the assumptions of Theorem 1 hold with the additional condition
$$ \sum_{i=1}^{m}w_{i}T_{n}(x_{i},s) \leq\sum_{i=1}^{m}w_{i}T_{n}(y_{i},s),\quad \forall s\in[ a,b].$$
(2.3)
Then for every n-convex function \(f:I\rightarrow\mathbb{R}\) the following inequality holds:
$$\begin{aligned}& \sum_{i=1}^{m}w_{i}f ( y_{i} ) -\sum_{i=1}^{m}w_{i}f ( x_{i} ) \\& \quad \geq\frac{1}{b-a} \Biggl[ \sum_{k=0}^{n-2} \frac{1}{k! ( k+2 ) }\sum_{i=1}^{m}w_{i} \bigl[ f^{ ( k+1 ) } ( a ) \bigl[ ( y_{i}-a ) ^{k+2}- ( x_{i}-a ) ^{k+2} \bigr] \\& \quad\quad{} -f^{ ( k+1 ) } ( b ) \bigl[ ( y_{i}-b ) ^{k+2}- ( x_{i}-b ) ^{k+2} \bigr] \bigr] \Biggr] . \end{aligned}$$
(2.4)
Proof
Since f is n-convex, we have \(f^{(n)}\geq0\). Using this fact together with (2.3) in (2.1), we easily arrive at the required result. □
Remark 6
If the reverse inequality holds in (2.3), then the reverse inequality holds in (2.4).
We now state an important consequence.
Corollary 1
Suppose all the assumptions from Theorem 1 hold. Additionally suppose that \(\mathbf{x}, \mathbf{y}\in[ a,b]^{m}\) are two decreasing m-tuples and that \(\mathbf{w}\in\mathbb{R}^{m}\) is such that conditions (1.4) and (1.5) hold. If f is 2n-convex, then the following inequality holds:
$$\begin{aligned}& \sum_{i=1}^{m}w_{i}f ( y_{i} ) -\sum_{i=1}^{m}w_{i}f ( x_{i} ) \\& \quad \geq\frac{1}{b-a} \Biggl[ \sum_{k=0}^{2n-2} \frac{1}{k! ( k+2 ) }\sum_{i=1}^{m}w_{i} \bigl[ f^{ ( k+1 ) } ( a ) \bigl[ ( y_{i}-a ) ^{k+2}- ( x_{i}-a ) ^{k+2} \bigr] \\& \quad\quad {} -f^{ ( k+1 ) } ( b ) \bigl[ ( y_{i}-b ) ^{k+2}- ( x_{i}-b ) ^{k+2} \bigr] \bigr]\Biggr] . \end{aligned}$$
(2.5)
Moreover, if \(f^{ ( j ) } ( a ) \geq0\) and \(( -1 ) ^{j}f^{ ( j ) } ( b ) \geq0\) for \(j=1,\ldots,2n-1\) then
$$ \sum_{i=1}^{m}w_{i}f ( y_{i} ) \geq\sum_{i=1}^{m}w_{i}f ( x_{i} ) .$$
(2.6)
Proof
Since
$$T_{n} ( x,s ) =\textstyle\begin{cases} -\frac{ ( x-s ) ^{n}}{n ( b-a ) }+\frac{x-a}{b-a} ( x-s ) ^{n-1}, & a\leq s\leq x\leq b,\\ -\frac{ ( x-s ) ^{n}}{n ( b-a ) }+\frac{x-b}{b-a} ( x-s ) ^{n-1}, & a\leq x< s\leq b. \end{cases} $$
and
$$\frac{d^{2}}{dx^{2}}T_{n} ( x,s ) = \textstyle\begin{cases} \frac{n-1}{b-a} [ ( x-s ) ^{n-2}+(n-2) ( x-a ) ( x-s ) ^{n-3} ] , & a\leq s\leq x\leq b,\\ \frac{n-1}{b-a} [ ( x-s ) ^{n-2}+(n-2) ( x-b ) ( x-s ) ^{n-3} ] , & a\leq x< s\leq b. \end{cases} $$
\(T_{n} ( \cdot,s ) \) is continuous for every \(n\geq2\) and convex for even n. Thus, by the weighted majorization theorem (Proposition 4), it satisfies inequality (2.3), and hence Theorem 3 provides (2.4) with 2n instead of n. Furthermore, we consider the conditions \(f^{ ( j ) } ( a ) \geq0\) and \(( -1 ) ^{j}f^{ ( j ) } ( b ) \geq0\) for each \(j=1,\ldots,2n-1\). By applying Proposition 4 with the continuous convex function \(f ( x ) = ( x-a ) ^{k+2}\), \(x\in [ a,b ] \), we have
$$\sum_{i=1}^{m}w_{i} ( y_{i}-a ) ^{k+2}\geq\sum_{i=1}^{m}w_{i} ( x_{i}-a ) ^{k+2}. $$
Since the continuous function \(f ( x ) = ( x-b ) ^{k+2}\), \(x\in [ a,b ] \), is convex if k is even and concave if k is odd, by the same proposition we have
$$\begin{aligned}& \sum_{i=1}^{m}w_{i} ( y_{i}-b ) ^{k+2} \geq\sum_{i=1}^{m}w_{i} ( x_{i}-b ) ^{k+2}\quad\text{if }k\text{ is even,} \\& \sum_{i=1}^{m}w_{i} ( y_{i}-b ) ^{k+2} \leq\sum_{i=1}^{m}w_{i} ( x_{i}-b ) ^{k+2}\quad\text{if }k\text{ is odd.} \end{aligned}$$
Now, using the assumptions \(f^{ ( j ) } ( a ) \geq0\) and \(( -1 ) ^{j}f^{ ( j ) } ( b ) \geq0\) for each \(j=1,\ldots,2n-1\), we see that
$$\begin{aligned}& \sum_{i=1}^{m}w_{i} \bigl[ f^{ ( k+1 ) } ( a ) \bigl[ ( y_{i}-a ) ^{k+2}- ( x_{i}-a ) ^{k+2} \bigr] -f^{ ( k+1 ) } ( b ) \bigl[ ( y_{i}-b ) ^{k+2}- ( x_{i}-b ) ^{k+2} \bigr] \bigr] \\ & \quad =f^{ ( k+1 ) } ( a ) \sum_{i=1}^{m}w_{i} \bigl[ ( y_{i}-a ) ^{k+2}- ( x_{i}-a ) ^{k+2} \bigr] -f^{ ( k+1 ) } ( b ) \sum_{i=1}^{m}w_{i} \bigl[ ( y_{i}-b ) ^{k+2}- ( x_{i}-b ) ^{k+2} \bigr] \end{aligned}$$
is nonnegative for each \(k=0,1,\ldots,2n-2\). Thus the right-hand side of (2.5) is nonnegative and (2.6) holds. □
Remark 7
Since \(\frac{d^{2}}{dx^{2}}T_{n} ( x,s ) \) is always positive in the case \(a\leq s\leq x\leq b\), \(T_{n}(x,s)\) cannot be concave, so reverse inequalities cannot be obtained in this way.
Also, if \(w_{i}=1\) for \(i=1,\ldots,m\), the result of the previous corollary holds for any \(\mathbf{x}, \mathbf{y} \in\mathbb{R}^{m}\) such that \(\mathbf {x}\prec\mathbf{y}\).
Its integral analogs are given as follows:
Theorem 4
Let all the assumptions of Theorem 2 hold with the additional condition
$$ \int_{\alpha}^{\beta}w ( t ) T_{n}\bigl(x(t),s \bigr)\,dt\leq\int_{\alpha }^{\beta}w ( t ) T_{n} \bigl(y(t),s\bigr)\,dt,\quad \forall s\in [ a,b],$$
(2.7)
where \(T_{n}(\cdot,s)\) is defined in Proposition 6. Then for every n-convex function \(f:I\rightarrow\mathbb{R}\) the following inequality holds:
$$\begin{aligned}& \int_{\alpha}^{\beta}w ( t ) f\bigl(y(t)\bigr)\,dt-\int _{\alpha}^{\beta }w ( t ) f\bigl(x(t)\bigr)\,dt \\ & \quad \geq\frac{1}{b-a} \Biggl[ \sum_{k=0}^{n-2} \frac{1}{k! ( k+2 ) }\int_{\alpha}^{\beta}w ( t ) \bigl[ f^{ ( k+1 ) } ( a ) \bigl[ \bigl( y(t)-a \bigr) ^{k+2}- \bigl( x(t)-a \bigr) ^{k+2} \bigr] \\ & \quad\quad{} -f^{ ( k+1 ) } ( b ) \bigl[ \bigl( y ( t ) -b \bigr) ^{k+2}- \bigl( x ( t ) -b \bigr) ^{k+2} \bigr] \bigr]\,dt \Biggr] . \end{aligned}$$
(2.8)
Proof
Since f is n-convex, we have \(f^{(n)}\geq0\). Using this fact together with (2.7) in (2.2), we easily arrive at the required result. □
Remark 8
If the reverse inequality holds in (2.7), then the reverse inequality holds in (2.8).
Corollary 2
Suppose all the assumptions from Theorem 2 hold. Additionally suppose that x and y are decreasing and satisfy conditions (1.7) and (1.8). If f is 2n-convex, then the following inequality holds:
$$\begin{aligned}& \int_{\alpha}^{\beta}w ( t ) f\bigl(y(t)\bigr)\,dt-\int _{\alpha}^{\beta }w ( t ) f\bigl(x(t)\bigr)\,dt \\ & \quad \geq\frac{1}{b-a} \Biggl[ \sum_{k=0}^{2n-2} \frac{1}{k! ( k+2 ) }\int_{\alpha}^{\beta}w ( t ) \bigl[ f^{ ( k+1 ) } ( a ) \bigl[ \bigl( y(t)-a \bigr) ^{k+2}- \bigl( x(t)-a \bigr) ^{k+2} \bigr] \\ & \quad\quad{} -f^{ ( k+1 ) } ( b ) \bigl[ \bigl( y ( t ) -b \bigr) ^{k+2}- \bigl( x ( t ) -b \bigr) ^{k+2} \bigr] \bigr]\,dt \Biggr]. \end{aligned}$$
(2.9)
Moreover, if \(f^{ ( j ) } ( a ) \geq0\) and \(( -1 ) ^{j}f^{ ( j ) } ( b ) \geq0\) for \(j=1,\ldots,2n-1\) then
$$\int_{\alpha}^{\beta}w ( t ) f\bigl(y(t)\bigr)\,dt\geq\int _{\alpha }^{\beta }w ( t ) f\bigl(x(t)\bigr)\,dt. $$
Proof
By using the same arguments as we have given in Corollary 1, we easily arrive at our required results simply by replacing (1.6) by (1.9), (2.3) by (2.7) and (2.4) by (2.8). □

3 Bounds for identities related to generalization of majorization inequality

Let \(g,h:[a,b]\rightarrow\mathbb{R}\) be two Lebesgue integrable functions. We consider the Čebyšev functional
$$ T(g,h)=\frac{1}{b-a}\int_{a}^{b}g(x)h(x)\,dx- \biggl( \frac{1}{b-a}\int_{a}^{b}g(x)\,dx \biggr) \biggl( \frac{1}{b-a}\int_{a}^{b}h(x)\,dx \biggr) . $$
(3.1)
The following results can be found in [10].
Proposition 7
Let \(g:[a,b]\rightarrow\mathbb{R}\) be a Lebesgue integrable function and \(h:[a,b]\rightarrow\mathbb{R}\) be an absolutely continuous function with \((\cdot-a)(b-\cdot)[h^{\prime}]^{2}\in L[a,b]\). Then we have the inequality
$$ \bigl|T(g,h)\bigr|\leq\frac{1}{\sqrt{2}} \biggl( \frac{1}{b-a}\bigl|T(g,g)\bigr|\int _{a}^{b}(x-a) (b-x) \bigl[h^{\prime}(x)\bigr]^{2}\,dx \biggr) ^{1/2}. $$
(3.2)
The constant \(\frac{1}{\sqrt{2}}\) in (3.2) is the best possible.
Proposition 8
Let \(h : [a, b] \to\mathbb{R}\) be a monotonic nondecreasing function and let \(g : [a, b] \to\mathbb{R}\) be an absolutely continuous function such that \(g^{\prime}\in L_{\infty}[a, b]\). Then we have the inequality
$$ \bigl|T(g,h)\bigr|\le\frac{1}{2(b-a)}\bigl\| g^{\prime}\bigr\| _{\infty} \int_{a}^{b}(x-a) (b-x)\,dh(x). $$
(3.3)
The constant \(\frac{1}{2}\) in (3.3) is the best possible.
Now, by using the aforementioned results, we obtain generalizations of the results proved in the previous section.
For m-tuples \(w=(w_{1},\ldots,w_{m})\), \(x=(x_{1},\ldots,x_{m})\), and \(y=(y_{1},\ldots,y_{m})\) with \(x_{i},y_{i}\in[ a,b]\), \(w_{i}\in\mathbb{R}\) (\(i=1,\ldots,m\)), and the function \(T_{n}\) defined as in (1.11), denote
$$ \delta(s)=\sum_{i=1}^{m}w_{i}T_{n}(y_{i},s)- \sum_{i=1}^{m}w_{i}T_{n}(x_{i},s),\quad \forall s\in [ a,b].$$
(3.4)
Similarly for continuous functions \(x,y:[\alpha,\beta]\rightarrow [ a,b]\) and \(w:[\alpha,\beta]\rightarrow\mathbb{R}\), denote
$$ \Delta(s)=\int_{\alpha}^{\beta}w ( t ) T_{n} \bigl(y(t),s\bigr)\,dt-\int_{\alpha}^{\beta}w ( t ) T_{n}\bigl(x(t),s\bigr)\,dt,\quad \forall s\in [ a,b].$$
(3.5)
Using these notations, we define the corresponding Čebyšev functionals as follows:
$$\begin{aligned}& T(\delta,\delta) =\frac{1}{b-a}\int_{a}^{b} \delta^{2}(s)\,ds- \biggl( \frac{1}{b-a}\int_{a}^{b} \delta(s)\,ds \biggr) ^{2}, \\& T(\Delta,\Delta) =\frac{1}{b-a}\int_{a}^{b} \Delta^{2}(s)\,ds- \biggl( \frac{1}{b-a}\int_{a}^{b} \Delta(s)\,ds \biggr) ^{2}. \end{aligned}$$
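As a numerical illustration (ours, not part of the paper), \(\delta(s)\) and \(T(\delta,\delta)\) can be approximated by quadrature; note that \(T(g,g)\) is the variance of g with respect to the uniform measure on \([a,b]\), hence nonnegative:

```python
def T_kernel(n, x, s, a, b):
    """The kernel T_n from (1.11)."""
    if s <= x:
        return -(x - s)**n / (n * (b - a)) + (x - a) / (b - a) * (x - s)**(n - 1)
    return -(x - s)**n / (n * (b - a)) + (x - b) / (b - a) * (x - s)**(n - 1)

def delta(s, n, w, xs, ys, a, b):
    """delta(s) from (3.4)."""
    return sum(wi * (T_kernel(n, yi, s, a, b) - T_kernel(n, xi, s, a, b))
               for wi, xi, yi in zip(w, xs, ys))

def cheb_T(g, a, b, m=4000):
    """T(g, g) from (3.1), approximated by the midpoint rule."""
    pts = [a + (i + 0.5) * (b - a) / m for i in range(m)]
    mean_g = sum(g(p) for p in pts) / m
    mean_g2 = sum(g(p) ** 2 for p in pts) / m
    return mean_g2 - mean_g ** 2

a, b, n = 0.0, 1.0, 2
w, xs, ys = [1.0, 1.0], [0.4, 0.5], [0.3, 0.6]
Tdd = cheb_T(lambda s: delta(s, n, w, xs, ys, a, b), a, b)
# A discrete variance, so nonnegative (up to floating-point rounding);
# this keeps the square root in the bound (3.7) real.
assert Tdd >= -1e-12
```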
Now, we are ready to state the main results of this section:
Theorem 5
Let \(n\in\mathbb{N}\), \(f:[a,b]\rightarrow\mathbb{R}\) be such that \(f^{(n)}\) is an absolutely continuous function with \((\cdot-a)(b-\cdot)[f^{(n+1)}]^{2}\in L[a,b]\) and \(x_{i},y_{i}\in [ a,b]\), \(w_{i}\in\mathbb{R}\) (\(i=1,2,\ldots,m\)), and let the functions \(T_{n}\), T, and δ be defined in (1.11), (3.1) and (3.4), respectively. Then we have
$$\begin{aligned}& \sum_{i=1}^{m}w_{i}f ( y_{i} ) -\sum_{i=1}^{m}w_{i}f ( x_{i} ) \\& \quad =\frac{1}{b-a}\sum_{i=1}^{m}w_{i} \Biggl[ \sum_{k=0}^{n-2}\frac {1}{k! ( k+2 ) } \bigl[ f^{ ( k+1 ) } ( a ) \bigl[ ( y_{i}-a ) ^{k+2}- ( x_{i}-a ) ^{k+2} \bigr] \\& \quad\quad{}-f^{ ( k+1 ) } ( b ) \bigl[ ( y_{i}-b ) ^{k+2}- ( x_{i}-b ) ^{k+2} \bigr] \bigr] \Biggr] \\& \quad\quad{} +\frac{ [ f^{(n-1)}(b)-f^{(n-1)}(a) ] }{(n-1)!(b-a)}\int_{a}^{b} \delta(s)\,ds+R_{n}^{1}(f;a,b), \end{aligned}$$
(3.6)
where the remainder \(R_{n}^{1}(f;a,b)\) satisfies the estimation
$$ \bigl|R_{n}^{1}(f;a,b)\bigr|\leq\frac{1}{(n-1)!} \biggl( \frac{b-a}{2}\biggl\vert T(\delta,\delta)\int_{a}^{b}(s-a) (b-s)\bigl[f^{(n+1)}(s)\bigr]^{2}\,ds\biggr\vert \biggr) ^{1/2}.$$
(3.7)
Proof
If we apply Proposition 7 for \(g\rightarrow\delta\) and \(h\rightarrow f^{(n)}\), then we obtain
$$\begin{aligned}& \biggl\vert \frac{1}{b-a}\int_{a}^{b} \delta(s)f^{(n)}(s)\,ds- \biggl( \frac {1}{b-a}\int_{a}^{b} \delta(s)\,ds \biggr) \biggl( \frac{1}{b-a}\int_{a}^{b}f^{(n)}(s)\,ds \biggr) \biggr\vert \\& \quad \leq\frac{1}{\sqrt{2}} \biggl( \frac{1}{b-a}\bigl|T(\delta,\delta)\bigr|\int _{a}^{b}(s-a) (b-s) \bigl[f^{(n+1)}(s)\bigr]^{2}\,ds \biggr) ^{1/2}. \end{aligned}$$
Therefore we have
$$\begin{aligned} \frac{1}{(n-1)!(b-a)}\int_{a}^{b} \delta(s)f^{(n)}(s)\,ds =&\frac{ [ f^{(n-1)}(b)-f^{(n-1)}(a) ] }{(n-1)!(b-a)^{2}}\int_{a}^{b} \delta (s)\,ds +\frac{1}{b-a}R_{n}^{1}(f;a,b), \end{aligned}$$
where \(R_{n}^{1}(f;a,b)\) satisfies inequality (3.7). Now from identity (2.1) we obtain (3.6). □
Here we state the integral version of the previous theorem.
Theorem 6
Let \(f:[a,b]\rightarrow\mathbb{R}\) be such that \(f\in C^{n}[a,b]\) for some \(n\in\mathbb{N}\) with \((\cdot-a)(b-\cdot)[f^{(n+1)}]^{2}\in L[a,b]\), let \(x,y:[\alpha ,\beta]\rightarrow[ a,b]\) and \(w:[\alpha,\beta]\rightarrow\mathbb{R}\) be continuous functions, and let the functions \(T_{n}\), T, and Δ be defined in (1.11), (3.1), and (3.5), respectively. Then we have
$$\begin{aligned}& \int_{\alpha}^{\beta}w ( t ) f\bigl(y(t)\bigr)\,dt-\int _{\alpha}^{\beta }w ( t ) f\bigl(x(t)\bigr)\,dt \\& \quad =\frac{1}{b-a} \Biggl[ \sum_{k=0}^{n-2} \frac{1}{k! ( k+2 ) }\int_{\alpha}^{\beta}w ( t ) \bigl[ f^{ ( k+1 ) } ( a ) \bigl[ \bigl( y(t)-a \bigr) ^{k+2}- \bigl( x(t)-a \bigr) ^{k+2} \bigr] \\& \quad\quad{} -f^{ ( k+1 ) } ( b ) \bigl[ \bigl( y ( t ) -b \bigr) ^{k+2}- \bigl( x ( t ) -b \bigr) ^{k+2} \bigr] \bigr]\,dt \Biggr] \\& \quad\quad{} +\frac{ [ f^{(n-1)}(b)-f^{(n-1)}(a) ] }{(n-1)!(b-a)}\int_{a}^{b} \Delta(s)\,ds+R_{n}^{2}(f;a,b), \end{aligned}$$
(3.8)
where the remainder \(R_{n}^{2}(f;a,b)\) satisfies the estimation
$$ \bigl|R_{n}^{2}(f;a,b)\bigr|\leq\frac{1}{(n-1)!} \biggl( \frac{b-a}{2}\biggl\vert T(\Delta,\Delta)\int_{a}^{b}(s-a) (b-s)\bigl[f^{(n+1)}(s)\bigr]^{2}\,ds\biggr\vert \biggr) ^{1/2}.$$
(3.9)
Proof
This result easily follows by proceeding as in the proof of the previous theorem and by replacing (2.1) by (2.2). □
By using Proposition 8 we obtain the following Grüss type inequality.
Theorem 7
Let \(f:[a,b]\rightarrow\mathbb{R}\) be such that \(f\in C^{n}[a,b]\) for \(n\in\mathbb{N}\) with \(f^{(n+1)}\geq0\) on \([a,b]\) and let the functions T and δ be defined in (3.1) and (3.4), respectively. Then we have the representation (3.6) and the remainder \(R_{n}^{1}(f;a,b)\) satisfies the following condition:
$$ \bigl|R_{n}^{1}(f;a,b)\bigr|\leq\frac{1}{(n-1)!}\bigl\Vert \delta^{\prime}\bigr\Vert _{\infty } \biggl[ \frac{b-a}{2} \bigl[ f^{(n-1)}(b)+f^{(n-1)}(a) \bigr] - \bigl[ f^{(n-2)}(b)-f^{(n-2)}(a) \bigr] \biggr] .$$
(3.10)
Proof
If we apply Proposition 8 for \(g\rightarrow\delta\) and \(h\rightarrow f^{(n)}\), then we obtain
$$\begin{aligned}& \biggl\vert \frac{1}{b-a}\int_{a}^{b} \delta(s)f^{(n)}(s)\,ds- \biggl( \frac {1}{b-a}\int_{a}^{b} \delta(s)\,ds \biggr) \biggl( \frac{1}{b-a}\int_{a}^{b}f^{(n)}(s)\,ds \biggr) \biggr\vert \\& \quad\leq\frac{1}{2(b-a)}\bigl\Vert \delta^{\prime}\bigr\Vert _{\infty}\int _{a}^{b}(s-a) (b-s)f^{(n+1)}(s)\,ds. \end{aligned}$$
Note that
$$\begin{aligned}& \int_{a}^{b}(s-a) (b-s)f^{(n+1)}(s)\,ds \\& \quad= \int_{a}^{b}(2s-a-b)f^{(n)}(s)\,ds \\& \quad = (b-a) \bigl[ f^{(n-1)}(b)+f^{(n-1)}(a) \bigr] -2 \bigl[ f^{(n-2)}(b)-f^{(n-2)}(a) \bigr] . \end{aligned}$$
(3.11)
Therefore, by using the identities (2.1) and (3.11) we deduce (3.10). □
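The integration by parts behind (3.11) can also be verified numerically; the following sketch (ours, not from the paper; `check_311` is a hypothetical helper name) checks it for \(f(t)=t^{5}\) with \(n=3\):

```python
def check_311(f_nm2, f_nm1, f_np1, a, b, m=20000):
    """Compare both sides of (3.11); f_nm2, f_nm1, f_np1 play the roles
    of f^{(n-2)}, f^{(n-1)}, f^{(n+1)}."""
    h = (b - a) / m
    integral = h * sum((s - a) * (b - s) * f_np1(s)
                       for s in (a + (i + 0.5) * h for i in range(m)))
    closed = (b - a) * (f_nm1(b) + f_nm1(a)) - 2.0 * (f_nm2(b) - f_nm2(a))
    return integral, closed

# f(t) = t^5 with n = 3 on [0, 2]:
# f^{(n-2)} = f' = 5t^4, f^{(n-1)} = f'' = 20t^3, f^{(n+1)} = f'''' = 120t.
lhs, rhs = check_311(lambda t: 5 * t**4, lambda t: 20 * t**3,
                     lambda t: 120 * t, 0.0, 2.0)
assert abs(lhs - rhs) < 1e-4  # both sides equal 160 here
```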
The integral version of the above theorem reads as follows.
Theorem 8
Let \(f:[a,b]\rightarrow\mathbb{R}\) be such that \(f\in C^{n}[a,b]\) for \(n\in\mathbb{N}\) with \(f^{(n+1)}\geq0\) on \([a,b]\) and let the functions T and Δ be defined in (3.1) and (3.5), respectively. Then we have the representation (3.8) and the remainder \(R_{n}^{2}(f;a,b)\) satisfies the following condition:
$$\bigl|R_{n}^{2}(f;a,b)\bigr|\leq\frac{1}{(n-1)!}\bigl\Vert \Delta^{\prime}\bigr\Vert _{\infty } \biggl[ \frac{b-a}{2} \bigl[ f^{(n-1)}(b)+f^{(n-1)}(a) \bigr] - \bigl[ f^{(n-2)}(b)-f^{(n-2)}(a) \bigr] \biggr] . $$
Here, the symbol \(L_{p} [ a,b ] \) (\(1\leq p<\infty \)) denotes the space of p-power integrable functions on the interval \([ a,b ] \) equipped with the norm
$$\Vert f\Vert _{p}= \biggl( \int_{a}^{b} \bigl\vert f ( t ) \bigr\vert ^{p}\,dt \biggr) ^{\frac{1}{p}}$$
and \(L_{\infty} [ a,b ] \) denotes the space of essentially bounded functions on \([ a,b ] \) with the norm
$$\Vert f\Vert _{\infty}=\mathop{\operatorname{ess\,sup}}_{t\in [ a,b ] }\bigl\vert f ( t ) \bigr\vert . $$
Now we state some Ostrowski-type inequalities related to the generalized majorization inequalities.
Theorem 9
Let all the assumptions of Theorem 1 hold. Furthermore, let \((p,q)\) be a pair of conjugate exponents, that is, \(1\leq p,q\leq\infty\), \(\frac{1}{p} +\frac{1}{q}=1\). Let \(f^{(n)}\in L_{p} [ a,b ] \) for some \(n\in\mathbb{N}\), \(n>1\). Then we have
$$\begin{aligned}& \Biggl\vert \sum_{i=1}^{m}w_{i}f ( y_{i} ) -\sum_{i=1}^{m}w_{i}f ( x_{i} ) -\frac{1}{b-a}\sum_{i=1}^{m}w_{i} \Biggl[ \sum_{k=0}^{n-2}\frac{1}{k! ( k+2 ) } \bigl[ f^{ ( k+1 ) } ( a ) \\& \quad\quad{} \times \bigl[ ( y_{i}-a ) ^{k+2}- ( x_{i}-a ) ^{k+2} \bigr] -f^{ ( k+1 ) } ( b ) \bigl[ ( y_{i}-b ) ^{k+2}- ( x_{i}-b ) ^{k+2} \bigr] \bigr] \Biggr] \Biggr\vert \\& \quad \leq\frac{1}{(n-1)!}\bigl\Vert f^{(n)}\bigr\Vert _{p}\Biggl\Vert \sum_{i=1}^{m}w_{i} \bigl( T_{n} ( y_{i},\cdot ) -T_{n} ( x_{i},\cdot ) \bigr) \Biggr\Vert _{q}. \end{aligned}$$
(3.12)
The constant on the right-hand side of (3.12) is sharp for \(1< p\leq\infty\) and the best possible for \(p=1\).
Proof
Let us denote
$$\lambda(s)=\frac{1}{(n-1)!}\sum_{i=1}^{m}w_{i} \bigl[ T_{n} ( y_{i},s ) -T_{n} ( x_{i},s ) \bigr]. $$
Now, by using identity (2.1) and applying Hölder’s inequality we obtain
$$\begin{aligned}& \Biggl\vert \sum_{i=1}^{m}w_{i}f ( y_{i} ) -\sum_{i=1}^{m}w_{i}f ( x_{i} ) -\frac{1}{b-a}\sum_{i=1}^{m}w_{i} \Biggl[ \sum_{k=0}^{n-2}\frac{1}{k! ( k+2 ) } \bigl[ f^{ ( k+1 ) } ( a ) \\& \quad\quad{} \times \bigl[ ( y_{i}-a ) ^{k+2}- ( x_{i}-a ) ^{k+2} \bigr] -f^{ ( k+1 ) } ( b ) \bigl[ ( y_{i}-b ) ^{k+2}- ( x_{i}-b ) ^{k+2} \bigr] \bigr] \Biggr] \Biggr\vert \\& \quad =\biggl\vert \int_{a}^{b} \lambda(s)f^{(n)}(s)\,ds\biggr\vert \leq\bigl\Vert f^{(n)} \bigr\Vert _{p}\Vert\lambda\Vert_{q}. \end{aligned}$$
(3.13)
For the proof of the sharpness of the constant \(( \int_{a}^{b}\vert \lambda(s)\vert ^{q}\,ds ) ^{1/q}\), let us find a function f for which equality in (3.13) is attained.
For \(1< p<\infty\) take f to be such that
$$f^{(n)}(s)=\operatorname{sgn}\lambda(s)\cdot\bigl|\lambda(s)\bigr|^{1/(p-1)}. $$
For \(p=\infty\), take f such that
$$f^{(n)}(s)=\operatorname{sgn}\lambda(s). $$
Finally, for \(p=1\), we prove that
$$ \biggl\vert \int_{a}^{b}\lambda(s)f^{(n)}(s)\,ds \biggr\vert \leq\max_{s\in [ a,b]}\bigl\vert \lambda(s)\bigr\vert \int_{a}^{b}f^{(n)}(s)\,ds $$
(3.14)
is the best possible inequality.
The function \(T_{n} ( x,\cdot ) \) has a jump of −1 at the point x for \(n=1\). For \(n\geq2\) it is continuous, and thus \(\lambda(s)\) is continuous. Suppose that \(\vert \lambda(s)\vert \) attains its maximum at \(s_{0}\in[ a,b]\). First we consider the case \(\lambda(s_{0})>0\). For ϵ small enough we define \(f_{\epsilon}(s)\) by
$$ f_{\epsilon}(s)= \textstyle\begin{cases} 0 , & a\leq s\leq s_{0},\\ \frac{1}{\epsilon n!}(s-s_{0})^{n} , & s_{0}\leq s\leq s_{0}+\epsilon,\\ \frac{1}{n!}(s-s_{0})^{n-1} , & s_{0}+\epsilon\leq s\leq b. \end{cases} $$
(3.15)
So, we have
$$\biggl\vert \int_{a}^{b}\lambda(s)f_{\epsilon}^{(n)}(s)\,ds \biggr\vert =\biggl\vert \int_{s_{0}}^{s_{0}+\epsilon} \lambda(s)\frac{1}{\epsilon }\,ds\biggr\vert =\frac{1}{\epsilon}\int _{s_{0}}^{s_{0}+\epsilon}\lambda(s)\,ds. $$
Now from inequality (3.14) we have
$$\frac{1}{\epsilon}\int_{s_{0}}^{s_{0}+\epsilon}\lambda(s)\,ds\leq \lambda (s_{0})\frac{1}{\epsilon}\int_{s_{0}}^{s_{0}+\epsilon}\,ds= \lambda(s_{0}). $$
Since
$$\lim_{\epsilon\rightarrow0}\frac{1}{\epsilon}\int_{s_{0}}^{s_{0}+\epsilon } \lambda(s)\,ds=\lambda(s_{0}) $$
the statement follows.
In the case \(\lambda(s_{0})<0\), we define \(f_{\epsilon}(s)\) by
$$ f_{\epsilon}(s)= \textstyle\begin{cases} \frac{1}{ n!}(s-s_{0}-\epsilon)^{n-1} , & a\le s \le s_{0},\\ -\frac{1}{\epsilon n!}(s-s_{0}-\epsilon)^{n} , & s_{0}\le s \le s_{0}+\epsilon,\\ 0 , & s_{0}+\epsilon\le s \le b, \end{cases} $$
(3.16)
and the rest of the proof is the same as above. □
The integral case of the above theorem can be given as follows.
Theorem 10
Let all the assumptions of Theorem 2 hold. Furthermore, let \((p,q)\) be a pair of conjugate exponents, that is, \(1\leq p,q\leq\infty\), \(\frac{1}{p} +\frac{1}{q}=1\). Let \(f^{(n)}\in L_{p} [ a,b ] \) for some \(n\in\mathbb{N}\). Then we have
$$\begin{aligned}& \Biggl\vert \int_{\alpha}^{\beta}w ( t ) f\bigl(y(t)\bigr)\,dt- \int_{\alpha}^{\beta}w ( t ) f\bigl(x(t)\bigr)\,dt \\& \quad\quad{} -\frac{1}{b-a} \Biggl[ \sum_{k=0}^{n-2} \frac{1}{k! ( k+2 ) !}\int_{\alpha}^{\beta}w ( t ) \bigl[ f^{ ( k+1 ) } ( a ) \bigl[ \bigl( y(t)-a \bigr) ^{k+2}- \bigl( x(t)-a \bigr) ^{k+2} \bigr] \\& \quad\quad{} -f^{ ( k+1 ) } ( b ) \bigl[ \bigl( y ( t ) -b \bigr) ^{k+2}- \bigl( x ( t ) -b \bigr) ^{k+2} \bigr] \bigr]\,dt \Biggr] \Biggr\vert \\& \quad \leq\frac{1}{(n-1)!}\bigl\Vert f^{(n)}\bigr\Vert _{p}\biggl\Vert \int_{\alpha }^{\beta }w(t) \bigl( T_{n} \bigl( y(t),s \bigr) -T_{n} \bigl( x(t),s \bigr) \bigr)\,dt\biggr\Vert _{q}. \end{aligned}$$
(3.17)
The constant on the right-hand side of (3.17) is sharp for \(1< p\leq\infty\) and the best possible for \(p=1\).
For use in the next two sections, we give the following constructions. Under the assumptions of Theorem 3, using (2.4), and of Theorem 4, using (2.7), we define the following functionals, respectively:
$$\begin{aligned}& \Lambda_{1}(f) = \sum_{i=1}^{m}w_{i}f ( y_{i} ) -\sum_{i=1}^{m}w_{i}f ( x_{i} ) -\frac{1}{b-a}\sum_{i=1}^{m}w_{i} \Biggl[ \sum_{k=0}^{n-2}\frac{1}{k! ( k+2 ) !} \bigl[ f^{ ( k+1 ) } ( a ) \\& \hphantom{\Lambda_{1}(f) =}{} \times \bigl[ ( y_{i}-a ) ^{k+2}- ( x_{i}-a ) ^{k+2} \bigr] -f^{ ( k+1 ) } ( b ) \bigl[ ( y_{i}-b ) ^{k+2}- ( x_{i}-b ) ^{k+2} \bigr] \bigr] \Biggr] , \end{aligned}$$
(A1)
$$\begin{aligned}& \Lambda_{2}(f)=\int_{\alpha}^{\beta}w ( t ) f \bigl(y(t)\bigr)\,dt-\int_{\alpha}^{\beta}w ( t ) f \bigl(x(t)\bigr)\,dt \\& \hphantom{\Lambda_{2}(f)=}{} -\frac{1}{b-a} \Biggl[ \sum_{k=0}^{n-2} \frac{1}{k! ( k+2 ) !}\int_{\alpha}^{\beta}w ( t ) \bigl[ f^{ ( k+1 ) } ( a ) \bigl[ \bigl( y(t)-a \bigr) ^{k+2}- \bigl( x(t)-a \bigr) ^{k+2} \bigr] \\& \hphantom{\Lambda_{2}(f)=}{} -f^{ ( k+1 ) } ( b ) \bigl[ \bigl( y ( t ) -b \bigr) ^{k+2}- \bigl( x ( t ) -b \bigr) ^{k+2} \bigr] \bigr]\,dt \Biggr] . \end{aligned}$$
(A2)
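Although the paper is purely analytic, the discrete functional (A1) is straightforward to evaluate numerically. The following Python sketch (a hypothetical helper, not part of the paper; the derivatives \(f^{(j)}\) are passed in explicitly) implements \(\Lambda_{1}\) and checks that it annihilates polynomials of degree less than n, as the identity (2.4) behind it implies:

```python
import math

def Lambda1(f, derivs, x, y, w, a, b, n):
    """Evaluate the functional (A1); derivs[j] must be the j-th
    derivative of f (only derivs[1], ..., derivs[n-1] are used)."""
    main = sum(wi * (f(yi) - f(xi)) for wi, xi, yi in zip(w, x, y))
    corr = 0.0
    for wi, xi, yi in zip(w, x, y):
        inner = 0.0
        for k in range(n - 1):  # k = 0, ..., n-2
            c = 1.0 / (math.factorial(k) * math.factorial(k + 2))
            inner += c * (derivs[k + 1](a)
                          * ((yi - a) ** (k + 2) - (xi - a) ** (k + 2))
                          - derivs[k + 1](b)
                          * ((yi - b) ** (k + 2) - (xi - b) ** (k + 2)))
        corr += wi * inner
    return main - corr / (b - a)

# Identity (2.4) has a remainder involving f^(n), so Lambda1 vanishes on
# polynomials of degree < n; check with f(x) = x and n = 2:
lin = lambda u: u
val = Lambda1(lin, [lin, lambda u: 1.0], [1.0, 2.0], [0.5, 2.5],
              [0.3, 0.7], 0.0, 3.0, 2)
assert abs(val) < 1e-9
```

The weights, points, and interval above are arbitrary sample data; the vanishing on degree-one polynomials holds for any choice.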

4 Mean value theorems

Now we give mean value theorems for \(\Lambda_{k}\), \(k\in\{1,2\}\). Here \(f_{0}(x)=\frac{x^{n}}{n!}\).
Theorem 11
Let \(f\in C^{n}[a,b]\) and let \(\Lambda _{k}:C^{n}[a,b]\rightarrow \mathbb{R}\) for \(k\in\{1,2\}\) be linear functionals as defined in (A1) and (A2), respectively. Then there exists \(\xi_{k} \in [a,b]\) for \(k\in\{1,2\}\) such that
$$ \Lambda_{k}(f)=f^{(n)}(\xi_{k}) \Lambda_{k}(f_{0}),\quad k\in\{1,2\}. $$
(4.1)
Proof
Since \(f^{(n)}\) is continuous on \([a,b]\), we have \(L\le f^{(n)}(x)\le M\) for \(x\in[a,b]\), where \(L=\min_{x\in[a,b]}f^{(n)}(x)\) and \(M=\max_{x\in[a,b]}f^{(n)}(x)\).
Therefore the function
$$F(x)=M\frac{x^{n}}{n!}-f(x)=Mf_{0}(x)-f(x) $$
gives us
$$F^{(n)}(x)=M-f^{(n)}(x)\geq0 $$
i.e., F is an n-convex function. Hence \(\Lambda_{k}(F)=M\Lambda_{k}(f_{0})-\Lambda_{k}(f)\geq0\) by linearity, and we conclude that for \(k\in\{1,2\}\)
$$\Lambda_{k}(f)\leq M\Lambda_{k}(f_{0}). $$
Similarly, for \(k\in\{1,2\}\) we have
$$L\Lambda_{k}(f_{0})\leq\Lambda_{k}(f). $$
Combining the two inequalities we get
$$L\Lambda_{k}(f_{0})\leq\Lambda_{k}(f)\leq M \Lambda_{k}(f_{0}), $$
Since \(f^{(n)}\) is continuous, it attains every value in \([L,M]\); hence, when \(\Lambda_{k}(f_{0})\neq0\), there is \(\xi_{k}\in[a,b]\) with \(f^{(n)}(\xi_{k})=\Lambda_{k}(f)/\Lambda_{k}(f_{0})\), while for \(\Lambda_{k}(f_{0})=0\) both sides of (4.1) vanish. This gives us (4.1). □
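Theorem 11 can be illustrated numerically (a sketch, not part of the proof) by replacing \(\Lambda_{k}\) with the simplest linear functional that is nonnegative on n-convex functions: a second-order divided difference, i.e. the case n = 2. The ratio \(\Lambda(f)/\Lambda(f_{0})\) is then a value of \(f''\):

```python
import math

# Stand-in functional: Lambda(f) = [z0, z1, z2; f], a second-order divided
# difference, which (like (A1)/(A2)) is linear and nonnegative on 2-convex,
# i.e. convex, functions.
def divdiff2(f, z0, z1, z2):
    return (f(z0) / ((z0 - z1) * (z0 - z2))
            + f(z1) / ((z1 - z0) * (z1 - z2))
            + f(z2) / ((z2 - z0) * (z2 - z1)))

f = math.exp                    # f'' = exp > 0 on R
f0 = lambda u: u * u / 2.0      # f0(x) = x^n / n!  with n = 2
z = (0.0, 0.7, 1.5)

ratio = divdiff2(f, *z) / divdiff2(f0, *z)   # should equal f''(xi)
assert math.exp(0.0) <= ratio <= math.exp(1.5)
xi = math.log(ratio)            # f'' = exp is invertible here, cf. (4.2)
assert 0.0 <= xi <= 1.5
```

The last two lines recover the intermediate point \(\xi\in[a,b]\), exactly as in the generalized means of Remark 9.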
Theorem 12
Let \(f,g\in C^{n}[a,b]\) and let \(\Lambda_{k}:C^{n}[a,b]\rightarrow\mathbb{R}\) for \(k\in\{1,2\}\) be the linear functionals as defined in (A1) and (A2), respectively. Then there exists \(\xi_{k} \in [a,b]\) for \(k\in\{1,2\}\) such that
$$\frac{\Lambda_{k}(f)}{\Lambda_{k}(g)}=\frac{f^{(n)}(\xi _{k})}{g^{(n)}(\xi _{k})}$$
assuming that both denominators are non-zero.
Proof
Fix \(k\in\{1,2\}\). Let \(h\in C^{n}[a,b]\) be defined as
$$h=\Lambda_{k}(g)f-\Lambda_{k}(f)g. $$
By Theorem 11 there exists \(\xi_{k}\in[a,b]\) such that
$$0=\Lambda_{k}(h)=h^{(n)}(\xi_{k}) \Lambda_{k}(f_{0}) $$
or
$$\bigl[\Lambda_{k}(g)f^{(n)}(\xi_{k})- \Lambda_{k}(f)g^{(n)}(\xi_{k})\bigr] \Lambda_{k}(f_{0})=0, $$
which gives us the required result. □
Remark 9
If the inverse of \(\frac{f^{(n)}}{g^{(n)}}\) exists, then from the above mean value theorems we can give the generalized means,
$$ \xi_{k}= \biggl( \frac{f^{(n)}}{g^{(n)}} \biggr) ^{-1} \biggl( \frac{ \Lambda_{k}(f)}{ \Lambda_{k}(g)} \biggr) ,\quad k\in\{1,2\}. $$
(4.2)

5 Log-convexity and n-exponential convexity

5.1 Logarithmically convex functions

A number of important inequalities arise from the logarithmic convexity of some functions as one can see in [6].
Now, we recall some definitions. The following definition was originally given by Jensen in 1906 [11]. Here I is an interval in \(\mathbb{R}\).
Definition 5
A function \(f:I\to{\mathbb{R}}_{+}\) is called log-convex in the J-sense if the inequality
$$f^{2} \biggl( {\frac{x_{1}+x_{2}}{2}} \biggr) \leq f ( x_{1} ) f ( x_{2} ) $$
holds for each \(x_{1},x_{2} \in I\).
Definition 6
([1], p.7)
A function \(f:I\to\mathbb{R}_{+}\) is called log-convex if the inequality
$$f\bigl(\lambda x_{1}+(1-\lambda)x_{2}\bigr)\leq{ \bigl[f(x_{1})\bigr]}^{\lambda} {\bigl[f(x_{2}) \bigr]}^{(1-\lambda)} $$
holds for each \(x_{1}, x_{2} \in I\) and \(\lambda\in[0,1]\).
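As a quick numerical sanity check of Definition 6 (assuming nothing beyond the definition itself): for \(f(x)=e^{x^{2}}\), whose logarithm \(x^{2}\) is convex, the log-convexity inequality holds at randomly sampled points:

```python
import math
import random

# Definition 6 checked for f(x) = exp(x^2): log f(x) = x^2 is convex,
# so f should be log-convex on R.
f = lambda x: math.exp(x * x)

random.seed(0)
for _ in range(1000):
    x1, x2 = random.uniform(-2, 2), random.uniform(-2, 2)
    lam = random.random()
    lhs = f(lam * x1 + (1 - lam) * x2)
    rhs = f(x1) ** lam * f(x2) ** (1 - lam)
    assert lhs <= rhs * (1 + 1e-12)   # small multiplicative tolerance
```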
Remark 10
A function that is log-convex in the J-sense is log-convex provided it is also continuous.

5.2 n-Exponentially convex functions

Bernstein [12] and Widder [13] independently introduced an important subclass of convex functions, the class of exponentially convex functions on a given open interval, and studied some of its properties. Pečarić and Perić in [14] introduced the notion of n-exponentially convex functions, which is in fact a generalization of the concept of exponentially convex functions. In the present subsection, we discuss this notion of n-exponential convexity, describing the related definitions and some important results, with some remarks, from [14].
Definition 7
A function \(f : I \rightarrow\mathbb{R}\) is n-exponentially convex in the J-sense if the inequality
$$\sum_{i, j =1}^{n} u_{i} u_{j} f \biggl( \frac{t_{i} + t_{j}}{2} \biggr) \geq0 $$
holds for each \(t_{i} \in I\) and \(u_{i} \in\mathbb{R}\), \(i\in\{1, \dots , n\}\).
Definition 8
A function \(f: I \to\mathbb{R}\) is n-exponentially convex if it is n-exponentially convex in the J-sense and continuous on I.
Remark 11
We can see from the definition that 1-exponentially convex functions in the J-sense are in fact nonnegative functions. Also, n-exponentially convex functions in the J-sense are k-exponentially convex in the J-sense for every \(k\in\mathbb{N}\) such that \(k\leq n\).
Definition 9
A function \(f: I \rightarrow\mathbb{R}\) is exponentially convex in the J-sense, if it is n-exponentially convex in the J-sense for each \(n\in\mathbb{N}\).
Remark 12
A function \(f : I \to\mathbb{R}\) is exponentially convex if it is exponentially convex in the J-sense and continuous on I.
Proposition 9
If function \(f:I\to\mathbb{R}\) is n-exponentially convex in the J-sense, then the matrix
$$\biggl[ f \biggl( \frac{t_{i}+t_{j}}{2} \biggr) \biggr] _{i, j=1}^{m}$$
is positive-semidefinite. Particularly
$$\det \biggl[ f \biggl( \frac{t_{i}+t_{j}}{2} \biggr) \biggr] _{i, j=1}^{m} \geq0 $$
for each \(m \in\mathbb{N}\), \(m \leq n\) and \(t_{i}\in I\) for \(i\in\{ 1,\ldots,m\}\).
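Proposition 9 can be checked numerically for the classical example \(f(t)=e^{t}\), which is exponentially convex in the J-sense since \(\sum_{i,j}u_{i}u_{j}e^{(t_{i}+t_{j})/2}= (\sum_{i}u_{i}e^{t_{i}/2} )^{2}\geq0\). A sketch using NumPy (not part of the paper; the points \(t_{i}\) are arbitrary):

```python
import numpy as np

# Proposition 9 for f(t) = e^t: the matrix [f((t_i + t_j)/2)] should be
# positive-semidefinite (here it is even the rank-1 Gram matrix of e^{t/2}).
t = np.array([0.1, 0.5, 1.2, 2.0])
M = np.exp((t[:, None] + t[None, :]) / 2.0)

eigs = np.linalg.eigvalsh(M)
assert eigs.min() >= -1e-10           # positive-semidefinite up to rounding
assert np.linalg.det(M) >= -1e-10     # hence the determinant is >= 0
```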
Corollary 3
If function \(f:I\to\mathbb{R}\) is exponentially convex, then the matrix
$$\biggl[ f \biggl( \frac{t_{i} + t_{j}}{2} \biggr) \biggr] _{i,j=1}^{m}$$
is positive-semidefinite. Particularly
$$\det \biggl[ f \biggl( \frac{t_{i} + t_{j}}{2} \biggr) \biggr] _{i,j=1}^{m} \geq 0 $$
for each \(m \in\mathbb{N}\) and \(t_{i}\in I\) for \(i\in\{1,\ldots,m\}\).
Corollary 4
If the function \(f:I\to\mathbb{R}_{+}\) is exponentially convex, then f is log-convex.
Remark 13
A function \(f:I\to\mathbb{R}_{+}\) is log-convex in the J-sense if and only if the inequality
$$u_{1}^{2} f(t_{1})+2u_{1}u_{2}f \biggl( {\frac{t_{1}+t_{2}}{2}} \biggr) + u_{2}^{2}f(t_{2}) \geq0 $$
holds for each \(t_{1},t_{2} \in I\) and \(u_{1},u_{2} \in\mathbb{R}\). It follows that a positive function is log-convex in the J-sense if and only if it is 2-exponentially convex in the J-sense. Also, using basic convexity theory it follows that a positive function is log-convex if and only if it is 2-exponentially convex.
Here, we get our results concerning the n-exponential convexity and exponential convexity for our functionals \(\Lambda_{k}\), \(k\in\{1,2\}\), as defined in (A1) and (A2). Throughout the section I is an interval in \(\mathbb{R}\).
Theorem 13
Let \(D_{1}=\{f_{t}:t\in I\}\) be a class of functions such that the function \(t\mapsto[ z_{0},z_{1},\ldots,z_{n};f_{t}]\) is n-exponentially convex in the J-sense on I for any \(n+1\) mutually distinct points \(z_{0},z_{1},\ldots,z_{n}\in[ a,b]\). Let \(\Lambda_{k}\) be the linear functionals for \(k\in\{1,2\}\) as defined in (A1) and (A2). Then the following statements are valid:
(a)
The function \(t\mapsto\Lambda_{k}(f_{t})\) is n-exponentially convex in the J-sense on I.
 
(b)
If the function \(t\mapsto\Lambda_{k} (f_{t})\) is continuous on I, then the function \(t\mapsto\Lambda_{k} (f_{t})\) is n-exponentially convex on I.
 
Proof
(a) Fix \(k\in\{1,2\}\). Let us define the function ω for \(t_{i}\in I\), \(u_{i}\in\mathbb{R}\), \(i\in\{1,\ldots,n\}\) as follows:
$$\omega=\sum_{i,j=1}^{n}u_{i}u_{j}f_{\frac{t_{i}+t_{j}}{2}}. $$
Since the function \(t \mapsto[z_{0},z_{1},\ldots,z_{n};f_{t}]\) is n-exponentially convex in the J-sense,
$$[z_{0},z_{1},\ldots,z_{n};\omega]=\sum _{i,j=1}^{n}u_{i}u_{j}[z_{0},z_{1}, \ldots,z_{n};f_{\frac{t_{i}+t_{j}}{2}}]\geq0, $$
which implies that ω is an n-convex function on \([a,b]\), and therefore \(\Lambda_{k}(\omega)\geq0\). Hence
$$\sum_{i,j=1}^{n}u_{i}u_{j} \Lambda_{k} (f_{\frac{t_{i}+t_{j}}{2}})\geq0. $$
We conclude that the function \(t\mapsto\Lambda_{k}(f_{t})\) is n-exponentially convex in the J-sense on I.
(b) This part easily follows from the definition of the n-exponentially convex function. □
As a consequence of the above theorem we give the following corollaries.
Corollary 5
Let \({D}_{2}=\{f_{t}:t\in I\}\) be a class of functions such that the function \(t\mapsto [ z_{0},z_{1},\ldots,z_{n};f_{t}]\) is exponentially convex in the J-sense on I for any \(n+1\) mutually distinct points \(z_{0},z_{1},\ldots,z_{n}\in [ a,b]\). Let \(\Lambda_{k}\) be the linear functionals for \(k\in\{1,2\}\) as defined in (A1) and (A2). Then the following statements are valid:
(a)
The function \(t\mapsto\Lambda_{k}(f_{t})\) is exponentially convex in the J-sense on I.
 
(b)
If the function \(t\mapsto\Lambda_{k} (f_{t})\) is continuous on I, then the function \(t\mapsto\Lambda_{k} (f_{t})\) is exponentially convex on I.
 
(c)
The matrix \([ \Lambda_{k} ( f_{\frac {t_{i}+t_{j}}{2}} ) ] _{i,j=1}^{m}\) is positive-semidefinite. Particularly,
$$\det \bigl[ \Lambda_{k} ( f_{\frac{t_{i}+t_{j}}{2}} ) \bigr] _{i,j=1}^{m} \geq0 $$
for each \(m \in\mathbb{N}\) and \(t_{i}\in I\) where \(i\in\{1,\ldots,m\}\).
 
Proof
The proof follows directly from Theorem 13 by using the definition of exponential convexity and Corollary 3. □
Corollary 6
Let \({D}_{3}=\{f_{t}:t\in I\}\) be a class of functions such that the function \(t\mapsto [ z_{0},z_{1},\ldots,z_{n};f_{t}]\) is 2-exponentially convex in the J-sense on I for any \(n+1\) mutually distinct points \(z_{0},z_{1},\ldots,z_{n}\in [ a,b]\). Let \(\Lambda_{k}\) be the linear functionals for \(k\in\{1,2\}\) as defined in (A1) and (A2). Then the following statements are valid:
(a)
If the function \(t\mapsto\Lambda_{k} (f_{t})\) is continuous on I, then it is 2-exponentially convex on I. If the function \(t\mapsto\Lambda_{k}(f_{t})\) is additionally positive, then it is also log-convex on I. Moreover, the following Lyapunov inequality holds for \(r< s< t\), \(r, s, t \in I\):
$$ \bigl[\Lambda_{k}(f_{s}) \bigr]^{t-r} \le \bigl[\Lambda_{k}(f_{r}) \bigr]^{t-s} \bigl[\Lambda_{k}(f_{t}) \bigr]^{s-r}. $$
(5.1)
 
(b)
If the function \(t\mapsto\Lambda_{k}(f_{t})\) is positive and differentiable on I, then for every \(s,t,u,v\in I\) such that \(s\leq u\) and \(t\leq v\), we have
$$ \mu_{s,t}(\Lambda_{k},D_{3})\leq \mu_{u,v}(\Lambda_{k},D_{3}), $$
(5.2)
where \(\mu_{s,t}\) is defined as
$$ \mu_{s,t}(\Lambda_{k},D_{3})=\textstyle\begin{cases} (\frac{\Lambda_{k}(f_{s})}{\Lambda_{k}(f_{t})} )^{\frac{1}{s-t}} ,& s\neq t,\\ \exp (\frac{\frac{d}{ds}\Lambda_{k}(f_{s})}{\Lambda_{k}(f_{s})} ) ,& s=t, \end{cases} $$
(5.3)
for \(f_{s},f_{t}\in D_{3}\).
 
Proof
(a)
It follows directly from Theorem 13 and Remark 13. As the function \(t\mapsto\Lambda_{k}(f_{t})\) is log-convex, i.e., \(\ln\Lambda_{k}(f_{t})\) is convex, by using Proposition 1, we have
$$\ln\bigl[\Lambda_{k}(f_{s})\bigr]^{t-r} \leq \ln \bigl[\Lambda_{k}(f_{r})\bigr]^{t-s} + \ln\bigl[\Lambda_{k}(f_{t})\bigr]^{s-r},\quad k\in\{1,2 \}, $$
which gives us (5.1).
 
(b)
From Proposition 2, for the convex function f, the inequality
$$ \frac{f(s) - f(t)}{s - t} \leq \frac{f(u) - f(v)}{u - v}$$
(5.4)
holds for all \(s, t,u,v \in I\) such that \(s\leq u\), \(t\leq v\), \(s\neq t\), \(u\neq v\).
Since, by (a), \(\Lambda_{k}(f_{t})\) is log-convex, setting \(f(t)=\ln\Lambda_{k}(f_{t})\) in (5.4) we have
$$ \frac{\ln\Lambda_{k}(f_{s}) -\ln\Lambda_{k}(f_{t})}{s-t} \leq \frac{\ln\Lambda_{k}(f_{u})-\ln\Lambda_{k}(f_{v})}{u-v}$$
(5.5)
for \(s\leq u\), \(t\leq v\), \(s\neq t\), \(u\neq v\), which is equivalent to (5.2). The cases \(s= t\) and/or \(u= v\) follow from (5.5) by taking the respective limits.
 
 □
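Both conclusions of Corollary 6 are easy to test numerically. The sketch below (not part of the paper) takes the gamma function as a stand-in for a positive log-convex map \(t\mapsto\Lambda_{k}(f_{t})\) (Γ is log-convex on \((0,\infty)\) by the Bohr–Mollerup theorem) and checks the Lyapunov inequality (5.1) and the monotonicity (5.2):

```python
import math

# Gamma as a stand-in for a positive log-convex map t -> Lambda_k(f_t).
G = math.gamma

# Lyapunov inequality (5.1) with r < s < t:
r, s, t = 1.5, 2.5, 4.0
assert G(s) ** (t - r) <= G(r) ** (t - s) * G(t) ** (s - r)

# Monotonicity (5.2) of mu_{s,t} (the s != t branch of (5.3)):
def mu(s, t):
    return (G(s) / G(t)) ** (1.0 / (s - t))

assert mu(1.5, 2.0) <= mu(3.0, 5.0)   # s <= u and t <= v
```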
Remark 14
The results from Theorem 13 and Corollaries 5 and 6 still hold when any two (all) points \(z_{0}, z_{1},\ldots , z_{n} \in[a,b]\) coincide for a family of differentiable (n times differentiable) functions \(f_{t}\) such that the function \(t \mapsto[z_{0}, z_{1},\ldots, z_{n};f_{t}]\) is n-exponentially convex, exponentially convex, and 2-exponentially convex in the J-sense, respectively.
Now, we give two important remarks and one useful corollary from [15], which we will use in some examples in the next section.
Remark 15
We will refer to \({\mu}_{s,t}(\Lambda_{k},\Omega)\) defined by (5.3) as a mean if
$$a \leq {\mu}_{s,t}(\Lambda_{k},\Omega) \leq b $$
for \(s,t \in I\) and \(k\in\{1,2\}\) where \(\Omega=\{f_{t}:t \in I\}\) is a family of functions and \([a,b]\subset \operatorname{Dom}(f_{t})\).
Theorem 13 gives us the following corollary.
Corollary 7
Let \(a,b\in\mathbb{R}\) and \(\Lambda_{k}\) be linear functionals for \(k\in\{1,2\}\). Let \(\Omega=\{f_{t}:t\in I\}\) be a family of functions in \(C^{2}[a,b]\). If
$$ a \leq \biggl( \frac{\frac{d^{2}f_{s}}{dx^{2}}}{\frac{d^{2} f_{t}}{dx^{2}}} \biggr) ^{\frac{1}{s-t}}(\xi) \leq b, $$
for \(\xi\in [ a,b]\), \(s,t\in I\), then \({\mu}_{s,t}(\Lambda _{k},\Omega)\) is a mean for \(k\in\{1,2\}\).
Remark 16
In some examples, we will get a mean of this type:
$$\biggl( \frac{\frac{d^{2}f_{s}}{dx^{2}}}{\frac{d^{2}f_{t}}{dx^{2}}} \biggr) ^{\frac{1}{s-t}}(\xi) =\xi,\quad \xi\in[a,b], s\neq t. $$

6 Examples with applications

In this section, we use various classes of functions \(\Omega=\{f_{t}:t \in I\}\) for any open interval \(I \subset\mathbb{R}\) to construct different examples of exponentially convex functions and applications to Stolarsky-type means. Let us consider some examples.
Example 1
Let \(\Omega_{1}= \{\psi_{t}:\mathbb{R}\rightarrow [0,\infty): t\in\mathbb{R}\}\) be a family of functions defined by
$$ \psi_{t}(x) = \textstyle\begin{cases} \frac{e^{tx}}{t^{n}} ,& t \neq 0,\\ \frac{x^{n}}{n!} ,&t = 0. \end{cases} $$
Since \(\frac{d^{n}}{dx^{n}}\psi_{t}(x)=e^{tx}>0\), the function \(\psi_{t}\) is n-convex on \(\mathbb{R}\) for every \(t\in\mathbb{R}\), and \(t\to \frac{d^{n}}{dx^{n}}\psi_{t}(x)\) is exponentially convex by definition. Arguing as in the proof of Theorem 13, we see that \(t\mapsto[z_{0},z_{1},\ldots,z_{n};\psi_{t}]\) is exponentially convex (and so exponentially convex in the J-sense). Using Corollary 5 we conclude that \(t\mapsto\Lambda_{k}(\psi_{t})\), \(k\in\{1,2\}\), are exponentially convex in the J-sense. It is easy to see that these mappings are continuous, so they are exponentially convex.
Assume that \(t\mapsto\Lambda_{k}(\psi_{t})>0\) for \(k\in\{1,2\}\). By introducing convex functions \(\psi_{t}\) in (4.2), we obtain the following means: for \(k\in\{1,2\}\)
$$\mathfrak{M}_{s,t}(\Lambda_{k},\Omega_{1}) = \textstyle\begin{cases} \frac{1}{s-t}\ln ( \frac{\Lambda_{k}(\psi_{s})}{\Lambda_{k}(\psi_{t} )} ) ,& s\neq t,\\ \frac{\Lambda_{k}(\mathit{id}\cdot\psi_{s})}{\Lambda_{k}(\psi_{s})}-\frac{n}{s} ,& s=t\neq0,\\ \frac{\Lambda_{k}(\mathit{id}\cdot\psi_{0})}{(n+1)\Lambda_{k}(\psi_{0})} ,& s=t=0, \end{cases} $$
where id stands for the identity function on \(\mathbb{R}\). Here \(\mathfrak{M}_{s,t}(\Lambda_{k},\Omega_{1})=\ln(\mu_{s,t}(\Lambda_{k},\Omega _{1}))\), \(k\in\{1,2\}\) are in fact means.
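The exponential convexity claimed in this example can be probed numerically for n = 2: \(\frac{d^{2}}{dx^{2}}\psi_{t}(x)=e^{tx}\), and by the B-spline (Peano kernel) representation of divided differences, \(t\mapsto[z_{0},z_{1},z_{2};\psi_{t}]\) is an integral of \(e^{tx}\) against a nonnegative kernel, hence exponentially convex in t. A hedged Python sketch (the point set and parameters below are arbitrary choices):

```python
import math
import random

def psi(t, x, n=2):
    # The family Omega_1 of Example 1, for n = 2.
    return math.exp(t * x) / t ** n if t != 0 else x ** n / math.factorial(n)

def divdiff(g, zs):
    # Divided difference [z0, ..., zm; g] in Lagrange form.
    return sum(g(zi) / math.prod(zi - zj for zj in zs if zj != zi)
               for zi in zs)

zs = [0.0, 0.4, 1.1]
F = lambda t: divdiff(lambda x: psi(t, x), zs)

random.seed(2)
ts = [0.3, 0.8, 1.7]
for _ in range(200):
    u = [random.uniform(-1, 1) for _ in ts]
    q = sum(u[i] * u[j] * F((ts[i] + ts[j]) / 2.0)
            for i in range(3) for j in range(3))
    assert q >= -1e-12    # quadratic form of Definition 7 is nonnegative
```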
Remark 17
We observe here that \(( \frac{\frac{d^{n}\psi_{s}}{dx^{n}}}{\frac {d^{n}\psi_{t}}{dx^{n}}} ) ^{\frac{1}{s-t}}(\ln\xi) = \xi\) is a mean for \(\xi\in[a,b]\) where \(a,b \in\mathbb{R_{+}}\).
Example 2
Let \(\Omega_{2}=\{\varphi_{t}:(0,\infty)\rightarrow\mathbb{R}:t\in \mathbb{R}\}\) be a family of functions defined as
$$\varphi_{t}(x) = \textstyle\begin{cases} \frac{(x)^{t}}{t(t-1)\cdots(t-n+1)} ,& t\notin\{0,\ldots,n-1\} ,\\ \frac{(x)^{j} \ln(x)}{(-1)^{n-1-j}j!(n-1-j)!} ,& t=j \in\{0,\ldots ,n-1\}. \end{cases} $$
Since \(\varphi_{t}\) is an n-convex function on \((0,\infty)\) and \(t\mapsto\frac{d^{n}}{dx^{n}}\varphi_{t}(x)\) is exponentially convex, by the same arguments as in the previous example we conclude that \(\Lambda_{k}(\varphi_{t})\), \(k\in\{1,2\}\), are exponentially convex.
We assume that \(\Lambda_{k}(\varphi_{t})>0\) for \(k\in\{1,2\}\). For this family of convex functions we obtain the following means: for \(k\in\{1,2\}\)
$$\mathfrak{M}_{s,t}(\Lambda_{k},\Omega_{2})= \textstyle\begin{cases} ( \frac{\Lambda_{k}(\varphi_{s})}{\Lambda_{k}(\varphi_{t})} ) ^{\frac{1}{s-t}}, & s\neq t,\\ \exp ( (-1)^{n-1}(n-1)!\frac{\Lambda_{k}(\varphi_{0}\varphi_{s})}{\Lambda_{k}(\varphi_{s})}+\sum_{k=0}^{n-1}\frac{1}{k-t} ) , & s=t\notin\{0,\ldots,n-1\},\\ \exp ( (-1)^{n-1}(n-1)!\frac{\Lambda_{k}(\varphi_{0}\varphi_{s})}{2\Lambda_{k}(\varphi_{s})}+\sum_{k=0,k\neq t}^{n-1}\frac {1}{k-t} ) , & s=t\in\{0,\ldots,n-1\}. \end{cases} $$
Here \(\mathfrak{M}_{s,t}(\Lambda_{k},\Omega_{2})=\mu_{s,t}(\Lambda_{k},\Omega_{2})\), \(k\in\{1,2\}\), are in fact means.
Remark 18
Further, in this choice of family \(\Omega_{2}\), we have
$$\biggl( \frac{\frac{d^{n}\varphi_{s}}{dx^{n}}}{\frac{d^{n}\varphi _{t}}{dx^{n}}} \biggr) ^{\frac{1}{s-t}}(\xi) = \xi,\quad \xi\in[a,b], s \neq t, \text{ where } a,b \in(0,\infty). $$
So, using Remark 16 we have the important conclusion that \({\mu }_{s,t}(\Lambda_{k}, \Omega_{2})\) is in fact a mean for \(k\in\{1,2\}\).
Example 3
Let \(\Omega_{3}= \{\theta_{t}:(0,\infty)\rightarrow(0,\infty): t\in (0,\infty)\}\) be a family of functions defined by
$$\begin{aligned} \theta_{t}(x)=\frac{e^{-x\sqrt{t}}}{t^{n/2}}. \end{aligned}$$
Since \(t\mapsto\frac{d^{n}}{dx^{n}}\theta_{t}(x)= e^{-x\sqrt{t}}\) is exponentially convex for \(x>0\), being the Laplace transform of a nonnegative function [15], by the same argument as given in Example 1 we conclude that \(\Lambda_{k}(\theta_{t})\), \(k\in\{1,2\}\) are exponentially convex.
We assume that \(\Lambda_{k}(\theta_{t})>0\) for \(k\in\{1,2\}\). For this family of functions we have the following possible cases of \(\mu_{s,t}(\Lambda _{k},\Omega_{3})\): for \(k\in\{1,2\}\)
$$ \mathfrak{M}_{s,t}(\Lambda_{k},\Omega_{3}) = \textstyle\begin{cases} ( \frac{ \Lambda_{k} (\theta_{s})}{\Lambda_{k} (\theta_{t})} ) ^{\frac{1}{s-t}} ,& s\neq t,\\ \exp ( -\frac{\Lambda_{k}(\mathit{id}\cdot \theta_{s})}{2\sqrt{s} \Lambda_{k} (\theta_{s})}-\frac{n}{2s} ) ,& s=t. \end{cases} $$
By (4.2), \(\mathfrak{M}_{s,t}(\Lambda_{k},\Omega_{3})= -(\sqrt{s}+\sqrt{t})\ln\mu_{s,t}(\Lambda_{k},\Omega_{3})\), \(k\in\{1,2\}\), defines a class of means.
Example 4
Let \(\Omega_{4}=\{\phi_{t}:(0,\infty)\rightarrow (0,\infty): t\in(0,\infty)\}\) be a family of functions defined by
$$\begin{aligned} \phi_{t}(x)= \textstyle\begin{cases} \frac{t^{-x}}{(\ln t)^{n}} ,& t\neq1,\\ \frac{x^{n}}{n} ,& t=1. \end{cases}\displaystyle \end{aligned}$$
Since \({\frac{d^{n}}{dx^{n}}\phi_{t}(x)}= t^{-x}=e^{-x\ln t}>0\) for \(x>0\), by the same argument as given in Example 1 we conclude that \(t\mapsto\Lambda_{k}(\phi_{t})\), \(k\in\{1,2\}\), are exponentially convex.
We assume that \(\Lambda_{k}(\phi_{t})>0\) for \(k\in\{1,2\}\). For this family of functions we have the following possible cases of \(\mu_{s,t}(\Lambda _{k},\Omega_{4})\): for \(k\in\{1,2\}\)
$$ \mathfrak{M}_{s,t}(\Lambda_{k},\Omega_{4}) = \textstyle\begin{cases} ( \frac{ \Lambda_{k} (\phi_{s})}{\Lambda_{k} (\phi_{t})} ) ^{\frac{1}{s-t}} ,& s\neq t,\\ \exp ( -\frac{\Lambda_{k}(\mathit{id}\cdot \phi_{s})}{s\Lambda_{k} (\phi_{s})}-\frac{n}{s\ln s} ) ,& s=t\neq1,\\ \exp ( -\frac{1}{(n+1)}\frac{ \Lambda_{k}(\mathit{id}\cdot \phi_{1})}{\Lambda_{k} (\phi_{1})} ) ,& s=t=1. \end{cases} $$
By (4.2), \(\mathfrak{M}_{s,t}(\Lambda_{k},\Omega_{4})= -L(s,t)\ln\mu_{s,t}(\Lambda_{k},\Omega_{4})\), \(k\in\{1,2\}\), defines a class of means, where \(L(s,t)\) is the logarithmic mean defined as
$$ L(s,t) = \textstyle\begin{cases} \frac{s-t}{\ln s-\ln t } ,& s \neq t , \\ s ,& s=t . \end{cases} $$
(6.1)
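The logarithmic mean (6.1) is elementary to implement; the sketch below also checks the classical fact that it lies between the geometric and arithmetic means (the s = t branch is the limiting value of the first branch):

```python
import math

def log_mean(s, t):
    # The logarithmic mean L(s, t) of (6.1), for s, t > 0.
    if s == t:
        return float(s)
    return (s - t) / (math.log(s) - math.log(t))

# Classical bounds: geometric mean <= L(s, t) <= arithmetic mean.
s, t = 2.0, 8.0
assert math.sqrt(s * t) <= log_mean(s, t) <= (s + t) / 2.0
```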
Remark 19
Monotonicity of \(\mu_{s,t}(\Lambda_{k},\Omega_{j})\) follows from (5.2) for \(k\in\{1,2\}\), \(j\in\{1,2,3,4\}\).
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Literature
1. Pečarić, JE, Proschan, F, Tong, YL: Convex Functions, Partial Orderings and Statistical Applications. Academic Press, New York (1992)
2. Khan, AR, Pečarić, JE, Varošanec, S: Popoviciu type characterization of positivity of sums and integrals for convex functions of higher order. J. Math. Inequal. 7(2), 195-212 (2013)
3. Khan, AR, Latif, N, Pečarić, JE: Exponential convexity for majorization. J. Inequal. Appl. 2012, 105 (2012)
4. Hardy, GH, Littlewood, JE, Pólya, G: Inequalities. Cambridge University Press, Cambridge (1978)
5. Fucks, L: A new proof of an inequality of Hardy-Littlewood-Polya. Math. Tidsskr. B, 53-54 (1947)
6. Marshall, AW, Olkin, I, Arnold, BC: Inequalities: Theory of Majorization and Its Applications, 2nd edn. Springer, New York (2011)
7.
8. Aglić Aljinović, A, Pečarić, J, Vukelić, A: On some Ostrowski type inequalities via Montgomery identity and Taylor’s formula II. Tamkang J. Math. 36(4), 279-301 (2005)
9. Mitrinović, DS, Pečarić, JE, Fink, AM: Inequalities for Functions and Their Integrals and Derivatives. Kluwer Academic, Dordrecht (1994)
10. Cerone, P, Dragomir, SS: Some new Ostrowski-type bounds for the Čebyšev functional and applications. J. Math. Inequal. 8(1), 159-170 (2014)
11.
12. Bernstein, SN: Sur les fonctions absolument monotones. Acta Math. 52(1), 1-66 (1929)
13. Widder, DV: Necessary and sufficient conditions for the representation of a function by a doubly infinite Laplace integral. Bull. Am. Math. Soc. 40(4), 321-326 (1934)
14. Pečarić, J, Perić, J: Improvements of the Giaccardi and the Petrović inequality and related Stolarsky type means. An. Univ. Craiova, Ser. Mat. Inform. 39(1), 65-75 (2012)
15.
Metadata
Publisher: Springer International Publishing
Electronic ISSN: 1029-242X
DOI: https://doi.org/10.1186/s13660-015-0710-8
