Published in: Journal of Inequalities and Applications 1/2019

Open Access 01.12.2019 | Research

Generalization of the Levinson inequality with applications to information theory

Authors: Muhammad Adeel, Khuram Ali Khan, Ðilda Pečarić, Josip Pečarić



Abstract

In the presented paper, Levinson’s inequality for the 3-convex function is generalized by using two Green functions. Čebyšev-, Grüss- and Ostrowski-type new bounds are found for the functionals involving data points of two types. Moreover, the main results are applied to information theory via the f-divergence, the Rényi divergence, the Rényi entropy, the Shannon entropy and the Zipf–Mandelbrot law.
Notes

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1 Introduction and preliminaries

In [12], Levinson generalized Ky Fan’s inequality to 3-convex functions as follows.
Theorem A
Let \(f :I=(0, 2\alpha ) \rightarrow \mathbb{R}\) with \(f^{(3)}(t) \geq 0\). Let \(x_{k} \in (0, \alpha )\) and \(p_{k}>0\). Then
$$ J_{1}(f) \geq 0, $$
(1)
where
$$\begin{aligned} J_{1}\bigl(f(\cdot)\bigr) =&\frac{1}{\mathbf{P}_{n}} \sum_{\rho =1}^{n}p_{\rho }f(2 \alpha -x_{\rho })-f\Biggl(\frac{1}{\mathbf{P}_{n}}\sum _{\rho =1}^{n}p_{ \rho }(2\alpha - x_{\rho })\Biggr)-\frac{1}{\mathbf{P} _{n}}\sum _{\rho =1} ^{n}p_{\rho }f(x_{\rho }) \\ &{}+ f\Biggl(\frac{1}{\mathbf{P}_{n}}\sum_{\rho =1}^{n}p_{\rho }x_{\rho } \Biggr). \end{aligned}$$
(2)
By working with divided differences, the assumptions of differentiability on f can be weakened.
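As a quick numerical illustration of Theorem A (our sketch, not part of the paper; the helper name `j1` is ours), one can evaluate the functional \(J_{1}\) from (2) for the 3-convex function \(f(t)=t^{3}\) at random points and weights and check that it is nonnegative:

```python
import random

def j1(f, p, x, alpha):
    """Levinson functional J_1(f) from (2): Jensen gap of the reflected
    points 2*alpha - x_rho minus the Jensen gap of the points x_rho."""
    pn = sum(p)
    mean = lambda vals: sum(pi * v for pi, v in zip(p, vals)) / pn
    gap = lambda vals: mean([f(v) for v in vals]) - f(mean(vals))
    return gap([2 * alpha - xi for xi in x]) - gap(x)

random.seed(0)
alpha = 1.0
f = lambda t: t ** 3                      # f''' = 6 >= 0 on (0, 2*alpha)
for _ in range(100):
    n = random.randint(2, 6)
    x = [random.uniform(0.01, alpha - 0.01) for _ in range(n)]
    p = [random.uniform(0.1, 1.0) for _ in range(n)]
    assert j1(f, p, x, alpha) >= -1e-12   # inequality (1): J_1(f) >= 0
```

For a quadratic f the two Jensen gaps coincide (reflection preserves the weighted variance), so \(J_{1}(f)=0\) there; the cubic term is what makes the difference nonnegative.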
In [18], Popoviciu noted that (1) is valid on \((0, 2\alpha )\) for 3-convex functions, while in [2], Bullen gave a different proof of Popoviciu’s result and also the converse of (1).
Theorem B
(a)
Let \(f:I=[a, b] \rightarrow \mathbb{R}\) be a 3-convex function and \(x_{n}, y_{n} \in [a, b]\) for \(n=1, 2, \ldots , k \) such that
$$ \max \{x_{1}, \ldots , x_{k}\} \leq \min \{y_{1}, \ldots , y_{k}\}, \qquad x_{1}+y_{1}= \cdots =x_{k}+y_{k} , $$
(3)
and \(p_{n}>0\). Then
$$ J_{2}(f) \geq 0, $$
(4)
where
$$\begin{aligned} J_{2}\bigl(f(\cdot)\bigr) =&\frac{1}{\mathbf{P}_{k}} \sum_{\rho =1}^{k}p_{\rho }f(y _{\rho })-f\Biggl(\frac{1}{\mathbf{P}_{k}}\sum _{\rho =1}^{k}p_{\rho }y_{ \rho } \Biggr)- \frac{1}{\mathbf{P}_{k}}\sum_{\rho =1}^{k} p_{\rho }f(x_{ \rho }) \\ &{}+ f\Biggl(\frac{1}{\mathbf{P}_{k}}\sum_{\rho =1}^{k}p_{\rho }x_{ \rho } \Biggr). \end{aligned}$$
(5)
 
(b)
If f is continuous, \(p_{\rho }>0\), and (4) holds for all \(x_{\rho }\), \(y_{\rho }\) satisfying (3), then f is 3-convex.
 
In [17], Pečarić weakened the assumption (3) and proved that inequality (1) still holds, i.e. the following result holds.
Theorem C
Let \(f:I=[a, b] \rightarrow \mathbb{R}\) be a 3-convex function, \(p_{k}>0\), and let for \(k=1, \ldots , n\), \(x_{k}\), \(y_{k}\) be such that \(x_{k}+y_{k}=2\breve{c}\), \(x_{k}+x_{n-k+1} \leq 2\breve{c}\) and \(\frac{p_{k}x_{k}+p_{n-k+1}x_{n-k+1}}{p_{k}+p _{n-k+1}} \leq \breve{c}\). Then (4) holds.
In [15], Mercer presented a notable work by replacing the condition of symmetric distribution of the points \(x_{i}\) and \(y_{i}\) with the weaker condition that the points \(x_{i}\) and \(y_{i}\) have equal weighted variances.
Theorem D
Let f be a 3-convex function on \([a, b]\) and let \(p_{k}\) be positive weights such that \(\sum_{k=1}^{n}p_{k}=1\). Also let \(x_{k}\), \(y_{k}\) satisfy (3) and
$$ \sum_{\rho =1}^{n}p_{\rho } \Biggl(x_{\rho }-\sum_{\rho =1}^{n}p_{ \rho }x_{\rho } \Biggr)^{2}=\sum_{\rho =1}^{n}p_{\rho } \Biggl(y_{ \rho }- \sum_{\rho =1}^{n}p_{\rho }y_{\rho } \Biggr)^{2}. $$
(6)
Then (1) holds.
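Theorem D can likewise be checked numerically. Shifting the points \(x_{\rho }\) by a constant leaves the weighted variance unchanged, so condition (6) holds automatically for \(y_{\rho }=x_{\rho }+c\). The sketch below (ours, with illustrative data; `jensen_gap` is our name) verifies that the difference of Jensen gaps is then nonnegative for \(f(t)=t^{3}\):

```python
def jensen_gap(f, p, v):
    """Weighted Jensen gap: sum p_i f(v_i) - f(sum p_i v_i), sum p_i = 1."""
    m = sum(pi * vi for pi, vi in zip(p, v))
    return sum(pi * f(vi) for pi, vi in zip(p, v)) - f(m)

# Shifting x by a constant keeps the weighted variance, so (6) holds.
p = [0.2, 0.3, 0.5]
x = [1.0, 2.0, 3.0]
y = [xi + 4.0 for xi in x]        # max(x) <= min(y) and equal variances
f = lambda t: t ** 3              # 3-convex on [0, 10]
assert jensen_gap(f, p, y) - jensen_gap(f, p, x) >= 0.0
```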
On the other hand, the error function \(e_{F}(t)\) can be represented in terms of the Green function \(G_{F, n}(t, s)\) of the boundary value problem
$$\begin{aligned}& z^{(n)}(t)=0, \\& z^{(i)}(a_{1}) = 0,\quad 0 \leq i \leq p, \\& z^{(i)}(a_{2}) = 0,\quad p+1 \leq i \leq n-1, \\& e_{F}(t)= \int ^{a_{2}}_{a_{1}}G_{F, n}(t, s)f^{(n)}(s)\,ds, \quad t \in [a, b], \end{aligned}$$
where
$$ G_{F, n}(t, s)=\frac{1}{(n-1)!} \textstyle\begin{cases} \sum_{i=0}^{p}\binom{n-1}{i}(t-a_{1})^{i}(a_{1}-s)^{n-i-1}, & a_{1} \leq s \leq t; \\ -\sum_{i=p+1}^{n-1}\binom{n-1}{i}(t-a_{1})^{i}(a_{1}-s)^{n-i-1}, & t \leq s \leq a_{2}. \end{cases}\displaystyle $$
(7)
The following result is given in [1].
Theorem E
Let \(f \in C^{n}[a, b]\), and let \(P_{F}\) be its ‘two-point right focal’ interpolating polynomial. Then, for \(a \leq a_{1} < a_{2} \leq b\) and \(0 \leq p \leq n-2\),
$$\begin{aligned} f(t) =&P_{F}(t)+e_{F}(t) \\ =& \sum_{i=0}^{p}\frac{(t-a_{1})^{i}}{i!}f^{(i)}(a_{1}) \\ &{}+ \sum_{j=0}^{n-p-2} \Biggl(\sum _{i=0}^{j}\frac{(t-a_{1})^{p+1+i}(a _{1}-a_{2})^{j-i}}{(p+1+i)!(j-i)!} \Biggr)f^{(p+1+j)}(a_{2} ) \\ &{}+ \int ^{a_{2}}_{a_{1}}G_{F, n}(t, s)f^{(n)}(s)\,ds, \end{aligned}$$
(8)
where \(G_{F, n}(t, s)\) is the Green function, defined by (7).
Let \(f \in C^{n}[a, b]\), and let \(P_{F}\) be its ‘two-point right focal’ interpolating polynomial for \(a \leq a_{1} < a_{2} \leq b\). Then, for \(n=3\) and \(p=0\), (8) becomes
$$\begin{aligned} f(t) =& f(a_{1}) + (t-a_{1})f^{(1)}(a_{2})+ (t-a_{1}) (a_{1}-a_{2})f ^{(2)}(a_{2})+\frac{(t-a_{1})^{2}}{2}f^{(2)}(a_{2}) \\ &{}+ \int ^{a_{2}}_{a_{1}}G_{1}(t, s)f^{(3)}(s)\,ds, \end{aligned}$$
(9)
where
$$\begin{aligned} G_{1}(t, s) = \textstyle\begin{cases} \frac{1}{2}(a_{1}-s)^{2}, & a_{1} \leq s \leq t; \\ - (t-a_{1})(a_{1}-s)-\frac{1}{2}(t-a_{1})^{2}, & t \leq s \leq a_{2}. \end{cases}\displaystyle \end{aligned}$$
(10)
For \(n=3\) and \(p=1\), (8) becomes
$$\begin{aligned} f(t) = f(a_{1}) + (t-a_{1})f^{(1)}(a_{1})+ \frac{(t-a_{1})^{2}}{2}f ^{(2)}(a_{2})+ \int ^{a_{2}}_{a_{1}}G_{2}(t, s)f^{(3)}(s)\,ds, \end{aligned}$$
(11)
where
$$\begin{aligned} G_{2}(t, s) = \textstyle\begin{cases} \frac{1}{2}(a_{1}-s)^{2}+(t-a_{1})(a_{1}-s), & a_{1} \leq s \leq t; \\ - \frac{1}{2} (t-a_{1})^{2}, & t \leq s \leq a_{2}. \end{cases}\displaystyle \end{aligned}$$
(12)
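The representation (8) can be verified numerically. The sketch below (ours; `green_focal` and `simpson` are hypothetical helper names) implements the general Green function from (7) and checks the case \(n=3\), \(p=0\), i.e. the representation (9), for \(f(t)=t^{3}\) on \([0, 1]\):

```python
from math import comb, factorial

def green_focal(t, s, a1, a2, n, p):
    """Two-point right focal Green function G_{F,n}(t, s) from (7)."""
    if s <= t:
        total = sum(comb(n - 1, i) * (t - a1) ** i * (a1 - s) ** (n - i - 1)
                    for i in range(p + 1))
    else:
        total = -sum(comb(n - 1, i) * (t - a1) ** i * (a1 - s) ** (n - i - 1)
                     for i in range(p + 1, n))
    return total / factorial(n - 1)

def simpson(g, lo, hi, m=2000):
    """Composite Simpson rule (m even)."""
    h = (hi - lo) / m
    return h / 3 * sum((1 if k in (0, m) else 4 if k % 2 else 2) * g(lo + k * h)
                       for k in range(m + 1))

# Check the representation (9) (case n = 3, p = 0) for f(t) = t^3 on [0, 1].
a1, a2, t = 0.0, 1.0, 0.5
f = lambda u: u ** 3                       # f' = 3u^2, f'' = 6u, f''' = 6
rhs = (f(a1) + (t - a1) * 3 * a2 ** 2
       + (t - a1) * (a1 - a2) * 6 * a2
       + (t - a1) ** 2 / 2 * 6 * a2
       + simpson(lambda s: green_focal(t, s, a1, a2, 3, 0) * 6, a1, t)
       + simpson(lambda s: green_focal(t, s, a1, a2, 3, 0) * 6, t, a2))
assert abs(rhs - f(t)) < 1e-9
```

The integral is split at \(s=t\) so that each piece of the Green function is a smooth polynomial, on which Simpson’s rule is exact.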
The presented work is organized as follows: in Sect. 2, Levinson’s inequality for the 3-convex function is generalized by using two Green functions defined by (10) and (12). In Sect. 3, Čebyšev-, Grüss- and Ostrowski-type new bounds are found for the functionals involving data points of two types. In Sect. 4, the main results are applied to information theory via the f-divergence, the Rényi divergence, the Rényi entropy, the Shannon entropy and the Zipf–Mandelbrot law.

2 Main results

First we give an identity involving the Jensen difference of two different sets of data points. Then we give an equivalent form of this identity by using the Green functions defined by (10) and (12).
Theorem 1
Let \(f\in C^{3}[\zeta _{1}, \zeta _{2}]\) such that \(f: I= [\zeta _{1}, \zeta _{2}] \rightarrow \mathbb{R}\), \((p_{1}, \ldots , p_{n}) \in \mathbb{R}^{n}\), \((q_{1}, \ldots , q_{m}) \in \mathbb{R}^{m}\) such that \(\sum_{\rho =1}^{n}p_{\rho }=1\) and \(\sum_{\varrho =1}^{m}q_{\varrho }=1\). Also let \(x_{\rho }\), \(y_{\varrho }\), \(\sum_{\rho =1}^{n}p_{\rho }x _{\rho }\), \(\sum_{\varrho =1}^{m}q_{\varrho }y_{\varrho } \in I\). Then
$$\begin{aligned} J\bigl(f(\cdot)\bigr) =&\frac{1}{2} \Biggl[\sum _{\varrho =1}^{m}q_{\varrho }y_{ \varrho }^{2}- \Biggl(\sum_{\varrho =1}^{m}q_{\varrho }y_{\varrho } \Biggr)^{2}- \sum_{\rho =1}^{n}p_{\rho } x_{\rho }^{2}+\Biggl(\sum_{\rho =1}^{n}p_{ \rho }x_{\rho } \Biggr)^{2} \Biggr] f^{(2)}(\zeta _{2}) \\ &{}+ \int _{\zeta _{1}}^{\zeta _{2}}J\bigl(G_{k}(\cdot , s)\bigr)f^{(3)}(s)\,ds, \end{aligned}$$
(13)
where
$$ J\bigl(f(\cdot)\bigr)=\sum_{\varrho =1}^{m}q_{\varrho }f(y_{\varrho })-f \Biggl( \sum_{\varrho =1}^{m}q_{\varrho }y_{\varrho } \Biggr)-\sum_{\rho =1}^{n}p _{\rho }f(x_{\rho }) +f\Biggl(\sum _{\rho =1}^{n}p_{\rho }x_{\rho } \Biggr) $$
(14)
and
$$ J\bigl(G_{k}(\cdot , s)\bigr)= \sum _{\varrho =1}^{m}q_{\varrho }G_{k}(y_{\varrho }, s)-G _{k}\Biggl(\sum_{\varrho =1}^{m}q_{\varrho }y_{\varrho }, s\Biggr)-\sum_{\rho =1} ^{n}p_{\rho }G_{k}(x_{\rho }, s) +G_{k}\Biggl(\sum_{\rho =1}^{n}p_{\rho }x _{\rho }, s\Biggr), $$
(15)
for \(G_{k}(\cdot , s)\) (\(k=1, 2\)) defined in (10) and (12), respectively.
Proof
(i) For \(k=1\).
Using (9) in (14), we have
$$\begin{aligned}& J\bigl(f(\cdot)\bigr)=\sum_{\varrho =1}^{m}q_{\varrho } \bigg[f(\zeta _{1})+(y_{ \varrho }-\zeta _{1})f^{(1)}(\zeta _{2})+(y_{\varrho }- \zeta _{1}) (\zeta _{1}-\zeta _{2})f^{(2)} (\zeta _{2}) \\& \hphantom{J(f(\cdot))=}{}+\frac{(y_{\varrho }-\zeta _{1})^{2}}{2}f^{(2)}(\zeta _{2})+ \int _{\zeta _{1}}^{\zeta _{2}}G_{1}(y_{\varrho }, s)f^{(3)}(s)\,ds\bigg] \\& \hphantom{J(f(\cdot))=}{}-\Bigg[f(\zeta _{1})+\Biggl(\sum _{\varrho =1}^{m}q_{\varrho }y_{\varrho }- \zeta _{1}\Biggr)f^{(1)}(\zeta _{2})+\Biggl( \sum_{\varrho =1}^{m}q_{\varrho }y_{ \varrho }- \zeta _{1}\Biggr) (\zeta _{1}- \zeta _{2})f^{(2)}(\zeta _{2}) \\& \hphantom{J(f(\cdot))=}{}+\frac{(\sum_{\varrho =1}^{m}q_{\varrho }y_{\varrho }-\zeta _{1})^{2}}{2}f^{(2)}(\zeta _{2})+ \int _{\zeta _{1}}^{\zeta _{2}}G_{1}\Biggl( \sum _{\varrho =1}^{m} q_{\varrho }y_{\varrho }, s\Biggr)f^{(3)}(s)\,ds\Bigg] \\& \hphantom{J(f(\cdot))=}{}-\sum_{\rho =1}^{n}p_{\rho } \bigg[f(\zeta _{1})+(x_{\rho }-\zeta _{1})f^{(1)}(\zeta _{2})+(x_{\rho }- \zeta _{1}) (\zeta _{1}-\zeta _{2})f ^{(2)} (\zeta _{2}) \\& \hphantom{J(f(\cdot))=}{}+\frac{(x_{\rho }-\zeta _{1})^{2}}{2}f^{(2)}(\zeta _{2})+ \int _{\zeta _{1}}^{\zeta _{2}}G_{1}(x_{\rho }, s)f^{(3)}(s)\,ds\bigg] \\& \hphantom{J(f(\cdot))=}{}+\Bigg[f(\zeta _{1})+\Biggl(\sum _{\rho =1}^{n}p_{\rho }x_{\rho }- \zeta _{1}\Biggr)f^{(1)}(\zeta _{2})+\Biggl( \sum_{\rho =1}^{n}p_{\rho }x_{\rho }- \zeta _{1}\Biggr) (\zeta _{1}-\zeta _{2}) f^{(2)}(\zeta _{2}) \\& \hphantom{J(f(\cdot))=}{}+\frac{(\sum_{\rho =1}^{n}p_{\rho }x_{\rho }-\zeta _{1})^{2}}{2}f ^{(2)}(\zeta _{2})+ \int _{\zeta _{1}}^{\zeta _{2}}G_{1}\Biggl(\sum _{\rho =1} ^{n} p_{\rho }x_{\rho }, s\Biggr)f^{(3)}(s)\,ds\Bigg], \\& J\bigl(f(\cdot)\bigr)=f(\zeta _{1})+\Biggl(\sum _{\varrho =1}^{m}q_{\varrho }y_{\varrho }- \zeta _{1}\Biggr)f^{(1)}(\zeta _{2})+\Biggl( \sum_{\varrho =1}^{m}q_{\varrho }y_{ \varrho }- \zeta _{1}\Biggr) (\zeta _{1}-\zeta _{2})f^{(2)}( \zeta _{2}) \\& \hphantom{J(f(\cdot))=}{}+\frac{(\sum_{\varrho 
=1}^{m}q_{\varrho }y_{\varrho }^{2}-2\zeta _{1}\sum_{\varrho =1}^{m}q_{\varrho }y_{\varrho }+\zeta _{1}^{2})f^{(2)}( \zeta _{2})}{2}+\sum_{\varrho =1}^{m} q_{\varrho } \int _{\zeta _{1}}^{\zeta _{2}}G _{1}(y_{\varrho }, s)f^{(3)}(s)\,ds \\& \hphantom{J(f(\cdot))=}{}-f(\zeta _{1})-\Biggl(\sum _{\varrho =1}^{m}q_{\varrho }y_{\varrho }- \zeta _{1}\Biggr)f^{(1)}(\zeta _{2})-\Biggl( \sum_{\varrho =1}^{m}q_{\varrho }y_{\varrho }- \zeta _{1}\Biggr) (\zeta _{1}-\zeta _{2})f^{(2)}( \zeta _{2}) \\& \hphantom{J(f(\cdot))=}{}-\frac{((\sum_{\varrho =1}^{m}q_{\varrho }y_{\varrho })^{2}-2\zeta _{1}\sum_{\varrho =1}^{m}q_{\varrho }y_{\varrho }+\zeta _{1}^{2})f^{(2)}( \zeta _{2})}{2} \\& \hphantom{J(f(\cdot))=}{}- \int _{\zeta _{1}}^{\zeta _{2}}G_{1}\Biggl(\sum _{\varrho =1} ^{m}q_{\varrho }y_{\varrho }, s\Biggr)f^{(3)}(s)\,ds \\& \hphantom{J(f(\cdot))=}{}-f(\zeta _{1})-\Biggl(\sum _{\rho =1}^{n}p_{\rho }x_{\rho }- \zeta _{1}\Biggr)f^{(1)}( \zeta _{2})-\Biggl( \sum_{\rho =1}^{n}p_{\rho }x_{\rho }- \zeta _{1}\Biggr) (\zeta _{1}-\zeta _{2})f^{(2)}( \zeta _{2}) \\& \hphantom{J(f(\cdot))=}{}-\frac{(\sum_{\rho =1}^{n}p_{\rho }x_{\rho }^{2}-2\zeta _{1} \sum_{\rho =1}^{n}p_{\rho }x_{\rho }+\zeta _{1}^{2})f^{(2)}(\zeta _{2})}{2}- \sum_{\rho =1}^{n}p_{\rho } \int _{\zeta _{1}}^{\zeta _{2}}G_{1}(x_{ \rho }, s)f^{(3)}(s)\,ds \\& \hphantom{J(f(\cdot))=}{}+f(\zeta _{1})+\Biggl(\sum _{\rho =1}^{n}p_{\rho }x_{\rho }- \zeta _{1}\Biggr)f^{(1)}( \zeta _{2})+\Biggl( \sum_{\rho =1}^{n}p_{\rho }x_{\rho }- \zeta _{1}\Biggr) (\zeta _{1}-\zeta _{2})f^{(2)}( \zeta _{2}) \\& \hphantom{J(f(\cdot))=}{}+\frac{((\sum_{\rho =1}^{n}p_{\rho }x_{\rho })^{2}-2\zeta _{1} \sum_{\rho =1}^{n}p_{\rho }x_{\rho }+\zeta _{1}^{2})f^{(2)}(\zeta _{2})}{2} \\& \hphantom{J(f(\cdot))=}{} + \int _{\zeta _{1}}^{\zeta _{2}}G_{1}\Biggl(\sum _{\rho =1}^{n}p_{\rho }x_{ \rho }, s\Biggr)f^{(3)}(s)\,ds, \\& J\bigl(f(\cdot)\bigr)=\frac{1}{2} \Biggl[\sum _{\varrho =1}^{m}q_{\varrho }y_{ \varrho }^{2}- \Biggl(\sum_{\varrho =1}^{m}q_{\varrho 
}y_{\varrho } \Biggr)^{2}- \sum_{\rho =1}^{n}p_{\rho }x_{\rho }^{2} +\Biggl(\sum_{\rho =1}^{n}p_{ \rho }x_{\rho } \Biggr)^{2} \Biggr]f^{(2)}(\zeta _{2}) \\& \hphantom{J(f(\cdot))=}{}+\sum_{\varrho =1}^{m}q_{\varrho } \int _{\zeta _{1}}^{\zeta _{2}}G_{1}(y _{\varrho },s)f^{(3)}(s)\,ds- \int _{\zeta _{1}}^{\zeta _{2}}G_{1} \Biggl( \sum _{\varrho =1}^{m}q_{\varrho }y_{\varrho }, s\Biggr)f^{(3)}(s)\,ds \\& \hphantom{J(f(\cdot))=}{}-\sum_{\rho =1}^{n}p_{\rho } \int _{\zeta _{1}}^{\zeta _{2}}G_{1}(x_{ \rho }, s)f^{(3)}(s)\,ds + \int _{\zeta _{1}}^{\zeta _{2}}G_{1}\Biggl(\sum _{ \rho =1}^{n}p_{\rho }x_{\rho }, s\Biggr)f^{(3)}(s)\,ds. \end{aligned}$$
After rearranging, we have (13).
(ii) For \(k=2\).
Using (11) in (14) and following similar steps as in the proof of (i), we get (13). □
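Identity (13) can also be tested numerically. The following sketch (ours, with illustrative weights and points; `G1`, `J` and `quad` are our names) implements the Green function for \(n=3\), \(p=0\) as obtained from (7) and checks (13) for \(f(t)=t^{3}\). Note that the bracket in (13) is exactly the Jensen difference (14) applied to \(u \mapsto u^{2}\):

```python
def G1(t, s, z1, z2):
    """Green function for n = 3, p = 0 on [z1, z2], as obtained from (7)."""
    if s <= t:
        return 0.5 * (z1 - s) ** 2
    return -(t - z1) * (z1 - s) - 0.5 * (t - z1) ** 2

def J(h, p, x, q, y):
    """Jensen-type difference (14)/(15) applied to a scalar function h."""
    mx = sum(pi * xi for pi, xi in zip(p, x))
    my = sum(qi * yi for qi, yi in zip(q, y))
    return (sum(qi * h(yi) for qi, yi in zip(q, y)) - h(my)
            - sum(pi * h(xi) for pi, xi in zip(p, x)) + h(mx))

def quad(g, lo, hi, m=4000):
    """Composite Simpson rule (m even)."""
    h = (hi - lo) / m
    return h / 3 * sum((1 if k in (0, m) else 4 if k % 2 else 2) * g(lo + k * h)
                       for k in range(m + 1))

z1, z2 = 0.0, 1.0
p, x = [0.4, 0.6], [0.1, 0.3]
q, y = [0.5, 0.5], [0.6, 0.9]
f = lambda u: u ** 3                             # f''(z2) = 6*z2, f''' = 6
second_moment = J(lambda u: u * u, p, x, q, y)   # the bracket in (13)
rhs = (0.5 * second_moment * 6 * z2
       + quad(lambda s: J(lambda t: G1(t, s, z1, z2), p, x, q, y) * 6, z1, z2))
assert abs(J(f, p, x, q, y) - rhs) < 1e-5
```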
Corollary 1
Let \(f\in C^{3}[0, 2\alpha ]\) such that \(f: I= [0, 2\alpha ] \rightarrow \mathbb{R}\), \(x_{1}, \ldots , x_{n} \in (0, \alpha )\), \((p_{1}, \ldots , p_{n})\) \(\in \mathbb{R}^{n}\) such that \(\sum_{\rho =1}^{n}p _{\rho }=1\). Also let \(x_{\rho }\), \(\sum_{\rho =1}^{n}p_{\rho }(2\alpha -x_{\rho })\), \(\sum_{\rho =1}^{n}p_{\rho }x_{\rho } \in I\). Then
$$\begin{aligned} J\bigl(f(\cdot)\bigr) =& \int _{\zeta _{1}}^{\zeta _{2}}J\bigl(G_{k}(\cdot , s)\bigr)f^{(3)}(s)\,ds, \quad 0\leq \zeta _{1}< \zeta _{2} \leq 2\alpha , \end{aligned}$$
(16)
where \(J(f(\cdot))\) and \(J(G_{k}(\cdot , s))\) are defined in (14) and (15), respectively.
Proof
Choosing \(I=[0, 2\alpha ]\), \(y_{\varrho }=(2\alpha -x_{\rho })\), \(x_{1}, \ldots , x_{n} \in (0, \alpha )\), \(p_{\rho }=q_{\varrho }\) and \(m=n\), in Theorem 1, after simplification we get (16). □
Theorem 2
Let \(f: I= [\zeta _{1}, \zeta _{2}] \rightarrow \mathbb{R}\) be a 3-convex function. Also let \((p_{1}, \ldots , p_{n}) \in \mathbb{R}^{n}\), \((q_{1}, \ldots , q_{m})\in \mathbb{R}^{m}\) be such that \(\sum_{\rho =1}^{n}p_{\rho }=1\) and \(\sum_{\varrho =1}^{m}q_{ \varrho }=1\) and \(x_{\rho }\), \(y_{\varrho }\), \(\sum_{\rho =1}^{n}p_{ \rho }x_{\rho }\), \(\sum_{\varrho =1}^{m}q_{\varrho }y_{\varrho } \in I\).
If
$$ \Biggl[\sum_{\varrho =1}^{m}q_{\varrho }y_{\varrho }^{2}- \Biggl( \sum_{\varrho =1}^{m}q_{\varrho }y_{\varrho } \Biggr)^{2}-\sum_{\rho =1}^{n}p _{\rho }x_{\rho }^{2}+\Biggl(\sum _{\rho =1}^{n} p_{\rho }x_{\rho } \Biggr)^{2} \Biggr] f^{(2)}(\zeta _{2}) \geq 0, $$
(17)
then the following statements are equivalent:
For \(f \in C^{3}[\zeta _{1}, \zeta _{2}]\)
$$ \sum_{\rho =1}^{n}p_{\rho }f(x_{\rho })-f \Biggl(\sum_{\rho =1}^{n}p_{\rho }x _{\rho }\Biggr)\leq \sum_{\varrho =1}^{m}q_{\varrho }f(y_{\varrho })- f\Biggl( \sum_{\varrho =1}^{m}q_{\varrho }y_{\varrho } \Biggr). $$
(18)
For all \(s \in I\)
$$\begin{aligned} \sum_{\rho =1}^{n}p_{\rho }G_{k}(x_{\rho }, s)-G_{k}\Biggl(\sum_{\rho =1} ^{n}p_{\rho }x_{\rho }, s\Biggr)\leq \sum _{\varrho =1}^{m}q_{\varrho }G_{k}(y _{\varrho }, s)- G_{k}\Biggl(\sum_{\varrho =1}^{m}q_{\varrho }y_{\varrho }, s\Biggr), \end{aligned}$$
(19)
where \(G_{k}(\cdot , s)\) are defined by (10) and (12) for \(k=1, 2\), respectively.
Proof
(18) ⇒ (19): Let (18) be valid. Then, since the function \(G_{k}(\cdot , s)\) (\(s \in I\)) is also continuous and 3-convex, (18) also holds for this function, i.e. (19) is valid.
(19) ⇒ (18): If f is 3-convex, then without loss of generality we may assume that the third derivative of f exists. Let \(f \in C^{3}[\zeta _{1}, \zeta _{2}]\) be a 3-convex function for which (19) holds. Then we can represent f in the form (9). By means of some simple calculations we can write
$$\begin{aligned}& \sum_{\varrho =1}^{m}q_{\varrho }f(y_{\varrho })-f \Biggl(\sum_{\varrho =1} ^{m}q_{\varrho }y_{\varrho } \Biggr)- \sum_{\rho =1}^{n}p_{\rho }f(x_{\rho })+ f\Biggl(\sum_{\rho =1}^{n}p_{\rho }x_{\rho } \Biggr) \\& \quad = \frac{1}{2} \Biggl[\sum_{\varrho =1}^{m}q_{\varrho }y_{\varrho } ^{2}-\Biggl(\sum_{\varrho =1}^{m}q_{\varrho }y_{\varrho } \Biggr)^{2}-\sum_{\rho =1} ^{n}p_{\rho }x_{\rho }^{2}+ \Biggl(\sum _{\rho =1}^{n}p_{\rho }x_{\rho } \Biggr)^{2} \Biggr] f^{(2)}(\zeta _{2}) \\& \qquad {}+ \int _{\zeta _{1}}^{\zeta _{2}}\left ( \sum _{\varrho =1}^{m}q_{\varrho }G _{k}(y_{\varrho }, s)\right . \\& \qquad {}-\left .G_{k}\Biggl(\sum_{\varrho =1}^{m}q_{\varrho }y_{\varrho }, s\Biggr) - \sum_{\rho =1}^{n}p_{\rho }G_{k}(x_{\rho }, s)+G_{k}\Biggl(\sum_{\rho =1} ^{n}p_{\rho }x_{\rho }, s\Biggr)\right )f^{(3)}(s)\,ds. \end{aligned}$$
By the 3-convexity of f, we have \(f^{(3)}(s) \geq 0\) for all \(s \in I\). Hence, if (19) is valid for every \(s \in I\), then, for every 3-convex function \(f:I \rightarrow \mathbb{R}\) with \(f \in C^{3}[\zeta _{1}, \zeta _{2}]\), (18) is valid. □
Remark 1
If the expression
$$ \sum_{\varrho =1}^{m}q_{\varrho }y_{\varrho }^{2}- \Biggl(\sum_{\varrho =1} ^{m}q_{\varrho }y_{\varrho } \Biggr)^{2}-\sum_{\rho =1}^{n}p_{\rho }x_{ \rho }^{2}+ \Biggl(\sum_{\rho =1}^{n}p_{\rho }x_{\rho } \Biggr) ^{2} $$
and \(f^{(2)}(\zeta _{2})\) have different signs in (17) then inequalities (18) and (19) are reversed.
Next we give a generalization of the Bullen-type inequality (for real weights) given in [2] (see also [11, 16]).
Corollary 2
Let \(f: I= [\zeta _{1}, \zeta _{2}] \rightarrow \mathbb{R}\) be a 3-convex function and \(f \in C^{3}[\zeta _{1}, \zeta _{2}]\), \(x_{1}, \ldots , x_{n} \), \(y_{1}, \ldots , y_{m} \in I\) such that
$$ \max \{x_{1}, \ldots , x_{n}\} \leq \min \{y_{1}, \ldots , y_{m}\} $$
(20)
and
$$ x_{1}+y_{1}= \cdots =x_{n}+y_{m}. $$
(21)
Also let \((p_{1}, \ldots , p_{n}) \in \mathbb{R}^{n}\), \((q_{1}, \ldots , q_{m})\in \mathbb{R}^{m}\) such that \(\sum_{\rho =1}^{n}p_{ \rho }=1\) and \(\sum_{\varrho =1}^{m}q_{\varrho }=1\) and \(x_{\rho }\), \(y _{\varrho }\), \(\sum_{\rho =1}^{n}p_{\rho }x_{\rho }\), \(\sum_{\varrho =1} ^{m}q_{\varrho }y_{\varrho } \in I\). If (17) holds, then (18) and (19) are equivalent.
Proof
By choosing \(x_{\rho }\) and \(y_{\varrho }\) such that conditions (20) and (21) hold in Theorem 2, we get the required result. □
Remark 2
If \(p_{\rho }=q_{\varrho }\) are positive and \(x_{\rho }\), \(y_{\varrho }\) satisfy (20) and (21), then inequality (18) reduces to Bullen’s inequality given in [16, p. 32, Theorem 2] for \(m=n\).
Next we give a generalized form (for real weights) of the Bullen-type inequality given in [17] (see also [16]).
Corollary 3
Let \(f: I= [\zeta _{1}, \zeta _{2}] \rightarrow \mathbb{R}\) be a 3-convex function and \(f \in C^{3}[\zeta _{1}, \zeta _{2}]\), \((p_{1}, \ldots , p_{n}) \in \mathbb{R}^{n}\), \((q_{1}, \ldots , q_{m}) \in \mathbb{R}^{m}\) such that \(\sum_{\rho =1}^{n}p_{\rho }=1\) and \(\sum_{\varrho =1}^{m}q_{\varrho }=1\). Also let \(x_{1}, \ldots , x _{n} \) and \(y_{1}, \ldots , y_{m} \in I\) be such that \(x_{\rho }+y_{ \varrho }=2c\) and, for \(\rho =1, \ldots , n\), \(x_{\rho }+x_{n-\rho +1} \leq 2c \) and \(\frac{p_{\rho }x_{\rho }+p_{n-\rho +1}x_{n-\rho +1}}{p_{\rho }+p _{n-\rho +1}} \leq c\). If (17) holds, then (18) and (19) are equivalent.
Proof
Using Theorem 2 with the conditions given in the statement we get the required result. □
Remark 3
In Theorem 2, if \(m=n\), \(p_{\rho }=q_{\varrho }\) are positive, \(x_{\rho }+y_{\varrho }=2c\), \(x_{\rho }+x_{n-\rho +1} \leq 2c \) and \(\frac{p_{\rho }x_{\rho }+p_{n-\rho +1}x_{n-\rho +1}}{p_{\rho }+p_{n- \rho +1}} \leq c\), then (18) reduces to the generalized form of Bullen’s inequality given in [16, p. 32, Theorem 4].
In [15], Mercer made a notable work by replacing the condition (21) of symmetric distribution of points \(x_{\rho }\) and \(y_{\varrho }\) with symmetric variances of points \(x_{\rho }\) and \(y_{\varrho }\) for \(\rho =1, \ldots , n\) and \(\varrho =1, \ldots , m\).
So in the next result we use Mercer’s condition (6), but for \(\rho =\varrho \) and \(m=n\).
Corollary 4
Let \(f: I= [\zeta _{1}, \zeta _{2}] \rightarrow \mathbb{R}\) be a 3-convex function and \(f \in C^{3}[\zeta _{1}, \zeta _{2}]\), and let \(p_{\rho }\), \(q_{\rho }\) be positive such that \(\sum_{\rho =1}^{n}p _{\rho }=1\) and \(\sum_{\rho =1}^{n}q_{\rho }=1\). Also let \(x_{\rho }\), \(y_{\rho }\) satisfy (20) and
$$ \sum_{\rho =1}^{n}p_{\rho } \Biggl(x_{\rho }-\sum_{\rho =1}^{n}p_{ \rho }x_{\rho } \Biggr)^{2}=\sum_{\rho =1}^{n}p_{\rho } \Biggl(y_{ \rho }-\sum_{\rho =1} ^{n}q_{\rho }y_{\rho } \Biggr)^{2}. $$
(22)
If (17) holds, then (18) and (19) are equivalent.
Proof
For positive weights, using (6) and (20) in Theorem 2, we get the required result. □
Next we give results based on the generalization of the Levinson-type inequality given in [12] (see also [16]).
Corollary 5
Let \(f: I= [0, 2\alpha ] \rightarrow \mathbb{R}\) be a 3-convex function and \(f \in C^{3}[0, 2\alpha ]\), \(x_{1}, \ldots , x_{n} \in (0, \alpha )\), \((p_{1}, \ldots , p_{n}) \in \mathbb{R}^{n}\) and \(\sum_{\rho =1}^{n}p_{\rho }=1\). Also let \(x_{\rho }\), \(\sum_{\rho =1}^{n}p_{\rho }(2\alpha -x_{\rho })\), \(\sum_{\rho =1}^{n}p_{\rho }x_{\rho } \in I\). Then the following are equivalent:
$$ \sum_{\rho =1}^{n}p_{\rho }f(x_{\rho })-f \Biggl(\sum_{\rho =1}^{n}p_{\rho }x _{\rho }\Biggr)\leq \sum_{\rho =1}^{n}p_{\rho }f(2 \alpha -x_{\rho })- f\Biggl( \sum_{\rho =1}^{n}p_{\rho }(2 \alpha -x_{\rho })\Biggr). $$
(23)
For all \(s \in I\)
$$\begin{aligned} \sum_{\rho =1}^{n}p_{\rho }G_{k}(x_{\rho }, s) -G_{k}\Biggl(\sum_{\rho =1} ^{n}p_{\rho }x_{\rho }, s\Biggr) \leq& \sum _{\rho =1}^{n}p_{\rho }G_{k}(2 \alpha -x_{\rho }, s) \\ &{}-G_{k}\Biggl(\sum_{\rho =1}^{n}p_{\rho }(2 \alpha -x_{\rho }), s\Biggr), \end{aligned}$$
(24)
where \(G_{k}(\cdot , s)\) is defined in (10) and (12) for \(k=1, 2\), respectively.
Proof
Taking \(I=[0, 2\alpha ]\), \(x_{1}, \ldots , x_{n} \in (0, \alpha )\), \(p_{\rho }=q_{\varrho }\), \(m=n\) and \(y_{\varrho }=(2\alpha -x_{\rho })\) in Theorem 2 with \(0 \leq \zeta _{1} < \zeta _{2} \leq 2 \alpha \), we get the required result. □
Remark 4
In Corollary 5 if \(p_{\rho }\) are positive then inequality (23) reduces to Levinson’s inequality given in [16, p. 32, Theorem 1].

3 New bounds for Levinson’s type functionals

Consider the Čebyšev functional for two Lebesgue integrable functions \(f_{1}, f_{2}: [\zeta _{1}, \zeta _{2}] \rightarrow \mathbb{R}\),
$$\begin{aligned} \varTheta (f_{1}, f_{2}) =& \frac{1}{\zeta _{2}-\zeta _{1}} \int ^{\zeta _{2}} _{\zeta _{1}}f_{1}(x)f_{2}(x)\,dx \\ &{}- \frac{1}{\zeta _{2}-\zeta _{1}} \int ^{\zeta _{2}}_{\zeta _{1}}f_{1}(x)\,dx\cdot \frac{1}{ \zeta _{2}-\zeta _{1}} \int ^{\zeta _{2}}_{\zeta _{1}}f_{2}(x)\,dx, \end{aligned}$$
(25)
where the integrals are assumed to exist.
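The functional (25) is straightforward to approximate by quadrature. The sketch below (ours; `cheb` is a hypothetical helper name) checks it on a case with a known value, \(\varTheta (x, x)=1/12\) on \([0, 1]\), the variance of the uniform distribution:

```python
def cheb(f1, f2, z1, z2, m=2000):
    """Chebyshev functional (25), approximated by composite Simpson rule."""
    h = (z2 - z1) / m
    quad = lambda g: h / 3 * sum(
        (1 if k in (0, m) else 4 if k % 2 else 2) * g(z1 + k * h)
        for k in range(m + 1))
    L = z2 - z1
    return quad(lambda u: f1(u) * f2(u)) / L - quad(f1) / L * quad(f2) / L

# Theta(id, id) on [0, 1] is the variance of the uniform distribution, 1/12.
assert abs(cheb(lambda u: u, lambda u: u, 0.0, 1.0) - 1.0 / 12.0) < 1e-9
```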
Theorem F
([3])
Let \(f_{1} : [\zeta _{1}, \zeta _{2}] \rightarrow \mathbb{R}\) be a Lebesgue integrable function and \(f_{2} : [\zeta _{1}, \zeta _{2}] \rightarrow \mathbb{R}\) be an absolutely continuous function with \((\cdot -\zeta _{1})(\zeta _{2}-\cdot)[f'_{2}]^{2} \in L[\zeta _{1}, \zeta _{2}]\). Then
$$\begin{aligned} \bigl\vert \varTheta (f_{1}, f_{2}) \bigr\vert \leq \frac{1}{\sqrt{2}}\bigl[\varTheta (f_{1}, f _{1})\bigr]^{\frac{1}{2}}\frac{1}{\sqrt{\zeta _{2}-\zeta _{1}}} \biggl( \int ^{ \zeta _{2}}_{\zeta _{1}} (t-\zeta _{1}) ( \zeta _{2}-t)\bigl[f'_{2}(t) \bigr]^{2}\,dt \biggr) ^{\frac{1}{2}}. \end{aligned}$$
(26)
The constant \(\frac{1}{\sqrt{2}}\) is the best possible.
Theorem G
([3])
Let \(f_{1}: [\zeta _{1}, \zeta _{2}] \rightarrow \mathbb{R}\) be absolutely continuous with \(f^{\prime }_{1} \in L_{\infty }[\zeta _{1}, \zeta _{2}]\) and let \(f_{2}: [\zeta _{1}, \zeta _{2}] \rightarrow \mathbb{R}\) be monotonic nondecreasing on \([\zeta _{1}, \zeta _{2}]\). Then
$$\begin{aligned} \bigl\vert \varTheta (f_{1}, f_{2}) \bigr\vert \leq \frac{1}{2(\zeta _{2}-\zeta _{1})} \bigl\Vert f^{\prime }_{1} \bigr\Vert _{ \infty } \int ^{\zeta _{2}}_{\zeta _{1}} (t-\zeta _{1}) ( \zeta _{2}-t)\,df _{2}(t). \end{aligned}$$
(27)
The constant \(\frac{1}{2}\) is the best possible.
In the next result we construct a Čebyšev-type bound for the functional defined in (14).
Theorem 3
Let \(f\in C^{3}[\zeta _{1}, \zeta _{2}]\) such that \(f: I= [\zeta _{1}, \zeta _{2}] \rightarrow \mathbb{R}\) and \(f^{(3)}(\cdot)\) is absolutely continuous with \((\cdot -\zeta _{1})(\zeta _{2} -\cdot)[f^{(4)}]^{2} \in L[\zeta _{1}, \zeta _{2}]\). Also let \((p_{1}, \ldots , p_{n}) \in \mathbb{R} ^{n}\), \((q_{1}, \ldots , q_{m}) \in \mathbb{R}^{m}\) be such that \(\sum_{\rho =1}^{n}p_{\rho }=1\), \(\sum_{\varrho =1}^{m}q_{\varrho }=1\), \(x_{\rho }\), \(y_{\varrho }\), \(\sum_{\rho =1}^{n}p_{\rho }x_{\rho }\), \(\sum_{\varrho =1}^{m}q_{\varrho }y_{\varrho } \in I\). Then
$$\begin{aligned} J\bigl(f(\cdot)\bigr) =&\frac{1}{2} \Biggl[\sum _{\varrho =1}^{m}q_{\varrho }y_{ \varrho }^{2}- \Biggl(\sum_{\varrho =1}^{m}q_{\varrho }y_{\varrho } \Biggr)^{2}- \sum_{\rho =1}^{n}p_{\rho }x_{\rho }^{2} +\Biggl(\sum_{\rho =1}^{n}p_{ \rho }x_{\rho } \Biggr)^{2} \Biggr] f^{(2)}(\zeta _{2}) \\ &{}+\frac{f^{(2)}(\zeta _{2}) - f^{(2)}(\zeta _{1})}{(\zeta _{2}-\zeta _{1})} \int _{\zeta _{1}}^{\zeta _{2}}J\bigl(G_{k}(\cdot , s)\bigr)\,ds + \mathcal{R}_{3}(\zeta _{1}, \zeta _{2}; f), \end{aligned}$$
(28)
where \(J(f(\cdot))\), \(J(G_{k}(\cdot , s))\) are defined in (14) and (15), respectively, and the remainder \(\mathcal{R}_{3}(\zeta _{1}, \zeta _{2}; f)\) satisfies the bound
$$\begin{aligned} \bigl\vert \mathcal{R}_{3}(\zeta _{1}, \zeta _{2}; f) \bigr\vert \leq& \frac{\zeta _{2}-\zeta _{1}}{\sqrt{2}} \bigl[\varTheta \bigl(J\bigl(G_{k}(\cdot , s) \bigr),J\bigl(G_{k}(\cdot , s)\bigr)\bigr) \bigr] ^{\frac{1}{2}}\times \\ &{} \frac{1}{\sqrt{\zeta _{2} - \zeta _{1}}} \biggl( \int _{\zeta _{1}}^{\zeta _{2}}(s - \zeta _{1}) ( \zeta _{2} - s)\bigl[f^{(4)}(s)\bigr]^{2}\,ds \biggr)^{ \frac{1}{2}}, \end{aligned}$$
(29)
for \(G_{k}(\cdot , s)\) (\(k=1, 2\)) defined in (10) and (12), respectively.
Proof
Setting \(f_{1} \mapsto J(G_{k}(\cdot , s))\) and \(f_{2} \mapsto f^{(3)}\) in Theorem F, we get
$$\begin{aligned}& \biggl\vert \frac{1}{\zeta _{2} - \zeta _{1}} \int _{\zeta _{1}}^{\zeta _{2}}J\bigl(G _{k}(\cdot , s)\bigr)f^{(3)}(s)\,ds - \frac{1}{\zeta _{2} - \zeta _{1}} \int _{\zeta _{1}}^{\zeta _{2}}J\bigl(G_{k}(\cdot , s)\bigr)\,ds\cdot \frac{1}{\zeta _{2} - \zeta _{1}} \int _{\zeta _{1}}^{\zeta _{2}}f^{(3)}(s)\,ds \biggr\vert \\& \quad \leq \frac{1}{\sqrt{2}}\bigl[\varTheta \bigl(J\bigl(G_{k}( \cdot , s)\bigr), J\bigl(G_{k}(\cdot , s)\bigr)\bigr) \bigr]^{ \frac{1}{2}}\frac{1}{\sqrt{\zeta _{2} - \zeta _{1}}} \biggl( \int _{\zeta _{1}}^{\zeta _{2}}(s - \zeta _{1}) ( \zeta _{2} - s)\bigl[f^{(4)}(s)\bigr]^{2}\,ds \biggr) ^{\frac{1}{2}}, \\& \biggl\vert \frac{1}{\zeta _{2} - \zeta _{1}} \int _{\zeta _{1}}^{\zeta _{2}}J\bigl(G _{k}(\cdot , s)\bigr)f^{(3)}(s)\,ds - \frac{f^{(2)}(\zeta _{2}) - f^{(2)}(\zeta _{1})}{(\zeta _{2} - \zeta _{1})^{2}} \int _{\zeta _{1}}^{\zeta _{2}}J\bigl(G _{k}(\cdot , s)\bigr)\,ds \biggr\vert \\& \quad \leq \frac{1}{\sqrt{2}}\bigl[\varTheta \bigl(J\bigl(G_{k}( \cdot , s)\bigr), J\bigl(G_{k}(\cdot , s)\bigr)\bigr) \bigr]^{ \frac{1}{2}}\frac{1}{\sqrt{\zeta _{2} - \zeta _{1}}} \biggl( \int _{\zeta _{1}}^{\zeta _{2}}(s - \zeta _{1}) ( \zeta _{2} - s)\bigl[f^{(4)}(s)\bigr]^{2}\,ds \biggr) ^{\frac{1}{2}}. \end{aligned}$$
Multiplying both sides of the above inequality by \((\zeta _{2} - \zeta _{1})\) and using the estimate (29), we get
$$\begin{aligned} \int _{\zeta _{1}}^{\zeta _{2}}J\bigl(G_{k}(\cdot , s)\bigr)f^{(3)}(s)\,ds = \frac{f^{(2)}( \zeta _{2}) - f^{(2)}(\zeta _{1})}{(\zeta _{2} - \zeta _{1})} \int _{\zeta _{1}}^{\zeta _{2}}J\bigl(G_{k}(\cdot , s)\bigr)\,ds + \mathcal{R}_{3}(\zeta _{1}, \zeta _{2}; f). \end{aligned}$$
Using the identity (13), we get (28). □
In the next result a Grüss-type bound is estimated.
Theorem 4
Let \(f\in C^{3}[\zeta _{1}, \zeta _{2}]\) such that \(f: I= [\zeta _{1}, \zeta _{2}] \rightarrow \mathbb{R}\), \(f^{(3)}(\cdot)\) is absolutely continuous and \(f^{(4)}(\cdot) \geq 0\) a.e. on \([\zeta _{1}, \zeta _{2}]\). Also let \((p_{1}, \ldots , p_{n}) \in \mathbb{R}^{n}\), \((q_{1}, \ldots , q_{m}) \in \mathbb{R}^{m}\) be such that \(\sum_{\rho =1}^{n}p _{\rho }=1\), \(\sum_{\varrho =1}^{m}q_{\varrho }=1\), \(x_{\rho }\), \(y_{ \varrho }\), \(\sum_{\rho =1}^{n}p_{\rho }x_{\rho }\), \(\sum_{\varrho =1} ^{m}q_{\varrho }y_{\varrho } \in I\). Then identity (28) holds, where the remainder satisfies the estimation
$$\begin{aligned} \bigl\vert \mathcal{R}_{3}(\zeta _{1}, \zeta _{2}; f) \bigr\vert \leq (\zeta _{2} - \zeta _{1}) \bigl\Vert J\bigl(G_{k}( \cdot , s)\bigr)^{\prime } \bigr\Vert _{\infty } \biggl[ \frac{f^{(2)}(\zeta _{2})+f ^{(2)}(\zeta _{1})}{2} - \frac{f^{(1)}(\zeta _{2})- f^{(1)}(\zeta _{1})}{ \zeta _{2} - \zeta _{1}} \biggr]. \end{aligned}$$
(30)
Proof
Setting \(f_{1} \mapsto J(G_{k}(\cdot , s))\) and \(f_{2} \mapsto f^{(3)}\) in Theorem G, we get
$$\begin{aligned}& \biggl\vert \frac{1}{\zeta _{2} - \zeta _{1}} \int _{\zeta _{1}}^{\zeta _{2}}J\bigl(G _{k}(\cdot , s)\bigr)f^{(3)}(s)\,ds - \frac{1}{\zeta _{2} - \zeta _{1}} \int _{\zeta _{1}}^{\zeta _{2}}J\bigl(G_{k}(\cdot , s)\bigr)\,ds\cdot \frac{1}{\zeta _{2} - \zeta _{1}} \int _{\zeta _{1}}^{\zeta _{2}}f^{(3)}(s)\,ds \biggr\vert \\& \quad \leq \frac{1}{2} \bigl\Vert J\bigl(G_{k}(\cdot , s)\bigr)^{\prime } \bigr\Vert _{\infty }\frac{1}{\zeta _{2} - \zeta _{1}} \int _{\zeta _{1}}^{\zeta _{2}}(s - \zeta _{1}) ( \zeta _{2} - s)f ^{(4)}(s)\,ds. \end{aligned}$$
(31)
Since
$$\begin{aligned}& \int _{\zeta _{1}}^{\zeta _{2}}(s - \zeta _{1}) ( \zeta _{2} - s)f^{(4)}(s)\,ds \\& \quad = \int _{\zeta _{1}}^{\zeta _{2}}[2s - \zeta _{1} - \zeta _{2}]f^{(3)}(s)\,ds \\& \quad = (\zeta _{2} - \zeta _{1})\bigl[f^{(2)}( \zeta _{2}) + f^{(2)}(\zeta _{1})\bigr] - 2 \bigl(f^{(1)}(\zeta _{2}) - f^{(1)}(\zeta _{1})\bigr), \end{aligned}$$
(32)
using (13), (31) and (32), we obtain (28) together with the bound (30). □
Next we give Ostrowski-type bounds for the functional defined in (14).
Theorem 5
Let \(f\in C^{3}[\zeta _{1}, \zeta _{2}]\) such that \(f: I= [\zeta _{1}, \zeta _{2}] \rightarrow \mathbb{R}\) and \(f^{(2)}(\cdot)\) is absolutely continuous. Also let \((p_{1}, \ldots , p_{n}) \in \mathbb{R}^{n}\), \((q_{1}, \ldots , q_{m}) \in \mathbb{R}^{m}\) such that \(\sum_{\rho =1} ^{n}p_{\rho }=1\), \(\sum_{\varrho =1}^{m}q_{\varrho }=1\), \(x_{\rho }\), \(y _{\varrho }\), \(\sum_{\rho =1}^{n}p_{\rho }x_{\rho }\), \(\sum_{\varrho =1} ^{m}q_{\varrho }y_{\varrho } \in I\). Also let \((r, s)\) be a pair of conjugate exponents, that is, \(1 \leq r, s \leq \infty \), \(\frac{1}{r}+ \frac{1}{s}=1\). If \(|f^{(3)}|^{r}: [\zeta _{1}, \zeta _{2}] \rightarrow \mathbb{R}\) is a Riemann integrable function, then
$$\begin{aligned}& \Biggl\vert J\bigl(f(\cdot)\bigr) -\frac{1}{2} \Biggl[\sum_{\varrho =1}^{m}q_{\varrho }y _{\varrho }^{2}-\Biggl(\sum_{\varrho =1}^{m}q_{\varrho }y_{\varrho } \Biggr)^{2}- \sum_{\rho =1}^{n} p_{\rho }x_{\rho }^{2} +\Biggl(\sum _{\rho =1}^{n}p _{\rho }x_{\rho } \Biggr)^{2} \Biggr] f^{(2)}(\zeta _{2}) \Biggr\vert \\& \quad \leq \bigl\Vert f^{(3)} \bigr\Vert _{r} \biggl( \int _{\zeta _{1}}^{\zeta _{2}} \bigl\vert J\bigl(G _{k}(\cdot , s)\bigr) \bigr\vert ^{s}\,ds \biggr)^{\frac{1}{s}}. \end{aligned}$$
(33)
Proof
From identity (13) we have
$$\begin{aligned}& \Biggl\vert J\bigl(f(\cdot)\bigr) -\frac{1}{2} \Biggl(\sum_{\varrho =1}^{m}q_{\varrho }y _{\varrho }^{2}-\Biggl(\sum_{\varrho =1}^{m}q_{\varrho }y_{\varrho } \Biggr)^{2}- \sum_{\rho =1}^{n} p_{\rho } x_{\rho }^{2}+\Biggl(\sum _{\rho =1}^{n}p_{ \rho }x_{\rho } \Biggr)^{2} \Biggr)f^{(2)}(\zeta _{2}) \Biggr\vert \\& \quad \leq \int _{\zeta _{1}}^{\zeta _{2}} \bigl\vert J\bigl(G_{k}(\cdot , s)\bigr) \bigr\vert \bigl\vert f^{(3)}(s) \bigr\vert \,ds. \end{aligned}$$
(34)
Applying the classical Hölder inequality to the right-hand side of (34) yields (33). □

4 Application to information theory

The Shannon entropy is the central concept of information theory, sometimes referred to as a measure of uncertainty. The entropy of a random variable is defined in terms of its probability distribution and can be shown to be a good measure of randomness or uncertainty. The Shannon entropy allows one to estimate the average minimum number of bits needed to encode a string of symbols, based on the alphabet size and the frequency of the symbols.
Divergences between probability distributions are widely used to measure the difference between them. A variety of divergences exist, for instance the f-divergence (in particular, the Kullback–Leibler divergence, the Hellinger distance and the total variation distance), the Rényi divergence, the Jensen–Shannon divergence, and so on (see [13, 21]). There are many papers dealing with inequalities and entropies, see, e.g., [8, 10, 20] and the references therein. The Jensen inequality plays a crucial role in a number of these inequalities. However, Jensen's inequality deals with one type of data points, whereas Levinson's inequality deals with two types of data points.
The Zipf law is one of the central laws in information science, and it has been widely used in linguistics. Zipf found in 1932 that one can count how often each word appears in a text; if words are ranked \((r)\) according to their frequency of occurrence \((f)\), then the product of these two numbers is a constant \((C)\): \(C = r \times f\). Apart from its use in information science and linguistics, the Zipf law appears in city populations, solar flare intensities, website traffic, earthquake magnitudes, the sizes of moon craters, and so forth. In economics this distribution is known as the Pareto law, which analyzes the distribution of the wealthiest individuals in a community [6, p. 125]. These two laws are equivalent in the mathematical sense, but they are used in different contexts [7, p. 294].

4.1 Csiszár divergence

In [4, 5] Csiszár gave the following definition.
Definition 1
Let f be a convex function from \(\mathbb{R}^{+}\) to \(\mathbb{R}^{+}\). Let \(\tilde{\mathbf{r}}, \tilde{\mathbf{k}} \in \mathbb{R}_{+}^{n}\) be such that \(\sum_{s=1}^{n}r_{s}=1\) and \(\sum_{s=1}^{n}k_{s}=1\). Then the f-divergence functional is defined by
$$\begin{aligned} I_{f}(\tilde{\mathbf{r}}, \tilde{\mathbf{k}}) := \sum_{s=1}^{n}k_{s}f \biggl( \frac{r_{s}}{k_{s}} \biggr). \end{aligned}$$
By defining the following:
$$\begin{aligned} f(0) := \lim_{x \rightarrow 0^{+}}f(x); \qquad 0f \biggl( \frac{0}{0} \biggr):=0; \qquad 0f \biggl(\frac{a}{0} \biggr):= \lim_{x \rightarrow 0^{+}}xf \biggl(\frac{a}{x} \biggr), \quad a>0, \end{aligned}$$
he stated that nonnegative probability distributions can also be used.
Using the definition of the f-divergence functional, Horváth et al. [9] gave the following functional.
Definition 2
Let I be an interval contained in \(\mathbb{R}\) and \(f: I \rightarrow \mathbb{R}\) be a function. Also let \(\tilde{\mathbf{r}}=(r_{1}, \ldots , r_{n})\in \mathbb{R}^{n}\) and \(\tilde{\mathbf{k}}=(k_{1}, \ldots , k_{n})\in (0, \infty )^{n}\) be such that
$$\begin{aligned} \frac{r_{s}}{k_{s}} \in I, \quad s= 1, \ldots , n. \end{aligned}$$
Then
$$\begin{aligned} \hat{I}_{f}(\tilde{\mathbf{r}}, \tilde{\mathbf{k}}) : = \sum _{s=1} ^{n}k_{s}f \biggl( \frac{r_{s}}{k_{s}} \biggr). \end{aligned}$$
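For illustration, the functional \(\hat{I}_{f}\) of Definition 2 is straightforward to evaluate numerically. The following Python sketch (ours, with a hypothetical helper name, not part of the original results) recovers the Kullback–Leibler divergence as the special case \(f(t)=t\log t\):

```python
import math

def i_hat(f, r, k):
    """The functional hat{I}_f(r, k) = sum_s k_s * f(r_s / k_s) of Definition 2."""
    if len(r) != len(k):
        raise ValueError("r and k must have the same length")
    return sum(ks * f(rs / ks) for rs, ks in zip(r, k))

# The Kullback-Leibler divergence KL(r || k) arises from the convex f(t) = t*log(t).
kl = i_hat(lambda t: t * math.log(t), (0.2, 0.3, 0.5), (1/3, 1/3, 1/3))
```

By Jensen's inequality, `kl` is nonnegative and vanishes exactly when the two distributions coincide.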
Theorem 6
Let \(\tilde{\mathbf{r}}= (r_{1}, \ldots , r_{n} )\), \(\tilde{\mathbf{k}}= (k_{1}, \ldots , k_{n} )\) be in \((0, \infty )^{n}\) and \(\tilde{\mathbf{w}}= (w_{1}, \ldots , w _{m} )\), \(\tilde{\mathbf{t}}= (t_{1}, \ldots , t_{m} )\) be in \((0, \infty )^{m}\) such that
$$\begin{aligned} \frac{r_{s}}{k_{s}} \in I, \quad s = 1, \ldots , n, \end{aligned}$$
and
$$\begin{aligned} \frac{w_{u}}{t_{u}} \in I, \quad u = 1, \ldots , m. \end{aligned}$$
If
$$\begin{aligned}& \Biggl[\frac{1}{\sum_{u=1}^{m}t_{u}}\sum_{u=1}^{m} \frac{(w_{u})^{2}}{t _{u}}- \Biggl(\sum_{u=1}^{m} \frac{w_{u}}{\sum_{u=1}^{m}t_{u}} \Biggr)^{2} -\frac{1}{\sum_{v=1}^{n}k_{v}}\sum _{v=1}^{n}\frac{(r_{v})^{2}}{k_{v}} \\& \quad {}+ \Biggl(\sum_{v=1}^{n} \frac{r_{v}}{\sum_{v=1}^{n}k_{v}} \Biggr)^{2} \Biggr]f^{(2)}(\zeta _{2})\geq 0, \end{aligned}$$
(35)
then the following are equivalent.
(i)
For every continuous 3-convex function \(f: I \rightarrow \mathbb{R}\),
$$ J_{\hat{f}}(r, w, k, t)\geq 0. $$
(36)
 
(ii)
$$ J_{G_{k}}(r, w, k, t) \geq 0, $$
(37)
 
where
$$\begin{aligned} J_{\hat{f}}(r, w, k, t) =&\frac{1}{\sum_{u=1}^{m}t_{u}} \hat{I}_{f}( \tilde{\mathbf{w}}, \tilde{\mathbf{t}})- f \Biggl(\sum _{u=1}^{m}\frac{w _{u}}{\sum_{u=1}^{m}t_{u}} \Biggr) -\frac{1}{\sum_{v=1}^{n}k_{v}} \hat{I}_{f}(\tilde{\mathbf{r}}, \tilde{ \mathbf{k}}) \\ &{}+f \Biggl(\sum_{v=1}^{n} \frac{r_{v}}{\sum_{v=1}^{n}k_{v}} \Biggr). \end{aligned}$$
(38)
Proof
Using \(p_{v} = \frac{k_{v}}{\sum_{v=1}^{n}k_{v}}\), \(x_{v} = \frac{r _{v}}{k_{v}}\), \(q_{u} = \frac{t_{u}}{\sum_{u=1}^{m}t_{u}}\) and \(y_{u} = \frac{w_{u}}{t_{u}}\) in Theorem 2, we get the required results. □
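To make (38) concrete, the following sketch (ours; the function names are hypothetical) evaluates \(J_{\hat{f}}\) for the 3-convex function \(f(x)=x^{3}\); Theorem 6 relates the sign of this quantity to conditions (35) and (37):

```python
def j_hat(f, r, w, k, t):
    """The functional J_{f}(r, w, k, t) of eq. (38)."""
    # hat{I}_f of Definition 2, inlined for self-containment
    i_hat = lambda dist, weights: sum(ws * f(ds / ws) for ds, ws in zip(dist, weights))
    K, T = sum(k), sum(t)
    return (i_hat(w, t) / T - f(sum(w) / T)
            - i_hat(r, k) / K + f(sum(r) / K))

# f(x) = x^3 is 3-convex on (0, inf); sample tuples chosen for illustration only
val = j_hat(lambda x: x ** 3, r=(1, 2), w=(1, 1), k=(1, 1), t=(1, 1))
```

Here `val` is negative, illustrating that without hypotheses (35) and (37) the functional need not be nonnegative.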

4.2 Shannon entropy

Definition 3
(See [9])
The Shannon entropy of a positive probability distribution \(\tilde{\mathbf{r}}=(r_{1}, \ldots , r_{n})\) is defined by
$$\begin{aligned} \mathcal{S} : = - \sum_{v=1}^{n}r_{v} \log (r_{v}). \end{aligned}$$
(39)
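Definition (39) can be computed directly; the following sketch (ours, with a hypothetical function name) uses base-2 logarithms, which gives the bit count mentioned at the start of this section:

```python
import math

def shannon_entropy(r, base=math.e):
    """Shannon entropy of a positive probability distribution, as in (39)."""
    if any(p <= 0 for p in r) or abs(sum(r) - 1.0) > 1e-9:
        raise ValueError("r must be a positive probability distribution")
    return -sum(p * math.log(p, base) for p in r)

# Four equally likely symbols need 2 bits on average.
h = shannon_entropy((0.25, 0.25, 0.25, 0.25), base=2)
```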
Corollary 6
Let \(\tilde{\mathbf{r}}= (r_{1}, \ldots , r_{n} )\) and \(\tilde{\mathbf{w}}= (w_{1}, \ldots , w_{m} )\) be probability distributions, \(\tilde{\mathbf{k}}= (k_{1}, \ldots , k_{n} )\) be in \((0, \infty )^{n}\) and \(\tilde{\mathbf{t}}= (t_{1}, \ldots , t_{m} )\) be in \((0, \infty )^{m}\). If
$$\begin{aligned}& \Biggl[\frac{1}{\sum_{u=1}^{m}t_{u}}\sum_{u=1}^{m} \frac{(w_{u})^{2}}{t _{u}}- \Biggl(\sum_{u=1}^{m} \frac{w_{u}}{\sum_{u=1}^{m}t_{u}} \Biggr)^{2} -\frac{1}{\sum_{v=1}^{n}k_{v}}\sum _{v=1}^{n}\frac{(r_{v})^{2}}{k_{v}} \\& \quad {}+ \Biggl(\sum_{v=1}^{n} \frac{r_{v}}{\sum_{v=1}^{n}k_{v}} \Biggr)^{2} \Biggr]\geq 0 \end{aligned}$$
(40)
and
$$ J_{G_{k}}(r, w, k, t)\leq 0, $$
(41)
then
$$ J_{s}(r, w, k, t) \leq 0, $$
(42)
where
$$\begin{aligned} J_{s}(r, w, k, t) =&\frac{1}{\sum_{u=1}^{m}t_{u}} \Biggl[\tilde{ \mathcal{S}}+\sum_{u=1}^{m}w_{u} \log (t_{u}) \Biggr]+ \Biggl[\sum_{u=1} ^{m}\frac{w_{u}}{\sum_{u=1}^{m} t_{u}}\log \Biggl(\sum _{u=1}^{m}\frac{w _{u}}{\sum_{u=1}^{m}t_{u}} \Biggr) \Biggr] \\ &{}-\frac{1}{\sum_{v=1}^{n}k_{v}} \Biggl[\mathcal{S}- \sum _{v=1}^{n}r_{v}\log (k_{v}) \Biggr] \\ &{} - \Biggl[\sum_{v=1}^{n} \frac{r_{v}}{ \sum_{v=1}^{n} k_{v}}\log \Biggl(\sum_{v=1}^{n} \frac{r_{v}}{\sum_{v=1} ^{n}k_{v}} \Biggr) \Biggr]; \end{aligned}$$
(43)
\(\mathcal{S}\) is defined in (39) and
$$ \tilde{\mathcal{S}} : = - \sum_{u=1}^{m}w_{u} \log (w_{u}). $$
If the base of the log is less than 1, then (41) and (42) hold in the reverse direction.
Proof
The function \(x \mapsto -x\log (x)\) is 3-convex when the base of the log is greater than 1. So, using \(f(x):= -x\log (x)\) in (35) and (36), we get the required results by Remark 1. □

4.3 Rényi divergence and entropy

The Rényi divergence and the Rényi entropy are given in [19].
Definition 4
Let \(\tilde{\mathbf{r}}, \tilde{\mathbf{q}} \in \mathbb{R}_{+}^{n}\) be such that \(\sum_{1}^{n}r_{i}=1\) and \(\sum_{1}^{n}q_{i}=1\), and let \(\delta \geq 0\), \(\delta \neq 1\).
(a)
The Rényi divergence of order δ is defined by
$$\begin{aligned} \mathcal{D}_{\delta }(\tilde{\mathbf{r}}, \tilde{ \mathbf{q}}) : = \frac{1}{\delta - 1} \log \Biggl(\sum _{i=1} ^{n}q_{i} \biggl( \frac{r_{i}}{q_{i}} \biggr)^{\delta } \Biggr). \end{aligned}$$
(44)
 
(b)
The Rényi entropy of order δ of \(\tilde{\mathbf{r}}\) is defined by
$$\begin{aligned} \mathcal{H}_{\delta }(\tilde{\mathbf{r}}): = \frac{1}{1 - \delta } \log \Biggl( \sum_{i=1}^{n} r_{i}^{\delta } \Biggr). \end{aligned}$$
(45)
 
These definitions also hold for non-negative probability distributions.
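As a quick numerical sanity check (ours, not part of the paper; helper names hypothetical), the two quantities of Definition 4 can be evaluated directly:

```python
import math

def renyi_divergence(r, q, delta):
    """Renyi divergence of order delta (delta >= 0, delta != 1), as in (44)."""
    return math.log(sum(qi * (ri / qi) ** delta for ri, qi in zip(r, q))) / (delta - 1)

def renyi_entropy(r, delta):
    """Renyi entropy of order delta, as in (45)."""
    return math.log(sum(ri ** delta for ri in r)) / (1 - delta)

p = (0.1, 0.2, 0.3, 0.4)
d = renyi_divergence(p, p, 0.5)       # divergence of a distribution from itself is 0
h = renyi_entropy((0.25,) * 4, 2.0)   # uniform distribution: H_2 = log(4)
```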
Theorem 7
Let \(\tilde{\mathbf{r}} = (r_{1}, \ldots , r_{n})\), \(\tilde{\mathbf{k}} = (k_{1}, \ldots , k_{n}) \in \mathbb{R}_{+}^{n}\), \(\tilde{\mathbf{w}} = (w_{1}, \ldots , w_{m})\), \(\tilde{\mathbf{t}} = (t_{1}, \ldots , t_{m}) \in \mathbb{R}_{+}^{m}\) be such that \(\sum_{1}^{n}r_{v}=1\), \(\sum_{1}^{n}k_{v}=1\), \(\sum_{1}^{m}w_{u}=1\) and \(\sum_{1}^{m}t_{u}=1\).
If either \(1 < \delta \) and the base of the log is greater than 1 or \(\delta \in [0, 1)\) and the base of the log is less than 1, and if
$$\begin{aligned}& \Biggl[\sum_{u=1}^{m} \frac{(t_{u})^{2}}{w_{u}} \biggl(\frac{w_{u}}{t _{u}} \biggr)^{2\delta }- \Biggl(\sum_{u=1}^{m}t_{u} \biggl(\frac{w_{u}}{t _{u}} \biggr) ^{\delta } \Biggr)^{2} - \sum_{v=1}^{n}\frac{(k_{v})^{2}}{r _{v}} \biggl(\frac{r_{v}}{k_{v}} \biggr)^{2\delta } \\& \quad {}+ \Biggl(\sum_{v=1}^{n}k_{v} \biggl(\frac{r_{v}}{k_{v}} \biggr)^{\delta } \Biggr)^{2} \Biggr]\geq 0 \end{aligned}$$
(46)
and
$$\begin{aligned} \sum_{v=1}^{n}r_{v}G_{k} \biggl( \biggl(\frac{r_{v}}{k_{v}} \biggr)^{\delta -1}, s\biggr) -G_{k}\Biggl(\sum_{v=1}^{n}r_{v} \biggl(\frac{r_{v}}{k_{v}} \biggr)^{\delta -1}, s\Biggr) \geq& \sum _{u=1}^{m}w_{u}G_{k} \biggl( \biggl(\frac{w_{u}}{t_{u}} \biggr)^{ \delta -1}, s\biggr) \\ &{}-G_{k}\Biggl(\sum_{u=1}^{m}w_{u} \biggl(\frac{w_{u}}{t_{u}} \biggr)^{\delta -1}, s\Biggr), \end{aligned}$$
(47)
then
$$\begin{aligned} \sum_{v=1}^{n}r_{v} \log \biggl( \frac{r_{v}}{k_{v}} \biggr)-\mathcal{D}_{\delta }(\tilde{ \mathbf{r}}, \tilde{\mathbf{k}}) \geq \sum_{u=1}^{m}w_{u} \log \biggl( \frac{w_{u}}{t_{u}} \biggr)-\mathcal{D}_{\delta }(\tilde{ \mathbf{w}}, \tilde{\mathbf{t}}). \end{aligned}$$
(48)
If either \(1 < \delta \) and the base of the log is less than 1 or \(\delta \in [0, 1)\) and the base of the log is greater than 1, then (47) and (48) hold in the reverse direction.
Proof
We give the proof only for the case \(\delta \in [0, 1)\) with the base of the log greater than 1; the remaining cases are proved similarly.
Choose \(I = (0, \infty )\) and \(f=\log \). Then \(f^{(2)}(x)\) is negative and \(f^{(3)}(x)\) is positive, so f is 3-convex. Using \(f=\log \) and the substitutions
$$ p_{v} : = r_{v}, \qquad x_{v} : = \biggl( \frac{r_{v}}{k_{v}} \biggr)^{\delta - 1}, \quad v = 1, \ldots , n, $$
and
$$ q_{u} : = w_{u}, \qquad y_{u} : = \biggl( \frac{w_{u}}{t_{u}} \biggr)^{\delta - 1}, \quad u = 1, \ldots , m, $$
in the reverse of inequality (18) (by Remark 1), we have
$$\begin{aligned} (\delta -1)\sum_{v=1}^{n}r_{v} \log \biggl( \frac{r_{v}}{k_{v}} \biggr)- \log \Biggl(\sum _{v=1}^{n}k_{v}\biggl( \frac{r_{v}}{k_{v}}\biggr)^{\delta } \Biggr) \geq & (\delta -1)\sum _{u=1}^{m}w_{u} \log \biggl( \frac{w_{u}}{t_{u}} \biggr) \\ &{}-\log \Biggl(\sum_{u=1}^{m}t_{u} \biggl(\frac{w_{u}}{t_{u}}\biggr)^{\delta } \Biggr). \end{aligned}$$
(49)
Dividing (49) by \((\delta -1)\) and using
$$\begin{aligned}& \mathcal{D}_{\delta }(\tilde{\mathbf{r}}, \tilde{\mathbf{k}})= \frac{1}{\delta -1}\log \Biggl(\sum_{v=1}^{n}k_{v} \biggl(\frac{r _{v}}{k_{v}}\biggr)^{\delta } \Biggr), \\& \mathcal{D}_{\delta }(\tilde{\mathbf{w}}, \tilde{\mathbf{t}})= \frac{1}{\delta -1}\log \Biggl(\sum_{u=1}^{m}t_{u} \biggl(\frac{w _{u}}{t_{u}}\biggr)^{\delta } \Biggr), \end{aligned}$$
we get (48). □
Corollary 7
Let \(\tilde{\mathbf{r}} = (r_{1}, \ldots , r_{n}) \in \mathbb{R}_{+} ^{n}\), \(\tilde{\mathbf{w}} = (w_{1}, \ldots , w_{m}) \in \mathbb{R} _{+}^{m}\) be such that \(\sum_{1}^{n}r_{v}=1\) and \(\sum_{1}^{m}w_{u}=1\).
Also let
$$\begin{aligned}& \Biggl[\sum_{u=1}^{m} \frac{1}{m^{2}w_{u}} (mw_{u} )^{2\delta }- \Biggl(\sum _{u=1}^{m}\frac{1}{m} (m w_{u} )^{\delta } \Biggr)^{2}- \sum _{v=1}^{n}\frac{1}{n^{2}r_{v}} (nr_{v} )^{2\delta } \\& \quad {}+ \Biggl(\sum_{v=1}^{n} \frac{1}{n} (n r_{v} )^{\delta } \Biggr)^{2} \Biggr]\geq 0 \end{aligned}$$
(50)
and
$$\begin{aligned} \sum_{v=1}^{n}r_{v}G_{k} \bigl((nr_{v})^{\delta -1}, s \bigr) -G_{k} \Biggl( \sum_{v=1}^{n}r_{v}(nr_{v})^{\delta -1}, s \Biggr) \geq& \sum_{u=1} ^{m}w_{u}G_{k} \bigl((mw_{u})^{\delta -1}, s \bigr) \\ &{}-G_{k} \Biggl(\sum_{u=1}^{m}w_{u}(mw_{u})^{\delta -1}, s \Biggr). \end{aligned}$$
(51)
If \(1 < \delta \) and the base of the log is greater than 1, then
$$ \sum_{v=1}^{n}r_{v} \log (r_{v})+\mathcal{H}_{\delta }( \tilde{\mathbf{r}})\geq \sum_{u=1}^{m}w_{u}\log (w_{u})+\mathcal{H}_{\delta }(\tilde{\mathbf{w}}). $$
(52)
The reverse inequality holds in (51) and (52) if the base of the log is less than 1.
Proof
Suppose \(\tilde{\mathbf{k}}= (\frac{\textbf{1}}{\textbf{n}}, \ldots , \frac{\textbf{1}}{\textbf{n}} )\) and \(\tilde{\mathbf{t}}= (\frac{\textbf{1}}{\textbf{m}}, \ldots , \frac{\textbf{1}}{ \textbf{m}} )\). Then from (44), we have
$$\begin{aligned} \mathcal{D}_{\delta } (\tilde{\mathbf{r}}, \tilde{\mathbf{k}}) = \frac{1}{\delta - 1} \log \Biggl(\sum_{v=1}^{n}n ^{\delta - 1}r_{v}^{\delta } \Biggr) = \log (n) + \frac{1}{\delta - 1}\log \Biggl(\sum_{v=1}^{n}r_{v}^{\delta } \Biggr) \end{aligned}$$
and
$$\begin{aligned} \mathcal{D}_{\delta } (\tilde{\mathbf{w}}, \tilde{\mathbf{t}}) = \frac{1}{\delta - 1} \log \Biggl(\sum_{u=1}^{m}m ^{\delta - 1}w_{u}^{\delta } \Biggr) = \log (m) + \frac{1}{\delta - 1}\log \Biggl(\sum_{u=1}^{m}w_{u}^{\delta } \Biggr). \end{aligned}$$
This implies
$$\begin{aligned} \mathcal{H}_{\delta }(\tilde{\mathbf{r}}) = \log (n) - \mathcal{D}_{\delta } \biggl(\tilde{\mathbf{r}}, \frac{\textbf{1}}{ \textbf{n}} \biggr) \end{aligned}$$
(53)
and
$$\begin{aligned} \mathcal{H}_{\delta }(\tilde{\mathbf{w}}) = \log (m) - \mathcal{D}_{\delta } \biggl(\tilde{\mathbf{w}}, \frac{\textbf{1}}{ \textbf{m}} \biggr). \end{aligned}$$
(54)
It follows from Theorem 7, with \(\tilde{\mathbf{k}}= \frac{ \textbf{1}}{\textbf{n}}\) and \(\tilde{\mathbf{t}}= \frac{\textbf{1}}{ \textbf{m}}\), together with (53) and (54), that
$$ \sum_{v=1}^{n}r_{v} \log (nr_{v})-\log (n)+\mathcal{H}_{ \delta }(\tilde{ \mathbf{r}})\geq \sum_{u=1}^{m}w_{u} \log (m w_{u})- \log (m)+\mathcal{H}_{\delta }(\tilde{ \mathbf{w}}). $$
(55)
After some simple calculations we get (52). □
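Relations (53) and (54) are easy to verify numerically. This small sketch (ours, not from the paper) checks \(\mathcal{H}_{\delta }(\tilde{\mathbf{r}}) = \log (n) - \mathcal{D}_{\delta }(\tilde{\mathbf{r}}, \frac{\textbf{1}}{\textbf{n}})\) for a sample distribution:

```python
import math

r = (0.1, 0.2, 0.3, 0.4)
n, delta = len(r), 2.0

entropy = math.log(sum(ri ** delta for ri in r)) / (1 - delta)                     # eq. (45)
divergence = math.log(sum((1 / n) * (n * ri) ** delta for ri in r)) / (delta - 1)  # eq. (44) with q = 1/n
identity_rhs = math.log(n) - divergence                                            # eq. (53)
```

For a non-uniform distribution, `entropy` is also strictly below \(\log(n)\), the value attained in the uniform case.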

4.4 Zipf–Mandelbrot law

In [14] the authors gave some contribution in analyzing the Zipf–Mandelbrot law which is defined as follows.
Definition 5
Zipf–Mandelbrot law is a discrete probability distribution depending on the three parameters: \(\mathcal{N} \in \{1, 2, \ldots , \}\), \(\phi \in [0, \infty )\) and \(t > 0\), and is defined by
$$\begin{aligned} f(s; \mathcal{N}, \phi , t) : = \frac{1}{(s + \phi )^{t}\mathcal{H}_{\mathcal{N}, \phi , t}}, \quad s = 1, \ldots , \mathcal{N}, \end{aligned}$$
where
$$\begin{aligned} \mathcal{H}_{\mathcal{N}, \phi , t} = \sum_{\nu =1}^{ \mathcal{N}} \frac{1}{(\nu + \phi )^{t}}. \end{aligned}$$
If the total mass of the law is taken over all values of \(\mathcal{N}\), then, for \(\phi \geq 0\), \(t > 1\), \(s \in \mathbb{N}\), the density function of the Zipf–Mandelbrot law becomes
$$\begin{aligned} f(s; \phi , t) = \frac{1}{(s + \phi )^{t}\mathcal{H}_{ \phi , t}}, \end{aligned}$$
where
$$\begin{aligned} \mathcal{H}_{\phi , t} = \sum_{\nu =1}^{\infty } \frac{1}{( \nu + \phi )^{t}}. \end{aligned}$$
For \(\phi = 0\), the Zipf–Mandelbrot law becomes the Zipf law.
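Definition 5 is straightforward to implement. The following sketch (ours, with a hypothetical function name) also confirms that the masses sum to 1 and that \(\phi = 0\) recovers the Zipf law:

```python
def zipf_mandelbrot(N, phi, t):
    """Probability masses f(s; N, phi, t) = 1 / ((s + phi)^t * H_{N,phi,t}), s = 1..N."""
    H = sum(1.0 / (nu + phi) ** t for nu in range(1, N + 1))  # normalizing constant H_{N,phi,t}
    return [1.0 / ((s + phi) ** t * H) for s in range(1, N + 1)]

pmf = zipf_mandelbrot(N=10, phi=0.5, t=1.2)
zipf = zipf_mandelbrot(N=3, phi=0.0, t=1.0)  # Zipf law: f proportional to 1/s
```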
Theorem 8
Let \(\tilde{\mathbf{r}}\) and \(\tilde{\mathbf{w}}\) be Zipf–Mandelbrot laws.
If (50) and (51) hold for \(r_{v}=\frac{1}{(v+k)^{v} {\mathcal{H}_{\mathcal{N}, k, v}}}\), \(w_{u}=\frac{1}{(u+w)^{u} {\mathcal{H}_{\mathcal{N}, w, u}}}\), and if the base of the log is greater than 1, then
$$\begin{aligned}& \sum_{v=1}^{n}\frac{1}{(v+k)^{v}{\mathcal{H}_{ \mathcal{N}, k, v}}}\log \biggl(\frac{1}{(v+k)^{v}{\mathcal{H}_{\mathcal{N}, k, v}}} \biggr)+\frac{1}{1 - \delta }\log \Biggl( \frac{1}{\mathcal{H}_{\mathcal{N}, k, v}^{\delta }} \sum_{v=1}^{n} \frac{1}{(v + k)^{\delta v}} \Biggr) \\& \quad \geq \sum_{u=1}^{m} \frac{1}{(u+w)^{u}{\mathcal{H}_{ \mathcal{N}, w, u}}}\log \biggl(\frac{1}{(u+w)^{u}{\mathcal{H}_{\mathcal{N}, w, u}}} \biggr) \\& \qquad {} +\frac{1}{1 - \delta } \log \Biggl(\frac{1}{\mathcal{H}_{\mathcal{N}, w, u}^{ \delta }}\sum _{u=1}^{m}\frac{1}{(u + w)^{\delta u}} \Biggr). \end{aligned}$$
(56)
The inequality is reversed in (51) and (56) if the base of the log is less than 1.
Proof
The proof is similar to that of Corollary 7; using Definition 5 and the hypotheses given in the statement, we get the required result. □

Acknowledgements

The authors wish to thank the anonymous referees for their very careful reading of the manuscript and fruitful comments and suggestions. The research of the 4th author is supported by the Ministry of Education and Science of the Russian Federation (the Agreement number No. 02.a03.21.0008).

Competing interests

The authors declare that there is no conflict of interests regarding the publication of this paper.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

References
1.
Aras-Gazić, G., Čuljak, V., Pečarić, J., Vukelić, A.: Generalization of Jensen's inequality by Lidstone's polynomial and related results. Math. Inequal. Appl. 164, 1243–1267 (2013)
2.
Bullen, P.S.: An inequality of N. Levinson. Publ. Elektroteh. Fak. Univ. Beogr., Ser. Mat. Fiz. 109–112 (1973)
3.
Cerone, P., Dragomir, S.S.: Some new Ostrowski-type bounds for the Čebyšev functional and applications. J. Math. Inequal. 8(1), 159–170 (2014)
4.
Csiszár, I.: Information-type measures of difference of probability distributions and indirect observations. Studia Sci. Math. Hung. 2, 299–318 (1967)
5.
Csiszár, I.: Information measures: a critical survey. In: Trans. 7th Prague Conf. on Info. Th., Statist. Decis. Funct., Random Process and 8th European Meeting of Statist., vol. B, pp. 73–86. Academia, Prague (1978)
6.
Diodato, V.: Dictionary of Bibliometrics. Haworth Press, New York (1994)
7.
Egghe, L., Rousseau, R.: Introduction to Informetrics: Quantitative Methods in Library, Documentation and Information Science. Elsevier, New York (1990)
8.
Gibbs, A.L.: On choosing and bounding probability metrics. Int. Stat. Rev. 70, 419–435 (2002)
9.
Horváth, L., Pečarić, Ð., Pečarić, J.: Estimations of f- and Rényi divergences by using a cyclic refinement of the Jensen's inequality. Bull. Malays. Math. Sci. Soc. 1(14) (2017)
10.
Khan, K.A., Niaz, T., Pečarić, Ð., Pečarić, J.: Refinement of Jensen's inequality and estimation of f and Rényi divergence via Montgomery identity. J. Inequal. Appl. 2018, 318 (2018)
11.
Krnić, M., Lovričević, N., Pečarić, J.: Superadditivity of the Levinson functional and applications. Period. Math. Hung. 71(2), 166–178 (2015)
12.
Levinson, N.: Generalization of an inequality of Ky Fan. J. Math. Anal. Appl. 6, 133–134 (1969)
13.
Liese, F., Vajda, I.: Convex Statistical Distances. Teubner-Texte zur Mathematik, vol. 95. Teubner, Leipzig (1987)
14.
Lovričević, N., Pečarić, Ð., Pečarić, J.: Zipf–Mandelbrot law, f-divergences and the Jensen-type interpolating inequalities. J. Inequal. Appl. 2018, 36 (2018)
15.
Mercer, A.McD.: A variant of Jensen's inequality. J. Inequal. Pure Appl. Math. 4(4), 73 (2003)
16.
Mitrinović, D.S., Pečarić, J., Fink, A.M.: Classical and New Inequalities in Analysis, vol. 61. Kluwer Academic, Norwell (1992)
17.
Pečarić, J.: On an inequality of N. Levinson. Publ. Elektroteh. Fak. Univ. Beogr., Ser. Mat. Fiz. 71–74 (1980)
18.
Popoviciu, T.: Sur une inégalité de N. Levinson. Mathematica 6, 301–306 (1969)
19.
Rényi, A.: On measures of information and entropy. In: Proceedings of the Fourth Berkeley Symposium on Mathematics, Statistics and Probability, pp. 547–561 (1960)
20.
21.
Vajda, I.: Theory of Statistical Inference and Information. Kluwer Academic, Dordrecht (1989)
Electronic ISSN: 1029-242X
DOI: https://doi.org/10.1186/s13660-019-2186-4