This paper introduces the theory of ϕ-Jensen variance. Our main motivation is to extend the scope of the analysis of variance and to facilitate its applications in probability, statistics and higher education. To this end, we first introduce the relevant concepts and properties of interval functions. Next, we study several characteristics of log-concave functions and prove an interesting quasi-log concavity conjecture. We then introduce the theory of ϕ-Jensen variance and study the monotonicity of the interval function \(\operatorname{JVar}_{\phi}\varphi ( {{X _{ [ {a,b} ] }}} )\) by means of log concavity. Finally, we demonstrate the applications of our results in higher education, show that the hierarchical teaching model is ‘normally’ better than the traditional teaching model under appropriate hypotheses, and study the monotonicity of the interval function \(\operatorname{Var} \mathscr{A} (X _{ [ {a,b} ] } )\).
Competing interests
The authors declare that they have no conflicts of interest in this joint work.
Authors’ contributions
All authors contributed equally and significantly in this paper. All authors read and approved the final manuscript.
1 Introduction
This paper introduces the theory of ϕ-Jensen variance. Our main motivation is to extend the scope of the analysis of variance and to facilitate its applications in probability, statistics and higher education. Our results have theoretical significance and reference value for higher education systems. The proofs are both interesting and nontrivial, drawing on a wide range of tools from algebra, functional analysis, probability, statistics and the theory of inequalities.
Higher education is an important social activity. One of the interesting problems in higher education is whether we should advocate a hierarchical teaching model. This problem remains controversial in educational circles and has attracted the attention of a number of mathematicians [1–5]. In this paper, we study the problem from the viewpoint of the analysis of variance, so as to decide on the superiority or the inferiority of the hierarchical teaching model relative to the traditional teaching model. Our research methods are based on the theory of ϕ-Jensen variance.
Now we recall the concepts of the hierarchical teaching model and the traditional teaching model as follows [1].
In the usual teaching model, the score of each student in a university is treated as a continuous random variable, written as \({X _{I}}\), which takes values in the real interval \(I=[0,1]\), and whose probability density function \({p_{I}}:I\rightarrow ( {0,\infty} ) \) is continuous. Suppose we now divide the students into m classes, written as
where \(0={a_{0}}\leqslant{a_{1}}\leqslant\cdots\leqslant {a_{m}}=1\), \(m\geqslant2\), and \({a_{i}}\), \({a_{i+1}}\) are the lowest and the highest allowable scores of the students of \(\operatorname{Class} [ {{a_{i}},{a_{i+1}}} ] \), respectively. Then we say that the set
is a hierarchical teaching model. The traditional teaching model, denoted by \(\operatorname{HTM} \{ {a_{0},a_{m},{p_{I}}} \}\), is just the special \({\operatorname{HTM}} \{ {{a_{0}},\ldots,{a_{m}},{p_{I}}} \} \) with \(m=1\).
If \(a_{0}=-\infty\), \(a_{m}=\infty\), then \(\operatorname{HTM} \{ {{-\infty}, \ldots, {\infty},p_{\mathbb{R}}} \}\) and \(\operatorname{HTM} \{ {{-\infty},{\infty},{p_{\mathbb{R}}}} \}\) are called the generalized hierarchical teaching model and the generalized traditional teaching model, respectively, where, here and in what follows, \(\mathbb{R}\triangleq (-\infty,\infty )\).
In order to study the hierarchical and the traditional teaching models from the angle of the analysis of variance, we need to recall the definition of the truncated random variable as follows [1].
Let \({X _{I}}\in I\) be a continuous random variable with continuous probability density function \({p_{I}}:I\rightarrow ( {0,\infty} ) \). If \({X _{J}}\in J\subseteq I\) is also a continuous random variable and its probability density function is
$$ {p_{J}}:J\rightarrow ( {0,\infty} ),\qquad {p_{J}} ( t ) \triangleq\frac{{{p_{I}} ( t ) }}{{\int_{J}{{p_{I}}}}}, $$
(2)
then we say that the random variable \({X _{J}}\) is a truncated random variable of the random variable \({X _{I}}\), written as \({X _{J}}\subseteq{X _{I}}\). If \({X _{J}}\subseteq{X _{I}}\) and \(J\subset I\), then we say that the random variable \({X _{J}}\) is a proper truncated random variable of the random variable \({X _{I}}\), written as \({X_{J}}\subset{X _{I}}\). Here I and J are n-dimensional intervals (see Section 2).
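Definition (2) can be illustrated numerically. In the sketch below (the names `p_I` and `truncated_density`, and the choice of a standard normal density, are ours, purely for illustration), the truncated density on \(J=[0,1]\) is built by rescaling, and we check that it integrates to 1 over J:

```python
import math

def p_I(t):
    # standard normal density; our illustrative choice of p_I on I = R
    return math.exp(-t * t / 2.0) / math.sqrt(2.0 * math.pi)

def truncated_density(p, a, b, n=10_000):
    """Return p_J(t) = p(t) / \\int_a^b p, the density of X_[a,b] as in (2)."""
    h = (b - a) / n
    mass = sum(p(a + (k + 0.5) * h) for k in range(n)) * h  # midpoint rule
    return lambda t: p(t) / mass

p_J = truncated_density(p_I, 0.0, 1.0)

# p_J should integrate to 1 over J = [0, 1]
h = 1.0 / 10_000
total = sum(p_J((k + 0.5) * h) for k in range(10_000)) * h
```

Since \(p_{J}\) only rescales \(p_{I}\) by the constant \(\int_{J}p_{I}\), it inherits continuity and positivity on J.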
We point out a basic property of the truncated random variable as follows [1]: Let \({X _{I}}\in I\) be a continuous random variable with continuous probability density function \({p_{I}}:I\rightarrow ( {0,\infty} ) \). If \({X _{{I_{\ast}}}}\subseteq{X _{I}}\), \({X _{{I^{\ast}}}}\subseteq{X _{I}}\) and \({I_{\ast}}\subseteq{I^{\ast}}\), then \({X _{{I_{\ast}}}}\subseteq{X _{{I^{\ast}}}}\), while if \({X _{{I_{\ast}}}}\subseteq{X _{I}}\), \({X _{{I^{\ast}}}}\subseteq{X _{I}}\) and \({I_{\ast}}\subset{I^{\ast}}\), then \({X _{{I_{\ast }}}}\subset{X _{{I^{\ast}}}} \).
According to the definitions of the mathematical expectation \(\mathrm{E}\varphi ( {{X _{J}}} ) \) and the variance \(\operatorname{Var}\varphi ( {{X _{J}}} )\), we easily get
In the \(\operatorname{HTM} \{ {{a_{0}},\ldots,{a_{m}},{p_{I}}} \}\), the score of each student in \(\operatorname{Class} [ {{a_{i}},{a_{i+1}}} ] \) is also a random variable, written as \({X _{ [ {{a_{i}},{a_{i+1}}} ] }}\). Since \([ {{a_{i}},{a_{i+1}}} ] \subseteq I\), it is a truncated random variable of the random variable \({X _{I}}\), where \(i=0,1,\ldots,m-1\). Assume that the \(j-i\) classes
are merged into one, written as \(\operatorname{Class} [ {{a_{i}},{a_{j}}} ] \). Since \([ {{a_{i}},{a_{j}}} ] \subseteq I\), we know that \({X _{ [ {{a_{i}},{a_{j}}} ] }}\) is also a truncated random variable of the random variable \({X _{I}}\), where \(0\leqslant i< j\leqslant m\). In general, we have
In the \(\operatorname{HTM} \{ {{a_{0}},\ldots,{a_{m}},{p_{I}}} \}\), we are concerned with the relationship between the variance \(\operatorname{Var}{X _{ [ {{a_{i}},{a_{j}}} ] }}\) and \(\operatorname{Var}{X _{I}}\), so as to decide on the superiority or the inferiority of the hierarchical and the traditional teaching models. If
then we say that the \(\operatorname{HTM} \{ {{a_{0}},\ldots,{a_{m}},{p_{I}}} \}\) is right increasing. If the hierarchical teaching model is both left and right increasing, i.e.,
then we say that the hierarchical teaching model is increasing.
If a hierarchical teaching model is increasing, then in view of the usual meaning of the variance, we tend to think that this hierarchical teaching model is better than the traditional teaching model. Otherwise, this hierarchical teaching model is probably not worth promoting.
In this paper, we study the hierarchical and the traditional teaching models from the angle of the analysis of variance. In other words, we study the monotonicity of the hierarchical teaching model, so as to decide on the superiority or the inferiority of the hierarchical and the traditional teaching models. In particular, we need to find the conditions such that inequalities (6), (7) and (8) hold (see Theorem 6) by means of the theory of ϕ-Jensen variance.
In order to facilitate the description of the theory of ϕ-Jensen variance, in Section 2 we introduce the relevant concepts and properties of interval functions, and in Section 3 we study several characteristics of log-concave functions; in particular, we prove the interesting quasi-log concavity conjecture of [1]. In Section 4, we introduce the theory of ϕ-Jensen variance and study the monotonicity of the interval function \(\operatorname{JVar}_{\phi}\varphi ( {{X _{ [ {a,b} ] }}} )\) by means of log concavity. In Section 5, we demonstrate the applications of our results in higher education, show that the hierarchical teaching model is ‘normally’ better than the traditional teaching model under appropriate hypotheses, and study the monotonicity of the interval function \(\operatorname{Var} \mathscr{A} (X _{ [ {a,b} ] } )\).
2 Interval function
To study the theory of ϕ-Jensen variance, we need to introduce the relevant concepts and properties of the interval functions in this section.
We will use the following notations in this paper.
If \(\mathbf{a}\leqslant\mathbf{b}\) and there exists \(j\in \{ 1,2,\ldots,n \}\) such that \(a_{j}< b_{j}\), then we say that a is less than b or b is greater than a, written as \(\mathbf{a}<\mathbf{b}\) or \(\mathbf{b}>\mathbf{a}\).
Let \(I_{j}\subseteq\mathbb{R}\), \(j=1,\ldots,n\), be intervals. Then we say that the set \(I\triangleq I_{1}\times\cdots\times I_{n}\) is an n-dimensional interval, where × is the Cartesian product.
If \(\mathbf{a},\mathbf{b}\in I\), then we say that the set
Let \(\overline{I}\) denote the set of closed subintervals of the interval I. We say that the mapping \(G:\overline{I}\rightarrow\mathbb{R}\) is an interval function. The image of the closed interval \([ {\mathbf{a},\mathbf{b}} ] \) is written as \(G [ {\mathbf{a},\mathbf{b}} ]\), and the interval function \(G:\overline{I}\rightarrow\mathbb{R}\) may also be written as \(G [ {\mathbf{a},\mathbf{b}} ] \) (\([ {\mathbf{a},\mathbf {b}} ]\in\overline{I} \)).
By (9), for the interval function \(G:\overline{I}\rightarrow \mathbb{R}\), we have
$$ G [ {\mathbf{a},\mathbf{b}} ]=G [ {\mathbf{b},\mathbf {a}} ],\quad \forall \mathbf{a},\mathbf{b}\in I. $$
(11)
That is to say, the image \(G [ {\mathbf{a},\mathbf{b}} ]\) of the closed interval \([\mathbf{a},\mathbf{b}]\) is a symmetric function.
Let \(G:\overline{I}\rightarrow\mathbb{R} \) be an interval function, and let \(a_{j}< b_{j}\), \(j=1,\ldots,n\). If
then we say that the interval function \(G:\overline{I}\rightarrow \mathbb{R} \) is left increasing. If the interval function \(G:\overline{I}\rightarrow\mathbb{R} \) is both left increasing and right increasing, i.e.,
$$ [ {\mathbf{a},\mathbf{b}} ]\subset [ {\mathbf{c},\mathbf {d}} ] \subseteq I \quad\Rightarrow\quad G [ {\mathbf{a},\mathbf{b}} ] < G [ {\mathbf{c},\mathbf {d}} ], $$
(14)
then we say that the interval function \(G:\overline{I}\rightarrow \mathbb{R} \) is increasing.
If G or −G is left increasing, then we say that the interval function \(G:\overline{I}\rightarrow\mathbb{R} \) is left monotone. If G or −G is right increasing, then we say that the interval function \(G:\overline{I}\rightarrow\mathbb{R}\) is right monotone. If G or −G is increasing, then we say that the interval function \(G:\overline{I}\rightarrow\mathbb{R} \) is monotone.
We remark here that if an interval function \(G:\overline{I}\rightarrow \mathbb{R}\), where I is an interval, is increasing, then the graph of the function
$$Z=G[x,y],\quad (x,y)\in I^{2} $$
looks like a drain or a valley. For example, the interval function
If \({X}\in I\), where \(I\subseteq\mathbb{R}^{n}\) is an n-dimensional interval, is a continuous random variable, and its probability density function \({p}:I\rightarrow ( {0,\infty} ) \) is continuous, then the interval function
$$P \bigl(X\in[\mathbf{a},\mathbf{b}] \bigr)\triangleq G [ {\mathbf {a}, \mathbf{b}} ] $$
is the probability of the random event ‘\(X\in[\mathbf{a},\mathbf {b}]\)’. In other words,
$$ [ {\mathbf{a},\mathbf{b}} ]\subset [ {\mathbf{c},\mathbf {d}} ] \subseteq I \quad\Rightarrow\quad 0\leqslant P \bigl(X\in[\mathbf {a},\mathbf{b}] \bigr)< P \bigl(X\in[ \mathbf{c},\mathbf{d}] \bigr)\leqslant1. $$
(15)
For the monotonicity of the interval function, we have the following proposition.
Proposition 1
Let \(G:\overline{I}\rightarrow\mathbb{R} \), where \(I\subseteq\mathbb{R}^{n}\) is an n-dimensional interval, be an interval function, and let the partial derivatives of \(G [ {\mathbf {a},\mathbf{b}} ]\) exist, where \([ {\mathbf{a},\mathbf {b}} ] \in\overline{I}\). Then we have the following two assertions.
then the interval function \(G:\overline{I}\rightarrow\mathbb{R} \) is left increasing.
Proof
We prove assertion (II) as follows; the proof of assertion (I) is similar. Let
$$a_{j}< b_{j},\quad 1\leqslant j\leqslant n, \qquad [ {\mathbf{a}, \mathbf {b}} ]\subset [ {\mathbf{a},\mathbf{b}+\varDelta \mathbf{b}} ] \subseteq I , $$
where \(\varDelta \mathbf{b}>\mathbf{0}\). Hence there exists \(j\in \{ 1,\ldots,n \}\) such that \(\varDelta b_{j}>0\). According to the theory of analysis and (16), we know that the function \(G [ {\mathbf{a},\mathbf{b}} ]\) is strictly increasing with respect to \(b_{j}\), hence
In other words, the interval function \(G:\overline{I}\rightarrow\mathbb {R} \) is left increasing. Assertion (II) is proved. The proof of Proposition 1 is completed. □
In Section 4.5, we will demonstrate the applications of Proposition 1.
As an application of Proposition 1, we have the following example.
Example 1
Let \({X}\in I\), where I is an interval, be a continuous random variable, let its probability density function \({p}:I\rightarrow ( {0,\infty} ) \) be continuous, and let the function \(\varphi: I\rightarrow\mathbb{R}\) be continuous and strictly increasing. Then the interval function
$$ \mathrm{E}\varphi (X_{[a,b]} )\triangleq \left \{ \textstyle\begin{array}{@{}l@{\quad}l} \frac{\int_{a}^{b}{p(x)\varphi(x) \,\mathrm{d}x}}{\int_{a}^{b}{p(x)\,\mathrm{d}x}}, & a \ne b, \\ \varphi(a), & a = b, \end{array}\displaystyle \right .\quad \forall [ {a,b} ] \in\overline{I} $$
(18)
is right increasing, and the interval function \(-\mathrm{E}\varphi (X_{[a,b]} )\) is left increasing, where \(X_{[a,b]}\subseteq X\), and \(\mathrm{E}\varphi (X_{[a,b]} )\) is the mathematical expectation of \(\varphi (X_{[a,b]} )\).
Proof
Let \([ {a,b} ] \in\overline{I}\) and \(a\ne b\). Then we have
By Proposition 1, \(\mathrm{E}\varphi (X_{[a,b]} )\) is right increasing and \(-\mathrm{E}\varphi (X_{[a,b]} )\) is left increasing. This ends the proof. □
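Example 1 can also be checked numerically. In the sketch below (our construction, taking p to be a standard normal density and φ the identity), enlarging the interval \([0,b]\) to the right strictly increases \(\mathrm{E}\varphi (X_{[0,b]} )\), in line with the example:

```python
import math

def p(t):
    # standard normal density (any continuous positive density would do)
    return math.exp(-t * t / 2.0) / math.sqrt(2.0 * math.pi)

def trunc_mean(p, phi, a, b, n=20_000):
    """E phi(X_[a,b]) as in (18) for a < b, via the midpoint rule."""
    h = (b - a) / n
    ts = [a + (k + 0.5) * h for k in range(n)]
    mass = sum(p(t) for t in ts) * h
    return sum(phi(t) * p(t) for t in ts) * h / mass

# enlarging [0, b] to the right should increase E phi(X_[0,b])
means = [trunc_mean(p, lambda x: x, 0.0, b) for b in (0.5, 1.0, 2.0, 3.0)]
```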
In Section 4.6, we will demonstrate the applications of Example 1.
Now we introduce the convexity and the concavity of the interval functions as follows.
The interval function \(G:\overline{I}\rightarrow\mathbb{R}\) is said to be convex if
$$ J\in\overline{I},\qquad K\in\overline{I}, \qquad \theta\in[0,1]\quad\Rightarrow\quad (1- \theta)GJ+\theta GK\geqslant G \bigl[(1-\theta)J+\theta K \bigr], $$
(19)
where \((1-\theta)J+\theta K\in\overline{I}\) by (10). The interval function \(G:\overline{I}\rightarrow\mathbb{R}\) is said to be concave if −G is convex.
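As a toy check of definition (19), consider the interval function \(G[a,b]=(b-a)^{2}\) (our own example, not one from the paper); convexity of \(x\mapsto x^{2}\) makes the defining inequality hold for every θ:

```python
def G(J):
    """An illustrative interval function, G[a,b] = (b - a)**2 (our choice)."""
    a, b = J
    return (b - a) ** 2

def mix(theta, J, K):
    """The convex combination (1 - theta)J + theta*K of intervals used in (19)."""
    return ((1 - theta) * J[0] + theta * K[0], (1 - theta) * J[1] + theta * K[1])

J, K = (0.0, 1.0), (2.0, 5.0)
# (1 - theta)GJ + theta*GK - G[(1 - theta)J + theta*K] should be >= 0
gaps = [(1 - th) * G(J) + th * G(K) - G(mix(th, J, K))
        for th in (0.0, 0.25, 0.5, 0.75, 1.0)]
```

At \(\theta=0\) and \(\theta=1\) the gap vanishes, as it must.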
Indeed, since the function \(|t|^{\gamma} \) (\(t\in\mathbb{R}\), \(\gamma\geqslant1\)) is convex, by Jensen’s inequality [6‐10], we know that for any \([a,b]\in\overline{\mathbb{R}}\), \([c,d]\in\overline{\mathbb{R}}\), \(\theta \in[0,1]\), we have
Convexity and concavity are essential attributes of functions; their study and applications are important topics in mathematics.
In this section, to prepare for the theory of ϕ-Jensen variance, we study the log concavity and the quasi-log concavity of functions.
3.1 Log concavity
There are many types of convexity and concavity for functions. One of them is the log concavity which has many applications in probability and statistics.
Recall the definition of a log-concave function [1, 11‐20] as follows.
The function \({p}:I\rightarrow ( {0,\infty} ) \), where I is an n-dimensional interval, is called a log-concave function if \(\log p\) is a concave function, i.e.,
If \(-\log{p}\) is a concave function, then we say that the function \(p:I\rightarrow ( {0,\infty} ) \) is a log-convex function.
In [20], the authors applied log concavity to study the Roy model, and obtained several interesting results. In particular, we have the following (see p.1128 in [20]): If D is a log-concave random variable, then
Unfortunately, their results did not include the case where D is a truncated random variable.
In this paper, we apply the log concavity of functions to generalize the inequalities in (21) to the case where D is a truncated random variable (see Remark 2).
To prepare for the proofs of the results in Section 4.5, we need to study several characteristics of the log-concave function as follows.
For the log-concave function, we can easily get the following Propositions 2 and 3 by the theory of analysis [1].
Proposition 2
Let the function \({p: {I}}\rightarrow ( {0,\infty} )\), where I is an interval, be differentiable. Then the function p is log-concave if and only if the function \({ ( {\log{p}} ) ^{{\prime}}}\) is monotone decreasing, i.e., if \(a,b\in I\), \(a< b\), then we have
where \(( \log{p} ) ^{{\prime }}\) is the derivative of the function \(\log p\).
Proposition 3
Let the function \({p:{I}}\rightarrow ( {0,\infty} )\), where I is an interval, be twice differentiable. Then the function p is log-concave if and only if
$$ ( \log{p} ) ^{{\prime\prime }}(t )\leqslant0 \quad\Leftrightarrow\quad{p} ( t ) {p} {^{\prime\prime }} ( t ) -{ \bigl[ {{p} {^{\prime}} ( t ) } \bigr] ^{2}}\leqslant0,\quad \forall t\in I, $$
(23)
where \(( \log{p} ) ^{{\prime\prime }}\) is the second order derivative of the function \(\log p\).
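Condition (23) is easy to test numerically. The sketch below (our construction) approximates \(p ( t ) p^{\prime\prime} ( t ) - [p^{\prime} ( t ) ]^{2}\) by central differences for the standard normal density, a classical log-concave function, and finds it negative at every sampled point:

```python
import math

def p(t):
    # standard normal density, a classical log-concave function
    return math.exp(-t * t / 2.0) / math.sqrt(2.0 * math.pi)

def log_concavity_value(p, t, h=1e-4):
    """Finite-difference value of p(t)p''(t) - p'(t)**2 (condition (23))."""
    p0, pm, pp = p(t), p(t - h), p(t + h)
    d1 = (pp - pm) / (2.0 * h)          # central difference for p'
    d2 = (pp - 2.0 * p0 + pm) / (h * h)  # central difference for p''
    return p0 * d2 - d1 * d1

ts = (-2.0, -0.5, 0.0, 1.0, 3.0)
vals = [log_concavity_value(p, t) for t in ts]
```

For this density the exact value is \(-p(t)^{2}\) (since \((\log p)^{\prime\prime}=-1\)), which the finite-difference values reproduce closely.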
For other characteristics of the log-concave function, we have the following non-trivial result.
Theorem 1
Let the function \({p}:I\rightarrow ( {0,\infty} )\), where I is an interval, be differentiable. Then the function p is log-concave if and only if
$$ {p} ( b ) -{p} ( a ) -\frac{{p^{\prime} ( b ) }}{{{p} ( b ) }}\int_{a}^{b}{{p} ( t ) \,\mathrm{d}t}\geqslant0, \quad\forall ( {a,b} ) \in{I^{2}}. $$
(24)
Proof
Assume first that the function p is log-concave; we prove inequality (24) as follows.
We define an auxiliary function as follows:
$$ F:{I^{2}} \to ( { - \infty,\infty} ),\qquad F ( {a,b} ) \triangleq{p} ( b ) - {p} ( a ) - ( \log {p} ) ^{{\prime }}(b )\int_{a}^{b} {{p} ( t )\,\mathrm{d}t}. $$
If \(a = b\), then \(F ( {a,b} ) = 0\). Inequality (24) holds. We assume that \(b \ne a\) below.
Note that the function \({p}:I \to ( {0,\infty} )\) is differentiable. By the Cauchy mean value theorem, there exists a real number \(\theta\in ( {0,1} )\) such that
Since \(\int_{a}^{b} {{p} ( t )\,\mathrm{d}t} > 0\), we have \(F ( {a,b} ) \geqslant0\) by (27). This proves inequality (24) for the case where \(a < b\).
If \(a>b\), then
$$ b< a+\theta ( {b-a} ) < a. $$
(28)
Combining Proposition 2 with (25) and (28), we obtain
$$ \bigl[ ( \log{p} ){^{\prime}} { (a )} - ( \log{p} ){^{\prime}} { ( b )} \bigr]\int_{a}^{b}{{p} ( t ) \,\mathrm{d}t} \geqslant0. $$
(31)
Since \(\int_{a}^{b}{{p} ( t ) \,\mathrm{d}t}>0\), we get (22) by (31). The proof of Theorem 1 is completed. □
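Inequality (24) can be verified numerically. The following sketch (our construction) evaluates the left-hand side of (24) for the standard normal density, whose logarithmic derivative is \(p^{\prime}(t)/p(t)=-t\), at several pairs \((a,b)\), including pairs with \(a>b\), where the integral is signed:

```python
import math

def p(t):
    # standard normal density (log-concave)
    return math.exp(-t * t / 2.0) / math.sqrt(2.0 * math.pi)

def dp(t):
    # its derivative: p'(t) = -t * p(t)
    return -t * p(t)

def F(a, b, n=20_000):
    """Left-hand side of (24): p(b) - p(a) - (p'(b)/p(b)) * int_a^b p."""
    h = (b - a) / n
    integral = sum(p(a + (k + 0.5) * h) for k in range(n)) * h  # signed if a > b
    return p(b) - p(a) - dp(b) / p(b) * integral

pairs = [(0.0, 1.0), (1.0, 0.0), (-1.0, 2.0), (2.0, -2.0), (0.3, 0.3)]
vals = [F(a, b) for a, b in pairs]
```

All values are non-negative, and the value at \(a=b\) is exactly zero, as in the proof above.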
In Sections 3.2 and 4.5, we will demonstrate the applications of Theorem 1.
For the log concavity, we have the following interesting example.
Example 2
Let the function \(p:(\alpha,\beta )\rightarrow(0,\infty)\) be a probability density function of a random variable X, and let the probability distribution function of X be
If \(p:(\alpha,\beta)\rightarrow(0,\infty)\) is a differentiable log-concave function, then \(P:(\alpha,\beta)\rightarrow[0,1]\) is also a log-concave function, i.e.,
$$ 0\leqslant \bigl[P (\alpha< X\leqslant a ) \bigr]^{1-\theta } \bigl[ P (\alpha< X\leqslant b ) \bigr]^{\theta}\leqslant P \bigl[\alpha< X\leqslant(1-\theta)a+\theta b \bigr] \leqslant1, $$
(32)
where \((a,b)\in(\alpha,\beta)^{2}\), \(\theta\in [ {0,1} ]\), and \(P (\alpha< X\leqslant x )\triangleq P(x)\) is the probability of random event ‘\(\alpha< X\leqslant x\)’.
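The content of Example 2 is the classical fact that the distribution function of a log-concave density is itself log-concave, i.e. \(P ((1-\theta)a+\theta b )\geqslant [P(a) ]^{1-\theta} [P(b) ]^{\theta}\). The sketch below (our construction) checks this for the standard normal distribution function, computed via `math.erf`:

```python
import math

def Phi(x):
    # standard normal distribution function, via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def logconcave_gap(a, b, theta):
    """P((1-theta)a + theta*b) - P(a)**(1-theta) * P(b)**theta; >= 0 iff
    the log-concavity inequality holds at (a, b, theta)."""
    mid = Phi((1 - theta) * a + theta * b)
    return mid - Phi(a) ** (1 - theta) * Phi(b) ** theta

gaps = [logconcave_gap(a, b, th)
        for a in (-2.0, 0.0, 1.0)
        for b in (-1.0, 0.5, 2.0)
        for th in (0.25, 0.5, 0.75)]
```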
is an interval function. If the function \(p:I\rightarrow ( {0,\infty} ) \) is twice continuously differentiable, then inequalities in (34) can be rewritten as
The significance of the quasi-log concavity in the analysis of variance is as follows (see Theorem 5.1 in [1]): Let \({X _{I}}\) be a continuous random variable and its probability density function \(p:I\rightarrow ( {0,\infty} ) \) be twice continuously differentiable. Then the function \(p:I\rightarrow (0,\infty)\) is quasi-log concave if and only if
We remark here that, for twice continuously differentiable functions, quasi-log concavity implies log concavity, quasi-log convexity implies log convexity, and log convexity implies quasi-log convexity [1].
An interesting conjecture was proposed by Wen et al. in [1] as follows.
That is to say, (34) still holds. Hence p is quasi-log concave.
We remark here that if the function \({ ( {\log{p}} ) ^{{\prime}}}\) is strictly decreasing, then equality in (34) holds if and only if \(a=b\). This completes the proof of Corollary 1. □
Corollary 1 implies the following interesting corollary.
Corollary 2
Let the function \(p:I\rightarrow (0,\infty)\), where I is an interval, be twice continuously differentiable. Then p is quasi-log concave if and only if p is log concave.
4 Theory of ϕ-Jensen variance
The covariance and the variance are important numerical characteristics of random variables; indeed, their study and application are important topics in probability and statistics. In this section, we generalize the traditional covariance and variance of random variables, and define the ϕ-covariance, ϕ-variance, ϕ-Jensen variance, ϕ-Jensen covariance, integral variance and γ-order variance. We also study the relationships among these ‘variances’. In Section 4.5, we study the monotonicity of the interval function involving the ϕ-Jensen variance by means of log concavity.
In the following discussion, we assume the following.
I is an n-dimensional interval (or an n-dimensional, closed and bounded domain in \(\mathbb{R}^{n}\)), and \({X}\triangleq ( {{X _{1}},\ldots,{X _{n}}} ) \in I\) is an n-dimensional continuous random variable whose probability density function \({p}:I\rightarrow ( {0,\infty} ) \) is continuous. The functions \({\varphi_{i}}:I\rightarrow J\) and \(\varphi:I\rightarrow J\) are continuous, where J is an interval and \(i=1,\ldots,m\), \(m\geqslant2\). The function \(\phi:J\rightarrow\mathbb{R} \) is continuous and non-constant. Here \(\phi^{\prime} ( x )\), \(\phi^{\prime\prime} ( x )\) and \(\phi^{\prime \prime\prime} ( x )\) denote the first, second and third order derivatives of the function \(\phi ( x )\), respectively.
4.1 ϕ-Variance
The signed square root of the real number t is defined as
$$\sqrt[\pm]{t}\triangleq\sqrt{\vert t\vert }\operatorname{sign} ( t ) \in \mathbb{R}, $$
where \(\operatorname{sign} ( t ) \) is the sign function; the signed square root is an odd function, behaving much like \(\sqrt[3]{t}\).
which is a generalized mathematical expectation of the random variable \(\phi [\psi (X )+\delta (X ) ]\) in the interval \([1,\infty)\).
We now define the ϕ-covariance matrix \([ \operatorname{Cov}_{\phi} ( \varphi_{i},\varphi_{j} ) ] _{m\times m}\) of the random variables \({\varphi_{1}} ( {X} ), \ldots , {\varphi_{m}} ( {X } )\) as follows:
That is to say, the ϕ-covariance matrix \({ [ {{\operatorname{Cov}_{\phi}} ( {{\varphi_{i}},{\varphi _{j}}} )} ]_{m \times m}}\) of the random variables \({\varphi_{1}} ( {X } ), \ldots, {\varphi_{m}} ( {X} )\) is non-negative. The proof of Proposition 4 is completed. □
According to Proposition 4 and the quadratic form theory, all the principal minors of the ϕ-covariance matrix are non-negative. In particular, all the \(2\times2\) principal minors of the ϕ-covariance matrix are non-negative. Hence
as a ϕ-correlation coefficient of the random variables \({ \varphi_{i}} ( {{X}} ) \) and \({\varphi_{j}} ( {{X }} ) \), where \(b_{i,j}\) is defined by (43), and \(i,j=1,\ldots,m\).
4.2 ϕ-Jensen variance
We say that the function
$$a \circ b:\mathbb{R}^{2} \rightarrow\mathbb{R},\qquad a \circ b= \left \{ \textstyle\begin{array}{@{}l@{\quad}l} \sqrt[\pm]{ab}, & a \ne b, \\ a, & a=b \end{array}\displaystyle \right . $$
is a signed square root product of two real numbers a and b [22].
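The two definitions above translate directly into code. The sketch below (names are ours) implements \(\sqrt[\pm]{t}\) and \(a\circ b\); note that the separate \(a=b\) branch matters, since \(\sqrt[\pm]{a\cdot a}=|a|\) would lose the sign of a negative a:

```python
import math

def signed_sqrt(t):
    """Signed square root: sqrt(|t|) * sign(t), an odd function like t**(1/3)."""
    return math.copysign(math.sqrt(abs(t)), t)

def circ(a, b):
    """Signed square root product a o b; the a == b branch is needed because
    signed_sqrt(a * a) equals |a|, which would drop the sign of a negative a."""
    return a if a == b else signed_sqrt(a * b)
```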
For the signed square root product \(a \circ b\), we have
According to the above definition, we have the following relationship between the ϕ-Jensen covariance \({\operatorname{JCov}_{\phi}} ( {{\varphi _{i}},{\varphi_{j}}} )\) and the ϕ-covariance \(\operatorname{Cov}_{\phi} ( {{\varphi_{i}},{\varphi _{j}}} )\).
Unfortunately, the ϕ-Jensen covariance matrix \({ [ {{\operatorname{JCov}_{\phi}} ( {{\varphi _{i}},{\varphi_{j}}} )} ]_{m \times m}}\) of the random variables \({\varphi_{1}} ( {X } ),\ldots, {\varphi_{m}} ( {X} ) \) is not non-negative in general. But since
if \(\sqrt{{\operatorname{Cov}_{\phi}} ( {{\varphi_{i}},{\varphi _{i}}} )}>0\), \(\sqrt{{\operatorname{Cov}_{\phi}} ( {{\varphi _{j}},{\varphi _{j}}} )}>0\), we can define the functional
as a ϕ-Jensen correlation coefficient of the random variables \({\varphi_{i}} ( {{X}} ) \) and \({\varphi_{j}} ( {{X }} ) \), where \(i,j=1,\ldots,m\).
A natural question is why we define the ϕ-Jensen variance. One of the reasons is that we have the following relationship between the ϕ-Jensen variance \(\operatorname{JVar}_{\phi}{\varphi}\) and the variance [9, 10]:
Let the function \(\phi:J\rightarrow ( -\infty,\infty ) \) be twice continuously differentiable with \(\phi^{\prime\prime} ( x ) \geqslant0\), \(\forall x\in J\), and let the function \(\varphi:I\rightarrow J\) be continuous. Then we have the inequalities
Suppose that \(I,J \subset(0,\infty)\) are two intervals, and \(\varphi :I\rightarrow J\) is a monotonic function. If we set \(\phi^{\prime \prime}=\varphi^{-1}>0\), then
We say that the functional \(\mathrm{E}_{\varphi}(X)\) is the φ-mathematical expectation of the random variable \(X _{I}\) and the functional \(\operatorname{JVar}_{\iint\varphi^{-1}}\varphi\) is an integral variance of the random variable \(\varphi ( {{X}} ) \).
In order to facilitate applications in Section 5, now we introduce a special ϕ-Jensen variance, which is called a γ-order variance.
We define a function \({\phi_{\gamma}}\) as follows:
Hence we may say that the functional \((\operatorname{Var}^{[\gamma] }\varphi )^{1/\gamma}\) is a γ-order mean variance of the random variable \(\varphi ({{X }} ) \), where \(\gamma\ne0\).
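To make the γ-order quantities concrete, the sketch below (our construction) uses the sample analogue under the reading \(\operatorname{Var}^{[\gamma]}\varphi=\mathrm{E} \vert \varphi(X)-\mathrm{E}\varphi(X) \vert ^{\gamma}\), so that \(\gamma=2\) recovers the ordinary variance; this reading is our assumption, made only for illustration. The computed 3-order mean variance dominates the 2-order one, in line with the power mean inequality:

```python
def gamma_order_var(xs, gamma):
    """Sample analogue of the gamma-order variance, under our (assumed) reading
    Var^[gamma] phi = E|phi(X) - E phi(X)|**gamma (gamma = 2: usual variance)."""
    m = sum(xs) / len(xs)
    return sum(abs(x - m) ** gamma for x in xs) / len(xs)

scores = [1.0, 2.0, 4.0, 5.0]                       # sample with mean 3
v2 = gamma_order_var(scores, 2.0)                   # ordinary population variance
mv2 = v2 ** 0.5                                     # 2-order mean variance
mv3 = gamma_order_var(scores, 3.0) ** (1.0 / 3.0)   # 3-order mean variance
```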
Since \(\phi_{\gamma}\) is a convex function, according to Theorem 2 and the continuity, we have
In [22], the authors defined the Dresher variance mean \(V_{\gamma,\delta}(\varphi)\) of the random variable \(\varphi(X)\) and obtained the Dresher-type inequality (see Theorem 2 in [22]) and the following V-E inequality (see (7) in [22]):
where \(\gamma>\delta\geqslant1\) and the coefficient \(\delta/\gamma\) is the best constant; the authors also demonstrated the applications of these results in space science (see (55)-(60) in [22]).
Based on the above analysis, we know that the ϕ-Jensen variance and the γ-order variance are natural extensions of the traditional variance
According to Theorem 2, we may use the ϕ-Jensen variance \(\operatorname{JVar}_{\phi}{\varphi}\) in place of the traditional variance Varφ. For example, we may use the integral variance \({\operatorname{JVar}_{\iint\varphi^{-1} }\varphi}\) or the γ-order variance \(\operatorname{Var}^{[\gamma]}{\varphi}\) in place of the traditional variance Varφ. If \(\varphi(t)\leqslant0\) for some \(t\in{I}\), then we may use the \(\phi_{\gamma}^{*} \)-Jensen variance \(\operatorname{JVar}_{\phi_{\gamma}^{*} }{\varphi}\) instead, where
Theorem 1 in [7] implies the following results: Let the function \(\phi: [ {0,\infty} ) \rightarrow\mathbb{R} \) be twice continuously differentiable, let both ϕ and \(\phi^{{\prime\prime}}\) be convex, and let the function \(\varphi:I\rightarrow [ {0,\infty} ) \) be continuous. Then we have the inequalities
Let the function \(\phi :J\rightarrow\mathbb{R} \) be twice continuously differentiable. If \(\mathbf{x}\in{J^{n}}\), \(\mathbf{p}\in{\Omega^{n}}\), then we have the following identity:
Let the function \(\phi:J\rightarrow\mathbb{R} \) be twice continuously differentiable with \(\phi^{\prime\prime} ( x ) \geqslant0\), \(\forall x\in J\). If \(\mathbf{x}\in{J^{n}}\), \(\mathbf{p}\in{\Omega^{n}}\), then we have the following inequalities:
We just need to prove the second inequality in (57), because the proof of the first inequality in (57) is similar.
Let \(T \triangleq \{ {\varDelta {I_{1}}, \ldots, \varDelta {I_{m}}} \}\) be a partition of I. Pick any \({\eta_{i}} \in \varDelta {I_{i}}\), \(1\leqslant i \leqslant m\), and set
This proves the second inequality in (57). The proof of Theorem 2 is completed. □
A wide range of tools from algebra, functional analysis and the theory of inequalities are used in the proof of Theorem 2. Based on these tools, we obtained Lemma 3, the discrete form of Theorem 2, and then passed to Theorem 2 itself by means of the definition of the Riemann integral. The proof of Theorem 2 is therefore both interesting and nontrivial.
4.4 An example in the generalized traditional teaching model
In order to illustrate the significance of the ϕ-Jensen variance, integral variance and γ-order variance, we provide an illustrative example as follows.
In the generalized traditional teaching model \(\operatorname{HTM} \{ {-\infty,\infty, {p_{\mathbb{R}}}} \}\), suppose that the score of a student is \({X \in{J}}\), where \(J=(\mu, \infty)\), \(0 \leqslant\mu< \infty\), and μ is the average score of the students. In order to stimulate the learning enthusiasm of a student, we may want to give the student a bonus payment \(\mathscr{A} ( {{X }} )\), where \(X> \mu\). The function \(\mathscr{A} :J\rightarrow ( {0,\infty} ) \) is called an allowance function of the \(\operatorname{HTM} \{ {-\infty,\infty, {p_{\mathbb{R}}}} \}\) [1]. In general, we define the allowance function \(\mathscr{A}\) as follows:
where we define the constants \(C_{0}\triangleq0\) and \(C_{1}\triangleq0\). Therefore, the integral variance of the random variable \(\mathscr{A} ( { {X}} ) \) is
where \([\mathrm{E}(X-\mu)^{\alpha} ]^{1/\alpha}\) is the α-power mean [22, 26, 27] of the random variable \(X-\mu\).
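For a concrete instance of the α-power mean above, suppose (purely as an illustrative assumption) that \(X-\mu\) follows a standard exponential distribution, so that \(\mathrm{E}(X-\mu)^{\alpha}=\Gamma(\alpha+1)\). Then \([\mathrm{E}(X-\mu)^{\alpha} ]^{1/\alpha}=\Gamma(\alpha+1)^{1/\alpha}\), which is increasing in α, as the power mean inequality predicts:

```python
import math

def alpha_power_mean(alpha):
    """[E (X - mu)**alpha]**(1/alpha) for X - mu ~ Exp(1) (our illustrative
    score model), using E Y**alpha = Gamma(alpha + 1) for Y ~ Exp(1)."""
    return math.gamma(alpha + 1.0) ** (1.0 / alpha)

means = [alpha_power_mean(a) for a in (0.5, 1.0, 2.0, 3.0)]
```

For example, \(\alpha=1\) gives exactly 1 (the mean of Exp(1)), and \(\alpha=1/2\) gives \(\Gamma(3/2)^{2}=\pi/4\).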
4.5 Monotonicity of the interval function \(\operatorname{JVar}_{\phi}\varphi ( {{X _{ [ {a,b} ] }}} ) \)
In this section, we apply the log concavity of functions to study the monotonicity of the interval function \(\operatorname{JVar}_{\phi }\varphi ( {{X _{ [ {a,b} ] }}} ) \) involving the ϕ-Jensen variance. In particular, we generalize the inequalities in (21) to the case where D is a truncated random variable (see Remark 2). Our purpose is to study the hierarchical and the traditional teaching models from the viewpoint of the analysis of variance, so as to decide on the superiority or the inferiority of the hierarchical teaching model relative to the traditional one.
Let \(X_{[a,b]}\) be a truncated random variable of X, where the probability density function \({p}:I\rightarrow ( {0,\infty} ) \) of X is continuous. Then, by (4), (50) and the definition of the truncated random variable, we know that the ϕ-Jensen variance of the random variable \(\varphi (X_{[a,b]} )\) is
which is a non-negative interval function, where \(\phi\bullet\varphi \triangleq\phi ( {\varphi} )\) denotes the composite function.
The main result of this section is the following Theorem 3.
Theorem 3
Let the function \({p}:I\rightarrow ( {0,\infty } ) \) be differentiable and log-concave, and let the functions \(\phi :J\rightarrow\mathbb{R} \) and \(\varphi:I\rightarrow J \) be thrice differentiable and twice differentiable, respectively, satisfying the following conditions:
$$\phi ^{{\prime\prime}} ( x ) >0,\quad \forall x\in J \quad\textit{and} \quad{ \varphi^{{\prime}}} ( t ) >0, \quad\forall t\in I, $$
where I and J are intervals. Then we have the following two assertions.
(I)
If \(\phi^{{\prime\prime\prime}} ( x )\geqslant 0\), \(\forall x\in J\), and \(\varphi^{{\prime\prime}} ( t )\geqslant 0\), \(\forall t\in I\), then the interval function \(\operatorname{JVar}_{\phi}\varphi ( {{X _{ [ {a,b} ] }}} ) \) (\([ {a,b} ]\in\overline{I} \)) is right increasing.
(II)
If \(\phi^{{\prime\prime\prime}} ( x ) \leqslant 0\), \(\forall x\in J\), and \(\varphi^{{\prime\prime}} ( t ) \leqslant 0\), \(\forall t\in I\), then the interval function \(\operatorname{JVar}_{\phi}\varphi ( {{X _{ [ {a,b} ] }}} ) \) (\([ {a,b} ]\in\overline{I} \)) is left increasing.
Here the interval function\(\operatorname{JVar}_{\phi}\varphi ( {{X _{ [ {a,b} ] }}} ) \)is defined by (78).
Two real numbers α and β are said to have the same sign [28], written as \(\alpha\sim\beta\), if \(\alpha\beta>0\) or \(\alpha=\beta=0\). Throughout this section, we write
$$ w \triangleq\frac{{\int_{a}^{b}{p\varphi}}}{{\int_{a}^{b}p}}=\frac{{\int_{a}^{b}{p ( t )\varphi ( t ) }\,\mathrm{d}t}}{{\int_{a}^{b}p ( t )\,\mathrm{d}t}},\quad a\ne b. $$
(80)
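The weighted mean w in (80) can be evaluated numerically; the sketch below uses the midpoint rule, with an assumed density \(p(t)=e^{-t}\) on \([0,2]\) and \(\varphi(t)\equiv t\) purely as an example.

```python
import math

def weighted_mean(p, varphi, a, b, n=10000):
    """w = (int_a^b p*varphi) / (int_a^b p), eq. (80), via the midpoint rule."""
    h = (b - a) / n
    ts = [a + (i + 0.5) * h for i in range(n)]
    num = sum(p(t) * varphi(t) for t in ts) * h
    den = sum(p(t) for t in ts) * h
    return num / den

# Assumed example: exponential density truncated to [0, 2], varphi(t) = t.
w = weighted_mean(lambda t: math.exp(-t), lambda t: t, 0.0, 2.0)
```

In this example w is the conditional mean of an exponential variable restricted to \([0,2]\), approximately 0.687.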
In order to prove Theorem 3, we need four lemmas as follows.
Lemma 4
Let the functions \({p}:I\rightarrow ( {0,\infty } ) \) and \(\varphi:I\rightarrow J \) be continuous, and let the function \(\phi :J\rightarrow\mathbb{R} \) be differentiable. If we set
$$ H ( {a,b} ) \triangleq \bigl[ {\phi\bullet\varphi ( b ) -{\phi{^{\prime}}} ( w ) \bigl( {\varphi ( b ) -w} \bigr) } \bigr] \int_{a}^{b}p- \int_{a}^{b}{p\phi\bullet\varphi}, $$
(81)
Lemma 5
Let the function \({p}:I\rightarrow ( {0,\infty } ) \) be continuous, and let the functions \(\varphi:I\rightarrow J \) and \(\phi :J\rightarrow\mathbb{R} \) be differentiable and twice differentiable, respectively. Then we have
$$ \frac{{\partial H ( {a,b} ) }}{{\partial b}}= \biggl( {\varphi ^{{\prime}} ( b ) \int _{a}^{b}p} \biggr) \int_{w}^{\varphi ( b ) }{ \phi^{{\prime\prime}} ( t ) }\,\mathrm{d}t-p ( b ) \phi^{{\prime\prime}} ( w ) { \bigl( {\varphi ( b ) -w} \bigr) ^{2}}, \quad\forall [ {a,b} ] \in\overline{I}, $$
(87)
where I and J are intervals, and w and \(H ( {a,b} )\) are defined by (80) and (81), respectively.
Proof
$$\begin{aligned} \frac{{\partial H ( {a,b} ) }}{{\partial b}} ={}&\frac{\partial }{{\partial b}} \biggl\{ { \bigl[ {\phi\bullet\varphi ( b ) -\phi ^{{\prime }} ( w ) \bigl( {\varphi ( b ) -w} \bigr) } \bigr] \int _{a}^{b}p-\int_{a}^{b}{p \phi\bullet\varphi}} \biggr\} \\ ={}& {\frac{\partial [ {\phi\bullet\varphi ( b ) -\phi^{\prime} ( w ) ( {\varphi ( b ) -w} ) } ] }{{\partial b}}} \int_{a}^{b}p \\ &{} + \bigl[ {\phi\bullet\varphi ( b ) -\phi^{{\prime}} ( w ) \bigl( {\varphi ( b ) -w} \bigr) } \bigr] p ( b ) -p ( b ) \phi\bullet\varphi ( b ) \\ ={}& \bigl[{\phi^{\prime}\bullet\varphi ( b )}-\phi^{{\prime }} ( w ) \bigr]\varphi^{{\prime}} ( b )\int_{a}^{b}p- \phi^{{\prime\prime}} ( w )p ( b ) \bigl( {\varphi ( b ) -w} \bigr)^{2} \\ &{} +\phi^{{\prime }} ( w )p ( b ) \bigl( {\varphi ( b ) -w} \bigr)+ \bigl[ { \phi\bullet\varphi ( b ) -\phi^{{\prime}} ( w ) \bigl( {\varphi ( b ) -w} \bigr) } \bigr] p ( b ) -p ( b ) \phi\bullet\varphi ( b ) \\ ={}& \biggl( {\varphi^{{\prime}} ( b ) \int_{a}^{b}p} \biggr) \bigl[ {\phi^{{\prime}}\bullet\varphi ( b ) -\phi^{{\prime }} ( w ) } \bigr] -p ( b ) \phi^{{\prime\prime}} ( w ) { \bigl( {\varphi ( b ) -w} \bigr) ^{2}} \\ ={}& \biggl( {\varphi^{{\prime}} ( b ) \int_{a}^{b}p} \biggr) \int_{w}^{\varphi ( b ) }{\phi^{{\prime\prime}} ( t ) }\,\mathrm{d}t-p ( b ) \phi^{{\prime\prime}} ( w ) { \bigl( {\varphi ( b ) -w} \bigr) ^{2}}. \end{aligned}$$
The proof is completed. □
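Identity (87) can be sanity-checked numerically by comparing a central finite difference of \(H(a,b)\) in b with its right-hand side; here the inner integral \(\int_{w}^{\varphi ( b ) }\phi^{\prime\prime} ( t )\,\mathrm{d}t\) is evaluated in closed form as \(\phi^{\prime} (\varphi(b) )-\phi^{\prime}(w)\). The test data \(p(t)=e^{-t}\), \(\phi(x)=x^{3}\), \(\varphi(t)=t\) on \([0.5,1.5]\) are illustrative assumptions, not data from the paper.

```python
import math

def H(p, phi, dphi, varphi, a, b, n=4000):
    """H(a,b) from (81): [phi(varphi(b)) - phi'(w)(varphi(b)-w)] * int p - int p*phi(varphi)."""
    h = (b - a) / n
    ts = [a + (i + 0.5) * h for i in range(n)]
    int_p = sum(p(t) for t in ts) * h
    w = sum(p(t) * varphi(t) for t in ts) * h / int_p
    int_p_phi = sum(p(t) * phi(varphi(t)) for t in ts) * h
    return (phi(varphi(b)) - dphi(w) * (varphi(b) - w)) * int_p - int_p_phi

# Illustrative data: p(t) = e^{-t}, phi(x) = x^3, varphi(t) = t on [0.5, 1.5].
p = lambda t: math.exp(-t)
phi, dphi, d2phi = (lambda x: x ** 3), (lambda x: 3 * x * x), (lambda x: 6 * x)
varphi, dvarphi = (lambda t: t), (lambda t: 1.0)
a, b, eps = 0.5, 1.5, 1e-5

# Left-hand side of (87): central finite difference of H in b.
lhs = (H(p, phi, dphi, varphi, a, b + eps)
       - H(p, phi, dphi, varphi, a, b - eps)) / (2 * eps)

# Right-hand side of (87), with the inner integral in closed form.
n = 4000
h = (b - a) / n
ts = [a + (i + 0.5) * h for i in range(n)]
int_p = sum(p(t) for t in ts) * h
w = sum(p(t) * varphi(t) for t in ts) * h / int_p
rhs = dvarphi(b) * int_p * (dphi(varphi(b)) - dphi(w)) \
      - p(b) * d2phi(w) * (varphi(b) - w) ** 2
```

The two sides agree to within the quadrature and finite-difference error, which supports the computation in the proof above.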
Lemma 6
Let the function \({p}:I\rightarrow ( {0,\infty } ) \) be differentiable and log-concave, and let the function \(\varphi :I\rightarrow J \) be twice differentiable and satisfy the following condition:
$${\varphi^{{\prime}}} ( t ) >0,\quad \forall t\in I. $$
If we set
$$ {H_{\ast}} ( {a,b} ) \triangleq{\varphi^{{\prime}}} ( b ) \int _{a}^{b}p-p ( b ) \bigl( {\varphi ( b ) -w} \bigr), $$
Next, we assume that (95) holds. Then (93) holds by the proof of Lemma 6. Now we prove that inequality (96) also holds as follows.
From (95) we know that the function \(\phi^{\prime\prime}\) is decreasing, hence from (93) we get
$$ \phi^{\prime\prime} ( w )\leqslant\phi^{\prime\prime } ( t )\leqslant \phi^{\prime\prime} \bigl( \varphi(b) \bigr) \quad\mbox{for } \varphi(b)\leqslant t \leqslant w. $$
$$\begin{aligned} \frac{{\partial H ( {a,b} )}}{{\partial b}} & = \biggl( {\varphi ^{{\prime}} ( b )\int _{a}^{b} p } \biggr)\int_{w}^{\varphi ( b )} {\phi^{{\prime\prime}} ( t )} \,\mathrm{d}t - p ( b )\phi^{{\prime\prime}} ( w ){ \bigl( {\varphi ( b ) - w} \bigr)^{2}} \\ & = \biggl( {\varphi^{{\prime}} ( b )\int_{b}^{a} p } \biggr)\int_{\varphi ( b )}^{w} {\phi^{\prime\prime} ( t )} \,\mathrm{d}t - p ( b )\phi^{{\prime\prime}} ( w ){ \bigl( {\varphi ( b ) - w} \bigr)^{2}} \\ & \geqslant \biggl( {\varphi^{{\prime}} ( b )\int_{b}^{a} p } \biggr) \bigl( {w - \varphi ( b )} \bigr)\phi^{\prime\prime} ( w ) - p ( b )\phi^{{\prime\prime}} ( w ){ \bigl( {\varphi ( b ) - w} \bigr)^{2}} \\ & = \biggl( {\varphi^{{\prime}} ( b )\int_{a}^{b} p } \biggr) \bigl( {\varphi ( b )-w} \bigr)\phi^{\prime\prime} ( w ) - p ( b )\phi^{{\prime\prime}} ( w ){ \bigl( {\varphi ( b ) - w} \bigr)^{2}} \\ &= \bigl[ {\varphi ( b ) -w} \bigr] \phi^{{\prime\prime}} ( w ) {H_{\ast}} ( {a,b} ). \end{aligned}$$
That is to say, inequality (96) still holds. This ends the proof of Lemma 7. □
$$ x\in J\quad\Rightarrow\quad\phi^{{\prime\prime\prime}} ( x ) \geqslant0, \quad t\in I\quad\Rightarrow\quad \varphi^{{\prime\prime}} ( t ) \geqslant0,\quad a,b\in I, a< b. $$
A large number of analysis and inequality theories are used in the proof of Theorem 3. Based on these theories, we obtained Theorem 1 and Lemmas 4-7, from which the proof of Theorem 3 follows. Therefore, the proof of Theorem 3 is both interesting and difficult.
Remark 2
Let \(D\in\mathbb{R}\) be a log-concave random variable. In (103) and (112), if we set \(\phi(x)\equiv x^{2}\), \(\varphi(t)\equiv t\), \(I=\mathbb{R}\), then we get the inequalities
where \(p:\mathbb{R}\rightarrow(0,\infty)\) is a differentiable log-concave function. In other words, we have generalized the inequalities in (21) to the case where D is a truncated random variable.
In Section 4.6, we will demonstrate the applications of Theorem 3.
Theorem 3 is rich in content; it implies the following four interesting corollaries.
Corollary 3
Let X be a continuous random variable whose probability density function \({p}:I\rightarrow ( {0,\infty} ) \) is a differentiable log-concave function, and let the twice differentiable function \(\varphi: I\rightarrow J \) satisfy the following conditions:
$$I,J \subset(0,\infty),\qquad {\varphi^{{\prime}}} ( t ) >0,\qquad {\varphi^{{\prime\prime}}} ( t ) \geqslant0, \quad\forall t\in I, $$
where I and J are intervals. Then the interval function \(\operatorname{JVar} _{\iint\varphi^{-1} }\varphi ( {{X _{ [ {a,b} ] }}} ) \) (\([ {a,b} ]\in\overline{I} \)) is right increasing.
Proof
Set \(\phi^{\prime\prime}=\varphi^{-1}\), where \(\varphi ^{-1}\) is the inverse function of the function φ. Since
$$I,J \subset(0,\infty),\qquad {\varphi^{{\prime}}} ( t ) >0,\qquad {\varphi^{{\prime\prime}}} ( t ) \geqslant0,\quad \forall t\in I, $$
we have
$$\phi^{\prime\prime}(x)>0, \qquad\phi^{\prime\prime\prime}(x)>0, \quad\forall x\in J. $$
By assertion (I) in Theorem 3, the interval function \(\operatorname{JVar} _{\iint\varphi^{-1} }\varphi ( {{X _{ [ {a,b} ] }}} ) \) (\([ {a,b} ]\in\overline{I} \)) is right increasing. This completes the proof of Corollary 3. □
In Theorem 3, if we set \(\phi(x)\equiv x^{2}\) and \(\varphi(t)\equiv t\), then we get the following.
Corollary 4
Let X be a continuous random variable whose probability density function \({p}:I\rightarrow ( {0,\infty} ) \) is differentiable and log-concave. Then the interval function \(\operatorname{Var}{{X _{ [ {a,b} ] }}} \) (\([ {a,b} ]\in \overline{I} \)) is increasing.
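Corollary 4 can be illustrated numerically: the standard normal density is differentiable and log-concave, so the variance of the truncated variable \(X _{ [ {a,b} ] }\) should grow as the interval \([a,b]\) grows. The sketch below widens a symmetric interval, which exercises both left and right increase; the quadrature scheme is an illustrative choice.

```python
import math

def truncated_variance(p, a, b, n=20000):
    """Variance of the truncated variable X_[a,b] under density p (midpoint rule)."""
    h = (b - a) / n
    ts = [a + (i + 0.5) * h for i in range(n)]
    mass = sum(p(t) for t in ts) * h
    mean = sum(p(t) * t for t in ts) * h / mass
    return sum(p(t) * (t - mean) ** 2 for t in ts) * h / mass

# The standard normal density is differentiable and log-concave, so Corollary 4
# predicts that Var X_[a,b] grows as the interval [a,b] grows.
p = lambda t: math.exp(-t * t / 2) / math.sqrt(2 * math.pi)
variances = [truncated_variance(p, -c, c) for c in (0.5, 1.0, 2.0, 3.0)]
```

As \([-c,c]\) widens, the computed variances increase toward the untruncated variance 1, in line with the corollary.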
In Section 5.2, we will demonstrate the applications of Corollary 4 in the hierarchical teaching model.
Corollary 5
Let X be a continuous random variable whose probability density function \({p}:I\rightarrow ( {0,\infty} ) \) is differentiable, and let the twice differentiable function \(\varphi:I\rightarrow J \) satisfy the following condition:
$${\varphi^{{\prime}}} ( t ) >0, \quad\forall t\in I, $$
is a log-concave function, then the interval function \(\operatorname{Var}\varphi ({{X _{ [ {a,b} ] }}} )\) is increasing, where \(\varphi^{-1}\) is the inverse function of φ, and \({{X _{ [ {a,b} ] }}}\subseteq X\).
and \(p_{J}^{*}=p_{I} \bullet\varphi^{-1} (\varphi^{-1} )^{\prime}\) is a differentiable log-concave function, by Corollary 4, the interval function \(\operatorname{Var}^{(p_{J}^{*})}{{X _{ [ {a^{*},b^{*}} ] }^{*}}} \) (\([ {a^{*},b^{*}} ]\in\overline{J} \)) is increasing, i.e.,
That is to say, the interval function \(\operatorname{Var}\varphi ({{X _{ [ {a,b} ] }}} ) \) (\([ {a,b} ]\in\overline {I} \)) is increasing. The proof of Corollary 5 is completed. □
In Section 5.3, we will demonstrate the applications of Corollary 5 in the generalized traditional teaching model.
In Theorem 3, if I is an n-dimensional interval, then we have the following result.
Corollary 6
Let the probability density function \({p_{j}}:I_{j}\rightarrow ( {0,\infty} ) \) of the random variable \(X _{j}\) be a differentiable log-concave function, and let \(\varphi_{j} :I_{j}\rightarrow ( {0,\infty} ) \) be twice differentiable, which satisfy the following conditions:
where \(I=I_{1}\times\cdots\times I_{n}\), \(t= (t_{1},\ldots ,t_{n} )\). If \(\gamma\geqslant2\), and \({X _{1}}, \ldots,{X _{n}} \) are independent random variables, then the interval function
is right increasing, where \(p:I\rightarrow(0,\infty)\) is the probability density function of the n-dimensional random variable \(X\triangleq(X_{1},\ldots,X_{n})\), and \(X_{[\mathbf{a},\mathbf {b}]}\subseteq X\).
and there is \(j:1\leqslant j\leqslant n\) such that the equations in (118) and (119) do not hold, where the function \({\phi_{\gamma}}\) is defined by (59).
Since \({X _{1}}, \ldots,{X _{n}}\) are independent random variables, we have
Since there is \(j:1\leqslant j\leqslant n\) such that the equations in (118) and (119) do not hold, the equation in inequalities (123) does not hold. That is to say, inequalities (120) hold when \(n=2\).
(II) Suppose that
$$ 0< \prod_{j=1}^{n-1}A_{j}-\prod _{j=1}^{n-1}B_{j} \leqslant\prod _{j=1}^{n-1}A_{j}^{\prime}- \prod_{j=1}^{n-1}B_{j}^{\prime},\quad n \geqslant3. $$
Since there is \(j:1\leqslant j\leqslant n\) such that the equations in (118) and (119) do not hold, the equation in inequalities (126) does not hold. That is to say, inequalities (120) hold. The proof of Corollary 6 is completed. □
As an application of Corollary 6, we have the following example.
and \(\varphi: I\rightarrow[0,1]\) is the probability distribution function of X, where \({X _{1}}, \ldots,{X _{n}} \) are independent random variables. If \({p_{j}}:I_{j}\rightarrow ( {0,\infty} ) \) is differentiable, increasing and log-concave, then
The normal distribution [29‐32] is considered the most prominent probability distribution in probability and statistics. To facilitate the applications in Sections 5.2 and 5.3, we recall the concept of the k-normal distribution as follows: if the probability density function of the random variable X is
then we say that the random variable X follows a k-normal distribution [1], or X follows a generalized normal distribution [32, 33], denoted by \(X \sim{N_{k}} ( {\mu ,\sigma} ) \), where \(t\in\mathbb{R}\), the parameters \(\mu\in\mathbb{R}\), \(\sigma \in ( {0,\infty} )\), \(k\in ( {1,\infty} )\), and \(\Gamma ( s ) \) is the gamma function. The graph of the function \(p ( {t;0,1,k} )\) is depicted in Figure 3.
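Since the density formula itself is not reproduced in this excerpt, the following sketch implements one common normalization of the k-normal density, chosen so that \(k=2\) recovers the ordinary normal density \(N ( {\mu,\sigma} )\) and \(E \vert X-\mu \vert ^{k}=\sigma^{k}\); it should be read as an assumption consistent with the surrounding description, not as a transcription of the paper's formula.

```python
import math

def k_normal_pdf(t, mu=0.0, sigma=1.0, k=2.0):
    """Assumed k-normal density: exp(-|t - mu|^k / (k sigma^k)), normalized so
    that the total mass is 1; k = 2 gives the ordinary normal density N(mu, sigma)."""
    c = 2.0 * (k * sigma ** k) ** (1.0 / k) * math.gamma(1.0 + 1.0 / k)
    return math.exp(-abs(t - mu) ** k / (k * sigma ** k)) / c

# Sanity checks: the k = 2 value at the mode equals 1/sqrt(2*pi), and the
# density integrates to 1 for a non-Gaussian shape parameter as well.
density_at_mode = k_normal_pdf(0.0, 0.0, 1.0, 2.0)
n, lo, hi = 24000, -12.0, 12.0
h = (hi - lo) / n
mass = sum(k_normal_pdf(lo + (i + 0.5) * h, 0.0, 1.0, 1.5) for i in range(n)) * h
```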
Clearly, \(p ( {t;\mu,\sigma,2} )\) is just the density of the normal distribution \(N ( {\mu,\sigma} )\) with mean μ and standard deviation σ, as well as \(p ( {t;\mu,\sigma,k} )\) and the probability distribution function
here μ, \(\sigma^{k}\) and σ are the mathematical expectation, the k-order absolute central moment and the k-order mean absolute central moment of the random variable X, respectively.
We remark here that there are close relationships between the k-normal distribution and the Weibull distribution [1].
5.2 Applications in the hierarchical teaching model
In the hierarchical teaching model or the traditional teaching model, the score of each student is treated as a random variable \({X _{I}}\in I= [ {0,1} ]\). By the central limit theorem [34], we may assume that \({X _{I}}\subseteq X \sim {N_{2}} ( {\mu,\sigma } )\), where μ is the average score of the students and σ is the standard deviation of the score. If the top and bottom students are insignificant, that is to say, the variance VarX of the random variable X is very small, then according to formulas (128) and Figure 3, we may assume that there is a real number \(k\in ( {2,\infty} ) \) such that \({X _{I}}\subseteq X \sim {N_{k}} ( {\mu,\sigma} ) \); otherwise, we may assume that there is a real number \(k\in ( {1,2} ) \) such that \({X _{I}}\subseteq X \sim{N_{k}} ( {\mu,\sigma} ) \). Here \(\mu\in ( {0,1} ) \) is the average score of the students and σ is the k-order mean absolute central moment of the score. We can estimate the number k by means of a sampling procedure.
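The sampling procedure for estimating k is not specified in this excerpt; one simple moment-matching sketch (an assumption, not necessarily the procedure intended in [1]) uses the scale-free ratio \(E \vert X-\mu \vert ^{2}/ (E \vert X-\mu \vert )^{2}=\Gamma(3/k)\Gamma(1/k)/\Gamma(2/k)^{2}\), which holds for a common normalization of the k-normal density and decreases strictly in k, so it can be inverted by bisection.

```python
import math
import random

def estimate_k(xs, lo=1.01, hi=20.0, iters=60):
    """Moment-matching sketch for the shape parameter k of N_k(mu, sigma).

    For the k-normal law, E|X-mu|^2 / (E|X-mu|)^2 equals
    Gamma(3/k) * Gamma(1/k) / Gamma(2/k)^2, which decreases strictly in k,
    so the sample ratio can be inverted by bisection.
    """
    mu = sum(xs) / len(xs)
    m1 = sum(abs(x - mu) for x in xs) / len(xs)
    m2 = sum((x - mu) ** 2 for x in xs) / len(xs)
    target = m2 / (m1 * m1)
    ratio = lambda k: math.gamma(3 / k) * math.gamma(1 / k) / math.gamma(2 / k) ** 2
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if ratio(mid) > target:
            lo = mid          # ratio too large, so k must be larger
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Normal data correspond to k = 2, so the estimate should land close to 2.
random.seed(0)
sample = [random.gauss(0.0, 1.0) for _ in range(20000)]
k_hat = estimate_k(sample)
```

For \(k=2\) the ratio equals \(\Gamma(3/2)\Gamma(1/2)/\Gamma(1)^{2}=\pi/2\), which is the classical value for the normal distribution.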
Based on the above analysis, on the identity \({\phi_{\gamma}^{\prime\prime\prime }} ( x )= (\gamma-2 )x^{\gamma-3}\) (where the function \({\phi_{\gamma}}\) is defined by (59)), and on Theorem 3, Corollary 4 and formulas (128), we get the following proposition.
Proposition 6
In the hierarchical teaching model \(\operatorname{HTM} \{ {{a_{0}},\ldots,{a_{m}},{p_{I}}} \}\), assume that \({X _{I}}\subset X \sim{N_{k}} ( {\mu,\sigma} )\), \(k>1\). Then we have the following three assertions.
(I)
If \(\gamma\geqslant2\), \(0\leqslant i< j \leqslant{j^{\prime}}\leqslant m\), then we have the following inequality:
According to Proposition 6, we know that the \(\operatorname{HTM} \{ {{a_{0}},\ldots,{a_{m}},{p_{I}}} \}\) is increasing under the hypotheses
$${X _{I}}\subset X \sim{N_{k}} ( {\mu,\sigma} ),\quad k>1. $$
Therefore, we may conclude that the hierarchical teaching model is ‘normally’ better than the traditional teaching model by the central limit theorem and Proposition 6.
Remark 4
In [1], the authors proved that the probability density function of the k-normal distribution is quasi-log concave and showed that the generalized hierarchical teaching model is ‘normally’ better than the generalized traditional teaching model. That is to say, in the \(\operatorname{HTM} \{ {{-\infty}, \ldots, {\infty},p_{\mathbb{R}}} \}\), if \(X_{\mathbb{R}} \sim{N_{2}} ( {\mu,\sigma} )\), then we have the following inequalities:
Therefore, Proposition 6 is a generalization of (132).
5.3 Applications in the generalized traditional teaching model
Next, we demonstrate the applications of Corollary 5 in the generalized traditional teaching model.
In the generalized traditional teaching model \(\operatorname{HTM} \{ {-\infty,\infty,{p_{\mathbb{R}}}} \} \), according to the central limit theorem, we may assume that the score X of each student follows a k-normal distribution, i.e., \(X \sim{N_{k}} ( {\mu,\sigma} )\), \(k>1\), where \(\mu>0\) is the average score of the students and \(\sigma>0 \) is the k-order mean absolute central moment of the score.
In the \(\operatorname{HTM} \{ {{-\infty},{\infty},{p_{\mathbb{R}}}} \}\), assume that
In the \(\operatorname{HTM} \{ {-\infty,\infty,{p_{\mathbb{R}}}} \} \), for the general allowance function (72), we have the following.
Proposition 7
In the \(\operatorname{HTM} \{ {-\infty,\infty ,{p_{\mathbb{R}}}} \} \), assume that the score X of each student follows a k-normal distribution, where \(k>1\). Then we have the following two assertions.
(I)
If \(0<\alpha\leqslant1\), then the interval function \(\operatorname{Var} \mathscr{A} (X _{ [ {a,b} ] } ) \) (\([ {a,b} ]\in\overline{ (\mu,\infty )} \)) is increasing.
(II)
If \(1<\alpha<k\), then the interval function \(\operatorname{Var} \mathscr{A} (X _{ [ {a,b} ] } ) \) (\([ {a,b} ]\in\overline{ [\mu^{*},\infty )} \)) is also increasing. Here
According to Corollary 5, we just need to prove that the function \(p_{J}^{*}\triangleq p\bullet\mathscr{A} ^{-1} (\mathscr{A} ^{-1} )^{\prime} \) is a differentiable log-concave function under the hypotheses of assertions (I) and (II).
Therefore, the function \(p_{J}^{*}\triangleq p\bullet\mathscr{A} ^{-1} (\mathscr{A} ^{-1} )^{\prime} \) is a differentiable log-concave function under the hypotheses of assertions (I) and (II). This completes the proof of Proposition 7. □
6 Conclusion
Variances and covariances are important concepts in the analysis of variance since they can be used as quantitative tools in mathematical models involving probability and statistics. The motivation of this paper is to extend the connotation of the analysis of variance and facilitate its applications in probability, statistics and higher education. In the applications, one of our main purposes is to study the hierarchical and the traditional teaching models from the angle of the analysis of variance, so as to decide on the superiority or the inferiority of the hierarchical teaching model and the traditional teaching model.
In this paper, we first introduce the relevant concepts and properties of the interval functions. Next, we study several characteristics of the log-concave function, and prove the interesting quasi-log concavity conjecture in [1]. Next, we generalize the traditional covariance and the variance of random variables and define ϕ-covariance, ϕ-variance, ϕ-Jensen variance, ϕ-Jensen covariance, integral variance and γ-order variance, and study the relationships among these ‘variances’, as well as study the monotonicity of the interval function \(\operatorname{JVar}_{\phi}\varphi ( {{X _{ [ {a,b} ] }}} )\). Finally, we demonstrate the applications of our results in higher education. Based on the monotonicity of the interval function \({\operatorname{Var}^{ [ \gamma ] }}{X _{ [ {{a},{b}} ] }} \) (\([a,b]\in\overline{I} \)), we show that the hierarchical teaching model is ‘normally’ better than the traditional teaching model under the hypotheses that \({X _{I}}\subset X \sim{N_{k}} ( {\mu,\sigma} )\), \(k>1\). We also study the monotonicity of the interval function \(\operatorname{Var} \mathscr{A} (X _{ [ {a,b} ] } )\) involving an allowance function \(\mathscr{A}\). Theorems 1 and 2 are the main theoretical basis and Theorem 3 is one of main results of this paper.
A large number of algebraic, functional analysis, probability, statistics and inequality theories are used in this paper. The proofs of our results are both interesting and difficult, and these results would be difficult to prove by means of the existing probability and statistics theories alone. Some of our proof methods can also be found in the references of this paper.
Based on the above analysis, we know that the theory of ϕ-Jensen variance is of great theoretical significance and application value in inequality, probability, statistics and higher education.
Acknowledgements
This work is supported in part by the Natural Science Foundation of China (No. 61309015), the Scientific Research Fund of the Education Department of Sichuan Province of China (No. 07ZA207, No. 14ZB0372), the Natural Science Fund of Chengdu University for Young Researchers (No. 2013XJZ08), and the Doctoral Research Fund of Chengdu University (No. 20819022).
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Competing interests
The authors declare that they have no conflicts of interest in this joint work.
Authors’ contributions
All authors contributed equally and significantly in this paper. All authors read and approved the final manuscript.