
Open Access 17-05-2019

Odd Hyperbolic Cosine Exponential-Exponential (OHC-EE) Distribution

Authors: Omid Kharazmi, Ali Saadatinik, Shahla Jahangard

Published in: Annals of Data Science | Issue 4/2019


Abstract

In the present paper, we introduce a new lifetime distribution based on the general odd hyperbolic cosine-FG model. Some important properties of the proposed model, including the survival function, quantile function, hazard function and order statistics, are obtained. In addition, estimation of the unknown parameters of this model is examined from both the classical and the Bayesian points of view. Moreover, a real data set is studied; point and interval estimates of all parameters are obtained by maximum likelihood, bootstrap (parametric and non-parametric) and Bayesian procedures. Finally, the superiority of the proposed model, with the exponential distribution as its parent, over other fundamental statistical distributions is shown through this example of real observations.

1 Introduction

Alzaatreh et al. [2] introduced a new method for generating lifetime distributions; researchers refer to its special case as the odd-G distribution. It is based on combining an arbitrary cumulative distribution function (CDF) with the odds ratio of a baseline CDF G. The new CDF has the integral form
$$\begin{aligned} H(x)=\int _{-\infty }^{\frac{G(x)}{1-G(x)}} f(t)dt, \end{aligned}$$
(1)
where f is the probability density function (PDF) of the arbitrary CDF. This method has attracted the attention of several researchers; we refer the reader to [1, 4]. Models generated in this way have proved to be very flexible. Recently, Kharazmi et al. [7] introduced a new general family of lifetime distributions based on the odds ratio of a parent distribution G applied to the general hyperbolic cosine-F (HCF) family of lifetime distributions recently proposed by Kharazmi and Saadatinik [6]. This new model is denoted the odd-HCF-G (or OHCFG) distribution.
In the present paper, we introduce a special case of the odd-HCF-G model obtained by using exponential distributions for both F and G. It is referred to as the OHCEE distribution. One of our main motivations for introducing this new distribution is to provide more flexibility for fitting real data sets compared with other well-known classical statistical distributions.
We provide a comprehensive discussion of the statistical and reliability properties of the new OHCEE model. Furthermore, we consider maximum likelihood and bootstrap estimation procedures in order to estimate the unknown parameters of the new model for a complete data set. In addition, parametric and non-parametric bootstrap confidence intervals are calculated.
The rest of the paper is organized as follows. In Sect. 2, we review the main statistical and reliability properties of the model recently proposed by Kharazmi et al. [7]. In Sect. 3, by taking two exponential distributions as the base distributions, a new model is presented according to the general OHCFG model reviewed in Sect. 2, and its prominent characteristics are studied; this new model is referred to as the OHCEE distribution. In Sect. 4, we examine inferential procedures for estimating the unknown parameters of the OHCEE model, with discussions of the maximum likelihood, Bayesian and bootstrap procedures. Section 5 presents an algorithm for generating OHCEE data and a Monte Carlo simulation study. Applications and numerical analysis of a real data set are presented in Sect. 6. Finally, the paper is concluded in Sect. 7.

2 Preliminaries of HCF and OHCFG Lifetime Models

In this section, we review the structure of the two general HCF and OHCFG models. Kharazmi and Saadatinik [6] introduced a family of distributions using the hyperbolic cosine function. The hyperbolic cosine shares its name with the trigonometric cosine, but it is defined in terms of the exponential function as follows:
$$\begin{aligned} \cosh (x)=\frac{e^x+e^{-x}}{2} \end{aligned}$$
(2)
According to Kharazmi and Saadatinik [6] a random variable X has a hyperbolic cosine-F (HCF) distribution if its cumulative distribution function (cdf) is given by
$$\begin{aligned} G(x)=\frac{2e^a}{e^{2a}-1}\sinh \Big (aF(x)\Big ), \end{aligned}$$
(3)
where \(x>0,\, a>0\).
Motivated by the idea of Alzaatreh et al. [2], a new class of statistical distributions was proposed by Kharazmi et al. [7]. The new model is constructed by applying the Alzaatreh approach to the hyperbolic cosine-F (HCF) family of lifetime distributions.
The CDF of new model is defined as
$$\begin{aligned} \begin{aligned} H(x)&=\int _{0}^{\frac{G(x)}{1-G(x)}}\frac{2a\,e^a}{e^{2a}-1}f(t)\cosh (aF(t)){\mathrm {d}}t\\&=\frac{2e^a}{e^{2a}-1}\sinh \left( aF\left( \frac{G(x)}{1-G(x)}\right) \right) \end{aligned} \end{aligned}$$
The density of OHC-FG model can be obtained as
$$\begin{aligned} h(x)=\frac{2a\,e^a}{e^{2a}-1}\frac{g(x)}{(1-G(x))^2}f \left( \frac{G(x)}{1-G(x)}\right) \cosh \left( aF\left( \frac{G(x)}{1-G(x)}\right) \right) \end{aligned}$$
The survival (reliability) function \({\bar{H}}(x)\) and the hazard rate function r(x) of the OHC-FG distribution have the following forms
$$\begin{aligned} {\bar{H}}(x)=1-\frac{2e^a}{e^{2a}-1}\sinh \left( aF \left( \frac{G(x)}{1-G(x)}\right) \right) \end{aligned}$$
and
$$\begin{aligned} r(x)=\frac{\frac{2a\,e^a}{e^{2a}-1}\frac{g(x)}{(1-G(x))^2}f \left( \frac{G(x)}{1-G(x)}\right) \cosh \left( aF\left( \frac{G(x)}{1-G(x)}\right) \right) }{1-\frac{2e^a}{e^{2a}-1}\sinh \left( aF\left( \frac{G(x)}{1-G(x)}\right) \right) }, \end{aligned}$$
respectively. The pth quantile \(x_p\) of the OHC-FG distribution can be obtained as
$$\begin{aligned} x_p=G^{-1}\left( \frac{F^{-1}\left( \frac{1}{a}arcsinh \left( \frac{e^{2a}-1}{2e^a}p\right) \right) }{1+F^{-1} \left( \frac{1}{a}arcsinh\left( \frac{e^{2a}-1}{2e^a}p\right) \right) }\right) ;\quad 0\le p \le 1. \end{aligned}$$
Since \(arcsinh(x)=\ln (x+\sqrt{x^2+1})\), we get
$$\begin{aligned} x_p=G^{-1}\left( \frac{F^{-1}\left( \frac{1}{a} \log \left( \frac{e^{2a}-1}{2e^a}p+\sqrt{ \left( \frac{e^{2a}-1}{2e^a}p\right) ^2+1}\right) \right) }{1+F^{-1}\left( \frac{1}{a} \log \left( \frac{e^{2a}-1}{2e^a}p+\sqrt{\left( \frac{e^{2a}-1}{2e^a}p\right) ^2+1}\right) \right) }\right) . \end{aligned}$$
Hence, if the baseline F and G distributions are invertible, then we can easily generate random samples from the OHC-FG distribution.

3 A Special Case of the OHCFG Model

We apply the OHC-FG method to a specific choice of baseline distributions, namely exponential distributions for both F and G, and call the resulting three-parameter distribution the OHC-EE distribution.
Definition
A random variable X has the OHC-\(\hbox {EE}(a, \lambda _1,\lambda _2)\) distribution if its probability density function (PDF) is given by
$$\begin{aligned} h(x)= & {} \frac{2a\,e^a}{e^{2a}-1}\lambda _1\,e^{\lambda _1 x}\,\lambda _2\,e^{-\lambda _2(e^{\lambda _1x}-1)}\,\cosh \left( a\left( 1-e^{-\lambda _2(e^{\lambda _1x}-1)}\right) \right) ;\nonumber \\ x> & {} 0,\, a,\lambda _1,\lambda _2>0. \end{aligned}$$
(4)
The CDF of (4) can be written as
$$\begin{aligned} H(x)=\frac{2\,e^a}{e^{2a}-1}\sinh \left( a\left( 1-e^{-\lambda _2(e^{\lambda _1x}-1)}\right) \right) , \end{aligned}$$
The survival function is given by
$$\begin{aligned} {\bar{H}}(x)=1-\frac{2\,e^a}{e^{2a}-1}\sinh \left( a\left( 1-e^{-\lambda _2(e^{\lambda _1x}-1)}\right) \right) , \end{aligned}$$
and the hazard rate function is given by
$$\begin{aligned} r(x)=\frac{h(x)}{{\bar{H}}(x)}=\frac{2a\,e^a\,\lambda _1\,e^{\lambda _1 x}\,\lambda _2\,e^{-\lambda _2(e^{\lambda _1x}-1)}\, \cosh \left( a\left( 1-e^{-\lambda _2(e^{\lambda _1x}-1)}\right) \right) }{e^{2a}-1-2e^a\,\sinh \left( a\left( 1-e^{-\lambda _2(e^{\lambda _1x}-1)}\right) \right) } \end{aligned}$$

3.1 Some Properties of the OHCEE Distribution

In this section, we obtain some properties of the OHCEE distribution, including quantiles, moments, the moment generating function and order statistics. Some shapes of the density function and the hazard rate function are shown in Figs. 1 and 2.
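For illustration, the density, CDF and hazard rate of the OHC-EE distribution can be evaluated numerically. The following R sketch implements the formulas above; the function names dohcee, pohcee and hohcee are ours and are not from the paper.

```r
# Illustrative sketch (not the authors' code): density, CDF and hazard rate of OHC-EE(a, l1, l2).
dohcee <- function(x, a, l1, l2) {
  k <- 2 * a * exp(a) / (exp(2 * a) - 1)        # normalizing constant 2a e^a / (e^{2a} - 1)
  u <- 1 - exp(-l2 * (exp(l1 * x) - 1))         # F(G(x)/(1 - G(x))) for exponential F and G
  k * l1 * exp(l1 * x) * l2 * exp(-l2 * (exp(l1 * x) - 1)) * cosh(a * u)
}
pohcee <- function(x, a, l1, l2) {
  u <- 1 - exp(-l2 * (exp(l1 * x) - 1))
  2 * exp(a) / (exp(2 * a) - 1) * sinh(a * u)
}
hohcee <- function(x, a, l1, l2) dohcee(x, a, l1, l2) / (1 - pohcee(x, a, l1, l2))

# a few density and hazard shapes, in the spirit of Figs. 1 and 2
curve(dohcee(x, a = 2, l1 = 0.5, l2 = 0.5), 0, 6, ylab = "h(x)")
curve(hohcee(x, a = 0.5, l1 = 1, l2 = 0.5), 0, 3, ylab = "r(x)")
```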

3.2 Quantiles

For the OHCEE distribution, the pth quantile \(x_p\) is the solution of \(H(x_p)=p\), hence
$$\begin{aligned} \begin{aligned} x_p&=\frac{1}{\lambda _1}\log \left( 1-\frac{1}{\lambda _2}\log \left( 1-\frac{1}{a}arcsinh\left( \frac{e^{2a}-1}{2e^a}p\right) \right) \right) \\&=\frac{1}{\lambda _1}\log \left( 1-\frac{1}{\lambda _2}\log \left( 1-\frac{1}{a}\log \left( \frac{e^{2a}-1}{2e^a}p+\sqrt{ \left( \frac{e^{2a}-1}{2e^a}p\right) ^2+1}\right) \right) \right) ;\quad 0 \le p \le 1 \end{aligned} \end{aligned}$$
which is the basis for generating OHCEE random variates.
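A minimal R sketch of this quantile function (the name qohcee is ours, not from the paper), with a sanity check against the pohcee CDF sketched in Sect. 3.1:

```r
# Sketch of the quantile function above, obtained by inverting H(x_p) = p
qohcee <- function(p, a, l1, l2) {
  z <- asinh((exp(2 * a) - 1) / (2 * exp(a)) * p)   # arcsinh term, = log(. + sqrt(.^2 + 1))
  (1 / l1) * log(1 - (1 / l2) * log(1 - z / a))
}
# sanity check: pohcee(qohcee(0.3, a = 2, l1 = 0.5, l2 = 0.5), a = 2, l1 = 0.5, l2 = 0.5)
# should return 0.3
```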

3.3 Moments and Moment Generating Function

In this subsection, moments and related measures, including the coefficients of variation, skewness and kurtosis, are presented. Tables of values for the first six moments, standard deviation (SD), coefficient of variation (CV), coefficient of skewness (CS) and coefficient of kurtosis (CK) are also presented. The rth moment of the OHCEE distribution, denoted by \(\mu ^{\prime }_{r}\), is
$$\begin{aligned}&\mu ^{\prime }_{r}{=}E(X^r)=\frac{2a\,e^a}{e^{2a}-1}\int _{0}^{\infty }x^r\,\lambda _1\,e^{\lambda _1 x}\,\lambda _2\,e^{-\lambda _2(e^{\lambda _1x}-1)}\, \sum _{n=0}^{\infty }\frac{a^{2n}\left( 1-e^{-\lambda _2(e^{\lambda _1 x}-1)}\right) ^{2n}}{(2n)!} {\mathrm {d}}x\nonumber \\&\quad =\sum _{n=0}^{\infty }\frac{a^{2n+1}}{(2n)!}\frac{2\,e^a}{e^{2a}-1}\int _{0}^{\infty } x^r\,\lambda _1\,e^{\lambda _1 x}\,\lambda _2\,e^{-\lambda _2(e^{\lambda _1x}-1)}\,\left( 1-e^{-\lambda _2(e^{\lambda _1 x}-1)}\right) ^{2n}{\mathrm {d}}x\nonumber \\&\quad =\sum _{n=0}^{\infty }\frac{a^{2n+1}}{(2n)!}\frac{2\,e^a}{e^{2a}-1}\int _{0}^{\infty }\left( \frac{1}{\lambda _1}\log (1+y)\right) ^r \lambda _2\, e^{-\lambda _2 y}\,\left( 1-e^{-\lambda _2 y}\right) ^{2n}{\mathrm {d}}y\nonumber \\&\quad =\sum _{n=0}^{\infty }\frac{a^{2n+1}}{\lambda _1^r\,(2n+1)!}\frac{2\,e^a}{e^{2a}-1}\int _{0}^{\infty }\left( \log (1+y)\right) ^r\,(2n+1)\,\lambda _2\,e^{-\lambda _2 y}\,\left( 1-e^{-\lambda _2 y}\right) ^{2n}{\mathrm {d}}y\nonumber \\&\quad =\sum _{n=0}^{\infty }\frac{a^{2n+1}}{\lambda _1^r\,(2n+1)!}\frac{2\,e^a}{e^{2a}-1}\int _{0}^{\infty }\left( \log (1+y)\right) ^r\,f_{GE(2n+1,\lambda _2)}(y)\,{\mathrm {d}}y\nonumber \\&\quad =\sum _{j=1}^{\infty }\sum _{n=0}^{\infty }\frac{d_{r.j}\,(-1)^j\,a^{2n+1}}{\lambda _1^r\,(2n+1)!}\frac{2\,e^a}{e^{2a}-1}\int _{0}^{\infty }y^j\,f_{GE(2n+1,\lambda _2)}(y)\,{\mathrm {d}}y \nonumber \\&\quad =\sum _{j=1}^{\infty }\sum _{n=0}^{\infty }\frac{d_{r.j}\,(-1)^j\,a^{2n+1}}{\lambda _1^r\,(2n+1)!}\frac{2\,e^a}{e^{2a}-1}E_{X_{GE(2n+1,\lambda _2)}}[Y^j] \end{aligned}$$
(5)
where GE denotes the generalized exponential distribution. The integral above follows by using the series
$$\begin{aligned} \log ^r(1+y)=\left( -\sum _{k=1}^{\infty }\frac{(-1)^k}{k}y^k\right) ^r \end{aligned}$$
(see the website https://www.wolframalpha.com), and by using the identity
$$\begin{aligned} \left( \sum _{j=0}^{\infty }a_j\,x^j\right) ^t=\sum _{j=0}^{\infty }d_{t.j}x^j \end{aligned}$$
where \(d_{t.j}=(j a_0)^{-1}\sum _{m=1}^{j}[m(t+1)-j]a_m\,d_{t.j-m}\) and \(d_{t.0}=a_0^t\).
Table 1
Moments of the OHCEE distribution for some parameter values (\(a=2\))

\(\mu ^{\prime }_{r}\)     \(\lambda _1=0.5,\,\lambda _2=0.5\)   \(\lambda _1=0.5,\,\lambda _2=1.5\)   \(\lambda _1=1.5,\,\lambda _2=0.5\)   \(\lambda _1=1.5,\,\lambda _2=1.5\)
\(\mu ^{\prime }_{1}\)     2.333524     1.185911     0.7778413    0.3953037
\(\mu ^{\prime }_{2}\)     6.861136     1.943436     0.7623485    0.2159373
\(\mu ^{\prime }_{3}\)     22.67089     3.760179     0.8396628    0.1392659
\(\mu ^{\prime }_{4}\)     81.05204     8.144519     1.000642     0.1005496
\(\mu ^{\prime }_{5}\)     307.6889     19.22541     1.266209     0.0791169
\(\mu ^{\prime }_{6}\)     1226.292     48.65189     1.682157     0.0667378
SD                         1.415802     0.537051     0.1573114    0.0596723
CV                         0.6067227    0.4528594    0.3953037    0.1509531
CS                         0.0312526    0.4615376    0.0312529    0.4615372
CK                         2.320697     2.680524     2.320698     2.680524
Table 2
Moments of the OHCEE distribution for some parameter values (\(\lambda _2=0.5\))

\(\mu ^{\prime }_{r}\)     \(a=0.3,\,\lambda _1=0.5\)   \(a=0.5,\,\lambda _1=1\)   \(a=0.8,\,\lambda _1=1.2\)   \(a=1,\,\lambda _1=1.8\)
\(\mu ^{\prime }_{1}\)     1.860842     0.943458     0.8113764    0.5553856
\(\mu ^{\prime }_{2}\)     4.789929     1.225457     0.8961562    0.4156781
\(\mu ^{\prime }_{3}\)     14.49713     1.866463     1.153372     0.360613
\(\mu ^{\prime }_{4}\)     48.74443     3.152546     1.639693     0.344453
\(\mu ^{\prime }_{5}\)     176.9999     5.743568     2.507746     0.3532041
\(\mu ^{\prime }_{6}\)     682.5668     11.10332     4.062111     0.3830325
SD                         1.327196     0.335344     0.2378245    0.1072249
CV                         0.7132234    0.3554414    0.2931124    0.1930639
CS                         0.4214693    0.39924      0.3476608    0.303294
CK                         2.488223     2.460495     2.404912     2.366611
The variance, CV, CS, and CK are given by
$$\begin{aligned}&\sigma ^2=\mu ^{\prime }_{2}-\mu ^2,\quad CV=\frac{\sigma }{\mu }=\frac{\sqrt{\mu ^{\prime }_{2}-\mu ^2}}{\mu }=\sqrt{\frac{\mu ^{\prime }_2}{\mu ^2}-1},\end{aligned}$$
(6)
$$\begin{aligned}&CS=\frac{E[(X-\mu )^3]}{[E(X-\mu )^2]^{3/2}}=\frac{\mu ^{\prime }_3-3\mu \mu ^{\prime }_2+2\mu ^3}{(\mu ^{\prime }_2-\mu ^2)^{3/2}}, \end{aligned}$$
(7)
and
$$\begin{aligned} CK=\frac{E[(X-\mu )^4]}{[E(X-\mu )^2]^2}=\frac{\mu ^{\prime }_4-4\mu \mu ^{\prime }_3+6\mu ^2\mu ^{\prime }_2-3\mu ^4}{(\mu ^{\prime }_2-\mu ^2)^2} \end{aligned}$$
(8)
respectively. Table 1 lists the first six moments of the OHCEE distribution for selected values of the parameters, fixing \(a=2\), and Table 2 lists the first six moments for selected values of the parameters, fixing \(\lambda _2=0.5\). These values can be determined numerically using R.
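As one possible check, the moments can be approximated in R by numerically integrating \(x^r h(x)\) directly, using the dohcee density sketched in Sect. 3.1; the function name ohcee_moment below is ours.

```r
# Sketch: approximate mu'_r by numerical integration of x^r h(x);
# compare, e.g., with the first column of Table 1.
ohcee_moment <- function(r, a, l1, l2)
  integrate(function(x) x^r * dohcee(x, a, l1, l2), lower = 0, upper = Inf)$value

m1 <- ohcee_moment(1, a = 2, l1 = 0.5, l2 = 0.5)
m2 <- ohcee_moment(2, a = 2, l1 = 0.5, l2 = 0.5)
c(mean = m1, SD = sqrt(m2 - m1^2), CV = sqrt(m2 / m1^2 - 1))
```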
The moment generating function of the OHCEE distribution is given by
$$\begin{aligned} E(e^{tX})=\sum _{n=0}^{\infty }\sum _{k=0}^{2n} \frac{2e^a\,a^{2n+1}\left( {\begin{array}{c}2n\\ k\end{array}}\right) (-1)^k\,\lambda _2}{(e^{2a}-1)(2n)!}\frac{e^{\lambda _2(k+1)}}{(\lambda _2(k+1))^{t/\lambda _1+1}}\,\Gamma \left( \frac{t}{\lambda _1}+1,\lambda _2(k+1)\right) \end{aligned}$$
In order to investigate the skewness and kurtosis of the new model as functions of the three parameters \(a,\,\lambda _1\) and \(\lambda _2\), 3D diagrams are presented in Figs. 3, 4 and 5. Analysis of these graphs shows that all three parameters affect the skewness and kurtosis.

3.4 Order Statistics

Order statistics play an important role in probability and statistics. In this section, we present the distribution of the ith order statistic from the OHCEE distribution. The pdf of the ith order statistic from the OHCEE pdf \(f_{OHCEE}(x)\) is given by
$$\begin{aligned} \begin{aligned} f_{i:n}(x)&=\frac{n!}{(i-1)!(n-i)!}\,f_{OHCEE}(x) \left[ F_{OHCEE}(x)\right] ^{i-1}\left[ 1-F_{OHCEE}(x)\right] ^{n-i}\\&=\frac{n!}{(i-1)!(n-i)!}\,f_{OHCEE}(x)\sum _{m=0}^{n-i}\left( {\begin{array}{c}n-i\\ m\end{array}}\right) (-1)^m\left[ F_{OHCEE}(x)\right] ^{m+i-1} \end{aligned} \end{aligned}$$
by using the binomial expansion \(\left[ 1-F_{OHCEE}(x)\right] ^{n-i}=\sum _{m=0}^{n-i}\left( {\begin{array}{c}n-i\\ m\end{array}}\right) (-1)^m\left[ F_{OHCEE}(x)\right] ^{m}\), so that
$$\begin{aligned} f_{i:n}(x)=\frac{1}{B(i,n-i+1)}\sum _{m=0}^{n-i}\left( {\begin{array}{c}n-i\\ m\end{array}}\right) (-1)^m\left[ F_{OHCEE}(x)\right] ^{m+i-1}\,f_{OHCEE}(x) \end{aligned}$$

4 Inference Procedure

In this section, we consider estimation of the unknown parameters of the \(OHCEE(a,\lambda _1,\lambda _2)\) distribution by the maximum likelihood, Bayesian and bootstrap methods. We also consider estimation of the stress–strength parameter under this distribution.

4.1 Maximum Likelihood Estimation

Let \(x_1,\dots ,x_n\) be a random sample from the OHCEE distribution and \(\Delta =(a,\lambda _1,\lambda _2)\) the vector of parameters. The log-likelihood function is given by
$$\begin{aligned} \begin{aligned} L=L(\Delta )&=n\log \left( \frac{2a\,e^a}{e^{2a}-1}\right) +n\log (\lambda _1)+\lambda _1\sum _{i=1}^{n}x_i+n\log (\lambda _2)-\lambda _2\sum _{i=1}^{n}(e^{\lambda _1\,x_i}-1)\\&\quad +\,\sum _{i=1}^{n} \log \left( \cosh \left( a\left( 1-e^{-\lambda _2\left( e^{\lambda _1 x_i}-1\right) }\right) \right) \right) \end{aligned} \end{aligned}$$
(9)
The elements of the score vector are given by
$$\begin{aligned} \frac{dL}{da}= & {} n\frac{2e^{3a}(1-a)-2e^a(1+a)}{2\,a\,e^a(e^{2a}-1)}\\&+\,\sum _{i=1}^{n}(1-e^{-\lambda _2(e^{\lambda _1 x_i}-1)}) \tanh \left( a\left( 1-e^{-\lambda _2\left( e^{\lambda _1 x_i}-1\right) }\right) \right) =0,\\ \frac{dL}{d\lambda _1}= & {} \frac{n}{\lambda _1} +\sum _{i=1}^{n}x_i-\lambda _2\sum _{i=1}^{n}x_i\,e^{\lambda _1 x_i} \\&+\,a\lambda _2\sum _{i=1}^{n}x_i\,e^{\lambda _1 x_i}e^{-\lambda _2(e^{\lambda _1 x_i}-1)}\tanh \left( a\left( 1-e^{-\lambda _2\left( e^{\lambda _1 x_i}-1\right) }\right) \right) =0, \end{aligned}$$
and
$$\begin{aligned} \frac{dL}{d\lambda _2}= & {} \frac{n}{\lambda _2}-\sum _{i=1}^{n}(e^{\lambda _1 x_i}-1)\\&+\,a\sum _{i=1}^{n}(e^{\lambda _1 x_i}-1)\,e^{-\lambda _2(e^{\lambda _1 x_i}-1)}\tanh \left( a\left( 1-e^{-\lambda _2\left( e^{\lambda _1 x_i}-1\right) }\right) \right) =0 \end{aligned}$$
respectively.
The maximum likelihood estimates \({\hat{\Delta }}\) of \(\Delta =(a,\lambda _1,\lambda _2)\) are obtained by solving the nonlinear equations \(\frac{dL}{da}=0, \frac{dL}{d\lambda _1}=0, \frac{dL}{d\lambda _2}=0\). These equations are not in closed form, so the values of the parameters a, \(\lambda _1\) and \(\lambda _2\) must be found by an iterative method such as the Newton–Raphson procedure.
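One way to carry out this numerical maximization in R is to minimize the negative of the log-likelihood (9) with optim(); the sketch below is illustrative (data_vector is a placeholder name for the observed sample) and is not the authors' own code.

```r
# Sketch: maximize the log-likelihood (9) numerically via optim()
negloglik <- function(par, x) {
  a <- par[1]; l1 <- par[2]; l2 <- par[3]
  if (any(par <= 0)) return(Inf)                        # keep the search in the parameter space
  u <- 1 - exp(-l2 * (exp(l1 * x) - 1))
  -(length(x) * log(2 * a * exp(a) / (exp(2 * a) - 1)) +
      length(x) * log(l1) + l1 * sum(x) +
      length(x) * log(l2) - l2 * sum(exp(l1 * x) - 1) +
      sum(log(cosh(a * u))))
}
fit <- optim(c(1, 0.5, 0.5), negloglik, x = data_vector, hessian = TRUE)
fit$par                          # MLEs of (a, lambda1, lambda2)
sqrt(diag(solve(fit$hessian)))   # approximate standard errors from the observed information
```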

4.2 Stress–Strength Parameter Estimation

In reliability, the stress–strength model describes the life of a component which has a random strength X subjected to a random stress Y. In this section, we consider the problem of estimating \(R=P(X>Y)\), under the assumption that \(X\sim OHCEE(a_1,\lambda _{11},\lambda _{21})\), \(Y\sim OHCEE(a_2,\lambda _{12},\lambda _{22})\), and X and Y are independently distributed. Then, we can get
$$\begin{aligned} \begin{aligned} R=P(X>Y)&=\int _{0}^{\infty } [1-H_X(y)]\,h_Y(y){\mathrm {d}}y\\&=\int _{0}^{\infty }\left[ 1-\frac{2\,e^a_1}{e^{2a_1}-1}\sinh \left( a_1\left( 1-e^{-\lambda _{21}\left( e^{\lambda _{11}y}-1\right) }\right) \right) \right] \\&\quad \times \, \frac{2a_2\,e^{a_2}}{e^{2a_2}-1}\lambda _{12}\,e^{\lambda _{12}y} \lambda _{22}\,e^{-\lambda _{22} \left( e^{\lambda _{12}y}-1\right) } \cosh \left( a_2\left( 1-e^{-\lambda _{22} \left( e^{\lambda _{12}y}-1\right) }\right) \right) {\mathrm {d}}y \end{aligned} \end{aligned}$$
(10)
To compute the MLE of R, let us first obtain the MLEs of \(a_1,\,a_2,\,\lambda _{11},\,\lambda _{12},\,\lambda _{21}\) and \(\lambda _{22}\). Let \(x_1,\dots ,x_n\) be the observed values of a random sample of size n from \(OHCEE(a_1,\lambda _{11},\lambda _{21})\) and \(y_1,\dots ,y_m\) be the observed values of a random sample of size m from \(OHCEE(a_2,\lambda _{12},\lambda _{22})\). Therefore, the log-likelihood function of \(a_1,\,a_2,\,\lambda _{11},\,\lambda _{12},\,\lambda _{21}\) and \(\lambda _{22}\) is given by
$$\begin{aligned} \begin{aligned} L(a_1,a_2,\lambda _{11},\lambda _{12},\lambda _{21},\lambda _{22})&=n\log \left( \frac{2a_1\,e^a_1}{e^{2a_1}-1}\right) \\&\quad +\,n\log (\lambda _{11})+\lambda _{11}\sum _{i=1}^{n}x_i+n\log (\lambda _{21})\\&\quad -\,\lambda _{21}\sum _{i=1}^{n}(e^{\lambda _{11}\,x_i}-1)\\&\quad +\,\sum _{i=1}^{n} \log \left( \cosh \left( a_1\left( 1-e^{-\lambda _{21}\left( e^{\lambda _{11}x_i}-1\right) }\right) \right) \right) \\&\quad +\,m\log \left( \frac{2a_2\,e^a_2}{e^{2a_2}-1}\right) \\&\quad +\,m\log (\lambda _{12})+\lambda _{12}\sum _{i=1}^{m}y_i+m\log (\lambda _{22})\\&\quad -\, \lambda _{22}\sum _{i=1}^{m}(e^{\lambda _{12}\,y_i}-1)\\&\quad +\, \sum _{i=1}^{m} \log \left( \cosh \left( a_2\left( 1-e^{-\lambda _{22} \left( e^{\lambda _{12}y_i}-1\right) }\right) \right) \right) \\ \end{aligned} \end{aligned}$$
(11)
It follows that the MLEs of \(a_1,\,a_2,\,\lambda _{11},\,\lambda _{12},\,\lambda _{21}\) and \(\lambda _{22}\), say \({\hat{a}}_1,\,{\hat{a}}_2,\,{\hat{\lambda }}_{11},\,{\hat{\lambda }}_{12},\,{\hat{\lambda }}_{21}\) and \({\hat{\lambda }}_{22}\), are the simultaneous solutions of the following equations:
$$\begin{aligned}&n\frac{2e^{3a_1}(1{-}a_1){-}2e^a_1(1{+}a_1)}{2\,a_1\,e^a_1(e^{2a_1}{-}1)}{+}\sum _{i{=}1}^{n}(1{-}e^{{-}\lambda _{12}(e^{\lambda _{11} x_i}{-}1)}) \tanh \left( a_1\left( 1{-}e^{{-}\lambda _{12}\left( e^{\lambda _{11} x_i}{-}1\right) }\right) \right) {=}0,\\&m\frac{2e^{3a_2}(1{-}a_2){-}2e^a_2(1{+}a_2)}{2\,a_2\,e^a_2(e^{2a_2}{-}1)}{+}\sum _{i{=}1}^{m}(1{-}e^{{-}\lambda _{22}(e^{\lambda _{12} y_i}{-}1)}) \tanh \left( a_2\left( 1{-}e^{{-}\lambda _{22}\left( e^{\lambda _{12} y_i}{-}1\right) }\right) \right) {=}0,\\&\frac{n}{\lambda _{11}}{+}\sum _{i{=}1}^{n}x_i{-}\lambda _{21}\sum _{i{=}1}^{n} x_i\,e^{\lambda _{11} x_i} {+}a_1\lambda _{21}\sum _{i{=}1}^{n}x_i\,e^{\lambda _{11}x_i}e^{{-}\lambda _{21}(e^{\lambda _{11} x_i}{-}1)}\tanh \left( a_1\left( 1{-}e^{{-}\lambda _{21}\left( e^{\lambda _{11} x_i}{-}1\right) }\right) \right) {=}0,\\&\frac{m}{\lambda _{12}}{+}\sum _{i{=}1}^{m}y_i{-}\lambda _{22} \sum _{i{=}1}^{m}y_i\,e^{\lambda _{12} y_i}{+}a_2\lambda _{22}\sum _{i{=}1}^{m}y_i\,e^{\lambda _{12}y_i}e^{{-}\lambda _{22}(e^{\lambda _{12} y_i}{-}1)}\tanh \left( a_2\left( 1{-}e^{{-}\lambda _{22}\left( e^{\lambda _{12} y_i}{-}1\right) }\right) \right) {=}0,\\&\frac{n}{\lambda _{21}}{-}\sum _{i{=}1}^{n}(e^{\lambda _{11} x_i}{-}1){+}a_1\sum _{i{=}1}^{n}(e^{\lambda _{11} x_i}{-}1)\,e^{{-}\lambda _{21}(e^{\lambda _{11} x_i}{-}1)}\tanh \left( a_1\left( 1{-}e^{{-}\lambda _{21}\left( e^{\lambda _{11} x_i}{-}1\right) }\right) \right) {=}0\\&\frac{m}{\lambda _{22}}{-}\sum _{i{=}1}^{m}(e^{\lambda _{12}y_i}{-}1){+}a_2\sum _{i{=}1}^{m}(e^{\lambda _{12} y_i}{-}1)\,e^{{-}\lambda _{22}(e^{\lambda _{12} y_i}{-}1)}\tanh \left( a_2\left( 1{-}e^{{-}\lambda _{22}\left( e^{\lambda _{12} y_i}{-}1\right) }\right) \right) {=}0 \end{aligned}$$
Once we obtain \({\hat{a}}_1,\,{\hat{a}}_2,\,{\hat{\lambda }}_{11},\,{\hat{\lambda }}_{12},\,{\hat{\lambda }}_{21}\) and \({\hat{\lambda }}_{22}\), we compute the MLE of R, say \({\hat{R}}\), by substituting these estimates into (10).
Here, the maximum likelihood approach does not give explicit estimators of the parameters, and hence no explicit form for the MLE of R. In practice, one has to use numerical methods to find the MLEs; such methods are well implemented in R software packages.
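For example, once the two samples are fitted, \({\hat{R}}\) can be approximated by one-dimensional numerical integration of (10). The sketch below assumes the dohcee/pohcee helpers from Sect. 3.1 and hypothetical fitted parameter vectors thetaX and thetaY; the function name R_hat is ours.

```r
# Sketch: plug-in estimate of R = P(X > Y) by numerical integration of (10),
# with thetaX = (a1, lambda11, lambda21) and thetaY = (a2, lambda12, lambda22).
R_hat <- function(thetaX, thetaY) {
  integrand <- function(y)
    (1 - pohcee(y, thetaX[1], thetaX[2], thetaX[3])) * dohcee(y, thetaY[1], thetaY[2], thetaY[3])
  integrate(integrand, lower = 0, upper = Inf)$value
}
# e.g. R_hat(fitX$par, fitY$par) after fitting each sample with optim() as in Sect. 4.1
```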

4.3 Bootstrap Estimation

The parameters of the fitted distribution can be estimated by parametric (resampling from the fitted distribution) or non-parametric (resampling with replacement from the original data set) bootstrap resampling (see [5]). The two bootstrap procedures are described as follows; a short code sketch of both schemes is given after the second procedure.
Parametric bootstrap procedure:
1.
Estimate \(\theta \) (vector of unknown parameters), say \({\hat{\theta }}\) , by using the MLE procedure based on a random sample.
 
2.
Generate a bootstrap sample \(\{X_1^*,\ldots ,X_m^*\}\) using \({\hat{\theta }}\) and obtain the bootstrap estimate of \(\theta \), say \(\widehat{\theta ^*}\), from the bootstrap sample based on the MLE procedure.
 
3.
Repeat Step 2 NBOOT times.
 
4.
Order \(\widehat{\theta ^*}_1,\ldots ,\widehat{\theta ^*}_{NBOOT}\) as \(\widehat{\theta ^*}_{(1)},\ldots ,\widehat{\theta ^*}_{(NBOOT)}\) . Then obtain \(\gamma \)-quantiles and \(100(1-\alpha )\%\) confidence intervals of parameters.
 
For the OHCEE distribution, this yields the parametric bootstrap estimators (PBs) of a, \(\lambda _1\) and \(\lambda _2\), say \({\hat{a}}_{PB},\,{\hat{\lambda }}_{1,PB}\) and \({\hat{\lambda }}_{2,PB}\), respectively.
Nonparametric bootstrap procedure
1.
Generate a bootstrap sample \(\{X_1^*,\ldots ,X_m^*\}\) , with replacement from original data set.
 
2.
Obtain the bootstrap estimate of \(\theta \) with MLE procedure, say \(\widehat{\theta ^*}\), by using the bootstrap sample.
 
3.
Repeat Step 2 NBOOT times.
 
4.
Order \(\widehat{\theta ^*}_1,\ldots ,\widehat{\theta ^*}_{NBOOT}\) as \(\widehat{\theta ^*}_{(1)},\ldots ,\widehat{\theta ^*}_{(NBOOT)}\) . Then obtain \(\gamma \)-quantiles and \(100(1-\alpha )\%\) confidence intervals of parameters.
 
For the OHCEE distribution, this yields the non-parametric bootstrap estimators (NPBs) of a, \(\lambda _1\) and \(\lambda _2\), say \({\hat{a}}_{NPB},\,{\hat{\lambda }}_{1,NPB}\) and \({\hat{\lambda }}_{2,NPB}\), respectively.
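A compact R sketch of both bootstrap schemes, reusing the hypothetical qohcee sampler (Sect. 3.2) and the negloglik/optim fit (Sect. 4.1); the starting values and the NBOOT default are our choices.

```r
# Sketch of the parametric and non-parametric bootstrap for the OHCEE parameters
fit_ohcee <- function(x) optim(c(1, 0.5, 0.5), negloglik, x = x)$par

boot_ohcee <- function(x, NBOOT = 1000, parametric = TRUE) {
  theta_hat <- fit_ohcee(x)
  est <- replicate(NBOOT, {
    xb <- if (parametric) {
      qohcee(runif(length(x)), theta_hat[1], theta_hat[2], theta_hat[3])  # resample from the fit
    } else {
      sample(x, replace = TRUE)                                           # resample the data
    }
    fit_ohcee(xb)
  })
  apply(est, 1, quantile, probs = c(0.025, 0.975))   # percentile confidence intervals
}
# boot_ohcee(data_vector, NBOOT = 10000)                        # parametric
# boot_ohcee(data_vector, NBOOT = 10000, parametric = FALSE)    # non-parametric
```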

4.4 Bayesian Estimation

In this section we consider Bayesian inference for the unknown parameters of the \(OHCEE(a,\lambda _1, \lambda _2)\) distribution. It is assumed that a is known and that the parameters \(\lambda _1\) and \(\lambda _2\) are unknown, with independent gamma prior distributions having probability density functions
$$\begin{aligned} g(\lambda _1)\propto e^{-c\,\lambda _1}\,\lambda _1^{b-1},\quad \lambda _1>0,\;\; b, c>0, \end{aligned}$$
and
$$\begin{aligned} g(\lambda _2)\propto e^{-s\,\lambda _2}\,\lambda _2^{d-1},\quad \lambda _2>0,\;\; s, d>0, \end{aligned}$$
Thus, the joint prior distribution for \(\lambda _1\) and \(\lambda _2\) is
$$\begin{aligned} g(\lambda _1, \lambda _2)\propto \lambda _1^{b-1}\,\lambda _2^{d-1}\,e^{-(c\lambda _1+s\lambda _2)}, \quad \lambda _1>0,\;\; \lambda _2>0,\;\; b,\,c,\, s,\, d>0. \end{aligned}$$
Now, the posterior density function of \(\lambda _1\) and \(\lambda _2\) given the data, denoted by \(\pi (\lambda _1,\lambda _2|{\underline{x}})\), can be obtained as
$$\begin{aligned} \begin{aligned} \pi (\lambda _1,\lambda _2|{\underline{x}})&=\frac{1}{R({\underline{x}})}\lambda _1^{n+b-1}\,\lambda _2^{n+d-1}\,e^{-c\lambda _1}\,e^{-\lambda _2\Big (s+\sum _{i=1}^{n}\Big (e^{\lambda _1 x_i}-1\Big )\Big )}\,e^{\lambda _1\sum _{i=1}^{n}x_i}\\&\quad \times \prod _{i=1}^{n}\cosh \Big (a\Big (1-e^{-\lambda _2(e^{\lambda _1 x_i}-1)}\Big )\Big ) ,\quad \lambda _1>0,\, \lambda _2>0. \end{aligned} \end{aligned}$$
(12)
where
$$\begin{aligned} \begin{aligned} R({\underline{x}})&=\int _{0}^{\infty }\int _{0}^{\infty } \lambda _1^{n+b-1}\,\lambda _2^{n+d-1}\,e^{-c\lambda _1}\,e^{-\lambda _2\Big (s+\sum _{i=1}^{n}\Big (e^{\lambda _1 x_i}-1\Big )\Big )}\,e^{\lambda _1\sum _{i=1}^{n}x_i}\\&\quad \times \,\prod _{i=1}^{n}\cosh \Big (a\Big (1-e^{-\lambda _2(e^{\lambda _1 x_i}-1)}\Big )\Big ) {\mathrm {d}} \lambda _1 {\mathrm {d}} \lambda _2. \end{aligned} \end{aligned}$$
Therefore, the Bayes estimate of any function of \(\lambda _1\) and \(\lambda _2\), say \(\theta (\lambda _1,\lambda _2)\), under the squared error loss function is
$$\begin{aligned} \begin{aligned} {\hat{\theta }}_{Bayes}&=\frac{1}{R({\underline{x}})}\int _{0}^{\infty }\int _{0}^{\infty }\theta (\lambda _1,\lambda _2) \lambda _1^{n+b-1}\,\lambda _2^{n+d-1}\,e^{-c\lambda _1} \,e^{-\lambda _2\Big (s+\sum _{i=1}^{n}\Big (e^{\lambda _1 x_i}-1\Big )\Big )}\,e^{\lambda _1\sum _{i=1}^{n}x_i}\\&\quad \times \,\prod _{i=1}^{n}\cosh \Big (a\Big (1-e^{-\lambda _2(e^{\lambda _1 x_i}-1)}\Big )\Big ) {\mathrm {d}}\lambda _1\,{\mathrm {d}}\lambda _2 \end{aligned} \end{aligned}$$
(13)
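Since (13) has no closed form, one simple (if crude) alternative to the Lindley approximation of the next subsection is a grid approximation of the posterior (12). The sketch below treats a as known; the hyperparameters (b, c0, d, s, where c0 stands for the paper's c), the grid ranges and the function name post_mean are our own illustrative choices.

```r
# Crude grid approximation of the posterior means of lambda1 and lambda2 under (12)
post_mean <- function(x, a, b = 1, c0 = 1, d = 1, s = 1,
                      l1_grid = seq(0.01, 2, length.out = 200),
                      l2_grid = seq(0.01, 10, length.out = 200)) {
  logpost <- outer(l1_grid, l2_grid, Vectorize(function(l1, l2)
    (length(x) + b - 1) * log(l1) + (length(x) + d - 1) * log(l2) - c0 * l1 +
      l1 * sum(x) - l2 * (s + sum(exp(l1 * x) - 1)) +
      sum(log(cosh(a * (1 - exp(-l2 * (exp(l1 * x) - 1))))))))
  w <- exp(logpost - max(logpost)); w <- w / sum(w)   # normalized posterior weights on the grid
  l2_mat <- matrix(l2_grid, nrow(w), ncol(w), byrow = TRUE)
  c(lambda1 = sum(w * l1_grid), lambda2 = sum(w * l2_mat))
}
# e.g. post_mean(data_vector, a = 2.58)
```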

4.4.1 Lindley's Approximation

It is not possible to compute the Bayes estimate (13) analytically. Lindley's approximation is used to evaluate ratios of integrals of this form (see the “Appendix”). Based on Lindley's approximation, the approximate Bayes estimates of \(\lambda _1\) and \(\lambda _2\) under the squared error loss function are
$$\begin{aligned} \begin{aligned} \hat{\lambda _1}_{Lindley}&=\hat{\lambda _1}+\frac{1}{2}\Big [ 2\rho _{\lambda _1}\,\sigma _{\lambda _1\lambda _1}+2\rho _{\lambda _2} \,\sigma _{\lambda _1\lambda _2}+\sigma _{\lambda _1\lambda _1} (L_{\lambda _1\lambda _1\lambda _1}\sigma _{\lambda _1\lambda _1}\\&\quad +\, L_{\lambda _1\lambda _2\lambda _1}\sigma _{\lambda _1\lambda _2}+L_{\lambda _2\lambda _1\lambda _1}\sigma _{\lambda _2\lambda _1}+L_{\lambda _2\lambda _2\lambda _1}\sigma _{\lambda _2\lambda _2})\\&\quad +\,\sigma _{\lambda _2\lambda _1}(L_{\lambda _1\lambda _1\lambda _2}\sigma _{\lambda _1\lambda _1}+L_{\lambda _1\lambda _2\lambda _2}\sigma _{\lambda _1\lambda _2}+L_{\lambda _2\lambda _1\lambda _2}\sigma _{\lambda _2\lambda _1}+L_{\lambda _2\lambda _2\lambda _2}\sigma _{\lambda _2\lambda _2})\Big ], \end{aligned} \end{aligned}$$
(14)
and
$$\begin{aligned} \begin{aligned} \hat{\lambda _2}_{Lindley}&=\hat{\lambda _2}+\frac{1}{2}\Big [ 2\rho _{\lambda _1}\,\sigma _{\lambda _2\lambda _1}+2\rho _{\lambda _2} \, \sigma _{\lambda _2\lambda _2}+\sigma _{\lambda _1\lambda _2} (L_{\lambda _1\lambda _1\lambda _1}\sigma _{\lambda _1\lambda _1}\\&\quad +\,L_{\lambda _1\lambda _2\lambda _1}\sigma _{\lambda _1\lambda _2}+L_{\lambda _2\lambda _1\lambda _1}\sigma _{\lambda _2\lambda _1}+L_{\lambda _2\lambda _2\lambda _1}\sigma _{\lambda _2\lambda _2})\\&\quad +\,\sigma _{\lambda _2\lambda _2}(L_{\lambda _1\lambda _1\lambda _2}\sigma _{\lambda _1\lambda _1}+L_{\lambda _1\lambda _2\lambda _2}\sigma _{\lambda _1\lambda _2}+L_{\lambda _2\lambda _1\lambda _2}\sigma _{\lambda _2\lambda _1}+L_{\lambda _2\lambda _2\lambda _2}\sigma _{\lambda _2\lambda _2})\Big ], \end{aligned} \end{aligned}$$
(15)
respectively. Here \(\hat{\lambda _1}\) and \(\hat{\lambda _2}\) are the MLEs of \(\lambda _1\) and \(\lambda _2\), respectively. The details involved in the derivation of Eqs. (14) and (15) are given in the “Appendix”. Five well-known loss functions, the associated Bayes estimators and the corresponding posterior risks are presented in Table 3.
Table 3
Bayes estimator and posterior risk under different loss functions

Loss function                                                                     Bayes estimator                                     Posterior risk
\(L_1=SELF=(\theta -d)^2\)                                                        \(E(\theta |x)\)                                    \(Var(\theta |x)\)
\(L_2=WSELF=\frac{(\theta -d)^2}{\theta }\)                                       \((E(\theta ^{-1}|x))^{-1}\)                        \(E(\theta |x)-(E(\theta ^{-1}|x))^{-1}\)
\(L_3=MSELF=\left( 1-\frac{d}{\theta }\right) ^2\)                                \(\frac{E(\theta ^{-1}|x)}{E(\theta ^{-2}|x)}\)     \(1-\frac{E(\theta ^{-1}|x)^2}{E(\theta ^{-2}|x)}\)
\(L_4=PLF=\frac{(\theta -d)^2}{d}\)                                               \(\sqrt{E(\theta ^2|x)}\)                           \(2\left( \sqrt{E(\theta ^2|x)}-E(\theta |x)\right) \)
\(L_5=KLF=\left( \sqrt{\frac{d}{\theta }}-\sqrt{\frac{\theta }{d}}\right) ^2\)    \(\sqrt{\frac{E(\theta |x)}{E(\theta ^{-1}|x)}}\)   \(2\left( \sqrt{E(\theta |x)E(\theta ^{-1}|x)}-1\right) \)

5 Algorithm and a Simulation Study

In this section, we give an algorithm for generating random data \(x_1,\dots ,x_n\) from the OHCEE distribution, and then carry out a simulation study to evaluate the performance of the MLEs.

5.1 Algorithm

Here, we present an algorithm for generating random data \(x_1,\dots ,x_n\) from the OHCEE distribution. The algorithm is based on inverting the CDF of the OHCEE distribution.
Algorithm
1.
Generate \(U_i\sim Uniform(0,1);\, i=1,\dots ,n\),
 
2.
set
$$\begin{aligned} X_i=\frac{1}{\lambda _1}\log \left( 1-\frac{1}{\lambda _2}\log \left( 1-\frac{1}{a}\log \left( \frac{e^{2a}-1}{2e^a}U_i+ \sqrt{\left( \frac{e^{2a}-1}{2e^a}U_i\right) ^2+1}\right) \right) \right) ;\quad i=1,\dots ,n. \end{aligned}$$
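In R, the two steps can be written as follows (a sketch; rohcee is our name and it reuses the qohcee quantile function sketched in Sect. 3.2):

```r
# Sketch of the algorithm: Step 1 draws U_i ~ Uniform(0,1); Step 2 applies the inverse CDF
rohcee <- function(n, a, l1, l2) qohcee(runif(n), a, l1, l2)

set.seed(1)
x <- rohcee(200, a = 0.3, l1 = 0.5, l2 = 1.5)
# Repeating this N = 2000 times and refitting with optim() as in Sect. 4.1 gives empirical
# biases and MSEs in the layout of Table 4 (the table itself reports the authors' simulations).
```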

5.2 Monte Carlo Simulation Study

In this section, we assess the performance of the MLEs of the parameters with respect to the sample size n for the \(OHCEE(a,\lambda _1,\lambda _2)\) distribution. We use the above algorithm to generate data from the OHCEE distribution, and the assessment is based on a Monte Carlo simulation study. Let \({\hat{a}}, \hat{\lambda _1}\) and \(\hat{\lambda _2}\) be the MLEs of the parameters a, \(\lambda _1\) and \(\lambda _2\), respectively. We compute the mean square error (MSE) and bias of the MLEs based on \(\hbox {N}=2000\) independent replications. The results are summarized in Table 4 for different values of n, a, \(\lambda _1\) and \(\lambda _2\). Table 4 shows that the MSE and bias of the MLEs decrease as the sample size n increases; hence, the MLEs of a, \(\lambda _1\) and \(\lambda _2\) appear to be consistent estimators.
Table 4
MSEs and average biases (values in parentheses) of the simulated estimates

True values: \(a=0.3,\ \lambda _1=0.5,\ \lambda _2=1.5\)
n      \({\hat{a}}\)          \(\hat{\lambda }_1\)      \(\hat{\lambda }_2\)
30     0.8189 (0.1738)     0.0865 (0.0544)     132.3321 (2.3249)
50     0.7216 (0.1095)     0.0519 (0.0187)     40.7226 (1.3271)
100    0.6768 (0.1396)     0.0275 (-0.0182)    15.1627 (0.9064)
200    0.5902 (0.1135)     0.0172 (-0.0303)    4.4592 (0.6561)

True values: \(a=0.5,\ \lambda _1=1,\ \lambda _2=2\)
n      \({\hat{a}}\)          \(\hat{\lambda }_1\)      \(\hat{\lambda }_2\)
30     1.0717 (0.1182)     0.4595 (0.1103)     353.7897 (4.8048)
50     0.8683 (0.0302)     0.2869 (0.0377)     165.9069 (3.2168)
100    0.7678 (0.0574)     0.1643 (-0.0512)    65.0995 (2.2592)
200    0.6099 (0.0405)     0.1014 (-0.0595)    17.2459 (1.2489)

True values: \(a=0.8,\ \lambda _1=0.5,\ \lambda _2=1.5\)
n      \({\hat{a}}\)          \(\hat{\lambda }_1\)      \(\hat{\lambda }_2\)
30     1.2207 (-0.0169)    0.0873 (0.0515)     106.8476 (2.3648)
50     1.0508 (-0.0300)    0.0538 (0.0134)     54.7730 (1.7901)
100    0.8062 (-0.0321)    0.0304 (-0.0151)    17.3773 (1.1130)
200    0.6423 (-0.0515)    0.0196 (-0.0180)    4.9051 (0.6426)

True values: \(a=1,\ \lambda _1=1.5,\ \lambda _2=2\)
n      \({\hat{a}}\)          \(\hat{\lambda }_1\)      \(\hat{\lambda }_2\)
30     1.4442 (-0.0288)    1.0211 (0.1839)     297.3460 (4.7064)
50     1.1301 (-0.0819)    0.7082 (0.0773)     163.7686 (3.5522)
100    0.8579 (-0.0866)    0.4011 (-0.0087)    64.1250 (2.0630)
200    0.6941 (-0.1346)    0.2789 (-0.0210)    26.7817 (1.2582)

True values: \(a=2,\ \lambda _1=2,\ \lambda _2=1.5\)
n      \({\hat{a}}\)          \(\hat{\lambda }_1\)      \(\hat{\lambda }_2\)
30     2.3294 (-0.1836)    2.1399 (0.5090)     223.8817 (3.3442)
50     1.6476 (-0.2089)    1.4322 (0.3452)     57.4308 (1.7158)
100    1.1889 (-0.2213)    0.8673 (0.2137)     12.2516 (0.6404)
200    0.9679 (-0.2788)    0.6181 (0.2014)     3.7672 (0.2618)

6 Practical Data Applications

In this section, we present an application of the OHCEE model to a practical data set to illustrate its flexibility relative to a set of competing models (Table 6).
The windshield on a large aircraft is a complex piece of equipment, comprised basically of several layers of material, including a very strong outer skin with a heated layer just beneath it, all laminated under high temperature and pressure. Failures of these items are not structural failures. Instead, they typically involve damage or delamination of the nonstructural outer ply or failure of the heating system. These failures do not result in damage to the aircraft but do result in replacement of the windshield. We consider the data on service times for a particular model windshield given in Table 16.11 of Murthy et al. [8]. These data were recently studied by Ramos et al. [9]. These data are:
0.046 1.436 2.592 0.140 1.492 2.600 0.150 1.580 2.670 0.248 1.719 2.717 0.280 1.794 2.819 0.313 1.915 2.820 0.389 1.920 2.878 0.487 1.963 2.950 0.622 1.978 3.003 0.900 2.053 3.102 0.952 2.065 3.304 0.996 2.117 3.483 1.003 2.137 3.500 1.010 2.141 3.622 1.085 2.163 3.665 1.092 2.183 3.695 1.152 2.240 4.015 1.183 2.341 4.628 1.244 2.435 4.806 1.249 2.464 4.881 1.262 2.543 5.140.
Graphical measure: The total time on test (TTT) plot due to Aarset [3] is an important graphical approach to verify whether a specific distribution is suitable for the data. According to Aarset [3], the empirical version of the TTT plot is obtained by plotting \(T(r/n)=[\sum _{i=1}^{r}y_{i:n}+(n-r)y_{r:n}]/\sum _{i=1}^{n}y_{i:n}\) against r/n, where \(r=1,\dots ,n\) and \(y_{i:n}\,(i=1,\dots ,n)\) are the order statistics of the sample. Aarset [3] showed that the hazard function is constant if the TTT plot is a straight diagonal line, and increasing (or decreasing) if the TTT plot is concave (or convex). The hazard function is U-shaped if the TTT plot is first convex and then concave; otherwise, the hazard function is unimodal. The TTT plot for the data set is presented in Fig. 6. This plot indicates that the empirical hazard rate function of the data set is increasing; therefore, the OHCEE distribution is appropriate for fitting this data set.
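A minimal R sketch of the empirical TTT computation just described, assuming the service times are stored in a vector x (the function name ttt_plot is ours):

```r
# Sketch: empirical TTT transform; a concave plot points to an increasing hazard rate
ttt_plot <- function(x) {
  y <- sort(x); n <- length(y); r <- 1:n
  Tr <- (cumsum(y) + (n - r) * y) / sum(y)   # [sum_{i<=r} y_(i) + (n - r) y_(r)] / sum_i y_(i)
  plot(r / n, Tr, type = "l", xlab = "r/n", ylab = "T(r/n)")
  abline(0, 1, lty = 2)                      # diagonal reference line (constant hazard)
}
```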
We also apply the parametric and non-parametric bootstrap methods of Sect. 4.3 to this data set. Table 5 reports the resulting point and interval estimates, based on 10,000 bootstrap replicates.
Table 5
Bootstrap point and interval estimation of the parameters a, \(\lambda _1\) and \(\lambda _2\)

                   Parametric bootstrap                Non-parametric bootstrap
                   Point estimate   CI                 Point estimate   CI
a                  2.735            (0.464, 4.430)     2.641            (0.672, 4.063)
\(\lambda _1\)     0.501            (0.248, 0.967)     0.497            (0.324, 0.847)
\(\lambda _2\)     0.910            (0.136, 3.224)     0.927            (0.188, 2.080)
We fit the OHCEE distribution to the data set and compare it with the HCE, gamma, generalized exponential and Weibull densities. Table 6 shows the MLEs of the parameters, the log-likelihood, the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the Cramér–von Mises (\(W^*\)) and Anderson–Darling (\(A^*\)) statistics, and the Kolmogorov–Smirnov (K–S) statistic with its p-value (P) for the data set. The OHCEE distribution provides the best fit for the data set, showing the lowest AIC, \(A^*\) and \(W^*\) among the considered models. The relative histogram and the fitted OHCEE, HCE, gamma, generalized exponential and Weibull PDFs for the data are plotted in Fig. 7. The plots of the empirical and fitted survival functions, P–P plots and Q–Q plots for the OHCEE and the other fitted distributions are displayed in Figs. 7 and 8, respectively. These plots also support the results in Table 6. We compare the OHCEE model with the following set of competing models:
(i)
Hyperbolic Cosine-Exponential distribution (HCE) [6]. The two-parameter HCE density function is given by
$$\begin{aligned} f(x;a,\lambda )=\frac{2a\,e^a}{e^{2a}-1}\lambda \,e^{-\lambda x} \cosh (a(1-e^{-\lambda x})); \quad x>0 \end{aligned}$$
where \(a>0\) and \(\lambda >0\).
 
(ii)
The two-parameter Weibull distribution is given by
$$\begin{aligned} f(x;\alpha ,\beta )=\frac{\alpha }{\beta }\,\left( \frac{x}{\beta }\right) ^{\alpha -1}\,e^{-\left( \frac{x}{\beta }\right) ^{\alpha }};\quad x>0 \end{aligned}$$
where \(\alpha >0\) and \(\beta >0.\)
 
(iii)
The two-parameter Gamma distribution is given by
$$\begin{aligned} f(x;\alpha ,\theta )=\frac{1}{\theta ^{\alpha }\,\Gamma (\alpha )} \,x^{\alpha -1}\,e^{-(x/\theta )};\quad x>0 \end{aligned}$$
where \(\alpha >0\) and \(\theta >0\) and \(\Gamma (\alpha )=\int _{0}^{\infty }t^{\alpha -1}\,e^{-t}{\mathrm {d}}t\).
 
(iv)
The two-parameter generalized exponential (GE) distribution is given by
$$\begin{aligned} f(x;\alpha ,\lambda )=\alpha \lambda \, e^{-\lambda x}(1-e^{-\lambda x})^{\alpha -1}; \quad x>0 \end{aligned}$$
 
where \(\alpha >0\) and \(\lambda >0.\)
As mentioned in the inference section, there are no closed-form expressions for the MLEs of the parameters \(a,\,\lambda _1\) and \(\lambda _2\), so we use numerical methods to obtain them. To evaluate the results of the MLE estimation (Table 6), we provide profile-likelihood plots of the OHCEE distribution for each parameter in Fig. 9.
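The information criteria and the Kolmogorov–Smirnov test reported in Table 6 can be computed along the following lines, reusing the hypothetical negloglik/pohcee functions and the optim fit from the earlier sketches; the Anderson–Darling and Cramér–von Mises statistics would additionally require, e.g., the goftest package.

```r
# Sketch: log-likelihood, AIC, BIC and K-S test for the fitted OHCEE model
ll <- -negloglik(fit$par, x = data_vector)
k  <- length(fit$par); n <- length(data_vector)
c(logLik = ll, AIC = 2 * k - 2 * ll, BIC = k * log(n) - 2 * ll)
ks.test(data_vector, function(q) pohcee(q, fit$par[1], fit$par[2], fit$par[3]))
```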
Table 6
Parameter estimates (standard errors), log-likelihood values and goodness-of-fit measures

Model     MLEs of parameters (s.e.)                                                                            Log-lik.   AIC      BIC      \(A^*\)   \(W^*\)   K–S    P
OHCEE     \({\hat{a}}=2.58\,(0.96)\), \({\hat{\lambda }}_1=0.24\,(0.14)\), \({\hat{\lambda }}_2=2.08\,(2.02)\)   -97.91     201.83   208.26   0.30      0.04      0.06   0.95
HCE       \({\hat{a}}=3.69\,(0.67)\), \({\hat{\lambda }}=0.89\,(0.09)\)                                          -99.81     203.63   207.92   0.45      0.07      0.10   0.51
Weibull   \({\hat{\alpha }}=1.62\,(0.16)\), \({\hat{\beta }}=2.30\,(0.18)\)                                      -100.31    204.63   208.92   0.64      0.09      0.10   0.41
Gamma     \({\hat{\alpha }}=1.9\,(0.31)\), \({\hat{\theta }}=0.91\,(0.17)\)                                      -102.83    209.66   213.95   1.16      0.2       0.58   0.84
GE        \({\hat{\alpha }}=1.89\,(0.34)\), \({\hat{\lambda }}=0.69\,(0.09)\)                                    -103.54    211.09   215.37   1.31      0.23      0.14   0.13
The corresponding Bayesian point estimates and posterior risks are provided in Table 7.
Table 7
Bayesian estimates and their posterior risks of the parameters under different loss functions, based on the real data (service times)

                   \(\widehat{\lambda _1}\)                \(\widehat{\lambda _2}\)
Loss function      Estimate     Risk                       Estimate     Risk
SELF               0.242722     0.00022                    2.140396     0.04963
WSELF              0.243575     0.00085                    2.165938     0.02554
MSELF              0.244380     0.00330                    2.195038     0.01343
PLF                0.242265     0.00091                    2.128769     0.02325
KLF                0.243148     0.00350                    2.153129     0.01182

7 Conclusion

In this article, a new model for lifetime distributions is introduced and its main properties are discussed. A special submodel of this family is obtained by taking exponential distributions for both the parent distribution F and the parent distribution G. We also show that the proposed distribution accommodates a variety of hazard rate shapes, such as increasing and upside-down bathtub shapes. Numerical results of the maximum likelihood, Bayesian and bootstrap procedures for a real data set are presented in separate tables. From a practical point of view, we show that the proposed distribution is more flexible than some common statistical distributions.
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix

Lindley developed an asymptotic expansion for evaluating the ratio of integrals of the form
$$\begin{aligned} u^*=E[u(\alpha ,\beta )|x]=\frac{\int \int u(\alpha ,\beta )exp[l(\alpha ,\beta |x)+\rho (\alpha ,\beta )]}{\int \int exp[l(\alpha ,\beta |x)+\rho (\alpha ,\beta )]} {\mathrm {d}}\alpha {\mathrm {d}}\beta , \end{aligned}$$
where \(u(\alpha ,\beta )\) is a function of \(\alpha \) and \(\beta \) only, and \(l(\alpha ,\beta |x)\) is the log-likelihood function and \(\rho (\alpha ,\beta )=\log [\pi (\alpha ,\beta )]\).
Utilizing Lindley's method, \(u^*\) can be approximated as
$$\begin{aligned} \begin{aligned} u^*=&u+\frac{1}{2}\Big [(u_{\alpha \alpha }+2u_\alpha \rho _\alpha )\sigma _{\alpha \alpha }+(u_{\alpha \beta }+2u_\alpha \rho _\beta ) \sigma _{\alpha \beta }\\&\qquad \qquad +(u_{\beta \alpha }+2u_\beta \rho _\alpha )\sigma _{\beta \alpha }+(u_{\beta \beta }+2u_\beta \rho _\beta )\sigma _{\beta \beta }\\&\qquad \qquad +(u_\alpha \sigma _{\alpha \alpha } + u_\beta \sigma _{\alpha \beta })(L_{\alpha \alpha \alpha }\sigma _{\alpha \alpha } + L_{\alpha \beta \alpha }\sigma _{\alpha \beta } + L_{\beta \alpha \alpha }\sigma _{\beta \alpha } + L_{\beta \beta \alpha }\sigma _{\beta \beta })\\&\qquad \qquad +(u_\alpha \sigma _{\beta \alpha } + u_\beta \sigma _{\beta \beta })(L_{\alpha \alpha \beta }\sigma _{\alpha \alpha } + L_{\alpha \beta \beta }\sigma _{\alpha \beta } + L_{\beta \alpha \beta }\sigma _{\beta \alpha } + L_{\beta \beta \beta }\sigma _{\beta \beta })\Big ] \end{aligned} \end{aligned}$$
(16)
All functions on the right-hand side of Eq. (16) are to be evaluated at the MLEs \(({\hat{\alpha }},{\hat{\beta }})\).
For our setup, the joint prior distribution is \(\pi (\lambda _1,\lambda _2)=\lambda _1^{b-1}\,\lambda _2^{d-1}\,e^{-(c\lambda _1+s\lambda _2)}\) and
$$\begin{aligned} \rho (\lambda _1,\lambda _2)=\log [\pi (\lambda _1,\lambda _2)]=(b-1) \log \lambda _1+(d-1) \log \lambda _2 -(c\lambda _1+s\lambda _2) \end{aligned}$$
which give
$$\begin{aligned} \rho _{\lambda _1}= & {} \frac{d\rho }{d\lambda _1}=\frac{b-1}{\lambda _1}-c, \qquad \rho _{\lambda _2}=\frac{d\rho }{d\lambda _2}=\frac{ d-1}{\lambda _2}-s, \\&u_\alpha =\frac{du}{d\alpha }, \qquad u_\beta =\frac{du}{d\beta }\qquad u_{\alpha \beta }=\frac{d^2u}{d\alpha d\beta },\qquad u_{\beta \alpha }=\frac{d^2u}{d\beta d\alpha },\qquad \\&u_{\alpha \alpha }=\frac{d^2u}{d\alpha d\alpha },\qquad u_{\beta \beta }=\frac{d^2u}{d\beta d\beta } \end{aligned}$$
When
$$\begin{aligned} u= & {} \lambda _1,\quad u_{\lambda _1}=1,\quad u_{\lambda _1\lambda _1}=0=u_{\lambda _2}=u_{\lambda _1\lambda _2}=u_{\lambda _2\lambda _1}=u_{\lambda _2\lambda _2},\\&\hat{\lambda _1}_{Lindley}=\hat{\lambda _1}+\frac{1}{2}\Big [ 2\rho _{\lambda _1}\,\sigma _{\lambda _1\lambda _1}+2\rho _{\lambda _2} \,\sigma _{\lambda _1\lambda _2}+\sigma _{\lambda _1\lambda _1} (L_{\lambda _1\lambda _1\lambda _1}\sigma _{\lambda _1\lambda _1}\\&+\,L_{\lambda _1\lambda _2\lambda _1}\sigma _{\lambda _1\lambda _2}+L_{\lambda _2\lambda _1\lambda _1}\sigma _{\lambda _2\lambda _1}+L_{\lambda _2\lambda _2\lambda _1}\sigma _{\lambda _2\lambda _1})\\&+\,\sigma _{\lambda _2\lambda _1}(L_{\lambda _1\lambda _1\lambda _2} \sigma _{\lambda _1\lambda _1}+L_{\lambda _1\lambda _1\lambda _2} \sigma _{\lambda _1\lambda _2}\\&+\,L_{\lambda _2\lambda _1\lambda _2} \sigma _{\lambda _2\lambda _1}+L_{\lambda _2\lambda _2\lambda _2} \sigma _{\lambda _2\lambda _2}\Big ], \end{aligned}$$
and when
$$\begin{aligned} u= & {} {\lambda _2},\quad u_{\lambda _2}=1,\quad u_{\lambda _2\lambda _2}=0=u_{\lambda _1}=u_{\lambda _1\lambda _1}=u_{\lambda _2\lambda _1}=u_{\lambda _1\lambda _1},\\&\hat{\lambda _2}_{Lindley}=\hat{\lambda _2}+\frac{1}{2}\Big [ 2\rho _{\lambda _1}\,\sigma _{\lambda _2\lambda _1}+2\rho _{\lambda _2} \,\sigma _{\lambda _2\lambda _2}+\sigma _{\lambda _1\lambda _2} (L_{\lambda _1\lambda _1\lambda _1}\sigma _{\lambda _1\lambda _1}\\&+\,L_{\lambda _1\lambda _2\lambda _1}\sigma _{\lambda _1\lambda _2}+L_{\lambda _2\lambda _1\lambda _1}\sigma _{\lambda _2\lambda _1}+L_{\lambda _2\lambda _2\lambda _1}\sigma _{\lambda _2\lambda _2})\\&+\,\sigma _{\lambda _2\lambda _2}(L_{\lambda _1\lambda _1\lambda _2} \sigma _{\lambda _1\lambda _1}+L_{\lambda _1\lambda _2\lambda _2} \sigma _{\lambda _1\lambda _2}\\&+\,L_{\lambda _2\lambda _1\lambda _2} \sigma _{\lambda _2\lambda _1}+L_{\lambda _2\lambda _2\lambda _2} \sigma _{\lambda _2\lambda _2}\Big ], \end{aligned}$$
In the above expressions, \(\sigma _{ij}\) is the (i, j)th element of the inverse of the negative Hessian matrix, \(i,j=\alpha ,\beta \), and \(L_{ijk}\) denotes the term obtained by differentiating \(\log L\) with respect to i, j and k.
Literature
1. Alizadeh M, Altun E, Cordeiro GM, Rasekhi M (2018) The odd power Cauchy family of distributions: properties, regression models and applications. J Stat Comput Simul 88(4):785–807
2. Alzaatreh A, Lee C, Famoye F (2013) A new method for generating families of continuous distributions. Metron 71(1):63–79
3. Aarset MV (1987) How to identify a bathtub hazard rate. IEEE Trans Reliab 36(1):106–108
4. Cordeiro GM, Alizadeh M, Ozel G, Hosseini B, Ortega EMM, Altun E (2017) The generalized odd log-logistic family of distributions: properties, regression models and applications. J Stat Comput Simul 87(5):908–932
5. Efron B, Tibshirani RJ (1994) An introduction to the bootstrap. CRC Press, Boca Raton
6. Kharazmi O, Saadatinik A (2016) Hyperbolic cosine-F families of distributions with an application to exponential distribution. Gazi Univ J Sci 29(4):811–829
7. Kharazmi O, Saadatinik A, Alizadeh M, Hamedani GG (2018) Odd hyperbolic cosine-FG (OHC-FG) family of lifetime distributions. J Stat Theory Appl (JSTA) (accepted)
8. Murthy DP, Xie M, Jiang R (2004) Weibull models, vol 505. Wiley, Hoboken
9. Ramos MW, Marinho PR, Silva RV, Cordeiro GM (2013) The exponentiated Lomax Poisson distribution with an application to lifetime data. Adv Appl Stat 34(2):107–135