
On optimal designs for censored data


A Publisher's Erratum to this article was published on 09 August 2014

Abstract

In time-to-event experiments the individuals under study are observed to experience some event of interest. If this event is not observed until the end of the experiment, censoring occurs, which is a common feature in such studies. We consider the proportional hazards model with type I and random censoring and determine locally \(D\)- and \(c\)-optimal designs for a larger class of nonlinear models with two parameters, where the experimental conditions can be selected from a finite discrete design region, as is often the case in practice. Additionally, we compute \(D\)-optimal designs for a three-parameter model on a continuous design region.


References

  • Atkinson AC, Donev AN (1996) Optimum experimental designs. Clarendon Press, Oxford

  • Atwood CL (1969) Optimal and efficient designs of experiments. Ann Math Stat 40:1570–1602

  • Chernoff H (1953) Locally optimal designs for estimating parameters. Ann Math Stat 24:586–602

  • Cox DR, Oakes D (1984) Analysis of survival data. Chapman and Hall, London

  • Dette H (1997) Designing experiments with respect to ‘standardized’ optimality criteria. J R Stat Soc Ser B 59:97–110

  • Duchateau L, Janssen P (2008) The frailty model. Springer, New York

  • Fedorov VV (1972) Theory of optimal experiments. Academic Press, New York

  • Konstantinou M, Biedermann S, Kimber A (2014) Optimal designs for two-parameter nonlinear models with application to survival models. Stat Sin 24:415–428

  • Müller C (1995) Maximin efficient designs for estimating nonlinear aspects in linear models. J Stat Plan Inference 44:117–132

  • Pukelsheim F (1993) Optimal design of experiments. Wiley, New York

  • Pukelsheim F, Torsney B (1991) Optimal weights for experimental designs on linearly independent support points. Ann Stat 19(3):1614–1625

  • Rodríguez-Torreblanca C, Rodríguez-Díaz JM (2007) Locally D- and c-optimal designs for Poisson and negative binomial regression models. Metrika 66:161–172

  • Silvey SD (1980) Optimal design. Chapman and Hall, London


Acknowledgments

We would like to thank the referees for their constructive and helpful comments.

Author information

Correspondence to Dennis Schmidt.

Appendix

Proof of Theorem 3.4

(a) The first part of the proof is similar to that of Konstantinou et al. (2014) for the interval design region. An optimal design \(\xi ^*\) with two support points \(x_1^*<x_2^*\) has equal weights. The determinant of the information matrix is given by:

$$\begin{aligned} \det \left( \mathbf{M}(\xi ^*,{\varvec{\beta }})\right) = \frac{1}{4} Q(\beta _0 + \beta _1 x_1^*)Q(\beta _0 + \beta _1 x_2^*) (x_2^*-x_1^*)^2. \end{aligned}$$

From assumption (A2) it follows that the function \(Q\) is strictly increasing. Since \(\beta _1 > 0\), the determinant is increasing with \(x_2^*\), and it is maximised for \(x_2^*=x_k\). To obtain the second support point, we have to maximise the function \(r(x):=Q(\beta _0 + \beta _1 x)(x_k-x)^2\) over \(x\in \fancyscript{X}\). We consider \(r(x)\) as a function of real \(x\) and show that it has only one extremum in \((-\infty ,x_k)\), which is a maximum. The first derivative is

$$\begin{aligned} r'(x) = \beta _1 Q'(\beta _0 + \beta _1 x) (x_k-x)^2 - 2 Q(\beta _0 + \beta _1 x) (x_k-x) \end{aligned}$$

with a zero located at \(x=x_k\). Since the function \(Q\) is positive, so is the function \(r(x)\) for all \(x \ne x_k\) and hence there is a minimum at \(x=x_k\). We may get further zeros of \(r'\) by solving the equation \(\phi (x)=x_k\). The solution \(\tilde{x}=\phi ^{-1}(x_k)\) is unique by Lemma 3.1. Since \(\phi \) is strictly increasing and \(\phi (x_k)>x_k\), the second zero \(\tilde{x}\) is located in the interval \((-\infty ,x_k)\). It is a maximum of \(r\), because the derivative changes sign at this point.

It follows that the function \(r(x)\) is strictly decreasing on either side of the maximum as one moves away from \(\tilde{x}\). Hence, on the discrete design region, \(r\) is maximised either at \(x^-\) or at \(x^+\), or possibly at both points. Figure 1 illustrates the situation.

Since \(x_k>\tilde{x}\), we have \(\left\{ x_i\in \fancyscript{X}:x_i \ge \tilde{x}\right\} \ne \varnothing \). If \(\tilde{x} \le x_1\), the maximum over the design region is attained at \(x^+ = x_1\), and in this case \(x_1\) is the second support point. If \(x_1<\tilde{x}\), then \(x^-\) and \(x^+\) are two adjacent design points \(x_i\) and \(x_{i+1}\), say, and \(x_1^*\) has to be chosen as whichever of these two points yields the larger value of \(r\).

(b) The case \(\beta _1<0\) can be treated in a similar way or the proof may be obtained by symmetry considerations. \(\square \)

Fig. 1 Sketch of a typical graph for the function \(r(x)=Q(\beta _0 + \beta _1 x)(x_k-x)^2\)
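For a finite design region this search is straightforward to carry out numerically. The following minimal sketch (in Python) fixes \(x_k\) as one support point and maximises \(r\) over the remaining design points; the intensity function \(Q\) is a placeholder for the model-specific function satisfying (A1)–(A4), and the exponential stand-in in the example call is purely illustrative.

import numpy as np

def d_optimal_two_point(Q, beta0, beta1, X):
    # Sketch of the search in Theorem 3.4(a) for beta1 > 0 on a finite
    # design region X; Q is a placeholder for the model's intensity function.
    X = np.sort(np.asarray(X, dtype=float))
    x_k = X[-1]                                  # the largest design point is a support point
    r = Q(beta0 + beta1 * X) * (x_k - X) ** 2    # r(x) = Q(beta0 + beta1*x) * (x_k - x)^2
    x1 = X[np.argmax(r[:-1])]                    # maximise r over the remaining design points
    return {x1: 0.5, x_k: 0.5}                   # equal weights 1/2

# illustrative call with the stand-in intensity Q(theta) = exp(theta)
print(d_optimal_two_point(np.exp, beta0=0.0, beta1=1.0, X=[-3, -2, -1, 0, 1, 2]))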

Proof of Theorem 3.5

The optimality conditions for the two-point designs result from Theorem 3.3. If none of these designs is \(D\)-optimal, then a \(D\)-optimal design must have three support points. The determinant of the information matrix for such a design is given by:

$$\begin{aligned} \det \left( \mathbf{M}(\xi ,{\varvec{\beta }})\right) = \omega _1 \omega _2 d_{12} + \omega _1 \omega _3 d_{13} + \omega _2 \omega _3 d_{23}. \end{aligned}$$

Substituting \(\omega _3=1-\omega _1-\omega _2\) and setting the partial derivatives with respect to \(\omega _1\) and \(\omega _2\) equal to zero leads to the following system of linear equations:

$$\begin{aligned} \begin{pmatrix} -2d_{13} &{} d_{12}-d_{13}-d_{23} \\ d_{12}-d_{13}-d_{23} &{} -2d_{23} \end{pmatrix} \begin{pmatrix} \omega _1^* \\ \omega _2^* \end{pmatrix} = \begin{pmatrix} -d_{13} \\ -d_{23} \end{pmatrix}. \end{aligned}$$

Solving this system yields the solution given in (3.4).

We denote the matrix on the left-hand side by \(\mathbf{A}\) and show that its determinant, which is the denominator of the optimal weights, is not equal to zero. The determinant of \(\mathbf{A}\) is given by:

$$\begin{aligned} \det (\mathbf{A})= d_{12}(d_{23}+d_{13}-d_{12}) + d_{23}(d_{13}+d_{12}-d_{23}) + d_{13}(d_{23}+d_{12}-d_{13}). \end{aligned}$$

Since no design with two support points is \(D\)-optimal, the optimality conditions for the two-point designs in Theorem 3.5 are not satisfied. Hence the three expressions in parentheses are positive and so is the determinant.

Since the optimal weights are uniquely determined by this system, the design \(\xi ^*\) is the unique \(D\)-optimal design. \(\square \)
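The weight computation in this proof amounts to solving a \(2\times 2\) linear system. A minimal numerical sketch (in Python), assuming \(d_{ij}=Q(\theta _i)Q(\theta _j)(x_j-x_i)^2\) as in the determinant formulas above:

import numpy as np

def optimal_three_point_weights(d12, d13, d23):
    # Solve the 2x2 system from the proof for (w1*, w2*); w3* = 1 - w1* - w2*.
    # det(A) is nonzero as long as no two-point design is D-optimal.
    A = np.array([[-2.0 * d13, d12 - d13 - d23],
                  [d12 - d13 - d23, -2.0 * d23]])
    b = np.array([-d13, -d23])
    w1, w2 = np.linalg.solve(A, b)
    return w1, w2, 1.0 - w1 - w2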

Proof of Theorem 3.6

(a) We first prove that a \(D\)-optimal design must contain the support point \(x_k\). Let \(\xi \) be a design with support points \(x_i,x_j,x_l \in \fancyscript{X},\, x_i<x_j<x_l\), and weights \(\omega _i,\, \omega _j\) and \(\omega _l\). The determinant of the information matrix is given by:

$$\begin{aligned} \det \bigl (\mathbf{M}(\xi ,{\varvec{\beta }})\bigr ) = \omega _i \omega _j d_{ij} + \omega _i \omega _l d_{il} + \omega _j \omega _l d_{jl}. \end{aligned}$$
(7.1)

As in the proof of Theorem 3.4, we conclude that the determinant is increasing with \(x_l\) and it is maximised for \(x_l=x_k\).

We will now show that a \(D\)-optimal design with three support points \(x_i<x_j<x_k\) must have a support point less than \(\phi ^{-1}(x_k)\). Assume that all three support points are greater than \(\phi ^{-1}(x_k)\). Applying Theorem 2 of Konstantinou et al. (2014) to the design region \([x_i,x_k]\), we obtain the unique \(D\)-optimal design \(\xi ^*\) with support points \(x_i\) and \(x_k\) and equal weights. Since it is \(D\)-optimal on \([x_i,x_k]\), it is also \(D\)-optimal on \(\left\{ x_i,x_j,x_k\right\} \), which contradicts the assumption that no two-point design is \(D\)-optimal.

Next we show that it is impossible that both support points \(x_i\) and \(x_j\) are less than \(\phi ^{-1}(x_k)\). Assume that \(x_i<x_j<\phi ^{-1}(x_k)\). Since \(\phi (x_j)>x_j\) and \(x_j<\phi ^{-1}(x_k)\), we have \(\phi (x_j)\in (x_j,x_k)\). Application of Theorem 2 of Konstantinou et al. (2014) to the design region \([x_i,\phi (x_j)]\) yields the unique \(D\)-optimal design \(\xi ^*\) with support points \(x_j\) and \(\phi (x_j)\) and equal weights. Since it is \(D\)-optimal on \([x_i,\phi (x_j)]\), it is also \(D\)-optimal on \(\left\{ x_i,x_j,\phi (x_j)\right\} \), and hence Theorem 3.5 gives the inequality

$$\begin{aligned} \frac{Q(\theta _i)Q(\theta _j)(x_j-x_i)^2}{Q(\theta _j)Q(\theta _{\phi _j})(\phi (x_j)-x_j)^2} + \frac{Q(\theta _i)Q(\theta _{\phi _j})(\phi (x_j)-x_i)^2}{Q(\theta _j)Q(\theta _{\phi _j})(\phi (x_j)-x_j)^2} \le 1, \end{aligned}$$
(7.2)

where \(\theta _{\phi _j}=\beta _0 + \beta _1 \phi (x_j)\). Let the functions \(h_1(x)\) and \(h_2(x)\) be defined as follows:

$$\begin{aligned} h_1(x)=\frac{Q(\theta _i)(x_j-x_i)^2}{Q(\beta _0 + \beta _1 x)(x-x_j)^2}, \quad h_2(x)=\frac{Q(\theta _i)(x-x_i)^2}{Q(\theta _j)(x-x_j)^2}. \end{aligned}$$

The function \(h_1\) is strictly decreasing in \(x\) for \(x>x_j\), since \(Q\) is increasing and \((x-x_j)^2\) increases there. The derivative of \(h_2(x)\) is given by:

$$\begin{aligned} h_2'(x) = \frac{2Q(\theta _i)(x - x_i)(x_i - x_j)}{Q(\theta _j)(x-x_j)^3}. \end{aligned}$$

Since \(x_i-x_j<0\), the derivative is negative for \(x>x_j\) and hence the function \(h_2(x)\) is also strictly decreasing for \(x>x_j\). The left-hand side of (7.2) equals \(h_1(\phi (x_j))+h_2(\phi (x_j))\), and since \(x_k>\phi (x_j)\), replacing \(\phi (x_j)\) with \(x_k\) can only decrease it. Hence the following inequality holds as well:

$$\begin{aligned} \frac{Q(\theta _i)Q(\theta _j)(x_j-x_i)^2}{Q(\theta _j)Q(\theta _k)(x_k-x_j)^2} + \frac{Q(\theta _i)Q(\theta _k)(x_k-x_i)^2}{Q(\theta _j)Q(\theta _k)(x_k-x_j)^2} \le 1. \end{aligned}$$

By Theorem 3.5, this inequality implies that the design with support points \(x_j\) and \(x_k\) and equal weights is \(D\)-optimal on the design region \(\fancyscript{X}=\left\{ x_i,x_j,x_k\right\} \), which contradicts our assumption that no two-point design is \(D\)-optimal. Hence we have \(x_1^*<\phi ^{-1}(x_k)<x_2^*\).

It remains to show that \(x_1^*=x^-\) and \(x_2^*=x^+\). Let \(\mathbf{M}(\xi ,{\varvec{\beta }})^{-1}=(m_{ij})_{i,j=1,2}\) denote the inverse of the information matrix. By the equivalence theorem, the design \(\xi \) is \(D\)-optimal if and only if

$$\begin{aligned} \frac{m_{11}+2m_{12}x+m_{22}x^2}{2} \le \frac{1}{Q(\theta )} =: g(\theta ) \end{aligned}$$

for all \(x \in \fancyscript{X}\). As \(\beta _1 \ne 0\) we can write the left-hand side as a quadratic polynomial in \(\theta = \beta _0 + \beta _1 x\) with suitable coefficients \(c_1,c_2,c_3 \in \mathbb {R}\). Then the design \(\xi \) is \(D\)-optimal if and only if \(c_1 \theta ^2 + c_2 \theta + c_3 \le g(\theta )\) for all \(\theta \in \beta _0+\beta _1\fancyscript{X}\), which is equivalent to:

$$\begin{aligned} k(\theta ) := g(\theta ) - c_1 \theta ^2 - c_2 \theta - c_3 \ge 0 \quad \forall \theta \in \beta _0+\beta _1\fancyscript{X}. \end{aligned}$$

We consider again \(k\) as a function on \(\mathbb {R}\), and we have \(k''(\theta )=g''(\theta )-2c_1\). Since \(g''(\theta )\) is injective by (A3), the function \(k''(\theta )\) can have at most one zero. From Rolle’s theorem it follows that \(k'(\theta )\) has at most two zeros. Hence \(k(\theta )\) has at most one minimum. If \(k(\beta _0+\beta _1\phi ^{-1}(x_k))\ge 0\), then the three-point design would also be \(D\)-optimal on \(\fancyscript{X}\cup \left\{ \phi ^{-1}(x_k)\right\} \), which is a contradiction because the two-point design with support points \(\phi ^{-1}(x_k)\) and \(x_k\) is the unique \(D\)-optimal design.

We thus get \(k(\beta _0+\beta _1\phi ^{-1}(x_k))<0=k(\beta _0+\beta _1x_1^*)=k(\beta _0+\beta _1x_2^*)\). Since \(k(\theta )\) has at most one minimum and \(x_1^*<\phi ^{-1}(x_k)<x_2^*\), it follows that the \(D\)-optimal three-point design must have the support points \(x_1^*=x^-\) and \(x_2^*=x^+\).

(b) For \(\beta _1 < 0\) the proof works analogously to the case \(\beta _1 > 0\). \(\square \)
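The equivalence condition used in this proof is also convenient for checking a candidate design numerically. A minimal sketch (in Python), assuming the two-parameter information matrix \(\mathbf{M}(\xi ,{\varvec{\beta }})=\sum _x \omega _x\, Q(\beta _0+\beta _1 x)\,(1,x)^\top (1,x)\) that underlies the determinant formulas above; \(Q\) is again a placeholder:

import numpy as np

def is_d_optimal(design, Q, beta0, beta1, X, tol=1e-9):
    # Check (m11 + 2*m12*x + m22*x^2)/2 <= 1/Q(beta0 + beta1*x) for all x in X.
    # design: dict mapping support points to weights; Q: placeholder intensity.
    M = np.zeros((2, 2))
    for x, w in design.items():
        f = np.array([1.0, x])
        M += w * Q(beta0 + beta1 * x) * np.outer(f, f)   # information matrix
    Minv = np.linalg.inv(M)
    for x in X:
        f = np.array([1.0, x])
        if 0.5 * f @ Minv @ f > 1.0 / Q(beta0 + beta1 * x) + tol:
            return False
    return True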

Proof of Remark 3.1

We give the proof only for the case \(\beta _1>0\), since for \(\beta _1<0\) it works in the same way. In the proof of Theorem 3.4(a) we have shown that for arbitrary \(a \in \mathbb {R}\) the function \(r(x):=Q(\beta _0 + \beta _1 x) (a-x)^2\) has exactly one extremum in the interval \((-\infty ,a)\), which is a maximum. Hence \(r(x)\) is eventually decreasing as \(x \rightarrow -\infty \) and bounded below by zero, and thus converges to some limit \(b \ge 0\). Since the factor \((a-x)^2\) tends to \(\infty \) as \(x \rightarrow -\infty \) while \(r(x)\) converges, we have \(\lim _{x \rightarrow -\infty } Q(\beta _0 + \beta _1 x) = 0\).

We now show that \(\lim _{x \rightarrow -\infty } r(x) = 0\) for fixed \(a \in \mathbb {R}\). Assume that \(\lim _{x \rightarrow -\infty } r(x)=b>0\). Application of l’Hôpital’s rule yields

$$\begin{aligned} b&= \lim _{x \rightarrow -\infty } \frac{Q(\beta _0 + \beta _1 x)}{\left( \frac{1}{(a-x)^2}\right) } = \lim _{x \rightarrow -\infty } \frac{\beta _1Q'(\beta _0 + \beta _1 x)}{\left( \frac{2}{(a-x)^3}\right) } \\&=\lim _{x \rightarrow -\infty } \frac{1}{2} \beta _1Q'(\beta _0 + \beta _1 x) (a-x)^3 \end{aligned}$$

and it follows:

$$\begin{aligned} 1 = \frac{b}{b} = \lim _{x \rightarrow -\infty } \frac{Q(\beta _0 + \beta _1 x)(a-x)^2}{\frac{1}{2} \beta _1 Q'(\beta _0 + \beta _1 x) (a-x)^3} = \lim _{x \rightarrow -\infty } \frac{Q(\beta _0 + \beta _1 x)}{Q'(\beta _0 + \beta _1 x)} \cdot \frac{2}{\beta _1(a-x)}. \end{aligned}$$

The second factor \(2/\bigl (\beta _1(a-x)\bigr )\) converges to \(0\) as \(x \rightarrow -\infty \). Thus the first factor \(Q(\beta _0 + \beta _1 x)/Q'(\beta _0 + \beta _1 x)\) would have to tend to \(\infty \) as \(x \rightarrow -\infty \). This contradicts assumption (A4), by which \(Q/Q'\) is increasing and hence bounded above as its argument tends to \(-\infty \).

A suitable design region \(\fancyscript{X}=\left\{ x_1,x_2,x_3\right\} \) is now constructed as follows. Let \(\varepsilon >0\) be given and let \(x_3\) be chosen arbitrarily. The elements \(x_1<\phi ^{-1}(x_3)\) and \(x_2 \in (\phi ^{-1}(x_3),x_3)\) will be chosen to satisfy the following two conditions:

(i) \(d_{13} = d_{23}\),

(ii) \(\frac{d_{13} - d_{12}}{2d_{13} + d_{12}} \le \frac{4}{3}\varepsilon ^2\).

Since \(\lim _{x_1 \rightarrow -\infty } Q(\beta _0 + \beta _1 x_1)(x_3-x_1)^2 = 0\), for every \(x_2\) there is an \(x_1(x_2)\) that solves equation (i). We note that \(\lim _{x_2 \rightarrow x_3} x_1(x_2) = -\infty \). Inequality (ii) can also be satisfied, because its left-hand side converges to zero as \(x_2 \rightarrow x_3\):

$$\begin{aligned} 0\le \lim _{x_2 \rightarrow x_3} \frac{d_{13} - d_{12}}{2d_{13} + d_{12}}&\le \lim _{x_2 \rightarrow x_3} \frac{d_{13} - d_{12}}{d_{12}} = \lim _{x_2 \rightarrow x_3} \frac{Q(\beta _0 + \beta _1 x_3)(x_3-x_1(x_2))^2}{Q(\beta _0 \!+\! \beta _1 x_2)(x_2\!-\!x_1(x_2))^2} \!-\! 1 \\&\le \lim _{x_2 \rightarrow x_3} \frac{Q(\beta _0 + \beta _1 x_3)}{Q(\beta _0 \!+\! \beta _1 x_2)} \cdot \lim _{x_2 \rightarrow x_3} \frac{(x_3-x_1(x_2))^2}{(\phi ^{-1}(x_3)\!-\!x_1(x_2))^2} \!-\! 1 = 0. \end{aligned}$$

By Theorem 3.4(a) together with condition (i), the design \(\xi '\) with support points \(x_1\) and \(x_3\) and equal weights \(1/2\) is an optimal two-point design. Let \(\xi \) be the three-point design with support points \(x_1,\, x_2\) and \(x_3\) and equal weights \(1/3\). Using condition (i), the determinant of the information matrix of \(\xi \) is given by:

$$\begin{aligned} \det \left( \mathbf{M}(\xi ,{\varvec{\beta }})\right) = \frac{1}{9}d_{12} + \frac{2}{9}d_{13}. \end{aligned}$$

The denominator of the left-hand side of inequality (ii) is equal to \(9\cdot \det \bigl (\mathbf{M}(\xi ,{\varvec{\beta }})\bigr )\), and we get the inequality

$$\begin{aligned} \det \bigl (\mathbf{M}(\xi ,{\varvec{\beta }})\bigr )&\ge \frac{1}{9}\left[ d_{13}-12\varepsilon ^2\det \bigl (\mathbf{M}(\xi ,{\varvec{\beta }})\bigr )\right] + \frac{2}{9}d_{13} \\&= -\frac{4}{3}\varepsilon ^2\det \bigl (\mathbf{M}(\xi ,{\varvec{\beta }})\bigr ) + \frac{1}{3}d_{13} \\&= -\frac{4}{3}\varepsilon ^2\det \bigl (\mathbf{M}(\xi ,{\varvec{\beta }})\bigr ) + \frac{4}{3} \det \bigl (\mathbf{M}(\xi ',{\varvec{\beta }})\bigr ), \end{aligned}$$

which is equivalent to \(\left( \frac{3}{4}+\varepsilon ^2\right) \cdot \det \bigl (\mathbf{M}(\xi ,{\varvec{\beta }})\bigr ) \ge \det \bigl (\mathbf{M}(\xi ',{\varvec{\beta }})\bigr )\). Now we have

$$\begin{aligned} \text {eff}_D(\xi ',{\varvec{\beta }}) \le \left( \frac{\det \bigl (\mathbf{M}(\xi ',{\varvec{\beta }})\bigr )}{\det \bigl (\mathbf{M}(\xi ,{\varvec{\beta }})\bigr )}\right) ^{\frac{1}{2}} \le \left( \frac{3}{4}+\varepsilon ^2\right) ^{\frac{1}{2}} \le \frac{\sqrt{3}}{2} + \varepsilon , \end{aligned}$$

which proves the assertion. \(\square \)
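For a quick numerical illustration of this bound, one can compare the two determinants directly. A minimal sketch (in Python), with \(Q\) again a placeholder intensity and \(d_{ij}=Q(\theta _i)Q(\theta _j)(x_j-x_i)^2\):

import numpy as np

def two_point_efficiency_bound(Q, beta0, beta1, x1, x2, x3):
    # Compare the best two-point design xi' = {x1, x3} (equal weights 1/2)
    # with the uniform three-point design xi on {x1, x2, x3} (weights 1/3).
    q = Q(beta0 + beta1 * np.array([x1, x2, x3], dtype=float))
    d12 = q[0] * q[1] * (x2 - x1) ** 2
    d13 = q[0] * q[2] * (x3 - x1) ** 2
    d23 = q[1] * q[2] * (x3 - x2) ** 2
    det_two = d13 / 4.0                      # det M(xi')
    det_three = (d12 + d13 + d23) / 9.0      # det M(xi)
    return np.sqrt(det_two / det_three)      # upper bound on eff_D(xi')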

Proof of Theorem 5.1

(a) A \(D\)-optimal three-point design \(\xi ^*\) with support points \(x_1<x_2<x_3\) has equal weights \(1/3\). The determinant of the information matrix is given by

$$\begin{aligned} \det \left( \mathbf{M}(\xi ^*,{\varvec{\beta }})\right) = \frac{1}{27}Q(\theta _1)Q(\theta _2)Q(\theta _3)(x_3-x_1)^2(x_3-x_2)^2(x_2-x_1)^2, \end{aligned}$$

where \(\theta _i=\beta _0 + \beta _1 x_i + \beta _2 x_i^2\) for \(i=1,2,3\). As a quadratic polynomial in \(x_i\), \(\theta _i\) is symmetric about its minimum at \(x_i=-\beta _1/(2\beta _2)\), and hence \(Q(\theta _i)\) is symmetric about this point as well. Since the function \(Q\) is strictly increasing, \(Q(\theta _i)\) has a minimum at \(x_i=-\beta _1/(2\beta _2)\) and is strictly increasing on either side of it. It follows that \(Q(\theta _i)\) attains its maximum over the interval \([u,v]\) at one of the boundary points. If \(-\beta _1/(2\beta _2) \ge (u+v)/2\) holds, then \(Q(\beta _0 + \beta _1 u + \beta _2 u^2) \ge Q(\beta _0 + \beta _1 v + \beta _2 v^2)\) and the determinant of the information matrix is maximised for \(x_1=u\). If \(-\beta _1/(2\beta _2) \le (u+v)/2\) holds, then the determinant is maximised for \(x_3=v\). We consider the first case; the second follows analogously.

Now we want to maximise the determinant for fixed \(x_2\) over \(x_3\). To this end, we have to maximise the function \(h(x_3) := Q(\beta _0 + \beta _1 x_3 + \beta _2 x_3^2)(x_3-u)^2(x_3-x_2)^2\), which has only two zeros at \(x_3=u\) and \(x_3=x_2\) and is positive elsewhere. Hence \(h(x_3)\) is initially increasing for \(x_3>x_2\). If no local maximum exists, then \(h(x_3)\) is increasing for all \(x_3>x_2\) and is maximised for \(x_3=v\). This is the case if and only if the derivative has at most one zero \(\tilde{x}_3>x_2\); then \(\tilde{x}_3\) is a saddle point of \(h(x_3)\), since the function tends to \(\infty \) as \(x_3 \rightarrow \infty \). If the derivative has several zeros but all of them lie outside the design region, then \(h(x_3)\) is also maximised for \(x_3=v\). For \(x_3>x_2\) the derivative of \(h(x_3)\) is equal to zero if and only if:

$$\begin{aligned} l(x_3,x_2) := \frac{(\beta _1 \!+\! 2\beta _2 x_3)(x_3\!-\!u)(x_3\!-\!x_2)}{2x_3-x_2-u} \!=\! -2 \cdot \frac{Q(\beta _0 \!+\! \beta _1 x_3 + \beta _2 x_3^2)}{Q'(\beta _0 \!+\! \beta _1 x_3 \!+\! \beta _2 x_3^2)} =: r(x_3). \end{aligned}$$

Since the function \(r(x_3)\) is negative, zeros of \(h'(x_3)\) can only exist if \(l(x_3,x_2)\) is also negative. The function \(l(x_3,x_2)\) is negative only if \(x_2 < x_3 < -\beta _1/(2\beta _2)\). The location of the zeros depends on \(x_2\). The derivative of \(l(x_3,x_2)\) with respect to \(x_2\) is given by

$$\begin{aligned} \frac{\hbox {d}l(x_3,x_2)}{\hbox {d}x_2} = - \frac{(\beta _1 + 2\beta _2 x_3)(x_3-u)^2}{(2x_3-x_2-u)^2} \end{aligned}$$

and is thus positive for \(x_3 \in \bigl (x_2,-\beta _1/(2\beta _2)\bigr )\). Hence \(l(x_3,x_2)\) is minimal on this interval for \(x_2=u\). If the equation \(l(x_3,u)=r(x_3)\) has at most one solution, then the equation \(l(x_3,x_2)=r(x_3)\) has at most one solution for all \(x_2 \in [u,v]\) and \(h'(x_3)\) has no second zero for \(x_3>x_2\). This is illustrated in Fig. 2.

Fig. 2 Proof sketch (dashed line: \(l(x_3,u)\), solid line: \(l(x_3,x_2)\) for \(x_2 > u\), dotted line: \(r(x_3)\))

The function \(l(x_3,u)\) is given by \(l(x_3,u) = \frac{1}{2}(\beta _1 + 2\beta _2 x_3)(x_3-u)\), and the equation \(l(x_3,u)=r(x_3)\) is equivalent to (5.1).

We now consider the case that \(h'(x_3)\) has several zeros. For fixed \(x_2 \in \bigl (u,-\beta _1/(2\beta _2)\bigr )\) let \(\tilde{x}_3\) be a zero of \(h'(x_3)\). Since \(l(x_3,x_2)\) is minimised for \(x_2=u\), it follows that \(l(\tilde{x}_3,u) < l(\tilde{x}_3,x_2)=r(\tilde{x}_3)\). Moreover, we have \(\lim _{x_3 \rightarrow u}l(x_3,u)=0>r(u)\). By continuity of both functions there must exist a point of intersection at some \(x_3 < \tilde{x}_3\). This shows that the first point of intersection of \(l(x_3,x_2)\) and \(r(x_3)\) is smallest for \(x_2=u\). If this zero of \(h'(x_3)\) lies outside the interval \((u,v)\) for \(x_2=u\), then it lies outside this interval for all \(x_2>u\), and \(h(x_3)\) is maximised for \(x_3=v\).

Under the given conditions, a \(D\)-optimal three-point design \(\xi ^*\) thus has the support points \(x_1=u\) and \(x_3=v\). To maximise the determinant of the information matrix over \(x_2\), we have to maximise the function \(k(x_2) := Q(\beta _0 + \beta _1 x_2 + \beta _2 x_2^2)(v-x_2)^2(x_2-u)^2\). The term \((v-x_2)^2(x_2-u)^2\) is symmetric about \(x_2=(u+v)/2\). Since the symmetry point of \(Q(\beta _0 + \beta _1 x_2 + \beta _2 x_2^2)\) is located at \(x_2=-\beta _1/(2\beta _2) \ge (u+v)/2\), the function \(k(x_2)\) is maximised in the interval \(\bigl (u,(u+v)/2\bigr ]\). For \(x_2 \in \bigl (u,(u+v)/2\bigr ]\) the derivative of \(k(x_2)\) is equal to zero if and only if:

$$\begin{aligned} (\beta _1 + 2\beta _2 x_2)(v-x_2)(x_2-u) + 2 \cdot \frac{Q(\beta _0 + \beta _1 x_2 + \beta _2 x_2^2)}{Q'(\beta _0 + \beta _1 x_2 + \beta _2 x_2^2)} \cdot (v+u-2x_2) = 0. \end{aligned}$$
(7.3)

We distinguish two cases.

Case 1: Let \(-\beta _1/(2\beta _2) = (u+v)/2\). Then \(k'(x_2)\) has a zero at \(x_2=(u+v)/2\). Further zeros exist if and only if:

$$\begin{aligned} -\beta _2(v-x_2)(x_2-u) = -2 \cdot \frac{Q(\beta _0 + \beta _1 x_2 + \beta _2 x_2^2)}{Q'(\beta _0 + \beta _1 x_2 + \beta _2 x_2^2)}. \end{aligned}$$

The left-hand side is strictly decreasing for \(x_2 \in \bigl (u,(u+v)/2\bigr ]\). Since \(\beta _0 + \beta _1 x_2 + \beta _2 x_2^2\) is decreasing for \(x_2 \le (u+v)/2\) and \(Q/Q'\) is increasing by (A4), the right-hand side is increasing on the interval \(\bigl (u,(u+v)/2\bigr ]\). Hence there can be at most one point of intersection. This point cannot be a saddle point, since the derivative changes sign there. Thus it is a maximum, and by the symmetry of \(k(x_2)\) about \((u+v)/2\) there is a minimum at \(x_2=(u+v)/2\). If no point of intersection exists, then the maximum is located at \(x_2=(u+v)/2\). In both cases the maximum is located at the smallest solution of Eq. (7.3).

Case 2: Let \(-\beta _1/(2\beta _2) > (u+v)/2\). Then \(k'(x_2)\) has no zero at \(x_2=(u+v)/2\), and \(k'(x_2)=0\) if and only if:

$$\begin{aligned} l(x_2) := \frac{(\beta _1 + 2\beta _2 x_2)(v-x_2)(x_2-u)}{v+u-2x_2} = -2 \cdot \frac{Q(\beta _0 + \beta _1 x_2 + \beta _2 x_2^2)}{Q'(\beta _0 + \beta _1 x_2 + \beta _2 x_2^2)} =: r(x_2). \end{aligned}$$

The derivative of \(l(x_2)\) is given by

$$\begin{aligned} l'(x_2) = \beta _1 + 2\beta _2 x_2 + \frac{2(v-x_2)(x_2-u)\bigl (\beta _1 + \beta _2 (v+u)\bigr )}{(v+u-2x_2)^2} \end{aligned}$$

and it is negative for \(x_2 \in \bigl (u,(u+v)/2\bigr )\). Hence \(l(x_2)\) is strictly decreasing on this interval. As in Case 1, the function \(r(x_2)\) is increasing on this interval. Since \(l(u)=0>r(u)\) and \(l(x_2)\) tends to \(-\infty \) as \(x_2 \rightarrow (u+v)/2\), there exists exactly one point of intersection, at which \(k(x_2)\) is maximised.

The other cases (b), (c) and (d) follow by similar arguments. \(\square \)
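In practice, the interior support point in case (a) can also be located by a direct numerical search instead of solving Eq. (7.3). A minimal sketch (in Python), with \(Q\) a placeholder intensity satisfying (A1)–(A4):

import numpy as np

def interior_support_point(Q, beta0, beta1, beta2, u, v, grid=100_000):
    # Case (a): x1 = u and x3 = v; locate x2 by maximising
    # k(x2) = Q(b0 + b1*x2 + b2*x2^2) * (v - x2)^2 * (x2 - u)^2
    # on (u, (u+v)/2] via a grid search.
    x2 = np.linspace(u, (u + v) / 2.0, grid)[1:]        # open at u
    theta = beta0 + beta1 * x2 + beta2 * x2 ** 2
    k = Q(theta) * (v - x2) ** 2 * (x2 - u) ** 2
    return x2[np.argmax(k)]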

Cite this article

Schmidt, D., Schwabe, R. On optimal designs for censored data. Metrika 78, 237–257 (2015). https://doi.org/10.1007/s00184-014-0500-1