
The Sufficient and Necessary Condition for the Identifiability and Estimability of the DINA Model


Abstract

Cognitive diagnosis models (CDMs) are useful statistical tools in cognitive diagnosis assessment. However, like many other latent variable models, CDMs often suffer from non-identifiability. This work gives the sufficient and necessary condition for identifiability of the basic DINA model, which not only addresses the open problem in Xu and Zhang (Psychometrika 81:625–649, 2016) on the minimal requirement for identifiability, but also sheds light on the study of more general CDMs, which often cover DINA as a submodel. Moreover, we show that the identifiability condition ensures consistent estimation of the model parameters. From a practical perspective, the identifiability condition depends only on the Q-matrix structure and is easy to verify, providing a guideline for designing statistically valid and estimable cognitive diagnosis tests.


References

  • Casella, G., & Berger, R. L. (2002). Statistical inference (2nd ed.). Pacific Grove, CA: Duxbury.

  • Chen, Y., Liu, J., Xu, G., & Ying, Z. (2015). Statistical analysis of \(Q\)-matrix based diagnostic classification models. Journal of the American Statistical Association, 110, 850–866.

  • Chiu, C.-Y., Douglas, J. A., & Li, X. (2009). Cluster analysis for cognitive diagnosis: Theory and applications. Psychometrika, 74, 633–665.

  • de la Torre, J. (2011). The generalized DINA model framework. Psychometrika, 76, 179–199.

  • DeCarlo, L. T. (2011). On the analysis of fraction subtraction data: The DINA model, classification, class sizes, and the Q-matrix. Applied Psychological Measurement, 35, 8–26.

  • DiBello, L. V., Stout, W. F., & Roussos, L. A. (1995). Unified cognitive psychometric diagnostic assessment likelihood-based classification techniques. In P. D. Nichols, S. F. Chipman, & R. L. Brennan (Eds.), Cognitively diagnostic assessment (pp. 361–390). Hillsdale, NJ: Erlbaum Associates.

  • Fang, G., Liu, J., & Ying, Z. (2017). On the identifiability of diagnostic classification models. arXiv preprint.

  • Gabrielsen, A. (1978). Consistency and identifiability. Journal of Econometrics, 8(2), 261–263.

  • Goodman, L. A. (1974). Exploratory latent structure analysis using both identifiable and unidentifiable models. Biometrika, 61, 215–231.

  • Henson, R. A., Templin, J. L., & Willse, J. T. (2009). Defining a family of cognitive diagnosis models using log-linear models with latent variables. Psychometrika, 74, 191–210.

  • Junker, B. W., & Sijtsma, K. (2001). Cognitive assessment models with few assumptions, and connections with nonparametric item response theory. Applied Psychological Measurement, 25, 258–272.

  • Koopmans, T. C., & Reiersøl, O. (1950). The identification of structural characteristics. Annals of Mathematical Statistics, 21, 165–181.

  • Liu, J., Xu, G., & Ying, Z. (2013). Theory of self-learning Q-matrix. Bernoulli, 19(5A), 1790–1817.

  • Maris, G., & Bechger, T. M. (2009). Equivalent diagnostic classification models. Measurement, 7, 41–46.

  • McHugh, R. B. (1956). Efficient estimation and local identification in latent class analysis. Psychometrika, 21, 331–347.

  • Rothenberg, T. J. (1971). Identification in parametric models. Econometrica, 39, 577–591.

  • Tatsuoka, C. (2009). Diagnostic models as partially ordered sets. Measurement, 7, 49–53.

  • Tatsuoka, K. K. (1983). Rule space: An approach for dealing with misconceptions based on item response theory. Journal of Educational Measurement, 20, 345–354.

  • Templin, J. L., & Henson, R. A. (2006). Measurement of psychological disorders using cognitive diagnosis models. Psychological Methods, 11, 287–305.

  • von Davier, M. (2005). A general diagnostic model applied to language testing data (Research report). Princeton, NJ: Educational Testing Service.

  • von Davier, M. (2014). The DINA model as a constrained general diagnostic model: Two variants of a model equivalency. British Journal of Mathematical and Statistical Psychology, 67(1), 49–71.

  • Wang, S., & Douglas, J. (2015). Consistency of nonparametric classification in cognitive diagnosis. Psychometrika, 80(1), 85–100.

  • Xu, G. (2017). Identifiability of restricted latent class models with binary responses. The Annals of Statistics, 45, 675–707.

  • Xu, G., & Shang, Z. (2017). Identifying latent structures in restricted latent class models. Journal of the American Statistical Association (accepted).

  • Xu, G., & Zhang, S. (2016). Identifiability of diagnostic classification models. Psychometrika, 81, 625–649.

Acknowledgements

The authors thank the editor, the associate editor, and two reviewers for many helpful and constructive comments. This work was partially supported by the National Science Foundation (Grant No. SES-1659328).

Author information

Correspondence to Gongjun Xu.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 201 KB)

Appendix: Proof of Theorem 1

To study model identifiability, directly working with (2) is technically challenging. To facilitate the proof of the theorem, following Xu (2017) we introduce a key technical quantity, the marginal probability matrix, called the T-matrix. The T-matrix \(T({\varvec{s}},{\varvec{g}})\) is defined as a \(2^J\times 2^K\) matrix, whose entries are indexed by row index \({\varvec{r}}\in \{0,1\}^J\) and column index \({\varvec{\alpha }}\). Suppose that the columns of \(T({\varvec{s}},{\varvec{g}})\), indexed by \(({\varvec{\alpha }}^1,\ldots ,{\varvec{\alpha }}^{2^K})\), are arranged in the following order on \(\{0,1\}^K\)

$$\begin{aligned} {\varvec{\alpha }}^1= & {} \mathbf {0},~ {\varvec{\alpha }}^2={\varvec{e}}_1,~ \ldots ,~{\varvec{\alpha }}^{K+1} = {\varvec{e}}_K,~{\varvec{\alpha }}^{K+2} = {\varvec{e}}_1+{\varvec{e}}_2,~ {\varvec{\alpha }}^{K+3} = {\varvec{e}}_1+{\varvec{e}}_3,~ \ldots , \\ {\varvec{\alpha }}^{2^K}= & {} \sum _{k=1}^K{\varvec{e}}_k =\mathbf {1}, \end{aligned}$$

where \(\mathbf {0}\) denotes the column vector of zeros, \(\mathbf {1}\) denotes the column vector of ones, and \({\varvec{e}}_k\) denotes a standard basis vector, whose kth element is one and the rest are zero; to simplify notation, we omit the dimension indices of \(\mathbf {0}, \mathbf {1}\) and \({\varvec{e}}_k\)’s. Similarly, suppose that the rows of \(T({\varvec{s}},{\varvec{g}})\) indexed by \(({\varvec{r}}^1,\ldots ,{\varvec{r}}^{2^J})\) are arranged in the following order

$$\begin{aligned} {\varvec{r}}^1= & {} \mathbf {0}, ~ {\varvec{r}}^2={\varvec{e}}_1,~ \ldots ,~{\varvec{r}}^{J+1} = {\varvec{e}}_J,~{\varvec{r}}^{J+2} = {\varvec{e}}_1+{\varvec{e}}_2,~ {\varvec{r}}^{J+3} = {\varvec{e}}_1+{\varvec{e}}_3,~ \ldots , \\ {\varvec{r}}^{2^J}= & {} \sum _{j=1}^J{\varvec{e}}_j=\mathbf {1}. \end{aligned}$$

The \({\varvec{r}}=(r_1,\ldots , r_J)\)th row and \({\varvec{\alpha }}\)th column element of \(T({\varvec{s}},{\varvec{g}})\), denoted by \(t_{{\varvec{r}},{\varvec{\alpha }}}({\varvec{s}},{\varvec{g}})\), is the probability that a subject with attribute profile \({\varvec{\alpha }}\) answers all items in the subset \(\{j: r_j=1\}\) positively, that is, \(t_{{\varvec{r}},{\varvec{\alpha }}}({\varvec{s}},{\varvec{g}}) = P({\varvec{R}}\succeq {\varvec{r}}\mid Q,{\varvec{s}},{\varvec{g}},{\varvec{\alpha }}). \) When \({\varvec{r}}=\mathbf {0}\), \(t_{\mathbf {0},{\varvec{\alpha }}}({\varvec{s}},{\varvec{g}}) = P({\varvec{R}}\succeq \mathbf {0}) = 1\) for any \({\varvec{\alpha }}\). When \({\varvec{r}}={\varvec{e}}_j\), for \(1\le j\le J\), \(t_{{\varvec{e}}_j,{\varvec{\alpha }}}({\varvec{s}},{\varvec{g}}) =P(R_j=1 \mid Q,{\varvec{s}},{\varvec{g}},{\varvec{\alpha }}).\) Let \(T_{{\varvec{r}},\cdot }({\varvec{s}},{\varvec{g}})\) be the row vector in the T-matrix corresponding to \({\varvec{r}}\). Then for any \({\varvec{r}}\ne \mathbf {0}\), we can write \(T_{{\varvec{r}},\cdot }({\varvec{s}},{\varvec{g}}) = \odot _{j:\, r_j=1}\, T_{{\varvec{e}}_j,\cdot }({\varvec{s}},{\varvec{g}})\), where \(\odot \) denotes the element-wise product of the row vectors.
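
As a computational illustration of this definition (ours, not part of the paper's derivation), the following Python sketch builds \(T({\varvec{s}},{\varvec{g}})\) for a small DINA model, checks the element-wise product identity above, and numerically confirms the full-column-rank property invoked at the end of the sufficiency proof. The helper names item_prob and t_matrix are our own.

```python
import itertools
import numpy as np

def item_prob(q, alpha, s_j, g_j):
    """DINA positive response probability: 1 - s_j if alpha masters every
    attribute required by q, and g_j otherwise."""
    return 1 - s_j if np.all(alpha >= q) else g_j

def t_matrix(Q, s, g):
    """Build the 2^J x 2^K T-matrix; entry (r, alpha) is the probability that a
    subject with profile alpha answers all items in {j: r_j = 1} positively.
    (Rows/columns are in binary-counting order rather than the paper's weight
    order; the ordering is immaterial for the checks below.)"""
    J, K = Q.shape
    profiles = list(itertools.product([0, 1], repeat=K))
    responses = list(itertools.product([0, 1], repeat=J))
    T = np.ones((2 ** J, 2 ** K))
    for col, alpha in enumerate(profiles):
        theta = np.array([item_prob(Q[j], np.array(alpha), s[j], g[j]) for j in range(J)])
        for row, r in enumerate(responses):
            T[row, col] = np.prod(theta[np.array(r) == 1])  # empty product = 1
    return T, responses

Q = np.array([[1, 0], [0, 1], [1, 1]])        # complete 3 x 2 Q-matrix
s = np.array([0.20, 0.10, 0.15])
g = np.array([0.10, 0.20, 0.25])
T, responses = t_matrix(Q, s, g)

# Check T_{r,.} = element-wise product of the rows T_{e_j,.} over {j: r_j = 1}.
for row, r in enumerate(responses):
    prod = np.ones(T.shape[1])
    for j in range(3):
        if r[j] == 1:
            e_j = tuple(int(i == j) for i in range(3))
            prod *= T[responses.index(e_j)]
    assert np.allclose(T[row], prod)

# For this complete Q the T-matrix has full column rank, the fact used at the
# end of the sufficiency proof to conclude p = p-bar.
assert np.linalg.matrix_rank(T) == T.shape[1]
```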

By definition, multiplying the T-matrix by the distribution of attribute profiles \({\varvec{p}}\) results in a vector, \(T({\varvec{s}},{\varvec{g}}){\varvec{p}}\), containing the marginal probabilities of responding positively to each subset of items. The \({\varvec{r}}\)th entry of this vector is

$$\begin{aligned} \left( T({\varvec{s}},{\varvec{g}}){\varvec{p}}\right) _{{\varvec{r}}} = \sum _{{\varvec{\alpha }}\in \{0,1\}^K} t_{{\varvec{r}},{\varvec{\alpha }}}({\varvec{s}},{\varvec{g}})\, p_{{\varvec{\alpha }}} = P({\varvec{R}}\succeq {\varvec{r}}\mid Q,{\varvec{s}},{\varvec{g}},{\varvec{p}}). \end{aligned}$$

We can see that there is a one-to-one mapping between the two \(2^J\)-dimensional vectors \(T({\varvec{s}},{\varvec{g}}){\varvec{p}}\) and \(\left( P({\varvec{R}}= {\varvec{r}}\mid Q,{\varvec{s}},{\varvec{g}},{\varvec{p}}):~ {\varvec{r}}\in \{0,1\}^J\right) \). Therefore, Definition 1 directly implies the following proposition.
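
The one-to-one mapping can be seen constructively: the entries of \(T({\varvec{s}},{\varvec{g}}){\varvec{p}}\) are the probabilities \(P({\varvec{R}}\succeq {\varvec{r}})\), and Möbius inversion over the subset lattice recovers the point probabilities \(P({\varvec{R}}={\varvec{r}})\). A self-contained numerical sketch (ours; the inversion formula is standard and is not spelled out in the paper):

```python
import itertools
import numpy as np

# Toy model: K = 2, J = 3, complete Q; hypothetical parameter values.
Q = np.array([[1, 0], [0, 1], [1, 1]])
s = np.array([0.20, 0.10, 0.15])
g = np.array([0.10, 0.20, 0.25])
p = {a: w for a, w in zip(itertools.product([0, 1], repeat=2), [0.4, 0.2, 0.1, 0.3])}

def theta_vec(alpha):
    """Per-item positive response probabilities for profile alpha."""
    return np.array([1 - s[j] if np.all(np.array(alpha) >= Q[j]) else g[j] for j in range(3)])

responses = list(itertools.product([0, 1], repeat=3))
# Point probabilities P(R = r) and subset-success probabilities P(R >= r) = (T p)_r.
point = {r: sum(w * np.prod(theta_vec(a) ** r * (1 - theta_vec(a)) ** (1 - np.array(r)))
                for a, w in p.items()) for r in responses}
marg = {r: sum(point[r2] for r2 in responses if all(x >= y for x, y in zip(r2, r)))
        for r in responses}

# Moebius inversion: P(R = r) = sum_{r' >= r} (-1)^{|r'| - |r|} P(R >= r'),
# so T(s,g)p and the response distribution determine each other (Proposition 1).
recovered = {r: sum((-1) ** (sum(r2) - sum(r)) * marg[r2]
                    for r2 in responses if all(x >= y for x, y in zip(r2, r)))
             for r in responses}
assert all(np.isclose(point[r], recovered[r]) for r in responses)
```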

Proposition 1

The parameters \(({\varvec{s}},{\varvec{g}},{\varvec{p}})\) are identifiable if and only if for any \((\bar{{\varvec{s}}},\bar{{\varvec{g}}},\bar{{\varvec{p}}})\ne ({\varvec{s}},{\varvec{g}},{\varvec{p}})\), there exists \({\varvec{r}}\in \{ 0, 1\}^J\) such that

$$\begin{aligned} T_{{\varvec{r}},\cdot }({\varvec{s}},{\varvec{g}})\,{\varvec{p}}\ne T_{{\varvec{r}},\cdot }(\bar{{\varvec{s}}},\bar{{\varvec{g}}})\,\bar{{\varvec{p}}}. \end{aligned}$$
(10)

Proposition 1 shows that to establish the identifiability of \(({\varvec{s}},{\varvec{g}},{\varvec{p}})\), we only need to focus on the T-matrix structure.

The following proposition characterizes the equivalence between the identifiability of the DINA model associated with a Q-matrix containing some zero \({\varvec{q}}\)-vectors and that of the submatrix of Q consisting of all its nonzero \({\varvec{q}}\)-vectors. The proof of Proposition 2 is given in the Supplementary Material.

Proposition 2

Suppose the Q-matrix of size \(J\times K\) takes the form

$$\begin{aligned} Q=\begin{pmatrix} Q'\\ \mathbf {0}\\ \end{pmatrix}, \end{aligned}$$

where \(Q'\) denotes a \(J'\times K\) submatrix containing the \(J'\) nonzero \({\varvec{q}}\)-vectors of Q, and \(\mathbf {0}\) denotes a \((J-J')\times K\) submatrix containing those zero \({\varvec{q}}\)-vectors of Q. Then, the DINA model associated with Q is identifiable if and only if the DINA model associated with \(Q'\) is identifiable.

By Proposition 2, without loss of generality, in the following we assume the Q-matrix does not contain any zero \({\varvec{q}}\)-vectors and prove the necessity and sufficiency of the proposed Conditions 1 and 2.
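
Computationally, the reduction in Proposition 2 amounts to dropping the all-zero \({\varvec{q}}\)-vectors before checking the conditions; a minimal NumPy sketch (ours):

```python
import numpy as np

Q = np.array([[1, 0], [0, 1], [1, 1], [0, 0]])  # last q-vector is zero
Q_prime = Q[Q.sum(axis=1) > 0]                  # keep the nonzero q-vectors only
# By Proposition 2, it suffices to check identifiability for Q_prime.
```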

Proof of Necessity

The necessity of Condition 1 comes from Theorem 3 in Xu and Zhang (2016). Now suppose Condition 1 holds but Condition 2 is not satisfied. Without loss of generality, suppose the first two columns of \(Q^*\) are the same, so that Q takes the following form

$$\begin{aligned} Q=\begin{pmatrix} {\mathcal {I}}_K\\ Q^* \end{pmatrix},\qquad Q^* = \big ({\varvec{v}},~ {\varvec{v}},~ {\varvec{q}}^*_{\cdot ,3},~ \ldots ,~ {\varvec{q}}^*_{\cdot ,K}\big ), \end{aligned}$$
(11)

where \({\varvec{v}}\) is any binary vector of length \(J-K\) and \({\varvec{q}}^*_{\cdot ,k}\) denotes the kth column of \(Q^*\). To show the necessity of Condition 2, from Proposition 1, we only need to find two different sets of parameters \(({\varvec{s}},{\varvec{g}},{\varvec{p}})\ne ( \bar{{\varvec{s}}},\bar{{\varvec{g}}},\bar{{\varvec{p}}})\) such that for any \({\varvec{r}}\in \{0,1\}^J\), the following equation holds

$$\begin{aligned} T_{{\varvec{r}},\cdot }( {\varvec{s}},{\varvec{g}}){\varvec{p}}= T_{{\varvec{r}},\cdot }( \bar{{\varvec{s}}},\bar{{\varvec{g}}})\bar{{\varvec{p}}}. \end{aligned}$$
(12)

We next construct such \(({\varvec{s}},{\varvec{g}},{\varvec{p}})\) and \((\bar{{\varvec{s}}}, \bar{{\varvec{g}}},\bar{{\varvec{p}}})\). We assume in the following that \(\bar{{\varvec{s}}}={\varvec{s}}\) and \(\bar{g}_j =g_j\) for any \(j> 2\), and focus on the construction of \((\bar{g}_1,\bar{g}_2,\bar{{\varvec{p}}})\ne ( g_1, g_2, {\varvec{p}}) \) satisfying (12) for any \({\varvec{r}}\in \{0,1\}^J\). For notational convenience, we write the positive response probability for item j and attribute profile \({\varvec{\alpha }}\) in the following general form \( \theta _{j,{\varvec{\alpha }}} := (1-s_j)^{\xi _{j,{\varvec{\alpha }}} }g_j^{1-\xi _{j,{\varvec{\alpha }}}}. \) So based on our construction, for any \(j>2\), \(\theta _{j,{\varvec{\alpha }}} = \bar{\theta }_{j,{\varvec{\alpha }}}\).
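
For concreteness, \(\theta _{j,{\varvec{\alpha }}}\) is the usual DINA item response function; a short sketch (ours), writing the ideal response \(\xi _{j,{\varvec{\alpha }}}\) as the indicator that \({\varvec{\alpha }}\) masters every attribute required by item j:

```python
import numpy as np

def theta(alpha, q_j, s_j, g_j):
    """theta_{j,alpha} = (1 - s_j)^xi * g_j^(1 - xi), where xi = 1 iff
    alpha >= q_j element-wise (the DINA ideal response)."""
    xi = int(np.all(np.asarray(alpha) >= np.asarray(q_j)))
    return (1 - s_j) ** xi * g_j ** (1 - xi)
```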

We define two subsets of items \(S_0\) and \(S_1\) to be

$$\begin{aligned} S_0 = \{j: q_{j,1}=q_{j,2}=0\} \text{ and } S_1 = \{j: q_{j,1}=q_{j,2}=1\}, \end{aligned}$$

where \(S_0\) includes those items requiring neither of the first two attributes, and \(S_1\) includes those items requiring both of the first two attributes. Since Condition 2 is not satisfied, we must have \(S_0\cup S_1 = \{3,4,\ldots ,J\}\); i.e., every item other than the first two falls in either \(S_0\) or \(S_1\). Now consider any \({\varvec{\alpha }}^*\in \{0,1\}^{K-2}\). For any item \(j\in S_0\), the four attribute profiles \((0,0,{\varvec{\alpha }}^*)\), \((0,1,{\varvec{\alpha }}^*)\), \((1,0,{\varvec{\alpha }}^*)\), and \((1,1,{\varvec{\alpha }}^*)\) have the same positive response probability on item j, and for any \(j\in S_1\), the three attribute profiles \((0,0,{\varvec{\alpha }}^*)\), \((1,0,{\varvec{\alpha }}^*)\), and \((0,1,{\varvec{\alpha }}^*)\) have the same positive response probability on item j. In summary,

$$\begin{aligned} {\left\{ \begin{array}{ll} \theta _{j,\,(0,0,{\varvec{\alpha }}^*)} = \theta _{j,\,(0,1,{\varvec{\alpha }}^*)} = \theta _{j,\,(1,0,{\varvec{\alpha }}^*)} =\theta _{j,\,(1,1,{\varvec{\alpha }}^*)}&{} \text{ for } j\in S_0;\\ \theta _{j,\,(0,0,{\varvec{\alpha }}^*)} = \theta _{j,\,(0,1,{\varvec{\alpha }}^*)} = \theta _{j,\,(1,0,{\varvec{\alpha }}^*)} \le \theta _{j,\,(1,1,{\varvec{\alpha }}^*)}&{} \text{ for } j\in S_1. \end{array}\right. } \end{aligned}$$
(13)
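
The sets \(S_0\) and \(S_1\) and the covering claim are easy to verify mechanically; a small sketch (ours; the \(5\times 3\) Q below is a hypothetical instance of form (11), not one taken from the paper, and items are 0-indexed):

```python
import numpy as np

def split_items(Q):
    """S0: items requiring neither of the first two attributes;
    S1: items requiring both."""
    S0 = [j for j in range(Q.shape[0]) if Q[j, 0] == 0 and Q[j, 1] == 0]
    S1 = [j for j in range(Q.shape[0]) if Q[j, 0] == 1 and Q[j, 1] == 1]
    return S0, S1

# Identity block on top; the first two columns of Q* coincide, so Condition 2 fails.
Q = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1],
              [1, 1, 0],
              [1, 1, 1]])
S0, S1 = split_items(Q)
# Every item beyond the first two lies in S0 or S1, exactly as the proof claims.
assert set(S0) | set(S1) == set(range(2, Q.shape[0]))
```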

For any response vector \({\varvec{r}}\in \{0,1\}^J\) such that \({\varvec{r}}_{S_1}:=(r_j: j\in S_1)\ne \mathbf {0}\), namely \(r_j=1\) for some item j requiring both of the first two attributes, we discuss the following four cases.

  1. (a)

    For any \({\varvec{r}}\) such that \((r_1,r_2) = (0,0)\) and \({\varvec{r}}_{S_1}\ne \mathbf {0}\), from (13) and the definition of the T-matrix, (12) is equivalent to

    $$\begin{aligned}&\sum _{{\varvec{\alpha }}^*}\left\{ \left[ \prod _{j>2:\,r_j=1} \theta _{j,\,(0,0,{\varvec{\alpha }}^*)}\right] \big [p_{(0,0,{\varvec{\alpha }}^*)}+p_{(0,1,{\varvec{\alpha }}^*)}+p_{(1,0,{\varvec{\alpha }}^*)}\big ]\right. \\&\qquad \left. +\, \left[ \prod _{j>2:\,r_j=1} \theta _{j,\,(1,1,{\varvec{\alpha }}^*)}\right] p_{(1,1,{\varvec{\alpha }}^*)} \right\} \\&\quad = \sum _{{\varvec{\alpha }}^*}\left\{ \left[ \prod _{j>2:\,r_j=1}\bar{\theta }_{j,\,(0,0,{\varvec{\alpha }}^*)}\right] \big [\bar{p}_{(0,0,{\varvec{\alpha }}^*)}+\bar{p}_{(0,1,{\varvec{\alpha }}^*)}+\bar{p}_{(1,0,{\varvec{\alpha }}^*)}\big ]\right. \\&\qquad \left. +\, \left[ \prod _{j>2:\,r_j=1}\bar{\theta }_{j,\,(1,1,{\varvec{\alpha }}^*)}\right] \bar{p}_{(1,1,{\varvec{\alpha }}^*)} \right\} \\&\quad = \sum _{{\varvec{\alpha }}^*}\left\{ \left[ \prod _{j>2:\,r_j=1} \theta _{j,\,(0,0,{\varvec{\alpha }}^*)}\right] \big [\bar{p}_{(0,0,{\varvec{\alpha }}^*)}+\bar{p}_{(0,1,{\varvec{\alpha }}^*)}+\bar{p}_{(1,0,{\varvec{\alpha }}^*)}\big ]\right. \\&\qquad \left. +\, \left[ \prod _{j>2:\,r_j=1} \theta _{j,\,(1,1,{\varvec{\alpha }}^*)}\right] \bar{p}_{(1,1,{\varvec{\alpha }}^*)} \right\} , \end{aligned}$$

    where the last equality above follows from \(\theta _{j,{\varvec{\alpha }}} = \bar{\theta }_{j,{\varvec{\alpha }}}\) for any \(j>2\). To ensure the above equations hold, it suffices to have the following equations satisfied for any \({\varvec{\alpha }}^*\in \{0,1\}^{K-2}\)

    $$\begin{aligned} {\left\{ \begin{array}{ll} p_{(1,1,{\varvec{\alpha }}^*)} = \bar{p}_{(1,1,{\varvec{\alpha }}^*)}; \\ p_{(0,0,{\varvec{\alpha }}^*)} + p_{(1,0,{\varvec{\alpha }}^*)} + p_{(0,1,{\varvec{\alpha }}^*)} = \bar{p}_{(0,0,{\varvec{\alpha }}^*)} +\bar{p}_{(1,0,{\varvec{\alpha }}^*)} +\bar{p}_{(0,1,{\varvec{\alpha }}^*)}. \end{array}\right. } \end{aligned}$$
    (14)
  2. (b)

    For any \({\varvec{r}}\) such that \((r_1,r_2) = (1,0)\) and \({\varvec{r}}_{S_1}\ne \mathbf {0}\), from (13) and the definition of the T-matrix, (12) can be equivalently written as

    $$\begin{aligned}&\sum _{{\varvec{\alpha }}^*}\left\{ \left[ \prod _{j>2:\,r_j=1} \theta _{j,\,(0,0,{\varvec{\alpha }}^*)}\right] \big [ g_1 (p_{(0,0,{\varvec{\alpha }}^*)} + p_{(0,1,{\varvec{\alpha }}^*)}) + (1-s_1) p_{(1,0,{\varvec{\alpha }}^*)} \big ]\right. \\&\qquad \left. + \left[ \prod _{j>2:\,r_j=1} \theta _{j,\,(1,1,{\varvec{\alpha }}^*)}\right] (1-s_1) p_{(1,1,{\varvec{\alpha }}^*)} \right\} \\&\quad = \sum _{{\varvec{\alpha }}^*}\left\{ \left[ \prod _{j>2:\,r_j=1} \theta _{j,\,(0,0,{\varvec{\alpha }}^*)}\right] \big [ \bar{g}_1 (\bar{p}_{(0,0,{\varvec{\alpha }}^*)} + \bar{p}_{(0,1,{\varvec{\alpha }}^*)}) + (1-s_1) \bar{p}_{(1,0,{\varvec{\alpha }}^*)} \big ] \right. \\&\qquad \left. + \left[ \prod _{j>2:\,r_j=1} \theta _{j,\,(1,1,{\varvec{\alpha }}^*)}\right] (1-s_1) \bar{p}_{(1,1,{\varvec{\alpha }}^*)} \right\} . \end{aligned}$$

    To ensure the above equation holds, it suffices to have the following equations satisfied for any \({\varvec{\alpha }}^*\in \{0,1\}^{K-2}\)

    $$\begin{aligned} {\left\{ \begin{array}{ll} p_{(1,1,{\varvec{\alpha }}^*)}= \bar{p}_{(1,1,{\varvec{\alpha }}^*)} ;\\ g_1 [p_{(0,0,{\varvec{\alpha }}^*)} + p_{(0,1,{\varvec{\alpha }}^*)}] + (1-s_1) p_{(1,0,{\varvec{\alpha }}^*)}= \bar{g}_1 [\bar{p}_{(0,0,{\varvec{\alpha }}^*)} + \bar{p}_{(0,1,{\varvec{\alpha }}^*)}] + (1-s_1) \bar{p}_{(1,0,{\varvec{\alpha }}^*)}. \end{array}\right. } \end{aligned}$$
    (15)
  3. (c)

    For any \({\varvec{r}}\) such that \((r_1,r_2) = (0,1)\) and \({\varvec{r}}_{S_1}\ne \mathbf {0}\), by symmetry to the previous case of \((r_1,r_2)=(1,0)\), when the following equations hold for any \({\varvec{\alpha }}^*\in \{0,1\}^{K-2}\), Eq. (12) is guaranteed to hold

    $$\begin{aligned} {\left\{ \begin{array}{ll} p_{(1,1,{\varvec{\alpha }}^*)} = \bar{p}_{(1,1,{\varvec{\alpha }}^*)} ;\\ g_2 [p_{(0,0,{\varvec{\alpha }}^*)} + p_{(1,0,{\varvec{\alpha }}^*)}] + (1-s_2) p_{(0,1,{\varvec{\alpha }}^*)} = \bar{g}_2 [\bar{p}_{(0,0,{\varvec{\alpha }}^*)} + \bar{p}_{(1,0,{\varvec{\alpha }}^*)}] + (1-s_2) \bar{p}_{(0,1,{\varvec{\alpha }}^*)}. \end{array}\right. } \end{aligned}$$
    (16)
  4. (d)

    For any \({\varvec{r}}\) such that \((r_1,r_2) = (1,1)\) and \({\varvec{r}}_{S_1}\ne \mathbf {0}\), similarly to the previous cases, Eq. (12) can be equivalently written as

    $$\begin{aligned}&\sum _{{\varvec{\alpha }}^*}\left\{ \left[ \prod _{j>2:\,r_j=1} \theta _{j,\,(0,0,{\varvec{\alpha }}^*)}\right] \big [ g_1 g_2 p_{(0,0,{\varvec{\alpha }}^*)} + (1-s_1) g_2 p_{(1,0,{\varvec{\alpha }}^*)} + g_1 (1-s_2) p_{(0,1,{\varvec{\alpha }}^*)} \big ] \right. \\&\qquad \left. + \left[ \prod _{j>2:\,r_j=1} \theta _{j,\,(1,1,{\varvec{\alpha }}^*)}\right] (1-s_1) (1-s_2) p_{(1,1,{\varvec{\alpha }}^*)} \right\} \\&\quad = \sum _{{\varvec{\alpha }}^*}\left\{ \left[ \prod _{j>2:\,r_j=1} \theta _{j,\,(0,0,{\varvec{\alpha }}^*)}\right] \big [ \bar{g}_1\bar{g}_2 \bar{p}_{(0,0,{\varvec{\alpha }}^*)} + (1-s_1) \bar{g}_2 \bar{p}_{(1,0,{\varvec{\alpha }}^*)} +\bar{g}_1 (1-s_2) \bar{p}_{(0,1,{\varvec{\alpha }}^*)} \big ]\right. \\&\qquad \left. + \left[ \prod _{j>2:\,r_j=1} \theta _{j,\,(1,1,{\varvec{\alpha }}^*)}\right] (1-s_1) (1-s_2) \bar{p}_{(1,1,{\varvec{\alpha }}^*)} \right\} . \end{aligned}$$

To ensure that the above equation holds, it suffices to have the following equations hold for any \({\varvec{\alpha }}^*\in \{0,1\}^{K-2}\)

    $$\begin{aligned} \begin{aligned} {\left\{ \begin{array}{ll} &{} p_{(1,1,{\varvec{\alpha }}^*)} = \bar{p}_{(1,1,{\varvec{\alpha }}^*)}; \\ &{} g_1 g_2 p_{(0,0,{\varvec{\alpha }}^*)} + (1-s_1) g_2 p_{(1,0,{\varvec{\alpha }}^*)} + g_1 (1-s_2) p_{(0,1,{\varvec{\alpha }}^*)} \\ &{}\qquad =\bar{g}_1 \bar{g}_2 \bar{p}_{(0,0,{\varvec{\alpha }}^*)} + (1-s_1) \bar{g}_2 \bar{p}_{(1,0,{\varvec{\alpha }}^*)} +\bar{g}_1 (1-s_2 ) \bar{p}_{(0,1,{\varvec{\alpha }}^*)}. \end{array}\right. } \end{aligned} \end{aligned}$$
    (17)

We further consider those response vectors with \({\varvec{r}}_{S_1}=\mathbf {0}\). A similar argument gives that, to ensure (12) holds for any \({\varvec{r}}\) with \({\varvec{r}}_{S_1}=\mathbf {0}\), it suffices to have Eqs. (14)–(17) hold. Together with the results in cases (a)–(d) discussed above, we know that Eqs. (14)–(17) are a set of sufficient conditions for (12) to hold for any \({\varvec{r}}\in \{0,1\}^J\). Therefore, to show the necessity of Condition 2, we only need to construct \((\bar{g}_1,\bar{g}_2,\bar{{\varvec{p}}})\ne ( g_1, g_2, {\varvec{p}}) \) satisfying (14)–(17), which can be equivalently written as, for any \({\varvec{\alpha }}^*\in \{0,1\}^{K-2}\), \(p_{(1,1,{\varvec{\alpha }}^*)} = \bar{p}_{(1,1,{\varvec{\alpha }}^*)}\) and

$$\begin{aligned} {\left\{ \begin{array}{ll} &{} p_{(0,0,{\varvec{\alpha }}^*)} + p_{(1,0,{\varvec{\alpha }}^*)} + p_{(0,1,{\varvec{\alpha }}^*)} = \bar{p}_{(0,0,{\varvec{\alpha }}^*)} +\bar{p}_{(1,0,{\varvec{\alpha }}^*)} +\bar{p}_{(0,1,{\varvec{\alpha }}^*)}; \\ &{} g_1 [p_{(0,0,{\varvec{\alpha }}^*)} + p_{(0,1,{\varvec{\alpha }}^*)}] + (1-s_1) p_{(1,0,{\varvec{\alpha }}^*)} = \bar{g}_1 [\bar{p}_{(0,0,{\varvec{\alpha }}^*)} + \bar{p}_{(0,1,{\varvec{\alpha }}^*)}] + (1-s_1) \bar{p}_{(1,0,{\varvec{\alpha }}^*)} ;\\ &{} g_2 [p_{(0,0,{\varvec{\alpha }}^*)} + p_{(1,0,{\varvec{\alpha }}^*)}] + (1-s_2) p_{(0,1,{\varvec{\alpha }}^*)} = \bar{g}_2 [\bar{p}_{(0,0,{\varvec{\alpha }}^*)} + \bar{p}_{(1,0,{\varvec{\alpha }}^*)}] + (1-s_2) \bar{p}_{(0,1,{\varvec{\alpha }}^*)} ;\\ &{} g_1 g_2 p_{(0,0,{\varvec{\alpha }}^*)} + (1-s_1) g_2 p_{(1,0,{\varvec{\alpha }}^*)} + g_1 (1-s_2) p_{(0,1,{\varvec{\alpha }}^*)} \\ &{}\qquad =\bar{g}_1 \bar{g}_2 \bar{p}_{(0,0,{\varvec{\alpha }}^*)} + (1-s_1) \bar{g}_2 \bar{p}_{(1,0,{\varvec{\alpha }}^*)} +\bar{g}_1 (1-s_2) \bar{p}_{(0,1,{\varvec{\alpha }}^*)}. \end{array}\right. } \end{aligned}$$
(18)

To construct \((\bar{g}_1,\bar{g}_2,\bar{{\varvec{p}}})\ne ( g_1, g_2, {\varvec{p}}) \), we focus on the family of parameters \(({\varvec{s}},{\varvec{g}},{\varvec{p}})\) such that for any \({\varvec{\alpha }}^*\in \{0,1\}^{K-2}\),

$$\begin{aligned} \frac{p_{(0,1,{\varvec{\alpha }}^*)}}{p_{(0,0,{\varvec{\alpha }}^*)}} = u \text{ and } \frac{p_{(1,0,{\varvec{\alpha }}^*)}}{p_{(0,0,{\varvec{\alpha }}^*)}} = v, \end{aligned}$$

where u and v are some positive constants. Next we choose \( \bar{{\varvec{p}}}\) such that for any \({\varvec{\alpha }}^*\in \{0,1\}^{K-2}\)

$$\begin{aligned}p_{(1,1,{\varvec{\alpha }}^*)} = \bar{p}_{(1,1,{\varvec{\alpha }}^*)}, \quad \bar{p}_{(0,0,{\varvec{\alpha }}^*)} = \bar{\rho } \cdot p_{(0,0,{\varvec{\alpha }}^*)},\quad \frac{\bar{p}_{(0,1,{\varvec{\alpha }}^*)}}{\bar{p}_{(0,0,{\varvec{\alpha }}^*)}} = \bar{u},\quad \text{ and }\quad \frac{\bar{p}_{(1,0,{\varvec{\alpha }}^*)}}{\bar{p}_{(0,0,{\varvec{\alpha }}^*)}} = \bar{v}, \end{aligned}$$

for some positive constants \(\bar{\rho }\), \(\bar{u}\), and \(\bar{v}\) to be determined. In particular, we choose \(\bar{\rho }\) close enough to 1 so that the constructed \(\bar{{\varvec{p}}}\) remains a valid probability vector; dividing each equation in (18) by \(p_{(0,0,{\varvec{\alpha }}^*)}\) then shows that (18) is equivalent to

$$\begin{aligned} \begin{aligned} {\left\{ \begin{array}{ll} &{} (1+u+v) = \bar{\rho } (1+\bar{u}+\bar{v}); \\ &{} g_1 (1+u) + (1-s_1) v = \bar{\rho } ~[~\bar{g}_1 (1+\bar{u}) + (1-s_1) \bar{v}~] ;\\ &{} g_2 (1+v) + (1-s_2) u = \bar{\rho }~ [~\bar{g}_2 (1+\bar{v}) + (1-s_2) \bar{u}~] ;\\ &{} g_1 g_2+g_1 (1-s_2) u+(1-s_1) g_2 v =\bar{\rho } ~ [~\bar{g}_1 \bar{g}_2+\bar{g}_1 (1-s_2) \bar{u}+(1-s_1) \bar{g}_2 \bar{v}~]. \end{array}\right. } \end{aligned} \end{aligned}$$
(19)

For any \(g_1,g_2, s_1, s_2, u\), and v, the above system of equations contains five free parameters \(\bar{\rho }\), \(\bar{u}\), \(\bar{v}\), \(\bar{g}_1\), and \(\bar{g}_2\), but only four constraints, so there are infinitely many solutions \((\bar{\rho }, \bar{u}, \bar{v}, \bar{g}_1, \bar{g}_2)\) to (19). This establishes the non-identifiability of \((g_1, g_2,{\varvec{p}})\) and hence the necessity of Condition 2. \(\square \)
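
This counting argument can be checked numerically: fixing \(\bar{\rho }\) close to 1 and solving the four equations in (19) for \((\bar{u},\bar{v},\bar{g}_1,\bar{g}_2)\) produces a second parameter set with exactly the same response distribution. Below is a self-contained Python sketch (ours, with hypothetical parameter values; it relies on scipy.optimize.fsolve finding the solution branch near the original parameters) for a \(4\times 2\) Q-matrix that violates Condition 2:

```python
import itertools
import numpy as np
from scipy.optimize import fsolve

# K = 2, J = 4; both columns of Q* (rows 3-4) equal v = (1,1)', so Condition 2 fails.
Q = np.array([[1, 0], [0, 1], [1, 1], [1, 1]])
s = np.array([0.20, 0.10, 0.15, 0.25])
g = np.array([0.10, 0.20, 0.25, 0.30])
u, v, c = 0.5, 0.8, 0.2                                   # p01/p00 = u, p10/p00 = v
p = {(0, 0): c, (0, 1): u * c, (1, 0): v * c, (1, 1): 1 - c * (1 + u + v)}
rho = 0.99                                                # rho-bar, close to 1

def equations(x):                                         # system (19)
    ub, vb, g1b, g2b = x
    g1, g2, s1, s2 = g[0], g[1], s[0], s[1]
    return [(1 + u + v) - rho * (1 + ub + vb),
            g1 * (1 + u) + (1 - s1) * v - rho * (g1b * (1 + ub) + (1 - s1) * vb),
            g2 * (1 + v) + (1 - s2) * u - rho * (g2b * (1 + vb) + (1 - s2) * ub),
            g1 * g2 + g1 * (1 - s2) * u + (1 - s1) * g2 * v
            - rho * (g1b * g2b + g1b * (1 - s2) * ub + (1 - s1) * g2b * vb)]

ub, vb, g1b, g2b = fsolve(equations, [u, v, g[0], g[1]])
g_bar = np.array([g1b, g2b, g[2], g[3]])                  # g_j unchanged for j > 2
p_bar = {(0, 0): rho * c, (0, 1): ub * rho * c,
         (1, 0): vb * rho * c, (1, 1): p[(1, 1)]}         # sums to 1 by the first equation

def dist(s, g, p):
    """Full DINA response distribution P(R = r), r in {0,1}^4."""
    out = []
    for r in itertools.product([0, 1], repeat=4):
        th = lambda a: np.array([1 - s[j] if np.all(np.array(a) >= Q[j]) else g[j]
                                 for j in range(4)])
        out.append(sum(w * np.prod(th(a) ** r * (1 - th(a)) ** (1 - np.array(r)))
                       for a, w in p.items()))
    return np.array(out)

assert not np.allclose(list(p.values()), list(p_bar.values()))  # distinct parameters...
assert np.allclose(dist(s, g, p), dist(s, g_bar, p_bar))        # ...same distribution
```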

Proof of Sufficiency

It suffices to show that if \(T({\varvec{s}},{\varvec{g}}){\varvec{p}}= T(\bar{{\varvec{s}}},\bar{{\varvec{g}}})\bar{{\varvec{p}}}\), then \(({\varvec{s}},{\varvec{g}},{\varvec{p}})= ( \bar{{\varvec{s}}},\bar{{\varvec{g}}},\bar{{\varvec{p}}})\). Under Condition 1, Theorem 4 in Xu and Zhang (2016) gives that \({\varvec{s}}=\bar{{\varvec{s}}}\) and \(g_j = \bar{g}_j\) for \(j\in \{K+1,\ldots ,J\}.\) It remains to show \(g_j = \bar{g}_j\) for \(j\in \{1,\ldots ,K\}\). To facilitate the proof, we introduce the following lemma, whose proof is given in the Supplementary Material.

Lemma 1

Suppose Condition 1 is satisfied. For an item set S, define \(\vee _{h\in S}\,{\varvec{q}}_h \) to be the vector of the element-wise maximum of the \({\varvec{q}}\)-vectors in S. For any \(k\in \{1,\ldots ,K\}\), if there exist two item sets, denoted by \(S_k^-\) and \(S_k^+\), which may be empty and need not be disjoint, such that

$$\begin{aligned} g_h = \bar{g}_h \ \text { for any } h\in S_k^-\cup S_k^+, \quad \text {and}\quad \vee _{h\in S_k^+}{\varvec{q}}_h - \vee _{h\in S_k^-}{\varvec{q}}_h = {\varvec{e}}_k^\top = (\varvec{0},\, \underbrace{1}_{\text {column } k},\, \varvec{0}), \end{aligned}$$
(20)

then \(g_k = \bar{g}_k\).
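
Condition (20) is straightforward to transcribe into code; a minimal sketch (ours; attributes and items 0-indexed):

```python
import numpy as np

def vee(Q, S):
    """Element-wise maximum of the q-vectors indexed by S (zero vector if S is empty)."""
    return Q[sorted(S)].max(axis=0) if S else np.zeros(Q.shape[1], dtype=int)

def condition_20_holds(Q, S_minus, S_plus, k):
    """Structural part of condition (20) for attribute k: the vee over S_plus
    minus the vee over S_minus must equal the standard basis vector e_k."""
    e_k = np.eye(Q.shape[1], dtype=int)[k]
    return np.array_equal(vee(Q, S_plus) - vee(Q, S_minus), e_k)
```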

Suppose the Q-matrix takes the form of (3); then under Condition 2, any two columns of the \((J-K)\times K\) submatrix \(Q^* \) specified in (3) are distinct. Before proceeding with the proof, we first introduce the concept of the “lexicographic order.” We denote the lexicographic order on \(\{0,1\}^{J-K}\), the space of all \((J-K)\)-dimensional binary vectors, by “\(\prec _{\text {lex}}\).” Specifically, for any \({\varvec{a}}=(a_1,\ldots ,a_{J-K})^\top \), \({\varvec{b}}=(b_1,\ldots ,b_{J-K})^\top \in \{0,1\}^{J-K}\), we write \({\varvec{a}}\prec _{\text {lex}}{\varvec{b}}\) if either \(a_1<b_1\), or there exists some \(i\in \{2,\ldots ,J-K\}\) such that \(a_i<b_i\) and \(a_j=b_j\) for all \(j<i\). For instance, the following four vectors \({\varvec{a}}_1,{\varvec{a}}_2,{\varvec{a}}_3,{\varvec{a}}_4\) in \(\{0,1\}^2\) are sorted in increasing lexicographic order:

$$\begin{aligned} {\varvec{a}}_1 = \begin{pmatrix} 0\\ 0 \end{pmatrix} \prec _{\text {lex}} {\varvec{a}}_2 = \begin{pmatrix} 0\\ 1 \end{pmatrix} \prec _{\text {lex}} {\varvec{a}}_3 = \begin{pmatrix} 1\\ 0 \end{pmatrix} \prec _{\text {lex}} {\varvec{a}}_4 = \begin{pmatrix} 1\\ 1 \end{pmatrix}. \end{aligned}$$

It is not hard to see that if the K column vectors of the submatrix \(Q^*\) are mutually distinct, then there exists a unique way to sort them in increasing lexicographic order. Thus under Condition 2, there exists a unique permutation \((k_1,k_2,\ldots ,k_K)\) of \((1,2,\ldots ,K)\) such that column \(k_1\) has the smallest lexicographic order among the K columns of \(Q^*\), column \(k_2\) has the second smallest lexicographic order, and so on; i.e., \({\varvec{q}}^*_{\cdot ,k_1}\prec _{\text {lex}}{\varvec{q}}^*_{\cdot ,k_2}\prec _{\text {lex}}\cdots \prec _{\text {lex}}{\varvec{q}}^*_{\cdot ,k_K}\), where \({\varvec{q}}^*_{\cdot ,k}\) denotes, as before, the kth column of \(Q^*\). As an illustration, for the leftmost Q-matrix presented in Example 1, Eq. (6), the permutation is \((k_1,k_2,k_3)=(3,2,1)\), since the third column of \(Q^*\) has the smallest lexicographic order, while the first column has the largest. Recall that we denote \({\varvec{a}}\succeq {\varvec{b}}\) if \(a_i\ge b_i\) for all i, and denote \({\varvec{a}}\nsucceq {\varvec{b}}\) otherwise. Then by definition, if \({\varvec{a}}\prec _{\text {lex}}{\varvec{b}}\), then \({\varvec{a}}\nsucceq {\varvec{b}}\) must hold. Therefore, for any \(1\le i<j\le K\), since \({\varvec{q}}^*_{\cdot ,k_i}\prec _{\text {lex}}{\varvec{q}}^*_{\cdot ,k_j}\), we must have \({\varvec{q}}^*_{\cdot ,k_i}\nsucceq {\varvec{q}}^*_{\cdot ,k_j}\). This fact will be useful in the following proof.
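
The permutation \((k_1,\ldots ,k_K)\) is obtained by sorting the columns of \(Q^*\) as tuples; a short sketch (ours; the \(2\times 3\) matrix below is an arbitrary stand-in, not the \(Q^*\) of Example 1, though it happens to yield the same permutation):

```python
import numpy as np

def lex_permutation(Q_star):
    """Indices of the columns of Q* sorted in increasing lexicographic order
    (0-indexed); unique whenever the columns are mutually distinct."""
    return sorted(range(Q_star.shape[1]), key=lambda k: tuple(Q_star[:, k]))

Q_star = np.array([[1, 1, 0],
                   [1, 0, 1]])
print(lex_permutation(Q_star))   # -> [2, 1, 0], i.e., (k1, k2, k3) = (3, 2, 1) 1-indexed
```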

Equipped with the permutation \((k_1,\ldots ,k_K)\), we first prove \(g_{k_1} = \bar{g}_{k_1}\). Define a subset of items

$$\begin{aligned} S_{k_1}^- = \{j>K: q_{j,k_1} = 0\}, \end{aligned}$$

which includes those items from \(\{K+1,\ldots ,J\}\) that do not require attribute \(k_1\). Since \({\varvec{q}}^*_{\cdot ,k_1}\) is of the smallest lexicographic order among the column vectors of \(Q^*\), for any \(k\in \{1,\ldots ,K\}\backslash \{k_1\}\) we must have \({\varvec{q}}^*_{\cdot ,k_1}\prec _{\text {lex}}{\varvec{q}}^*_{\cdot ,k}\) and hence \({\varvec{q}}^*_{\cdot ,k_1}\nsucceq {\varvec{q}}^*_{\cdot ,k}\). Thus, for any \(k\in \{1,\ldots ,K\}\backslash \{k_1\}\) there must exist some item \(j_k\in \{K+1,\ldots ,J\}\) such that \(q_{j_k,k} = 1 > 0 = q_{j_k,k_1},\) which indicates that the union of the attributes required by items in \(S_{k_1}^-\) includes all the attributes other than \(k_1\), i.e.,

$$\begin{aligned} \vee _{h\in S_{k_1}^-}{\varvec{q}}_h = (\mathbf {1},\, \underbrace{0}_{\text {column } k_1},\, \mathbf {1}). \end{aligned}$$

We further define \(S_{k_1}^+ = \{K+1,\ldots ,J\}\). Since \(S_{k_1}^-\) and \(S_{k_1}^+\) satisfy conditions (20) in Lemma 1 for attribute \(k_1\), we have \(g_{k_1} = \bar{g}_{k_1}.\)

Next we use induction to prove that \(g_{k_l}=\bar{g}_{k_l}\) for \(l=2,\ldots ,K\) as well. In particular, suppose that for any \(1\le m\le l-1\) we already have \(g_{k_m} = \bar{g}_{k_m}\). Note that each \(k_l\) is an integer in \(\{1,\ldots ,K\}\) that can be viewed as either the index of the \(k_l\)th attribute or the index of the \(k_l\)th item (recall that under form (3) the first K items form the identity submatrix). Define a set of items

$$\begin{aligned} S^{-}_{k_l} = \{j>K: q_{j,k_l} = 0\} \cup \{k_m:1\le m\le l-1\}, \end{aligned}$$
(21)

where the set \(\{j > K : q_{j,k_l}=0\}\) contains those items, among the last \(J-K\) items, that do not require attribute \(k_l\), and the set \(\{k_m :1\le m\le l-1\}\) contains those items whose guessing parameters were already identified in steps \(m=1,2,\ldots ,l-1\) of the induction, i.e., \(g_{k_m}=\bar{g}_{k_m}\) for \(m=1,\ldots ,l-1\). Thus for any item \(j\in S_{k_l}^-\), we have \(g_{j} = \bar{g}_{j}\); that is, \(S_{k_l}^-\) includes only items whose guessing parameters have been identified prior to step l. Moreover, we claim

$$\begin{aligned} \vee _{h\in S_{k_l}^-}{\varvec{q}}_h = (\mathbf {1},\, \underbrace{0}_{\text {column } k_l},\, \mathbf {1}). \end{aligned}$$
(22)

This is because, for any \(1\le m\le l-1\), the item \(k_m\), whose \({\varvec{q}}\)-vector is \({\varvec{e}}_{k_m}^\top \), is included in the set \(S_{k_l}^-\), and hence attribute \(k_m\) is required by the set \(S_{k_l}^-\); on the other hand, for any \(h\in \{l+1,\ldots , K\}\), the column vector \({\varvec{q}}^*_{\cdot ,k_h}\) is of greater lexicographic order than \({\varvec{q}}^*_{\cdot ,k_l}\), and hence there must exist some item in \(S_{k_l}^-\) that does not require attribute \(k_l\) but requires attribute \(k_h\). We further define \(S_{k_l}^+ = \{K+1,\ldots ,J\}\). The chosen \(S_{k_l}^-\) and \(S_{k_l}^+\) satisfy conditions (20) in Lemma 1, and therefore \(g_{k_l} = \bar{g}_{k_l}.\)
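
The whole induction can be run mechanically on any Q of the form (3): sort the columns of \(Q^*\) lexicographically, then verify claim (22) at every step. A sketch (ours; 0-indexed, on a hypothetical \(6\times 3\) complete Q whose \(Q^*\) has distinct columns):

```python
import numpy as np

def identified_order(Q):
    """Follow the sufficiency proof: for Q = (I_K; Q*) with distinct Q* columns,
    return the order (k_1, ..., k_K) in which the guessing parameters are
    identified, checking the vee-condition (22) at each induction step."""
    K = Q.shape[1]
    Q_star = Q[K:]
    order = sorted(range(K), key=lambda k: tuple(Q_star[:, k]))   # lexicographic
    done = []                                                     # identified so far
    for kl in order:
        # S^-_{k_l}: items beyond the identity block not requiring k_l,
        # plus the identity items whose g is already identified.
        S_minus = [j for j in range(K, Q.shape[0]) if Q[j, kl] == 0] + done
        target = np.ones(K, dtype=int)
        target[kl] = 0                                            # (1,...,0 at k_l,...,1)
        assert np.array_equal(Q[S_minus].max(axis=0), target)     # claim (22)
        done.append(kl)                                           # g_{k_l} identified
    return order

Q = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [1, 1, 0], [1, 0, 1], [0, 1, 1]])
print(identified_order(Q))                                        # -> [2, 1, 0]
```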

Now that all the slipping and guessing parameters have been identified, \(T({\varvec{s}},{\varvec{g}}) {\varvec{p}}= T(\bar{{\varvec{s}}},\bar{{\varvec{g}}}) \bar{{\varvec{p}}} = T({\varvec{s}},{\varvec{g}}) \bar{{\varvec{p}}}\). Then, the fact that \(T({\varvec{s}},{\varvec{g}})\) has full column rank, which is shown in the proof of Theorem 1 in Xu and Zhang (2016), implies \({\varvec{p}}= \bar{{\varvec{p}}}.\) This completes the proof. \(\square \)

Cite this article

Gu, Y., & Xu, G. (2019). The Sufficient and Necessary Condition for the Identifiability and Estimability of the DINA Model. Psychometrika, 84, 468–483. https://doi.org/10.1007/s11336-018-9619-8