1 Introduction

Topic modeling is a rapidly developing branch of statistical text analysis (Blei 2012). A probabilistic topic model of a text collection defines each topic by a multinomial distribution over words, and then describes each document with a multinomial distribution over topics. Such a representation reveals the hidden thematic structure of the collection and motivates the use of topic models in information retrieval, classification, categorization, summarization and segmentation of texts.

Latent Dirichlet allocation (LDA) (Blei et al. 2003) is the most popular probabilistic topic model. LDA is a two-level Bayesian generative model, in which topic distributions over words and document distributions over topics are generated from prior Dirichlet distributions. This assumption reduces model complexity and facilitates Bayesian inference due to the conjugacy of Dirichlet and multinomial distributions.

Hundreds of LDA extensions have been developed recently to model natural language phenomena and to incorporate additional information about authors, time, labels, categories, citations, links, etc., (Daud et al. 2010).

Nevertheless, building combined and multi-objective topic models remains a difficult problem within the Bayesian approach, because inference becomes complicated when the prior is non-conjugate. This open issue is little discussed in the literature. An evolutionary approach has been proposed recently (Khalifa et al. 2013), but it seems to be computationally infeasible for large text collections.

Another difficulty is that the Dirichlet prior conflicts with natural assumptions of sparsity. A document usually contains a small number of topics, and a topic usually consists of a small number of domain-specific terms. Therefore, most words and topics must have zero probabilities. Sparsity helps to save memory and time in modeling large text collections. However, Bayesian approaches to sparsing (Shashanka et al. 2008; Wang and Blei 2009; Larsson and Ugander 2011; Eisenstein et al. 2011; Chien and Chang 2013) suffer from an internal contradiction with the Dirichlet prior, which cannot produce vectors with zero elements.

To address the above problems we introduce a non-Bayesian semi-probabilistic approach—Additive Regularization of Topic Models (ARTM). Learning a topic model from a document collection is an ill-posed problem of approximate stochastic matrix factorization, which has an infinite set of solutions. To choose a better solution, we add regularization penalty terms to the log-likelihood. Any problem-oriented regularizers or their linear combination may be used instead of Dirichlet prior or together with it. The idea of ARTM is inspired by Tikhonov’s regularization of ill-posed inverse problems (Tikhonov and Arsenin 1977).

Additive regularization differs from Bayesian approach in several aspects.

Firstly, we do not aim to build a fully generative probabilistic model of text. Many requirements for a topic model can be more naturally formalized in terms of optimization criteria rather than prior distributions. Regularizers may have no probabilistic interpretation at all. The structure of regularized models is so straightforward that their representation and explication in terms of graphical models is no longer needed. Thus, ARTM falls into the trend of avoiding excessive probabilistic assumptions in natural language processing.

Secondly, we use the regularized expectation–maximization (EM) algorithm instead of more complicated Bayesian inference. We do not use conjugate priors, integrations, and variational approximations. Despite these fundamental differences both approaches often result in the same or very similar learning algorithms, but in ARTM the inference is much shorter.

Thirdly, ARTM considerably simplifies both design and inference of multi-objective topic models. At the design stage we formalize each requirement for the model in a form of a regularizer—a criterion to be maximized. At the inference stage we simply differentiate each regularizer with respect to the model parameters.

ARTM also differs from previous regularization techniques each designed for a particular regularizer such as KL-divergence, Dirichlet prior, \(L_1\) or \(L_2\) penalty terms (Si and Jin 2005; Chien and Wu 2008; Wang et al. 2011; Larsson and Ugander 2011). ARTM is not an incremental improvement of a particular topic model, but a new instrument for building and combining topic models much easier than in the state-of-the-art Bayesian approach.

The aim of the paper is to introduce a new regularization framework for topic modeling and to provide an initial pool of useful regularizers.

The rest of the paper is organized as follows.

In Sect. 2 we describe the probabilistic latent semantic analysis (PLSA) model, the historical predecessor of LDA. We introduce the EM-algorithm from an optimization point of view. Then we show experimentally on synthetic data that both PLSA and LDA give non-unique and unstable solutions. Further we use PLSA as a more appropriate base for a stronger problem-oriented regularization.

In Sect. 3 we introduce the ARTM approach and prove general equations for regularized EM-algorithm. It is a major theoretical contribution of the paper.

In Sect. 4 we work out a pool of regularizers by revising known topic models. We propose an alternative interpretation of LDA as a regularizer that minimizes Kullback–Leibler divergence with a fixed multinomial distribution. Then we consider regularizers for smoothing, sparsing, semi-supervised learning, topic correlation and decorrelation, topic coherence maximization, document linking, and document classification. Most of them require tedious calculations within the Bayesian approach, whereas ARTM leads to similar results “in one line”.

In Sect. 5 we combine three regularizers from our pool to build a highly sparse and well interpretable topic model. We propose to monitor many quality measures during EM-iterations to choose the regularization path empirically for a multi-objective topic model. In our experiment we measure sparsity, kernel size, coherence, purity, and contrast of the topics. We show that ARTM improves all measures at once almost without any loss of the hold-out perplexity.

In Sect. 6 we discuss advantages and limitations of ARTM.

2 Topic models PLSA and LDA

Let \(D\) denote a set (collection) of texts and \(W\) denote a set (vocabulary) of all terms from these texts. Each term can represent a single word as well as a key phrase. Each document \({d\in D}\) is a sequence of \(n_d\) terms \((w_1,\ldots ,w_{n_d})\) from the vocabulary \(W\). Each term might appear multiple times in the same document.

Assume that each term occurrence in each document refers to some latent topic from a finite set of topics \(T\). Text collection is considered to be a sample of triples \((w_i,d_i,t_i),\, {i=1,\ldots ,n}\) drawn independently from a discrete distribution \(p(w,d,t)\) over a finite space \(W\times D \times T\). Term \(w\) and document \(d\) are observable variables, while topic \(t\) is a latent (hidden) variable. Following the “bag of words” model, we represent each document by a subset of terms \(d\subset W\) and the corresponding integers \(n_{dw}\), which count how many times the term \(w\) appears in the document \(d\).

Conditional independence is an assumption that each topic generates terms regardless of the document: \(p(w\ {\vert }\ t) = p(w\ {\vert }\ d,t)\). According to the law of total probability and the assumption of conditional independence

$$\begin{aligned} p(w\ {\vert }\ d) = \sum _{t\in T} p(t\ {\vert }\ d)\, p(w\ {\vert }\ t). \end{aligned}$$
(1)

The probabilistic model (1) describes how the collection \(D\) is generated from the known distributions \(p(t\ {\vert }\ d)\) and \(p(w\ {\vert }\ t)\). Learning a topic model is an inverse problem: to find distributions \(p(t\ {\vert }\ d)\) and \(p(w\ {\vert }\ t)\) given a collection \(D\). This problem is equivalent to finding an approximate representation of counter matrix

$$\begin{aligned} F = \bigl ( \hat{p}_{wd} \bigr )_{W{\times }D}, \quad \hat{p}_{wd} = \hat{p}(w\ {\vert }\ d) = \tfrac{n_{dw}}{n_d}, \end{aligned}$$
(2)

as a product \({F \approx \varPhi \varTheta }\) of two unknown matrices—the matrix \(\varPhi \) of term probabilities for the topics and the matrix \(\varTheta \) of topic probabilities for the documents:

$$\begin{aligned} \begin{array}{rlrlrl} \varPhi &{}= (\phi _{wt})_{W{\times }T},\;\;\;\; &{} \phi _{wt} &{}= p(w\ {\vert }\ t),\;\;\;\; &{} \phi _t &{}= (\phi _{wt})_{w\in W}; \\ \varTheta &{}= (\theta _{td})_{T{\times }D},\;\;\;\; &{} \theta _{td} &{}= p(t\ {\vert }\ d),\;\;\;\; &{} \theta _d &{}= (\theta _{td})_{t\in T}. \end{array} \end{aligned}$$
(3)

Matrices \(F\), \(\varPhi \) and \(\varTheta \) are stochastic, that is, they have non-negative and normalized columns representing discrete distributions. Usually the number of topics \(|T|\) is much smaller than the collection size \(|D|\) and the vocabulary size \(|W|\).

In probabilistic latent semantic analysis, PLSA (Hofmann 1999), the topic model (1) is learned by log-likelihood maximization under linear constraints:

$$\begin{aligned} L(\varPhi ,\varTheta )&= \ln \prod _{d\in D}\prod _{w\in d} p(w\ {\vert }\ d)^{n_{dw}} = \sum _{d\in D} \sum _{w\in d} n_{dw}\ln \sum _{t\in T} \phi _{wt}\theta _{td} \;\rightarrow \; \max _{\varPhi ,\varTheta }; \end{aligned}$$
(4)
$$\begin{aligned} \sum _{w\in W} \phi _{wt}&= 1, \quad \phi _{wt}\geqslant 0; \qquad \sum _{t\in T} \theta _{td} = 1, \quad \theta _{td}\geqslant 0. \end{aligned}$$
(5)

Theorem 1

The stationary point of the optimization problem (4), (5) satisfies the system of equations with auxiliary variables \(p_{tdw}\), \(n_{wt}\), \(n_{td}\), \(n_{t}\), \(n_{d}\)

$$\begin{aligned} p_{tdw}&= \frac{\phi _{wt}\theta _{td}}{\sum _{s\in T}\phi _{ws}\theta _{sd}}; \end{aligned}$$
(6)
$$\begin{aligned} \phi _{wt}&= \frac{n_{wt}}{n_t}, \qquad n_{wt} = \sum _{d\in D} n_{dw}p_{tdw}, \qquad n_{t} = \sum _{w\in W} n_{wt}; \end{aligned}$$
(7)
$$\begin{aligned} \theta _{td}&= \frac{n_{td}}{n_d}, \qquad n_{td} = \sum _{w\in d} n_{dw}p_{tdw}, \qquad n_{d} = \sum _{t\in T} n_{td}. \end{aligned}$$
(8)

This statement follows from the Karush–Kuhn–Tucker (KKT) conditions. We will prove a more general theorem in the sequel. The system of Eqs. (6)–(8) can be solved by various numerical methods. In particular, the simple-iteration method is equivalent to the EM algorithm, which is typically used in practice.

The EM algorithm repeats two steps in a loop.

The expectation step or E-step (6) can be understood as the Bayes’ rule for the probability distribution \(p(t\ {\vert }\ d,w)\):

$$\begin{aligned} p_{tdw} = p(t\ {\vert }\ d,w) = \frac{p(w,t|d)}{p(w|d)} = \frac{p(w|t)p(t|d)}{p(w|d)} = \frac{\phi _{wt}\theta _{td}}{\sum _{s}\phi _{ws}\theta _{sd}}. \end{aligned}$$
(9)

The value \(n_{tdw}=n_{dw}p_{tdw}\) estimates how many times the term \(w\) appears in the document \(d\) with relation to the topic \(t\).

The maximization step or M-step (7), (8) can therefore be interpreted as frequency estimates for the conditional probabilities \(\phi _{wt}\) and \(\theta _{td}\).

Algorithm 2.1 reorganizes EM iterations by incorporating the E-step inside the M-step. Thus it avoids storage of a three-dimensional array \(p_{tdw}\). Each EM iteration is a run through the entire collection.

Equations (6)–(8) can be rewritten in a shorter notation by omitting normalization and using the proportionality sign: \(p_{tdw} \propto \phi _{wt}\theta _{td};\; \phi _{wt} \propto n_{wt};\; \theta _{td} \propto n_{td}\).

Algorithm 2.1 EM-algorithm for PLSA with the E-step incorporated inside the M-step
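A minimal NumPy sketch of one such iteration is given below (the function and variable names are illustrative and not taken from the paper's pseudocode): the E-step (6) is computed on the fly for every term of every document, the counters of (7), (8) are accumulated, and the three-dimensional array \(p_{tdw}\) is never stored.

```python
import numpy as np

def plsa_em_iteration(n_dw, phi, theta):
    """One rational EM pass for PLSA, Eqs. (6)-(8).

    n_dw  : (D, W) array of term counts n_{dw}
    phi   : (W, T) array, phi[w, t] = p(w|t)
    theta : (T, D) array, theta[t, d] = p(t|d)
    """
    W, T = phi.shape
    D = theta.shape[1]
    n_wt = np.zeros((W, T))
    n_td = np.zeros((T, D))
    for d in range(D):                          # one run through the collection
        for w in np.nonzero(n_dw[d])[0]:        # only terms occurring in document d
            p_t = phi[w, :] * theta[:, d]       # E-step (6), unnormalized
            z = p_t.sum()
            if z > 0:
                n_tdw = n_dw[d, w] * p_t / z    # n_{dw} * p_{tdw}
                n_wt[w, :] += n_tdw             # accumulate counters (7)
                n_td[:, d] += n_tdw             # accumulate counters (8)
    phi_new = n_wt / np.maximum(n_wt.sum(axis=0, keepdims=True), 1e-12)    # phi_wt = n_wt / n_t
    theta_new = n_td / np.maximum(n_td.sum(axis=0, keepdims=True), 1e-12)  # theta_td = n_td / n_d
    return phi_new, theta_new
```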

In latent Dirichlet allocation (LDA) the parameters \(\varPhi ,\varTheta \) are constrained by an assumption that the vectors \(\phi _t\) and \(\theta _d\) are drawn from Dirichlet distributions with hyperparameters \({\beta =(\beta _w)_{w\in W}}\) and \({\alpha =(\alpha _t)_{t\in T}}\) respectively (Blei et al. 2003). Learning algorithms for LDA generally fall into two categories—sampling-based algorithms (Steyvers and Griffiths 2004) and variational algorithms (Teh et al. 2006). In Gibbs sampling (LDA-GS) a topic \(t\) is sampled from the probability distribution \(p(t\ {\vert }\ d,w)\) for each term occurrence \({w=w_i}\), then the counters \(n_{wt},n_{td},n_{t},n_{d}\) are increased by 1. Learning algorithms for LDA can also be considered as EM-like algorithms with a modified M-step (Asuncion et al. 2009). The simplest and most frequently used modification is

$$\begin{aligned} \phi _{wt} \propto n_{wt}+\beta _w, \qquad \theta _{td} \propto n_{td}+\alpha _t. \end{aligned}$$
(10)

It has been generally recognized since the work of Blei et al. (2003) that LDA is less subject to overfitting than PLSA. Nevertheless, recent experiments show that the performance of PLSA and LDA differs insignificantly on large text collections (Masada et al. 2008; Wu et al. 2010; Lu et al. 2011). The reason is that the optimal values of the hyperparameters \(\beta _w\) and \(\alpha _t\) are usually close to zero (Wallach et al. 2009). Therefore they affect only the small values \(n_{wt}\) and \(n_{td}\) corresponding to the rare terms of topics and rare topics of documents. Robust variants of the PLSA and LDA models describe rare terms by a separate model component and have nearly the same performance (Potapenko and Vorontsov 2013). This means that LDA reduces overfitting only for insignificantly rare terms and topics. Thus overfitting does not seem to be such a serious problem for probabilistic topic models.

In contrast, the non-uniqueness, which causes the instability of the solution, is a serious problem. The likelihood (4) depends on the product \(\varPhi \varTheta \), which is defined up to a linear transformation: \(\varPhi \varTheta = (\varPhi S) (S^{-1}\varTheta )\), where \({\varPhi ' = \varPhi S}\) and \({\varTheta ' = S^{-1}\varTheta }\) are stochastic matrices. The transformation \(S\) is not controlled by EM-like algorithms and may depend on random initialization.

We performed the following experiment on the synthetic data in order to assess the ability of PLSA and LDA to restore true matrices \(\varPhi ,\varTheta \). The collection was generated with the parameters \({|W|=1{,}000},\, {|D|=500},\, {|T|=30}\), the lengths of the documents \(n_d\in [100, 600]\) were chosen randomly. Columns of the matrices \(\varPhi ,\varTheta \) were drawn from the symmetric Dirichlet distributions with parameters \(\beta ,\alpha \) respectively. The differences between the restored distributions \(\hat{p}(i\ {\vert }\ j)\) and the synthetic ones \(p(i\ {\vert }\ j),\, j=1,\ldots ,m\) were measured by the average Hellinger distance both for the matrices \(\varPhi ,\varTheta \) and for their product:

$$\begin{aligned} D_{\varPhi }&= H(\hat{\varPhi }, \varPhi ); \quad D_{\varTheta } = H(\hat{\varTheta }, \varTheta ); \quad D_{\varPhi \varTheta } = H(\hat{\varPhi }\hat{\varTheta }, \varPhi \varTheta ); \end{aligned}$$
(11)
$$\begin{aligned} H(\hat{p}, p)&= \frac{1}{m}\sum _{j=1}^m \sqrt{ \frac{1}{2}\sum _{i} \left( \sqrt{\hat{p}(i\ {\vert }\ j)} - \sqrt{p(i\ {\vert }\ j)} \right) ^2}. \end{aligned}$$
(12)
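A direct translation of (11), (12) into NumPy might look as follows (a sketch with illustrative names; matching the restored topics to the synthetic ones, e.g. by a column permutation, is assumed to be done beforehand):

```python
import numpy as np

def avg_hellinger(p_hat, p):
    """Average Hellinger distance (12) between the columns of two stochastic
    matrices of the same shape (rows index outcomes i, columns index j)."""
    per_column = np.sqrt(0.5 * ((np.sqrt(p_hat) - np.sqrt(p)) ** 2).sum(axis=0))
    return per_column.mean()

# The three errors of (11):
# D_Phi      = avg_hellinger(phi_hat, phi)
# D_Theta    = avg_hellinger(theta_hat, theta)
# D_PhiTheta = avg_hellinger(phi_hat @ theta_hat, phi @ theta)
```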

PLSA and LDA turned out to restore the matrices \(\varPhi ,\varTheta \) much worse than their product, see Figs. 1 and 2. The error depends on the sparsity of the original matrices \(\varPhi ,\varTheta \). In our experiments LDA did not perform well even when we used the same hyperparameters \(\alpha ,\beta \) for synthetic data generation and for the LDA-GS algorithm.

Fig. 1 Errors in restoring the matrices \(\varPhi \), \(\varTheta \) and \(\varPhi \varTheta \) over hyperparameter \(\beta \) while \({\alpha = 0.01}\) is fixed, for LDA Gibbs sampling (left chart) and PLSA-EM (right chart)

Fig. 2 Errors in restoring the matrices \(\varPhi \), \(\varTheta \) and \(\varPhi \varTheta \) over hyperparameter \(\alpha \) while \({\beta = 0.01}\) is fixed, for LDA Gibbs sampling (left chart) and PLSA-EM (right chart)

These facts show that the Dirichlet distribution is too weak as a regularizer. More problem-oriented regularizers are needed to formalize additional restrictions on the matrices \(\varPhi ,\varTheta \) and to ensure uniqueness and stability of the solution. Therefore our starting point will be the PLSA model, free of regularizers, rather than the LDA model, even though the latter is more popular in recent research.

3 EM-algorithm with additive regularization

Consider \(r\) additional objectives \(R_i(\varPhi ,\varTheta ),\, {i=1,\ldots ,r}\), called regularizers. To maximize these objectives together with the likelihood (4), we consider their linear combination with nonnegative regularization coefficients \(\tau _i\):

$$\begin{aligned} R(\varPhi ,\varTheta ) = \sum _{i=1}^r \tau _i R_i(\varPhi ,\varTheta ), \qquad L(\varPhi ,\varTheta ) + R(\varPhi ,\varTheta ) \;\rightarrow \; \max _{\varPhi ,\varTheta }. \end{aligned}$$
(13)

Topic \(t\) is called regular if \({n_{wt} + \phi _{wt} \frac{\partial R}{\partial \phi _{wt}} > 0}\) for at least one term \({w\in W}\). If the reverse inequality holds for all \({w\in W}\) then topic \(t\) is called overregularized.

Document \(d\) is called regular if \({n_{td} + \theta _{td} \frac{\partial R}{\partial \theta _{td}} > 0}\) for at least one topic \({t\in T}\). If the reverse inequality holds for all \({t\in T}\) then document \(d\) is called overregularized.

Theorem 2

If the function \(R(\varPhi ,\varTheta )\) is continuously differentiable and \((\varPhi ,\varTheta )\) is the local maximum of the problem (13), (5), then for any regular topic \(t\) and any regular document \(d\) the system of equations holds:

$$\begin{aligned} p_{tdw}&= \frac{\phi _{wt}\theta _{td}}{\sum _{s\in T}\phi _{ws}\theta _{sd}}; \end{aligned}$$
(14)
$$\begin{aligned} \phi _{wt}&\propto \biggl ( n_{wt} + \phi _{wt} \frac{\partial R}{\partial \phi _{wt}} \biggr )_{+};&n_{wt}&= \sum _{d\in D} n_{dw}p_{tdw};&\end{aligned}$$
(15)
$$\begin{aligned} \theta _{td}&\propto \biggl ( n_{td} + \theta _{td} \frac{\partial R}{\partial \theta _{td}} \biggr )_{+};&n_{td}&= \sum _{w\in d} n_{dw}p_{tdw};&\end{aligned}$$
(16)

where \((z)_+ = \max \{z,0\}\).

Note 1

If a topic \(t\) is overregularized then (15) gives \(\phi _t=0\). In this case we have to exclude the topic \(t\) from the model. Topic overregularization is a mechanism that can eliminate irrelevant topics and optimize the number of topics.

Note 2

If a document \(d\) is overregularized then Eq. (16) gives \(\theta _d=0\). In this case we have to exclude the document \(d\) from the model. For example, a document may be too short, or have no relation to the thematics of a given collection.

Note 3

Theorem 1 is a particular case of Theorem 2 with \({R(\varPhi ,\varTheta )=0}\).

Proof

For the local maximum \((\varPhi ,\varTheta )\) of the problem (13), (5) the KKT conditions can be written as follows:

$$\begin{aligned} \sum _{d} n_{dw} \frac{\theta _{td}}{p(w\ {\vert }\ d)} + \frac{\partial R}{\partial \phi _{wt}} = \lambda _t - \lambda _{wt}; \quad \lambda _{wt}\geqslant 0; \quad \lambda _{wt}\phi _{wt} = 0, \end{aligned}$$
(17)

where \(\lambda _t\) and \(\lambda _{wt}\) are the KKT multipliers for the normalization and nonnegativity constraints respectively. Let us multiply both sides of the first equation by \(\phi _{wt}\), identify the right-hand side of (14) and replace it by the left-hand side variable \(p_{tdw}\). Then we apply the definition of \(n_{wt}\) from (15):

$$\begin{aligned} \phi _{wt} \lambda _t = \sum _{d} n_{dw} \frac{\phi _{wt}\theta _{td}}{p(w\ {\vert }\ d)} + \phi _{wt} \frac{\partial R}{\partial \phi _{wt}} = n_{wt} + \phi _{wt} \frac{\partial R}{\partial \phi _{wt}}. \end{aligned}$$
(18)

The assumption \(\lambda _t\leqslant 0\) contradicts the regularity condition for the topic \(t\), hence \({\lambda _t>0}\) and \({\phi _{wt}\geqslant 0}\). The left-hand side of (18) is then nonnegative, thus the right-hand side is nonnegative too; consequently,

$$\begin{aligned} \phi _{wt} \lambda _t = \biggl ( n_{wt} + \phi _{wt} \frac{\partial R}{\partial \phi _{wt}} \biggr )_{+}. \end{aligned}$$
(19)

Let us sum both sides of this equation over all \({w\in W}\):

$$\begin{aligned} \lambda _t = \sum _{w\in W} \biggl ( n_{wt} + \phi _{wt} \frac{\partial R}{\partial \phi _{wt}} \biggr )_{+}. \end{aligned}$$
(20)

Finally, we obtain (15) by expressing \(\phi _{wt}\) from (19) and (20).

The equations for \(\theta _{td}\) are derived analogously, which completes the proof. \(\square \)

The system of Eqs. (14)–(16) defines a regularized EM-algorithm. It keeps E-step (6) and redefines M-step by regularized Eqs. (15), (16). Thus, the EM-algorithm for learning regularized topic models can be implemented by easy modification of any EM-like algorithm at hand. Particularly, in Algorithm 2.1 we are to modify only steps 8 and 9 according to Eqs. (15), (16).
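As an illustration, the sketch below (with illustrative names; the partial derivatives of the regularizer are assumed to be supplied as precomputed arrays) shows how the plain frequency normalization of the M-step is replaced by the regularized updates (15), (16):

```python
import numpy as np

def regularized_m_step(n_wt, n_td, phi, theta, dR_dphi, dR_dtheta):
    """Regularized M-step, Eqs. (15)-(16): the counters are shifted by
    phi*dR/dphi and theta*dR/dtheta, truncated at zero, and re-normalized.

    dR_dphi, dR_dtheta : arrays of the same shapes as phi and theta holding
    the partial derivatives of the regularizer R at the current point.
    """
    phi_new = np.maximum(n_wt + phi * dR_dphi, 0.0)        # (15), unnormalized
    theta_new = np.maximum(n_td + theta * dR_dtheta, 0.0)  # (16), unnormalized
    phi_norm = phi_new.sum(axis=0, keepdims=True)
    theta_norm = theta_new.sum(axis=0, keepdims=True)
    # an all-zero column corresponds to an overregularized topic or document
    phi_new = np.divide(phi_new, phi_norm, out=np.zeros_like(phi_new), where=phi_norm > 0)
    theta_new = np.divide(theta_new, theta_norm, out=np.zeros_like(theta_new), where=theta_norm > 0)
    return phi_new, theta_new
```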

4 Regularization criteria for topic models

In this section we collect a pool of regularizers that can be used in any combination or separately. We revise some well-known topic models that were originally developed within the Bayesian approach. We show that ARTM gives similar or more general results through a much simpler inference based on Theorem 2.

We will intensively use the Kullback–Leibler divergence (relative entropy) to measure the difference between multinomial distributions \((p_i)_{i=1}^n\) and \((q_i)_{i=1}^n\):

$$\begin{aligned} \mathrm{KL}(p \Vert q) \equiv \mathrm{KL}_i(p_i \Vert q_i) = \sum _{i=1}^n p_i \ln \frac{p_i}{q_i}. \end{aligned}$$
(21)

Recall that the minimization of the KL-divergence is equivalent to maximizing the likelihood of the model distribution \(q\) for the empirical distribution \(p\).

Smoothing regularization and LDA Let us minimize the KL-divergence between the distributions \(\phi _t\) and a fixed distribution \({\beta =(\beta _w)_{w\in W}}\), and the KL-divergence between \(\theta _d\) and a fixed distribution \({\alpha =(\alpha _t)_{t\in T}}\):

$$\begin{aligned} \sum _{t\in T} \mathrm{KL}_w (\beta _w \Vert \phi _{wt}) \rightarrow \min _{\varPhi }, \qquad \sum _{d\in D} \mathrm{KL}_t (\alpha _t \Vert \theta _{td}) \rightarrow \min _{\varTheta }. \end{aligned}$$
(22)

After summing these criteria with coefficients \(\beta _0,\alpha _0\) and removing constants we get the regularizer

$$\begin{aligned} R(\varPhi ,\varTheta ) = \beta _0 \sum _{t\in T} \sum _{w\in W} \beta _w \ln \phi _{wt} + \alpha _0 \sum _{d\in D} \sum _{t\in T} \alpha _t \ln \theta _{td} \rightarrow \max . \end{aligned}$$
(23)

The regularized M-step (15) and (16) gives equations

$$\begin{aligned} \phi _{wt} \propto n_{wt} + \beta _0\beta _w, \qquad \theta _{td} \propto n_{td} + \alpha _0\alpha _t, \end{aligned}$$
(24)

which are exactly the same as the M-step (10) in LDA model with hyperparameter vectors \({\beta =\beta _0(\beta _w)_{w\in W}}\) and \({\alpha =\alpha _0(\alpha _t)_{t\in T}}\) of the Dirichlet distributions.

The non-Bayesian interpretation of the smoothing regularization in terms of KL-divergence is simple, natural, and avoids complicated inference.

Sparsing regularization The opposite regularization strategy is to maximize KL-divergence between \(\phi _t\), \(\theta _d\) and fixed distributions \(\beta ,\alpha \):

$$\begin{aligned} R(\varPhi ,\varTheta ) = -\beta _0 \sum _{t\in T} \sum _{w\in W} \beta _w \ln \phi _{wt} -\alpha _0 \sum _{d\in D} \sum _{t\in T} \alpha _t \ln \theta _{td} \rightarrow \max . \end{aligned}$$
(25)

For example, to find sparse distributions \(\phi _{wt}\) with lower entropy we may choose the uniform distribution \(\beta _{w}= \frac{1}{|W|}\), which is known to have the largest entropy.

The regularized M-step (15) and (16) gives equations that differ from the smoothing equations in the sign of the parameters \(\beta ,\alpha \):

$$\begin{aligned} \phi _{wt} \propto \bigl ( n_{wt} - \beta _0\beta _w \bigr )_+, \qquad \theta _{td} \propto \bigl ( n_{td} - \alpha _0\alpha _t \bigr )_+. \end{aligned}$$
(26)

The idea of entropy-based sparsing was originally proposed in the dynamic PLSA for video processing to produce sparse distributions of topics over time (Varadarajan et al. 2010). The conflict between the Dirichlet prior and the sparsity assumption leads to sophisticated sparse LDA models (Shashanka et al. 2008; Wang and Blei 2009; Eisenstein et al. 2011; Larsson and Ugander 2011; Chien and Chang 2013). A simple and natural sparsing becomes possible by abandoning the Dirichlet prior within the ARTM semi-probabilistic regularization framework.
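For both regularizers the derivatives required by (15), (16) reduce to constant shifts of the counters. A hedged sketch (illustrative names) that can feed the regularized M-step sketch above:

```python
import numpy as np

def smoothing_sparsing_derivatives(phi, theta, beta, alpha, beta0, alpha0, sign=+1.0):
    """Partial derivatives of the regularizer (23) (sign=+1, smoothing) or
    (25) (sign=-1, sparsing). Since phi * dR/dphi = sign * beta0 * beta_w and
    theta * dR/dtheta = sign * alpha0 * alpha_t, plugging these into (15), (16)
    reproduces the M-step formulas (24) and (26).

    beta : (W,) fixed distribution over terms; alpha : (T,) fixed distribution over topics.
    """
    dR_dphi = sign * beta0 * beta[:, None] / np.maximum(phi, 1e-12)
    dR_dtheta = sign * alpha0 * alpha[:, None] / np.maximum(theta, 1e-12)
    return dR_dphi, dR_dtheta
```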

Smoothing regularization for semi-supervised learning Consider a collection, which is partially labeled by experts: each document \(d\) from a subset \({D_0\subseteq D}\) is associated with a subset of topics \({T_d \subset T}\), and each topic \(t\) from a subset \({T_0\subset T}\) is associated with a subset of terms \({W_t \subset W}\). It is usually expected that labeling information helps to improve the interpretability of topics.

Consider the regularizer that minimizes KL-divergence between \(\phi _t\), \(\theta _d\) and uniform distributions \(\beta _{wt}=\tfrac{1}{|W_t|}[w\in W_t],\; \alpha _{td}=\tfrac{1}{|T_d|}[t\in T_d]\) respectively:

$$\begin{aligned} R(\varPhi ,\varTheta ) = \beta _0 \sum _{t\in T_0} \sum _{w\in W} \beta _{wt} \ln \phi _{wt} + \alpha _0 \sum _{d\in D_0} \sum _{t\in T} \alpha _{td} \ln \theta _{td} \rightarrow \max . \end{aligned}$$
(27)

The regularized M-step (15) and (16) gives another kind of smoothing:

$$\begin{aligned} \phi _{wt}&\propto n_{wt} + \beta _0 \beta _{wt}\, [t\in T_0];\end{aligned}$$
(28)
$$\begin{aligned} \theta _{td}&\propto n_{td} + \alpha _0 \alpha _{td}\, [d\in D_0]. \end{aligned}$$
(29)

This can be considered as yet another generalization of LDA, in which vectors \(\beta ,\alpha \) are different for the respective distributions \(\phi _{t},\theta _{d}\) depending on labeled data.

Decorrelation of topics Reducing the overlap between the topic-word distributions is known to make the learned topics more interpretable (Tan and Ou 2010). A regularizer that minimizes the covariance between the vectors \(\phi _t\),

$$\begin{aligned} R(\varPhi ) = - \gamma \sum _{t\in T} \sum _{s\in T\backslash t} \sum _{w\in W} \phi _{wt}\phi _{ws} \rightarrow \max , \end{aligned}$$
(30)

leads to the following equation of the M-step:

$$\begin{aligned} \phi _{wt} \propto \Bigl ( n_{wt} - \gamma \phi _{wt} \sum _{s\in T\backslash t}\phi _{ws} \Bigr )_+. \end{aligned}$$
(31)

From this formula we conclude that for each term \(w\) the highest probabilities \(\phi _{wt}\) will increase even further, while small probabilities will decrease from iteration to iteration, and may eventually turn into zeros. Therefore, this regularizer also stimulates sparsity. Besides, it has another useful property, which is to group stop-words into a separate topic (Tan and Ou 2010).
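The sum over the other topics in (31) can be computed for all words at once from the row sums of \(\varPhi \). A sketch (illustrative names; any constant factor arising from the symmetric double sum in (30) is assumed to be absorbed into \(\gamma \)):

```python
import numpy as np

def decorrelation_m_step(n_wt, phi, gamma):
    """Unnormalized M-step update (31) for the decorrelation regularizer (30)."""
    other = phi.sum(axis=1, keepdims=True) - phi          # sum of phi_ws over s in T \ {t}
    return np.maximum(n_wt - gamma * phi * other, 0.0)    # then normalize the columns
```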

Covariance regularization for documents Sometimes we possess information that some documents are likely to share similar topics. For example, they may fall into the same category or one document may have a reference or a link to the other. Making use of this information in terms of the regularizer, we get:

$$\begin{aligned} R(\varTheta ) = \tau \sum _{d,c} n_{dc} \sum _{t\in T}\theta _{td}\theta _{tc} \rightarrow \max , \end{aligned}$$
(32)

where \(n_{dc}\) is the weight of the link between documents \(d\) and \(c\). A similar model LDA-JS by Dietz et al. (2007) is based on the minimization of Jensen–Shannon divergence between \(\theta _d\) and \(\theta _c\), rather than on the covariance maximization.

According to (16), the equation for \(\theta _{td}\) in the M-step turns into

$$\begin{aligned} \theta _{td} \propto n_{td} + \;\tau \theta _{td} \sum _{c\in D} n_{dc}\theta _{tc}. \end{aligned}$$
(33)

This is a kind of smoothing regularizer, which adjusts probabilities \(\theta _{td}\) so that they become closer to \(\theta _{tc}\) for all documents \(c\), connected with \(d\).

Correlated topic model (CTM) was first introduced by Blei and Lafferty (2007) to find strong correlations between topics. For example, a document about geology is more likely to also be about archeology than about genetics.

In CTM the correlation between topics is modeled by an assumption that document vectors \(\theta _d\) are generated by logistic normal prior distribution:

$$\begin{aligned} \theta _{td} = \frac{\exp (\eta _{td})}{\sum _{s\in T} \exp (\eta _{sd})}; \qquad p(\eta _d\ {\vert }\ \mu ,\varSigma ) = \frac{\exp \bigl ( -\tfrac{1}{2} (\eta _d-\mu )^\mathsf{\tiny T } \varSigma ^{-1} (\eta _d-\mu ) \bigr )}{(2\pi )^{\frac{n}{2}} |\varSigma |^{\frac{1}{2}}}, \end{aligned}$$
(34)

where the \(|T|\)-vector \(\mu \) and the \(|T|\times |T|\) covariance matrix \(\varSigma \) are the parameters of the Gaussian distribution. Document vectors \({\eta _d \in {{\mathbb {R}}}^{|T|}}\) are determined by the corresponding vectors \(\theta _{d}\) up to an arbitrary document-dependent constant \(C_d\):

$$\begin{aligned} \eta _{td} = \ln \theta _{td}+C_d. \end{aligned}$$
(35)

Initially CTM was developed within the Bayesian approach, although Bayesian inference is complicated by the fact that the logistic normal distribution is not conjugate to the multinomial. We argue that the very idea of CTM can be alternatively implemented and more easily understood within the ARTM approach.

In terms of ARTM we define a regularizer as the log-likelihood of the logistic normal model for a sample of the document vectors \(\eta _d\):

$$\begin{aligned} R(\varTheta ) = \tau \sum _{d\in D} \ln p(\eta _d\ {\vert }\ \mu ,\varSigma ) = -\dfrac{\tau }{2} \sum _{d\in D} (\eta _d-\mu )^\mathsf{\tiny T } \varSigma ^{-1} (\eta _d-\mu ) +{{\mathrm {const}}}\rightarrow \max .\qquad \end{aligned}$$
(36)

According to (16) the equation for \(\theta _{td}\) in the M-step turns into

$$\begin{aligned} \theta _{td} \propto \left( n_{td} - \;\tau \sum _{s\in T} \tilde{\varSigma }_{ts} \left( \ln \theta _{sd}-\mu _s\right) \right) _+, \end{aligned}$$
(37)

where \(\varSigma ^{-1} = (\tilde{\varSigma }_{ts})_{T\times T}\) is the inverse covariance matrix.

The parameters \(\varSigma ,\mu \) of the Gaussian distribution are assumed to be constant during an iteration. Following the idea of block-coordinate optimization, we estimate them after each run through the collection (in Algorithm 2.1 after step 9):

$$\begin{aligned} \mu&= \frac{1}{|D|} \sum _{d\in D} \ln \theta _d; \end{aligned}$$
(38)
$$\begin{aligned} \varSigma&= \frac{1}{|D|} \sum _{d\in D} \bigl (\ln \theta _d - \mu \bigr ) \bigl (\ln \theta _d - \mu \bigr )^\mathsf{\tiny T }. \end{aligned}$$
(39)

Then we invert the covariance matrix and turn insignificant values \(\tilde{\varSigma }_{ts}\) into zeros to obtain a sparse solution and to reduce computations in (37). Blei and Lafferty (2007) propose to use lasso regression for this purpose.
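A hedged sketch of the block-coordinate estimates (38), (39) and of the regularized update (37) (illustrative names; \(\eta _d\) is taken as \(\ln \theta _d\), i.e. with \(C_d=0\)):

```python
import numpy as np

def ctm_prior_estimates(theta, eps=1e-12):
    """Block-coordinate estimates (38), (39) of mu and Sigma after a pass."""
    log_theta = np.log(np.maximum(theta, eps))     # (T, D) matrix of eta_d columns
    mu = log_theta.mean(axis=1)                    # (38)
    centered = log_theta - mu[:, None]
    sigma = centered @ centered.T / theta.shape[1] # (39)
    return mu, sigma

def ctm_m_step(n_td, theta, mu, sigma_inv, tau, eps=1e-12):
    """Unnormalized M-step update (37) for the CTM regularizer (36)."""
    log_theta = np.log(np.maximum(theta, eps))
    return np.maximum(n_td - tau * sigma_inv @ (log_theta - mu[:, None]), 0.0)
```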

Coherence regularization A topic is called coherent if its most frequent words typically appear nearby in the documents—either in the training collection, or in some external corpus like Wikipedia. An average topic coherence is considered to be a good interpretability measure of a topic model (Newman et al. 2010b).

Let \({C_{wv} = \hat{p}(w\ {\vert }\ v)}\) denote an estimate of the co-occurrence of word pairs \({(w,v)\in W^2}\). Usually, \(C_{wv}\) is defined as a portion of the documents that contain both words \(v\) and \(w\) in a sliding window of ten words.

Let us estimate the conditional probability \(p(w\ {\vert }\ t)\) from \({\phi _{vt} = p(v\ {\vert }\ t)}\) over all coherent words \(v\) using the law of total probability:

$$\begin{aligned} \hat{p}(w\ {\vert }\ t) = \sum _{v\in W\backslash w} C_{wv} \phi _{vt} = \sum _{v\in W\backslash w} \frac{C_{wv}n_{vt}}{n_t}. \end{aligned}$$
(40)

Consider a regularizer which minimizes the weighted sum of KL-divergences between the empirical distribution \(\hat{p}(w\ {\vert }\ t)\) and the model distribution \(\phi _{wt}\):

$$\begin{aligned} R(\varPhi ) = \tau \sum _{t\in T} n_t \sum _{w\in W} \hat{p}(w\ {\vert }\ t) \ln \phi _{wt} \rightarrow \max . \end{aligned}$$
(41)

According to (15) the equation of the M-step turns into

$$\begin{aligned} \phi _{wt} \propto n_{wt} + \tau \sum _{v\in W\backslash w} C_{wv} n_{vt}. \end{aligned}$$
(42)

The same formula was derived by Mimno et al. (2011) for LDA model and Gibbs Sampling algorithm, from more complicated reasoning through a generalized Polya urn model and a more complex heuristic estimate for \(C_{wv}\).
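As a sketch, (42) is a single matrix product, provided the co-occurrence matrix is stored with a zero diagonal so that the sum excludes \(v=w\) (illustrative names):

```python
import numpy as np

def coherence_m_step(n_wt, C, tau):
    """Unnormalized M-step update (42) for the coherence regularizer (41).
    C is a (W, W) matrix of co-occurrence estimates C_wv with zero diagonal."""
    return n_wt + tau * C @ n_wt    # (C @ n_wt)[w, t] = sum_v C_wv * n_vt
```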

Newman et al. (2011) propose yet another regularizer:

$$\begin{aligned} R(\varPhi ) = \tau \sum _{t\in T} \,\ln \sum _{u,v\in W} C_{uv}\phi _{ut}\phi _{vt} \rightarrow \max , \end{aligned}$$
(43)

where \({C_{uv} = N_{uv}}\) if \({{\mathrm {PMI}}}(u,v)>0\) and \({C_{uv} = 0}\) otherwise, pointwise mutual information \({{{\mathrm {PMI}}}(u,v) = \ln \frac{|D|N_{uv}}{N_u N_v}}\) depends on document frequencies: \(N_{uv}\) is the number of documents that contain both words \(u,v\) in a sliding window of ten words, \(N_u\) is the number of documents that contain at least one occurrence of the word \(u\).

Thus we conclude that there is no commonly accepted approach to the coherence optimization in the literature. All approaches that we have found so far can be easily expressed in terms of ARTM without Dirichlet priors.

Document classification Let \(C\) be a finite set of classes. Suppose each document \(d\) is labeled by a subset of classes \(C_d \subset C\). The task is to infer a relationship between classes and topics, to improve a topic model by using labeling information, and to learn a decision rule, which is able to classify new documents. Common discriminative approaches such as SVM or logistic regression usually give unsatisfactory results on large text collections with a large number of unbalanced and interdependent classes. A probabilistic topic model is beneficial in this situation because it processes all classes simultaneously (Rubin et al. 2012).

There are many examples of document labeling in the literature. Classes may refer to text categories (Rubin et al. 2012; Zhou et al. 2009), authors (Rosen-Zvi et al. 2004), time periods (Cui et al. 2011; Varadarajan et al. 2010), cited documents (Dietz et al. 2007), cited authors (Kataria et al. 2011), users of documents (Wang and Blei 2011). More information about special models can be found in the survey (Daud et al. 2010). All these models fall into several groups and all of them can be easily expressed in terms of ARTM. Below we consider a close analogue of Dependency LDA (Rubin et al. 2012), one of the most general topic models for document classification.

We expand the probability space to the set \(D\times W\times T\times C\) and assume that each term \(w\) in document \(d\) is related to both topic \({t\in T}\) and class \({c\in C}\). To classify documents we model distribution \(p(c\ {\vert }\ d)\) over classes for each document \(d\). We assume that classes of a document are determined by its topics, then conditional independence assumption \({p(c\ {\vert }\ t) = p(c\ {\vert }\ d,t)}\) is satisfied. This allows us to express \(p(c\ {\vert }\ d)\) in terms of class probabilities for the topics \({p(c\ {\vert }\ t) = \psi _{ct}}\) and topic probabilities for the documents \({p(t\ {\vert }\ d) = \theta _{td}}\) in the way that is similar to the basic topic model (1):

$$\begin{aligned} p(c\ {\vert }\ d) = \sum _{t\in T} \psi _{ct} \theta _{td}. \end{aligned}$$
(44)

Thus we introduce a third stochastic matrix of model parameters \({\varPsi =(\psi _{ct})_{C\times T}}\).

Another conditional independence \(p(w,c\ {\vert }\ d) = p(w\ {\vert }\ d) \, p(c\ {\vert }\ d)\) allows us to split the log-likelihood into a PLSA term \(L(\varPhi ,\varTheta )\) as in (4) and a regularization term \(Q(\varPsi ,\varTheta )\):

$$\begin{aligned} \ln \prod _{d\in D}\prod _{w\in d} p(w,c\ {\vert }\ d)^{n_{dw}}&= L(\varPhi ,\varTheta ) + \tau Q(\varPsi ,\varTheta ) \;\rightarrow \; \max _{\varPhi ,\varTheta ,\varPsi }; \end{aligned}$$
(45)
$$\begin{aligned} Q(\varPsi ,\varTheta )&= \sum _{d\in D}\sum _{c\in C} m_{dc} \ln \sum _{t\in T} \psi _{ct} \theta _{td}, \end{aligned}$$
(46)

where \(m_{dc}\) is the empirical frequency of classes in document \(d\). It can be estimated via a uniform distribution over classes: \({m_{dc} = n_d\frac{[c\in C_d]}{|C_d|}}\). The regularization coefficient \(\tau \) may be set to 1, or it may be used to trade off the document language model \(p(w\ {\vert }\ d)\) and the document classification model \(p(c\ {\vert }\ d)\). The regularizer \(Q\) can be considered as the minimization of the KL-divergence between the probability model of classification \(p(c\ {\vert }\ d)\) and the empirical class frequency \(m_{dc}\). The problem (45), (46) can still be solved via the regularized EM-like algorithm due to the following generalization of Theorem 2.

Theorem 3

If the function \(R(\varPhi ,\varPsi ,\varTheta )\) of stochastic matrices \(\varPhi ,\varPsi ,\varTheta \) is continuously differentiable and \((\varPhi ,\varPsi ,\varTheta )\) is the local maximum of \({L(\varPhi ,\varTheta ) + \tau Q(\varPsi ,\varTheta ) + R(\varPhi ,\varPsi ,\varTheta )}\) then for any regular topic \(t\) and any regular document \(d\) the system of equations holds:

$$\begin{aligned} p_{tdw}&= \frac{\phi _{wt}\theta _{td}}{\sum _{s\in T}\phi _{ws}\theta _{sd}};&p_{tdc}&= \frac{\psi _{ct}\theta _{td}}{\sum _{s\in T}\psi _{cs}\theta _{sd}}; \end{aligned}$$
(47)
$$\begin{aligned} \phi _{wt}&\propto \biggl ( n_{wt} + \phi _{wt} \frac{\partial R}{\partial \phi _{wt}} \biggr )_{+};&n_{wt}&= \sum _{d\in D} n_{dw}p_{tdw}; \end{aligned}$$
(48)
$$\begin{aligned} \psi _{ct}&\propto \biggl ( m_{ct} + \psi _{ct} \frac{\partial R}{\partial \psi _{ct}} \biggr )_{+};&m_{ct}&= \sum _{d\in D} m_{dc}p_{tdc}; \end{aligned}$$
(49)
$$\begin{aligned} \theta _{td}&\propto \biggl ( n_{td} + \tau m_{td} + \theta _{td} \frac{\partial R}{\partial \theta _{td}} \biggr )_{+};&n_{td}&= \sum _{w\in d} n_{dw}p_{tdw};\;\; m_{td} = \sum _{c\in C_d} m_{dc}p_{tdc}.\qquad \end{aligned}$$
(50)

We omit the proof, which is analogous to the proof of Theorem 2.

Regularization term \(R(\varPhi ,\varPsi ,\varTheta )\) can include Dirichlet prior for \(\varPsi \), as in Dependency LDA, but sparsing seems to be a more natural choice.

Another useful example of \(R\) is label regularization.

Label regularization is known to improve multi-label classification for unbalanced classes (Mann and McCallum 2007; Rubin et al. 2012). We encourage the similarity between the model distribution \(p(c)\) and the empirical class frequency \(\hat{p}_c\) in the training data:

$$\begin{aligned} R(\varPsi ) = \xi \sum _{c\in C} \hat{p}_c \ln p(c) \rightarrow \max , \quad p(c) = \sum _{t\in T} \psi _{ct} p(t), \quad p(t) = \frac{n_t}{n}, \end{aligned}$$
(51)

where \(\xi \) is the regularization coefficient. The formula for the M-step (49)

$$\begin{aligned} \psi _{ct} \propto m_{ct} + \xi \hat{p}_c \frac{\psi _{ct} n_t}{\sum _{s\in T} \psi _{cs} n_s} \end{aligned}$$
(52)

results in smoothing of distributions \(\psi _{ct}\) proportionally to the frequencies \(\hat{p}_c\).
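A sketch of the update (52) (illustrative names; the counters \(m_{ct}\) and \(n_t\) are assumed to be accumulated as in Theorem 3):

```python
import numpy as np

def label_regularization_m_step(m_ct, psi, n_t, p_hat_c, xi):
    """Unnormalized M-step update (52) for the label regularizer (51).

    m_ct    : (C, T) counters from (49)
    psi     : (C, T) current class probabilities for the topics
    n_t     : (T,)  topic counters, so that p(t) = n_t / n
    p_hat_c : (C,)  empirical class frequencies
    """
    p_c = psi @ n_t                                  # proportional to p(c) = sum_t psi_ct p(t)
    smooth = p_hat_c[:, None] * psi * n_t[None, :] / np.maximum(p_c[:, None], 1e-12)
    return m_ct + xi * smooth                        # then normalize the columns
```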

5 Combining regularizers for sparsing and improving interpretability

Interpretability of a topic is a poorly formalized requirement. Essentially what it means is that, provided with the list of the most frequent terms and the most representative documents of a topic, a human can understand its meaning and give it an appropriate name. The interpretability is an important property for information retrieval, systematization and visualization of text collections.

Most of the existing approaches involve human assessment. Newman et al. (2009) ask experts to assess the usefulness of topics on a 3-point scale. Chang et al. (2009) prepare lists of the 10 most frequent words for each topic, intruding one random word into each list. A topic is considered to be interpretable if experts can correctly identify the intrusion word. A human-based approach is important at the research stage, but it precludes a fully automatic construction of the topic model.

Coherence is the most popular automatic measure, which is known to correlate well with human estimates of the interpretability (Newman et al. 2010a, b; Mimno et al. 2011). Coherence measures how often the most probable words of the topic occur nearby in the documents from the underlying collection or from external polythematic collection such as Wikipedia.

In this paper we propose another formalization of interpretability, which also does not require human assessment. We assume that each interpretable topic contains its own lexical kernel—a set of terms specific to a particular domain area, which have a high probability in this topic and lower probabilities in the other topics. The lexical kernel of a topic should be free of common lexis words, which frequently occur in many documents. Thus, we want to find matrices \(\varPhi \) and \(\varTheta \) with a sparsity structure similar to the one displayed in Fig. 3. To do this we split the set of topics \(T\) into two subsets: domain-specific topics \(S\) and background topics \(B\).

Fig. 3 An example of sparse matrices \(\varPhi \) and \(\varTheta \) with specific and background topics. Background topics are shown as the two rightmost columns in \(\varPhi \) and the two lowest rows in \(\varTheta \)

Domain-specific topic \({t\in S}\) contains terms of a particular domain area. Domain-specific distributions \(p(w\ {\vert }\ t)\) are sparse and weakly correlated. Their corresponding distributions \(p(d\ {\vert }\ t)\) are also sparse, because each domain-specific topic occurs in a relatively small number of documents.

Background topic \({t\in B}\) contains common lexis words. Background distributions \(p(w\ {\vert }\ t)\) and \(p(d\ {\vert }\ t)\) are smooth, because background words occur in many documents. A topic model with background can be considered as a generalization of robust models, which use only one background distribution (Chemudugunta et al. 2007; Potapenko and Vorontsov 2013).

Combining sparsing, smoothing, and decorrelation To obtain the sparsity structure of the \(\varPhi \) and \(\varTheta \) matrices shown in Fig. 3, we propose a combination of five regularizers: smoothing of background topics in matrices \(\varPhi \) and \(\varTheta \), sparsing of domain-specific topics in matrices \(\varPhi \) and \(\varTheta \), and decorrelation of domain-specific topics in matrix \(\varPhi \):

$$\begin{aligned} R(\varPhi ,\varTheta ) =&- \beta _0 \sum _{t\in S} \sum _{w\in W} \beta _w \ln \phi _{wt} - \alpha _0 \sum _{d\in D} \sum _{t\in S} \alpha _t \ln \theta _{td} \nonumber \\ {}&\quad + \beta _1 \sum _{t\in B} \sum _{w\in W} \beta _w \ln \phi _{wt} + \alpha _1 \sum _{d\in D} \sum _{t\in B} \alpha _t \ln \theta _{td}\nonumber \\ {}&\quad - \gamma \sum _{t\in T} \sum _{s\in T\backslash t} \sum _{w\in W} \phi _{wt}\phi _{ws} \rightarrow \max . \end{aligned}$$
(53)

We use uniform distribution \(\alpha _t\) and two types of background distribution \(\beta _w\): either a uniform distribution, or the term frequency estimates \({\beta _w = n_w/n}\).

Then we obtain M-step formulas for a combined model from (15) and (16):

$$\begin{aligned} \phi _{wt}&\propto \Biggl (n_{wt} - \beta _0 \underbrace{\beta _{w} [t \in S]}_{\begin{array}{c} \text {sparsing} \\ \text {specific}\\ \text {topic} \end{array}} {} + \beta _1 \underbrace{\beta _{w} [t \in B]}_{\begin{array}{c} \text {smoothing}\\ \text {background}\\ \text {topic} \end{array}} {} - \gamma \underbrace{[t \in S]\, \phi _{wt} \sum _{s\in S\backslash t} \phi _{ws}}_{\text {decorrelation}} {} \Biggr )_{+};\end{aligned}$$
(54)
$$\begin{aligned} \theta _{td}&\propto \Biggl (n_{td} - \alpha _0 \underbrace{\alpha _{t} [t \in S]}_{\begin{array}{c} \text {sparsing}\\ \text {specific}\\ \text {topic} \end{array}} {} + \alpha _1 \underbrace{\alpha _{t} [t \in B]}_{\begin{array}{c} \text {smoothing}\\ \text {background}\\ \text {topic} \end{array}}{}\Biggr )_{+}. \end{aligned}$$
(55)
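A hedged sketch of the combined updates (54), (55) (illustrative names; topics are identified by boolean masks for \(S\) and \(B\), and the factor from the symmetric double sum in the decorrelation term is assumed to be absorbed into \(\gamma \)):

```python
import numpy as np

def combined_m_step(n_wt, n_td, phi, beta, alpha,
                    beta0, beta1, alpha0, alpha1, gamma, specific, background):
    """Unnormalized M-step updates (54), (55): sparsing of domain-specific topics,
    smoothing of background topics, and decorrelation of domain-specific topics.

    specific, background : boolean masks over the topics (subsets S and B)
    beta : (W,) distribution over terms, alpha : (T,) distribution over topics
    """
    phi_new = n_wt.astype(float)
    phi_new[:, specific] -= beta0 * beta[:, None]                     # sparsing specific topics
    phi_new[:, background] += beta1 * beta[:, None]                   # smoothing background topics
    phi_s = phi[:, specific]
    other = phi_s.sum(axis=1, keepdims=True) - phi_s                  # sum over s in S \ {t}
    phi_new[:, specific] -= gamma * phi_s * other                     # decorrelation

    theta_new = n_td.astype(float)
    theta_new[specific, :] -= alpha0 * alpha[specific][:, None]       # sparsing
    theta_new[background, :] += alpha1 * alpha[background][:, None]   # smoothing
    return np.maximum(phi_new, 0.0), np.maximum(theta_new, 0.0)       # then normalize the columns
```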

Regularization trajectory A linear combination of multiple regularizers \(R_i\) depends on a vector of regularization coefficients \({\mathbf {\tau }= (\tau _i)_{i=1}^r}\), which is hard to optimize. A similar problem has been efficiently solved in ElasticNet with a regularization path technique specially developed for a combination of \(L_1\) and \(L_2\) regularization (Friedman et al. 2010). In topic modeling a much larger variety of regularizers is used. An extremely large coefficient may conflict with other regularizers, slow down convergence, or degenerate the model. Conversely, an extremely small coefficient effectively disables the regularization. According to the theory of regularization of ill-posed inverse problems (Tikhonov and Arsenin 1977), we must reduce the regularization coefficient down to zero during the iterations in order to achieve a correct regularized solution. Optimizing the convergence rate is usually task-dependent and should be controlled manually in the experiment.

We therefore define the regularization trajectory as a multidimensional vector \(\mathbf {\tau }\), which is a function of the iteration number and, possibly, of the model quality measures. In our experiments we choose the regularization trajectory by analyzing experimentally how changes of the regularization coefficients affect the quality measures of the model during the iterations.

Quality measures Learning a topic model from a text collection can be considered as a constrained multi-criteria optimization problem. Therefore, the quality of a topic model should also be measured by a set of criteria. Below we describe a set of quality measures that we use in our experiments.

The accuracy of a topic model \(p(w\ {\vert }\ d)\) on the collection \(D\) is commonly evaluated in terms of perplexity, which is closely related to the likelihood (the lower the perplexity, the better):

$$\begin{aligned} {{\fancyscript{P}}}(D,p) = \exp \Bigl ( -\frac{1}{n} L(\varPhi ,\varTheta ) \Bigr ) = \exp \biggl ( -\frac{1}{n} \sum _{d\in D} \sum _{w\in d} n_{dw} \ln p(w\ {\vert }\ d)\biggr ). \end{aligned}$$
(56)

The hold-out perplexity \({{\fancyscript{P}}}(D',p_D)\) of the model \(p_D\) trained on the collection \(D\) is evaluated on the test set of documents \(D'\) not intersecting \(D\). In our experiments we split the collection in proportion \(|D|:|D'|=9:1\). Each document \(d\) from the test set is further randomly split into two halves: the first one is used to estimate parameters \(\theta _d\), and the second one is used in the perplexity evaluation. The terms in the second halves that did not appear in \(D\) are ignored. Parameters \(\phi _{t}\) are estimated from the training set \(D\).
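A sketch of (56) as a matrix computation (illustrative names; the hold-out protocol of splitting test documents into halves is applied outside this function):

```python
import numpy as np

def perplexity(n_dw, phi, theta, eps=1e-12):
    """Perplexity (56) of the model p(w|d) = sum_t phi_wt * theta_td
    on a collection given by the (D, W) count matrix n_dw."""
    p_wd = phi @ theta                        # (W, D) matrix of p(w|d)
    log_p = np.log(np.maximum(p_wd.T, eps))   # (D, W), aligned with n_dw
    n = n_dw.sum()
    return np.exp(-(n_dw * log_p).sum() / n)
```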

The sparsity of a model is measured by the ratio of zero elements in matrices \(\varPhi \) and \(\varTheta \) over domain-specific topics \(S\).

The background ratio is the proportion of background terms in the collection:

$$\begin{aligned} {{\fancyscript{B}}}= \frac{1}{n} \sum _{d\in D}\sum _{w\in d}\sum _{t\in B} n_{dw} p(t\ {\vert }\ d,w). \end{aligned}$$
(57)

It takes values from 0 to 1. If \({{\fancyscript{B}}}\) is close to 0 then the model does not eliminate common lexis from domain-specific topics. If \({{\fancyscript{B}}}\) is close to 1 then the model is degenerate, possibly due to excessive sparsing.

We define the lexical kernel \(W_t\) of a topic \(t\) as a set of terms that distinguish the topic \(t\) from the other topics: \({W_t = \{w:p(t\ {\vert }\ w)>\delta \}}\). In our experiments we set \({\delta =0.25}\). Then we define a set of measures, which characterize the conformity of the matrix \(\varPhi \) with the sparse structure shown in Fig. 3:

  • kernel size \({{{\mathrm {ker}}}_t = |W_t|}\); reasonable values are about \(\frac{|W|}{|T|}\);

  • purity \({{\mathrm {pur}}}_t = \sum _{w \in W_t} p(w\ {\vert }\ t)\), the higher the better;

  • contrast \({{\mathrm {con}}}_t = \frac{1}{|W_t|} \sum _{w \in W_t} p(t\ {\vert }\ w)\), the higher the better.

The coherence of a topic \(t\) is defined as the pointwise mutual information averaged over all word pairs from the top-\(k\) most probable words of the topic \(t\):

$$\begin{aligned} {{\fancyscript{C}}}^k_t = \frac{2}{k(k-1)} \sum _{i=1}^{k-1} \sum _{j=i+1}^k{{\mathrm {PMI}}} (w_i,w_j), \end{aligned}$$
(58)

where \(w_i\) is the \(i\)th term in the list of \({w\in W}\) sorted by \(\phi _{wt}\) in descending order. A typical approach is to calculate the top-10 coherence. In addition, we estimate the coherence of the top-100 words and the coherence of the topic kernel.

Finally, we define the corresponding measures of kernel size, purity, contrast, and coherence for the topic model by averaging over domain-specific topics \({t\in S}\).
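A sketch of how the kernel-based measures can be computed from \(\varPhi \) and the topic counters (illustrative names; \(p(t\ {\vert }\ w)\) is recovered by Bayes' rule with \(p(t)=n_t/n\)):

```python
import numpy as np

def kernel_measures(phi, n_t, delta=0.25):
    """Kernel size, purity and contrast of every topic, given Phi and the counters n_t."""
    p_wt = phi * n_t[None, :]                      # proportional to p(w, t)
    p_t_given_w = p_wt / np.maximum(p_wt.sum(axis=1, keepdims=True), 1e-12)
    kernel = p_t_given_w > delta                   # W_t = {w : p(t|w) > delta}
    ker = kernel.sum(axis=0)                       # kernel size |W_t|
    pur = (phi * kernel).sum(axis=0)               # purity: sum of p(w|t) over W_t
    con = (p_t_given_w * kernel).sum(axis=0) / np.maximum(ker, 1)  # contrast
    return ker, pur, con
```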

Text collection In our experiments we used the NIPS dataset, which contains \({|D| = 1{,}566}\) English articles from the Neural Information Processing Systems conference. The length of the collection in words is \({n \approx 2.3 \times 10^6}\). The vocabulary size is \({|W| \approx 1.3 \times 10^4}\). We held out \({|D'|=174}\) documents for the test set. In the preprocessing step we used the BOW toolkit (McCallum 1996) to perform lowercasing, punctuation elimination, and stop-word removal.

Experimental results In all experiments in this section the number of iterations was set to 40, and the number of topics was set to \(|T|=100\) with \(|B|=10\) background topics.

In Table 1 we compare PLSA (first row), LDA (second row) and multiple regularized topic models. The first three columns define a combination of regularizers; the other columns correspond to the quality measures described above.

We use a regularized EM-algorithm with smoothing (23) for LDA model with symmetric Dirichlet prior and usually recommended parameters \({\alpha =0.5},\, {\beta =0.01}\).

We use a uniform smoothing for background topics with \({\alpha = 0.8},\, {\beta = 0.1}\).

We use a uniform distribution \({\beta _w = \frac{1}{|W|}}\) or background distribution \({\beta _w = \frac{n_w}{n}}\) for sparsing domain-specific topics.

From Table 1 we conclude that the combination of sparsing, smoothing and decorrelation significantly improves all quality measures at once. Sparsing gives up to 98 % zero elements in \(\varPhi \) and 87 % zero elements in \(\varTheta \). Decorrelation improves purity and coherence. Smoothing helps to transfer common lexis words from domain-specific topics to background topics. A slight loss of the hold-out perplexity is consistent with an observation of Chang et al. (2009) that models which achieve better predictive perplexity often have less interpretable latent spaces.

Table 1 Topic models with various combinations of regularizers: smoothing (Sm), sparsing (Sp) with uniform (u) or background (b) distribution, and decorrelation (Dc)

In experiments we use convergence charts to compare different models and to choose regularization trajectories \({\mathbf {\tau }= (\alpha _0,\alpha _1,\beta _0,\beta _1,\gamma )}\). A convergence chart represents each quality measure of the topic model as a function of the iteration step. These charts give insight into the effects of each regularizer when it is used alone or in combination with others.

Figures 4, 5, and 6 show convergence charts for PLSA and two ARTM regularized models. Quality measures are shown in three charts for each model. The left chart represents a hold-out perplexity \({{\fancyscript{P}}}\) on the left-hand axis, sparsity \({{\fancyscript{S}}}_\varPhi ,{{\fancyscript{S}}}_\varTheta \) of matrices \(\varPhi ,\varTheta \) and background ratio \({{\fancyscript{B}}}\) on the right-hand axis. The middle chart represents kernel size (ker) on the left-hand axis, purity (pur) and contrast (con) on the right-hand axis. The right chart represents the coherence of top10 words \({{\fancyscript{C}}}^{10}\), top100 words \({{\fancyscript{C}}}^{100}\), and kernel words \({{\fancyscript{C}}}^{\text {ker}}\) on the left-hand axis.

Figure 4 shows that PLSA does not sparse the matrices \(\varPhi ,\varTheta \) and gives too low a topic purity. It also fails to identify background words.

Figure 5 shows the cumulative effect of sparsing domain-specific topics (with background distribution \(\beta _w\)) and smoothing background topics.

Figure 6 shows that decorrelation augments purity and coherence. Also it helps to move common lexis words from the domain-specific topics to the background topics. As a result, the background ratio reaches almost 80 %.

Again, note the important effect of regularization for the ill-posed problem: some of the quality measures may change significantly even after the likelihood converges, with no change or only a slight increase of the perplexity.

Because of space limitations we cannot show all the convergence charts that we analyzed in our experiments while choosing a satisfactory regularization trajectory. Below we present only our final recommendations.

Fig. 4 Convergence charts for the PLSA topic model

Fig. 5 Convergence charts for ARTM combining sparsing and smoothing

Fig. 6 Convergence charts for ARTM combining sparsing, smoothing, and decorrelation

It is better to switch on sparsing after the iterative process enters the convergence stage, when it becomes clear which elements of the matrices \(\varPhi , \varTheta \) are close to zero. Earlier or more abrupt sparsing may increase the perplexity. We enabled sparsing at the 10th iteration and gradually adjusted the regularization coefficient to turn into zeros 8 % of the non-zero elements in each vector \(\theta _d\) and 10 % in each column \(\phi _t\) per iteration.

Smoothing of the background topics is better started from the very first iteration, with constant regularization coefficients.

Decorrelation can also be activated from the first iteration, with the maximum regularization coefficient that does not yet significantly increase the perplexity. For our collection we chose \({\gamma =2\times 10^5}\).

6 Discussion and conclusions

Learning a topic model from a text collection is an ill-posed problem of stochastic matrix factorization. It generally has infinitely many solutions, which is why solutions computed algorithmically are usually unstable and depend on random initialization. Bayesian regularization in latent Dirichlet allocation does not cope with this problem, indicating that the Dirichlet prior is too weak as a regularizer. More problem-oriented regularizers are needed to restrict the set of solutions.

In this paper we propose a semi-probabilistic approach named ARTM—Additive Regularization of Topic Models. It is based on the maximization of the weighted sum of the log-likelihood and additional regularization criteria. Learning a topic model is considered as a multi-criteria optimization problem, which is then reduced to a single-criterion problem via scalarization. To solve the optimization problem we use a general regularized EM-algorithm. Compared to the dominant Bayesian approach, ARTM avoids excessive probabilistic assumptions, simplifies the inference of the topic model and allows any combination of regularizers to be used.

ARTM provides the theoretical background for developing a library of unified regularizers. With such a library, topic models for various applications could be built simply by choosing a suitable combination of regularizers from the pool.

In this paper we introduced a general framework of ARTM under the following constraints, which we intend to remove in further research work.

We confined ourselves to a bag-of-words representation of text collection, and have not considered more sophisticated topic models such as hierarchical, multigram, multilingual, etc. Applying additive regularization to these models will probably require more efforts.

We have worked out only one numerical method—regularized EM-algorithm, suitable for a broad class of regularizers. Alternative optimization techniques as well as their convergence and stability have not yet been considered.

Our review of regularizers is far from being complete. Besides, in our experimental study we have investigated only three of them: sparsing, smoothing, and decorrelation. We argue that this combination improves the interpretability of topics and therefore it is useful for many topic modeling applications. Extensive experiments with combinations of a wider set of regularizers are left beyond the scope of this paper.

Finally, having faced the problem of regularization trajectory optimization, we confined ourselves to a very simple visual technique for monitoring the convergence process and comparing topic models empirically.