Connectionist language models offer many advantages over their statistical counterparts, but they also have drawbacks, such as a much higher computational cost. This paper describes a novel method to overcome this problem: a set of normalization values associated with the most frequent n-grams is pre-computed, and the model is smoothed with lower-order n-gram connectionist or statistical models. The proposed approach compares favourably with both standard connectionist language models and statistical back-off language models.
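The idea of pre-computing softmax normalization constants for frequent contexts and backing off to a cheaper model elsewhere can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the class name, the context/score dictionaries, and the toy back-off function are all assumptions introduced here.

```python
import math

def log_normalizer(scores):
    # log Z = log sum(exp(s)), computed stably by subtracting the max
    m = max(scores)
    return m + math.log(sum(math.exp(s - m) for s in scores))

class HybridLM:
    """Hypothetical sketch: neural-network probabilities are used only for
    contexts whose softmax normalizer was pre-computed offline; any other
    context falls back to a lower-order (e.g. statistical n-gram) model."""

    def __init__(self, nn_scores, frequent_contexts, backoff_prob):
        self.nn_scores = nn_scores        # context -> {word: raw NN score}
        self.backoff_prob = backoff_prob  # fallback P(word | context)
        # pre-compute log-normalizers for the most frequent contexts only
        self.z = {c: log_normalizer(list(nn_scores[c].values()))
                  for c in frequent_contexts}

    def prob(self, word, context):
        if context in self.z:  # cheap at query time: no softmax sum needed
            return math.exp(self.nn_scores[context][word] - self.z[context])
        return self.backoff_prob(word, context)

# Toy usage: one frequent context, uniform back-off elsewhere
nn_scores = {("the",): {"cat": 1.0, "dog": 0.5}}
lm = HybridLM(nn_scores, [("the",)], lambda w, c: 0.5)
p = lm.prob("cat", ("the",)) + lm.prob("dog", ("the",))  # sums to 1
```

The expensive part of a connectionist model at query time is summing over the whole vocabulary to normalize the softmax; caching that sum for frequent contexts makes those lookups as cheap as a table access.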