2014 | OriginalPaper | Chapter
An Investigation on Statistical Machine Translation with Neural Language Models
Authors : Yinggong Zhao, Shujian Huang, Huadong Chen, Jiajun Chen
Published in: Chinese Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data
Publisher: Springer International Publishing
Recent work has shown the effectiveness of neural probabilistic language models (NPLMs) in statistical machine translation (SMT), both through reranking the n-best outputs and through direct decoding. However, several issues remain in applying NPLMs. In this paper we investigate further through detailed experiments and extensions of state-of-the-art NPLMs. Our experiments on large-scale datasets show that our final setting, i.e., decoding with conventional n-gram LMs plus unnormalized feedforward NPLMs extended with word clusters, significantly improves translation performance, by up to 1.1 Bleu on average over four test datasets, while decoding time remains acceptable. The results also show that current NPLMs, whether feedforward or recurrent (RNN), still cannot simply replace n-gram LMs in SMT.
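The "unnormalized" scoring mentioned in the abstract refers to using the NPLM's raw output-layer score directly during decoding, skipping the softmax normalization over the whole vocabulary. Below is a minimal, hypothetical sketch of this idea with a toy trigram feedforward model and random parameters (the vocabulary, dimensions, and weights are illustrative, not the authors' implementation); it shows that the unnormalized score differs from the exact log-probability only by a context-dependent constant, which is why it can serve as a fast decoding feature.

```python
import math
import random

random.seed(0)

# Hypothetical toy vocabulary and dimensions (illustrative only).
VOCAB = ["<s>", "the", "cat", "sat", "</s>"]
DIM = 8  # embedding size

# Random embeddings stand in for trained parameters.
embed = {w: [random.gauss(0, 0.1) for _ in range(DIM)] for w in VOCAB}
out_embed = {w: [random.gauss(0, 0.1) for _ in range(DIM)] for w in VOCAB}
out_bias = {w: 0.0 for w in VOCAB}

def hidden(context):
    """Concatenate context embeddings and apply a tanh nonlinearity."""
    h = []
    for w in context:
        h.extend(math.tanh(x) for x in embed[w])
    return h

def unnormalized_score(word, context):
    """Raw output-layer score s(word | context).

    Unnormalized decoding uses this value directly, avoiding the sum
    over the entire vocabulary needed for the softmax partition.
    """
    h = hidden(context)
    vec = out_embed[word]
    # Project the first DIM hidden units onto the output embedding (toy choice).
    return sum(a * b for a, b in zip(h[:DIM], vec)) + out_bias[word]

def normalized_logprob(word, context):
    """Exact log P(word | context): requires scoring every vocabulary word."""
    s = unnormalized_score(word, context)
    log_z = math.log(sum(math.exp(unnormalized_score(w, context))
                         for w in VOCAB))
    return s - log_z
```

For a fixed context, `unnormalized_score` and `normalized_logprob` differ only by log Z(context), so ranking candidate words is unchanged; the saving at decoding time is the O(|V|) partition sum per query.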