Journal of Natural Language Processing
Online ISSN : 2185-8314
Print ISSN : 1340-7619
ISSN-L : 1340-7619
Empirical Support for New Probabilistic Generalized LR Parsing
Virach Sornlertlamvanich, Kentaro Inui, Hozumi Tanaka, Takenobu Tokunaga, Toshiyuki Takezawa

1999, Volume 6, Issue 3, Pages 3-22

Abstract

This paper presents empirical results for our probabilistic GLR parser, based on a new probabilistic GLR language model (PGLR), in comparison with existing models built on the same GLR parsing framework: the model proposed by Briscoe and Carroll (B & C), and the two-level PCFG, or pseudo context-sensitive grammar (PCSG), which is claimed to be a context-sensitive extension of PCFG. We evaluate each model on character-based parsing (integrated morphological and syntactic analysis) tasks, in which word segmentation and multiple part-of-speech assignments must be handled. Parsing a sentence from the morphological level makes the task considerably more complex, since parse ambiguity increases with word segmentation ambiguities and the multiple corresponding part-of-speech sequences. Owing to its well-founded probabilistic formulation, PGLR accurately incorporates probabilities for word prediction by encoding pre-terminal n-gram constraints into the LR parsing table. In experiments on the ATR Japanese corpus, the PGLR model outperforms the other two models on all measures. To examine the appropriateness of PGLR with an LALR table, we also test the model with both an LALR and a CLR table. The results show that PGLR parsing with the LALR table gives the best performance in parse accuracy, parsing time, and memory consumption.
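
The abstract describes PGLR as attaching probabilities to LR parsing actions, encoding pre-terminal n-gram constraints into the parsing table. As a rough illustration only (the paper's exact normalization of shift and reduce probabilities is not reproduced here), the sketch below scores a single derivation as the product of action probabilities conditioned on the current LR state and the lookahead pre-terminal; all names and the toy probability table are hypothetical.

```python
# Hypothetical sketch of PGLR-style parse scoring; not the paper's exact model.
from math import log, exp
from typing import Dict, List, Tuple

# P(action | LR state, lookahead pre-terminal), as it might be estimated from a treebank.
ActionProbs = Dict[Tuple[int, str, str], float]  # (state, lookahead, action) -> probability

def score_parse(actions: List[Tuple[int, str, str]], probs: ActionProbs) -> float:
    """Score one derivation as the product of the probabilities of its LR
    actions, each conditioned on the stack-top state and the lookahead symbol
    (accumulated in log space for numerical stability)."""
    logp = 0.0
    for state, lookahead, action in actions:
        logp += log(probs[(state, lookahead, action)])
    return exp(logp)

# Toy example: a three-action derivation.
probs: ActionProbs = {
    (0, "N", "shift 3"): 0.7,
    (3, "V", "reduce NP->N"): 0.9,
    (2, "V", "shift 5"): 0.8,
}
derivation = [(0, "N", "shift 3"), (3, "V", "reduce NP->N"), (2, "V", "shift 5")]
print(score_parse(derivation, probs))  # 0.7 * 0.9 * 0.8 ≈ 0.504
```

Because the probabilities are conditioned on the lookahead pre-terminal at each step, a model of this shape can prefer one word segmentation and part-of-speech sequence over another, which is the role the abstract attributes to the encoded pre-terminal n-gram constraints.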

© The Association for Natural Language Processing