
2018 | OriginalPaper | Chapter

Textual Entailment in Legal Bar Exam Question Answering Using Deep Siamese Networks

Authors : Mi-Young Kim, Yao Lu, Randy Goebel

Published in: New Frontiers in Artificial Intelligence

Publisher: Springer International Publishing


Abstract

Every day a large volume of legal documents is produced, and lawyers need support in analyzing them, especially in corporate litigation. Typically, corporate litigation aims to find evidence for or against the litigation claims. Identifying the critical legal points within large volumes of legal text is time-consuming and costly, but recent advances in natural language processing and information extraction have generated new enthusiasm for improved automated management of legal texts and the identification of legal relationships. As an example of legal information extraction, we have constructed a question answering system for yes/no bar exam questions. Here we introduce a Siamese deep convolutional neural network for textual entailment in support of legal question answering. We have evaluated our system on data from the Competition on Legal Information Extraction/Entailment (COLIEE). The competition focuses on the legal information processing required to answer yes/no questions from legal bar exams, and it consists of two phases: legal ad hoc information retrieval (Phase 1) and textual entailment (Phase 2). We focus on Phase 2, which requires "Yes" or "No" answers to previously unseen queries. We do this by comparing the extracted meanings of queries and relevant articles. Our choice of features for semantic modeling focuses on word properties and negation. Experimental evaluation demonstrates the effectiveness of the Siamese convolutional neural network, and our results show that our Siamese deep learning-based method outperforms the previous use of a single convolutional neural network.
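The core idea the abstract describes — encoding a query and a relevant article with the same shared-weight convolutional encoder and comparing the resulting vectors to decide "Yes" or "No" — can be sketched in miniature. The following is an illustrative toy, not the authors' implementation: the embedding size, filter count, filter width, tanh activation, cosine comparison, and decision threshold are all assumptions made for the example, and real systems would learn the weights from COLIEE training data rather than initialize them randomly.

```python
import math
import random

random.seed(0)

EMB_DIM, N_FILTERS, WIDTH = 8, 4, 2  # toy sizes, chosen only for illustration

# Shared parameters: one embedding table and one bank of convolution
# filters, reused for BOTH the query and the article. Sharing the same
# weights across the two inputs is what makes the network "Siamese".
vocab = {}

def embed(token):
    """Look up (or lazily create) a random embedding for a token."""
    if token not in vocab:
        vocab[token] = [random.uniform(-1, 1) for _ in range(EMB_DIM)]
    return vocab[token]

filters = [[[random.uniform(-1, 1) for _ in range(EMB_DIM)]
            for _ in range(WIDTH)] for _ in range(N_FILTERS)]

def encode(tokens):
    """Convolve filters over the token embeddings, then max-pool each filter."""
    embs = [embed(t) for t in tokens]
    pooled = []
    for f in filters:
        best = -math.inf
        for i in range(len(embs) - WIDTH + 1):  # slide the filter window
            s = sum(f[j][d] * embs[i + j][d]
                    for j in range(WIDTH) for d in range(EMB_DIM))
            best = max(best, math.tanh(s))
        pooled.append(best)
    return pooled

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def entails(query, article, threshold=0.0):
    """Answer 'Yes' when the two encodings are similar enough (toy rule)."""
    return "Yes" if cosine(encode(query), encode(article)) > threshold else "No"

query = "a minor may rescind the contract".split()
article = "a minor can rescind a contract they concluded".split()
print(entails(query, article))
```

With trained weights, the two branches map entailing query/article pairs near each other in the shared representation space, so a simple similarity threshold (or a small classifier over the two vectors) yields the yes/no answer.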


Metadata
DOI
https://doi.org/10.1007/978-3-319-93794-6_3
