
2020 | Original Paper | Book chapter

The ASSIN 2 Shared Task: A Quick Overview

Authors: Livy Real, Erick Fonseca, Hugo Gonçalo Oliveira

Published in: Computational Processing of the Portuguese Language

Publisher: Springer International Publishing


Abstract

This paper offers a brief overview of ASSIN 2, an evaluation shared task co-located with STIL 2019. ASSIN 2 covered two different but related tasks: Recognizing Textual Entailment (RTE), also known as Natural Language Inference (NLI), and Semantic Textual Similarity (STS). The ASSIN 2 collection consists of sentence pairs annotated with human judgments for both NLI and STS. Participating teams could take part in either task or both: nine teams participated in the STS task and eight in the NLI task.
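The two tasks are typically scored differently: NLI, a classification task over discrete labels, lends itself to accuracy, while STS, which predicts a graded similarity score per sentence pair, is usually evaluated with Pearson correlation against the human judgments. The following is a minimal sketch of both metrics; the label set and the gold/predicted values below are invented for illustration, not taken from the ASSIN 2 data.

```python
# Hypothetical scoring sketch for an NLI + STS shared task.
# Labels and numbers are made up for illustration.

def accuracy(gold, pred):
    """Fraction of sentence pairs whose predicted label matches gold."""
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def pearson(xs, ys):
    """Pearson correlation between gold and predicted similarity scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# NLI: discrete labels per sentence pair.
nli_gold = ["Entailment", "None", "Entailment", "None"]
nli_pred = ["Entailment", "None", "None", "None"]

# STS: graded similarity per sentence pair (e.g. on a 1-5 scale).
sts_gold = [4.5, 1.0, 3.2, 2.8]
sts_pred = [4.1, 1.5, 3.0, 3.3]

print(accuracy(nli_gold, nli_pred))            # 0.75
print(round(pearson(sts_gold, sts_pred), 3))
```

A real submission would be scored against the full test collection rather than four pairs; the paper's companion repository provides the official evaluation scripts.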


Footnotes
1
Examples in English: Some animals are playing wildly in the water entails Some animals are playing in the water; A plane is flying does not entail A dog is barking.
 
2
The evaluation scripts can be found at https://github.com/erickrf/assin.
 
Metadata
Title
The ASSIN 2 Shared Task: A Quick Overview
Authors
Livy Real
Erick Fonseca
Hugo Gonçalo Oliveira
Copyright year
2020
DOI
https://doi.org/10.1007/978-3-030-41505-1_39
