01.12.2019 | Issue 7/2019

Automatic Control and Computer Sciences 7/2019

Word Embedding for Semantically Related Words: An Experimental Study

Journal:
Automatic Control and Computer Sciences > Issue 7/2019
Authors:
M. S. Karyaeva, P. I. Braslavski, V. A. Sokolov
Important notes
Translated by A. Ovchinnikova

Abstract

The ability to identify semantic relations between words has made the word2vec model widely used in NLP tasks. The idea behind word2vec is a simple rule: words that occur in similar contexts receive higher similarity scores. Each word is represented as a vector, so vectors with close coordinates can be interpreted as similar words. This allows semantic relations (synonymy, hypernymy, hyponymy, and others) to be established by automatic extraction. Extracting semantic relations by hand is a time-consuming and biased task that requires expert assistance. Unfortunately, the associative list produced by a word2vec model does not consist of semantically related words only. In this paper, we present additional criteria that may help to solve this problem. Observations and experiments with well-known characteristics, such as word frequency and position in the associative list, can improve the extraction of semantic relations for the Russian language using word embeddings. In the experiments, a word2vec model trained on Flibusta and relation pairs from Wiktionary are used as examples of semantic relationships. Semantically related words are applicable to thesauri, ontologies, and intelligent systems for natural language processing.
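To make the filtering idea concrete, the sketch below shows how one might query a word2vec model for its associative list and prune candidates by two of the criteria named in the abstract: position (rank) in the list and word frequency. This is a minimal illustration, not the authors' exact pipeline; the model file name, the frequency table, and the thresholds are assumptions made for the example, and the gensim library is used only because it is a standard word2vec implementation.

```python
# A minimal sketch of rank- and frequency-based filtering of a word2vec
# associative list. Model path, frequencies, and thresholds are
# illustrative assumptions, not values from the paper.
from gensim.models import KeyedVectors

# Hypothetical pretrained vectors (e.g., a model trained on Flibusta texts).
model = KeyedVectors.load_word2vec_format("flibusta_w2v.bin", binary=True)

# Hypothetical corpus frequencies; in practice these would come from the
# word counts of the training corpus.
freq = {"дом": 120_000, "здание": 35_000, "квартира": 40_000}

def related_candidates(query, topn=20, max_rank=10, max_freq_ratio=50.0):
    """Keep neighbours that rank high in the associative list and whose
    corpus frequency is comparable to the query word's frequency."""
    query_freq = freq.get(query, 1)
    kept = []
    # most_similar returns (word, cosine) pairs sorted by similarity.
    for rank, (word, cosine) in enumerate(
            model.most_similar(query, topn=topn), start=1):
        if rank > max_rank:  # position-in-list criterion
            break
        cand_freq = freq.get(word, 1)
        ratio = max(cand_freq, query_freq) / min(cand_freq, query_freq)
        if ratio <= max_freq_ratio:  # frequency-comparability criterion
            kept.append((word, cosine, rank))
    return kept

print(related_candidates("дом"))
```

Candidate pairs surviving such filters could then be checked against gold-standard relation pairs (e.g., from Wiktionary) to measure how much the criteria improve precision over the raw associative list.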
