2014 | Original Paper | Book Chapter
LOD Laundromat: A Uniform Way of Publishing Other People’s Dirty Data
Authors: Wouter Beek, Laurens Rietveld, Hamid R. Bazoobandi, Jan Wielemaker, Stefan Schlobach
Published in: The Semantic Web – ISWC 2014
Publisher: Springer International Publishing
It is widely accepted that proper data publishing is difficult. The majority of Linked Open Data (LOD) does not meet even a core set of data publishing guidelines. Moreover, datasets that are clean at creation can get stains over time. As a result, the LOD cloud now contains a high level of dirty data that is difficult for humans to clean and for machines to process.
Existing solutions for cleaning data (standards, guidelines, tools) are targeted towards human data creators, who can (and do) choose not to use them. This paper presents the LOD Laundromat, which removes stains from data without any human intervention. This fully automated approach makes very large amounts of LOD more easily available for further processing right now.
LOD Laundromat is not a new dataset, but rather a uniform point of entry to a collection of cleaned siblings of existing datasets. It provides researchers and application developers with a wealth of data that is guaranteed to conform to a specified set of best practices, thereby greatly improving the chance of data actually being (re)used.