2003 | Original Paper | Book Chapter
Metadata and Data Warehouse Quality
Authors: Matthias Jarke, Maurizio Lenzerini, Yannis Vassiliou, Panos Vassiliadis
Published in: Fundamentals of Data Warehouses
Publisher: Springer Berlin Heidelberg
Included in: Professional Book Archive
In the traditional view, data warehouses provide large-scale caches of historical data. They sit between (a) information sources obtained externally or through online transaction processing (OLTP) systems and (b) decision support or data mining queries following the vision of online analytical processing (OLAP). Three main arguments have been put forward in favor of this caching approach:

1. Performance and safety considerations. The concurrency control methods of most DBMSs do not react well to a mix of short update transactions (as in OLTP) and OLAP queries that typically scan a large portion of the database. Moreover, the OLTP systems are often critical for the operation of the organization and must not be put in danger of corruption by other applications.

2. Logical interpretability problems. Inspired by the success of spreadsheet techniques, OLAP users tend to think in terms of highly structured multidimensional data models, whereas information sources offer at best relational, often just semistructured data models or even flat files.

3. Temporal and granularity mismatch. OLTP systems focus on current operational support in great detail, whereas OLAP often considers historical developments at a somewhat coarser level of detail.
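The contrast in argument 2 between flat source records and the multidimensional view an OLAP user expects can be illustrated with a small sketch. The data, field names, and the `cube` helper below are all hypothetical, invented for illustration; the sketch merely shows how flat OLTP-style rows can be aggregated into cells addressed by dimension-value tuples, which is the mental model behind multidimensional data.

```python
from collections import defaultdict

# Hypothetical flat, OLTP-style sales records (illustrative data only).
transactions = [
    {"product": "widget", "region": "EU", "year": 2002, "amount": 120.0},
    {"product": "widget", "region": "US", "year": 2002, "amount": 80.0},
    {"product": "gadget", "region": "EU", "year": 2003, "amount": 200.0},
    {"product": "widget", "region": "EU", "year": 2003, "amount": 150.0},
]

def cube(records, dimensions, measure):
    """Aggregate flat records into a multidimensional view: a mapping
    from tuples of dimension values to a summed numeric measure."""
    cells = defaultdict(float)
    for rec in records:
        key = tuple(rec[d] for d in dimensions)
        cells[key] += rec[measure]
    return dict(cells)

# An OLAP user thinks in cells such as (product, region, year) -> total sales.
sales_cube = cube(transactions, ["product", "region", "year"], "amount")
print(sales_cube[("widget", "EU", 2002)])  # 120.0
```

Real OLAP engines of course add hierarchies, roll-up, and slicing on top of this; the point here is only the shift from row-oriented source data to cell-oriented, dimension-addressed data.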