
2002 | Book

Information and Database Quality

Edited by: Mario G. Piattini, Coral Calero, Marcela Genero

Publisher: Springer US

Book Series: Advances in Database Systems


About this book

In a global and increasingly competitive market, where organizations are driven by information, the search for ways to transform data into true knowledge is critical to a business's success. Few companies, however, have effective methods of managing the quality of this information. Because quality is a multidimensional concept, its management must consider a wide variety of issues related to information and data quality. Information and Database Quality is a compilation of works from research and industry that examines these issues, covering both the organizational and technical aspects of information and data quality.
Information and Database Quality is an excellent reference for both researchers and professionals involved in any aspect of information and database research.

Table of Contents

Frontmatter
Chapter 1. The Organization’s Most Important Data Issues
Abstract
The typical organization is faced with a range of issues that prevent it from taking full advantage of its data resources. Among these issues are a poor connection between strategy and data, low accuracy levels, inadequate knowledge of what data resources are available, and a lack of management accountability. While one might hope that the Internet and stunning advances in database and communications technologies would ease these issues, just the opposite is happening. The issues are becoming more complex, not more tractable. Further, the expected growth in the sheer quantity of data exacerbates these issues.
Robert W. Pautke, Thomas C. Redman
Chapter 2. Quality in Conceptual Modelling
Abstract
Conceptual modelling has become a key part of the early phases of the information system (IS) life cycle. Conceptual modelling is no longer only for databases; in a broad sense, it is considered the elicitation and formal definition of the general knowledge about a domain that an IS needs in order to perform the required functions. Indeed, conceptual models lay the foundation of all later design and implementation work. Therefore, special emphasis must be placed on conceptual model quality, which can have a great impact on the IS that is finally implemented. The aim of this chapter is to present a thorough analysis of most of the existing relevant work on conceptual model quality, to provide an overall view of what has been done, and to give a more comprehensive idea of the direction in which research is going.
Marcela Genero, Mario Piattini
Chapter 3. Information quality in internet commerce design
Abstract
Internet business will mature and become part of mainstream business in the near future. Many consumers still do not feel comfortable purchasing online, and poor quality Web site design is often cited as a reason. Information and design quality are recognized as significant factors affecting the effectiveness of Web sites, and are among the issues that will determine the ability of businesses to reap the full benefits of Internet commerce. In this chapter, we develop a framework identifying the key features and facilities of Internet commerce Web sites. Once developed, the framework enables an assessment of a Web site's design and information quality against a standard set of key characteristics or features.
Pairin Katerattanakul, Keng Siau
Chapter 4. Metrics for databases: a way to assure the quality
Abstract
Databases are the core of Information Systems. The correct functioning of these databases has a direct effect on the quality of the Information Systems they support, so the success of an Information System largely depends on the design quality of the database it uses. One way of assuring the quality of database designs is by using metrics. In this chapter, we give a series of guidelines on how metrics can be developed so that they can be used to achieve a specific objective related to database design quality.
Coral Calero, Mario Piattini
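As a toy illustration of the kind of design metric this chapter has in mind, the sketch below computes two simple table-level measures over a miniature schema description: number of attributes and number of foreign keys. The metric names (NA, NFK), the schema format, and the example tables are assumptions made for this sketch, not necessarily the authors' own notation.

    # A minimal sketch, assuming a toy schema format. NA (number of
    # attributes) and NFK (number of foreign keys) are illustrative
    # table-level design metrics.

    schema = {
        "customer": {"attributes": ["id", "name", "email"], "foreign_keys": []},
        "order": {"attributes": ["id", "customer_id", "total"],
                  "foreign_keys": ["customer_id"]},
    }

    def table_metrics(schema):
        """Compute NA and NFK for every table in the schema."""
        return {table: {"NA": len(info["attributes"]),
                        "NFK": len(info["foreign_keys"])}
                for table, info in schema.items()}

    for table, m in table_metrics(schema).items():
        print(f"{table}: NA={m['NA']}, NFK={m['NFK']}")

Metrics like these become useful once tied to a specific quality objective, for example treating a high foreign-key count as a warning sign of a design that is harder to understand and maintain.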
Chapter 5. Total Quality data Management (TQdM)
Methodology for Information Quality Improvement
Abstract
This chapter provides a description of the TQdM® methodology for information quality improvement. It defines what information quality is and why it is essential to the survival of organizations in the Information Age. It describes the processes required to assess and improve information quality in order to achieve business performance excellence, and it describes a process for implementing the culture change required to achieve a sustainable environment of continuous information quality improvement.
Total Quality data Management (TQdM®) is not a program, and it is not simply a process of data measurement or data cleansing. TQdM is a cultural transformation required to achieve sustainable business performance excellence. This chapter describes the processes of TQdM, including assessment, cleansing (corrective maintenance), process improvement (preventive maintenance using the PDCA (Plan-Do-Check-Act) cycle) and culture transformation, as defined in English (1999).
TQdM is a registered trademark of INFORMATION IMPACT International, Inc.
Larry P. English
Chapter 6. Data Quality and Business Rules
Abstract
A critical component of improving data quality is being able to distinguish between “good” (i.e., valid) data and “bad” (i.e., invalid) data. The common definition of data quality focuses on the concept of “fitness for use,” yet because data values appear in many contexts, formats, and frameworks, this simple concept can devolve into extremely complicated notions as to what constitutes fitness. The conventional wisdom dictates that in order to improve data quality, we must be able to measure the levels of data quality. Consequently, to be able to measure levels of data quality, we must know what defines a valid value. In this chapter, we explore a framework for defining data quality and business rules that qualify data values within their context, as well as the mechanism for using a rule-based system for measuring conformity to these business rules.
David Loshin
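To make the rule-based measurement idea concrete, here is a minimal sketch in which each business rule is a predicate over a record and quality is reported as the fraction of records that conform to each rule. The record fields and the rules themselves are invented for illustration.

    # A minimal sketch of rule-based conformity measurement. The record
    # fields and the rules are invented for illustration.

    records = [
        {"age": 34, "email": "a@example.com"},
        {"age": -5, "email": "not-an-email"},
        {"age": 51, "email": "b@example.com"},
    ]

    rules = {
        "age_in_range": lambda r: 0 <= r["age"] <= 120,
        "email_has_at_sign": lambda r: "@" in r["email"],
    }

    def conformance(records, rules):
        """Fraction of records satisfying each rule (1.0 = full conformity)."""
        return {name: sum(rule(r) for r in records) / len(records)
                for name, rule in rules.items()}

    print(conformance(records, rules))  # both rules score 2/3 here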
Chapter 7. A NEAT Approach for Data Quality Assessment
Industrial Experiences using a DQ Assessment Methodology
Abstract
In this work, we present NEAT, a methodology that provides a systematic way of assessing data quality. The methodology is simple and can be applied whenever data quality must be evaluated and improved. The core of NEAT is the derivation of metrics to evaluate data quality; to do this, we propose to measure information quality using metrics defined by traditional software engineering techniques. The outcome of this work is a suitable set of metrics that establishes a starting point for a systematic analysis of data quality.
We propose to approach data quality issues from a software engineering perspective. The main characteristics of NEAT that distinguish it from standard techniques are:
  • NEAT is a general methodology that can be applied regardless of the particular domain.
  • The emphasis of NEAT is on preventive data quality techniques, so we work with data models and processes, not only data instances.
  • NEAT includes data quality maintenance.
In this work, we also present two experiences with data quality assessment. The main goals of presenting these projects are (a) to evaluate the proposed methodology, (b) to show how the framework is used in real cases to evaluate data quality and to propose improvements, and (c) to evaluate whether this kind of data quality work is worthwhile. These experiences were carried out in two FORTUNE 50 companies in Argentina, in very different domains: one project was carried out in an oil and gas company, the other in a telecommunications company.
Mónica Bobrowski, Martina Marré, Daniel Yankelevich
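One traditional software engineering technique for deriving metrics is Goal-Question-Metric (GQM); the abstract does not say which techniques NEAT actually uses, so GQM is assumed here purely for illustration. The sketch shows how a data quality goal might be refined into questions and then into measurable metrics.

    # A minimal GQM-style sketch (an assumption, not necessarily NEAT's
    # technique): refine a quality goal into questions, then into metrics.

    gqm = {
        "goal": "Improve completeness of the customer data",
        "questions": {
            "How many mandatory fields are empty?":
                ["null_count_per_mandatory_column"],
            "How many customers lack contact data?":
                ["records_missing_email_and_phone"],
        },
    }

    def derived_metrics(gqm):
        """List every metric name reachable from the goal."""
        return [m for metrics in gqm["questions"].values() for m in metrics]

    print(derived_metrics(gqm))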
Chapter 8. Quality in Data Warehousing
Abstract
Data warehousing is a new technology that provides a software infrastructure for decision support systems and OLAP applications. Data warehouse systems collect data from heterogeneous and distributed sources, then transform and reconcile this data in order to aggregate it and customize it with respect to the business and organizational criteria required by decision makers. High-level aggregated data is organized by subject and stored in a multidimensional structure in a data mart. Data quality is very important in database applications in general and crucial in data warehousing in particular: data warehouse systems provide aggregated data to decision makers whose actions and decisions are of strategic importance to the enterprise. Providing dirty, imprecise, or incoherent data may lead to the rejection of the decision support system or may result in unproductive decisions. This chapter provides a general framework for quality-based data warehouse design.
Mokrane Bouzeghoub, Zoubida Kedad
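As one concrete instance of the coherence concern raised above: a minimal sketch, with invented table names, values, and tolerance, that recomputes an aggregate from the source rows and flags any data mart total that no longer matches.

    # A minimal sketch of an aggregate coherence check between source
    # rows and a data mart. All names and values are invented.

    source_sales = [("2001-Q1", 120.0), ("2001-Q1", 80.0), ("2001-Q2", 95.0)]
    mart_totals = {"2001-Q1": 200.0, "2001-Q2": 90.0}  # Q2 total is stale

    def incoherent_aggregates(source, mart, tol=1e-6):
        """Yield (key, recomputed, stored) for every mismatching total."""
        recomputed = {}
        for key, amount in source:
            recomputed[key] = recomputed.get(key, 0.0) + amount
        for key, total in recomputed.items():
            if abs(mart.get(key, 0.0) - total) > tol:
                yield key, total, mart.get(key)

    for key, fresh, stored in incoherent_aggregates(source_sales, mart_totals):
        print(f"incoherent aggregate for {key}: source={fresh}, mart={stored}")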
Chapter 9. Where is Information Quality in Information Systems Education?
Abstract
Improving the Information Quality (IQ) skill level of Information Systems (IS) professionals has become critical as organizational success increasingly depends on the availability of high-quality information. Using an IQ framework from the literature on data and information quality and recent IS curriculum models, the authors identify gaps between the information quality skills needed by organizations and the skills taught by universities to future IS professionals. The authors suggest improvements to the IS curriculum models to close this gap and thus to improve information quality teaching and learning.
Beverly K. Kahn, Diane M. Strong
Backmatter
Metadata
Title
Information and Database Quality
Edited by
Mario G. Piattini
Coral Calero
Marcela Genero
Copyright Year
2002
Publisher
Springer US
Electronic ISBN
978-1-4615-0831-1
Print ISBN
978-1-4613-5260-0
DOI
https://doi.org/10.1007/978-1-4615-0831-1