
Journal of Accounting and Public Policy

Volume 24, Issue 1, January–February 2005, Pages 61-75

Minimizing cost of continuous audit: Counting and time dependent strategies

https://doi.org/10.1016/j.jaccpubpol.2004.12.004

Abstract

Why do we need to continuously audit databases? The answer depends on several factors, including the users and applications that have accessed the data, and the timing and type of data modifications, such as changes to permissions or schema. Many studies address the technical feasibility of continuous auditing of databases but do not consider its economic feasibility. This paper helps fill this void in the literature. We examine, with major and minor modifications, certain strategies suggested in the database auditing literature (see, e.g., Orman, L.V., 2001. Database audit and control strategies. Information Technology and Management 2(1), 27–51). Orman studied the counting, periodic, and hybrid auditing strategies with the objective of minimizing the number of errors introduced during database access. Unlike Orman, whose focus is on assessing the number of errors entering the system (technical feasibility), we focus on the long run operating cost of database audits. We use results from regenerative stochastic processes to derive expressions for the long run average cost under the counting and periodic auditing strategies. Future directions for research are also proposed.

Introduction

Databases and online systems have dramatically changed the way businesses transact and keep records. Database-supported online systems enable businesses to conduct their operations electronically, leading to greater efficiency in information processing and in production and sales operations. With databases and online real-time processing systems, businesses can retrieve, classify, and report activities far more quickly (Amer et al., 1987; Orman, 1990; Date, 1995; Alles et al., 2002). Unlike in the past, when reports were produced several months after events and transactions took place, businesses today can provide financial statements and other activity reports almost instantaneously, enhancing their usefulness to investors.

While databases and online real-time processing systems provide several advantages, they also introduce control and security concerns about a client's transaction processing system. One-time data entry, with simultaneous entry into multiple system interfaces and input into cross-functional records, makes monitoring the data for errors and inconsistencies quite difficult (Laudon, 1986; Redman, 1992; Wang et al., 1995; Orman, 2001). While audit trails mitigate some of these concerns, they are often vague and obscured, making error detection difficult and raising questions about the quality of the information produced by these systems (Fernandez et al., 1981).

Quality of information is very important to auditors. Before they express an opinion on a client's financial reports, auditors must ensure that the information generated by the client's system is reliable (Elliott, 1997; Alles et al., 2002). This requires that auditors evaluate the adequacy of the controls surrounding a client's information processing systems and test them for errors and inconsistencies. In a conventional audit, auditors perform such evaluation and testing only after the client's reporting period has ended and the audit of the financial reports begins. Even then, auditors do not examine one hundred percent of a client's system; they select only a sample of transactions and examine them for errors and inconsistencies. Such limited examination, while adequate for evaluating simpler processing systems, is not sufficient for complex ones. In complex processing systems, transaction volume is very high and transaction processing is generally integrated across functions such as manufacturing, inventory, and record-keeping. Consequently, a single processing error can affect multiple records, and the high transaction volume makes detection of these errors difficult. Examining such systems for errors and inconsistencies once a year, or by limited inspection, is therefore unlikely to give the auditor confidence in the system's reliability. These systems demand more frequent inspection and greater monitoring: they must be audited on a continuous basis, with the auditor gathering information about the client's system electronically, through embedded monitoring tools, and evaluating system reliability at regular intervals during the year (Groomer and Murthy, 1989; Elliott, 1997; Rezaee et al., 2001).

Several past studies have addressed the technical feasibility of continuous auditing (Groomer and Murthy, 1989; Vasarhelyi and Halper, 1991; Vasarhelyi et al., 1991; Halper et al., 1992; Vasarhelyi et al., 2003). One important question these studies have not answered, however, is the economic feasibility of continuous audit. Will there be a demand for continuous audit, and if auditors adopt continuous monitoring, will it be cost-effective for both businesses and auditors? While supporters of continuous auditing (e.g. Elliott, 2002) claim that there is great demand for such audits, businesses and auditors have not enthusiastically embraced continuous audit of clients' systems. As Alles et al. (2002) state, one reason for the low adoption is the high cost of implementation. This paper therefore examines the economic feasibility of continuous audit, reporting on its long run operating costs and how those costs change with the monitoring strategy used.

This study develops an analytical methodology to identify the long run cost of continuous audit of a large database. Audit, for this purpose, denotes the execution of the integrity constraints, the identification of errors, and the correction of those errors before the data are reentered into the system. The analytical model is used to compare two continuous auditing strategies proposed by Orman (2001): counting and periodic monitoring of databases. The model identifies the optimal number of transactions after which an audit must begin under the counting strategy, and the optimal time that must elapse before an audit must begin under the periodic strategy. The long run average cost of audit is computed for each monitoring alternative. The study uses the theory of regenerative Markov processes to analytically represent the cost of continuous monitoring under the two strategies (Cinlar, 1975). This is one of the first studies in the accounting literature to use regenerative Markov processes to examine the costs of continuous audit. We expect the results to contribute to an understanding of the cost effectiveness and economic feasibility of continuous auditing.
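To make the regenerative-process argument concrete, both strategies can be analyzed with the renewal-reward theorem. The identity below is a sketch in our own notation (cycle cost, cycle length), introduced here purely for illustration:

\[
\bar{C} \;=\; \lim_{t \to \infty} \frac{C(t)}{t} \;=\; \frac{E[\text{cost incurred during one audit cycle}]}{E[\text{length of one audit cycle}]},
\]

where a cycle regenerates each time an audit restores the database to an error-free state. Under the counting strategy the cycle ends after n transactions, so the decision variable is n; under the periodic strategy the cycle ends after a fixed elapsed time, so the decision variable is the audit interval.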

The remainder of the paper is organized as follows. The next section describes the role of databases in online transaction processing systems and the strategies that must be followed to detect errors, control and audit the databases, and maintain their integrity. Section 3 describes the audit methodologies developed by the authors to identify the optimal cost of continuous auditing; it also includes numerical examples to illustrate the cost under each audit strategy. The last section compares the cost strategies, summarizes the findings, and provides directions for future research in this area.

Section snippets

Auditing database integrity—the monitoring strategies

In a database, integrity constraints are used as the principal tools to detect errors (Date, 1995; Weber, 1988; Orman, 2001). The integrity constraints are programmed controls integrated within a DBMS. They verify the objective behind each transaction and must be satisfied before an input is accepted for processing (Davis and Weber, 1986). The integrity constraints can be invoked in two basic ways: as automatic and as continuous integrity tools to monitor data
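Although the snippet above is truncated, the mechanism it describes, programmed checks that must pass before an input is accepted, can be illustrated with a minimal sketch. The constraint predicates and transaction fields below are hypothetical illustrations, not taken from the paper:

```python
# Minimal sketch of integrity constraints as programmed controls in a DBMS:
# each constraint is a predicate that a transaction must satisfy before it
# is accepted for processing. All fields and rules here are hypothetical.
from typing import Callable

Transaction = dict  # e.g. {"account": "AP-104", "amount": 250.0, "qty": 3}

# Each constraint returns True when the transaction is acceptable.
CONSTRAINTS: list[Callable[[Transaction], bool]] = [
    lambda t: t.get("amount", 0) > 0,          # amounts must be positive
    lambda t: isinstance(t.get("qty"), int),   # quantities must be integers
    lambda t: "account" in t,                  # every entry names an account
]

def accept(txn: Transaction) -> bool:
    """Apply every integrity constraint; reject the input on any violation."""
    return all(check(txn) for check in CONSTRAINTS)

if __name__ == "__main__":
    good = {"account": "AP-104", "amount": 250.0, "qty": 3}
    bad = {"account": "AP-104", "amount": -10.0, "qty": 3}
    print(accept(good))  # True  -> accepted for processing
    print(accept(bad))   # False -> flagged as an integrity violation
```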

Minimizing cost of continuous audit—the methodology

One of the primary functions of an auditor during an audit is to verify the quality of the data under audit. A principal measure of data quality is the number of errors within a database; errors are minimized when the database is monitored for them. Orman (2001) developed an analytical methodology to observe error rates under various monitoring strategies. The analytical process developed by Orman rests on several assumptions: (1) all errors are equally important; (2) each
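The snippet breaks off mid-list, but the cost comparison it sets up can be illustrated with a small simulation. The sketch below estimates the long run average cost of each strategy via the renewal-reward identity given earlier; every numerical parameter (arrival rate, error probability, audit and holding costs) is an assumption chosen for illustration, not the paper's calibration:

```python
# Monte Carlo sketch of the long run average cost of the counting and
# periodic monitoring strategies. All parameters are illustrative assumptions.
import random

RATE = 5.0         # transaction arrivals per unit time (Poisson process)
P_ERR = 0.02       # probability that a transaction introduces an error
AUDIT_COST = 40.0  # fixed cost of one audit (constraint runs + corrections)
HOLD_COST = 1.0    # cost per undetected error per unit time

def cycle_counting(n: int) -> tuple[float, float]:
    """One regeneration cycle under the counting strategy: audit after n
    transactions. Returns (cycle cost, cycle length)."""
    t = cost = 0.0
    errors = 0
    for _ in range(n):
        dt = random.expovariate(RATE)    # time until the next transaction
        cost += errors * HOLD_COST * dt  # undetected errors accrue cost
        t += dt
        if random.random() < P_ERR:
            errors += 1
    return cost + AUDIT_COST, t          # the audit clears all errors

def cycle_periodic(T: float) -> tuple[float, float]:
    """One regeneration cycle under the periodic strategy: audit after a
    fixed elapsed time T."""
    t = cost = 0.0
    errors = 0
    while True:
        dt = random.expovariate(RATE)
        if t + dt >= T:                  # the audit fires before next arrival
            return cost + errors * HOLD_COST * (T - t) + AUDIT_COST, T
        cost += errors * HOLD_COST * dt
        t += dt
        if random.random() < P_ERR:
            errors += 1

def long_run_cost(cycle, arg, reps=20_000) -> float:
    """Renewal-reward estimate: E[cost per cycle] / E[cycle length]."""
    costs, lengths = zip(*(cycle(arg) for _ in range(reps)))
    return sum(costs) / sum(lengths)

if __name__ == "__main__":
    print("counting, n = 100 :", round(long_run_cost(cycle_counting, 100), 3))
    print("periodic, T = 20.0:", round(long_run_cost(cycle_periodic, 20.0), 3))
```

Sweeping n or T in the same way locates the cost-minimizing policy for a given parameterization, which is the optimization the paper carries out analytically.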

Summary and conclusions

Auditors are required to assure the quality of the data and information contained in corporate databases as part of the audit process. The use of online, real-time transaction processing has made auditing for data quality difficult and requires auditors to monitor databases on a near-continuous basis. Although several studies have examined the technical feasibility of continuous audit of databases, the economic feasibility of such audits remains an important issue.

This study examined

Acknowledgment

We express our gratitude to the anonymous reviewers and the editors for their contribution to the refinement of this paper.

References (20)

  • M.G. Alles et al.

    Feasibility and economics of continuous assurance

    Auditing: A Journal of Practice and Theory

    (2002)
  • T.A. Amer et al.

    A review of the computer information systems research related to accounting and auditing

    Journal of Information Systems

    (1987)
  • E. Cinlar

    Introduction to Stochastic Processes

    (1975)
  • C.J. Date

    An Introduction to Database Systems

    (1995)
  • G.B. Davis et al.

    The impact of advanced computer systems on controls and audit procedures: A theory and empirical test

    Auditing: A Journal of Practice and Theory

    (1986)
  • R. Elliott

    Assurance service opportunities: Implications for academia

    Accounting Horizons

    (1997)
  • R. Elliott

    Twenty-first century assurance

    Auditing: A Journal of Practice and Theory

    (2002)
  • E.B. Fernandez et al.

    Database Security and Integrity

    (1981)
  • S.M. Groomer et al.

    Continuous auditing of database applications: An embedded audit module approach

    Journal of Information Systems

    (1989)
  • F.B. Halper et al.

    The continuous process audit system: Knowledge engineering and representation

    EDPACS

    (1992)
There are more references available in the full text version of this article.

Cited by (19)

  • Innovation and practice of continuous auditing

    2011, International Journal of Accounting Information Systems
    Citation excerpt:

    The auditor can connect into the system after a period of time or a number of transactions (Du and Roohani, 2007). However, Pathak et al. (2004) find that a continuous audit cycle dependent on transaction volume may be more cost-effective. For example, an audit will be triggered after a number of accounts payable transactions have entered the accounting information system (Fig. 1).

  • General theory of cost minimization strategies of continuous audit of databases

    2007, Journal of Accounting and Public Policy
    Citation excerpt:

    As explained in PCS (2005), the counting strategy requires that a “database is monitored for errors and other integrity violations after every n transactions.”

  • The influence of scope and timing of reliability assurance in B2B e-commerce

    2006, International Journal of Accounting Information Systems