
1986 | Book

Computer Systems for Process Control

Editor: Reinhold Güth

Publisher: Springer US


About this book

The Brown Boveri Symposia are by now part of a firmly established tradition. This is the ninth event in a series which was initiated shortly after Corporate Research was created as a separate entity within our Company; the Symposia are held every other year. The themes to date have been:

1969 Flow Research on Blading
1971 Real-Time Control of Electric Power Systems
1973 High-Temperature Materials in Gas Turbines
1975 Nonemissive Electrooptic Displays
1977 Current Interruption in High-Voltage Networks
1979 Surges in High-Voltage Networks
1981 Semiconductor Devices for Power Conditioning
1983 Corrosion in Power Generating Equipment
1985 Computer Systems for Process Control

Why have we chosen these topics? At the outset we established certain selection criteria; we felt that a subject for a symposium should fulfill the following three requirements:

- It should characterize a part of a thoroughly scientific discipline; in other words, it should describe an area of scholarly study and research.
- It should be of current interest in the sense that important results have recently been obtained and considerable research effort is presently underway in the international scientific community.
- It should bear some relation to the scientific and technological activity of our Company.

Let us look at the requirement "current interest": some of the topics on the list above have been the subject of research for several decades, some even from the beginning of the century.

Table of Contents

Frontmatter

Introduction

Introduction
Abstract
The purpose of a control system is to control a technical process such that it has the desired properties, and to make the process observable to human operators. Control tasks arise in factories, chemical plants, power stations, and in networks for the transport or distribution of water, oil, and electrical energy.
R. Güth

Computer System Architecture

Frontmatter
Principles of Computer Architecture
Abstract
First a definition of computer architecture is introduced together with the notion of machine data types, to provide a basis for the subsequent discussion of sequential machine architectures and parallel processing architectures. Measures taken in sequential machine architectures to mitigate the von Neumann bottleneck and the semantic gap of conventional machines are discussed, as well as modern trends in processor architecture such as RISC processors and pipeline processors. The discussion of parallel processing architecture centers around the dichotomy of SIMD machines versus MIMD machines and the appropriate control structures in both forms. The notion of data structure architecture is introduced and contrasted with the dataflow architecture. Loosely coupled, distributed multicomputer systems are contrasted with a new generation of highly configurable, strongly coupled, multicomputer systems.
W. K. Giloi
Challenges and Directions in Fault-Tolerant Computing
Abstract
Two decades of theoretical and experimental work and numerous recent successful applications have established fault tolerance as a standard objective in computer system design. As with the objective of correctness, and in contrast to the objective of high speed, satisfaction of fault-tolerance requirements cannot be demonstrated by testing alone, but requires formal analysis. Most of the work in fault tolerance has been concerned with developing effective design techniques. Recent work on reliability modeling and formal proof of fault-tolerant design and implementation is laying a foundation for a more rigorous design discipline. The scope of concerns has also expanded to include any source of computer unreliability, such as design mistakes in software, hardware, or at any system level.
Current art is barely able to keep up with the rapid pace of computer technology, the stresses of new applications and the new expansion in scope of concerns. Particular challenges lie in coping with the imperfections of the ultrasmall, i.e., high-density VLSI, and the ultralarge, i.e., large software systems. It is clear that fault tolerance cannot be “added” to a design and must be integrated with other design objectives. Simultaneous demands in future systems for high performance, high security, high evolvability and high fault tolerance will require new theoretical models of computer systems and a much closer integration of practical design techniques.
J. Goldberg
Distributed Real-Time Processing
Abstract
Design issues raised by the achievement of safeness, liveness and timeliness properties for distributed real-time computing systems are first investigated. This is accomplished through an overview of problems and known solutions related to faults/intrusions, variable delays, concurrency, global physical time and process scheduling, and through an examination of the interdependencies which exist between problems and of possibly conflicting or complementary approaches embedded in known solutions. Examples of such interdependencies and of conflicting or complementary solutions are given.
These design issues are then investigated in greater detail for a particular class of processing functions, that is, interprocess communications. Merits and limitations of various algorithms utilized to achieve atomic and reliable message transfers, end-to-end flow control and time-constrained message scheduling are identified. Special attention is given to the multiaccess control problem in local area networks where finite bounded message transfer delays must be guaranteed.
G. Le Lann
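The bounded-delay guarantee mentioned at the end of this abstract is easiest to picture with a time-slotted (TDMA-style) access scheme, in which each station owns one slot per cycle. The sketch below is only our illustration of that idea; the function name and parameters are invented and do not come from the chapter.

```python
# Illustrative sketch: worst-case message delay under time-slotted
# (TDMA-style) medium access. With N stations each owning one slot
# per cycle, a message that just missed its station's slot waits at
# most (N - 1) foreign slots and then transmits in its own slot.

def worst_case_delay(n_stations: int, slot_time: float) -> float:
    """Upper bound on medium-access delay, in the same unit as slot_time.

    Waiting for the next own slot takes at most (n_stations - 1) slots;
    transmission takes one more, so the bound is one full cycle,
    independent of offered load.
    """
    return n_stations * slot_time

if __name__ == "__main__":
    # 16 stations with 1 ms slots: delay is bounded by 16 ms.
    print(worst_case_delay(16, 0.001))  # 0.016
```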
Design of Real-Time Systems
Abstract
In a real-time application, the value dimension and the time dimension of information are of comparable importance. This paper presents a methodology for the design of distributed real-time systems which requires an early binding of software functions to hardware components, such that the timing and reliability properties of a still unrefined design can be analyzed. After a discussion of real-time system requirements, some design principles for distributed fault-tolerant real-time systems are presented. In the final section an overview of a design environment and a set of software tools which support this methodology is given.
H. Kopetz
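The benefit of early binding claimed in this abstract, that the timing properties of a still unrefined design become analyzable, can be illustrated with a simple worst-case budget check. The task set, node names and numbers below are hypothetical, a minimal sketch rather than the chapter's method.

```python
# Illustrative sketch: checking an unrefined design's timing budget.
# Because each function is bound to a node early, per-node worst-case
# execution times (WCETs) can be summed against the cycle deadline
# before any code is refined. All values here are invented.

from collections import defaultdict

# (function, node, wcet_ms) -- hypothetical bindings, not from the chapter
bindings = [
    ("acquire_sensors", "node_A", 2.0),
    ("control_law",     "node_A", 3.5),
    ("actuate",         "node_B", 1.0),
    ("log_events",      "node_B", 4.0),
]
CYCLE_DEADLINE_MS = 10.0

load = defaultdict(float)
for func, node, wcet in bindings:
    load[node] += wcet

for node, total in sorted(load.items()):
    verdict = "OK" if total <= CYCLE_DEADLINE_MS else "OVERLOADED"
    print(f"{node}: {total:.1f} ms of {CYCLE_DEADLINE_MS} ms -> {verdict}")
```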
Architectures for Process Control
Abstract
The control of industrial processes by computers has brought not only new possibilities but also new challenges to engineers. The requirements in terms of response time, computing power, flexibility and fault tolerance are stricter than in the fields of commercial or scientific computation, since the work has to be carried out in real time. An additional difficulty is that most of these requirements are contradictory. A solution to the problems of complexity, flexibility and geographical separation is provided by a distributed architecture, which consists of (multiprocessor) nodes located at strategic points of the plant and interconnected by an industrial network. Brown Boveri have been developing such a distributed architecture in the form of building blocks and have tested it successfully in power plants and other control systems. Such a distributed architecture results in a high degree of freedom in the configuration and extension of a control system, and in a firm basis for fault tolerance. The geographical distribution of hardware finds its counterpart in the distribution of software in the form of functionally independent units which communicate over well-defined interfaces. The drawback of distribution is the communication bottleneck and the consequent response-time problems. Several means are used to reduce this bottleneck, including data reduction at the source and broadcast of process data to all interested receivers. Broadcasting permits a considerable reduction in communication traffic, addresses the response-time and flexibility problems, and at the same time provides the basis for keeping the redundant units up to date for fault tolerance. One particular application has been developed at the Brown Boveri Research Center in the form of a fault-tolerance mechanism based on remote procedure calls.
H. Kirrmann, T. Lalive d’Epinay, H. P. Stoeckler
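As a rough picture of the scheme described in this abstract, where a process variable is reduced at the source and then broadcast once to every interested receiver, including redundant standbys, consider the following sketch. The class and its interface are our own invention, not BBC's implementation.

```python
# Illustrative sketch: source-side data reduction plus broadcast.
# A value is published only when it changes significantly (reduction
# at the source); one logical send reaches every subscriber, so
# redundant standby units stay up to date as a side effect.

class Broadcaster:
    def __init__(self, deadband: float):
        self.deadband = deadband      # minimum change worth publishing
        self.last_sent = None
        self.subscribers = []         # active unit, standbys, logger, ...

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, value: float):
        if self.last_sent is not None and abs(value - self.last_sent) < self.deadband:
            return                    # suppressed: reduces network traffic
        self.last_sent = value
        for deliver in self.subscribers:
            deliver(value)            # one send, many receivers

temperature = Broadcaster(deadband=0.5)
temperature.subscribe(lambda v: print("controller:", v))
temperature.subscribe(lambda v: print("hot standby:", v))
for reading in (20.0, 20.1, 20.9):    # the middle reading is suppressed
    temperature.publish(reading)
```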
Performance Modelling of Control Systems
Abstract
Control systems must satisfy stringent performance requirements that can be derived from the nature of the process controlled. The distinguishing characteristics of an electrical power system and their effects on the control system are described. A hybrid analytical/simulation model is developed that describes the control-system behavior for both nonsaturated and saturated resources. The model is incorporated in an application-oriented, user-friendly, interactive, performance-analysis tool that can be used to support the planning, design and tuning of a process-control system.
M. Vitins, K. Signer
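The chapter's hybrid analytical/simulation model is not reproduced here, but the analytical treatment of a nonsaturated resource can be illustrated with the textbook M/M/1 queueing formula; this example is ours, not the authors' model.

```python
# Illustrative sketch: mean response time of a single resource modeled
# as an M/M/1 queue. The closed form is valid only while the resource
# is nonsaturated (utilization rho = lambda/mu < 1); beyond that,
# simulation or a saturated-regime model is needed, which is where a
# hybrid approach like the chapter's comes in.

def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean response time R = 1 / (mu - lambda), requiring lambda < mu."""
    if arrival_rate >= service_rate:
        raise ValueError("resource saturated: analytical M/M/1 model invalid")
    return 1.0 / (service_rate - arrival_rate)

# 80 requests/s offered to a server that handles 100 requests/s:
print(mm1_response_time(80.0, 100.0))  # 0.05 s mean response time
```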

Human-Computer Interaction

Frontmatter
Dialog Design: Principles and Experiments
Abstract
Programming methodology has emphasized different criteria for judging the quality of software during the three decades of its existence. In the early days of scarce computing resources the most important aspect of a program was its functionality — what it can do, and how efficiently it works. In the second phase of structured programming the realization that programs have a long life and need continuous adaptation to changing demands led to the conclusion that functionality is not enough — a program must be understandable, so that its continued utility is not tied to its inventor. Today, in the age of interactive computer usage and the growing number of casual users, we begin to realize that functionality and good structure are also not enough — good behavior towards the user is just as important. Interactive use of computers by users with a wide variety of backgrounds, in different applications, is ever increasing.
Novel styles of interfaces that exploit bitmap graphics have emerged after a long history of interactive command languages based on alphanumeric terminals. This development is repeating, with a twenty-year delay, the history of programming languages: large collections of unrelated operations are being replaced by systematic structures that satisfy general principles of consistency. We do not attempt to give a comprehensive or balanced survey of the many approaches to human-computer interface design being investigated today. We summarize a few general issues that designers of human-computer interfaces encounter, and illustrate them by means of examples from our research activities, some of them performed jointly with industry.
J. Nievergelt, C. Muller, H. Sugaya
Engineering Graphics: Overview
Abstract
CAD systems are of increasing importance, both as a basic technology and as a tool for engineering applications. This paper surveys different types of CAD workstations (PC-based, high-performance) and the possibilities of interconnecting them, with special reference to graphics and geometry standards. The CAD workstation market and the worldwide revenue in engineering graphics are briefly surveyed. The user benefits of CAD and the expected trends in CAD system technology are listed. The bibliography lists references to related work and further technical details.
J. Encarnacao

Software Engineering

Frontmatter
Issues in Process and Communication Structure for Distributed Programs
Abstract
Many proposals have been made for structuring distributed programs. This paper looks at one such proposal, the one embedded in the Argus programming language and system. The paper provides a discussion of decisions made in the two major areas of process structure and communication, and compares the chosen structures with alternatives. The paper emphasizes the rationale for decisions and the issues that must be considered in making such decisions.
B. Liskov, M. Herlihy
Requirements Engineering and Software Development: A Study Toward Another Life-Cycle Model
Abstract
The problem of how to increase both productivity and quality is of primary concern in software production. The waterfall life-cycle model is a strict sequence of phases whose order is fixed. The paper describes a unified life-cycle model which is a derivative of the waterfall model: a scheme for connecting the phases of the life cycle in a customized order so as to bring about higher productivity. A distinctive feature of the unified model is the “semantic model”, which is created at the top phase of the life cycle. The selected phases are interconnected to this phase in whatever order the designer chooses.
In the waterfall life-cycle model, consideration of implementation is delayed until after the requirements are completely specified. In the unified life-cycle model, designers are allowed to devise a prototype model which satisfies user requirements, and to describe and execute it during the requirements-specification phase. The semantic model is a representation of this prototype in a semantics compatible with that of the implementation. In other words, the semantics used to describe the semantic model is consistent with the semantics in which the software for the target computer system is programmed. Applying one semantics throughout the life cycle is beneficial because the maintainability of the documents produced in the early phases improves to the degree to which the source text for those documents is maintained.
The semantic model is a set of objects interconnected with one another through the passing of messages. The paper describes a technique for defining objects from the requirements and for describing them in an object-oriented language called OKBL (object-oriented knowledge-based language). Each defined object may be transformed into a set of functions or into a set of program modules in the succeeding design phases. The final code for the target computers may also be written in OKBL. Following one semantics throughout the life cycle increases productivity for the following reasons:
- Describing specifications in the same semantics throughout the whole life cycle increases traceability between phases and results in improved maintainability.
- Reusing existing objects simplifies the problem of improving reusability.
- The time spent in the intermediate phases, where the transformations from requirements to programs are made, is shortened.
Y. Matsumoto
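The semantic model described above, objects interconnected through the passing of messages, might be pictured as in the following sketch. OKBL itself is not shown (no sample of it is available here); this is a generic object/message rendering in Python with invented names.

```python
# Illustrative sketch: a "semantic model" as objects exchanging
# messages. A generic rendering of the idea, not OKBL syntax.

class ModelObject:
    def __init__(self, name):
        self.name = name
        self.handlers = {}            # message name -> handler

    def on(self, message, handler):
        self.handlers[message] = handler

    def send(self, target, message, payload):
        return target.handlers[message](payload)

sensor = ModelObject("sensor")
display = ModelObject("display")
display.on("show", lambda value: print(f"display <- {value}"))

# The requirement "the sensor reports its reading to the display"
# becomes a single message send in the executable specification:
sensor.send(display, "show", 42)
```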
Software Quality Assurance
Abstract
Quality assurance has been in existence for a long time in many fields of engineering, but in software engineering the idea is still fairly new. In this paper we describe the current state of Software Quality Assurance (SQA) and propose a classification of techniques. The paper consists of three parts. Firstly, the fundamental terms are defined and the state of the art is identified; this includes a survey of standards and proven techniques, as published in journals and textbooks. Secondly, practical experiences with SQA techniques used at Brown Boveri are presented, and their successes and failures discussed. It is difficult to obtain reliable data on the benefits of SQA; therefore only some general conclusions can be reached regarding its impact on quality and productivity. Thirdly, SQA consists of many different components which are difficult to order according to their relative importance. With the aim of establishing a common understanding and terminology, we have defined several levels of SQA which may serve as a general framework.
K. Fruehauf, J. Ludewig, H. Sandmayr
Functional Specification of Process-Control Software
Abstract
Functional programming languages are based on the mathematical concept of recursive functions. Such languages define the desired result of a computation as a static input-output mapping. Thus, programs expressed in a functional language can be considered executable specifications. In contrast to procedural languages, the sequence of computation is not explicitly specified, which facilitates the construction of programs.
At the BBC Research Center, the concepts of functional languages are applied to the development of a graphical programming environment which is used for the construction of process-control software. This programming environment is intended to be used by control engineers who may have little knowledge of computer science. Programs are constructed as data-flow graphs which correspond to the function charts traditionally used for the description of process-control mechanisms. The computer-supported graphic system is used for both programming and documentation. The graphic source is translated automatically into executable code.
F. Kaufmann, D. Schillinger, U. Schult
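A data-flow graph of the kind described, function blocks wired together by their inputs and outputs with no explicit sequencing, can be sketched as plain function composition; the block names and numbers below are invented, not taken from the BBC environment.

```python
# Illustrative sketch: a function chart as a data-flow graph.
# Each block is a pure function; the "program" is just the wiring,
# and the evaluation order follows the data dependencies rather
# than any sequencing written by the author.

def scale(x, gain):   return x * gain
def clamp(x, lo, hi): return max(lo, min(hi, x))
def alarm(x, limit):  return x > limit

def chart(raw_reading):
    """One hypothetical chart: scale -> clamp, with an alarm tap."""
    engineering = scale(raw_reading, gain=0.1)   # raw counts -> bar
    safe = clamp(engineering, lo=0.0, hi=40.0)
    return safe, alarm(engineering, limit=35.0)

print(chart(380))   # (38.0, True): value passed on, alarm raised
```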
Logic Programming
Abstract
Logic programming is based on formal mathematical concepts of first order predicate logic. A logic program is a static description of the problem in the form of normalized logic statements (Horn clauses). The execution of a logic program relies on automated theorem proving. In contrast to conventional procedural languages, in which the sequence of computation is specified explicitly by control structures, in logic programs information is provided only about “what” is to be solved, not “how” it is to be solved. Hence, logic programs represent executable specifications and are well suited for fast prototyping.
The concepts of logic programming have not yet been fully realized, because of the large memory and execution-time requirements they place on conventional computers. The programming language Prolog is based on logic programming, but employs various additional, extra-logical features in order to be a practical programming language. Recent advances in compilation and hardware design are continuously improving the feasibility of logic-based programming languages for industrial applications.
At the BBC Research Center, the system Modula-Prolog is being used for fast prototyping and knowledge engineering. Modula-Prolog permits the arbitrary combination of Modula-2 and Prolog programs and hence utilizes the advantages of both procedural and logic programming. Modula-Prolog has been applied to the development of knowledge-based expert systems for the configuration and diagnosis of technical systems.
J. Kriz, H. Sugaya
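To make the "what, not how" point concrete: a logic program states facts and Horn-clause rules, and the system derives the consequences. The toy forward-chaining interpreter below is our own Python illustration of that idea, not Modula-Prolog and not real Prolog (which adds variables, unification and backtracking).

```python
# Illustrative sketch: Horn clauses over ground (variable-free) atoms,
# solved by naive forward chaining to a fixed point. The program only
# says *what* holds; the interpreter decides *how* to compute it.

facts = {("parent", "anna", "berta"), ("parent", "berta", "clara")}

# rule: grandparent(X, Z) :- parent(X, Y), parent(Y, Z)
def apply_grandparent(facts):
    derived = set()
    for (p1, x, y1) in facts:
        for (p2, y2, z) in facts:
            if p1 == p2 == "parent" and y1 == y2:
                derived.add(("grandparent", x, z))
    return derived

# iterate until no new consequences can be derived
while True:
    new = apply_grandparent(facts) - facts
    if not new:
        break
    facts |= new

print(("grandparent", "anna", "clara") in facts)  # True
```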
Metadata
Title
Computer Systems for Process Control
Editor
Reinhold Güth
Copyright Year
1986
Publisher
Springer US
Electronic ISBN
978-1-4613-2237-5
Print ISBN
978-1-4612-9311-8
DOI
https://doi.org/10.1007/978-1-4613-2237-5