A safety-critical system is a system whose failure could result in significant economic damage or loss of life. Examples include aircraft flight-control systems, nuclear power stations, and missile systems. It is essential to employ rigorous processes in their design and development, and software testing alone is usually insufficient to verify the correctness of such systems.
The expected usage of the software (or operational profile) is a quantitative characterization (usually based on probability) of how the system will be used.
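An operational profile of this kind can be used directly to drive statistical usage testing: test cases are drawn at random according to the probabilities in the profile, so the test workload mirrors expected field usage. The sketch below illustrates the idea; the operations and probabilities for a hypothetical ATM are assumptions for illustration, not taken from the text.

```python
import random

# Hypothetical operational profile for an ATM: each operation is paired
# with its estimated probability of occurring in field use. The names
# and probabilities below are illustrative assumptions.
OPERATIONAL_PROFILE = {
    "check_balance": 0.50,
    "withdraw_cash": 0.35,
    "deposit": 0.10,
    "change_pin": 0.05,
}

def generate_test_sequence(profile, n, seed=0):
    """Draw n operations at random according to the usage probabilities."""
    rng = random.Random(seed)
    ops = list(profile)
    weights = [profile[op] for op in ops]
    return rng.choices(ops, weights=weights, k=n)

# A statistical test session of 1000 operations distributed per the profile.
tests = generate_test_sequence(OPERATIONAL_PROFILE, 1000)
```

Because failures are then encountered with roughly the frequency a real user would encounter them, reliability estimates derived from such testing reflect the reliability the user will actually experience.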
We are assuming that the defect has been corrected perfectly with no new defects introduced by the changes made.
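This "perfect debugging" assumption underlies many software-reliability growth models (e.g. Jelinski-Moranda): each repair removes exactly one defect and introduces none, so the failure rate falls in proportion to the defects remaining. A minimal sketch, with an assumed initial defect count and per-defect failure rate chosen purely for illustration:

```python
# Perfect-debugging sketch: each fix removes exactly one defect and
# introduces no new ones. The initial defect count N0 and per-defect
# failure-rate contribution PHI are illustrative assumptions.
N0 = 10      # initial number of defects in the software
PHI = 0.02   # failure rate (per hour) contributed by each remaining defect

def failure_rate(fixes_applied):
    """Failure rate after a given number of perfect corrections."""
    remaining = max(N0 - fixes_applied, 0)
    return PHI * remaining

# Under the assumption, the rate falls monotonically and reaches zero
# once every defect has been (perfectly) removed.
rates = [failure_rate(k) for k in range(N0 + 1)]
```

If, instead, fixes could introduce new defects (imperfect debugging), the rate would no longer be guaranteed to decrease with each correction.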
MTBF = MTTF + MTTR, i.e. the mean time between failures is the mean time to failure plus the mean time to repair.
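The relationship can be checked with a short worked example. The failure and repair times below are assumed data for illustration only:

```python
# Worked example of MTBF = MTTF + MTTR using illustrative (assumed)
# data: operating time before each failure, and the repair time that
# followed each failure, both in hours.
uptimes = [120.0, 95.0, 145.0]   # time to failure in each operating cycle
repairs = [4.0, 6.0, 5.0]        # time to repair after each failure

mttf = sum(uptimes) / len(uptimes)   # mean time to failure
mttr = sum(repairs) / len(repairs)   # mean time to repair
mtbf = mttf + mttr                   # mean time between failures

# mttf = 120.0 h, mttr = 5.0 h, so mtbf = 125.0 h: on average a new
# failure begins 125 hours after the start of the previous one.
```

The distinction matters for repairable systems: MTTF measures how long the system runs before failing, while MTBF also accounts for the downtime spent restoring it to service.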
It is questionable whether step-wise refinement is suitable in mainstream software engineering, as it involves rewriting a specification several times, and proving that the refinement steps are valid takes significant time. It is more relevant to the safety-critical field.