
1.1. Motivation

While results from this work have already enjoyed both direct and indirect influence in the industrial domain concerned, this thesis is not about railway signalling any more than it is about computer science itself. Indeed, this work falls somewhere between the needs of engineering practice on the one hand, and the advancing scientific basis of computing on the other. That it does so is no accident: it arises precisely from the gulf that exists between the community of industrial developers of computer-based systems, whose work is strongly governed by market imperatives, and that of academic computer scientists, who have hitherto been motivated more by the mathematical elegance and precision of their theories. Our endeavour is, in a small way, to shed some light on what lies between these positions, with particular regard to the application of notions from theoretical computer science to relevant problems faced by industry.

Applied theoretical computer science has, for better or worse, become synonymous with the term `formal methods'. Despite several decades of research in the area, formal methods have yet to be wholeheartedly incorporated into the development of computer systems on any scale of design. In large-scale applications such as the control of industrial plant, power generation, aviation and mass transportation, this may be because the move towards computer-dominated solutions to the engineering problems is a relatively recent development for communities and licensing authorities that have strong, conservative safety cultures. On the smaller scale, in consumer electronics say, the financial risks seem too great while formal mathematical techniques for software development and system design remain difficult to apply in general, impossible to use with only a naive understanding of the theory and supporting tools, and poorly promoted by real, or even realistic, case-studies from which to learn.

The emergence of two relevant industrial standards is thus particularly interesting: MOD 00-55: The Procurement of Safety Critical Software in Defence Equipment [69], and IEC 1131-3: Software for Computers in the Application of Industrial Safety-Related Systems [46]. One of the requirements appearing in the former (Interim) standard is the mandatory use of formal methods in projects intended to supply equipment to the UK Ministry of Defence; the latter standard defines several languages (e.g., Function Block Diagrams, or Structured Text) in which to write software for Programmable Logic Controllers (PLCs), which are mainly used in the process control industry. Neither standard is solely for use within the given sector. However, it is fair to say that the assemblage of PLC languages defined in IEC 1131-3 seems insufficiently well defined mathematically for such software to be readily acceptable according to the coding requirements of MOD 00-55. This is a shame, since the relative simplicity of the PLC languages (e.g., when compared to Ada) ostensibly offers excellent opportunities for the kind of formal design and development anticipated in MOD 00-55.

Interestingly, Halang and Krämer [39] (unintentionally) illustrate one of the key technical difficulties involved here. Their work addresses the reluctance of licensing authorities to certify software embedded in process control systems, proposing a formal software development methodology for PLC programs. The underlying theory is OBJ, through which the authors formalise requirements and the functional aspects of the design specification assembled as a Function Block Diagram. Properties of the specification can be verified largely automatically with the mechanical support available for OBJ, and design validation is further supported through symbolic execution. A Structured Text program (Structured Text is a small, procedural tasking language) is then developed and annotated, in the sense of Floyd-Hoare logic, with assertions drawn from the requirements and specification documents---the verification conditions derived from the Structured Text can also be discharged using OBJ. Practical considerations aside, the main problem with this development methodology is simply that the relationship between Function Block Diagrams and Structured Text (and indeed the other PLC languages) is entirely informal. This reflects the relationship between these languages in the defining IEC standard.

Halang and Krämer seriously tackle the crisis in embedded software, but there may yet be some doubt, depending of course on how one interprets the standard, whether their efforts satisfy the strictures of MOD 00-55. (Similar concerns have also been voiced about the RAISE wide spectrum language and programming methodology [23].) While there is no intended cross-reference between the two cited standards, the point is that neither really grasps the strength of applied theoretical computer science. MOD 00-55 is very rigid in its definition of `formal methods' and probably overestimates the benefit currently to be accrued from, if not the difficulty attending, the use of mathematical formalism in developing software all the way down from high-level requirements to detailed code. Although MOD 00-55 addresses software issues only, neither it nor MOD 00-56 [70], with which it is explicitly related, recognises a potential rôle for formal methods in supporting system design as a whole. The IEC standard, on the other hand, underestimates the insight and assurance to be obtained by theoretical analysis of programming languages, by clarifying their mathematical definitions and, in this case, by elaborating the semantics of their interactions. This is a particular concern for languages intended for use in the burgeoning area of safety critical computer systems.

Mandating the use of formal methods is certainly one way of getting designers to use them, but the paucity of guidance of the how-to variety is evidently a major stumbling block: the guidance [68] offered on the selection of formal techniques to meet the requirements of MOD 00-55 suggests that case-studies published in the open literature should already demonstrate their successful industrial application. In 1991 such evidence was thin on the ground even for `mainstream' formal methods like Z and VDM; several years later the situation is not much improved, but the publication of the NIST report by Craigen, Gerhart, and Ralston [22, 23], and the FAA report by Rushby [83], indicate growing awareness in several key industrial sectors. The former was commissioned by the (US) National Institute of Standards and Technology (and other bodies) to inform deliberations within industry and government on the potential impact of formal methods on standards and regulations; the latter was commissioned by the (US) Federal Aviation Administration, which faces the increasingly challenging task of certifying to very high levels of dependability the computer systems on board commercial aircraft in particular.

The NIST report summarises twelve industrial applications of formal methods used with varying degrees of mathematical rigour on projects of substantial commercial importance. The timescales involved ranged from about nine months (Hewlett-Packard, Medical Instruments) to about nine years (GEC Alsthom, Railway Signalling). The projects studied focused mainly on software specification. The GEC case-study was thought particularly successful: 15,000 lines of formally specified and verified code were produced for a signalling system that increased the capacity of one Paris Metro line by 12,000 passengers per hour (25%), so saving the operators the enormous cost of constructing a new line to provide that capacity. The success of the project is perhaps better judged by the fact that GEC Alsthom are presently using the same tools and software development techniques on similar railway contracts. The Hewlett-Packard case-study is singled out here for another reason: it produced 4,500 lines of `zero defect' code (according to the certifier, another branch of the organisation) through machine-supported formal specification, but without proof because the software was not safety critical. The project was also intended to achieve technology transfer by promoting the specification language used, but in this respect it was an abject failure.

Implicated in the additional costs that arose in the least successful case-study summarised in the NIST report (Ontario Hydro, Tripwire Computer) is the lack of tool support for the formal method used (the fidelity of the formalism is not otherwise in question). Given such a small sample it would be premature to suppose that there is in general a relationship between the level of tool support and the costs incurred (or savings made) in applying formal methods. What several examples in the NIST report do indicate is that the methodology into which more rigorous mathematical techniques are introduced is at least as important to a project's success as the strength of the tools applied. However, perhaps the strongest suggestion Craigen et al. put forward is that the standard of tools supporting particularly the deeper applications (where, for example, proofs demonstrating the conformance of code to specifications are required to satisfy licensing authorities) urgently needs to be improved. This may be so, but it raises the question of whether there are tangible benefits presently to be obtained through formal verification at the level of program code: the report does not examine the issue, and the evidence presented hardly convinces one that there are---indeed, it rather illustrates that the rôle of formal methods (or of formal proof) in the design of complex systems is as yet poorly understood. Unfortunately the analysis in the NIST report is quite shallow, and it therefore offers little guidance in these matters.

Rushby [83], on the other hand, offers considerable practical guidance on the uptake of formal methods in a thorough and far-ranging survey of current techniques in support of system (and software) quality control and assurance. This report is intended to inform licensing bodies, in particular the FAA, about the strengths and fallibilities of mathematical formalism in system design, as well as to inform those seeking to license critical software-based systems about where in the development lifecycle to apply formal methods to best effect. The analysis begins with a classification of formal methods that is given in terms of the degree of rigour attending the mathematical arguments used in support of system development. In a simplified form:

Level 0: no use of formal methods; requirements and designs are expressed in natural language, perhaps supported by diagrams and pseudocode.
Level 1: use of concepts and notation from discrete mathematics to replace some of the natural language in requirements and specifications.
Level 2: use of formalised specification languages with some mechanised support tools.
Level 3: use of fully formal specification languages with comprehensive support environments, including mechanised theorem proving or proof checking.

Proofs at levels 1 and 2 are conducted in the manner of the rigorous arguments preferred by mathematicians, although specification formalisms at level 2 may provide deduction rules that could in principle lead to formalising such arguments; the transition to level 3 is therefore marked by the provision of theorem provers and the `fully formal' specification languages alluded to, which are firmly rooted in mathematical logic (making mechanical support a practical necessity) and which have demonstrably sound axiomatisations.

This classification may not be universally applicable, but it serves Rushby's purpose, which is to examine the likely rôle of formal methods in the development lifecycle and in the certification of critical systems. His conclusions address the aviation industry specifically, but they are quite unequivocal in asserting that its current best practice, based as it is on design reviews, inspections and structured walkthroughs, and buttressed by various approaches to testing in support of verification and validation, appears to be adequate for the task of producing certifiable software from clearly stated requirements and unambiguous specifications. Formal methods should first be applied in the earlier stages of the lifecycle, with whatever degree of rigour is appropriate, to produce precise statements of requirements and assumptions, and thoroughly debugged design specifications. Neither the evidence nor the analysis in the FAA report prioritises the application of formal methods to the problem of producing code from specifications.

Rushby goes on to argue that the most rigorous (i.e., level 3) applications of formal methods should be brought to bear precisely where traditional approaches are apparently least adequate: in designing those aspects of digital (avionic) systems that deal with the management of hardware redundancy, algorithms to achieve fault tolerance, and the synchronisation of independent control channels (in particular). The anecdotal evidence in the NIST report also indicates that the lower-level applications of formal methods (e.g., RAISE and Z, which are identified explicitly) are particularly weak in the specification and analysis of the coordination between concurrent activities, whether synchronous or, as is often the case, asynchronous and distributed. To apply formal methods with success at any level to such challenging problems demands intense effort and deep abstractions in order to gain intellectual control over the task at hand. But then formal methods can, and indeed as we shall see in later chapters do, lead to discoveries and insights into the nature of complex control systems that are quite unapproachable by other means.

Some care has to be exercised in transferring Rushby's analysis to other industrial domains, however, since the non-existence of a safe state for airborne systems naturally heightens the appreciation of the need for rigour and robust design throughout the development cycle, in a way that may be less marked elsewhere. Yet many terrestrial control systems are acutely safety critical and have similar architectural needs that involve replicated hardware for high availability, independent data and control channels, voting mechanisms to mask random environmental perturbations, and so on. Moreover, there is an important class of control systems that apparently challenges the notion that formal verification `close to the code' does not deserve a high priority: data-driven control systems. The architecture of such systems typically consists of a generic hardware platform that executes a generic read/write loop (or polling cycle), parameterised by application-specific control laws manifest in static data that are interpreted to yield appropriate responses to polled inputs. Such data (though one might as well call them software) may be highly safety critical in that they govern the behaviour of the control system as a whole; they therefore demand the most rigorous techniques of analysis (and design).
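To fix ideas, the following short Python sketch illustrates the shape of such an architecture. It is purely illustrative: the state variables, rule format and function names are invented for exposition, and bear no relation to the actual Solid State Interlocking design.

# Illustrative sketch of a data-driven controller's polling cycle.
# The rule format and all names here are assumptions made for exposition only.
from typing import Callable, Dict, List, Tuple

State = Dict[str, bool]
Rule = Tuple[Callable[[State], bool], Dict[str, bool]]   # (guard, updates)

def scan_cycle(state: State,
               rules: List[Rule],
               read_inputs: Callable[[], State],
               write_outputs: Callable[[State], None]) -> State:
    """One pass of the generic read/evaluate/write loop."""
    state = {**state, **read_inputs()}    # poll the field inputs
    for guard, updates in rules:          # interpret the static application data
        if guard(state):
            state.update(updates)
    write_outputs(state)                  # drive signals, points, and so on
    return state

# Application-specific 'geographic' data for a single signal: admit a train
# only when the section beyond is clear.
example_rules: List[Rule] = [
    (lambda s: s["route_set"] and not s["section_occupied"], {"signal_green": True}),
    (lambda s: s["section_occupied"], {"signal_green": False}),
]

The generic platform (scan_cycle) is fixed once and for all; only the data (example_rules) change from one installation to the next, which is precisely the separation of concerns discussed below.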

Large-scale examples of data-driven control systems that have been developed in recent years can be found in the railway signalling industry. A specific example is British Rail's Solid State Interlocking, which is described in more detail in later sections of this chapter. Clearly, one of the main attractions of developing data-driven controllers for highly complex systems such as this is that, to a large extent, they effect a clean separation of concerns. On the one hand, the computer systems engineering concerns, such as those identified by Rushby, are focused in the design of the generic hardware platform and control software. On the other hand, the application-specific concerns that can be addressed by domain experts without the need for particular computing skills are focused in the preparation of the data. In the world of railway signalling the application data are instantiated for each network installation---that is, roughly speaking, for each station---and express logical relationships between the various controlled elements in the network, as well as dynamic relationships between trains and signals and the sequencing of signal aspects. Such data vary in the details according to the geographic layout of the network concerned; the generic control software applied to these geographic data then yields the desired control function.

Unfortunately, a complete separation of concerns according to this division between data and control is rather more an ideal than a commonplace. Complications arise in the realm of railway signalling from the practical necessity of subdividing the railway into separate control authorities. One has therefore to design the interfaces, or protocols, between physically separated controllers, and this inevitably blurs the division between what one considers to be application data and what is thought of as generic control software. Some layer of the protocol will very likely be expressed in the application data, since it is required to set up communication to enable one control system to perform signalling functions, specific to the geography of the network concerned, on behalf of another. But if one wishes to avoid programming timers, queues, and watchdogs (the stock in trade of protocol design) at the application layer, then one is left with little choice but to encode specific domain knowledge about the nature of the data transferred, or the functions requested and acknowledged, in the underlying architecture. Thus, in order to conceal the interface at the application layer (which might be construed as a good thing to do), one has to complicate the generic software with non-generic code.

While this example illustrates something of a paradox in the philosophy of (distributed) data-driven control, in practice some experimentation leads to a workable compromise. But this thesis will demonstrate, echoing Rushby's appeal for the utmost rigour in precisely this area of system design, that enormous care has to be exercised in building such interfaces. For when in Chapter 5 our formal analysis is focused on one of the protocols by which Solid State Interlocking achieves distributed control of the railway, we shall indeed find subtle (and not-so-subtle) flaws in the overall design.

A second advantage of data-driven controllers is the introduction of application-specific languages in which to express the control functions. A well-known example is Ladder Logic (or Ladder Diagrams, now standardised in IEC 1131-3), which evolved in the electrical engineering community as a specification notation for relay circuits (with delayed feedback). Ladder Logic has found use in interlocking design too, but with solid state logic gradually replacing the more costly relay logic, more appropriate notations have begun to emerge. An example is British Rail's Geographic Data Language, which is studied in this thesis. Both these examples are expressively weak: application-specific languages are typically not called upon to express anything more complex than sequences of commands like

IF <conditions> THEN <actions>
for simple atomic actions like assigning a variable or setting a register, and conditions that are expressed in terms of internal state variables. From the point of view of software assurance such languages are interesting in several ways. Firstly, they support design abstraction in a notation that is closely integrated with the application domain. Secondly, such unsophisticated languages admit precise mathematical definitions from which compilers and interpreters can be rigorously derived. Finally, they admit efficient interpretations in formal logic, making it possible to realise automatic verification tools for checking behavioural properties of their programs.
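As a rough indication of why such languages are mathematically tractable, the following Python sketch gives one possible representation of commands of this shape together with their evident operational reading. It illustrates the general idea only; it is not the Geographic Data Language itself, and the types and names are assumptions made for exposition.

# A guarded command IF <conditions> THEN <actions>, represented as data.
# Generic illustration only, not British Rail's Geographic Data Language.
from dataclasses import dataclass
from typing import Dict, List

State = Dict[str, bool]

@dataclass
class Command:
    conditions: Dict[str, bool]   # state variables and the values they must take
    actions: Dict[str, bool]      # assignments performed when the guard holds

def execute(cmd: Command, state: State) -> State:
    """The evident semantics: perform the actions exactly when the guard holds."""
    if all(state.get(v, False) == val for v, val in cmd.conditions.items()):
        return {**state, **cmd.actions}
    return state

def run(program: List[Command], state: State) -> State:
    """A program is simply a sequence of such commands, executed in order."""
    for cmd in program:
        state = execute(cmd, state)
    return state

An interpreter of this kind is only a few lines long, and essentially the same definition can be read as a transition relation in a formal logic; it is this directness that makes automatic verification tools for such languages feasible.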

With respect to Solid State Interlocking in particular, one of the problems with data preparation is that the activity is very much like programming, even to the point that the specifications are incomplete. For signalling, specifications are given by control tables which, loosely, indicate all of the conditions that have to be satisfied before a signal can be switched from red to green to admit a train into the track section beyond. These tables have a well-defined syntax and a clear meaning for signalling engineers, but remain exceedingly difficult to `get right'---so difficult, in fact, that some railway authorities have abandoned control tables as specification documents. Nonetheless they are used, along with other documents, by British Rail to guide the production of their geographic data. In the absence of any means to demonstrate the completeness (in the informal sense, but also the formal) of these specifications, there is inevitably a need to verify that the derived code does enjoy certain fundamental safety properties---such as logically prohibiting two trains from simultaneously entering the same section of the railway.
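To indicate what such a verification obligation amounts to, the following toy Python fragment checks that a small, invented set of guarded commands preserves the property that two conflicting signals are never green at the same time. The variables and rules are made up for illustration; the point is only to show the form of the check, since real geographic data involve far too many variables for brute-force enumeration of this kind.

# Toy illustration of a safety check over invented 'geographic data'.
# Signals A and B admit trains into the same section, so they must never
# both be green.  We check by brute force that every command preserves
# this property; a real interlocking is far too large for such enumeration.
from itertools import product
from typing import Dict, List, Tuple

State = Dict[str, bool]
Command = Tuple[Dict[str, bool], Dict[str, bool]]   # (conditions, actions)

VARS = ["route_a_set", "route_b_set", "section_occupied",
        "signal_a_green", "signal_b_green"]

PROGRAM: List[Command] = [
    ({"route_a_set": True, "signal_b_green": False, "section_occupied": False},
     {"signal_a_green": True}),
    ({"route_b_set": True, "signal_a_green": False, "section_occupied": False},
     {"signal_b_green": True}),
    ({"section_occupied": True},
     {"signal_a_green": False, "signal_b_green": False}),
]

def safe(s: State) -> bool:
    return not (s["signal_a_green"] and s["signal_b_green"])

def execute(cmd: Command, s: State) -> State:
    conditions, actions = cmd
    if all(s[v] == val for v, val in conditions.items()):
        return {**s, **actions}
    return s

def preserves_safety(program: List[Command]) -> bool:
    """Does every command, applied in any safe state, leave the state safe?"""
    for values in product([False, True], repeat=len(VARS)):
        s = dict(zip(VARS, values))
        if safe(s) and any(not safe(execute(cmd, s)) for cmd in program):
            return False
    return True

if __name__ == "__main__":
    print("All commands preserve the safety property:", preserves_safety(PROGRAM))

Establishing the same kind of property for genuine geographic data, where exhaustive enumeration is out of the question, is exactly the rôle proposed for theorem proving in the chapters that follow.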

Traditional methods of verification, i.e., those which constitute current practice, are based on inspections of control tables and the derived geographic data, on simple decompilers and syntax comparators, and on massive testing both in the design office and in situ. The enormous combinatorial complexity inherent in railway interlockings means that exhaustive simulation is simply impossible. Yet the syntactic nature of the data also makes visual inspection an extremely arduous task---so that the discovery by this means of deep errors (problems of specification), or even minor `typographic' errors (problems of coding), is haphazard at best. Logical flaws in geographic data do emerge through testing the designs, but there is a clearly recognised need (throughout the industry, in fact) to reduce the costs of extensive testing and to boost productivity in interlocking design. Achieving these ends, and particularly introducing the rigour needed to radically improve quality assurance, calls for a measured introduction of formal methods into the design process. These are some of the reasons why we need theorem proving for geographic data.

