A set-theoretic formalism, AOG, is introduced to support automated verification of pointer programs. AOG targets pointer reasoning for source programs before compilation (i.e. before removal of field names). Pointer structures are represented as object graphs instead of heaps. Each property in AOG is a relation between object graphs and name assignments of program variables, and specifications result from composing properties. AOG extends Separation Logic's address-disjoint separating conjunction to more restrictive compositions with different disjointness conditions; the extension is shown to be strict when fixpoints are present. A composition that is a "unique decomposition" decomposes any given graph uniquely into two parts; an example is the separation between the non-garbage and garbage parts of memory. Although AOG is in general undecidable, it is used to define the semantics of specific decidable logics that support automated program verification for specific topologies of pointer structure. One logic studied in this paper describes pointer variables located on multiple parallel linked lists. The logic contains quantifiers and fixpoints but is nonetheless decidable; it is applied to the automated verification of in-place list reversal. The Schorr-Waite marking algorithm is also considered. The technique of unique decomposition is particularly useful for establishing laws in such logics.
The important notion of adaptivity of a distributed information system is formalised, following Dijkstra's idea of self-stabilisation. Moreover the formalisation quantifies the extent to which a system adapts, enabling degrees of adaptivity to be specified and hence assured of an implementation. The ideas are expressed without commitment to any particular semantic formalism and demonstrated on the cluster-based routing of messages in mobile ad hoc networks.
This paper introduces a new method for the synthesis of gate-level asynchronous controllers. The method is based on a user-level specification formalism, the path model, and a simple algebraic tool, the signal calculus, that in combination provide a formalism for the specification and analysis of path-model designs. The path model view of communication is state-based as in other asynchronous finite-state-machine methods; however it focuses on the critical component of state machines, the paths, and ignores other non-critical free states. The signal calculus, a temporal lifting of Boolean logic, helps to formalise path-model specifications algebraically, removing much of the inadequacy of traditional tabular tools like flow tables, whose number of cells is exponential in the number of inputs. The method is demonstrated on two examples.
The recovery of behaviour from its approximation over substructures is fraught with pathology. Here the extent is considered to which the behaviour of a continuous function on a locally compact Abelian group can be approximated by its behaviour on proper closed subgroups. Known results are summarised when the behaviour concerns integrability and the group is the circle; then boundedness and other limiting behaviour "at infinity" are considered for more general groups. It is shown that if a continuous function is bounded on each proper closed subgroup of a connected locally compact Abelian group then it is bounded on the whole group. As befits this Festschrift, the techniques are predominantly topological. In passing we reflect on criteria for the difficult problem of identifying "substructures" in Computer Science.
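In symbols, the central result of that abstract may be restated as follows (the notation is assumed by this summary, not taken from the paper):

```latex
% Boundedness propagates from proper closed subgroups to the whole group.
\begin{theorem}
Let $G$ be a connected locally compact Abelian group and let
$f \colon G \to \mathbb{C}$ be continuous. If the restriction $f|_H$ is
bounded for every proper closed subgroup $H \le G$, then $f$ is bounded
on all of $G$.
\end{theorem}
```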
The definition of data refinement between datatypes is expressed in terms of all programs that invoke procedures of the types. As a result it is laborious to check. Simulations provide sound conditions that, being 'static', facilitate checking; but then their soundness is important. In this paper we extract a technique from the heart of the theory and show it to be equivalent to data refinement; it plays a key role in establishing properties about simulations in any of the computational models. We survey the difficulties confronting the theory when the procedures and invoking programs may contain probabilistic choices, and show that then each of the two simulation conditions is alone not complete as a rule for data refinement, even if the datatypes are deterministic (in contrast to the standard case). The last part of the paper discusses work in progress.
Emergent behaviour - system behaviour not determined by the behaviours of system components when considered in isolation - is commonplace in multi-agent systems, particularly when agents adapt to environmental change. This article considers the manner in which Formal Methods may be used to establish the trustworthiness of such systems. Techniques are considered for capturing emergent behaviour in the system specification, and the incremental refinement method is then applied to justify design decisions embodied in the implementation. To demonstrate the approach, one- and two-dimensional versions of Conway's Game of Life are studied, and in particular an incremental refinement of 'the glider' is given from its specification.
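The glider's specification can be stated directly in executable form. The following sketch (an illustration for this summary, not code from the article) simulates Conway's Game of Life on an unbounded grid and checks the defining property of the glider: after one period of four generations, the live cells reappear translated diagonally by one cell.

```python
from collections import Counter
from itertools import product

def step(live):
    """One Game-of-Life generation: a set of live (row, col) cells to the next."""
    counts = Counter((r + dr, c + dc)
                     for (r, c) in live
                     for dr, dc in product((-1, 0, 1), repeat=2)
                     if (dr, dc) != (0, 0))
    # A cell is live next generation if it has 3 live neighbours,
    # or 2 live neighbours and is currently live.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The glider, in the orientation that travels one cell down-right per period.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}

gen = glider
for _ in range(4):          # one full period of the glider
    gen = step(gen)

# Emergent property: the pattern recurs, translated by (1, 1).
assert gen == {(r + 1, c + 1) for (r, c) in glider}
```

Note that translation-after-four-steps is exactly the kind of property that is emergent: it is nowhere stated in the local birth/survival rule, yet follows from it.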
This paper introduces a normative principle for the behaviour of contemporary computing and communication systems and considers some of its consequences. The principle, named the principle of distribution, says that in a distributed multi-agent system control resides as much as possible with the individuals constituting the system, rather than in centralised agents; and when that is infeasible or becomes inappropriate due to environmental changes, control evolves upwards from the individuals to an appropriate intermediate level rather than being imposed from above. The setting for the work is the dynamically changing global space resulting from ubiquitous communication. Accordingly the paper begins by determining the characteristics of the distributed multi-agent space it spans. It then fleshes out the principle of distribution, with examples from daily life as well as from Computer Science. The case is made for the principle of distribution to work at various levels of abstraction of system behaviour: to inform the high-level discussion that ought to precede the more low-level concerns of technology, protocols and standardisation, but also to facilitate those lower levels. Of the more substantial applications of the principle of distribution, a technical example concerns the design of secure ad hoc networks of mobile devices, achievable without any form of centralised authentication or identification, but in a solely distributed manner. Here the context is how the principle can be used to provide new and provably secure protocols for genuinely ubiquitous communication. A second, more managerial, example concerns the distributed production and management of open source software, and a third investigates some pertinent questions involving the dynamic restructuring of control in distributed systems, important in times of disaster or malevolence.
The 'ensembles' identified by the InterLink working group on Software Intensive Systems comprise vast numbers of components adapting and interacting in complex and even unforeseen ways. If the analysis of ensembles is difficult, their synthesis, or engineering, is downright intimidating. We show, following a recent three-level approach to agent-oriented software engineering, that it is possible to specialise that intimidating task to three levels of abstraction (the 'micro', 'macro' and 'meso' levels), each potentially manageable by interesting extensions of standard formal software engineering. The result provides challenges for formal software engineering but opportunities for ensemble engineering.
The purpose of the UNU-IIST Research Day (28-ii-2008) was for us to tell each other about our research: the ideas behind it, why they are important and where they are leading. The 'metascience' was as important as the science and provided UNU-IIST fellows with the opportunity to be trained in that aspect of the management of research. Judging by the amount of discussion it provoked, the day was a success. This paper is an elaboration of the talk given by the author, providing more detail than was possible in the talk, but which is necessary if the reader is to evaluate the case being made. The paper makes the case for the incremental approach to program semantics using Galois connections. It considers a sustained case study, that of sequential programs and their related commands (specifications), with the final addition of probabilistic choice. 'Structural' decisions are discussed throughout and the conclusion reflects on several issues that arose from the day.
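The Galois connections underpinning that approach obey the adjunction law α(X) ⊑ Y iff X ⊑ γ(Y). A minimal executable sketch (an illustration for this summary, not taken from the talk; the particular function f is an arbitrary assumption) checks this law exhaustively for the classic image/preimage connection between powerset lattices:

```python
from itertools import combinations

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

f = {0: 'a', 1: 'a', 2: 'b'}                            # an arbitrary small function

alpha = lambda X: frozenset(f[x] for x in X)            # lower adjoint: direct image
gamma = lambda Y: frozenset(x for x in f if f[x] in Y)  # upper adjoint: preimage

# The adjunction: alpha(X) <= Y  iff  X <= gamma(Y), for all X, Y.
adjoint = all((alpha(X) <= Y) == (X <= gamma(Y))
              for X in powerset(f.keys())
              for Y in powerset(set(f.values())))
assert adjoint
```

The same shape of law is what licenses the incremental development of semantic models: each new model is linked to its predecessor by such an adjoint pair.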
Compositional reasoning about pointers and references is crucial to verification of contemporary software. This paper introduces a pointer logic that extends Separation Logic with a fixpoint operator and new compositions different from separating conjunction. A higher level of abstraction can be achieved if the right compositions are used in the right places. In particular, if a relation is a full unique decomposition then the corresponding composition decomposes any given graph uniquely into two parts; an example is the separation between the non-garbage and garbage parts of memory. The logic is proved to be largely satisfaction-decidable in the sense that there is a finite procedure to determine whether or not a program state satisfies a formula. The main technical result of the paper is that if only full unique decompositions are used in compositions, then a rather general fragment becomes validity-decidable. The logic is axiomatized and, with the help of an infinitary inference rule, proved to be complete. Separation Logic without pointer arithmetic is shown to be a fragment of the logic.
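The garbage/non-garbage split is the canonical unique decomposition: reachability from the program variables determines both parts outright, so no other split satisfies the composition. The following sketch (an illustration for this summary, not code from the paper) computes it for a finite object graph:

```python
def decompose(graph, roots):
    """Uniquely split an object graph into (non-garbage, garbage).

    graph: dict mapping each node to an iterable of successor nodes.
    roots: the nodes directly named by program variables.
    The split is unique because the live part is exactly the set of
    nodes reachable from the roots.
    """
    live, frontier = set(), set(roots)
    while frontier:
        n = frontier.pop()
        if n not in live:
            live.add(n)
            frontier |= set(graph.get(n, ()))
    garbage = set(graph) - live
    return live, garbage

# A heap with a reachable list 1 -> 2 and an unreachable cycle 3 <-> 4.
heap = {1: [2], 2: [], 3: [4], 4: [3]}
live, garbage = decompose(heap, roots={1})
assert live == {1, 2} and garbage == {3, 4}
```

In the logic, composing a non-garbage property with a garbage property under this decomposition then describes the whole state unambiguously.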
A major current challenge in the area of complex multi-agent systems is the conceptual approach to, and practical treatment of, emergent behaviour. This paper indicates how Formal Methods may be applied to engineer multi-agent adaptive systems exhibiting emergence, by consideration of an 'emergence predicate' in the specification and incremental derivation of interacting components that achieve that predicate. To perform that task, 'policies' are introduced as a novel technique for constraining agents in a hierarchical manner so that the multi-agent implementation as a whole exhibits the specified emergence. The ideas are demonstrated on models of the foraging behaviour of an ant colony, and used to analyse various designs appearing in the literature.
The complex systems lying at the heart of ensemble engineering exhibit, and are perhaps even characterised by, emergent behaviour: behaviour that is not explicitly derived from the functional description of the components of the ensemble at the level of abstraction at which they are provided. A typical example from Artificial Life comprises an ensemble consisting of a flock of birds; the components are the individual birds, considered autonomously, and the emergent behaviour is that of the flock. Emergent behaviour can be understood by expanding the level of abstraction of description of the components to augment their functional behaviour; but that is infeasible in specifying ensembles of realistic size (although it is the main implementation method) since it amounts to consideration of an entire implementation. An alternative must be taken. It is proposed that to specify an ensemble, the functional behaviour of its components instead be augmented by a system-wide predicate (or conjunction of 'policies', which may be seen as kinds of weak 'ethical principles', when the components are agents) pertaining to the collective behaviour of its components and accounting for emergence. An implementation, however, ranges in distribution from being centralised to being fully distributed, depending on the degree to which the emergent behaviour is incorporated in the components. So the next step in this work consists of identifying implementation designs. A further step consists of establishing criteria for the conformance of an implementation against a specification, and the final step applies those ideas to case studies using model checking. An important application of these ideas lies in ensembles whose dynamics is controlled by artificial agents, for which a satisfactory theory of ethical behaviour can be given that is not based on free will, and takes the form of policies.
The tension between emergence and reductionism, that is felt in moving from a specification to an implementation, is resolved by making explicit the level of abstraction of a description.
A new asynchronous pipeline is presented for the self-timed design discipline of quasi-delay insensitivity with dual-rail encoding and four-phase handshaking. The function blocks of the pipeline consist of a particularly simple universal gate, the quadgate (presented in the appendix), comprising only the conventional nand and nor gates and supported by an equally simple development method. The simple quadgate enables optimisation techniques for completion-detection circuitry and pipeline architecture. The pipeline is reported to compare favourably with analogous designs and is demonstrated here on a selection of 32-bit adders, simulated on 10,000 pairs of random inputs.
Reasoning about a distributed system that exhibits a combination of probabilistic and temporal behaviour does not seem to be easy with current techniques. The reason is the interaction between probability and abstraction, made worse by remote synchronisation. In this paper the recently proposed language ptsc (for probability, time and shared-variable concurrency) is extended by constructs for interleaving and local block. Both enhance a designer's ability to modularise a design; the latter also permits a design to be compared with its more abstract specification, by concealing appropriately chosen design variables. Laws of the extended language are studied and applied in a case study consisting of a faulty register-transfer-level design.
The success of model-based testing, in automating the testing of an implementation given its state-based (or model-based) specification, raises the question of how best the specification can be tweaked in order to facilitate that process. This paper discusses several answers. Motivated by an example from web-based systems, and taking account of the restriction imposed by the testing interface, it considers both functional and non-functional properties. The former include laws, implicit system invariants and other consistency conditions whilst the latter include security leaks. It concludes that because of the importance of the link between specification and implementation in the testing process, there is a trade-off between genuinely useful testing information and the incorporation of some degree of information about the link, not normally regarded as part of the specification.
The purpose of this paper is to explore the models and forms of reasoning appropriate for analysing the temporal behaviour of circuits. That leads from various Boolean models appropriate for combinational and synchronous circuits, to the event model for asynchronous circuits. Circuit behaviour is captured at each level of abstraction using model-specific operators, whose laws are used to perform algebraic (rather than semantic) reasoning about transient behaviour. The results have a particularly satisfying form: for example the hazards of a conditional are calculated algebraically to be a conditional of hazards. A method is developed for simulating an asynchronous combinational circuit, that starts with an untimed, logic-level, specification, passes to a timed refinement and then moves to a simple timed automaton for simulation in UPPAAL. Each stage in the method is supported by a projection relating the models at the various levels, so that correctness can be confirmed.
Any model for sequential programming that contains both (demonic) nondeterminism and probabilism must reveal the difference between these two well-known programs: the first makes a nondeterministic choice and then assigns to some variable a fair choice between Booleans, whilst the second reverses the order, making the fair choice before the nondeterministic one. The interest and subtlety of those programs lies in the fact that without probability, disjoint assignments commute and so those programs might naively have been expected to be indistinguishable. Previous relational models separate them by exploiting a complex definition of sequential composition. This paper provides an alternative approach in which sequential composition is instead standard relational composition. The technical contribution that enables that to be achieved expands the binary (initial-final) state view of computation to incorporate a third state, the 'original' state, which checkpoints the most recent nondeterministic choice to be made on the basis of past probabilistic choices and so allows independent nondeterministic choices to be made against them. The resulting model has the standard relational model embedded in it by a Galois connection.
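The distinction between the two orderings can be computed directly. The sketch below (an illustration for this summary, not code from the paper; the postcondition x = y is assumed for the example) takes the demon to minimise the probability of establishing x = y. When the demon chooses after the coin it can observe the outcome and always defeat the postcondition; when it chooses before, the coin defeats it half the time:

```python
from fractions import Fraction
from itertools import product

BOOLS = (True, False)
half = Fraction(1, 2)

# Ordering A: fair coin assigns x, then a demonic choice of y that may
# inspect x.  A demon strategy maps the observed x to a value for y.
p_coin_then_demon = min(
    sum(half for x in BOOLS if x == strat[x])
    for strat in ({True: a, False: b} for a, b in product(BOOLS, repeat=2))
)

# Ordering B: demonic choice of y first (blind), then fair coin assigns x.
p_demon_then_coin = min(
    sum(half for x in BOOLS if x == y)
    for y in BOOLS
)

assert p_coin_then_demon == 0                # demon sets y != x, always
assert p_demon_then_coin == Fraction(1, 2)   # coin agrees with y half the time
```

Without probability the two orderings would give the same worst case, which is exactly why the pair is the touchstone example for such models.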
Hoare and He's approach to unifying theories of programming, UTP, is a dozen years old. In spite of the importance of its ideas, UTP does not seem to be attracting due interest. The purpose of this article is to discuss why that is the case, and to consider UTP's destiny. To do so it analyses the nature of UTP, focusing primarily on unification, and makes suggestions to expand its use.