Hoare and He's approach to unifying theories of programming, UTP, is a dozen years old. In spite of the importance of its ideas, UTP does not seem to be attracting due interest. The purpose of this article is to discuss why that is the case, and to consider UTP's destiny. To do so it analyses the nature of UTP, focusing primarily on unification, and makes suggestions to expand its use.
The important notion of adaptivity of a distributed information system is formalised, extending Dijkstra's idea of self-stabilisation. The formalisation quantifies the extent to which a system adapts, enabling degrees of adaptivity to be specified and hence assured in an implementation. The ideas are expressed without commitment to any particular formal notation and demonstrated on a cluster formation algorithm for mobile ad hoc networks.
The current dominance of the service-based paradigm reflects the success of specific design and architectural principles embodied in terms like SOA and REST. This paper suggests further principles for the design of services exhibiting long-running transactions, whose characteristic feature is that in the case of failure not all system states can be automatically restored: system compensation is required. The principles are expressed at the level of scope-based compensation and fault handling, and ensure the consistency of data critical to the business logic by demanding (a) that in case of failure either all of the 'scope' commits or none of it does, and (b) that compensation is assured in case of failure in parent scopes. The notion of scope is expressed algebraically and design guidelines are suggested to ensure transactional processes satisfy those principles. To assist validation, the algebra is translated to Coloured Petri Nets, the principles are expressed as CTL formulae, and their bounded versions are checked with CPN Tools. A simple but typical case study, 'Batch-ATM', is used throughout to illustrate definitions and finally to demonstrate the approach.
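Principle (a), all-or-nothing commitment of a scope, can be illustrated with a minimal sketch. The names and structure here are invented for illustration and are far simpler than the paper's scope algebra: each completed activity installs a compensation, and a fault triggers the installed compensations in reverse order.

```python
# Hypothetical sketch of scope-based compensation: a scope runs activities
# in order; each completed activity installs a compensation, and a fault
# runs the installed compensations in reverse, so the scope either commits
# entirely or is fully compensated.

def run_scope(activities, log):
    """Run (activity, compensation) pairs; return True on commit,
    False after compensating a fault."""
    installed = []
    try:
        for act, comp in activities:
            act(log)                  # may raise RuntimeError to signal a fault
            installed.append(comp)    # compensation installed on completion
    except RuntimeError:
        for comp in reversed(installed):
            comp(log)                 # compensate in reverse order
        return False                  # nothing committed: principle (a)
    return True                       # whole scope committed
```

A faulting second activity thus leaves only the trace of the first activity followed by its compensation, matching the all-or-nothing reading of principle (a).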
Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most interestingly for us, of AAs). We conclude that there is substantial and important scope, particularly in Computer Ethics, for the concept of moral agent not necessarily exhibiting free will, mental states or responsibility. This complements the more traditional approach, common at least since Montaigne and Descartes, which considers whether or not (artificial) agents have mental states, feelings, emotions and so on. By focussing directly on ‘mind-less morality’ we are able to avoid that question and also many of the concerns of Artificial Intelligence. A vital component in our approach is the ‘Method of Abstraction’ for analysing the level of abstraction (LoA) at which an agent is considered to act. The LoA is determined by the way in which one chooses to describe, analyse and discuss a system and its context. The ‘Method of Abstraction’ is explained in terms of an ‘interface’ or set of features or observables at a given ‘LoA’. Agenthood, and in particular moral agenthood, depends on a LoA. Our guidelines for agenthood are: interactivity (response to stimulus by change of state), autonomy (ability to change state without stimulus) and adaptability (ability to change the ‘transition rules’ by which state is changed) at a given LoA. Morality may be thought of as a ‘threshold’ defined on the observables in the interface determining the LoA under consideration. An agent is morally good if its actions all respect that threshold; and it is morally evil if some action violates it. 
That view is particularly informative when the agent constitutes a software or digital system, and the observables are numerical. Finally we review the consequences for Computer Ethics of our approach. In conclusion, this approach facilitates the discussion of the morality of agents not only in Cyberspace but also in the biosphere, where animals can be considered moral agents without their having to display free will, emotions or mental states, and in social contexts, where systems like organizations can play the role of moral agents. The primary ‘cost’ of this facility is the extension of the class of agents and moral agents to embrace AAs.
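When the observables are numerical, the 'threshold' view admits a direct sketch. All names and numbers below are hypothetical, chosen only to illustrate the definitions of morally good and morally evil agents at a fixed LoA:

```python
# Illustrative sketch (all names and numbers hypothetical): morality as a
# threshold defined on numerical observables at a level of abstraction (LoA).

# At this LoA an action is observed as a dictionary of numerical observables.
THRESHOLD = {"harm": 0.0}   # any action with harm > 0.0 violates the threshold

def respects(observed, threshold=THRESHOLD):
    """An action respects the threshold if no observable exceeds its bound."""
    return all(observed.get(name, 0.0) <= bound
               for name, bound in threshold.items())

def morally_good(actions):
    """An agent is morally good iff all its actions respect the threshold;
    it is morally evil if some action violates it."""
    return all(respects(a) for a in actions)
```

The point of the sketch is only that, once a LoA fixes the observables, goodness is a checkable property of the agent's actions rather than of its mental states.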
We propose a formalism for modelling concurrent systems and for supporting their stepwise refinement. Our formalism allows one to describe the possible behaviours of a concurrent system in an operational manner, and to select from the operational description the desired behaviours using a logical approach. We discuss how our formalism can be used for incremental stepwise development, in which a high-level specification consisting largely of logical assertions with little operational detail is refined gradually into a low-level specification consisting mainly of operational description with few or no logical assertions. We illustrate this refinement methodology with a simplified version of Lamport's Bakery algorithm.
Orchestration of web services using low-level languages like BPEL is akin to performing Software Engineering with just the target programming language. Orc has been proposed as an elegant and more abstract design language. But much of its elegance derives from its functional style which must be matched, by the Software Engineer, with the state-based nature of systems as a whole. This paper explores the interface between the state-based and functional views of orchestration by studying a more abstract semantics for Orc. Sound laws are established and the resulting formalism is used on examples to demonstrate the top-down development of implementations.
This survey introduces a link between BPEL and a more abstract BPEL-like language. It is tutorial in nature, focusing on features particular to BPEL, like scope-based compensation and fault handling. Such features are demonstrated here with a number of executable BPEL programs. Although BPEL also provides features common to other programming languages, a BPEL application comprises three important components: the BPEL program (program algebra), XSD (data types) and WSDL (input/output). This survey concentrates on clarifying program logic and algebra rather than syntax. In doing so an undesirable feature of compensation in BPEL is observed and a correction for it proposed. It is argued that such observations justify the more abstract view of BPEL taken here, which facilitates accountable correctness.
The definition of data refinement between datatypes is expressed in terms of all programs that invoke procedures of the types. As a result it is laborious to check. Simulations provide sound conditions that, being 'static', facilitate checking; but then their soundness is important. In this paper we extract a technique from the heart of the theory and show it to be equivalent to data refinement; it plays a key role in establishing properties about simulations in any of the computational models. We survey the difficulties confronting the theory when the procedures and invoking programs may contain probabilistic choices, and show that then each of the two simulation conditions is alone not complete as a rule for data refinement, even if the datatypes are deterministic (in contrast to the standard case). The last part of the paper discusses work in progress.
Systems which exhibit emergent behaviour, i.e., behaviour not determined by the behaviours of the constituents when considered in isolation, are becoming more common due to increasing use of distributed and decentralised designs. There have been claims that formal methods, and particularly refinement, cannot be used to derive systems with emergent behaviour. In this paper, however, we argue that they can. To prove the point, we perform a refinement of an oft-cited example of emergence: the 'glider' pattern from Conway's Game of Life.
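The glider's emergent behaviour, which is not evident from the birth/survival rules governing individual cells, can be reproduced with a short sketch of the Game of Life. This is a standard set-based formulation, not the refinement developed in the paper: after four generations the pattern reappears translated one cell diagonally.

```python
from itertools import product

def step(live):
    """One Game of Life generation on an unbounded grid, where `live`
    is the set of (row, col) coordinates of live cells."""
    counts = {}
    for (r, c) in live:
        for dr, dc in product((-1, 0, 1), repeat=2):
            if (dr, dc) != (0, 0):
                key = (r + dr, c + dc)
                counts[key] = counts.get(key, 0) + 1
    # A cell is live in the next generation with exactly 3 live neighbours,
    # or with 2 live neighbours if it is live now.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
# The glider reappears translated one cell down and one cell right.
```

The translation of the whole pattern is exactly the kind of property that belongs to the system rather than to any single cell's transition rule.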
A web navigation model provides a dynamic view for web modelling. It is useful for clarifying requirements and for specifying the implementation behaviour of a system from its design intentions. In this paper, we propose a formal model to describe the web navigation behaviour of users, in which link activities play an important role. Several issues are considered in our model, such as web browser effects, adaptive navigation and frame communication. Once the link-activity model is established, we use the model checker SPIN to check for problems such as broken links, dead ends, missing reply pages and the reachability of pages. This method helps us to analyse user behaviour, and at the same time provides a way to expose design faults in web systems.
A set-theoretic formalism, AOG, is introduced to support automated verification of pointer programs. AOG targets pointer reasoning for source programs before compilation (i.e. before removal of field names). Pointer structures are represented as object graphs instead of heaps. Each property in AOG is a relation between object graphs and name assignments of program variables, and specifications result from composing properties. AOG extends Separation Logic's address-disjoint separating conjunction to more restrictive compositions with different disjointness conditions; the extension is shown to be strict when fixpoints are present. A composition that is a 'unique decomposition' decomposes any given graph uniquely into two parts; an example is the separation between the non-garbage and garbage parts of memory. Although AOG is in general undecidable, it is used to define the semantics of specific decidable logics that support automated program verification for specific topologies of pointer structure. One logic studied in this paper describes pointer variables located on multiple parallel linked lists. The logic contains quantifiers and fixpoints but is nonetheless decidable; it is applied to the example of in-place list reversal for automated verification. The Schorr-Waite marking algorithm is also considered. The technique of unique decomposition is particularly useful for establishing laws in such logics.
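The running example of in-place list reversal is the classic pointer-manipulation loop. A plain sketch of the program being verified, written here in Python rather than the compiled source language AOG assumes, is:

```python
class Node:
    """A singly linked list cell with a value and a `nxt` field."""
    def __init__(self, val, nxt=None):
        self.val, self.nxt = val, nxt

def reverse_in_place(head):
    """Iterative in-place reversal: redirect each `nxt` pointer to the
    previous node, returning the new head (the old last node)."""
    prev = None
    while head is not None:
        head.nxt, prev, head = prev, head, head.nxt
    return prev

def values(head):
    """Collect the values along a list, for inspection."""
    out = []
    while head is not None:
        out.append(head.val)
        head = head.nxt
    return out
```

The verification challenge the abstract refers to is that the loop temporarily splits the heap into two disjoint lists (reversed prefix and unreversed suffix), exactly the kind of decomposition AOG's compositions are designed to express.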
A major current challenge in the area of complex multi-agent systems is the conceptual approach to, and practical treatment of, emergent behaviour. This paper indicates how Formal Methods may be applied to engineer multi-agent adaptive systems exhibiting emergence, by consideration of an 'emergence predicate' in the specification and incremental derivation of interacting components that achieve that predicate. To perform that task, 'policies' are introduced as a novel technique for constraining agents in a hierarchical manner so that the multi-agent implementation as a whole exhibits the specified emergence. The ideas are demonstrated on models of the foraging behaviour of an ant colony, and used to analyse various designs appearing in the literature.
Refinement algebra provides axioms for the stepwise removal of abstraction, in the form of demonic nondeterminism, in a first-order system that supports reasoning about loops. It has been extended by Solin and Meinicke to computations involving implicit probabilistic choices: demonic nondeterminism then satisfies weaker properties. In this paper their axiom system is extended to capture explicit probabilistic choices. The first form is an unquantified probabilistic choice; the second is a partial quantified probabilistic choice (from which the usual binary probabilistic choice can be recovered). The new refinement algebra is sound with respect to 1-bounded expectation transformers, the premier model of probabilistic computations, but also with respect to a new model introduced here to capture more directly partial quantified computations. In this setting a 'normal form' result of Kozen is proved, replacing multiple loops with a single loop that does the same job; and the extent to which the two forms of loop have the same expected number of steps to termination is considered. Being entirely within first-order logic, the new refinement algebra is targeted at automation.
Attempts to engineer autonomic multi-agent systems, particularly those having large numbers of agents, have revealed the need for design structures and formalisms to support the construction of properties that emerge at the system level. Such emergence, like self-* behaviour, relies typically on intricate inter-agent interactions. This paper shows how the top-down incremental approach of Formal Methods can be used satisfactorily in that situation, by considering a case study in which agents adapt and autonomously achieve a given configuration.
Any model for sequential programming that contains both (demonic) nondeterminism and probabilism must reveal the difference between these two well-known programs: the first assigns to some variable a nondeterministic choice between Booleans and then makes a fair probabilistic choice; the second reverses the order, making the fair choice before the nondeterministic one. The interest and subtlety of those programs lies in the fact that without probability, disjoint assignments commute, and so those programs might naively have been expected to be indistinguishable. Previous relational models separate them by exploiting a complex definition of sequential composition. This paper provides an alternative approach in which sequential composition is instead standard relational composition. The technical contribution that enables that is to expand the binary (initial-final) state view of computation to incorporate a third, 'original' state, which checkpoints the state at the most recent nondeterministic choice; nondeterministic choices made on the basis of past probabilistic choices can thus be distinguished from those made independently of them. The resulting model has the standard relational model embedded in it by a Galois connection.
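The difference between the two programs can be seen by computing the guaranteed probability of establishing x = y when the demon resolves nondeterminism adversarially. This is a small illustrative calculation with invented variable names, not the paper's model: if the demon chooses first, the subsequent fair coin still gives x = y with probability 1/2; if the coin is flipped first, the demon can always avoid x = y, so the guarantee drops to 0.

```python
def guarantee_demon_first():
    # x := 0 [] 1 (demonic), then y := fair coin.
    # The demon minimises P(x == y), but whichever x it picks,
    # the coin that follows yields x == y with probability 1/2.
    return min(sum(0.5 for y in (0, 1) if x == y) for x in (0, 1))

def guarantee_coin_first():
    # y := fair coin, then x := 0 [] 1 (demonic).
    # On each coin outcome the demon picks x != y, so the
    # guaranteed probability of x == y is 0.
    return sum(0.5 * min(1.0 if x == y else 0.0 for x in (0, 1))
               for y in (0, 1))
```

Without probability the two orderings would be indistinguishable; with it, the guarantees 1/2 and 0 separate them, which is exactly what any adequate model must capture.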