The complex systems lying at the heart of ensemble engineering exhibit, and are perhaps even characterised by, emergent behaviour: behaviour that is not explicitly derived from the functional description of the components of the ensemble at the level of abstraction at which they are provided. A typical example from Artificial Life is an ensemble consisting of a flock of birds; the components are the individual birds, considered autonomously, and the emergent behaviour is that of the flock. Emergent behaviour can be understood by expanding the level of abstraction at which the components are described, augmenting their functional behaviour; but that is infeasible when specifying ensembles of realistic size (although it is the main implementation method), since it amounts to consideration of an entire implementation. An alternative must be found. It is proposed that, to specify an ensemble, the functional behaviour of its components instead be augmented by a system-wide predicate (or a conjunction of 'policies', which may be seen as kinds of weak 'ethical principles' when the components are agents) pertaining to the collective behaviour of its components and accounting for emergence. An implementation, however, ranges in distribution from centralised to fully distributed, depending on the degree to which the emergent behaviour is incorporated in the components. So the next step in this work consists of identifying implementation designs. A further step consists of establishing criteria for the conformance of an implementation against a specification, and the final step applies those ideas to case studies using model checking. An important application of these ideas lies in ensembles whose dynamics is controlled by artificial agents, for which a satisfactory theory of ethical behaviour can be given that is not based on free will and takes the form of policies.
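The proposed specification style can be illustrated with a deliberately minimal sketch (not taken from the paper; all names and the one-dimensional "flock" are illustrative assumptions). Each component follows only a local rule, while a system-wide predicate over the collective behaviour states the emergent property that the specification demands:

```python
import random

# Illustrative sketch only: each "bird" follows a purely local rule,
# yet the specification is expressed as a system-wide predicate over
# the collective behaviour, which appears in no component description.

def step(positions, rate=0.1):
    """Local rule: each bird moves a fraction of the way toward the
    flock's centroid (a stand-in for neighbour-averaging; 1-D for brevity)."""
    centroid = sum(positions) / len(positions)
    return [p + rate * (centroid - p) for p in positions]

def spread(positions):
    """Diameter of the flock along the single dimension."""
    return max(positions) - min(positions)

def coheres(before, after):
    """System-wide predicate over collective behaviour: the flock's
    spread never increases from one step to the next."""
    return spread(after) <= spread(before)

random.seed(0)
flock = [random.uniform(-10, 10) for _ in range(20)]
for _ in range(50):
    new_flock = step(flock)
    # The predicate is checked over the whole ensemble: it belongs to
    # the specification, not to any individual bird's rule.
    assert coheres(flock, new_flock)
    flock = new_flock
```

The predicate `coheres` plays the role of the system-wide conjunct in the specification; an implementation may realise it anywhere between a central monitor and behaviour folded entirely into the components.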
The tension between emergence and reductionism that is felt in moving from a specification to an implementation is resolved by making explicit the level of abstraction of a description.
This paper introduces a normative principle for the behaviour of contemporary computing and communication systems and considers some of its consequences. The principle, named the principle of distribution, says that, in a distributed multi-agent system, control resides as much as possible with the individuals constituting the system, rather than in centralised agents; and when that is infeasible or becomes inappropriate due to environmental changes, control evolves upwards from the individuals to an appropriate intermediate level rather than being imposed from above. The setting for the work is the dynamically changing global space resulting from ubiquitous communication. Accordingly the paper begins by determining the characteristics of the distributed multi-agent space it spans. It then fleshes out the principle of distribution, with examples from daily life as well as from Computer Science. The case is made for the principle of distribution to work at various levels of abstraction of system behaviour: to inform the high-level discussion that ought to precede the more low-level concerns of technology, protocols and standardisation, but also to facilitate those lower levels. Of the more substantial applications of the principle of distribution, a technical example concerns the design of secure ad hoc networks of mobile devices, achievable without any form of centralised authentication or identification, but in a solely distributed manner. Here the context is how the principle can be used to provide new and provably secure protocols for genuinely ubiquitous communication. A second, more managerial, example concerns the distributed production and management of open source software, and a third investigates some pertinent questions involving the dynamic restructuring of control in distributed systems, important in times of disaster or malevolence.
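The two-stage shape of the principle, namely fully local control first, with control evolving upwards only when local resolution fails, can be sketched as follows. This is a hypothetical illustration, not a protocol from the paper: the node identifiers, the agreement test, and the lowest-id election are all assumptions standing in for any genuinely distributed mechanism:

```python
# Hypothetical sketch of the principle of distribution: nodes first try
# to resolve a shared decision purely locally; only if that fails do
# they elect a coordinator from among themselves (control evolving
# upwards), never deferring to a pre-ordained central authority.

def local_resolution(proposals):
    """Fully distributed outcome: succeeds only if all nodes already agree."""
    return proposals[0] if len(set(proposals)) == 1 else None

def elect_intermediate(node_ids):
    """Control evolves upwards: the nodes themselves pick a coordinator
    (here the lowest id, a stand-in for any distributed election)."""
    return min(node_ids)

def decide(proposals):
    """proposals maps node id -> proposed value."""
    value = local_resolution(list(proposals.values()))
    if value is not None:
        return value, None                        # no escalation needed
    leader = elect_intermediate(proposals.keys())  # escalate, but from below
    return proposals[leader], leader

print(decide({1: "a", 2: "a", 3: "a"}))  # ('a', None): resolved locally
print(decide({1: "a", 2: "b", 3: "a"}))  # ('a', 1): coordinator elected from below
```

The point of the sketch is the ordering of the two stages: escalation to an intermediate coordinator is a fallback chosen by the individuals, not a default imposed from above.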