We apply formal description techniques (FDTs) to model, compose and give operational meaning to the class of reactive systems that represent manufacturing enterprises. The enterprise pursues its activities by means of resources and processes that execute concurrently on the resources, subject to internal (resource) and external (market) constraints. Some modelling techniques are familiar from reactive systems; others are specific to this domain: modelling management decisions, product transfer during one-to-one (one supplier, one consumer) synchronisation, marketing, and many-to-one (many suppliers, one consumer) synchronisation. The paper is a novel application of FDTs and a contribution to the semantics of enterprise engineering.
In 1996, Zhou Chaochen and Michael Hansen proposed a first-order interval logic called Neighbourhood Logic (NL), which can specify liveness and fairness of computing systems and can also define notions of real analysis in terms of the expanding modalities introduced therein. Here, we first show that NL admits a sound and complete proof system. Next, we extend NL by introducing two more expanding modalities, in the upward and downward directions, and propose a Two-Dimensional Neighbourhood Logic (NL$^2$), which can be used to specify super-dense computation. Finally, we also prove completeness of this new NL$^2$.
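For reference, the two original expanding modalities of NL can be stated over the standard interval semantics as follows (the notation is the commonly used one and may differ from the paper's):

```latex
[a,b] \models \Diamond_r \varphi \;\iff\; \exists c \ge b.\ [b,c] \models \varphi
\qquad
[a,b] \models \Diamond_l \varphi \;\iff\; \exists c \le a.\ [c,a] \models \varphi
```

That is, $\Diamond_r$ reaches a right neighbourhood of the current interval and $\Diamond_l$ a left neighbourhood, which is what makes the modalities "expanding" rather than contracting.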
This tutorial is about the design, and the proof of design, of reliable systems from unreliable components. It teaches the concepts and techniques of fault-tolerance, at the same time building a formal theory in which this property can be specified and verified. The theory eventually supports a range of useful design techniques, especially for multiple faults. We extend CCS, its bisimulation equivalence and its modal logic, under the driving principle that any claim about fault-tolerance should be invariant under the removal of faults from the assumptions (faults are unpredictable); this principle rejects the reduction of fault-tolerance to ``correctness under all anticipated faults''. The theory is applied to a range of examples and eventually extended to include considerations of fault-tolerance and timing, under scheduling on limited resources. This document describes the motivation and the contents of the tutorial.
This tutorial is about fault-tolerance and the methods by which this property can be certified: specified, verified, even guaranteed by construction. The scope of interest is distributed systems operating under limited resources and subject to constraints on the value and timing of interactions with their environment. The issues, often not addressed elsewhere, are: how to support the construction of systems that can tolerate (provably) multiple faults, and how to ensure that verification of fault-tolerance is fault-monotonic: having proved that we can tolerate several faults, we must tolerate, provably, any combination of them. This document provides the motivation, overview and contents of the tutorial.
This report presents and justifies the idea of an orthogonal formalisation of the Common Object Request Broker Architecture (CORBA). Considering CORBA in the general context of specification and implementation barriers between a request and an operation, we construct two independent specification hierarchies, characterising, respectively, the broker-implementation and object-model aspects of CORBA. Though orthogonal, the hierarchies can be easily composed, resulting in a number of CORBA specifications at different abstraction levels, focused on different classes of CORBA features. This allows us to describe CORBA in a uniform and modular manner. We also consider how different CORBA features can be defined on top of the basic specifications. We expect that other CORBA dimensions (in particular, CORBA services) can be characterised in the same way, leading towards a formal treatment of the general transparency concept in the context of system integration.
The super-dense computation model provides an abstraction of the real-time behaviour of computing systems. Logics to deal with this model are being studied. In this paper, we propose a combination of a linear temporal logic and an interval logic, and demonstrate how this combination, adopting the super-dense computation model, can be used to specify a real-time semantics of an OCCAM-like programming language and its real-time properties.
This paper is a contribution to the semantics of the emerging discipline of enterprise engineering. We study the composition of models of individual enterprises into a model which represents the behaviour of an extended or a virtual enterprise. The former corresponds intuitively to the union of models: all activities taking place within and between individual enterprises. The latter corresponds to intersection: coordinated and shared activities which utilise the resources of all participating enterprises. Modelling adopts a unifying business perspective upon a firm (a discrete parts manufacturer), its structure (available resources) and behaviour (activities which utilise resources). Model composition is based on formal semantics. The result is a precise technical meaning for an extended and a virtual enterprise, suitable for symbolic execution, reasoning and foremost -- for understanding the difference between both concepts.
This paper is a study of the formal semantics of an extended and a virtual enterprise, and of how their behaviour can be represented by the composition of models of individual enterprises. We consider the core activities of an enterprise manufacturing discrete parts products, modelled in terms of resources, processes and business goals (customer and purchase orders). The extended enterprise allows enterprises to interact, by matching customer and purchase orders. The virtual enterprise allows them to cooperate, by processes whose execution crosses organisational boundaries. The result is a precise technical meaning for an extended and a virtual enterprise, suitable for symbolic execution, reasoning and foremost, for understanding the difference between both concepts.
We extend Duration Calculus to a logic which allows the description of discrete processes where several steps of computation can occur at the same time point and where the order of occurrence of these steps is relevant. The resulting logic is called Duration Calculus of Weakly Monotonic Time (WDC). It allows effects such as true synchrony and digitisation to be modelled. As an example, we formulate a new semantics of Timed CSP assuming that communication and computation take no time. We also outline a semantics of shared-variable concurrency under similar assumptions. We introduce a notion of deformation of time in WDC and study the Duration Calculus properties which remain invariant under such deformations of time.
While decision theory provides an appealing normative framework for representing rich preference structures, eliciting utility or value functions typically incurs a large cost. For many applications involving interactive systems this overhead precludes the use of formal decision-theoretic models of preference. Instead of performing elicitation in a vacuum, it would be useful if we could augment directly elicited preferences with some appropriate default information. In this paper we propose a case-based approach to alleviating the preference elicitation bottleneck. Assuming the existence of a population of users from whom we have elicited complete or incomplete preference structures, we propose eliciting the preferences of a new user interactively and incrementally, using the closest existing preference structures as potential defaults. Since a notion of closeness demands a measure of distance among preference structures, this paper takes the first step of studying various distance measures over fully and partially specified preference structures. We explore the use of Euclidean distance and Spearman's footrule, and define a new measure, the probabilistic distance. We provide computational techniques for all three measures.
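Two of the distance measures named above have simple closed forms on toy preference structures. The following sketch is illustrative only: the function names and the representations (a ranking as an ordered list of items, a value function as a dict of item utilities) are our assumptions, not the paper's formulation, which also handles partially specified structures.

```python
def spearman_footrule(rank_a, rank_b):
    """Spearman's footrule: sum of absolute rank displacements
    between two total orders over the same items."""
    pos_b = {item: i for i, item in enumerate(rank_b)}
    return sum(abs(i - pos_b[item]) for i, item in enumerate(rank_a))

def euclidean_distance(util_a, util_b):
    """Euclidean distance between two value functions on the same outcomes."""
    return sum((util_a[o] - util_b[o]) ** 2 for o in util_a) ** 0.5

# Swapping two adjacent items displaces each of them by one position.
print(spearman_footrule(["comedy", "drama", "action"],
                        ["drama", "comedy", "action"]))  # → 2
```

Either measure can serve as the "closeness" used to pick default preference structures from the user population; the probabilistic distance of the paper is not reproduced here.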
The need to reason with imprecise probabilities arises in a wealth of situations ranging from pooling of knowledge from multiple experts to abstraction-based probabilistic planning. Researchers have typically represented imprecise probabilities using intervals and have developed a wide array of different techniques to suit their particular requirements. In this paper we provide an analysis of some of the central issues in representing and reasoning with interval probabilities. At the focus of our analysis is the probability cross-product operator and its interval generalization, the cc-operator. We perform an extensive study of these operators relative to manipulation of sets of probability distributions. This study provides insight into the sources of the strengths and weaknesses of various approaches to handling probability intervals. We demonstrate the application of our results to the problems of inference in interval Bayesian networks and projection and evaluation of abstract probabilistic plans.
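To fix intuitions, here is the naive interval generalization of the probability product for independent events, the kind of operation whose behaviour over sets of distributions the analysis examines. This is a sketch under our own assumptions; it is not the cc-operator itself, and the resulting bounds need not be tight with respect to the underlying set of distributions, which is one source of the weaknesses the abstract alludes to.

```python
def interval_product(p, q):
    """Pointwise product of two probability intervals (lo, hi).
    Since all bounds lie in [0, 1], the minimum of the products is
    lo1*lo2 and the maximum is hi1*hi2."""
    (lo1, hi1), (lo2, hi2) = p, q
    return (lo1 * lo2, hi1 * hi2)

# P(A) in [0.2, 0.5], P(B) in [0.4, 0.8], A and B independent:
print(interval_product((0.2, 0.5), (0.4, 0.8)))
```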
A realistic system for planning with uncertain information in partially observable domains must be able to reason about sensing actions and to condition its further actions on the sensed information. Among implemented planning systems, we can distinguish two approaches to contingent decision-theoretic planning. The first is characterized by a highly unconstrained plan space, while the second is characterized by a constrained and inflexible specification of plan space. In this paper, we take a middle ground between these two approaches that we consider to be more practical. We permit the user to specify the structure of the space of possible plans to be considered, but to do so in a flexible manner. This flexibility is obtained through the use of a modular representation. We separate the representation of actions from the representation of domain relations, and we separate those from the representation of the plan space. Actions and domain relations are represented with schematic Bayes net fragments, and plan space is represented using programming language constructs. We present a planning system that can find optimal plans given this representation.
We present an experiment in the modelling and analysis of an application domain: competitive manufacturing. The result is a unique formal model which combines previously separate models for marketing (competition) and enterprises (coordination). In particular, we capture formally the marketing mix: product, price, place and promotion, and its effect on the sales of the enterprise. The model is built in stages: market without marketing, marketing without limits, and marketing under limited resources. Analysis includes justifying abstraction, down to two enterprises competing for a single consumer.
The major problem in checking a parallel composition of real-time systems against a real-time property is the combinatorial explosion of state spaces. To avoid this problem, one can use bisimulation equivalence w.r.t. the property to be checked to minimise the system state space. In this paper, we define such an equivalence for the integrated linear duration properties of real-time automata networks with shared variables. Based on this technique, we develop an algorithm for checking a real-time automata network with shared variables against a linear duration property. Using Fischer's protocol as a case study, we show that in some cases our algorithm avoids exhaustive state space exploration to a significant degree, in comparison with other tools in the literature.
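A linear duration property bounds a weighted sum of accumulated state durations, roughly $\sum_s c_s \int s \le M$. The sketch below evaluates such a property on a single behaviour, abstracted as a finite sequence of (state, dwell-time) pairs; this representation and the function name are our illustrative assumptions and deliberately ignore the automata-network and shared-variable machinery the paper handles.

```python
def linear_duration_holds(behaviour, coeffs, bound):
    """Check a linear duration property  sum_s c_s * (time spent in s) <= bound
    over one timed behaviour given as (state, dwell_time) pairs."""
    totals = {}
    for state, dwell in behaviour:
        totals[state] = totals.get(state, 0.0) + dwell
    value = sum(coeffs.get(s, 0.0) * t for s, t in totals.items())
    return value <= bound

# Gas-burner-style requirement 20*int(leak) <= length, rewritten as
# 19*int(leak) - int(nonleak) <= 0 over the whole behaviour:
coeffs = {"leak": 19.0, "nonleak": -1.0}
print(linear_duration_holds([("leak", 1.0), ("nonleak", 30.0)], coeffs, 0.0))  # → True
print(linear_duration_holds([("leak", 4.0), ("nonleak", 30.0)], coeffs, 0.0))  # → False
```

The model-checking problem the paper addresses is to establish this bound over all behaviours of the network, which is where the bisimulation-based state-space minimisation pays off.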
We describe ongoing work toward the development of a decision-theoretic agent to help users choose videos based on their preferences. The DIVA (Decision Theoretic Interactive Video Advisor) system elicits user preferences using a case-based technique. Hard constraints permit the user to communicate temporary deviations from his basic preferences. If the user is not happy with the system's recommendations, he can provide feedback, which is used to modify the represented preferences and generate a new set of recommendations. We describe the fundamental algorithms, the implementation, and some results from initial experimentation.
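The interplay of hard constraints and elicited preferences can be sketched as "filter, then rank". Everything below is a hypothetical illustration of that pattern under our own assumptions: the function, the attribute names, and the toy utility are not DIVA's actual representation or algorithms.

```python
def recommend(videos, utility, hard_constraints, k=3):
    """Drop videos violating any hard constraint (a temporary deviation
    such as 'nothing over two hours tonight'), then rank the remainder
    by the user's elicited utility."""
    admissible = [v for v in videos if all(c(v) for c in hard_constraints)]
    return sorted(admissible, key=utility, reverse=True)[:k]

videos = [{"title": "A", "genre": "comedy", "minutes": 95},
          {"title": "B", "genre": "drama", "minutes": 150},
          {"title": "C", "genre": "comedy", "minutes": 110}]
utility = lambda v: {"comedy": 1.0, "drama": 0.5}[v["genre"]] - v["minutes"] / 1000
constraints = [lambda v: v["minutes"] <= 120]  # tonight's hard constraint

print([v["title"] for v in recommend(videos, utility, constraints)])  # → ['A', 'C']
```

Feedback on the recommendations would then adjust the utility (the case-based preference structure) rather than the hard constraints, matching the division of labour described above.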
We introduce a necessary test for claims of provable fault-tolerance: having proved that a system tolerates several faults, we must tolerate (provably) any combination of them. One notable failure to pass this test is bisimulation. The paper presents a class of bisimulations which are fault-monotonic and which, within CCS, support compositional design of component specifications by stepwise refinement, each step increasing or at least preserving the current level of fault-tolerance.
The work presented here concerns domain description, domain model and requirements capture for an accounting system for the UNU financial administration. A novel aspect of this work is the development of a formal model for accounting concepts and the use of this model as a reference when defining requirements.
The RAISE Specification Language (RSL) has no ``real time'' features, so it is difficult to use it to specify real-time applications. We therefore consider adding time to RSL. In a first attempt, following Anne Haxthausen's proposal of encoding Duration Calculus (DC) in RSL, we set up a proof assistant called DC/RJ. It is constructed from an encoding of both the semantics and the proof systems of Interval Logic and DC, and uses the RAISE Justification tool to carry out verification for DC. In this paper, we first introduce in detail how DC can be integrated with RSL. Then the construction and hierarchy of DC/RJ are explained. The Gas Burner example is used to show how DC/RJ works. A comparison with other DC proof assistant systems, and the problems and perspectives of our system, are discussed at the end.
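For reference, the Gas Burner requirement mentioned above is standardly written in DC as follows (this is the classical formulation; the report's RSL encoding may differ):

```latex
\mathit{Req} \;\widehat{=}\; \Box\,\bigl(\,\ell \ge 60 \;\Rightarrow\; 20\textstyle\int \mathit{Leak} \le \ell\,\bigr)
```

That is, on every observation interval of length at least 60 time units, the accumulated leak time is at most one-twentieth of the interval length, which is the kind of duration formula DC/RJ is built to verify.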
In this paper, we present a rigorous design of a Fault Diagnosis and Isolation (FDI) algorithm. The system is modelled as a hybrid system with a network of parallel components. The requirement is specified in Duration Calculus. We use traditional program logic, suitably extended, to verify the discrete component and subsequently derive a number of properties of the system. Finally, the requirement is shown to be satisfied by proving that it can be deduced from the system properties.