Establishing a virtual organisation requires not only identifying individual member enterprises but also prescribing the roles they should play in the organisation. This becomes a difficult task when selection is based not only on static competencies but also on how the enterprise can dynamically deliver services to its clients within its area of competence. In this paper we present a simple model which explains the behaviour of a virtual organisation in terms of the services it can offer to and receive from its environment, by means of the services provided by its members and the service-based interactions between them. We also describe a software prototype (based on the model) for identifying partners for the organisation, which collects data over the Internet and, where possible, produces an architecture (members and their roles) to deliver the required service.
We describe ongoing work toward the development of a decision-theoretic agent to help users choose videos based on their preferences. The DIVA (Decision Theoretic Interactive Video Advisor) system elicits user preferences using a case-based technique. Hard constraints permit the user to communicate temporary deviations from his basic preferences. If the user is not happy with the system's recommendations, he can provide feedback, which is used to modify the represented preferences and generate a new set of recommendations. We describe the fundamental algorithms, the implementation, and results from initial experimentation.
Classical Decision Theory provides a normative framework for representing and reasoning about complex preferences. Straightforward application of this theory to automated decision making is difficult due to the high cost of elicitation. In response to this problem, researchers have recently developed a number of qualitative, logic-oriented approaches for representing and reasoning about preferences. While effectively addressing some expressiveness issues, these logics have not proven powerful enough for building practical automated decision making systems. In this paper we present a hybrid approach to preference elicitation and decision making that is grounded in classical multiattribute utility theory, but can make effective use of the expressive power of qualitative approaches. Specifically, assuming a partially specified multilinear utility function, we show how comparative statements about classes of decision alternatives can be used to further constrain the utility function and thus identify suboptimal alternatives. This work demonstrates that quantitative and qualitative approaches can be synergistically integrated to provide effective and flexible decision support.
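The constraint-based pruning idea can be illustrated with a small sketch. Everything below is invented for illustration: the alternatives, their attribute scores, and the elicited comparison; an additive utility stands in for the paper's multilinear one, and a coarse grid search over the weight simplex stands in for the linear programming one would use in practice. Each comparative statement "a is preferred to b" becomes a linear constraint on the unknown weights, and an alternative is provably suboptimal if no feasible weighting makes it best.

```python
# Hypothetical alternatives scored on three attributes (integers keep
# all arithmetic exact).
alts = {
    "A": (9, 2, 4),
    "B": (5, 6, 5),
    "C": (1, 3, 9),
    "D": (3, 4, 3),
}

# Elicited comparative statement: B is preferred to C. For an additive
# utility u(x) = sum_i w_i * x_i this is the constraint w . (B - C) >= 0.
comparisons = [("B", "C")]

def utility(w, a):
    return sum(wi * xi for wi, xi in zip(w, alts[a]))

def feasible(w):
    """Weight vector consistent with every elicited comparison."""
    return all(utility(w, a) >= utility(w, b) for a, b in comparisons)

# Coarse integer grid over the weight simplex (a stand-in for an LP solver).
grid = [(i, j, 10 - i - j) for i in range(11) for j in range(11 - i)]

# An alternative is potentially optimal iff it maximises utility under
# some feasible weighting; everything else is provably suboptimal.
potentially_optimal = {
    max(alts, key=lambda a: utility(w, a))
    for w in grid if feasible(w)
}
suboptimal = set(alts) - potentially_optimal
print(sorted(potentially_optimal), sorted(suboptimal))  # ['A', 'B'] ['C', 'D']
```

Here D is dominated outright by B, and the comparison "B over C" rules out exactly the weightings under which C could win, so the single qualitative statement halves the candidate set.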
The last few years have seen a surge of interest in the use of techniques from Bayesian Decision Theory to address problems in AI. Decision Theory provides a normative framework for representing and reasoning about decision problems under uncertainty. Within the context of this framework, researchers in the Uncertainty in AI community have been developing computational techniques for building rational agents and representations suited to engineering their knowledge bases. The current special issue reviews recent research in Bayesian problem solving techniques. The articles that follow cover the topics of inference in Bayesian networks, decision-theoretic planning, and qualitative decision theory. This article provides a brief introduction to Bayesian networks and then covers applications of Bayesian problem solving techniques, knowledge-based model construction and structured representations, and learning of graphical probability models.
We study how the composition of enterprise models can represent the behaviour of an extended/virtual enterprise. Each enterprise manufactures discrete-parts products and is modelled with three concepts: product-related resources, processes, and business goals (customer and purchase orders). Composition makes possible two forms of interaction between enterprises: matching customer and purchase orders, and sharing processes which cross organisational boundaries. Models and their composition are represented in a formal notation.
We apply formal description techniques (FDT) to model, compose and give operational meaning to the class of reactive systems representing manufacturing enterprises. The enterprise pursues its activities by means of resources and processes that execute concurrently on the resources, subject to internal (resource) and external (market) constraints. Some modelling techniques are familiar from reactive systems; others are specific to this domain: modelling management decisions, product transfer during one-to-one (one supplier, one consumer) synchronisation, marketing, and many-to-one (many suppliers, one consumer) synchronisation. The paper is a novel application of FDTs and also a contribution to the semantics of enterprise engineering.
In 1996, Zhou Chaochen and Michael Hansen proposed a first-order interval logic called Neighbourhood Logic (NL), which can specify liveness and fairness of computing systems and can also define notions of real analysis in terms of the expanding modalities introduced therein. Here, we first show that NL forms a sound and complete system. Next, we extend NL by introducing two more expanding modalities, in the upward and downward directions, and propose a Two-Dimensional Neighbourhood Logic (NL$^2$), which can be used to specify super-dense computation. Finally, we prove the completeness of this new NL$^2$.
This tutorial is about the design, and proof of design, of reliable systems from unreliable components. It teaches the concepts and techniques of fault-tolerance, at the same time building a formal theory in which this property can be specified and verified. The theory eventually supports a range of useful design techniques, especially for multiple faults. We extend CCS, its bisimulation equivalence and modal logic, under the driving principle that any claim about fault-tolerance should be invariant under the removal of faults from the assumptions (faults are unpredictable); this principle rejects the reduction of fault-tolerance to ``correctness under all anticipated faults''. The theory is applied to a range of examples and eventually extended to include considerations of fault-tolerance and timing, under scheduling on limited resources. This document describes the motivation and the contents of the tutorial.
This tutorial is about fault-tolerance and the methods by which this property can be certified: specified, verified, even guaranteed by construction. The scope of interest is distributed systems operating under limited resources and subject to constraints on the value and timing of interactions with their environment. The issues, often not addressed elsewhere, are: how to support the construction of systems that can (provably) tolerate multiple faults, and how to ensure that verification of fault-tolerance is fault-monotonic: having proved that we can tolerate several faults, we must tolerate, provably, any combination of them. This document provides the motivation, overview and contents of the tutorial.
This report presents and justifies the idea of an orthogonal formalisation of the Common Object Request Broker Architecture (CORBA). Considering CORBA in the general context of specification and implementation barriers between a request and an operation, we construct two independent specification hierarchies, characterising respectively the broker implementation and the object model aspects of CORBA. Though orthogonal, the hierarchies can easily be composed, resulting in a number of CORBA specifications at different abstraction levels, focused on different classes of CORBA features. This allows us to describe CORBA in a uniform and modular manner. We also consider how different CORBA features can be defined on top of the basic specifications. We expect that, in the same way, it is possible to characterise other CORBA dimensions (in particular, CORBA services), leading to a formal treatment of the general transparency concept in the context of system integration.
The super-dense computation model provides an abstraction of the real-time behaviour of computing systems. Logics to deal with this model are being studied. In this paper, we propose a combination of a linear temporal logic and an interval logic, and demonstrate how this combination can be used to specify a real-time semantics of an OCCAM-like programming language and its real-time properties, where the super-dense computation model is adopted.
This paper is a contribution to the semantics of the emerging discipline of enterprise engineering. We study the composition of models of individual enterprises into a model which represents the behaviour of an extended or a virtual enterprise. The former corresponds intuitively to the union of models: all activities taking place within and between the individual enterprises. The latter corresponds to their intersection: the coordinated and shared activities which utilise the resources of all participating enterprises. Modelling adopts a unifying business perspective upon a firm (a discrete-parts manufacturer), its structure (available resources) and behaviour (activities which utilise resources). Model composition is based on formal semantics. The result is a precise technical meaning for an extended and a virtual enterprise, suitable for symbolic execution, reasoning and, foremost, for understanding the difference between the two concepts.
This paper is a study of the formal semantics of an extended and a virtual enterprise and of how their behaviour can be represented by the composition of models of individual enterprises. We consider the core activities of an enterprise manufacturing discrete-parts products, modelled in terms of resources, processes and business goals (customer and purchase orders). The extended enterprise allows enterprises to interact, by matching customer and purchase orders. The virtual enterprise allows them to cooperate, by processes whose execution crosses organisational boundaries. The result is a precise technical meaning for an extended and a virtual enterprise, suitable for symbolic execution, reasoning and, foremost, for understanding the difference between the two concepts.
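The order-matching interaction of the extended enterprise can be sketched in a few lines. The data model and the first-fit, one-to-one matching rule below are invented for illustration; the paper's formal notation is far richer. The idea is simply that one firm's purchase order (what it buys from a supplier) pairs with another firm's customer order (what that firm has been asked to sell) for the same product.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Order:
    enterprise: str   # the enterprise that holds this order
    product: str
    quantity: int

def match_orders(purchase_orders, customer_orders):
    """Pair each purchase order with a customer order for the same
    product held by a different enterprise (first-fit, one-to-one)."""
    matches, free = [], list(customer_orders)
    for po in purchase_orders:
        for co in free:
            if co.product == po.product and co.enterprise != po.enterprise:
                matches.append((po, co))
                free.remove(co)   # each customer order is matched at most once
                break
    return matches

# Invented example: an assembler buys gearboxes that a supplier sells.
buys = [Order("AcmeAssembly", "gearbox", 100)]
sells = [Order("GearCo", "gearbox", 100)]
print(match_orders(buys, sells))
```

A successful match is exactly the point at which the two separate enterprise models begin to behave as one extended enterprise.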
We extend Duration Calculus to a logic which allows the description of discrete processes where several steps of computation can occur at the same time point and where the order of occurrence of these steps is relevant. The resulting logic is called Duration Calculus of Weakly Monotonic Time (WDC). It allows effects such as true synchrony and digitisation to be modelled. As an example, we formulate a new semantics of Timed CSP assuming that communication and computation take no time. We also outline a semantics of shared-variable concurrency under similar assumptions. Finally, we introduce a notion of deformation of time in WDC and study the duration calculus properties which remain invariant under such deformation.
While decision theory provides an appealing normative framework for representing rich preference structures, eliciting utility or value functions typically incurs a large cost. For many applications involving interactive systems this overhead precludes the use of formal decision-theoretic models of preference. Instead of performing elicitation in a vacuum, it would be useful if we could augment directly elicited preferences with some appropriate default information. In this paper we propose a case-based approach to alleviating the preference elicitation bottleneck. Assuming the existence of a population of users from whom we have elicited complete or incomplete preference structures, we propose eliciting the preferences of a new user interactively and incrementally, using the closest existing preference structures as potential defaults. Since a notion of closeness demands a measure of distance among preference structures, this paper takes the first step of studying various distance measures over fully and partially specified preference structures. We explore the use of Euclidean distance and Spearman's footrule, and define a new measure, the probabilistic distance. We provide computational techniques for all three measures.
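Of the three measures, Spearman's footrule is the simplest to sketch: it sums, over all items, the absolute displacement of each item between two rankings. The movie rankings below are invented for illustration.

```python
def footrule(rank_a, rank_b):
    """Spearman's footrule distance between two rankings.
    rank_a and rank_b map each item to its position (1 = most preferred)."""
    assert rank_a.keys() == rank_b.keys(), "rankings must cover the same items"
    return sum(abs(rank_a[x] - rank_b[x]) for x in rank_a)

# Invented example: two users ranking four movies.
u1 = {"Alien": 1, "Brazil": 2, "Casablanca": 3, "Dune": 4}
u2 = {"Alien": 2, "Brazil": 1, "Casablanca": 4, "Dune": 3}
print(footrule(u1, u2))  # 4: every movie moved by one position
```

The distance is 0 for identical rankings and grows as items are displaced further, which is what makes it usable as a "closeness" measure when selecting default preference structures from the user population.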
The need to reason with imprecise probabilities arises in a wealth of situations ranging from pooling of knowledge from multiple experts to abstraction-based probabilistic planning. Researchers have typically represented imprecise probabilities using intervals and have developed a wide array of different techniques to suit their particular requirements. In this paper we provide an analysis of some of the central issues in representing and reasoning with interval probabilities. At the focus of our analysis is the probability cross-product operator and its interval generalization, the cc-operator. We perform an extensive study of these operators relative to manipulation of sets of probability distributions. This study provides insight into the sources of the strengths and weaknesses of various approaches to handling probability intervals. We demonstrate the application of our results to the problems of inference in interval Bayesian networks and projection and evaluation of abstract probabilistic plans.
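As a minimal illustration of interval probability manipulation (a naive endpoint bound, not the cc-operator itself, whose definition is in the paper): for two independent events with probabilities known only to lie in intervals, the product is bounded endpoint-wise, and since all endpoints lie in [0, 1] no endpoint permutations are needed. The fault-rate numbers below are invented.

```python
def interval_product(p, q):
    """Naive interval bound on the product of two probabilities for
    independent events: with p in [p_lo, p_hi] and q in [q_lo, q_hi],
    the joint probability p*q lies in [p_lo*q_lo, p_hi*q_hi]."""
    (p_lo, p_hi), (q_lo, q_hi) = p, q
    assert 0 <= p_lo <= p_hi <= 1 and 0 <= q_lo <= q_hi <= 1
    return (p_lo * q_lo, p_hi * q_hi)

# Invented example: two independent component faults with imprecise rates.
print(interval_product((0.25, 0.5), (0.5, 0.75)))  # (0.125, 0.375)
```

Bounds like this are sound but can be looser than the tightest interval consistent with the underlying set of distributions; analysing exactly when such operators are tight is the kind of question the paper's study of the cc-operator addresses.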
A realistic system for planning with uncertain information in partially observable domains must be able to reason about sensing actions and to condition its further actions on the sensed information. Among implemented planning systems, we can distinguish two approaches to contingent decision-theoretic planning. The first is characterized by a highly unconstrained plan space, while the second is characterized by a constrained and inflexible specification of plan space. In this paper, we take a middle ground between these two approaches that we consider to be more practical. We permit the user to specify the structure of the space of possible plans to be considered, but to do so in a flexible manner. This flexibility is obtained through the use of a modular representation. We separate the representation of actions from the representation of domain relations, and we separate those from the representation of the plan space. Actions and domain relations are represented with schematic Bayes net fragments, and plan space is represented using programming language constructs. We present a planning system that can find optimal plans given this representation.
We present an experiment in the modelling and analysis of an application domain: competitive manufacturing. The result is a single formal model which combines previously separate models for marketing (competition) and enterprises (coordination). In particular, we capture formally the marketing mix: product, price, place and promotion, and its effect on the sales of the enterprise. The model is built in stages: market without marketing, marketing without limits, and marketing under limited resources. Analysis includes justifying abstractions, down to two enterprises competing for a single consumer.