We present an approach to elicitation of user preference models in which assumptions can be used to guide but not constrain the elicitation process. We demonstrate that when domain knowledge is available, even in the form of weak and somewhat inaccurate assumptions, significantly less data is required to build an accurate model of user preferences than when no domain knowledge is provided. This approach is based on the KBANN (Knowledge-Based Artificial Neural Network) algorithm pioneered by Shavlik and Towell (1989). We demonstrate this approach through two examples, one involving preferences under certainty and the other involving preferences under uncertainty. In the case of certainty, we show how to encode assumptions concerning preferential independence and monotonicity in a KBANN network, which can be trained using a variety of preferential information, including simple binary classification. In the case of uncertainty, we show how to construct a KBANN network that encodes certain types of dominance relations and attitudes toward risk. The resulting network can be trained using answers to standard gamble questions and can be used as an approximate representation of a person's preferences. We empirically evaluate our claims by comparing the KBANN networks with simple backpropagation artificial neural networks in terms of learning rate and accuracy. For the case of uncertainty, the answers to standard gamble questions used in the experiment are taken from an actual medical data set first used by Miyamoto and Eraker (1988). In the case of certainty, we define a measure of the degree to which a set of preferences violates a domain theory, and examine the robustness of the KBANN network as this measure of domain theory violation varies.
We study the problem of defining similarity measures on preferences from a decision-theoretic point of view. We propose a similarity measure, called probabilistic distance, that originates from Kendall's tau, a well-known concept in the statistical literature. We compare this measure to other existing similarity measures on preferences. The key advantage of this measure is its extensibility to accommodate partial preferences and uncertainty. We develop efficient methods to compute this measure, exactly or approximately, under all circumstances. These methods make use of recent advances in the area of Markov chain Monte Carlo simulation. We discuss two applications of the probabilistic distance: in the construction of the Decision-Theoretic Video Advisor (diva), and in robustness analysis of a theory refinement technique for preference elicitation.
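Since the probabilistic distance originates from Kendall's tau, the underlying pairwise-disagreement count can be illustrated with a short sketch. The representation and function names below are illustrative only, not the paper's implementation, and the extension to partial preferences via Markov chain Monte Carlo is not shown:

```python
from itertools import combinations

def kendall_tau_distance(rank_a, rank_b):
    """Count the pairs of items that the two rankings order differently.

    rank_a, rank_b: dicts mapping item -> position (0 = most preferred),
    defined over the same set of items.
    """
    discordant = 0
    for x, y in combinations(rank_a, 2):
        # A pair is discordant when the position differences have opposite signs.
        if (rank_a[x] - rank_a[y]) * (rank_b[x] - rank_b[y]) < 0:
            discordant += 1
    return discordant

def normalized_kendall(rank_a, rank_b):
    """Normalize to [0, 1] by dividing by the total number of item pairs."""
    n = len(rank_a)
    return kendall_tau_distance(rank_a, rank_b) / (n * (n - 1) / 2)
```

For two identical rankings the distance is 0; for two completely reversed rankings every pair is discordant and the normalized distance is 1.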
The need to reason with imprecise probabilities arises in a wealth of situations ranging from pooling of knowledge from multiple experts to abstraction-based probabilistic planning. Researchers have typically represented imprecise probabilities using intervals and have developed a wide array of different techniques to suit their particular requirements. In this paper we provide an analysis of some of the central issues in representing and reasoning with interval probabilities. At the focus of our analysis is the probability cross-product operator and its interval generalization, the cc-operator. We perform an extensive study of these operators relative to manipulation of sets of probability distributions. This study provides insight into the sources of the strengths and weaknesses of various approaches to handling probability intervals. We demonstrate the application of our results to the problems of inference in interval Bayesian networks and projection and evaluation of abstract probabilistic plans.
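For the special case of two independent events, the interval generalization of the probability product can be sketched as below. This is a minimal illustration under that independence assumption only; the cc-operator analyzed in the paper addresses the general manipulation of sets of distributions, which this sketch does not attempt:

```python
def interval_product(a, b):
    """Combine probability intervals (lo, hi) for two independent events.

    Because every endpoint is non-negative, the product of any p in a
    and q in b lies in [a_lo * b_lo, a_hi * b_hi], and both bounds are
    attained at the interval endpoints.
    """
    (la, ua), (lb, ub) = a, b
    return (la * lb, ua * ub)
```

For example, combining the intervals (0.2, 0.5) and (0.4, 0.8) yields the interval (0.08, 0.4) for the conjunction of the two events.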
Classical Decision Theory provides a normative framework for representing and reasoning about complex preferences. Straightforward application of this theory to automate decision making is difficult due to high elicitation cost. In response to this problem, researchers have recently developed a number of qualitative, logic-oriented approaches for representing and reasoning about preferences. While effectively addressing some expressiveness issues, these logics have not proven powerful enough for building practical automated decision making systems. In this paper we present a hybrid approach to preference elicitation and decision making that is grounded in classical multiattribute utility theory, but can make effective use of the expressive power of qualitative approaches. Specifically, assuming a partially specified multilinear utility function, we show how comparative statements about classes of decision alternatives can be used to further constrain the utility function and thus identify suboptimal alternatives. This work demonstrates that quantitative and qualitative approaches can be synergistically integrated to provide effective and flexible decision support.
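The kind of dominance check described above can be sketched for a two-attribute multilinear utility function u = k1*u1 + k2*u2 + k12*u1*u2. Assuming, purely for illustration, that the partial specification takes the form of interval bounds on the coefficients (the paper's constraints are more general), the utility difference between two alternatives is linear in the coefficients, so it suffices to examine the corners of the coefficient box:

```python
from itertools import product

def multilinear_utility(k1, k2, k12, u1, u2):
    # Two-attribute multilinear form: linear in the scaling coefficients.
    return k1 * u1 + k2 * u2 + k12 * u1 * u2

def dominates(alt_a, alt_b, k_bounds):
    """Return True if alt_a scores at least as high as alt_b for every
    coefficient vector in the box k_bounds (one (lo, hi) pair per
    coefficient).

    Since u(alt_a) - u(alt_b) is linear in the coefficients, its minimum
    over the box is attained at a corner, so checking corners suffices.
    """
    for corner in product(*k_bounds):
        if multilinear_utility(*corner, *alt_a) < multilinear_utility(*corner, *alt_b):
            return False
    return True
```

An alternative found to be dominated in this sense can be discarded without pinning down the exact coefficient values, which is the point of reasoning with a partially specified utility function.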
We present an approach to elicitation of user preference models in which assumptions can be used to guide but not constrain the elicitation process. We show how to encode assumptions concerning preferential independence and monotonicity in a Knowledge-Based Artificial Neural Network. We quantify the degree to which user preferences violate a set of assumptions. We empirically compare the KBANN network with an unbiased ANN in terms of learning rate and accuracy for preferences consistent and inconsistent with the assumptions. We go on to demonstrate how the technique can be used to learn a fine-grained preference structure from simple binary classification data.
Decision theory has become widely accepted in the AI community as a useful framework for planning and decision making. Applying the framework typically requires elicitation of some form of probability and utility information. While much work in AI has focused on providing representations and tools for elicitation of probabilities, relatively little work has addressed the elicitation of utility models. This imbalance is not particularly justified considering that probability models are relatively stable across problem instances, while utility models may be different for each instance. Spending large amounts of time on elicitation can be undesirable for interactive systems used in low-stakes decision making and in time-critical decision making. In this paper we investigate the issues of reasoning with incomplete utility models. We identify patterns of problem instances where plans can be proved to be suboptimal if the (unknown) utility function satisfies certain conditions. We present an approach to planning and decision making that performs the utility elicitation incrementally and in a way that is informed by the domain model.
In previous work we presented a case-based approach to eliciting and reasoning with preferences. A key issue in this approach is the definition of similarity between user preferences. We introduced the probabilistic distance as a measure of similarity on user preferences, and provided an algorithm to compute the distance between two partially specified value functions; that work addressed the case of decision making under certainty. In this paper we address the more challenging issue of computing the probabilistic distance in the case of decision making under uncertainty. We present algorithms to compute the probabilistic distance between two completely or partially specified utility functions. We demonstrate the use of these algorithms with a medical data set of partially specified patient preferences, for which none of the other existing distance measures appears to be definable. Using this data set, we also demonstrate that the case-based approach to preference elicitation is applicable in domains with uncertainty.
While decision theory provides an appealing normative framework for representing rich preference structures, eliciting utility or value functions typically incurs a large cost. For many applications involving interactive systems this overhead precludes the use of formal decision-theoretic models of preference. Instead of performing elicitation in a vacuum, it would be useful if we could augment directly elicited preferences with some appropriate default information. In this paper we propose a case-based approach to alleviating the preference elicitation bottleneck. Assuming the existence of a population of users from whom we have elicited complete or incomplete preference structures, we propose eliciting the preferences of a new user interactively and incrementally, using the closest existing preference structures as potential defaults. Since a notion of closeness demands a measure of distance among preference structures, this paper takes the first step of studying various distance measures over fully and partially specified preference structures. We explore the use of Euclidean distance and Spearman's footrule, and define a new measure, the probabilistic distance. We provide computational techniques for all three measures.
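The two classical measures mentioned above can be sketched as follows, assuming (for illustration only) that a fully specified ranking is represented as an item-to-position map and a value function as an item-to-value map over a common set of items:

```python
import math

def spearman_footrule(rank_a, rank_b):
    # Sum of absolute differences in the positions the two rankings
    # assign to each item.
    return sum(abs(rank_a[x] - rank_b[x]) for x in rank_a)

def euclidean_distance(value_a, value_b):
    # Straight-line distance between two value functions viewed as
    # vectors over a common set of items.
    return math.sqrt(sum((value_a[x] - value_b[x]) ** 2 for x in value_a))
```

Note that the footrule compares only rank positions, while the Euclidean measure is sensitive to the magnitudes of the values; the probabilistic distance defined in the paper is designed to extend to partially specified structures, which neither of these handles directly.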