The book describes 12 case studies which use RAISE (Rigorous Approach to Industrial Software Engineering) to construct, analyse, develop and apply formal specifications. The case studies present a wide range of application areas: authentication of communications protocols, error detection in programs, geographic information systems, library management, multi-lingual documents, production planning, run-time verification, software design patterns, software reuse, taxation management, telephony, and travel planning. Most have little or nothing to do with safety-critical systems, traditionally considered the area in which formal methods are most applicable, and many of them do not involve verification. The case studies illustrate diverse uses of formal specifications: to capture requirements for new software, to carry out software development from requirements, to formalise algorithms in order to prove their correctness, to check if software behaves correctly at run-time, to build models of application domains, to relate descriptive and prescriptive models, to formalise notations for software development, and various other uses. They also try to put the task of creating formal specifications into perspective, asking questions about the purpose, scope and use of the formal models before the models are built.
The case study chapters have each been written by one of this book's editors. Each case study describes work done at the United Nations University's International Institute for Software Technology (UNU-IIST) in Macau by that editor in collaboration with fellows (lecturers or post-graduate students from developing countries). Each case study chapter concludes with information about the fellows involved, the relevant UNU-IIST technical report(s), and other related work.
The goal of implementation verification is to prove that a given program behaves correctly under all possible executions, for instance that it produces correct output for all legal inputs. Several conditions must hold for such verification to be carried out in practice: (1) the implementation is available in a form suitable for analysis; (2) the "size" of the implementation is such that the proof remains feasible; (3) it is written in a language which supports specification and proof; and (4) there is enough human expertise to guide the proof. Such conditions are often not satisfied in practice. Result verification is a possible alternative. It relies not on the implementation of the program but on its execution record. This record may be a sequence of inputs and the corresponding outputs from the program when it computes a certain function, or a sequence of values observed about the changing state of the program as it interacts with its environment, or perhaps such values augmented with time-stamps. Whatever the form, the goal is to decide if this record describes a correct execution of the program. A positive answer does not mean that the program is fault-free, only that this execution did not exhibit any errors. On the other hand, finding an error amounts to constructing a counter-example to claims of correctness about the program. Due to its modest goals, result verification has several advantages over implementation verification: (1) it can be carried out off-line, after the execution terminates, or on-line, as the execution unfolds; (2) its application scope includes both off-the-shelf components (binary files) and remote service providers (distributed objects); (3) due to the simple structure involved (a sequence of states), it is easier to automate; and (4) being based solely on the execution record, its complexity is largely independent of the "size" of the implementation and the language used, which makes it particularly suitable for complex, heterogeneous systems. In this chapter we propose a technical framework for carrying out automated result verification in practice: 1. The execution record is a simple text file. Many programs produce such files during their normal operation, for administrative purposes. A general technique for recording exactly the information needed for verification is to introduce a program monitor by wrapping. 2. The execution record is given as input to the verifier program, which does the actual verification. Given the execution record in a text file, we consider result verification a text-processing task. Accordingly, the verifier is written in AWK, a special-purpose language for text processing. Verification is done by the AWK interpreter, given the execution record and the verifier program as inputs. 3. The verifier program is not written by hand but generated from a specification. The generator takes the property the verifier should check, called the result specification, as input and produces the code of the verifier program as output. The specification-to-verifier generator is itself written in AWK. 4. A result specification is a first-order property built with two kinds of variables, state and record variables, using the functions and predicates over their respective types. A record variable refers to the contents of the record file. A state variable allows the calculation of values that are not written in the record file but are derived from it.
For each state variable we define its type, its initial value and how this value changes for the different operations invoked. We also allow specifications to be combined. The rest of the chapter is organised as follows. Section 12.2 shows how to model the behaviour of a program. Section 12.3 describes how to wrap a program to record the relevant observations during its execution. Result verification, using a hand-written verifier program, is the subject of Section 12.4. Section 12.5 defines a result specification. Section 12.6 describes various ways such specifications can be combined. Section 12.7 describes the generator which produces the code of the verifier program given the specification. Section 12.8 contains an example and Section 12.9 presents some conclusions.
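As a rough illustration of this framework, the following sketch (in Python rather than AWK, and hand-written rather than generated) reads an execution record line by line, derives a state variable from it and checks a simple result specification; the bank-account record format, the operation names and the property checked are all invented for illustration only.

# A minimal, hypothetical sketch of result verification. The execution record
# is a text file with one observation per line, e.g. "deposit 10",
# "withdraw 5" or "balance 120". A state variable `balance` is derived from
# the record; the result specification checked here is that every reported
# balance equals the derived one and that the balance never becomes negative.

def verify(record_path: str) -> bool:
    balance = 0                      # state variable: type int, initial value 0
    with open(record_path) as record:
        for line_no, line in enumerate(record, start=1):
            op, _, arg = line.strip().partition(" ")
            if op == "deposit":      # record variables: the operation and its argument
                balance += int(arg)
            elif op == "withdraw":
                balance -= int(arg)
                if balance < 0:      # property violated: report and stop
                    print(f"line {line_no}: balance became negative")
                    return False
            elif op == "balance":
                if balance != int(arg):
                    print(f"line {line_no}: reported {arg}, derived {balance}")
                    return False
    return True                      # no error observed in this execution

if __name__ == "__main__":
    import sys
    ok = verify(sys.argv[1])
    print("execution record is consistent" if ok else "execution record shows an error")

A generated verifier would play the same role, with the state variables, their initial values and their update rules taken from the result specification rather than coded by hand.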
Communication scholars have been challenging the universality of Eurocentric scholarship, which they argue is a form of intellectual imperialism that imposes its provincial ideals and masquerades them as universal. As an answer to Eurocentricity, Asiacentricity proposes to place Asian values and ideals at the center of inquiry so as to see Asian phenomena from the standpoint of Asians as subjects and agents. This article critiques this Asiacentric agenda and examines the implications and premises of the paradigm. It suggests instead that a culture-centric paradigm be adopted to avoid an Asian version of the Eurocentricity crisis. The article advocates the adoption of a more harmonious perspective in light of the convergence of global cultures and calls for approaching research deficiencies as a global community of communication scholars rather than one divided along ethnic fault lines. The culture-centric approach is proposed as a meta-theory that is non-polarizing by nature through its placement of culture at the center of inquiry. To avoid the creation of polarity, culture-centricity seeks to encompass contradictions and ambivalences as well as other diverse cultural representations. The article argues for a non-polarizing approach so that communication scholars can theorize without ideological bias or artificial boundaries created by arbitrary definitions of what constitutes East or West.
The advent of computer technology offers the opportunity to develop multimodal, dynamic and interactive representations of knowledge which are expected to significantly enhance learning. This chapter presents an overview of the pedagogical effectiveness of algorithm visualization systems, which use dynamic visual representations to support instruction in the domain of computer algorithms. The author highlights the most important conceptual and methodological advances in the field, analyzing the properties of the representations that are usually displayed by such systems and presenting significant research results concerning their pedagogical effectiveness. Available studies indicate that it is not the quality of the graphical display ("what the students see") but students' engagement in active learning situations with algorithm visualization systems ("what the students do") that substantially affects the learning outcomes. Moreover, it seems that a significant level of learning is achieved when algorithm visualization systems are integrated in instructional settings which follow the constructivist paradigm. In this case students are guided not simply to view experts' visualizations and interact with them but also to construct their own and present them to peers, thus initiating fruitful knowledge-building conversations. In this way algorithm visualization systems are better conceptualized as "construction supporting" tools than simply as "knowledge conveyors". Towards enhancing this role of the software, it seems that "low-tech" and "low-fidelity" AV construction systems may be quite adequate for supporting students' engagement in essential learning activities.
Object-based distributed systems challenge the traditional ways of applying formal methods via specification and proof. One problem is the large number of components involved, partly decided at compile time (static invocation) and partly at run-time (dynamic invocation). Another problem is having to rely on the vendor's claims about the correctness of individual components, without being able (lacking implementation details) to verify such claims ourselves. Yet another is expressing component specifications in interface definition languages, which describe how to communicate with a component (syntax) but not the expected results of such communication (semantics). Such problems make static verification difficult, at best. On the other hand, the structuring of the whole system in terms of independent, distributed components is particularly suitable for making such a system fault-tolerant. The goal is to ensure that failures of individual components (violations of their specifications) will not cause the whole system to fail (a violation of the system's specification). The latter specification can be used at design-time to prove whether the system is indeed fault-tolerant. The former specifications can be used at run-time to detect whether components have failed. This chapter describes an approach to formally specify software components in order to make such error detection possible. Defining such specifications is not without problems. Specifications may contain infinite constructs like quantifiers (for all values of a type), liveness properties (for all states in a sequence) or modal properties of branching time (for all transitions from a state). Such constructs are generally non-executable and non-checkable: we cannot execute them directly on the machine and we cannot check effectively at run-time that they indeed hold. On the other hand, specifications based on propositional logic, which can be checked at run-time, are insufficiently expressive in practice. Also, checking specifications which use equality of states is not possible when the state can only be accessed via defined operations; the best we can do is check observational equivalence. Such problems mean that effective run-time checking requires a different kind of specification from the kind that supports static verification. In this chapter we propose an approach to define and check such specifications at run-time. Technically, the approach is as follows: 1. Specifications are formally based. They are defined as logic-based regular expressions built from propositions about the states, or pairs of states, of a component, via its observer operations. 2. Specifications are checkable at run-time, based on the recorded history of the component's execution. This history is a sequence of values observed about individual states and the operations, with arguments, whose invocation caused state changes. 3. Checking is carried out by a wrapper which is generated from the component and its specification. The wrapper takes over all communication between the component and its environment. It remains transparent to the clients, except for being able to detect (as it carries out run-time checking after every state change) and announce (via the additional boolean observer error) the occurrence of an error. 4. The required effect of run-time checking is described formally as the fail-stop property.
The wrapper generator is to transform a given component, which may fail in an arbitrary way, into a component which only fails by refusing to operate, a fact which is also announced by the observer error. The rest of this chapter is organised as follows. Section 13.2 explains and illustrates the concept of a ‘component’. Section 13.3 discusses and compares ‘fault-free’ and ‘fail-stop’ behaviours of components. Section 13.4 shows how to ensure fail-stop behaviour by pattern matching. Section 13.5 presents an example, a line editor. Sections 13.6 and 13.7 describe how pattern matching can become part of an automatically generated wrapper: Section 13.6 specifies the wrapper generator and Section 13.7 describes a prototype implementation. Section 13.8 provides some conclusions.
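As a minimal sketch of the idea, and not the chapter's actual generator, the following Python fragment wraps an invented Counter component, re-checks an invariant after every state change and, once a violation is detected, announces it via the boolean observer error and refuses to operate further (fail-stop); all names and the invariant are assumptions made for illustration.

# A hypothetical, hand-written stand-in for a generated wrapper: it
# intercepts every operation of a component, checks a simple state
# proposition after each state change and exposes the extra boolean
# observer `error`; once an error is seen, further calls are refused.

class Counter:
    """An example component; a faulty implementation may break the spec."""
    def __init__(self) -> None:
        self.value = 0
    def inc(self) -> None:
        self.value += 1
    def get(self) -> int:
        return self.value

class FailStopWrapper:
    def __init__(self, component, invariant):
        self._component = component
        self._invariant = invariant   # proposition checked after every state change
        self.error = False            # additional boolean observer

    def call(self, operation: str, *args):
        if self.error:                # fail-stop: refuse to operate after an error
            raise RuntimeError("component has failed and is stopped")
        result = getattr(self._component, operation)(*args)
        if not self._invariant(self._component):
            self.error = True         # announce the error via the observer
            raise RuntimeError(f"specification violated after {operation}{args}")
        return result

if __name__ == "__main__":
    wrapped = FailStopWrapper(Counter(), invariant=lambda c: c.get() >= 0)
    wrapped.call("inc")
    print(wrapped.call("get"), wrapped.error)   # -> 1 False

A generated wrapper would instead check the component's logic-based regular-expression specification against the recorded history, but the fail-stop behaviour it must guarantee is the same.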
Business process modelling can have many applications. It can be used to: understand the flow of work in an organisation; monitor and control the progress of work; measure and optimise work performance; predict the effects of changes in management and operations; plan for the implementation of changes; design interaction patterns between processes running in different organisations; and specify, develop and deploy software to further organisational/business goals. Formal analysis is a particularly good reason to carry out such modelling, as it can disclose problems long before the process is actually deployed in an organisation. This early detection is especially important for processes that cross organisational boundaries, where errors, as in all distributed systems, are inherently hard to detect and expensive to correct. One approach to model a business process is to describe its concrete execution in an enterprise, using enterprise resources to produce tangible results like products or services. Typically, such descriptive modelling would be carried out to explore the analogy between business and computing and would concentrate on the mechanics of a process. A prescriptive model, in contrast, aims to express the intended purpose of each process. The challenge for business process modelling is, we believe, finding suitable abstractions that can be applied at both levels. By formally relating such descriptive and prescriptive models, we could verify whether a process is correct (satisfies its intended purpose) and further develop an engineering approach to design such processes in a rigorous way. This chapter is set to contribute to this general goal. We consider a particular, although broadly defined, business domain: customer-driven production. Production refers to the process of creating goods – tangible products like cars, phones or shovels. Products are produced by assembly from sub-products, carried out within independent business entities called production cells. Each cell contains the resources to store, manufacture and deliver products to its customers. At a certain level of abstraction, a cell formally represents a manufacturing enterprise with resources like a warehouse, a shop floor and product stocks. The behaviour of a cell is driven by the orders received from its customers and by how such orders are implemented. The customers include other production cells. The implementation of a customer order is described by a production process. The aim of this chapter is to define formally what it means for such a process to be feasible (possible to carry out given the resources delegated for its execution) and correct (satisfying a customer order, if feasible). We adopt the following business model, explained for sequential, concurrent and distributed production: 1. Sequential production (one process, one cell). Each process responds to a customer order which specifies the product to deliver, the number of items and possibly the latest time of delivery. The process operates within a production cell which offers the resources for its execution, in terms of product stocks, storage space, and capacity to carry out manufacturing and transportation. It describes in detail what operations should be performed on the cell and in which sequence. The process is feasible if the cell has enough resources for its execution. A feasible process is correct if its execution satisfies the order: the stocks of the products reach the required volumes within the deadline. 2.
Concurrent production (many processes, one cell). A cell may contain several processes, each created in response to a particular customer order and all executed concurrently. In order to resolve conflicts over shared resources (such as product stocks, manufacturing workstations or a transportation system), processes are assigned priorities to represent their importance. Feasibility means the cell has enough resources for all such processes, executed concurrently. Correctness means all processes satisfy their corresponding customer orders when executed concurrently under a priority-based scheduler. 3. Distributed production (many processes, many cells). Several processes running in different cells may satisfy an order in a cooperative way. Trying to utilise the resources that one cell lacks and another has available, cells form customer-supplier relations dynamically, by receiving customer orders and implementing them with their own processes, perhaps sending further orders at the same time. As a result, each cell may be running several processes, each contributing part of the original order. This mechanism leads to the distribution of production activities. Feasibility means each cell has enough resources for its processes executed concurrently. Correctness means a process satisfies its customer order provided all supplier processes satisfy their own. Moreover, when several such processes run in the same cell, they must satisfy their orders concurrently. The rest of this chapter is organised as follows. Section 7.2 is about product modelling. Section 7.3 presents a descriptive production model, including concepts like production cells, operations and processes, and defines what it means for a process to be feasible. Section 7.4 presents a corresponding prescriptive model. Section 7.5 defines what it means for a feasible process to be correct, by relating such descriptive and prescriptive models. It starts with sequential production, then introduces concurrency, distribution, and real-time constraints. Section 7.6 describes related work and provides some conclusions.
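To make the sequential case concrete, here is a hypothetical sketch of the feasibility and correctness notions for one process in one cell; the Python data types, the single-capacity resource model and the checks are simplifications invented for illustration, not the chapter's formal definitions.

# A minimal sketch: a customer order specifies a product, a number of items
# and a deadline; a production cell offers stock and manufacturing capacity.
# Feasibility and correctness are reduced to very simple arithmetic checks.

from dataclasses import dataclass, field

@dataclass
class Order:
    product: str
    items: int
    deadline: int                                  # latest delivery time, in time units

@dataclass
class Cell:
    stock: dict = field(default_factory=dict)      # product -> items currently in stock
    capacity_per_unit_time: int = 1                # items manufacturable per time unit

def feasible(order: Order, cell: Cell) -> bool:
    """The cell has enough resources to execute a process for the order."""
    in_stock = cell.stock.get(order.product, 0)
    to_make = max(0, order.items - in_stock)
    return to_make <= cell.capacity_per_unit_time * order.deadline

def correct(order: Order, delivered: int, delivery_time: int) -> bool:
    """A feasible process satisfied the order: volume reached within the deadline."""
    return delivered >= order.items and delivery_time <= order.deadline

if __name__ == "__main__":
    order = Order(product="shovel", items=10, deadline=5)
    cell = Cell(stock={"shovel": 4}, capacity_per_unit_time=2)
    print(feasible(order, cell))                   # True: 6 items to make, 10 possible in time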
Electronic Government leads to the technology-enabled transformation of government organizations, and consequently of their relationships with citizens, businesses and other arms of government. Developing countries can greatly benefit from Electronic Government development, in terms of increasing the capacity of government organizations to meet tremendous socio-economic needs. However, they also face common challenges: weak implementation, delivery and coordination; policy-strategy and strategy-implementation gaps; insufficient human capacity; lack of research to precede project implementations; etc. This paper proposes a rigorous Electronic Government Development Framework (EGOV.*) to address some of these challenges. The framework enables the systematic construction of Electronic Government for a given Public Administration (PA) in terms of: (1) establishing the readiness of the PA for ICT-enabled transformation; (2) determining the state of the art in Electronic Government practices and solutions around the world, as relevant to the PA; (3) building a PA-wide vision and strategy towards the development of high-quality Electronic Government; (4) constructing a government program to implement this strategy; (5) building human capacity within the PA, covering leadership, management and technical skills, to be able to execute and benefit from this program; and (6) establishing a Resource Center for Electronic Government on the basis of existing institutions, particularly government and academia, and raising the capacity of this Center to execute the program. The framework has been applied in three countries: one application completed, one ongoing, and one about to start.
Information sharing (IS) is a key capability required for one-stop and networked government, responding to a variety of intra-organizational, inter-organizational, or cross-national needs such as sharing service-related information between parties involved in the delivery of seamless services, sharing information on available resources to enable whole-of-government responses to emergencies, etc. Despite its importance, the IS capability is not common in governments due to various technical, organizational, cultural, and other barriers which are generally difficult for individual agencies to address. Developing such capabilities is therefore a challenging task which requires government-wide coordination, explicit policies and strategies, and concrete implementation frameworks. At the same time, reconciling existing theoretical frameworks with IS practice can be difficult due to differences in conceptions and abstraction levels. In order to address such difficulties, this chapter proposes a conceptual framework to guide the development of Government Information Sharing (GIS) policies, strategies, and implementations. By integrating theoretical frameworks and GIS practice, the framework adopts a holistic view of the GIS problem, highlights the main areas for policy intervention, and provides policy makers and government managers with conceptual clarity on the GIS problem.
Better integration of Electronic Government (EGOV) and Public Administration Reform (PAR) strategies has been identified by global EGOV benchmark reports as one of the contemporary issues to address in improving the outcomes of EGOV programs. This chapter presents a technique for aligning EGOV and PAR strategies based on the Strategic Alignment Model (SAM) of Henderson and Venkatraman. By treating EGOV and PAR strategies as two different alignment domains, similar to the organizational and technological domains respectively, we re-frame the original SAM to address our specific alignment needs. Our model provides a procedure and metrics for analyzing: (1) the alignment between a pair of EGOV and PAR strategies and (2) the internal coherency of an EGOV strategy. We discuss our experience in applying this approach in Macao and conclude by discussing how it may be used to align EGOV with other strategies, such as those related to governance and development.
This paper presents a Minitrack on Development Methods for Electronic Government, organized as part of the Electronic Government Track at the 42nd Hawaii International Conference on System Sciences (HICSS-42), 5-8 January 2009, Big Island, Hawaii. The minitrack focused on development methods, including technical, managerial and organizational dimensions, to address various challenges facing Electronic Government development: dependability and accessibility, evolving requirements, adherence to law and regulations, multi-channel service delivery, technical and organizational complexity, dependence on ever-changing legal and operational environments, etc. Of particular interest are contributions that can make Electronic Government development more measurable, predictable, replicable and scalable, contributing to the establishment of theoretical foundations and engineering practices for the domain.
The three papers accepted by the minitrack were: (1) “Government Enterprise Architecture Grid Adaptation in Finland” co-authored by Katariina Valtonen, Ville Seppänen, and Mauri Leppänen; (2) “Developing User Requirements for Transnational Government Information Systems” co-authored by Philip Seltsikas and Nikolaos Papas; and (3) “(Semantic) Model-driven Development for Electronic Government Applications” co-authored by Aadya Shukla, Charles Crichton, Jim Davies, Steve Harris, and Andrew Tsui.
This paper presents a Minitrack on Development Methods for Electronic Government, organized as part of the Electronic Government Track at the 43rd Hawaii International Conference on System Sciences (HICSS-43), 5-8 January 2010, Koloa, Hawaii. The minitrack focused on development methods, including technical, managerial and organizational dimensions, to address various challenges facing Electronic Government development: dependability and accessibility, evolving requirements, adherence to law and regulations, multi-channel service delivery, technical and organizational complexity, dependence on ever-changing legal and operational environments, etc. Of particular interest are contributions that can make Electronic Government development more measurable, predictable, replicable and scalable, contributing to the establishment of theoretical foundations and engineering practices for the domain.
The six papers accepted by the Minitrack were: (1) “A Goal Oriented and Knowledge Based e-Government Project Management Platform” by Demetrios Sarantis, Yannis Charalabidis and Dimitris Askounis; (2) “Simulation Games for the Collaborative Development of Multichannel Public Service Delivery” by Bram Klievink and Marijn Janssen; (3) “Can e-Government Adopters Benefit from a Technology-First Approach? The Case of Egypt Embarking on Service-Oriented Architecture” by Ralf Klischewski and Ranwa Abubakr; (4) “EA as a Tool for Strategic Planning – a Case Study of a Local Government” by Katariina Valtonen, Ismo Korhonen and Riku Rekonen; (5) “On Mapping Business Document Models to Core Components” by Michael Strommer, Christian Pichler and Philipp Liegl; and (6) “Semantic Interoperability in Practice” by Aadya Shukla, Steve Harris and Jim Davies.
Travel planning can be a challenge these days. There is an abundance of travel-related information in magazines, newspapers, guides, brochures and on the internet. The information covers all imaginable aspects of travel, including flights, hotels, cars, restaurants, maps, events and others. Travel is offered in the form of individual products or complete packages, its contents predefined or customised to individual needs, definite or based on waiting lists, sold for a regular price or as special deals (under different restrictions on when, where and for whom the deal applies). With so many options available and so many constraints involved, there is a great deal of uncertainty about how to carry out the process of travel planning. The internet contributes significantly to making this information available to individual travellers, including on-line availability-checking, reservations and sales. However, internet-based travel agents tend to focus more on the presentation of information and less on its actual contents. Moreover, they implement little in the way of a method for systematic travel planning – formulating the requirements and then moving gradually towards the solution. Here are some observations from visiting such agents: (1) a visitor has to know in detail what service he is looking for before he can learn this for himself; (2) when presented with an offer, a visitor can only accept or reject it, but cannot explore the alternatives; (3) an error message communicates to the visitor the presence of an error, but not the reasons for it or how to avoid it; (4) there is no recollection of previous interactions with a visitor – the system always returns to its pre-defined state; (5) even where a small amount of automation in support of visitors' decisions is possible, it is not implemented. This chapter is about methods and tools in support of travel planning. We show how to approach travel planning as essentially an engineering problem. We define precisely what constitutes travel requirements, what a travel plan is, and in what sense the plan correctly implements the requirements. The approach is model-based, with models representing all stages of travel planning, from abstract requirements to concrete itineraries. Such models are designed, analysed and redesigned to represent decisions made by the travel engineer (the person who designs the travel plan). The models are minimal – expressed with the minimum number of modelling concepts – but also open-ended – new concepts can be added when needed. They are also formally based – their syntax is well-defined and their semantics is expressed in precise mathematical terms, using numbers, sets, maps, functions, and abstract and concrete types. The formal machinery used for the semantic definitions is basic set theory and logic. Formality makes it possible to prove the correctness of a travel plan with respect to travel requirements, and also to justify the soundness of the design rules that allow travel plans to be made more concrete. Such rules are applied during a development process, leading gradually from the abstract travel requirements to concrete solutions. We present such rules and discuss their soundness. In presenting such models we have four goals in mind. First, we want to formalise some of the travel-related concepts in a way which is independent of the current technology, providing the basis for an engineering approach.
The latter means that the models must capture different levels of abstraction as well as relate those levels by the concept of correctness. Second, we want to apply such models to define a formally-based method for systematic travel planning, including definitions of the design rules and justification of why they are sound. Third, we want to explore the opportunities and limitations for automation in support of travel-related decision-making. The models will be used to formulate precise problems and to seek possibly automated solutions. Fourth, we would like to implement a prototype travel assistant, based on the models, in support of the method. The rest of this chapter is organised as follows. Section 8.2 introduces the basic concepts. Section 8.3 describes the models for travel planning, with definitions and examples. As the models capture different levels of abstraction, Section 8.4 defines what it means for a concrete travel plan to correctly implement an abstract plan. Section 8.5 presents a method for systematic travel planning, based on the models in Section 8.3 and the notion of correctness in Section 8.4. Section 8.6 describes the ideas for a travel assistant, a software system to support travel development. Finally, Section 8.7 contains some conclusions.
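As a hypothetical sketch of the model-based view (not the chapter's actual formalisation, which is expressed in set theory and logic), travel requirements can be seen as constraints on origin, destination and latest arrival, a travel plan as a sequence of legs, and correctness as the legs connecting and meeting the constraints; all names below are illustrative.

# A minimal sketch: a plan correctly implements the requirements if its legs
# connect in sequence, start at the origin, end at the destination and
# arrive no later than the latest allowed arrival time.

from dataclasses import dataclass
from typing import List

@dataclass
class Requirements:
    origin: str
    destination: str
    latest_arrival: int          # e.g. hours from some reference point

@dataclass
class Leg:
    frm: str
    to: str
    departs: int
    arrives: int

def correct(plan: List[Leg], req: Requirements) -> bool:
    if not plan:
        return False
    connected = all(a.to == b.frm and a.arrives <= b.departs
                    for a, b in zip(plan, plan[1:]))
    return (connected
            and plan[0].frm == req.origin
            and plan[-1].to == req.destination
            and plan[-1].arrives <= req.latest_arrival)

if __name__ == "__main__":
    req = Requirements("Macau", "Geneva", latest_arrival=30)
    plan = [Leg("Macau", "Hong Kong", 1, 2), Leg("Hong Kong", "Geneva", 5, 18)]
    print(correct(plan, req))    # True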
Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most interestingly for us, of AAs). We conclude that there is substantial and important scope, particularly in Computer Ethics, for the concept of moral agent not necessarily exhibiting free will, mental states or responsibility. This complements the more traditional approach, common at least since Montaigne and Descartes, which considers whether or not (artificial) agents have mental states, feelings, emotions and so on. By focussing directly on ‘mind-less morality’ we are able to avoid that question and also many of the concerns of Artificial Intelligence. A vital component in our approach is the ‘Method of Abstraction’ for analysing the level of abstraction (LoA) at which an agent is considered to act. The LoA is determined by the way in which one chooses to describe, analyse and discuss a system and its context. The ‘Method of Abstraction’ is explained in terms of an ‘interface’ or set of features or observables at a given ‘LoA’. Agenthood, and in particular moral agenthood, depends on a LoA. Our guidelines for agenthood are: interactivity (response to stimulus by change of state), autonomy (ability to change state without stimulus) and adaptability (ability to change the ‘transition rules’ by which state is changed) at a given LoA. Morality may be thought of as a ‘threshold’ defined on the observables in the interface determining the LoA under consideration. An agent is morally good if its actions all respect that threshold; and it is morally evil if some action violates it. That view is particularly informative when the agent constitutes a software or digital system, and the observables are numerical. Finally we review the consequences for Computer Ethics of our approach. In conclusion, this approach facilitates the discussion of the morality of agents not only in Cyberspace but also in the biosphere, where animals can be considered moral agents without their having to display free will, emotions or mental states, and in social contexts, where systems like organizations can play the role of moral agents. The primary ‘cost’ of this facility is the extension of the class of agents and moral agents to embrace AAs.
The context of the specific situation where knowledge is applied significantly affects the problem-solving process by forcing people to negotiate and reconsider the priorities of their mental representations and problem-solving operators in relation to this process. In this work we argue that students' context awareness can be significantly enhanced by the use of appropriate external representations which guide them to activate context-inducing cognitive processes. By embedding such representations in a case-based learning environment we expect to guide students' processing of material that is rich in contextual information, in a way that improves both their context awareness and their metacontextual competence. After presenting a context model, we discuss the design of such representations based on this model and explain why we expect that their use in a learning situation would enhance context awareness. A research agenda is also included, suggesting specific research activities for evaluating the instructional efficiency of the proposed design.
Nowadays, the acquisition of trustable information is increasingly important in both professional and private contexts. However, establishing what information is trustable and what is not is a very challenging task. For example, how can information quality be reliably assessed? How can the credibility of sources be fairly assessed? How can gatekeeping processes be found trustworthy when filtering out news and deciding the ranking and priorities of traditional media? An Internet-based solution to this ancient human issue is being studied; it is called Polidoxa, from the Greek “poly” (πολύ), meaning “many” or “several”, and “doxa” (δόξα), meaning “common belief” or “popular opinion”. This old problem will be solved by means of ancient philosophies and processes with truly modern tools and technologies. This is why this work required a collaborative and interdisciplinary effort from researchers with very different backgrounds and institutes with significantly different agendas. Polidoxa aims at offering: 1) a trust-based search engine algorithm, which exploits the stigmergic behaviours of the users’ network; 2) a trust-based social network, where the notion of trust derives from network activity; and 3) a holonic system for bottom-up self-protection and social privacy. By presenting the Polidoxa solution, this work also describes the current state of traditional media as well as newer ones, providing an accurate analysis of major search engines such as Google and of social networks (e.g., Facebook). The advantages that Polidoxa offers compared to these are also clearly detailed and motivated. Finally, a Twitter application (Polidoxa@twitter) which enables experimentation with basic Polidoxa principles is presented.
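The abstract does not publish Polidoxa's algorithm, but the trust-based ranking idea in point 1 can be made concrete with a purely illustrative sketch in which a result's relevance is weighted by the trust the searching user's network places in its source, trust being derived from past interaction activity; every name and formula below is an assumption made only to make the idea tangible.

# A purely illustrative trust-weighted ranking; not Polidoxa's actual method.

from typing import Dict, List, Tuple

def trust_in(source: str, my_contacts: List[str],
             interactions: Dict[Tuple[str, str], int]) -> float:
    """Average normalised interaction activity of my contacts with a source."""
    if not my_contacts:
        return 0.0
    activity = [interactions.get((contact, source), 0) for contact in my_contacts]
    peak = max(activity) or 1
    return sum(a / peak for a in activity) / len(my_contacts)

def rank(results: List[Tuple[str, str, float]], my_contacts: List[str],
         interactions: Dict[Tuple[str, str], int]) -> List[Tuple[str, float]]:
    """results: (url, source, relevance) -> list of (url, combined score)."""
    scored = [(url, relevance * (0.5 + 0.5 * trust_in(source, my_contacts, interactions)))
              for url, source, relevance in results]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    interactions = {("alice", "newsA"): 10, ("bob", "newsA"): 4, ("alice", "blogB"): 1}
    results = [("http://newsA/story", "newsA", 0.8), ("http://blogB/post", "blogB", 0.9)]
    print(rank(results, ["alice", "bob"], interactions))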
The television landscape is in a state of flux. In this new environment, profit-driven media companies have to balance tradeoffs between traditional and new channels of video distribution to optimize returns on their investments in content generation. This chapter describes the challenges traditional television service providers face in adapting their strategies to an environment in which the internet is playing an increasingly prominent role as a new distribution channel. In the short to intermediate run there is the challenge of finding ways to monetize an internet audience without cannibalizing profits earned through traditional distribution channels. The longer-term challenge is adapting to a distribution technology that embeds a fundamentally different economic logic for video market organization. In this chapter, we describe and analyze current trends in the internet television market and traditional television industry players’ efforts to respond to the opportunities and threats posed by internet distribution.