The UNeGov.net Initiative - Building a Community of Practice for Electronic Governance - was established to transfer the experience gained through the e-Macao Project to other parts of the world, particularly to developing countries. The aim is to build a global Community of Practice comprising experts and practitioners interested in developing, sharing and applying concrete solutions for Electronic Governance. The Initiative established an activity framework comprising: (1) a community portal to document all activities of the community, coordinate its work and maintain a repository of resources relevant to Electronic Governance; (2) a series of network-building workshops around the world; (3) a series of capacity-building schools and courses on various aspects of Electronic Governance; (4) a community-wide practice of collaborative problem-solving based on the common repository of resources; (5) a series of state-of-the-art and state-of-practice reports on Electronic Governance in various countries; (6) a curriculum for training public officials in planning, development and management for Electronic Governance; (7) projects promoting Good Governance through Electronic Governance; and (8) an International Conference on the Theory and Practice of Electronic Governance. Within this framework, community actions are carried out within various thematic areas, including legislation, financing, organization, planning, coordination, human capacity development, e-readiness, e-participation, e-procurement, software infrastructure, public services, interoperability, standards for Electronic Governance, and others. The aim of this report is to explain the UNeGov.net Initiative, from its aims and objectives to the concrete deliverables obtained and activities carried out during 2005 and 2006. More information about UNeGov.net can be found at http://www.unegov.net.
We formally capture the notion of a virtual enterprise - a rapidly configured and continuously reconfiguring collection of enterprises acting together to produce specific products. In a sequence of formally justified refinements and using the trader concept of the emerging ODP (Open Distributed Processing) standard, we make the model increasingly flexible and dynamic. For each step we show that up to some internal details, the resulting models implement the initial enterprise model. As the formal notation we use RSL, the RAISE specification language. The outcome of this paper can be used, we believe, to provide architectural foundations for studying more concrete concepts, like development of information infrastructures, in enterprise integration.
We formally capture the notion of a virtual enterprise -- a rapidly configured (and continuously reconfiguring) collection of enterprises acting together to produce specific product(s). In a sequence of refinements and using the trader concept of the emerging ODP (Open Distributed Processing) standard, we make the model increasingly flexible and dynamic. We justify each step by showing that, up to some internal details, the resulting models are nothing more than implementations of the initial enterprise model. As the formal notation we use RSL, the RAISE specification language. The outcome of this paper can further be used, we believe, to provide architectural foundations for studying more concrete concepts (like development of information infrastructures) in enterprise integration.
We demonstrate how to build and execute models of production systems using some of the Unix system tools, in particular for text processing (awk) and process management (bash). We use text files to represent the state of the system, implement production operations as fully automated text transformations and compose such operations into a process for their sequential execution. As the models are implementation-dependent, we also provide their formalisation in the specification language RSL. A simple case study illustrates the presentation.
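The approach summarised above can be sketched concretely. In the following fragment, the product names, quantities and the `forge` operation are all invented for illustration: the system state is a text file of `product count` lines, one production operation is an awk transformation of that file, and operations are composed sequentially in bash.

```shell
#!/bin/bash
# State of the production system: one "product count" line per product.
printf 'steel 10\nbolt 0\n' > state.txt

# Operation "forge" (hypothetical): consume 2 units of steel, produce 1 bolt.
forge() {
  # Precondition check: fail if fewer than 2 units of steel are in stock.
  awk '$1 == "steel" && $2 < 2 { exit 1 }' state.txt || return 1
  # The operation itself is a text transformation of the state file.
  awk '$1 == "steel" { print $1, $2 - 2; next }
       $1 == "bolt"  { print $1, $2 + 1; next }
       { print }' state.txt > state.next
  mv state.next state.txt
}

# Compose operations into a process by executing them sequentially.
forge
forge
cat state.txt   # prints: steel 6, then bolt 2
```

Because the state is an ordinary text file, any Unix text tool can inspect or transform it, which is what makes this modelling style cheap to set up.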
We present an experiment in modelling and analysis of an application domain: competitive manufacturing. The result is a single formal model combining previously separate models for marketing (competition) and enterprises (coordination). In particular, we capture formally the marketing mix: product, price, place and promotion, and its effect on the sales of the enterprise. The model is built in stages: market without marketing, marketing without limits, and marketing under limited resources. The analysis includes justifying abstraction, down to two enterprises competing for a single consumer.
This document provides an abstract model for the manufacturing industry, as it appears in the market context. We identify a number of `players' (consumers, suppliers, traders and producers) and formalize the ways they compete for the shared market. Initially abstract, the model becomes more concrete in a sequence of refinements (conceptual rather than formal), naturally evolving to address such aspects of an enterprise as marketing, finances, administration and production. The companion documents will consider these aspects in more detail. Formalizing the game of supply and demand is, we believe, a prerequisite for modelling an enterprise that can respond most effectively to the demands of the business environment.
Cooperation between the members of a virtual manufacturing organization can be described as following a certain protocol, similar to communication protocols in distributed systems. We investigate how to model such protocols explicitly, for an organization modeled as a client-server system where one member (a client) can request another (a server) to fulfill a certain production goal. The goal is given in an abstract way: the type of the product and its required quantity. It is up to the server to decide whether and how to implement this goal. We present several protocols specifying how the servers react to such requests and how the clients receive their responses. We also discuss how the choice of a protocol affects the behavior of the whole organization.
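A minimal sketch of such a request-response exchange, with the organization reduced to one client and one server communicating over named pipes; the capacity, product name and accept/reject wording are all invented for illustration, not taken from the paper's protocols.

```shell
#!/bin/bash
# Two named pipes model the communication channels of the protocol.
rm -f req.fifo resp.fifo
mkfifo req.fifo resp.fifo

# Server: receives an abstract goal "product quantity" and decides,
# based on its (hypothetical) capacity, whether it can implement it.
server() {
  local capacity=10 product quantity
  read -r product quantity < req.fifo
  if [ "$quantity" -le "$capacity" ]; then
    echo "accept $product $quantity" > resp.fifo
  else
    echo "reject $product $quantity" > resp.fifo
  fi
}

server &                      # the server runs as a separate process
echo "bolt 5" > req.fifo      # client: request 5 units of product "bolt"
read -r answer < resp.fifo    # client: receive the server's decision
wait
echo "$answer"                # prints: accept bolt 5
rm -f req.fifo resp.fifo
```

The point of making the protocol explicit, as the abstract argues, is that the same client and server code can be re-wired under a different exchange discipline without touching the production logic.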
Electronic Government leads to technology-enabled transformation of government organizations, and consequently of their relationships with citizens, businesses and other arms of government. Developing countries can greatly benefit from Electronic Government development, in terms of increasing the capacity of government organizations to meet tremendous socio-economic needs. However, they also face common challenges: weak implementation, delivery and coordination; policy-strategy and strategy-implementation gaps; insufficient human capacity; lack of research to precede project implementations; etc. This paper proposes a rigorous Electronic Government Development Framework (EGOV.*) to address some of these challenges. The framework enables systematic construction of Electronic Government for a given Public Administration (PA) in terms of: (1) establishing the readiness of the PA for ICT-enabled transformation; (2) determining the state of the art in Electronic Government practices and solutions around the world, as relevant to the PA; (3) building a PA-wide vision and strategy towards the development of high-quality Electronic Government; (4) constructing a government program to implement this strategy; (5) building human capacity within the PA, covering leadership, management and technical skills, to be able to execute and benefit from this program; and (6) establishing a Resource Center for Electronic Government on the basis of existing institutions, particularly government and academia, and raising the capacity of this Center to execute the program. The framework has been applied in three countries - one application completed, one ongoing, and one about to start.
Responding to the issues of complexity, relevance, cost and risk of Electronic Governance (EGOV), we witness a specialization of the roles responsible for EGOV development and operation, professionalization of the personnel playing such roles, and utilization of the EGOV services and information to fulfill citizen needs. In order to build the competencies required by such (managerial, professional, technician and user) roles, education becomes a key success factor, and a growing variety of EGOV learning opportunities emerges. However, in the absence of conceptual underpinnings for EGOV education, discovering, analysing and integrating such opportunities is difficult. To address this need, the paper develops a theoretical construct for EGOV education; applies six measures to this construct: who - learners, why - roles, what - competencies, how - programs, where - schools, and when - prerequisites; and validates it through a landscaping exercise focusing on EGOV university programs.
We demonstrate how to build and execute models of production systems using some of the Unix system tools, in particular for text processing (awk) and process management (bash). We use text files to represent the state of the system, implement production operations as text transformations and compose such operations into a process for their sequential execution. As the models are implementation-dependent, we also provide their abstract formalisation in the specification language RSL. Two case studies illustrate the presentation.
This tutorial is about the design, and proof of design, of reliable systems from unreliable components. It teaches the concept and techniques of fault-tolerance, at the same time building a formal theory in which this property can be specified and verified. The theory eventually supports a range of useful design techniques, especially for multiple faults. We extend CCS, its bisimulation equivalence and modal logic, under the driving principle that any claim about fault-tolerance should be invariant under the removal of faults from the assumptions (faults are unpredictable); this principle rejects the reduction of fault-tolerance to ``correctness under all anticipated faults''. The theory is applied to a range of examples and eventually extended to include considerations of fault-tolerance and timing, under scheduling on limited resources. This document describes the motivation and the contents of the tutorial.
Consider a distributed real-time program which is executed on a system with a limited set of hardware resources. Assume the program is required to satisfy some timing constraints, despite the occurrence of anticipated hardware failures. For efficient use of resources, scheduling decisions must be taken at run-time, considering deadlines, the load and hardware failures. The paper demonstrates how to reason about such dynamically scheduled programs in the framework of a timed process algebra and modal logic. The algebra provides a uniform process encoding of programs, hardware and schedulers, with an operational semantics of a process depending on the assumptions about faults. The logic specifies the timing properties of a process and verifies them via this fault-affected semantics, establishing fault-tolerance. The approach lends itself to application of existing tools and results supporting reasoning in process algebras and modal logics.
e-Macao is a two-year project to build a foundation for Electronic Government in Macao in terms of readiness assessment, software research and development, and capacity-building for the government workforce. The project focused on five main activity areas: (1) survey - a detailed survey of the current state of e-Government practice was carried out, both locally and globally; (2) training - a comprehensive training program was organized for the government workforce in technical and management skills for e-Government, promoting collaborative, cross-agency development among government trainees; (3) development - a prototype software infrastructure for e-Government was developed, with example Electronic Public Services delivered to citizens, businesses and government built on top of this infrastructure; (4) research - relevant research was conducted in foundational and applied aspects of e-Government; (5) dissemination - the findings were disseminated inside the project, locally among e-Government stakeholders, and internationally. The project was carried out from July 2004 to June 2006. It was led by UNU-IIST and funded by the Government of Macao SAR through the Macao Foundation. UNU-IIST partners on the project were: the Government of Macao SAR, including 44 agencies in all vertical areas of the Government (Administration and Justice, Education and Culture, Finance and Economy, Security, and Transport and Public Works), the University of Macau and INESC-Macau. The second phase of the project has been approved by the Government, promoted to a program framework, and extended to last three more years, until the end of 2009. The aim of this report is to explain the project, from its aims and objectives, through activities and deliverables, to its organization and evaluation. More information about the e-Macao Project can be found at the project portal at http://www.emacao.gov.mo.
In the area of concurrent, communicating systems, a common approach to verifying the absence of design faults is in terms of an equivalence relation between a high-level and a low-level process. One such relation is bisimulation, which holds if two processes cannot be distinguished by observing them for a finite interval of time. However, the absence of design faults does not guarantee that the process will behave correctly in practice, as it depends on various hardware devices which may be subject to physical faults themselves. Such faults cannot be avoided; they must be tolerated. The purpose of this thesis is to provide a formal framework, based on bisimulations and using the Calculus of Communicating Systems, by which we can specify, design and verify concurrent, fault-tolerant systems, with emphasis placed on reasoning and design under weak assumptions about faults. The framework is described below. As the formal model we use labelled transition systems, and we introduce additional ‘faulty transitions’ to represent the effects of faults. This leads to a semantic theory of fault-tolerant bisimulations under which faulty transitions are possible for a low-level process but not for the high-level one. Corresponding to faults being unpredictable, the theory is fault-monotonic: having verified correctness in the presence of a number of faults, correctness is guaranteed in the presence of any subset of them, or even none. As a process description language we use a version of CCS where recursion involves locally declared constants. The language is assigned different semantics, each taking an additional declaration, as part of the fault description language, to specify the effects of faults. It is then possible to verify fault-tolerance by using process transformations and ‘normal’ bisimulations, and to verify correctness in the presence of multiple faults (specified by several declarations) using composition of faults.
The more faults are anticipated, the more difficult it is to ensure that they are all tolerated. This, together with the fault-monotonicity property, suggests establishing fault-tolerance by incremental refinement, for an increasing variety of faults. We also show how to apply this refinement compositionally and provide a number of fault-tolerance-preserving transformations. The framework is illustrated by several examples, including communication protocols, stable storage, distributed consensus and a distributed database service which supports atomic transactions despite faults in the underlying storage and communication devices.
We present a case study in the formal design of a distributed database. The database supports atomic transactions despite distribution and faults affecting its components. Development proceeds compositionally, from a sequential, through a concurrent, to a distributed system, while building up the capacity of individual components to tolerate an increasing number of faults. The case study illustrates some techniques useful for building fault-tolerant systems in general. We conclude by discussing these techniques and their support in a formalism based on CCS (for implementation) and a version of the mu-calculus (for specification and verification).
The paper presents a rigorous framework to plan, perform and manage foundational e-government projects. Each project aims to establish a government-wide direction for e-government and create an initial impetus in this direction within individual agencies. The framework identifies five kinds of tasks – survey, development, training, research and dissemination – and prescribes how they can be planned, performed and managed to achieve the stated objectives. The general focus is on support for strategic planning and capacity development. The technical focus is on middleware infrastructure development. We also outline how the framework was applied to carry out an e-government project in Macao, China.
We introduce a necessary test for claims about provable fault-tolerance: having proved that several faults are tolerated, we must be able to prove that any combination of them is tolerated as well. One notable failure to pass this test is bisimulation. The paper presents a class of bisimulations which are fault-monotonic and which, within CCS, support compositional design of component specifications by stepwise refinement, each step increasing or at least preserving the current level of fault-tolerance.
We outline a research project proposal to explore the use of Category Theory (CT) for the integration of distributed systems, as captured in the emerging standard for Open Distributed Processing (ODP). Following the unifying role that CT has played for the diverse branches of mathematics, CT is to provide a unifying architectural semantics for various concepts of the ODP standard, its viewpoint languages and their Formal Description Techniques (FDT). RAISE and its supporting tools will play the role of the underlying notation and will help to formally design software for conformance checking, for consistency between different viewpoint specifications, for translation of specifications between different FDTs (functors between many FDTs and CT already exist) and for simulation (a virtual machine). CIMOSA, the Open System Architecture for Computer Integrated Manufacturing, will provide the first concrete test and the running example for our study.
We study design support for building a virtual organisation, in particular the problem of organising production among a set of distributed cells to meet a given production goal. Each cell has a certain capacity to store, manufacture and deliver products, and it can interact with other cells by means of information and logistics networks. The goal specifies the type of the product and the volume required. We propose a solution to this problem in the form of a distributed coordination protocol. The protocol is based on an algorithm which, given a goal and a cell, attempts to generate a sequence of operations on the cell to implement the goal. When resources are lacking locally, the algorithm generates auxiliary goals, which the protocol then communicates to other cells, recursively creating more processes and more goals, and so on. Although this generally leads towards simpler goals, the protocol may sometimes fail to provide a match between goals and cells when resources are lacking globally. We provide a formalisation of the problem, propose a solution, offer correctness arguments and discuss an email-based implementation.
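The recursive goal-generation idea can be illustrated with a small single-cell sketch; the stock figures, the `bolt`/`steel` hierarchy and the bill of material are all invented, and a real protocol would send auxiliary goals to other cells over the network rather than recurse into a local function.

```shell
#!/bin/bash
# Hypothetical stock of one cell; the (invented) bill of material is:
# producing 1 bolt consumes 2 units of steel.
stock="bolt 4
steel 20"

stock_of() { echo "$stock" | awk -v p="$1" '$1 == p { print $2 }'; }

# plan PRODUCT QUANTITY: emit the operations implementing the goal,
# generating auxiliary goals (recursively) when local stock is lacking.
plan() {
  local product=$1 quantity=$2 have missing
  have=$(stock_of "$product"); have=${have:-0}
  if [ "$have" -ge "$quantity" ]; then
    echo "deliver $quantity $product from stock"
  elif [ "$product" = "bolt" ]; then
    missing=$((quantity - have))
    if [ "$have" -gt 0 ]; then echo "deliver $have bolt from stock"; fi
    plan steel $((missing * 2))       # auxiliary goal for the components
    echo "manufacture $missing bolt"
  else
    echo "fail: $quantity $product unavailable"   # lacking resources globally
  fi
}

plan bolt 7
```

As in the protocol above, each recursive call stands for a simpler goal handed to some cell; the `fail` branch corresponds to the case where no match between goals and cells exists globally.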