We use a formal object-oriented specification language (OOL) to formalize and combine UML models. Building on this formalization, we develop a set of refinement laws for UML models that captures the essential principles, nature, and patterns of object-oriented design. With the support of the incremental and iterative features of object orientation and the Rational Unified Process (RUP), our method supports precise and consistent UML model transformation. Keywords: Design patterns, model consistency, model integration, model refinement
Nowadays, governments are adopting Web 2.0 technologies to interact with citizens, empowering them to share their views, react to issues that concern them, and form opinions. Social media in particular play an important role in this context, due to their widespread use. For governments, a major technical challenge is the lack of automated intelligent tools for processing citizens’ opinions in government social media. At the same time, during the last decade, argumentation theory has consolidated itself in Artificial Intelligence as a new paradigm for modeling common-sense reasoning, with applications in several areas such as legal reasoning, multi-agent systems, and decision support systems, among others. This paper outlines an argument-based approach, combined with context-based information retrieval, for overcoming this challenge. Our ultimate aim is to combine context-based search and argumentation in a collaborative framework for managing (retrieving and publishing) service- and policy-related information in government-use social media tools.
We present an experiment in modelling and analysis of an application domain: competitive manufacturing. The result is a single formal model which combines previously separate models for marketing (competition) and enterprises (coordination). In particular, we formally capture the marketing mix – product, price, place and promotion – and its effect on the enterprise's sales. The model is built in stages: a market without marketing, marketing without limits, and marketing under limited resources. The analysis includes a justified abstraction down to two enterprises competing for a single consumer.
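The smallest configuration mentioned above, two enterprises competing for a single consumer, can be illustrated by the following toy sketch. This is not the paper's formal model: the scoring function, attribute names, and weights are all assumptions made for illustration only.

```python
# Toy sketch: two enterprises compete for one consumer.
# The consumer scores each offer as a weighted sum over marketing-mix
# dimensions; lower price is better, so price enters negatively.

def consumer_choice(offers, weights):
    """Return the name of the offer the consumer prefers.

    offers:  {name: {"price": float, "promotion": float, "place": float}}
    weights: relative importance of each dimension to the consumer.
    """
    def score(o):
        return (-weights["price"] * o["price"]
                + weights["promotion"] * o["promotion"]
                + weights["place"] * o["place"])
    return max(offers, key=lambda name: score(offers[name]))

offers = {
    "EnterpriseA": {"price": 10.0, "promotion": 2.0, "place": 1.0},
    "EnterpriseB": {"price": 12.0, "promotion": 5.0, "place": 1.0},
}
weights = {"price": 1.0, "promotion": 1.0, "place": 1.0}
winner = consumer_choice(offers, weights)
```

Here EnterpriseB's heavier promotion outweighs its higher price under equal weights, which hints at why the marketing mix, and not price alone, determines the sale.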
Software verification has recently been used to prove the presence of contradictions in source code, and thus to reveal potential weaknesses in the code or to assist compiler optimization. Compared to the verification of correctness properties, the translation from source code to logic can be kept very simple, yielding problems that automated theorem provers solve easily. In this paper, we present a translation of Java into logic that is suitable for proving the presence of contradictions in code. We show that the translation, which is based on the Jimple language, can be used to analyze real-world programs, and discuss some issues that arise from differences between Java code and its bytecode.
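The Jimple-based translation itself is beyond the scope of an abstract, but the underlying idea can be sketched in miniature. In the hypothetical fragment `if (x > 0) { if (x < 0) { ... } }`, the inner block's path condition is x > 0 ∧ x < 0, and proving it unsatisfiable reveals the contradiction. The sketch below hand-rolls a satisfiability check for strict bounds on a single integer variable; real tools discharge such formulas to an automated theorem prover.

```python
# Sketch: a path condition as a conjunction of strict bounds on one
# integer variable; unsatisfiability signals contradictory guards.

def satisfiable(constraints):
    """constraints: list of (op, k) with op in {">", "<"}.
    Returns True iff some integer x satisfies all of them."""
    lo, hi = float("-inf"), float("inf")
    for op, k in constraints:
        if op == ">":
            lo = max(lo, k)   # x > k tightens the lower bound
        else:
            hi = min(hi, k)   # x < k tightens the upper bound
    # An integer strictly between the bounds must exist.
    return lo + 1 <= hi - 1

# Path condition of the inner block in the fragment above.
path_condition = [(">", 0), ("<", 0)]
contradiction = not satisfiable(path_condition)
```

A verifier built on this idea reports the inner block as contradictory without ever executing the program.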
We present Joogie, a tool that detects infeasible code in Java programs. Infeasible code is code that does not occur on any feasible control-flow path and thus has no feasible execution. Infeasible code comprises many errors detected by static analysis in modern IDEs, such as guaranteed null-pointer dereferences or unreachable code. Unlike existing techniques, Joogie identifies infeasible code by proving, using techniques from static verification, that a particular statement cannot occur on a terminating execution. Thus, Joogie is able to detect infeasible code that is overlooked by existing tools. Joogie works fully automatically: it requires no user-provided specifications and (almost) never produces false warnings.
We present a framework for representing the probabilistic effects of actions and contingent treatment plans. Our language has a well-defined declarative semantics, and we have developed and implemented an algorithm (named BNG) that generates Bayesian networks (BNs) to compute the posterior probabilities of queries. In this paper we address the problem of projecting a contingent treatment plan by automatically constructing a structure of interrelated BNs, which we call a BN-graph, and applying the available propagation procedures on it. To address optimal plan generation, we base our approach on the observation that the target plan space normally has a well-defined structure. We provide a language for describing plan spaces which resembles a programming language with loops and conditionals. We briefly present the procedures for finding the optimal plan(s) from such specified plan spaces.
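The BN-graph construction is the paper's contribution; as background, the kind of posterior query an individual generated network answers can be sketched by direct enumeration on a two-node network. All probabilities below are invented for illustration and do not come from the paper.

```python
# Two-node Bayesian network: a condition C influences an observed
# improvement I. Exact inference by enumeration (Bayes' rule).

p_condition = 0.10                           # P(C = true), the prior
p_improve_given = {True: 0.80, False: 0.30}  # P(I = true | C)

# Posterior P(C = true | I = true): weigh each joint case and normalise.
joint_true = p_condition * p_improve_given[True]          # C true,  I true
joint_false = (1 - p_condition) * p_improve_given[False]  # C false, I true
posterior = joint_true / (joint_true + joint_false)
```

Observing the improvement raises the belief in the condition from the 10% prior to roughly 23%; projecting a contingent plan chains many such updates across the interrelated networks of the BN-graph.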
This paper reviews the nature and responsibilities of Government Chief Information Officer (GCIO) positions, defines the competencies required to fulfill such responsibilities, and presents the results of a survey of 78 education programs from 21 countries to determine to what extent they build GCIO-relevant competencies and how much attention different programs pay to the policy, design, implementation and operation aspects of public sector Information and Communication Technology (ICT). The survey covers CIO, Electronic Government, Technology Management, Leadership, Public Administration, Development, Sustainable Development and ICT for Development programs, all analyzed using a single conceptual framework. The survey revealed, among other findings: that the programs are strongly oriented toward a single discipline, that no program fulfills all competency needs expected of GCIO positions, that such needs can be fulfilled by combinations of existing programs, and that a truly international GCIO curriculum is yet to emerge.
Tutoring systems typically contain or generate a set of approved solutions to the problems presented to students. Student solutions that do not match the approved ones, but are otherwise partially correct, receive little acknowledgment as feedback, stifling broader reasoning. Additionally, feedback mechanisms rely on a student model, which requires extensive effort to build. This paper provides an alternative to the traditional ITS architecture by using a hint generation strategy that bypasses the student model and instead leverages the domain ontology. Concept hierarchy and co-occurrence between concepts in the domain ontology are drawn upon to ascertain the partial correctness of a solution and guide student reasoning towards the correct solution. We describe the strategy as incorporated in a tutoring system for medical problem-based learning (PBL), wherein the widely available UMLS is deployed as the domain ontology. Evaluation of expert agreement with system-generated hints on a 5-point Likert scale resulted in an average score of 4.44 (r = 0.9018, p < 0.05). Hints containing partial correctness feedback scored significantly higher than those without it (Wilcoxon Rank Sum, p < 0.001).
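One way the concept hierarchy can ground partial correctness is sketched below. The UMLS-scale ontology is replaced by a toy is-a hierarchy, and the scoring rule (a distance-based discount up the hierarchy) is an assumption made for illustration, not the paper's actual measure.

```python
# Toy is-a hierarchy standing in for a domain ontology such as UMLS.
parent = {
    "bacterial pneumonia": "pneumonia",
    "pneumonia": "lung disease",
    "lung disease": "disease",
}

def partial_correctness(student_concept, solution_concept):
    """1.0 for an exact match; 1/(1+d) if the student named an ancestor
    d steps up the hierarchy (a correct but overly general answer);
    0.0 if the concepts are unrelated."""
    concept, d = solution_concept, 0
    while True:
        if concept == student_concept:
            return 1.0 / (1 + d)
        if concept not in parent:
            return 0.0
        concept, d = parent[concept], d + 1
```

A student answering "pneumonia" when the approved solution is "bacterial pneumonia" is thus credited as partially correct, and a hint can nudge them one level down the hierarchy instead of flatly rejecting the answer.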
We apply formal description techniques (FDTs) to model, compose and give operational meaning to the class of reactive systems representing manufacturing enterprises. The enterprise pursues its activities by means of resources and processes that execute concurrently on the resources, subject to internal (resource) and external (market) constraints. Some modelling techniques are familiar for reactive systems; others are specific to this domain: modelling management decisions, product transfer during one-to-one (one supplier, one consumer) synchronisation, marketing, and many-to-one (many suppliers, one consumer) synchronisation. The paper is a novel application of FDTs and also a contribution to the semantics of enterprise engineering.
e-Government Readiness Assessment is a vital step in developing effective e-Government strategies, providing important knowledge for policy- and decision-makers. Particularly for developing countries, it is imperative to analyse the conditions, opportunities and challenges of the existing environment to ensure that the resulting e-Government strategy is realistic and workable, whilst enabling public administration reform in support of a sustainable development agenda. While there are different approaches to e-Government Readiness Assessment, a review of the existing literature reveals a general lack of focus on methodology and survey design for e-Government Readiness Assessment applicable to developing countries. In this paper, we present the key elements of a holistic e-Government Readiness Assessment methodology, considering national- and agency-level survey model and instrument design. In addition, we discuss implementation issues and present recommendations for future research, including the validation of the proposed methodology.
Service integration is central to joined-up government initiatives and requires information on the collaborators and the services they offer, the roles of different actors, the resources required, and their goals (individual and shared). This information is largely available in unstructured form on government portals, in publications and in other textual sources. This paper explores semantic text mining for extracting service-related information from such sources using Natural Language Processing techniques supported by Service-Oriented Process Ontologies. Our solution framework consists of the following steps: (1) creating a domain and service-oriented process ontology, (2) extracting service-related information from textual sources based on the ontology, and finally (3) mining relationships among the services based on the information extracted in Step (2), linked with a pre-defined hierarchy of service delivery goals specifying the objective(s) to be achieved among the orchestrated services. We describe our approach to these tasks and discuss the progress of the work, our experiences and the challenges encountered so far.
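Step (2) of the framework can be sketched at its simplest as dictionary-based term spotting against the ontology. The ontology terms and the sentence below are illustrative stand-ins; a real NLP pipeline would add tokenisation, lemmatisation and disambiguation on top of this.

```python
# Toy sketch of ontology-backed extraction: spot known service terms
# in unstructured text (terms and text are invented examples).

ontology_terms = {"passport renewal", "birth certificate", "tax filing"}

def extract_services(text):
    """Return the ontology service terms mentioned in the text,
    matched case-insensitively."""
    text = text.lower()
    return sorted(term for term in ontology_terms if term in text)

found = extract_services(
    "Citizens can request Passport Renewal and Tax Filing online.")
```

The extracted terms then feed Step (3), where relationships among the spotted services are mined against the goal hierarchy.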
A promising strategy to promote good governance is harnessing the opportunities provided by the use of mobile phones, widely accessible to most segments of society, for delivering public information and services and for government decision-making. This paper investigates the design and implementation of mobile governance (MGOV) strategies for development (MGOV4D). Specifically, it presents an MGOV4D strategy framework to support mobile Information and Communication Technologies (ICT) for development (MICT4D) projects in meeting their development objectives. The paper consists of four parts. First, it presents a framework for determining the governance and related MGOV requirements for MICT4D initiatives. Second, it applies the framework to determine the MGOV4D requirements for a concrete case study of migrant head porters – local micro-logistic service providers from Ghana – involving the use of mobile phones to meet the porters’ livelihood needs. Third, based on the identified requirements, it presents a set of MICT4D initiatives that could be developed into MGOV4D programs to address the requirements. Fourth, it synthesizes the MGOV4D strategies that can support the inclusion objectives for the head porters and similar vulnerable groups. In the conclusions, the paper discusses how these results can support policy efforts for achieving Millennium Development Goal 1 – Poverty Alleviation – and Goal 3 – Gender Equality (specifically women’s empowerment).
The behavior of embedded hardware and software systems is determined by at least three dimensions: control flow, data aspects, and real-time requirements. To specify the different dimensions of a system with the best-suited techniques, the formal language CSP-OZ-DC integrates Communicating Sequential Processes (CSP), Object-Z (OZ), and Duration Calculus (DC) into a declarative formalism equipped with a unified and compositional semantics. In this paper, we provide evidence that CSP-OZ-DC is a convenient language for modeling systems of industrial relevance. To this end, we examine the emergency message handling in the European Train Control System (ETCS) as a case study with uninterpreted constants and infinite data domains. We automatically verify that our model ensures real-time safety properties, which crucially depend on the system's data handling.
A policy framework is the backbone of public governance and a major contributor to its quality. Such a framework is particularly required in areas where public governance seeks technology support, as is the case for Electronic Governance (e-Governance). This paper explains the need to put in place a comprehensive set of policies, and presents a model for policy interventions supporting e-Governance development. The model comprises a classification of policies based on their nature and applicability, and describes the core areas for which policy interventions are required. The paper also presents three major scenarios for using the model: (1) as a tool to help developing and transitioning nations design and analyze critical policy interventions, (2) as a template for understanding different alternatives for interventions, and (3) as a checklist for reviewing all niche areas to be regulated. In particular, the applicability of the model in India is discussed.
Since problem solving in group problem-based learning is a collaborative process, modeling individuals and the group is necessary if we wish to develop an intelligent tutoring system that can do things like focus the group discussion, promote collaboration, or suggest peer helpers. We have used Bayesian networks to model individual student knowledge and activity, as well as that of the group. The validity of the approach has been tested with student models in the areas of head injury, stroke and heart attack. Receiver operating characteristic (ROC) curve analysis shows that the models are highly accurate in predicting individual student actions. Comparison with human tutors shows that the group activity determined by the model agrees with that suggested by the majority of the human tutors with a high degree of statistical agreement (McNemar test, p = 0.774, Kappa = 0.823).
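The ROC analysis used to validate such predictive models can be sketched as follows: the area under the curve (AUC) equals the probability that a randomly chosen positive case outranks a randomly chosen negative one. The scores and labels below are invented for illustration, not the study's data.

```python
# AUC computed directly from predicted probabilities and binary outcomes
# via the rank-comparison (Wilcoxon-Mann-Whitney) formulation.

def auc(scores, labels):
    """Probability that a random positive outranks a random negative;
    ties count half. labels: 1 = action taken, 0 = not taken."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.6, 0.4, 0.2]   # model's predicted P(action)
labels = [1,   1,   0,   1,   0]     # whether the action occurred
```

An AUC of 1.0 would mean the model ranks every taken action above every untaken one; "highly accurate" in the abstract corresponds to AUC values near that ceiling.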
We present an approach to the elicitation of user preference models in which assumptions can be used to guide but not constrain the elicitation process. We show how to encode assumptions concerning preferential independence and monotonicity in a Knowledge-Based Artificial Neural Network (KBANN). We quantify the degree to which user preferences violate a set of assumptions. We empirically compare the KBANN network with an unbiased ANN in terms of learning rate and accuracy for preferences consistent and inconsistent with the assumptions. We go on to demonstrate how the technique can be used to learn a fine-grained preference structure from simple binary classification data.
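One of the encoded assumptions, monotonicity, can be illustrated outside the KBANN setting with a deliberately simple stand-in: a linear preference scorer trained with perceptron-style updates whose weights are clamped nonnegative, so "more of each attribute is never worse" holds by construction. The learner, data, and update rule are illustrative assumptions, not the paper's network.

```python
# Linear preference learning from pairwise comparisons, with a
# monotonicity bias enforced by clamping weights to be nonnegative.

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def train(pairs, n_features, epochs=50, lr=0.1):
    """pairs: list of (better, worse) attribute vectors."""
    w = [0.0] * n_features
    for _ in range(epochs):
        for better, worse in pairs:
            if score(w, better) <= score(w, worse):
                for i in range(n_features):
                    w[i] += lr * (better[i] - worse[i])
                    w[i] = max(w[i], 0.0)   # monotonicity bias
    return w

pairs = [([2.0, 1.0], [1.0, 1.0]),   # prefers more of attribute 0
         ([1.0, 3.0], [1.0, 2.0])]   # prefers more of attribute 1
w = train(pairs, 2)
```

Because the bias guides rather than constrains elicitation in the paper's approach, preferences that violate monotonicity can still be learned there; this sketch shows only the hard-constrained extreme for contrast.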