The television landscape is in a state of flux. In this new environment, profit-driven media companies have to balance tradeoffs between traditional and new channels of video distribution to optimize returns on their investments in content generation. This chapter describes the challenges traditional television service providers face in adapting their strategies to an environment in which the internet is playing an increasingly prominent role as a new distribution channel. In the short to intermediate run there is the challenge of finding ways to monetize an internet audience without cannibalizing profits earned through traditional distribution channels. The longer-term challenge is adapting to a distribution technology that embeds a fundamentally different economic logic for video market organization. In this chapter, we describe and analyze current trends in the internet television market and traditional television industry players’ efforts to respond to the opportunities and threats posed by internet distribution.
To analyze healthcare workflows precisely, we examine how they can be modeled and verified with an elementary and concise timed extension of CSP. To avoid considering healthcare workflows in isolation, we investigate the use of our CSP dialect for formally modeling workflows alongside the instruction model of the openEHR specification set, a general, maintainable, and interoperable approach to electronic health records. Hence, we present a CSP model for openEHR instructions, which allows timed reasoning and also integrates a basic notion of data and undefinedness. We show that this CSP dialect is suited to verifying important properties of healthcare workflows, such as workflow consistency, checking against timed specifications, and resource scheduling.
This chapter gives an overview of recent advances in GUI testing. Considering the increasing popularity of software and the fast development cycles (e.g., for desktop and mobile applications), GUI testing gains importance as it allows us to verify the behavior of a system from the user's perspective. Thus, it can quickly uncover relevant bugs that a user could face. Traditional capture-replay GUI testing approaches no longer meet the demands of developers. Therefore, there is increasing research activity in model-based GUI testing, where the user interaction behavior is simulated using a graph-based model. In the following, we outline different graphical notations to describe feasible user interactions, and methods to generate and execute test cases from these models. We discuss the benefits and limitations of the state of the art in GUI testing research and give a brief outlook on new trends and possibilities to improve GUI testing automation.
In prime event structures with binary conflicts (pes-bc), a branching cell is a subset of events closed under the downward causality and immediate conflict relations. This means that no event outside the branching cell can be in conflict with or enable any event inside the branching cell. It bears a strong resemblance to stubborn sets, a partial order reduction method on transition systems. A stubborn set (at a given state) is a subset of actions such that no execution consisting entirely of actions outside the stubborn set can be in conflict with or enable actions that are inside the stubborn set.
A rigorous study of the relationship between the two ideas, however, is not straightforward because 1) stubborn sets utilise sophisticated causality and conflict relations that invalidate the stability and coherence of event structures, 2) without stability it becomes very difficult to define concepts like prefixes and branching cells, which presuppose a clear notion of causality, and 3) it is challenging to devise a technique for identifying 'proper' subsets of transitions as 'events' such that the induced event-based system captures exactly the causality and conflict information needed by stubborn sets.
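The closure condition that defines a branching cell can be sketched in a few lines. The following is our own illustration, not code from the chapter; the event names, the `causes` map, and the `conflict` pairs are invented for the example.

```python
# Illustrative sketch: checking the closure condition of a branching cell in
# a prime event structure with binary conflicts.  A branching cell must be
# closed under downward causality and under immediate conflict.

def is_branching_cell(cell, causes, conflict):
    """cell: set of events; causes: event -> set of direct causes;
    conflict: set of frozensets {e, f} of events in immediate conflict."""
    for e in cell:
        # Closed under downward causality: every cause of e is in the cell.
        if not causes.get(e, set()) <= cell:
            return False
        # Closed under immediate conflict: every partner of e is in the cell.
        for pair in conflict:
            if e in pair and not pair <= cell:
                return False
    return True

# Toy structure: a causes b; b and c are in immediate conflict.
causes = {"b": {"a"}, "c": set()}
conflict = {frozenset({"b", "c"})}

print(is_branching_cell({"a", "b", "c"}, causes, conflict))  # True
print(is_branching_cell({"a", "b"}, causes, conflict))       # False: c missing
```

The second call fails because "b" is in immediate conflict with "c", which lies outside the candidate cell, violating the closure property described above.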
A promising strategy to promote good governance is harnessing the opportunities provided by mobile phones, widely accessible to most segments of society, for delivering public information and services and for government decision-making. This paper investigates the design and implementation of mobile governance (MGOV) strategies for development (MGOV4D). Specifically, it presents an MGOV4D strategy framework to support mobile Information and Communication Technologies (ICT) for development (MICT4D) projects in meeting their development objectives. The paper consists of four parts. First, it presents a framework for determining the governance and related MGOV requirements for MICT4D initiatives. Second, it applies the framework to determine the MGOV4D requirements for a concrete case study of migrant head porters – local micro-logistic service providers from Ghana – involving the use of mobile phones to meet the porters' livelihood needs. Third, based on the identified requirements, it presents a set of MICT4D initiatives that could be developed into MGOV4D programs to address the requirements. Fourth, it synthesizes the MGOV4D strategies that can support the inclusion objectives for the head porters and similar vulnerable groups. In the conclusions, the paper discusses how these results can support policy efforts for achieving Millennium Development Goals 1 (Poverty Alleviation) and 3 (Gender Equality, specifically Women's Empowerment).
Service integration is central to joined-up government initiatives and requires information on the collaborators and the services they offer, the roles of different actors, the resources required, and their goals (individual and shared). This information is largely available in unstructured form on government portals, in publications, and in other textual sources. This paper explores semantic text mining for extracting service-related information from such sources using Natural Language Processing techniques supported by Service-Oriented Process Ontologies. Our solution framework consists of the following steps: (1) creating a domain and service-oriented process ontology, (2) extracting service-related information from textual sources based on the ontology, and finally (3) mining relationships among the services based on the information extracted in Step 2, linked with a pre-defined hierarchy of service delivery goals specifying the objective(s) to be achieved among the orchestrated services. We describe our approach to these tasks and discuss the progress of the work, our experiences, and the challenges encountered so far.
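The extraction step of such a pipeline can be illustrated with a minimal sketch. This is our own toy example, not the paper's system: the ontology entries, surface terms, and sample sentence are all invented, and a real system would use NLP techniques rather than plain substring matching.

```python
# Minimal sketch of ontology-guided extraction (step 2 of the pipeline):
# surface terms from a toy service-oriented process ontology are matched
# against a sentence to pull out service-related mentions.

ontology = {
    "Service":  ["permit service", "licensing service"],
    "Actor":    ["city council", "licensing office"],
    "Resource": ["application form", "registry database"],
}

def extract(text):
    """Return (concept, matched term) pairs found in the text."""
    text_lower = text.lower()
    hits = []
    for concept, terms in ontology.items():
        for term in terms:
            if term in text_lower:
                hits.append((concept, term))
    return hits

sentence = ("The licensing office runs the permit service "
            "using the registry database.")
print(extract(sentence))
# [('Service', 'permit service'), ('Actor', 'licensing office'),
#  ('Resource', 'registry database')]
```

The extracted (concept, term) pairs are the raw material that step 3 would then relate to a hierarchy of service delivery goals.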
The 5th International Conference on Theory and Practice of Electronic Governance, ICEGOV2011, took place in Tallinn, Estonia from the 26th to the 28th of October 2011, under the patronage of the Ministry of Economic Affairs and Communication of the Republic of Estonia. The conference was co-organized by: e-Governance Academy in close partnership with Enterprise Estonia, Estonia; and Center for Technology in Government, University at Albany, State University of New York, USA. Part of the ICEGOV conference series, ICEGOV2011 was supported by the series coordinator – UNU-IIST (United Nations University – International Institute for Software Technology) Center for Electronic Governance, Macao SAR, China. The conference took place in the framework of the Nordic IT Week and various ICEGOV2011-collocated events were organized as part of this framework on the 29th of September 2011.
The success of electronic governance (EGOV) benchmarking has been limited so far. The lack of a theory integrating existing conceptualizations has made it difficult to acquire and share the knowledge produced by different benchmarking exercises. In order to address this problem, this paper: 1) explains the nature of the EGOV benchmarking activity through a well-established theoretical framework – Activity Theory, 2) applies the framework to carry out a mapping between a number of existing EGOV benchmarking conceptualizations, 3) develops a unified conceptualization based on these mappings, and 4) validates the resulting model through a real-life national EGOV strategy development project. The use of Activity Theory in the paper has enabled defining and relating the initial dimensions of the EGOV benchmarking activity, and mapping the dimensions present in existing conceptualizations. This not only created a unifying theoretical basis for conceptualizing the EGOV benchmarking activity but also allowed learning from and integrating existing conceptualizations. The work impacts EGOV benchmarking practice by enabling a logical design of the activity and a contextually correct understanding of existing EGOV benchmarking results with respect to their intended usage.
In recent years, increasing attention has been placed on improving access to Information and Communication Technology in the United States. With the rapidity at which broadband construction projects are dotting America, it is important to understand the social impacts of these infrastructure projects. One particularly salient issue is whether access to the Internet would decrease the involvement of youth in their home communities, since youth and issues of talent retention are crucial to the long-term viability of rural communities. However, findings on this topic have been mixed, with some studies suggesting that the use of online social networking decreases community involvement while others have found that it may maintain or even increase community involvement. This study set out to clarify the conflicting findings and, in the process, found support for both a displacement effect and an augmentation effect. The dual processes suggest that merely examining time spent on social networking sites does not provide a complete picture of the effects of Internet use on community involvement. The nature of the interactions and the participants in the online social networking also play an important role. For rural community leaders working towards the long-term viability of their communities, the findings suggest that efforts should be directed towards mitigating the displacement effects of Internet use while harnessing popular Internet applications such as social networking sites to augment the involvement that youth have in their home communities.
Graphical user interfaces (GUIs) are a common way to interact with software. To ensure the quality of such software, it is important to test the possible interactions with its user interface. GUI testing is a challenging task, as GUIs can allow, in general, infinitely many different sequences of interactions with the software. As only a limited number of possible user interactions can be tested, it is crucial for the quality of GUI testing to identify relevant sequences and avoid improper ones. In this paper we propose a model for better GUI testing. Our model is based on two observations. First, different user interactions commonly result in the execution of the same code fragments; it is therefore sufficient to test only interactions that execute different code fragments. Second, user interactions are context-sensitive: the control flow taken in a program fragment handling a user interaction depends on the order of some preceding user interactions. We show that these observations are relevant in practice, and we present a preliminary implementation that utilizes them for test case generation.
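The first observation, that interaction sequences covering the same code fragments are redundant, can be sketched as a simple deduplication step. The sequences and the coverage map below are invented for illustration and are not taken from the paper's implementation, which derives coverage from actual program execution.

```python
# Hedged sketch: keep only interaction sequences whose set of executed code
# fragments has not been seen before, following the observation that
# sequences covering identical fragments need not all be tested.

def dedupe_by_coverage(sequences, coverage):
    """coverage: sequence -> set of code fragments it executes."""
    kept, seen = [], set()
    for seq in sequences:
        frags = frozenset(coverage[seq])
        if frags not in seen:   # a new fragment set: the sequence adds value
            seen.add(frags)
            kept.append(seq)
    return kept

coverage = {
    ("open", "save"):         {"f_open", "f_save"},
    ("open", "menu", "save"): {"f_open", "f_save"},   # same fragments: redundant
    ("open", "close"):        {"f_open", "f_close"},
}

print(dedupe_by_coverage(list(coverage), coverage))
# [('open', 'save'), ('open', 'close')]
```

The second observation would refine this further: because interactions are context-sensitive, coverage must be recorded per ordered sequence, as done here, rather than per individual interaction.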
Previous bibliometric analyses of research activity in Sustainable Development have procured scientific articles by searching for the terms “sustainability” or “sustainable” in titles, abstracts, and keywords (Yarime et al., 2010; Kajikawa et al., 2007). But such an approach cannot adequately retrieve articles in the field and cannot be used to conduct analyses of research activities in the sub-areas. Our present work seeks to build a rich hierarchy representing the field of Sustainable Development and its sub-areas. Since Sustainable Development is highly interdisciplinary in nature and still evolving, it has been a matter of debate what should be included in a definition of the field. There have been efforts to provide a research core and framework for Sustainable Development by identifying its sub-areas through bibliometric analysis (Kajikawa, 2008). In particular, using topological clustering, Kajikawa et al. (2007) identified the following sub-areas of sustainability science: Agriculture, Fisheries, Ecological Economics, Forestry, Business, Tourism, Water, Urban Planning, Rural Sociology, Energy, Health, Soil, Wildlife, and Climate Change. In this paper we use this taxonomy as our definition of Sustainable Development and its sub-areas. Given the recognized critical need for countries to develop more sustainable development paths and the rapid increase in resources now being invested in this area, it becomes important to clearly understand the current state of research activity in the area. Quantitative bibliometric analyses are well suited for this, but conducting such analyses in highly interdisciplinary and emerging areas is challenging. In this paper we present a bibliometric study of research activity in Sustainable Development.
Sustainable Development concerns nature (e.g., climate, ocean, rivers, plants, and other components of the natural environment), artifacts (e.g., machinery, biotechnology, materials, chemicals, and energy), and society (e.g., economy, industry, finance, demography, culture, ethics, and history) (Lélé, 1991; Goodland, 1995). In recent years, Sustainable Development and its various sub-areas such as Renewable Energy and Climate Change have been declared national priority areas by numerous countries and international organizations.
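A taxonomy-driven retrieval of the kind described above can be sketched simply: each article is tagged with the sub-areas whose search terms appear in its text. The sub-area terms and the sample article below are invented for illustration; the actual study builds on the Kajikawa et al. taxonomy and full bibliometric databases.

```python
# Toy sketch of taxonomy-based tagging: an article is assigned to every
# Sustainable Development sub-area whose search terms match its title or
# abstract.  Terms and the article are hypothetical.

taxonomy = {
    "Energy":         ["renewable energy", "solar", "wind power"],
    "Climate Change": ["climate change", "emissions"],
    "Water":          ["water resources", "irrigation"],
}

def tag_article(text):
    """Return the sub-areas whose terms occur in the text."""
    text = text.lower()
    return [area for area, terms in taxonomy.items()
            if any(t in text for t in terms)]

article = "Solar-powered irrigation and its effect on emissions"
print(tag_article(article))  # ['Energy', 'Climate Change', 'Water']
```

Unlike a single-keyword search for “sustainability”, this per-sub-area matching also supports analyses of research activity within each sub-area.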
The refinement calculus provides a methodology for transforming an abstract specification into a concrete implementation by following a succession of refinement rules. These rules have been mechanized in theorem provers, thus providing a formal and rigorous way to prove that a given program refines another. In previous work, we extended this mechanization to object-oriented programs, where the memory is represented as a graph, and integrated our approach into the rCOS tool, a model-driven software development tool providing a refinement language. Hence, for any refinement step, the tool automatically generates the corresponding proof obligations, and the user can manually discharge them using a provided library of refinement lemmas. In this work, we propose an approach to automate the search for possible refinement rules from one program to another, using the rewriting tool Maude. Each refinement rule in Maude is associated with the corresponding lemma in Isabelle, thus allowing the tool to automatically generate the Isabelle proof when a refinement rule can be found automatically. The user can add a new refinement rule by providing the corresponding Maude rule and Isabelle lemma.
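The search described above can be illustrated with a drastically simplified sketch. This is our own toy model, not the rCOS/Maude implementation: programs are plain strings, rules are literal (pattern, replacement) pairs without the pattern variables real Maude rules have, and the rule names stand in for the link to Isabelle lemmas.

```python
# Illustrative sketch of refinement-rule search by breadth-first rewriting:
# find a chain of named rules transforming an abstract program string into a
# concrete one.  Each rule name could then be mapped to its Isabelle lemma.
from collections import deque

rules = [
    ("skip; ", "",                   "skip_left"),       # skip; S  ==>  S
    ("x := e", "var t := e; x := t", "introduce_temp"),  # introduce a temporary
]

def find_refinement(source, target, max_steps=5):
    """Return the list of rule names leading from source to target, or None."""
    queue = deque([(source, [])])
    seen = {source}
    while queue:
        prog, path = queue.popleft()
        if prog == target:
            return path
        if len(path) >= max_steps:
            continue
        for pat, repl, name in rules:
            if pat in prog:
                new = prog.replace(pat, repl, 1)
                if new not in seen:
                    seen.add(new)
                    queue.append((new, path + [name]))
    return None

print(find_refinement("skip; x := e", "var t := e; x := t"))
# ['skip_left', 'introduce_temp']
```

Returning the rule names, rather than just the final program, mirrors the idea that a found chain can be replayed as a sequence of lemma applications to assemble the Isabelle proof.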
Effective Information Technology (IT) leadership is critical for achieving good alignment between the business needs and IT means of an organization. In the public sector, IT leadership is increasingly realized through the Government Chief Information Officer (GCIO) function, typically established by governments based on local circumstances and emerging needs. This makes peer-learning about the working of such functions, and their transfer between different government contexts, challenging. To address this concern, the authors earlier introduced a GCIO System – a set of inter-related activities to guide governments in gradually establishing, operating, and sustaining the GCIO function. Based on a common conceptual model of the GCIO function, this paper defines a methodology for conducting the readiness assessment part of the GCIO System. The methodology comprises a set of assessment areas and a step-wise process to conduct assessment in these areas. The paper also shares experience from applying this methodology in practice and proposes how the assessment could inform the execution of other activities of the GCIO System.