This document provides an informal and conceptual presentation of what a manufacturing enterprise is. As this domain is vast, the narrative adopts a certain level of abstraction. It discusses the enterprise in terms of its four main functions, namely marketing, administration, finance, and, especially, production. Each part elaborates on the purpose (what the system is intended for), behaviour (what the system does), and structure (what makes it do what it does) of the systems in question. It aims to be a starting point for a more extensive and detailed domain analysis of a manufacturing enterprise.
Mobile computing shares many features with distributed computing, but it also extends the latter because of the need to deal with the mobility and occasional disconnection of both devices and applications. In this report we give RSL specifications describing the behavior of applications in a mobile computing environment. Specifically, we give specifications describing the migration, hoarding and cloning behavior of mobile applications.
Keywords: Mobile Computing; Graphical Notation; Mobicharts; Formal Specification; RAISE; Consistency Conditions; Testing.
We present a method for the systematic examination and reduction of the state space of distributed systems. The approach exploits properties of distributed systems to partition processes into atomic transitions and uses observations about partial orders among the distributed components to perform dynamic partial order reduction of the state space. This paper concentrates on preserving reachability checks and deadlock detection for a fixed input of the system. We present a search algorithm with a succinct representation of the search stack that requires only the current state in memory. We furthermore define characteristics to determine independent blocks of processes, and propose and evaluate heuristics that guide the search towards further state space reduction. In combination with dynamic symbolic execution (also known as concolic execution), the approach also allows the generalization of a concrete run and gives rise to the examination of the full input state space of a distributed system.
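As a toy illustration of the independence notion that partial order reduction relies on (a sketch, not the paper's algorithm), two transitions enabled in a state are independent if executing them in either order yields the same state. All names below are hypothetical:

```python
def apply(t, state):
    """Apply transition t (a function mutating a state dict) to a copy of state."""
    s = dict(state)
    t(s)
    return s

def independent(state, t1, t2):
    """Two transitions are independent in `state` iff they commute:
    both execution orders lead to the same resulting state."""
    return apply(t2, apply(t1, state)) == apply(t1, apply(t2, state))

# Writes to different variables commute; conflicting writes do not.
def inc_x(s): s["x"] += 1
def set_y(s): s["y"] = 5
def set_x(s): s["x"] = 0

s0 = {"x": 1, "y": 2}
print(independent(s0, inc_x, set_y))  # True
print(independent(s0, inc_x, set_x))  # False
```

A reduction then needs to explore only one interleaving of each pair of independent transitions.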
This technical report contains the papers presented at Refine 2006, the International Refinement Workshop. This compendium is the preliminary proceedings of the workshop, produced for the convenience of the workshop participants; the official proceedings will be published after the workshop in Electronic Notes in Theoretical Computer Science.
One of the fundamental problems in the study of concurrency and synchronisation is the wait-free construction of atomic registers. Atomicity simplifies programming by conjuring up an illusion of exclusive use of shared registers by concurrent processes, whereas wait-freedom improves efficiency by guaranteeing that every process can proceed at its own speed, i.e. without being blocked by or needing to wait for other processes, even when they are accessing the same register simultaneously.
Much research has been devoted to the study of the problem since Lamport and Peterson first posed it in the 1970s. A plethora of ingenious algorithms was discovered in the 1980s and 1990s, along with the invention of some of the classic techniques for concurrency and synchronisation, e.g. bounded timestamping. Nowadays these algorithms and techniques have become the foundation of scalable synchronisation research for modern multicore programming.
Despite the importance of the problem, little substantial survey work has been done to summarise the field. Vitanyi recently published a well-written note retracing some of the history and recalling some of the major results and steps towards its solution. Inspired by his work, we attempt in this report to give a more in-depth survey by 1) compiling a more comprehensive list of algorithms for atomic register construction, 2) organising and classifying the algorithms in a cube-like structure, and 3) providing a more accessible analysis of the classic techniques of bounded timestamping.
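To illustrate the atomicity property informally (a minimal sketch under our own modelling assumptions, not any of the surveyed constructions): once a concurrent history of reads and writes has been linearised, atomicity requires every read to return the value of the most recent preceding write. The helper below checks that condition for a register history given in linearisation order; the event encoding is hypothetical:

```python
def legal_register_history(history, initial=None):
    """history: list of ("write", v) or ("read", v) events, already
    placed in linearisation order.  Returns True iff every read
    returns the value of the most recent preceding write (or the
    initial value if no write precedes it)."""
    current = initial
    for op, v in history:
        if op == "write":
            current = v
        elif op == "read" and v != current:
            return False  # a stale or invented value breaks atomicity
    return True

print(legal_register_history([("write", 1), ("read", 1),
                              ("write", 2), ("read", 2)]))  # True
print(legal_register_history([("write", 1), ("read", 2)]))  # False
```

A wait-free construction must make the concurrent history of its reads and writes admit such a legal linearisation without ever forcing a process to wait.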
There are many examples of the use of the technique of domain analysis for modelling software systems in the initial stages of their development, although the case studies chosen are often of small systems or of small parts of large systems. In this paper we show that the techniques can be as readily applied to very large domains and how a manageable formal model of the domain can be obtained by abstracting appropriately. We illustrate this with a case study based on the airline business domain. We also discuss how this formal model can be developed towards software support systems for the airline industry which capture a wide range of different requirements, and how it might be applied more generally.
We present Joogie, a tool that detects infeasible code in Java programs. Infeasible code is code that does not occur on any feasible control-flow path and thus has no feasible execution. Infeasible code comprises many errors detected by static analysis in modern IDEs, such as guaranteed null-pointer dereferences or unreachable code. Unlike existing techniques, Joogie identifies infeasible code by proving, using techniques from static verification, that a particular statement cannot occur on a terminating execution. Thus, Joogie is able to detect infeasible code which is overlooked by existing tools. Joogie works fully automatically: it does not require user-provided specifications and (almost) never produces false warnings.
Recently, software verification has been used to prove the presence of contradictions in source code and thus reveal potential weaknesses in the code or assist compiler optimization. Compared to the verification of correctness properties, the translation from source code to logic can be very simple, and the resulting formulas are thus easy to solve by automated theorem provers. In this paper, we present a translation of Java into logic that is suitable for proving the presence of contradictions in code. We show that the translation, which is based on the Jimple language, can be used to analyze real-world programs, and discuss some issues that arise from differences between Java code and its bytecode.
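As a toy illustration of the underlying idea (not the paper's translation), a statement is infeasible when the conjunction of the guards on every path to it is unsatisfiable. The sketch below fakes a theorem prover by brute-forcing a small integer domain; all names are hypothetical:

```python
def satisfiable(pred, domain=range(-10, 11)):
    """Crude stand-in for an automated prover: a path condition over one
    integer variable is satisfiable if some value in a small test
    domain makes it true."""
    return any(pred(x) for x in domain)

# Guards accumulated along the path to some statement:
feasible_path = lambda x: x > 5                  # reachable code
contradictory_path = lambda x: x > 0 and x < 0   # infeasible code

print(satisfiable(feasible_path))        # True
print(satisfiable(contradictory_path))   # False -> the guarded code is dead
```

A real tool would encode the whole procedure as logical formulas and discharge the unsatisfiability query with an automated theorem prover rather than by enumeration.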
This chapter gives an overview of recent advances in GUI testing. Considering the increasing popularity and fast development cycles of software (e.g., desktop and mobile applications), GUI testing gains importance as it allows us to verify the behavior of a system from the user's perspective. Thus, it can quickly uncover relevant bugs which a user could face. Traditional capture-replay GUI testing approaches no longer meet the demands of developers. Therefore, there is increasing research activity in model-based GUI testing, where the user interaction behavior is simulated using a graph-based model. In the following, we outline different graphical notations to describe feasible user interactions, and methods to generate and execute test cases from these models. We discuss the benefits and limitations of the state of the art in GUI testing research and give a brief outlook on new trends and possibilities to improve GUI testing automation.
Graphical user interfaces (GUIs) are a common way to interact with software. To ensure the quality of such software it is important to test the possible interactions with its user interface. GUI testing is a challenging task, as GUIs can allow, in general, infinitely many different sequences of interactions with the software. As it is only possible to test a limited number of possible user interactions, it is crucial for the quality of GUI testing to identify relevant sequences and avoid improper ones. In this paper we propose a model for better GUI testing, based on two observations. First, it is common that different user interactions result in the execution of the same code fragments; it is therefore sufficient to test only interactions that execute different code fragments. Second, user interactions are context-sensitive: the control flow taken in a program fragment handling a user interaction depends on the order of some preceding user interactions. We show that these observations are relevant in practice, and present a preliminary implementation that utilizes them for test case generation.
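The first observation suggests a simple selection scheme, sketched below under hypothetical names (an illustration, not the paper's implementation): keep an interaction sequence only if it covers a code fragment not already covered by the sequences kept so far.

```python
def select_tests(candidate_sequences, coverage_of):
    """coverage_of(seq) -> set of code fragments executed by the whole
    interaction sequence.  Because interactions are context-sensitive,
    coverage is defined on sequences, not on single events."""
    seen, selected = set(), []
    for seq in candidate_sequences:
        cov = coverage_of(seq)
        if not cov <= seen:        # keep only sequences adding new coverage
            selected.append(seq)
            seen |= cov
    return selected

# Toy coverage data for four candidate interaction sequences:
cov = {("open",): {"f1"}, ("open", "save"): {"f1", "f2"},
       ("open", "close"): {"f1", "f3"}, ("close",): {"f1"}}
tests = select_tests(list(cov), cov.get)
# ("close",) adds nothing beyond ("open",), so it is dropped
```

The context-sensitivity observation is what forces coverage to be measured per sequence: the same event ("close" above) may execute different fragments depending on what preceded it.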
The goal of implementation verification is to prove that a given program behaves correctly under all possible executions, for instance that it produces correct output for all legal inputs. There are several conditions for carrying out such verification in practice: (1) the implementation is available in a form suitable for analysis; (2) the implementation “size” is such that the proof remains feasible; (3) it is written in a language which supports specification and proof; and (4) there is enough human expertise to guide the proof. Such conditions are often not satisfied in practice.

Result verification is a possible alternative. It relies not on the implementation of the program but on its execution record. This record may be a sequence of inputs and the corresponding outputs from the program when it computes a certain function, or a sequence of values observed about the changing state of the program as it interacts with its environment, or perhaps such values augmented with time-stamps. Whatever the form, the goal is to decide whether this record describes a correct execution of the program. A positive answer does not mean that the program is fault-free, only that this execution did not exhibit any errors. On the other hand, finding an error amounts to constructing a counter-example to claims of correctness about the program.

Due to its modest goals, result verification has several advantages over implementation verification: (1) it can be carried out off-line, after the execution terminates, or on-line, as the execution unfolds; (2) its application scope includes both off-the-shelf components (binary files) and remote service providers (distributed objects); (3) due to the simple structure involved (a sequence of states), it is easier to automate; and (4) being based solely on the execution record, its complexity is largely independent of the “size” of the implementation and the language used, which makes it particularly suitable for complex, heterogeneous systems.
In this chapter we propose a technical framework for carrying out automated result verification in practice:

1. The execution record is a simple text file. Many programs produce such files during their normal operation, for administrative purposes. A general technique for recording exactly the information needed for verification is to introduce a program monitor by wrapping.
2. The execution record is given as input to the verifier program, which does the actual verification. Given the execution record in a text file, we consider result verification as a text-processing task. Accordingly, the verifier is written in AWK, a special-purpose language for text processing. Verification is done by the AWK interpreter, given the execution record and the verifier program as inputs.
3. The verifier program is not written by hand but specification-generated. The generator takes the property the verifier should check, called the result specification, as input and produces the code of the verifier program as output. The specification-to-verifier generator is itself written in AWK.
4. A result specification is a first-order property built with two kinds of variables, state and record variables, using the functions and predicates over their respective types. A record variable refers to the contents of the record file. A state variable allows the calculation of values that are not written in the record file but are derived from it. For each state variable we define its type, its initial value, and how this value changes for the different operations invoked. We also allow specifications to be combined.

The rest of the chapter is as follows. Section 12.2 shows how to model the behaviour of a program. Section 12.3 describes how to wrap a program to record the relevant observations during its execution. Result verification, using a hand-written verifier program, is the subject of Section 12.4. Section 12.5 defines a result specification.
Section 12.6 describes various ways such specifications can be combined. Section 12.7 describes the generator which produces the code of the verifier program given the specification. Section 12.8 contains an example and Section 12.9 presents some conclusions.
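To give a flavour of the approach (an illustrative Python sketch under hypothetical names, not the chapter's AWK-based generator), consider an execution record whose lines log stack operations, and a result specification requiring every pop to return the most recently pushed value:

```python
def verify(record_lines):
    """Check an execution record (one "op value" entry per line)
    against the specification: every pop returns the value most
    recently pushed and not yet popped."""
    stack = []                     # state variable derived from the record
    for n, line in enumerate(record_lines, 1):
        op, _, val = line.partition(" ")   # record variables: op, val
        if op == "push":
            stack.append(val)
        elif op == "pop":
            if not stack or stack.pop() != val:
                return f"error at line {n}: {line}"
    return "ok"

print(verify(["push 1", "push 2", "pop 2", "pop 1"]))  # ok
print(verify(["push 1", "pop 2"]))  # error at line 2: pop 2
```

Here `stack` plays the role of a state variable, initialised empty and updated per operation, while the fields parsed from each line play the role of record variables; a generator as described above would emit such a checker automatically from the specification.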