The goal of implementation verification is to prove that a given program behaves correctly under all possible executions, for instance that it produces correct output for all legal inputs. Several conditions must hold for such verification to be carried out in practice: (1) the implementation is available in a form suitable for analysis; (2) the “size” of the implementation is such that the proof remains feasible; (3) the implementation is written in a language which supports specification and proof; and (4) enough human expertise is available to guide the proof. These conditions are often not satisfied in practice.

Result verification is a possible alternative. It relies not on the implementation of the program but on its execution record. This record may be a sequence of inputs and the corresponding outputs produced when the program computes a certain function, a sequence of values observed about the changing state of the program as it interacts with its environment, or such values augmented with time-stamps. Whatever the form, the goal is to decide whether this record describes a correct execution of the program. A positive answer does not mean that the program is fault-free, only that this particular execution did not exhibit any errors. On the other hand, finding an error amounts to constructing a counter-example to claims of correctness about the program.

Due to its modest goals, result verification has several advantages over implementation verification: (1) it can be carried out off-line, after the execution terminates, or on-line, as the execution unfolds; (2) its application scope includes both off-the-shelf components (binary files) and remote service providers (distributed objects); (3) the simple structure involved (a sequence of states) makes it easier to automate; and (4) because it is based solely on the execution record, its complexity is largely independent of the “size” of the implementation and of the language used, which makes it particularly suitable for complex, heterogeneous systems.
In this chapter we propose a technical framework for carrying out automated result verification in practice:

1. The execution record is a simple text file. Many programs already produce such files during normal operation, for administrative purposes. A general technique for recording exactly the information needed for verification is to introduce a program monitor by wrapping.

2. The execution record is given as input to the verifier program, which does the actual verification. Since the execution record is a text file, we treat result verification as a text-processing task. Accordingly, the verifier is written in AWK, a special-purpose language for text processing. Verification is done by the AWK interpreter, given the execution record and the verifier program as inputs.

3. The verifier program is not written by hand but generated from a specification. The generator takes as input the property the verifier should check, called the result specification, and produces the code of the verifier program as output. The specification-to-verifier generator is itself written in AWK.

4. A result specification is a first-order property built with two kinds of variables, state and record variables, using the functions and predicates over their respective types. A record variable refers to the contents of the record file. A state variable allows the calculation of values that are not written in the record file but are derived from it. For each state variable we define its type, its initial value and how this value changes for the different operations invoked. We also allow specifications to be combined.

The rest of the chapter is organised as follows. Section 12.2 shows how to model the behaviour of a program. Section 12.3 describes how to wrap a program so as to record the relevant observations during its execution. Result verification, using a hand-written verifier program, is the subject of Section 12.4. Section 12.5 defines a result specification.
Section 12.6 describes various ways such specifications can be combined. Section 12.7 describes the generator which produces the code of the verifier program given the specification. Section 12.8 contains an example and Section 12.9 presents some conclusions.
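As a minimal sketch of the approach (the record format, property and names used here are illustrative, not the chapter's actual example), a hand-written AWK verifier for a simple execution record might look like this:

```shell
# Hypothetical execution record: one operation per line, written by
# a wrapped program that acquires and releases resources.
cat > record.txt <<'EOF'
acquire r1
acquire r2
release r2
release r1
EOF

# Hand-written AWK verifier for the result specification
# "no release without a matching acquire, and every acquired
# resource is eventually released". The state variable `held`
# counts currently held resources (initial value 0); the record
# variables are the fields $1 and $2 of each record line.
awk '
  $1 == "acquire" { held++ }
  $1 == "release" {
      held--
      if (held < 0) { print "FAIL: line " NR ": release without acquire"; exit 1 }
  }
  END {
      if (held == 0) print "OK"
      else           { print "FAIL: " held " resource(s) never released"; exit 1 }
  }
' record.txt
```

A specification-to-verifier generator of the kind described above would emit code of roughly this shape automatically, from a result specification declaring `held` as a state variable together with its update rule for each operation.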
The goal of result verification is to prove that one execution run of a program satisfies its specification. Compared with implementation verification, result verification has a larger scope for practical application, offers more opportunities for automation and, being based on the execution record rather than the implementation, is particularly suitable for complex systems. This paper proposes a technical framework for applying this technique in practice. We show how to write formal result-based specifications, how to generate a verifier program that checks a given specification, and how to carry out result verification using the generated program. The execution record is written as a text file, the verifier is written in AWK (a special-purpose language for text processing) and verification is done automatically by the AWK interpreter, given the verifier and the execution record as inputs.
This volume includes abstracts of the two invited contributions and full papers of the eleven peer-reviewed contributions presented at OpenCert 2010.
This compendium consists of the papers presented at the FACS'05 workshop. It serves as the preliminary proceedings, provided for the convenience of the workshop participants; the official proceedings will be published after the workshop in Electronic Notes in Theoretical Computer Science.
Over the last decade, the importance of Free/Libre/Open Source Software (FLOSS) has increased dramatically, and the size and scope of the OSS conference within which this workshop takes place is testament to that reality. This volume contains the proceedings of a joint workshop: the first International Workshop on Foundations and Techniques bringing together Free/Libre/Open Source Software and Formal Methods (FLOSS-FM 2008) and the second International Workshop on Foundations and Techniques for Open Source Software Certification (OpenCert 2008). All submitted papers were reviewed by at least three Programme Committee members. The aim of the workshop has been to bring together researchers from academia and industry, firstly, to study the question of FLOSS certification and, secondly, to explore in the broadest possible sense how formal methods can be brought more successfully to bear on the open source phenomenon, and also how FLOSS can be brought more fully into the formal methods community. The organisers would like to take this opportunity to thank the contributors, the Programme Committee and all the referees for their interest and hard work. We leave it to the reader to harvest the seeds the participants have sown.
In 1996, Zhou Chaochen and Michael Hansen proposed a first-order interval logic called Neighbourhood Logic (NL), which can specify liveness and fairness of computing systems and can also define notions of real analysis in terms of the expanding modalities introduced therein. Here, we first show that NL admits a sound and complete proof system. Next, we extend NL by introducing two more expanding modalities, in the upward and downward directions, and propose a Two-Dimensional Neighbourhood Logic (NL$^2$), which can be used to specify super-dense computation. Finally, we prove the completeness of this new NL$^2$.
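For orientation, the two original expanding modalities of NL can be sketched as follows (a standard reading of the interval semantics; the symbols $\Diamond_l$, $\Diamond_r$ and the interval notation are the usual ones from the NL literature, not quoted from this paper):

$[a,b] \models \Diamond_l \phi \;\iff\; \exists \delta \geq 0.\ [a-\delta,\, a] \models \phi$

$[a,b] \models \Diamond_r \phi \;\iff\; \exists \delta \geq 0.\ [b,\, b+\delta] \models \phi$

That is, $\Diamond_l$ reaches left neighbourhoods of the interval's beginning point and $\Diamond_r$ reaches right neighbourhoods of its end point; NL$^2$ adds analogous modalities along a second dimension.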
Service integration is central to joined-up government initiatives and requires information on the collaborators and the services they offer, the roles of the different actors, the resources required, and their goals (individual and shared). This information is largely available in unstructured form on government portals, in publications and in other textual sources. This paper explores semantic text mining for extracting service-related information from such sources using Natural Language Processing techniques supported by Service-Oriented Process Ontologies. Our solution framework consists of the following steps: (1) creating a domain and service-oriented process ontology; (2) extracting service-related information from textual sources based on the ontology; and (3) mining relationships among the services based on the information extracted in Step 2, linked with a pre-defined hierarchy of service delivery goals specifying the objective(s) to be achieved by the orchestrated services. We describe our approach to these tasks and discuss the progress of the work, our experiences and the challenges encountered so far.
Understanding the behaviour of biological systems requires a complex setting of in vitro and in vivo experiments, which incurs high costs in terms of time and resources. The use of mathematical models allows researchers to perform computerised simulations of biological systems, called in silico experiments, to attain important insights and predictions about the system behaviour at a considerably lower cost. Computer visualisation is an important part of this approach, since it provides a realistic representation of the system behaviour. We define a formal methodology to model biological systems using different levels of representation: a purely formal representation, which we call the molecular level, models the biochemical dynamics of the system; visualisation-oriented representations, which we call visual levels, provide views of the biological system at a higher level of organisation and are equipped with the spatial information necessary to generate the appropriate visualisation. We choose Spatial CLS, a formal language belonging to the class of Calculi of Looping Sequences, as the formalism for modelling all representation levels. We illustrate our approach using the budding yeast cell cycle as a case study.
Digital libraries (DL) are starting to play a central role in traditional libraries as well as becoming an important tool for publishers to present and sell their products. A lot of research has been carried out to study usability and data representation in DL, and it has led to the development of a variety of software packages, both commercial and open source. However, very little research has focused on the formal aspects of DL. This paper describes our attempt to formally specify DL using the RAISE Specification Language (RSL). The specification methodology, inspired by the 5S Framework defined by Gonçalves et al., is kept at a fairly abstract level and aims to provide a basic model of the key DL issues (digital objects, collections, users and communities), as well as their relations within the DL framework and the implications of such relations in terms of security and human-computer interaction.
This paper describes preliminary results on the application of statistical model-checking to systems described with Stochastic CLS. Stochastic CLS is a formalism based on term rewriting that allows biomolecular systems to be described by taking their structure into account and by allowing very general events to be modelled. Statistical model-checking is an analysis technique that allows properties of a system to be checked against the results of a number of stochastic simulations. We choose Real-Time Maude as a tool that supports the modelling and analysis of systems with real-time properties, and we adapt Gillespie's algorithm for simulating chemical systems to our approach. The resulting method is applied to analyse some simple examples and a model of the lactose operon regulation in E. coli.
This article describes a framework to formally model and analyse human behaviour. We demonstrate it on a simple case study of a chocolate vending machine, which exercises many aspects of human behaviour. The case study is modelled and analysed using the Maude rewriting system. This work extends previous work by Basuki, which attempts to model interactions between human and machine and to analyse the possibility of errors occurring in those interactions. By redesigning the interface, we show that certain kinds of error can be avoided for some users. This article overcomes the limitations of Basuki's approach by incorporating many aspects of user behaviour into a single user model, and introduces a more natural approach to modelling human-computer interaction.