The overarching goal of a formal specification of design patterns is to enhance their understandability, address their composability problem and provide tool support for their reuse. Understanding a design pattern means uncovering when and how it can be applied to resolve a specific design problem. The incompleteness of existing formal specifications motivates this work. In this paper we begin by introducing a calculus for design patterns based on the rCOS (Refinement Calculus of Object and Component Systems) specification, which provides a formalization of patterns, addresses the pattern-composition mechanism, and describes a mechanism for refining designs towards pattern designs. In this first step we focus on proposing a formalization model. Thus, a model for pattern formalization is proposed, and an assessment model is used to evaluate it and to compare it with existing models, showing its strengths and limitations.
Software technology is increasingly gaining prominence in national Information Technology (IT) development strategies due to its huge potential for socio-economic development, particularly through income generation from digital services and products, support for the delivery of public services, and engagement of citizens. In recent years, developing countries such as India, Chile, the Philippines, Brazil, China and Indonesia have become increasingly important global players in the offshore software services industry, with India and China clearly leading. There has also been a surge in South-South Cooperation in Software Technology (SSC-ST) since 2003, with a significant increase in bilateral and regional cooperation, and in the contributions of UN organizations (e.g. UNCTAD, UNDP and UNU-IIST) as well as donor OECD countries (particularly Japan, South Korea and a few EU countries), to both the development of software technology capacities and their application in areas such as agriculture, e-governance, transportation and the information society in general.
An object-oriented program consists of a section of class declarations and a main method. The class declaration section represents the structure of the program, that is, the data, the classes and the relations among them. The execution of the main method realizes the application by invoking methods of objects of the classes defined in the class declarations. Class declarations define the general properties of objects and how they collaborate with each other in realizing the application task programmed as the main method. Note that for one class declaration section, different main methods can be programmed for different applications; this is an important feature of reuse in object-oriented programming. On the other hand, different class declaration sections may support the same applications, but they can make a significant difference with regard to the understandability, reuse and maintainability of the applications. With a UML-like modeling language, the class declaration section of a program is represented as a class diagram, and the instances of the class diagram are represented by object diagrams, which form the state space of the program. In this paper, we define a class diagram and its object diagrams as directed labeled graphs, and investigate which changes in the class structure maintain the capability of providing functionalities (or services). We formalize such a structure change by the notion of structure refinement. A structure refinement is a transformation from one graph to another that preserves the capability of providing services; that is, the resulting class graph should be able to provide at least as many, and as good, services (in terms of functional refinement) as the original graph. We then develop a calculus of object-oriented refinement in which the refinement rules are classified into four categories according to their nature and their use in object-oriented software design.
The soundness of the calculus is proved and the completeness of the refinement rules of each category is established with regard to normal forms defined for object-oriented programs. These completeness results show the power of the simple refinement rules. The normal forms and the completeness results together capture the essence of polymorphism, dynamic method binding and object sharing by references in object-oriented computation. Keywords: Class graph, Object graph, Graph transformation, Normal form, Object-orientation, Structure refinement.
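The refinement idea in the abstract above can be illustrated with a small sketch. This is not the paper's formalism; the graph representation, the rule name and the example classes are all invented here for illustration. A class graph is modelled as a set of labeled edges, and one classic structure refinement, pulling a common attribute up into a superclass, is checked to provide at least the services of the original graph:

```python
# Illustrative sketch (not the paper's calculus): a class graph as a set of
# labeled edges, with labels "inherits" or "attr:<name>". A structure
# refinement must let every class offer at least the attributes it had before.

def attrs_of(graph, cls):
    """All attributes cls provides, including those inherited."""
    out, seen, stack = set(), set(), [cls]
    while stack:
        c = stack.pop()
        if c in seen:
            continue
        seen.add(c)
        for (src, label, dst) in graph:
            if src == c and label.startswith("attr:"):
                out.add(label[5:])
            if src == c and label == "inherits":
                stack.append(dst)
    return out

def pull_up_attr(graph, sub_classes, attr, super_cls):
    """Refinement rule: move an attribute common to subclasses into their superclass."""
    g = [e for e in graph if not (e[0] in sub_classes and e[1] == f"attr:{attr}")]
    g.append((super_cls, f"attr:{attr}", None))
    return g

g1 = [("Student", "inherits", "Person"),
      ("Teacher", "inherits", "Person"),
      ("Student", "attr:name", None),
      ("Teacher", "attr:name", None)]
g2 = pull_up_attr(g1, {"Student", "Teacher"}, "name", "Person")

# The refined graph provides at least the services of the original:
assert attrs_of(g2, "Student") >= attrs_of(g1, "Student")
assert attrs_of(g2, "Teacher") >= attrs_of(g1, "Teacher")
```

The assertions at the end state the preservation condition from the abstract for this one rule: after the transformation, each class still provides at least as many attributes as before.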
The recovery of behaviour from its approximation over substructures is fraught with pathology. Here the extent is considered to which the behaviour of a continuous function on a locally compact Abelian group can be approximated by its behaviour on proper closed subgroups. Known results are summarised when the behaviour concerns integrability and the group is the circle; then boundedness and other limiting behaviour "at infinity" are considered for more general groups. It is shown that if a continuous function is bounded on each proper closed subgroup of a connected locally compact Abelian group then it is bounded on the whole group. As befits this Festschrift, the techniques are predominantly topological. In passing we reflect on criteria for the difficult problem of identifying "substructures" in Computer Science.
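The main boundedness result stated in the abstract can be transcribed symbolically as follows (standard notation assumed; this is a paraphrase of the prose statement, not a quotation from the paper):

```latex
% G a connected locally compact Abelian group, f : G -> C continuous.
% Boundedness on every proper closed subgroup implies global boundedness.
\[
  \Bigl( \forall H \subsetneq G \text{ closed subgroup}:\;
         \sup_{x \in H} |f(x)| < \infty \Bigr)
  \;\Longrightarrow\;
  \sup_{x \in G} |f(x)| < \infty .
\]
```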
Clinical reasoning is the cognitive process by which physicians synthesize clinical case information, integrate it with their knowledge and experience, and diagnose or manage the patient’s problem.1 Problem-based learning is a method often used to characterize medical problems as a context for students to acquire knowledge and clinical-reasoning ......
The availability of domain frameworks to enable rapid development of Electronic Public Services (EPS) is essential to meet the increasing demand for mature EPS by various government stakeholders. This paper presents a composite domain framework comprising frameworks to build the Front-Office and Back-Office parts of an EPS. The framework supports a set of domain requirements obtained through a detailed analysis of over 30 concrete public services. After presenting these requirements, the framework is described in four stages - architecture, design, implementation and instantiation - all using UML to capture the artifacts built during development. We also illustrate the application of the framework through a case study in developing an Electronic Licensing Service by means of framework instantiation. We conclude with some comments on the complexity, flexibility and performance of the framework. This work was carried out as part of the e-Macao Project to build a foundation for e-Government in Macao, funded by the Government of Macao SAR.
The complexity of software development has to be managed by dividing the system into its different aspects and views and by separating the different concerns in the design. This implies the need for different modelling notations and for tools supporting more and more phases of the entire development process. To ensure the correctness of the models produced, the tools therefore need to integrate sophisticated checkers, generators and transformations. A feasible approach to ensuring the high quality of such add-ins is to base them on sound formal foundations. This paper reports our experience in the work on the Common Component Modelling Example (CoCoME) and shows where such add-ins will fit. In particular, we show how the formal techniques developed in rCOS can be integrated into a component-based development process, and where they can be integrated into, and extend, an existing successful commercial tool by adding formally supported checking, transformation and generation modules. This is published in Formal Methods and Hybrid Real-Time Systems: Essays in Honour of Dines Bjorner and Zhou Chaochen on the Occasion of Their 70th Birthdays, Lecture Notes in Computer Science 4700. Keywords: Software development tool, software process, formal methods, tool design.
Compositional reasoning about pointers and references is crucial to the verification of contemporary software. This paper introduces a pointer logic that extends Separation Logic with a fixpoint operator and with new compositions different from separating conjunction. A higher level of abstraction can be achieved if the right compositions are used in the right places. In particular, if a relation is a full unique decomposition, then the corresponding composition decomposes any given graph uniquely into two parts; an example is the separation between the non-garbage and garbage parts of memory. The logic is proved to be largely satisfaction-decidable, in the sense that there is a finite procedure to determine whether or not a program state satisfies a formula. The main technical result of the paper is that if only full unique decompositions are used in compositions, then a rather general fragment becomes validity-decidable. The logic is axiomatized and, with the help of an infinitary inference rule, proved to be complete. Separation Logic without pointer arithmetic is shown to be a fragment of the logic.
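The garbage/non-garbage example mentioned in the abstract can be made concrete with a small sketch. This is an assumed reading, not the paper's definitions: a heap is split into the part reachable from the roots (non-garbage) and the rest (garbage), and because the split is determined by the heap and roots alone, every state decomposes uniquely, which is the defining property of a full unique decomposition:

```python
# Sketch of a full unique decomposition of a heap (illustrative reading of
# the abstract, not the paper's formal definition). The heap maps each
# location to the locations it points to.

def reachable(heap, roots):
    """Locations reachable from the roots by following pointers."""
    seen, stack = set(), list(roots)
    while stack:
        loc = stack.pop()
        if loc in seen or loc not in heap:
            continue
        seen.add(loc)
        stack.extend(heap[loc])
    return seen

def decompose(heap, roots):
    """Unique split of the heap into (non-garbage, garbage)."""
    live = reachable(heap, roots)
    non_garbage = {l: s for l, s in heap.items() if l in live}
    garbage = {l: s for l, s in heap.items() if l not in live}
    return non_garbage, garbage

heap = {1: [2], 2: [], 3: [4], 4: [3]}   # 3 and 4 form an unreachable cycle
live, dead = decompose(heap, roots=[1])
assert set(live) == {1, 2} and set(dead) == {3, 4}
```

Since `decompose` is a function of the state, no heap admits two different splits, which is what lets the corresponding logical composition decompose a given graph uniquely.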
Proceedings of the 1st International Workshop on Harnessing Theories for Tool Support in Software TTSS'07, 22-23 September 2007, Macau SAR, China
The notion of contract was introduced to component-based software development in order to facilitate the semantically correct composition of components. We extend the form of this notion which is based on designs to capture probabilistic requirements on execution time. We show how reasoning about such requirements can be done in an infinite-interval-based system of probabilistic duration calculus.
The success of model-based testing, in automating the testing of an implementation given its state-based (or model-based) specification, raises the question of how best the specification can be tweaked in order to facilitate that process. This paper discusses several answers. Motivated by an example from web-based systems, and taking account of the restriction imposed by the testing interface, it considers both functional and non-functional properties. The former include laws, implicit system invariants and other consistency conditions whilst the latter include security leaks. It concludes that because of the importance of the link between specification and implementation in the testing process, there is a trade-off between genuinely useful testing information and the incorporation of some degree of information about the link, not normally regarded as part of the specification.
Intelligent tutoring systems are no different from other knowledge-based systems in that they are often plagued by brittleness. Intelligent tutoring systems for problem solving are typically loaded with problem scenarios for which specific solutions are constructed. Solutions presented by students are compared against these specific solutions, which often leads to a narrow scope of reasoning, where students are confined to reasoning towards a specific solution. Student solutions that differ from the specific solution entertained by the system are rejected as incorrect, even though they may be acceptable or close to acceptable. This leads to brittleness in tutoring systems when evaluating student solutions and returning appropriate feedback. In this paper we discuss a few human-like attributes, in the context of robustness, that are desirable in knowledge-based systems. We then present a model of reasoning through which a tutoring system for medical problem-based learning can begin to exhibit human-like robust behavior, evaluating solutions in a broader context using UMLS and responding with hints that are mindful of the partial correctness of the student solution.
This paper introduces a new method for the synthesis of gate-level asynchronous controllers. The method is based on a user-level specification formalism, the path model, and a simple algebraic tool, the signal calculus, which in combination provide a formalism for the specification and analysis of path-model designs. The path-model view of communication is state-based, as in other asynchronous finite-state-machine methods; however, it focuses on the critical component of state machines, the paths, and ignores other non-critical free states. The signal calculus, a temporal lifting of Boolean logic, helps to formalise path-model specifications algebraically, removing much of the inadequacy of traditional tabular tools such as flow tables, whose number of cells grows exponentially with the number of inputs. The method is demonstrated on two examples.
This paper introduces a normative principle for the behaviour of contemporary computing and communication systems and considers some of its consequences. The principle, named the principle of distribution, says that in a distributed multi-agent system control resides as much as possible with the individuals constituting the system, rather than in centralised agents; and when that is infeasible, or becomes inappropriate due to environmental changes, control evolves upwards from the individuals to an appropriate intermediate level, rather than being imposed from above. The setting for the work is the dynamically changing global space resulting from ubiquitous communication. Accordingly, the paper begins by determining the characteristics of the distributed multi-agent space it spans. It then fleshes out the principle of distribution, with examples from daily life as well as from Computer Science. The case is made for the principle of distribution to work at various levels of abstraction of system behaviour: to inform the high-level discussion that ought to precede the more low-level concerns of technology, protocols and standardisation, but also to facilitate those lower levels. Of the more substantial applications of the principle of distribution, a technical example concerns the design of secure ad hoc networks of mobile devices, achievable without any form of centralised authentication or identification, but in a solely distributed manner. Here the context is how the principle can be used to provide new and provably secure protocols for genuinely ubiquitous communication. A second, more managerial, example concerns the distributed production and management of open source software, and a third investigates some pertinent questions involving the dynamic restructuring of control in distributed systems, important in times of disaster or malevolence.
Duration Calculus of Weakly Monotonic Time (WDC) is an extension of DC that allows the description of discrete processes in which several steps of computation can occur at the same time point. In this paper, we introduce Linear Occurrence Invariants (LOI), a subclass of WDC formulas, and give an algorithm based on integer programming for checking real-time automata against LOI. LOI can be used effectively to specify system requirements in several cases, including when the system is considered under the true synchrony assumption. We also extend WDC probabilistically to express dependability requirements of real-time systems, and develop techniques for checking deterministic probabilistic real-time automata against a class of probabilistic WDC formulas.
In this work, we study the formal design and verification of protocols using the CSP (Communicating Sequential Processes) formalism. The protocols concern multicast communication; we develop formal models of multicast communication protocols in CSP and then verify the models using FDR, the model checker for CSP.
The aim of this work is to develop a template for component-based programs that can be used in different programming languages. First, we give a model of component-based systems based on the Unifying Theories of Programming. We define the concepts of interface, contract, component and component combination. The definitions can be used as the basis for developing the component template. We define a contract to include method specifications, and define a component as an implementation of a contract. This implementation may require services from other components, under assumptions about the schedule that resolves conflicts over shared methods and resources in the presence of concurrency. These assumptions are expressed as interaction protocols. In our model, components are correct by design. We then give guidelines for extending the model to real-time component systems.
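The interface-contract-component layering described above can be sketched in code. The names and the pre/postcondition encoding below are illustrative assumptions, not the paper's notation: a contract pairs each method with a specification, and a component implements a provided contract while possibly requiring another:

```python
# Minimal sketch (illustrative, not the paper's model): a contract maps each
# method name to a (precondition, postcondition) pair; a component implements
# its provided contract and may name a required contract for services it uses.
from dataclasses import dataclass
from typing import Callable, Dict, Optional, Tuple

@dataclass
class Contract:
    methods: Dict[str, Tuple[Callable, Callable]]  # name -> (pre, post)

@dataclass
class Component:
    provided: Contract
    impl: Dict[str, Callable]
    required: Optional[Contract] = None

    def call(self, name, state):
        pre, post = self.provided.methods[name]
        assert pre(state), f"precondition of {name} violated"
        state2 = self.impl[name](state)
        assert post(state, state2), f"{name} does not meet its contract"
        return state2

# A bounded buffer: "put" requires free space and must grow the queue by one.
buf = Contract({"put": (lambda s: len(s["q"]) < s["cap"],
                        lambda s, s2: len(s2["q"]) == len(s["q"]) + 1)})
buffer = Component(provided=buf,
                   impl={"put": lambda s: {**s, "q": s["q"] + [0]}})

state = buffer.call("put", {"q": [], "cap": 2})
assert state["q"] == [0]
```

The component is "correct by design" in the sense sketched here: every call is checked against the contract's pre/postconditions, so an implementation that violates its specification fails at the point of the violation.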
Component-based programming is about creating application programs from prefabricated components, together with new software that provides both the glue between the components and new functionality. Models of components are required to support black-box compositionality, substitutability by a third party, and interoperability. However, the glue code and the programs designed by users of the components for new applications do not in general require these features, and they may even be designed in programming paradigms different from those of the components. In this paper, we extend the rCOS calculus of components with a model for glue programs and application programs that is different from that of components. We study the composition of a glue program with components and prove that components glued by a glue program yield a new component.
A new asynchronous pipeline is presented for the self-timed design discipline of quasi-delay insensitivity with dual-rail encoding and four-phase handshaking. The function blocks of the pipeline consist of a universal gate (presented in the appendix) which is particularly simple, comprising only the conventional NAND and NOR gates, and is supported by an equally simple development method. The simple quadgate enables optimisation techniques for completion-detection circuitry and pipeline architecture. The pipeline is shown to compare favourably with analogous designs and is demonstrated here on a selection of 32-bit adders, simulated on 10,000 pairs of random inputs.
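For readers unfamiliar with the encoding named in the abstract, the following sketch shows the standard dual-rail convention and the completion detection it enables. This is background on the general technique, not the paper's circuit or its quadgate:

```python
# Standard dual-rail convention (background sketch, not the paper's design):
# each bit travels on two wires (t, f); (1, 0) encodes 1, (0, 1) encodes 0,
# and (0, 0) is the spacer inserted between data items in four-phase
# handshaking. Completion detection ORs the two rails of every bit: the word
# is valid exactly when no bit is still in the spacer state.

def dual_rail(bit):
    return (1, 0) if bit else (0, 1)

SPACER = (0, 0)

def complete(word):
    """Completion detection over a list of dual-rail bits."""
    return all(t | f for (t, f) in word)

word = [dual_rail(b) for b in (1, 0, 1)]
assert complete(word)
assert not complete(word[:2] + [SPACER])   # one bit has not yet arrived
```

Because validity is observable per bit, a stage can detect completion without any timing assumption on the data wires, which is what makes the discipline quasi-delay insensitive.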
This survey examines the current state of South-South Cooperation in Software Technology. The results show that there has been a surge in South-South Cooperation in Software Technology (SSC-ST) since 2003, with a significant increase in regional, in addition to bilateral, cooperation. A significant increase has also occurred in the contributions of UN organizations (e.g. UNCTAD, UNDP and UNU-IIST) as well as donor OECD countries (particularly Japan, South Korea and a few EU countries) to both the development of software technology capacities and their application in areas such as agriculture, e-governance, transportation and the Information Society in general. SSC-ST in the areas of e-governance and e-learning is particularly high. However, there is a pronounced divide even among the countries of the South in the area of software technology, with India, China and a few South-East Asian countries providing almost all of the South's capacity. So far, not much inter-regional cooperation has been documented. SSC-ST in the area of Open Source Software (OSS) has been growing rapidly in the past three to five years. It has been strongly supported by UN organizations, primarily as a means to affordable access, a catalyst for capacity acquisition, and a possible way of reducing the scale of the Intellectual Property Rights (IPR) problem.