FM 2008

Accepted Papers to the Technical Symposium

Papers


Borzoo Bonakdarpour and Sandeep Kulkarni. Masking Faults while Providing Bounded-Time Phased Recovery

Carlo Alberto Furia, Matteo Rossi and Matteo Pradella. Automated Verification of Dense-Time MTL Specifications via Discrete-Time Approximation

Holger Grandy, Markus Bischof, Kurt Stenzel, Gerhard Schellhorn and Wolfgang Reif. Verification of Mondex Electronic Purses with KIV: From a Security Protocol to Verified Code

Alexander Harhurin and Judith Hartmann. Towards Consistent Specifications of Product Families

Stefan Rieger and Thomas Noll. Verifying Dynamic Pointer-Manipulating Threads

Radu Mateescu and Damien Thivolle. A Model Checking Language for Concurrent Value-Passing Systems

Thomas Gawlitza and Helmut Seidl. Precise Interval Analysis vs. Parity Games

Annabelle McIver, Carroll Morgan and Carlos Gonzalia. Proofs and refutations for probabilistic systems

Emina Torlak, Felix Chang and Daniel Jackson. Finding Minimal Unsatisfiable Cores of Declarative Specifications

Boutheina Chetali and Quang-Huy Nguyen. Industrial Use of Formal Methods for a High-Level Security Evaluation

Anna Zaks and Amir Pnueli. CoVaC: Compiler Validation by Program Analysis of the Cross-Product

Tim McComb and Graeme Smith. Introducing Objects through Refinement

Tom Ridge, Michael Norrish and Peter Sewell. A rigorous approach to networking: TCP, from implementation to protocol to service

Johan Dovland, Einar Broch Johnsen, Olaf Owe and Martin Steffen. Lazy Behavioral Subtyping

Hugo Daniel Macedo, Peter Gorm Larsen and John Fitzgerald. Incremental Development of a Distributed Real-Time Model of a Cardiac Pacing System using VDM

Irem Aktug, Mads Dam and Dilian Gurov. Provably Correct Runtime Monitoring

Olivier Ponsini and Wendelin Serwe. A Schedulerless Semantics of TLM Models Written in SystemC via Translation into Lotos

Joseph Kiniry and Daniel M. Zimmerman. Secret Ninja Formal Methods

Torben Amtoft, John Hatcliff, Edwin Rodriguez, Robby, Jonathan Hoag and David Greve. Specification and Checking of Software Contracts for Conditional Information Flow

Michael Emmi, Dimitra Giannakopoulou and Corina Pasareanu. Assume Guarantee Verification for Interface Automata

Patrice Chalin and Frédéric Rioux. JML Runtime Assertion Checking: Improved Error Reporting and Efficiency using Strong Validity

Engin Uzuncaova and Sarfraz Khurshid. Constraint Prioritization for Efficient Analysis of Declarative Models

Arsenii Rudich, Adam Darvas and Peter Müller. Checking well-formedness of pure-method specifications

Abstracts


Borzoo Bonakdarpour and Sandeep Kulkarni. Masking Faults while Providing Bounded-Time Phased Recovery

Abstract: We focus on synthesis techniques for transforming existing fault-intolerant real-time programs to fault-tolerant programs that provide phased recovery. A fault-tolerant program is one that satisfies its safety and liveness specifications as well as timing constraints in the presence of faults. We argue that in many commonly considered programs (especially in mission-critical systems), when faults occur, simple recovery to the program's normal behavior is necessary, but not sufficient. For such programs, it is necessary that recovery is accomplished in a sequence of phases ensuring that the program satisfies certain properties at each phase. In this paper, we show that, in general, synthesizing fault-tolerant real-time programs that provide bounded-time phased recovery is NP-complete. We also characterize a sufficient condition for cases where synthesizing fault-tolerant real-time programs that provide bounded-time phased recovery can be accomplished in polynomial time in the size of the input program's region graph. Our NP-completeness result also validates our conjecture from previous work on the complexity of synthesizing hard-masking fault-tolerant programs.


Carlo Alberto Furia, Matteo Rossi and Matteo Pradella. Automated Verification of Dense-Time MTL Specifications via Discrete-Time Approximation

Abstract: This paper presents a verification technique for dense-time MTL based on discretization. The technique reduces the validity problem of MTL formulas from dense to discrete time, through the notion of sampling invariance, introduced in previous work. Since the reduction is from an undecidable problem to a decidable one, the technique is necessarily incomplete, so it fails to provide conclusive answers for some formulas. The paper discusses this shortcoming and hints at how it can be mitigated in practice. The verification technique has been implemented on top of the Zot tool for discrete-time bounded validity checking; the paper also reports on in-the-small experiments with the tool, which show some results that are promising in terms of performance.


Holger Grandy, Markus Bischof, Kurt Stenzel, Gerhard Schellhorn and Wolfgang Reif. Verification of Mondex Electronic Purses with KIV: From a Security Protocol to Verified Code

Abstract: We present a verified JavaCard implementation for the Mondex Verification Challenge. This completes a series of verification efforts that we made to verify the Mondex case study, starting at abstract transaction specifications, continuing with the introduction of a security protocol, and now finally the refinement of this protocol to running source code. We show that current verification techniques and tool support are not only suitable to verify the original case study as stated in the Grand Challenge, but can also cope with extensions of it, resulting in verified and running code. The Mondex verification presented in this paper is the first one that carries security properties proven on an abstract level down to an implementation level using refinement.


Alexander Harhurin and Judith Hartmann. Towards Consistent Specifications of Product Families

Abstract: Addressing the challenges faced today during the development of multi-functional system families, we suggest a service-oriented approach to formally specifying the functionality and, in particular, the functional variability, already in the requirements engineering phase. In this paper, we precisely define the underlying concepts, such as the notion of individual services, the combination of services, inter-service dependencies, and variability. Thereby, we especially focus on establishing the consistency of the overall specification. To that end, we formally define conflicts between requirements and describe how they can be detected and resolved based on the introduced formal concepts.


Stefan Rieger and Thomas Noll. Verifying Dynamic Pointer-Manipulating Threads

Abstract: We present a novel approach to the verification of concurrent pointer-manipulating programs with dynamic thread creation and memory allocation as well as destructive updates operating on arbitrary (possibly cyclic) singly linked data structures. Correctness properties of such programs are expressed by combining a simple pointer logic for specifying heap properties with linear-time (LTL) operators for reasoning about system executions. To automatically solve the corresponding model-checking problem, which is undecidable in general, we abstract from non-interrupted sublists in the heap, resulting in a finite-state representation of the data space. We also show that the control flow of a concurrent program with unbounded thread creation can be characterized by a Petri net, making LTL model checking decidable (though not feasible in practice). In a second abstraction step we also derive a finite-state representation of the control flow, which then allows us to employ standard LTL model checking techniques.


Radu Mateescu and Damien Thivolle. A Model Checking Language for Concurrent Value-Passing Systems

Abstract: Modal mu-calculus is an expressive specification formalism for temporal properties of concurrent programs represented as Labeled Transition Systems (LTSs). However, its practical use is hampered by the complexity of the formulas, which makes the specification task difficult and error-prone. In this paper, we propose MCL (Model Checking Language), an enhancement of modal mu-calculus with high-level operators aimed at improving expressiveness and conciseness of formulas. The main MCL ingredients are parameterized fixed points, action patterns extracting data values from LTS actions, modalities on transition sequences described using extended regular expressions and programming language constructs, and an infinite looping operator specifying fairness. We also present a method for on-the-fly model checking of MCL formulas on finite LTSs, based on the local resolution of boolean equation systems, which has a linear-time complexity for alternation-free and fairness formulas. MCL is supported by the EVALUATOR 4.0 model checker developed within the CADP verification toolbox.


Thomas Gawlitza and Helmut Seidl. Precise Interval Analysis vs. Parity Games

Abstract: In [Gawlitza07], a practical algorithm for precise interval analysis is provided for which, however, no non-trivial upper complexity bound is known. Here, we present a lower bound by showing that precise interval analysis is at least as hard as computing the sets of winning positions in parity games. Our lower-bound proof relies on an encoding of parity games into systems of particular integer equations. Moreover, we present a simplification of the algorithm for integer systems from [Gawlitza07]. For the given encoding of parity games, the new algorithm provides another algorithm for parity games which is almost as efficient as the discrete strategy improvement algorithm by Vöge and Jurdziński [Voege00].


Annabelle McIver, Carroll Morgan and Carlos Gonzalia. Proofs and refutations for probabilistic systems

Abstract: We consider the issue of finding and presenting counterexamples to a claim "this Spec is implemented by that Imp", that is refinement, in the context of probabilistic systems: using a geometric interpretation of the probabilistic/demonic semantic domain we are able to encode both refinement success and refinement failure as linear satisfaction problems, which can then be analysed automatically by an SMT solver. This allows the automatic discovery, and then presentation, of counterexamples in independently and efficiently checkable form.

In many cases the counterexamples can subsequently be converted into "source level" hints for the verifier.


Emina Torlak, Felix Chang and Daniel Jackson. Finding Minimal Unsatisfiable Cores of Declarative Specifications

Abstract: Declarative specifications exhibit a variety of problems, such as inadvertently overconstrained axioms and underconstrained conjectures, that are hard to diagnose with model checking and theorem proving alone. Recycling core extraction is a new coverage analysis that pinpoints an irreducible unsatisfiable core of a declarative specification. It is based on resolution refutation proofs generated by resolution engines, such as SAT solvers and resolution theorem provers. The extraction algorithm is described, and proved correct, for a generalized specification language with a regular translation to the input logic of a resolution engine. It has been implemented for the Alloy language and evaluated on a variety of specifications, with promising results.


Boutheina Chetali and Quang-Huy Nguyen. Indutrial Use of Formal Methods for a High-Level Security Evaluation

Abstract: This paper presents an effective use of formal methods for the development and the security certification of smart card software. The approach is based on the Common Criteria methodology, which requires the use of formal methods to prove that a product implements the claimed security level. This work led to the world's first certification of a commercial Java Card(TM) product involving formal assurances at the highest level. For this certification, formal methods are used for the design and implementation of the security functions of the Java Card(TM) platform embedded on the product. We describe the refinement scheme used to meet the Common Criteria requirements for the formal models and proofs. In particular, we show how to build the proof that the implementation ensures the security objectives claimed in the security specification. We also provide lessons learned from this scalable application of formal methods in the smart card industry.


Anna Zaks and Amir Pnueli. CoVaC: Compiler Validation by Program Analysis of the Cross-Product

Abstract: The paper presents a deductive framework for proving program equivalence and its application to automatic verification of transformations performed by optimizing compilers. To leverage existing program analysis techniques, we reduce the equivalence checking problem to the analysis of one system: a cross-product of the two input programs. We show how the approach can be effectively used for checking equivalence of consonant (i.e., structurally similar) programs. Finally, we report on a prototype tool that applies the developed methodology to verify that a compiler optimization run preserves the program semantics. Unlike existing frameworks, CoVaC accommodates the absence of compiler annotations and handles most of the classical intraprocedural optimizations such as constant folding, reassociation, common subexpression elimination, code motion, dead code elimination, branch optimizations, and others.


Tim McComb and Graeme Smith. Introducing Objects through Refinement

Abstract: We present a strategy for using the existing theory of class refinement in Object-Z to introduce an arbitrary number of object instances into a specification. Since class refinement applies only to a single class, the key part of the strategy is the use of references to objects of the class being refined. Once object instances have been introduced through local class refinements in this way, they can be turned into foreign class instantiations through the application of straightforward equivalence-preserving transformations.


Tom Ridge, Michael Norrish and Peter Sewell. A rigorous approach to networking: TCP, from implementation to protocol to service

Abstract: Despite more than 30 years of research on protocol specification, the major protocols deployed in the Internet, such as TCP, are described only in informal prose RFCs and executable code. In part this is because the scale and complexity of these protocols make them challenging targets for formalization.

In this paper we show how these difficulties can be addressed. We develop a high-level specification for TCP and the Sockets API, expressed in the HOL proof assistant, describing the byte-stream service that TCP provides to users. This complements our previous low-level specification of the protocol internals, and makes it possible for the first time to state what it means for TCP to be correct: that the protocol implements the service. We define a precise abstraction function between the models and validate it by testing, using verified testing infrastructure within HOL. This is a pragmatic alternative to full proof, providing reasonable confidence at a relatively low entry cost.

Together with our previous validation of the low-level model, this shows how one can rigorously tie together concrete implementations, low-level protocol models, and specifications of the services they claim to provide, dealing with the complexity of real-world protocols throughout.


Johan Dovland, Einar Broch Johnsen, Olaf Owe and Martin Steffen. Lazy Behavioral Subtyping

Abstract: Late binding allows flexible code reuse but complicates formal reasoning significantly, as a method call's receiver is not statically known. This is especially true when programs are incrementally developed by extending class hierarchies. This paper develops a novel method to reason about late bound method calls. In contrast to traditional behavioral subtyping, reverification is avoided without restricting method overriding to behavior-preserving redefinition. The approach ensures that when analyzing the methods of a class, it suffices to consider the class and its superclasses. Thus, the full class hierarchy is not needed and incremental reasoning is supported. We formalize this approach as a calculus which lazily imposes context-dependent subtyping constraints on method definitions. The calculus ensures that all method specifications required by late bound calls remain satisfied when new classes extend a class hierarchy. The calculus does not depend on a specific program logic, but the examples in the paper use a Hoare-style proof system. We show soundness of the analysis method.


Hugo Daniel Macedo, Peter Gorm Larsen and John Fitzgerald. Incremental Development of a Distributed Real-Time Model of a Cardiac Pacing System using VDM

Abstract: The construction of formal models of real-time distributed systems is a considerable practical challenge. We propose and illustrate a pragmatic incremental approach in which detail is progressively added to abstract system-level specifications of functional and timing properties via intermediate models that express system architecture, concurrency and timing behaviour. The approach is illustrated by developing a new formal model of the cardiac pacemaker system proposed as a "grand challenge" problem in 2007. The models are expressed using the Vienna Development Method (VDM) and are validated primarily by scenario-based tests, including the analysis of timed traces. We argue that the insight gained using this staged modelling approach will be valuable in the subsequent development of implementations, and in detecting potential bottlenecks within suggested implementation architectures.


Irem Aktug, Mads Dam and Dilian Gurov. Provably Correct Runtime Monitoring

Abstract: Runtime monitoring is an established technique to enforce a wide range of program safety and security properties. We present a formalization of monitoring and monitor inlining, for the Java Virtual Machine. Monitors are security automata given in a special-purpose monitor specification language, ConSpec. The automata operate on finite or infinite strings of calls to a fixed API, allowing local dependencies on parameter values and heap content. We use a two-level class file annotation scheme to characterize two key properties: (i) that the program is correct with respect to the monitor as a constraint on allowed program behaviour, and (ii) that the program has a copy of the given monitor embedded into it, through a state abstraction function that yields state changes at prescribed points according to the monitor's transition function. As our main application of these results we present a concrete inliner, and use the annotation scheme to show its correctness. For this inliner, correctness of the level II annotations can be decided efficiently using a standard wp annotation checker, thus allowing on-device checking of inlining correctness in a proof-carrying code setting.


Olivier Ponsini and Wendelin Serwe. A Schedulerless Semantics of TLM Models Written in SystemC via Translation into Lotos

Abstract: TLM (Transaction-Level Modeling) was introduced to cope with the increasing complexity of Systems-on-Chip designs by raising the modeling level. Currently, TLM is primarily used for system-level functional testing and simulation using the SystemC C++ API widely accepted in industry. Nevertheless, TLM requires a careful handling of asynchronous concurrency. In this paper, we give a semantics to TLM models written in SystemC via a translation into the process algebra Lotos, enabling the verification of the models with the CADP toolbox dedicated to asynchronous systems. Contrary to other works on formal verification of TLM models written in SystemC, our approach targets fully asynchronous TLM without the restrictions imposed by the SystemC simulation semantics. We argue that this approach leads to more dependable models.


Joseph Kiniry and Daniel M. Zimmerman. Secret Ninja Formal Methods

Abstract: The use of formal methods can significantly improve software quality. However, many instructors and students consider formal methods to be too difficult, impractical, and esoteric for use in undergraduate classes. This paper describes a method, used successfully at several universities, that combines ninja stealth with the latest advances in formal methods tools and technologies to integrate applied formal methods into software engineering courses.


Torben Amtoft, John Hatcliff, Edwin Rodriguez, Robby, Jonathan Hoag and David Greve. Specification and Checking of Software Contracts for Conditional Information Flow

Abstract: Information assurance applications built according to the MILS (Multiple Independent Levels of Security) architecture often contain information flow policies that are conditional in the sense that data is allowed to flow between system components only when the system satisfies certain state predicates. However, existing specification and verification environments, such as SPARK Ada, used to develop MILS applications can only capture unconditional information flows. Motivated by the need to better formally specify and certify MILS applications in industrial contexts, we present an enhancement of the SPARK information flow annotation language that enables specification, inference, and compositional checking of conditional information flow contracts. We report on the use of this framework for a collection of SPARK examples.


Michael Emmi, Dimitra Giannakopoulou and Corina Pasareanu. Assume Guarantee Verification for Interface Automata

Abstract: Interface automata provide a formalism capturing the high level interactions between software components. Checking compatibility, and other safety properties, in an automata-based system suffers from the scalability issues inherent in exhaustive techniques such as model checking. This work develops a theoretical framework and automated algorithms for modular verification of interface automata. We propose sound and complete assume-guarantee rules for interface automata, and learning-based algorithms to automate assumption generation. Our algorithms have been implemented in a practical model-checking tool and have been applied to a realistic NASA case study.


Patrice Chalin and Frédéric Rioux. JML Runtime Assertion Checking: Improved Error Reporting and Efficiency using Strong Validity

Abstract: The Java Modeling Language (JML) recently switched to an assertion semantics based on "strong validity" in which an assertion is taken to be valid precisely when it is defined and true. Elsewhere we have shared our positive experiences with the realization and use of this new semantics in the context of ESC/Java2. In this paper, we describe the challenges faced by and the redesign required for the implementation of the new semantics in the JML Runtime Assertion Checker (RAC) compiler. Not only is the new semantics effective at helping developers identify formerly undetected errors in specifications, we also demonstrate how the realization of the new semantics is more efficient, resulting in more compact instrumented code that runs slightly faster. More importantly, under the new semantics, the JML RAC can now compile sizeable JML-annotated Java programs (like ESC/Java2) which it was unable to compile before.


Engin Uzuncaova and Sarfraz Khurshid. Constraint Prioritization for Efficient Analysis of Declarative Models

Abstract: The declarative modeling language Alloy and its automatic analyzer provide an effective tool-set for building designs of systems and checking their properties. The Alloy Analyzer performs bounded exhaustive analysis using off-the-shelf SAT solvers. The analyzer's performance hinges on the complexity of the models and so far, its feasibility has been shown only within limited bounds. We present a novel technique that defines program slicing for declarative models and enables efficient analyses exploiting partial solutions. A given model is partitioned into base and derived slices: a satisfying solution to the base slice can be systematically extended to generate a solution for the entire model, while unsatisfiability of the base implies unsatisfiability of the entire model.

To identify a proper partitioning, we introduce an approach based on constraint prioritization. By analyzing the structure of a given model, we construct a set of candidate slicing criteria. Our fully automatic tool performs a small-scope analysis on each criterion to find whether declarative slicing optimization provides any performance gain and, if so, which slicing criterion to use for that particular model. The experimental results show that it is possible to achieve significant improvements in the solving time.


Arsenii Rudich, Adam Darvas and Peter Müller. Checking well-formedness of pure-method specifications

Abstract: Contract languages such as Eiffel, JML, and Spec# specify invariants and pre- and postconditions using side-effect free expressions of the programming language, in particular, pure methods. For such contracts to be meaningful, they must be well-formed. First, they must respect the partiality of operations, for instance, the preconditions of pure methods used in the contract. Second, they must enable a consistent encoding of pure methods in a program logic, which requires that their specifications are satisfiable and that recursive specifications are well-founded.

This paper presents a technique to check well-formedness of contracts. We give proof obligations that are sufficient to guarantee the existence of a model for the specification of pure methods. We improve over earlier work by providing a systematic solution including a soundness result and by supporting more forms of recursive specifications. Our technique has been implemented in the Spec# programming system.