The workshop takes place from June 23 to June 27 at the Institut Henri Poincaré.
Semantics and Verification are neighbouring branches of Computer Science which have traditionally developed separately. While both are motivated by the question of program correctness, they approach it from different angles.
There are, however, unifying ideas such as the power of abstraction, and both fields are firmly rooted in logical methods.
In recent years, there have been developments that bring the two approaches closer to each other, and sometimes even combine ideas and methods from both branches.
This workshop aims to foster dialogue between the two communities. It will be organised around invited talks with plenty of time for discussions.
| Monday 23 June | | |
| --- | --- | --- |
| 9h30-10h30 | Registration and Introduction | |
| 11h00-12h00 | Martin Hofmann | Abstract Interpretation from Büchi Automata |
| 14h30-15h30 | Olivier Serre | What are collapsible pushdown automata good for? |
| 16h00-17h00 | Sylvain Salvati | Models for model checking higher-order programs |

| Tuesday 24 June | | |
| --- | --- | --- |
| 9h30-10h30 | Jens Palsberg | Efficient Inference of Packed Locality Types |
| 11h00-12h00 | Giuseppe Castagna | Polymorphic Functions with Set-Theoretic Types |
| 14h30-15h30 | Ranjit Jhala | Liquid Types For Haskell |
| 16h00-17h00 | Jakob Rehof | Types as Programs: Foundations of Combinatory Logic Synthesis |
| from 18h | Cocktail at the Institut Henri Poincaré | |

| Wednesday 25 June | | |
| --- | --- | --- |
| 9h30-10h30 | Andrey Rybalchenko | Solving (quantified) Horn constraints for program verification and synthesis |
| 11h00-12h00 | Helmut Seidl | Efficiently intertwining widening and narrowing |

| Thursday 26 June | | |
| --- | --- | --- |
| 9h30-10h30 | Colin Stirling | Deciding equivalence using type checking |
| 11h00-12h00 | Kazushige Terui | On applications of models of linear logic |
| 14h30-15h30 | Paul-André Melliès | Linear logic and higher-order model-checking |
| 16h00-17h00 | Pierre Clairambault | Strategies as Higher-Order Recursion Schemes |

| Friday 27 June | | |
| --- | --- | --- |
| 9h30-10h30 | Rupak Majumdar | Abstractions in the Continuous World |
| 11h00-12h00 | Mikolaj Bojanczyk | Computing with atoms |
Mikolaj Bojanczyk
Computing with atoms
Sets with atoms (also known as permutation models, or Fraenkel-Mostowski sets) were introduced in set theory by Fraenkel in 1922, and further developed by Mostowski. In the last decade, they were rediscovered by the computer science community, where they are called nominal sets.
This talk is about a research programme that studies a notion of finiteness which only makes sense in sets with atoms, called "orbit-finiteness". The research programme is to see what happens to discrete mathematics when sets are replaced by sets with atoms, and finiteness is replaced by orbit-finiteness. Two examples of this research programme, motivated by computer science, are orbit-finite versions of programming languages and of Turing machines.
Programming languages. In sets with atoms, the set of rational numbers with their order is an orbit-finite set. Therefore, in a programming language for sets with atoms, one can write the following program, which exhaustively searches through all triples of rational numbers for a counterexample to the transitivity of the order:
```
counterexample := false
for x in Q do
  for y in Q do
    for z in Q do
      if (x < y) and (y < z) and not (x < z) then
        counterexample := true
```
This program will be executed in finite time, and will never set the variable counterexample to true.
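To see concretely why only finitely many cases need to be examined (a minimal sketch in Haskell, not part of the talk): an automorphism of (Q, <) maps any triple of rationals to any other triple with the same order/equality pattern, so the triples fall into finitely many orbits, and it suffices to test one representative per orbit.

```haskell
-- Transitivity of < on Q, checked on one representative per orbit.
-- Triples over {1, 2, 3} realise every order/equality pattern of
-- three rational numbers.
representatives :: [(Rational, Rational, Rational)]
representatives = [ (x, y, z) | x <- vals, y <- vals, z <- vals ]
  where vals = [1, 2, 3]

counterexample :: Bool
counterexample =
  or [ x < y && y < z && not (x < z) | (x, y, z) <- representatives ]
-- counterexample evaluates to False: the order is transitive.
```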
When Turing machines with orbit-finite state spaces and alphabets are considered, the computability landscape becomes richer, in the sense that there are fewer equivalences between models. For instance, P is provably different from NP (although this result is unlikely to shed any light on the real P vs NP problem). Also, nondeterministic decidability does not imply deterministic decidability (unlike for normal Turing machines, where the two notions coincide).
Giuseppe Castagna
Polymorphic Functions with Set-Theoretic Types
In set-theoretic type systems, types are interpreted as sets of values; unions, intersections and negations of types as the corresponding set-theoretic operations; and the subtyping relation as set containment.
Set-theoretic types make it possible to give very precise types to operations on standard functional data structures, and they form the basis of type systems for languages that manipulate XML documents.
In this talk we show how to add polymorphism to set-theoretic type systems. More precisely, we present a calculus with higher-order polymorphic functions, recursive types with arrow and product type constructors, and set-theoretic type connectives (union, intersection, and negation). First, we define the explicitly-typed version of the calculus, in which type instantiation is driven by explicit instantiation annotations, and study an efficient evaluation model for it. In the second part, we define a local type inference system that allows the programmer to omit explicit instantiation annotations, and a type reconstruction system that allows the programmer to omit explicit type annotations. This work provides the theoretical foundations and technical machinery needed to design and implement higher-order polymorphic functional languages for semi-structured data.
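As a flavour of what the set-theoretic interpretation buys (a standard example from the semantic-subtyping literature, not necessarily one used in the talk): a function that maps integers to integers and booleans to booleans can safely be applied to any argument in the union of the two types, which is reflected by the subtyping judgement

```latex
(\mathtt{Int} \to \mathtt{Int}) \wedge (\mathtt{Bool} \to \mathtt{Bool})
  \;\leq\; (\mathtt{Int} \vee \mathtt{Bool}) \to (\mathtt{Int} \vee \mathtt{Bool})
```

which holds because every function belonging to both arrow types, viewed as a set of input-output pairs, maps the union of the domains into the union of the codomains.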
Pierre Clairambault
Strategies as Higher-Order Recursion Schemes
Higher-Order Recursion Schemes (HORS) are abstractions of higher-order functional programs. But they naturally represent programs whose interaction with the runtime environment is very restricted: it consists essentially of first-order function calls, for instance accesses to files or other resources. However, reasoning compositionally about programs requires another abstract representation, in which programs can communicate with their environment via higher-order interfaces. Game semantics provide such a representation: the interaction of a program with its environment through a higher-order interface is seen as a game, and the program itself as a strategy on this game. This representation has to take into account the various computational effects that the evaluation context can have access to: notions of state, exceptions, control operators, and so on.
In this talk, I will recall a few results on the representation of higher-order terms with respect to notions of observation. I will then show that for higher-order purely functional terms observed by effectful contexts, the strategies can be represented as HORS, thus reducing the equivalence problem for strategies to the equivalence of HORS.
Ranjit Jhala
Liquid Types For Haskell
We present LiquidHaskell, an automatic verifier for Haskell. LiquidHaskell uses "refinement types", a restricted form of dependent types where relationships between values are encoded by decorating types with logical predicates drawn from an efficiently SMT-decidable theory (of arithmetic and uninterpreted functions). In this talk, we will describe the key ingredients of LiquidHaskell.
First, we will present a rapid overview of liquid refinement types, including SMT-solver-based (decidable) subtyping and inference. Decidability is achieved by eschewing both the use of arbitrary terms inside types and the use of indices to encode rich properties of data.
Second, we will show how to recover some of the expressiveness lost by restricting the logic, with two new techniques: measures, which encode structural properties of values, and abstract refinements, which enable generalization (i.e. quantification) over refinements.
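To give a flavour of these ingredients, here is a small sketch in the style of the LiquidHaskell tutorials (a hedged illustration; annotation syntax may vary across versions of the tool). The refinement type Nat decorates Int with a logical predicate, and the measure len lifts a structural property of lists into the refinement logic:

```haskell
-- A refinement type: the integers v satisfying 0 <= v.
{-@ type Nat = {v:Int | 0 <= v} @-}

{-@ abs' :: Int -> Nat @-}
abs' :: Int -> Int
abs' x = if x < 0 then 0 - x else x

-- A measure: a structural property of lists, usable inside refinements.
{-@ measure len @-}
len :: [a] -> Int
len []       = 0
len (_ : xs) = 1 + len xs

-- head' may only be applied to lists whose len is positive; the SMT
-- solver discharges the obligation at each call site.
{-@ head' :: {v:[a] | 0 < len v} -> a @-}
head' :: [a] -> a
head' (x : _) = x
```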
Third, we will discuss the curious interaction of laziness and refinement typing. In a nutshell, the technique of refinement typing can be viewed as a type-based generalization of Floyd-Hoare logics. Surprisingly, we demonstrate that under non-strict evaluation these logics (and hence classical refinement typing) are unsound, due to the presence of potentially divergent sub-computations. Fortunately, we show how soundness can be recovered with a termination analysis, itself circularly bootstrapped off refinement typing.
We have used LiquidHaskell to verify safety, functional correctness and termination properties of real-world Haskell libraries totalling more than 10,000 lines of code. Time permitting, we will present a demonstration of the tool and a few short case studies illustrating its use.
(Joint work with Niki Vazou, Eric Seidel, and Patrick Rondon.)
Martin Hofmann
Abstract Interpretation from Büchi Automata
We describe the construction of an abstract lattice from a given Büchi automaton. The abstract lattice is finite and has the following key properties. (i) There is a Galois connection between it and the (infinite) lattice of languages of finite and infinite words over a given alphabet. (ii) The abstraction is faithful with respect to acceptance by the automaton. (iii) Least fixpoints and omega-iterations (but not, in general, greatest fixpoints) can be computed on the level of the abstract lattice.
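To convey the flavour of such an abstraction in the simpler setting of finite words (a sketch of the general idea, not of the talk's actual construction): a finite word can be abstracted by its behaviour on the automaton, the set of state pairs (p, q) such that the automaton can move from p to q while reading it. These abstract values form a finite lattice, abstraction is compositional, and acceptance of a finite word depends only on its behaviour; handling infinite words and omega-iteration, as in the talk, additionally requires tracking information such as visits to accepting states.

```haskell
import           Data.Set (Set)
import qualified Data.Set as Set

type State     = Int
type Behaviour = Set (State, State)  -- abstract value of a finite word

-- Behaviour of the empty word: the identity relation on states.
identityB :: [State] -> Behaviour
identityB qs = Set.fromList [ (q, q) | q <- qs ]

-- Relational composition: the abstraction of word concatenation.
composeB :: Behaviour -> Behaviour -> Behaviour
composeB r s = Set.fromList
  [ (p, q) | (p, m) <- Set.toList r, (m', q) <- Set.toList s, m == m' ]

-- alpha delta qs w: the behaviour of word w over transition table delta.
alpha :: [(State, Char, State)] -> [State] -> String -> Behaviour
alpha delta qs = foldl (\b c -> composeB b (letter c)) (identityB qs)
  where letter c = Set.fromList [ (p, q) | (p, x, q) <- delta, x == c ]
```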
This allows one to develop an abstract interpretation capable of checking whether finite and infinite traces of a (recursive) program are accepted by a policy automaton. It is also possible to cast this analysis in form of a type and effect system with the effects being elements of the abstract lattice.
While the resulting decidability and complexity results are known (regular model checking for pushdown systems), the abstract lattice provides a new point of view and enables smooth integration with data types, objects, and higher-order functions, which are best handled with abstract interpretation or type systems.
We demonstrate this by generalising our type-and-effect systems to object-oriented programs and higher-order functions.
(Joint work with Wei Chen, Edinburgh)
Rupak Majumdar
Abstractions in the Continuous World
The notion of abstraction plays a crucial role in static analysis of discrete structures. In this talk, I will show how abstraction can also be applied to the analysis of continuous-time dynamical systems. In this setting, we generalize classical notions of system equivalence to approximate equivalences or metrics over states. We show the existence of finite abstractions under certain assumptions on the continuous dynamics. The abstractions then reduce reasoning about continuous systems to reasoning about their discrete quotients.
Paul-André Melliès
Linear logic and higher-order model-checking
The main purpose of this talk will be to discuss a series of recent connections between linear logic and higher-order model checking. I will start by explaining how a careful inspection of the work by Kobayashi and Ong on higher-order model checking reveals that priorities (or colors) behave there in just the same comonadic way as the exponential modality of linear logic. Then, adding to this observation the elegant correspondence established ten years ago by Bucciarelli and Ehrhard between indexed linear logic and the relational semantics of linear logic, I will explain how to construct a semantic and modular account of colors in higher-order model-checking. This is joint work with Charles Grellois.
Jens Palsberg
Efficient Inference of Packed Locality Types
In a distributed program, local access is much faster than remote access. To help programmers, some distributed languages like X10 require all accesses to be local and enable programs to shift the place of computation to gain access to remote data. In response, researchers have presented static analyses and type systems that determine conservatively whether all accesses are local. Those approaches can distinguish between local and remote data, and they can track place correlations between objects. Yet they cannot handle the Multi Place Idiom for safe use of a field that over time holds objects from different places. We present a type system with packed locality types, which are a light-weight form of existential types that enable us to type check the Multi Place Idiom. A well-typed program performs only local accesses and can shift the place of computation. We present an O(n^4)-time type inference algorithm that enables programmers to type check annotation-free programs, and we prove that the inference problem is P-complete. Our example language is a small extension of the Abadi-Cardelli object calculus with three constructs for distributed programming. We have implemented our algorithm and run it on a suite of challenge programs. (Joint work with Riyaz Haque.)
Jakob Rehof
Types as Programs: Foundations of Combinatory Logic Synthesis
Combinatory logic synthesis has been proposed recently as a type-theoretic research programme in automatic synthesis of compositions from collections of components. Composition synthesis is based on the idea that the inhabitation (provability) relation in combinatory logic can be used as a logical foundation for synthesizing expressions satisfying a goal type (specification) relative to a given collection of components exposed as a combinatory type environment.
It is shown that, under the semantics of relativized inhabitation, simple types already constitute a Turing-complete logic programming language for computing (synthesizing) compositions, where collections of combinatory types are programs and types are rules in such programs. In order to enhance the ability to express semantic specifications, we introduce intersection types into composition synthesis, and we survey recent results on expressive power, algorithmics, and complexity. It is shown that modal types lead to a natural foundation for introducing meta-programming combinators, resulting in a highly flexible framework for composition synthesis. Based on a prototype implementation of the framework, (CL)S (Combinatory Logic Synthesizer), we will illustrate the approach with practical examples as time permits. We will end the talk with a survey of open problems and future directions.
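To make the "types as programs" reading concrete, here is a toy, depth-bounded inhabitation search for monomorphic simple types (a hypothetical sketch; the talk's framework handles relativized inhabitation with polymorphic, intersection, and modal types). Each combinator's type acts as a rule, and synthesizing an inhabitant of the goal type is a logic-programming-style backward search:

```haskell
data Ty = Base String | Ty :-> Ty deriving (Eq, Show)
infixr 5 :->

data Term = Var String | App Term Term deriving Show

-- Split an arrow type into its argument types and final result type.
unfold :: Ty -> ([Ty], Ty)
unfold (a :-> b) = let (as, r) = unfold b in (a : as, r)
unfold t         = ([], t)

-- inhabit env d goal: all applicative combinations of combinators from
-- env that inhabit goal, with nesting depth at most d.
inhabit :: [(String, Ty)] -> Int -> Ty -> [Term]
inhabit _   0 _    = []
inhabit env d goal =
  [ foldl App (Var c) args
  | (c, ty) <- env
  , let (argTys, res) = unfold ty
  , res == goal
  , args <- mapM (inhabit env (d - 1)) argTys
  ]
```

For instance, with env = [("f", Base "A" :-> Base "B"), ("a", Base "A")], the call inhabit env 2 (Base "B") synthesizes the composition App (Var "f") (Var "a").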
Andrey Rybalchenko
Solving (quantified) Horn constraints for program verification and synthesis
We show how automatic tools for the verification of linear- and branching-time properties of procedural, multi-threaded, and functional programs, as well as program synthesis, can be naturally and uniformly seen as solvers of constraints in the form of (quantified) Horn clauses over background logical theories. Such a perspective can offer various advantages, e.g., a logical separation of concerns between constraint generation (also known as generation of proof obligations) and constraint solving (also known as proof discovery), reuse of solvers across different verification tasks, and liberation of proof designers from low-level algorithmic concerns and vice versa. A growing number of verification tools for, or based on, Horn clause solving are currently emerging. In this talk, we will represent verification problems as solving of (quantified) Horn constraints, and show how to solve such constraints using state-of-the-art methods including satisfiability modulo theories (SMT) decision procedures, counterexample-guided abstraction refinement (CEGAR), and generalizations of Craig interpolation.
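As a textbook-style illustration (not an example from the talk), consider the hypothetical loop x := 0; while x < n do x := x + 1; assert x = n, under the assumption n ≥ 0. Its partial correctness amounts to the solvability of the Horn clauses below for the unknown predicate Inv; the interpretation Inv(x, n) ≡ x ≤ n is a solution, and it is exactly the loop invariant a verifier needs to discover.

```latex
\begin{align*}
x = 0 \land n \geq 0 &\;\rightarrow\; \mathit{Inv}(x, n)\\
\mathit{Inv}(x, n) \land x < n \land x' = x + 1 &\;\rightarrow\; \mathit{Inv}(x', n)\\
\mathit{Inv}(x, n) \land x \geq n &\;\rightarrow\; x = n
\end{align*}
```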
Sylvain Salvati
Models for model checking higher-order programs
Model checking methods are often applied to schemes of programs, i.e. programs where the meaning of basic operations is abstracted away. These methods consist in verifying whether the (possibly infinite) tree generated by a program scheme is accepted by a finite-state automaton of a certain kind. In the context of model checking higher-order simply typed program schemes, the algorithms that are implemented to verify whether these automata accept the infinite tree generated by a scheme are mostly based on intersection type systems. Nevertheless, these type systems can, in most cases, be seen as dual to models of the lambda-Y-calculus, in the view coined "models in logical form" by Abramsky.

From the point of view of models, the question of model checking becomes a question about computing the semantic values of programs. The finite nature of the models involved immediately yields the decidability of the model-checking problems, and the problem of finding efficient model-checking algorithms can be seen as that of efficiently evaluating semantic properties of programs. These remarks motivate the study of classes of models and the logical properties they can express on the output of higher-order program schemes.

We first show that the class of monotone models precisely captures boolean combinations of automata with trivial acceptance conditions. Building on this result, we construct a class of more subtle models that capture weak alternating parity automata, i.e. weak Monadic Second Order properties. Interestingly, these models present an internal duality that gives rise to two dual type systems when following the view of models in logical form.
Helmut Seidl
Efficiently intertwining widening and narrowing
Non-trivial analysis problems require computing post-solutions of equations over posets with infinite ascending and descending chains. Reasonably small post-solutions can still be computed by accelerated fixpoint iteration through widening and narrowing. The strict separation into phases, however, may unnecessarily give up precision. While widening is also applicable if equations are non-monotonic, this is no longer the case for narrowing.
Monotonicity, though, may be violated by equation systems corresponding to interprocedural analysis. As a remedy, we present a novel operator that combines widening with narrowing. We present new solvers for the combined operator which always return sound results, and we explore when termination can be guaranteed.
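To recall the classical phase-separated operators that the combined operator replaces, here is a minimal sketch on the interval domain (standard textbook definitions, not the talk's new operator): widening jumps unstable bounds to infinity so that the ascending phase terminates, and narrowing then improves only those infinite bounds.

```haskell
-- Interval bounds ordered NegInf < Fin n < PosInf.
data Bound    = NegInf | Fin Integer | PosInf deriving (Eq, Ord, Show)
data Interval = Interval Bound Bound deriving (Eq, Show)

-- Widening: keep stable bounds, push unstable ones to infinity.
widen :: Interval -> Interval -> Interval
widen (Interval l1 u1) (Interval l2 u2) =
  Interval (if l2 < l1 then NegInf else l1)
           (if u2 > u1 then PosInf else u1)

-- Narrowing: refine only the bounds that widening pushed to infinity.
narrow :: Interval -> Interval -> Interval
narrow (Interval l1 u1) (Interval l2 u2) =
  Interval (if l1 == NegInf then l2 else l1)
           (if u1 == PosInf then u2 else u1)
```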
Olivier Serre
What are collapsible pushdown automata good for?
An old model of computation, recursion schemes were originally designed as a canonical programming calculus for studying program transformation and control structures. In recent years, higher-order recursion schemes have received much attention as a method of constructing rich and robust classes of infinite trees with strong algorithmic properties.
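For instance, the order-1 scheme S = F c, F x = a x (F (b x)) can be written directly as a lazy Haskell program whose value is the infinite tree the scheme generates (a minimal illustration, not an example from the talk):

```haskell
-- Tree constructors a, b, c of the scheme.
data Tree = A Tree Tree | B Tree | C

-- F x = a x (F (b x))
f :: Tree -> Tree
f x = A x (f (B x))

-- S = F c: the infinite tree  a c (a (b c) (a (b (b c)) ...))
s :: Tree
s = f C
```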
In the last ten years, there have been several approaches to dealing with higher-order recursion schemes: game semantics and the concept of traversals, collapsible pushdown automata, intersection types, and the lambda-Y calculus with the Krivine machine.
In this talk, I will focus on the collapsible pushdown automata approach: I will start by giving their definition and examples, and explain their relation with higher-order recursion schemes. I will then discuss specific problems on collapsible pushdown automata arising from games (decidability, winning regions, and winning strategies), and I will explain how these results can be used to obtain results on recursion schemes (model-checking, global model-checking, and synthesis).
Finally, I will briefly present the C-SHORe tool, which automatically checks termination (or related properties) of functional programs using, as its core method, a saturation technique on collapsible pushdown automata.
Colin Stirling
Deciding equivalence using type checking
There has been recent work showing that model checking of higher-order systems can be reduced to type checking. In this talk I examine reducing equivalence checking to type checking. As a practical example, I show that it offers an elementary decision procedure for language equivalence of real-time strict deterministic languages.
Kazushige Terui
On applications of models of linear logic
There is a tight connection between models of linear logic and intersection type systems, and the latter often give us useful information about programs. For instance, the relational semantics is connected to a system of non-idempotent intersection types, which has been used to estimate the length of reduction sequences in the lambda calculus. Another example is the Scott model of linear logic, which corresponds to the essential type assignment system of van Bakel. This system (and its variants) has been applied to fast evaluation in the simply typed lambda calculus, and also to the verification of higher-order recursion schemes.
This connection could perhaps be generalized: any REL-like model (e.g. coherent semantics) of linear logic leads to an intersection-like type system. While it would be interesting to develop a general theory, we are rather interested in finding applications of such models and type systems. In this talk, we will discuss our ongoing work in this direction.