
This Dokuwiki uses a theme created by Anymorphic Webdesign.


From June 23 to June 27, at the Institut Henri Poincaré.

Semantics and Verification are neighbouring branches of Computer Science which have traditionally developed separately. While both are motivated by the question of program correctness, they approach it from different angles.

- Semantics is concerned with building faithful models of programs in order to study program equivalence and refinement. Compositionality plays a prominent role, and category theory is often used to obtain sophisticated models.

- Verification is concerned with, inter alia, the model-checking question: for a correctness property at hand, an abstraction of a program is constructed that should be accurate enough for deciding whether the property holds. Models tend to be much simpler in this case, and the emphasis is on algorithmics.

There are, however, unifying ideas such as the power of abstraction, and both fields are firmly rooted in logical methods.

In recent years, there have been developments that bring the two approaches closer to each other, and sometimes even combine ideas and methods from both branches. Examples include the following:

- abstraction: abstract interpretation, abstraction refinement, etc.
- automata over data words and trees
- decision problems and complexity in lambda calculus and type theory; higher-order matching
- games in semantics vs games in verification
- higher-order model checking
- nominal computation
- Petri nets and pi-calculus
- type systems for program analysis and verification

This workshop aims to foster dialogue between the two communities. It will be organised around invited talks, with plenty of time for discussion. The invited speakers are:

- Mikolaj Bojanczyk (Warsaw, Poland)
- Arnaud Carayol (CNRS & Université Paris-Est Marne-la-Vallée, France)
- Pierre Clairambault (Lyon, France)
- Giuseppe Castagna (CNRS & Universite Paris Diderot, France)
- Martin Hofmann (LMU, Germany)
- Ranjit Jhala (UC San Diego, USA)
- Rupak Majumdar (MPI, Germany)
- Paul-Andre Mellies (CNRS & Universite Paris Diderot, France)
- Jens Palsberg (UC Los Angeles, USA)
- Jakob Rehof (Dortmund, Germany)
- Andrey Rybalchenko (MSR Cambridge, UK)
- Sylvain Salvati (Bordeaux, France)
- Helmut Seidl (TUM, Germany)
- Olivier Serre (CNRS & Universite Paris Diderot, France)
- Colin Stirling (Edinburgh, UK)
- Kazushige Terui (Kyoto, Japan)

**Mikolaj Bojanczyk**

*Computing with atoms*

Sets with atoms (also known as permutation models, or Fraenkel-Mostowski sets) were introduced in set theory by Fraenkel in 1922, and further developed by Mostowski. In the last decade, they were rediscovered by the computer science community, where they are called nominal sets.

This talk is about a research programme which studies a notion of finiteness that only makes sense in sets with atoms, called “orbit-finiteness”. The research programme is to see what happens to discrete mathematics when sets are replaced by sets with atoms, and finiteness is replaced by orbit-finiteness. Two examples of this research programme, motivated by computer science, are orbit-finite versions of programming languages and of Turing machines.

Programming languages. In sets with atoms, the set of rational numbers with their order is an orbit-finite set. Therefore, in a programming language for sets with atoms, one can write the following program, which exhaustively searches through all triples of rational numbers for a counter-example to the transitivity of the order:

  counterexample := false
  for x in Q do
    for y in Q do
      for z in Q do
        if (x < y) and (y < z) and not (x < z) then
          counterexample := true

This program will execute in finite time, and will never set the variable counterexample to true.
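The search cannot be run literally over the infinite set Q, but orbit-finiteness is exactly what makes it effective: a triple of rationals is determined, up to order-automorphism, by its order type, so checking finitely many representatives decides the property for all of Q. A minimal Python sketch of this idea (the representatives and names are illustrative; this is not the actual atom-level programming language):

```python
from itertools import product

def transitivity_counterexample():
    # A triple of rationals is determined up to order-automorphism
    # by its order type, and every order type of a triple is
    # realised over a 3-element chain.  This finite loop therefore
    # stands in for the orbit-finite loop over all of Q x Q x Q.
    counterexample = False
    for x, y, z in product(range(3), repeat=3):
        if x < y and y < z and not x < z:
            counterexample = True
    return counterexample

print(transitivity_counterexample())  # prints False
```

The loop terminates and never sets the flag, mirroring the claim that the atom-level program runs in finite time and finds no counter-example.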

When Turing machines with orbit-finite state spaces and alphabets are considered, the computability landscape becomes richer, in the sense that there are fewer equivalences between models. For instance, P is provably different from NP (although this result is unlikely to shed any light on the real P vs NP problem). Also, nondeterministic decidability does not imply deterministic decidability (unlike for normal Turing machines, where the two notions coincide).

**Giuseppe Castagna**

*Polymorphic Functions with Set-Theoretic Types*

We present a calculus with higher-order polymorphic functions, recursive types with arrow and product type constructors and set-theoretic type connectives (union, intersection, and negation). In a first part we define and study the explicitly-typed version of the calculus in which type instantiation is driven by explicit instantiation annotations. In particular, we define an explicitly-typed λ-calculus with intersection types and an efficient evaluation model for it. In the second part, we define a local type inference system that allows the programmer to omit explicit instantiation annotations, and a type reconstruction system that allows the programmer to omit explicit type annotations. This work provides the theoretical foundations and technical machinery needed to design and implement higher-order polymorphic functional languages for semi-structured data.
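The semantic core of set-theoretic types can be pictured by interpreting types as sets of values, with the connectives as the corresponding set operations and subtyping as containment. A hypothetical Python sketch over a tiny finite universe (the real calculus also handles arrows, products, and recursive types, which require a more delicate interpretation):

```python
# Values are tagged so that, e.g., the integer 1 and the boolean
# True remain distinct elements of the universe.
UNIVERSE = ({('int', n) for n in range(-3, 4)}
            | {('bool', b) for b in (True, False)})

INT  = {v for v in UNIVERSE if v[0] == 'int'}
BOOL = {v for v in UNIVERSE if v[0] == 'bool'}
POS  = {v for v in INT if v[1] > 0}

def union(a, b):   return a | b          # type connective: union
def inter(a, b):   return a & b          # type connective: intersection
def neg(a):        return UNIVERSE - a   # type connective: negation
def subtype(a, b): return a <= b         # subtyping = set containment
```

Under this reading, laws such as INT ∧ ¬POS ≤ INT hold simply because the corresponding set inclusions hold.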

**Pierre Clairambault**

*Strategies as Higher-Order Recursion Schemes*

Higher-Order Recursion Schemes (HORS) are abstractions of higher-order functional programs. But they naturally represent programs whose interaction with the runtime environment is very restricted: it consists essentially in first-order function calls, for instance accesses to files or other resources. However, reasoning compositionally on programs requires another abstract representation, where programs can communicate through their environment via higher-order interfaces. Game semantics provide such a representation: the interaction of a program with its environment through a higher-order interface is seen as a game, and the program itself as a strategy on this game. This representation has to take into account the various computational effects that the evaluation context can have access to: various notions of state, exceptions, control operators,…

In this talk, I will recall a few results on the representation of higher-order terms with respect to notions of observation. I will then show that for higher-order purely functional terms observed by effectful contexts the strategies can be represented as HORS, thus reducing the equivalence problem for strategies to the equivalence of HORS.

**Ranjit Jhala**

*Liquid Types For Haskell*

We present LiquidHaskell, an automatic verifier for Haskell. LiquidHaskell uses “refinement types”, a restricted form of dependent types where relationships between values are encoded by decorating types with logical predicates drawn from an efficiently SMT-decidable theory (of arithmetic and uninterpreted functions). In this talk, we will describe the key ingredients of LiquidHaskell.

First, we will present a rapid overview of liquid refinement types, including SMT solver based (decidable) subtyping, and inference. Decidability is achieved by eschewing the use of arbitrary terms inside types, and the use of indices to encode rich properties of data.

Second, we will show how to recover some of the expressiveness lost by restricting the logic, with two new techniques: measures which encode structural properties of values and abstract refinements which enable generalization (i.e. quantification) over refinements.
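To convey the flavour of refinements and measures outside Haskell, here is a hypothetical, dynamically checked Python sketch: each predicate that LiquidHaskell would discharge statically with an SMT solver is instead asserted at run time. The decorator and signatures are invented for illustration and are not LiquidHaskell syntax:

```python
def refined(pre):
    # Attach a refinement predicate to a function.  Here it is
    # checked dynamically; LiquidHaskell instead discharges the
    # obligation statically with an SMT solver.
    def wrap(f):
        def checked(*args):
            assert pre(*args), "refinement violated"
            return f(*args)
        return checked
    return wrap

# div :: Int -> {y:Int | y /= 0} -> Int   (informal refinement)
@refined(lambda x, y: y != 0)
def div(x, y):
    return x // y

# 'len' plays the role of a measure: a logical function on values.
# head :: {xs:[a] | len xs > 0} -> a      (informal refinement)
@refined(lambda xs: len(xs) > 0)
def head(xs):
    return xs[0]
```

Calling `div(x, 0)` or `head([])` trips the assertion at run time; the point of refinement typing is to rule such calls out before the program runs.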

Third, we will discuss the curious interaction of laziness and refinement typing. In a nutshell, the technique of refinement typing can be viewed as a type-based generalization of Floyd-Hoare logics. Surprisingly, we demonstrate that under non-strict evaluation these logics (and hence classical refinement typing) are unsound, due to the presence of potentially divergent sub-computations. Fortunately, we show how soundness can be recovered with a termination analysis, itself circularly bootstrapped off refinement typing.

We have used LiquidHaskell to verify safety, functional correctness and termination properties of real-world Haskell libraries totalling more than 10,000 lines of code. Time permitting, we will present a demonstration of the tool and a few short case studies illustrating its use.

(Joint work with Niki Vazou and Eric Seidel and Patrick Rondon)

**Martin Hofmann**

*Abstract Interpretation from Büchi Automata*

We describe the construction of an abstract lattice from a given Büchi automaton. The abstract lattice is finite and has the following key properties. (i) There is a Galois connection between it and the (infinite) lattice of languages of finite and infinite words over a given alphabet. (ii) The abstraction is faithful with respect to acceptance by the automaton. (iii) Least fixpoints and omega-iterations (but not, in general, greatest fixpoints) can be computed on the level of the abstract lattice.

This allows one to develop an abstract interpretation capable of checking whether finite and infinite traces of a (recursive) program are accepted by a policy automaton. It is also possible to cast this analysis in form of a type and effect system with the effects being elements of the abstract lattice.
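One standard way to picture such an abstraction (a simplification, not necessarily the exact lattice of the talk) is to abstract a finite word by its effect on the automaton: the set of state pairs it can connect, tagged with whether an accepting state is visited along the way. Only finitely many such sets exist, and words with equal abstractions are indistinguishable by the automaton. A Python sketch:

```python
def word_abstraction(states, delta, accepting, word):
    # Abstract a finite word by the triples (p, q, f) such that
    # reading the word can take state p to state q, with f
    # recording whether an accepting state was visited.  These
    # sets form a finite abstract lattice, so fixpoints can be
    # computed over abstractions instead of concrete languages.
    rel = {(p, p, p in accepting) for p in states}
    for letter in word:
        rel = {(p, r, f or (r in accepting))
               for (p, q, f) in rel
               for r in delta.get((q, letter), ())}
    return frozenset(rel)

# Example automaton: reading 'a' leads to the accepting state 1,
# reading 'b' leads back to state 0.
states, accepting = {0, 1}, {1}
delta = {(0, 'a'): {1}, (1, 'a'): {1}, (0, 'b'): {0}, (1, 'b'): {0}}
```

Here the words 'a' and 'aa' receive the same abstraction while 'b' receives a different one, so an analysis may safely merge the former two.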

While the resulting decidability and complexity results are known (regular model checking for pushdown systems), the abstract lattice provides a new point of view and enables smooth integration with data types, objects, and higher-order functions, which are best handled with abstract interpretation or type systems.

We demonstrate this by generalising our type-and-effect systems to object-oriented programs and higher-order functions.

(Joint work with Wei Chen, Edinburgh)

**Rupak Majumdar**

*Abstractions in the Continuous World*

The notion of abstraction plays a crucial role in static analysis of discrete structures. In this talk, I will show how abstraction can also be applied to the analysis of continuous-time dynamical systems. In this setting, we generalize classical notions of system equivalence to approximate equivalences or metrics over states. We show the existence of finite abstractions under certain assumptions on the continuous dynamics. The abstractions then reduce reasoning about continuous systems to reasoning about their discrete quotients.
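A toy illustration of such a finite abstraction (invented for this note, using a crude sampling-based construction rather than the certified techniques of the talk): partition the state space of a one-dimensional system into intervals and record which cells are reachable from which in one time step.

```python
def quotient(flow, bounds, n_cells, dt, samples=20):
    # Partition [lo, hi] into n_cells equal intervals and add a
    # discrete transition c -> c' whenever some sampled point of
    # cell c lands in cell c' after evolving for time dt.  The
    # result is a finite transition system abstracting the
    # continuous dynamics.
    lo, hi = bounds
    width = (hi - lo) / n_cells
    cell = lambda v: min(int((v - lo) / width), n_cells - 1)
    trans = set()
    for c in range(n_cells):
        for k in range(samples):
            x = lo + (c + (k + 0.5) / samples) * width
            trans.add((c, cell(flow(x, dt))))
    return trans

# Contracting system dx/dt = -x on [0, 2], one Euler step as the
# flow; every transition moves toward the cell containing 0.
flow = lambda x, dt: x - dt * x
T = quotient(flow, (0.0, 2.0), 4, dt=0.5)
```

Reasoning about reachability in the continuous system is then replaced by graph reachability over the finitely many cells.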

**Jakob Rehof**

*Types as Programs: Foundations of Combinatory Logic Synthesis*

Combinatory logic synthesis has been proposed recently as a type-theoretic research programme in automatic synthesis of compositions from collections of components. Composition synthesis is based on the idea that the inhabitation (provability) relation in combinatory logic can be used as a logical foundation for synthesizing expressions satisfying a goal type (specification) relative to a given collection of components exposed as a combinatory type environment.

It is shown that, under the semantics of relativized inhabitation, simple types already constitute a Turing-complete logic programming language for computing (synthesizing) compositions, where collections of combinatory types are programs and types are rules in such programs. In order to enhance the ability to express semantic specifications, we introduce intersection types into composition synthesis, and we survey recent results on expressive power, algorithmics, and complexity. It is shown that modal types lead to a natural foundation for introducing meta-programming combinators, resulting in a highly flexible framework for composition synthesis. Based on (CL)S (Combinatory Logic Synthesizer), a prototype implementation of the framework, we will illustrate with practical examples as time permits. We will end the talk with a survey of open problems and future directions.
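The logic-programming reading of inhabitation can be sketched for monomorphic simple types (the talk's framework is relativized and extends to intersection and modal types; this toy depth-bounded search is only illustrative):

```python
def arrow_parts(t):
    # Split a simple type into (argument types, result type):
    # ('->', a, b) contributes a and recurses on b; a base type
    # (a plain string) has no arguments.
    args = []
    while isinstance(t, tuple) and t[0] == '->':
        args.append(t[1])
        t = t[2]
    return args, t

def inhabit(env, goal, depth=4):
    # Depth-bounded search for an applicative combinatory term of
    # the goal type: pick a combinator whose result type is the
    # goal and inhabit all its argument types recursively.  This
    # is the logic-programming reading: combinator types are the
    # clauses of the program, the goal type is the query.
    if depth == 0:
        return None
    for name, ty in env.items():
        args, result = arrow_parts(ty)
        if result != goal:
            continue
        term, ok = name, True
        for a in args:
            sub = inhabit(env, a, depth - 1)
            if sub is None:
                ok = False
                break
            term = (term, sub)   # application
        if ok:
            return term
    return None
```

For example, against the environment `{'f': ('->', 'a', 'b'), 'x': 'a'}` the query type `'b'` is answered by the composition `('f', 'x')`.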

**Andrey Rybalchenko**

*Solving (quantified) Horn constraints for program verification and synthesis*

We show how automatic tools for the verification of linear- and branching-time properties of procedural, multi-threaded, and functional programs, as well as program synthesis, can be naturally and uniformly seen as solvers of constraints in the form of (quantified) Horn clauses over background logical theories. Such a perspective offers various advantages, e.g., a logical separation of concerns between constraint generation (also known as generation of proof obligations) and constraint solving (also known as proof discovery), reuse of solvers across different verification tasks, and liberation of proof designers from low-level algorithmic concerns and vice versa. A growing number of verification tools for/based on Horn clause solving is currently emerging. In this talk, we will represent verification problems as the solving of (quantified) Horn constraints, and show how to solve such constraints using state-of-the-art methods including satisfiability modulo theories (SMT) decision procedures, counterexample-guided abstraction refinement (CEGAR), and generalizations of Craig interpolation.
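For ground (variable-free) Horn clauses, the solving step degenerates to computing a least model by forward iteration, which already conveys the verification reading: the program is safe iff the designated error atom is not derivable. A toy Python sketch over an invented safety instance:

```python
def least_model(facts, rules):
    # Naive forward (Kleene) iteration: a rule (head, body) fires
    # once every body atom has been derived; on termination the
    # set is the least model of the Horn clauses.
    model = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

# Invented safety instance: a counter starts at 0 and steps by 2;
# the 'error' atom would fire if state 3 were reachable.
facts = [('reach', 0)]
rules = [(('reach', n + 2), [('reach', n)]) for n in range(0, 6, 2)]
rules.append(('error', [('reach', 3)]))
model = least_model(facts, rules)
```

Real Horn-clause solvers work symbolically over background theories, using SMT, CEGAR, and interpolation rather than ground enumeration, but the safety question has the same shape: is 'error' in the least model?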

**Sylvain Salvati**

*Models for model checking higher-order programs*

Model checking methods are often applied to schemes of programs, i.e. programs where the meaning of basic operations is abstracted away. These methods consist in verifying whether the (possibly infinite) tree generated by a program scheme is accepted by a finite state automaton of a certain kind. In the context of model checking higher-order simply typed program schemes, the algorithms that are implemented, and that verify whether these automata accept the infinite tree generated by the scheme, are mostly based on intersection type systems. Nevertheless, these type systems can, in most cases, be seen as dual to models of the lambda-Y-calculus under the view coined as models in logical form by Abramsky.

From the point of view of models, the question of model checking becomes a question about computing the semantic values of programs. The finite nature of the models involved immediately yields the decidability of the model-checking problems, and the problem of finding efficient model-checking algorithms can be seen as that of efficiently evaluating semantic properties of programs. These remarks motivate the study of classes of models and the logical properties they can express on the output of higher-order program schemes. We first show that the class of monotone models precisely captures boolean combinations of automata with trivial conditions. Building on this result, we construct a class of more subtle models that capture weak alternating parity automata, i.e. weak Monadic Second Order properties. Interestingly, these models present an internal duality that gives rise to two dual type systems when following the view of models in logical form.

**Helmut Seidl**

*Efficiently intertwining widening and narrowing*

Non-trivial analysis problems require computing post-solutions of equations over posets with infinite ascending and descending chains. Reasonably small post-solutions can still be computed by accelerated fixpoint iteration through widening and narrowing. The strict separation into phases, however, may unnecessarily give up precision. While widening is also applicable if equations are non-monotonic, this is no longer the case for narrowing.

Monotonicity, though, may be violated by equation systems corresponding to interprocedural analysis. As a remedy, we present a novel operator that combines widening with narrowing. We present new solvers for the combined operator which always return sound results and explore when termination can be guaranteed.
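For contrast, the classic strictly phased iteration that a combined operator improves upon can be sketched on the interval domain: an ascending phase with widening followed by a descending phase with narrowing, here for the loop `i = 0; while i < 100: i += 1` (a standard textbook example, not taken from the talk):

```python
NEG_INF, POS_INF = float('-inf'), float('inf')

def widen(a, b):
    # Widening: any unstable bound jumps to infinity, so the
    # ascending iteration terminates despite infinite chains.
    return (a[0] if a[0] <= b[0] else NEG_INF,
            a[1] if a[1] >= b[1] else POS_INF)

def narrow(a, b):
    # Narrowing: only refine bounds that widening pushed to
    # infinity, recovering precision without oscillation.
    return (b[0] if a[0] == NEG_INF else a[0],
            b[1] if a[1] == POS_INF else a[1])

def f(x):
    # Abstract transformer at the loop head: join of the entry
    # value {0} with the value after the guarded body i += 1.
    lo, hi = x
    body_lo, body_hi = min(lo, 99) + 1, min(hi, 99) + 1
    return (min(0, body_lo), max(0, body_hi))

inv = (0, 0)
while True:            # ascending phase with widening
    w = widen(inv, f(inv))
    if w == inv:
        break
    inv = w
while True:            # descending phase with narrowing
    n = narrow(inv, f(inv))
    if n == inv:
        break
    inv = n
# inv is now the invariant interval for i at the loop head
```

The two phases yield the invariant i ∈ [0, 100]; intertwining widening and narrowing replaces this strict sequencing with a single combined iteration.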

**Olivier Serre**

*What are collapsible pushdown automata good for?*

Recursion schemes are an old model of computation, originally designed as a canonical programming calculus for studying program transformation and control structures. In recent years, higher-order recursion schemes have received much attention as a method of constructing rich and robust classes of infinite trees with strong algorithmic properties.

In the last ten years, there have been several approaches to dealing with higher-order recursion schemes: game semantics and the concept of traversals, collapsible pushdown automata, intersection types, and the lambda-Y calculus and Krivine machines.

In this talk, I will focus on the collapsible pushdown automata approach, starting with their definition and examples and an explanation of their relation to higher-order recursion schemes. I will then discuss specific problems on collapsible pushdown automata arising from games (decidability, winning regions and winning strategies), and I will explain how these results can be used to obtain results on recursion schemes (model checking, global model checking and synthesis).

Finally, I will briefly present the C-SHORe tool, which automatically checks termination (or related properties) of functional programs using as its core method a saturation technique on collapsible pushdown automata.

**Colin Stirling**

*Deciding equivalence using type checking*

There has been recent work showing that model checking of higher-order systems can be reduced to type checking. In this talk I examine reducing equivalence checking to type checking. As a practical example, I show that it offers an elementary decision procedure for language equivalence of real-time strict deterministic languages.

**Kazushige Terui**

*On applications of models of linear logic*

There is a tight connection between models of linear logic and intersection type systems, and the latter often give us useful information about programs. For instance, the relational semantics is connected to a system of non-idempotent intersection types that has been used to estimate the length of reduction sequences in the lambda calculus. Another example is the Scott model of linear logic, which corresponds to the essential type assignment system of van Bakel. This system (and its variants) has been applied to fast evaluation in the simply typed lambda calculus, and also to the verification of higher-order recursion schemes.

This connection could perhaps be generalized: any REL-like model (e.g. the coherent semantics) of linear logic leads to an intersection-like type system. While it would be interesting to develop a general theory, we are rather interested in finding applications of such models and type systems. In this talk, we will discuss our ongoing work in this direction.
