Invited Talks
Unifying speaker 1:
Bruno Blanchet (INRIA / ENS / CNRS, France). Security protocol verification: Symbolic and computational models
Abstract: Security protocol verification has been a very active research area since the 1990s. This paper surveys various approaches in this area, considering verification in the symbolic model as well as more recent approaches that rely on the computational model or that verify protocol implementations rather than specifications. Additionally, we briefly describe our symbolic security protocol verifier ProVerif and situate it among these approaches.
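To make the symbolic model concrete, here is a minimal sketch, in C++, of Dolev-Yao message derivation (our illustration, with invented names; it is not ProVerif's input language or implementation): messages are terms under the perfect cryptography assumption, and a secrecy query asks whether the attacker can derive a term from what it has observed.

    // Minimal sketch of symbolic (Dolev-Yao) secrecy, for illustration only:
    // a term is an atomic name or a symbolic encryption senc(payload, key);
    // cryptography is perfect, so senc(p, k) opens only when k is derivable.
    #include <cstddef>
    #include <vector>

    struct Term {
        int atom = -1;                   // >= 0 iff the term is an atomic name
        const Term* payload = nullptr;   // non-null iff the term is senc(payload, key)
        const Term* key = nullptr;
    };

    bool equal(const Term& a, const Term& b) {
        if (a.atom >= 0 || b.atom >= 0) return a.atom == b.atom;
        return equal(*a.payload, *b.payload) && equal(*a.key, *b.key);
    }

    bool known(const std::vector<const Term*>& ts, const Term& t) {
        for (const Term* x : ts) if (equal(*x, t)) return true;
        return false;
    }

    // Saturate the attacker's knowledge under decryption: whenever a known
    // ciphertext's key is known, its payload becomes known. This terminates,
    // since each step adds a strict subterm of a term already present.
    void saturate(std::vector<const Term*>& knows) {
        for (bool changed = true; changed; ) {
            changed = false;
            for (std::size_t i = 0; i < knows.size(); ++i) {
                const Term* c = knows[i];
                if (c->payload && known(knows, *c->key) && !known(knows, *c->payload)) {
                    knows.push_back(c->payload);
                    changed = true;
                }
            }
        }
    }

    // Symbolic secrecy query, in the spirit of ProVerif's "query attacker(s)":
    // the attacker derives t if t is known after saturation, or t is an
    // encryption it can rebuild from derivable parts.
    bool derives(std::vector<const Term*> knows, const Term& t) {
        saturate(knows);
        if (known(knows, t)) return true;
        return t.payload && derives(knows, *t.payload) && derives(knows, *t.key);
    }

For instance, if the attacker has only observed senc(s, k), then derives fails for the secret s, because k itself is not derivable; in the computational model one would instead bound the success probability of a polynomial-time adversary operating on actual bitstrings.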
Unifying speaker 2:
Georg Gottlob (Univ. of Oxford, UK). Querying UML class diagrams
Abstract: UML Class Diagrams (UCDs) are the best-known class-based formalism for conceptual modeling. They are used by software engineers to model the intensional structure of a system in terms of classes, attributes and operations, and to express constraints that must hold for every instance of the system. Reasoning over UCDs is of paramount importance in design, validation, maintenance and system analysis; however, for medium and large software projects, reasoning over UCDs may be impractical. Query answering, in particular, can be used to verify whether a (possibly incomplete) instance of the system modeled by the UCD, i.e., a snapshot, enjoys a certain property. In this work, we study the problem of querying UCD instances, and we relate it to query answering under guarded Datalog±, that is, a powerful Datalog-based language for ontological modeling. We present an expressive and meaningful class of UCDs, named Lean UCD, under which conjunctive query answering is tractable in the size of the instances.
(Joint work with Andrea Calì, Giorgio Orsi, Andreas Pieris.)
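As a hedged illustration of the flavour of such an encoding (the predicate names are ours, not taken from the paper), a UCD multiplicity constraint such as "every department some employee works in has a manager" becomes a guarded tuple-generating dependency, and a property of a snapshot becomes a conjunctive query:

    \forall X\,\forall D\,\big(\mathit{worksIn}(X,D)\rightarrow\exists M\,\mathit{manages}(M,D)\big)

    q(X)\ \leftarrow\ \mathit{worksIn}(X,D),\ \mathit{manages}(M,D)

The rule is guarded because the single atom worksIn(X, D) contains all universally quantified variables; answering q then amounts to computing certain answers over the snapshot completed by the dependencies, so every recorded worksIn fact yields an answer even when no manager is explicitly stored.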
CC invited speaker:
François Bodin (IRISA and CAPS Entreprise, France). Programming heterogeneous many-cores using directives
Abstract: Directive-based programming models are a pragmatic way of adapting legacy codes to heterogeneous many-cores such as CPUs coupled with GPUs. They provide programmers with an abstracted, portable interface for developing many-core applications. The directives are used to express parallel computation, data transfers between the CPU and GPU memories, and code tuning hints. The challenge for such environments is to achieve high programming productivity and, at the same time, provide performance portability across hardware platforms.
In this presentation we give an overview of the state of the art in directive-based parallel programming environments for many-core accelerators. In particular, we describe OpenACC (http://www.openacc-standard.org/), an initiative from CAPS, CRAY, NVIDIA and PGI that provides a new open parallel programming standard for the C, C++ and Fortran languages. We show how tuning can be performed with such a programming approach, and we specifically address numerical library interoperability issues.
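As a hedged sketch of this directive style (an illustrative kernel assuming an OpenACC-capable compiler, not code from the talk), here is a SAXPY loop annotated so that the compiler offloads it and manages the CPU-to-GPU transfers:

    // Illustrative OpenACC usage: the directive expresses both the parallel
    // computation and the data movement between CPU and GPU memories.
    void saxpy(int n, float a, const float* x, float* y) {
        // copyin: x is transferred to GPU memory; copy: y is transferred in
        // and copied back out, because the loop updates it.
        #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

The same source still compiles to sequential CPU code when the directives are ignored, which is the portability argument; tuning hints of the kind mentioned above would take the form of additional clauses (for example, gang and vector scheduling clauses) on the loop directive.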
ESOP invited speaker:
Bjarne Stroustrup (Texas A&M Univ., USA). Foundations of C++
Abstract: C++ is a large and complicated language. People get lost in details. However, to write good C++ you need only understand a few fundamental techniques – the rest is indeed details. This paper presents a few fundamental examples and explains the principles behind them. Among the issues touched upon are type safety, resource management, compile-time computation, error-handling, concurrency, and performance. The presentation relies on and introduces a few features from the recent ISO C++ standard, C++11, that simplify the discussion of C++ fundamentals and modern style.
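As a hedged illustration of that modern style (the example is ours, not taken from the paper), two of the fundamentals in C++11: compile-time computation with constexpr, and resource management through RAII, so that error handling cannot leak the resource:

    #include <cstdio>
    #include <stdexcept>

    constexpr int square(int n) { return n * n; }    // computed at compile time
    static_assert(square(9) == 81, "evaluated during compilation");

    class File {                                     // RAII: the constructor
    public:                                          // acquires the resource,
        explicit File(const char* name)              // the destructor releases it
            : f_(std::fopen(name, "r")) {
            if (!f_) throw std::runtime_error("cannot open file");
        }
        ~File() { std::fclose(f_); }                 // runs on every exit path,
        File(const File&) = delete;                  // including thrown exceptions;
        File& operator=(const File&) = delete;       // deleted copies prevent
        std::FILE* get() const { return f_; }        // accidental double close
    private:
        std::FILE* f_;
    };

Type safety, exception-based error handling, and deterministic release compose here with no explicit cleanup code at the call site, which is the kind of principle-over-detail point the abstract makes.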
FASE invited speaker:
Wil van der Aalst (Techn. Univ. of Eindhoven, Netherlands). Distributed process discovery and conformance checking
Abstract: Process mining techniques have matured over the last decade, and more and more organizations have started to use this new technology. The two most important types of process mining are process discovery (i.e., learning a process model from example behavior recorded in an event log) and conformance checking (i.e., comparing modeled behavior with observed behavior). Process mining is motivated by the availability of event data. However, as event logs become larger (say, terabytes), performance becomes a concern. The only way to handle larger applications while ensuring acceptable response times is to distribute the analysis over a network of computers (e.g., multicore systems, grids, and clouds). This paper provides an overview of the different ways in which process mining problems can be distributed. We identify three types of distribution: replication, a horizontal partitioning of the event log, and a vertical partitioning of the event log. These types are discussed in the context of both procedural (e.g., Petri nets) and declarative process models. Most challenging is the horizontal partitioning of event logs in the context of procedural models. Therefore, a new approach to decompose Petri nets and associated event logs is presented. This approach illustrates that process mining problems can be distributed in various ways.
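To make the partitioning notions concrete, a hedged sketch (types and names are ours, not the paper's): in a horizontal partitioning every node sees all cases, but each trace is projected onto the subset of activities its node is responsible for, which is why the Petri net itself must be decomposed accordingly; a vertical partitioning would instead distribute whole cases over the nodes.

    #include <set>
    #include <string>
    #include <vector>

    struct Event {
        std::string caseId;    // the process instance the event belongs to
        std::string activity;  // what happened
    };

    // Horizontal partitioning: project the log onto one node's activity
    // subset; every case survives, but only with the events that this node
    // discovers or checks. The activity subsets would be derived from a
    // decomposition of the Petri net into fragments.
    std::vector<Event> projectLog(const std::vector<Event>& log,
                                  const std::set<std::string>& activities) {
        std::vector<Event> sublog;
        for (const Event& e : log)
            if (activities.count(e.activity))
                sublog.push_back(e);
        return sublog;
    }

Each node can then run discovery or conformance checking on its sub-log against its net fragment, with the per-fragment results combined afterwards.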
FOSSACS invited speaker:
Glynn Winskel (Univ. of Cambridge, UK). Bicategories of concurrent games
Abstract: This paper summarises recent results on bicategories of concurrent games and strategies. Nondeterministic concurrent strategies, those nondeterministic plays of a game left essentially unchanged by composition with copy-cat strategies, have recently been characterized as certain maps of event structures. This leads to a bicategory of general concurrent games in which the maps are nondeterministic concurrent strategies. It is shown how the bicategory can be refined to a bicategory of winning strategies by adjoining winning conditions to games. Assigning “access levels” to moves addresses situations where Player or Opponent have imperfect information as to what has occurred in the game. Finally, a bicategory of deterministic “linear” strategies, a recently discovered model of MALL (multiplicative-additive linear logic), is described. All the bicategories become equivalent to simpler order-enriched categories when restricted to deterministic strategies.
POST invited speaker:
Cynthia Dwork (Microsoft Research, Silicon Valley, USA). Differential privacy and the power of (formalizing) negative thinking
Abstract: Differential privacy is a promise, made by a data curator to a data subject: you will not be affected, adversely or otherwise, by allowing your data to be used in any study, no matter what other studies, data sets, or information from other sources are, or may become, available. This talk describes the productive role played by negative results in the formulation of differential privacy and the development of techniques for achieving it, concluding with a new negative result having implications related to participation in multiple, independently operated, differentially private databases.
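For reference, the formal guarantee behind this promise, in the standard notation (the abstract itself does not spell it out): a randomized mechanism \mathcal{M} is \varepsilon-differentially private if, for every pair of databases D, D' differing in one individual's data and every set S of outputs,

    \Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[\mathcal{M}(D') \in S].

A well-known consequence is that guarantees degrade under composition: k independent \varepsilon-private analyses are, in the worst case, only k\varepsilon-private, which gives a flavour of why participation in multiple, independently operated databases is delicate.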
TACAS invited speaker:
Holger Hermanns (Saarland Univ., Germany). Quantitative models for a not so dumb grid
Abstract: How to dimension buffer sizes in a network on chip? What availability can be expected for the Galileo satellite navigation system? Is it a good idea to ride a bike with a wireless brake? Can photovoltaic overproduction blow out the European electric power grid? Maybe. Maybe not. Probably? The era of power-aware, wireless and distributed systems of systems asks for strong quantitative answers to such questions.
Stochastic model checking techniques have been developed to attack these challenges. They merge two well-established strands of informatics research and practice: verification of concurrent systems and performance evaluation. This presentation paints the landscape of behavioural models for probability, time, and cost, and discusses important aspects of compositional modelling and model checking techniques. Different real-life cases show how these techniques are applied in practice.
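As a hedged pointer to what such quantitative queries look like (standard notation, not tied to the talk's specific case studies): on a continuous-time Markov chain with generator matrix Q and initial distribution \pi(0), transient analysis evaluates \pi(t) = \pi(0)\,e^{Qt}, and a typical stochastic model checking property is a time-bounded reachability query such as

    \mathrm{P}_{\ge 0.99}\big(\Diamond^{\le T}\,\mathit{stable}\big),

asking whether the probability of reaching a stable state within T time units is at least 0.99.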
A rich spectrum of quantitative analysis challenges is posed by the 'smart grid' vision. That vision promises a more stable, secure, and resilient power grid operation, despite increasing volatility of electric power production. However, that vision is in its infancy, while the reality of power production is already changing considerably in some regions of Europe. We focus on a regulation put in place by the German Federal Network Agency to increase grid stability in case of photovoltaic overproduction. We show that this regulation may in fact decrease grid stability. We also propose improved and fully decentralized stabilization strategies that take inspiration from probabilistic MAC protocols. Quantitative properties of these strategies are calculated by state-of-the-art stochastic model checking tools.
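As a hedged caricature of such a MAC-inspired rule (our illustration only; the strategies analyzed in the talk and their parameters may differ): instead of every inverter reconnecting deterministically the moment the frequency recovers, which re-synchronizes the fleet and can make the grid oscillate, each inverter flips an independent coin per time slot.

    #include <random>

    // Decentralized randomized reconnection: each inverter decides locally,
    // with no coordination. Reconnecting only with probability p per time
    // slot spreads reconnections out instead of firing them all at once.
    bool shouldReconnect(bool frequencyOk, double p, std::mt19937& rng) {
        if (!frequencyOk) return false;   // stay off while the grid is stressed
        std::bernoulli_distribution coin(p);
        return coin(rng);
    }

The quantitative question, of the kind the stochastic model checking tools answer, is then how the choice of p trades expected reconnection delay against the probability of a renewed frequency swing.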