Invited Talks

The Search for Structure in Quantum Computation

Abstract. I give a non-comprehensive survey of the categorical quantum
mechanics program and how it guides the search for structure in
quantum computation. I discuss the example of measurement-based
computing, which is one of the successes of such an enterprise, and
briefly mention topological quantum computing, which is an inviting
target for future research in this area.

Future-proofing collections: from mutable to persistent to parallel

Abstract. Multicore processors are on every desk now. How are we going to make
use of the extra power they provide? Some think that actors, or
transactional memory, or some other new concurrency construct will
save the day by making concurrent programming easier and safer. Even
though these are welcome, I am skeptical about their ultimate
success. Concurrency is fundamentally hard and no dressing up will be
able to hide that fact completely.

A safer and, for the programmer, much simpler alternative is to treat
parallel execution as essentially an optimization. A promising
application area is collections. Programming by transforming and
aggregating collections is simple and powerful, and it can be
optimized by executing bulk operations in parallel. To be able to do
this in practice, any side effects of parallel operations need to be
carefully controlled. This means that immutable, persistent
collections are more suitable than mutable ones.
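
As a small illustration of this idea, here is a minimal sketch, assuming
the Scala 2.9-era standard library in which parallel collections and the
.par method are built in: a side-effect-free bulk operation on an
immutable collection can be run in parallel without changing the
surrounding code.

  // Minimal sketch (assumes Scala 2.9-era collections, where .par is part of
  // the standard library). The aggregation has no side effects, so parallel
  // execution is purely an optimization: both runs produce the same result.
  object ParallelAggregation {
    def main(args: Array[String]): Unit = {
      val xs = Vector.range(1, 1000001)                     // immutable, persistent collection

      val sequentialSum = xs.map(x => x.toLong * x).sum     // sequential bulk operations
      val parallelSum   = xs.par.map(x => x.toLong * x).sum // the same operations, in parallel

      assert(sequentialSum == parallelSum)
      println(parallelSum)
    }
  }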

In this talk I will describe the new Scala collections framework, and
show how it allows a seamless migration from traditional mutable
collections to persistent collections, and from there to parallel
collections. I will show how the same vocabulary of methods can be
used for each kind of collection, and how one can have parallel as
well as sequential views on the same underlying collection.
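
To make this concrete, here is another minimal sketch (again assuming the
2.9-era API; the particular collection types are chosen here purely for
illustration): the same filter/map vocabulary applies to a mutable buffer,
an immutable list, and a parallel collection, and .par and .seq switch
between parallel and sequential views.

  // Minimal sketch (assumes Scala 2.9-era collections with built-in parallel support).
  import scala.collection.mutable

  object SameVocabulary {
    def main(args: Array[String]): Unit = {
      val buffer = mutable.ArrayBuffer(1, 2, 3, 4, 5)  // traditional mutable collection
      val list   = List(1, 2, 3, 4, 5)                 // persistent (immutable) collection

      // The same vocabulary of methods works on both kinds of collection...
      println(buffer.filter(_ % 2 == 0).map(_ * 10))
      println(list.filter(_ % 2 == 0).map(_ * 10))

      // ...and .par / .seq move between parallel and sequential views.
      val parallel = list.par                          // a parallel counterpart of the list
      println(parallel.map(_ * 10).seq)                // back to a sequential collection
    }
  }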

The design of this framework is guided by the "uniform return type
principle": every collection transformer should return the same kind
of collection it applies to. Simple as this sounds, it is surprisingly
hard to achieve in a statically typed language with a rich type
hierarchy (in fact, I know of no other collections framework that
has achieved it). In the talk I will explain the type-systematic
constructions required by the principle. I will also present some
thoughts on how we might develop type-explanation capabilities of
compilers to effectively support these techniques in a user-friendly
way.
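
As a rough illustration of the principle (a hedged sketch; the particular
collection types below are illustrative only and not taken from the talk),
each transformer returns the same kind of collection it was applied to,
falling back to a more general kind only when the result can no longer fit.

  // Minimal sketch of the uniform return type principle (standard library assumed).
  import scala.collection.immutable.BitSet

  object UniformReturnType {
    def main(args: Array[String]): Unit = {
      val list: List[Int]     = List(1, 2, 3).map(_ + 1)    // List in, List out
      val vector: Vector[Int] = Vector(1, 2, 3).map(_ + 1)  // Vector in, Vector out
      val bits: BitSet        = BitSet(1, 2, 3).map(_ + 1)  // BitSet in, BitSet out

      // Mapping bits to strings cannot yield a BitSet, so the result falls back
      // to the nearest kind of set that can hold the new elements.
      val strings = BitSet(1, 2, 3).map(_.toString)         // a Set[String], not a BitSet

      println((list, vector, bits, strings))
    }
  }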

An Automata-Theoretic Approach to Program Analysis

Abstract. We present a general framework for the decomposition of
automatic proofs of program correctness by program analysis (program
analysis in the broad sense of abstraction-based methods that extract
runtime properties from programs).  The decomposition does not refer
to the structured program but, instead, to the set of traces.  A trace
is a finite or infinite word consisting of transitions (events,
statements, ...).  Each subset in the decomposition is defined by an
automaton.  A correctness proof amounts to constructing a family of
automata which defines a decomposition of the set of traces.  How does
the program analysis construct these automata?  We answer this
question for both cases, partial correctness (safety) and termination
(liveness).  For the first case, we propose an alternative approach to
program analysis with counterexample-guided abstraction refinement.
For the second case, we propose an alternative view of termination
analysis based on transition invariants and transition predicate
abstraction.
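
To make the shape of such a proof concrete, here is a minimal sketch of
the decomposition-based argument for the safety case; the notation (T(P)
for the set of traces of program P, L(A_i) for the language of automaton
A_i) is chosen here for illustration and not taken from the abstract.

  % Hedged sketch of the decomposition-based safety argument (notation assumed).
  \[
    \mathcal{T}(P) \;\subseteq\; \mathcal{L}(\mathcal{A}_1) \cup \cdots \cup \mathcal{L}(\mathcal{A}_n)
    \quad\text{and}\quad
    \text{each } \mathcal{L}(\mathcal{A}_i) \text{ contains only correct (or infeasible) traces}
    \;\Longrightarrow\; P \text{ is correct.}
  \]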

Reliable software development: analysis-aware design

Abstract. The application of formal methods in software development
does not have to be an all-or-nothing proposition.
Progress can be made with the introduction of relatively
unobtrusive techniques that simplify analysis.
This approach is meant to replace traditional analysis-agnostic
coding with an analysis-aware style of software development.

Verified Software Toolchain

Abstract. The software toolchain includes static analyzers to check
assertions about programs; optimizing compilers to translate programs
to machine language; operating systems and libraries to supply context
for programs.  Our Verified Software Toolchain verifies with
machine-checked proofs that the assertions claimed at the top of the
toolchain really hold in the machine-language program, running in the
operating-system context, on a weakly-consistent-shared-memory
machine.  Our verification approach is modular, in that proofs about
operating systems or concurrency libraries are oblivious of the
programming language or machine language, proofs about compilers are
oblivious of the program logic used to verify static analyzers, and so
on.  The approach is scalable, in that each component is verified in
the semantic idiom most natural for that component.  Finally, the
verification is foundational: the trusted base for proofs of
observable properties of the machine-language program includes only
the operational semantics of the machine language, not the source
language, the compiler, the program logic, or any other part of the
toolchain---even when these proofs are carried out by source-level
static analyzers.  In this talk I explain some semantic techniques for
building a verified toolchain.

The Dependability of Complex Socio-technical Systems

Abstract. The story of software engineering has been one of learning
to cope with ever greater scale and complexity. We're now building
systems with hundreds of millions of users, who belong to millions of
firms and dozens of countries; the firms can be competitors and the
countries might even be at war.

Rather than having a central planner, we have to arrange things so
that the desired behaviour emerges as a result of the self-interested
action of many uncoordinated principals. Mechanism design and game
theory are becoming as important to the system engineer as more
conventional knowledge such as data structures and algorithms. This
holds not just for systems no-one really controls, such as the
Internet; it extends through systems controlled by small groups of
firms, such as the future smart grid, to systems controlled by a
single firm, such as Facebook. Once you have hundreds of millions of
users, you have to set rules rather than micromanage outcomes.

Other social sciences have a role to play too, especially the
behavioural sciences;  HCI testing has to be supplemented by a more
principled understanding of psychology.  And as software comes to
pervade just about every aspect of society, software engineers cannot
avoid engaging with policy. This has significant implications for
academics: for how we educate our students, and for choosing research
topics that are most likely to have some impact.

Automated Design and Verification of Security Protocols based on
Zero-Knowledge Proofs

Abstract. A central challenge in the analysis of large-scale security protocols
is the expressiveness of the formalism used in the formal analysis and
its capability to model complex cryptographic operations. While such
protocols traditionally relied only on basic cryptographic operations
such as encryption and digital signatures, modern cryptography has
invented more sophisticated primitives with unique security features
that go far beyond the traditional understanding of cryptography as
solely offering secrecy and authenticity of communication.
Zero-knowledge proofs constitute the most prominent and
arguably most amazing such primitive. A zero-knowledge proof consists
of a message or a sequence of messages that combines two seemingly
contradictory properties:  First, it constitutes a proof of a
statement that cannot be forged, i.e., it is impossible, or at least
computationally infeasible, to produce a zero-knowledge proof of a
wrong statement. Second, a zero-knowledge proof does not reveal any
information besides the bare validity of the statement. This
primitive's unique security features, combined with the recent advent
of efficient cryptographic implementations of zero-knowledge proofs
for special classes of problems, have paved the way for its deployment
in modern applications, such as e-voting systems and anonymity
protocols.
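
For readers who want the two properties stated slightly more formally,
here is an informal sketch in standard cryptographic notation; the symbols
(prover P, verifier V, statement x, witness w, simulator Sim) are chosen
here for illustration and not taken from the abstract.

  % Informal sketch of the two properties of a zero-knowledge proof system
  % (notation assumed, not from the abstract).
  \[
    \textbf{Soundness:}\quad \forall x \notin L,\ \forall P^{*}:\;
      \Pr\bigl[\,\langle P^{*}, V\rangle(x) \text{ accepts}\,\bigr] \text{ is negligible.}
  \]
  \[
    \textbf{Zero-knowledge:}\quad \forall V^{*}\ \exists\, \mathsf{Sim}:\;
      \mathsf{Sim}(x) \,\approx_{c}\, \mathsf{view}_{V^{*}}\bigl(\langle P(x, w), V^{*}\rangle(x)\bigr)
      \quad \text{for all true } x \text{ with witness } w.
  \]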

In this talk, I will present a framework for the verification and
design of security protocols based on zero-knowledge proofs. The
framework comprises a symbolic representation of the cryptographic
semantics of zero-knowledge proofs that is suitable for automated
verification, a type system for the static enforcement of
authorization policies, a corresponding cryptographic soundness result
against arbitrary active attacks, and a general methodology for
designing security protocols that are resistant to principal
compromise.

Automated Learning of Probabilistic Assumptions for Compositional Reasoning

Abstract. Probabilistic verification techniques have been applied to the
formal modelling and analysis of a wide range of systems, from
communication protocols such as Bluetooth, to nanoscale computing
devices, to biological cellular processes. In order to tackle the
inherent challenge of scalability, compositional approaches to
verification are sorely needed. An example is assume-guarantee
reasoning, where each component of a system is analysed independently,
using assumptions about the other components that it interacts with. We
discuss recent developments in the area of automated compositional
verification techniques for probabilistic systems. In particular, we
describe techniques to automatically generate probabilistic assumptions
that can be used as the basis for compositional reasoning. We do so
using algorithmic learning techniques, which have already proved to be
successful for the generation of assumptions for compositional
verification of non-probabilistic systems. We also present recent
improvements and extensions to this work, and survey some promising
directions for further research in this area.
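
As a rough illustration of the style of reasoning involved, here is a
minimal sketch of a standard asymmetric assume-guarantee rule for two
components M1 and M2; in the probabilistic setting, the assumption A and
guarantee G typically carry probability bounds, and the learning
techniques above are used to generate a suitable A automatically. The
notation is chosen here for illustration and not taken from the abstract.

  % Hedged sketch: <A> M <G> reads "if M's environment satisfies assumption A,
  % then M guarantees G" (notation assumed, not from the abstract).
  \[
    \frac{\;\langle \mathit{true}\rangle\, M_1 \,\langle A\rangle
          \qquad
          \langle A\rangle\, M_2 \,\langle G\rangle\;}
         {\;\langle \mathit{true}\rangle\, M_1 \parallel M_2 \,\langle G\rangle\;}
  \]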