The Search for Structure in Quantum Computation

Abstract. I give a non-comprehensive survey of the categorical quantum mechanics program and how it guides the search for structure in quantum computation. I discuss the example of measurement-based computing, which is one of the successes of such an enterprise, and briefly mention topological quantum computing, which is an inviting target for future research in this area.

Future-proofing collections: from mutable to persistent to parallel

Multicore processors are on every desk now. How are we going to make use of the extra power they provide? Some think that actors, or transactional memory, or some other new concurrency construct will save the day by making concurrent programming easier and safer. Even though these are welcome, I am skeptical about their ultimate success. Concurrency is fundamentally hard and no dressing up will be able to hide that fact completely.

A safer and, for the programmer, much simpler alternative is to treat parallel execution as essentially an optimization. A promising application area is collections. Programming by transforming and aggregating collections is simple, powerful, and can be optimized by executing bulk operations in parallel. To be able to do this in practice, any side effects of parallel operations need to be carefully controlled. This means that immutable, persistent collections are more suitable than mutable ones.

In this talk I will describe the new Scala collections framework, and show how it allows a seamless migration from traditional mutable collections to persistent collections, and from there to parallel collections. I will show how the same vocabulary of methods can be used for either type of collection, and how one can have parallel as well as sequential views on the same underlying collection.

The design of this framework is guided by the "uniform return type principle": every collection transformer should return the same kind of collection it applies to. Simple as this sounds, it is surprisingly hard to achieve in a statically typed language with a rich type hierarchy (in fact, I know of no other collections framework that has achieved it). In the talk I will explain the type-systematic constructions required by the principle. I will also present some thoughts on how we might develop type-explanation capabilities of compilers to effectively support these techniques in a user-friendly way.

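As a rough illustration of these two points (a sketch, not code from the framework itself), the snippet below shows the uniform return type principle and a parallel view on the same data in plain Scala. The object name and values are made up for this example, and the availability of .par depends on the Scala version.

    object CollectionsSketch {
      // Uniform return type principle: each transformer returns the same
      // kind of collection it was applied to.
      val xs: List[Int]   = List(1, 2, 3).map(_ * 2)     // List in, List out
      val ys: Vector[Int] = Vector(1, 2, 3).map(_ * 2)   // Vector in, Vector out
      val zs: Set[Int]    = Set(1, 2, 3).map(_ * 2)      // Set in, Set out

      // The same vocabulary works on a parallel view of the same data;
      // immutability keeps the parallel bulk operations free of side effects.
      // (.par ships with the standard library up to Scala 2.12 and with the
      // scala-parallel-collections module afterwards.)
      val big    = (1 to 1000000).toVector
      val parSum = big.par.map(_ + 1).filter(_ % 2 == 0).sum
    }

The point of the principle is visible in the static types above: element-transforming operations never widen the result to a common supertype such as Iterable.
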
An Automata-Theoretic Approach to Program Analysis

Abstract. We present a general framework for the decomposition of automatic proofs of program correctness by program analysis (program analysis in the large sense of abstraction-based methods to extract runtime properties from programs). The decomposition does not refer to the structured program but, instead, to the set of traces. A trace is a finite or infinite word consisting of transitions (events, statements, ...). Each subset in the decomposition is defined by an automaton. A correctness proof amounts to constructing a family of automata which defines a decomposition of the set of traces. How does the program analysis construct these automata? We answer this question for both cases, partial correctness (safety) and termination (liveness). For the first case, we propose an alternative approach to program analysis with counterexample-guided abstraction refinement. For the second case, we propose an alternative view of termination analysis based on transition invariants and transition predicate abstraction.

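To make the trace view concrete, here is a small hypothetical sketch in Scala (names such as TraceAutomaton are invented for illustration and are not taken from any tool): traces are words over statements, one automaton accepts a set of infeasible traces, and a proof is a family of such automata covering all error traces.

    object TraceSketch {
      type Stmt = String

      // A finite automaton over statements; a family of such automata,
      // each accepting only infeasible traces, decomposes the error traces.
      final case class TraceAutomaton(
          initial: Int,
          accepting: Set[Int],
          delta: Map[(Int, Stmt), Int]) {

        def accepts(trace: Seq[Stmt]): Boolean =
          trace.foldLeft(Option(initial)) { (state, stmt) =>
            state.flatMap(q => delta.get((q, stmt)))
          }.exists(accepting)
      }

      // Accepts traces of the shape: assume(x > 0), any number of skips,
      // then assume(x <= 0); such traces are infeasible, since no statement
      // in between changes x.
      val a1 = TraceAutomaton(
        initial   = 0,
        accepting = Set(2),
        delta     = Map(
          (0, "assume(x > 0)")  -> 1,
          (1, "skip")           -> 1,
          (1, "assume(x <= 0)") -> 2))

      // A correctness proof = a family of automata covering every error trace.
      val proof: List[TraceAutomaton] = List(a1)
      val covered =
        proof.exists(_.accepts(Seq("assume(x > 0)", "skip", "assume(x <= 0)")))
    }
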
Reliable software development: analysis-aware design

Abstract. The application of formal methods in software development does not have to be an all-or-nothing proposition. Progress can be made with the introduction of relatively unobtrusive techniques that simplify analysis. This approach is meant to replace traditional analysis-agnostic coding with an analysis-aware style of software development.

Verified Software Toolchain

Abstract. The software toolchain includes static analyzers to check assertions about programs; optimizing compilers to translate programs to machine language; operating systems and libraries to supply context for programs. Our Verified Software Toolchain verifies with machine-checked proofs that the assertions claimed at the top of the toolchain really hold in the machine-language program, running in the operating-system context, on a weakly-consistent-shared-memory machine. Our verification approach is modular, in that proofs about operating systems or concurrency libraries are oblivious of the programming language or machine language, proofs about compilers are oblivious of the program logic used to verify static analyzers, and so on. The approach is scalable, in that each component is verified in the semantic idiom most natural for that component. Finally, the verification is foundational: the trusted base for proofs of observable properties of the machine-language program includes only the operational semantics of the machine language, not the source language, the compiler, the program logic, or any other part of the toolchain---even when these proofs are carried out by source-level static analyzers. In this talk I explain some semantic techniques for building a verified toolchain.

The Dependability of Complex Socio-technical Systems

Abstract. The story of software engineering has been one of learning to cope with ever greater scale and complexity. We're now building systems with hundreds of millions of users, who belong to millions of firms and dozens of countries; the firms can be competitors and the countries might even be at war.

Rather than having a central planner, we have to arrange things so that the desired behaviour emerges as a result of the self-interested action of many uncoordinated principals. Mechanism design and game theory are becoming as important to the system engineer as more conventional knowledge such as data structures and algorithms. This holds not just for systems no-one really controls, such as the Internet; it extends through systems controlled by small groups of firms, such as the future smart grid, to systems controlled by a single firm, such as Facebook. Once you have hundreds of millions of users, you have to set rules rather than micromanage outcomes.

Other social sciences have a role to play too, especially the behavioural sciences; HCI testing has to be supplemented by a more principled understanding of psychology. And as software comes to pervade just about every aspect of society, software engineers cannot avoid engaging with policy. This has significant implications for academics: for how we educate our students, and for choosing research topics that are most likely to have some impact.

Automated Design and Verification of Security Protocols based on Zero-Knowledge Proofs

A central challenge in the analysis of large-scale security protocols is the expressiveness of the formalism used in the formal analysis and its capability to model complex cryptographic operations. While such protocols traditionally relied only on basic cryptographic operations such as encryption and digital signatures, modern cryptography has invented more sophisticated primitives with unique security features that go far beyond the traditional understanding of cryptography to solely offer secrecy and authenticity of a communication. Zero-knowledge proofs constitute the most prominent and arguably most amazing such primitive. A zero-knowledge proof consists of a message or a sequence of messages that combines two seemingly contradictory properties: First, it constitutes a proof of a statement that cannot be forged, i.e., it is impossible, or at least computationally infeasible, to produce a zero-knowledge proof of a wrong statement. Second, a zero-knowledge proof does not reveal any information besides the bare validity of the statement. This primitive's unique security features, combined with the recent advent of efficient cryptographic implementations of zero-knowledge proofs for special classes of problems, have paved the way for its deployment in modern applications, such as e-voting systems and anonymity protocols.

In this talk, I will present a framework for the verification and design of security protocols based on zero-knowledge proofs. The framework comprises a symbolic representation of the cryptographic semantics of zero-knowledge proofs that is suitable for automated verification, a type system for the static enforcement of authorization policies, a corresponding cryptographic soundness result against arbitrary active attacks, and a general methodology for designing security protocols that are resistant to principal compromise.

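The following is a deliberately simplified, hypothetical Scala sketch of what a symbolic (Dolev-Yao style) treatment of zero-knowledge proofs can look like; the names (Term, ZkProof, prove, verify) are invented for illustration and are not the framework's actual representation. It mimics the two properties above: a proof term cannot be built for a false statement, and verifying it reveals nothing beyond the statement's validity.

    object ZkSketch {
      sealed trait Term
      final case class Nonce(id: String)         extends Term
      final case class Enc(msg: Term, key: Term) extends Term

      // A statement relates a public term to a private witness.
      type Statement = (Term, Term) => Boolean

      final class ZkProof private (val public: Term, statement: Statement, witness: Term) {
        // Verification exposes only the validity of the statement ...
        def verify: Boolean = statement(public, witness)
        // ... and there is deliberately no accessor for the witness:
        // the proof reveals nothing beyond that validity.
      }

      object ZkProof {
        // Soundness, symbolically: a proof term can only be constructed
        // if the statement actually holds for the supplied witness.
        def prove(public: Term, statement: Statement, witness: Term): Option[ZkProof] =
          if (statement(public, witness)) Some(new ZkProof(public, statement, witness))
          else None
      }

      // Example: prove knowledge of the key under which ciphertext c was built,
      // without revealing that key.
      val k = Nonce("k")
      val m = Nonce("m")
      val c = Enc(m, k)
      val knowsKey: Statement = (pub, wit) => pub == Enc(m, wit)
      val proof = ZkProof.prove(c, knowsKey, k) // Some(...): verifies, key stays hidden
    }
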
Automated Learning of Probabilistic Assumptions for Compositional Reasoning

Abstract. Probabilistic verification techniques have been applied to the formal modelling and analysis of a wide range of systems, from communication protocols such as Bluetooth, to nanoscale computing devices, to biological cellular processes. In order to tackle the inherent challenge of scalability, compositional approaches to verification are sorely needed. An example is assume-guarantee reasoning, where each component of a system is analysed independently, using assumptions about the other components that it interacts with. We discuss recent developments in the area of automated compositional verification techniques for probabilistic systems. In particular, we describe techniques to automatically generate probabilistic assumptions that can be used as the basis for compositional reasoning. We do so using algorithmic learning techniques, which have already proved to be successful for the generation of assumptions for compositional verification of non-probabilistic systems. We also present recent improvements and extensions to this work and survey some of the promising potential directions for further research in this area.

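As a very rough sketch of how such assumption generation can be organised (all names below are placeholders, not the actual algorithm or any tool's API), the loop alternates between a learner that proposes candidate assumptions and a checker that tests the two premises of an assume-guarantee rule, refining the candidate from counterexamples.

    object AssumeGuaranteeSketch {
      // Placeholder types: an assumption (e.g. a small probabilistic automaton)
      // and a counterexample returned by the model checker.
      trait Assumption
      trait Counterexample

      trait Learner {
        def initial: Assumption
        def refine(a: Assumption, cex: Counterexample): Assumption
      }

      trait Checker {
        // Premise 1: does the second component guarantee the goal under a?
        def premiseHolds(a: Assumption): Either[Counterexample, Unit]
        // Premise 2: does the first component actually satisfy assumption a?
        def assumptionHolds(a: Assumption): Either[Counterexample, Unit]
      }

      // Propose, check both premises, refine from counterexamples, and stop
      // when the rule closes or the iteration budget runs out.
      @annotation.tailrec
      def learn(l: Learner, c: Checker, a: Assumption, fuel: Int): Option[Assumption] =
        if (fuel == 0) None
        else (c.premiseHolds(a), c.assumptionHolds(a)) match {
          case (Right(_), Right(_)) => Some(a)   // both premises discharged
          case (Left(cex), _)       => learn(l, c, l.refine(a, cex), fuel - 1)
          case (_, Left(cex))       => learn(l, c, l.refine(a, cex), fuel - 1)
        }
    }
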