PICS 2024 Speakers

Speakers

Ivan Damgård

Aarhus University

Ivan Damgård is a professor in computer science at Aarhus University. He does research in public-key cryptography, cryptographic protocols, and related mathematics and algorithmics. He is a fellow of the International Association for Cryptologic Research and has received the 2015 RSA Conference Award for Excellence in Mathematics, the STOC 2021 Test of Time Award, and the 2023 Dijkstra Prize in Distributed Computing, among others. He is the co-founder of the spin-off companies Cryptomathic and Partisia.

On the many crucial roles played by randomness in cryptography

Virtually no cryptographic primitive or protocol can be secure without making use of randomness, and hence tools from probability and information theory are essential for analysing them. We will cover a number of interesting examples of this, including the foundations of secure encryption, secret sharing and secure computation.
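
As a small illustration of the role randomness plays (a sketch of our own, not taken from the lecture materials), here is additive secret sharing in Python: each individual share is uniformly random, so no single share reveals anything about the secret, yet all shares together reconstruct it. The modulus and share count are arbitrary choices for the example.

import secrets

# Additive secret sharing over Z_p: split a secret into n shares that are each
# individually uniform, so any n-1 of them reveal nothing about the secret.
P = 2**61 - 1  # a Mersenne prime, chosen here only for illustration

def share(secret, n):
    # The first n-1 shares are fresh uniform randomness...
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    # ...and the last share is chosen so that all shares sum to the secret mod P.
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

shares = share(42, 3)
assert reconstruct(shares) == 42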

Joseph Halpern

Cornell University

Joe Halpern received a B.Sc. in mathematics from the University of Toronto in 1975 and a Ph.D. in mathematics from Harvard in 1981. In between, he spent two years as the head of the Mathematics Department at Bawku Secondary School, in Ghana. After a year as a visiting scientist at MIT, he joined the IBM Almaden Research Center in 1982, where he remained until 1996, also serving as a consulting professor at Stanford. In 1996, he joined the Computer Science Department at Cornell University, where he is currently the Joseph C. Ford Professor and was department chair 2010-14. Halpern has coauthored 6 patents, 3 books ('Reasoning About Knowledge', 'Reasoning about Uncertainty', and 'Actual Causality'), and over 400 technical publications. He is a Fellow of AAAI, AAAS (American Association for the Advancement of Science), the American Academy of Arts and Sciences, ACM, IEEE, the Game Theory Society, the National Academy of Engineering, and SAET (Society for the Advancement of Economic Theory). Among other awards, he received the Kampé de Fériet Award in 2016, the ACM SIGART Autonomous Agents Research Award in 2011, the Dijkstra Prize in 2009, the ACM/AAAI Newell Award in 2008, and the Gödel Prize in 1997, and he was a Guggenheim Fellow in 2001-02 and a Fulbright Fellow in 2001-02 and 2009-10. He started the computer science section of arxiv.org, and serves on its advisory board.

Causality, Explanation, and Harm

What does it mean that an event C “actually caused” event E? The philosophy literature has been struggling with the problem of defining causality since the days of Hume, in the 1700s. Many of the definitions have been couched in terms of counterfactuals. (C is a cause of E if, had C not happened, then E would not have happened.) In 2001, Judea Pearl and I introduced a new definition of actual cause using causal models, where structural equations model counterfactuals. I will discuss the definition, and then show how causal models can be used to model responsibility, blame, explanation, and harm (among other things) in addition to causality, with a focus on explanation and harm.
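
To make the counterfactual reading concrete, here is a toy structural-equation model in Python (an illustration only, not the full Halpern-Pearl machinery): fire F occurs if lightning L strikes or a match M is dropped, and an intervention simply overrides the corresponding structural equation.

# A toy structural causal model: exogenous settings determine lightning (L) and a
# dropped match (M); fire (F) occurs if either happens. An intervention via `do`
# overrides the corresponding structural equation.
def model(L, M, do=None):
    do = do or {}
    L = do.get("L", L)
    M = do.get("M", M)
    F = do.get("F", L or M)
    return {"L": L, "M": M, "F": F}

# Naive but-for test: had L not happened, would F still have happened?
actual = model(L=True, M=False)
cf = model(L=True, M=False, do={"L": False})
print(actual["F"], cf["F"])   # True, False -> L is a but-for cause of F

# With both causes present the but-for test fails (overdetermination), one reason
# the actual-cause definition allows holding other variables fixed.
both = model(L=True, M=True)
still = model(L=True, M=True, do={"L": False})
print(both["F"], still["F"])  # True, True -> the but-for test alone misses L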

Philipp Hennig

University of Tübingen

Philipp Hennig holds the Chair for the Methods of Machine Learning at the University of Tübingen. His group studies the algorithmic foundations of machine learning from the perspective of probability theory. His work has been supported by two ERC grants as well as Emmy Noether and Max Planck Fellowships. Hennig is a Founding Fellow of the European Laboratory for Learning and Intelligent Systems (ELLIS), where he co-directs the Research Program for Theory, Algorithms and Computations of Learning Machines.

Probabilistic Numerics — Computation as Inference

Numerical computation is the foundation of scientific computing and AI. We will discover that elementary numerical algorithms are extracting information — learning — about mathematical quantities from computations, and that it is possible to quantify uncertainty about this process along the way in a tractable fashion, as probability measures. We will begin with basic numerical linear algebra, and extend from there to simulation methods for differential equations, and deep learning. The result is a universal framework for the quantification of computational uncertainty that natively interacts with the conceptual frameworks of Bayesian machine learning.
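
As a taste of the idea (a sketch of our own, not course material), the snippet below performs Bayesian quadrature in Python/NumPy: a Gaussian-process prior on the integrand is conditioned on a handful of evaluations, and the integral over [0, 1] is returned as a posterior mean together with a variance quantifying the remaining computational uncertainty. Kernel, nodes, and integrand are arbitrary choices for the example.

import numpy as np

def k(a, b, ell=0.2):
    # Squared-exponential covariance between two sets of points.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

f = lambda x: np.sin(3 * x) + x          # integrand
X = np.linspace(0, 1, 6)                 # evaluation nodes (the "computation")
y = f(X)

grid = np.linspace(0, 1, 2001)           # fine grid to approximate kernel integrals
dx = grid[1] - grid[0]
z = k(grid, X).sum(axis=0) * dx          # z_i = integral of k(x, X_i) over [0, 1]
Z = k(grid, grid).sum() * dx * dx        # double integral of k(x, x')

K = k(X, X) + 1e-10 * np.eye(len(X))     # jitter for numerical stability
w = np.linalg.solve(K, y)
mean = z @ w                             # posterior mean of the integral
var = Z - z @ np.linalg.solve(K, z)      # posterior variance: computational uncertainty

print(mean, np.sqrt(max(var, 0.0)))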

Marta Kwiatkowska

University of Oxford

Marta Kwiatkowska is a Professor of Computing Systems and Fellow at Trinity College, University of Oxford. Her area of expertise lies in probabilistic and quantitative verification techniques and the synthesis of correct-by-construction systems from quantitative specifications. She led the development of the probabilistic model checker PRISM, which has been used to model and verify numerous case studies across a variety of application domains. Recently, she has been focusing on safety and trust in Artificial Intelligence, with an emphasis on safety and robustness guarantees for machine learning. Marta has been the recipient of two ERC Advanced Grants and an honorary doctorate from KTH Royal Institute of Technology, and was the first female winner of the Royal Society Milner Award. She is a Fellow of the ACM, a Member of Academia Europaea, and a Fellow of the Royal Society.

Probabilistic verification for neural networks

This course will provide an overview of formal verification for neural networks, which can be used to certify that the networks are robust to adversarial perturbations. Given a set of inputs I and a set of outputs O under some restrictions, checking whether all points x in I map to O can be approximated by bounding f(x) over I, which can be achieved by convex relaxation and propagating the bounds forward through the network. It is also helpful to approximate the set of inputs I’ that are mapped to O, which can be achieved by propagating the bounds backwards. Finally, a probabilistic variant of neural networks, called Bayesian neural networks, will be introduced. Bayesian neural networks return a probability distribution on outputs, so their verification draws on bound propagation and involves bounding the posterior probability.
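
A minimal sketch of the forward direction (our own illustration, with arbitrary weights): interval bound propagation pushes a box of inputs I through affine layers and ReLUs, producing bounds that are guaranteed to contain f(x) for every x in I.

import numpy as np

def ibp_layer(lo, hi, W, b):
    # For an affine layer, positive weights carry lower bounds to lower bounds and
    # negative weights swap them; the result is a sound (if loose) output box.
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

x = np.array([0.5, -0.2])
eps = 0.05                                     # perturbation radius
lo, hi = x - eps, x + eps                      # the input set I is a box around x

lo, hi = ibp_layer(lo, hi, W1, b1)
lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)  # ReLU is monotone: apply to both bounds
lo, hi = ibp_layer(lo, hi, W2, b2)

print("output bounds:", list(zip(lo, hi)))
# If, say, lo[0] > hi[1], then output 0 exceeds output 1 for every x in I,
# certifying the prediction on this box.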

Alexandra Silva

Cornell University

Alexandra Silva is a theoretical computer scientist whose main research focuses on semantics of programming languages and modular development of algorithms for computational models. A lot of her work uses the unifying perspective offered by coalgebra, a mathematical framework established over the last decades. Alexandra is currently a Professor at Cornell University, and she was previously a Professor at University College London. She was the recipient of an ERC Consolidator Grant in 2020, the Royal Society Wolfson Award in 2019, the Needham Award in 2018, the Presburger Award in 2017, the Leverhulme Prize in 2016, and an ERC Starting Grant in 2015.

Probabilistic network verification

Kleene algebra with tests is an algebraic framework that can be used to reason about imperative programs. It has been applied across a wide variety of areas including program transformations, concurrency control, compiler optimizations, cache control, networking, and more. In these lectures, we will provide an overview of Probabilistic NetKAT, a domain-specific language based on Kleene algebra with tests. We will illustrate how it can be used as a core framework for probabilistic network verification, including properties like congestion and fault tolerance.
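
The following is not ProbNetKAT syntax, only a small Python illustration of the underlying semantic idea assumed here: a probabilistic forwarding policy denotes a Markov kernel on packet states, sequential composition is kernel composition, and quantitative properties such as delivery probability or expected link load become questions about the resulting distributions.

import numpy as np

# States are switch locations {A, B, C, D}. From A the policy load-balances to B or C
# with probability 1/2 each; B and C forward to the destination D, which is absorbing.
states = ["A", "B", "C", "D"]
P = np.array([
    [0.0, 0.5, 0.5, 0.0],   # A: probabilistic choice between B and C
    [0.0, 0.0, 0.0, 1.0],   # B -> D
    [0.0, 0.0, 0.0, 1.0],   # C -> D
    [0.0, 0.0, 0.0, 1.0],   # D: delivered
])

start = np.array([1.0, 0.0, 0.0, 0.0])          # packet injected at A
after_two_hops = start @ np.linalg.matrix_power(P, 2)
print(dict(zip(states, after_two_hops)))        # delivery probability at D is 1.0

# The expected load on the A-B link is 0.5, which is one way a congestion
# property can be phrased and checked.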

Dan Suciu

University of Washington

Dan Suciu is a Microsoft Endowed Professor in the Paul G. Allen School of Computer Science and Engineering at the University of Washington. Suciu conducts research in data management, on topics such as query optimization, probabilistic data, data pricing, parallel data processing, and data security. He is a co-author of two books, Data on the Web: From Relations to Semistructured Data and XML (1999) and Probabilistic Databases (2011). He received the ACM SIGMOD Codd Innovation Award, has received several best paper and test of time awards, and is a Fellow of the ACM. Suciu is currently an associate editor for the Journal of the ACM. Suciu's PhD students Gerome Miklau, Christopher Ré, and Paris Koutris received the ACM SIGMOD Best Dissertation Award in 2006, 2010, and 2016 respectively, and Nilesh Dalvi was a runner-up in 2008.

Information Theory for Relational Query Processing

Conjunctive query evaluation lies at the core of any modern database management system. This course will cover the use of information theory to answer some fundamental questions in query processing. How large can the output of a query be? How efficiently can we evaluate a query? Is one query always faster than another? These questions can be answered using a fascinating tool: entropic vectors and information inequalities. The lectures contain a gentle introduction to conjunctive queries, then to entropic inequalities, and finally describe their applications to query evaluation.
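
As a concrete example of the first question (our own illustration), consider the triangle query Q(x, y, z) :- R(x, y), S(y, z), T(z, x). The AGM bound, provable via Shearer's entropy inequality, states that |Q| <= sqrt(|R| * |S| * |T|); the small Python snippet below checks it on arbitrary toy relations.

# Toy relations (sets of tuples), chosen arbitrarily for illustration.
R = {(0, 1), (0, 2), (1, 2), (1, 0)}
S = {(1, 2), (2, 0), (0, 1), (2, 1)}
T = {(2, 0), (0, 1), (1, 1), (0, 0)}

# Evaluate the triangle query by brute force.
Q = {(x, y, z)
     for (x, y) in R for (y2, z) in S for (z2, x2) in T
     if y2 == y and z2 == z and x2 == x}

bound = (len(R) * len(S) * len(T)) ** 0.5
print(len(Q), "<=", bound)   # the output size can never exceed the AGM bound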

Mikkel Thorup

University of Copenhagen

Mikkel Thorup (born 1965) received his D.Phil. from Oxford University in 1993. From 1993 to 1998 he was at the University of Copenhagen. From 1998 to 2013 he was at AT&T Labs-Research. Since 2013 he has been back as Professor at the University of Copenhagen. Since 2017, he has been a VILLUM Investigator heading the Center for Basic Algorithms Research Copenhagen (BARC). Mikkel is a Fellow of the ACM and of AT&T, and a Member of the Royal Danish Academy of Sciences and Letters. He is co-winner of the 2011 MAA Robbins Award in mathematics and winner of the 2015 Villum Kann Rasmussen Award for Technical and Scientific Research, which is Denmark's biggest individual prize for research. More recently he was co-winner of the 2021 AMS-MOS Fulkerson Prize and of an ACM STOC 20-year Test of Time Award. Mikkel's main work is in algorithms and data structures, where he has worked on both upper and lower bounds. Recently one of his main focuses has been on hash functions unifying theory and practice. Mikkel prefers to seek his mathematical inspiration in nature, combining the quest with his hobbies of bird watching and mushroom picking.

Hash functions bridging the gap from theory to practice

Randomized algorithms are often enjoyed for their simplicity, but the hash functions employed to yield the desired probabilistic guarantees are often too complicated to be practical.

Hash functions are used everywhere in computing, e.g., hash tables, sketching, dimensionality reduction, sampling, and estimation. Abstractly, we like to think of hashing as fully-random hashing, assigning independent hash values to every possible key, but essentially this requires us to store the hash values for all keys, which is unrealistic for most key universes, e.g., 64-bit keys.

In practice we have to settle for implementable hash functions, and often practitioners settle for implementations that are too simple, in that the algorithms end up working only for sufficiently random input. However, the real world is full of structured/non-random input.

The issue is severe because simple hash functions often work very well in tests with random input. Moreover, the problem is often that error events that should never happen in practice occur with far too high probability. This does not show up in a few tests, but it will show up over time once the system is in production.

Over the last decade there have been major developments in simple tabulation-based hash functions offering strong theoretical guarantees, so as to support fundamental properties such as Chernoff bounds, sparse Johnson-Lindenstrauss transforms, succinct hash tables, fully-random hashing on a given set w.h.p., etc.
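
For concreteness, here is a small Python sketch of simple tabulation hashing (parameters are illustrative): a 32-bit key is viewed as four 8-bit characters, each character indexes its own table of random words, and the results are XORed. Evaluation is just a few table lookups, yet schemes in this family come with the strong guarantees mentioned above.

import random

random.seed(0)  # fixed seed only so the example is reproducible
TABLES = [[random.getrandbits(32) for _ in range(256)] for _ in range(4)]

def simple_tabulation(key):
    h = 0
    for i in range(4):
        char = (key >> (8 * i)) & 0xFF   # extract the i-th 8-bit character
        h ^= TABLES[i][char]             # XOR the per-character table entries
    return h

print(hex(simple_tabulation(0xDEADBEEF)))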

I will discuss some of the principles of these developments and offer insights on how far we can bridge from theory (assuming fully-random hash functions) to practice (needing something that can actually be implemented efficiently).

Antonio Vergari

University of Edinburgh

Antonio Vergari is a Reader (Associate Professor) in Machine Learning and a member of the ELLIS Unit at the University of Edinburgh. His research focuses on efficient and reliable machine learning in the wild, tractable probabilistic modeling, and combining learning with complex reasoning. He is interested in unifying probabilistic reasoning. Recently, he has been awarded an ERC Starting Grant called 'UNREAL - a Unified REAsoning Layer for Trustworthy ML'. Previously he was a postdoc in the StarAI Lab led by Guy Van den Broeck at UCLA, and before that he did a postdoc at the Max Planck Institute for Intelligent Systems in Tübingen, Germany, supervised by Isabel Valera. He obtained a PhD in Computer Science and Mathematics at the University of Bari, Italy. He has published several conference and journal papers in top-tier AI and ML venues such as NeurIPS, ICML, UAI, ICLR, AAAI, ECML-PKDD and more, some of which have been awarded oral and spotlight presentations. He frequently engages with the tractable probabilistic modeling and deep generative models communities by organizing a series of events: the Tractable Probabilistic Modeling Workshop (ICML 2019, UAI 2021, UAI 2022, UAI 2023) and the Tractable PRobabilistic Inference MEeting (T-PRIME) at NeurIPS 2019; he has also presented a series of tutorials on complex probabilistic reasoning and models at UAI 2019, AAAI 2020, ECAI 2020, IJCAI 2021, and NeurIPS 2022. He organized a Dagstuhl Seminar on 'Recent Advancements in Tractable Probabilistic Inference' with Priyank Jaini, Kristian Kersting, and Max Welling.

Probabilistic circuits: from tractable probabilistic inference to reliable neuro-symbolic AI

Guaranteeing the safety and reliability of deep learning models is of crucial importance, especially in many high-stakes application scenarios. In these lectures, I will focus on the key challenge of designing probabilistic deep learning models that are reliable and yet efficient by design. I will do so within the framework of probabilistic circuits: overparametrized computational graphs that are just neural networks with lots of structure, enough to guarantee the tractable computation of the probabilistic reasoning scenarios of interest while not compromising their expressiveness. Second, I will discuss how we can use circuits to build a reliable foundation for neuro-symbolic AI. That is, for example, to provably satisfy certain constraints that we can express in propositional logic in neural networks, so as to increase their performance and robustness. These models can be thought of as being "verified by design", and I will showcase some recent applications of this constraint satisfaction by design, e.g., scaling link prediction in graphs with millions of nodes.
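
To give a flavour of the tractability argument (a toy example of our own, with arbitrary parameters), the snippet below encodes a probabilistic circuit as a mixture of two fully factorized distributions over binary variables: a marginal such as P(X1 = 1) is computed in a single feed-forward pass by setting marginalized leaves to 1, with no explicit sum over the other variables.

def leaf(theta, value):
    # Bernoulli leaf; value None means "marginalized out", which evaluates to 1.
    if value is None:
        return 1.0
    return theta if value == 1 else 1.0 - theta

def circuit(x1, x2):
    # Two product units (factorized components) combined by a sum unit (mixture).
    component_a = leaf(0.9, x1) * leaf(0.2, x2)
    component_b = leaf(0.1, x1) * leaf(0.7, x2)
    return 0.3 * component_a + 0.7 * component_b

p_x1 = circuit(1, None)   # P(X1 = 1) in one pass: 0.3*0.9 + 0.7*0.1 = 0.34
total = sum(circuit(a, b) for a in (0, 1) for b in (0, 1))
print(p_x1, total)        # total == 1.0: the circuit is normalized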