
From Static to Runtime Verification

by Bettina Könighofer — 20 March 2023
Topics: interview

Tom Henzinger was the founding president of the Institute of Science and Technology Austria (ISTA) from 2009 to 2022. He holds a Ph.D. from Stanford University and Dr.h.c. degrees from Fourier University in Grenoble and from Masaryk University in Brno. He was Assistant Professor at Cornell University and Professor at the University of California, Berkeley. He was also Director at the Max-Planck Institute for Computer Science in Saarbrücken and Professor at EPFL in Lausanne. His research focuses on the theory of software systems, especially models, algorithms, and tools for the design and verification of reliable software systems.

Can you tell us a little bit about your current research?

The main focus of my research has shifted from static verification to runtime verification in the past couple of years. Runtime verification has, wrongly, been looked at as a poor man’s version of verification by many, including myself. There is not only a practical need for runtime verification; there is also much interesting theory to be done. For example, the quantitative verification agenda, where one is interested in non-boolean specifications, fits very well with runtime verification. A runtime monitor may watch not only correctness or failure, but also the degree of failure, as well as performance and resource consumption. Such quantitative monitors can be best-effort, relying on approximation theories, and third-party, increasing trust in deployed systems.
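To make the idea concrete, here is a minimal sketch of what such a quantitative monitor might look like; the event names, the latency property, and the class itself are illustrative assumptions, not anything from the interview. Where a boolean monitor would only report whether a deadline was ever missed, a quantitative one reports by how much.

```python
# A toy quantitative runtime monitor (an invented illustration): instead of a
# boolean verdict, it maintains a numeric value over the observed trace, here
# the worst latency between each "req" event and the following "ack".

class MaxLatencyMonitor:
    def __init__(self):
        self.pending = None   # timestamp of the open request, if any
        self.worst = 0.0      # quantitative verdict so far

    def step(self, event, time):
        """Consume one trace event; constant work per event."""
        if event == "req" and self.pending is None:
            self.pending = time
        elif event == "ack" and self.pending is not None:
            self.worst = max(self.worst, time - self.pending)
            self.pending = None
        return self.worst     # current value of the quantitative property

trace = [("req", 0.0), ("ack", 1.5), ("req", 2.0), ("ack", 7.0)]
monitor = MaxLatencyMonitor()
for event, time in trace:
    monitor.step(event, time)
print(monitor.worst)  # 5.0: the degree of failure against, say, a 2.0 deadline
```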

Can you recommend us a paper of yours that you are particularly proud of?

A paper that I consider fundamental, but which is not among my most cited papers, is called “Fair Simulation”; it was written in 1997 with my then-student Sriram Rajamani and then-postdoc Orna Kupferman in Berkeley. Robin Milner’s simulation relation is an essential tool for establishing the preservation of properties across different levels of abstraction, and we extended it beyond safety properties. The extension was not obvious at the time (in fact, there were several flawed proposals in the literature), and the game-theoretic interpretation of simulation was key. This, together with our simultaneous work on “Alternating-time Temporal Logic” (with Rajeev Alur and Orna Kupferman), started my long interest in games for specification, verification, and synthesis.
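For context, this is the standard textbook definition of Milner's simulation relation and its game reading; the fairness condition noted below is only a rough gloss of the paper's notion, not a restatement of its technical contribution.

```latex
% Milner's simulation relation: R relates implementation states to
% specification states, and every step on the left must be matched on the right.
\[
  (s,t) \in R \;\land\; s \xrightarrow{\,a\,} s'
  \quad\Longrightarrow\quad
  \exists\, t' .\; t \xrightarrow{\,a\,} t' \;\land\; (s',t') \in R
\]
% Game reading: a challenger moves in the implementation, a matcher replies in
% the specification; t simulates s iff the matcher can keep replying forever
% from (s,t). Fair simulation, roughly, asks in addition that whenever the
% challenger's infinite run is fair, the matcher's matching run is fair too.
```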

What are the most exciting current technical advances in your field? Can you recommend us an interesting paper that you have read recently?

An exciting recent development that took me by surprise is the notion of “history-determinism” in automata theory. As the name suggests, history-determinism lies between determinism and nondeterminism: it is nondeterminism that can be resolved by remembering the past, without having to guess the future. History-determinism is at its core again a game-theoretic concept: Nir Piterman and I had called it “good-for-games” but did not see its full potential. For an introduction, I recommend the survey by Udi Boker and Karoliina Lehtinen, which will appear soon. To me this is a beautiful example of how new life can be breathed into topics that seem to have been settled for many years.
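To illustrate the definition, here is a small self-contained sketch; the automaton and the brute-force resolver search are an invented toy example, not taken from the survey. The automaton accepts words over {a, b} ending in ‘a’, but the choice at its initial state amounts to guessing whether the current letter is the last one, so no resolver that sees only the past can succeed.

```python
# Toy illustration of history-determinism (an invented example): an automaton
# is history-deterministic if a "resolver" can pick transitions using only the
# input read so far and still yield an accepting run on every accepted word.

from itertools import product

INIT, ACCEPTING = "q0", {"q1"}
DELTA = {
    ("q0", "a"): {"q0", "q1"},  # the nondeterministic choice: is this 'a' last?
    ("q0", "b"): {"q0"},
    ("q1", "a"): set(),         # q1 only serves to accept
    ("q1", "b"): set(),
}

def accepts(word):
    """Standard NFA acceptance via the subset construction."""
    current = {INIT}
    for letter in word:
        current = {q for s in current for q in DELTA[(s, letter)]}
    return bool(current & ACCEPTING)

def run_with_resolver(word, resolver):
    """Follow the single run chosen by the resolver; None if it gets stuck."""
    state = INIT
    for i, letter in enumerate(word):
        if not DELTA[(state, letter)]:
            return None
        state = resolver(word[:i], state, letter)  # sees only the past
    return state

def make_resolver(table):
    """A resolver that fixes the one choice at (q0, 'a') per history."""
    def resolver(history, state, letter):
        if (state, letter) == ("q0", "a"):
            return table[history]
        return next(iter(DELTA[(state, letter)]))
    return resolver

def is_history_deterministic(max_len=3):
    """Brute-force: does any resolver work for all words up to max_len?"""
    histories = ["".join(h) for n in range(max_len)
                 for h in product("ab", repeat=n)]
    words = ["".join(w) for n in range(1, max_len + 1)
             for w in product("ab", repeat=n)]
    for choice in product(["q0", "q1"], repeat=len(histories)):
        resolver = make_resolver(dict(zip(histories, choice)))
        if all(run_with_resolver(w, resolver) in ACCEPTING
               for w in words if accepts(w)):
            return True
    return False

print(is_history_deterministic())  # False: resolving (q0, 'a') needs the future
```

A deterministic automaton for the same language exists, of course; the point of the example is only that this particular nondeterministic one cannot be resolved by history.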

Who would you like to see interviewed on the ETAPS blog?

Another cool development in verification is that builders of cutting-edge software systems have started to use heavy formal machinery. For example, it seems that much of the PDOS (Parallel and Distributed Operating Systems) group at MIT does verification these days. It would be interesting to hear about their motivations and objectives.