
On the Relationship of AI and Programs

by Eduard Kamburjan — 20 October 2023
Topics: interview

André Platzer is the Alexander von Humboldt Professor for Logic of Autonomous Dynamical Systems at Karlsruhe Institute of Technology and leads the Logical Systems Lab at Carnegie Mellon University. He develops logics for dynamical systems to characterize the logical foundations of cyber-physical systems and to answer the question of how we can trust a computer to control physical processes. The solution to this challenge is the key to enabling computer assistance that we can bet our lives on. Prof. Platzer pursues this challenge with the principled design of programming languages with logics that can provide proofs as correctness guarantees.

What do you understand by AI, and can you perhaps relate some of the latest results to that?

Artificial Intelligence (AI) describes the subfield of computer science devoted to techniques enabling intelligent behavior. This intelligence can come in very different forms. It is intelligent to play games strategically, which is why computer programs that win exceedingly challenging games are AI. It is intelligent to draw valid conclusions from given assumptions, which is why programs performing logical inferences are AI. It is intelligent to infer general principles from individual observations, which is why induction and learning programs are AI. And it is intelligent to lead a conversation, which is why programs that understand or produce natural language are AI.

And yet, many things that originally started out as core AI, with its incomprehensible and somewhat amorphous characteristics, mature and end up in separate fields once their working principles are sufficiently understood, systematized, and demystified. This has led to areas such as algorithmic game theory and theorem proving on the side of symbolic AI, as well as to machine learning and natural language processing on the side of subsymbolic AI. Symbolic AI deals with symbolic facts and their intelligent processing, facts such as “the queen is next to the king on the chessboard” or “all reducible polynomials can be split into multiple factors”. Subsymbolic AI, instead, deals with facts that have a less clear semantics and are more numerical and harder to explain, such as the particular distribution of weights in a neural network that was learned from example images.

Some uses of AI cut across this divide between subfields of computer science. For instance, symbolic AI techniques help ensure the safety of subsymbolic AI in cyber-physical systems (CPS). This is relevant because the safety of control algorithms for CPS such as robots, cars, and aircraft depends on how they interact with the system’s continuous dynamics of motion. And yet, the use of subsymbolic AI leads to parts of the software that are harder to understand, because they are not programmed manually but, instead, learned from data or experience. So, symbolic AI not only helps make subsymbolic AI safe but also improves the separation of concerns, because subsymbolic AI is responsible for optimality while symbolic AI is responsible for safety (AAAI'18).
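To illustrate this separation of concerns, here is a minimal sketch in the spirit of the AAAI'18 work: a learned controller proposes an action, a symbolic monitor checks it against a braking-distance condition, and a verified fallback takes over when the check fails. All names, the dynamics, and the concrete safety condition are simplified assumptions for illustration, not the actual system.

```python
# Sketch: a symbolic safety monitor shielding a learned (subsymbolic) controller.
# The learned policy optimizes performance; the monitor guarantees safety.

def safe_after(pos: float, vel: float, accel: float,
               obstacle: float, max_brake: float, dt: float) -> bool:
    """Symbolic check: after applying `accel` for one control cycle,
    can the vehicle still brake to a stop before `obstacle`?"""
    new_vel = max(0.0, vel + accel * dt)
    new_pos = pos + vel * dt + 0.5 * accel * dt * dt
    stopping_distance = new_vel ** 2 / (2 * max_brake)
    return new_pos + stopping_distance < obstacle

def controlled_step(pos, vel, obstacle, learned_policy,
                    max_brake=5.0, dt=0.1):
    # Subsymbolic part: propose an action optimized for performance.
    proposed_accel = learned_policy(pos, vel, obstacle)
    # Symbolic part: accept the proposal only if it is provably safe.
    if safe_after(pos, vel, proposed_accel, obstacle, max_brake, dt):
        return proposed_accel   # safe: use the learned action
    return -max_brake           # unsafe: fall back to verified braking

# Example: a greedy policy that always accelerates; close to the obstacle,
# the monitor rejects the proposal and the verified braking action is used.
print(controlled_step(pos=0.0, vel=10.0, obstacle=11.0,
                      learned_policy=lambda p, v, o: 2.0))   # -> -5.0
```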

Other uses of AI crosscut the other way around, where subsymbolic AI is used to help better automate particularly challenging aspects of symbolic AI. For example, by using a variant of the AlphaZero algorithm to train a neural network oracle for a nondeterministic search program, subsymbolic AI can learn to discover and prove symbolic theorems, e.g., invariants of single-loop arithmetic programs (NeurIPS'22). This approach even works entirely without training data by simultaneously learning a teacher agent with the same approach.
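To make the notion of a loop invariant concrete, here is a toy single-loop arithmetic program (my illustration, not an example from the NeurIPS'22 paper) annotated with the invariant 2*s == i*(i+1): it holds on entry, is preserved by every iteration, and implies the postcondition at exit. Discovering such invariants automatically is exactly the search problem the learned oracle guides.

```python
# A single-loop arithmetic program with its invariant made explicit.
# Assumes n >= 0; the invariant 2*s == i*(i+1) certifies the result.

def sum_to(n: int) -> int:
    i, s = 0, 0
    assert 2 * s == i * (i + 1)         # invariant holds on entry
    while i < n:
        i += 1
        s += i
        assert 2 * s == i * (i + 1)     # invariant preserved by the body
    # At exit i == n, so the invariant gives 2*s == n*(n+1), i.e., the sum.
    return s

assert sum_to(10) == 55
```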

What do you think is the role of the software sciences with the ongoing widespread adoption of new AI methods?

One might naively assume that if AI is used more in computers, then programmers and software scientists are needed less. That is, of course, about as true as the suspicion that architects are no longer needed after the invention of computer-aided design software.

Even if AI is used in more parts of software, that still makes the programming and the science of this programming important. In fact, it makes them even more important and more challenging than conventional programming, because at least subsymbolic AI techniques often significantly complicate questions of reliability and quality of the outcome. Likewise, insightful game search algorithms have shifted where the challenges are, but have not removed all challenges in writing computer programs that win games. Similarly, general automated theorem proving techniques have contributed to the technological repertoire but merely shifted rather than removed the challenge of computerized logical inference.

What is special about recent AI techniques is that large language models (LLMs) have been trained on such huge amounts of data that the same LLM can be used to answer a very wide range of questions. However, even that still overlooks the fact that

  1. LLMs still need very skillful crafting of the questions to obtain the intended results in the first place, and
  2. it is still exceedingly difficult to justify the answer in the end.

Phenomenon 1 indicates that more techniques in general, and more AI techniques specifically, shift rather than eradicate the need for human insight and cleverness. Phenomenon 2 is often brought up to justify why software verification is becoming more important rather than less important when using LLMs to generate programs. But I believe this is fundamentally not quite right, because justifications of programs are equally important no matter what or who came up with them. The fact that a program is in need of a formal justification is merely especially apparent when an LLM generated that program. But a justification of correctness is also needed when a human writes a program. The only difference comes down to deliberate intent. Even if a human programmer does not formally justify the correctness of the program she wrote, she will still have written the program intentionally while at least informally convincing herself of its correctness. For an LLM, there is presently no way to know the intention of the constructions beyond the textual output.
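As a small illustration of why a justification attaches to the program rather than its author, consider a specification-level check that treats the implementation as a black box. The sketch below is hypothetical and uses plain assertions over a handful of inputs; such a bounded check is of course no substitute for a proof, but it applies identically whether a person or an LLM produced the code.

```python
# The same justification obligation applies no matter who wrote the code.
# Hypothetical spec check for a sorting function, independent of authorship.

def sort_list(xs):
    # implementation under scrutiny (human-written or LLM-generated)
    return sorted(xs)

def satisfies_sorting_spec(f, inputs) -> bool:
    """Check the two defining properties of sorting on the given inputs:
    the output is ordered, and it is a permutation of the input."""
    for xs in inputs:
        ys = f(xs)
        ordered = all(a <= b for a, b in zip(ys, ys[1:]))
        permutation = sorted(xs) == sorted(ys)
        if not (ordered and permutation):
            return False
    return True

print(satisfies_sorting_spec(sort_list, [[], [3, 1, 2], [2, 2, 1]]))  # True
```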

Dijkstra famously wrote on the differences in attitude towards academic CS research on both sides of the Atlantic in the 1980s. You spent many years doing research in the US before returning to Europe. Is there a difference in how CS is viewed and treated in Europe and the US nowadays?

Except for a few select US universities such as CMU, MIT, Stanford, and Cornell, symbolic logic and programming languages research is significantly more prominent in Europe than in the US. Europe should strategically leverage this strength in these crucial technical areas of symbolic AI. But Europe has significant catching up to do in subsymbolic AI compared to the US. This difference is most apparent in the geographical distribution of the industry developing subsymbolic AI tools and LLMs.