TACAS 2024

30th International Conference on Tools and Algorithms for the Construction and Analysis of Systems

Scope

TACAS is a forum for researchers, developers and users interested in rigorously based tools and algorithms for the construction and analysis of systems.

The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility and efficiency of tools and algorithms for building systems.

Theoretical papers with clear relevance for tool construction and analysis, as well as tool descriptions and case studies with a conceptual message, are all encouraged. The topics covered by the conference include, but are not limited to:

  • specification and verification techniques;
  • software and hardware verification;
  • analytical techniques for real-time, hybrid, or stochastic systems;
  • analytical techniques for safety, security, or dependability;
  • SAT and SMT solving;
  • theorem proving;
  • model checking;
  • static and dynamic program analysis;
  • testing;
  • abstraction techniques for modeling and verification;
  • compositional and refinement-based methodologies;
  • system construction and transformation techniques;
  • machine-learning techniques for synthesis and verification;
  • tool environments and tool architectures;
  • applications and case studies.

Important Dates

The important dates are available in the Joint Call for Papers.

Submission Categories

TACAS accepts four types of submissions: research papers, case-study papers, regular tool papers, and tool demonstration papers.

Research Papers identify and justify a principled advance to the theoretical foundations for the construction and analysis of systems. Where applicable, they are supported by experimental validation. Artifact evaluation for regular research papers is not mandatory but strongly encouraged.

Case-Study Papers are full-length case studies that describe the application of techniques developed by the community to a single problem, or a set of problems, of practical importance, preferably in a real-world setting. Case-study papers must describe the problem of interest and its novelty. They must provide a detailed description of the case study (or studies) and of the related techniques used to solve the problem of interest for the case-study application. Modeling aspects of the case study must be discussed. The case-study paper must provide detailed results on the application of formal techniques and lead to a conclusion that could be of interest beyond TACAS (to the research community pertaining to the case study). Artifact evaluation is currently not mandatory for case-study papers, but undergoing artifact evaluation is strongly encouraged.

Regular Tool Papers are full-length papers that describe a tool of interest to the community, or a new version of an existing tool built using novel algorithmic and engineering techniques. Regular tool papers must describe tools of broad interest and utility to the TACAS community. The papers must clearly describe the problem to be solved, its importance, related work, the techniques used in the tool and their novelty, the construction of the tool, and its unique features; they must also discuss how the tool is used and present benchmarking of the tool, including comparisons with other tools and with previous versions of the same tool. For tools from industry, the description should be useful in allowing community members to reproduce some of the key techniques or “tricks” in their own tools. All tool papers must undergo mandatory artifact evaluation.

Tool-Demonstration Papers are short papers that demonstrate a new tool or the application of an existing tool to a significant and novel case study. Tool-demonstration papers will be accepted if the PC decides that they are sufficiently interesting and novel to the community. Artifact evaluation is mandatory for these papers.

Paper Submission Instructions

Submissions must follow the formatting guidelines of Springer’s LNCS (use the llncs.cls class) and be submitted electronically in PDF through the EasyChair author interface of TACAS 2024.
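For illustration only, a minimal submission skeleton using the llncs.cls class could look as follows; the title, author, institute, and bibliography file names below are placeholders rather than prescribed values, and double-blind regular research papers must omit the author and institute information entirely:

  \documentclass{llncs}
  \begin{document}
  \title{Contribution Title}                   % placeholder title
  \author{First Author \and Second Author}     % omit for double-blind research papers
  \institute{Example Institute, Example City}  % omit for double-blind research papers
  \maketitle
  \begin{abstract}
    A short abstract of the contribution.
  \end{abstract}
  \section{Introduction}
  Main text of the paper.
  \bibliographystyle{splncs04}                 % Springer LNCS bibliography style
  \bibliography{references}                    % placeholder bibliography file
  \end{document}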

The review process of TACAS 2024 is:

  • double-blind for regular research papers. As such, authors of regular research papers must omit their names and institutions, refer to their prior work in the third person (just as they would refer to prior work by others), and must not include acknowledgments that might identify them.
  • single-blind for case study papers, regular tool papers, and tool-demonstration papers.

The length of regular research papers, case-study papers, and regular tool papers is limited to 16 pages in llncs.cls format (excluding the bibliography and appendices). The length of tool-demonstration papers is limited to 6 pages in llncs.cls format (excluding the bibliography and appendices). Additional material can be placed in a clearly marked appendix at the end of the paper. The reviewers are, however, not obliged to read such appendices. If the paper is accepted, a full version including the appendix should be made publicly available via arXiv or a similar platform.

Submissions not adhering to the formatting and length requirements specified above will be rejected immediately.

Paper Evaluation

All papers will be evaluated by the program committee (PC), coordinated by the PC chairs and aided by the artifact evaluation chairs. All papers will be judged on novelty, significance, correctness, and clarity.

Reproducibility of results is of the utmost importance for TACAS. Therefore, we encourage all authors to include support for replicating the results of their papers. For theorems, this would mean providing proofs; for algorithms, this would mean including evidence of correctness and acceptable performance, either by a theoretical analysis or by experimentation; and for experiments, one should provide access to the artifacts used to generate the experimental data. Material that does not fit into the paper page limit may be provided as an appendix and/or on a supplementary website (without violating the double-blind submission requirements for regular research papers), with access appropriately enabled and license rights made clear. For example, the supplemental material for reviewing case-study papers and papers with experimental results could be classified as reviewer-confidential if necessary (e.g., if proprietary data are investigated or software is not open source).

TACAS 2024 will use a rebuttal phase for selected submissions (those in the grey zone).

Artifact Submission and Evaluation

TACAS 2024 implements mandatory artifact evaluation for regular tool paper and tool demonstration paper submissions. Authors of these submissions must submit the respective artifacts by October 26, 2023 (AoE). The artifacts will be evaluated, and the outcome will be considered in the paper’s acceptance decision.

TACAS 2024 also implements a voluntary artifact evaluation for regular research paper and case-study paper submissions. Authors of these submissions are encouraged to submit their respective artifacts by October 26, 2023 (AoE). The artifacts will be evaluated after paper acceptance, provided the submission has been accepted as a regular research paper or case-study paper at TACAS 2024. The outcome of the voluntary artifact evaluation will not alter the paper acceptance decision.

Note: Guidelines on artifact submissions will soon be made available.

Competition on Software Verification

TACAS 2024 will host the 13th instance of the Competition on Software Verification, SV-COMP.

Program Committee

PC Chairs

PC Members

  • Alessandro Abate (University of Oxford)
  • Erika Ábrahám (RWTH Aachen)
  • S Akshay (IIT Bombay)
  • Elvira Albert (Universidad Complutense de Madrid)
  • Leonardo Alt (Ethereum Foundation)
  • Suguman Bansal (Georgia Institute of Technology)
  • Nikolaj Bjørner (Microsoft Research)
  • Ahmed Bouajjani (IRIF, Université Paris Cité)
  • Claudia Cauli (Amazon Web Services)
  • Rance Cleaveland (University of Maryland)
  • Mila Dalla Preda (University of Verona)
  • Rayna Dimitrova (CISPA Helmholtz Center for Information Security)
  • Madalina Erascu (West University of Timisoara)
  • Javier Esparza (Technical University of Munich)
  • Bernd Finkbeiner (CISPA)
  • Carlo Furia (Università della Svizzera Italiana)
  • Alberto Griggio (Fondazione Bruno Kessler)
  • Arie Gurfinkel (University of Waterloo)
  • Holger Hermanns (Saarland University)
  • Marijn Heule (Carnegie Mellon University)
  • Hossein Hojjat (Tehran Institute for Advanced Studies)
  • Nils Jansen (Radboud University)
  • Sebastian Junges (Radboud University)
  • Amir Kafshdar Goharshady (Hong Kong University of Science and Technology)
  • Benjamin Kaminski (Saarland University)
  • Guy Katz (Hebrew University of Jerusalem)
  • Laura Kovács (TU Wien)
  • Gergely Kovasznai (Eszterházy Károly University Eger)
  • Tamás Kozsik (Eötvös Loránd University Budapest)
  • Anthony Lin (TU Kaiserslautern)
  • Dorel Lucanu (Alexandru Ioan Cuza University Iasi)
  • Anna Lukina (TU Delft)
  • Filip Maric (University of Belgrade)
  • Laura Nenzi (University of Trieste)
  • Aina Niemetz (Stanford University)
  • Elizabeth Polgreen (University of Edinburgh)
  • Kristin Rozier (Iowa State University)
  • Cesar Sanchez (IMDEA Software Institute)
  • Mark Santolucito (Barnard College)
  • Anne-Kathrin Schmuck (Max-Planck-Institute for Software Systems)
  • Sharon Shoham (Tel Aviv University)
  • Mihaela Sighireanu (ENS Paris-Saclay)
  • Martin Suda (Czech Technical University in Prague)
  • Silvia Lizeth Tapia Tarifa (University of Oslo)
  • Caterina Urban (INRIA)
  • Yakir Vizel (Technion)
  • Tomas Vojnar (Brno University of Technology)
  • Georg Weissenbacher (TU Wien)
  • Sarah Winkler (Free University of Bozen-Bolzano)
  • Ningning Xie (University of Toronto and Google Brain)

Artifact Evaluation Committee

Artifact Evaluation Chairs

Artifact Evaluation Committee

TBA