ETAPS 2012: 24 March - 1 April 2012, Tallinn, Estonia
Category theory is a well-known mathematical modeling language with a wide range of applications in mathematics and computer science, including especially the semantical foundations of topics in software science and development. The intention of the ACCAT workshop is to bring together leading researchers in applied and computational category theory with those in software science and development, in order to transfer categorical concepts and theories in both directions.
Thomas Soboll (Univ. of Salzburg, Austria), Ulrike Golas (Zuse Institute Berlin, Germany)
Though proof assistants such as Agda, Coq, Isabelle, Matita, Mizar, and Twelf, but also ACL2, HOL Light, HOL4, PVS, and many others are characterized as interactive theorem provers, automation is present in all of them in various forms, though probably not to the extent one would like. The purpose of this workshop is to gather people working on all the different aspects of automation relevant to interactive theorem proving.
Hugo Herbelin (INRIA Paris - Rocquencourt, France)
Bidirectional transformations (bx) are a mechanism for maintaining the consistency of at least two related sources of information. Such sources can be databases, software models, documents, graphs, and trees. The methodologies used range from classical program transformation to graph transformation techniques, from ad-hoc techniques for data synchronization to the development of domain-specific languages and their integration. The aim of the workshop is to bring together researchers, established and new, interested in bidirectional transformations from different perspectives, such as: language-based approaches, software/model transformations, and data/schema co-evolution.
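As a concrete illustration, the lens formalism is one common way to structure a bidirectional transformation: a forward `get` extracting a view from a source, and a backward `put` writing an updated view back. The sketch below is illustrative only; the record fields and function names are invented, not taken from any bx framework.

```python
# A minimal "lens" sketch for a bidirectional transformation.
# All names here are illustrative, not from any bx library.

def get(source):
    """Forward direction: extract a view (the name only) from a record."""
    return source["name"]

def put(source, view):
    """Backward direction: write an updated view back into the source."""
    updated = dict(source)
    updated["name"] = view
    return updated

# Round-trip laws that well-behaved lenses satisfy:
#   GetPut: put(s, get(s)) == s
#   PutGet: get(put(s, v)) == v
record = {"name": "Ada", "age": 36}
assert put(record, get(record)) == record          # GetPut
assert get(put(record, "Grace")) == "Grace"        # PutGet
```

The two round-trip laws are exactly the consistency conditions the workshop's topic is about: neither direction may lose the other's information.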
Frank Hermann (Techn. Univ. of Berlin, Germany), Janis Voigtländer (Univ. of Bonn, Germany)
Bytecode, such as that produced by Java and .NET compilers, has become an important topic of interest for both industry and academia. The industrial interest stems from the fact that bytecode is typically used for Internet and mobile device applications (smart cards, phones, etc.), where security is a major issue. Moreover, bytecode is device-independent and allows dynamic loading of classes, which provides an extra challenge for the application of formal methods. The unstructured nature of the code and the pervasive presence of the operand stack pose further challenges for the analysis of bytecode. This workshop will focus on theoretical and practical aspects of semantics, verification, analysis, certification and transformation of bytecode.
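To see why the operand stack complicates analysis, consider a toy stack-machine interpreter. The instruction set below is invented for illustration and is far simpler than real Java or .NET bytecode:

```python
# A toy operand-stack interpreter: instruction effects are implicit in
# stack depth, which is why bytecode analyses must track the stack.
# The instruction set is invented, not real JVM bytecode.

def run(code):
    stack = []
    for op, *args in code:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "dup":
            stack.append(stack[-1])
    return stack

# Computes (2 + 3), then duplicates the result.
assert run([("push", 2), ("push", 3), ("add",), ("dup",)]) == [5, 5]
```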
Marieke Huisman (Univ. of Twente, Netherlands)
In more than a decade of research, it has been established that a wide variety of state-based dynamical systems, like transition systems, automata (including weighted and probabilistic variants), Markov chains, and game-based systems, can be treated uniformly as coalgebras. Coalgebra has developed into a field of interest in its own right, presenting a deep mathematical foundation, a growing field of applications, and interactions with various other fields such as reactive and interactive system theory, object-oriented and concurrent programming, formal system specification, modal and description logics, artificial intelligence, dynamical systems, control systems, category theory, algebra, analysis, etc. The aim of the workshop is to bring together researchers with a common interest in the theory of coalgebras, their logics, and their applications.
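A minimal illustration of the coalgebraic view: a system is a map from states to an observation paired with a next state. The sketch below, with invented names, presents the stream of natural numbers this way:

```python
# A state-based system as a coalgebra: a map from states to
# (observation, next state). Here: the stream of natural numbers.
# Naming is illustrative; no coalgebra library is assumed.

def nat_coalgebra(state):
    """Coalgebra c : S -> A x S for the stream functor."""
    return state, state + 1   # (head observation, tail state)

def observe(coalg, state, n):
    """Unfold the coalgebra n steps, collecting observations."""
    out = []
    for _ in range(n):
        head, state = coalg(state)
        out.append(head)
    return out

assert observe(nat_coalgebra, 0, 5) == [0, 1, 2, 3, 4]
```

The same `observe` works unchanged for any function of the shape `S -> A x S`, which is the uniformity the paragraph above refers to.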
Dirk Pattinson (Imperial College London, UK), Lutz Schröder (DFKI Bremen, Germany)
The area of Implicit Computational Complexity (ICC) grew out from several proposals to use logic and formal methods to provide languages for complexity-bounded computation (e.g., polytime or logspace computation). It aims at studying the computational complexity of programs without referring to external measuring conditions or a particular machine model, but only by considering language restrictions or logical / computational principles entailing complexity properties. Several approaches have been explored for that purpose, like restrictions on primitive recursion and ramification, rewriting systems, linear logic, types and lambda calculus, interpretations of functional and imperative programs. The workshop is intended to be a forum for researchers interested in ICC to present new results and discuss recent developments in this area.
Ugo Dal Lago (Univ. of Bologna, Italy)
Component-based software design has received considerable attention in industry and academia in the past decade. In recent years, the growing need for trustworthy software systems and the increased relevance of systems reliability, performance, and scalability have stimulated the emergence of formal techniques and architecture modelling approaches for the specification and implementation of component-based software architectures. Both have to deal with an increasing complexity in software systems, challenging analytical methods as well as modelling techniques.
FESCA aims to address the open question of how formal methods can be applied effectively to these new contexts and challenges. FESCA is interested in both the development and application of formal methods in component-based development and tries to cross-fertilize their research and application.
Barbora Bühnová (Masaryk Univ., Czech Republic), Lucia Kapová (Karlsruhe Inst. of Techn., Germany)
Fixed points play a fundamental role in several areas of computer science and logic by justifying induction and recursive definitions. The construction and properties of fixed points have been investigated in many different frameworks such as: design and implementation of programming languages, program logics, databases. The aim of this workshop is to provide a forum for researchers to present their results to those members of the computer science and logic communities who study or apply the theory of fixed points.
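The classical construction behind many of these applications is Kleene iteration: starting from the bottom element and applying a monotone function until it stabilises yields the least fixed point. A small sketch, using an invented reachability example on finite sets:

```python
# Kleene iteration: compute the least fixed point of a monotone function
# on finite sets by iterating from the bottom element (the empty set).
# Example: the states reachable from state 0 in an invented graph.

def lfp(f, bottom=frozenset()):
    x = bottom
    while True:
        y = f(x)
        if y == x:
            return x
        x = y

edges = {0: {1}, 1: {2}, 2: {2}, 3: {0}}

def step(reached):
    # F(X) = {0} ∪ successors(X); its least fixed point is the set of
    # states reachable from 0.
    succ = {t for s in reached for t in edges.get(s, ())}
    return frozenset({0}) | frozenset(succ)

assert lfp(step) == frozenset({0, 1, 2})
```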
Zoltán Ésik (Univ. of Szeged, Hungary), Dale Miller (INRIA Saclay - Île-de-France, France)
Component-based design is widely considered as a major approach to developing complex systems in a time- and cost-effective way. Component interfaces are central in this approach and summarize the externally visible properties of a component which can be syntactic properties such as operation signatures, but can also be behavioral and extra-functional properties such as quality of service, security and dependability. In recent years, rich interface formalisms have been proposed to specify legal sequences of messages, or resource and timing constraints. The challenge is to achieve compositionality - the key requirement for the effective analysis and prediction of global system behavior based on individual component behaviors. The aim of this workshop is to bring together researchers who are interested in the formal underpinnings of interface technologies.
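As a hedged sketch of a behavioural interface, the automaton below encodes legal message sequences for an invented file-like protocol and checks whether a client's call sequence respects it:

```python
# A behavioural interface as an automaton of legal message sequences.
# The protocol (open before read, close at the end) is invented.

INTERFACE = {                       # (state, message) -> next state
    ("closed", "open"):  "opened",
    ("opened", "read"):  "opened",
    ("opened", "close"): "closed",
}

def legal(calls, state="closed"):
    """Does the call sequence follow the interface and end properly?"""
    for msg in calls:
        if (state, msg) not in INTERFACE:
            return False            # illegal message in this state
        state = INTERFACE[(state, msg)]
    return state == "closed"        # must end in the accepting state

assert legal(["open", "read", "read", "close"])
assert not legal(["read", "open"])  # read before open is illegal
```

Checking two such automata against each other, rather than a trace against one, is the compatibility and compositionality question the workshop addresses.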
Sebastian Bauer (Ludwig Maximilian Univ. Munich, Germany), Jean-Baptiste Raclet (IRIT, Toulouse, France)
The topic of the event is graph search in all its forms in computer science. Graph search algorithms tend to have common characteristics, such as duplicate detection, independent of their application domain. Over the past few years, it has been shown that the scalability of such algorithms can be dramatically improved by using, e.g., external memory, by exploiting parallel architectures, such as clusters, multi-core CPUs, and graphics processing units, and by using heuristics to guide the search. The goal of this event is to gather scientists from different communities, such as model checking, artificial intelligence planning, game playing, and algorithm engineering, who do research on graph search algorithms, so that awareness of each other's work is increased.
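The duplicate detection mentioned above can be illustrated by a plain breadth-first search keeping a closed set of visited states; the graph here is invented:

```python
# BFS with duplicate detection via a closed set, the shared trait the
# text notes across application domains. Plain Python, no library assumed.
from collections import deque

def bfs(start, successors, goal):
    """Return the length of a shortest path from start to goal, or None."""
    frontier = deque([(start, 0)])
    closed = {start}                     # duplicate detection
    while frontier:
        state, depth = frontier.popleft()
        if state == goal:
            return depth
        for nxt in successors(state):
            if nxt not in closed:        # skip already-seen states
                closed.add(nxt)
                frontier.append((nxt, depth + 1))
    return None

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
assert bfs("a", lambda s: graph[s], "d") == 2
```

External-memory and parallel variants keep this structure but change how the closed set and frontier are stored and partitioned.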
Anton Wijs (Eindhoven Univ. of Techn., Netherlands), Dragan Bošnački (Eindhoven Univ. of Techn., Netherlands)
GT-VMT serves as a forum for all researchers and practitioners interested in the use of visual notations (especially graph-based), techniques and tools for the specification, modeling, validation, manipulation and verification of complex systems. The aim of the workshop is to promote engineering approaches that provide effective sound tool support for visual modeling languages, enhancing formal reasoning at the syntactic as well as semantic level (e.g., for model specification, model analysis, model transformation, and model consistency management) in different domains, such as UML, Petri nets, graph transformation or business process/workflow models.
This year's workshop has a special theme of constraints, and we particularly encourage submissions that focus on visual/graph constraints, ranging from underlying theory through to their utility in complex system specification, validation and verification.
Andrew Fish (Univ. of Brighton, UK), Leen Lambers (Hasso Plattner Inst. for Software Systems Engineering, Germany)
Interest in autonomous systems is increasing in both industry and academia. Such systems must operate with limited human intervention in a changing environment and must be able to compensate for significant system failures without external intervention. The most appropriate models of autonomous systems can be found in the class of hybrid systems (which study continuous-state dynamic processes via discrete-state controllers) that interact with their environment. This workshop brings together researchers interested in all aspects of autonomy and resilience of hybrid systems.
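A textbook example of a hybrid system is a thermostat: a discrete controller mode combined with continuous temperature dynamics. The sketch below simulates one with a simple Euler step; all constants are illustrative:

```python
# A minimal hybrid-system sketch: a thermostat with a discrete mode
# ("on"/"off") and continuous temperature dynamics, simulated with a
# simple Euler step. All constants are invented for illustration.

def simulate(temp, mode, steps, dt=0.1):
    for _ in range(steps):
        # discrete switching logic (the controller)
        if temp <= 18.0:
            mode = "on"
        elif temp >= 22.0:
            mode = "off"
        # continuous evolution in the current mode (the plant)
        rate = 1.0 if mode == "on" else -1.0
        temp += rate * dt
    return temp, mode

temp, mode = simulate(20.0, "off", 500)
# The trajectory settles into an oscillation around the band [18, 22].
assert 17.5 <= temp <= 22.5
```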
Manuela Bujorianu (Univ. of Manchester, UK)
This workshop addresses the interrelation between interactions, games and protocols. How does computer science deal with nondeterministic interactions, where the actions a system takes are not (completely) determined by the interactions the system is involved in? In computer science, nondeterministic interactions are usually described by protocols. However, these interactions can also be viewed as games. As is to be expected, games have become an increasingly important modeling tool for computer science where nondeterministic interactions are involved, from foundations in game semantics and reactive systems to applications in communication protocols and electronic business applications. The goal of this workshop is to bring researchers from industry and academia together and to explore how a better understanding of the interrelation between interactions, games and protocols leads to better-designed and more reliable nondeterministic interacting systems.
Bernd Finkbeiner (Univ. of Saarland, Germany), Johannes Reich (SAP, Germany)
The LDTA workshops bring together researchers interested in the field of formal language definitions and language technologies, with an emphasis on tools developed for or with these language definitions. Several specification formalisms like attribute grammars, action semantics, operational semantics, term rewriting, and algebraic approaches have been developed over the years. Of specific interest are the formal analysis of language specifications, and the automatic generation of language processing tools from such specifications. These tools typically perform some sort of program analysis, transformation, or generation. Also of interest are applications of such tools in domains including, but not limited to, modelling languages, re-engineering and re-factoring, aspect-oriented and domain-specific languages, XML processing, visualisation and graph transformation.
Anthony Sloane (Macquarie Univ., Australia), Suzana Andova (Eindhoven Univ. of Techn., Netherlands)
With the urge for more robust, verifiable and optimised programming languages, interest in linearity as a means of gaining more control over computational resources is increasing in several areas of computer science. On the theoretical side, this includes work on proof technology, complexity classes and, more recently, quantum computation; on the practical side, work on program analysis, expressive operational semantics, linear languages, and techniques for program transformation, update analysis and efficient implementation. The aim of this workshop is to bring together researchers who are currently developing theory and applications of linear calculi, to foster their interaction, provide a forum for presenting new ideas and work in progress, and enable newcomers to learn about current activities in this area.
Sandra Alves (Univ. of Porto, Portugal), Ian Mackie (École Polytechnique, France)
MBT is devoted to model-based testing of both software and hardware. Model-based testing uses models that describe the behavior of the system under consideration to guide such efforts as test selection and test results evaluation. Model-based testing has gained attention with the popularization of models in software/hardware design and development. Of particular importance are formal models with precise semantics, such as state-based formalisms. Testing with such models allows one to measure the degree of the product's conformance with the model. The intent of this workshop is to bring together researchers and users using different kinds of models for testing and to discuss the state of the art in theory, applications, tools, and industrialization of model-based testing.
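A minimal sketch of the idea: a finite-state model (here an invented turnstile) both supplies test inputs and serves as the oracle for conformance checking of an implementation:

```python
# Model-based testing sketch: a finite-state model guides test execution
# and acts as the oracle. The model, the toy implementation, and all
# names are invented for illustration.

MODEL = {  # (state, input) -> next state
    ("locked", "coin"):   "unlocked",
    ("locked", "push"):   "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

def model_run(seq, state="locked"):
    """Expected final state according to the model."""
    for inp in seq:
        state = MODEL[(state, inp)]
    return state

class Turnstile:                      # the "system under test"
    def __init__(self):
        self.state = "locked"
    def apply(self, inp):
        if inp == "coin":
            self.state = "unlocked"
        elif inp == "push":
            self.state = "locked"

def conforms(seq):
    """Does the implementation agree with the model on this sequence?"""
    sut = Turnstile()
    for inp in seq:
        sut.apply(inp)
    return sut.state == model_run(seq)

# Every input sequence over the model doubles as a conformance test case.
assert conforms(["coin", "push", "push", "coin"])
```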
Alexander Petrenko (Inst. for System Programming, Russian Academy of Sciences / Moscow State Univ., Russia)
The MSFP workshop is devoted to the derivation of functionality from structure. It is a celebration of the direct impact of theoretical computer science on programs as we write them today. Modern programming languages, and in particular functional languages, support the direct expression of mathematical structures, equipping programmers with tools of remarkable power and abstraction. Monadic programming in Haskell is the paradigmatic example, but there are many more mathematical insights manifest in programs and in programming language design: Freyd-categories in reactive programming, symbolic differentiation yielding context structures, and comonadic presentations of dataflow, to name but three. This workshop is a forum for researchers who seek to reflect mathematical phenomena in data and control.
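As a rough illustration of monadic sequencing outside Haskell, the sketch below transplants the Maybe monad's `bind` to Python: the structure handles failure propagation so individual steps need no explicit checks. All names are illustrative:

```python
# A Maybe-monad sketch of "structure yields functionality": sequencing
# computations that may fail, with the failure plumbing handled once,
# in bind, rather than at every call site. Purely illustrative.

def bind(value, f):
    """Maybe bind: propagate None, otherwise apply f."""
    return None if value is None else f(value)

def safe_div(x, y):
    return None if y == 0 else x / y

# (8 / 2) then (result / 2): no explicit None-check in the pipeline.
assert bind(safe_div(8, 2), lambda r: safe_div(r, 2)) == 2.0
assert bind(safe_div(8, 0), lambda r: safe_div(r, 2)) is None
```

In Haskell this is `>>=` with do-notation on top; the point here is only the shape of the abstraction, not a faithful encoding.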
James Chapman (Institute of Cybernetics, Estonia), Paul Blain Levy (Univ. of Birmingham, UK)
Applications on the web today are built using numerous interacting services; soon off-the-shelf CPUs will host thousands of cores, and sensor networks will be composed from a large number of processing units. Many normal applications will soon need to make effective use of thousands of computing nodes. At some level of granularity, computation in such systems will be inherently concurrent and communication-centred.
PLACES aims to offer a forum where researchers from different fields exchange new ideas on one of the central challenges for programming in the near future, the development of programming methodologies and infrastructures where concurrency and distribution are the norm rather than a marginal concern.
Simon Gay (Univ. of Glasgow, UK)
Quantitative aspects of computation refer to the use of physical quantities (time, bandwidth, etc.) as well as mathematical quantities (e.g., probabilities) for characterising the behaviour and determining the properties of systems. Such quantities play a central role in defining both the models of systems (architecture, language design, semantics) and the methodologies and tools for the analysis and verification of system properties. The aim of the QAPL workshop series is to discuss the explicit use of time, probability and general quantities, either directly in the model or as a tool for the analysis of systems.
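A small quantitative model for illustration: a discrete-time Markov chain whose state distribution after n steps is obtained by repeated weighted summation. The chain, its states, and its probabilities are invented:

```python
# A quantitative model sketch: a discrete-time Markov chain, with the
# state distribution after n steps computed by repeated weighted
# summation. States and probabilities are invented for illustration.

CHAIN = {            # state -> [(next state, probability), ...]
    "ok":   [("ok", 0.9), ("fail", 0.1)],
    "fail": [("ok", 0.5), ("fail", 0.5)],
}

def step(dist):
    """One step of the chain: push probability mass along transitions."""
    out = {s: 0.0 for s in CHAIN}
    for s, p in dist.items():
        for t, q in CHAIN[s]:
            out[t] += p * q
    return out

dist = {"ok": 1.0, "fail": 0.0}
for _ in range(2):
    dist = step(dist)

# After two steps: P(fail) = 0.9*0.1 + 0.1*0.5 = 0.14
assert abs(dist["fail"] - 0.14) < 1e-9
```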
Mieke Massink (CNR-ISTI, Italy), Herbert Wiklicky (Imperial College London, UK)
This workshop focuses on verification and testing of software changes and upgrades. Software is usually not written all at once, but is built incrementally.
Changes are done frequently during the lifetime of most systems and can introduce software errors that were not present in the old version, or expose errors that were present before but did not get exercised. In addition, upgrades are done gradually, so the old and new versions have to co-exist in the same system. This workshop will address the issue of efficient and reliable verification of system changes and upgrades by means of formal verification and dynamic analysis techniques.
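One simple dynamic-analysis technique for upgrades is differential testing: run the old and new versions on the same inputs and flag behavioural differences. The two versions below are invented examples:

```python
# Differential testing across an upgrade: compare old and new versions
# of a function on shared inputs. Both versions are invented examples.

def price_v1(qty):
    return qty * 10

def price_v2(qty):               # upgrade adds a bulk discount
    return qty * 10 if qty < 100 else qty * 9

def behavioural_diff(old, new, inputs):
    """Inputs on which the upgrade changes observable behaviour."""
    return [x for x in inputs if old(x) != new(x)]

diff = behavioural_diff(price_v1, price_v2, range(0, 200, 50))
assert diff == [100, 150]        # only bulk orders are affected
```

In practice the flagged inputs are triaged into intended changes and regressions; formal techniques aim to prove the unflagged behaviour equivalent rather than merely sample it.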
Hana Chockler (IBM Research, Haifa, Israel)
Rewriting logic (RL) is a natural model of computation and an expressive semantic framework for concurrency, parallelism, communication, and interaction. It can be used for specifying a wide range of systems and languages in various application fields. It also has good properties as a metalogical framework for representing logics. In recent years, several languages based on RL (ASF+SDF, CafeOBJ, ELAN, Maude) have been designed and implemented. The aim of the workshop is to bring together researchers with a common interest in RL and its applications, and to give them the opportunity to present their recent works, discuss future research directions, and exchange ideas.
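As a very rough sketch of rewriting, far simpler than what Maude-style systems provide, the function below applies two algebraic simplification rules to terms represented as nested tuples:

```python
# A toy term rewriter: terms as nested tuples, two rules applied
# bottom-up until a normal form is reached. A drastic simplification
# of rewriting-logic systems; representation and rules are invented.

def simplify(term):
    """Rules:  ('add', 0, x) -> x    and    ('mul', 1, x) -> x"""
    if not isinstance(term, tuple):
        return term                      # a constant or variable
    op, a, b = term
    a, b = simplify(a), simplify(b)      # rewrite subterms first
    if op == "add" and a == 0:
        return b
    if op == "mul" and a == 1:
        return b
    return (op, a, b)

# (1 * (0 + y)) rewrites to y
assert simplify(("mul", 1, ("add", 0, "y"))) == "y"
```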
Francisco Durán (Univ. of Málaga, Spain)