Keynotes

# Thomas A. Henzinger

Institute of Science and Technology, Austria

Keynote: Temporal Logics for Multi-Agent Systems

Temporal logic formalizes reasoning about the possible behaviors of a system over time. For example, a temporal formula may stipulate that an occurrence of event A may be, or must be, followed by an occurrence of event B. Traditional temporal logics, however, are insufficient for reasoning about multi-agent systems. In order to stipulate, for example, that a specific agent can ensure that A is followed by B, references to agents, their capabilities, and their intentions must be added to the logic, yielding Alternating-time Temporal Logic (ATL). ATL is a temporal logic that is interpreted over multi-player games, whose players correspond to agents that pursue temporal objectives. The hardness of the model-checking problem (whether a given formula is true for a given multi-agent system) depends on how the players decide on the next move in the game (for example, whether they take turns, make independent concurrent decisions, or bid for the next move), how the outcome of a move is computed (deterministically or stochastically), how much memory the players have available for making decisions, and which kind of objectives they pursue (qualitative or quantitative). The expressiveness of ATL is still insufficient for reasoning about equilibria and related phenomena in non-zero-sum games, where the players may have interfering but not necessarily complementary objectives. For this purpose, the behavioral strategies (a.k.a. policies) of individual players can be added to the logic as quantifiable first-order entities, yielding Strategy Logic. We survey several known results about these multi-player game logics and point out some open problems.
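
As a minimal illustration (not part of the abstract; the agent name a and the events A and B are placeholders), the requirement that a specific agent can ensure that every occurrence of A is eventually followed by an occurrence of B can be written in ATL* (the richer variant of ATL) as:

```latex
% Illustrative ATL* formula; a, A, B are placeholder names.
% Reading: "agent a has a strategy guaranteeing that, along every resulting
% play, every occurrence of A is eventually followed by an occurrence of B."
\langle\!\langle a \rangle\!\rangle \, \Box \, (A \rightarrow \Diamond B)
```

Here the strategic quantifier ⟨⟨a⟩⟩ reads "agent a has a strategy such that, whatever the other agents do, ...", □ means "always", and ◇ means "eventually".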

Bio: Thomas Henzinger is president of IST Austria (Institute of Science and Technology Austria). He holds a Ph.D. degree in Computer Science from Stanford University and a Dr.h.c. from Fourier University in Grenoble and from Masaryk University in Brno. He was professor at the University of California, Berkeley, director at the Max-Planck Institute for Computer Science in Saarbruecken, and professor at EPFL in Lausanne. His research focuses on modern systems theory, especially models, algorithms, and tools for the design and verification of reliable software, hardware, and embedded systems. His HyTech tool was the first model checker for mixed discrete-continuous systems. He is an ISI highly cited researcher, a member of Academia Europaea, a member of the German Academy of Sciences, a member of the Austrian Academy of Sciences, a Fellow of the AAAS, a Fellow of the ACM, and a Fellow of the IEEE. He received the Milner Award of the Royal Society and an ERC Advanced Investigator Grant.

# Patrick Cousot

Courant Institute of Mathematical Sciences, New York University, USA

Keynote: Calculational design of a regular model checker by abstract interpretation

Security monitors have been used to check safety properties of programs at runtime, that is, for any given execution trace. Such security monitors check a safety temporal property specified by a finite automaton or, equivalently, a regular expression. Checking this safety temporal specification for all possible execution traces, that is, the program semantics, is a static analysis problem, more precisely a model checking problem, since model checking specializes in temporal properties. We show that the model checker can be formally designed by calculus, by abstract interpretation of a formal trace semantics. The result is a structural, sound, and complete model checker, which proceeds by induction on the program syntax (as opposed to the more classical approach using computation steps formalized by a transition system). By Rice's theorem, further hypotheses or abstractions are needed to obtain a realistic model checking algorithm.
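
As an illustrative sketch only (not the calculational design presented in the talk), the following Python fragment shows the runtime-monitoring side of the problem: a finite automaton for a hypothetical regular safety property ("never reopen before closing, never close what is not open") is run over a single finite execution trace. The model-checking problem discussed above asks the same question for all traces of the program semantics, which is why further abstraction becomes necessary.

```python
# Hedged sketch: a runtime safety monitor driven by a finite automaton.
# The property, states, and event names are illustrative assumptions.

INITIAL = "closed"
BAD = {"error"}  # reaching a bad state means the safety property is violated

# Deterministic transition table of the property automaton.
TRANSITIONS = {
    ("closed", "open"): "opened",
    ("closed", "close"): "error",   # closing something that is not open
    ("opened", "close"): "closed",
    ("opened", "open"): "error",    # reopening before closing
}

def monitor(trace):
    """Return True iff the finite trace never drives the automaton into a bad state."""
    state = INITIAL
    for event in trace:
        # Events not mentioned in the table are irrelevant and leave the state unchanged.
        state = TRANSITIONS.get((state, event), state)
        if state in BAD:
            return False
    return True

if __name__ == "__main__":
    print(monitor(["open", "close", "open", "close"]))  # True: well bracketed
    print(monitor(["open", "open"]))                    # False: reopened before closing
```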

Bio: Patrick Cousot received the Doctor Engineer degree in Computer Science and the Doctor ès Sciences degree in Mathematics from the University Joseph Fourier of Grenoble, France. Patrick Cousot is Silver Professor of Computer Science at the Courant Institute of Mathematical Sciences, New York University, USA. Before that, he was Professor at the École Normale Supérieure, Paris, France, the École Polytechnique, and the University of Lorraine, and a Research Scientist of the French National Center for Scientific Research at the University Joseph Fourier of Grenoble, France. Patrick Cousot is the inventor, with Radhia Cousot, of Abstract Interpretation. Patrick Cousot was awarded the Silver Medal of the CNRS (1999), an honorary doctorate from the Fakultät Mathematik und Informatik of the Universität des Saarlandes (2001), the Grand Prix of Computer Science and its Applications of the Fondation Airbus Group attributed by the French Academy of Sciences (2006), a Humboldt Research Award (2008), and, with Radhia Cousot, the ACM SIGPLAN Programming Languages Achievement Award (2013), the IEEE Harlan D. Mills Joint Award (2014), and the IEEE John von Neumann Medal (2018) "For introducing abstract interpretation, a powerful framework for automatically calculating program properties with broad application to verification and optimization." He is a Member of the Academia Europaea, Informatics section (since 2006).

# Dominique Méry

University of Lorraine, France

Keynote: Verification by Construction of Distributed Algorithms

The verification of distributed algorithms is a challenge for formal techniques supported by tools, such as model checkers and proof assistants. The difficulties, even for powerful tools, lie in the derivation of proofs of required properties, such as safety and eventuality, for distributed algorithms. Verification by construction can be achieved by using a formal framework in which models are constructed at different levels of abstraction; each level of abstraction is refined by the one below, and this refinement relationship is documented by an abstraction relation, namely a gluing invariant. The highest levels of abstraction are used to express the required behavior in terms of the problem domain, and the lowest level of abstraction corresponds to a concrete model from which an efficient implementation can be derived automatically. In this talk, we will show a methodology based on the general concept of refinement and used for developing distributed algorithms satisfying a given list of safety and liveness properties. The modelling methodology is defined in the Event-B modelling language using the RODIN IDE. More precisely, we show how Event-B models can be developed for specific problems and how they can be simply reused by controlling the composition of state-based models through the refinement relationship. Following Pólya, we have identified patterns, expressed as Event-B models, which can be replayed for developing distributed algorithms from requirements. Consequently, we obtain a (re)development of existing distributed algorithms that are correct by construction, as well as a framework for deriving new distributed algorithms (by integrating models) and for ensuring the correctness of the resulting distributed algorithms by construction. We illustrate our methodology using classical problems such as communication in a network, leader election protocols, and self-stabilisation. This work has been carried out in collaboration with partners of the ANR project RIMEL (http://rimel.loria.fr), who have also considered the local computation model, which defines distributed computations as graph rewriting rules (see http://visidia.labri.fr). We will show how the local computation model can be used within the Event-B framework. Resources are available at http://eb2all.loria.fr and are part of an ongoing project on developing an atlas of correct-by-construction distributed algorithms.
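
As a schematic illustration (an assumption-laden sketch, not taken from the talk), the role of the gluing invariant in an Event-B refinement step can be summarized as follows: if I is the abstract invariant, J the gluing invariant relating abstract variables v to concrete variables w, and BA_A, BA_C the before-after predicates of an abstract event and of the concrete event refining it, then, roughly, every concrete step must be matched by an abstract step that re-establishes the gluing invariant:

```latex
% Schematic forward-simulation condition behind Event-B refinement
% (RODIN splits it into separate guard-strengthening and simulation
% proof obligations, but the intuition is the same).
I(v) \land J(v, w) \land \mathit{BA}_C(w, w')
  \;\Rightarrow\; \exists v' \cdot \mathit{BA}_A(v, v') \land J(v', w')
```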