Date and time: Tuesday, May 4, 2021, 8:30 – 12:30 BST (London time)
Visit the AAMAS 2021 web site
As a field of AI, Machine Reasoning (MR) uses largely symbolic means to formalize and emulate abstract reasoning, which is essential for building intelligent, autonomous systems. Explainable AI (XAI) has recently experienced tremendous growth, driven by the need to build trustworthy AI-based systems.
Machine Learning (ML) explainability is well represented by a plethora of works. MR explainability, however, does not seem to have garnered as much attention, even though the body of work on it is long-standing, deep and diverse.
In this tutorial, we will provide a selective overview of MR techniques and studies addressing explainability questions that arise in areas such as logic-based inference, constraint programming, argumentation, autonomous planning, symbolic reinforcement learning and causal reasoning. For systematization, we suggest a loose categorization of explanations into attributive, contrastive and actionable, as illustrated in the sketch below. We believe that this overview and categorization will give the audience insights from established explainability research in MR that complement the current XAI landscape well.
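To make the categorization concrete, here is a minimal, hypothetical sketch, not drawn from the tutorial materials: a toy rule-based loan decision and the three kinds of explanation one could extract from it. All rule names, thresholds and the scenario itself are illustrative assumptions.

```python
# Hypothetical toy example (not from the tutorial materials): a rule-based
# loan decision illustrating the three explanation categories. All rules,
# names and thresholds are illustrative assumptions.

RULES = {
    "income >= 50": lambda a: a["income"] >= 50,
    "debt <= 20":   lambda a: a["debt"] <= 20,
}

def decide(applicant):
    """Approve if and only if every rule is satisfied."""
    return all(check(applicant) for check in RULES.values())

def attributive_explanation(applicant):
    """Attributive: the rules the outcome is attributed to (all of them
    for an approval, the violated ones for a rejection)."""
    if decide(applicant):
        return list(RULES)
    return [name for name, check in RULES.items() if not check(applicant)]

def contrastive_explanation(applicant):
    """Contrastive: why 'reject' rather than 'approve'; for each violated
    rule, the actual value versus what the foil (approval) requires."""
    contrasts = []
    if applicant["income"] < 50:
        contrasts.append(("income", applicant["income"], "needs >= 50"))
    if applicant["debt"] > 20:
        contrasts.append(("debt", applicant["debt"], "needs <= 20"))
    return contrasts

def actionable_explanation(applicant):
    """Actionable: a concrete change to the inputs that flips the decision
    (a recourse-style counterfactual)."""
    fix = dict(applicant)
    fix["income"] = max(fix["income"], 50)
    fix["debt"] = min(fix["debt"], 20)
    return fix

if __name__ == "__main__":
    alice = {"income": 40, "debt": 25}
    print(decide(alice))                   # False: rejected
    print(attributive_explanation(alice))  # ['income >= 50', 'debt <= 20']
    print(contrastive_explanation(alice))  # fact-vs-foil contrasts
    print(actionable_explanation(alice))   # {'income': 50, 'debt': 20}
```

Loosely, the attributive explanation plays the role of a core of violated rules, the contrastive one juxtaposes fact and foil, and the actionable one is a recourse-style counterfactual; the tutorial develops such notions rigorously across the MR areas listed above.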
8:30 – 8:35 Welcome
8:35 – 8:50 Introduction
8:50 – 9:30 Conceptual part
9:30 – 9:40 Comfort break
9:40 – 10:50 Technical Part I
10:50 – 11:00 Comfort break
11:00 – 12:20 Technical Part II
12:20 – 12:30 Conclusions
Tutorial contents will be partly based on the Machine Reasoning Explainability draft report on arXiv.
Kristijonas is an Experienced Researcher at Ericsson and was previously a postdoctoral researcher at Imperial College London, where he also obtained his PhD in Computing (AI). Kristijonas has researched XAI for 5 years, with a focus on argumentative explanations. He is an active member of the XAI and MR communities, experienced in organizing XAI seminars and workshops and in delivering invited and conference talks to both research audiences and the general public. Kristijonas has several publications on computational argumentation and argumentation-based explanations in top-tier conferences and journals.
Swarup is a Principal Researcher at Ericsson Research. His expertise is in the areas of AI and formal methods, and his work primarily focuses on applying them to service automation and the Internet of Things (IoT). He has research experience in the areas of formal specification and verification of real-time embedded software and AI planning techniques.
Swarup holds a PhD in computer science from the Institute of Mathematical Sciences, Chennai, India, and completed a postdoctoral fellowship at LaBRI, University of Bordeaux, France. He has taught core and elective courses at the Indian Institute of Technology, Bhubaneswar, India, as an adjunct faculty member.
Badrinath is a Principal Engineer with Ericsson Research. He has a background in planning, the semantic web, graph theory and high-performance computing. He was an associate professor of computer science at IIT Kharagpur, India, for a decade, where he taught a variety of courses at the undergraduate and graduate levels.
Anusha is a Senior Researcher at Ericsson Research AI. She researches XAI, symbolic reinforcement learning, automated planning and control theory. She is experienced in delivering research overviews to both academic and business audiences, particularly on XAI and MR. Anusha holds a PhD from the University of Exeter, focused on control theory and statistical analysis. She has several years of experience teaching graduate and postgraduate courses in control theory, electrical engineering and mathematics, with related publications.
Alexandros researches multi-agent control and planning, formal methods and symbolic reinforcement learning. He has a strong publication track record and received a 2021–2026 grant on symbolic reinforcement learning for network control from the Swedish Foundation for Strategic Research. Alexandros has taught an MSc course on hybrid and multi-agent control systems at KTH Royal Institute of Technology.
Alessandro has a background in SAT/SMT, with a focus on over-constrained systems. Explanations of inconsistent logic-based systems have been at the center of his research, with works published in top-tier conferences, some of which form the basis of current research into rigorous explanations of ML models.
Please contact Kristijonas (kristijonas.cyras@ericsson.com) if you have any questions or feedback.