Professor Stephen Muggleton, Department of Computing, Imperial College London
Title: What do we need from Third Wave Artificial Intelligence?
Abstract:
Third Wave Artificial Intelligence is a term introduced by
DARPA to describe new forms of intelligent computer systems
which transcend black-box machine learning. This
presentation analyses key drivers which motivate such
developments. In doing so we revisit discussions,
from the late 1980s, between Geoff Hinton and Donald
Michie concerning the expected contributions of Neural
Networks versus Symbolic Machine Learning. At the time,
Hinton argued the scientific case for Neural Networks as
testable models for neurobiology. By contrast, Michie
advocated the engineering value of learned models which
were interpretable to the user and could even be used for
coaching skills in games such as Chess. In this talk we
describe recent research at Imperial College London in
which prototype Third Wave AI systems have been developed
which use modern Symbolic Machine Learning techniques.
These systems support natural language explanation of
symbolic Machine Learned solutions for areas ranging from
game-playing to high-level computer vision. In each case
models are learned from fewer examples than comparable
Second Wave AI systems, and often require minimal
supervision. The data efficiency and explainability of
such systems are compatible with interactive learning
involving users. We close the talk by summarising
challenges to be addressed in future research.
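As a flavour of the symbolic approach, the following Python fragment is a deliberately minimal generate-and-test learner: it recovers a readable grandparent-style rule from a handful of labelled examples and background facts. It is an invented toy for illustration only, not the systems described in the talk (Progol and Metagol are far more general, Prolog-based learners).

```python
# Toy sketch (not Progol/Metagol): learn a target relation as the
# shortest chain of the background predicate parent/2 that is
# consistent with a few positive and negative examples.
parent = {("ann", "bob"), ("bob", "cat"), ("cat", "dan")}
pos = {("ann", "cat"), ("bob", "dan")}    # grandparent examples
neg = {("ann", "bob"), ("ann", "dan")}

def chain(length):
    # Pairs of individuals connected by exactly `length` parent steps.
    pairs = parent
    for _ in range(length - 1):
        pairs = {(x, z) for (x, y) in pairs
                        for (y2, z) in parent if y == y2}
    return pairs

# Generate-and-test over hypothesis sizes: the learned rule is human
# readable ("target(X,Y) :- parent(X,Z), parent(Z,Y)" for length 2).
for n in (1, 2, 3):
    covered = chain(n)
    if pos <= covered and not (neg & covered):
        print("learned: chain of length", n)
        break
```

The point of the sketch is data efficiency and interpretability: four labelled examples suffice, and the output is a rule a user can read, in contrast to a black-box model.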
Short biography:
Professor Stephen Muggleton FREng FAAAI is Professor of
Machine Learning in the Department of Computing at
Imperial College London, Director of the UK's Human-Like
Computing Network, and is internationally recognised as the
founder of the field of Inductive Logic Programming. His career
has concentrated on the development of theory,
implementations and applications of Machine Learning,
particularly in the field of Inductive Logic Programming
(ILP) and Probabilistic ILP (PILP). Over the last decade
he has collaborated with biological colleagues, such as
Prof Mike Sternberg, on applications of Machine Learning
to biological prediction tasks. His group is situated
within the Department of Computing and specialises in the
development of novel general-purpose machine learning
algorithms, and their application to biological prediction
tasks. Widely applied software developed by the group
includes the ILP system Progol (whose publication has over
1,700 citations on Google Scholar) as well as a family of
related systems including ASE-Progol (used in the Robot
Scientist project), Metagol and Golem.

Professor Francesca Toni, Department of Computing, Imperial College London
Title: Extracting Dialogical Explanations for Review
Aggregations with Argumentative Dialogical Agents
Abstract:
The aggregation of online reviews is fast becoming the
chosen method of quality control for users in various
domains, from retail to entertainment. Consequently, fair,
thorough and explainable aggregation of reviews is
increasingly sought-after. We consider the movie review
domain, and in particular Rotten Tomatoes’ ubiquitous (and
arguably over-simplified) aggregation method, the
Tomatometer Score (TS). For a movie, this amounts to the
percentage of critics giving the movie a positive review.
We define a novel form of argumentative dialogical agent
(ADA) for explaining the reasoning within the reviews. ADA
integrates: (1) NLP with reviews to extract a Quantitative
Bipolar Argumentation Framework (QBAF) for any chosen
movie to provide the underlying structure of explanations,
and (2) gradual semantics for QBAFs for deriving a
dialectical strength measure for movies, as an alternative
to the TS, satisfying desirable properties for obtaining
explanations. We evaluate ADA using some prominent NLP
methods and gradual semantics for QBAFs. We show that they
provide a dialectical strength which is comparable with
the TS, while at the same time being able to provide
dialogical explanations of why a movie obtained its
strength via interactions between the user and ADA.
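For readers unfamiliar with gradual semantics, here is a minimal Python sketch evaluating a hypothetical two-review QBAF with a DF-QuAD-style semantics (one of several gradual semantics in the literature; the talk evaluates multiple). The argument names and base scores are invented for illustration and are not from the paper.

```python
from math import prod

# Illustrative tree-shaped QBAF: each argument has a base score in
# [0, 1] plus lists of attackers and supporters (its children).
qbaf = {
    "movie":        {"base": 0.5, "att": ["weak plot"], "sup": ["great acting"]},
    "weak plot":    {"base": 0.6, "att": [], "sup": []},
    "great acting": {"base": 0.8, "att": [], "sup": []},
}

def aggregate(values):
    # Probabilistic-sum aggregation of child strengths (0 if no children).
    return 1.0 - prod(1.0 - v for v in values)

def strength(arg):
    # DF-QuAD-style semantics: attackers pull the base score towards 0,
    # supporters push it towards 1, in proportion to their net strength.
    node = qbaf[arg]
    va = aggregate([strength(a) for a in node["att"]])
    vs = aggregate([strength(s) for s in node["sup"]])
    v0 = node["base"]
    if va >= vs:
        return v0 - v0 * (va - vs)
    return v0 + (1.0 - v0) * (vs - va)

print(round(strength("movie"), 3))
```

Because each argument's strength is computed from its children, ADA can answer "why did the movie obtain this strength?" by walking down the same tree, which is what makes the explanations dialogical.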
Short biography:
Francesca Toni is Professor in Computational Logic in the
Department of Computing, Imperial College London, UK, and
the founder and leader of the CLArg (Computational
Logic and Argumentation) research group. Her research
interests lie within the broad area of Knowledge
Representation and Reasoning in Artificial Intelligence,
and in particular include Argumentation, Logic-Based
Multi-Agent Systems, Logic Programming for Knowledge
Representation and Reasoning, Non-monotonic and
Default Reasoning. She graduated, summa cum laude, in
Computing at the University of Pisa, Italy, in 1990, and
received her PhD in Computing in 1995 from Imperial
College London. She has coordinated two EU projects,
received funding from EPSRC and the EU, and was awarded a
Senior Research Fellowship from The Royal Academy of
Engineering and the Leverhulme Trust. She is currently
Technical Director of the ROAD2H EPSRC-funded
project. She has co-chaired ICLP2015 (the 31st
International Conference on Logic Programming) and KR 2018
(the 16th Conference on Principles of Knowledge
Representation and Reasoning). She is a member of the
steering committee of AT (Agreement Technologies) and KR
Inc (Principles of Knowledge Representation and Reasoning,
Incorporated), corner editor on Argumentation for the
Journal of Logic and Computation, and on the editorial
boards of the Argument and Computation journal and the AI
journal.

Professor Ute Schmid, Cognitive Systems Group, University of Bamberg
Title: Cooperative Learning with Mutual
Explanations
Abstract:
Explainable AI most often refers to visual highlighting of
information which is relevant for the classification
decision for a given instance. In contrast, interpretable
machine learning means that the learned models are
represented in a human readable form. While a transparent
and comprehensible model communicates how a class can be
characterized (e.g., what is a cat in contrast to non-cat
instances), an explanation gives reasons why a specific
instance is classified in a specific way (e.g., why is
this image classified as a cat). I will argue that
presenting either visualisations or rules to a user will
often not suffice as a helpful explanation. Visualisations
can only inform the user of the relevance of certain
features, but not of the relevance of a specific feature
value. For example, when classifying a facial expression,
it is not enough to explain that the region of the eye
contributes to the classification. It is of importance
whether the eye is wide open or whether the lid is
tightened to classify an expression as surprise or pain.
Furthermore, visual highlighting can only convey
conjunctions of features (e.g., there is a red block and a
green block) while rules can convey relations (e.g., the
red block is on the green block) and negation. Often, a
combination of visual and textual explanations might be
most helpful for a user. Additionally, near-miss examples
can help to increase the understanding of what aspects of
an instance are crucial for class membership. It can be
assumed that explanations are not "one size fits all" but
that it depends on the user, the problem, and the current
situation which type of explanation is most helpful.
Finally, I will present a new method which allows a
machine learning system to exploit not only class
corrections but also explanations from the user to correct
and adapt learned models in interactive, cooperative
learning scenarios.
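The conjunction-versus-relation point can be made concrete in a few lines of Python. The block-scene encoding below is an invented illustration, not a system from the talk: the relational rule separates two scenes that the feature conjunction cannot.

```python
# A scene as a set of coloured objects and relations between them.
scene = {
    "objects":   {"b1": "red", "b2": "green"},
    "relations": {("on", "b1", "b2")},      # red block on green block
}
swapped = {
    "objects":   {"b1": "red", "b2": "green"},
    "relations": {("on", "b2", "b1")},      # green block on red block
}

def conjunctive_rule(s):
    # Feature conjunction, as conveyed by visual highlighting:
    # "there is a red block AND a green block".
    return {"red", "green"} <= set(s["objects"].values())

def relational_rule(s):
    # Relational rule, expressible by symbolic rules but not by
    # highlighting alone: "a red block is ON a green block".
    return any(rel == "on"
               and s["objects"][a] == "red"
               and s["objects"][b] == "green"
               for rel, a, b in s["relations"])

print(conjunctive_rule(scene), relational_rule(scene))      # both hold
print(conjunctive_rule(swapped), relational_rule(swapped))  # conjunction only
```

Both scenes contain the same features, so the conjunctive rule fires on both; only the relational rule distinguishes which block is on which.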
Short biography:
Ute Schmid holds a diploma in psychology and a diploma in
computer science, both from Technical University Berlin
(TUB), Germany. She received her doctoral degree (Dr.
rer.nat.) in computer science from TUB in 1994 and her
habilitation in computer science in 2002. From 1994 to
2001 she was an assistant professor (wissenschaftliche
Assistentin) in the AI/Machine Learning group, Department
of Computer Science, TUB. Afterwards she worked as a
lecturer (akademische Rätin) for Intelligent Systems at
the Department of Mathematics and Computer Science at the
University of Osnabrück. Since 2004 she has held a professorship
of Applied Computer Science/Cognitive Systems at the
University of Bamberg. Research interests of Ute Schmid
are mainly in the domain of comprehensible machine
learning, explainable AI, and high-level learning on
relational data, especially inductive programming,
knowledge level learning from planning, learning
structural prototypes, analogical problem solving and
learning. Further research is on various applications of
machine learning (e.g., classifier learning from medical
data and for facial expressions) and empirical and
experimental work on high-level cognitive processes. Ute
Schmid dedicates a significant amount of her time to
measures supporting women in computer science and to
promoting computer science as a topic in elementary,
primary, and secondary education.

Professor Nick Chater, Warwick Business School, University of Warwick
Title: Virtual bargaining - A microfoundation for the
theory of social interaction
Abstract:
How can people coordinate their actions or make joint
decisions? One possibility is that each person attempts to
predict the actions of the other(s), and best-responds
accordingly. But this can lead to bad outcomes, and
sometimes even vicious circularity. An alternative view is
that each person attempts to work out what the two or more
players would agree to do, if they were to bargain
explicitly. If the result of such a "virtual" bargain is
"obvious," then the players can simply play their
respective roles in that bargain. I suggest that virtual
bargaining is essential to genuinely social interaction
(rather than viewing other people as instruments), and may
even be uniquely human. This approach aims to respect
methodological individualism, a key principle in many
areas of social science, while explaining how human groups
can, in a very real sense, be "greater" than the sum of
their individual members.
Short biography:
Nick Chater is Professor of Behavioural Science at Warwick
Business School. He works on the cognitive and social
foundations of rationality and language. He has published
more than 250 papers, co-authored or edited more than a
dozen books, has won four national awards for
psychological research, and has served as Associate Editor
for the journals Cognitive Science, Psychological Review,
and Psychological Science. He was elected a Fellow of the
Cognitive Science Society in 2010 and a Fellow of the
British Academy in 2012. Nick is co-founder of the
research consultancy Decision Technology and is a member
of the UK’s Committee on Climate Change. He is the author
of The Mind is Flat (2018).

Professor Murray Shanahan, Google DeepMind & Imperial College London
Title: Reconciling Deep Learning with Symbolic AI
Abstract:
In spite of their undeniable effectiveness, conventional
deep learning architectures have a number of limitations,
such as data inefficiency, brittleness, and lack of
interpretability. One way to address these limitations is
to import a central idea from symbolic AI, namely the use
of compositional representations based on objects and
relations. In this talk I will discuss recent work on
neural network architectures that learn to acquire and
exploit relational information, which are a step in this
direction.
Short biography:
Murray Shanahan is Professor of Cognitive Robotics in the
Dept. of Computing at Imperial College London, and a
senior research scientist at DeepMind. Educated at
Imperial College and Cambridge University (King’s
College), he became a full professor at Imperial in 2006,
and joined DeepMind in 2017. His publications span
artificial intelligence, robotics, machine learning,
logic, dynamical systems, computational neuroscience, and
philosophy of mind. He has written several books,
including “Embodiment and the Inner Life” (2010) and “The
Technological Singularity” (2015). His main current
research interests are neurodynamics, deep reinforcement
learning, and the future of AI.

Professor Kristian Kersting, Technische Universität Darmstadt
Title: Deep Machines That Know When They Do Not Know
Abstract:
Our minds make inferences that appear to go far beyond
standard machine learning. Whereas people can learn richer
representations and use them for a wider range of learning
tasks, machine learning algorithms have been mainly
employed in a stand-alone context, constructing a single
function from a table of training examples. In this talk,
I shall touch upon a view on machine learning, called
probabilistic programming, that can help capture these
human learning aspects by combining high-level programming
languages with probabilistic machine learning: the
high-level language helps reduce the cost of modelling,
and probabilities help quantify when a machine does not
know something. Since probabilistic inference remains
intractable, existing approaches leverage deep learning
for inference. Instead of “going down the full neural
road,” I shall argue to use sum-product networks, a deep
but tractable architecture for probability distributions.
This can speed up inference in probabilistic programs, as
I shall illustrate for unsupervised science understanding,
and even pave the way towards automating density
estimation, making machine learning accessible to a
broader audience of non-experts. This talk is based on
joint works with many people such as Carsten Binnig,
Zoubin Ghahramani, Andreas Koch, Alejandro Molina, Sriraam
Natarajan, Robert Peharz, Constantin Rothkopf, Thomas
Schneider, Patrick Schramowski, Xiaoting Shao, Karl
Stelzner, Martin Trapp, Isabel Valera, Antonio Vergari,
and Fabrizio Ventola.
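To make the tractability claim concrete, here is a minimal hand-written sum-product network in Python. It is an illustrative toy (real SPN learners, including those in the speaker's work, induce the structure and parameters from data): marginals come out of a single bottom-up pass by letting the leaves of summed-out variables report 1.

```python
# A tiny sum-product network over two binary variables X1, X2: a tree
# of sum nodes (weighted mixtures) and product nodes (factorisations)
# with univariate Bernoulli distributions at the leaves.
spn = ("sum", [(0.3, ("prod", [("leaf", "X1", 0.9), ("leaf", "X2", 0.2)])),
               (0.7, ("prod", [("leaf", "X1", 0.4), ("leaf", "X2", 0.8)]))])

def evaluate(node, evidence):
    kind = node[0]
    if kind == "leaf":
        _, var, p_true = node
        if var not in evidence:   # summed-out variable: leaf reports 1
            return 1.0
        return p_true if evidence[var] else 1.0 - p_true
    if kind == "prod":
        out = 1.0
        for child in node[1]:
            out *= evaluate(child, evidence)
        return out
    # sum node: weighted mixture of child values
    return sum(w * evaluate(child, evidence) for w, child in node[1])

# Exact marginal P(X1=1) in one bottom-up pass: the tractability that
# distinguishes SPNs from general graphical models.
print(evaluate(spn, {"X1": True}))
```

Evaluating with empty evidence returns 1.0 (the distribution is normalised), and any marginal or conditional query costs one pass over the network, linear in its size.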
Short biography:
Kristian Kersting is a Professor (W3) for Machine Learning
at TU Darmstadt, Germany. After receiving
his Ph.D. from the University of Freiburg in 2006, he was
with MIT, Fraunhofer IAIS, the University of Bonn, and
TU Dortmund University. His main research interests
are statistical relational artificial intelligence (AI)
and probabilistic deep learning. Kristian has published
over 160 peer-reviewed technical papers and co-authored a
book on statistical relational AI. He regularly serves on
the PC (often at senior level) of several top conferences
(NeurIPS, AAAI, IJCAI, KDD, ICML, UAI, ECML PKDD, etc.),
co-chaired the PC of ECML PKDD 2013 and UAI 2017, and is
the elected PC co-chair of ECML PKDD 2020. He is
the Speciality Editor-in-Chief for Machine Learning and AI
of Frontiers in Big Data, and is/was an action editor of
TPAMI, JAIR, AIJ, DAMI, and MLJ.