



Explain Yourself: Uncovering Symbolic Knowledge in Neural Networks



Students, Academics

Series: Data Bites

Speaker: Simon Odense (PhD student, City, University of London)


The rapid growth of AI has left many concerned about the implications of using automated systems to make decisions that raise important ethical or safety issues. The problem is made worse by the fact that most AI systems are black boxes. Without being able to understand the reasoning of an AI, how can we be certain that its decisions aren't the result of an unfair bias or a crucial misunderstanding of the relationship between variables?

Rule extraction is one of the oldest approaches to the problem of explainable AI. By translating a neural network into a sequence of abstract causes and effects, we hope to reveal the factors at work in a network's decision process. But can we reasonably expect neural networks to be explained this way? The lack of an effective general-purpose rule extraction algorithm, despite decades of work, has convinced many people that the answer is no.
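The idea can be illustrated with a minimal, hypothetical sketch (not from the talk itself): for a single perceptron with a small input space, one can extract symbolic rules simply by enumerating the input patterns the unit accepts. The weights and bias below are assumptions chosen by hand for illustration; the difficulty the talk addresses is that nothing this exhaustive scales to deep networks.

```python
# Hypothetical toy example: exhaustive rule extraction from one perceptron.
from itertools import product

# Hand-chosen weights and bias so the unit fires when at least
# two of its three binary inputs are on (a "2-of-3" concept).
weights = [1.0, 1.0, 1.0]
bias = -1.5

def perceptron(x):
    """Threshold unit: fires iff the weighted sum exceeds zero."""
    return sum(w * xi for w, xi in zip(weights, x)) + bias > 0

# Exhaustive extraction: every accepted input pattern is one (very
# specific) if-then rule. A real extractor would then generalise these
# into a compact symbolic form such as an M-of-N rule.
rules = [x for x in product([0, 1], repeat=3) if perceptron(x)]
```

Here `rules` contains exactly the four patterns with two or more active inputs, which a symbolic post-processing step could summarise as "IF at least 2 of {x1, x2, x3} THEN fire". Enumeration costs 2^n evaluations per unit, which is one concrete reason general-purpose rule extraction from deep networks has proved so hard.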

We address the question directly by uncovering the inherent limitations and potential of rule extraction in deep networks. We apply this knowledge to examine some of the common intuitions behind the mechanisms of deep learning and to address the conceptual and philosophical divide between symbolic and connectionist approaches to AI.


Simon Odense is a final-year PhD student at City, University of London, where he does research in neural-symbolic integration and its application to explainable AI. He holds Bachelor's and Master's degrees in Mathematics from the University of Victoria, where his master's thesis was on the universal approximation of temporal Boltzmann machines.


When and where

6.00pm - 7.00pm, Thursday 7th November 2019

AG21, College Building, City, University of London, Northampton Square, London EC1V 0HB, United Kingdom