Using a neurosymbolic AI approach, the collaboration will provide meaningful explanations for deep neural networks trained to predict respiratory conditions from chest x-rays.

By Mr John Stevenson (Senior Communications Officer)

City, University of London’s Data Science Institute is collaborating with Fujitsu to use explainable AI to increase trust between humans and AI and the accountability of AI-based medical systems.

The partnership will see the global information and communications technology equipment and services corporation support a research fellowship at City to explore the use of deep learning and interactive explanation technology.

Fujitsu, which has been working with City academics since 2021, has co-authored the paper, Extracting Meaningful High-Fidelity Knowledge from Convolutional Neural Networks, which will be featured at the 2022 IEEE World Congress on Computational Intelligence. The paper was written by City’s Dr Kwun Ho Ngan and Professor Artur d’Avila Garcez and Fujitsu’s Dr Joe Townsend.

The research paper introduces a neurosymbolic AI approach which provides meaningful explanations for deep networks trained to predict respiratory conditions from chest x-rays. The use of logical rules and a shared conceptual representation space allows explanations derived from pleural effusion x-rays to help identify flaws in a network trained on Covid-19 x-rays.
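To give a flavour of the idea, the sketch below shows one simple way logical rules can be induced from binarized concept activations of a network. This is an illustrative toy, not the method from the paper: the concept names, data, and rule-induction strategy (conjunctions of concept pairs that always co-occur with a single class) are all assumptions made for the example.

```python
import numpy as np

# Hypothetical binarized concept activations for six chest x-rays.
# Concept names are illustrative only, not taken from the actual paper.
concepts = ["fluid_opacity", "blunted_angle", "ground_glass"]
X = np.array([
    [1, 1, 0],   # labelled: effusion
    [1, 1, 0],   # labelled: effusion
    [1, 0, 0],   # labelled: normal
    [0, 0, 1],   # labelled: covid
    [0, 1, 1],   # labelled: covid
    [0, 0, 0],   # labelled: normal
])
y = np.array(["effusion", "effusion", "normal", "covid", "covid", "normal"])

def extract_rules(X, y, concepts):
    """Induce conjunctive IF-THEN rules: a pair of active concepts
    implies a class if every sample showing that pair has that class."""
    rules = []
    for i in range(len(concepts)):
        for j in range(i + 1, len(concepts)):
            mask = (X[:, i] == 1) & (X[:, j] == 1)
            if mask.any():
                labels = set(y[mask])
                if len(labels) == 1:  # rule is consistent on this data
                    rules.append((concepts[i], concepts[j], labels.pop()))
    return rules

rules = extract_rules(X, y, concepts)
for a, b, label in rules:
    print(f"IF {a} AND {b} THEN {label}")
```

Rules of this form are readable by clinicians, and a rule that fires on the "wrong" concepts can flag a flaw in what the network has learned.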

Professor Garcez, Director of City’s Data Science Institute, said:

“We are fortunate to have been working alongside Dr Townsend and his team, and we now look forward to working on this project to put our models and theory to the test in a fulfilling collaboration which also drives new research that can improve people’s lives.”

Dr Daisuke Fukuda, head of the Research Center for AI Ethics Research Unit at Fujitsu Ltd, said:

“Through the collaboration with Professor Artur Garcez, we aim to provide accountability to AI and support decisions in the domains that have significant impact on a person’s activities, including medical diagnosis. We hope that we can continue to work together to make research at Fujitsu contribute to our society.”

“The project ties into research already started within Fujitsu, looking at how interactive, interpretable AI can support the detection and improvement of medical decision processes. We are pleased to be collaborating with Professor Garcez and his research team, who are respected globally as leaders in the fields of machine learning and extracting meaning and knowledge from AI,” said Dr Joe Townsend, Principal Researcher, Fujitsu Research of Europe.