City, University of London’s Professor Artur Garcez explores some of the drawbacks of the technology.
By Professor Artur d’Avila Garcez, Director of City’s Research Centre for Machine Learning
We are seeing great excitement and concern around ChatGPT – part of the latest wave of artificial neural networks that learn large language models (LLMs).
While some researchers say that they have no concerns about ChatGPT, most will agree that there is great utility to be derived from the latest LLMs and so-called AIGC (Artificial Intelligence Generated Content).
From gains in productivity to real-time language translation, and from better ways of searching the internet to new ways of interacting with computational systems, the possibilities are tremendous.
Those who express anxiety over the technology point to two main problems, one of a more philosophical nature and the other with an immediate practical impact on everyday life.
As Noam Chomsky pointed out during the AI Debate 3 in December 2022, ChatGPT cannot possibly tell us anything about how language works in the human mind. This is because, among other things, ChatGPT is a black box.
Research on ‘opening the black box’ to make sense of large neural networks, particularly to address concept learning, compositionality and reasoning, is under way in the research area known as neurosymbolic AI.
I have for several years co-organised the annual International Workshop on Neural-Symbolic Learning and Reasoning (known as the NeSy workshop series) – the longest-standing gathering for the presentation and discussion of cutting-edge research in neurosymbolic AI.
Of more immediate impact, ChatGPT makes mistakes.
It makes both silly and serious mistakes, and in ways that disguise those mistakes, because it generalises from common patterns used in conversation.
The most immediate concern of a practical nature, therefore, is the risk of large-scale disinformation dissemination and its impact, particularly on young democracies.
Neurosymbolic AI can help mitigate the risks of disinformation by opening the black box and enabling reasoning about what has been learned.
In the meantime, domain experts will need to become AI experts.