In recent years, the use of Artificial Intelligence (AI) models in medical prognosis studies has gained popularity due to their ability to handle large amounts of messy data, to learn from a variety of data types, and to achieve high prediction accuracy.
However, automated decision systems that employ AI models in the medical domain still face criticism: the models may fail to meet the high standards of accountability, reliability, and transparency required for medical decisions, and their opacity complicates the question of who is accountable when a decision turns out to be wrong.
In the EU-funded SMART BEAR project, we are addressing these limitations by leveraging Explainable AI (XAI) to explain the decisions made by AI models, thereby allowing end-users to trust the AI models, understand why certain decisions were made, and construct patient profiles.
Such patient profiles are invaluable in healthcare treatment, as they can assist in providing personalised solutions, promoting independent living, and ultimately reducing healthcare costs.
About the Speaker:
Qiqi is a PhD student at City, University of London, whose research focuses on developing and applying automated decision systems that leverage both AI and XAI techniques in the medical domain. Qiqi holds an MSc in Data Science and a BSc in Mathematics.