Tutorials

Biomedical research is complex and rapidly evolving, and graph deep learning (DL) has become a pivotal analytical tool within it. Graph neural networks (GNNs) have proven remarkably effective at analyzing intricate biological data such as molecular graphs, protein-protein interaction networks, and patient similarity networks. Despite their efficacy, these models often behave as black boxes, with millions of parameters obscuring their decision-making processes. In life-critical fields like biomedicine, where understanding the 'why' behind a prediction is as crucial as the prediction itself, explainability becomes paramount. Our tutorial equips participants with the knowledge and tools to tackle this challenge, offering a comprehensive exploration of explainability research for GNNs in biomedical applications. The tutorial covers an introduction to graph DL in biomedicine, core principles of explainability in GNNs, their practical application to biomedical problems, an overview of interpretable GNN models in biomedicine, and an interactive session with hands-on exercises, in the spirit of the sketch below.
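To make the idea of explaining a GNN prediction concrete, the following is a minimal, self-contained sketch of one common approach (the mask-learning idea behind methods such as GNNExplainer): learn a soft mask over the edges of the input graph that preserves the model's prediction for a target node while encouraging sparsity. The model, graph, and function names (`TinyGCN`, `explain_node`) are illustrative assumptions, not material from the tutorial itself.

```python
# Sketch of mask-based GNN explanation: learn a soft edge mask that keeps
# the prediction for a target node while penalizing the mask's size.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyGCN(nn.Module):
    """A toy mean-aggregation GNN operating on a dense adjacency matrix."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, n_classes)

    def forward(self, x, adj):
        # Row-normalized neighborhood aggregation with self-loops.
        adj = adj + torch.eye(adj.size(0))
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = F.relu(self.lin1((adj / deg) @ x))
        return self.lin2((adj / deg) @ h)


def explain_node(model, x, adj, node_idx, epochs=200, lam=0.05):
    """Learn a soft edge mask that preserves the prediction for node_idx."""
    model.eval()
    with torch.no_grad():
        target = model(x, adj)[node_idx].argmax()
    mask_logits = torch.randn_like(adj, requires_grad=True)
    opt = torch.optim.Adam([mask_logits], lr=0.01)
    for _ in range(epochs):
        mask = torch.sigmoid(mask_logits) * adj          # keep only real edges
        logits = model(x, mask)[node_idx]
        loss = F.cross_entropy(logits.unsqueeze(0), target.unsqueeze(0))
        loss = loss + lam * mask.sum()                   # sparsity penalty
        opt.zero_grad(); loss.backward(); opt.step()
    return (torch.sigmoid(mask_logits) * adj).detach()   # edge-importance scores


# Usage on a random toy graph with 6 nodes and 4 node features.
x = torch.randn(6, 4)
adj = (torch.rand(6, 6) > 0.5).float()
adj = ((adj + adj.t()) > 0).float().fill_diagonal_(0)
model = TinyGCN(4, 8, 3)
print(explain_node(model, x, adj, node_idx=0))
```

In practice one would apply such a procedure to a trained model; libraries such as PyTorch Geometric provide production implementations of this family of explainers.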

As predictive models are increasingly deployed in critical domains (e.g., healthcare, law, and finance), there has been a growing emphasis on explaining their predictions to decision makers (e.g., doctors, judges) so that they can understand the rationale behind a prediction and determine if and when to rely on it. This has given rise to the field of eXplainable Artificial Intelligence (XAI), which aims to develop algorithms that generate explanations of individual predictions made by complex ML models. These methods typically quantify the influence of each input feature on the model's prediction, as illustrated below, and are being used to explain complex models in medicine, finance, law, and science. It is therefore critical to ensure that the explanations these methods generate are reliable, so that stakeholders and decision makers are provided with credible information about the underlying models.
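As a concrete illustration of such a method, here is a minimal sketch of perturbation-based feature attribution: for a single prediction, each feature is replaced by its training-set mean and the resulting change in predicted probability is reported as that feature's influence. The dataset choice and the helper name `occlusion_attribution` are illustrative assumptions, not part of the tutorial material.

```python
# Perturbation-based feature attribution for one prediction of a simple model.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def occlusion_attribution(model, X_train, x):
    """Per-feature change in predicted probability when each feature of
    instance x is replaced by its training-set mean."""
    baseline = X_train.mean(axis=0)
    p_orig = model.predict_proba(x.reshape(1, -1))[0, 1]
    scores = np.zeros(x.shape[0])
    for j in range(x.shape[0]):
        x_pert = x.copy()
        x_pert[j] = baseline[j]
        p_pert = model.predict_proba(x_pert.reshape(1, -1))[0, 1]
        scores[j] = p_orig - p_pert   # positive = feature pushed the prediction up
    return scores


data = load_breast_cancer()
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))
model.fit(data.data, data.target)

scores = occlusion_attribution(model, data.data, data.data[0])
for j in np.argsort(-np.abs(scores))[:5]:     # five most influential features
    print(f"{data.feature_names[j]:<25s} {scores[j]:+.3f}")
```

Widely used tools such as SHAP and LIME refine this basic idea with principled weighting and local surrogate models, and the reliability concerns raised above apply to all of them.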

Machine learning (ML) and other computational techniques are increasingly being deployed in high-stakes decision-making. Deploying such automated tools to make decisions that affect the lives of individuals and society as a whole is complex and rife with uncertainty and rightful skepticism. Explainable ML (or, more broadly, XAI) is often pitched as a panacea for managing this uncertainty and skepticism. While the technical limitations of explainability methods are being characterized, formally or otherwise, in the ML literature, the impact of explanation methods and their limitations on end users and important stakeholders (e.g., policy makers, judges, doctors) is not well understood. Our tutorial aims to contextualize explanation methods and their limitations for such end users. We further discuss the overarching ethical implications of these technical challenges beyond misleading and wrongful decision-making. While we focus on applications in finance, clinical healthcare, and criminal justice, our overarching theme should be valuable to all stakeholders of the FAccT community. Our primary objective is for this tutorial to serve as a starting point for regulatory bodies, policymakers, and fiduciaries to engage with explainability tools in a more informed manner.