The Multiple Desiderata Challenge in Trustworthy Machine Learning
Professor Depeng Xu, UNC Charlotte
Oct 21, 2024. 12-1pm. WWH 335
Artificial Intelligence (AI) and Machine Learning (ML) have developed rapidly and have been adopted in a variety of applications. Despite the popularity and efficiency of these models, society is concerned about the trustworthiness of machine learning models.

(1) Privacy. The intensive training process on large-scale data raises a prominent issue of privacy leakage. Deep learning models often overfit to their training data, which leaves users in the training set vulnerable to data leakage and membership inference attacks.

(2) Fairness. Due to natural under-representation or social bias embedded in real-world data, models trained on such data, without proper guidance, often inherit unintentional bias against unprivileged groups. Evaluating and mitigating unwanted bias in ML can ensure that the model output is faithful and ethical.

(3) Explainability. Deep learning models also fall short in explainability because of their black-box architecture. For reliable and verifiable models, researchers need to explore new ways to enhance explainability.

(4) Robustness. The security of AI is at risk from malicious attackers. ML models also need protection mechanisms to improve their robustness against various adversarial attacks.

This talk seeks to evaluate the current state of trustworthiness in AI/ML and envision future directions. Existing work focuses on a single aspect of trustworthiness; this talk instead discusses the multi-desiderata challenge: simultaneously achieving combinations of privacy, fairness, explainability, and robustness, as well as new issues arising from the interactions of multiple components. We believe addressing multiple trustworthiness desiderata in ML is important both to advance our knowledge of AI and to safeguard real-world applications.
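As an illustration of the fairness-evaluation point above, one common group-fairness metric is the demographic parity difference: the gap in positive-prediction rates between two groups. The sketch below is illustrative only; the data, group labels, and function name are hypothetical and not taken from the talk.

```python
def demographic_parity_difference(preds, groups):
    """Absolute gap in positive-prediction rate between group 0 and group 1.

    preds  : binary model predictions (0/1)
    groups : group membership for each prediction (0/1)
    """
    rate = {}
    for g in (0, 1):
        group_preds = [p for p, grp in zip(preds, groups) if grp == g]
        rate[g] = sum(group_preds) / len(group_preds)
    return abs(rate[0] - rate[1])

# Hypothetical predictions for eight individuals, four per group.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 0.5
```

Here group 0 receives positive predictions at rate 0.75 and group 1 at rate 0.25, so the metric flags a 0.5 disparity; a perfectly parity-fair classifier would score 0.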
Professor Depeng Xu is an Assistant Professor in Software and Information Systems. His research focuses on data mining and machine learning, specifically differential privacy, algorithmic fairness, and ethical AI. He is also interested in robust and explainable machine learning for text classification, anomaly detection, and image recognition.