Researching the Unknown

Welcome!

I am a Research Scientist at Adobe, working on trustworthy machine learning methods that go beyond training models for specific downstream tasks to ensure they also satisfy other desirable properties, such as explainability, fairness, and robustness. What inspires me is the feeling of helping the community through my work! I am a co-founder of the Trustworthy ML Initiative, a forum and seminar series on trustworthy ML, and an active member of the MLC research group, which focuses on democratizing machine learning (ML) research by supporting open collaboration.

I am always looking for new and exciting discussions, as they are stepping stones for scientific research. Feel free to reach out to me here if you are excited about core eXplainable Artificial Intelligence (XAI) or its applications.

News

  • 09/16/2022: OpenXAI gets accepted at NeurIPS'22!!
  • 09/02/2022: Invited Talk in the TrustML Young Scientists Seminars at RIKEN-AIP.
  • 07/15/2022: Invited Talk on OpenXAI at Auburn University.
  • 07/08/2022: Invited Talk on "XAI: Challenges, Solutions, and the Way Forward" at Adobe Research, Bangalore.
  • 06/24/2022: Selected as a Lecturer for AI Summer School (AISS)'22 at IIIT-Delhi.
  • 04/05/2022: Workshop papers accepted at the ICLR'22 Pair2Struct workshop and the WWW'22 workshop on Graph Learning Benchmarks.
  • 03/10/2022: Selected as a Mentor for the LOGML Summer School 2022.
  • 03/03/2022: VoG paper accepted to CVPR 2022!!
  • 01/24/2022: Joined Adobe as a Research Scientist!!
  • 01/18/2022: 2 papers accepted at AISTATS 2022!!
  • 07/27/2021: New preprint Towards a Rigorous Theoretical Analysis and Evaluation of GNN Explanations on arXiv.
  • 07/27/2021: Our paper Towards a Unified Framework for Fair & Stable Graph Representation Learning is accepted at UAI'21.
  • 07/21/2021: 3 papers accepted at the ICML'21 workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI, 1 paper at the ICML'21 workshop on Socially Responsible Machine Learning, 1 paper at the ICML'21 workshop on Interpretable Machine Learning in Healthcare, and 1 paper at the ICML'21 workshop on Uncertainty & Robustness in Deep Learning.
  • 07/21/2021: Our paper Towards Unification & Robustness of Perturbation and Gradient Based Explanations is accepted at ICML'21.
  • 03/04/2021: Presented a tutorial at FAccT with Shalmali Joshi and Himabindu Lakkaraju on the limitations of explainability methods in ML. Video: here & Slides: here.
  • 03/02/2021: Guest Lecture in the Topics in Machine Learning: Interpretability and Explainability course at Harvard.