I’m a Research Scientist at Meta AI, in the Fundamental AI Research (FAIR) team. I received my PhD from McGill University (School of Computer Science) and Mila (Quebec AI Institute), Montreal, Canada, where I was advised by Dr. Joelle Pineau. I also received my MSc (Thesis) from McGill University, where I was advised by Dr. Derek Ruths and Dr. Joelle Pineau.
My current research investigates the role of multimodal language models in understanding and reasoning about the world through rich visual representations, especially by leveraging language as a tool to decode and reason about the underlying physical rules governing the world. My research also involves probing the limits of systematic language understanding - the ability of neural systems to understand language in a human-like way - by evaluating the extent of their capabilities in semantics, syntax and generalization.
I am also involved in improving and enabling reproducible research in Machine Learning - I’m the lead organizer of the annual Machine Learning Reproducibility Challenge (V1, V2, V3, V4, V5, V6, V7), and I serve as an associate editor at ReScience C, a peer-reviewed journal promoting reproducible research. My work has been covered by several news outlets, including Nature, VentureBeat, InfoQ, DailyMail and Hindustan Times.
PhD in Computer Science (ML & NLP), 2022
McGill University
MSc in Computer Science (ML & NLP), 2018
McGill University
B.Tech in Computer Science, 2014
West Bengal University of Technology
[01/06/24] Serving as a Senior Area Chair at ACL 2024.
[16/05/24] Excited to announce that our large multimodal language model, Chameleon, is now released on arXiv! We have also released the code and model weights, as per our Open Source Strategy!
[10/18/23] MLRC 2023 challenge goes live in partnership with TMLR! Read our blog post for more information.
[07/10/23] Excited to share that our paper “Language model acceptability judgements are not always robust to context” received an Outstanding Paper Award at ACL 2023! Very happy and honored!
[06/01/23] New paper: “Language model acceptability judgements are not always robust to context”, now accepted as a long paper at ACL 2023 in Toronto.