My current research investigates the limits of systematic language understanding in modern neural language models by leveraging linguistic and logical priors. In particular, I study whether neural models represent language in a human-like way by testing their grasp of semantics, syntax, and generalization in both natural and artificial languages. I am also currently investigating the role of multimodal language models in effective reasoning at the intersection of language and vision representations.
I’m a Research Scientist at Meta AI on the Fundamental AI Research (FAIR) team. I received my PhD from McGill University (School of Computer Science) and Mila (Quebec AI Institute), supervised by Joelle Pineau, in the wonderful city of Montreal, QC, Canada. I spent a significant portion of my PhD as a Research Intern (STE) at Meta AI (FAIR), Montreal.
I am an associate editor of ReScience C, a peer-reviewed journal promoting reproducible research, and the lead organizer of the annual Machine Learning Reproducibility Challenge (V1, V2, V3, V4, V5). My work has been covered by several news outlets, including Nature, VentureBeat, InfoQ, DailyMail, and Hindustan Times.
PhD in Computer Science (ML & NLP), 2022
McGill University
MSc in Computer Science (ML & NLP), 2018
McGill University
B.Tech in Computer Science, 2014
West Bengal University of Technology
[07/10/23] Excited to share that our paper “Language model acceptability judgements are not always robust to context” received the Outstanding Paper Award at ACL 2023! Very happy and honored!
[06/01/23] New paper: “Language model acceptability judgements are not always robust to context”, accepted as a long paper at ACL 2023, Toronto.
[11/02/22] Successfully defended my PhD! Check out my thesis here.
[08/29/22] Excited to announce a major life event: I’m starting today as a Research Scientist (Speech & NLP) at Meta AI New York!