My current research investigates the limits of systematic language understanding in modern neural language representation systems, leveraging linguistic and logical priors. In particular, I study whether neural models represent language in a human-like way by testing their grasp of semantics, syntax, and generalization in both natural and artificial languages. I am also interested in improving the state of Generative Language Modeling (Dialog Systems).
I’m a Research Scientist at Meta AI New York, on the Fundamental AI Research (FAIR) team. I received my PhD from McGill University (School of Computer Science) and Mila (Quebec AI Institute), supervised by Joelle Pineau, in the wonderful city of Montreal, QC, Canada. I spent a significant portion of my PhD as a Research Intern (STE) at Meta AI (FAIR), Montreal.
I am an associate editor of ReScience C, a peer-reviewed journal promoting reproducible research, and the lead organizer of the annual Machine Learning Reproducibility Challenge (V1, V2, V3, V4, V5). My work has been covered by several news outlets, including Nature, VentureBeat, InfoQ, DailyMail and Hindustan Times.
PhD in Computer Science (ML & NLP), 2022
McGill University
MSc in Computer Science (ML & NLP), 2018
McGill University
B.Tech in Computer Science, 2014
West Bengal University of Technology
[11/02/22] Successfully defended my PhD! Check out my thesis here.
[08/29/22] Excited to announce a major life event: I’m starting today as a Research Scientist (Speech & NLP) at Meta AI New York!
[08/19/22] Happy to announce yet another Machine Learning Reproducibility Challenge, the MLRC 2022! This is our sixth edition!
[01/10/21] Happy to share that our paper “Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little” has been accepted as a long paper at EMNLP 2021!