Koustuv Sinha

Research Scientist

Meta AI

I’m a Research Scientist at Meta AI, in the Fundamental AI Research (FAIR) team. I received my PhD from McGill University (School of Computer Science) and Mila (Quebec AI Institute) in Montreal, Canada, where I was advised by Dr. Joelle Pineau. I also received my MSc (Thesis) from McGill University, where I was advised by Dr. Derek Ruths and Dr. Joelle Pineau.

My current research investigates the role of multimodal language models in understanding and reasoning about the world through rich visual representations, especially by leveraging language as a tool to decode and reason about the underlying physical rules governing the world. My research also examines the limits of systematic language understanding - the ability of neural systems to understand language in a human-like way - by evaluating the extent of their capabilities in semantics, syntax and generalization.

I am also involved in improving and enabling reproducible research in Machine Learning - I’m the lead organizer of the annual Machine Learning Reproducibility Challenge (V1, V2, V3, V4, V5, V6, V7), and I serve as an associate editor at ReScience C, a peer-reviewed journal promoting reproducible research. My work has been covered by several news outlets, including Nature, VentureBeat, InfoQ, DailyMail and Hindustan Times.

Résumé | PhD Thesis

Interests
  • Machine Learning
  • Natural Language Processing
  • Computational Linguistics
  • Multimodal NLP
  • Representation Learning
Education
  • PhD in Computer Science (ML & NLP), 2022

    McGill University

  • MSc in Computer Science (ML & NLP), 2018

    McGill University

  • B.Tech in Computer Science, 2014

    West Bengal University of Technology

Recent Publications

(2023). Language model acceptability judgements are not always robust to context. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Outstanding Paper Award.

(2022). The Curious Case of Absolute Position Embeddings. Findings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).

(2022). How sensitive are translation systems to extra contexts? Mitigating gender bias in Neural Machine Translation models through relevant contexts. Findings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).

Contact