Koustuv Sinha, Shagun Sodhani, Joelle Pineau, William L. Hamilton
Abstract
Recent research has highlighted the role of relational inductive biases in building learning agents that can generalize and reason in a compositional manner. However, while relational learning algorithms such as graph neural networks (GNNs) show promise, we do not understand how effectively these approaches can adapt to new tasks. In this work, we study the task of logical generalization using GNNs by designing a benchmark suite grounded in first-order logic. Our benchmark suite, GraphLog, requires that learning algorithms perform rule induction in different synthetic logics, represented as knowledge graphs. GraphLog consists of relation prediction tasks on 57 distinct logical domains. We use GraphLog to evaluate GNNs in three different setups: single-task supervised learning, multi-task pretraining, and continual learning. Unlike previous benchmarks, our approach allows us to precisely control the logical relationship between the different tasks. We find that the ability of models to generalize and adapt is strongly determined by the diversity of the logical rules they encounter during training, and our results highlight new challenges for the design of GNN models.
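To make the task format concrete, below is a minimal, self-contained sketch of relation prediction on a knowledge graph with typed edges, in the spirit of GraphLog's setup: given a graph of (head, relation, tail) triples and a query pair of nodes, predict which relation holds between them. This is not the GraphLog API or the paper's exact architecture; all names here (`ToyRGCN`, the toy triples, the query/target pair) are illustrative assumptions, and the single relation-aware message-passing layer is a deliberately simplified stand-in for the GNN models evaluated in the paper.

```python
# Illustrative sketch only: a toy relation-prediction task on a small
# knowledge graph, trained with plain PyTorch. Not the GraphLog API.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_ENTITIES, NUM_RELATIONS, DIM = 6, 5, 32

class ToyRGCN(nn.Module):
    """One relation-aware message-passing layer plus a query scorer."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_ENTITIES, DIM)
        # One weight matrix per relation type, in the style of R-GCN.
        self.rel_weights = nn.Parameter(torch.randn(NUM_RELATIONS, DIM, DIM) * 0.1)
        self.classifier = nn.Linear(2 * DIM, NUM_RELATIONS)

    def forward(self, edges, query):
        h = self.embed.weight  # (NUM_ENTITIES, DIM) node states
        msg = torch.zeros_like(h)
        for head, rel, tail in edges:
            # Message from head to tail, transformed by the relation's weight.
            update = (h[head] @ self.rel_weights[rel]).unsqueeze(0)
            msg = msg.index_add(0, torch.tensor([tail]), update)
        h = F.relu(h + msg)
        hq, tq = query
        # Score every relation type for the queried (head, tail) pair.
        return self.classifier(torch.cat([h[hq], h[tq]]))

# Tiny synthetic graph of (head, relation, tail) triples, plus one query
# edge whose relation the model must recover (made-up ground truth).
edges = [(0, 1, 1), (1, 2, 2), (2, 1, 3), (3, 0, 4), (4, 3, 5)]
query, target = (0, 2), torch.tensor(2)

model = ToyRGCN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    logits = model(edges, query).unsqueeze(0)
    loss = F.cross_entropy(logits, target.unsqueeze(0))
    loss.backward()
    opt.step()
print("predicted relation:", model(edges, query).argmax().item())
```

In GraphLog each logical domain comes with its own rule set, so a model trained this way on one domain can be probed on another whose rules overlap to a controlled degree, which is what enables the multi-task and continual-learning comparisons described above.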
Latest News
- May 24, 2020: Code for the experiments in the paper released in the GraphLog repository
- April 25, 2020: Added simple supervised experiments using GraphLog in PyTorch Lightning