Submitted on Sep 02 2020
By: Peter Clark, Oyvind Tafjord, Kyle Richardson
From: Allen Institute for AI
Resource Type: Data, Code
The RuleTaker model reads a new rulebase of natural-language facts and rules and decides whether a given statement is true or false based on that information. We train transformers to reason (or emulate reasoning) over these sentences using synthetically generated data. Our models, which we call RuleTakers, provide the first empirical demonstration that this kind of soft reasoning over language is learnable, can achieve high (99%) accuracy, and generalizes to test data requiring substantially deeper chaining than seen during training (95%+ scores). We also demonstrate that the models transfer well to two hand-authored rulebases, and to rulebases paraphrased into more natural language.
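To make the task concrete, here is a toy sketch of the kind of chaining a RuleTaker model learns to emulate. The model itself operates directly on natural-language sentences; below, the same inference is done symbolically with naive forward chaining over a hypothetical rulebase (the facts, rules, and representation are illustrative assumptions, not the dataset's actual format).

```python
# Toy rulebase: facts are sentences; each rule maps a set of premise
# sentences to a conclusion sentence. These examples are invented for
# illustration only.
facts = {"bob is big", "bob is green"}
rules = [
    ({"bob is big", "bob is green"}, "bob is strong"),
    ({"bob is strong"}, "bob is heavy"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

closure = forward_chain(facts, rules)
print("bob is heavy" in closure)  # True: requires two chaining steps
```

A transformer trained on (rulebase, statement, label) triples learns to produce the same true/false answer without an explicit symbolic engine, which is what makes generalization to deeper chains the interesting measurement.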