Authors:
(1) Kinjal Basu, IBM Research;
(2) Keerthiram Murugesan, IBM Research;
(3) Subhajit Chaudhury, IBM Research;
(4) Murray Campbell, IBM Research;
(5) Kartik Talamadupula, Symbl.ai;
(6) Tim Klinger, IBM Research.
Table of Links
3.1 Learning Symbolic Policy using ILP
4.1 Dynamic Rule Generalization
5 Experiments and Results
7 Future Work and Conclusion, Limitations, Ethics Statement, and References
3.2 Exception Learning
As EXPLORER does online learning, the quality of the initial rules is quite low; it gradually improves with more training. The key improvement in EXPLORER comes from exception learning, where an exception clause is added to a rule's body using Negation as Failure (NAF). This makes the rules more flexible and able to handle scenarios where information is missing. The agent learns these exceptions by applying its rules and failing to receive rewards. For example, in TWC, the agent may learn the rule that an apple goes in the fridge, but fail when it tries to apply that rule to a rotten apple. It then learns that the feature rotten is an exception to the previously learned rule. This can be represented as:
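The rule pair below is a plausible rendering of this default and its exception in ASP syntax with NAF; the predicate names are illustrative rather than quoted from the paper:

    % Default rule: an apple goes in the fridge unless it is abnormal.
    insert(apple, fridge) :- not ab(apple).
    % Learned exception: rotten apples are abnormal, so the default does not fire.
    ab(apple) :- rotten(apple).

Under this encoding, insert(apple, fridge) is derived only when ab(apple) cannot be proven, so adding the rotten fact for a specific apple automatically blocks the default.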
It is important to note that the number of examples covered by an exception must always be fewer than the number covered by the default rule it qualifies. This constraint is built into EXPLORER's exception learning module.
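A minimal sketch of how such a coverage check might look, assuming hypothetical names (accept_exception and the sets of covered training examples) that are not taken from the paper:

    # Hypothetical sketch of the coverage constraint on exception learning.
    # default_examples / exception_examples are the training transitions
    # covered by the default rule and by a candidate exception clause.

    def accept_exception(default_examples: set, exception_examples: set) -> bool:
        """Accept a candidate exception only if it covers strictly fewer
        examples than the default rule it attaches to."""
        return len(exception_examples) < len(default_examples)

    # Example: the default "apple -> fridge" covers many successful episodes,
    # while the "rotten" exception covers only the few failed ones.
    default_cov = {"ep1", "ep2", "ep3", "ep4"}
    exception_cov = {"ep4"}
    assert accept_exception(default_cov, exception_cov)  # keep the exception

This ordering keeps the default rule general while exceptions stay narrow, preventing a spurious exception from swallowing the rule it is meant to refine.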
This paper is available on arXiv under a CC BY 4.0 DEED license.