Explaining a chess agent's moves using local interpretable model-agnostic explanations

Abstract

As artificial intelligence systems become more integrated into our day-to-day lives, the need to explain these systems will increase. Techniques have been developed to explain some of them; however, most of these techniques focus on problems where past decisions do not influence future decisions. In this thesis we investigate how the actions of a chess agent can be explained. It is envisioned that such explanations could be used to provide personalized teaching to novice chess players. More specifically, we aim to establish which features are important to a chess agent when selecting a specific move. Current state-of-the-art explanation techniques have shown promising results in chess, but various limitations remain. To address these limitations, an adaptation of Local Interpretable Model-Agnostic Explanations (LIME) is used. The adaptation is tested on a dataset of 102 chess puzzles and performs on par with current state-of-the-art techniques while addressing some of their limitations. Our adapted technique can estimate the importance of a feature not only by removing the feature but also by translating it. This is especially important when a feature cannot simply be removed: removing the king from a chess position, for example, leads to an illegal state. Previously, the importance of such a feature could not be established.
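To make the approach concrete, the sketch below shows one possible LIME-style explanation of a single chess move. It is not the thesis implementation: the agent evaluation function `agent_score` is a hypothetical placeholder, and the perturbation and proximity weighting are simplified assumptions. Pieces are treated as interpretable features; perturbed boards are built by removing pieces (or translating the king, which cannot legally be removed), the agent is queried on each perturbed board, and a weighted linear surrogate assigns an importance to each piece.

```python
# Minimal sketch (assumptions: hypothetical agent_score; python-chess,
# numpy and scikit-learn for board handling and the local surrogate model).
import random

import chess
import numpy as np
from sklearn.linear_model import Ridge


def agent_score(board: chess.Board, move: chess.Move) -> float:
    """Placeholder for the chess agent's evaluation of `move` on `board`."""
    raise NotImplementedError


def perturb(board: chess.Board, squares, keep_mask):
    """Drop the pieces whose mask entry is 0; translate kings instead of removing them."""
    b = board.copy()
    for sq, keep in zip(squares, keep_mask):
        if keep:
            continue
        piece = b.piece_at(sq)
        if piece.piece_type == chess.KING:
            # A king cannot be removed, so move it to the nearest empty square.
            for target in sorted(chess.SQUARES,
                                 key=lambda t: chess.square_distance(sq, t)):
                if b.piece_at(target) is None:
                    b.remove_piece_at(sq)
                    b.set_piece_at(target, piece)
                    break
        else:
            b.remove_piece_at(sq)
    return b


def explain_move(board: chess.Board, move: chess.Move, n_samples: int = 500):
    """Fit a local linear surrogate over piece-presence features around `board`."""
    # The moving piece itself is never perturbed, so `move` stays meaningful.
    squares = [sq for sq in chess.SQUARES
               if board.piece_at(sq) is not None and sq != move.from_square]
    X, y, weights = [], [], []
    for _ in range(n_samples):
        mask = [random.random() < 0.8 for _ in squares]   # mostly-intact boards
        X.append(mask)
        y.append(agent_score(perturb(board, squares, mask), move))
        weights.append(sum(mask) / len(squares))           # simple proximity weight
    surrogate = Ridge(alpha=1.0).fit(np.array(X, dtype=float), np.array(y),
                                     sample_weight=np.array(weights))
    # Coefficients estimate how much each piece contributed to the move's score.
    return {chess.square_name(sq): coef
            for sq, coef in zip(squares, surrogate.coef_)}
```

In this sketch the king is handled by translation rather than removal, which mirrors the abstract's point that some features can only be assessed by moving them; all other design choices (sampling rate, proximity weight, ridge surrogate) are illustrative assumptions.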

Keywords
Explainable AI, Local Interpretable Model-Agnostic Explanations, Deep Reinforcement Learning, Chess, Saliency maps