Puzzle Solving without Search or Human Knowledge: An Unnatural Language Approach

Authors

David Noever1 and Ryerson Burdick2, 1PeopleTec, Inc., USA, 2University of Maryland, USA

Abstract

The application of the Generative Pre-trained Transformer (GPT-2) to learn text-archived game notation provides a model environment for exploring sparse-reward gameplay. The transformer architecture proves amenable to training on solved text archives describing mazes, Rubik's Cube, and Sudoku puzzles. The method benefits from fine-tuning the transformer architecture to visualize plausible strategies derived without any guidance from human heuristics or domain expertise. The large search space (>10^19 states) for these games provides a puzzle environment in which the solution offers few intermediate rewards and a single final move solves the challenge.
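To make "text-archived game notation" concrete, the sketch below shows one hypothetical way a solved Rubik's Cube game could be serialized as a plain-text line suitable for language-model fine-tuning. The field labels, move inversion helper, and line format are illustrative assumptions, not the paper's actual encoding.

```python
# Hypothetical sketch: serializing a scrambled-and-solved Rubik's Cube
# as a single plain-text training line for a GPT-2 style model.
# Moves use standard Singmaster notation (U, R, F, ... with ' for
# counterclockwise and 2 for a half turn).

def invert_move(move):
    """Return the inverse of a single face turn."""
    if move.endswith("2"):      # half turns are self-inverse
        return move
    if move.endswith("'"):      # counterclockwise -> clockwise
        return move[:-1]
    return move + "'"           # clockwise -> counterclockwise

def invert(moves):
    """Invert a move sequence: reverse order, invert each turn."""
    return [invert_move(m) for m in reversed(moves)]

def to_training_line(scramble, solution):
    """Join a scramble and its solution into one notation string
    (the SCRAMBLE:/SOLUTION: labels are an assumed format)."""
    return "SCRAMBLE: " + " ".join(scramble) + " SOLUTION: " + " ".join(solution)

scramble = ["R", "U'", "F2"]
solution = invert(scramble)               # one valid solution: undo the scramble
print(to_training_line(scramble, solution))
# -> SCRAMBLE: R U' F2 SOLUTION: F2 U R'
```

A corpus of such lines, one per solved game, is the kind of flat text archive a decoder-only transformer can be fine-tuned on directly, with no search procedure or hand-coded heuristics.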

Keywords

Natural Language Processing (NLP), Transformers, Game Play, Deep Learning.

Volume 12, Number 9