Grandmaster-level chess without search

In this paper, the authors present an approach to grandmaster-level chess without explicit search. They train a 270M-parameter transformer on a dataset of 10 million chess games annotated with action-values from Stockfish 16, and the model plays by simply selecting the legal move with the highest predicted value. Despite using no domain-specific modifications or search, the largest model reaches a Lichess blitz Elo of 2895 against humans and outperforms both AlphaZero's policy and value networks (when used without MCTS) and GPT-3.5-turbo-instruct. Extensive ablations over design choices and hyperparameters show that strong chess play only emerges at sufficient model and dataset scale, underscoring how far large attention-based architectures can go on a task long assumed to require search.
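The core idea is easy to picture: score every legal move with one forward pass of the learned action-value predictor and play the argmax, with no search tree at all. Below is a minimal sketch of that loop using python-chess; `predict_win_prob` is a hypothetical stand-in for the trained transformer (stubbed here with a crude material count so the snippet runs) and is not the repo's actual API.

```python
# Search-free move selection in the spirit of the paper: score each
# legal move with a learned action-value predictor, play the argmax.
# No lookahead, no tree -- one model call per candidate move.

import chess

PIECE_VALUES = {
    chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
    chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0,
}

def predict_win_prob(board: chess.Board, move: chess.Move) -> float:
    """Hypothetical stand-in for the transformer's action-value head.

    The real system feeds the position (FEN) and candidate move to the
    model, which outputs a binned win probability. Here we approximate
    with material balance after the move, purely for illustration.
    """
    board.push(move)
    score = sum(
        (len(board.pieces(pt, chess.WHITE))
         - len(board.pieces(pt, chess.BLACK))) * value
        for pt, value in PIECE_VALUES.items()
    )
    board.pop()
    # Material in favor of the side to move stands in for "win prob".
    return score if board.turn == chess.WHITE else -score

def select_move(board: chess.Board) -> chess.Move:
    """One evaluation per legal move, then argmax. No search."""
    return max(board.legal_moves, key=lambda m: predict_win_prob(board, m))

if __name__ == "__main__":
    board = chess.Board()
    print(select_move(board))  # best-scoring opening move under the stub
```

Swapping the material-count stub for the trained 270M model is, conceptually, the whole playing policy: the strength comes entirely from the quality of the predicted action-values, not from any lookahead.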

https://github.com/google-deepmind/searchless_chess