github lightvector/KataGo v1.16.4
Experimental Eval Cache, Bugfixes

10 hours ago

If you're a new user, this section has tips for getting started and basic usage! If you don't know which version to choose (OpenCL, CUDA, TensorRT, Eigen, Eigen AVX2), see here.

Download the latest neural nets to use with this engine release at https://katagotraining.org/.
Also, for 9x9 boards or for boards larger than 19x19, see https://katagotraining.org/extra_networks/ for networks specially trained for those sizes!

KataGo is continuing to improve at https://katagotraining.org/ and if you'd like to donate your spare GPU cycles and support it, it could use your help there!

Notes about Precompiled Exes in this Release

For CUDA and TensorRT, the executables attached below are labeled with the versions of the libraries they are built for. E.g. trt10.2.0 for TensorRT 10.2.0.x, or cuda12.5 for CUDA 12.5.x, etc. It's recommended that you install and run these with the matching versions of CUDA and TensorRT rather than trying to run with different versions.

The OpenCL version will more often work out of the box: it only requires a semi-modern GPU or other hardware accelerator with appropriate drivers installed, whether Nvidia or non-Nvidia, and no specific library versions, although it may be a bit less performant.

Also available below are both the standard and +bs50 versions of KataGo. The +bs50 versions are just for fun; they don't support distributed training but DO support board sizes up to 50x50. They may also be slightly slower and will use much more memory, even when only playing on 19x19, so use them only when you really want to try large boards.

The Linux executables were compiled on an Ubuntu 22.04 machine using AppImage. You will still need to install, e.g., the correct versions of CUDA/TensorRT or have drivers for OpenCL, etc. on your own. Compiling from source is also not so hard on Linux, see the "TLDR" instructions for Linux here.
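As a rough sketch, a from-source Linux build along the lines of those "TLDR" instructions looks like the following (the OpenCL backend is shown here as an assumed choice; KataGo's CMake accepts other `USE_BACKEND` values such as CUDA, TENSORRT, or EIGEN):

```shell
# Clone the repository and build the C++ engine with the OpenCL backend.
# Swap -DUSE_BACKEND=OPENCL for CUDA, TENSORRT, or EIGEN as appropriate.
git clone https://github.com/lightvector/KataGo.git
cd KataGo/cpp
cmake . -DUSE_BACKEND=OPENCL
make -j"$(nproc)"

# Sanity-check the resulting binary.
./katago version
```

You will still need a neural net file from https://katagotraining.org/ before running the engine itself.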

Changes this Release

  • Added an experimental eval-caching feature, NOT enabled by default yet. Enable it by setting useEvalCache=true in gtp.cfg or analysis.cfg. During interactive analysis, if you walk deeper into a variation and KataGo discovers a good move that was a blind spot, then when you walk back to an earlier point, the search will be far more likely to solve that tactic there as well, and can analyze the earlier position in light of the newly solved tactic.
  • Subtree value bias now no longer applies to nodes following passing, to avoid conflating evals that don't have a discriminating local pattern.
  • Fixed an issue in contributing to distributed training where unnecessary locking might block starting new games while old games were uploaded.
  • Fixed some typos that could cause the Python testing script python/play.py, or getting input features in Python, to crash.
  • Various internal refactors and cleanups
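To try the experimental cache, the fragment below is a minimal sketch of the one setting named above, added to an existing gtp.cfg or analysis.cfg (all other settings keep their usual values):

```
# Experimental in v1.16.4, off by default: cache evals so tactics
# solved deeper in a variation carry back to earlier positions.
useEvalCache = true
```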
