If you're a new user, don't forget to check out this section for getting started and basic usage! If you don't know which version to choose (OpenCL, CUDA, TensorRT, Eigen, Eigen AVX2), read this: https://github.com/lightvector/KataGo#opencl-vs-cuda-vs-tensorrt-vs-eigen
Also, KataGo is continuing to improve at https://katagotraining.org/ and if you'd like to donate your spare GPU cycles and support it, it could use your help there!
The latest neural nets to use with the engine in this release are downloadable at https://katagotraining.org/ except for the b18-uec net, described and attached to this release below.
Users of the TensorRT version upgrading to this version of KataGo from v1.12.2 or earlier will also need to upgrade from TensorRT 8.2 to TensorRT 8.5.
As before, attached here are "bs29" versions of KataGo. These are just for fun, and don't support distributed training but DO support board sizes up to 29x29. They may also be slower and will use much more memory, even when only playing on 19x19, so you should use them only when you really want to try large boards.
The Linux executables were compiled on an old 18.04 Ubuntu machine. As with older releases, they might not work, and it may be more reliable to build KataGo from source yourself, which fortunately is usually not so hard on Linux (https://github.com/lightvector/KataGo/blob/master/Compiling.md).
## New Neural Net Architecture Support (release series v1.12.x)
As with prior releases in the v1.12.x series, this release of KataGo supports a new neural net architecture! See the release notes for v1.12.0 for details. The new neural net, "b18c384nbt", is also attached to this release for convenience. For general analysis use, it should be similar in quality to recent 60-block models, but run significantly faster due to being a smaller net. Other recent trained nets can be downloaded from https://katagotraining.org/.
## What's Changed in v1.12.4
This release, v1.12.4, fixes a variety of small bugs and behavioral oddities in KataGo, resolving some rare issues in analysis and improving the training data:
- Added a crude hack to mitigate an issue where, in positions with a large misevaluation by the raw net that search could normally fix, if the search happened to try the unlikely move of passing, the opponent passing in response could prevent or greatly delay the search from converging to the correct evaluation. Controlled by the new config parameter `enablePassingHacks`, which defaults to true for GTP and analysis and false elsewhere.
- Changed the search to be more aware of the difference between computer-like rulesets, which require capturing dead stones before the game ends, and human-like rulesets, which don't, when the game end is triggered within variations of the search itself. This and the above passing hack are intended to address a rare behavioral oddity in recent KataGo versions, newly discovered in the week or two prior to this release.
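For users who maintain a hand-edited config, the new parameter can also be set explicitly. A minimal sketch of the relevant line in KataGo's `key = value` config format (the comment text here is illustrative, not copied from the shipped example configs):

```
# Added in v1.12.4: mitigates the pass-related misevaluation issue described above.
# Defaults to true for GTP and analysis, false elsewhere.
enablePassingHacks = true
```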
- Fixed a bug where komi was accidentally initialized with its sign inverted when generating training data from existing board positions where White moved first rather than Black.
- Fixed a bug where when hiding the history for the input to the neural net, the historical ladder status of stones would not get hidden, leaking information about past history.
- Fixed a bug in parsing the komi in certain rules strings (thanks @hzyhhzy).
- Updated the `genconfig` command's generated configs to match the new formatting and inline documentation for GTP configs introduced in an earlier release.
- Minor fixes and features for the tools for generating and handling hint positions for custom training.