github lightvector/KataGo v1.13.0
Better models and search and training, many improvements


If you're a new user, this section has tips for getting started and basic usage! If you don't know which version to choose (OpenCL, CUDA, TensorRT, Eigen, Eigen AVX2), see here.

KataGo is continuing to improve at https://katagotraining.org/ and if you'd like to donate your spare GPU cycles and support it, it could use your help there!

If you use the TensorRT version, download it from v1.13.1 instead, a quick bugfix release specific to TensorRT. The fix should only matter for users doing custom builds, but for clarity it has been minted as a new release.

You can find the latest neural nets at https://katagotraining.org/. This release also features a somewhat outdated but stronger net that uses a new "optimistic policy" head, attached below. The latest nets at katagotraining.org will also start including this improvement soon.

Attached here are "bs29" versions of KataGo. These are just for fun, and don't support distributed training but DO support board sizes up to 29x29. They may also be slower and will use much more memory, even when only playing on 19x19, so you should use them only when you really want to try large boards.

The Linux executables were compiled on an Ubuntu 20.04 machine. Some users have encountered compatibility issues with libzip or other libraries in the past. If you run into this, you may be able to work around it by compiling from source, which is usually not so hard on Linux; see the "TLDR" instructions for Linux here.

Changes in v1.13.0

Modeling improvements

  • Optimistic policy - improved policy head that is biased to look more for unexpectedly good moves. A one-off neural net using this policy head is attached below, KataGo's main nets at https://katagotraining.org/ will begin including the new head soon as well.

  • Softplus error scaling - added support for new squared softplus activations for the value and score error predictions, along with adjusted scaling of the gradients and post-activations for those predictions. This should fix some rare outliers where these predictions were overconfident, as well as large prediction magnitudes that could make training less stable. (A minimal sketch of the activation follows below.)
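
As a rough illustration: a minimal sketch of the activation in Python, assuming "squared softplus" means applying softplus and then squaring. The real head and loss details live in KataGo's python training code; the function names here are illustrative only.

```python
import math

def softplus(x):
    # Numerically stable softplus: log(1 + e^x) = max(x, 0) + log1p(e^(-|x|))
    return max(x, 0.0) + math.log1p(math.exp(-abs(x)))

def squared_softplus(x):
    # Stays positive and smooth near zero, and grows like x^2 for large x.
    return softplus(x) ** 2
```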

Search improvements

  • Fixed a bug in determining the baseline top move for policy target pruning, which could cause KataGo at low playouts on small boards to sometimes play extremely bad moves (e.g. the 1-1 point).

  • For GTP and analysis, KataGo will automatically cap the number of threads at about 1/8th the number of playouts being performed. This prevents the worst cases where accidentally misconfiguring KataGo to use many threads destroys search quality when testing at low settings. To override the cap, set the config parameter minPlayoutsPerThread (see the sketch below).
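
For illustration, the cap behaves roughly like the sketch below. This is not KataGo's actual C++, and the default of 8 playouts per thread is an assumption consistent with the "about 1/8th" figure above.

```python
def effective_num_threads(configured_threads, max_playouts, min_playouts_per_thread=8):
    # Cap threads so that each thread gets at least min_playouts_per_thread playouts.
    cap = max(1, max_playouts // min_playouts_per_thread)
    return min(configured_threads, cap)

# e.g. numSearchThreads = 64 but only 100 playouts: search runs with 12 threads.
print(effective_num_threads(64, 100))  # -> 12
```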

KataGo GTP/match changes

These are changes relevant to users running bots online or running internal test matches via katago match.

  • Added support for automatically biasing KataGo to avoid moves it played recently in earlier games, giving more move variety for online bots. See the "Automatic avoid patterns" section in cpp/configs/gtp_example.cfg.

  • Updated the behavior of ogsChatToStderr=true for gtp2ogs version 8.x.x (https://github.com/online-go/gtp2ogs), for running KataGo on OGS.

  • Added a new config parameter gtpForceMaxNNSize. It may reduce performance on small boards, but it avoids a lengthy re-initialization whenever the board size changes, which is useful for clients that may toggle the board size on every turn, such as gtp2ogs 8.x.x's pooling manager.

  • Fixed a segfault with extraPairs when using katago match to run round-robin matches (#777), removed support for blackPriority pairing logic, and added extraPairsAreOneSidedBW to allow one-sided colors for matches.

Analysis engine

Python code and training script changes

These are relevant to users running training/selfplay. There are many minor changes to some of the python training scripts and bash scripts in this release. Please make backups and test carefully when upgrading your training process, in case anything breaks your use case!

  • Model configs now support version "12" (corresponding to the optimistic policy above) and versions "13" and "14" (corresponding to the softplus error scaling above). Experimental scripts migrate_optimistic_policy.py, migrate_softplus_fix.py, and migrate_squared_softplus.py are provided in python/ for upgrading an old version "11" model. You will also need to train further after upgrading to get the model to re-converge.

  • The training python code python/train.py now defaults to many parameter values that KataGo's main run was using and that were tested to be effective, but that were NOT the defaults before. Be advised that upgrading an existing training run to v1.13.0 may change various parameters due to the new defaults, possibly improving them, but changing them nonetheless.

  • Altered the format of the summary json file output by python/summarize_old_selfplay_files.py, which is called by the shuffler script python/selfplay/shuffle_loop.sh to cache data and avoid searching every directory on every shuffle. The new format tracks directory mtimes, avoiding some cases where new data might be missed (see the sketch after this list). For existing training runs, the new scripts should seamlessly load the old format and upgrade it to the new format; however, after that happens, pre-v1.13.0 training code will no longer be able to read the upgraded file if you then try to downgrade.

  • Rewrote python/selfplay/synchronous_loop.sh to copy and run everything out of a dated directory, so that concurrent changes to the git repo checkout don't affect an ongoing run. It also now uses the flag -max-train-bucket-per-new-data and other flags to better prevent overfitting, without having to so carefully balance the sizes of games vs training epochs.

  • Overhauled the documentation on selfplay training to be current with the new pytorch training introduced in the earlier v1.12.x releases, and to recommend -max-train-bucket-per-new-data and related parameters that were not previously highlighted, which give much easier control over the relative speed of selfplay vs training.

  • Removed confusing logic in the C++ code that split out part of its data as validation data (the maxRowsPerValFile and validationProp parameters in selfplay cfg files no longer exist); it was not actually used by the training scripts. Instead, the shuffle script python/selfplay/shuffle.sh continues to hold out a random 5% of files, at the level of whole npz data files, which can be a bit chunky if you have too few files (see the sketch after this list). To disable this behavior and just train on all of the data, pass the environment variable SKIP_VALIDATE=1 to shuffle.sh.

  • Removed support for self-distillation in python/train.py.

  • Significantly optimized shuffling performance for large numbers of files in python/shuffle.py.

  • Fixed a bug in the shuffler's internal file naming that prevented it from shuffling .npz files that were themselves produced by an earlier shuffle.

  • Fixed a bug in python/train.py where -no-repeat-files didn't always prevent repeats.

  • The selfplay process now also accepts hintpos files ending in .bookposes.txt and .startposes.txt, rather than only .hintposes.txt.

  • Removed an unnecessary, unused, and outdated copy of sgfmill from this repo. Install it via pip if you need it.

  • Standardized python indentation to 4 spaces.

  • Various other flags and minor cleanups for various scripts.
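
Regarding the shuffler summary cache above: a minimal sketch of the general mtime-tracking idea, assuming a simple JSON mapping from directory to its last-seen mtime and file list. This is not KataGo's actual code or schema; all names are hypothetical.

```python
import json, os

def update_summary_cache(root, cache_path):
    # Load the existing cache: {dirpath: {"mtime": float, "files": [...]}}
    try:
        with open(cache_path) as f:
            cache = json.load(f)
    except FileNotFoundError:
        cache = {}
    for dirpath, _, filenames in os.walk(root):
        mtime = os.path.getmtime(dirpath)
        entry = cache.get(dirpath)
        if entry is None or entry["mtime"] != mtime:
            # Directory is new or changed: rescan it for .npz data files.
            files = sorted(f for f in filenames if f.endswith(".npz"))
            cache[dirpath] = {"mtime": mtime, "files": files}
    with open(cache_path, "w") as f:
        json.dump(cache, f)
    return cache
```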
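
And regarding the validation split above: a minimal sketch of holding out a random ~5% of whole .npz files, which is why the split can be chunky when there are few files. Function names are illustrative, not KataGo's.

```python
import random

def split_train_val(npz_files, val_prop=0.05, seed=None):
    # Shuffle and hold out whole files, so validation data comes in
    # file-sized chunks rather than individual rows.
    rng = random.Random(seed)
    files = list(npz_files)
    rng.shuffle(files)
    num_val = int(round(len(files) * val_prop))
    return files[num_val:], files[:num_val]  # (train, val)
```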

Training logic changes

  • KataGo now clamps komi less aggressively when randomly initializing the rules for training games, allowing more games to teach the net about extreme komi.

  • Added a few more bounds on recorded scores for training.

Book generation changes

These are relevant to users using katago genbook to build opening books or tsumego variation books. See cpp/configs/book/genbook7jp.cfg for an example config.

  • Added new config parameters bonusPerUnexpandedBestWinLoss, earlyBookCostReductionFactor, and earlyBookCostReductionLambda, for exploring high-value unexplored moves and for expanding more bad early moves so as to explore optimal play after deliberately bad openings.

  • Added support for expanding multiple book nodes per search, which should be more efficient for generating large books. See new parameters minTreeVisitsToRecord etc. in the example config.

  • Added some other minor book-specific search parameters.

  • Fixed a bug where the book would report nonsensical error estimates when generated with old KataGo nets that don't support error estimates.

Other bugfixes and cleanups

  • Running contribute will no longer echo the configured password into debug output / logs.

  • Upgraded the half library to 2.2 to fix an issue on some systems (#755).

  • Various minor fixes (#787, #749, #778).

  • Various updates to documentation in this repo.

  • Various other cleanups, feature additions, and modifications to internal code and tools.
