If you're a new user, don't forget to check out this section for getting started and basic usage!
KataGo has started a new distributed run at https://katagotraining.org/, and this release adds support for the latest and strongest neural nets from there! If you wish to contribute, the run will open for full public contributions soon; for now, you can already try the new nets. For nets from older runs, see https://d3dndmfyhecmj0.cloudfront.net/index.html.
If you don't know which version to choose (OpenCL, CUDA, Eigen, Eigen AVX2), read this: https://github.com/lightvector/KataGo#opencl-vs-cuda-vs-eigen
## Major Engine Changes and Fixes
- Now supports the new neural nets at https://katagotraining.org/, which have an altered format and some new output heads that might be used to improve the search logic in future versions.
- Many internal changes, hopefully including all the critical pieces needed to support public contributions when the distributed run opens shortly, along with many bugfixes and stronger search logic.
- A new subtree value bias correction method has been added to the search, which should be worth somewhere between 20 and 50 Elo at mid-thousands of playouts.
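For intuition, here is a minimal sketch of the general idea, in Python rather than KataGo's actual C++ implementation: nodes that share a local move pattern pool an estimate of how much their raw net evaluation tends to differ from their deeper searched value, and fresh evaluations are nudged by a fraction of that pooled error. The pattern key, visit weighting, and constant below are all hypothetical.

```python
from collections import defaultdict

class SubtreeValueBiasTable:
    """Sketch of subtree value bias correction (not KataGo's real code)."""

    def __init__(self, correction_frac=0.35):  # hypothetical constant
        self.correction_frac = correction_frac
        self.err_sum = defaultdict(float)  # pooled (searched - raw) error per pattern key
        self.wt_sum = defaultdict(float)

    def record(self, pattern_key, raw_value, searched_value, visits):
        # Weight deeper subtrees more, since their searched value is more reliable.
        w = visits ** 0.5
        self.err_sum[pattern_key] += w * (searched_value - raw_value)
        self.wt_sum[pattern_key] += w

    def corrected(self, pattern_key, raw_value):
        # Nudge a fresh evaluation by a fraction of the pooled average error.
        if self.wt_sum[pattern_key] <= 0.0:
            return raw_value
        avg_err = self.err_sum[pattern_key] / self.wt_sum[pattern_key]
        return raw_value + self.correction_frac * avg_err
```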
- Fixed a bug in LCB move selection that prevented LCB from acting on the top-policy move. The fix is worth perhaps around 10 Elo.
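As an illustrative sketch of what LCB move selection does (again Python, not KataGo's code; field names and constants are made up): each sufficiently visited candidate's winrate is discounted by an uncertainty term, and the move with the best lower bound wins over the merely most-visited move.

```python
import math

def select_move_by_lcb(candidates, z=2.0, min_visit_frac=0.1):
    # candidates: list of dicts with "move", "visits" (>= 1), "winrate", "stdev"
    max_visits = max(c["visits"] for c in candidates)

    def lcb(c):
        # Lower confidence bound: winrate minus an uncertainty penalty.
        return c["winrate"] - z * c["stdev"] / math.sqrt(c["visits"])

    best = max(candidates, key=lambda c: c["visits"])  # default: most-visited move
    for c in candidates:
        # Every candidate with enough visits is eligible, including the
        # top-policy move, which the fixed bug had wrongly excluded.
        if c["visits"] >= min_visit_frac * max_visits and lcb(c) > lcb(best):
            best = c
    return best["move"]
```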
- Time control logic has been greatly overhauled and reimplemented. Most of its features are not enabled by default due to uncertainty about the best parameters; they may be given reasonable defaults after more testing in the future. (Anyone interested in running tests or collaborating on further logic tweaks would be welcome!)
- Bugfix to Japanese-like rules that should allow more accurate handling of double-ko-death situations. New nets will also need to gradually adjust to these rules, which may take some more time with the ongoing new run.
- Root symmetry sampling now samples without replacement instead of with replacement, and is capped at 8 (the total number of possible symmetries) instead of 16.
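A toy illustration of the change (hypothetical names): a Go board has exactly 8 symmetries, the 4 rotations each optionally mirrored, so sampling with replacement or requesting more than 8 only duplicates work.

```python
import random

NUM_SYMMETRIES = 8  # 4 rotations x optional mirror = 8 distinct board symmetries

def sample_root_symmetries(n):
    # Sample without replacement, capped at 8, so no symmetry is used twice.
    return random.sample(range(NUM_SYMMETRIES), min(n, NUM_SYMMETRIES))
```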
## Minor Engine Changes and Fixes
- Removed old no-longer-useful search parameter `fpuUseParentAverage`.
- Built-in `katago match` tool's `komiAuto` feature now uses 100 visits per test instead of 20 by default to find a fair komi.
- Built-in `katago match` tool now has some logic to avoid prematurely-early resignation, to be consistent with GTP.
- Fixed a segfault that could happen during config generation in the `katago genconfig` command.
- Fixed bug where analysis engine could sometimes report the `rootInfo` with the wrong side's perspective.
- Fixed bug where priorities outside [-2^31, 2^31-1] would not work properly in the analysis engine.
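Priorities are set via the "priority" field of an analysis engine JSON query. A sketch of such a query follows; the moves, rules, and values here are illustrative only.

```python
import json

query = {
    "id": "example",
    "moves": [["B", "Q16"], ["W", "D4"]],
    "rules": "tromp-taylor",
    "komi": 7.5,
    "boardXSize": 19,
    "boardYSize": 19,
    "priority": 2**33,  # magnitudes beyond 2^31-1 previously misbehaved
}
print(json.dumps(query))  # one JSON query per line on the engine's stdin
```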
- Fixed GTP command `kata-raw-nn` to also report the policy for passing.
## Self-play and Training Changes
- Neural net model version 10 is now the default version, which adds a few new training targets and rebalances all of the weights of the loss function. Training and loss function statistics may not be directly comparable to those of earlier versions.
- Going forward, neural nets newly created with the KataGo python scripts will default to using a 3x3 conv instead of a 5x5 conv for the first layer. This may result in newly-trained nets being very slightly weaker and lower-capacity, and very slightly faster, than old nets. It also greatly reduces memory usage on bigger nets with OpenCL. Existing nets are unaffected (even if v1.8 is used to train them).
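As rough arithmetic for why this shrinks the first layer (the channel counts here are hypothetical, not KataGo's exact configuration):

```python
in_ch, out_ch = 22, 256        # hypothetical input/output channel counts
print(5 * 5 * in_ch * out_ch)  # 140800 weights for a 5x5 first layer
print(3 * 3 * in_ch * out_ch)  # 50688 weights for a 3x3 first layer, ~2.8x fewer
```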
- Fixed bug where hintposes were not adjusted for the initial turn number of the position.
- Improved some SGF startposes file handling so that deeply-branching files can be processed without running out of stack space.
- Fixed bug where a stale root nn policy might suppress a hintpos from taking effect. Hintposes will also do more full searches instead of cheap searches in the few moves after the hint.
- Improved logging of debug output from self-play training, improved SGF file comments for selfplay games, and various internal cleanups.
- The training script now has an option to lock the ratio of train steps to data samples.
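A hypothetical illustration of what such a lock means; the ratio and function below are made up for the sketch, not a real option's default:

```python
def allowed_train_steps(total_data_samples, steps_taken, steps_per_sample=0.25):
    # With the ratio locked, training pauses until enough new data samples
    # have arrived to justify further optimizer steps.
    return max(0, int(total_data_samples * steps_per_sample) - steps_taken)
```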
- Easier usage of initial weights for training: the train script will look for any tensorflow checkpoint and meta files within a directory named "initial_weights" that is a subdirectory of that specific net's training directory.
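For example, a layout like the following would be picked up; the directory and checkpoint names are hypothetical, with the file extensions following tensorflow's usual checkpoint convention:

```
traindir/mynet-b20c256/    <- a specific net's training directory
  initial_weights/
    model.ckpt-1234.data-00000-of-00001
    model.ckpt-1234.index
    model.ckpt-1234.meta
```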
- Deleted some of the old unused model code.
- Just for fun, added some pytorch `genboard` scripts that train a neural net to generate plausible board positions given some existing stones on that board.