Releases: kiudee/chess-tuning-tools

chess-tuning-tools 0.7.3 (10 July 2021)


0.7.3 (2021-06-27)

Local tuner

  • Add a --fast-resume switch to the tuner, enabling instant resume from
    disk (new default).
  • Fix the match parser producing incorrect results when concurrency > 1 is
    used for playing matches.
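
The idea behind fast resume can be sketched as follows. This is a minimal illustration with hypothetical helper names, not the tuner's actual code: the optimizer state is serialized to disk after each iteration, so a restart can reload it directly instead of replaying the match log.

```python
import pickle
from pathlib import Path

STATE_FILE = Path("optimizer_state.pkl")


def save_state(optimizer):
    """Persist the optimizer after each iteration so restarts are instant."""
    with STATE_FILE.open("wb") as f:
        pickle.dump(optimizer, f)


def load_state():
    """Return the previously saved optimizer, or None if no state exists."""
    if STATE_FILE.exists():
        with STATE_FILE.open("rb") as f:
            return pickle.load(f)
    return None
```

On startup the tuner would call load_state() first and only fall back to a full replay when no saved state is found.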

Distributed tuning framework

  • The distributed tuning framework is no longer deprecated.
  • Add a --run-only-once flag to the distributed tuning client. If set, the
    client terminates after completing one job, or immediately if no job is
    found.
  • Add a --skip-benchmark flag to the distributed tuning client. If set, the
    client skips the calibration of the time control, which involves running
    a benchmark for both engines.
  • The tuning server of the distributed tuning framework now also saves the
    optimizer object.
  • The tuning server now uses the updated pentanomial model, including noise
    estimation.
  • warp_inputs can now be passed to the tuning server via the database.
  • Fix the distributed tuning server not sorting the data by job ID, which
    caused the model to be fit with randomly permuted scores.
  • Fix the distributed tuning server attempting to compute the current
    optimum before a model has been fit.
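
The job-ID sorting fix addresses a subtle alignment bug: results arrive from workers in arbitrary order, so scores must be re-matched to their parameter points before the model is fit. A minimal sketch with a hypothetical data layout (not the server's actual schema):

```python
def align_results(points, results):
    """Sort worker results by job ID so scores line up with their points.

    points:  dict mapping job ID -> parameter point, in job-creation order.
    results: list of (job_id, score) pairs in arbitrary arrival order.
    Returns (xs, ys) suitable for fitting, with scores correctly paired.
    """
    ordered = sorted(results, key=lambda r: r[0])
    xs = [points[job_id] for job_id, _ in ordered]
    ys = [score for _, score in ordered]
    return xs, ys
```

Without the sort, fitting directly on the arrival order would pair each parameter point with another job's score, which is exactly the "randomly permuted scores" failure described above.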

chess-tuning-tools 0.7.3-rc1 (27 June 2021)


Changes

  • Add a --fast-resume switch to the tuner, enabling instant resume from
    disk (new default).
  • Add a --run-only-once flag to the distributed tuning client. If set, the
    client terminates after completing one job, or immediately if no job is
    found.
  • Add a --skip-benchmark flag to the distributed tuning client. If set, the
    client skips the calibration of the time control, which involves running
    a benchmark for both engines.
  • The tuning server of the distributed tuning framework now also saves the
    optimizer object.
  • Fix the match parser producing incorrect results when concurrency > 1 is
    used for playing matches.
  • Fix the distributed tuning server attempting to compute the current
    optimum before a model has been fit.
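
The --run-only-once semantics described above can be sketched as a small client loop. Function names here are hypothetical stand-ins for the client's job-fetching and job-running logic, not its actual code:

```python
import time


def client_loop(fetch_job, run_job, run_only_once=False, poll_interval=30):
    """Fetch and run tuning jobs from the server.

    With run_only_once, terminate after completing a single job, or
    immediately if no job is available; otherwise keep polling forever.
    """
    while True:
        job = fetch_job()
        if job is None:
            if run_only_once:
                return  # terminate immediately: no job found
            time.sleep(poll_interval)
            continue
        run_job(job)
        if run_only_once:
            return  # terminate after completing one job
```

This shape makes the client easy to drive from external schedulers (e.g. cron or a batch system), which is the usual motivation for a run-once mode.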