optuna v1.4.0

This is the release note of v1.4.0.

Highlights

Experimental Multi-objective Optimization

Multi-objective optimization is available as an experimental feature. Currently it only provides random sampling, but it will be developed further in upcoming releases. Feedback is highly welcomed. See #1054 for details.

Enhancement of Storages

A new Redis-based storage is available. It is a fast and flexible in-memory storage that can also persist studies to disk without configuring a relational database. It is still experimental, and your feedback is highly welcomed. See #974 for details.

Performance tuning has been applied to RDBStorage. For instance, listing study summaries is now over 3000 times faster (from about 7 minutes to 0.13 seconds). See #1109 for details.

Experimental Integration Modules for MLFlow and AllenNLP

A new callback function is provided for MLFlow users. It reports Optuna’s optimization results (i.e., parameter values and metric values) to MLFlow.
See #1028 for details.

A new integration module for AllenNLP is available. It enables you to reuse your jsonnet configuration files for hyperparameter tuning. See #1086 for details.
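A minimal sketch of such a reusable config, assuming (as in the integration's examples) that tuned values arrive as Jsonnet external variables; the names `lr` and `num_epochs` are illustrative:

```jsonnet
// Tuned hyperparameters arrive as external variables (strings), so they
// are decoded with std.parseJson; the rest of the config stays unchanged.
local lr = std.parseJson(std.extVar('lr'));

{
  trainer: {
    optimizer: { type: 'adam', lr: lr },
    num_epochs: 10,
  },
}
```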

Breaking Changes

  • Delete the argument is_higher_better from TensorFlowPruningHook. (#1083, thanks @nuka137!)
  • Applied @abc.abstractmethod decorator to the abstract methods of BaseTrial and fixed ChainerMNTrial. (#1087, thanks @gorogoroumaru!)
  • Input validation for LogUniformDistribution for negative domains. (#1099)

New Features

  • Added RedisStorage class to support storing activity on Redis. (#974, thanks @pablete!)
  • Add MLFlow integration callback. (#1028, thanks @PhilipMay!)
  • Add study argument to optuna.integration.lightgbm.LightGBMTuner. (#1032)
  • Support multi-objective optimization. (#1054)
  • Add duration into FrozenTrial and DataFrame. (#1071)
  • Support parallel execution of LightGBMTuner. (#1076)
  • Add number property to FixedTrial and BaseTrial. (#1077)
  • Support DiscreteUniformDistribution in suggest_float. (#1081, thanks @himkt!)
  • Add AllenNLP integration. (#1086, thanks @himkt!)
  • Add an argument of max_resource to HyperbandPruner and deprecate n_brackets. (#1138)
  • Improve the trial allocation algorithm of HyperbandPruner. (#1141)
  • Add IntersectionSearchSpace to speed up the search space calculation. (#1142)
  • Implement AllenNLP config exporter to save training config with best_params in study. (#1150, thanks @himkt!)
  • Remove redundancy from HyperbandPruner by deprecating min_early_stopping_rate_low. (#1159)
  • Add pruning interval for KerasPruningCallback. (#1161, thanks @VladSkripniuk!)
  • Support suggest_float with step in multi-objective optimization. (#1205, thanks @nzw0301!)

Enhancements

  • Reseed sampler's random seed generator in Study. (#968)
  • Apply lazy import for optuna.dashboard. (#1074)
  • Applied @abc.abstractmethod decorator to the abstract methods of BaseTrial and fixed ChainerMNTrial. (#1087, thanks @gorogoroumaru!)
  • Refactoring of StudyDirection. (#1090)
  • Refactoring of StudySummary. (#1095)
  • Refactoring of TrialState and FrozenTrial. (#1101)
  • Apply lazy import for optuna.structs to raise DeprecationWarning when using. (#1104)
  • Optimize the get_all_study_summaries function for RDB storages. (#1109)
  • Make single() return True when step or q is greater than high - low. (#1111)
  • Return trial_id at study._append_trial(). (#1114)
  • Use scipy for sampling from truncated normal in TPE sampler. (#1122)
  • Remove unnecessary deep-copies. (#1135)
  • Remove unnecessary shape-conversion and a loop from TPE sampler. (#1145)
  • Support Optuna callback functions at LightGBM Tuner. (#1158)
  • Fix the default value of max_resource to HyperbandPruner. (#1171)
  • Fix the method to calculate n_brackets in HyperbandPruner. (#1188)

Bug Fixes

  • Support Copy-on-Write for thread safety in in-memory storage. (#1139)
  • Fix the range of sampling in TPE sampler. (#1143)
  • Add figure title to contour plot. (#1181, thanks @harupy!)
  • Raise a ValueError that was previously never raised. (#1208, thanks @harupy!)
  • Fix a bug that occurs when multiple callbacks are passed to MultiObjectiveStudy.optimize. (#1209)

Installation

  • Set version constraint on the cmaes library. (#1082)
  • Stop installing PyTorch Lightning if Python version is 3.5. (#1193)
  • Install PyTorch without CPU option on macOS. (#1215, thanks @himkt!)

Documentation

  • Add an example and variable explanations to HyperbandPruner. (#972)
  • Add a reference of cli in the sphinx docs. (#1065)
  • Fix docstring on optuna/integration/*.py. (#1070, thanks @nuka137!)
  • Fix docstring on optuna/distributions.py. (#1089)
  • Remove duplicate description of FrozenTrial.distributions. (#1093)
  • Optuna Read the Docs top page addition. (#1098)
  • Update the outputs of some examples in first.rst. (#1100, thanks @A03ki!)
  • Fix plot_intermediate_values example. (#1103)
  • Use latest sphinx version on RTD. (#1108)
  • Add class doc to TPESampler. (#1144)
  • Fix a markup in pruner page. (#1172, thanks @nzw0301!)
  • Add examples for doctest to optuna/storages/rdb/storage.py. (#1212, thanks @nuka137!)

Examples

  • Update PyTorch Lightning example for 0.7.1 version. (#1013, thanks @festeh!)
  • Add visualization example script. (#1085)
  • Update pytorch_simple.py to suggest lr from suggest_loguniform. (#1112)
  • Rename test datasets in examples. (#1164, thanks @VladSkripniuk!)
  • Fix the metric name in KerasPruningCallback example. (#1218)

Tests

  • Add TPE tests. (#1126)
  • Bundle allennlp test data in the repository. (#1149, thanks @himkt!)
  • Add test for deprecation error of HyperbandPruner. (#1189)
  • Add examples for doctest to optuna/storages/rdb/storage.py. (#1212, thanks @nuka137!)

Code Fixes

  • Update type hinting of GridSampler.__init__. (#1102)
  • Replace mock with unittest.mock. (#1121)
  • Remove unnecessary is_log logic in TPE sampler. (#1123)
  • Remove redundancy from HyperbandPruner by deprecating min_early_stopping_rate_low. (#1159)
  • Use Trial.system_attrs to store LightGBMTuner's results. (#1177)
  • Remove _TimeKeeper and use timeout of Study.optimize. (#1179)
  • Define key names of system_attrs as variables in LightGBMTuner. (#1192)
  • Minor fixes. (#1203, thanks @nzw0301!)
  • Move colorlog after threading. (#1211, thanks @himkt!)
  • Pass IntUniformDistribution's step to UniformIntegerHyperparameter's q. (#1222, thanks @nzw0301!)

Continuous Integration

  • Create dockerimage.yml. (#901)
  • Add notebook verification for visualization examples. (#1088)
  • Avoid installing torch with CUDA in CI. (#1118)
  • Avoid installing torch with CUDA in CI by locking version. (#1124)
  • Constraint llvmlite version for Python 3.5. (#1152)
  • Skip GitHub Actions builds on forked repositories. (#1157, thanks @himkt!)
  • Fix --cov option for pytest. (#1187, thanks @harupy!)
  • Unique GitHub Actions step name. (#1190)

Other

  • GitHub Actions to automatically label PRs. (#1068)
  • Revert "GitHub Actions to automatically label PRs.". (#1094)
  • Remove settings for yapf. (#1110, thanks @himkt!)
  • Update pull request template. (#1113)
  • GitHub Actions to automatically label stale issues and PRs. (#1116)
  • Upgrade actions/stale to never close ticket. (#1131)
  • Run actions/stale on weekday mornings Tokyo time. (#1132)
  • Simplify pull request template. (#1147)
  • Use major version instead of semver for stale. (#1173, thanks @hross!)
  • Do not label contribution-welcome and bug issues as stale. (#1216)
