These are the release notes for v1.4.0.
Highlights
Experimental Multi-objective Optimization
Multi-objective optimization is available as an experimental feature. It currently provides only random sampling, but it will be developed continuously over the coming releases. Feedback is highly welcomed. See #1054 for details.
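For example, here is a minimal sketch of the experimental API; the module and function names below follow our reading of #1054 and may change while the feature is experimental:

```python
import optuna


def objective(trial):
    x = trial.suggest_float("x", 0.0, 5.0)
    y = trial.suggest_float("y", 0.0, 3.0)
    # Return one value per objective as a tuple.
    return x ** 2 + y, (x - 2) ** 2 + (y - 1) ** 2


# One direction per objective; only random sampling is available in this release.
study = optuna.multi_objective.create_study(["minimize", "minimize"])
study.optimize(objective, n_trials=100)
```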
Enhancement of Storages
A new Redis-based storage is available. It is a fast and flexible in-memory storage that can also persist studies to disk without configuring a relational database. It is still an experimental feature, and your feedback is highly welcomed. See #974 for details.
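A minimal sketch, assuming a Redis server running on localhost at the default port (the constructor takes a Redis URL, per #974):

```python
import optuna

# Connect to a local Redis server; database 0 is assumed here.
storage = optuna.storages.RedisStorage("redis://localhost:6379/0")

study = optuna.create_study(storage=storage)
study.optimize(lambda trial: trial.suggest_uniform("x", -10, 10) ** 2, n_trials=10)
```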
Performance tuning has been applied to `RDBStorage`. For instance, listing study summaries is now over 3000 times faster (from roughly 7 minutes down to 0.13 seconds). See #1109 for details.
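The optimized code path is the one behind `optuna.get_all_study_summaries`; a quick way to exercise it, assuming an existing SQLite storage file:

```python
import optuna

# Fetch summaries for every study in the storage (the call sped up by #1109).
summaries = optuna.get_all_study_summaries(storage="sqlite:///example.db")
for summary in summaries:
    print(summary.study_name, summary.n_trials)
```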
Experimental Integration Modules for MLFlow and AllenNLP
A new callback function is provided for MLFlow users. It reports Optuna’s optimization results (i.e., parameter values and metric values) to MLFlow.
See #1028 for details.
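A minimal sketch, assuming the callback is exposed as `optuna.integration.MLflowCallback` as in #1028; with no tracking URI configured, MLFlow records runs to the local `./mlruns` directory:

```python
import optuna
from optuna.integration import MLflowCallback


def objective(trial):
    x = trial.suggest_uniform("x", -10, 10)
    return (x - 2) ** 2


# The callback logs each finished trial's parameters and objective value.
mlflow_callback = MLflowCallback()
study = optuna.create_study()
study.optimize(objective, n_trials=10, callbacks=[mlflow_callback])
```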
A new integration module for AllenNLP is available. It enables you to reuse your jsonnet configuration files for hyperparameter tuning. See #1086 for details.
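A rough sketch of how the executor is used; the constructor arguments here (the trial, the config file path, and a serialization directory) are our reading of #1086:

```python
import optuna
from optuna.integration import AllenNLPExecutor


def objective(trial):
    # Values suggested here are substituted into the jsonnet config.
    trial.suggest_float("dropout", 0.0, 0.5)
    executor = AllenNLPExecutor(
        trial,
        "config.jsonnet",               # your existing AllenNLP config file
        "result/" + str(trial.number),  # per-trial serialization directory
    )
    return executor.run()  # returns the validation metric to optimize


study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
```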
Breaking Changes
- Delete the argument `is_higher_better` from `TensorFlowPruningHook`. (#1083, thanks @nuka137!)
- Applied the `@abc.abstractmethod` decorator to the abstract methods of `BaseTrial` and fixed `ChainerMNTrial`. (#1087, thanks @gorogoroumaru!)
- Input validation for `LogUniformDistribution` for negative domains (see the sketch below). (#1099)
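To illustrate the new validation, a sketch (the exact exception type is an assumption based on #1099):

```python
import optuna

# log(x) is undefined for x <= 0, so a non-positive domain is now rejected
# at construction time (we expect a ValueError).
optuna.distributions.LogUniformDistribution(low=-1.0, high=1.0)
```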
New Features
- Added `RedisStorage` class to support storing activity on Redis. (#974, thanks @pablete!)
- Add MLFlow integration callback. (#1028, thanks @PhilipMay!)
- Add `study` argument to `optuna.integration.lightgbm.LightGBMTuner`. (#1032)
- Support multi-objective optimization. (#1054)
- Add duration into `FrozenTrial` and `DataFrame`. (#1071)
- Support parallel execution of `LightGBMTuner`. (#1076)
- Add `number` property to `FixedTrial` and `BaseTrial`. (#1077)
- Support `DiscreteUniformDistribution` in `suggest_float` (see the sketch after this list). (#1081, thanks @himkt!)
- Add AllenNLP integration. (#1086, thanks @himkt!)
- Add an argument of `max_resource` to `HyperbandPruner` and deprecate `n_brackets`. (#1138)
- Improve the trial allocation algorithm of `HyperbandPruner`. (#1141)
- Add `IntersectionSearchSpace` to speed up the search space calculation. (#1142)
- Implement AllenNLP config exporter to save training config with `best_params` in study. (#1150, thanks @himkt!)
- Remove redundancy from `HyperbandPruner` by deprecating `min_early_stopping_rate_low`. (#1159)
- Add pruning interval for `KerasPruningCallback`. (#1161, thanks @VladSkripniuk!)
- Support `suggest_float` with step in `multi_objective`. (#1205, thanks @nzw0301!)
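As referenced above, a minimal sketch of `suggest_float` with a step (#1081); passing `step` makes the parameter discrete:

```python
import optuna


def objective(trial):
    # With step=0.1, x is sampled from {0.0, 0.1, ..., 1.0}
    # (a DiscreteUniformDistribution under the hood).
    x = trial.suggest_float("x", 0.0, 1.0, step=0.1)
    return (x - 0.4) ** 2


study = optuna.create_study()
study.optimize(objective, n_trials=20)
```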
Enhancements
- Reseed sampler's random seed generator in `Study`. (#968)
- Apply lazy import for `optuna.dashboard`. (#1074)
- Applied the `@abc.abstractmethod` decorator to the abstract methods of `BaseTrial` and fixed `ChainerMNTrial`. (#1087, thanks @gorogoroumaru!)
- Refactoring of `StudyDirection`. (#1090)
- Refactoring of `StudySummary`. (#1095)
- Refactoring of `TrialState` and `FrozenTrial`. (#1101)
- Apply lazy import for `optuna.structs` to raise `DeprecationWarning` when it is used. (#1104)
- Optimize the `get_all_study_summaries` function for RDB storages. (#1109)
- `single()` returns `True` when `step` or `q` is greater than `high - low`. (#1111)
- Return `trial_id` from `study._append_trial()`. (#1114)
- Use `scipy` for sampling from the truncated normal in the TPE sampler. (#1122)
- Remove unnecessary deep-copies. (#1135)
- Remove unnecessary shape-conversion and a loop from the TPE sampler. (#1145)
- Support Optuna callback functions in LightGBM Tuner. (#1158)
- Fix the default value of `max_resource` in `HyperbandPruner` (see the sketch after this list). (#1171)
- Fix the method to calculate `n_brackets` in `HyperbandPruner`. (#1188)
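As noted in the `HyperbandPruner` items above, a sketch of how the reworked arguments fit together (keyword names follow #1138; `n_brackets` is now derived rather than passed directly):

```python
import optuna

# max_resource typically matches the maximum number of training steps or
# epochs reported via trial.report(); the number of brackets is computed
# from min_resource, max_resource, and reduction_factor.
pruner = optuna.pruners.HyperbandPruner(
    min_resource=1, max_resource=100, reduction_factor=3
)
study = optuna.create_study(pruner=pruner)
```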
Bug Fixes
- Support Copy-on-Write for thread safety in in-memory storage. (#1139)
- Fix the range of sampling in TPE sampler. (#1143)
- Add figure title to contour plot. (#1181, thanks @harupy!)
- Raise a `ValueError` that was previously not raised. (#1208, thanks @harupy!)
- Fix a bug that occurs when multiple callbacks are passed to `MultiObjectiveStudy.optimize`. (#1209)
Installation
- Set a version constraint on the `cmaes` library. (#1082)
- Stop installing PyTorch Lightning if the Python version is 3.5. (#1193)
- Install PyTorch without CPU option on macOS. (#1215, thanks @himkt!)
Documentation
- Add an Example and Variable Explanations to `HyperbandPruner`. (#972)
- Add a reference for the CLI in the Sphinx docs. (#1065)
- Fix docstrings in `optuna/integration/*.py`. (#1070, thanks @nuka137!)
- Fix docstring in `optuna/distributions.py`. (#1089)
- Remove duplicate description of `FrozenTrial.distributions`. (#1093)
- Optuna Read the Docs top page addition. (#1098)
- Update the outputs of some examples in `first.rst`. (#1100, thanks @A03ki!)
- Fix `plot_intermediate_values` example. (#1103)
  - Thanks @barneyhill for creating the original pull request #1050!
- Use the latest `sphinx` version on RTD. (#1108)
- Add class doc to `TPESampler`. (#1144)
- Fix a markup in the pruner page. (#1172, thanks @nzw0301!)
- Add examples for doctest to `optuna/storages/rdb/storage.py`. (#1212, thanks @nuka137!)
Examples
- Update the PyTorch Lightning example for version 0.7.1. (#1013, thanks @festeh!)
- Add visualization example script. (#1085)
- Update `pytorch_simple.py` to suggest `lr` using `suggest_loguniform`. (#1112)
- Rename test datasets in examples. (#1164, thanks @VladSkripniuk!)
- Fix the metric name in the `KerasPruningCallback` example. (#1218)
Tests
- Add TPE tests. (#1126)
- Bundle allennlp test data in the repository. (#1149, thanks @himkt!)
- Add a test for the deprecation error of `HyperbandPruner`. (#1189)
- Add examples for doctest to `optuna/storages/rdb/storage.py`. (#1212, thanks @nuka137!)
Code Fixes
- Update type hinting of `GridSampler.__init__`. (#1102)
- Replace `mock` with `unittest.mock`. (#1121)
- Remove unnecessary `is_log` logic in the TPE sampler. (#1123)
- Remove redundancy from `HyperbandPruner` by deprecating `min_early_stopping_rate_low`. (#1159)
- Use `Trial.system_attrs` to store `LightGBMTuner`'s results. (#1177)
- Remove `_TimeKeeper` and use the `timeout` of `Study.optimize`. (#1179)
- Define key names of `system_attrs` as variables in `LightGBMTuner`. (#1192)
- Minor fixes. (#1203, thanks @nzw0301!)
- Move `colorlog` after `threading`. (#1211, thanks @himkt!)
- Pass `IntUniformDistribution`'s step to `UniformIntegerHyperparameter`'s `q`. (#1222, thanks @nzw0301!)
Continuous Integration
- Create dockerimage.yml. (#901)
- Add notebook verification for visualization examples. (#1088)
- Avoid installing `torch` with CUDA in CI. (#1118)
- Avoid installing `torch` with CUDA in CI by locking the version. (#1124)
- Constrain the `llvmlite` version for Python 3.5. (#1152)
- Skip GitHub Actions builds on forked repositories. (#1157, thanks @himkt!)
- Fix the `--cov` option for pytest. (#1187, thanks @harupy!)
- Use unique GitHub Actions step names. (#1190)
Other
- GitHub Actions to automatically label PRs. (#1068)
- Revert "GitHub Actions to automatically label PRs.". (#1094)
- Remove settings for yapf. (#1110, thanks @himkt!)
- Update pull request template. (#1113)
- GitHub Actions to automatically label stale issues and PRs. (#1116)
- Upgrade `actions/stale` to never close tickets. (#1131)
- Run `actions/stale` on weekday mornings, Tokyo time. (#1132)
- Simplify the pull request template. (#1147)
- Use major version instead of semver for stale. (#1173, thanks @hross!)
- Do not label `contribution-welcome` and `bug` issues as stale. (#1216)