These are the release notes of Optuna v2.9.0.
Help us create the next version of Optuna! Please take a few minutes to fill in this survey, and let us know how you use Optuna now and what improvements you'd like. https://forms.gle/TtJuuaqFqtjmbCP67
# Highlights
## Ask-and-Tell CLI: Optuna from the Command Line
The built-in CLI, which you could previously use to upgrade storages or check the installed version with `optuna --version`, now provides experimental subcommands for the Ask-and-Tell interface. It is now possible to optimize with Optuna entirely from the CLI, without writing a single line of Python.
### Ask with `optuna ask`
Ask for parameters using `optuna ask`, specifying the search space, storage, study name, sampler, and optimization direction. The parameters and the associated trial number can be output as either JSON or YAML.
The following example outputs the result and pipes it to a YAML file.
```console
$ optuna ask --storage sqlite:///mystorage.db \
    --study-name mystudy \
    --sampler TPESampler \
    --sampler-kwargs '{"multivariate": true}' \
    --search-space '{"x": {"name": "UniformDistribution", "attributes": {"low": 0.0, "high": 1.0}}, "y": {"name": "CategoricalDistribution", "attributes": {"choices": ["foo", "bar"]}}}' \
    --direction minimize \
    --out yaml \
    > out.yaml
[I 2021-07-30 15:56:50,774] A new study created in RDB with name: mystudy
[I 2021-07-30 15:56:50,808] Asked trial 0 with parameters {'x': 0.21492964898919975, 'y': 'foo'} in study 'mystudy' and storage 'sqlite:///mystorage.db'.
$ cat out.yaml
trial:
  number: 0
  params:
    x: 0.21492964898919975
    y: foo
```
Specify multiple whitespace-separated directions for multi-objective optimization.
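To consume the `ask` output from a script, you can parse it with standard tooling. Below is a minimal sketch using JSON output (`--out json`). It assumes the JSON mirrors the `trial`/`number`/`params` structure of the YAML shown above; the helper function name is our own, not part of Optuna.

```python
import json


def parse_ask_output(text):
    """Extract trial number and parameters from `optuna ask --out json` output.

    Assumes the JSON mirrors the YAML structure shown above:
    {"trial": {"number": ..., "params": {...}}}.
    """
    data = json.loads(text)
    trial = data["trial"]
    return trial["number"], trial["params"]


# Example using the asked trial shown above, rendered as JSON.
example = '{"trial": {"number": 0, "params": {"x": 0.21492964898919975, "y": "foo"}}}'
number, params = parse_ask_output(example)
print(number, params["y"])  # 0 foo
```

The returned trial number is what you later pass to `optuna tell --trial-number`.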
### Tell with `optuna tell`
After computing the objective value based on the output of `ask`, you can report the result back using `optuna tell`, and it will be stored in the study.
```console
$ optuna tell --storage sqlite:///mystorage.db \
    --study-name mystudy \
    --trial-number 0 \
    --values 1.0
[I 2021-07-30 16:01:13,039] Told trial 0 with values [1.0] and state TrialState.COMPLETE in study 'mystudy' and storage 'sqlite:///mystorage.db'.
```
Specify multiple whitespace-separated values for multi-objective optimization.
See #2817 for details.
## Weights & Biases Integration
`WeightsAndBiasesCallback` is a new study optimization callback that allows logging with Weights & Biases. This lets you use Weights & Biases' rich visualization features to analyze studies, complementing Optuna's own visualizations.
```python
import optuna
from optuna.integration.wandb import WeightsAndBiasesCallback


def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    return (x - 2) ** 2


wandb_kwargs = {"project": "my-project"}
wandbc = WeightsAndBiasesCallback(wandb_kwargs=wandb_kwargs)

study = optuna.create_study(study_name="mystudy")
study.optimize(objective, n_trials=10, callbacks=[wandbc])
```
See #2781 for details.
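`WeightsAndBiasesCallback` follows Optuna's general callback protocol: any callable accepting `(study, trial)` can be passed to `study.optimize(..., callbacks=[...])` and is invoked after each trial finishes. As a point of comparison, a minimal hand-rolled logging callback (an illustrative sketch, not part of the integration) looks like this:

```python
def logging_callback(study, trial):
    """A minimal user-defined logging callback (illustrative only).

    Optuna invokes each callable in `callbacks` as `callback(study, trial)`
    after every trial; `trial` is the finished FrozenTrial.
    """
    print(f"Trial {trial.number} finished: value={trial.value}, params={trial.params}")
```

You can combine it with the integration callback, e.g. `study.optimize(objective, n_trials=10, callbacks=[wandbc, logging_callback])`.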
## TPE Sampler Refactorings
The Tree-structured Parzen Estimator (TPE) sampler has always been the default sampler in Optuna. Both its API and internal code have over time grown to accommodate various needs, such as independent and joint parameter sampling (the `multivariate` parameter) and multi-objective optimization (the `MOTPESampler` sampler). In this release, the TPE sampler has been refactored and its code greatly reduced. The previously experimental multi-objective TPE sampler `MOTPESampler` has also been deprecated, and its capabilities are now absorbed by the standard `TPESampler`.

This change may break code that depends on fixed seeds with this sampler. The optimization algorithms themselves have not been changed.
The following demonstrates how you can now use the `TPESampler` for multi-objective optimization.
```python
import optuna


def objective(trial):
    x = trial.suggest_float("x", 0, 5)
    y = trial.suggest_float("y", 0, 3)
    v0 = 4 * x ** 2 + 4 * y ** 2
    v1 = (x - 5) ** 2 + (y - 5) ** 2
    return v0, v1


sampler = optuna.samplers.TPESampler()  # `MOTPESampler` used to be required for multi-objective optimization.

study = optuna.create_study(
    directions=["minimize", "minimize"],
    sampler=sampler,
)
study.optimize(objective, n_trials=100)
```
Note that omitting the `sampler` argument or specifying `None` currently defaults to the `NSGAIISampler` for multi-objective studies, not the `TPESampler`.
See #2618 for details.
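For intuition about what the TPE sampler does: it splits the observed trials into a "good" fraction and the rest, fits a density to each group, and proposes the candidate that maximizes the ratio of good density to bad density. The toy sketch below illustrates this idea in one dimension with Gaussian kernels; it is a deliberate simplification for illustration, not Optuna's implementation, and all names in it are our own.

```python
import math
import random


def kde(points, bandwidth=0.5):
    """Return a simple Gaussian kernel density estimate over `points`."""
    def density(x):
        return sum(
            math.exp(-((x - p) ** 2) / (2 * bandwidth ** 2)) for p in points
        ) / (len(points) * bandwidth * math.sqrt(2 * math.pi))
    return density


def tpe_propose(observations, gamma=0.25, n_candidates=64, rng=None):
    """Propose the next x from (x, loss) observations, TPE-style.

    Split observations into the best `gamma` fraction ("good") and the
    rest ("bad"), fit a density l(x) to the good points and g(x) to the
    bad ones, then pick the candidate maximizing l(x) / g(x).
    """
    rng = rng or random.Random(0)
    ordered = sorted(observations, key=lambda xy: xy[1])
    n_good = max(1, int(gamma * len(ordered)))
    good = [x for x, _ in ordered[:n_good]]
    bad = [x for x, _ in ordered[n_good:]] or good
    l, g = kde(good), kde(bad)
    candidates = [rng.uniform(0.0, 1.0) for _ in range(n_candidates)]
    return max(candidates, key=lambda x: l(x) / (g(x) + 1e-12))


# Minimizing (x - 0.3)^2 on [0, 1]: proposals should concentrate near 0.3.
obs = [(x, (x - 0.3) ** 2) for x in [0.05, 0.2, 0.35, 0.6, 0.9]]
print(round(tpe_propose(obs), 3))
```

The real `TPESampler` additionally handles categorical and log-scaled distributions, joint (multivariate) sampling, and, as of this release, multiple objectives.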
# Breaking Changes
- Unify the univariate and multivariate TPE (#2618)
# New Features
- MLFlow decorator for optimization function (#2670, thanks @lucafurrer!)
- Redis Heartbeat (#2780, thanks @Turakar!)
- Introduce Weights & Biases integration (#2781, thanks @xadrianzetx!)
- Function for failing zombie trials and invoke their callbacks (#2811)
- Optuna ask and tell CLI options (#2817)
# Enhancements
- Unify `MOTPESampler` and `TPESampler` (#2688)
- Changed interpolation type to make numeric range consistent with Plotly (#2712, thanks @01-vyom!)
- Add the warning if an intermediate value is already reported at the same step (#2782, thanks @TakuyaInoue-github!)
- Prioritize grids that are not yet running in `GridSampler` (#2783)
- Fix `warn_independent_sampling` in `TPESampler` (#2786)
- Avoid applying `constraint_fn` to non-`COMPLETE` trials in NSGAII sampler (#2791)
- Speed up `TPESampler` (#2816)
- Enable CLI helps for subcommands (#2823)
# Bug Fixes
- Fix `AllenNLPExecutor` reproducibility (#2717, thanks @MagiaSN!)
- Use `repr` and `eval` to restore pruner parameters in AllenNLP integration (#2731)
- Fix `NaN` cast bug in `TPESampler` (#2739)
- Fix `infer_relative_search_space` of TPE with the single point distributions (#2749)
# Installation
# Documentation
- Add how to suggest proportion to FAQ (#2718)
- Explain how to add a user's own logging callback function (#2730)
- Add `copy_study` to the docs (#2737)
- Fix link to kurobako benchmark page (#2748)
- Improve docs of constant liar (#2785)
- Fix the document of `RetryFailedTrialCallback.retried_trial_number` (#2789)
- Match the case of `ID` (#2798, thanks @belldandyxtq!)
- Rephrase `RDBStorage` `RuntimeError` description (#2802, thanks @belldandyxtq!)
# Examples
- Add remaining examples to CI tests (optuna/optuna-examples#26)
- Use hydra 1.1.0 syntax (optuna/optuna-examples#28)
- Replace monitor value with accuracy (optuna/optuna-examples#32)
# Tests
- Count the number of calls of the wrapped method in the test of `MOTPEMultiObjectiveSampler` (#2666)
- Add specific test cases for `visualization.matplotlib.plot_intermediate_values` (#2754, thanks @asquare100!)
- Added unit tests for optimization history of matplotlib tests (#2761, thanks @01-vyom!)
- Changed unit tests for pareto front of matplotlib tests (#2763, thanks @01-vyom!)
- Added unit tests for slice of matplotlib tests (#2764, thanks @01-vyom!)
- Added unit tests for param importances of matplotlib tests (#2774, thanks @01-vyom!)
- Changed unit tests for parallel coordinate of matplotlib tests (#2778, thanks @01-vyom!)
- Use more specific assert in `tests/visualization_tests/matplotlib/test_intermediate_plot.py` (#2803)
- Added unit tests for contour of matplotlib tests (#2806, thanks @01-vyom!)
# Code Fixes
- Create `study` directory (#2721)
- Dissect allennlp integration in submodules based on roles (#2745)
- Fix deprecated version of `MOTPESampler` (#2770)
# Continuous Integration
# Other
- Bump up version to v2.9.0dev (#2723)
- Add an optional section to ask reproducible codes (#2799)
- Add survey news to `README.md` (#2801)
- Add python code to issue templates for making reporting runtime information easy (#2805)
- Bump to v2.9.0 (#2828)
# Thanks to All the Contributors!
This release was made possible by the authors and the people who participated in the reviews and discussions.
@ytsmiling, @harupy, @asquare100, @hvy, @c-bata, @nzw0301, @lucafurrer, @belldandyxtq, @not522, @TakuyaInoue-github, @01-vyom, @himkt, @Crissman, @toshihikoyanase, @sile, @vanpelt, @HideakiImamura, @MagiaSN, @keisuke-umezawa, @Turakar, @xadrianzetx