This is the release note for v2.4.0.
# Highlights

## Python 3.9 Support

This is the first version to officially support Python 3.9. Everything is tested, with the exception of certain integration modules under `optuna.integration`. We will continue to extend the support in the coming releases.
## Multi-objective Optimization

Multi-objective optimization in Optuna is now a stable first-class citizen. Multi-objective optimization allows optimizing multiple objectives at the same time, such as maximizing model accuracy while minimizing model inference time.

Single-objective optimization can be extended to multi-objective optimization by

- specifying a sequence (e.g. a tuple) of `directions` instead of a single `direction` in `optuna.create_study`. Both parameters are supported for backwards compatibility
- (optionally) specifying a sampler that supports multi-objective optimization in `optuna.create_study`. If skipped, it will default to the `NSGAIISampler`
- returning a sequence of values instead of a single value from the objective function
### Multi-objective Samplers

The samplers that currently support multi-objective optimization are the `NSGAIISampler`, the `MOTPESampler`, the `BoTorchSampler`, and the `RandomSampler`.
### Example

```python
import optuna


def objective(trial):
    # The Binh and Korn function. It has two objectives to minimize.
    x = trial.suggest_float("x", 0, 5)
    y = trial.suggest_float("y", 0, 3)
    v0 = 4 * x ** 2 + 4 * y ** 2
    v1 = (x - 5) ** 2 + (y - 5) ** 2
    return v0, v1


sampler = optuna.samplers.NSGAIISampler()
study = optuna.create_study(directions=["minimize", "minimize"], sampler=sampler)
study.optimize(objective, n_trials=100)

# Get a list of the best trials.
best_trials = study.best_trials

# Visualize the best trials (i.e. the Pareto front) in blue.
fig = optuna.visualization.plot_pareto_front(study, target_names=["v0", "v1"])
fig.show()
```
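To see why a *list* of best trials (rather than a single best trial) is returned, the two Binh and Korn objectives can be evaluated in plain Python; the helper function below is only for illustration and is not part of Optuna:

```python
# The two Binh and Korn objectives from the example above, as a plain function.
def binh_korn(x, y):
    v0 = 4 * x ** 2 + 4 * y ** 2
    v1 = (x - 5) ** 2 + (y - 5) ** 2
    return v0, v1


# (0, 0) is best for v0 but poor for v1; (5, 3) is the other way around,
# so neither point dominates the other: both can sit on the Pareto front.
print(binh_korn(0, 0))  # (0, 50)
print(binh_korn(5, 3))  # (136, 4)
```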
## Migrating from the Experimental `optuna.multi_objective`

`optuna.multi_objective` used to be an experimental submodule for multi-objective optimization. This submodule is now deprecated. The changes required to migrate to the new interfaces are subtle, as described by the steps in the previous section.
## Database Storage Schema Upgrade

With the introduction of multi-objective optimization, the database storage schema for the `RDBStorage` has been changed. To continue to use databases from v2.3, run the following command to upgrade your tables. Please create a backup of the database beforehand.

```bash
optuna storage upgrade --storage <URL to the storage, e.g. sqlite:///example.db>
```
## BoTorch Sampler

`BoTorchSampler` is an experimental sampler based on BoTorch, a library for Bayesian optimization using PyTorch. See the example for usage.
### Constrained Optimization

For the first time in Optuna, `BoTorchSampler` allows constrained optimization. Users can impose constraints on hyperparameters or objective function values as follows.

```python
import optuna


def objective(trial):
    x = trial.suggest_float("x", -15, 30)
    y = trial.suggest_float("y", -15, 30)

    # Constraints which are considered feasible if less than or equal to zero.
    # The feasible region is basically the intersection of a circle centered at (x=5, y=0)
    # and the complement to a circle centered at (x=8, y=-3).
    c0 = (x - 5) ** 2 + y ** 2 - 25
    c1 = -((x - 8) ** 2) - (y + 3) ** 2 + 7.7

    # Store the constraints as user attributes so that they can be restored after optimization.
    trial.set_user_attr("constraint", (c0, c1))

    return x ** 2 + y ** 2


def constraints(trial):
    return trial.user_attrs["constraint"]


# Specify the constraint function when instantiating the `BoTorchSampler`.
sampler = optuna.integration.BoTorchSampler(constraints_func=constraints)
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=32)
```
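As a sanity check of the constraint definitions above (a point is feasible when every constraint value is less than or equal to zero), the geometry can be verified in plain Python; these helper functions are illustrative only and not part of Optuna:

```python
def constraint_values(x, y):
    # Same constraint expressions as in the objective above.
    c0 = (x - 5) ** 2 + y ** 2 - 25
    c1 = -((x - 8) ** 2) - (y + 3) ** 2 + 7.7
    return c0, c1


def is_feasible(x, y):
    # Feasible if and only if every constraint value is <= 0.
    return all(c <= 0 for c in constraint_values(x, y))


print(is_feasible(5, 0))   # True: inside the first circle, outside the excluded one
print(is_feasible(8, -3))  # False: the center of the excluded circle
```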
### Multi-objective Optimization

`BoTorchSampler` supports both single- and multi-objective optimization. By default, the sampler selects the appropriate sampling algorithm based on the number of objectives.
### Customizability

`BoTorchSampler` is customizable via the `candidates_func` callback parameter. Users familiar with BoTorch can change the surrogate model, acquisition function, and its optimizer in this callback to utilize any of the algorithms provided by BoTorch.
## Visualization with Callback-specified Target Values

Visualization functions can now plot values other than objective values, such as inference time or evaluation by other metrics. Users can choose the values to be plotted via the `target` argument. In multi-objective optimization, the visualization functions become available by using the `target` argument to select a specific objective.
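For instance, with a multi-objective study, one of the functions extended in this release, such as `optuna.visualization.plot_edf`, might be called as `plot_edf(study, target=lambda t: t.values[1], target_name="v1")`. The snippet below only demonstrates the `target` callable itself; the stand-in trial object is hypothetical, used so the example runs without a study:

```python
from types import SimpleNamespace

# `target` maps a trial to the value to plot; here, the second objective.
target = lambda t: t.values[1]

# Hypothetical stand-in for a FrozenTrial, for illustration only.
trial_like = SimpleNamespace(values=(0.12, 3.4))
print(target(trial_like))  # 3.4
```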
## New Tutorials

The tutorials have been improved, and new content for each of Optuna's key features has been added. More content will be added in the future. Please look forward to it!
# Breaking Changes

- Allow filtering trials from `Study` and `BaseStorage` based on `TrialState` (#1943)
- Stop storing error stack traces in `fail_reason` in trial `system_attr` (#1964)
- Importance with target values other than objective value (#2109)
# New Features

- Implement `plot_contour` and `_get_contour_plot` with Matplotlib backend (#1782, thanks @ytknzw!)
- Implement `plot_param_importances` and `_get_param_importance_plot` with Matplotlib backend (#1787, thanks @ytknzw!)
- Implement `plot_slice` and `_get_slice_plot` with Matplotlib backend (#1823, thanks @ytknzw!)
- Add `PartialFixedSampler` (#1892, thanks @norihitoishida!)
- Allow filtering trials from `Study` and `BaseStorage` based on `TrialState` (#1943)
- Add rung promotion limitation in ASHA/Hyperband to enable arbitrary unknown length runs (#1945, thanks @alexrobomind!)
- Add Fastai V2 pruner callback (#1954, thanks @hal-314!)
- Support options available on AllenNLP except to `node_rank` and `dry_run` (#1959)
- Universal data transformer (#1987)
- Introduce `BoTorchSampler` (#1989)
- Add axis order for `plot_pareto_front` (#2000, thanks @okdshin!)
- `plot_optimization_history` with target values other than objective value (#2064)
- `plot_contour` with target values other than objective value (#2075)
- `plot_parallel_coordinate` with target values other than objective value (#2089)
- `plot_slice` with target values other than objective value (#2093)
- `plot_edf` with target values other than objective value (#2103)
- Importance with target values other than objective value (#2109)
- Migrate `optuna.multi_objective.visualization.plot_pareto_front` (#2110)
- Raise `ValueError` if `target` is `None` and `study` is for multi-objective optimization for `plot_contour` (#2112)
- Raise `ValueError` if `target` is `None` and `study` is for multi-objective optimization for `plot_edf` (#2117)
- Raise `ValueError` if `target` is `None` and `study` is for multi-objective optimization for `plot_optimization_history` (#2118)
- `plot_param_importances` with target values other than objective value (#2119)
- Raise `ValueError` if `target` is `None` and `study` is for multi-objective optimization for `plot_parallel_coordinate` (#2120)
- Raise `ValueError` if `target` is `None` and `study` is for multi-objective optimization for `plot_slice` (#2121)
- Trial post processing (#2134)
- Raise `NotImplementedError` for `trial.report` and `trial.should_prune` during multi-objective optimization (#2135)
- Raise `ValueError` in TPE and CMA-ES if `study` is being used for multi-objective optimization (#2136)
- Raise `ValueError` if `target` is `None` and `study` is for multi-objective optimization for `get_param_importances`, `BaseImportanceEvaluator.evaluate`, and `plot_param_importances` (#2137)
- Raise `ValueError` in integration samplers if `study` is being used for multi-objective optimization (#2145)
- Migrate NSGA2 sampler (#2150)
- Migrate MOTPE sampler (#2167)
- Storages to query trial IDs from numbers (#2168)
# Enhancements

- Use context manager to treat session correctly (#1628)
- Integrate multi-objective optimization module for the storages, study, and frozen trial (#1994)
- Pass `include_package` to AllenNLP for distributed setting (#2018)
- Change the RDB schema for multi-objective integration (#2030)
- Update pruning callback for xgboost 1.3 (#2078, thanks @trivialfis!)
- Fix log format for single objective optimization to include best trial (#2128)
- Implement `Study._is_multi_objective()` to check whether study has multiple objectives (#2142, thanks @nyanhi!)
- `TFKerasPruningCallback` to warn when an evaluation metric does not exist (#2156, thanks @bigbird555!)
- Warn default target name when target is specified (#2170)
- `Study.trials_dataframe` for multi-objective optimization (#2181)
# Bug Fixes

- Make always compute `weights_below` in `MOTPEMultiObjectiveSampler` (#1979)
- Fix the range of categorical values (#1983)
- Remove circular reference of study (#2079)
- Fix flipped colormap in `matplotlib` backend `plot_parallel_coordinate` (#2090)
- Replace builtin `isnumerical` to capture float values in `plot_contour` (#2096, thanks @nzw0301!)
- Drop unnecessary constraint from upgraded `trial_values` table (#2180)
# Installation

- Ignore `tests` directory on install (#2015, thanks @130ndim!)
- Clean up `setup.py` requirements (#2051)
- Pin `xgboost<1.3` (#2084)
- Bump up PyTorch version (#2094)
# Documentation

- Update tutorial (#1722)
- Introduce plotly directive (#1944, thanks @harupy!)
- Check everything by `blackdoc` (#1982)
- Remove `codecov` from `CONTRIBUTING.md` (#2005)
- Make the visualization examples deterministic (#2022, thanks @harupy!)
- Use plotly directive in `plot_pareto_front` (#2025)
- Remove plotly scripts and unused generated files (#2026)
- Add mandarin link to ReadTheDocs layout (#2028)
- Document about possible duplicate parameter configurations in `GridSampler` (#2040)
- Fix `MOTPEMultiObjectiveSampler`'s example (#2045, thanks @norihitoishida!)
- Fix Read the Docs build failure caused by `pip install --find-links` (#2065)
- Fix `lt` symbol (#2068, thanks @KoyamaSohei!)
- Fix parameter section of `RandomSampler` in docs (#2071, thanks @akihironitta!)
- Add note on the behavior of `suggest_float` with `step` argument (#2087)
- Tune build time of #2076 (#2088)
- Add `matplotlib.plot_parallel_coordinate` example (#2097, thanks @nzw0301!)
- Add `matplotlib.plot_param_importances` example (#2098, thanks @nzw0301!)
- Add `matplotlib.plot_slice` example (#2099, thanks @nzw0301!)
- Add `matplotlib.plot_contour` example (#2100, thanks @nzw0301!)
- Bump Sphinx up to 3.4.0 (#2127)
- Additional docs about `optuna.multi_objective` deprecation (#2132)
- Move type hints to description from signature (#2147)
- Add copy button to all the code examples (#2148)
- Fix wrong wording in distributed execution tutorial (#2152)
# Examples

- Add MXNet Gluon example (#1985)
- Update logging in PyTorch Lightning example (#2037, thanks @pbmstrk!)
- Change return type of `training_step` of PyTorch Lightning example (#2043)
- Fix dead links in `examples/README.md` (#2056, thanks @nai62!)
- Add `enqueue_trial` example (#2059)
- Skip FastAI v2 example in examples job (#2108)
- Move `examples/multi_objective/plot_pareto_front.py` to `examples/visualization/plot_pareto_front.py` (#2122)
- Use latest multi-objective functionality in multi-objective example (#2123)
- Add haiku and jax simple example (#2155, thanks @nzw0301!)
# Tests

- Update `parametrize_sampler` of `test_samplers.py` (#2020, thanks @norihitoishida!)
- Change `trail_id + 123` -> `trial_id` (#2052)
- Fix `scipy==1.6.0` test failure with `LogisticRegression` (#2166)
# Code Fixes

- Introduce plotly directive (#1944, thanks @harupy!)
- Stop storing error stack traces in `fail_reason` in trial `system_attr` (#1964)
- Check everything by blackdoc (#1982)
- HPI with `_SearchSpaceTransform` (#1988)
- Fix TODO comment about orders of `dict`s (#2007)
- Add `__all__` to reexport modules explicitly (#2013)
- Update `CmaEsSampler`'s warning message (#2019, thanks @norihitoishida!)
- Put up an alias for `structs.StudySummary` against `study.StudySummary` (#2029)
- Deprecate `optuna.type_checking` module (#2032)
- Remove `py35` from black config in `pyproject.toml` (#2035)
- Use model methods instead of `session.query()` (#2060)
- Use `find_or_raise_by_id` instead of `find_by_id` to raise if a study does not exist (#2061)
- Organize and remove unused model methods (#2062)
- Leave a comment about RTD compromise (#2066)
- Fix ideographic space (#2067, thanks @KoyamaSohei!)
- Make new visualization parameters keyword only (#2082)
- Use latest APIs in `LightGBMTuner` (#2083)
- Add `matplotlib.plot_slice` example (#2099, thanks @nzw0301!)
- Deprecate previous multi-objective module (#2124)
- `_run_trial` refactoring (#2133)
- Cosmetic fix of `xgboost` integration (#2143)
# Continuous Integration

- Partial support of python 3.9 (#1908)
- Check everything by blackdoc (#1982)
- Avoid `set-env` in GitHub Actions (#1992)
- PyTorch and AllenNLP (#1998)
- Remove `checks` from circleci (#2004)
- Migrate tests and coverage to GitHub Actions (#2027)
- Enable blackdoc `--diff` option (#2031)
- Unpin mypy version (#2069)
- Skip FastAI v2 example in examples job (#2108)
- Fix CI examples for Py3.6 (#2129)
# Other

- Add `tox.ini` (#2024)
- Allow passing additional arguments when running tox (#2054, thanks @harupy!)
- Add Python 3.9 to README badge (#2063)
- Clarify that generally pull requests need two or more approvals (#2104)
- Release wheel package via PyPI (#2105)
- Adds news entry about the Python 3.9 support (#2114)
- Add description for tox to `CONTRIBUTING.md` (#2159)
- Bump up version number to 2.4.0 (#2183)
- [Backport] Fix the syntax of `pypi-publish.yml` (#2188)
# Thanks to All the Contributors!

This release was made possible by the authors and everyone who participated in reviews and discussions.
@130ndim, @Crissman, @HideakiImamura, @KoyamaSohei, @akihironitta, @alexrobomind, @bigbird555, @c-bata, @crcrpar, @eytan, @g-votte, @hal-314, @harupy, @himkt, @hvy, @keisuke-umezawa, @nai62, @norihitoishida, @not522, @nyanhi, @nzw0301, @okdshin, @pbmstrk, @sdaulton, @sile, @toshihikoyanase, @trivialfis, @ytknzw, @ytsmiling