This is the release note for v3.2.0.
Highlights
Human-in-the-loop optimization
With the latest release, we have incorporated support for human-in-the-loop optimization. It enables an interactive optimization process between users and the optimization algorithm. As a result, it opens up new opportunities for the application of Optuna in tuning Generative AI. For further details, please check out our human-in-the-loop optimization tutorial.

Overview of human-in-the-loop optimization. Generated images and sounds are displayed on Optuna Dashboard, and users can directly evaluate them there.
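The interactive loop can also be driven programmatically with Optuna's ask-and-tell interface. The snippet below is a minimal sketch in which a human rating replaces an automated objective; the prompt, the `temperature` parameter, and the rating scale are illustrative only and are not taken from the tutorial, which instead collects evaluations through forms on Optuna Dashboard.

```python
import optuna

study = optuna.create_study(direction="maximize")

# Minimal ask-and-tell loop: a human evaluates each suggested configuration.
for _ in range(5):
    trial = study.ask()
    temperature = trial.suggest_float("temperature", 0.0, 2.0)
    print(f"Generate a sample with temperature={temperature:.2f} and inspect it.")
    score = float(input("Your rating from 0 (bad) to 10 (good): "))  # human evaluation
    study.tell(trial, score)

print("Best parameters so far:", study.best_params)
```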
Automatic optimization terminator (Optuna Terminator)
Optuna Terminator is a new feature that quantitatively estimates the remaining room for improvement and automatically stops the optimization process. It is designed to relieve users of the burden of figuring out an appropriate number of trials (`n_trials`) and to avoid wasting computational resources by running the optimization loop indefinitely. See #4398 and optuna-examples#190.
Transition of estimated room for improvement. It steadily decreases towards the level of cross-validation errors.
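A rough usage sketch follows, assuming a cross-validation objective and the `TerminatorCallback` and `report_cross_validation_scores` helpers exposed by `optuna.terminator` (see #4398 and optuna-examples#190 for the authoritative interface); the model and search space below are placeholders.

```python
import optuna
from optuna.terminator import TerminatorCallback, report_cross_validation_scores
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score


def objective(trial):
    clf = RandomForestClassifier(
        max_depth=trial.suggest_int("max_depth", 2, 32),
        n_estimators=trial.suggest_int("n_estimators", 10, 200),
    )
    X, y = load_wine(return_X_y=True)
    scores = cross_val_score(clf, X, y, cv=5)
    # The terminator estimates the remaining room for improvement from these scores.
    report_cross_validation_scores(trial, scores)
    return scores.mean()


study = optuna.create_study(direction="maximize")
# n_trials acts only as an upper bound; the callback stops the study earlier
# once further improvement looks unlikely.
study.optimize(objective, n_trials=100, callbacks=[TerminatorCallback()])
```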
New sampling algorithms
NSGA-III for many-objective optimization
We've introduced the `NSGAIIISampler`, a new multi-objective optimization sampler. It implements NSGA-III, an extended variant of NSGA-II designed to optimize efficiently even when the number of objectives is large (especially four or more). NSGA-II tends to bias the search towards specific regions once the number of objectives reaches four or more, whereas NSGA-III is designed to distribute the points more uniformly. This feature was introduced in #4436.
Objective value space for multi-objective optimization (minimization problem). Red points represent Pareto solutions found by NSGA-II. Blue points represent those found by NSGA-III. NSGA-II shows a tendency for points to concentrate towards each axis (corresponding to the ends of the Pareto Front). On the other hand, NSGA-III displays a wider distribution across the Pareto Front.
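Using the sampler only requires passing it to a multi-objective study. Below is a minimal sketch with a toy four-objective problem; the objective itself is purely illustrative.

```python
import optuna


def objective(trial):
    # A toy problem with four objectives, the many-objective regime NSGA-III targets.
    x = [trial.suggest_float(f"x{i}", 0.0, 1.0) for i in range(4)]
    return x[0], x[1], x[2], sum(x)


study = optuna.create_study(
    directions=["minimize"] * 4,
    sampler=optuna.samplers.NSGAIIISampler(),
)
study.optimize(objective, n_trials=200)
print(f"Number of Pareto-optimal trials: {len(study.best_trials)}")
```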
BI-population CMA-ES
Continuing from v3.1, significant improvements have been made to the CMA-ES sampler. As a new feature, we've added the BI-population CMA-ES algorithm, a restart strategy that mitigates the problem of getting stuck in local optima. Whether the IPOP CMA-ES we have provided so far or the new BI-population CMA-ES performs better depends on the problem, so if you're struggling with local optima, please try BI-population CMA-ES as well. For more details, please see #4464.
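As a rough sketch, and assuming the strategy is selected through `CmaEsSampler`'s `restart_strategy` argument with the value `"bipop"` (see #4464 for the exact interface), switching strategies looks like this; the multimodal test function is only for illustration.

```python
import math

import optuna


def objective(trial):
    # Rastrigin function: highly multimodal, the kind of landscape where restarts help.
    x = trial.suggest_float("x", -5.12, 5.12)
    y = trial.suggest_float("y", -5.12, 5.12)
    return (
        20
        + x**2 - 10 * math.cos(2 * math.pi * x)
        + y**2 - 10 * math.cos(2 * math.pi * y)
    )


sampler = optuna.samplers.CmaEsSampler(restart_strategy="bipop")  # "ipop" is the existing alternative
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=300)
```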
New visualization functions
Timeline plot for trial life cycle
The timeline plot visualizes the progress (status, start and end times) of each trial. In this plot, the horizontal axis represents time, and trials are plotted in the vertical direction. Each trial is represented as a horizontal bar, drawn from the start to the end of the trial. With this plot, you can quickly get an understanding of the overall progress of the optimization experiment, such as whether parallel optimization is progressing properly or if there are any trials taking an unusually long time.
Similar to other plot functions, all you need to do is pass the study object to `plot_timeline`. For more details, please refer to #4470 and #4538.
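A minimal sketch is shown below; the objective simply sleeps for a random duration so that each trial's bar has a visible length.

```python
import time

import optuna


def objective(trial):
    # Sleep so each trial bar has a visible duration in the timeline plot.
    time.sleep(trial.suggest_float("sleep_seconds", 0.1, 0.5))
    return trial.suggest_float("x", -10.0, 10.0) ** 2


study = optuna.create_study()
study.optimize(objective, n_trials=20, n_jobs=2)

fig = optuna.visualization.plot_timeline(study)
fig.show()
# A matplotlib backend is also available: optuna.visualization.matplotlib.plot_timeline(study)
```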
Rank plot to understand input-output relationship
A new visualization feature, `plot_rank`, has been introduced. This plot provides valuable insights into the landscape of the objective function, i.e., the relationship between parameters and objective values. The vertical and horizontal axes represent parameter values, and each point represents a single trial. The points are colored according to their ranks.
Similar to other plot functions, all you need to do is pass the study object to `plot_rank`. For more details, please refer to #4427 and #4541.
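A minimal sketch, using a simple quadratic objective for illustration; the `params` argument is optional and assumed to mirror the other plot functions.

```python
import optuna


def objective(trial):
    x = trial.suggest_float("x", -10.0, 10.0)
    y = trial.suggest_float("y", -10.0, 10.0)
    return x**2 + (y - 2.0) ** 2


study = optuna.create_study()
study.optimize(objective, n_trials=100)

# Each point is a trial, colored by the rank of its objective value.
fig = optuna.visualization.plot_rank(study, params=["x", "y"])
fig.show()
```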
Isolating integration modules
We have separated Optuna's integration modules into a different package called optuna-integration. Maintaining many integrations within the Optuna package was becoming costly, and by separating them we aim to improve the development speed of both Optuna itself and the integration modules. As of the v3.2 release, we have migrated six integration modules: allennlp, catalyst, chainer, keras, skorch, and tensorflow (except for the TensorBoard integration). To use the migrated integration modules, `pip install optuna-integration` is necessary. See #4484.
- Move `chainermn` integration (optuna/optuna-integration#1)
- Move `integration/keras.py` (optuna/optuna-integration#5)
- Move `integration/allennlp` (optuna/optuna-integration#8)
- Move Catalyst (optuna/optuna-integration#19)
- Move `tf.keras` integration (optuna/optuna-integration#21)
- Move `skorch` (optuna/optuna-integration#22)
- Move `tensorflow` integration (optuna/optuna-integration#23)
- Partially follow `sklearn.model_selection.GridSearchCV`'s arguments (#4336)
- Delete `optuna.integration.ChainerPruningExtension` for migrating to optuna-integration package (#4370)
- Delete `optuna.integration.ChainerMNStudy` for migrating to optuna-integration package (#4497)
- Delete `optuna.integration.KerasPruningCallback` for migration to optuna-integration (#4558)
- Delete `AllenNLP` integration for migration to optuna-integration (#4579)
- Delete Catalyst integration for migration to optuna-integration (#4644)
- Remove `tf.keras` integration (#4662)
- Delete `skorch` integration for migration to optuna-integration (#4663)
- Remove `tensorflow` integration (#4666)
Starting support for Mac & Windows
We have started supporting Optuna on Mac and Windows. While many features already worked in previous versions, we have fixed issues that arose in certain modules, such as Storage. See #4457 and #4458.
Breaking Changes
- Update deletion timing of `system_attrs` and `set_system_attr` (optuna/optuna-integration#4)
- Change deletion timing of `system_attrs` and `set_system_attr` (#4550)
New Features
- Show custom objective names for multi-objective optimization (#4383)
- Support DDP in `PyTorch-Lightning` (#4384)
- Implement the evaluator of regret bounds and its GP backend for Optuna Terminator 🤖 (#4401)
- Implement the termination logic and APIs of Optuna Terminator 🤖 (#4405)
- Add rank plot (#4427)
- Implement NSGA-III (#4436)
- Add BIPOP-CMA-ES support in `CmaEsSampler` (#4464)
- Add timeline plot with plotly as backend (#4470)
- Move `optuna.samplers._search_space.intersection.py` to `optuna.search_space.intersection.py` (#4505)
- Add timeline plot with matplotlib as backend (#4538)
- Add rank plot matplotlib version (#4541)
- Support batched sampling with BoTorch (#4591, thanks @kstoneriv3!)
- Add `plot_terminator_improvement` as visualization of `optuna.terminator` (#4609)
- Add import for public API of `optuna.terminator` to `optuna/terminator/__init__.py` (#4669)
- Add matplotlib version of `plot_terminator_improvement` (#4701)
Enhancements
- Import `cmaes` package lazily (#4394)
- Make `BruteForceSampler` stateless (#4408)
- Sort studies by study_id (#4414)
- Add index study_id column on trials table (#4449, thanks @Ilevk!)
- Cache all trials in Study with delayed relative sampling (#4468)
- Avoid error at import time for `optuna.terminator.improvement.gp.botorch` (#4483)
- Avoid standardizing `Yvar` in `_BoTorchGaussianProcess` (#4488)
- Change the noise value in `_BoTorchGaussianProcess` to suppress warning messages (#4510)
- Change the argument of `intersection_search_space` from `study` to `trials` (#4514)
- Improve deprecated messages in the old suggest functions (#4562)
- Add support for `distributed>=2023.3.2` (#4589, thanks @jrbourbeau!)
- Fix `plot_rank` marker lines (#4602)
- Sync owned trials when calling `study.ask` and `study.get_trials` (#4631)
- Ensure that the plotly version of timeline plot draws a legend even if all TrialStates are the same (#4635)
Bug Fixes
- Fix `botorch` dependency (#4368)
- Mitigate a blocking issue while running migrations with SQLAlchemy 2.0 (#4386)
- Fix `colorlog` compatibility problem (#4406)
- Validate length of values in `add_trial` (#4416)
- Fix `RDBStorage.get_best_trial` when there are `inf`s (#4422)
- Fix bug of CMA-ES with margin on `RDBStorage` or `JournalStorage` (#4434)
- Fix CMA-ES Sampler (#4443)
- Fix `param_mask` for multivariate TPE with `constant_liar` (#4462)
- Make `QMCSampler` samplers reproducible with `seed=0` (#4480)
- Fix noise becoming NaN for the terminator module (#4512)
- Fix `metric_names` on `_log_completed_trial()` function (#4594)
- Fix `ImportError` for `botorch<=0.4.0` (#4626)
- Fix index of `n_retries += 1` in `RDBStorage` (#4658)
- Fix CMA-ES with margin bug (#4661)
- Fix a logic for invalidating the cache in `CachedStorage` (#4670)
- Fix #4697 `ValueError`: Rank 0 node expects an `optuna.trial.Trial` instance as the trial argument (#4698, thanks @keisukefukuda!)
- Fix a bug reported in issue #4699 (#4700)
- Add tests for `plot_terminator_improvement` and fix some bugs (#4702)
Installation
- Remove codecov dependencies (optuna/optuna-integration#13)
- Migration to `pyproject.toml` for packaging (#4164)
- [RFC] Remove specific pytorch version to support the latest stable PyTorch (#4585)
Documentation
- Create the document and run the test to create document in each PR (optuna/optuna-integration#2)
- Fix Keras docs (optuna/optuna-integration#12)
- Add links of documents (optuna/optuna-integration#17)
- Load `sphinxcontrib.jquery` explicitly (optuna/optuna-integration#18)
- Add docstring for the `Terminator` class (#4596)
- Fix the build on Read the Docs by following optuna #4659 (optuna/optuna-integration#20)
- Add external packages to `intersphinx_mapping` in `conf.py` (#4290)
- Minor fix of documents (#4360)
- Fix a typo in `MeanDecreaseImpurityImportanceEvaluator` (#4385)
- Update to Sphinx 6 (#4479)
- Fix URL to the line of optuna-integration file (#4498)
- Fix typo (#4515, thanks @gituser789!)
- Resolve error in compiling PDF documents (#4605)
- Add `sphinxcontrib.jquery` extension to `conf.py` (#4615)
- Remove an example code of `SkoptSampler` (#4625)
- Add links to the optuna-integration document (#4638)
- Add manually written index page of tutorial (#4640)
- Fix the build on Read the Docs (#4659)
- Improve docstring of `rank_plot` function and its matplotlib version (#4660)
- Add a link to tutorial of human-in-the-loop optimization (#4665)
- Fix typo for progress bar in documentation (#4673, thanks @gituser789!)
- Add docstrings to `optuna.terminator` (#4675)
- Add docstring for `plot_terminator_improvement` (#4677)
- Remove `versionadded` directives (#4681)
- Add pareto front display example: 2D-plot from 3D-optimization including crop the scale (#4685, thanks @gituser789!)
- Embed a YouTube video in the docstring of `DaskStorage` (#4694)
- List Dashboard in navbar (#4708)
- Fix docstring of terminator improvement for `min_n_trials` (#4709)
Examples
- An example of using pytorch distributed data parallel on 1 machine with arbitrary multiple GPUs (optuna/optuna-examples#155, thanks @li-li-github!)
- Apply `black .` with black 23.1.0 (optuna/optuna-examples#168)
- Add Aim example (optuna/optuna-examples#170)
- Resolve todo and fix docstrings in fastaiv2 example (optuna/optuna-examples#171)
- Update pytorch-lightning version (optuna/optuna-examples#172)
- Add python 3.11 to ray's version matrix (optuna/optuna-examples#174)
- Minor code change suggestions to `pytorch_distributed_spawn.py` (optuna/optuna-examples#175)
- Install `optuna-integration` in `chainer` CI (optuna/optuna-examples#176)
- Add python 3.11 skimage's version matrix and remove warning for inputs data (optuna/optuna-examples#177)
- Execute Ray example in CI (optuna/optuna-examples#178)
- Update pytorch lightning version for ddp (optuna/optuna-examples#179)
- Don't run evaluation twice on the last epoch (optuna/optuna-examples#181, thanks @Jendker!)
- Use BoTorch 0.8 or higher (optuna/optuna-examples#185)
- Run catboost example with python 3.11 (optuna/optuna-examples#186)
- Add terminator examples (optuna/optuna-examples#190)
- Use Gymnasium and pre-released Stable-Baselines3 (optuna/optuna-examples#191)
- Fix the AllenNLP CI (optuna/optuna-examples#193)
Tests
- Suppress `FutureWarning` about `Trial.set_system_attr` in storage tests (#4323)
- Add test for casting in `test_nsgaii.py` (#4387)
- Fix the blocking issue on `test_with_server.py` (#4402)
- Fix mypy error about `Chainer` (#4410)
- Add unit tests for the `_BoTorchGaussianProcess` class (#4441)
- Implement unit tests for `optuna.terminator.improvement._preprocessing.py` (#4506)
- Fix mypy error about `PyTorch Lightning` (#4520)
Code Fixes
- Simplify type annotations (optuna/optuna-integration#10)
- Copy `_imports.py` from optuna (optuna/optuna-integration#16)
- Refactor ParzenEstimator (#4183)
- Fix mypy error about `AllenNLP` in Checks (integration) (#4277)
- Fix checks integration about pytorch lightning (#4322)
- Minor refactoring of `tests/hypervolume_tests/test_hssp.py` (#4329)
- Remove unnecessary sklearn version condition (#4379)
- Support black 23.1.0 (#4382)
- Warn unexpected search spaces for `CmaEsSampler` (#4395)
- Fix flake8 errors on sklearn integration (#4407)
- Fix mypy error about `PyTorch Distributed` (#4413)
- Use `numpy.polynomial` in `_erf.py` (#4415)
- Refactor `_ParzenEstimator` (#4433)
- Simplify an argument's name of `RegretBoundEvaluator` (#4442)
- Fix `Checks (integration)` about `terminator/.../botorch.py` (#4461)
- Add an experimental decorator to `RegretBoundEvaluator` (#4469)
- Add JSON serializable type (#4478)
- Move `optuna.samplers._search_space.group_decomposed.py` to `optuna.search_space.group_decomposed.py` (#4491)
- Simplify annotations in `optuna.visualization` (#4525, thanks @harupy!)
- Simplify annotations in `tests.visualization_tests` (#4526, thanks @harupy!)
- Remove unused instance variables in `_BoTorchGaussianProcess` (#4530)
- Avoid deepcopy in `optuna.visualization.plot_timeline` (#4540)
- Use `SingleTaskGP` for Optuna terminator (#4542)
- Change deletion timing of `optuna.samplers.IntersectionSearchSpace` and `optuna.samplers.intersection_search_space` (#4549)
- Remove `IntersectionSearchSpace` in `optuna.terminator` module (#4595)
- Change arguments of `BaseErrorEvaluator` and classes that inherit from it (#4607)
- Delete `import Rectangle` in `visualization/matplotlib` (#4620)
- Simplify type annotations in `visualize/_rank.py` and `visualization_tests/` (#4628)
- Move the function `_distribution_is_log` to `optuna.distributions` from `optuna/terminator/__init__.py` (#4668)
- Separate `_fast_non_dominated_sort()` from the samplers (#4671)
- Read trials from remote storage whenever `get_all_trials` of `_CachedStorage` is called (#4672)
- Remove experimental label from _ProgressBar (#4684, thanks @tungbq!)
Continuous Integration
- Fix coverage.yml (optuna/optuna-integration#3)
- Delete labeler.yaml (optuna/optuna-integration#6)
- Fix pypi publish.yaml (optuna/optuna-integration#11)
- Test on an arbitrary branch (optuna/optuna-integration#15)
- Fix the CI with AllenNLP (optuna/optuna-integration#24)
- Update actions/setup-python@v2 -> v4 (#4307, thanks @Kaushik-Iyer!)
- Update action versions (#4328)
- Update `actions/setup-python` in `mac-tests` (follow-up for #4307) (#4343)
- Add type ignore to `ProcessGroup` import from `torch.distributed` (#4347)
- Fix label of pypi `gh-action-pypi-publish` (#4359)
- [Hotfix] Avoid to install SQLAlchemy 2.0 on `checks` (#4364)
- [Hotfix] Add version constraint on SQLAlchemy for tests storage with server (#4372)
- Disable colored log when `NO_COLOR` env or not tty (#4376)
- Output installed packages in Tests CI (#4381)
- Output installed packages in mac-test CI (#4397)
- Use `ubuntu-latest` in PyPI publish CI (#4400)
- Output installed packages in Checks CI (#4417, thanks @Kaushik-Iyer!)
- Output installed packages in Coverage CI (#4423, thanks @Kaushik-Iyer!)
- Fix mypy error on checks-integration CI (#4424)
- Fix mac-test cache path (#4425)
- Add minimum version tests of numpy, tqdm, colorlog, PyYAML (#4428)
- Remove ignore test_pytorch_lightning (#4432)
- Use `PyYAML==5.1` on `tests-with-minimum-dependencies` (#4435)
- Remove trailing spaces in CI configs (#4439)
- Output installed packages in all remaining CIs (#4445, thanks @Kaushik-Iyer!)
- Add windows ci check (#4457)
- Make mac-test executed on PRs (#4458)
- Add sqlalchemy<2.0.0 in `Checks (integration)` (#4482)
- Fix ci test conditions (#4496)
- Deploy results of visual regression test on Netlify (#4507)
- Pin pytorch lightning version (#4522)
- Securely deploy results of visual regression test on Netlify (#4532)
- Pin `Distributed` version (#4545)
- Delete fragile heartbeat test (#4551)
- Ignore AllenNLP test from Mac-CI (#4561)
- Delete visual-regression.yml (#4597)
- Remove dependency on `codecov` (#4606)
- Install `test` in `checks-integration` CI (#4612)
- Fix checks integration (#4617)
- Add `Output dependency tree` by pipdeptree to Actions (#4624)
- Add a version constraint on `fakeredis` (#4637)
- Hotfix and run catboost test w/ python 3.11 except for MacOS (#4646)
- Run `mlflow` with Python 3.11 (#4647)
Other
- Update repository settings as in optuna/optuna (optuna/optuna-integration#7)
- Bump up version to v3.2.0.dev (#4345)
- Remove `cached-path` from `setup.py` (#4357)
- Revert a merge commit for #4183 (#4429)
- Include both venv and .venv in the exclude setting of the formatters (#4476)
- Replace `hacking` with `flake8` (#4556)
- Fix Codecov link (#4564)
- Add `lightning_logs` to `.gitignore` (#4565)
- Fix targets of `black` and `isort` in `formats.sh` (#4610)
- Install `benchmark`, `optional`, and `test` in dev Docker image (#4611)
- Provide kind error message for missing `optuna-integration` (#4636)
Thanks to All the Contributors!
This release was made possible by the authors and the people who participated in the reviews and discussions.
@Alnusjaponica, @HideakiImamura, @Ilevk, @Jendker, @Kaushik-Iyer, @amylase, @c-bata, @contramundum53, @cross32768, @eukaryo, @g-votte, @gen740, @gituser789, @harupy, @himkt, @hvy, @jrbourbeau, @keisuke-umezawa, @keisukefukuda, @knshnb, @kstoneriv3, @li-li-github, @nomuramasahir0, @not522, @nzw0301, @toshihikoyanase, @tungbq