This is the release note for v2.7.0.
# Highlights
## New `optuna-dashboard` Repository

A new dashboard, `optuna-dashboard`, is being developed in a separate repository under the Optuna organization. Install it with `pip install optuna-dashboard` and run it with `optuna-dashboard $STORAGE_URL`. The previous `optuna dashboard` command is now deprecated.
## Deprecate `n_jobs` Argument of `Study.optimize`
The global interpreter lock (GIL) has been a performance bottleneck when using the `n_jobs` argument for multi-threaded optimization. We decided to deprecate this option in favor of the more stable process-level parallelization. Users who have been parallelizing at the thread level with the `n_jobs` argument are encouraged to refer to the tutorial on process-level parallelization for details.
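As a rough sketch of the process-level approach (the tutorial is the authoritative reference), each worker process attaches to the same study through shared storage; the study name and SQLite URL below are illustrative placeholders:

```python
import optuna


def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    return (x - 2) ** 2


if __name__ == "__main__":
    # Assumes the study was created beforehand, e.g. with
    # optuna.create_study(study_name="example-study", storage="sqlite:///example.db").
    # Launch this script in several processes; each attaches to the same study
    # via the shared storage and contributes trials concurrently.
    study = optuna.load_study(study_name="example-study", storage="sqlite:///example.db")
    study.optimize(objective, n_trials=100)
```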
If the objective function is not affected by the GIL, thread-level parallelism may still be useful. You can achieve thread-level parallelization as follows:

```python
from concurrent.futures import ThreadPoolExecutor

# Run five worker threads, each executing 100 trials on the same study.
with ThreadPoolExecutor(max_workers=5) as executor:
    for _ in range(5):
        executor.submit(study.optimize, objective, 100)
```
## New Tutorial and Examples
Tutorial pages about the usage of the ask-and-tell interface (#2422) and `best_trial` (#2427) have been added, as well as an example that demonstrates parallel optimization using Ray (#2298) and an example explaining how to stop the optimization based on the number of completed trials instead of the total number of trials (#2449).
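For a taste of the ask-and-tell interface covered by the new tutorial (a minimal sketch; see the tutorial for the full treatment):

```python
import optuna

study = optuna.create_study(direction="minimize")

for _ in range(20):
    trial = study.ask()                    # ask for a new trial
    x = trial.suggest_float("x", -10, 10)  # suggest parameters as usual
    study.tell(trial, (x - 2) ** 2)        # tell Optuna the objective value

print(study.best_trial.value)
```

And a minimal sketch of stopping based on completed trials, along the lines of the new example (the callback name and the threshold of 100 are illustrative):

```python
from optuna.trial import TrialState


def max_trial_callback(study, trial):
    # Count only trials that actually completed (ignoring pruned/failed ones)
    # and stop the study once 100 of them have finished.
    n_complete = sum(t.state == TrialState.COMPLETE for t in study.get_trials(deepcopy=False))
    if n_complete >= 100:
        study.stop()


# Usage: study.optimize(objective, n_trials=1000, callbacks=[max_trial_callback])
```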
## Improved Code Quality
The code quality was improved in terms of bug fixes, third-party library support, and platform support.

For instance, the bugs in warm-starting CMA-ES and in `visualization.matplotlib.plot_optimization_history` were resolved by #2501 and #2532, respectively.

Third-party libraries such as PyTorch, fastai, and AllenNLP were updated, and the corresponding integration modules and examples were updated for the new versions. See #2442, #2550, and #2528 for details.

From this version, we are expanding platform support. Previously, changes were tested only in Linux containers; now, changes merged into the master branch are also tested in macOS containers (#2461).
# Breaking Changes
# New Features
- Support object representation of `StudyDirection` for `create_study` arguments (#2516)
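For illustration, this means `StudyDirection` members can now be passed directly instead of strings (a minimal sketch):

```python
import optuna
from optuna.study import StudyDirection

# Previously directions had to be given as strings such as "minimize";
# StudyDirection members are now accepted as well.
study = optuna.create_study(directions=[StudyDirection.MINIMIZE, StudyDirection.MAXIMIZE])
```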
# Enhancements
- Change caching implementation of MOTPE (#2406, thanks @y0z!)
- Fix to replace `numpy.append` (#2419, thanks @nyanhi!)
- Modify `after_trial` for `NSGAIISampler` (#2436, thanks @jeromepatel!)
- Print a URL of a related release note in the warning message (#2496)
- Add log-linear algorithm for 2d Pareto front (#2503, thanks @parsiad!)
- Concatenate the argument text after the deprecation warning (#2558)
# Bug Fixes
- Use 2.0 style delete API of SQLAlchemy (#2487)
- Fix Warm Starting CMA-ES with a maximize direction (#2501)
- Fix `visualization.matplotlib.plot_optimization_history` for multi-objective (#2532)
# Installation
- Bump `torch` to 1.8.0 (#2442)
- Remove Cython from `install_requires` (#2466)
- Fix Cython installation for Python 3.9 (#2474)
- Avoid catalyst 21.3 (#2480, thanks @crcrpar!)
# Documentation
- Add ask and tell interface tutorial (#2422)
- Add tutorial for re-use of the `best_trial` (#2427)
- Add explanation for `get_storage` in the API reference (#2430)
- Follow-up of the user-defined pruner tutorial (#2446)
- Add a new example `max_trial_callback` to `optuna/examples` (#2449, thanks @jeromepatel!)
- Standardize on 'hyperparameter' usage (#2460)
- Replace MNIST with Fashion MNIST in multi-objective optimization tutorial (#2468)
- Fix links on `SuccessiveHalvingPruner` page (#2489)
- Swap the order of `load_if_exists` and `directions` for consistency (#2491)
- Clarify `n_jobs` for `OptunaSearchCV` (#2545)
- Mention the paper is in Japanese (#2547, thanks @crcrpar!)
- Fix typo of the paper's author name (#2552)
# Examples
- Add an example of `Ray` with `joblib` backend (#2298)
- Added RL and Multi-Objective examples to `examples/README.md` (#2432, thanks @jeromepatel!)
- Replace `sh` with `bash` in README of kubernetes examples (#2440)
- Apply #2438 to pytorch examples (#2453, thanks @crcrpar!)
- More Examples Folders after #2302 (#2458, thanks @crcrpar!)
- Apply `urllib` patch for MNIST download (#2459, thanks @crcrpar!)
- Update `Dockerfile` of MLflow Kubernetes examples (#2472, thanks @0x41head!)
- Replace Optuna's Catalyst pruning callback with Catalyst's Optuna pruning callback (#2485, thanks @crcrpar!)
- Use whitespace tokenizer instead of spacy tokenizer (#2494)
- Use Fashion MNIST in example (#2505, thanks @crcrpar!)
- Update `pytorch_lightning_distributed.py` to remove MNIST and PyTorch Lightning errors (#2514, thanks @0x41head!)
- Use `OptunaPruningCallback` in `catalyst_simple.py` (#2546, thanks @crcrpar!)
- Support fastai 2.3.0 (#2550)
# Tests
- Add `MOTPESampler` in `parametrize_multi_objective_sampler` (#2448)
- Extract test cases regarding Pareto front to `test_multi_objective.py` (#2525)
# Code Fixes
- Fix `mypy` errors produced by `numpy==1.20.0` (#2300, thanks @0x41head!)
- Simplify the code to find best values (#2394)
- Use `_SearchSpaceTransform` in `RandomSampler` (#2410, thanks @sfujiwara!)
- Set the default value of `state` of `create_trial` as `COMPLETE` (#2429)
# Continuous Integration
- Run TensorFlow related examples on Python3.8 (#2368, thanks @crcrpar!)
- Use legacy resolver in CI's pip installation (#2434, thanks @crcrpar!)
- Run tests and integration tests on Mac & Python3.7 (#2461, thanks @crcrpar!)
- Run Dask ML example on Python3.8 (#2499, thanks @crcrpar!)
- Install OpenBLAS for mxnet1.8.0 (#2508, thanks @crcrpar!)
- Add ray to requirements (#2519, thanks @crcrpar!)
- Upgrade AllenNLP to `v2.2.0` (#2528)
- Add Coverage for ChainerMN in codecov (#2535, thanks @jeromepatel!)
- Skip fastai2.3 tentatively (#2548, thanks @crcrpar!)
# Other
- Add `-f` option to `make clean` command idempotent (#2439)
- Bump `master` version to `2.7.0dev` (#2444)
- Document how to write a new tutorial in `CONTRIBUTING.md` (#2463, thanks @crcrpar!)
- Bump up version number to 2.7.0 (#2561)
# Thanks to All the Contributors!
This release was made possible by the authors and everyone who participated in reviews and discussions.
@0x41head, @AmeerHajAli, @Crissman, @HideakiImamura, @c-bata, @crcrpar, @g-votte, @himkt, @hvy, @jeromepatel, @keisuke-umezawa, @not522, @nyanhi, @nzw0301, @parsiad, @sfujiwara, @sile, @toshihikoyanase, @y0z