Release Highlights
- Serve: Better streaming support. Support for HTTP streaming responses and WebSockets is now on by default, and `@serve.batch`-decorated methods can stream responses.
- Train and Tune: Users are now expected to provide a cloud storage or NFS path for distributed training or tuning jobs instead of a local path. This means that results written to different worker machines will not be directly synced to the head node; instead, Ray raises an error telling you to switch to one of the recommended alternatives: cloud storage or NFS (see the sketch after this list). Please see #37177 if you have questions.
- Data: We are introducing a new streaming integration of Ray Data and Ray Train. This allows streaming data ingestion for model training, and enables per-epoch data preprocessing. The DatasetPipeline API is also being deprecated in favor of Dataset with streaming execution.
- RLlib: Public alpha release of the new multi-GPU Learner API, which is less complex and more powerful than our previous solution (blog post). It is used by the PPO algorithm by default.
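For the Train and Tune storage change, the sketch below shows results being written to cloud storage via `RunConfig(storage_path=...)`; it is a minimal illustration, and the bucket path and trivial trainable are hypothetical.

```python
from ray import tune
from ray.air import RunConfig

def trainable(config):  # hypothetical trainable for illustration
    return {"score": config["x"] ** 2}

# Pass a cloud storage (or NFS) path instead of relying on head-node
# syncing; "s3://my-bucket/ray-results" is a placeholder bucket.
tuner = tune.Tuner(
    trainable,
    param_space={"x": 2},
    run_config=RunConfig(storage_path="s3://my-bucket/ray-results"),
)
results = tuner.fit()
```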
Ray Libraries
Ray AIR
🎉 New Features:
- Added support for restoring Results from local trial directories. (#35406)
💫 Enhancements:
- [Train/Tune] Disable Train/Tune syncing to head node (#37142)
- [Train/Tune] Introduce new console output progress reporter for Train and Tune (#35389, #36154, #36072, #35770, #36764, #36765, #36156, #35977)
- [Train/Data] New Train<>Data streaming integration (#35236, #37215, #37383)
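The last item above, the new Train<>Data streaming integration, can be sketched as follows. This is a hedged illustration assuming a Torch setup; the dataset, batch size, and worker count are arbitrary placeholders.

```python
import ray
from ray.air import ScalingConfig, session
from ray.train.torch import TorchTrainer

def train_loop_per_worker(config):
    # Each worker streams batches from its shard of the dataset;
    # iterating again on the next epoch re-reads the data, enabling
    # per-epoch preprocessing.
    shard = session.get_dataset_shard("train")
    for epoch in range(2):
        for batch in shard.iter_batches(batch_size=32):
            pass  # forward/backward pass would go here

trainer = TorchTrainer(
    train_loop_per_worker,
    datasets={"train": ray.data.range(1000)},
    scaling_config=ScalingConfig(num_workers=2),
)
trainer.fit()
```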
🔨 Fixes:
- Pass on KMS-related kwargs for s3fs (#35938)
- Fix infinite recursion in log redirection (#36644)
- Remove temporary checkpoint directories after restore (#37173)
- Don't track removed actors that haven't been started (#36020)
- Fix bug in execution for actor re-use (#36951)
- Cancel `pg.ready()` task for pending trials that end up reusing an actor (#35748)
- Add case for `Dict[str, np.array]` batches in `DummyTrainer` read bytes calculation (#36484)
📖 Documentation:
- Remove experimental features page, add github issue instead (#36950)
- Fix batch format in `dreambooth` example (#37102)
- Fix `Checkpoint.from_checkpoint` docstring (#35793)
🏗 Architecture refactoring:
- Remove deprecated mlflow and wandb integrations (#36860, #36899)
- Move constants from tune/results.py to air/constants.py (#35404)
- Clean up a few checkpoint related things. (#35321)
Ray Data
🎉 New Features:
- New streaming integration of Ray Data and Ray Train. This allows streaming data ingestion for model training, and enables per-epoch data preprocessing. (#35236)
- Enable execution optimizer by default (#36294, #35648, #35621, #35952)
- Deprecate DatasetPipeline (#35753)
- Add `Dataset.unique()` (#36655, #36802) (see the sketch after this list)
- Add option for parallelizing post-collation data batch operations in `DataIterator.iter_batches()` (#36842) (#37260)
- Enforce strict mode batch format for `DataIterator.iter_batches()` (#36686)
- Remove `ray.data.range_arrow()` (#35756)
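A quick, hedged sketch of the new `Dataset.unique()` call from the list above; the column name and rows are invented for illustration.

```python
import ray

ds = ray.data.from_items(
    [{"color": "red"}, {"color": "blue"}, {"color": "red"}]
)
# Returns the distinct values in the given column (order not guaranteed).
print(ds.unique("color"))  # e.g. ["red", "blue"]
```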
💫 Enhancements:
- Optimize block prefetching (#35568)
- Enable isort for data directory (#35836)
- Skip writing a file for an empty block in `Dataset.write_datasource()` (#36134)
- Remove shutdown logging from StreamingExecutor (#36408)
- Spread map task stages by default for arg size <50MB (#36290)
- Read->SplitBlocks to ensure requested read parallelism is always met (#36352)
- Support partial execution in `Dataset.schema()` with new execution plan optimizer (#36740)
- Propagate iter stats for `Dataset.streaming_split()` (#36908)
- Cache the computed schema to avoid re-executing (#37103)
🔨 Fixes:
- Support sub-progress bars on AllToAllOperators with optimizer enabled (#34997)
- Fix DataContext not being propagated properly for the `Dataset.streaming_split()` operator
- Fix edge case in empty bundles with `Dataset.streaming_split()` (#36039)
- Apply Arrow table indices mapping on HuggingFace Dataset prior to reading into Ray Data (#36141)
- Fix issues with combining use of `Dataset.materialize()` and `Dataset.streaming_split()` (#36092)
- Fix quadratic slowdown when locally shuffling tensor extension types (#36102)
- Make sure progress bars always finish at 100% (#36679)
- Fix wrong output order of `Dataset.streaming_split()` (#36919)
- Fix StreamingExecutor not being shut down when the iterator is not fully consumed (#36933)
- Calculate stage execution time in StageStatsSummary from `BlockMetadata` (#37119)
📖 Documentation:
- Standardize Data API ref (#36432, #36937)
- Docs for working with PyTorch (#36880)
- Split "Consuming data" guide (#36121)
- Revise "Loading data" (#36144)
- Consolidate Data user guides (#36439)
🏗 Architecture refactoring:
- Remove simple blocks representation (#36477)
Ray Train
🎉 New Features:
- LightningTrainer now supports DeepSpeedStrategy (#36165)
💫 Enhancements:
- Unify Lightning and AIR CheckpointConfig (#36368)
- Add support for custom pipeline class in TransformersPredictor (#36494)
🔨 Fixes:
- Fix Deepspeed device ranks check in Lightning 2.0.5 (#37387)
- Clear stale lazy checkpointing markers on all workers. (#36291)
📖 Documentation:
- Migrate Ray Train `code-block` to `testcode` (#36483)
Ray Tune
🔨 Fixes:
- Optuna: Update distributions to use new APIs (#36704)
- BOHB: Fix nested bracket processing (#36568)
- Hyperband: Fix scheduler raising an error for good `PENDING` trials (#35338)
- Fix param space placeholder injection for numpy/pandas objects (#35763)
- Fix result restoration with Ray Client (#35742)
- Fix trial runner/controller whitelist attributes (#35769)
📖 Documentation:
- Remove missing example from Tune "Other examples" (#36691)
🏗 Architecture refactoring:
- Remove `tune/automl` (#35557)
- Remove hard-deprecated modules from structure refactor (#36984)
- Remove deprecated mlflow and wandb integrations (#36860, #36899)
- Move constants from tune/results.py to air/constants.py (#35404)
- Deprecate redundant syncing related parameters (#36900)
- Deprecate legacy modules in `ray.tune.integration` (#35160)
Ray Serve
💫 Enhancements:
- Support for HTTP streaming responses and WebSockets is now on by default (see the sketch after this list).
- `@serve.batch`-decorated methods can stream responses.
- `@serve.batch` settings can be reconfigured dynamically.
- Ray Serve now uses "power of two random choices" routing. This improves enforcement of `max_concurrent_queries` and tail latencies under load.
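As a rough illustration of the now-default HTTP streaming support, the sketch below returns a Starlette `StreamingResponse` from a plain deployment; the deployment name and chunk payloads are made up.

```python
from starlette.requests import Request
from starlette.responses import StreamingResponse

from ray import serve

@serve.deployment
class Chunker:
    def __call__(self, request: Request) -> StreamingResponse:
        def chunks():
            for i in range(5):
                yield f"chunk {i}\n"  # arbitrary payload for illustration
        return StreamingResponse(chunks(), media_type="text/plain")

app = Chunker.bind()
# serve.run(app), then e.g.: curl --no-buffer http://localhost:8000/
```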
🔨 Fixes:
- Fixed a bug that previously prevented using a custom module named "utils".
- Fixed a Serve downscaling issue by adding a new draining state to the HTTP proxy. A draining proxy stops taking new requests when there are no replicas on its node, preventing interruption of ongoing requests when the node is downscaled. This also enables downscaling when requests use Ray's object store, which previously blocked the node from being downscaled.
- Fixed non-atomic shutdown logic. Serve shutdown now runs in the background, so the client does not have to wait for it to complete, and it is not interrupted if the client is force-killed.
RLlib
🎉 New Features:
- Public alpha release of the new multi-GPU Learner API, which is less complex and more powerful than the old training stack (blog post). It is used by the PPO algorithm by default (see the sketch after this list).
- Added RNN support to the new RLModule API
- Added a TF version of DreamerV3 (link). Comprehensive results will be published soon.
- Added support for the torch 2.x `compile` method when sampling from the environment
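A hedged sketch of running PPO on the new Learner stack via the standard `AlgorithmConfig` API; the environment and learner-worker counts are placeholders, and the exact resource knobs may vary between versions.

```python
from ray.rllib.algorithms.ppo import PPOConfig

config = (
    PPOConfig()
    .environment("CartPole-v1")  # placeholder environment
    # Learner-API resource knobs (assumed names from the 2.x config API):
    .resources(num_learner_workers=2, num_gpus_per_learner_worker=0)
)
algo = config.build()
result = algo.train()  # one training iteration on the new Learner stack
```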
💫 Enhancements:
- Added an example of pretraining with BC and then fine-tuning with PPO (example)
- RLlib deprecation notices (algorithm/, evaluation/, execution/, models/jax/) (#36826)
- Enable eager_tracing=True by default. (#36556)
🔨 Fixes:
- Fix bug in Multi-Categorical distribution: it should use `logp`, not `log_p`. (#36814)
- Fix LSTM + Connector bug: StateBuffer restarting states on every in_eval() call. (#36774)
🏗 Architecture refactoring:
- Multi-GPU Learner API
Ray Core
🎉 New Features:
- [Core][Streaming Generator] Cpp interfaces and implementation (#35291)
- [Core][Streaming Generator] Streaming Generator. Support Core worker APIs + cython generator interface. (#35324) (see the usage sketch after this list)
- [Core][Streaming Generator] Streaming Generator. E2e integration (#35325)
- [Core][Streaming Generator] Support async actor and async generator interface. (#35584)
- [Core][Streaming Generator] Streaming Generator. Support the basic retry/lineage reconstruction (#35768)
- [Core][Streaming Generator] Allow raising an exception to avoid check failures. (#35766)
- [Core][Streaming Generator] Fix a reference leak when a stream is deleted with out of order writes. (#35591)
- [Core][Streaming Generator] Fix a reference leak when pinning requests are received after refs are consumed. (#35712)
- [Core][Streaming Generator] Handle out-of-order reports on retry (#36069)
- [Core][Streaming Generator] Make it compatible with wait (#36071)
- [Core][Streaming Generator] Remove busy waiting (#36070)
- [Core][Autoscaler v2] add test for node provider (#35593)
- [Core][Autoscaler v2] add unit tests for NodeProviderConfig (#35590)
- [Core][Autoscaler v2] test ray-installer (#35875)
- [Core][Autoscaler v2] fix too many values to unpack (expected 2) bug (#36231)
- [Core][Autoscaler v2] Add idle time information to Autoscaler endpoint. (#36918)
- [Core][Autoscaler v2] Cherry-pick change to Autoscaler interface (#37407)
- [Core][Autoscaler v2] Fix idle time duration when node resource is not updated periodically (#37121) (#37175)
- [Core][Autoscaler v2] Fix pg id serialization with hex rather than binary for cluster state reporting #37132 (#37176)
- [Core][Autoscaler v2] GCS Autoscaler V2: Add instance id to ray [3/x] (#35649)
- [Core][Autoscaler v2] GCS Autoscaler V2: Add node type name to ray (#36714)
- [Core][Autoscaler v2] GCS Autoscaler V2: Add placement group's gang resource requests handling [4/x] (#35970)
- [Core][Autoscaler v2] GCS Autoscaler V2: Handle ReportAutoscalingState (#36768)
- [Core][Autoscaler v2] GCS Autoscaler V2: Interface [1/x] (#35549)
- [Core][Autoscaler v2] GCS Autoscaler V2: Node states and resource requests [2/x] (#35596)
- [Core][Autoscaler v2] GCS Autoscaler V2: Support Autoscaler.sdk.request_resources [5/x] (#35846)
- [Core][Autoscaler v2] Ray status interface [1/x] (#36894)
- [Core][Autoscaler v2] Remove usage of grpcio from Autoscaler SDK (#36967)
- [Core][Autoscaler v2] Update Autoscaler proto for default enum value (#36962)
- [Core][Autoscaler v2] Update Autoscaler.proto / instance_manager.proto dependency (#36116)
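The Streaming Generator entries above let callers consume task outputs as they are produced. Below is a hedged usage sketch assuming the alpha `num_returns="streaming"` interface from this release.

```python
import ray

@ray.remote(num_returns="streaming")
def stream(n):
    for i in range(n):
        yield i  # each yielded value becomes its own object ref

# Iterating the returned generator yields ObjectRefs one at a time,
# potentially before the task has finished producing the rest.
for ref in stream.remote(3):
    print(ray.get(ref))
```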
💫 Enhancements:
- [Core] Make some grpcio imports lazy (#35705)
- [Core] Only instantiate gcs channels on driver (#36389)
- [Core] Port GcsSubscriber to Cython (#35094)
- [Core] Print out warning every 1s when sched_cls_id is greater than 100 (#35629)
- [Core] Remove attrs dependency (#36270)
- [Core] Remove dataclasses requirement (#36218)
- [Core] Remove grpcio from Ray minimal dashboard (#36636)
- [Core] Remove grpcio import from usage_lib (#36542)
- [Core] remove import thread (#36293)
- [Core] Remove Python grpcio from check_health (#36304)
- [Core] Retrieve the token from GCS server [4/n] (#37003) (#37294)
- [Core] Retry failed redis request (#35249)
- [Core] Sending ReportWorkerFailure after the process died. (#35320)
- [Core] Serialize auto-inits (#36127)
- [Core] Support auto-init ray for get_runtime_context() (#35903)
- [Core] Suppress harmless ObjectRefStreamEndOfStreamError when using asyncio (#37062) (#37200)
- [Core] Unpin grpcio and make Ray run on mac M1 out of the box (#35932)
- [Core] Add a better error message for health checking network failures (#36957) (#37366)
- [Core] Add ClusterID to ClientCallManager [2/n] (#36526)
- [Core] Add ClusterID token to GCS server [3/n] (#36535)
- [Core] Add ClusterID token to GRPC server [1/n] (#36517)
- [Core] Add extra metrics for workers (#36973)
- [Core] Add get_worker_id() to runtime context (#35967)
- [Core] Add logs for Redis leader discovery for observability. (#36108)
- [Core] Add metrics for object size distribution in object store (#37005) (#37110)
- [Core] Add resource idle time to resource report from node. (#36670)
- [Core] Check that temp_dir must be absolute path. (#36431)
- [Core] Clear CPU affinity for worker processes (#36816)
- [Core] Delete object spilling dead code path. (#36286)
- [Core] Don't drop rpc status in favor of reply status (#35530)
- [Core] Feature flag actor task logs with off by default (#35921)
- [Core] Graceful handling of returning bundles when node is removed (#34726)
- [Core] Graceful shutdown in TaskEventBuffer destructor (#35857)
- [Core] Guarantee the ordering of put ActorTaskSpecTable and ActorTable (#35683)
- [Core] Introduce fail_on_unavailable option for hard NodeAffinitySchedulingStrategy (#36718) (see the sketch after this list)
- [Core] Make `import ray` work without grpcio (#35737)
- [Core][dashboard] Add task name in task log magic token (#35377)
- [Core][deprecate run_function_on_all_workers 3/n] delete run_function_on_all_workers (#30895)
- [Core][devex] Move ray/util build targets to separate build files (#36598)
- [Core][logging][ipython] Fix log buffering when consecutive runs within ray log dedup window (#37134) (#37174)
- [Core][Logging] Switch worker_setup_hook to worker_process_setup_hook (#37247) (#37463)
- [Core][Metrics] Use Autoscaler-emitted metrics for pending/active/failed nodes. (#35884)
- [Core][state] Record file offsets instead of logging magic token to track task log (#35572)
- [CI] [Minimal install] Check python version in minimal install (#36887)
- [CI] Second try of fixing vllm example in CI (#36712)
- [CI] Skip vllm_example (#36665)
- [CI][Core] Add more visibility into state api stress test (#36465)
- [CI][Doc] Add windows 3.11 wheel support in doc and CI (#37297) (#37302)
- [CI][py3.11] Build python wheels on mac os for 3.11 (#36185)
- [CI][python3.11] windows 3.11 wheel build
- [CI][release] Add mac 3.11 wheels to release scripts (#36396)
- [CI] Update state api scale test (#35543)
- [Release Test] Fix dask on ray 1tb sort failure. (#36905)
- [Release Test] Make the cluster name unique for cluster launcher release tests (#35801)
- [Test] Deflake gcs fault tolerance test on macOS (#36471)
- [Test] Deflake pubsub integration_test (#36284)
- [Test] Change instance type to r5.8xlarge for dask_on_ray_1tb_sort (#37321) (#37409)
- [Test] Move generators test to large (#35747)
- [Test][Core] Handled the case where memories is empty for dashboard test (#35979)
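For the `fail_on_unavailable` option introduced above, here is a hedged sketch: a hard `NodeAffinitySchedulingStrategy` pins a task to one node, and the new flag makes the task fail fast instead of queueing if that node cannot run it.

```python
import ray
from ray.util.scheduling_strategies import NodeAffinitySchedulingStrategy

ray.init()
node_id = ray.get_runtime_context().get_node_id()

@ray.remote(
    scheduling_strategy=NodeAffinitySchedulingStrategy(
        node_id=node_id,
        soft=False,  # hard affinity: never fall back to another node
        fail_on_unavailable=True,  # fail instead of waiting indefinitely
    )
)
def pinned_task():
    return "ran on the pinned node"

print(ray.get(pinned_task.remote()))
```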
🔨 Fixes:
- [Core] Fix GCS FD usage increase regression. (#35624)
- [Core] Fix issues with worker churn in WorkerPool (#36766)
- [Core] Fix proctitle for generator tasks (#36928)
- [Core] Fix ray.timeline() (#36676)
- [Core] Fix raylet memory leak in the wrong setup. (#35647)
- [Core] Fix test_no_worker_child_process_leaks (#35840)
- [Core] Fix the GCS crash when connecting to a redis cluster with TLS (#36916)
- [Core] Fix the race condition where grpc requests are handled while c… (#37301)
- [Core] Fix the recursion error when async actor has lots of deserialization. (#35494)
- [Core] Fix the segfault from Opencensus upon shutdown (#36906) (#37311)
- [Core] Fix the unnecessary logs (#36931) (#37313)
- [Core] Add a special head node resource and use it to pin the serve controller to the head node (#35929)
- [Core] Add debug log for serialized object size (#35992)
- [Core] Cache schema and test (#37103) (#37201)
- [Core] Fix 'ray stack' on macOS (#36100)
- [Core] Fix a wrong metrics setup link from the doc. (#37312) (#37367)
- [Core] Fix lint (#35844)(#36739)
- [Core] Fix literalinclude path (#35660)
- [Core] Fix microbenchmark (#35823)
- [Core] Fix single_client_wait_1k perf regression (#35614)
- [Core] Get rid of shared_ptr for GcsNodeManager (#36738)
- [Core] Remove extra step in M1 installation instructions (#36029)
- [Core] Remove unnecessary AsyncGetResources in NodeManager::NodeAdded (#36412)
- [Core] Unskip test_Autoscaler_shutdown_node_http_everynode (#36420)
- [Core] Unskip test_get_release_wheel_url for mac (#36430)
📖 Documentation:
- [Doc] Clarify that session can also mean a ray cluster (#36422)
- [Doc] Fix doc build on M1 (#35689)
- [Doc] Fix documentation failure due to typing_extensions (#36732)
- [Doc] Make doc code snippet testable [3/n] (#35407)
- [Doc] Make doc code snippet testable [4/n] (#35506)
- [Doc] Make doc code snippet testable [5/n] (#35562)
- [Doc] Make doc code snippet testable [7/n] (#36960)
- [Doc] Make doc code snippet testable [8/n] (#36963)
- [Doc] Some instructions on how to size the head node (#36429)
- [Doc] Fix doc for runtime-env-auth (#36421)
- [Doc][dashboard][state] Promote state api and dashboard usage in Core user guide. (#35760)
- [Doc][python3.11] Update mac os wheels built link (#36379)
- [Doc] [typo] Rename acecelerators.md to accelerators.md (#36500)
Many thanks to all those who contributed to this release!
@ericl, @ArturNiederfahrenhorst, @sihanwang41, @scv119, @aslonnie, @bluecoconut, @alanwguo, @krfricke, @frazierprime, @vitsai, @amogkam, @GeneDer, @jovany-wang, @gjoliver, @simran-2797, @rkooo567, @shrekris-anyscale, @kevin85421, @angelinalg, @maxpumperla, @kouroshHakha, @Yard1, @chaowanggg, @justinvyu, @fantow, @Catch-Bull, @cadedaniel, @ckw017, @hora-anyscale, @rickyyx, @scottsun94, @XiaodongLv, @SongGuyang, @RocketRider, @stephanie-wang, @inpefess, @peytondmurray, @sven1977, @matthewdeng, @ijrsvt, @MattiasDC, @richardliaw, @bveeramani, @rynewang, @woshiyyya, @can-anyscale, @omus, @eax-anyscale, @raulchen, @larrylian, @Deegue, @Rohan138, @jjyao, @iycheng, @akshay-anyscale, @edoakes, @zcin, @dmatrix, @bryant1410, @WanNJ, @architkulkarni, @scottjlee, @JungeAlexander, @avnishn, @harisankar95, @pcmoritz, @wuisawesome, @mattip