Release Highlights
Ray Data:
This release offers many updates to Ray Data, including:
- The default shuffle strategy has changed from sort-based to hash-based. This results in much lower peak memory usage and improved shuffle performance for aggregations.
- We’ve added a new expression API that enables predicate-based filtering, UDF transformations with `with_column`, and column aliasing for more powerful data transformations (sketched below).
- Ray Data LLM has a number of new enhancements for multimodal data pipelines, including multi-node tensor and pipeline parallelism support per replica and the ability to share vLLM engines across processors.
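A minimal sketch of the new expression API (the import path for the expression helpers and the `expr=` keyword on `filter` are assumptions; exact names may differ):

```python
import ray
from ray.data.expressions import col, lit  # import path assumed

ds = ray.data.range(100)  # one column: "id"
ds = (
    ds.filter(expr=col("id") > lit(50))                # predicate-based filtering
    .with_column("id_squared", col("id") * col("id"))  # derived column
    .rename_columns({"id": "index"})                   # column aliasing
)
print(ds.take(3))
```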
Ray Core:
Alpha release of Ray Direct Transport (formerly GPU objects): enable it by adding the `tensor_transport` parameter to the existing native Ray Core API. This keeps GPU data in GPU memory until a transfer is needed, avoiding expensive serialization and copies to and from the Ray object store. It uses efficient data transports such as collective communication libraries (GLOO or NCCL) or point-to-point RDMA (via NVIDIA’s NIXL) to transfer data directly between devices, including both CPUs and GPUs. A minimal sketch follows.
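The sketch below assumes `tensor_transport` is passed through the `@ray.method` decorator on the producing actor method; exact placement of the option may differ in the alpha API:

```python
import ray
import torch

@ray.remote(num_gpus=1)
class Producer:
    # Keep the returned tensor on the GPU and move it over NCCL
    # instead of serializing it into the object store.
    @ray.method(tensor_transport="nccl")
    def make(self) -> torch.Tensor:
        return torch.ones(4096, device="cuda")

@ray.remote(num_gpus=1)
class Consumer:
    def total(self, t: torch.Tensor) -> float:
        return float(t.sum())

producer, consumer = Producer.remote(), Consumer.remote()
ref = producer.make.remote()
# The tensor travels GPU-to-GPU when the consumer dereferences it.
print(ray.get(consumer.total.remote(ref)))
```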
Ray Train:
Local mode support for multi-process training with `torchrun`, plus enhanced checkpoint management with new upload modes and validation functions.
Ray Serve:
- Async inference alpha release - new Ray Serve APIs for supporting long-running asynchronous inference tasks, such as video or large-document processing. Includes capabilities for using different message brokers, adapters like `celery`, and a dead-letter queue (DLQ).
- Support for replica ranks - replica-level ranks are added to support large-model inference use cases such as wide Data Parallel and Expert Parallel setups.
- FastAPI factory pattern support - Enables using FastAPI plugins that are not serializable via cloudpickle.
- Throughput optimizations - enable these using the `RAY_SERVE_THROUGHPUT_OPTIMIZED` environment variable, as shown below.
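For example, assuming the flag is read when Serve starts and accepts `1`:

```python
import os

# Equivalent to `export RAY_SERVE_THROUGHPUT_OPTIMIZED=1` in the shell;
# set it before Serve is imported/started so the flag is picked up.
os.environ["RAY_SERVE_THROUGHPUT_OPTIMIZED"] = "1"

from ray import serve
```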
RLlib:
Added a `StepFailedRecreateEnv` exception for users whose environments can fail on `step()` and need to be recreated.
Ray Serve/Data LLM:
Improvements to multi-node serving, loading models from remote storage, and resource sharing for efficiency (fractional GPUs, and sharing GPUs across the shared stages of a data pipeline).
Ray Libraries
Ray Data
🎉 New Features:
- Expression and Filtering API: New expression API enables predicate-based filtering, UDF transformations with with_column, and column aliasing for more powerful data transformations (#56716, #56313, #56550, #55915, #55788, #56193, #56596)
- Added support for projection pushdown into Parquet reads (#56500)
- New download expression enables efficient loading of data from columns containing URIs with improved performance and error handling (#55824, #56462, #56294, #56852, #57146)
- New `explain()` API provides insights into dataset execution plans (#55482)
- Added `streaming_train_test_split` to avoid materialization for train/test splits (#56803); see the sketch after this list
- Ray Data LLM:
  - Enabled multi-node tensor and pipeline parallelism for LLM processing (#56779)
  - Added `chat_template_kwargs` parameter for customizing chat templates (#56490)
  - Added support for OpenAI's nested image URL format in multimodal pipelines (#56584)
  - vLLM engines can now be shared across sequential processors for better resource utilization (#55179)
- Enhanced Dataset.stats() output with input/output row counts per operator (#56040)
- Added new metrics for task duration, inputs per task, and output blocks (#56958, #56379)
- Time to first batch metric for better iteration performance monitoring (#55758)
- Added type-specific aggregators for numerical, categorical, and vector columns (#56610)
- Added fine-grained concurrency controls with `max_task_concurrency` and resource allocation options (#56370, #56381)
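A sketch of `streaming_train_test_split`, assuming its signature mirrors the existing `Dataset.train_test_split(test_size=...)`:

```python
import ray

ds = ray.data.range(1_000)
# Split without materializing the full dataset; the signature here is
# assumed to mirror Dataset.train_test_split.
train_ds, test_ds = ds.streaming_train_test_split(test_size=0.2)
print(train_ds.count(), test_ds.count())
```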
💫 Enhancements:
- Join and shuffle improvements:
  - Joins now supported with list/tensor non-key columns (#5648)
- Tensor type handling improvements:
  - Improved compatibility between PyArrow native types, extension types, and pandas Arrow dtypes (#57566, #57176, #57057)
  - Enhanced support for variable-shaped tensor arrays with different dimensions (#57240, #56918, #56457)
  - Added serialization/deserialization for PyArrow Extension Arrays (#51972)
- Removing Parquet metadata fetching in ParquetDatasource (#56105)
- Resource requirements (num_cpus/gpus, memory) are now top-level parameters in most APIs for easier configuration (#56419)
- zip() operator now supports combining multiple datasets, not just pairs (#56524)
- Concurrency parameter now accepts tuples for more flexible configuration (#55867)
- Write operations now use iterators instead of accumulating blocks in memory (#57108)
- Reduced memory usage for OneHotEncoder (#56565)
- Reduced memory usage for schema unification (#55880)
- Eliminated unnecessary block copying and double execution of arrow conversions (#56569, #56793)
- Improved Parquet encoding ratio estimation (#56268)
- Enabled per-block limiting for Limit operator (#55239)
- Optimized schema handling with deduplication and removed unnecessary unification (#55854, #55926)
- Improved issue detection with event emission instead of just logs (#55717)
- Better metric organization and external queue metric handling (#55495, #56604)
- New backpressure policy based on downstream processing capacity (#55463)
🔨 Fixes:
- Fixed streaming executor to properly drain output queues (#56941)
- Improved resource management and reservation for operators (#56319, #57123)
- Fixed retry logic for hash shuffle operations (#57575)
- Fix split_blocks producing empty blocks (#57085)
- Initialize datacontext after setting src_fn_name in actor worker (#57117)
- Fix mongo datasource collStats invocation (#57027)
- Fixing empty projection handling in ParquetDataSource (#56299)
- Fix UnboundLocalError when read_parquet with columns and no partitioning (#55820)
- Fix high memory usage with FileBasedDatasource & ParquetDatasource when using a large number of files (#55978)
- [llm] Fixed LLM processor deployment with Ray Serve (#57061)
- [llm] Fixed multimodal image extraction when system prompts are absent (#56435)
- Ignore metadata for pandas block (#56402)
- Remove metadata for hashing + truncate warning logs (#56093)
📖 Documentation:
- Fixed error in ray.data.groupby example in docs. (#57036)
- Update on ray.data.Dataset.map() type hints. (#52455)
- Small typo fix. (#56560)
- Fix a typo. (#56587)
- Fix documentation for new execution options resource limits assignment. (#56051)
- Fix broken code snippets in user guides. (#55519)
- Add Autoscaling Config for Context docs. (#55712)
- Make object store tuning tips consistent with other pages. (#56705)
- New example of how to perform batch inference with embedding models (#56027)
Ray Train
🎉 New Features:
- Local mode support for Ray Train V2
- Async checkpoint and validation for Ray Train (see the sketch below)
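A sketch of what async checkpoint reporting might look like; the `checkpoint_upload_mode` argument is referenced in the docs changes below, but the import path and member names of `CheckpointUploadMode` are assumptions:

```python
import os
import tempfile

import ray.train
from ray.train import Checkpoint
from ray.train import CheckpointUploadMode  # import path assumed

def train_fn(config):
    for epoch in range(config["epochs"]):
        tmp = tempfile.mkdtemp()
        with open(os.path.join(tmp, "model.pt"), "wb") as f:
            f.write(b"weights")  # save real weights here
        ray.train.report(
            {"epoch": epoch},
            checkpoint=Checkpoint.from_directory(tmp),
            # Upload in the background instead of blocking the next step;
            # Ray is assumed to keep the local files alive until the
            # upload completes.
            checkpoint_upload_mode=CheckpointUploadMode.ASYNC,
        )
```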
💫 Enhancements:
- Ray Train V2 Migration
  - Implement BaseWorkerGroup for V1/V2 compatibility. (#57151)
  - Train Controller is always actor + fix tune integration to enable this. (#55556)
  - Refactor AcceleratorSetupCallback to use before_init_train_context. (#56509)
  - Move collective implementations to train_fn_utils. (#55689)
- Ray Train Framework support enhancements
  - Add Torch process group shutdown timeout. (#56182)
  - Ray Train disables blocking get inside async warning. (#56757)
  - ThreadRunner captures exceptions from nested threads. (#55756)
  - Abort reconciliation thread catches ray.util.state.get_actor exception. (#56600)
- Ray Data Integration
  - Refactor call_with_retry into shared library and use it to retry checkpoint upload. (#56608)
  - Remove Placement Group on Train Run Abort. (#56011)
🔨 Fixes:
- Fix LightGBM v2 callbacks for Tune only usage. (#57042)
- Ignore tensorflow test for py312. (#56244)
- Revising test_jax_trainer flaky test. (#56854)
- Fix test_jax_trainer imports. (#55799)
- Fix test_jax_trainer::test_minimal_multihost Flaky Test. (#56548)
- Disable drop_last flag to fix division by zero in torch dataloader baselines. (#56395)
- Preload a subset of modules for torch dataloader forkserver multiprocessing. (#56343)
📖 Documentation:
- Add checkpoint_upload_mode to checkpoint docs. (#56860)
- Add get_all_reported_checkpoints and ReportedCheckpoint to API docs. (#56174)
- Fix typo for Instantiating in ray train doc. (#55826)
🏗 Architecture refactoring:
- Release tests for ray train local mode. (#56862)
- Migrate tune_rllib_connect_test & tune_cloud_long_running_cloud_storage to ray train v2. (#56844)
- Add v2 multinode persistence release test. (#56856)
- Attach a quick checkpoint when reporting metrics. (#56718)
- Upgrade tune_torch_benchmark to v2. (#56804)
- Move tune_with_frequent_pausing to Ray Train v2 and tune_tests folder. (#56799)
- Migrate xgboost/lgbm benchmarks to train V2. (#56792)
Ray Tune
🎉 New Features:
- Trigger Checkpointing via Trial / Tuner Callback. (#55527)
💫 Enhancements:
- Improve _PBTTrialState for dev/debugging usage. (#56890)
- Enable Train V2 in Tune unit tests and examples. (#56816)
- Enable Train v2 in doc examples. (#56820)
- Reintroduce keras tune callback. (#57121)
🔨 Fixes:
- Increase tune checkpoint test latency threshold. (#56251)
- Remove a bunch of low-signal/redundant train/air/tune tests. (#56477)
- Remove tune_air_oom test. (#57089)
Ray Serve
🎉 New Features:
- Add tests and DLQ business logic for async inference. (#55608)
- Foundation work for aggregating metrics on controller. (#55568)
- Include custom metrics method and report to controller. (#56005)
- Add post scaling api. (#56135)
- Introduce deployment rank manager. (#55729)
- Integrated deployment ranks with deployment state. (#55829)
- Add rank and world size in replica context. (#55827) (see the sketch after this list)
- Added ssl to ray serve. (#55228)
- Custom parameter for downscaling to zero. (#56573)
- Add optional APIType filter to /api/serve/applications/ endpoint. (#56458)
- Make deployment retry configurable. (#56530)
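A sketch of reading replica ranks, assuming they are exposed as attributes on the replica context (the attribute names are assumptions based on the rank entries above):

```python
from ray import serve

@serve.deployment(num_replicas=4)
class ShardedModel:
    def __call__(self, request) -> dict:
        ctx = serve.get_replica_context()
        # `rank` / `world_size` attribute names assumed from the
        # "rank and world size in replica context" change above.
        return {"rank": ctx.rank, "world_size": ctx.world_size}

app = ShardedModel.bind()
```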
💫 Enhancements:
- Aggregate autoscaling metrics on controller. (#56306)
- Update metrics_utils for future global metrics aggregation in controller. (#55568)
- Use deployment method in access logs for replicas. (#56829)
- Cache router metrics. (#55897)
- Allow same event loop handle shutdown from sync context. (#55551)
- Additional deps to start with prometheus. (#57155)
- Require prefix RAY_SERVE_ for env vars + value verification. (#55864)
- Record queued metrics on timeseries. (#57024)
- Add throughput opt env var for serve. (#55804)
- Fix None pending Request. (#54775)
- Omit unnecessary newlines in the config generated by serve build app:app. (#56609)
- Expose actor name for target group api. (#56738)
🔨 Fixes:
- Fix proxy lua dependency in dockerfile. (#57221)
- Fix non thread safe asyncio task creation in router. (#56124)
- Fix throughput optimized benchmarks. (#56173)
- Move ingress validation for multiple fastapi deployment into client. (#56706)
- Explicitly close choose_replicas_with_backoff async generator. (#56357)
- Fix buffered logging reusing request context. (#56094)
- Use default gc frequency for proxy. (#56511)
- Fixing deployment scoped custom autoscaling. (#56192)
📖 Documentation:
- Stable links for Ray serve. (#56241)
- Add document for using fastapi factory pattern in serve. (#56607)
- Add documentation for async inference (#56453)
🏗 Architecture refactoring:
- Add microbenchmark for throughput optimized configuration. (#55900)
- Only checkpoint controller state when it is confirmed that target state has changed (#55848)
- Proxy Actor Interface. (#56288)
- Allow ProxyActor to return true/false for health check. (#56660)
Ray Serve/Data LLM
🎉 New Features:
- Score API Integration for Serve LLM. (#55914)
- Add start/stop_profile method to LLMServer. (#55920)
- Add prefix cache hit rate to Serve LLM dashboard. (#55675)
- Configure aggregation interval for dashboard. (#56591)
💫 Enhancements:
- Bump vLLM to 0.10.2. (#56535)
- Vllm bump -> 0.10.1.1. (#56099)
- Refactor: Improve Deployment Builder Ergonomics and Code Organization. (#57181)
- Fix build_llm_processor for ServeDeploymentProcessor. (#57061)
- Allow setting data_parallel_size=1 in engine_kwargs. (#55750)
- Allow tuple for concurrency arg. (#55867)
- Fix multimodal image extraction when no system prompt is present. (#56435)
- Support azure and abfss in LLM config. (#56441)
- Support custom s3 endpoint when downloading models from remote. (#55458)
- Skip safetensor file downloads for runai streamer mode. (#55662)
- Support colocating local DP ranks in DPRankAssigner. (#55720)
- Adjust LLM engine timing logic. (#55595)
- Fixed DP DSV3 issues. (#55802)
- Gracefully return timeouts as HTTPException. (#56264)
- Remove upstreamed workarounds 1/3. (#54512)
🔨 Fixes:
- Changed LMCache dependency to use 0.3.3 to avoid regressions in the release test. (#56104)
- Fix doc test for Working with LLMs guide. (#55917)
- Fix sglang byod on release. (#55885)
📖 Documentation:
- Add gpt oss deployment example. (#56400)
- Add serve llm example to index page + other minor fix. (#56788)
- Example serve llm deployment. (#55819)
- Fix serve llm examples. (#56382)
- Docs: serve llm deployment examples refinement. (#56287)
- Add example of serving a VLLM model on fractional gpu. (#57197)
- Add main pytest code snippet to those tests that were missing it. (#57167)
RLlib
🎉 New Features:
- Add StepFailedRecreateEnv exception. (#55146)
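A sketch of how an environment might raise the new exception so RLlib recreates it instead of failing the run (the import path is an assumption):

```python
import random

import gymnasium as gym
import numpy as np
from ray.rllib.env import StepFailedRecreateEnv  # import path assumed

class FlakyEnv(gym.Env):
    """Toy env whose simulated backend occasionally dies mid-episode."""

    observation_space = gym.spaces.Box(-1.0, 1.0, (1,), np.float32)
    action_space = gym.spaces.Discrete(2)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        return np.zeros(1, np.float32), {}

    def step(self, action):
        if random.random() < 0.001:
            # Tell RLlib to recreate this env instead of crashing the run.
            raise StepFailedRecreateEnv("simulator backend crashed")
        return np.zeros(1, np.float32), 1.0, False, False, {}
```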
💫 Enhancements:
- Add tags to envrunner calls, count in flight requests in ActorManager. (#56930, #56953)
- Add spaces in case only offline data is used. (#56141)
- Add Footsies environment and tests. (#55041)
🔨 Fixes:
- Fix failing env step in MultiAgentEnvRunner. (#55567)
- Fix Metrics/Stats lifetime count and throughput measurement for async remote actors. (#56047)
- Fixes Implementation of Shared Encoder. (#54571)
- Fix MetricsLogger/Stats throughput bugs. (#55696)
📖 Documentation:
- [RLlib] [DOC] Fix documentation typos and grammatical issues in RLlib docs (#56130)
- Update rllib-env.rst - typo. (#56140)
- Fixing typo in the RLlib documentation. (#55752)
- Fix formatting of class references. (#55764)
🏗 Architecture refactoring:
- Remove checkpoint release tests. (#57105)
- Remove long_running_apex test. (#57097)
- LINT: Enable ruff imports for multiple directories in rllib. (#56736)
- Upgrade g3 to g4 machine for aws release test. (#56248)
Ray Core
🎉 New Features:
- Alpha release of Ray Direct Transport
- Support ray.put() and ray.get() with NIXL in GPU objects. (#56146) (see the sketch after this list)
- Support using ray.get with nixl to retrieve data from GPU object refs created by remote tasks. (#56559)
- Support tensor transfer from outside owners of actors. (#56485)
- Automatically enable tensor transport for the actor if any method specifies one. (#55324)
- Support cpu tensor transfer with NIXL in GPU Objects. (#55793)
- Handle multiple transfers of the same object to an actor. (#55628)
- Support NIXL as tensor transport backend. (#54459)
- Add a user-facing call to wait for tensor to be freed. (#55076)
- Always write to GPUObjectStore to avoid _get_tensor_meta() from hanging indefinitely. (#55433)
- Add warning when GPU object refs passed back to the same actor. (#55639)
- Avoid triggering a KeyError by the GPU object GC callback for intra-actor communication. (#54556)
- Enable autoscaler v2 on clusters launched by the cluster launcher. (#55865)
- Ray Symmetric Run Script and ray symmetric-run command. (#55111, #56497)
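A rough sketch of the NIXL put/get path listed above; the keyword for selecting the transport on `ray.put()` is an assumption:

```python
import ray
import torch

ray.init()
t = torch.ones(1024, device="cuda")
# Keyword name for selecting the transport is an assumption.
ref = ray.put(t, _tensor_transport="nixl")
print(float(ray.get(ref).sum()))
```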
💫 Enhancements:
- Ray Event Export
  - GCS AddEvent support. (#55528)
  - Actor event: add proto schema. (#56221)
  - Node event: add proto schema and send node events to the aggregator. (#56031, #56426)
  - Job event: add schema for driver job event and send job events to the aggregator. (#55032, #55213)
  - Emit actor events to Event aggregator. (#56617)
  - Export node event by default. (#56810)
- Security
  - Bind ray internal servers to the specified node ip instead of 0.0.0.0. (#55178, #55210, #55298, #55484)
  - Bind dashboard agent http server to localhost in addition to the node ip. (#55910)
  - Bind dashboard agent grpc to specified ip instead of 0.0.0.0. (#55732)
  - Bind runtime env agent and dashboard agent http server to specified ip instead of 0.0.0.0. (#55431)
- RPC network fault tolerance
  - Making core worker pub sub RPCs fault tolerant. (#56436)
  - Make RequestWorkerLease RPC Fault Tolerant. (#56191)
  - Making ReturnWorkerLease Idempotent. (#56073)
  - Making CancelWorkerLease RPC Fault Tolerant. (#56195)
  - Make Free Objects RPC Fault Tolerant. (#56293)
  - Make PinObjectIDs RPC Fault Tolerant. (#56443)
  - Make Unsubscribe Idempotent. (#57546)
  - Core Worker GetObjStatus GRPC Fault Tolerance. (#54567)
- Not overriding accelerator id env vars when num_accelerators is 0 or not set. (#54928)
- Migrate metric collection from opencensus to opentelemetry. (#53098, #53740)
- Add per worker process group and deprecate process subreaper in favor of cleanup using process group. (#56476)
- Add node_id validation in NodeAffinitySchedulingStrategy. (#56708)
- Add io_context metrics to gcs and raylet. (#55762)
- Modify RedisDelKeyPrefixSync to use the Redis SCAN command instead of KEYS. (#56907)
- Add error_type to job failures. (#55578)
- Add PID to structured logs for tasks and actors. (#55176)
- Log actor name when warning about excess queueing. (#57124)
- Output the error log on the driver side if the failed task will still retry. (#56472)
- Prometheus http service discovery API. (#55656)
- Add node ip in runtime env error message to improve debug observability. (#56837)
- Fallback unserializable exceptions to their string representation. (#55476)
- Introduce new exception type for un-pickleable exceptions. (#55878)
- Improve docs for custom serialization for exceptions + add test. (#56156)
- Add a warning when returning an object w/ num_returns=0. (#56213)
- Adding ability to specify availability zones for ray cluster node pools on Azure. (#55532)
- Query for supported Microsoft.Network/virtualNetworks API versions instead of relying on resource_client.DEFAULT_API_VERSION. (#54874)
- Loosen Ray self-dependency check to allow matching versions. (#57019)
- Add support for pip_install_options for pip. (#53551)
- Proper typing for ObjectRef. (#55566)
🔨 Fixes:
- Use subscription id from azure profile if not provided in config during AzureNodeProvider init. (#56640)
- Always create standard public IP addresses (basic sku is deprecated). (#57131)
- Fix: bug with config key pairs when launching worker nodes. (#57107)
- If azure cluster launcher keypair doesnt exist create one automatically + doc typo fix. (#54596)
- Fix "objects_valid" for the case that multiple instances of the same task are storing returns. (#54904)
- Fix objects_valid check failure with except from BaseException. (#55602)
- Preserve err type in case of task cancellation due to actor death. (#57538)
- Fix checking for uv existence during ray_runtime setup. (#54141)
- Prevent sending SIGTERM after calling Worker::MarkDead. (#54377)
- Fixed the bug where the head was unable to submit tasks after redis is turned on. (#54267)
- Fix possible race by checking node cache status instead of just subscription. (#54745)
- Fix get actor timeout multiplier. (#54525)
- Use a temporary file to share default worker path in runtime env. (#53653)
- Fix check fail when task buffer periodical runner runs before RayEvent is initialized. (#55249)
- Patch grpc with RAY_num_grpc_threads to control grpc thread count. (#54988)
- Fix HandleRefRemoved thread safety. (#56445)
- Fix error handling for plasma put errors. (#56070)
- Fix batching logic in CoreWorkerPlasmaStoreProvider::Get. (#56041)
- Fix RAY_CHECK failure during shutdown due to plasma store race condition. (#55367)
- Fix autoscaler RAY_CHECK when GcsAutoscalerStateManager is out of sync with NodeManager. (#57010)
- Fix bug where inflight requests are not taken into account by retryable. (#57142)
- A timeout should be set when submitting patch requests for autoscaler. (#56605)
- Fix the bug in memray regarding the default configuration of -o {output_file_path}. (#56732)
- Fixed the issue of RemoveActorNameFromRegistry being called repeatedly. (#54955)
- Fixed an issue where the command executed when use_podman=true and run_env=None was not prefixed with podman exec. (#56619)
- Fix data race when using async gpu to gpu transfer. (#57112)
- Retry + Make FreeActorObject idempotent. (#56447)
- Fix check crash on gpu obj free if driver knows actor is dead. (#56404)
- Handle system errors with a background monitor thread. (#56513)
- Fix GPU metrics. (#56009)
- Don't disconnect worker client on OBOD unless the worker is dead. (#57185)
- Prevent stale GET request being registered if its lease was cleared. (#56766)
- Drop messages received after ClientConnection::Close. (#56240)
- Fix cancel race that leads to RAY_CHECK it->second.submitted_task_ref_count > 0. (#56123)
- Reorder asyncio actor shutdown to terminate asyncio thread first. (#56827)
- Fix actor import error message for async actors. (#55722)
- Allow task manager access with submitter mutex + unify retry. (#56216)
- Fix bug in restore_from_path such that connector states are also restored on remote EnvRunners. (#54672)
- Fix S3 access issue in AKS. (#56358)
- Add S3 public bucket fallback to handle NoCredentialsError. (#56334)
- Fix ABFSS (Azure Blob File System Secure) protocol support problems during E2E test. (#56188)
- Ray cluster commands (up, attach, status, etc) updates to work on Windows. (#54982)
- Update cluster scheduler to handle label selector hard node id constraint. (#56235)
📖 Documentation:
- Added guide on using type hints with Ray Core. (#55013)
- Lifecycle of a task. (#55496)
- Add OSS Document for Task Events. (#56203)
- Fix Missing Events Issue in Task Events. (#55916)
- Add docs for asyncio and object mutability. (#56790)
- Update getting started and set up document for ray on vsphere. (#56954)
- Docfix - rst annotation showing up in render. (#57104)
- Add threading requirement to NodeProvider interface. (#56349)
- Add guidance for matching Ray and Python versions with uv envs. (#56597)
- Fix documentation typos, grammar, and terminology inconsistencies. (#56066, #56067, #56068, #56069, #56128, #56129, #56130, #56131, #56132, #56272, #56273, #56274, #56275, #56277, #56278, #56279)
- Update SLURM docs with symmetric-run. (#56775)
- Update Kueue integration documentation to include RayService & RayCluster support. (#56781)
- Application Gateway for Containers as ingress to access Ray Cluster. (#56574)
- Update DLAMI Information in aws.md. (#55702)
Dashboard
💫 Enhancements:
- Use pynvml for GPU metrics. (#56000)
- Default dashboard usability improvements. (#55620)
- Make Ray Train Dashboard Panel Ids Static. (#55559)
- Small fixes to Metrics Tab for kube-ray clusters. (#57149)
- Add metadata to indicate full dashboard embedding is supported. (#56077)
- Use ray node id instead of ip for profilinglink. (#55439)
- Fix grafana dashboard generation bug. (#56346)
- Catch OSError when detecting the GPU. (#56158)
🔨 Fixes:
- Removed references to a deleted Data metrics panel. (#55478)
- Fix typo in memory_utils and adjust display formatting for clarity. (#56217)
Ray Images
🎉 New Features:
- Add support for building and publishing ray-extra images. (#56543)
- Add ray-llm and ray-ml extra images. (#56800)
- Build ray-extra images for aarch64. (#56818)
- Add slim image to the image build matrix. (#55723)
💫 Enhancements:
- Add haproxy binary, for ray serve use. (#56845)
- Add ~/.local/bin to PATH in slim image. (#56920)
- Remove slim's dependency on normal bases. (#56544)
- Add label for ray version and commit. (#56493)
- Refactor apt package installation. (#55701)
- Allow using explicit base type. (#56545)
- Add extra-test stage in image building. (#55725)
- Add test rules for image building files. (#56554)
- Add ray-llm image type check. (#56542)
- Unify label and tag conventions. (#56189)
- GKE GPU compat paths: PATH, LD_LIBRARY_PATH (temporarily). (#55569)
- Stop publishing ray-ml images. (#57070)
- Stop building and releasing x86 osx wheels. (#57077)
📖 Documentation:
- Update latest Docker dependencies for 2.49.0 release. (#55966)
- Update latest Docker dependencies for 2.49.2 release. (#56760)
Wheels and images
💫 Enhancements:
- Use bazel run to generate files required for the wheel and testing (#55957, #56527, #56969, #56004, #55928)
- Ban click 8.3.0. (#56789)
- Upgrade protobuf to v4. (#54496)
- Add adlfs[abfs] into image (#56084)
- Upgrade boto3 to 1.29.x. (#56363)
- Upgrading orjson to 3.9.15. (#55972)
- Update spdlog to 15.3. (#56711)
Thanks!
Thank you to everyone who contributed to this release!
@alexeykudinkin, @richardliaw, @nrghosh, @ljstrnadiii, @Daraan, @kouroshHakha, @Bye-legumes, @kamil-kaczmarek, @jugalshah291, @sampan-s-nayak, @jjyao, @Evelynn-V, @gangsf, @omatthew98, @TimothySeah, @kshanmol, @goutamvenkat-anyscale, @axreldable, @jiangwu300, @simonsays1980, @400Ping, @JasonLi1909, @chuang0221, @weiliango, @Myasuka, @win5923, @liulehui, @khluu, @ok-scale, @eicherseiji, @tianyi-ge, @MengjinYan, @kevin85421, @Yevet, @orangeQWJ, @vie-serendipity, @edoakes, @wyhong3103, @israbbani, @vickytsang, @HassamSheikh, @acrewdson, @czgdp1807, @daiping8, @carolynwang, @thc1006, @jeffreyjeffreywang, @Stack-Attack, @Catch-Bull, @elliot-barn, @Levi080513, @BestVIncent, @dragongu, @jmajety-dev, @jcarlson212, @tohtana, @abrarsheikh, @crypdick, @Yicheng-Lu-llll, @ZacAttack, @justinvyu, @lk-chen, @alanwguo, @mcoder6425, @my-vegetable-has-exploded, @yancanmao, @arcyleung, @rjpower, @codope, @harshit-anyscale, @dayshah, @stephanie-wang, @KaisennHu, @ryanaoleary, @saihaj, @mattip, @rueian, @Kunchd, @pavitrabhalla, @owenowenisme, @Aydin-ab, @gvspraveen, @minerharry, @JackGammack, @jpatra72, @coqian, @zcin, @dstrodtman, @aslonnie, @ahao-anyscale, @GuyStone, @iamjustinhsu, @seanlaii, @ruisearch42, @akyang-anyscale, @ArturNiederfahrenhorst, @bveeramani, @OneSizeFitsQuorum, @xinyuangui2, @sb-hakunamatata, @22quinn, @Sparks0219, @sven1977, @snehachhabria, @dioptre, @nadongjun, @eric-higgins-ai, @marosset, @MatthewCWeston, @pcmoritz, @can-anyscale, @pimdh, @roshankathawate, @matthewdeng, @martinbomio, @GokuMohandas, @alimaazamat, @ali-corpo, @landscapepainter, @Qiaolin-Yu, @vaishdho1, @avigyabb, @srinathk10, @tannerdwood