dmlc/xgboost v2.1.0
Release 2.1.0 stable

2.1.0 (2024 Jun 20)

We are thrilled to announce the XGBoost 2.1 release. This note will start by summarizing some general changes and then highlighting specific package updates. As we are working on a new R interface, this release will not include the R package. We'll update the R package as soon as it's ready. Stay tuned!

Networking Improvements

An important piece of ongoing work for XGBoost, which we've been collaborating on, is supporting resilience for improved scaling and federated learning on various platforms. The existing networking library in XGBoost, adopted from the RABIT project, can no longer meet the feature demand. We've revamped the RABIT module in this release to pave the way for future development. We chose an in-house implementation over an existing library because development is active, with frequent new feature requests such as loading extra plugins for federated learning. The new implementation features:

  • Communication for both CPU and GPU, the latter based on NCCL.
  • A reusable tracker for both the Python and JVM packages. With the new release, the JVM packages no longer require Python as a runtime dependency.
  • Support for federated communication patterns on both CPU and GPU.
  • Support for timeouts. The high-level interface parameter is currently hard-coded to 30 minutes, which we plan to improve.
  • Support for significantly more data types.
  • Support for thread-based workers.
  • Improved handling of worker errors, including better error messages when one of the peers dies during training.
  • IPv6 support. Currently, this is only available in the dask interface.
  • Built-in support for various operations like broadcast, allgatherV, allreduce, etc.

Related PRs (#9597, #9576, #9523, #9524, #9593, #9596, #9661, #10319, #10152, #10125, #10332, #10306, #10208, #10203, #10199, #9784, #9777, #9773, #9772, #9759, #9745, #9695, #9738, #9732, #9726, #9688, #9681, #9679, #9659, #9650, #9644, #9649, #9917, #9990, #10313, #10315, #10112, #9531, #10075, #9805, #10198, #10414).

The existing option of using MPI in RABIT has been removed in this release. (#9525)

NCCL is now fetched from PyPI.

In the previous version, XGBoost statically linked NCCL, which significantly increased the binary size and led to hitting the PyPI repository limit. With the new release, NCCL is loaded dynamically from an external source, reducing the binary size. For the PyPI package, the nvidia-nccl-cu12 package will be fetched during installation. With more downstream packages reusing NCCL, we expect user environments to become slimmer in the future as well. (#9796, #9804, #10447)
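
To check how an installed package links NCCL, one can inspect the build information. A minimal sketch; the NCCL-related key names are an assumption and may differ between builds and versions:

    import xgboost as xgb

    # Print NCCL-related build flags. The exact key names (e.g.
    # "USE_NCCL", "USE_DLOPEN_NCCL") are assumptions and may vary
    # between builds and versions.
    info = xgb.build_info()
    for key, value in info.items():
        if "NCCL" in key:
            print(key, value)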

Multi-output

We continue the work on multi-target and vector leaf in this release:

  • Revise the support for custom objectives with a new API, XGBoosterTrainOneIter. This new function supports strided matrices and CUDA inputs. In addition, custom objectives now return the correct shape for prediction; see the sketch below. (#9508)
  • The hinge objective now supports multi-target regression. (#9850)
  • Fix the gain calculation with vector leaf. (#9978)
  • Support graphviz plot for multi-target trees. (#10093)
  • Fix multi-output with alternating strategies. (#9933)

Please note that the feature is still in progress and not suitable for production use.
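
For illustration, here is a minimal sketch of a multi-target custom objective exercising the shape handling mentioned in the list above. The synthetic data and the multi_strategy="multi_output_tree" setting are illustrative choices, not prescriptions from this release:

    import numpy as np
    import xgboost as xgb

    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 8))
    y = np.stack([X[:, 0] ** 2, X[:, 1] + 1.0], axis=1)  # two targets

    def multi_target_se(predt: np.ndarray, dtrain: xgb.DMatrix):
        # Squared error; predt has shape (n_samples, n_targets).
        label = dtrain.get_label().reshape(predt.shape)
        return predt - label, np.ones_like(predt)

    Xy = xgb.DMatrix(X, label=y)
    booster = xgb.train(
        {"tree_method": "hist", "multi_strategy": "multi_output_tree"},
        Xy,
        num_boost_round=8,
        obj=multi_target_se,
    )
    print(booster.predict(Xy).shape)  # (256, 2)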

Federated Learning

Progress has been made on federated learning, with improved support for column-split.

Ongoing work for SYCL support.

A SYCL plugin is being developed for XGBoost, starting with the hist tree method. (#10216, #9800, #10311, #9691, #10269, #10251, #10222, #10174, #10080, #10057, #10011, #10138, #10119, #10045, #9876, #9846, #9682) XGBoost now supports inference on SYCL devices, and work on adding SYCL support for training is ongoing.

Looking ahead, we plan to complete training support in the coming releases and then focus on improving test coverage for SYCL, particularly for Python tests.
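
As a reference point, a minimal inference sketch. It assumes a build with the SYCL plugin enabled and a previously saved model; the device strings ("sycl", "sycl:cpu", "sycl:gpu") follow the plugin's convention:

    import numpy as np
    import xgboost as xgb

    # Assumes an XGBoost build with the SYCL plugin and a trained
    # model saved to "model.ubj".
    booster = xgb.Booster(model_file="model.ubj")
    booster.set_param({"device": "sycl"})  # or "sycl:cpu" / "sycl:gpu"
    X = np.random.default_rng(1).normal(size=(32, 8))
    print(booster.predict(xgb.DMatrix(X)))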

Optimizations

  • Implement the column sampler in CUDA for GPU-based tree methods. This speeds up training when column sampling is employed; see the sketch after this list. (#9785)
  • CMake LTO and CUDA arch (#9677)
  • Small optimization to external memory with a thread pool. This reduces the number of threads launched during iteration. (#9605, #10288, #10374)
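
As a usage sketch for the CUDA column sampler noted above: column sampling runs on the GPU when the usual colsample_* parameters are combined with device="cuda". The synthetic data is illustrative:

    import numpy as np
    import xgboost as xgb

    rng = np.random.default_rng(2)
    X = rng.normal(size=(1024, 32))
    y = rng.normal(size=1024)
    Xy = xgb.QuantileDMatrix(X, label=y)

    params = {
        "device": "cuda",        # GPU training; the sampler now runs in CUDA
        "tree_method": "hist",
        "colsample_bytree": 0.5,
        "colsample_bynode": 0.5,
    }
    booster = xgb.train(params, Xy, num_boost_round=16)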

Deprecation and breaking changes

Package-specific breaking changes are outlined in respective sections. Here we list general breaking changes in this release:

  • The command line interface is deprecated due to the increasing complexity of the machine learning ecosystem. Building a machine learning model using a command shell is no longer feasible and could mislead newcomers. (#9485)
  • Universal binary JSON is now the default format for saving models (#9947, #9958, #9954, #9955); see the example after this list. See #7547 for more info.
  • The XGBoosterGetModelRaw function is now removed after being deprecated in 1.6. (#9617)
  • Drop support for loading remote files. Users are encouraged to use dedicated libraries to fetch remote content. (#9504)
  • Remove the dense libsvm parser plugin. This plugin was never tested or documented. (#9799)
  • XGDMatrixSetDenseInfo and XGDMatrixSetUIntInfo are now deprecated. Use the array-interface-based alternatives instead.
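
To illustrate the UBJSON default mentioned in the list above: the file extension selects the on-disk format, with .ubj now being the default recommendation. A minimal sketch with synthetic data:

    import numpy as np
    import xgboost as xgb

    X = np.random.default_rng(3).normal(size=(64, 4))
    y = X[:, 0]
    booster = xgb.train({"tree_method": "hist"}, xgb.DMatrix(X, label=y), num_boost_round=4)

    booster.save_model("model.ubj")   # universal binary JSON, now the default choice
    booster.save_model("model.json")  # text JSON remains supported
    loaded = xgb.Booster(model_file="model.ubj")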

Features

This section lists some new features that are general to all language bindings. For package-specific changes, please visit respective sections.

  • Adopt a new XGBoost logo (#10270)
  • Native support for dataframe input formats. This improvement enhances performance and reduces memory usage when working with dataframe-based structures such as pandas, arrow, and R dataframes; see the sketch after this list. (#9828, #9616, #9905)
  • Change default metric for gamma regression to deviance. (#9757)
  • Normalization for learning to rank is now optional with the introduction of the new lambdarank_normalization parameter. (#10094)
  • Contribution prediction with QuantileDMatrix on CPU. (#10043)
  • XGBoost on macOS no longer bundles the OpenMP runtime. Users can install the latest runtime from the dependency manager of their choice. (#10440) Along with this change, JVM packages on macOS are no longer built with OpenMP support. (#10449)
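
A minimal sketch of the dataframe support mentioned in the list above, using pandas with a categorical column; the data is illustrative:

    import pandas as pd
    import xgboost as xgb

    df = pd.DataFrame(
        {
            "f0": [1.0, 2.0, 3.0, 4.0],
            "f1": pd.Categorical(["a", "b", "a", "b"]),
        }
    )
    y = [0.0, 1.0, 0.0, 1.0]
    Xy = xgb.QuantileDMatrix(df, label=y, enable_categorical=True)
    booster = xgb.train({"tree_method": "hist"}, Xy, num_boost_round=4)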

Bug fixes

  • Fix training with categorical data from external memory. (#10433)
  • Fix compilation with CTK-12. (#10123)
  • Fix inconsistent runtime library on Windows. (#10404)
  • Fix default metric configuration. (#9575)
  • Fix feature names with special characters. (#9923)
  • Fix global configuration for external memory training. (#10173)
  • Disable column sample by node for the exact tree method. (#10083)
  • Fix the FieldEntry constructor specialization syntax error (#9980)
  • Fix pairwise objective with NDCG metric along with custom gain. (#10100)
  • Fix the default value for lambdarank_pair_method. (#10098)
  • Fix UBJSON with boolean values. No existing code is affected by this fix. (#10054)
  • Be more lenient on floating point errors for AUC. This prevents the AUC > 1.0 error. (#10264)
  • Check support status for categorical features. This prevents gblinear from treating categorical features as numerical. (#9946)

Document

Here is a list of documentation changes not specific to any XGBoost package.

Python package

  • Dask

    Other than the changes in networking, we have some optimizations and document updates in dask:

      • Filter models on workers instead of clients; this prevents an OOM error on the client machine. (#9518)
      • Users are now encouraged to use from xgboost import dask instead of import xgboost.dask to avoid pulling in unnecessary dependencies for non-dask users; see the sketch after this list. (#9742)
      • Add seed to demos. (#10009)
      • New document for using dask XGBoost with k8s. (#10271)
      • Workaround potentially unaligned pointer from an empty partition. (#10418)
      • Workaround a race condition in the latest dask. (#10419)
      • Add typing to dask demos. (#10207)
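
A minimal sketch of the recommended import pattern together with a small training run on a local cluster; the cluster size and data are illustrative:

    import dask.array as da
    from distributed import Client, LocalCluster
    from xgboost import dask as dxgb  # preferred over `import xgboost.dask`

    with LocalCluster(n_workers=2) as cluster, Client(cluster) as client:
        X = da.random.random((1000, 8), chunks=(250, 8))
        y = da.random.random(1000, chunks=250)
        Xy = dxgb.DaskDMatrix(client, X, y)
        out = dxgb.train(client, {"tree_method": "hist"}, Xy, num_boost_round=4)
        booster = out["booster"]
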
  • PySpark

    PySpark has several new features along with some small fixes (see the sketch after this list):

      • Support stage-level scheduling for training on various platforms, including yarn/k8s. (#9519, #10209, #9786, #9727)
      • Support GPU-based transform methods. (#9542)
      • Avoid expensive repartition when appropriate. (#10408)
      • Refactor the logging and the GPU code path. (#10077, #9724)
      • Sort workers by task ID. This helps the PySpark interface obtain deterministic results. (#10220)
      • Fix PySpark with verbosity=3. (#10172)
      • Fix the spark estimator doc. (#10066)
      • Rework transform for improved code reuse. (#9292)
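
For context, a minimal PySpark estimator sketch; the local session, column names, and data are assumptions for illustration:

    from pyspark.sql import SparkSession
    from xgboost.spark import SparkXGBRegressor

    spark = SparkSession.builder.master("local[2]").getOrCreate()
    df = spark.createDataFrame(
        [(1.0, 2.0, 0.5), (2.0, 1.0, 1.5), (3.0, 0.0, 2.5)],
        ["f0", "f1", "label"],
    )
    reg = SparkXGBRegressor(features_col=["f0", "f1"], label_col="label", num_workers=1)
    model = reg.fit(df)
    model.transform(df).show()
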
  • Breaking changes

    For the Python package, eval_metric, early_stopping_rounds, and callbacks are now removed from the fit method in the sklearn interface. They were deprecated in 1.6; use the parameters with the same names in the constructors instead, as shown in the sketch below. (#9986)
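
A before/after sketch of the parameter move; the synthetic data is only for illustration:

    import numpy as np
    import xgboost as xgb

    rng = np.random.default_rng(4)
    X_train, X_valid = rng.normal(size=(80, 4)), rng.normal(size=(20, 4))
    y_train, y_valid = (X_train[:, 0] > 0).astype(int), (X_valid[:, 0] > 0).astype(int)

    # These were previously (deprecated) fit() arguments; they are now
    # constructor parameters.
    clf = xgb.XGBClassifier(eval_metric="logloss", early_stopping_rounds=5)
    clf.fit(X_train, y_train, eval_set=[(X_valid, y_valid)])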

  • Features

    Following is a list of new features in the Python package:

      • Support sample weight in sklearn custom objective. (#10050)
      • New supported data types, including cudf.pandas (#9602), torch.Tensor (#9971), and more scipy types (#9881).
      • Support pandas 2.2 and numpy 2.0. (#10266, #9557, #10252, #10175)
      • Support the latest RAPIDS, including rmm. (#10435)
      • Improved data cache option in the data iterator. (#10286)
      • Accept numpy generators as random_state; see the sketch after this list. (#9743)
      • Support returning the base score as the intercept in the sklearn interface. (#9486)
      • Support arrow through pandas ext types. This is built on top of the new DataFrame API in XGBoost. See general features for more info. (#9612)
      • Handle np integer in model slice and prediction. (#10007)
      • Improved sklearn tags support. (#10230)
      • The base image for building Linux binary wheels is updated to rockylinux8. (#10399)
      • Improved handling for float128. (#10322)
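
A small sketch combining two items from the list above: a numpy Generator passed as random_state, and the base score surfaced as the sklearn intercept; the data is synthetic:

    import numpy as np
    import xgboost as xgb

    rng = np.random.default_rng(5)
    X = rng.normal(size=(200, 4))
    y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 3.0

    # A numpy Generator is accepted as random_state; subsampling makes it matter.
    reg = xgb.XGBRegressor(n_estimators=20, subsample=0.8, random_state=rng)
    reg.fit(X, y)
    print(reg.intercept_)  # base score exposed as the intercept
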
  • Fixes

      • Fix DMatrix with None input. (#10052)
      • Fix native library discovery logic. (#9712, #9860)
      • Fix using categorical data with the score function for the ranker. (#9753)
  • Document

JVM package

Here is a list of JVM-specific changes. Like the PySpark package, the JVM package also gains stage-level scheduling.

  • Features and related documents
  • Bug Fixes

      • Fix a memory leak in error handling. (#10307)
      • Fix the group column for the GPU packages. (#10254)

Additional artifacts:

You can verify the downloaded packages by running the following command on your Unix shell:

echo "<hash> <artifact>" | shasum -a 256 --check
28bec8e821b1fefcea722d96add66024adba399063f723bc5c815f7af4a5f5e4  xgboost-2.1.0.tar.gz
60c715d8c97ef710185469b27f30303b6efa655600d035963f96e6acf65f4dac  xgboost_r_gpu_linux_2.1.0.tar.gz

Experimental binary packages for R with CUDA enabled

  • xgboost_r_gpu_linux_2.1.0.tar.gz

Source tarball
