Apache MXNet (incubating) 2.0.0.beta0 Release Candidate 0


Features

Implementations and Improvements

  • Improve add_bias_kernel for small bias length (#19744)
  • [FFI] Add new containers and Implementations (#19685)
  • 1bit gradient compression implementation (#17952)
  • [Op] Fix reshape and mean (#20058)
  • [FFI] Randint (#20083)
  • [FFI] npx.softmax, npx.activation, npx.batch_norm, npx.fully_connected (#20087)
  • [FFI] expand_dims (#20073)
  • [FFI] npx.pick, npx.convolution, npx.deconvolution (#20101)
  • [FFI] npx.pooling, npx.dropout, npx.one_hot, npx.rnn (#20102)
  • [FFI] fix masked_softmax (#20114)
  • add inline for __half2float_warp (#20152)
  • [FFI] part5: npx.batch_dot, npx.arange_like, npx.broadcast_like (#20110)
  • [FFI] part4: npx.embedding, npx.topk, npx.layer_norm, npx.leaky_relu (#20105)
  • [PERF] Moving GPU softmax to RTC and optimizations (#19905)
  • [FEATURE] AdaBelief operator (#20065)
  • Fusing gelu post operator in Fully Connected symbol (#20228)
  • [operator] Add logsigmoid activation function (#20268)
  • [FEATURE] Use RTC for reduction ops (#19426)
  • make stack use faster API (#20059)
  • [operator] Add Mish Activation Function (#20320)
  • [operator] add threshold for mish (#20339)
  • [operator] Integrate matmul primitive from oneDNN in batch dot (#20340)
  • [FEATURE] Add interleaved batch_dot oneDNN fuses for new GluonNLP models (#20312)
  • Add interleaved_matmul_* to npx namespace (#20375)
  • [FEATURE] Add backend MXGetMaxSupportedArch() and frontend get_rtc_compile_opts() for CUDA enhanced compatibility (#20443)
  • [ONNX] Forward port new mx2onnx into master (#20355)
  • Add new benchmark function for single operator comparison (#20388)
  • [BACKPORT] [FEATURE] Add API to control denormalized computations (#20387)
  • [FEATURE] Load libcuda with dlopen instead of dynamic linking (#20484)
  • [operator] Integrate oneDNN layer normalization implementation (#19562)
  • [v1.9.x] modify erfinv implementation based on scipy (#20517) (#20550)
  • [REFACTOR] Refactor test_quantize.py to use Gluon API (#20227)
  • Switch all HybridBlocks to use forward interface (#20262)
  • [API] Extend NumPy Array dtypes with int16, uint16, uint32, uint64 (#20478)
  • [FEATURE] MXIndexedRecordIO: avoid re-build index (#20549)
  • [FEATURE] Add oneDNN support for npx.reshape and np.reshape (#20563)
  • Split np_elemwise_broadcast_logic_op.cc (#20580)
  • Expand NVTX usage (#18683)
  • [FEATURE] Add feature of retain_grad (#20500)
  • [v2.0] Split Large Source Files (#20604)
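Two of the operators added above, logsigmoid (#20268) and Mish with its threshold (#20320, #20339), are well-known activation functions. The following is a minimal pure-Python sketch of the underlying math only, not the MXNet kernels; the threshold shortcut mirrors the idea in #20339 (for large x, tanh(softplus(x)) ≈ 1, so Mish(x) ≈ x):

```python
import math

def softplus(x: float) -> float:
    # softplus(x) = ln(1 + e^x), written to stay stable for large |x|
    return max(x, 0.0) + math.log1p(math.exp(-abs(x)))

def log_sigmoid(x: float) -> float:
    # log(sigmoid(x)) = -softplus(-x)
    return -softplus(-x)

def mish(x: float, threshold: float = 20.0) -> float:
    # Mish(x) = x * tanh(softplus(x)); above the threshold the
    # tanh factor is ~1, so the function is approximated by x itself
    if x > threshold:
        return x
    return x * math.tanh(softplus(x))

print(round(log_sigmoid(0.0), 4))  # ln(0.5) ≈ -0.6931
print(mish(0.0))                   # 0.0
```

The threshold value of 20.0 here is illustrative; see the PRs for the value the operator actually uses.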

Language Bindings

  • Adding MxNet.Sharp package to the ecosystem page (#20162)
  • Add back cpp-package (#20131)

OneDNN

  • Change inner mxnet flags nomenclature for oneDNN library (#19944)
  • Change MXNET_MKLDNN_DEBUG define name to MXNET_ONEDNN_DEBUG (#20031)
  • Change mx_mkldnn_lib to mx_onednn_lib in Jenkins_steps.groovy file (#20035)
  • Fix oneDNN feature name in MXNet (#20070)
  • Change MXNET_MKLDNN* flag names to MXNET_ONEDNN* (#20071)
  • Change _mkldnn test and build scenarios names to _onednn (#20034)
  • [submodule] Upgrade oneDNN to v2.2.1 (#20080)
  • [submodule] Upgrade oneDNN to v2.2.2 (#20267)
  • [submodule] Upgrade oneDNN to v2.2.3 (#20345)
  • [submodule] Upgrade oneDNN to v2.2.4 (#20360)
  • [submodule] Upgrade oneDNN to v2.3 (#20418)
  • Fix backport of SoftmaxOutput implementation using onednn kernels (#20459)
  • [submodule] Upgrade oneDNN to v2.3.2 (#20502)
  • [Backport] Enabling BRGEMM FullyConnected based on shapes (#20568)
  • [BACKPORT][BUGFIX][FEATURE] Add oneDNN 1D and 3D deconvolution support and fix bias (#20292)
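As part of the MKL-DNN → oneDNN renaming above, the user-facing environment variables switched prefix (#20031, #20071). A brief sketch of the new spelling; the variable name follows the PR titles, and the value shown is illustrative:

```shell
# Pre-2.0 spelling:  export MXNET_MKLDNN_DEBUG=1
# 2.0 spelling after the rename:
export MXNET_ONEDNN_DEBUG=1   # enables extra checking in oneDNN-backed ops
```

Scripts and CI configurations that set the old MXNET_MKLDNN_* variables need the same rename.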

CI-CD

  • CI Infra updates (#19903)
  • Fix cd by adding to $PATH (#19939)
  • Fix nightly CD for python docker image releases (#19772)
  • pass version param (#19984)
  • Update ci/dev_menu.py file (#20053)
  • add gomp and quadmath (#20121)
  • [CD] Fix the name of the pip wheels in CD (#20115)
  • Attempt to fix nightly docker for master cu112 (#20126)
  • Disable codecov (#20173)
  • [BUGFIX] Fix CI slowdown issue after removing 3rdparty/openmp (#20367)
  • cudnn8 for cu101 in cd (#20408)
  • [wip] Re-enable code cov (#20427)
  • [CI] Fix centos CI & website build (#20512)
  • [CI] Move link check from jenkins to github action (#20526)
  • Pin jupyter-client (#20545)
  • [CI] Add node for website full build and nightly build (#20543)
  • use restricted g4 node (#20554)

Website & Documentation & Style

  • Fix static website build (#19906)
  • [website] Fix broken website for master version (#19945)
  • add djl (#19970)
  • [website] Automate website artifacts uploading (#19955)
  • Grammar fix (added period to README) (#19998)
  • [website] Update for MXNet 1.8.0 website release (#20013)
  • fix format issue (#20022)
  • [DOC] Disabling hybridization steps added (#19986)
  • [DOC] Add Flower to MXNet ecosystem (#20038)
  • Add relu to docs (#20193)
  • Avoid UnicodeDecodeError in method doc on Windows (#20215)
  • updated news.md and readme.md for 1.8.0 release (#19975)
  • [DOC] Update Website to Add Prerequisites for GPU pip install (#20168)
  • update short desc for pip (#20236)
  • [website] Fix Jinja2 version for python doc (#20263)
  • [Master] Auto-formatter to keep the same coding style (#20472)
  • [DOC][v2.0] Part1: Link Check (#20487)
  • [DOC][v2.0] Part3: Evaluate Notebooks (#20490)
  • If variable is not used within the loop body, start the name with an underscore (#20505)
  • [v2.0][DOC] Add migration guide (#20473)
  • [Master] Clang-formatter: only src/ directory (#20571)
  • [Website] Fix website publish (#20573)
  • [v2.0] Update Examples (#20602)

Build

  • add cmake config for cu112 (#19870)
  • Remove USE_MKL_IF_AVAILABLE flag (#20004)
  • Define NVML_NO_UNVERSIONED_FUNC_DEFS (#20146)
  • Fix ChooseBlas.cmake for CMake build dir name (#20072)
  • Update select_compute_arch.cmake from upstream (#20369)
  • Remove duplicated project command in CMakeLists.txt (#20481)
  • Add check for MKL version selection (#20562)
  • Fix macOS CMake build with TVM_OP ON (#20570)
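The changes above are all at the CMake level; with USE_MKL_IF_AVAILABLE removed (#20004), feature selection is explicit at configure time. A rough sketch of a 2.0 configure line — the flag names here are assumptions based on the PR titles, so check the repository's CMakeLists.txt for the authoritative list:

```shell
# Out-of-source CMake configure for MXNet 2.0 (illustrative flags only;
# USE_MKL_IF_AVAILABLE is gone, so the BLAS/oneDNN choice is explicit)
cmake -S . -B build \
    -DUSE_CUDA=ON \
    -DUSE_ONEDNN=ON \
    -DCMAKE_BUILD_TYPE=Release
cmake --build build --parallel
```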

License

Bug Fixes and Others

  • Mark test_masked_softmax as flaky and skip subgraph tests on windows (#19908)
  • Removed 3rdparty/openmp submodule (#19953)
  • [BUGFIX] Fix AmpCast for float16 (#19749) (#20003)
  • fix bugs for encoding params (#20007)
  • Fix for test_lans failure (#20036)
  • add flaky to norm (#20091)
  • Fix dropout and doc (#20124)
  • Revert "add flaky to norm (#20091)" (#20125)
  • Fix broadcast_like (#20169)
  • [BUGFIX] Add check to make sure num_group is non-zero (#20186)
  • Update CONTRIBUTORS.md (#20200)
  • Update CONTRIBUTORS.md (#20201)
  • [Bugfix] Fix take gradient (#20203)
  • Fix workspace of BoxNMS (#20212)
  • [BUGFIX][BACKPORT] Impose a plain format on padded concat output (#20129)
  • [BUGFIX] Fix Windows GPU VS2019 build (#20206) (#20207)
  • [BUGFIX]try avoid the error in operator/tensor/amp_cast.h (#20188)
  • [BUGFIX] fix #18936, #18937 (#19878)
  • [BUGFIX] fix numpy op fallback bug when ndarray in kwargs (#20233)
  • [BUGFIX] Fix test_zero_sized_dim save/restore of np_shape state (#20365)
  • [BUGFIX] Fix quantized_op + requantize + dequantize fuse (#20323)
  • [BUGFIX] Switch hybrid_forward to forward in test_fc_int8_fp32_outputs (#20398)
  • [2.0] fix benchmark and nightly tests (#20370)
  • [BUGFIX] fix log_sigmoid bugs (#20372)
  • [BUGFIX] fix npi_concatenate quantization dim/axis (#20383)
  • [BUGFIX] enable test_fc_subgraph.py::test_fc_eltwise (#20393)
  • [2.0] make npx.load support empty .npz files (#20403)
  • change argument order (#20413)
  • [BUGFIX] Add checks in BatchNorm's infer shape (#20415)
  • [BUGFIX] Fix Precision (#20421)
  • [v2.0] Add Optim Warning (#20426)
  • fix (#20534)
  • Test_take, add additional axis (#20532)
  • [BUGFIX] Fix (de)conv (#20597)
  • [BUGFIX] Fix NightlyTestForBinary in master branch (#20601)
