ML-Agents Beta 0.13.0


Major Changes

  • The low-level Python API has changed (#3022). See the Low Level Python API documentation for more information, and the brief sketch after the migration note below.
  • Parameters such as resolution and time scale are no longer configured in the Academy; these are now passed on the mlagents-learn command line. See the Migration Guide for more details. (#2956)
  • Offline Behavioral Cloning training was removed. To learn from demonstrations, use the GAIL and Behavioral Cloning features with either PPO or SAC (#2969)
  • Agents can now use sensors attached to child GameObjects as observations (#3095)
  • The RayPerceptionSensor now supports a layerMask option that controls which layers its raycasts can hit. (#3111)
  • The official minimum version of Unity supported by ML-Agents is now 2018.4 LTS.

For instructions on how to migrate from 0.12.0 to 0.13.0, see the Migration Guide.
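
For orientation, below is a minimal sketch of a random-action loop against the updated low-level API. The module path (mlagents_envs.environment) and the method names (get_agent_groups, get_step_result, set_actions) are assumptions based on the 0.13.0 Low Level Python API documentation; treat that documentation as the authoritative reference.

```python
# Minimal sketch of the 0.13.0 low-level Python API. Names are assumptions;
# consult the Low Level Python API documentation for the authoritative interface.
import numpy as np
from mlagents_envs.environment import UnityEnvironment

ACTION_SIZE = 2  # assumed continuous action size for this hypothetical environment

env = UnityEnvironment(file_name="MyEnvironment")  # path to a built Unity environment
env.reset()

# Agents are grouped by behavior; queries and actions are batched per group.
group_name = env.get_agent_groups()[0]

for _ in range(100):
    step_result = env.get_step_result(group_name)  # batched observations, rewards, dones
    n_agents = step_result.n_agents()
    # Random actions, purely for illustration.
    actions = np.random.uniform(-1, 1, size=(n_agents, ACTION_SIZE))
    env.set_actions(group_name, actions)
    env.step()

env.close()
```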

Minor Fixes and Improvements

  • Statistics collection has been separated from the trainers via the new StatsReporter and StatsWriter classes (#3076, #3108)
  • The mlagents.envs Python package was renamed to mlagents_envs (see the import example after this list)
  • A new AgentProcessor class and Trajectory abstraction have been added; trainers now ingest trajectories assembled by the AgentProcessor. (#3067)
  • A bug that could cause the Academy to call a disabled agent’s _AgentReset() method has been fixed (#3072)
  • Better error handling when the trainer configuration doesn’t contain a “default” entry (#3063)
  • Better error handling when there is a mismatch between the metacurriculum configuration and brains being trained (#3034)
  • A bug that prevented agents with different decision intervals from learning in the same scene has been fixed (#3181)
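
As a concrete illustration of the package rename mentioned above, only the import path changes; the exact module layout shown here is an assumption about the renamed package, so check the 0.13.0 package if it differs:

```python
# Before the rename (0.12.x and earlier), as best recalled:
# from mlagents.envs import UnityEnvironment

# From 0.13.0 on, the package is mlagents_envs (module path assumed):
from mlagents_envs.environment import UnityEnvironment

env = UnityEnvironment(file_name="MyEnvironment")
env.reset()
env.close()
```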
