ML-Agents Release 7

Package Versions

NOTE: It is strongly recommended that you use packages from the same release together for the best experience.

Package                    Version
com.unity.ml-agents (C#)   v1.4.0
ml-agents (Python)         v0.20.0
ml-agents-envs (Python)    v0.20.0
gym-unity (Python)         v0.20.0
Communicator (C#/Python)   v1.1.0

Major Features and Improvements

com.unity.ml-agents (C#)

  • The IActuator interface and ActuatorComponent abstract class were added. These are analogous to ISensor and SensorComponent, but for applying actions to an Agent. They let you define the action space programmatically rather than through the Agent's Behavior Parameters. See BasicActuatorComponent.cs for an example of how to use them, and the hedged sketch below. (#4297, #4315)
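
For illustration, here is a minimal sketch of a custom actuator. The member names follow the shape of the Actuators API in later package versions (ActionSpec, ActionBuffers, CreateActuators()), so they may differ from the experimental API shipped in v1.4.0; the JumpActuator and JumpActuatorComponent classes and the jump behaviour itself are hypothetical.

```csharp
using Unity.MLAgents.Actuators;
using UnityEngine;

// Hypothetical actuator that maps one discrete action branch to a jump impulse.
public class JumpActuator : IActuator
{
    readonly Rigidbody m_Body;

    public JumpActuator(Rigidbody body)
    {
        m_Body = body;
    }

    // One discrete branch with two choices: 0 = do nothing, 1 = jump.
    public ActionSpec ActionSpec => ActionSpec.MakeDiscrete(2);

    public string Name => "JumpActuator";

    public void OnActionReceived(ActionBuffers actionBuffers)
    {
        if (actionBuffers.DiscreteActions[0] == 1)
        {
            m_Body.AddForce(Vector3.up * 5f, ForceMode.Impulse);
        }
    }

    public void WriteDiscreteActionMask(IDiscreteActionMask actionMask) { }

    public void Heuristic(in ActionBuffers actionBuffersOut) { }

    public void ResetData() { }
}

// Component that creates the actuator; add it to the Agent's GameObject.
public class JumpActuatorComponent : ActuatorComponent
{
    public Rigidbody body;

    public override ActionSpec ActionSpec => ActionSpec.MakeDiscrete(2);

    public override IActuator[] CreateActuators()
    {
        return new IActuator[] { new JumpActuator(body) };
    }
}
```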

ml-agents (Python)

  • Experimental PyTorch support has been added. Pass --torch to mlagents-learn, or add framework: pytorch to your trainer configuration (under the behavior name) to enable it. Note that PyTorch 1.6.0 or later must be installed to use this feature; see the PyTorch website for installation instructions and the relevant ML-Agents docs for usage. (#4335)
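
As a sketch of the configuration route, a trainer config that opts a single behavior into the PyTorch backend might look like the following; the behavior name MyBehavior is a placeholder, and only the framework: pytorch line is specific to this release.

```yaml
behaviors:
  MyBehavior:            # placeholder behavior name
    trainer_type: ppo
    framework: pytorch   # opt this behavior into the experimental PyTorch backend
```

Alternatively, passing --torch to mlagents-learn enables PyTorch without editing the configuration file.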

Breaking Changes

ml-agents (Python)

  • The minimum supported version of TensorFlow was increased to 1.14.0. (#4411)

Known Issues

ml-agents (Python)

  • Soft Actor-Critic (SAC) runs considerably slower with the PyTorch backend than with TensorFlow.

Bug Fixes and Minor Changes

com.unity.ml-agents (C#)

  • Updated Barracuda to 1.1.1-preview (#4482)
  • Enabled C# formatting using dotnet-format. (#4362)
  • GridSensor was added to the com.unity.ml-agents.extensions package. Thank you to Jaden Travnik from Eidos Montreal for the contribution! (#4399)
  • Added Agent.EpisodeInterrupted(), which can be used to reset the agent when it has reached a user-determined maximum number of steps. This behaves similarly to Agent.EndEpisode() but has a slightly different effect on training; see the sketch after this list. (#4453)
  • Previously, com.unity.ml-agents was not declaring built-in packages as dependencies in its package.json. The relevant dependencies are now listed. (#4384)
  • Fixed the sample code in the custom SideChannel example. (#4466)
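
As a hedged illustration of the new call, the agent below ends an episode with EpisodeInterrupted() once a step budget is exhausted instead of treating the cutoff as a failure. The TimedAgent class, the step budget, and the reward value are hypothetical, and the ActionBuffers overload of OnActionReceived follows later package versions.

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Actuators;

// Hypothetical agent that cuts an episode off after a fixed step budget.
public class TimedAgent : Agent
{
    const int k_StepBudget = 500;  // user-determined maximum number of steps
    int m_Steps;

    public override void OnEpisodeBegin()
    {
        m_Steps = 0;
    }

    public override void OnActionReceived(ActionBuffers actions)
    {
        m_Steps++;

        if (ReachedGoal())
        {
            SetReward(1f);
            EndEpisode();           // the task itself finished
        }
        else if (m_Steps >= k_StepBudget)
        {
            EpisodeInterrupted();   // out of time; not treated as a terminal failure
        }
    }

    // Placeholder for environment-specific success logic.
    bool ReachedGoal()
    {
        return false;
    }
}
```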

ml-agents (Python)

  • Compressed visual observations with more than 3 channels are now supported. In ISensor.GetCompressedObservation(), write the channels three at a time as PNGs and concatenate the resulting bytes; see the sketch after this list. (#4399)
  • The Communication API was changed to 1.1.0 to indicate support for concatenated PNGs (see above). Using this feature with newer versions of the package also requires a compatible version of the trainer. (#4462)
  • Added a CNN (vis_encode_type: match3) suited to small grid observations such as board games; see the configuration sketch after this list. (#4434)
  • Specifying a default configuration for all behaviors is supported again: add a default_settings section to your trainer configuration. (#4448)
  • A bug in the observation normalizer that would cause rewards to decrease when using --resume was fixed. (#4463)
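
To make the >3-channel case concrete, the helper below packs an arbitrary number of channels into PNGs three at a time and concatenates the bytes, which is the byte layout a custom ISensor.GetCompressedObservation() would return. The MultiChannelPngEncoder class, the channel layout, and the use of Texture2D/EncodeToPNG are illustrative choices, not the package's reference implementation.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative helper for a custom ISensor: packs an observation with an
// arbitrary number of channels into PNGs three channels at a time and
// concatenates the resulting bytes.
public static class MultiChannelPngEncoder
{
    // channels[c][y, x] holds one float per channel/pixel, assumed in [0, 1].
    public static byte[] Encode(IList<float[,]> channels, int width, int height)
    {
        var bytes = new List<byte>();

        // Process channels in groups of three (the R, G, B planes of one PNG).
        for (var start = 0; start < channels.Count; start += 3)
        {
            var texture = new Texture2D(width, height, TextureFormat.RGB24, false);
            for (var y = 0; y < height; y++)
            {
                for (var x = 0; x < width; x++)
                {
                    var r = channels[start][y, x];
                    var g = start + 1 < channels.Count ? channels[start + 1][y, x] : 0f;
                    var b = start + 2 < channels.Count ? channels[start + 2][y, x] : 0f;
                    texture.SetPixel(x, y, new Color(r, g, b));
                }
            }
            texture.Apply();
            bytes.AddRange(texture.EncodeToPNG());
            Object.Destroy(texture);
        }

        return bytes.ToArray();
    }
}
```

A custom sensor would return this array from GetCompressedObservation() and report PNG compression; a trainer speaking Communication API 1.1.0 or later splits the concatenated PNGs back into channels.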

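For reference, a hedged sketch of how the two configuration keys above might appear in a trainer config file; the behavior name and the default values are placeholders, and the nesting assumes the standard trainer config layout with vis_encode_type under network_settings.

```yaml
default_settings:              # applied to every behavior unless overridden below
  trainer_type: ppo
  max_steps: 500000

behaviors:
  Match3Behavior:              # placeholder behavior name
    network_settings:
      vis_encode_type: match3  # the new CNN for small grid observations
```
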
Acknowledgements

Thank you @NeonMika, @armando-fandango, @Sebastian-Schuchmann, and everyone at Unity for their contributions to this release.
