# ML-Agents Release 11

## Package Versions
NOTE: It is strongly recommended that you use packages from the same release together for the best experience.
| Package | Version |
|---|---|
| com.unity.ml-agents (C#) | v1.7.0 |
| ml-agents (Python) | v0.23.0 |
| ml-agents-envs (Python) | v0.23.0 |
| gym-unity (Python) | v0.23.0 |
| Communicator (C#/Python) | v1.3.0 |
## Major Features and Improvements

### com.unity.ml-agents / com.unity.ml-agents.extensions (C#)
- An individual agent can now take both continuous and discrete actions. You can specify both continuous and discrete action sizes in Behavior Parameters. (#4702, #4718)
### ml-agents / ml-agents-envs / gym-unity (Python)
- PyTorch trainers now support training agents with both continuous and discrete action spaces. (#4702)
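For Python users, the snippet below is a minimal sketch of what a hybrid action space looks like through mlagents_envs: an ActionSpec that declares both continuous and discrete components, and an ActionTuple that carries both parts for a batch of agents. The sizes (two continuous controls, one discrete branch with three options) and the batch of four agents are illustrative assumptions, not values from any shipped environment.

```python
import numpy as np
from mlagents_envs.base_env import ActionSpec, ActionTuple

# Hypothetical hybrid spec: 2 continuous controls plus one discrete branch
# with 3 possible values (field names as exposed by mlagents_envs 0.23.0).
spec = ActionSpec(continuous_size=2, discrete_branches=(3,))

# An ActionTuple for a batch of 4 agents carries both action types side by side:
# continuous is (n_agents, continuous_size), discrete is (n_agents, num_branches).
n_agents = 4
action = ActionTuple(
    continuous=np.zeros((n_agents, spec.continuous_size), dtype=np.float32),
    discrete=np.zeros((n_agents, len(spec.discrete_branches)), dtype=np.int32),
)
```

This mirrors the combined continuous and discrete action sizes that can now be set in Behavior Parameters on the C# side.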
## Bug Fixes and Minor Changes

### com.unity.ml-agents / com.unity.ml-agents.extensions (C#)
- To improve the developer experience for the Unity ML-Agents Toolkit, we have added in-editor analytics. Please refer to "Information that is passively collected by Unity" in the Unity Privacy Policy. (#4677)
- The FoodCollector example environment now uses continuous actions for moving and discrete actions for shooting. (#4746)
- Removed noisy warnings about API minor version mismatches in both the C# and Python code. (#4688)
### ml-agents / ml-agents-envs / gym-unity (Python)
- `ActionSpec._validate_action()` now enforces that `UnityEnvironment.set_action_for_agent()` receives a 1D `np.array`.
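For context, the bullet above concerns the low-level Python API. Below is a minimal, hypothetical sketch of stepping an environment while setting per-agent actions with UnityEnvironment.set_action_for_agent(). It assumes a Unity Editor in Play mode as the environment (file_name=None) and uses ActionSpec.random_action() so that the resulting ActionTuple is already shaped the way the spec's validation expects.

```python
from mlagents_envs.environment import UnityEnvironment

# Connect to a Unity Editor in Play mode (file_name=None) and reset the environment.
env = UnityEnvironment(file_name=None)
env.reset()

behavior_name = list(env.behavior_specs)[0]
action_spec = env.behavior_specs[behavior_name].action_spec

for _ in range(10):
    decision_steps, terminal_steps = env.get_steps(behavior_name)
    for agent_id in decision_steps.agent_id:
        # random_action(1) returns an ActionTuple sized for exactly one agent,
        # so it passes the shape checks performed in ActionSpec._validate_action().
        env.set_action_for_agent(behavior_name, agent_id, action_spec.random_action(1))
    env.step()

env.close()
```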