- You can now select, at install time, which modules are initially installed
- Some modules (Coral, YOLOv8) now let you download individual models at runtime via the dashboard
- A new generative AI module (Llama LLM Chatbot)
- A standardised way to handle, in code, modules that run long processes such as generative AI inference
- Debian support has been improved
- Small UI improvements to the dashboard
- Some simplification of the modulesettings files
- Template .NET and Python modules included in the source code (both simple and long-process demos)
- Improvements to the Coral and ALPR modules (thanks to Seth and Mike)
- The Docker CUDA 12.2 image now includes cuDNN
- Install script fixes
- Added Object Segmentation to the YOLOv8 module
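The "standardised way to handle long processes" mentioned above generally means starting the work in the background and letting clients poll for completion rather than blocking a request. The sketch below illustrates that pattern only; all class and method names here are hypothetical and are not the actual CodeProject.AI module SDK API.

```python
import threading
import time

class LongProcessModule:
    """Illustrative long-process worker: start work on a background
    thread, then poll for the result. Names are hypothetical, not the
    CodeProject.AI SDK."""

    def __init__(self):
        self._thread = None
        self._result = None
        self._done = threading.Event()

    def start(self, prompt: str) -> None:
        # Kick off the long-running work without blocking the caller.
        self._done.clear()
        self._thread = threading.Thread(
            target=self._run, args=(prompt,), daemon=True
        )
        self._thread.start()

    def _run(self, prompt: str) -> None:
        # Stand-in for a slow generative-AI call (e.g. an LLM response).
        time.sleep(0.1)
        self._result = f"echo: {prompt}"
        self._done.set()

    def poll(self):
        # Non-blocking status check; returns the result once available,
        # None while the work is still running.
        return self._result if self._done.is_set() else None

module = LongProcessModule()
module.start("hello")
while module.poll() is None:
    time.sleep(0.01)
print(module.poll())  # echo: hello
```

A real module would typically also report intermediate progress and support cancellation, but the start/poll split above is the core of the pattern.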