Beta docs: https://deploy-preview-19787--frigate-docs.netlify.app/
Images
- ghcr.io/blakeblackshear/frigate:0.17.0-beta1
- ghcr.io/blakeblackshear/frigate:0.17.0-beta1-standard-arm64
- ghcr.io/blakeblackshear/frigate:0.17.0-beta1-tensorrt
- ghcr.io/blakeblackshear/frigate:0.17.0-beta1-rk
- ghcr.io/blakeblackshear/frigate:0.17.0-beta1-rocm
- ghcr.io/blakeblackshear/frigate:0.17.0-beta1-tensorrt-jp6
- ghcr.io/blakeblackshear/frigate:430cebe-synaptics
Major Changes for 0.17.0
Breaking Changes
There are several breaking changes in this release. Frigate will attempt to update the configuration automatically, but in some cases manual changes may be required. It is always recommended to back up your current config and database before upgrading:
- Copy your current config file to a new location
- Stop Frigate and make a copy of the `frigate.db` file
- GenAI now supports reviews and object descriptions. As a result, the global `genai` config now only configures the provider. Other fields have moved under `objects -> genai`. See the new GenAI documentation.
- Recordings retention is now fully tiered. This means that `record -> continuous` and `record -> motion` are separate config fields. See the examples in the documentation.
- Some of the LPR models have been updated, and most users should manually switch to the `small` model, which performs well on both CPU and GPU. The `large` model is the same as 0.16's and is not as accurate as the upgraded `small` model in 0.17. Use `large` only if you live in a region with multi-line plates and you are having issues detecting text on them with the `small` model.
- `strftime_fmt` was deprecated in 0.16 and should now be fully removed from the config in 0.17. Date/time formatting is based on the language selected in the UI.
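The config moves above can be sketched as follows. This is a hedged example: the `genai`, `objects -> genai`, and `record -> continuous` / `record -> motion` paths come from the notes above, but the specific fields shown inside each section (provider name, day counts) are illustrative assumptions, not a complete schema.

```yaml
# Illustrative 0.17-style config for the moved sections.
# Field names inside each block are assumptions; consult the docs.
genai:
  provider: ollama          # global section now only selects the provider

objects:
  genai:                    # object description settings moved here
    enabled: true

record:
  enabled: true
  continuous:               # continuous retention, now separate from motion
    days: 7
  motion:
    days: 30
```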
New Features
Frigate 0.17 introduces several major new features.
Classification Model Training
Frigate 0.17 supports two separate types of classification models: state classification and object classification. These models are trained locally on your machine using a MobileNetV2 model pretrained on ImageNet.
State Classification
State classification allows you to choose a region of one or more cameras that has multiple states and train on images showing each state. For example, you could create a state classification model to determine if a gate is currently open or closed.
See the documentation.
Object Classification
Object classification allows you to choose an object type, like dog, and classify specific dogs. For example, you can train the model to classify your dog Fido and add a sub label, while not labeling unknown dogs. Another example would be classifying if a person in a construction site is wearing a helmet or not.
See the documentation.
Custom Viewer Roles
Frigate 0.17 now has the ability to create additional viewer user roles to limit access to specific cameras. Users with the admin role can create a uniquely named role from the UI (or under `auth -> roles` in the config) and assign at least one camera to it. Users assigned to the new role will have:
- Guarded API access
- Limited frontend access, matching what the `viewer` role has access to (Live, Review/History, Explore, Exports), but only for the assigned cameras
See the documentation.
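A minimal sketch of the `auth -> roles` config described above. The exact shape under each role (a plain list of camera names here) is an assumption; check the documentation for the real schema.

```yaml
auth:
  roles:
    # custom role name mapped to the cameras it may access (shape assumed)
    side_yard_viewer:
      - side_yard
      - back_patio
```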
Review Item Summary with GenAI
Frigate 0.17 supports using GenAI to summarize review items. Unlike object descriptions, which add a searchable description, review summaries have a structured output that instructs the AI provider to generate a title and description and to classify the activity as dangerous, suspicious, or normal.
This information is displayed automatically in the UI, making it easier to see when activity requires further review and to understand what is happening during a particular video segment.
See the documentation.
Semantic Search Triggers
Triggers utilize Semantic Search to automate actions when a tracked object matches a specified image or description. Triggers can be configured so that Frigate executes a specific action when a tracked object's image or description matches a predefined image or text, based on a similarity threshold. Triggers are managed per camera and can be configured via the Frigate UI in the Settings page under the Triggers tab.
See the documentation.
Object Detector Improvements
Frigate 0.17 brings performance increases for many detectors as well as support for new object detection hardware.
Nvidia GPU Performance
Support for Nvidia GPUs has been enhanced by implementing CUDA Graphs, which reduce CPU involvement for each inference, leading to faster inference times and lower CPU usage. CUDA Graphs have some limitations based on the complexity of the model, which means that YOLO-NAS, Semantic Search, and LPR models are not accelerated with CUDA Graphs; those models still run on the GPU as they did before.
Intel OpenVINO
Frigate 0.17 supports running models on Intel NPUs. For many models, performance on the NPU is similar to the GPU but more efficient, leaving room to run more enrichment features on the GPU.
OpenVINO object detection has also received many optimizations to reduce memory and CPU utilization.
RKNN
Frigate 0.17 brings several improvements to the RKNN platform, including:
- Automatic Model Conversion: ONNX models are automatically converted to RKNN format. This allows Frigate+ and other models to be seamlessly configured and converted on startup.
- Accelerated Enrichment Support: Semantic Search and Face Recognition models can be converted and run on the NPU. This greatly enhances performance while maintaining high accuracy with `large` model sizes.
Apple Silicon
Frigate 0.17 supports running object detection on the Apple Silicon NPU. This is provided through the Apple Silicon Detector, which runs on the host and connects to Frigate via an IPC proxy, providing fast and efficient inference when Frigate runs on the same Apple device.
See the documentation.
YOLOv9 on Google Coral
Frigate 0.17 supports running a quantized version of YOLOv9 on Coral devices, bringing improved accuracy over the default mobiledet model. Note that due to hardware limitations, only a subset of the objects on the standard COCO labelmap is included. YOLOv9 Frigate+ models are not supported on Coral at this time.
See the documentation.
New Community Supported Detectors
Frigate 0.17 has community support for several new object detectors:
- MemryX: MemryX MX3 M.2 module. Documentation
- Degirum SDK: a proxy for inference with a variety of models. Documentation
- Synaptics: Synaptics SL1680 NPU. Documentation
Frontend Improvements
In addition to supporting the new features, the frontend has many improvements.
Detail Stream
History view in 0.17 supports an additional view mode, Detail. This mode shows a card for each review item, and expanding a card reveals all tracked objects and their lifecycle events. Selecting any lifecycle event seeks the video to that exact timestamp. You can also overlay a tracked object's path on the video to help with debugging.
Redesigned Tracked Object Details pane
The Tracked Object Details pane in Explore has been redesigned to streamline the layout and consolidate related information. The Object Lifecycle tab is now the Tracking Details tab, which displays video overlays of the tracked object instead of static images, giving a clearer and more intuitive view of its activity.
Revamped Settings
Frigate 0.17 has a revamped Settings menu with a sidebar that categorizes the available options. This brings more scalability which will make it easier to support full UI configuration in a future version.
NOTE: The Debug view has been moved to the single camera Live view instead of Settings. Access the Debug view by enabling the switch under the Live view settings (cog icon) menu.
Add Camera Wizard
Frigate 0.17 supports adding cameras via the UI without manually modifying your configuration file. When installing and starting Frigate for the first time, the main dashboard will include a button to start adding cameras via the Wizard.
Access the Wizard from the Cameras --> Management page in Settings.
Update Without Restarting
Frigate 0.17 supports saving many more features dynamically. Cameras, zones, and masks will not require a restart to take effect when saved through the UI. More will come in future versions.
Configuration Safe Mode
If an invalid configuration is detected, Frigate will enter safe mode and highlight the location of the issue. While in safe mode, the frontend is limited to the configuration editor, making it easy to correct the problem directly in the UI without needing an external file editor.
Other Notable Frontend Improvements
- No recordings indicator on the History timeline. When no recordings are available, the timeline now displays a black background to make this clear at a glance.
- Clickable Birdseye view. When using the Frigate UI, you can now click a camera within Birdseye to jump directly to its individual Live view.
- Object paths in Debug view. The Debug view can now display each tracked object's path — just enable the Paths toggle.
- Audio debugging support. When audio detection is enabled, the Debug view includes an Audio tab showing live dbFS and RMS values from the camera’s microphone.
Other Backend Features and Improvements
Audio Transcription and Analysis
Frigate 0.17 supports fully local audio transcription using either sherpa-onnx or faster-whisper. The single camera Live view in the Frigate UI supports live transcription of audio for streams defined with the audio role, and any speech events in Explore can be transcribed and/or translated through the Transcribe button in the Tracked Object Details pane.
See the documentation.
Process and Efficiency Improvements
Frigate 0.17 uses the forkserver spawn method, which allows for better segmented memory control and better process management. Some processes are also started with lower priority, giving the most important processes more CPU time when it is required.
Review Item Improvements
Review items have been refined to behave more intuitively:
- Revamped stationary object tracking. Stationary object tracking has been enhanced with new techniques to reduce incorrectly marking objects as active:
  - Tracking now uses a history of the object's positions, so inaccurate bounding boxes are less likely to cause the object to be considered active.
  - If an object is marked as having moved, Frigate uses image heuristics to compare the object against when it was known to be stationary, double-checking whether it has actually moved from its original position.
- Smarter handling of loitering objects. Stationary behavior is now dynamic based on object type. Objects that are normally stationary for long periods (e.g., cars) will no longer keep a review item active indefinitely when stopped inside a loitering zone. Objects that are not expected to remain still (e.g., people) will continue the review item as long as they stay within the zone.
- Severity-based review item cutoff. Review items now end when a higher-severity event (such as an `alert` for arriving home) finishes. Ongoing lower-severity motion (e.g., passing cars) will no longer keep the higher-severity review item alive; in these cases, the `alert` ends and a new `detection` review item begins immediately.
Enrichment Improvements
- LPR now includes a normalization configuration, which allows removing commonly confused characters such as `-`, etc., to ensure that plates are more consistently recognized as the same plate. Documentation
- LPR now uses newer PaddleOCR models with support for Chinese characters.
- All enrichments can now be assigned a specific device with the `device` config option. This is useful in cases when multiple GPUs are available. Documentation
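For the `device` option above, here is a sketch of pinning enrichments to specific GPUs. The section names (`semantic_search`, `face_recognition`) exist in Frigate's config, but the device identifier values shown are assumptions; consult the linked documentation for valid values.

```yaml
semantic_search:
  enabled: true
  device: "0"        # assumed identifier for the first GPU
face_recognition:
  enabled: true
  device: "1"        # run face recognition on a second GPU
```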
Other Improvements
- IPv6 can be toggled via the config with `networking -> ipv6 -> enabled`. Documentation
- There is now config support for mapping Frigate roles to arbitrary values used in proxy headers. Documentation
- MQTT now has a dedicated topic for camera health / status. Documentation
- go2rtc support for HomeKit Secure Video has now been improved, including persistent configuration being saved automatically when a camera is shared with HomeKit. Documentation
- Add a toggle in the UI Settings to always overlay camera names on the Live dashboard
- Add browser console logging to help debug Live view issues Documentation
- Add a fallback timeout value to the UI Settings pane to configure the amount of time to wait to fall back to jsmpeg after the MSE player fails
- Add the ability to download an instant snapshot from single camera Live view
- Recording playback bugfixes and efficiency improvements should cause playback to start more quickly
- User account passwords have a stricter password policy (minimum length and special characters) for improved security
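As one concrete example from the list above, enabling IPv6 is a single toggle using the `networking -> ipv6 -> enabled` path named in the notes:

```yaml
networking:
  ipv6:
    enabled: true
```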