## [1.0.0.3] — 2026-04-27

### Release focus
Failed download recovery, retry automation, and downloader hygiene. This release adds comprehensive tooling to detect, diagnose, recover from, and prevent repeated failed downloads, including cleanup workflows, retry-ladder logic, blacklist memory, orphan discovery, and richer quality/comparison rules.
### Failed download handling
- Added `cleanup_status` field to download records to track cleanup attempts (`"pending" | "cleaned" | "error"`)
- Implemented `purge_job()` on both SABnzbd and NZBGet clients to remove jobs from downloader history via native APIs
- Added `cleanup_failed_download()` async function to orchestrate cleanup (sketched below):
  - Calls the client API to purge the job from history
  - Deletes the local storage folder (tree deletion for incomplete paths)
  - Records the cleanup outcome in the database
  - Emits a `download:cleanup` event for real-time UI updates
- Handles edge cases: missing folders, API errors, permission issues
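A minimal sketch of that orchestration, assuming hypothetical `download`, `client`, `db`, and `emit` objects in place of Slimarr's actual internals:

```python
import logging
import shutil
from pathlib import Path

log = logging.getLogger(__name__)

async def cleanup_failed_download(download, client, db, emit) -> str:
    status = "cleaned"

    # 1. Purge the job from the downloader's history via its native API.
    #    Purge failures are non-fatal: log a warning and continue.
    try:
        await client.purge_job(download.job_id)
    except Exception as exc:
        log.warning("purge_job failed for %s, continuing: %s", download.id, exc)

    # 2. Delete the local storage folder (tree deletion for incomplete paths).
    folder = Path(download.storage_path)
    try:
        if folder.exists():
            shutil.rmtree(folder)  # missing folders are simply skipped
    except OSError as exc:  # permission issues and other filesystem errors
        log.error("could not delete %s: %s", folder, exc)
        status = "error"

    # 3. Record the outcome and push a real-time update to the UI.
    await db.set_cleanup_status(download.id, status)
    await emit("download:cleanup", {"id": download.id, "status": status})
    return status
```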
### Failed downloads UI page
- New "Failed Downloads" navigation link in sidebar (AlertCircle icon)
- Dedicated page listing all failed downloads with:
  - Release title and error reason
  - Storage folder path (formatted for readability)
  - Cleanup status indicator (pending | cleaned | error)
  - "Clean Folder" button — manually trigger cleanup for any failed download
  - "Retry Search" button wired to the retry ladder API
- Real-time updates via `download:cleanup` socket events
- Pagination-ready (initially 50 failed downloads per page)
### Retry ladder and failure recovery (Phase 2A)
- Added retry metadata to downloads: `retry_count`, `grabbed_at`, `last_error_at`, `blacklist_reason`
- Added retry endpoint: `POST /queue/{id}/retry`
- Implemented retry selection flow (sketched after this list) that:
  - Verifies retry eligibility and the max retry count
  - Selects the next candidate by score while skipping failed/blacklisted options
  - Starts the replacement download and schedules the monitor flow
  - Carries retry metadata forward for diagnostics
- Failed Downloads page "Retry Search" action is now fully wired to backend retry flow
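The retry flow referenced above, as a hedged sketch; `MAX_RETRIES` and every `db`/`start_download` helper here are illustrative names, not Slimarr's real API:

```python
MAX_RETRIES = 3  # assumed ceiling, for illustration only

async def retry_download(download, db, start_download):
    """Sketch of the flow behind POST /queue/{id}/retry."""
    # Verify retry eligibility and the max retry count.
    if download.retry_count >= MAX_RETRIES:
        raise ValueError("max retries exceeded")

    # Select the next candidate by score, skipping anything that has
    # already failed for this movie or sits on the blacklist.
    tried = set(await db.failed_release_hashes(download.movie_id))
    candidates = sorted(
        await db.candidates_for(download.movie_id),
        key=lambda c: c.score,
        reverse=True,
    )
    pick = None
    for candidate in candidates:
        if candidate.release_hash in tried:
            continue
        if await db.is_blacklisted(candidate.release_hash):
            continue
        pick = candidate
        break
    if pick is None:
        raise LookupError("no eligible replacement candidates")

    # Start the replacement download, carrying retry metadata forward
    # so diagnostics can reconstruct the full retry history.
    return await start_download(pick, retry_count=download.retry_count + 1)
```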
### Blacklist memory and management
- Added persistent blacklist table and logic to prevent repeated attempts of bad releases
- Added blacklist CRUD endpoints: `GET /settings/blacklist`, `POST /settings/blacklist`, `DELETE /settings/blacklist/{release_hash}`
- Added dedicated Blacklist management page in UI with add/remove workflows
- Added blacklist expiry/cleanup support for temporary and timed entries
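A minimal sketch of how the timed-entry expiry above might work, assuming an aiosqlite-style connection and illustrative table/column names:

```python
from datetime import datetime, timezone

# Illustrative schema for the persistent blacklist table; Slimarr's real
# column names may differ.
CREATE_BLACKLIST = """
CREATE TABLE IF NOT EXISTS blacklist (
    release_hash TEXT PRIMARY KEY,
    reason       TEXT NOT NULL,
    expires_at   TEXT  -- ISO timestamp; NULL = permanent entry
)
"""

async def purge_expired_blacklist(db) -> int:
    """Drop timed entries whose expiry has passed; permanent entries
    (expires_at IS NULL) are never touched."""
    now = datetime.now(timezone.utc).isoformat()
    cursor = await db.execute(
        "DELETE FROM blacklist WHERE expires_at IS NOT NULL AND expires_at < ?",
        (now,),
    )
    await db.commit()
    return cursor.rowcount
```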
### Orphan scanner and cleanup tooling (Phase 2B)
- Added orphan tracking table for downloader jobs/folders not represented in Slimarr DB
- Added orphan scanner service for SABnzbd and NZBGet history reconciliation (sketched after this list)
- Added orphan endpoints: `GET /queue/orphaned`, `POST /queue/orphaned/{id}/cleanup`
- Added dedicated Orphaned Downloads page for review and manual cleanup scheduling
- Scheduler now includes:
  - Daily orphan scan job (04:00 UTC)
  - Periodic downloader health pulse (every 30 minutes)
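A hedged sketch of the orphan scan itself; `client.history()`, `db.known_job_ids()`, and `db.record_orphan()` are assumed names for illustration:

```python
async def scan_for_orphans(client, db) -> list[dict]:
    """Record downloader jobs/folders that Slimarr's DB doesn't know
    about. Run daily at 04:00 UTC by the scheduler job listed above."""
    known = set(await db.known_job_ids())
    orphans = []
    for job in await client.history():
        if job["id"] not in known:
            orphan = {"job_id": job["id"], "storage_path": job.get("storage_path")}
            orphans.append(orphan)
            await db.record_orphan(**orphan)
    return orphans
```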
### Quality and comparison enhancements
- Parser now extracts additional metadata:
  - uploader/group (`uploader`)
  - release freshness (`release_age_days`)
- Comparison engine now applies stricter and richer decision rules (sketched below):
  - Strong preference for higher resolution (including smaller 4K upgrades)
  - Preferred-language enforcement with safer handling for untagged releases
  - Staleness penalties for older releases
  - Uploader health scoring and low-health rejection thresholds
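For illustration, one way those rules could combine into a single scoring pass; every weight, threshold, and field name below is an assumption, not Slimarr's actual logic:

```python
RESOLUTION_RANK = {"720p": 1, "1080p": 2, "2160p": 3}
STALE_AFTER_DAYS = 365
MIN_UPLOADER_HEALTH = 0.5

def score_release(release, current, preferred_language, uploader_health):
    """Score `release` against the `current` file; None means reject."""
    # Low-health uploaders are rejected before scoring at all.
    if uploader_health < MIN_UPLOADER_HEALTH:
        return None

    # Strong preference for higher resolution, so even a small 4K upgrade
    # outweighs most other signals.
    score = 100.0 * (RESOLUTION_RANK.get(release.resolution, 0)
                     - RESOLUTION_RANK.get(current.resolution, 0))

    # Preferred-language enforcement: wrong-language releases are rejected,
    # while untagged releases only take a mild penalty (safer handling).
    if release.language is None:
        score -= 10.0
    elif release.language != preferred_language:
        return None

    # Staleness penalty for older releases.
    if release.release_age_days > STALE_AFTER_DAYS:
        score -= release.release_age_days / 100.0

    return score
```

Returning `None` for hard rejections (wrong language, unhealthy uploader) keeps them distinct from merely low-scoring candidates.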
### Uploader health tracking
- Added uploader statistics table with success/failure/corruption counters and computed health score (one possible formula is sketched after this list)
- Download monitor now updates uploader health stats on completion/failure paths
- Comparison pipeline uses uploader health data to reduce repeat failures
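One plausible shape for the computed health score, assuming the three counters named above; the weighting is an illustration, not Slimarr's actual formula:

```python
def uploader_health(success: int, failure: int, corruption: int) -> float:
    """Fraction of clean completions in [0, 1], with corruption weighted
    twice as heavily as a plain failure."""
    weighted_total = success + failure + 2 * corruption
    if weighted_total == 0:
        return 1.0  # no history yet: assume healthy
    return success / weighted_total
```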
### API additions
- `GET /queue/failed?limit=50` — fetch failed downloads with cleanup metadata
- `POST /queue/{id}/cleanup` — manually trigger cleanup for a download
- Updated `/queue/active` and `/queue/recent` responses to include `storage_path` and `cleanup_status`
- Updated queue payloads to include retry metadata fields for diagnostics and UI state
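For illustration, a hedged example of calling the endpoints above; the base URL and the exact response shape (a list of objects with `id`, `cleanup_status`, and `storage_path` keys) are assumptions:

```python
import asyncio

import httpx

async def review_failed_downloads() -> None:
    async with httpx.AsyncClient(base_url="http://localhost:8000") as http:
        resp = await http.get("/queue/failed", params={"limit": 50})
        resp.raise_for_status()
        for dl in resp.json():
            print(dl["id"], dl["cleanup_status"], dl["storage_path"])
            # Manually trigger cleanup for anything still pending.
            if dl["cleanup_status"] == "pending":
                await http.post(f"/queue/{dl['id']}/cleanup")

asyncio.run(review_failed_downloads())
```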
### Diagnostics
- Download model now tracks:
  - `storage_path` — the downloader's folder location (captured from job metadata)
  - `cleanup_status` — cleanup attempt outcome
- Added explicit retry/failure timing metadata in API output for supportability
- Logs now include full storage paths for failed jobs, making it easy to diagnose orphaned folders
- Failed downloads are queryable by status, making audit and recovery workflows simpler
### Download client improvements
- Download client protocol now defines a `purge_job()` contract — all downloader adapters must implement it (see the sketch after this list)
- SABnzbd client now uses the `queue?action=delete` API for clean history removal
- NZBGet client now uses the `editqueue` RPC with a `GroupDelete` operation for job removal
- Client purge failures are non-fatal and logged as warnings (cleanup continues with folder deletion)
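The contract could look like the following `typing.Protocol` sketch; the method name comes from this changelog, while the signature is an assumption:

```python
from typing import Protocol

class DownloadClient(Protocol):
    """Sketch of the downloader adapter contract; Slimarr's real protocol
    presumably defines more methods than shown here."""

    async def purge_job(self, job_id: str) -> None:
        """Remove a job from the downloader's history via its native API
        (SABnzbd's HTTP API or NZBGet's JSON-RPC)."""
        ...
```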
### System and UX quick wins
- Added quick stats block in System page for active downloads, total movies, and improved items
- Added navigation links/routes for Orphaned Downloads and Blacklist pages
- Extended frontend API/types for retry/orphan/blacklist workflows
### Post-merge improvements (same 1.0.0.3 release)
- Added end-to-end health matrix API (`GET /system/health/matrix`) covering API, DB, queue, scheduler, orchestrator, recycling bin, and integration summaries (sketched below)
- Added release decision audit logging with persistent decision rationale and a `GET /system/decision-audit` endpoint
- System page now includes a live Health Matrix panel and a recent Release Decision Audit feed
- Orphan auto-cleanup now deletes orphaned storage paths from disk before removing stale orphan records
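A hedged sketch of how the health matrix response might be assembled; the component names come from this changelog, while the probe mapping and status strings are illustrative:

```python
async def health_matrix(probes: dict) -> dict:
    """`probes` maps a component name (api, db, queue, scheduler,
    orchestrator, recycling_bin, integrations) to an async zero-arg
    check returning True when healthy."""
    matrix = {}
    for name, probe in probes.items():
        try:
            matrix[name] = "ok" if await probe() else "degraded"
        except Exception:
            matrix[name] = "error"
    return matrix
```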