## Added

### Multi-LLM Provider Support (2024-12-15)

- **Ollama Integration**: Full support for local Ollama models
  - Privacy-focused: all processing stays on your server
  - Cost-free: no API charges for local models
  - Offline capable: works without an internet connection
  - Support for various open-source models (gpt-oss, llama3, mistral, etc.)
- **Provider Architecture**:
  - Factory pattern in `PaperlessAITitles` for dynamic provider selection
  - `BaseLLMProvider` abstract class for shared functionality
  - `OllamaTitles` provider class for Ollama API integration
  - Unified configuration format with an `llm_provider` setting
  - Backward compatible with existing OpenAI-only configurations
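The provider architecture above can be sketched roughly as follows. `BaseLLMProvider`, `OllamaTitles`, and the `llm_provider` default are taken from this changelog; the `generate_title` method name, the `OpenAITitles` class, and the `make_provider` factory are illustrative assumptions, not the project's actual code:

```python
from abc import ABC, abstractmethod


class BaseLLMProvider(ABC):
    """Shared functionality and interface for all title providers."""

    @abstractmethod
    def generate_title(self, document_text: str) -> str:
        """Return a suggested title for the given document text."""


class OpenAITitles(BaseLLMProvider):
    def generate_title(self, document_text: str) -> str:
        return "openai-title"  # the real implementation calls the OpenAI API


class OllamaTitles(BaseLLMProvider):
    def generate_title(self, document_text: str) -> str:
        return "ollama-title"  # the real implementation calls the local Ollama API


_PROVIDERS = {"openai": OpenAITitles, "ollama": OllamaTitles}


def make_provider(settings: dict) -> BaseLLMProvider:
    # Defaulting to "openai" keeps existing OpenAI-only configs working.
    name = settings.get("llm_provider", "openai")
    if name not in _PROVIDERS:
        raise ValueError(f"unknown llm_provider: {name!r}")
    return _PROVIDERS[name]()
```

The factory keeps the post-consume script provider-agnostic: it asks `make_provider` for whatever the settings select and only ever talks to the `BaseLLMProvider` interface.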
- **Comprehensive Test Suite** (17 new tests):
  - `test_ollama_integration.py`: 8 tests for the Ollama provider
    - Title generation from various document types
    - Settings loading and validation
    - Date prefix functionality
    - Error handling (service unavailable, model not found)
    - Empty text edge cases
  - `test_llm_provider_selection.py`: 9 tests for multi-provider logic
    - Provider factory pattern verification
    - Missing credentials error handling
    - Invalid provider detection
    - Backward compatibility testing
    - Runtime provider switching
    - Multi-provider comparison tests
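A provider-selection test in this style might look like the sketch below. The `select_provider` helper and its validation rules are hypothetical stand-ins for the project's real code; only the valid provider names and the OpenAI default come from this changelog:

```python
# Hypothetical sketch of invalid-provider and backward-compatibility tests.
VALID_PROVIDERS = {"openai", "ollama"}


def select_provider(settings: dict) -> str:
    # Missing llm_provider falls back to OpenAI so old configs keep working.
    name = settings.get("llm_provider", "openai")
    if name not in VALID_PROVIDERS:
        raise ValueError(f"invalid llm_provider: {name!r}")
    return name


def test_defaults_to_openai():
    assert select_provider({}) == "openai"


def test_invalid_provider_detected():
    try:
        select_provider({"llm_provider": "nonexistent"})
    except ValueError as err:
        assert "invalid llm_provider" in str(err)
    else:
        raise AssertionError("expected ValueError for unknown provider")
```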
- **Test Infrastructure**:
  - 4 new test fixture files for provider configurations
  - Extended `conftest.py` with Ollama fixtures and helpers
  - Added an `ollama` pytest marker for Ollama-specific tests
  - Updated `pytest.ini` with marker documentation
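The marker registration in `pytest.ini` presumably looks something like the fragment below; the marker names come from this changelog, but the descriptions are illustrative:

```ini
[pytest]
markers =
    integration: end-to-end tests against live services
    openai: tests that call the OpenAI API
    ollama: tests that require a running Ollama instance
    smoke: quick sanity checks
    slow: long-running tests
```

Registering markers lets a command like `pytest -m ollama` select only the Ollama-specific tests, and avoids pytest's unknown-marker warnings.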
- **Documentation**:
  - Updated README.md with Ollama setup instructions
  - Complete AGENTS.md architecture update with multi-LLM diagrams
  - Provider selection flow documentation
  - Docker networking guide for Ollama
  - Troubleshooting section for Ollama connectivity
- **Configuration Flexibility**:
  - Simple provider switching via the `llm_provider` setting in `settings.yaml`
  - Provider-specific configuration sections (`openai.model`, `ollama.model`)
  - Shared prompt templates across all providers
  - Environment variable support for both providers
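Under these conventions, a `settings.yaml` might look like the following sketch. The exact schema is an assumption; only the `llm_provider`, `openai.model`, and `ollama.model` keys are taken from this changelog:

```yaml
# Hypothetical settings.yaml sketch
llm_provider: ollama   # or "openai" (the default when omitted)

openai:
  model: gpt-4o

ollama:
  model: llama3
```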
## Added

### Convenient Docker Installation Methods

- **Auto-Init Method (Recommended)**: Fully automated installation with zero manual setup
  - Automatic Python virtual environment initialization on container start
  - Persistent venv in a Docker volume (survives container restarts)
  - Auto-detects dependency changes and rebuilds automatically
  - Custom entrypoint wrapper (`scripts/init-and-start.sh`)
  - Conditional venv setup script (`scripts/setup-venv-if-needed.sh`)
  - Post-consume wrapper script (`scripts/post_consume_wrapper.sh`)
- **Standalone Single-File Method**: Ultra-minimal installation alternative
  - All-in-one Python script (`ngx-renamer-standalone.py`)
  - No virtual environment needed
  - Configuration via environment variables only
  - Well suited to beginners and simple setups
### Documentation & Configuration

- Comprehensive `.env.example` template with detailed instructions
- Two configuration methods supported:
  - Method A: a `.env` file (traditional)
  - Method B: docker-compose environment variables (recommended)
- Complete README rewrite with:
  - Step-by-step installation instructions for both methods
  - Migration guide from the old manual setup
  - Comprehensive troubleshooting section
  - docker-compose.yml examples
### Testing Infrastructure

- Complete pytest test suite with fixtures
- Integration tests for:
  - End-to-end document processing
  - OpenAI API integration
  - Paperless API integration
- Test configuration:
  - `pytest.ini` with markers (integration, openai, smoke, slow)
  - `requirements-dev.txt` for development dependencies
  - Test fixtures for various settings configurations
- Code coverage reporting with pytest-cov
## Fixed

### Critical Fixes

- **API URL Configuration Error** (commit 8cdf3fb)
  - Fixed incorrect guidance about `PAPERLESS_NGX_URL`
  - The URL must include `/api` at the end (e.g., `http://webserver:8000/api`)
  - Updated all documentation and examples
  - Fixes 401 Unauthorized errors when accessing the Paperless API
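A minimal sketch of how the corrected base URL is consumed; the `document_url` helper is hypothetical, while the `/api` suffix requirement and the `webserver` host come from this changelog:

```python
import os


def document_url(doc_id: int) -> str:
    # PAPERLESS_NGX_URL must already end in /api, e.g. http://webserver:8000/api
    base = os.environ.get("PAPERLESS_NGX_URL", "http://webserver:8000/api")
    return f"{base.rstrip('/')}/documents/{doc_id}/"
```

With the `/api` suffix missing from the configured URL, requests land outside the API routes and fail with the 401 errors described above.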
- **Virtual Environment Initialization** (commit 7124bd6)
  - Fixed a boot loop caused by Docker volume mount behavior
  - Changed the readiness check from directory existence to `activate` file existence
  - Prevents container restart loops when the venv is incomplete
  - Docker creates the volume mount point as an empty directory, so the old directory check falsely reported a ready venv
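The corrected check can be sketched in Python (the actual script is shell; the `venv_ready` name is hypothetical):

```python
from pathlib import Path


def venv_ready(venv_dir: str) -> bool:
    # Docker pre-creates the mount point as an empty directory, so testing
    # for the directory itself gives a false positive; instead, test for a
    # file that only a completed venv setup creates.
    return (Path(venv_dir) / "bin" / "activate").is_file()
```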
- **Docker Internal Networking** (commit 8cdf3fb)
  - Changed from `http://localhost:8000` to `http://webserver:8000`
  - Fixed connectivity issues within the Docker container network
  - The service name must be used instead of `localhost` for communication between containers
## Changed

### Installation Process

- **Before**: Required 8+ manual steps, including:
  - `sudo chown -R root ngx-renamer/`
  - `sudo chmod +x` on the scripts
  - `docker compose exec webserver /usr/src/ngx-renamer/setup_venv.sh`
  - Re-running the setup after every container rebuild
- **After**: Just 3 steps (everything else is automatic):
  1. Clone/copy ngx-renamer
  2. Configure your API keys
  3. `docker compose up -d`
### Configuration

- Enhanced environment variable fallback in `change_title.py`
  - Supports both a `.env` file and direct environment variables
  - Better error messages for missing configuration
  - Backward compatible with existing setups
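A sketch of such a fallback, assuming exported environment variables take precedence over values parsed from `.env`; the precedence order and the `require_setting` helper are assumptions, not the script's actual code:

```python
import os


def require_setting(name: str, dotenv_values: dict) -> str:
    # Prefer a variable exported in the environment; otherwise fall back to
    # the value parsed from the .env file; otherwise fail with a clear message.
    value = os.environ.get(name) or dotenv_values.get(name)
    if not value:
        raise SystemExit(f"[ngx-renamer] missing required setting: {name}")
    return value
```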
### Optimizations

- `settings.yaml` changes no longer require a container restart
- Upgraded the default model from `gpt-4o-mini` to `gpt-4o` for better quality
- Streamlined the prompt structure for more consistent results
### Maintenance

- Updated `.gitignore` to exclude:
  - the `.claude/` directory
  - `.coverage` files
  - development workspace files
- Improved logging output with an `[ngx-renamer]` prefix
- Better error messages and troubleshooting guidance
## File Changes Summary

### New Files (9)

- `scripts/init-and-start.sh` - Entrypoint wrapper for auto-initialization
- `scripts/setup-venv-if-needed.sh` - Conditional venv setup
- `scripts/post_consume_wrapper.sh` - Updated post-consume script
- `ngx-renamer-standalone.py` - Single-file alternative (231 lines)
- `.env.example` - Configuration template
- `pytest.ini` - Test configuration
- `requirements-dev.txt` - Development dependencies
- `tests/` - Complete test suite (8 files, 603 lines)
### Modified Files (8)

- `README.md` - Complete rewrite (+176 lines)
- `change_title.py` - Environment variable fallback (+9 lines)
- `settings.yaml` - Optimized prompt structure
- `.gitignore` - Additional exclusions
- `modules/openai_titles.py` - Minor improvements
- `requirements.txt` - Updated dependencies
- `docker-compose.yml` - Example configuration (not committed)
### Statistics (Multi-LLM Update)

- **New Test Files**: 2 files, 17 tests, 100% passing
- **New Fixtures**: 4 YAML configuration files
- **Modified Files**: 4 (AGENTS.md, README.md, pytest.ini, conftest.py)
- **Test Coverage**: OpenAI (6/6), Ollama (8/8), Multi-provider (9/9)
- **Lines Added**: ~800 lines (tests + documentation)
### Previous Statistics

- **Total Changes**: 28 files changed
- **Additions**: +1,292 lines
- **Deletions**: -126 lines
- **Net**: +1,166 lines
## Breaking Changes

**None - fully backward compatible.**

All changes are backward compatible:

- Multi-LLM support defaults to OpenAI if `llm_provider` is not specified
- Existing OpenAI configurations continue to work without modification
- The legacy `openai_model` setting is still supported (deprecated but functional)
- Existing installations continue to work with the manual setup method
- Users can migrate to the new auto-init method at their convenience
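The legacy-setting fallback might look like this sketch. The `resolve_model` name and the lookup order are assumptions; the `openai_model` and `openai.model` keys and the `gpt-4o` default come from this changelog:

```python
def resolve_model(settings: dict) -> str:
    # Prefer the new provider-specific section, then the deprecated
    # top-level openai_model key, then the documented default.
    return (
        settings.get("openai", {}).get("model")
        or settings.get("openai_model")  # deprecated but functional
        or "gpt-4o"
    )
```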
## Migration Guide

### From the Old Manual Setup to the Auto-Init Method

1. **Backup your configuration**:
   - Save your `.env` file (if you use one)
   - Note your API keys
2. **Update docker-compose.yml**:

   ```yaml
   volumes:
     - ./ngx-renamer:/usr/src/ngx-renamer:ro
     - ngx-renamer-venv:/usr/src/ngx-renamer-venv
   environment:
     PAPERLESS_POST_CONSUME_SCRIPT: /usr/src/ngx-renamer/scripts/post_consume_wrapper.sh
   entrypoint: /usr/src/ngx-renamer/scripts/init-and-start.sh
   ```

   And declare the named volume at the top level of the file:

   ```yaml
   volumes:
     ngx-renamer-venv:
   ```
3. **Clean up the old venv**:

   ```shell
   rm -rf ngx-renamer/venv  # the old local venv is no longer needed
   ```

4. **Restart**:

   ```shell
   docker compose down
   docker compose up -d
   ```
The new method uses `/usr/src/ngx-renamer-venv` in a Docker volume instead of the local directory.
## Troubleshooting

See the comprehensive Troubleshooting section in README.md for solutions to common issues:

- Python environment setup failed
- Failed to get document details (401 errors)
- `OPENAI_API_KEY` not set
- Title generation not running
- Dependencies not updating
- `settings.yaml` changes not applying
## Credits
Multi-LLM implementation (2024-12-15): Architecture, testing, and documentation by Claude Code with human oversight.
Original implementation and Docker setup: Claude Code with human oversight.