🚀 Overview
This release transforms the script into a self-maintaining tool. With the new Auto-Update system and Dynamic Dependency Management, you no longer need to worry about manual updates or missing libraries.
✨ Key Features
🔄 Self-Updating & Lifecycle Management
GitHub API Integration:
On every startup, the script checks the GitHub repository for new commits. If a newer version is detected, it downloads the update (with a tqdm progress bar) and performs a clean restart using os.execv.
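A minimal sketch of what such a startup check could look like, assuming a hypothetical OWNER/REPO on the main branch and a local marker file for the installed commit SHA (none of these names come from the script itself):

```python
import os
import sys

import requests

# Hypothetical repository and marker file, for illustration only.
REPO_API = "https://api.github.com/repos/OWNER/REPO/commits/main"
RAW_URL = "https://raw.githubusercontent.com/OWNER/REPO/main/CliDownloader.py"
SHA_FILE = ".clidownloader_sha"


def self_update():
    """Compare the latest commit SHA against the local one and relaunch on change."""
    latest_sha = requests.get(REPO_API, timeout=10).json()["sha"]

    local_sha = ""
    if os.path.exists(SHA_FILE):
        with open(SHA_FILE) as f:
            local_sha = f.read().strip()

    if latest_sha != local_sha:
        # Overwrite the running script, record the new SHA, then replace the
        # current process with a fresh interpreter (the "clean restart").
        new_code = requests.get(RAW_URL, timeout=30).content
        with open(__file__, "wb") as f:
            f.write(new_code)
        with open(SHA_FILE, "w") as f:
            f.write(latest_sha)
        os.execv(sys.executable, [sys.executable] + sys.argv)
```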
Automatic Dependency Installer:
No more ModuleNotFoundError. The script detects missing packages (such as requests, bs4, tqdm, and fake_useragent) and installs them automatically via pip before the main program runs.
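As a rough illustration, the bootstrap step might look like this (the import-name/pip-name pairs are assumptions based on the packages listed above):

```python
import importlib
import subprocess
import sys

# (import name, pip package name) pairs the script depends on.
REQUIRED = [
    ("requests", "requests"),
    ("bs4", "beautifulsoup4"),
    ("tqdm", "tqdm"),
    ("fake_useragent", "fake-useragent"),
]


def ensure_dependencies():
    """Install any package that cannot be imported before the main program runs."""
    for module_name, pip_name in REQUIRED:
        try:
            importlib.import_module(module_name)
        except ImportError:
            subprocess.check_call([sys.executable, "-m", "pip", "install", pip_name])
```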
🕷️ Enhanced Reconnaissance & Crawling
Recursive Crawler:
You can now explore entire domains. Set your max_depth and max_pages to find files buried deep within a website's structure.
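A simplified, breadth-first sketch of depth- and page-limited crawling with requests and BeautifulSoup; the function and parameter names here are illustrative, not the script's actual API:

```python
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup


def crawl(start_url, max_depth=2, max_pages=50):
    """Collect links from one domain, bounded by depth and page count."""
    domain = urlparse(start_url).netloc
    queue = deque([(start_url, 0)])
    visited, found_links = set(), []

    while queue and len(visited) < max_pages:
        url, depth = queue.popleft()
        if url in visited or depth > max_depth:
            continue
        visited.add(url)

        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException:
            continue

        soup = BeautifulSoup(resp.text, "html.parser")
        for a in soup.find_all("a", href=True):
            link = urljoin(url, a["href"])
            found_links.append(link)
            # Only follow links that stay on the starting domain.
            if urlparse(link).netloc == domain:
                queue.append((link, depth + 1))

    return found_links
```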
Deep Header Inspection:
Beyond just looking at file extensions in the URL, the script now uses HEAD requests to inspect Content-Type and Content-Disposition, allowing it to find downloads hidden behind redirects or query strings.
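In sketch form, the probe could look like this (the Content-Type whitelist is an example, not the script's exact list):

```python
import requests

# Example MIME-type prefixes that suggest a downloadable file.
DOWNLOADABLE_TYPES = ("application/octet-stream", "application/zip",
                      "application/pdf", "video/", "audio/")


def looks_like_download(url):
    """Send a HEAD request and decide based on the response headers."""
    try:
        resp = requests.head(url, allow_redirects=True, timeout=10)
    except requests.RequestException:
        return False

    content_type = resp.headers.get("Content-Type", "").lower()
    disposition = resp.headers.get("Content-Disposition", "").lower()

    # An attachment disposition or a file-like MIME type marks a download,
    # even when the URL has no recognizable extension.
    return "attachment" in disposition or content_type.startswith(DOWNLOADABLE_TYPES)
```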
Smart Deduplication:
Advanced filtering ensures that you only see unique files in your CLI menu, even if they are linked multiple times across different pages.
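One common way to do this, shown as a hedged sketch rather than the script's exact logic, is to key each candidate on its fragment-free URL:

```python
from urllib.parse import urldefrag


def deduplicate(urls):
    """Drop repeated links, ignoring #fragment differences, while keeping order."""
    seen, unique = set(), []
    for url in urls:
        key, _fragment = urldefrag(url)  # page.html#a and page.html#b count as one file
        if key not in seen:
            seen.add(key)
            unique.append(url)
    return unique
```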
🛠️ Usability & Stability
Dynamic User-Agents:
Powered by fake_useragent to rotate browser identities and bypass basic anti-bot measures.
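Per-request rotation could look roughly like this (the fetch helper is illustrative):

```python
import requests
from fake_useragent import UserAgent

ua = UserAgent()


def fetch(url):
    """Send the request with a freshly rotated browser User-Agent string."""
    headers = {"User-Agent": ua.random}  # a new identity on every call
    return requests.get(url, headers=headers, timeout=10)
```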
Granular Logging:
Choose between Quiet, Normal, or Verbose modes at startup to control the amount of information displayed.
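A sketch of how the startup choice might map onto Python's standard logging levels; the mode names come from the feature above, but the mapping itself is an assumption:

```python
import logging

LOG_LEVELS = {
    "quiet": logging.WARNING,   # only warnings and errors
    "normal": logging.INFO,     # progress messages
    "verbose": logging.DEBUG,   # full detail, including per-request output
}


def configure_logging(mode="normal"):
    """Apply the chosen verbosity to the root logger."""
    logging.basicConfig(level=LOG_LEVELS.get(mode, logging.INFO),
                        format="%(levelname)s: %(message)s")
```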
Resilient Downloads:
Improved handling for CTRL+C interruptions, ensuring partial downloads are cleaned up and the UI exits gracefully.
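A minimal sketch of that pattern, streaming with requests and tqdm and removing the partial file on interruption:

```python
import os

import requests
from tqdm import tqdm


def download(url, dest):
    """Stream a file to disk; delete the partial file if the user presses CTRL+C."""
    try:
        with requests.get(url, stream=True, timeout=10) as resp:
            total = int(resp.headers.get("Content-Length", 0))
            with open(dest, "wb") as f, tqdm(total=total, unit="B", unit_scale=True) as bar:
                for chunk in resp.iter_content(chunk_size=8192):
                    f.write(chunk)
                    bar.update(len(chunk))
    except KeyboardInterrupt:
        # Remove the incomplete download so no corrupted file is left behind.
        if os.path.exists(dest):
            os.remove(dest)
        print("\nDownload cancelled; partial file removed.")
```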
📦 Quick Start
Download CliDownloader.py.
Run the script:
python CliDownloader.py
⚠️ Technical Requirements
Python: 3.7 or higher
File Name: Ensure your local file is named CliDownloader.py for the auto-update function to overwrite it correctly.