Merge branch 'dev'

.gitignore (vendored): 2 lines changed
@@ -25,6 +25,7 @@ unshackle/certs/
unshackle/WVDs/
unshackle/PRDs/
temp/
logs/
services/

# Byte-compiled / optimized / DLL files

@@ -217,6 +218,7 @@ cython_debug/

# you could uncomment the following to ignore the entire vscode folder
.vscode/
.github/copilot-instructions.md
CLAUDE.md

# Ruff stuff:
.ruff_cache/
CHANGELOG.md: 153 lines changed
@@ -5,6 +5,159 @@ All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [2.0.0] - 2025-11-10

### Breaking Changes

- **REST API Integration**: Core architecture modified to support REST API functionality
  - Changes to internal APIs for download management and tracking
  - Title and track classes updated with API integration points
  - Core component interfaces modified for queue management support
- **Configuration Changes**: New required configuration options for API and enhanced features
  - Added `simkl_client_id`, now required for Simkl functionality
  - Service-specific configuration override structure introduced
  - Debug logging configuration options added
- **Forced Subtitles**: Behavior change for forced subtitle inclusion
  - Forced subs are no longer auto-included; the explicit `--forced-subs` flag is now required

### Added

- **REST API Server**: Complete download management via REST API (early development)
  - Implemented download queue management and worker system
  - Added OpenAPI/Swagger documentation for easy API exploration
  - Included download progress tracking and status endpoints
  - API authentication and comprehensive error handling
  - Updated core components to support API integration
  - Early development work with more changes planned
- **CustomRemoteCDM**: Highly configurable custom CDM API support
  - Support for third-party CDM API providers with maximum configurability
  - Full configuration through YAML without code changes
  - Addresses GitHub issue #26 for flexible CDM integration
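To illustrate the "full configuration through YAML" idea, here is a minimal sketch of a config-driven remote CDM client. It is purely illustrative: the field names and endpoint templates below are assumptions, not unshackle's actual CustomRemoteCDM schema.

```python
from dataclasses import dataclass, field


@dataclass
class CustomRemoteCDMConfig:
    """Hypothetical YAML-driven config for a third-party CDM API (field names are illustrative)."""
    host: str
    secret: str
    # Endpoint templates resolved per request; placeholders are filled from call arguments.
    endpoints: dict = field(default_factory=lambda: {
        "open": "/{device}/open",
        "get_license_challenge": "/{device}/get_license_challenge/{license_type}",
    })

    def url_for(self, endpoint: str, **placeholders: str) -> str:
        """Build the full request URL for a named endpoint."""
        template = self.endpoints[endpoint]
        return self.host.rstrip("/") + template.format(**placeholders)


cdm = CustomRemoteCDMConfig(host="https://cdm.example.com/", secret="hunter2")
print(cdm.url_for("open", device="my_device"))
```

Because the endpoint shapes live in config rather than code, a new CDM provider can be supported by editing YAML alone.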
- **WindscribeVPN Proxy Provider**: New VPN provider support
  - Added WindscribeVPN following NordVPN and SurfsharkVPN patterns
  - Fixes GitHub issue #29
- **Latest Episode Download**: New `--latest-episode` CLI option
  - `-le, --latest-episode` flag to download only the most recent episode
  - Automatically selects the single most recent episode regardless of season
  - Fixes GitHub issue #28
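The "most recent episode regardless of season" selection boils down to a max over (season, number) pairs. A minimal sketch, using an illustrative `Episode` class rather than unshackle's real title classes:

```python
from dataclasses import dataclass


@dataclass
class Episode:
    season: int
    number: int


def latest_episode(episodes: list[Episode]) -> Episode:
    # The highest (season, number) pair wins, across all seasons.
    return max(episodes, key=lambda ep: (ep.season, ep.number))


eps = [Episode(1, 10), Episode(2, 3), Episode(2, 1)]
print(latest_episode(eps))  # Episode(season=2, number=3)
```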
- **Video Track Exclusion**: New `--no-video` CLI option
  - `-nv, --no-video` flag to skip downloading video tracks
  - Allows downloading only audio, subtitles, attachments, and chapters
  - Useful for audio-only or subtitle extraction workflows
  - Fixes GitHub issue #39
- **Service-Specific Configuration Overrides**: Per-service fine-tuned control
  - Support for per-service configuration overrides in YAML
  - Fine-tuned control of downloader and command options per service
  - Fixes GitHub issue #13
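Per-service overrides are typically resolved by deep-merging a service section over the global defaults. A sketch of that resolution; the key names below are illustrative, not unshackle's actual config schema:

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Return base with override applied; nested dicts merge, scalars replace."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)  # recurse into nested sections
        else:
            merged[key] = value  # scalars and lists replace outright
    return merged


defaults = {"downloader": "requests", "dl": {"workers": 4, "lang": ["en"]}}
overrides = {"EXAMPLESVC": {"dl": {"workers": 1}}}
effective = deep_merge(defaults, overrides.get("EXAMPLESVC", {}))
print(effective)  # {'downloader': 'requests', 'dl': {'workers': 1, 'lang': ['en']}}
```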
- **Comprehensive JSON Debug Logging**: Structured logging for troubleshooting
  - Binary toggle via `--debug` flag or `debug: true` in config
  - JSON Lines (.jsonl) format for easy parsing and analysis
  - Comprehensive logging of all operations (session info, CLI params, CDM details, auth status, title/track metadata, DRM operations, vault queries)
  - Configurable key logging via `debug_keys` option with smart redaction
  - Error logging for all critical operations
  - Removed old text logging system
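JSON Lines logging with key redaction can be sketched as below. The event shape and the redaction rule (keep only a key prefix) are assumptions for illustration; unshackle's actual `.jsonl` schema may differ.

```python
import io
import json


def redact(value: str) -> str:
    """Keep only the first 6 characters of a sensitive hex string."""
    return value[:6] + "..." if len(value) > 6 else value


def log_event(stream, event: dict, log_keys: bool = False) -> None:
    event = dict(event)
    if not log_keys and "content_key" in event:
        event["content_key"] = redact(event["content_key"])
    stream.write(json.dumps(event) + "\n")  # one JSON object per line (.jsonl)


buf = io.StringIO()
log_event(buf, {"op": "drm_decrypt", "kid": "00" * 8, "content_key": "deadbeefcafebabe"})
record = json.loads(buf.getvalue())
print(record["content_key"])  # deadbe...
```

Because each line is an independent JSON object, logs can be filtered with ordinary line-oriented tools.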
- **curl_cffi Retry Handler**: Enhanced session reliability
  - Added automatic retry mechanism to curl_cffi Session
  - Improved download reliability with configurable retries
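The gist of a retry handler, sketched here as a plain decorator rather than the actual curl_cffi Session hook from PR #31: retry a flaky call up to `attempts` times, backing off between tries.

```python
import time
from functools import wraps


def with_retries(attempts: int = 3, backoff: float = 0.0):
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            last_exc = None
            for attempt in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception as exc:  # real code would narrow this to transport errors
                    last_exc = exc
                    time.sleep(backoff * attempt)  # linear backoff; 0 on first retry
            raise last_exc
        return wrapper
    return decorator


calls = {"n": 0}


@with_retries(attempts=3)
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"


result = flaky()
print(result)  # "ok" on the third attempt
```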
- **Simkl API Configuration**: New API key support
  - Added `simkl_client_id` configuration option
  - Simkl now requires a client_id from https://simkl.com/settings/developer/
- **Custom Session Fingerprints**: Enhanced browser impersonation capabilities
  - Added custom fingerprint and preset support for better service compatibility
  - Configurable fingerprint presets for different device types
  - Improved success rate with services using advanced bot detection
- **TMDB and Simkl Metadata Caching**: Enhanced title cache system
  - Added metadata caching to the title cache to reduce API calls
  - Caches movie/show metadata alongside title information
  - Improves performance for repeated title lookups and reduces API rate limiting
- **API Enhancements**: Improved REST API functionality
  - Added default parameter handling for better request processing
  - Added URL field to the services endpoint response for easier service identification
  - Complete API enhancements for production readiness
  - Improved error responses with better detail and debugging information

### Changed

- **Binary Search Enhancement**: Improved binary discovery
  - Refactored to search for binaries in the root of the binary folder or in a subfolder named after the binary
  - Better organization of binary dependencies
- **Type Annotations**: Modernized to PEP 604 syntax
  - Updated session.py type annotations to use modern Python syntax
  - Improved code readability and type checking

### Fixed

- **Audio Description Track Support**: Added option to download audio description tracks
  - Added `--audio-description/-ad` flag to optionally include descriptive audio tracks
  - Previously, audio description tracks were always filtered out
  - Users can now choose to download AD tracks when needed
  - Fixes GitHub issue #33
- **Config Directory Support**: Cross-platform user config directory support
  - Fixed config loading to properly support user config directories across all platforms
  - Fixes GitHub issue #23
- **HYBRID Mode Validation**: Pre-download validation for hybrid processing
  - Added validation to check that both HDR10 and DV tracks are available before download
  - Prevents wasted downloads when hybrid processing would fail
- **TMDB/Simkl API Keys**: Graceful handling of missing API keys
  - Improved error handling when TMDB or Simkl API keys are not configured
  - Better user messaging for API configuration requirements
- **Forced Subtitles Behavior**: Correct forced subtitle filtering
  - Fixed forced subtitles being incorrectly included without the `--forced-subs` flag
  - Forced subs are now only included when explicitly requested
- **Font Attachment Constructor**: Fixed ASS/SSA font attachment
  - Use keyword arguments for the Attachment constructor in font attachment
  - Fixes "Invalid URL: No scheme supplied" error
  - Fixes GitHub issue #24
- **Binary Subdirectory Checking**: Enhanced binary location discovery (by @TPD94, PR #19)
  - Updated binaries.py to check subdirectories in binaries folders named after the binary
  - Improved binary detection and loading
- **HLS Manifest Processing**: Minor HLS parser fix (by @TPD94, PR #19)
- **lxml and pyplayready**: Updated dependencies (by @Sp5rky)
  - Updated lxml constraint and pyplayready import path for compatibility
- **DASH Segment Calculation**: Corrected segment handling
  - Fixed segment count calculation for DASH manifests with startNumber=0
  - Ensures accurate segment processing for all DASH manifest configurations
  - Prevents off-by-one errors in segment downloads
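Why startNumber=0 matters: a SegmentTemplate's segments are numbered `startNumber .. startNumber + count - 1`, so code that assumes numbering begins at 1 drops or duplicates a segment when the manifest starts at 0. A sketch with illustrative values:

```python
import math


def segment_numbers(period_duration_s: float, segment_duration: int,
                    timescale: int, start_number: int) -> range:
    """Segment numbers for a SegmentTemplate: count segments starting at start_number."""
    count = math.ceil(period_duration_s * timescale / segment_duration)
    return range(start_number, start_number + count)


# A 10s period with 2s segments yields 5 segments either way; only the numbering shifts.
print(list(segment_numbers(10, 2 * 1000, 1000, start_number=1)))  # [1, 2, 3, 4, 5]
print(list(segment_numbers(10, 2 * 1000, 1000, start_number=0)))  # [0, 1, 2, 3, 4]
```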
- **HDR Detection and Naming**: Comprehensive HDR format support
  - Improved HDR detection with comprehensive transfer characteristics checks
  - Added hybrid DV+HDR10 support for accurate file naming
  - Better identification of HDR formats across different streaming services
  - More accurate HDR/DV detection in filename generation
- **Subtitle Processing**: VTT subtitle handling improvements
  - Resolved SDH (Subtitles for the Deaf and Hard of hearing) stripping crash when processing VTT files
  - More robust subtitle processing pipeline with better error handling
  - Fixes crashes when filtering specific VTT subtitle formats
- **DRM Processing**: Enhanced encoding handling
  - Added explicit UTF-8 encoding to mp4decrypt subprocess calls
  - Prevents encoding issues on systems with non-UTF-8 default encodings
  - Improves cross-platform compatibility for Windows and some Linux configurations
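The mp4decrypt fix comes down to passing an explicit `encoding` to subprocess so captured output is decoded as UTF-8 rather than the platform default (Windows often defaults to a legacy code page). Shown here with a child Python process standing in for mp4decrypt:

```python
import os
import subprocess
import sys

env = {**os.environ, "PYTHONIOENCODING": "utf-8"}  # make the child emit UTF-8
result = subprocess.run(
    [sys.executable, "-c", "print('clé: déchiffrée')"],
    capture_output=True,
    encoding="utf-8",  # decode stdout/stderr as UTF-8, not the locale default
    env=env,
)
print(result.stdout.strip())  # clé: déchiffrée
```

Without `encoding=` (or `text=True` plus an encoding), `result.stdout` is raw bytes, and decoding them with the locale default is exactly where the cross-platform breakage crept in.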
- **Session Fingerprints**: Updated OkHttp presets
  - Updated OkHttp fingerprint presets for better Android TV compatibility
  - Improved success rate with services using fingerprint-based detection

### Documentation

- **GitHub Issue Templates**: Enhanced issue reporting
  - Improved bug report template with better structure and required fields
  - Enhanced feature request template for clearer specifications
  - Added helpful guidance for contributors to provide complete information

### Refactored

- **Import Cleanup**: Removed unused imports
  - Removed unused mypy import from binaries.py
  - Fixed import ordering in the API download_manager and handlers

### Contributors

This release includes contributions from:

- @Sp5rky - REST API server implementation, dependency updates
- @stabbedbybrick - curl_cffi retry handler (PR #31)
- @stabbedbybrick - n_m3u8dl-re refactor (PR #38)
- @TPD94 - Binary search enhancements, manifest parser fixes (PR #19)
- @scene (Andy) - Core features, configuration system, bug fixes

## [1.4.8] - 2025-10-08

### Added
@@ -547,9 +547,12 @@ Configuration data for pywidevine's serve functionality run through unshackle.

This effectively allows you to run `unshackle serve` to start serving pywidevine Serve-compliant CDMs right from your
local widevine device files.

- `api_secret` - Secret key for REST API authentication. When set, enables the REST API server alongside the CDM serve functionality. This key is required for authenticating API requests.

For example,

```yaml
api_secret: "your-secret-key-here"
users:
  secret_key_for_jane: # 32bit hex recommended, case-sensitive
    devices: # list of allowed devices for this user
```
@@ -4,7 +4,7 @@ build-backend = "hatchling.build"

[project]
name = "unshackle"
-version = "1.4.8"
+version = "2.0.0"
description = "Modular Movie, TV, and Music Archival Software."
authors = [{ name = "unshackle team" }]
requires-python = ">=3.10,<3.13"

@@ -31,6 +31,7 @@ dependencies = [
    "click>=8.1.8,<9",
    "construct>=2.8.8,<3",
    "crccheck>=1.3.0,<2",
    "fonttools>=4.0.0,<5",
    "jsonpickle>=3.0.4,<4",
    "langcodes>=3.4.0,<4",
    "lxml>=5.2.1,<7",

@@ -44,7 +45,7 @@ dependencies = [
    "pymysql>=1.1.0,<2",
    "pywidevine[serve]>=1.8.0,<2",
    "PyYAML>=6.0.1,<7",
-   "requests[socks]>=2.31.0,<3",
+   "requests[socks]>=2.32.5,<3",
    "rich>=13.7.1,<14",
    "rlaphoenix.m3u8>=3.4.0,<4",
    "ruamel.yaml>=0.18.6,<0.19",

@@ -58,7 +59,9 @@ dependencies = [
    "httpx>=0.28.1,<0.29",
    "cryptography>=45.0.0",
    "subby",
    "aiohttp-swagger3>=0.9.0,<1",
    "pysubs2>=1.7.0,<2",
    "PyExecJS>=1.5.1,<2",
]

[project.urls]
File diff suppressed because it is too large
@@ -12,7 +12,7 @@ from unshackle.core.vault import Vault
from unshackle.core.vaults import Vaults


-def _load_vaults(vault_names: list[str]) -> Vaults:
+def load_vaults(vault_names: list[str]) -> Vaults:
    """Load and validate vaults by name."""
    vaults = Vaults()
    for vault_name in vault_names:

@@ -30,7 +30,7 @@ def _load_vaults(vault_names: list[str]) -> Vaults:
    return vaults


-def _process_service_keys(from_vault: Vault, service: str, log: logging.Logger) -> dict[str, str]:
+def process_service_keys(from_vault: Vault, service: str, log: logging.Logger) -> dict[str, str]:
    """Get and validate keys from a vault for a specific service."""
    content_keys = list(from_vault.get_keys(service))

@@ -41,9 +41,9 @@ def _process_service_keys(from_vault: Vault, service: str, log: logging.Logger)
    return {kid: key for kid, key in content_keys if kid not in bad_keys}


-def _copy_service_data(to_vault: Vault, from_vault: Vault, service: str, log: logging.Logger) -> int:
+def copy_service_data(to_vault: Vault, from_vault: Vault, service: str, log: logging.Logger) -> int:
    """Copy data for a single service between vaults."""
-    content_keys = _process_service_keys(from_vault, service, log)
+    content_keys = process_service_keys(from_vault, service, log)
    total_count = len(content_keys)

    if total_count == 0:

@@ -95,7 +95,7 @@ def copy(to_vault_name: str, from_vault_names: list[str], service: Optional[str]
    log = logging.getLogger("kv")

    all_vault_names = [to_vault_name] + list(from_vault_names)
-    vaults = _load_vaults(all_vault_names)
+    vaults = load_vaults(all_vault_names)

    to_vault = vaults.vaults[0]
    from_vaults = vaults.vaults[1:]

@@ -112,7 +112,7 @@ def copy(to_vault_name: str, from_vault_names: list[str], service: Optional[str]
    services_to_copy = [service] if service else from_vault.get_services()

    for service_tag in services_to_copy:
-        added = _copy_service_data(to_vault, from_vault, service_tag, log)
+        added = copy_service_data(to_vault, from_vault, service_tag, log)
        total_added += added

    if total_added > 0:

@@ -164,7 +164,7 @@ def add(file: Path, service: str, vaults: list[str]) -> None:
    log = logging.getLogger("kv")
    service = Services.get_tag(service)

-    vaults_ = _load_vaults(list(vaults))
+    vaults_ = load_vaults(list(vaults))

    data = file.read_text(encoding="utf8")
    kid_keys: dict[str, str] = {}

@@ -194,7 +194,7 @@ def prepare(vaults: list[str]) -> None:
    """Create Service Tables on Vaults if not yet created."""
    log = logging.getLogger("kv")

-    vaults_ = _load_vaults(vaults)
+    vaults_ = load_vaults(vaults)

    for vault in vaults_:
        if hasattr(vault, "has_table") and hasattr(vault, "create_table"):
@@ -1,19 +1,32 @@
import logging
import subprocess

import click
from aiohttp import web

from unshackle.core import binaries
from unshackle.core.api import cors_middleware, setup_routes, setup_swagger
from unshackle.core.config import config
from unshackle.core.constants import context_settings


-@click.command(short_help="Serve your Local Widevine Devices for Remote Access.", context_settings=context_settings)
+@click.command(
+    short_help="Serve your Local Widevine Devices and REST API for Remote Access.", context_settings=context_settings
+)
@click.option("-h", "--host", type=str, default="0.0.0.0", help="Host to serve from.")
@click.option("-p", "--port", type=int, default=8786, help="Port to serve from.")
@click.option("--caddy", is_flag=True, default=False, help="Also serve with Caddy.")
-def serve(host: str, port: int, caddy: bool) -> None:
+@click.option("--api-only", is_flag=True, default=False, help="Serve only the REST API, not pywidevine CDM.")
+@click.option("--no-key", is_flag=True, default=False, help="Disable API key authentication (allows all requests).")
+@click.option(
+    "--debug-api",
+    is_flag=True,
+    default=False,
+    help="Include technical debug information (tracebacks, stderr) in API error responses.",
+)
+def serve(host: str, port: int, caddy: bool, api_only: bool, no_key: bool, debug_api: bool) -> None:
    """
-    Serve your Local Widevine Devices for Remote Access.
+    Serve your Local Widevine Devices and REST API for Remote Access.

    \b
    Host as 127.0.0.1 may block remote access even if port-forwarded.

@@ -23,8 +36,28 @@ def serve(host: str, port: int, caddy: bool) -> None:
    You may serve with Caddy at the same time with --caddy. You can use Caddy
    as a reverse-proxy to serve with HTTPS. The config used will be the Caddyfile
    next to the unshackle config.

+    \b
+    The REST API provides programmatic access to unshackle functionality.
+    Configure authentication in your config under serve.users and serve.api_secret.
    """
-    from pywidevine import serve
+    from pywidevine import serve as pywidevine_serve

    log = logging.getLogger("serve")

+    # Validate API secret for REST API routes (unless --no-key is used)
+    if not no_key:
+        api_secret = config.serve.get("api_secret")
+        if not api_secret:
+            raise click.ClickException(
+                "API secret key is not configured. Please add 'api_secret' to the 'serve' section in your config."
+            )
+    else:
+        api_secret = None
+        log.warning("Running with --no-key: Authentication is DISABLED for all API endpoints!")
+
+    if debug_api:
+        log.warning("Running with --debug-api: Error responses will include technical debug information!")

    if caddy:
        if not binaries.Caddy:

@@ -39,7 +72,53 @@ def serve(host: str, port: int, caddy: bool) -> None:
        if not config.serve.get("devices"):
            config.serve["devices"] = []
        config.serve["devices"].extend(list(config.directories.wvds.glob("*.wvd")))
-        serve.run(config.serve, host, port)
+
+        if api_only:
+            # API-only mode: serve just the REST API
+            log.info("Starting REST API server (pywidevine CDM disabled)")
+            if no_key:
+                app = web.Application(middlewares=[cors_middleware])
+                app["config"] = {"users": []}
+            else:
+                app = web.Application(middlewares=[cors_middleware, pywidevine_serve.authentication])
+                app["config"] = {"users": [api_secret]}
+            app["debug_api"] = debug_api
+            setup_routes(app)
+            setup_swagger(app)
+            log.info(f"REST API endpoints available at http://{host}:{port}/api/")
+            log.info(f"Swagger UI available at http://{host}:{port}/api/docs/")
+            log.info("(Press CTRL+C to quit)")
+            web.run_app(app, host=host, port=port, print=None)
+        else:
+            # Integrated mode: serve both pywidevine + REST API
+            log.info("Starting integrated server (pywidevine CDM + REST API)")
+
+            # Create integrated app with both pywidevine and API routes
+            if no_key:
+                app = web.Application(middlewares=[cors_middleware])
+                app["config"] = dict(config.serve)
+                app["config"]["users"] = []
+            else:
+                app = web.Application(middlewares=[cors_middleware, pywidevine_serve.authentication])
+                # Setup config - add API secret to users for authentication
+                serve_config = dict(config.serve)
+                if not serve_config.get("users"):
+                    serve_config["users"] = []
+                if api_secret not in serve_config["users"]:
+                    serve_config["users"].append(api_secret)
+                app["config"] = serve_config
+
+            app.on_startup.append(pywidevine_serve._startup)
+            app.on_cleanup.append(pywidevine_serve._cleanup)
+            app.add_routes(pywidevine_serve.routes)
+            app["debug_api"] = debug_api
+            setup_routes(app)
+            setup_swagger(app)
+
+            log.info(f"REST API endpoints available at http://{host}:{port}/api/")
+            log.info(f"Swagger UI available at http://{host}:{port}/api/docs/")
+            log.info("(Press CTRL+C to quit)")
+            web.run_app(app, host=host, port=port, print=None)
    finally:
        if caddy_p:
            caddy_p.kill()
@@ -1 +1 @@
-__version__ = "1.4.8"
+__version__ = "2.0.0"
@@ -1,6 +1,5 @@
import atexit
import logging
from pathlib import Path

import click
import urllib3

@@ -16,23 +15,16 @@ from unshackle.core.config import config
from unshackle.core.console import ComfyRichHandler, console
from unshackle.core.constants import context_settings
from unshackle.core.update_checker import UpdateChecker
-from unshackle.core.utilities import rotate_log_file
-
-LOGGING_PATH = None
+from unshackle.core.utilities import close_debug_logger, init_debug_logger


@click.command(cls=Commands, invoke_without_command=True, context_settings=context_settings)
@click.option("-v", "--version", is_flag=True, default=False, help="Print version information.")
-@click.option("-d", "--debug", is_flag=True, default=False, help="Enable DEBUG level logs.")
-@click.option(
-    "--log",
-    "log_path",
-    type=Path,
-    default=config.directories.logs / config.filenames.log,
-    help="Log path (or filename). Path can contain the following f-string args: {name} {time}.",
-)
-def main(version: bool, debug: bool, log_path: Path) -> None:
+@click.option("-d", "--debug", is_flag=True, default=False, help="Enable DEBUG level logs and JSON debug logging.")
+def main(version: bool, debug: bool) -> None:
    """unshackle—Modular Movie, TV, and Music Archival Software."""
+    debug_logging_enabled = debug or config.debug

    logging.basicConfig(
        level=logging.DEBUG if debug else logging.INFO,
        format="%(message)s",

@@ -48,11 +40,8 @@ def main(version: bool, debug: bool, log_path: Path) -> None:
        ],
    )

-    if log_path:
-        global LOGGING_PATH
-        console.record = True
-        new_log_path = rotate_log_file(log_path)
-        LOGGING_PATH = new_log_path
+    if debug_logging_enabled:
+        init_debug_logger(enabled=True)

    urllib3.disable_warnings(InsecureRequestWarning)

@@ -98,10 +87,9 @@ def main(version: bool, debug: bool, log_path: Path) -> None:


@atexit.register
-def save_log():
-    if console.record and LOGGING_PATH:
-        # TODO: Currently semi-bust. Everything that refreshes gets duplicated.
-        console.save_text(LOGGING_PATH)
+def cleanup():
+    """Clean up resources on exit."""
+    close_debug_logger()


if __name__ == "__main__":
unshackle/core/api/__init__.py (new file): 3 lines
@@ -0,0 +1,3 @@
from unshackle.core.api.routes import cors_middleware, setup_routes, setup_swagger

__all__ = ["setup_routes", "setup_swagger", "cors_middleware"]
unshackle/core/api/download_manager.py (new file): 660 lines
@@ -0,0 +1,660 @@
|
||||
import asyncio
|
||||
import json
|
||||
import logging
|
||||
import os
|
||||
import sys
|
||||
import tempfile
|
||||
import threading
|
||||
import uuid
|
||||
from contextlib import suppress
|
||||
from dataclasses import dataclass, field
|
||||
from datetime import datetime, timedelta
|
||||
from enum import Enum
|
||||
from typing import Any, Callable, Dict, List, Optional
|
||||
|
||||
log = logging.getLogger("download_manager")
|
||||
|
||||
|
||||
class JobStatus(Enum):
|
||||
QUEUED = "queued"
|
||||
DOWNLOADING = "downloading"
|
||||
COMPLETED = "completed"
|
||||
FAILED = "failed"
|
||||
CANCELLED = "cancelled"
|
||||
|
||||
|
||||
@dataclass
|
||||
class DownloadJob:
|
||||
"""Represents a download job with all its parameters and status."""
|
||||
|
||||
job_id: str
|
||||
status: JobStatus
|
||||
created_time: datetime
|
||||
service: str
|
||||
title_id: str
|
||||
parameters: Dict[str, Any]
|
||||
|
||||
# Progress tracking
|
||||
started_time: Optional[datetime] = None
|
||||
completed_time: Optional[datetime] = None
|
||||
progress: float = 0.0
|
||||
|
||||
# Results and error info
|
||||
output_files: List[str] = field(default_factory=list)
|
||||
error_message: Optional[str] = None
|
||||
error_details: Optional[str] = None
|
||||
error_code: Optional[str] = None
|
||||
error_traceback: Optional[str] = None
|
||||
worker_stderr: Optional[str] = None
|
||||
|
||||
# Cancellation support
|
||||
cancel_event: threading.Event = field(default_factory=threading.Event)
|
||||
|
||||
def to_dict(self, include_full_details: bool = False) -> Dict[str, Any]:
|
||||
"""Convert job to dictionary for JSON response."""
|
||||
result = {
|
||||
"job_id": self.job_id,
|
||||
"status": self.status.value,
|
||||
"created_time": self.created_time.isoformat(),
|
||||
"service": self.service,
|
||||
"title_id": self.title_id,
|
||||
"progress": self.progress,
|
||||
}
|
||||
|
||||
if include_full_details:
|
||||
result.update(
|
||||
{
|
||||
"parameters": self.parameters,
|
||||
"started_time": self.started_time.isoformat() if self.started_time else None,
|
||||
"completed_time": self.completed_time.isoformat() if self.completed_time else None,
|
||||
"output_files": self.output_files,
|
||||
"error_message": self.error_message,
|
||||
"error_details": self.error_details,
|
||||
"error_code": self.error_code,
|
||||
"error_traceback": self.error_traceback,
|
||||
"worker_stderr": self.worker_stderr,
|
||||
}
|
||||
)
|
||||
|
||||
return result
|
||||
|
||||
|
||||
def _perform_download(
|
||||
job_id: str,
|
||||
service: str,
|
||||
title_id: str,
|
||||
params: Dict[str, Any],
|
||||
cancel_event: Optional[threading.Event] = None,
|
||||
progress_callback: Optional[Callable[[Dict[str, Any]], None]] = None,
|
||||
) -> List[str]:
|
||||
"""Execute the synchronous download logic for a job."""
|
||||
|
||||
def _check_cancel(stage: str):
|
||||
if cancel_event and cancel_event.is_set():
|
||||
raise Exception(f"Job was cancelled {stage}")
|
||||
|
||||
from contextlib import redirect_stderr, redirect_stdout
|
||||
from io import StringIO
|
||||
|
||||
_check_cancel("before execution started")
|
||||
|
||||
# Import dl.py components lazily to avoid circular deps during module import
|
||||
import click
|
||||
import yaml
|
||||
|
||||
from unshackle.commands.dl import dl
|
||||
from unshackle.core.config import config
|
||||
from unshackle.core.services import Services
|
||||
from unshackle.core.utils.click_types import ContextData
|
||||
from unshackle.core.utils.collections import merge_dict
|
||||
|
||||
log.info(f"Starting sync download for job {job_id}")
|
||||
|
||||
# Load service configuration
|
||||
service_config_path = Services.get_path(service) / config.filenames.config
|
||||
if service_config_path.exists():
|
||||
service_config = yaml.safe_load(service_config_path.read_text(encoding="utf8"))
|
||||
else:
|
||||
service_config = {}
|
||||
merge_dict(config.services.get(service), service_config)
|
||||
|
||||
from unshackle.commands.dl import dl as dl_command
|
||||
|
||||
ctx = click.Context(dl_command.cli)
|
||||
ctx.invoked_subcommand = service
|
||||
ctx.obj = ContextData(config=service_config, cdm=None, proxy_providers=[], profile=params.get("profile"))
|
||||
ctx.params = {
|
||||
"proxy": params.get("proxy"),
|
||||
"no_proxy": params.get("no_proxy", False),
|
||||
"profile": params.get("profile"),
|
||||
"tag": params.get("tag"),
|
||||
"tmdb_id": params.get("tmdb_id"),
|
||||
"tmdb_name": params.get("tmdb_name", False),
|
||||
"tmdb_year": params.get("tmdb_year", False),
|
||||
}
|
||||
|
||||
dl_instance = dl(
|
||||
ctx=ctx,
|
||||
no_proxy=params.get("no_proxy", False),
|
||||
profile=params.get("profile"),
|
||||
proxy=params.get("proxy"),
|
||||
tag=params.get("tag"),
|
||||
tmdb_id=params.get("tmdb_id"),
|
||||
tmdb_name=params.get("tmdb_name", False),
|
||||
tmdb_year=params.get("tmdb_year", False),
|
||||
)
|
||||
|
||||
service_module = Services.load(service)
|
||||
|
||||
_check_cancel("before service instantiation")
|
||||
|
||||
try:
|
||||
import inspect
|
||||
|
||||
service_init_params = inspect.signature(service_module.__init__).parameters
|
||||
|
||||
service_ctx = click.Context(click.Command(service))
|
||||
service_ctx.parent = ctx
|
||||
service_ctx.obj = ctx.obj
|
||||
|
||||
service_kwargs = {}
|
||||
|
||||
if "title" in service_init_params:
|
||||
service_kwargs["title"] = title_id
|
||||
|
||||
for key, value in params.items():
|
||||
if key in service_init_params and key not in ["service", "title_id"]:
|
||||
service_kwargs[key] = value
|
||||
|
||||
for param_name, param_info in service_init_params.items():
|
||||
if param_name not in service_kwargs and param_name not in ["self", "ctx"]:
|
||||
if param_info.default is inspect.Parameter.empty:
|
||||
if param_name == "movie":
|
||||
service_kwargs[param_name] = "/movies/" in title_id
|
||||
elif param_name == "meta_lang":
|
||||
service_kwargs[param_name] = None
|
||||
else:
|
||||
log.warning(f"Unknown required parameter '{param_name}' for service {service}, using None")
|
||||
service_kwargs[param_name] = None
|
||||
|
||||
service_instance = service_module(service_ctx, **service_kwargs)
|
||||
|
||||
except Exception as exc: # noqa: BLE001 - propagate meaningful failure
|
||||
        log.error(f"Failed to create service instance: {exc}")
        raise

    original_download_dir = config.directories.downloads

    _check_cancel("before download execution")

    stdout_capture = StringIO()
    stderr_capture = StringIO()

    # Simple progress tracking if callback provided
    if progress_callback:
        # Report initial progress
        progress_callback({"progress": 0.0, "status": "starting"})

        # Simple approach: report progress at key points
        original_result = dl_instance.result

        def result_with_progress(*args, **kwargs):
            try:
                # Report that download started
                progress_callback({"progress": 5.0, "status": "downloading"})

                # Call original method
                result = original_result(*args, **kwargs)

                # Report completion
                progress_callback({"progress": 100.0, "status": "completed"})
                return result
            except Exception as e:
                progress_callback({"progress": 0.0, "status": "failed", "error": str(e)})
                raise

        dl_instance.result = result_with_progress

    try:
        with redirect_stdout(stdout_capture), redirect_stderr(stderr_capture):
            dl_instance.result(
                service=service_instance,
                quality=params.get("quality", []),
                vcodec=params.get("vcodec"),
                acodec=params.get("acodec"),
                vbitrate=params.get("vbitrate"),
                abitrate=params.get("abitrate"),
                range_=params.get("range", ["SDR"]),
                channels=params.get("channels"),
                no_atmos=params.get("no_atmos", False),
                wanted=params.get("wanted", []),
                latest_episode=params.get("latest_episode", False),
                lang=params.get("lang", ["orig"]),
                v_lang=params.get("v_lang", []),
                a_lang=params.get("a_lang", []),
                s_lang=params.get("s_lang", ["all"]),
                require_subs=params.get("require_subs", []),
                forced_subs=params.get("forced_subs", False),
                exact_lang=params.get("exact_lang", False),
                sub_format=params.get("sub_format"),
                video_only=params.get("video_only", False),
                audio_only=params.get("audio_only", False),
                subs_only=params.get("subs_only", False),
                chapters_only=params.get("chapters_only", False),
                no_subs=params.get("no_subs", False),
                no_audio=params.get("no_audio", False),
                no_chapters=params.get("no_chapters", False),
                audio_description=params.get("audio_description", False),
                slow=params.get("slow", False),
                list_=False,
                list_titles=False,
                skip_dl=params.get("skip_dl", False),
                export=params.get("export"),
                cdm_only=params.get("cdm_only"),
                no_proxy=params.get("no_proxy", False),
                no_folder=params.get("no_folder", False),
                no_source=params.get("no_source", False),
                no_mux=params.get("no_mux", False),
                workers=params.get("workers"),
                downloads=params.get("downloads", 1),
                best_available=params.get("best_available", False),
            )

    except SystemExit as exc:
        if exc.code != 0:
            stdout_str = stdout_capture.getvalue()
            stderr_str = stderr_capture.getvalue()
            log.error(f"Download exited with code {exc.code}")
            log.error(f"Stdout: {stdout_str}")
            log.error(f"Stderr: {stderr_str}")
            raise Exception(f"Download failed with exit code {exc.code}")

    except Exception as exc:  # noqa: BLE001 - propagate to caller
        stdout_str = stdout_capture.getvalue()
        stderr_str = stderr_capture.getvalue()
        log.error(f"Download execution failed: {exc}")
        log.error(f"Stdout: {stdout_str}")
        log.error(f"Stderr: {stderr_str}")
        raise

    log.info(f"Download completed for job {job_id}, files in {original_download_dir}")

    return []

class DownloadQueueManager:
    """Manages download job queue with configurable concurrency limits."""

    def __init__(self, max_concurrent_downloads: int = 2, job_retention_hours: int = 24):
        self.max_concurrent_downloads = max_concurrent_downloads
        self.job_retention_hours = job_retention_hours

        self._jobs: Dict[str, DownloadJob] = {}
        self._job_queue: asyncio.Queue = asyncio.Queue()
        self._active_downloads: Dict[str, asyncio.Task] = {}
        self._download_processes: Dict[str, asyncio.subprocess.Process] = {}
        self._job_temp_files: Dict[str, Dict[str, str]] = {}
        self._workers_started = False
        self._shutdown_event = asyncio.Event()

        log.info(
            f"Initialized download queue manager: max_concurrent={max_concurrent_downloads}, retention_hours={job_retention_hours}"
        )

    def create_job(self, service: str, title_id: str, **parameters) -> DownloadJob:
        """Create a new download job and add it to the queue."""
        job_id = str(uuid.uuid4())
        job = DownloadJob(
            job_id=job_id,
            status=JobStatus.QUEUED,
            created_time=datetime.now(),
            service=service,
            title_id=title_id,
            parameters=parameters,
        )

        self._jobs[job_id] = job
        self._job_queue.put_nowait(job)

        log.info(f"Created download job {job_id} for {service}:{title_id}")
        return job

    def get_job(self, job_id: str) -> Optional[DownloadJob]:
        """Get job by ID."""
        return self._jobs.get(job_id)

    def list_jobs(self) -> List[DownloadJob]:
        """List all jobs."""
        return list(self._jobs.values())

    def cancel_job(self, job_id: str) -> bool:
        """Cancel a job if it's queued or downloading."""
        job = self._jobs.get(job_id)
        if not job:
            return False

        if job.status == JobStatus.QUEUED:
            job.status = JobStatus.CANCELLED
            job.cancel_event.set()  # Signal cancellation
            log.info(f"Cancelled queued job {job_id}")
            return True
        elif job.status == JobStatus.DOWNLOADING:
            # Set the cancellation event first - this will be checked by the download thread
            job.cancel_event.set()
            job.status = JobStatus.CANCELLED
            log.info(f"Signaled cancellation for downloading job {job_id}")

            # Cancel the active download task
            task = self._active_downloads.get(job_id)
            if task:
                task.cancel()
                log.info(f"Cancelled download task for job {job_id}")

            process = self._download_processes.get(job_id)
            if process:
                try:
                    process.terminate()
                    log.info(f"Terminated worker process for job {job_id}")
                except ProcessLookupError:
                    log.debug(f"Worker process for job {job_id} already exited")

            return True

        return False

    def cleanup_old_jobs(self) -> int:
        """Remove jobs older than retention period."""
        cutoff_time = datetime.now() - timedelta(hours=self.job_retention_hours)
        jobs_to_remove = []

        for job_id, job in self._jobs.items():
            if job.status in [JobStatus.COMPLETED, JobStatus.FAILED, JobStatus.CANCELLED]:
                if job.completed_time and job.completed_time < cutoff_time:
                    jobs_to_remove.append(job_id)
                elif not job.completed_time and job.created_time < cutoff_time:
                    jobs_to_remove.append(job_id)

        for job_id in jobs_to_remove:
            del self._jobs[job_id]

        if jobs_to_remove:
            log.info(f"Cleaned up {len(jobs_to_remove)} old jobs")

        return len(jobs_to_remove)

    async def start_workers(self):
        """Start worker tasks to process the download queue."""
        if self._workers_started:
            return

        self._workers_started = True

        # Start worker tasks
        for i in range(self.max_concurrent_downloads):
            asyncio.create_task(self._download_worker(f"worker-{i}"))

        # Start cleanup task
        asyncio.create_task(self._cleanup_worker())

        log.info(f"Started {self.max_concurrent_downloads} download workers")

    async def shutdown(self):
        """Shutdown the queue manager and cancel all active downloads."""
        log.info("Shutting down download queue manager")
        self._shutdown_event.set()

        # Cancel all active downloads
        for task in self._active_downloads.values():
            task.cancel()

        # Terminate worker processes
        for job_id, process in list(self._download_processes.items()):
            try:
                process.terminate()
            except ProcessLookupError:
                log.debug(f"Worker process for job {job_id} already exited during shutdown")

        for job_id, process in list(self._download_processes.items()):
            try:
                await asyncio.wait_for(process.wait(), timeout=5)
            except asyncio.TimeoutError:
                log.warning(f"Worker process for job {job_id} did not exit, killing")
                process.kill()
                await process.wait()
            finally:
                self._download_processes.pop(job_id, None)

        # Clean up any remaining temp files
        for paths in self._job_temp_files.values():
            for path in paths.values():
                try:
                    os.remove(path)
                except OSError:
                    pass
        self._job_temp_files.clear()

        # Wait for workers to finish
        if self._active_downloads:
            await asyncio.gather(*self._active_downloads.values(), return_exceptions=True)

    async def _download_worker(self, worker_name: str):
        """Worker task that processes jobs from the queue."""
        log.debug(f"Download worker {worker_name} started")

        while not self._shutdown_event.is_set():
            try:
                # Wait for a job or shutdown signal
                job = await asyncio.wait_for(self._job_queue.get(), timeout=1.0)

                if job.status == JobStatus.CANCELLED:
                    continue

                # Start processing the job
                job.status = JobStatus.DOWNLOADING
                job.started_time = datetime.now()

                log.info(f"Worker {worker_name} starting job {job.job_id}")

                # Create download task
                download_task = asyncio.create_task(self._execute_download(job))
                self._active_downloads[job.job_id] = download_task

                try:
                    await download_task
                except asyncio.CancelledError:
                    job.status = JobStatus.CANCELLED
                    log.info(f"Job {job.job_id} was cancelled")
                except Exception as e:
                    job.status = JobStatus.FAILED
                    job.error_message = str(e)
                    log.error(f"Job {job.job_id} failed: {e}")
                finally:
                    job.completed_time = datetime.now()
                    if job.job_id in self._active_downloads:
                        del self._active_downloads[job.job_id]

            except asyncio.TimeoutError:
                continue
            except Exception as e:
                log.error(f"Worker {worker_name} error: {e}")

    async def _execute_download(self, job: DownloadJob):
        """Execute the actual download for a job."""
        log.info(f"Executing download for job {job.job_id}")

        try:
            output_files = await self._run_download_async(job)
            job.status = JobStatus.COMPLETED
            job.output_files = output_files
            job.progress = 100.0
            log.info(f"Download completed for job {job.job_id}: {len(output_files)} files")
        except Exception as e:
            import traceback

            from unshackle.core.api.errors import categorize_exception

            job.status = JobStatus.FAILED
            job.error_message = str(e)
            job.error_details = str(e)

            api_error = categorize_exception(
                e, context={"service": job.service, "title_id": job.title_id, "job_id": job.job_id}
            )
            job.error_code = api_error.error_code.value

            job.error_traceback = traceback.format_exc()

            log.error(f"Download failed for job {job.job_id}: {e}")
            raise

    async def _run_download_async(self, job: DownloadJob) -> List[str]:
        """Invoke a worker subprocess to execute the download."""

        payload = {
            "job_id": job.job_id,
            "service": job.service,
            "title_id": job.title_id,
            "parameters": job.parameters,
        }

        payload_fd, payload_path = tempfile.mkstemp(prefix=f"unshackle_job_{job.job_id}_", suffix="_payload.json")
        os.close(payload_fd)
        result_fd, result_path = tempfile.mkstemp(prefix=f"unshackle_job_{job.job_id}_", suffix="_result.json")
        os.close(result_fd)
        progress_fd, progress_path = tempfile.mkstemp(prefix=f"unshackle_job_{job.job_id}_", suffix="_progress.json")
        os.close(progress_fd)

        with open(payload_path, "w", encoding="utf-8") as handle:
            json.dump(payload, handle)

        process = await asyncio.create_subprocess_exec(
            sys.executable,
            "-m",
            "unshackle.core.api.download_worker",
            payload_path,
            result_path,
            progress_path,
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.PIPE,
        )

        self._download_processes[job.job_id] = process
        self._job_temp_files[job.job_id] = {"payload": payload_path, "result": result_path, "progress": progress_path}

        communicate_task = asyncio.create_task(process.communicate())

        stdout_bytes = b""
        stderr_bytes = b""

        try:
            while True:
                done, _ = await asyncio.wait({communicate_task}, timeout=0.5)
                if communicate_task in done:
                    stdout_bytes, stderr_bytes = communicate_task.result()
                    break

                # Check for progress updates
                try:
                    if os.path.exists(progress_path):
                        with open(progress_path, "r", encoding="utf-8") as handle:
                            progress_data = json.load(handle)
                        if "progress" in progress_data:
                            new_progress = float(progress_data["progress"])
                            if new_progress != job.progress:
                                job.progress = new_progress
                                log.info(f"Job {job.job_id} progress updated: {job.progress}%")
                except (FileNotFoundError, json.JSONDecodeError, ValueError) as e:
                    log.debug(f"Could not read progress for job {job.job_id}: {e}")

                if job.cancel_event.is_set() or job.status == JobStatus.CANCELLED:
                    log.info(f"Cancellation detected for job {job.job_id}, terminating worker process")
                    process.terminate()
                    try:
                        await asyncio.wait_for(communicate_task, timeout=5)
                    except asyncio.TimeoutError:
                        log.warning(f"Worker process for job {job.job_id} did not terminate, killing")
                        process.kill()
                        await asyncio.wait_for(communicate_task, timeout=5)
                    raise asyncio.CancelledError("Job was cancelled")

            returncode = process.returncode
            stdout = stdout_bytes.decode("utf-8", errors="ignore")
            stderr = stderr_bytes.decode("utf-8", errors="ignore")

            if stdout.strip():
                log.debug(f"Worker stdout for job {job.job_id}: {stdout.strip()}")
            if stderr.strip():
                log.warning(f"Worker stderr for job {job.job_id}: {stderr.strip()}")
                job.worker_stderr = stderr.strip()

            result_data: Optional[Dict[str, Any]] = None
            try:
                with open(result_path, "r", encoding="utf-8") as handle:
                    result_data = json.load(handle)
            except FileNotFoundError:
                log.error(f"Result file missing for job {job.job_id}")
            except json.JSONDecodeError as exc:
                log.error(f"Failed to parse worker result for job {job.job_id}: {exc}")

            if returncode != 0:
                message = result_data.get("message") if result_data else "unknown error"
                if result_data:
                    job.error_details = result_data.get("error_details", message)
                    job.error_code = result_data.get("error_code")
                raise Exception(f"Worker exited with code {returncode}: {message}")

            if not result_data or result_data.get("status") != "success":
                message = result_data.get("message") if result_data else "worker did not report success"
                if result_data:
                    job.error_details = result_data.get("error_details", message)
                    job.error_code = result_data.get("error_code")
                raise Exception(f"Worker failure: {message}")

            return result_data.get("output_files", [])

        finally:
            if not communicate_task.done():
                communicate_task.cancel()
                with suppress(asyncio.CancelledError):
                    await communicate_task

            self._download_processes.pop(job.job_id, None)

            temp_paths = self._job_temp_files.pop(job.job_id, {})
            for path in temp_paths.values():
                try:
                    os.remove(path)
                except OSError:
                    pass

    def _execute_download_sync(self, job: DownloadJob) -> List[str]:
        """Execute download synchronously using existing dl.py logic."""
        return _perform_download(job.job_id, job.service, job.title_id, job.parameters.copy(), job.cancel_event)

    async def _cleanup_worker(self):
        """Worker that periodically cleans up old jobs."""
        while not self._shutdown_event.is_set():
            try:
                await asyncio.sleep(3600)  # Run every hour
                self.cleanup_old_jobs()
            except Exception as e:
                log.error(f"Cleanup worker error: {e}")


# Global instance
download_manager: Optional[DownloadQueueManager] = None


def get_download_manager() -> DownloadQueueManager:
    """Get the global download manager instance."""
    global download_manager
    if download_manager is None:
        # Load configuration from unshackle config
        from unshackle.core.config import config

        max_concurrent = getattr(config, "max_concurrent_downloads", 2)
        retention_hours = getattr(config, "download_job_retention_hours", 24)

        download_manager = DownloadQueueManager(max_concurrent, retention_hours)

    return download_manager

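The `DownloadQueueManager` above follows a standard asyncio worker-pool pattern: N worker tasks poll a shared `asyncio.Queue` with a short timeout so they can periodically re-check a shutdown event. A minimal, self-contained sketch of that pattern (illustrative names only, not the unshackle API):

```python
import asyncio


async def demo() -> list[str]:
    # N workers pull job ids from a shared queue, mirroring _download_worker.
    queue: asyncio.Queue = asyncio.Queue()
    completed: list[str] = []
    shutdown = asyncio.Event()

    async def worker(name: str) -> None:
        while not shutdown.is_set():
            try:
                # Short timeout so the shutdown flag is re-checked regularly.
                job_id = await asyncio.wait_for(queue.get(), timeout=0.1)
            except asyncio.TimeoutError:
                continue
            await asyncio.sleep(0)  # stand-in for the real download work
            completed.append(job_id)

    workers = [asyncio.create_task(worker(f"worker-{i}")) for i in range(2)]
    for job_id in ["job-a", "job-b", "job-c"]:
        queue.put_nowait(job_id)

    # Wait until all jobs finish, then signal shutdown and join the workers.
    while len(completed) < 3:
        await asyncio.sleep(0.01)
    shutdown.set()
    await asyncio.gather(*workers)
    return completed


result = asyncio.run(demo())
print(sorted(result))  # → ['job-a', 'job-b', 'job-c']
```

The timeout-and-continue loop is what lets `shutdown()` stop idle workers without cancelling them mid-job.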
102
unshackle/core/api/download_worker.py
Normal file
@@ -0,0 +1,102 @@
"""Standalone worker process entry point for executing download jobs."""

from __future__ import annotations

import json
import logging
import sys
import traceback
from pathlib import Path
from typing import Any, Dict

from .download_manager import _perform_download

log = logging.getLogger("download_worker")


def _read_payload(path: Path) -> Dict[str, Any]:
    with path.open("r", encoding="utf-8") as handle:
        return json.load(handle)


def _write_result(path: Path, payload: Dict[str, Any]) -> None:
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("w", encoding="utf-8") as handle:
        json.dump(payload, handle)


def main(argv: list[str]) -> int:
    if len(argv) not in [3, 4]:
        print(
            "Usage: python -m unshackle.core.api.download_worker <payload_path> <result_path> [progress_path]",
            file=sys.stderr,
        )
        return 2

    payload_path = Path(argv[1])
    result_path = Path(argv[2])
    progress_path = Path(argv[3]) if len(argv) > 3 else None

    result: Dict[str, Any] = {}
    exit_code = 0

    try:
        payload = _read_payload(payload_path)
        job_id = payload["job_id"]
        service = payload["service"]
        title_id = payload["title_id"]
        params = payload.get("parameters", {})

        log.info(f"Worker starting job {job_id} ({service}:{title_id})")

        def progress_callback(progress_data: Dict[str, Any]) -> None:
            """Write progress updates to file for main process to read."""
            if progress_path:
                try:
                    log.info(f"Writing progress update: {progress_data}")
                    _write_result(progress_path, progress_data)
                    log.info(f"Progress update written to {progress_path}")
                except Exception as e:
                    log.error(f"Failed to write progress update: {e}")

        output_files = _perform_download(
            job_id, service, title_id, params, cancel_event=None, progress_callback=progress_callback
        )

        result = {"status": "success", "output_files": output_files}

    except Exception as exc:  # noqa: BLE001 - capture for parent process
        from unshackle.core.api.errors import categorize_exception

        exit_code = 1
        tb = traceback.format_exc()
        log.error(f"Worker failed with error: {exc}")

        api_error = categorize_exception(
            exc,
            context={
                "service": payload.get("service") if "payload" in locals() else None,
                "title_id": payload.get("title_id") if "payload" in locals() else None,
                "job_id": payload.get("job_id") if "payload" in locals() else None,
            },
        )

        result = {
            "status": "error",
            "message": str(exc),
            "error_details": api_error.message,
            "error_code": api_error.error_code.value,
            "traceback": tb,
        }

    finally:
        try:
            _write_result(result_path, result)
        except Exception as exc:  # noqa: BLE001 - last resort logging
            log.error(f"Failed to write worker result file: {exc}")

    return exit_code


if __name__ == "__main__":
    sys.exit(main(sys.argv))

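Parent and worker communicate purely through temp JSON files: the parent writes a payload file, spawns the worker with the payload/result paths as argv, and reads the result file back after exit. A runnable toy version of that handshake (the inline `demo_worker.py` script is a stand-in, not the real `unshackle.core.api.download_worker`):

```python
import json
import subprocess
import sys
import tempfile
from pathlib import Path

# Stand-in worker: read payload from argv[1], write result to argv[2].
worker_src = """
import json, sys
from pathlib import Path
payload = json.loads(Path(sys.argv[1]).read_text())
Path(sys.argv[2]).write_text(
    json.dumps({"status": "success", "output_files": [], "job_id": payload["job_id"]})
)
"""

with tempfile.TemporaryDirectory() as tmp:
    payload_path = Path(tmp) / "payload.json"
    result_path = Path(tmp) / "result.json"
    worker_path = Path(tmp) / "demo_worker.py"
    worker_path.write_text(worker_src)

    # Parent side: write payload, run the worker, read the result file.
    payload_path.write_text(
        json.dumps({"job_id": "job-1", "service": "DEMO", "title_id": "t1", "parameters": {}})
    )
    proc = subprocess.run([sys.executable, str(worker_path), str(payload_path), str(result_path)])
    result = json.loads(result_path.read_text())

print(result["status"])  # → success
```

Using files instead of pipes for results is what lets the parent still recover a structured error report even when the worker dies after writing it.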
322
unshackle/core/api/errors.py
Normal file
@@ -0,0 +1,322 @@
"""
API Error Handling System

Provides structured error responses with error codes, categorization,
and optional debug information for the unshackle REST API.
"""

from __future__ import annotations

import traceback
from datetime import datetime, timezone
from enum import Enum
from typing import Any

from aiohttp import web


class APIErrorCode(str, Enum):
    """Standard API error codes for programmatic error handling."""

    # Client errors (4xx)
    INVALID_INPUT = "INVALID_INPUT"  # Missing or malformed request data
    INVALID_SERVICE = "INVALID_SERVICE"  # Unknown service name
    INVALID_TITLE_ID = "INVALID_TITLE_ID"  # Invalid or malformed title ID
    INVALID_PROFILE = "INVALID_PROFILE"  # Profile doesn't exist
    INVALID_PROXY = "INVALID_PROXY"  # Invalid proxy specification
    INVALID_LANGUAGE = "INVALID_LANGUAGE"  # Invalid language code
    INVALID_PARAMETERS = "INVALID_PARAMETERS"  # Invalid download parameters

    AUTH_FAILED = "AUTH_FAILED"  # Authentication failure (invalid credentials/cookies)
    AUTH_REQUIRED = "AUTH_REQUIRED"  # Missing authentication
    FORBIDDEN = "FORBIDDEN"  # Action not allowed
    GEOFENCE = "GEOFENCE"  # Content not available in region

    NOT_FOUND = "NOT_FOUND"  # Resource not found (title, job, etc.)
    NO_CONTENT = "NO_CONTENT"  # No titles/tracks/episodes found
    JOB_NOT_FOUND = "JOB_NOT_FOUND"  # Download job doesn't exist

    RATE_LIMITED = "RATE_LIMITED"  # Service rate limiting

    # Server errors (5xx)
    INTERNAL_ERROR = "INTERNAL_ERROR"  # Unexpected server error
    SERVICE_ERROR = "SERVICE_ERROR"  # Streaming service API error
    NETWORK_ERROR = "NETWORK_ERROR"  # Network connectivity issue
    DRM_ERROR = "DRM_ERROR"  # DRM/license acquisition failure
    DOWNLOAD_ERROR = "DOWNLOAD_ERROR"  # Download process failure
    SERVICE_UNAVAILABLE = "SERVICE_UNAVAILABLE"  # Service temporarily unavailable
    WORKER_ERROR = "WORKER_ERROR"  # Download worker process error


class APIError(Exception):
    """
    Structured API error with error code, message, and details.

    Attributes:
        error_code: Standardized error code from APIErrorCode enum
        message: User-friendly error message
        details: Additional structured error information
        retryable: Whether the operation can be retried
        http_status: HTTP status code to return (default based on error_code)
    """

    def __init__(
        self,
        error_code: APIErrorCode,
        message: str,
        details: dict[str, Any] | None = None,
        retryable: bool = False,
        http_status: int | None = None,
    ):
        super().__init__(message)
        self.error_code = error_code
        self.message = message
        self.details = details or {}
        self.retryable = retryable
        self.http_status = http_status or self._default_http_status(error_code)

    @staticmethod
    def _default_http_status(error_code: APIErrorCode) -> int:
        """Map error codes to default HTTP status codes."""
        status_map = {
            # 400 Bad Request
            APIErrorCode.INVALID_INPUT: 400,
            APIErrorCode.INVALID_SERVICE: 400,
            APIErrorCode.INVALID_TITLE_ID: 400,
            APIErrorCode.INVALID_PROFILE: 400,
            APIErrorCode.INVALID_PROXY: 400,
            APIErrorCode.INVALID_LANGUAGE: 400,
            APIErrorCode.INVALID_PARAMETERS: 400,
            # 401 Unauthorized
            APIErrorCode.AUTH_REQUIRED: 401,
            APIErrorCode.AUTH_FAILED: 401,
            # 403 Forbidden
            APIErrorCode.FORBIDDEN: 403,
            APIErrorCode.GEOFENCE: 403,
            # 404 Not Found
            APIErrorCode.NOT_FOUND: 404,
            APIErrorCode.NO_CONTENT: 404,
            APIErrorCode.JOB_NOT_FOUND: 404,
            # 429 Too Many Requests
            APIErrorCode.RATE_LIMITED: 429,
            # 500 Internal Server Error
            APIErrorCode.INTERNAL_ERROR: 500,
            APIErrorCode.DOWNLOAD_ERROR: 500,
            APIErrorCode.WORKER_ERROR: 500,
            # 502 Bad Gateway
            APIErrorCode.SERVICE_ERROR: 502,
            APIErrorCode.DRM_ERROR: 502,
            # 503 Service Unavailable
            APIErrorCode.NETWORK_ERROR: 503,
            APIErrorCode.SERVICE_UNAVAILABLE: 503,
        }
        return status_map.get(error_code, 500)


def build_error_response(
    error: APIError | Exception,
    debug_mode: bool = False,
    extra_debug_info: dict[str, Any] | None = None,
) -> web.Response:
    """
    Build a structured JSON error response.

    Args:
        error: APIError or generic Exception to convert to response
        debug_mode: Whether to include technical debug information
        extra_debug_info: Additional debug info (stderr, stdout, etc.)

    Returns:
        aiohttp JSON response with structured error data
    """
    if isinstance(error, APIError):
        error_code = error.error_code.value
        message = error.message
        details = error.details
        http_status = error.http_status
        retryable = error.retryable
    else:
        # Generic exception - convert to INTERNAL_ERROR
        error_code = APIErrorCode.INTERNAL_ERROR.value
        message = str(error) or "An unexpected error occurred"
        details = {}
        http_status = 500
        retryable = False

    response_data: dict[str, Any] = {
        "status": "error",
        "error_code": error_code,
        "message": message,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

    # Add details if present
    if details:
        response_data["details"] = details

    # Add retryable hint if specified
    if retryable:
        response_data["retryable"] = True

    # Add debug information if in debug mode
    if debug_mode:
        debug_info: dict[str, Any] = {
            "exception_type": type(error).__name__,
        }

        # Add traceback for debugging
        if isinstance(error, Exception):
            debug_info["traceback"] = traceback.format_exc()

        # Add any extra debug info provided
        if extra_debug_info:
            debug_info.update(extra_debug_info)

        response_data["debug_info"] = debug_info

    return web.json_response(response_data, status=http_status)


def categorize_exception(
    exc: Exception,
    context: dict[str, Any] | None = None,
) -> APIError:
    """
    Categorize a generic exception into a structured APIError.

    This function attempts to identify the type of error based on the exception
    type, message patterns, and optional context information.

    Args:
        exc: The exception to categorize
        context: Optional context (service name, operation type, etc.)

    Returns:
        APIError with appropriate error code and details
    """
    context = context or {}
    exc_str = str(exc).lower()
    exc_type = type(exc).__name__

    # Authentication errors
    if any(keyword in exc_str for keyword in ["auth", "login", "credential", "unauthorized", "forbidden", "token"]):
        return APIError(
            error_code=APIErrorCode.AUTH_FAILED,
            message=f"Authentication failed: {exc}",
            details={**context, "reason": "authentication_error"},
            retryable=False,
        )

    # Network errors
    if any(
        keyword in exc_str
        for keyword in [
            "connection",
            "timeout",
            "network",
            "unreachable",
            "socket",
            "dns",
            "resolve",
        ]
    ) or exc_type in ["ConnectionError", "TimeoutError", "URLError", "SSLError"]:
        return APIError(
            error_code=APIErrorCode.NETWORK_ERROR,
            message=f"Network error occurred: {exc}",
            details={**context, "reason": "network_connectivity"},
            retryable=True,
            http_status=503,
        )

    # Geofence/region errors
    if any(keyword in exc_str for keyword in ["geofence", "region", "not available in", "territory"]):
        return APIError(
            error_code=APIErrorCode.GEOFENCE,
            message=f"Content not available in your region: {exc}",
            details={**context, "reason": "geofence_restriction"},
            retryable=False,
        )

    # Not found errors
    if any(keyword in exc_str for keyword in ["not found", "404", "does not exist", "invalid id"]):
        return APIError(
            error_code=APIErrorCode.NOT_FOUND,
            message=f"Resource not found: {exc}",
            details={**context, "reason": "not_found"},
            retryable=False,
        )

    # Rate limiting
    if any(keyword in exc_str for keyword in ["rate limit", "too many requests", "429", "throttle"]):
        return APIError(
            error_code=APIErrorCode.RATE_LIMITED,
            message=f"Rate limit exceeded: {exc}",
            details={**context, "reason": "rate_limited"},
            retryable=True,
            http_status=429,
        )

    # DRM errors
    if any(keyword in exc_str for keyword in ["drm", "license", "widevine", "playready", "decrypt"]):
        return APIError(
            error_code=APIErrorCode.DRM_ERROR,
            message=f"DRM error: {exc}",
            details={**context, "reason": "drm_failure"},
            retryable=False,
        )

    # Service unavailable
    if any(keyword in exc_str for keyword in ["service unavailable", "503", "maintenance", "temporarily unavailable"]):
        return APIError(
            error_code=APIErrorCode.SERVICE_UNAVAILABLE,
            message=f"Service temporarily unavailable: {exc}",
            details={**context, "reason": "service_unavailable"},
            retryable=True,
            http_status=503,
        )

    # Validation errors
    if any(keyword in exc_str for keyword in ["invalid", "malformed", "validation"]) or exc_type in [
        "ValueError",
        "ValidationError",
    ]:
        return APIError(
            error_code=APIErrorCode.INVALID_INPUT,
            message=f"Invalid input: {exc}",
            details={**context, "reason": "validation_failed"},
            retryable=False,
        )

    # Default to internal error for unknown exceptions
    return APIError(
        error_code=APIErrorCode.INTERNAL_ERROR,
        message=f"An unexpected error occurred: {exc}",
        details={**context, "exception_type": exc_type},
        retryable=False,
    )


def handle_api_exception(
    exc: Exception,
    context: dict[str, Any] | None = None,
    debug_mode: bool = False,
    extra_debug_info: dict[str, Any] | None = None,
) -> web.Response:
    """
    Convenience function to categorize an exception and build an error response.

    Args:
        exc: The exception to handle
        context: Optional context information
        debug_mode: Whether to include debug information
        extra_debug_info: Additional debug info

    Returns:
        Structured JSON error response
    """
    if isinstance(exc, APIError):
        api_error = exc
    else:
        api_error = categorize_exception(exc, context)

    return build_error_response(api_error, debug_mode, extra_debug_info)

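`categorize_exception` is a keyword-routing table: it lowercases the exception text, checks keyword groups in a fixed priority order, and falls through to `INTERNAL_ERROR`. A self-contained mirror of that idea (simplified names and a reduced keyword table, not the real `unshackle.core.api.errors` module):

```python
# Each entry: (keywords to match in the message, (error_code, http_status, retryable)).
# Order matters: the first matching group wins, as in categorize_exception.
KEYWORD_CODES = [
    (("auth", "unauthorized", "credential"), ("AUTH_FAILED", 401, False)),
    (("connection", "timeout", "network"), ("NETWORK_ERROR", 503, True)),
    (("rate limit", "429", "throttle"), ("RATE_LIMITED", 429, True)),
    (("not found", "404"), ("NOT_FOUND", 404, False)),
]


def categorize(exc: Exception) -> tuple[str, int, bool]:
    text = str(exc).lower()
    for keywords, outcome in KEYWORD_CODES:
        if any(k in text for k in keywords):
            return outcome
    # Unknown exceptions fall through to a generic server error.
    return ("INTERNAL_ERROR", 500, False)


net = categorize(ConnectionError("connection timed out"))
unknown = categorize(RuntimeError("totally unexpected"))
print(net)      # → ('NETWORK_ERROR', 503, True)
print(unknown)  # → ('INTERNAL_ERROR', 500, False)
```

Marking transient categories (network, rate limit) as retryable is what lets API clients distinguish "try again later" from hard failures.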
936
unshackle/core/api/handlers.py
Normal file
@@ -0,0 +1,936 @@
|
||||
import logging
from typing import Any, Dict, List, Optional

from aiohttp import web

from unshackle.core.api.errors import APIError, APIErrorCode, handle_api_exception
from unshackle.core.constants import AUDIO_CODEC_MAP, DYNAMIC_RANGE_MAP, VIDEO_CODEC_MAP
from unshackle.core.proxies.basic import Basic
from unshackle.core.proxies.hola import Hola
from unshackle.core.proxies.nordvpn import NordVPN
from unshackle.core.proxies.surfsharkvpn import SurfsharkVPN
from unshackle.core.services import Services
from unshackle.core.titles import Episode, Movie, Title_T
from unshackle.core.tracks import Audio, Subtitle, Video

log = logging.getLogger("api")

DEFAULT_DOWNLOAD_PARAMS = {
    "profile": None,
    "quality": [],
    "vcodec": None,
    "acodec": None,
    "vbitrate": None,
    "abitrate": None,
    "range": ["SDR"],
    "channels": None,
    "no_atmos": False,
    "wanted": [],
    "latest_episode": False,
    "lang": ["orig"],
    "v_lang": [],
    "a_lang": [],
    "s_lang": ["all"],
    "require_subs": [],
    "forced_subs": False,
    "exact_lang": False,
    "sub_format": None,
    "video_only": False,
    "audio_only": False,
    "subs_only": False,
    "chapters_only": False,
    "no_subs": False,
    "no_audio": False,
    "no_chapters": False,
    "audio_description": False,
    "slow": False,
    "skip_dl": False,
    "export": None,
    "cdm_only": None,
    "no_proxy": False,
    "no_folder": False,
    "no_source": False,
    "no_mux": False,
    "workers": None,
    "downloads": 1,
    "best_available": False,
}

def initialize_proxy_providers() -> List[Any]:
    """Initialize and return available proxy providers."""
    proxy_providers = []
    try:
        from unshackle.core import binaries

        # Load the main unshackle config to get proxy provider settings
        from unshackle.core.config import config as main_config

        log.debug(f"Main config proxy providers: {getattr(main_config, 'proxy_providers', {})}")
        log.debug(f"Available proxy provider configs: {list(getattr(main_config, 'proxy_providers', {}).keys())}")

        # Use main_config instead of the service-specific config for proxy providers
        proxy_config = getattr(main_config, "proxy_providers", {})

        if proxy_config.get("basic"):
            log.debug("Loading Basic proxy provider")
            proxy_providers.append(Basic(**proxy_config["basic"]))
        if proxy_config.get("nordvpn"):
            log.debug("Loading NordVPN proxy provider")
            proxy_providers.append(NordVPN(**proxy_config["nordvpn"]))
        if proxy_config.get("surfsharkvpn"):
            log.debug("Loading SurfsharkVPN proxy provider")
            proxy_providers.append(SurfsharkVPN(**proxy_config["surfsharkvpn"]))
        if hasattr(binaries, "HolaProxy") and binaries.HolaProxy:
            log.debug("Loading Hola proxy provider")
            proxy_providers.append(Hola())

        for proxy_provider in proxy_providers:
            log.info(f"Loaded {proxy_provider.__class__.__name__}: {proxy_provider}")

        if not proxy_providers:
            log.warning("No proxy providers were loaded. Check your proxy provider configuration in unshackle.yaml")

    except Exception as e:
        log.warning(f"Failed to initialize some proxy providers: {e}")

    return proxy_providers

def resolve_proxy(proxy: str, proxy_providers: List[Any]) -> str:
    """Resolve proxy parameter to actual proxy URI."""
    import re

    if not proxy:
        return proxy

    # Check if explicit proxy URI
    if re.match(r"^https?://", proxy):
        return proxy

    # Handle provider:country format (e.g., "nordvpn:us")
    requested_provider = None
    if re.match(r"^[a-z]+:.+$", proxy, re.IGNORECASE):
        requested_provider, proxy = proxy.split(":", maxsplit=1)

    # Handle country code format (e.g., "us", "uk")
    if re.match(r"^[a-z]{2}(?:\d+)?$", proxy, re.IGNORECASE):
        proxy = proxy.lower()

        if requested_provider:
            # Find specific provider (case-insensitive matching)
            proxy_provider = next(
                (x for x in proxy_providers if x.__class__.__name__.lower() == requested_provider.lower()),
                None,
            )
            if not proxy_provider:
                available_providers = [x.__class__.__name__ for x in proxy_providers]
                raise ValueError(
                    f"The proxy provider '{requested_provider}' was not recognized. Available providers: {available_providers}"
                )

            proxy_uri = proxy_provider.get_proxy(proxy)
            if not proxy_uri:
                raise ValueError(f"The proxy provider {requested_provider} had no proxy for {proxy}")

            log.info(f"Using {proxy_provider.__class__.__name__} Proxy: {proxy_uri}")
            return proxy_uri
        else:
            # Try all providers
            for proxy_provider in proxy_providers:
                proxy_uri = proxy_provider.get_proxy(proxy)
                if proxy_uri:
                    log.info(f"Using {proxy_provider.__class__.__name__} Proxy: {proxy_uri}")
                    return proxy_uri

            raise ValueError(f"No proxy provider had a proxy for {proxy}")

    # Return as-is if not recognized format
    log.info(f"Using explicit Proxy: {proxy}")
    return proxy

def validate_service(service_tag: str) -> Optional[str]:
    """Validate and normalize service tag."""
    try:
        normalized = Services.get_tag(service_tag)
        service_path = Services.get_path(normalized)
        if not service_path.exists():
            return None
        return normalized
    except Exception:
        return None

def serialize_title(title: Title_T) -> Dict[str, Any]:
    """Convert a title object to JSON-serializable dict."""
    if isinstance(title, Episode):
        episode_name = title.name if title.name else f"Episode {title.number:02d}"
        result = {
            "type": "episode",
            "name": episode_name,
            "series_title": str(title.title),
            "season": title.season,
            "number": title.number,
            "year": title.year,
            "id": str(title.id) if hasattr(title, "id") else None,
        }
    elif isinstance(title, Movie):
        result = {
            "type": "movie",
            "name": str(title.name) if hasattr(title, "name") else str(title),
            "year": title.year,
            "id": str(title.id) if hasattr(title, "id") else None,
        }
    else:
        result = {
            "type": "other",
            "name": str(title.name) if hasattr(title, "name") else str(title),
            "id": str(title.id) if hasattr(title, "id") else None,
        }

    return result

def serialize_video_track(track: Video) -> Dict[str, Any]:
    """Convert video track to JSON-serializable dict."""
    codec_name = track.codec.name if hasattr(track.codec, "name") else str(track.codec)
    range_name = track.range.name if hasattr(track.range, "name") else str(track.range)

    return {
        "id": str(track.id),
        "codec": codec_name,
        "codec_display": VIDEO_CODEC_MAP.get(codec_name, codec_name),
        "bitrate": int(track.bitrate / 1000) if track.bitrate else None,
        "width": track.width,
        "height": track.height,
        "resolution": f"{track.width}x{track.height}" if track.width and track.height else None,
        "fps": track.fps if track.fps else None,
        "range": range_name,
        "range_display": DYNAMIC_RANGE_MAP.get(range_name, range_name),
        "language": str(track.language) if track.language else None,
        "drm": str(track.drm) if hasattr(track, "drm") and track.drm else None,
    }


def serialize_audio_track(track: Audio) -> Dict[str, Any]:
    """Convert audio track to JSON-serializable dict."""
    codec_name = track.codec.name if hasattr(track.codec, "name") else str(track.codec)

    return {
        "id": str(track.id),
        "codec": codec_name,
        "codec_display": AUDIO_CODEC_MAP.get(codec_name, codec_name),
        "bitrate": int(track.bitrate / 1000) if track.bitrate else None,
        "channels": track.channels if track.channels else None,
        "language": str(track.language) if track.language else None,
        "atmos": track.atmos if hasattr(track, "atmos") else False,
        "descriptive": track.descriptive if hasattr(track, "descriptive") else False,
        "drm": str(track.drm) if hasattr(track, "drm") and track.drm else None,
    }


def serialize_subtitle_track(track: Subtitle) -> Dict[str, Any]:
    """Convert subtitle track to JSON-serializable dict."""
    return {
        "id": str(track.id),
        "codec": track.codec.name if hasattr(track.codec, "name") else str(track.codec),
        "language": str(track.language) if track.language else None,
        "forced": track.forced if hasattr(track, "forced") else False,
        "sdh": track.sdh if hasattr(track, "sdh") else False,
        "cc": track.cc if hasattr(track, "cc") else False,
    }

async def list_titles_handler(data: Dict[str, Any], request: Optional[web.Request] = None) -> web.Response:
    """Handle list-titles request."""
    service_tag = data.get("service")
    title_id = data.get("title_id")
    profile = data.get("profile")

    if not service_tag:
        raise APIError(
            APIErrorCode.INVALID_INPUT,
            "Missing required parameter: service",
            details={"missing_parameter": "service"},
        )

    if not title_id:
        raise APIError(
            APIErrorCode.INVALID_INPUT,
            "Missing required parameter: title_id",
            details={"missing_parameter": "title_id"},
        )

    normalized_service = validate_service(service_tag)
    if not normalized_service:
        raise APIError(
            APIErrorCode.INVALID_SERVICE,
            f"Invalid or unavailable service: {service_tag}",
            details={"service": service_tag},
        )

    try:
        import inspect

        import click
        import yaml

        from unshackle.commands.dl import dl
        from unshackle.core.config import config
        from unshackle.core.utils.click_types import ContextData
        from unshackle.core.utils.collections import merge_dict

        service_config_path = Services.get_path(normalized_service) / config.filenames.config
        if service_config_path.exists():
            service_config = yaml.safe_load(service_config_path.read_text(encoding="utf8"))
        else:
            service_config = {}
        merge_dict(config.services.get(normalized_service), service_config)

        @click.command()
        @click.pass_context
        def dummy_service(ctx: click.Context) -> None:
            pass

        # Handle proxy configuration
        proxy_param = data.get("proxy")
        no_proxy = data.get("no_proxy", False)
        proxy_providers = []

        if not no_proxy:
            proxy_providers = initialize_proxy_providers()

        if proxy_param and not no_proxy:
            try:
                resolved_proxy = resolve_proxy(proxy_param, proxy_providers)
                proxy_param = resolved_proxy
            except ValueError as e:
                raise APIError(
                    APIErrorCode.INVALID_PROXY,
                    f"Proxy error: {e}",
                    details={"proxy": proxy_param, "service": normalized_service},
                )

        ctx = click.Context(dummy_service)
        ctx.obj = ContextData(config=service_config, cdm=None, proxy_providers=proxy_providers, profile=profile)
        ctx.params = {"proxy": proxy_param, "no_proxy": no_proxy}

        service_module = Services.load(normalized_service)

        dummy_service.name = normalized_service
        dummy_service.params = [click.Argument([title_id], type=str)]
        ctx.invoked_subcommand = normalized_service

        service_ctx = click.Context(dummy_service, parent=ctx)
        service_ctx.obj = ctx.obj

        service_kwargs = {"title": title_id}

        # Add additional parameters from request data
        for key, value in data.items():
            if key not in ["service", "title_id", "profile", "season", "episode", "wanted", "proxy", "no_proxy"]:
                service_kwargs[key] = value

        # Get service parameter info and click command defaults
        service_init_params = inspect.signature(service_module.__init__).parameters

        # Extract default values from the click command
        if hasattr(service_module, "cli") and hasattr(service_module.cli, "params"):
            for param in service_module.cli.params:
                if hasattr(param, "name") and param.name not in service_kwargs:
                    # Add default value if parameter is not already provided
                    if hasattr(param, "default") and param.default is not None:
                        service_kwargs[param.name] = param.default

        # Handle required parameters that don't have click defaults
        for param_name, param_info in service_init_params.items():
            if param_name not in service_kwargs and param_name not in ["self", "ctx"]:
                # Check if parameter is required (no default value in signature)
                if param_info.default is inspect.Parameter.empty:
                    # Provide sensible defaults for common required parameters
                    if param_name == "meta_lang":
                        service_kwargs[param_name] = None
                    elif param_name == "movie":
                        service_kwargs[param_name] = False
                    else:
                        # Log warning for unknown required parameters
                        log.warning(f"Unknown required parameter '{param_name}' for service {normalized_service}")

        # Filter out any parameters that the service doesn't accept
        filtered_kwargs = {}
        for key, value in service_kwargs.items():
            if key in service_init_params:
                filtered_kwargs[key] = value

        service_instance = service_module(service_ctx, **filtered_kwargs)

        cookies = dl.get_cookie_jar(normalized_service, profile)
        credential = dl.get_credentials(normalized_service, profile)
        service_instance.authenticate(cookies, credential)

        titles = service_instance.get_titles()

        if hasattr(titles, "__iter__") and not isinstance(titles, str):
            title_list = [serialize_title(t) for t in titles]
        else:
            title_list = [serialize_title(titles)]

        return web.json_response({"titles": title_list})

    except APIError:
        raise
    except Exception as e:
        log.exception("Error listing titles")
        debug_mode = request.app.get("debug_api", False) if request else False
        return handle_api_exception(
            e,
            context={"operation": "list_titles", "service": normalized_service, "title_id": title_id},
            debug_mode=debug_mode,
        )

async def list_tracks_handler(data: Dict[str, Any], request: Optional[web.Request] = None) -> web.Response:
    """Handle list-tracks request."""
    service_tag = data.get("service")
    title_id = data.get("title_id")
    profile = data.get("profile")

    if not service_tag:
        raise APIError(
            APIErrorCode.INVALID_INPUT,
            "Missing required parameter: service",
            details={"missing_parameter": "service"},
        )

    if not title_id:
        raise APIError(
            APIErrorCode.INVALID_INPUT,
            "Missing required parameter: title_id",
            details={"missing_parameter": "title_id"},
        )

    normalized_service = validate_service(service_tag)
    if not normalized_service:
        raise APIError(
            APIErrorCode.INVALID_SERVICE,
            f"Invalid or unavailable service: {service_tag}",
            details={"service": service_tag},
        )

    try:
        import inspect

        import click
        import yaml

        from unshackle.commands.dl import dl
        from unshackle.core.config import config
        from unshackle.core.utils.click_types import ContextData
        from unshackle.core.utils.collections import merge_dict

        service_config_path = Services.get_path(normalized_service) / config.filenames.config
        if service_config_path.exists():
            service_config = yaml.safe_load(service_config_path.read_text(encoding="utf8"))
        else:
            service_config = {}
        merge_dict(config.services.get(normalized_service), service_config)

        @click.command()
        @click.pass_context
        def dummy_service(ctx: click.Context) -> None:
            pass

        # Handle proxy configuration
        proxy_param = data.get("proxy")
        no_proxy = data.get("no_proxy", False)
        proxy_providers = []

        if not no_proxy:
            proxy_providers = initialize_proxy_providers()

        if proxy_param and not no_proxy:
            try:
                resolved_proxy = resolve_proxy(proxy_param, proxy_providers)
                proxy_param = resolved_proxy
            except ValueError as e:
                raise APIError(
                    APIErrorCode.INVALID_PROXY,
                    f"Proxy error: {e}",
                    details={"proxy": proxy_param, "service": normalized_service},
                )

        ctx = click.Context(dummy_service)
        ctx.obj = ContextData(config=service_config, cdm=None, proxy_providers=proxy_providers, profile=profile)
        ctx.params = {"proxy": proxy_param, "no_proxy": no_proxy}

        service_module = Services.load(normalized_service)

        dummy_service.name = normalized_service
        dummy_service.params = [click.Argument([title_id], type=str)]
        ctx.invoked_subcommand = normalized_service

        service_ctx = click.Context(dummy_service, parent=ctx)
        service_ctx.obj = ctx.obj

        service_kwargs = {"title": title_id}

        # Add additional parameters from request data
        for key, value in data.items():
            if key not in ["service", "title_id", "profile", "season", "episode", "wanted", "proxy", "no_proxy"]:
                service_kwargs[key] = value

        # Get service parameter info and click command defaults
        service_init_params = inspect.signature(service_module.__init__).parameters

        # Extract default values from the click command
        if hasattr(service_module, "cli") and hasattr(service_module.cli, "params"):
            for param in service_module.cli.params:
                if hasattr(param, "name") and param.name not in service_kwargs:
                    # Add default value if parameter is not already provided
                    if hasattr(param, "default") and param.default is not None:
                        service_kwargs[param.name] = param.default

        # Handle required parameters that don't have click defaults
        for param_name, param_info in service_init_params.items():
            if param_name not in service_kwargs and param_name not in ["self", "ctx"]:
                # Check if parameter is required (no default value in signature)
                if param_info.default is inspect.Parameter.empty:
                    # Provide sensible defaults for common required parameters
                    if param_name == "meta_lang":
                        service_kwargs[param_name] = None
                    elif param_name == "movie":
                        service_kwargs[param_name] = False
                    else:
                        # Log warning for unknown required parameters
                        log.warning(f"Unknown required parameter '{param_name}' for service {normalized_service}")

        # Filter out any parameters that the service doesn't accept
        filtered_kwargs = {}
        for key, value in service_kwargs.items():
            if key in service_init_params:
                filtered_kwargs[key] = value

        service_instance = service_module(service_ctx, **filtered_kwargs)

        cookies = dl.get_cookie_jar(normalized_service, profile)
        credential = dl.get_credentials(normalized_service, profile)
        service_instance.authenticate(cookies, credential)

        titles = service_instance.get_titles()

        wanted_param = data.get("wanted")
        season = data.get("season")
        episode = data.get("episode")

        if hasattr(titles, "__iter__") and not isinstance(titles, str):
            titles_list = list(titles)

            wanted = None
            if wanted_param:
                from unshackle.core.utils.click_types import SeasonRange

                try:
                    season_range = SeasonRange()
                    wanted = season_range.parse_tokens(wanted_param)
                    log.debug(f"Parsed wanted '{wanted_param}' into {len(wanted)} episodes: {wanted[:10]}...")
                except Exception as e:
                    raise APIError(
                        APIErrorCode.INVALID_PARAMETERS,
                        f"Invalid wanted parameter: {e}",
                        details={"wanted": wanted_param, "service": normalized_service},
                    )
            elif season is not None and episode is not None:
                wanted = [f"{season}x{episode}"]

            if wanted:
                # Filter titles based on wanted episodes, similar to how dl.py does it
                matching_titles = []
                log.debug(f"Filtering {len(titles_list)} titles with {len(wanted)} wanted episodes")
                for title in titles_list:
                    if isinstance(title, Episode):
                        episode_key = f"{title.season}x{title.number}"
                        if episode_key in wanted:
                            log.debug(f"Episode {episode_key} matches wanted list")
                            matching_titles.append(title)
                        else:
                            log.debug(f"Episode {episode_key} not in wanted list")
                    else:
                        matching_titles.append(title)

                log.debug(f"Found {len(matching_titles)} matching titles")

                if not matching_titles:
                    raise APIError(
                        APIErrorCode.NO_CONTENT,
                        "No episodes found matching wanted criteria",
                        details={
                            "service": normalized_service,
                            "title_id": title_id,
                            "wanted": wanted_param or f"{season}x{episode}",
                        },
                    )

                # If multiple episodes match, return tracks for all episodes
                if len(matching_titles) > 1 and all(isinstance(t, Episode) for t in matching_titles):
                    episodes_data = []
                    failed_episodes = []

                    # Sort matching titles by season and episode number for consistent ordering
                    sorted_titles = sorted(matching_titles, key=lambda t: (t.season, t.number))

                    for title in sorted_titles:
                        try:
                            tracks = service_instance.get_tracks(title)
                            video_tracks = sorted(tracks.videos, key=lambda t: t.bitrate or 0, reverse=True)
                            audio_tracks = sorted(tracks.audio, key=lambda t: t.bitrate or 0, reverse=True)

                            episode_data = {
                                "title": serialize_title(title),
                                "video": [serialize_video_track(t) for t in video_tracks],
                                "audio": [serialize_audio_track(t) for t in audio_tracks],
                                "subtitles": [serialize_subtitle_track(t) for t in tracks.subtitles],
                            }
                            episodes_data.append(episode_data)
                            log.debug(f"Successfully got tracks for {title.season}x{title.number}")
                        except SystemExit:
                            # Service calls sys.exit() for unavailable episodes - catch and skip
                            failed_episodes.append(f"S{title.season}E{title.number:02d}")
                            log.debug(f"Episode {title.season}x{title.number} not available, skipping")
                            continue
                        except Exception as e:
                            # Handle other errors gracefully
                            failed_episodes.append(f"S{title.season}E{title.number:02d}")
                            log.debug(f"Error getting tracks for {title.season}x{title.number}: {e}")
                            continue

                    if episodes_data:
                        response = {"episodes": episodes_data}
                        if failed_episodes:
                            response["unavailable_episodes"] = failed_episodes
                        return web.json_response(response)
                    else:
                        raise APIError(
                            APIErrorCode.NO_CONTENT,
                            f"No available episodes found. Unavailable: {', '.join(failed_episodes)}",
                            details={
                                "service": normalized_service,
                                "title_id": title_id,
                                "unavailable_episodes": failed_episodes,
                            },
                        )
                else:
                    # Single episode or movie
                    first_title = matching_titles[0]
            else:
                first_title = titles_list[0]
        else:
            first_title = titles

        tracks = service_instance.get_tracks(first_title)

        video_tracks = sorted(tracks.videos, key=lambda t: t.bitrate or 0, reverse=True)
        audio_tracks = sorted(tracks.audio, key=lambda t: t.bitrate or 0, reverse=True)

        response = {
            "title": serialize_title(first_title),
            "video": [serialize_video_track(t) for t in video_tracks],
            "audio": [serialize_audio_track(t) for t in audio_tracks],
            "subtitles": [serialize_subtitle_track(t) for t in tracks.subtitles],
        }

        return web.json_response(response)

    except APIError:
        raise
    except Exception as e:
        log.exception("Error listing tracks")
        debug_mode = request.app.get("debug_api", False) if request else False
        return handle_api_exception(
            e,
            context={"operation": "list_tracks", "service": normalized_service, "title_id": title_id},
            debug_mode=debug_mode,
        )

def validate_download_parameters(data: Dict[str, Any]) -> Optional[str]:
    """
    Validate download parameters and return error message if invalid.

    Returns:
        None if valid, error message string if invalid
    """
    if "vcodec" in data and data["vcodec"]:
        valid_vcodecs = ["H264", "H265", "VP9", "AV1"]
        if data["vcodec"].upper() not in valid_vcodecs:
            return f"Invalid vcodec: {data['vcodec']}. Must be one of: {', '.join(valid_vcodecs)}"

    if "acodec" in data and data["acodec"]:
        valid_acodecs = ["AAC", "AC3", "EAC3", "OPUS", "FLAC", "ALAC", "VORBIS", "DTS"]
        if data["acodec"].upper() not in valid_acodecs:
            return f"Invalid acodec: {data['acodec']}. Must be one of: {', '.join(valid_acodecs)}"

    if "sub_format" in data and data["sub_format"]:
        valid_sub_formats = ["SRT", "VTT", "ASS", "SSA"]
        if data["sub_format"].upper() not in valid_sub_formats:
            return f"Invalid sub_format: {data['sub_format']}. Must be one of: {', '.join(valid_sub_formats)}"

    if "vbitrate" in data and data["vbitrate"] is not None:
        if not isinstance(data["vbitrate"], int) or data["vbitrate"] <= 0:
            return "vbitrate must be a positive integer"

    if "abitrate" in data and data["abitrate"] is not None:
        if not isinstance(data["abitrate"], int) or data["abitrate"] <= 0:
            return "abitrate must be a positive integer"

    if "channels" in data and data["channels"] is not None:
        if not isinstance(data["channels"], (int, float)) or data["channels"] <= 0:
            return "channels must be a positive number"

    if "workers" in data and data["workers"] is not None:
        if not isinstance(data["workers"], int) or data["workers"] <= 0:
            return "workers must be a positive integer"

    if "downloads" in data and data["downloads"] is not None:
        if not isinstance(data["downloads"], int) or data["downloads"] <= 0:
            return "downloads must be a positive integer"

    exclusive_flags = []
    if data.get("video_only"):
        exclusive_flags.append("video_only")
    if data.get("audio_only"):
        exclusive_flags.append("audio_only")
    if data.get("subs_only"):
        exclusive_flags.append("subs_only")
    if data.get("chapters_only"):
        exclusive_flags.append("chapters_only")

    if len(exclusive_flags) > 1:
        return f"Cannot use multiple exclusive flags: {', '.join(exclusive_flags)}"

    if data.get("no_subs") and data.get("subs_only"):
        return "Cannot use both no_subs and subs_only"
    if data.get("no_audio") and data.get("audio_only"):
        return "Cannot use both no_audio and audio_only"

    if data.get("s_lang") and data.get("require_subs"):
        return "Cannot use both s_lang and require_subs"

    if "range" in data and data["range"]:
        valid_ranges = ["SDR", "HDR10", "HDR10+", "DV", "HLG"]
        if isinstance(data["range"], list):
            for r in data["range"]:
                if r.upper() not in valid_ranges:
                    return f"Invalid range value: {r}. Must be one of: {', '.join(valid_ranges)}"
        elif data["range"].upper() not in valid_ranges:
            return f"Invalid range value: {data['range']}. Must be one of: {', '.join(valid_ranges)}"

    return None

async def download_handler(data: Dict[str, Any], request: Optional[web.Request] = None) -> web.Response:
    """Handle download request - create and queue a download job."""
    from unshackle.core.api.download_manager import get_download_manager

    service_tag = data.get("service")
    title_id = data.get("title_id")

    if not service_tag:
        raise APIError(
            APIErrorCode.INVALID_INPUT,
            "Missing required parameter: service",
            details={"missing_parameter": "service"},
        )

    if not title_id:
        raise APIError(
            APIErrorCode.INVALID_INPUT,
            "Missing required parameter: title_id",
            details={"missing_parameter": "title_id"},
        )

    normalized_service = validate_service(service_tag)
    if not normalized_service:
        raise APIError(
            APIErrorCode.INVALID_SERVICE,
            f"Invalid or unavailable service: {service_tag}",
            details={"service": service_tag},
        )

    validation_error = validate_download_parameters(data)
    if validation_error:
        raise APIError(
            APIErrorCode.INVALID_PARAMETERS,
            validation_error,
            details={"service": normalized_service, "title_id": title_id},
        )

    try:
        # Load service module to extract service-specific parameter defaults
        service_module = Services.load(normalized_service)
        service_specific_defaults = {}

        # Extract default values from the service's click command
        if hasattr(service_module, "cli") and hasattr(service_module.cli, "params"):
            for param in service_module.cli.params:
                if hasattr(param, "name") and hasattr(param, "default") and param.default is not None:
                    # Store service-specific defaults (e.g., drm_system, hydrate_track, profile for NF)
                    service_specific_defaults[param.name] = param.default

        # Get download manager and start workers if needed
        manager = get_download_manager()
        await manager.start_workers()

        # Create download job with filtered parameters (exclude service and title_id as they're already passed)
        filtered_params = {k: v for k, v in data.items() if k not in ["service", "title_id"]}
        # Merge defaults with provided parameters (user params override service defaults, which override global defaults)
        params_with_defaults = {**DEFAULT_DOWNLOAD_PARAMS, **service_specific_defaults, **filtered_params}
        job = manager.create_job(normalized_service, title_id, **params_with_defaults)

        return web.json_response(
            {"job_id": job.job_id, "status": job.status.value, "created_time": job.created_time.isoformat()}, status=202
        )

    except APIError:
        raise
    except Exception as e:
        log.exception("Error creating download job")
        debug_mode = request.app.get("debug_api", False) if request else False
        return handle_api_exception(
            e,
            context={"operation": "create_download_job", "service": normalized_service, "title_id": title_id},
            debug_mode=debug_mode,
        )

async def list_download_jobs_handler(data: Dict[str, Any], request: Optional[web.Request] = None) -> web.Response:
    """Handle list download jobs request with optional filtering and sorting."""
    from unshackle.core.api.download_manager import get_download_manager

    try:
        manager = get_download_manager()
        jobs = manager.list_jobs()

        status_filter = data.get("status")
        if status_filter:
            jobs = [job for job in jobs if job.status.value == status_filter]

        service_filter = data.get("service")
        if service_filter:
            jobs = [job for job in jobs if job.service == service_filter]

        sort_by = data.get("sort_by", "created_time")
        sort_order = data.get("sort_order", "desc")

        valid_sort_fields = ["created_time", "started_time", "completed_time", "progress", "status", "service"]
        if sort_by not in valid_sort_fields:
            raise APIError(
                APIErrorCode.INVALID_PARAMETERS,
                f"Invalid sort_by: {sort_by}. Must be one of: {', '.join(valid_sort_fields)}",
                details={"sort_by": sort_by, "valid_values": valid_sort_fields},
            )

        if sort_order not in ["asc", "desc"]:
            raise APIError(
                APIErrorCode.INVALID_PARAMETERS,
                "Invalid sort_order: must be 'asc' or 'desc'",
                details={"sort_order": sort_order, "valid_values": ["asc", "desc"]},
            )

        reverse = sort_order == "desc"

        def get_sort_key(job):
            """Get the sorting key value, handling None values."""
            value = getattr(job, sort_by, None)
            if value is None:
                if sort_by in ["created_time", "started_time", "completed_time"]:
                    from datetime import datetime

                    return datetime.min if not reverse else datetime.max
                elif sort_by == "progress":
                    return 0
                elif sort_by in ["status", "service"]:
                    return ""
            return value

        jobs = sorted(jobs, key=get_sort_key, reverse=reverse)

        job_list = [job.to_dict(include_full_details=False) for job in jobs]

        return web.json_response({"jobs": job_list})

    except APIError:
        raise
    except Exception as e:
        log.exception("Error listing download jobs")
        debug_mode = request.app.get("debug_api", False) if request else False
        return handle_api_exception(
            e,
            context={"operation": "list_download_jobs"},
            debug_mode=debug_mode,
        )

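The `get_sort_key` trick above (substituting `datetime.min`/`datetime.max` for missing timestamps so `sorted` never compares `None` against a real value) can be sketched in isolation. `FakeJob` and its fields here are hypothetical stand-ins for the real job objects:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional


@dataclass
class FakeJob:  # hypothetical stand-in for a real download job
    job_id: str
    created_time: Optional[datetime]
    progress: float


def sort_jobs(jobs: List[FakeJob], sort_by: str = "created_time", reverse: bool = True) -> List[FakeJob]:
    """Sort jobs, substituting a datetime sentinel when the field is None."""

    def key(job):
        value = getattr(job, sort_by, None)
        if value is None:
            # Sentinel keeps the comparison valid; with descending order the
            # None jobs get datetime.max and therefore sort to the front.
            return datetime.min if not reverse else datetime.max
        return value

    return sorted(jobs, key=key, reverse=reverse)


jobs = [
    FakeJob("a", datetime(2025, 1, 2), 0.5),
    FakeJob("b", None, 0.1),
    FakeJob("c", datetime(2025, 1, 1), 0.9),
]
order = [j.job_id for j in sort_jobs(jobs)]
```

Note the consequence of the sentinel choice: under descending order, jobs with no `created_time` sort ahead of all dated jobs.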
async def get_download_job_handler(job_id: str, request: Optional[web.Request] = None) -> web.Response:
    """Handle get specific download job request."""
    from unshackle.core.api.download_manager import get_download_manager

    try:
        manager = get_download_manager()
        job = manager.get_job(job_id)

        if not job:
            raise APIError(
                APIErrorCode.JOB_NOT_FOUND,
                "Job not found",
                details={"job_id": job_id},
            )

        return web.json_response(job.to_dict(include_full_details=True))

    except APIError:
        raise
    except Exception as e:
        log.exception(f"Error getting download job {job_id}")
        debug_mode = request.app.get("debug_api", False) if request else False
        return handle_api_exception(
            e,
            context={"operation": "get_download_job", "job_id": job_id},
            debug_mode=debug_mode,
        )

async def cancel_download_job_handler(job_id: str, request: Optional[web.Request] = None) -> web.Response:
    """Handle cancel download job request."""
    from unshackle.core.api.download_manager import get_download_manager

    try:
        manager = get_download_manager()

        if not manager.get_job(job_id):
            raise APIError(
                APIErrorCode.JOB_NOT_FOUND,
                "Job not found",
                details={"job_id": job_id},
            )

        success = manager.cancel_job(job_id)

        if success:
            return web.json_response({"status": "success", "message": "Job cancelled"})
        else:
            raise APIError(
                APIErrorCode.INVALID_PARAMETERS,
                "Job cannot be cancelled (already completed or failed)",
                details={"job_id": job_id},
            )

    except APIError:
        raise
    except Exception as e:
        log.exception(f"Error cancelling download job {job_id}")
        debug_mode = request.app.get("debug_api", False) if request else False
        return handle_api_exception(
            e,
            context={"operation": "cancel_download_job", "job_id": job_id},
            debug_mode=debug_mode,
        )

758 unshackle/core/api/routes.py (new file)
@@ -0,0 +1,758 @@
import logging
import re

from aiohttp import web
from aiohttp_swagger3 import SwaggerDocs, SwaggerInfo, SwaggerUiSettings

from unshackle.core import __version__
from unshackle.core.api.errors import APIError, APIErrorCode, build_error_response, handle_api_exception
from unshackle.core.api.handlers import (cancel_download_job_handler, download_handler, get_download_job_handler,
                                         list_download_jobs_handler, list_titles_handler, list_tracks_handler)
from unshackle.core.services import Services
from unshackle.core.update_checker import UpdateChecker


@web.middleware
async def cors_middleware(request: web.Request, handler):
    """Add CORS headers to all responses."""
    # Handle preflight requests
    if request.method == "OPTIONS":
        response = web.Response()
    else:
        response = await handler(request)

    # Add CORS headers
    response.headers["Access-Control-Allow-Origin"] = "*"
    response.headers["Access-Control-Allow-Methods"] = "GET, POST, PUT, DELETE, OPTIONS"
    response.headers["Access-Control-Allow-Headers"] = "Content-Type, X-API-Key, Authorization"
    response.headers["Access-Control-Max-Age"] = "3600"

    return response


log = logging.getLogger("api")

async def health(request: web.Request) -> web.Response:
    """
    Health check endpoint.
    ---
    summary: Health check
    description: Get server health status, version info, and update availability
    responses:
      '200':
        description: Health status
        content:
          application/json:
            schema:
              type: object
              properties:
                status:
                  type: string
                  example: ok
                version:
                  type: string
                  example: "2.0.0"
                update_check:
                  type: object
                  properties:
                    update_available:
                      type: boolean
                      nullable: true
                    current_version:
                      type: string
                    latest_version:
                      type: string
                      nullable: true
    """
    try:
        latest_version = await UpdateChecker.check_for_updates(__version__)
        update_info = {
            "update_available": latest_version is not None,
            "current_version": __version__,
            "latest_version": latest_version,
        }
    except Exception as e:
        log.warning(f"Failed to check for updates: {e}")
        update_info = {"update_available": None, "current_version": __version__, "latest_version": None}

    return web.json_response({"status": "ok", "version": __version__, "update_check": update_info})

async def services(request: web.Request) -> web.Response:
    """
    List available services.
    ---
    summary: List services
    description: Get all available streaming services with their details
    responses:
      '200':
        description: List of services
        content:
          application/json:
            schema:
              type: object
              properties:
                services:
                  type: array
                  items:
                    type: object
                    properties:
                      tag:
                        type: string
                      aliases:
                        type: array
                        items:
                          type: string
                      geofence:
                        type: array
                        items:
                          type: string
                      title_regex:
                        oneOf:
                          - type: string
                          - type: array
                            items:
                              type: string
                        nullable: true
                      url:
                        type: string
                        nullable: true
                        description: Service URL from short_help
                      help:
                        type: string
                        nullable: true
                        description: Full service documentation
      '500':
        description: Server error
        content:
          application/json:
            schema:
              type: object
              properties:
                status:
                  type: string
                  example: error
                error_code:
                  type: string
                  example: INTERNAL_ERROR
                message:
                  type: string
                  example: An unexpected error occurred
                details:
                  type: object
                timestamp:
                  type: string
                  format: date-time
                debug_info:
                  type: object
                  description: Only present when --debug-api flag is enabled
    """
    try:
        service_tags = Services.get_tags()
        services_info = []

        for tag in service_tags:
            service_data = {"tag": tag, "aliases": [], "geofence": [], "title_regex": None, "url": None, "help": None}

            try:
                service_module = Services.load(tag)

                if hasattr(service_module, "ALIASES"):
                    service_data["aliases"] = list(service_module.ALIASES)

                if hasattr(service_module, "GEOFENCE"):
                    service_data["geofence"] = list(service_module.GEOFENCE)

                if hasattr(service_module, "TITLE_RE"):
                    title_re = service_module.TITLE_RE
                    # Handle different types of TITLE_RE
                    if isinstance(title_re, re.Pattern):
                        service_data["title_regex"] = title_re.pattern
                    elif isinstance(title_re, str):
                        service_data["title_regex"] = title_re
                    elif isinstance(title_re, (list, tuple)):
                        # Convert list/tuple of patterns to list of strings
                        patterns = []
                        for item in title_re:
                            if isinstance(item, re.Pattern):
                                patterns.append(item.pattern)
                            elif isinstance(item, str):
                                patterns.append(item)
                        service_data["title_regex"] = patterns if patterns else None

                if hasattr(service_module, "cli") and hasattr(service_module.cli, "short_help"):
                    service_data["url"] = service_module.cli.short_help

                if service_module.__doc__:
                    service_data["help"] = service_module.__doc__.strip()

            except Exception as e:
                log.warning(f"Could not load details for service {tag}: {e}")

            services_info.append(service_data)

        return web.json_response({"services": services_info})
    except Exception as e:
        log.exception("Error listing services")
        debug_mode = request.app.get("debug_api", False)
        return handle_api_exception(e, context={"operation": "list_services"}, debug_mode=debug_mode)

async def list_titles(request: web.Request) -> web.Response:
    """
    List titles for a service and title ID.
    ---
    summary: List titles
    description: Get available titles for a service and title ID
    requestBody:
      required: true
      content:
        application/json:
          schema:
            type: object
            required:
              - service
              - title_id
            properties:
              service:
                type: string
                description: Service tag
              title_id:
                type: string
                description: Title identifier
    responses:
      '200':
        description: List of titles
      '400':
        description: Invalid request (missing parameters, invalid service)
        content:
          application/json:
            schema:
              type: object
              properties:
                status:
                  type: string
                  example: error
                error_code:
                  type: string
                  example: INVALID_INPUT
                message:
                  type: string
                  example: Missing required parameter
                details:
                  type: object
                timestamp:
                  type: string
                  format: date-time
      '401':
        description: Authentication failed
        content:
          application/json:
            schema:
              type: object
              properties:
                status:
                  type: string
                  example: error
                error_code:
                  type: string
                  example: AUTH_FAILED
                message:
                  type: string
                details:
                  type: object
                timestamp:
                  type: string
                  format: date-time
      '404':
        description: Title not found
        content:
          application/json:
            schema:
              type: object
              properties:
                status:
                  type: string
                  example: error
                error_code:
                  type: string
                  example: NOT_FOUND
                message:
                  type: string
                details:
                  type: object
                timestamp:
                  type: string
                  format: date-time
      '500':
        description: Server error
        content:
          application/json:
            schema:
              type: object
              properties:
                status:
                  type: string
                  example: error
                error_code:
                  type: string
                  example: INTERNAL_ERROR
                message:
                  type: string
                details:
                  type: object
                timestamp:
                  type: string
                  format: date-time
    """
    try:
        data = await request.json()
    except Exception as e:
        return build_error_response(
            APIError(
                APIErrorCode.INVALID_INPUT,
                "Invalid JSON request body",
                details={"error": str(e)},
            ),
            request.app.get("debug_api", False),
        )

    try:
        return await list_titles_handler(data, request)
    except APIError as e:
        debug_mode = request.app.get("debug_api", False)
        return build_error_response(e, debug_mode)

async def list_tracks(request: web.Request) -> web.Response:
    """
    List tracks for a title, separated by type.
    ---
    summary: List tracks
    description: Get available video, audio, and subtitle tracks for a title
    requestBody:
      required: true
      content:
        application/json:
          schema:
            type: object
            required:
              - service
              - title_id
            properties:
              service:
                type: string
                description: Service tag
              title_id:
                type: string
                description: Title identifier
              wanted:
                type: string
                description: Specific episode/season (optional)
              proxy:
                type: string
                description: Proxy configuration (optional)
    responses:
      '200':
        description: Track information
      '400':
        description: Invalid request
    """
    try:
        data = await request.json()
    except Exception as e:
        return build_error_response(
            APIError(
                APIErrorCode.INVALID_INPUT,
                "Invalid JSON request body",
                details={"error": str(e)},
            ),
            request.app.get("debug_api", False),
        )

    try:
        return await list_tracks_handler(data, request)
    except APIError as e:
        debug_mode = request.app.get("debug_api", False)
        return build_error_response(e, debug_mode)

async def download(request: web.Request) -> web.Response:
    """
    Download content based on provided parameters.
    ---
    summary: Download content
    description: Download video content based on specified parameters
    requestBody:
      required: true
      content:
        application/json:
          schema:
            type: object
            required:
              - service
              - title_id
            properties:
              service:
                type: string
                description: Service tag
              title_id:
                type: string
                description: Title identifier
              profile:
                type: string
                description: Profile to use for credentials and cookies (default - None)
              quality:
                type: array
                items:
                  type: integer
                description: Download resolution(s) (default - best available)
              vcodec:
                type: string
                description: Video codec to download (e.g., H264, H265, VP9, AV1) (default - None)
              acodec:
                type: string
                description: Audio codec to download (e.g., AAC, AC3, EAC3) (default - None)
              vbitrate:
                type: integer
                description: Video bitrate in kbps (default - None)
              abitrate:
                type: integer
                description: Audio bitrate in kbps (default - None)
              range:
                type: array
                items:
                  type: string
                description: Video color range (SDR, HDR10, DV) (default - ["SDR"])
              channels:
                type: number
                description: Audio channels (e.g., 2.0, 5.1, 7.1) (default - None)
              no_atmos:
                type: boolean
                description: Exclude Dolby Atmos audio tracks (default - false)
              wanted:
                type: array
                items:
                  type: string
                description: Wanted episodes (e.g., ["S01E01", "S01E02"]) (default - all)
              latest_episode:
                type: boolean
                description: Download only the single most recent episode (default - false)
              lang:
                type: array
                items:
                  type: string
                description: Language for video and audio (use 'orig' for original) (default - ["orig"])
              v_lang:
                type: array
                items:
                  type: string
                description: Language for video tracks only (default - [])
              a_lang:
                type: array
                items:
                  type: string
                description: Language for audio tracks only (default - [])
              s_lang:
                type: array
                items:
                  type: string
                description: Language for subtitle tracks (default - ["all"])
              require_subs:
                type: array
                items:
                  type: string
                description: Required subtitle languages (default - [])
              forced_subs:
                type: boolean
                description: Include forced subtitle tracks (default - false)
              exact_lang:
                type: boolean
                description: Use exact language matching (no variants) (default - false)
              sub_format:
                type: string
                description: Output subtitle format (SRT, VTT, etc.) (default - None)
              video_only:
                type: boolean
                description: Only download video tracks (default - false)
              audio_only:
                type: boolean
                description: Only download audio tracks (default - false)
              subs_only:
                type: boolean
                description: Only download subtitle tracks (default - false)
              chapters_only:
                type: boolean
                description: Only download chapters (default - false)
              no_subs:
                type: boolean
                description: Do not download subtitle tracks (default - false)
              no_audio:
                type: boolean
                description: Do not download audio tracks (default - false)
              no_chapters:
                type: boolean
                description: Do not download chapters (default - false)
              audio_description:
                type: boolean
                description: Download audio description tracks (default - false)
              slow:
                type: boolean
                description: Add 60-120s delay between downloads (default - false)
              skip_dl:
                type: boolean
                description: Skip downloading, only retrieve decryption keys (default - false)
              export:
                type: string
                description: Path to export decryption keys as JSON (default - None)
              cdm_only:
                type: boolean
                description: Only use CDM for key retrieval (true) or only vaults (false) (default - None)
              proxy:
                type: string
                description: Proxy URI or country code (default - None)
              no_proxy:
                type: boolean
                description: Force disable all proxy use (default - false)
              tag:
                type: string
                description: Set the group tag to be used (default - None)
              tmdb_id:
                type: integer
                description: Use this TMDB ID for tagging (default - None)
              tmdb_name:
                type: boolean
                description: Rename titles using TMDB name (default - false)
              tmdb_year:
                type: boolean
                description: Use release year from TMDB (default - false)
              no_folder:
                type: boolean
                description: Disable folder creation for TV shows (default - false)
              no_source:
                type: boolean
                description: Disable source tag from output file name (default - false)
              no_mux:
                type: boolean
                description: Do not mux tracks into a container file (default - false)
              workers:
                type: integer
                description: Max workers/threads per track download (default - None)
              downloads:
                type: integer
                description: Amount of tracks to download concurrently (default - 1)
              best_available:
                type: boolean
                description: Continue with best available if requested quality unavailable (default - false)
    responses:
      '202':
        description: Download job created
        content:
          application/json:
            schema:
              type: object
              properties:
                job_id:
                  type: string
                status:
                  type: string
                created_time:
                  type: string
      '400':
        description: Invalid request
    """
    try:
        data = await request.json()
    except Exception as e:
        return build_error_response(
            APIError(
                APIErrorCode.INVALID_INPUT,
                "Invalid JSON request body",
                details={"error": str(e)},
            ),
            request.app.get("debug_api", False),
        )

    try:
        return await download_handler(data, request)
    except APIError as e:
        debug_mode = request.app.get("debug_api", False)
        return build_error_response(e, debug_mode)

async def download_jobs(request: web.Request) -> web.Response:
    """
    List all download jobs with optional filtering and sorting.
    ---
    summary: List download jobs
    description: Get list of all download jobs with their status, with optional filtering by status/service and sorting
    parameters:
      - name: status
        in: query
        required: false
        schema:
          type: string
          enum: [queued, downloading, completed, failed, cancelled]
        description: Filter jobs by status
      - name: service
        in: query
        required: false
        schema:
          type: string
        description: Filter jobs by service tag
      - name: sort_by
        in: query
        required: false
        schema:
          type: string
          enum: [created_time, started_time, completed_time, progress, status, service]
          default: created_time
        description: Field to sort by
      - name: sort_order
        in: query
        required: false
        schema:
          type: string
          enum: [asc, desc]
          default: desc
        description: Sort order (ascending or descending)
    responses:
      '200':
        description: List of download jobs
        content:
          application/json:
            schema:
              type: object
              properties:
                jobs:
                  type: array
                  items:
                    type: object
                    properties:
                      job_id:
                        type: string
                      status:
                        type: string
                      created_time:
                        type: string
                      service:
                        type: string
                      title_id:
                        type: string
                      progress:
                        type: number
      '400':
        description: Invalid query parameters
      '500':
        description: Server error
    """
    # Extract query parameters
    query_params = {
        "status": request.query.get("status"),
        "service": request.query.get("service"),
        "sort_by": request.query.get("sort_by", "created_time"),
        "sort_order": request.query.get("sort_order", "desc"),
    }
    try:
        return await list_download_jobs_handler(query_params, request)
    except APIError as e:
        debug_mode = request.app.get("debug_api", False)
        return build_error_response(e, debug_mode)

async def download_job_detail(request: web.Request) -> web.Response:
    """
    Get download job details.
    ---
    summary: Get download job
    description: Get detailed information about a specific download job
    parameters:
      - name: job_id
        in: path
        required: true
        schema:
          type: string
    responses:
      '200':
        description: Download job details
      '404':
        description: Job not found
      '500':
        description: Server error
    """
    job_id = request.match_info["job_id"]
    try:
        return await get_download_job_handler(job_id, request)
    except APIError as e:
        debug_mode = request.app.get("debug_api", False)
        return build_error_response(e, debug_mode)

async def cancel_download_job(request: web.Request) -> web.Response:
    """
    Cancel download job.
    ---
    summary: Cancel download job
    description: Cancel a queued or running download job
    parameters:
      - name: job_id
        in: path
        required: true
        schema:
          type: string
    responses:
      '200':
        description: Job cancelled successfully
      '400':
        description: Job cannot be cancelled
      '404':
        description: Job not found
      '500':
        description: Server error
    """
    job_id = request.match_info["job_id"]
    try:
        return await cancel_download_job_handler(job_id, request)
    except APIError as e:
        debug_mode = request.app.get("debug_api", False)
        return build_error_response(e, debug_mode)

def setup_routes(app: web.Application) -> None:
    """Setup all API routes."""
    app.router.add_get("/api/health", health)
    app.router.add_get("/api/services", services)
    app.router.add_post("/api/list-titles", list_titles)
    app.router.add_post("/api/list-tracks", list_tracks)
    app.router.add_post("/api/download", download)
    app.router.add_get("/api/download/jobs", download_jobs)
    app.router.add_get("/api/download/jobs/{job_id}", download_job_detail)
    app.router.add_delete("/api/download/jobs/{job_id}", cancel_download_job)


def setup_swagger(app: web.Application) -> None:
    """Setup Swagger UI documentation."""
    swagger = SwaggerDocs(
        app,
        swagger_ui_settings=SwaggerUiSettings(path="/api/docs/"),
        info=SwaggerInfo(
            title="Unshackle REST API",
            version=__version__,
            description="REST API for Unshackle - Modular Movie, TV, and Music Archival Software",
        ),
    )

    # Add routes with OpenAPI documentation
    swagger.add_routes(
        [
            web.get("/api/health", health),
            web.get("/api/services", services),
            web.post("/api/list-titles", list_titles),
            web.post("/api/list-tracks", list_tracks),
            web.post("/api/download", download),
            web.get("/api/download/jobs", download_jobs),
            web.get("/api/download/jobs/{job_id}", download_job_detail),
            web.delete("/api/download/jobs/{job_id}", cancel_download_job),
        ]
    )
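The route table registered by `setup_routes` can be exercised without a running server. This sketch resolves a method/path pair against the same templates, turning `{job_id}`-style placeholders into named capture groups; handler names stand in for the actual coroutines:

```python
import re

# Route table mirroring setup_routes(); string names stand in for the coroutines
ROUTES = {
    ("GET", "/api/health"): "health",
    ("GET", "/api/services"): "services",
    ("POST", "/api/list-titles"): "list_titles",
    ("POST", "/api/list-tracks"): "list_tracks",
    ("POST", "/api/download"): "download",
    ("GET", "/api/download/jobs"): "download_jobs",
    ("GET", "/api/download/jobs/{job_id}"): "download_job_detail",
    ("DELETE", "/api/download/jobs/{job_id}"): "cancel_download_job",
}


def match_route(method: str, path: str):
    """Resolve a (method, path) pair roughly the way the aiohttp router would."""
    for (m, template), handler in ROUTES.items():
        if m != method:
            continue
        # Turn "{job_id}" placeholders into named capture groups
        pattern = re.sub(r"\{(\w+)\}", r"(?P<\1>[^/]+)", template)
        found = re.fullmatch(pattern, path)
        if found:
            return handler, found.groupdict()
    return None, {}
```

Using `re.fullmatch` keeps `/api/download/jobs` and `/api/download/jobs/{job_id}` from shadowing each other.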
@@ -8,22 +8,20 @@ __shaka_platform = {"win32": "win", "darwin": "osx"}.get(sys.platform, sys.platf

 def find(*names: str) -> Optional[Path]:
     """Find the path of the first found binary name."""
     # Get the directory containing this file to find the local binaries folder
-    current_dir = Path(__file__).parent.parent
+    current_dir = Path(__file__).resolve().parent.parent
     local_binaries_dir = current_dir / "binaries"

-    for name in names:
-        # First check local binaries folder
-        if local_binaries_dir.exists():
-            local_path = local_binaries_dir / name
-            if local_path.is_file() and local_path.stat().st_mode & 0o111:  # Check if executable
-                return local_path
-
-        # Also check with .exe extension on Windows
-        if sys.platform == "win32":
-            local_path_exe = local_binaries_dir / f"{name}.exe"
-            if local_path_exe.is_file():
-                return local_path_exe
+    ext = ".exe" if sys.platform == "win32" else ""
+
+    for name in names:
+        if local_binaries_dir.exists():
+            candidate_paths = [local_binaries_dir / f"{name}{ext}", local_binaries_dir / name / f"{name}{ext}"]
+
+            for path in candidate_paths:
+                if path.is_file():
+                    # On Unix-like systems, check if file is executable
+                    if sys.platform == "win32" or (path.stat().st_mode & 0o111):
+                        return path

     # Fall back to system PATH
     path = shutil.which(name)
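The lookup order introduced above (flat local binaries folder, then a per-tool subfolder, then the system `PATH`, with the executable-bit check skipped on Windows) can be sketched as a standalone function. `find_binary`, `binaries_dir`, and the tool name are illustrative, not the project's real API:

```python
import os
import shutil
import sys
import tempfile
from pathlib import Path


def find_binary(names, binaries_dir: Path):
    """Sketch of the lookup order: local binaries folder first, then PATH."""
    ext = ".exe" if sys.platform == "win32" else ""
    for name in names:
        if binaries_dir.exists():
            # Flat layout (binaries/tool) and nested layout (binaries/tool/tool)
            for path in (binaries_dir / f"{name}{ext}", binaries_dir / name / f"{name}{ext}"):
                # The executable-bit check only makes sense on Unix-like systems
                if path.is_file() and (sys.platform == "win32" or path.stat().st_mode & 0o111):
                    return path
    for name in names:
        which = shutil.which(name)
        if which:
            return Path(which)
    return None


with tempfile.TemporaryDirectory() as tmp:
    binaries = Path(tmp)
    tool = binaries / ("sometool.exe" if sys.platform == "win32" else "sometool")
    tool.write_text("#!/bin/sh\n")
    os.chmod(tool, 0o755)
    found = find_binary(["sometool"], binaries)
```

Without the execute permission set, the local candidate is skipped on Unix and the search falls through to `shutil.which`.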
@@ -91,7 +91,7 @@ class Cacher:
         except jwt.DecodeError:
             pass

-        self.expiration = self._resolve_datetime(expiration) if expiration else None
+        self.expiration = self.resolve_datetime(expiration) if expiration else None

         payload = {"data": self.data, "expiration": self.expiration, "version": self.version}
         payload["crc32"] = zlib.crc32(jsonpickle.dumps(payload).encode("utf8"))
@@ -109,7 +109,7 @@ class Cacher:
         return self.path.stat()

     @staticmethod
-    def _resolve_datetime(timestamp: EXP_T) -> datetime:
+    def resolve_datetime(timestamp: EXP_T) -> datetime:
         """
         Resolve multiple formats of a Datetime or Timestamp to an absolute Datetime.

@@ -118,15 +118,15 @@ class Cacher:
         datetime.datetime(2022, 6, 27, 9, 49, 13, 657208)
         >>> iso8601 = now.isoformat()
         '2022-06-27T09:49:13.657208'
-        >>> Cacher._resolve_datetime(iso8601)
+        >>> Cacher.resolve_datetime(iso8601)
         datetime.datetime(2022, 6, 27, 9, 49, 13, 657208)
-        >>> Cacher._resolve_datetime(iso8601 + "Z")
+        >>> Cacher.resolve_datetime(iso8601 + "Z")
         datetime.datetime(2022, 6, 27, 9, 49, 13, 657208)
-        >>> Cacher._resolve_datetime(3600)
+        >>> Cacher.resolve_datetime(3600)
         datetime.datetime(2022, 6, 27, 10, 52, 50, 657208)
-        >>> Cacher._resolve_datetime('3600')
+        >>> Cacher.resolve_datetime('3600')
         datetime.datetime(2022, 6, 27, 10, 52, 51, 657208)
-        >>> Cacher._resolve_datetime(7800.113)
+        >>> Cacher.resolve_datetime(7800.113)
         datetime.datetime(2022, 6, 27, 11, 59, 13, 770208)

         In the int/float examples you may notice that it did not return now + 3600 seconds
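The renamed `Cacher.resolve_datetime` accepts ISO-8601 strings (with or without a trailing "Z") as well as bare numbers. The sketch below implements only the naive interpretation of the numeric case as "seconds from now"; note the real method, per the note in its docstring, does not simply add the seconds, so this helper (`resolve_datetime_sketch`, a hypothetical name) is a simplified stand-in rather than the actual implementation:

```python
from datetime import datetime, timedelta


def resolve_datetime_sketch(timestamp, now=None):
    """Resolve an ISO-8601 string or a relative seconds value to an absolute datetime.

    Simplified stand-in for Cacher.resolve_datetime: strings may carry a
    trailing "Z", and bare numbers are treated as seconds from now.
    """
    now = now or datetime.now()
    if isinstance(timestamp, datetime):
        return timestamp
    if isinstance(timestamp, (int, float)):
        return now + timedelta(seconds=float(timestamp))
    if isinstance(timestamp, str):
        if timestamp.endswith("Z"):
            # Drop the UTC designator so fromisoformat() accepts it
            timestamp = timestamp[:-1]
        try:
            return datetime.fromisoformat(timestamp)
        except ValueError:
            # Numeric strings fall back to the relative-seconds case
            return now + timedelta(seconds=float(timestamp))
    raise TypeError(f"Unsupported timestamp type: {type(timestamp)}")
```

Stripping the "Z" before parsing matches the doctest pair above, where the suffixed and unsuffixed strings resolve to the same naive datetime.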
@@ -1,3 +1,4 @@
+from .custom_remote_cdm import CustomRemoteCDM
 from .decrypt_labs_remote_cdm import DecryptLabsRemoteCDM

-__all__ = ["DecryptLabsRemoteCDM"]
+__all__ = ["DecryptLabsRemoteCDM", "CustomRemoteCDM"]
1092 unshackle/core/cdm/custom_remote_cdm.py (new file)
File diff suppressed because it is too large
@@ -474,7 +474,6 @@ class DecryptLabsRemoteCDM:
                 if "vault_keys" in session:
                     all_available_keys.extend(session["vault_keys"])

                 session["keys"] = all_available_keys
-                session["tried_cache"] = True

                 if self._required_kids:
@@ -505,10 +504,7 @@ class DecryptLabsRemoteCDM:
                     license_request_data = request_data.copy()
                     license_request_data["get_cached_keys_if_exists"] = False

-                    session["decrypt_labs_session_id"] = None
-                    session["challenge"] = None
-                    session["tried_cache"] = False
-
                     # Make license request for missing keys
                     response = self._http_session.post(
                         f"{self.host}/get-request", json=license_request_data, timeout=30
                     )
@@ -522,8 +518,12 @@ class DecryptLabsRemoteCDM:

                     return b""
                 else:
                     # All required keys are available from cache
                     session["keys"] = all_available_keys
                     return b""
             else:
                 # No required KIDs specified - return cached keys
                 session["keys"] = all_available_keys
                 return b""

         if message_type == "license-request" or "challenge" in data:
@@ -572,7 +572,9 @@ class DecryptLabsRemoteCDM:

         session = self._sessions[session_id]

-        if session["keys"] and not (self.is_playready and "cached_keys" in session):
+        # Skip parsing if we already have final keys (no cached keys to combine)
+        # If cached_keys exist (Widevine or PlayReady), we need to combine them with license keys
+        if session["keys"] and "cached_keys" not in session:
             return

         if not session.get("challenge") or not session.get("decrypt_labs_session_id"):
@@ -31,6 +31,7 @@ class Config:
     class _Filenames:
         # default filenames, do not modify here, set via config
         log = "unshackle_{name}_{time}.log"  # Directories.logs
+        debug_log = "unshackle_debug_{service}_{time}.jsonl"  # Directories.logs
         config = "config.yaml"  # Directories.services / tag
         root_config = "unshackle.yaml"  # Directories.user_configs
         chapters = "Chapters_{title}_{random}.txt"  # Directories.temp
@@ -88,6 +89,7 @@ class Config:
         self.tag_group_name: bool = kwargs.get("tag_group_name", True)
         self.tag_imdb_tmdb: bool = kwargs.get("tag_imdb_tmdb", True)
         self.tmdb_api_key: str = kwargs.get("tmdb_api_key") or ""
+        self.simkl_client_id: str = kwargs.get("simkl_client_id") or ""
         self.decrypt_labs_api_key: str = kwargs.get("decrypt_labs_api_key") or ""
         self.update_checks: bool = kwargs.get("update_checks", True)
         self.update_check_interval: int = kwargs.get("update_check_interval", 24)
@@ -98,6 +100,9 @@ class Config:
         self.title_cache_max_retention: int = kwargs.get("title_cache_max_retention", 86400)  # 24 hours default
         self.title_cache_enabled: bool = kwargs.get("title_cache_enabled", True)

+        self.debug: bool = kwargs.get("debug", False)
+        self.debug_keys: bool = kwargs.get("debug_keys", False)
+
     @classmethod
     def from_yaml(cls, path: Path) -> Config:
         if not path.exists():
@@ -113,8 +118,8 @@ POSSIBLE_CONFIG_PATHS = (
     Config._Directories.namespace_dir / Config._Filenames.root_config,
     # The Parent Folder to the unshackle Namespace Folder (e.g., %appdata%/Python/Python311/site-packages)
     Config._Directories.namespace_dir.parent / Config._Filenames.root_config,
-    # The AppDirs User Config Folder (e.g., %localappdata%/unshackle)
-    Config._Directories.user_configs / Config._Filenames.root_config,
+    # The AppDirs User Config Folder (e.g., ~/.config/unshackle on Linux, %LOCALAPPDATA%\unshackle on Windows)
+    Path(Config._Directories.app_dirs.user_config_dir) / Config._Filenames.root_config,
 )
@@ -8,7 +8,13 @@ DRM_SORT_MAP = ["ClearKey", "Widevine"]
LANGUAGE_MAX_DISTANCE = 5  # this is max to be considered "same", e.g., en, en-US, en-AU
LANGUAGE_EXACT_DISTANCE = 0  # exact match only, no variants
VIDEO_CODEC_MAP = {"AVC": "H.264", "HEVC": "H.265"}
DYNAMIC_RANGE_MAP = {"HDR10": "HDR", "HDR10+": "HDR10P", "Dolby Vision": "DV", "HDR10 / HDR10+": "HDR10P", "HDR10 / HDR10": "HDR"}
DYNAMIC_RANGE_MAP = {
    "HDR10": "HDR",
    "HDR10+": "HDR10P",
    "Dolby Vision": "DV",
    "HDR10 / HDR10+": "HDR10P",
    "HDR10 / HDR10": "HDR",
}
AUDIO_CODEC_MAP = {"E-AC-3": "DDP", "AC-3": "DD"}
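For reference, these mapping tables are lookup dictionaries used when building release-style tags from track metadata. A minimal sketch of how they might be applied (the `tag_parts` helper below is illustrative, not part of the commit; only the three dictionaries come from the diff):

```python
# The three lookup tables from the diff, applied via a hypothetical helper.
VIDEO_CODEC_MAP = {"AVC": "H.264", "HEVC": "H.265"}
DYNAMIC_RANGE_MAP = {
    "HDR10": "HDR",
    "HDR10+": "HDR10P",
    "Dolby Vision": "DV",
    "HDR10 / HDR10+": "HDR10P",
    "HDR10 / HDR10": "HDR",
}
AUDIO_CODEC_MAP = {"E-AC-3": "DDP", "AC-3": "DD"}


def tag_parts(video_codec: str, dynamic_range: str, audio_codec: str) -> list[str]:
    # Fall back to the raw name when a value has no mapping.
    return [
        DYNAMIC_RANGE_MAP.get(dynamic_range, dynamic_range),
        VIDEO_CODEC_MAP.get(video_codec, video_codec),
        AUDIO_CODEC_MAP.get(audio_codec, audio_codec),
    ]
```

Note the new mixed keys ("HDR10 / HDR10+") normalize combined MediaInfo range strings to a single tag, which is what the multi-line rewrite above adds.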

context_settings = dict(
@@ -1,12 +1,10 @@
import logging
import os
import re
import subprocess
import warnings
from http.cookiejar import CookieJar
from itertools import chain
from pathlib import Path
from typing import Any, Generator, MutableMapping, Optional, Union
from typing import Any, Generator, MutableMapping

import requests
from requests.cookies import cookiejar_from_dict, get_cookie_header

@@ -16,251 +14,331 @@ from unshackle.core.config import config
from unshackle.core.console import console
from unshackle.core.constants import DOWNLOAD_CANCELLED

PERCENT_RE = re.compile(r"(\d+\.\d+%)")
SPEED_RE = re.compile(r"(\d+\.\d+(?:MB|KB)ps)")
SIZE_RE = re.compile(r"(\d+\.\d+(?:MB|GB|KB)/\d+\.\d+(?:MB|GB|KB))")
WARN_RE = re.compile(r"(WARN : Response.*|WARN : One or more errors occurred.*)")
ERROR_RE = re.compile(r"(ERROR.*)")
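These module-level regexes replace the per-call patterns compiled inside `download()` before this commit. Applied to a sample progress line (the sample line itself is invented for illustration), they pull out the three progress fields:

```python
import re

# The progress regexes introduced by the diff, exercised on a made-up
# N_m3u8DL-RE output line.
PERCENT_RE = re.compile(r"(\d+\.\d+%)")
SPEED_RE = re.compile(r"(\d+\.\d+(?:MB|KB)ps)")
SIZE_RE = re.compile(r"(\d+\.\d+(?:MB|GB|KB)/\d+\.\d+(?:MB|GB|KB))")

line = "Vid 1080p 45.10% 12.34MB/27.35MB 3.21MBps"
percent = PERCENT_RE.search(line)  # "45.10%"
size = SIZE_RE.search(line)        # "12.34MB/27.35MB"
speed = SPEED_RE.search(line)      # "3.21MBps"
```

Note `SPEED_RE` now matches the full `...MBps`/`...KBps` token, so the progress loop no longer appends `"ps"` by hand as the old code did.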

DECRYPTION_ENGINE = {
    "shaka": "SHAKA_PACKAGER",
    "mp4decrypt": "MP4DECRYPT",
}

# Ignore FutureWarnings
warnings.simplefilter(action="ignore", category=FutureWarning)

AUDIO_CODEC_MAP = {"AAC": "mp4a", "AC3": "ac-3", "EC3": "ec-3"}
VIDEO_CODEC_MAP = {"AVC": "avc", "HEVC": "hvc", "DV": "dvh", "HLG": "hev"}
def get_track_selection_args(track: Any) -> list[str]:
"""
Generates track selection arguments for N_m3u8dl_RE.

def track_selection(track: object) -> list[str]:
"""Return the N_m3u8DL-RE stream selection arguments for a track."""
Args:
track: A track object with attributes like descriptor, data, and class name.

if "dash" in track.data:
adaptation_set = track.data["dash"]["adaptation_set"]
representation = track.data["dash"]["representation"]
Returns:
A list of strings for track selection.

track_type = track.__class__.__name__
codec = track.codec.name
bitrate = track.bitrate // 1000
language = track.language
width = track.width if track_type == "Video" else None
height = track.height if track_type == "Video" else None
range = track.range.name if track_type == "Video" else None
Raises:
ValueError: If the manifest type is unsupported or track selection fails.
"""
descriptor = track.descriptor.name
track_type = track.__class__.__name__

elif "ism" in track.data:
stream_index = track.data["ism"]["stream_index"]
quality_level = track.data["ism"]["quality_level"]
def _create_args(flag: str, parts: list[str], type_str: str, extra_args: list[str] | None = None) -> list[str]:
if not parts:
raise ValueError(f"[N_m3u8DL-RE]: Unable to select {type_str} track from {descriptor} manifest")

track_type = track.__class__.__name__
codec = track.codec.name
bitrate = track.bitrate // 1000
language = track.language
width = track.width if track_type == "Video" else None
height = track.height if track_type == "Video" else None
range = track.range.name if track_type == "Video" else None
adaptation_set = stream_index
representation = quality_level
final_args = [flag, ":".join(parts)]
if extra_args:
final_args.extend(extra_args)

else:
return []
return final_args

if track_type == "Audio":
codecs = AUDIO_CODEC_MAP.get(codec)
langs = adaptation_set.findall("lang") + representation.findall("lang")
track_ids = list(
set(
v
for x in chain(adaptation_set, representation)
for v in (x.get("audioTrackId"), x.get("id"))
if v is not None
match descriptor:
case "HLS":
# HLS playlists are direct inputs; no selection arguments needed.
return []

case "DASH":
representation = track.data.get("dash", {}).get("representation", {})
adaptation_set = track.data.get("dash", {}).get("adaptation_set", {})
parts = []

if track_type == "Audio":
if track_id := representation.get("id") or adaptation_set.get("audioTrackId"):
parts.append(rf'"id=\b{track_id}\b"')
else:
if codecs := representation.get("codecs"):
parts.append(f"codecs={codecs}")
if lang := representation.get("lang") or adaptation_set.get("lang"):
parts.append(f"lang={lang}")
if bw := representation.get("bandwidth"):
bitrate = int(bw) // 1000
parts.append(f"bwMin={bitrate}:bwMax={bitrate + 5}")
if roles := representation.findall("Role") + adaptation_set.findall("Role"):
if role := next((r.get("value") for r in roles if r.get("value", "").lower() == "main"), None):
parts.append(f"role={role}")
return _create_args("-sa", parts, "audio")

if track_type == "Video":
if track_id := representation.get("id"):
parts.append(rf'"id=\b{track_id}\b"')
else:
if width := representation.get("width"):
parts.append(f"res={width}*")
if codecs := representation.get("codecs"):
parts.append(f"codecs={codecs}")
if bw := representation.get("bandwidth"):
bitrate = int(bw) // 1000
parts.append(f"bwMin={bitrate}:bwMax={bitrate + 5}")
return _create_args("-sv", parts, "video")

if track_type == "Subtitle":
if track_id := representation.get("id"):
parts.append(rf'"id=\b{track_id}\b"')
else:
if lang := representation.get("lang"):
parts.append(f"lang={lang}")
return _create_args("-ss", parts, "subtitle", extra_args=["--auto-subtitle-fix", "false"])

case "ISM":
quality_level = track.data.get("ism", {}).get("quality_level", {})
stream_index = track.data.get("ism", {}).get("stream_index", {})
parts = []

if track_type == "Audio":
if name := stream_index.get("Name") or quality_level.get("Index"):
parts.append(rf'"id=\b{name}\b"')
else:
if codecs := quality_level.get("FourCC"):
parts.append(f"codecs={codecs}")
if lang := stream_index.get("Language"):
parts.append(f"lang={lang}")
if br := quality_level.get("Bitrate"):
bitrate = int(br) // 1000
parts.append(f"bwMin={bitrate}:bwMax={bitrate + 5}")
return _create_args("-sa", parts, "audio")

if track_type == "Video":
if name := stream_index.get("Name") or quality_level.get("Index"):
parts.append(rf'"id=\b{name}\b"')
else:
if width := quality_level.get("MaxWidth"):
parts.append(f"res={width}*")
if codecs := quality_level.get("FourCC"):
parts.append(f"codecs={codecs}")
if br := quality_level.get("Bitrate"):
bitrate = int(br) // 1000
parts.append(f"bwMin={bitrate}:bwMax={bitrate + 5}")
return _create_args("-sv", parts, "video")

# I've yet to encounter a subtitle track in ISM manifests, so this is mostly theoretical.
if track_type == "Subtitle":
if name := stream_index.get("Name") or quality_level.get("Index"):
parts.append(rf'"id=\b{name}\b"')
else:
if lang := stream_index.get("Language"):
parts.append(f"lang={lang}")
return _create_args("-ss", parts, "subtitle", extra_args=["--auto-subtitle-fix", "false"])

case "URL":
raise ValueError(
f"[N_m3u8DL-RE]: Direct URL downloads are not supported for {track_type} tracks. "
f"The track should use a different downloader (e.g., 'requests', 'aria2c')."
)
)
roles = adaptation_set.findall("Role") + representation.findall("Role")
role = ":role=main" if next((i for i in roles if i.get("value").lower() == "main"), None) else ""
bandwidth = f"bwMin={bitrate}:bwMax={bitrate + 5}"

if langs:
track_selection = ["-sa", f"lang={language}:codecs={codecs}:{bandwidth}{role}"]
elif len(track_ids) == 1:
track_selection = ["-sa", f"id={track_ids[0]}"]
else:
track_selection = ["-sa", f"for=best{role}"]
return track_selection
raise ValueError(f"[N_m3u8DL-RE]: Unsupported manifest type: {descriptor}")

if track_type == "Video":
# adjust codec based on range
codec_adjustments = {("HEVC", "DV"): "DV", ("HEVC", "HLG"): "HLG"}
codec = codec_adjustments.get((codec, range), codec)
codecs = VIDEO_CODEC_MAP.get(codec)

bandwidth = f"bwMin={bitrate}:bwMax={bitrate + 5}"
if width and height:
resolution = f"{width}x{height}"
elif width:
resolution = f"{width}*"
else:
resolution = "for=best"
if resolution.startswith("for="):
track_selection = ["-sv", resolution]
track_selection.append(f"codecs={codecs}:{bandwidth}")
else:
track_selection = ["-sv", f"res={resolution}:codecs={codecs}:{bandwidth}"]
return track_selection

def build_download_args(
track_url: str,
filename: str,
output_dir: Path,
thread_count: int,
retry_count: int,
track_from_file: Path | None,
custom_args: dict[str, Any] | None,
headers: dict[str, Any] | None,
cookies: CookieJar | None,
proxy: str | None,
content_keys: dict[str, str] | None,
ad_keyword: str | None,
skip_merge: bool | None = False,
) -> list[str]:
"""Constructs the CLI arguments for N_m3u8DL-RE."""

# Default arguments
args = {
"--save-name": filename,
"--save-dir": output_dir,
"--tmp-dir": output_dir,
"--thread-count": thread_count,
"--download-retry-count": retry_count,
"--write-meta-json": False,
"--no-log": True,
}
if proxy:
args["--custom-proxy"] = proxy
if skip_merge:
args["--skip-merge"] = skip_merge
if ad_keyword:
args["--ad-keyword"] = ad_keyword
if content_keys:
args["--key"] = next((f"{kid.hex}:{key.lower()}" for kid, key in content_keys.items()), None)
args["--decryption-engine"] = DECRYPTION_ENGINE.get(config.decryption.lower()) or "SHAKA_PACKAGER"
if custom_args:
args.update(custom_args)

command = [track_from_file or track_url]
for flag, value in args.items():
if value is True:
command.append(flag)
elif value is False:
command.extend([flag, "false"])
elif value is not False and value is not None:
command.extend([flag, str(value)])

if headers:
for key, value in headers.items():
if key.lower() not in ("accept-encoding", "cookie"):
command.extend(["--header", f"{key}: {value}"])

if cookies:
req = requests.Request(method="GET", url=track_url)
cookie_header = get_cookie_header(cookies, req)
command.extend(["--header", f"Cookie: {cookie_header}"])

return command
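The flag-expansion loop in `build_download_args` maps dict entries to argv tokens: `True` becomes a bare flag, `False` becomes `<flag> false` (N_m3u8DL-RE flags take explicit boolean values), `None` is skipped, and anything else is stringified. Isolated as a sketch:

```python
def args_to_argv(args: dict) -> list[str]:
    # Same expansion rules as the loop in build_download_args.
    argv: list[str] = []
    for flag, value in args.items():
        if value is True:
            argv.append(flag)          # bare flag
        elif value is False:
            argv.extend([flag, "false"])  # explicit "false" value
        elif value is not None:
            argv.extend([flag, str(value)])  # flag with value; None skipped
    return argv
```

This is why `custom_args` from the service config can override any default: the dict is updated before expansion, so the last writer wins per flag.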


def download(
urls: Union[str, dict[str, Any], list[str], list[dict[str, Any]]],
track: object,
urls: str | dict[str, Any] | list[str | dict[str, Any]],
track: Any,
output_dir: Path,
filename: str,
headers: Optional[MutableMapping[str, Union[str, bytes]]] = None,
cookies: Optional[Union[MutableMapping[str, str], CookieJar]] = None,
proxy: Optional[str] = None,
max_workers: Optional[int] = None,
content_keys: Optional[dict[str, Any]] = None,
headers: MutableMapping[str, str | bytes] | None,
cookies: MutableMapping[str, str] | CookieJar | None,
proxy: str | None,
max_workers: int | None,
content_keys: dict[str, Any] | None,
skip_merge: bool | None = False,
) -> Generator[dict[str, Any], None, None]:
if not urls:
raise ValueError("urls must be provided and not empty")
elif not isinstance(urls, (str, dict, list)):
raise TypeError(f"Expected urls to be {str} or {dict} or a list of one of them, not {type(urls)}")

if not output_dir:
raise ValueError("output_dir must be provided")
elif not isinstance(output_dir, Path):
raise TypeError(f"Expected output_dir to be {Path}, not {type(output_dir)}")

if not filename:
raise ValueError("filename must be provided")
elif not isinstance(filename, str):
raise TypeError(f"Expected filename to be {str}, not {type(filename)}")

if not isinstance(urls, (str, dict, list)):
raise TypeError(f"Expected urls to be str, dict, or list, not {type(urls)}")
if not isinstance(output_dir, Path):
raise TypeError(f"Expected output_dir to be Path, not {type(output_dir)}")
if not isinstance(filename, str) or not filename:
raise ValueError("filename must be a non-empty string")
if not isinstance(headers, (MutableMapping, type(None))):
raise TypeError(f"Expected headers to be {MutableMapping}, not {type(headers)}")

raise TypeError(f"Expected headers to be a mapping or None, not {type(headers)}")
if not isinstance(cookies, (MutableMapping, CookieJar, type(None))):
raise TypeError(f"Expected cookies to be {MutableMapping} or {CookieJar}, not {type(cookies)}")

raise TypeError(f"Expected cookies to be a mapping, CookieJar, or None, not {type(cookies)}")
if not isinstance(proxy, (str, type(None))):
raise TypeError(f"Expected proxy to be {str}, not {type(proxy)}")

if not max_workers:
max_workers = min(32, (os.cpu_count() or 1) + 4)
elif not isinstance(max_workers, int):
raise TypeError(f"Expected max_workers to be {int}, not {type(max_workers)}")

if not isinstance(urls, list):
urls = [urls]

if not binaries.N_m3u8DL_RE:
raise EnvironmentError("N_m3u8DL-RE executable not found...")
raise TypeError(f"Expected proxy to be a str or None, not {type(proxy)}")
if not isinstance(max_workers, (int, type(None))):
raise TypeError(f"Expected max_workers to be an int or None, not {type(max_workers)}")
if not isinstance(content_keys, (dict, type(None))):
raise TypeError(f"Expected content_keys to be a dict or None, not {type(content_keys)}")
if not isinstance(skip_merge, (bool, type(None))):
raise TypeError(f"Expected skip_merge to be a bool or None, not {type(skip_merge)}")

if cookies and not isinstance(cookies, CookieJar):
cookies = cookiejar_from_dict(cookies)

track_type = track.__class__.__name__
thread_count = str(config.n_m3u8dl_re.get("thread_count", max_workers))
retry_count = str(config.n_m3u8dl_re.get("retry_count", max_workers))
if not binaries.N_m3u8DL_RE:
raise EnvironmentError("N_m3u8DL-RE executable not found...")

effective_max_workers = max_workers or min(32, (os.cpu_count() or 1) + 4)

if proxy and not config.n_m3u8dl_re.get("use_proxy", True):
proxy = None

thread_count = config.n_m3u8dl_re.get("thread_count", effective_max_workers)
retry_count = config.n_m3u8dl_re.get("retry_count", 10)
ad_keyword = config.n_m3u8dl_re.get("ad_keyword")

arguments = [
track.url,
"--save-dir",
output_dir,
"--tmp-dir",
output_dir,
"--thread-count",
thread_count,
"--download-retry-count",
retry_count,
"--no-log",
"--write-meta-json",
"false",
]
arguments = build_download_args(
track_url=track.url,
track_from_file=track.from_file,
filename=filename,
output_dir=output_dir,
thread_count=thread_count,
retry_count=retry_count,
custom_args=track.downloader_args,
headers=headers,
cookies=cookies,
proxy=proxy,
content_keys=content_keys,
skip_merge=skip_merge,
ad_keyword=ad_keyword,
)
arguments.extend(get_track_selection_args(track))

for header, value in (headers or {}).items():
if header.lower() in ("accept-encoding", "cookie"):
continue
arguments.extend(["--header", f"{header}: {value}"])

if cookies:
cookie_header = get_cookie_header(cookies, requests.Request(url=track.url))
if cookie_header:
arguments.extend(["--header", f"Cookie: {cookie_header}"])

if proxy:
arguments.extend(["--custom-proxy", proxy])

if content_keys:
for kid, key in content_keys.items():
keys = f"{kid.hex}:{key.lower()}"
arguments.extend(["--key", keys])
arguments.extend(["--use-shaka-packager"])

if ad_keyword:
arguments.extend(["--ad-keyword", ad_keyword])

if track.descriptor.name == "URL":
error = f"[N_m3u8DL-RE]: {track.descriptor} is currently not supported"
raise ValueError(error)
elif track.descriptor.name == "DASH":
arguments.extend(track_selection(track))

# TODO: improve this nonsense
percent_re = re.compile(r"(\d+\.\d+%)")
speed_re = re.compile(r"(?<!/)(\d+\.\d+MB)(?!.*\/)")
warn = re.compile(r"(WARN : Response.*)")
error = re.compile(r"(ERROR.*)")
size_patterns = [
re.compile(r"(\d+\.\d+MB/\d+\.\d+GB)"),
re.compile(r"(\d+\.\d+GB/\d+\.\d+GB)"),
re.compile(r"(\d+\.\d+MB/\d+\.\d+MB)"),
]

yield dict(total=100)
yield {"total": 100}
yield {"downloaded": "Parsing streams..."}

try:
with subprocess.Popen(
[binaries.N_m3u8DL_RE, *arguments], stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True
) as p:
for line in p.stdout:
[binaries.N_m3u8DL_RE, *arguments],
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
text=True,
encoding="utf-8",
) as process:
last_line = ""
track_type = track.__class__.__name__

for line in process.stdout:
output = line.strip()
if output:
percent = percent_re.search(output)
speed = speed_re.search(output)
size = next(
(pattern.search(output).group(1) for pattern in size_patterns if pattern.search(output)), ""
)
if not output:
continue
last_line = output

if speed:
yield dict(downloaded=f"{speed.group(1)}ps {size}")
if percent:
progress = int(percent.group(1).split(".")[0])
yield dict(completed=progress) if progress < 100 else dict(downloaded="Merging")
if warn_match := WARN_RE.search(output):
console.log(f"{track_type} {warn_match.group(1)}")
continue

if warn.search(output):
console.log(f"{track_type} " + warn.search(output).group(1))
if speed_match := SPEED_RE.search(output):
size = size_match.group(1) if (size_match := SIZE_RE.search(output)) else ""
yield {"downloaded": f"{speed_match.group(1)} {size}"}

p.wait()
if percent_match := PERCENT_RE.search(output):
progress = int(percent_match.group(1).split(".", 1)[0])
yield {"completed": progress} if progress < 100 else {"downloaded": "Merging"}

if p.returncode != 0:
if error.search(output):
raise ValueError(f"[N_m3u8DL-RE]: {error.search(output).group(1)}")
raise subprocess.CalledProcessError(p.returncode, arguments)
process.wait()
if process.returncode != 0:
if error_match := ERROR_RE.search(last_line):
raise ValueError(f"[N_m3u8DL-RE]: {error_match.group(1)}")
raise subprocess.CalledProcessError(process.returncode, arguments)

except ConnectionResetError:
# interrupted while passing URI to download
raise KeyboardInterrupt()
except KeyboardInterrupt:
DOWNLOAD_CANCELLED.set()  # skip pending track downloads
yield dict(downloaded="[yellow]CANCELLED")
yield {"downloaded": "[yellow]CANCELLED"}
raise
except Exception:
DOWNLOAD_CANCELLED.set()  # skip pending track downloads
yield dict(downloaded="[red]FAILED")
yield {"downloaded": "[red]FAILED"}
raise

def n_m3u8dl_re(
urls: Union[str, list[str], dict[str, Any], list[dict[str, Any]]],
track: object,
urls: str | list[str] | dict[str, Any] | list[dict[str, Any]],
track: Any,
output_dir: Path,
filename: str,
headers: Optional[MutableMapping[str, Union[str, bytes]]] = None,
cookies: Optional[Union[MutableMapping[str, str], CookieJar]] = None,
proxy: Optional[str] = None,
max_workers: Optional[int] = None,
content_keys: Optional[dict[str, Any]] = None,
headers: MutableMapping[str, str | bytes] | None = None,
cookies: MutableMapping[str, str] | CookieJar | None = None,
proxy: str | None = None,
max_workers: int | None = None,
content_keys: dict[str, Any] | None = None,
skip_merge: bool | None = False,
) -> Generator[dict[str, Any], None, None]:
"""
Download files using N_m3u8DL-RE.

@@ -275,28 +353,33 @@ def n_m3u8dl_re(
The data is in the same format accepted by rich's progress.update() function.

Parameters:
urls: Web URL(s) to file(s) to download. You can use a dictionary with the key
"url" for the URI, and other keys for extra arguments to use per-URL.
urls: Web URL(s) to file(s) to download. NOTE: This parameter is ignored for now.
track: The track to download. Used to get track attributes for the selection
process. Note that Track.Descriptor.URL is not supported by N_m3u8DL-RE.
output_dir: The folder to save the file into. If the save path's directory does
not exist then it will be made automatically.
filename: The filename or filename template to use for each file. The variables
you can use are `i` for the URL index and `ext` for the URL extension.
headers: A mapping of HTTP Header Key/Values to use for the download.
cookies: A mapping of Cookie Key/Values or a Cookie Jar to use for the download.
filename: The filename or filename template to use for each file.
headers: A mapping of HTTP Header Key/Values to use for all downloads.
cookies: A mapping of Cookie Key/Values or a Cookie Jar to use for all downloads.
proxy: A proxy to use for all downloads.
max_workers: The maximum amount of threads to use for downloads. Defaults to
min(32,(cpu_count+4)). Can be set in config with --thread-count option.
content_keys: The content keys to use for decryption.
skip_merge: Whether to skip merging the downloaded chunks.
"""
track_type = track.__class__.__name__

log = logging.getLogger("N_m3u8DL-RE")
if proxy and not config.n_m3u8dl_re.get("use_proxy", True):
log.warning(f"{track_type}: Ignoring proxy as N_m3u8DL-RE is set to use_proxy=False")
proxy = None

yield from download(urls, track, output_dir, filename, headers, cookies, proxy, max_workers, content_keys)
yield from download(
urls=urls,
track=track,
output_dir=output_dir,
filename=filename,
headers=headers,
cookies=cookies,
proxy=proxy,
max_workers=max_workers,
content_keys=content_keys,
skip_merge=skip_merge,
)


__all__ = ("n_m3u8dl_re",)

@@ -338,7 +338,7 @@ class PlayReady:
]

try:
subprocess.run(cmd, check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
subprocess.run(cmd, check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, encoding='utf-8')
except subprocess.CalledProcessError as e:
error_msg = e.stderr if e.stderr else f"mp4decrypt failed with exit code {e.returncode}"
raise subprocess.CalledProcessError(e.returncode, cmd, output=e.stdout, stderr=error_msg)

@@ -289,7 +289,7 @@ class Widevine:
]

try:
subprocess.run(cmd, check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
subprocess.run(cmd, check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, encoding='utf-8')
except subprocess.CalledProcessError as e:
error_msg = e.stderr if e.stderr else f"mp4decrypt failed with exit code {e.returncode}"
raise subprocess.CalledProcessError(e.returncode, cmd, output=e.stdout, stderr=error_msg)

@@ -384,7 +384,8 @@ class DASH:
segment_duration = float(segment_template.get("duration")) or 1

if not end_number:
end_number = math.ceil(period_duration / (segment_duration / segment_timescale))
segment_count = math.ceil(period_duration / (segment_duration / segment_timescale))
end_number = start_number + segment_count - 1

for s in range(start_number, end_number + 1):
segments.append(
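The DASH change above fixes an off-by-`start_number` error: the old code treated the segment *count* as the last segment *number*, which is only correct when numbering starts at 0. A worked example with invented values (each segment lasts `duration / timescale` seconds):

```python
import math

# Invented example values: a 60 s period, 2 s segments (96000 timescale
# units at a 48 kHz timescale), numbering starting at 1.
period_duration = 60.0
segment_duration = 96000.0
segment_timescale = 48000
start_number = 1

# Corrected calculation from the diff: count segments, then offset by
# start_number so the range start_number..end_number covers exactly
# segment_count segments.
segment_count = math.ceil(period_duration / (segment_duration / segment_timescale))
end_number = start_number + segment_count - 1
```

Here there are 30 segments numbered 1..30; the old formula would have produced `end_number = 30` only by coincidence of `start_number == 1` minus the last segment, and breaks outright for templates that start at e.g. 100.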
@@ -249,17 +249,20 @@ class HLS:

log = logging.getLogger("HLS")

# Get the playlist text and handle both session types
response = session.get(track.url)
if isinstance(response, requests.Response):
if not response.ok:
log.error(f"Failed to request the invariant M3U8 playlist: {response.status_code}")
sys.exit(1)
playlist_text = response.text
if track.from_file:
master = m3u8.load(str(track.from_file))
else:
raise TypeError(f"Expected response to be a requests.Response or curl_cffi.Response, not {type(response)}")
# Get the playlist text and handle both session types
response = session.get(track.url)
if isinstance(response, requests.Response):
if not response.ok:
log.error(f"Failed to request the invariant M3U8 playlist: {response.status_code}")
sys.exit(1)
playlist_text = response.text
else:
raise TypeError(f"Expected response to be a requests.Response or curl_cffi.Response, not {type(response)}")

master = m3u8.loads(playlist_text, uri=track.url)
master = m3u8.loads(playlist_text, uri=track.url)

if not master.segments:
log.error("Track's HLS playlist has no segments, expecting an invariant M3U8 playlist.")

@@ -439,7 +442,7 @@ class HLS:

elif len(files) != range_len:
raise ValueError(f"Missing {range_len - len(files)} segment files for {segment_range}...")

if isinstance(drm, Widevine):
if isinstance(drm, (Widevine, PlayReady)):
# with widevine we can merge all segments and decrypt once
merge(to=merged_path, via=files, delete=True, include_map_data=True)
drm.decrypt(merged_path)

@@ -2,5 +2,6 @@ from .basic import Basic
from .hola import Hola
from .nordvpn import NordVPN
from .surfsharkvpn import SurfsharkVPN
from .windscribevpn import WindscribeVPN

__all__ = ("Basic", "Hola", "NordVPN", "SurfsharkVPN")
__all__ = ("Basic", "Hola", "NordVPN", "SurfsharkVPN", "WindscribeVPN")

99
unshackle/core/proxies/windscribevpn.py
Normal file
@@ -0,0 +1,99 @@
import json
import random
import re
from typing import Optional

import requests

from unshackle.core.proxies.proxy import Proxy


class WindscribeVPN(Proxy):
    def __init__(self, username: str, password: str, server_map: Optional[dict[str, str]] = None):
        """
        Proxy Service using WindscribeVPN Service Credentials.

        A username and password must be provided. These are Service Credentials, not your Login Credentials.
        The Service Credentials can be found here: https://windscribe.com/getconfig/openvpn
        """
        if not username:
            raise ValueError("No Username was provided to the WindscribeVPN Proxy Service.")
        if not password:
            raise ValueError("No Password was provided to the WindscribeVPN Proxy Service.")

        if server_map is not None and not isinstance(server_map, dict):
            raise TypeError(f"Expected server_map to be a dict mapping a region to a hostname, not '{server_map!r}'.")

        self.username = username
        self.password = password
        self.server_map = server_map or {}

        self.countries = self.get_countries()

    def __repr__(self) -> str:
        countries = len(set(x.get("country_code") for x in self.countries if x.get("country_code")))
        servers = sum(
            len(host)
            for location in self.countries
            for group in location.get("groups", [])
            for host in group.get("hosts", [])
        )

        return f"{countries} Countr{['ies', 'y'][countries == 1]} ({servers} Server{['s', ''][servers == 1]})"

    def get_proxy(self, query: str) -> Optional[str]:
        """
        Get an HTTPS proxy URI for a WindscribeVPN server.
        """
        query = query.lower()

        if query in self.server_map:
            hostname = self.server_map[query]
        else:
            if re.match(r"^[a-z]+$", query):
                hostname = self.get_random_server(query)
            else:
                raise ValueError(f"The query provided is unsupported and unrecognized: {query}")

        if not hostname:
            return None

        return f"https://{self.username}:{self.password}@{hostname}:443"

    def get_random_server(self, country_code: str) -> Optional[str]:
        """
        Get a random server hostname for a country.

        Returns None if no servers are available for the country.
        """
        for location in self.countries:
            if location.get("country_code", "").lower() == country_code.lower():
                hostnames = []
                for group in location.get("groups", []):
                    for host in group.get("hosts", []):
                        if hostname := host.get("hostname"):
                            hostnames.append(hostname)

                if hostnames:
                    return random.choice(hostnames)

        return None

    @staticmethod
    def get_countries() -> list[dict]:
        """Get a list of available Countries and their metadata."""
        res = requests.get(
            url="https://assets.windscribe.com/serverlist/firefox/1/1",
            headers={
                "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36",
                "Content-Type": "application/json",
            },
        )
        if not res.ok:
            raise ValueError(f"Failed to get a list of WindscribeVPN locations [{res.status_code}]")

        try:
            data = res.json()
            return data.get("data", [])
        except json.JSONDecodeError:
            raise ValueError("Could not decode list of WindscribeVPN locations, not JSON data.")
|
||||
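The country → group → host traversal used by get_random_server above can be exercised standalone against a minimal serverlist-shaped dict; the hostname below is illustrative, not a real Windscribe endpoint:

```python
import random
from typing import Optional


def pick_random_hostname(countries: list[dict], country_code: str) -> Optional[str]:
    # Flatten groups -> hosts for the matching country, then pick one at random.
    for location in countries:
        if location.get("country_code", "").lower() == country_code.lower():
            hostnames = [
                hostname
                for group in location.get("groups", [])
                for host in group.get("hosts", [])
                if (hostname := host.get("hostname"))
            ]
            if hostnames:
                return random.choice(hostnames)
    return None


countries = [{"country_code": "US", "groups": [{"hosts": [{"hostname": "us-east-001.example.net"}]}]}]
print(pick_random_hostname(countries, "us"))  # us-east-001.example.net
print(pick_random_hostname(countries, "de"))  # None
```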
@@ -2,9 +2,16 @@

from __future__ import annotations

import logging
import random
import time
import warnings
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime
from typing import Any
from urllib.parse import urlparse

from curl_cffi.requests import Session as CurlSession
from curl_cffi.requests import Response, Session, exceptions

from unshackle.core.config import config
@@ -14,30 +21,145 @@ warnings.filterwarnings(
    "ignore", message="Make sure you are using https over https proxy.*", category=RuntimeWarning, module="curl_cffi.*"
)

FINGERPRINT_PRESETS = {
    "okhttp4": {
        "ja3": (
            "771,"  # TLS 1.2
            "4865-4866-4867-49195-49196-52393-49199-49200-52392-49171-49172-156-157-47-53,"  # Ciphers
            "0-23-65281-10-11-35-16-5-13-51-45-43,"  # Extensions
            "29-23-24,"  # Named groups (x25519, secp256r1, secp384r1)
            "0"  # EC point formats
        ),
        "akamai": "4:16777216|16711681|0|m,p,a,s",
        "description": "OkHttp 3.x/4.x (BoringSSL TLS stack)",
    },
    "okhttp5": {
        "ja3": (
            "771,"  # TLS 1.2
            "4865-4866-4867-49195-49199-49196-49200-52393-52392-49171-49172-156-157-47-53,"  # Ciphers
            "0-23-65281-10-11-35-16-5-13-51-45-43,"  # Extensions
            "29-23-24,"  # Named groups (x25519, secp256r1, secp384r1)
            "0"  # EC point formats
        ),
        "akamai": "4:16777216|16711681|0|m,p,a,s",
        "description": "OkHttp 5.x (BoringSSL TLS stack)",
    },
}
class Session(CurlSession):
    """curl_cffi Session with warning suppression."""

    def request(self, method, url, **kwargs):
        with warnings.catch_warnings():
            warnings.filterwarnings(
                "ignore", message="Make sure you are using https over https proxy.*", category=RuntimeWarning
            )
class MaxRetriesError(exceptions.RequestException):
    def __init__(self, message, cause=None):
        super().__init__(message)
        self.__cause__ = cause


class CurlSession(Session):
    def __init__(
        self,
        max_retries: int = 10,
        backoff_factor: float = 0.2,
        max_backoff: float = 60.0,
        status_forcelist: list[int] | None = None,
        allowed_methods: set[str] | None = None,
        catch_exceptions: tuple[type[Exception], ...] | None = None,
        **session_kwargs: Any,
    ):
        super().__init__(**session_kwargs)

        self.max_retries = max_retries
        self.backoff_factor = backoff_factor
        self.max_backoff = max_backoff
        self.status_forcelist = status_forcelist or [429, 500, 502, 503, 504]
        self.allowed_methods = allowed_methods or {"GET", "POST", "HEAD", "OPTIONS", "PUT", "DELETE", "TRACE"}
        self.catch_exceptions = catch_exceptions or (
            exceptions.ConnectionError,
            exceptions.ProxyError,
            exceptions.SSLError,
            exceptions.Timeout,
        )
        self.log = logging.getLogger(self.__class__.__name__)
    def get_sleep_time(self, response: Response | None, attempt: int) -> float | None:
        if response:
            retry_after = response.headers.get("Retry-After")
            if retry_after:
                try:
                    return float(retry_after)
                except ValueError:
                    if retry_date := parsedate_to_datetime(retry_after):
                        return (retry_date - datetime.now(timezone.utc)).total_seconds()

        if attempt == 0:
            return 0.0

        backoff_value = self.backoff_factor * (2 ** (attempt - 1))
        jitter = backoff_value * 0.1
        sleep_time = backoff_value + random.uniform(-jitter, jitter)
        return min(sleep_time, self.max_backoff)

    def request(self, method: str, url: str, **kwargs: Any) -> Response:
        if method.upper() not in self.allowed_methods:
            return super().request(method, url, **kwargs)

        last_exception = None
        response = None

def session(browser: str | None = None, **kwargs) -> Session:
        for attempt in range(self.max_retries + 1):
            try:
                response = super().request(method, url, **kwargs)
                if response.status_code not in self.status_forcelist:
                    return response
                last_exception = exceptions.HTTPError(f"Received status code: {response.status_code}")
                self.log.warning(
                    f"{response.status_code} {response.reason}({urlparse(url).path}). Retrying... "
                    f"({attempt + 1}/{self.max_retries})"
                )

            except self.catch_exceptions as e:
                last_exception = e
                response = None
                self.log.warning(
                    f"{e.__class__.__name__}({urlparse(url).path}). Retrying... ({attempt + 1}/{self.max_retries})"
                )

            if attempt < self.max_retries:
                if sleep_duration := self.get_sleep_time(response, attempt + 1):
                    if sleep_duration > 0:
                        time.sleep(sleep_duration)
            else:
                break

        raise MaxRetriesError(f"Max retries exceeded for {method} {url}", cause=last_exception)
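The backoff schedule implemented by get_sleep_time (when no Retry-After header applies) is easy to sanity-check in isolation; this is a sketch of the same formula, not the class itself:

```python
import random


def sleep_time(attempt: int, backoff_factor: float = 0.2, max_backoff: float = 60.0) -> float:
    # Exponential backoff with +/-10% jitter, capped at max_backoff.
    if attempt == 0:
        return 0.0
    backoff = backoff_factor * (2 ** (attempt - 1))
    jitter = backoff * 0.1
    return min(backoff + random.uniform(-jitter, jitter), max_backoff)


# attempt 3 -> base 0.2 * 2**2 = 0.8s, so the jittered value stays within 0.72..0.88
print(0.72 <= sleep_time(3) <= 0.88)  # True
print(sleep_time(20))  # 60.0 (capped)
```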
def session(
    browser: str | None = None,
    ja3: str | None = None,
    akamai: str | None = None,
    extra_fp: dict | None = None,
    **kwargs,
) -> CurlSession:
    """
    Create a curl_cffi session that impersonates a browser.
    Create a curl_cffi session that impersonates a browser or custom TLS/HTTP fingerprint.

    This is a full replacement for requests.Session with browser impersonation
    and anti-bot capabilities. The session uses curl-impersonate under the hood
    to mimic real browser behavior.

    Args:
        browser: Browser to impersonate (e.g. "chrome124", "firefox", "safari").
        browser: Browser to impersonate (e.g. "chrome124", "firefox", "safari") OR
            fingerprint preset name (e.g. "okhttp4").
            Uses the configured default from curl_impersonate.browser if not specified.
            See https://github.com/lexiforest/curl_cffi#sessions for available options.
            Available presets: okhttp4
            See https://github.com/lexiforest/curl_cffi#sessions for browser options.
        ja3: Custom JA3 TLS fingerprint string (format: "SSLVersion,Ciphers,Extensions,Curves,PointFormats").
            When provided, curl_cffi will use this exact TLS fingerprint instead of the browser's default.
            See https://curl-cffi.readthedocs.io/en/latest/impersonate/customize.html
        akamai: Custom Akamai HTTP/2 fingerprint string (format: "SETTINGS|WINDOW_UPDATE|PRIORITY|PSEUDO_HEADERS").
            When provided, curl_cffi will use this exact HTTP/2 fingerprint instead of the browser's default.
            See https://curl-cffi.readthedocs.io/en/latest/impersonate/customize.html
        extra_fp: Additional fingerprint parameters dict for advanced customization.
            See https://curl-cffi.readthedocs.io/en/latest/impersonate/customize.html
        **kwargs: Additional arguments passed to CurlSession constructor:
            - headers: Additional headers (dict)
            - cookies: Cookie jar or dict
@@ -49,31 +171,80 @@ def session(browser: str | None = None, **kwargs) -> Session:
            - max_redirects: Maximum redirect count (int)
            - cert: Client certificate (str or tuple)

    Returns:
        curl_cffi.requests.Session configured with browser impersonation, common headers,
        and equivalent retry behavior to requests.Session.
    Extra arguments for retry handler:
        - max_retries: Maximum number of retries (int, default 10)
        - backoff_factor: Backoff factor (float, default 0.2)
        - max_backoff: Maximum backoff time (float, default 60.0)
        - status_forcelist: List of status codes to force retry (list, default [429, 500, 502, 503, 504])
        - allowed_methods: List of allowed HTTP methods (set, default {"GET", "POST", "HEAD", "OPTIONS", "PUT", "DELETE", "TRACE"})
        - catch_exceptions: List of exceptions to catch (tuple, default (exceptions.ConnectionError, exceptions.ProxyError, exceptions.SSLError, exceptions.Timeout))

    Example:
    Returns:
        curl_cffi.requests.Session configured with browser impersonation or custom fingerprints,
        common headers, and equivalent retry behavior to requests.Session.

    Examples:
        # Standard browser impersonation
        from unshackle.core.session import session

        class MyService(Service):
            @staticmethod
            def get_session():
                return session()  # Uses config default browser
    """
    if browser is None:
        browser = config.curl_impersonate.get("browser", "chrome124")

    session_config = {
        "impersonate": browser,
        "timeout": 30.0,
        "allow_redirects": True,
        "max_redirects": 15,
        "verify": True,
    }
        # Use OkHttp 4.x preset for Android TV
        class AndroidService(Service):
            @staticmethod
            def get_session():
                return session("okhttp4")

        # Custom fingerprint (manual)
        class CustomService(Service):
            @staticmethod
            def get_session():
                return session(
                    ja3="771,4865-4866-4867-49195...",
                    akamai="1:65536;2:0;4:6291456;6:262144|15663105|0|m,a,s,p",
                )

        # With retry configuration
        class MyService(Service):
            @staticmethod
            def get_session():
                return session(
                    "okhttp4",
                    max_retries=5,
                    status_forcelist=[429, 500],
                    allowed_methods={"GET", "HEAD", "OPTIONS"},
                )
    """
    if browser and browser in FINGERPRINT_PRESETS:
        preset = FINGERPRINT_PRESETS[browser]
        if ja3 is None:
            ja3 = preset.get("ja3")
        if akamai is None:
            akamai = preset.get("akamai")
        if extra_fp is None:
            extra_fp = preset.get("extra_fp")
        browser = None

    if browser is None and ja3 is None and akamai is None:
        browser = config.curl_impersonate.get("browser", "chrome")

    session_config = {}
    if browser:
        session_config["impersonate"] = browser

    if ja3:
        session_config["ja3"] = ja3
    if akamai:
        session_config["akamai"] = akamai
    if extra_fp:
        session_config["extra_fp"] = extra_fp

    session_config.update(kwargs)
    session_obj = Session(**session_config)
    session_obj.headers.update(config.headers)

    session_obj = CurlSession(**session_config)
    session_obj.headers.update(config.headers)
    return session_obj
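The precedence in the body above (an explicit ja3/akamai wins over the preset, and a preset suppresses browser impersonation) reduces to a small pure function; the preset values here are shortened placeholders, not the real fingerprints:

```python
FINGERPRINT_PRESETS = {
    "okhttp4": {"ja3": "771,4865-4866-4867,0-23-65281,29-23-24,0", "akamai": "4:16777216|16711681|0|m,p,a,s"}
}


def resolve_fingerprint(browser=None, ja3=None, akamai=None):
    # A known preset name fills in whichever fingerprints the caller did not pass,
    # then clears `browser` so no browser impersonation profile is applied on top.
    if browser and browser in FINGERPRINT_PRESETS:
        preset = FINGERPRINT_PRESETS[browser]
        ja3 = ja3 or preset.get("ja3")
        akamai = akamai or preset.get("akamai")
        browser = None
    return browser, ja3, akamai


print(resolve_fingerprint("okhttp4")[0])                # None (impersonation dropped)
print(resolve_fingerprint("okhttp4", ja3="custom")[1])  # custom (explicit value wins)
print(resolve_fingerprint("chrome124"))                 # ('chrome124', None, None)
```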
@@ -180,6 +180,167 @@ class TitleCacher:
            "hit_rate": f"{hit_rate:.1f}%",
        }

    def get_cached_tmdb(
        self, title_id: str, kind: str, region: Optional[str] = None, account_hash: Optional[str] = None
    ) -> Optional[dict]:
        """
        Get cached TMDB data for a title.

        Args:
            title_id: The title identifier
            kind: "movie" or "tv"
            region: The region/proxy identifier
            account_hash: Hash of account credentials

        Returns:
            Dict with 'detail' and 'external_ids' if cached and valid, None otherwise
        """
        if not config.title_cache_enabled:
            return None

        cache_key = self._generate_cache_key(title_id, region, account_hash)
        cache = self.cacher.get(cache_key, version=1)

        if not cache or not cache.data:
            return None

        tmdb_data = getattr(cache.data, "tmdb_data", None)
        if not tmdb_data:
            return None

        tmdb_expiration = tmdb_data.get("expires_at")
        if not tmdb_expiration or datetime.now() >= tmdb_expiration:
            self.log.debug(f"TMDB cache expired for {title_id}")
            return None

        if tmdb_data.get("kind") != kind:
            self.log.debug(f"TMDB cache kind mismatch for {title_id}: cached {tmdb_data.get('kind')}, requested {kind}")
            return None

        self.log.debug(f"TMDB cache hit for {title_id}")
        return {
            "detail": tmdb_data.get("detail"),
            "external_ids": tmdb_data.get("external_ids"),
            "fetched_at": tmdb_data.get("fetched_at"),
        }

    def cache_tmdb(
        self,
        title_id: str,
        detail_response: dict,
        external_ids_response: dict,
        kind: str,
        region: Optional[str] = None,
        account_hash: Optional[str] = None,
    ) -> None:
        """
        Cache TMDB data for a title.

        Args:
            title_id: The title identifier
            detail_response: Full TMDB detail API response
            external_ids_response: Full TMDB external_ids API response
            kind: "movie" or "tv"
            region: The region/proxy identifier
            account_hash: Hash of account credentials
        """
        if not config.title_cache_enabled:
            return

        cache_key = self._generate_cache_key(title_id, region, account_hash)
        cache = self.cacher.get(cache_key, version=1)

        if not cache or not cache.data:
            self.log.debug(f"Cannot cache TMDB data: no title cache exists for {title_id}")
            return

        now = datetime.now()
        tmdb_data = {
            "detail": detail_response,
            "external_ids": external_ids_response,
            "kind": kind,
            "fetched_at": now,
            "expires_at": now + timedelta(days=7),  # 7-day expiration
        }

        cache.data.tmdb_data = tmdb_data

        cache.set(cache.data, expiration=cache.expiration)
        self.log.debug(f"Cached TMDB data for {title_id} (kind={kind})")

    def get_cached_simkl(
        self, title_id: str, region: Optional[str] = None, account_hash: Optional[str] = None
    ) -> Optional[dict]:
        """
        Get cached Simkl data for a title.

        Args:
            title_id: The title identifier
            region: The region/proxy identifier
            account_hash: Hash of account credentials

        Returns:
            Simkl response dict if cached and valid, None otherwise
        """
        if not config.title_cache_enabled:
            return None

        cache_key = self._generate_cache_key(title_id, region, account_hash)
        cache = self.cacher.get(cache_key, version=1)

        if not cache or not cache.data:
            return None

        simkl_data = getattr(cache.data, "simkl_data", None)
        if not simkl_data:
            return None

        simkl_expiration = simkl_data.get("expires_at")
        if not simkl_expiration or datetime.now() >= simkl_expiration:
            self.log.debug(f"Simkl cache expired for {title_id}")
            return None

        self.log.debug(f"Simkl cache hit for {title_id}")
        return simkl_data.get("response")

    def cache_simkl(
        self,
        title_id: str,
        simkl_response: dict,
        region: Optional[str] = None,
        account_hash: Optional[str] = None,
    ) -> None:
        """
        Cache Simkl data for a title.

        Args:
            title_id: The title identifier
            simkl_response: Full Simkl API response
            region: The region/proxy identifier
            account_hash: Hash of account credentials
        """
        if not config.title_cache_enabled:
            return

        cache_key = self._generate_cache_key(title_id, region, account_hash)
        cache = self.cacher.get(cache_key, version=1)

        if not cache or not cache.data:
            self.log.debug(f"Cannot cache Simkl data: no title cache exists for {title_id}")
            return

        now = datetime.now()
        simkl_data = {
            "response": simkl_response,
            "fetched_at": now,
            "expires_at": now + timedelta(days=7),
        }

        cache.data.simkl_data = simkl_data

        cache.set(cache.data, expiration=cache.expiration)
        self.log.debug(f"Cached Simkl data for {title_id}")


def get_region_from_proxy(proxy_url: Optional[str]) -> Optional[str]:
    """
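Both get_cached_tmdb and get_cached_simkl gate on the same expires_at check; a minimal sketch of that freshness rule, assuming only the dict shape these methods write:

```python
from datetime import datetime, timedelta
from typing import Optional


def is_fresh(entry: dict, now: Optional[datetime] = None) -> bool:
    # A missing expires_at, or a clock at/past it, counts as a cache miss.
    now = now or datetime.now()
    expires_at = entry.get("expires_at")
    return bool(expires_at) and now < expires_at


fetched = datetime(2025, 11, 10, 12, 0)
entry = {"fetched_at": fetched, "expires_at": fetched + timedelta(days=7)}
print(is_fresh(entry, now=fetched + timedelta(days=3)))  # True
print(is_fresh(entry, now=fetched + timedelta(days=7)))  # False (>= expires_at)
print(is_fresh({}))  # False (no expires_at)
```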
@@ -95,9 +95,9 @@ class Episode(Title):
                media_info.audio_tracks,
                key=lambda x: (
                    float(x.bit_rate) if x.bit_rate else 0,
                    bool(x.format_additionalfeatures and "JOC" in x.format_additionalfeatures)
                    bool(x.format_additionalfeatures and "JOC" in x.format_additionalfeatures),
                ),
                reverse=True
                reverse=True,
            )
            primary_audio_track = sorted_audio[0]
            unique_audio_languages = len({x.language.split("-")[0] for x in media_info.audio_tracks if x.language})
@@ -173,20 +173,26 @@ class Episode(Title):
        if primary_video_track:
            codec = primary_video_track.format
            hdr_format = primary_video_track.hdr_format_commercial
            hdr_format_full = primary_video_track.hdr_format or ""
            trc = (
                primary_video_track.transfer_characteristics
                or primary_video_track.transfer_characteristics_original
                or ""
            )
            frame_rate = float(primary_video_track.frame_rate)

            # Primary HDR format detection
            if hdr_format:
                if (primary_video_track.hdr_format or "").startswith("Dolby Vision"):
                if hdr_format_full.startswith("Dolby Vision"):
                    name += " DV"
                    if DYNAMIC_RANGE_MAP.get(hdr_format) and DYNAMIC_RANGE_MAP.get(hdr_format) != "DV":
                    if any(indicator in hdr_format_full for indicator in ["HDR10", "SMPTE ST 2086"]):
                        name += " HDR"
                else:
                    name += f" {DYNAMIC_RANGE_MAP.get(hdr_format)} "
            elif trc and "HLG" in trc:
            elif "HLG" in trc or "Hybrid Log-Gamma" in trc or "ARIB STD-B67" in trc or "arib-std-b67" in trc.lower():
                name += " HLG"
            elif any(indicator in trc for indicator in ["PQ", "SMPTE ST 2084", "BT.2100"]) or "smpte2084" in trc.lower() or "bt.2020-10" in trc.lower():
                name += " HDR"
            if frame_rate > 30:
                name += " HFR"
            name += f" {VIDEO_CODEC_MAP.get(codec, codec)}"

@@ -58,9 +58,9 @@ class Movie(Title):
                media_info.audio_tracks,
                key=lambda x: (
                    float(x.bit_rate) if x.bit_rate else 0,
                    bool(x.format_additionalfeatures and "JOC" in x.format_additionalfeatures)
                    bool(x.format_additionalfeatures and "JOC" in x.format_additionalfeatures),
                ),
                reverse=True
                reverse=True,
            )
            primary_audio_track = sorted_audio[0]
            unique_audio_languages = len({x.language.split("-")[0] for x in media_info.audio_tracks if x.language})
@@ -124,20 +124,26 @@ class Movie(Title):
        if primary_video_track:
            codec = primary_video_track.format
            hdr_format = primary_video_track.hdr_format_commercial
            hdr_format_full = primary_video_track.hdr_format or ""
            trc = (
                primary_video_track.transfer_characteristics
                or primary_video_track.transfer_characteristics_original
                or ""
            )
            frame_rate = float(primary_video_track.frame_rate)

            # Primary HDR format detection
            if hdr_format:
                if (primary_video_track.hdr_format or "").startswith("Dolby Vision"):
                if hdr_format_full.startswith("Dolby Vision"):
                    name += " DV"
                    if DYNAMIC_RANGE_MAP.get(hdr_format) and DYNAMIC_RANGE_MAP.get(hdr_format) != "DV":
                    if any(indicator in hdr_format_full for indicator in ["HDR10", "SMPTE ST 2086"]):
                        name += " HDR"
                else:
                    name += f" {DYNAMIC_RANGE_MAP.get(hdr_format)} "
            elif trc and "HLG" in trc:
            elif "HLG" in trc or "Hybrid Log-Gamma" in trc or "ARIB STD-B67" in trc or "arib-std-b67" in trc.lower():
                name += " HLG"
            elif any(indicator in trc for indicator in ["PQ", "SMPTE ST 2084", "BT.2100"]) or "smpte2084" in trc.lower() or "bt.2020-10" in trc.lower():
                name += " HDR"
            if frame_rate > 30:
                name += " HFR"
            name += f" {VIDEO_CODEC_MAP.get(codec, codec)}"
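The widened HDR detection in these hunks can be summarised as a small classifier. This is a simplified sketch: it keys only on MediaInfo's full HDR_Format and transfer-characteristics strings and skips the DYNAMIC_RANGE_MAP branch of the real code:

```python
def hdr_labels(hdr_format_full: str, trc: str) -> list[str]:
    labels = []
    if hdr_format_full.startswith("Dolby Vision"):
        labels.append("DV")
        # DV profiles with an HDR10-compatible base layer also get the HDR tag.
        if any(ind in hdr_format_full for ind in ("HDR10", "SMPTE ST 2086")):
            labels.append("HDR")
    elif "HLG" in trc or "Hybrid Log-Gamma" in trc or "arib-std-b67" in trc.lower():
        labels.append("HLG")
    elif any(ind in trc for ind in ("PQ", "SMPTE ST 2084", "BT.2100")) or "smpte2084" in trc.lower():
        labels.append("HDR")
    return labels


print(hdr_labels("Dolby Vision / SMPTE ST 2086, HDR10 compatible", ""))  # ['DV', 'HDR']
print(hdr_labels("", "HLG"))     # ['HLG']
print(hdr_labels("", "PQ"))      # ['HDR']
print(hdr_labels("", "BT.709"))  # []
```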
@@ -239,25 +239,29 @@ class Subtitle(Track):

        # Sanitize WebVTT timestamps before parsing
        text = Subtitle.sanitize_webvtt_timestamps(text)
        preserve_formatting = config.subtitle.get("preserve_formatting", True)

        try:
            caption_set = pycaption.WebVTTReader().read(text)
            Subtitle.merge_same_cues(caption_set)
            Subtitle.filter_unwanted_cues(caption_set)
            subtitle_text = pycaption.WebVTTWriter().write(caption_set)
            self.path.write_text(subtitle_text, encoding="utf8")
        except pycaption.exceptions.CaptionReadSyntaxError:
            # If first attempt fails, try more aggressive sanitization
            text = Subtitle.sanitize_webvtt(text)
        if preserve_formatting:
            self.path.write_text(text, encoding="utf8")
        else:
            try:
                caption_set = pycaption.WebVTTReader().read(text)
                Subtitle.merge_same_cues(caption_set)
                Subtitle.filter_unwanted_cues(caption_set)
                subtitle_text = pycaption.WebVTTWriter().write(caption_set)
                self.path.write_text(subtitle_text, encoding="utf8")
            except Exception:
                # Keep the sanitized version even if parsing failed
                self.path.write_text(text, encoding="utf8")
            except pycaption.exceptions.CaptionReadSyntaxError:
                # If first attempt fails, try more aggressive sanitization
                text = Subtitle.sanitize_webvtt(text)
                try:
                    caption_set = pycaption.WebVTTReader().read(text)
                    Subtitle.merge_same_cues(caption_set)
                    Subtitle.filter_unwanted_cues(caption_set)
                    subtitle_text = pycaption.WebVTTWriter().write(caption_set)
                    self.path.write_text(subtitle_text, encoding="utf8")
                except Exception:
                    # Keep the sanitized version even if parsing failed
                    self.path.write_text(text, encoding="utf8")

    @staticmethod
    def sanitize_webvtt_timestamps(text: str) -> str:
@@ -979,20 +983,33 @@ class Subtitle(Track):
                    stdout=subprocess.DEVNULL,
                )
            else:
                sub = Subtitles(self.path)
                if config.subtitle.get("convert_before_strip", True) and self.codec != Subtitle.Codec.SubRip:
                    self.path = self.convert(Subtitle.Codec.SubRip)
                    self.codec = Subtitle.Codec.SubRip

                try:
                    sub.filter(rm_fonts=True, rm_ast=True, rm_music=True, rm_effects=True, rm_names=True, rm_author=True)
                except ValueError as e:
                    if "too many values to unpack" in str(e):
                        # Retry without name removal if the error is due to multiple colons in time references
                        # This can happen with lines like "at 10:00 and 2:00"
                        sub = Subtitles(self.path)
                sub = Subtitles(self.path)
                try:
                    sub.filter(
                        rm_fonts=True, rm_ast=True, rm_music=True, rm_effects=True, rm_names=False, rm_author=True
                        rm_fonts=True, rm_ast=True, rm_music=True, rm_effects=True, rm_names=True, rm_author=True
                    )
                except ValueError as e:
                    if "too many values to unpack" in str(e):
                        # Retry without name removal if the error is due to multiple colons in time references
                        # This can happen with lines like "at 10:00 and 2:00"
                        sub = Subtitles(self.path)
                        sub.filter(
                            rm_fonts=True, rm_ast=True, rm_music=True, rm_effects=True, rm_names=False, rm_author=True
                        )
                    else:
                        raise
                sub.save()
                except (IOError, OSError) as e:
                    if "is not valid subtitle file" in str(e):
                        self.log.warning(f"Failed to strip SDH from {self.path.name}: {e}")
                        self.log.warning("Continuing without SDH stripping for this subtitle")
                    else:
                        raise
                sub.save()

    def reverse_rtl(self) -> None:
        """
@@ -25,7 +25,7 @@ from unshackle.core.constants import DOWNLOAD_CANCELLED, DOWNLOAD_LICENCE_ONLY
from unshackle.core.downloaders import aria2c, curl_impersonate, n_m3u8dl_re, requests
from unshackle.core.drm import DRM_T, PlayReady, Widevine
from unshackle.core.events import events
from unshackle.core.utilities import get_boxes, try_ensure_utf8
from unshackle.core.utilities import get_boxes, get_extension, try_ensure_utf8
from unshackle.core.utils.subprocess import ffprobe


@@ -47,6 +47,8 @@ class Track:
        drm: Optional[Iterable[DRM_T]] = None,
        edition: Optional[str] = None,
        downloader: Optional[Callable] = None,
        downloader_args: Optional[dict] = None,
        from_file: Optional[Path] = None,
        data: Optional[Union[dict, defaultdict]] = None,
        id_: Optional[str] = None,
        extra: Optional[Any] = None,
@@ -69,6 +71,10 @@ class Track:
            raise TypeError(f"Expected edition to be a {str}, not {type(edition)}")
        if not isinstance(downloader, (Callable, type(None))):
            raise TypeError(f"Expected downloader to be a {Callable}, not {type(downloader)}")
        if not isinstance(downloader_args, (dict, type(None))):
            raise TypeError(f"Expected downloader_args to be a {dict}, not {type(downloader_args)}")
        if not isinstance(from_file, (Path, type(None))):
            raise TypeError(f"Expected from_file to be a {Path}, not {type(from_file)}")
        if not isinstance(data, (dict, defaultdict, type(None))):
            raise TypeError(f"Expected data to be a {dict} or {defaultdict}, not {type(data)}")

@@ -100,6 +106,8 @@ class Track:
        self.drm = drm
        self.edition: str = edition
        self.downloader = downloader
        self.downloader_args = downloader_args
        self.from_file = from_file
        self._data: defaultdict[Any, Any] = defaultdict(dict)
        self.data = data or {}
        self.extra: Any = extra or {}  # allow anything for extra, but default to a dict
@@ -203,7 +211,21 @@ class Track:
        save_path = config.directories.temp / f"{track_type}_{self.id}.mp4"
        if track_type == "Subtitle":
            save_path = save_path.with_suffix(f".{self.codec.extension}")
            if self.downloader.__name__ == "n_m3u8dl_re":
                # n_m3u8dl_re doesn't support directly downloading subtitles from URLs
                # or when the subtitle has a direct file extension
                if self.downloader.__name__ == "n_m3u8dl_re" and (
                    self.descriptor == self.Descriptor.URL
                    or get_extension(self.url)
                    in {
                        ".srt",
                        ".vtt",
                        ".ttml",
                        ".ssa",
                        ".ass",
                        ".stpp",
                        ".wvtt",
                        ".xml",
                    }
                ):
                    self.downloader = requests

        if self.descriptor != self.Descriptor.URL:
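The routing rule in that hunk (n_m3u8dl_re cannot fetch plain subtitle files, so direct URLs or known subtitle extensions fall back to the requests downloader) can be tested in isolation. get_extension is the project's helper; the stdlib approximation below stands in for it:

```python
from pathlib import Path
from urllib.parse import urlparse

TEXT_SUB_EXTS = {".srt", ".vtt", ".ttml", ".ssa", ".ass", ".stpp", ".wvtt", ".xml"}


def needs_plain_http_downloader(downloader_name: str, descriptor: str, url: str) -> bool:
    # Mirror of the condition above: only n_m3u8dl_re is rerouted, and only for
    # direct-URL descriptors or URLs whose path ends in a subtitle extension.
    ext = Path(urlparse(url).path).suffix.lower()
    return downloader_name == "n_m3u8dl_re" and (descriptor == "URL" or ext in TEXT_SUB_EXTS)


print(needs_plain_http_downloader("n_m3u8dl_re", "HLS", "https://cdn.example.com/subs/en.vtt"))  # True
print(needs_plain_http_downloader("n_m3u8dl_re", "HLS", "https://cdn.example.com/master.m3u8"))  # False
print(needs_plain_http_downloader("requests", "URL", "https://cdn.example.com/subs/en.vtt"))     # False
```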
@@ -28,21 +28,21 @@ class UpdateChecker:
    DEFAULT_CHECK_INTERVAL = 24 * 60 * 60

    @classmethod
    def _get_cache_file(cls) -> Path:
    def get_cache_file(cls) -> Path:
        """Get the path to the update check cache file."""
        from unshackle.core.config import config

        return config.directories.cache / "update_check.json"

    @classmethod
    def _load_cache_data(cls) -> dict:
    def load_cache_data(cls) -> dict:
        """
        Load cache data from file.

        Returns:
            Cache data dictionary or empty dict if loading fails
        """
        cache_file = cls._get_cache_file()
        cache_file = cls.get_cache_file()

        if not cache_file.exists():
            return {}
@@ -54,7 +54,7 @@ class UpdateChecker:
            return {}

    @staticmethod
    def _parse_version(version_string: str) -> str:
    def parse_version(version_string: str) -> str:
        """
        Parse and normalize version string by removing 'v' prefix.

@@ -107,7 +107,7 @@ class UpdateChecker:
            return None

        data = response.json()
        latest_version = cls._parse_version(data.get("tag_name", ""))
        latest_version = cls.parse_version(data.get("tag_name", ""))

        return latest_version if cls._is_valid_version(latest_version) else None

@@ -125,7 +125,7 @@ class UpdateChecker:
        Returns:
            True if we should check for updates, False otherwise
        """
        cache_data = cls._load_cache_data()
        cache_data = cls.load_cache_data()

        if not cache_data:
            return True
@@ -144,7 +144,7 @@ class UpdateChecker:
            latest_version: The latest version found, if any
            current_version: The current version being used
        """
        cache_file = cls._get_cache_file()
        cache_file = cls.get_cache_file()

        try:
            cache_file.parent.mkdir(parents=True, exist_ok=True)
@@ -231,7 +231,7 @@ class UpdateChecker:
        Returns:
            The latest version string if an update is available from cache, None otherwise
        """
        cache_data = cls._load_cache_data()
        cache_data = cls.load_cache_data()

        if not cache_data:
            return None
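The tag normalisation these hunks make public as parse_version amounts to stripping a leading 'v' from a GitHub tag_name; a sketch of that contract, assuming nothing beyond the prefix strip:

```python
def parse_version(version_string: str) -> str:
    # "v2.0.0" -> "2.0.0"; already-bare versions pass through unchanged.
    return version_string.removeprefix("v")


print(parse_version("v2.0.0"))  # 2.0.0
print(parse_version("2.0.0"))   # 2.0.0
```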
@@ -1,23 +1,27 @@
|
||||
import ast
|
||||
import contextlib
|
||||
import importlib.util
|
||||
import json
|
||||
import logging
|
||||
import os
|
||||
import re
|
||||
import socket
|
||||
import sys
|
||||
import time
|
||||
import traceback
|
||||
import unicodedata
|
||||
from collections import defaultdict
|
||||
from datetime import datetime
|
||||
from datetime import datetime, timezone
|
||||
from pathlib import Path
|
||||
from types import ModuleType
|
||||
from typing import Optional, Sequence, Union
|
||||
from typing import Any, Optional, Sequence, Union
|
||||
from urllib.parse import ParseResult, urlparse
|
||||
from uuid import uuid4
|
||||
|
||||
import chardet
|
||||
import requests
|
||||
from construct import ValidationError
|
||||
from fontTools import ttLib
|
||||
from langcodes import Language, closest_match
|
||||
from pymp4.parser import Box
|
||||
from unidecode import unidecode
|
||||
@@ -26,6 +30,30 @@ from unshackle.core.cacher import Cacher
|
||||
from unshackle.core.config import config
|
||||
from unshackle.core.constants import LANGUAGE_EXACT_DISTANCE, LANGUAGE_MAX_DISTANCE
|
||||

+"""
+Utility functions for the unshackle media archival tool.
+
+This module provides various utility functions, including:
+- Font discovery and fallback system for subtitle rendering
+- Cross-platform system font scanning with Windows → Linux font family mapping
+- Log file management and rotation
+- IP geolocation with caching and provider rotation
+- Language matching utilities
+- MP4/ISOBMFF box parsing
+- File sanitization and path handling
+- Structured JSON debug logging
+
+Font System:
+The font subsystem enables cross-platform font discovery for ASS/SSA subtitles.
+On Linux, it scans standard font directories and maps Windows font names (Arial,
+Times New Roman) to their Linux equivalents (Liberation Sans, Liberation Serif).
+
+Main Font Functions:
+- get_system_fonts(): Discover installed fonts across platforms
+- find_font_with_fallbacks(): Match fonts with intelligent fallback strategies
+- suggest_font_packages(): Recommend packages to install for missing fonts
+"""
+

 def rotate_log_file(log_path: Path, keep: int = 20) -> Path:
     """
@@ -122,7 +150,7 @@ def is_exact_match(language: Union[str, Language], languages: Sequence[Union[str
     return closest_match(language, list(map(str, languages)))[1] <= LANGUAGE_EXACT_DISTANCE


-def get_boxes(data: bytes, box_type: bytes, as_bytes: bool = False) -> Box:
+def get_boxes(data: bytes, box_type: bytes, as_bytes: bool = False) -> Box:  # type: ignore
     """
     Scan a byte array for a wanted MP4/ISOBMFF box, then parse and yield each find.

@@ -425,21 +453,263 @@ def get_extension(value: Union[str, Path, ParseResult]) -> Optional[str]:
     return ext


-def get_system_fonts() -> dict[str, Path]:
-    if sys.platform == "win32":
-        import winreg
-
-        with winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE) as reg:
-            key = winreg.OpenKey(reg, r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Fonts", 0, winreg.KEY_READ)
-            total_fonts = winreg.QueryInfoKey(key)[1]
-            return {
-                name.replace(" (TrueType)", ""): Path(r"C:\Windows\Fonts", filename)
-                for n in range(0, total_fonts)
-                for name, filename, _ in [winreg.EnumValue(key, n)]
-            }
-    else:
-        # TODO: Get System Fonts for Linux and mac OS
-        return {}
+def extract_font_family(font_path: Path) -> Optional[str]:
+    """
+    Extract font family name from TTF/OTF file using fontTools.
+
+    Args:
+        font_path: Path to the font file
+
+    Returns:
+        Font family name if successfully extracted, None otherwise
+    """
+    # Suppress verbose fontTools logging during font table parsing
+    import io
+
+    logging.getLogger("fontTools").setLevel(logging.ERROR)
+    logging.getLogger("fontTools.ttLib").setLevel(logging.ERROR)
+    logging.getLogger("fontTools.ttLib.tables").setLevel(logging.ERROR)
+    logging.getLogger("fontTools.ttLib.tables._n_a_m_e").setLevel(logging.ERROR)
+    stderr_backup = sys.stderr
+    sys.stderr = io.StringIO()
+
+    try:
+        font = ttLib.TTFont(font_path, lazy=True)
+        name_table = font["name"]
+
+        # Try to get family name (nameID 1) for Windows platform (platformID 3)
+        # This matches the naming convention used in Windows registry
+        for record in name_table.names:
+            if record.nameID == 1 and record.platformID == 3:
+                return record.toUnicode()
+
+        # Fallback to other platforms if Windows name not found
+        for record in name_table.names:
+            if record.nameID == 1:
+                return record.toUnicode()
+
+    except Exception:
+        pass
+    finally:
+        sys.stderr = stderr_backup
+
+    return None
+
+
+def get_windows_fonts() -> dict[str, Path]:
+    """
+    Get fonts from Windows registry.
+
+    Returns:
+        Dictionary mapping font family names to their file paths
+    """
+    import winreg
+
+    with winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE) as reg:
+        key = winreg.OpenKey(reg, r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Fonts", 0, winreg.KEY_READ)
+        total_fonts = winreg.QueryInfoKey(key)[1]
+        return {
+            name.replace(" (TrueType)", ""): Path(r"C:\Windows\Fonts", filename)
+            for n in range(0, total_fonts)
+            for name, filename, _ in [winreg.EnumValue(key, n)]
+        }
+
+
+def scan_font_directory(font_dir: Path, fonts: dict[str, Path], log: logging.Logger) -> None:
+    """
+    Scan a single directory for fonts.
+
+    Args:
+        font_dir: Directory to scan
+        fonts: Dictionary to populate with found fonts
+        log: Logger instance for error reporting
+    """
+    font_files = list(font_dir.rglob("*.ttf")) + list(font_dir.rglob("*.otf"))
+
+    for font_file in font_files:
+        try:
+            if family_name := extract_font_family(font_file):
+                if family_name not in fonts:
+                    fonts[family_name] = font_file
+        except Exception as e:
+            log.debug(f"Failed to process {font_file}: {e}")
+
+
+def get_unix_fonts() -> dict[str, Path]:
+    """
+    Get fonts from Linux/macOS standard directories.
+
+    Returns:
+        Dictionary mapping font family names to their file paths
+    """
+    log = logging.getLogger("get_system_fonts")
+    fonts = {}
+
+    font_dirs = [
+        Path("/usr/share/fonts"),
+        Path("/usr/local/share/fonts"),
+        Path.home() / ".fonts",
+        Path.home() / ".local/share/fonts",
+    ]
+
+    for font_dir in font_dirs:
+        if not font_dir.exists():
+            continue
+
+        try:
+            scan_font_directory(font_dir, fonts, log)
+        except Exception as e:
+            log.warning(f"Failed to scan {font_dir}: {e}")
+
+    return fonts
+
+
+def get_system_fonts() -> dict[str, Path]:
+    """
+    Get system fonts as a mapping of font family names to font file paths.
+
+    On Windows: Uses registry to get font display names
+    On Linux/macOS: Scans standard font directories and extracts family names using fontTools
+
+    Returns:
+        Dictionary mapping font family names to their file paths
+    """
+    if sys.platform == "win32":
+        return get_windows_fonts()
+    return get_unix_fonts()
+
+
+# Common Windows font names mapped to their Linux equivalents
+# Ordered by preference (first match is used)
+FONT_ALIASES = {
+    "Arial": ["Liberation Sans", "DejaVu Sans", "Nimbus Sans", "FreeSans"],
+    "Arial Black": ["Liberation Sans", "DejaVu Sans", "Nimbus Sans"],
+    "Arial Bold": ["Liberation Sans", "DejaVu Sans"],
+    "Arial Unicode MS": ["DejaVu Sans", "Noto Sans", "FreeSans"],
+    "Times New Roman": ["Liberation Serif", "DejaVu Serif", "Nimbus Roman", "FreeSerif"],
+    "Courier New": ["Liberation Mono", "DejaVu Sans Mono", "Nimbus Mono PS", "FreeMono"],
+    "Comic Sans MS": ["Comic Neue", "Comic Relief", "DejaVu Sans"],
+    "Georgia": ["Gelasio", "DejaVu Serif", "Liberation Serif"],
+    "Impact": ["Impact", "Anton", "Liberation Sans"],
+    "Trebuchet MS": ["Ubuntu", "DejaVu Sans", "Liberation Sans"],
+    "Verdana": ["DejaVu Sans", "Bitstream Vera Sans", "Liberation Sans"],
+    "Tahoma": ["DejaVu Sans", "Liberation Sans"],
+    "Adobe Arabic": ["Noto Sans Arabic", "DejaVu Sans"],
+    "Noto Sans Thai": ["Noto Sans Thai", "Noto Sans"],
+}
+
+
+def find_case_insensitive(font_name: str, fonts: dict[str, Path]) -> Optional[Path]:
+    """
+    Find font by case-insensitive name match.
+
+    Args:
+        font_name: Font family name to find
+        fonts: Dictionary of available fonts
+
+    Returns:
+        Path to matched font, or None if not found
+    """
+    font_lower = font_name.lower()
+    for name, path in fonts.items():
+        if name.lower() == font_lower:
+            return path
+    return None
+
+
+def find_font_with_fallbacks(font_name: str, system_fonts: dict[str, Path]) -> Optional[Path]:
+    """
+    Find a font by name with intelligent fallback matching.
+
+    Tries multiple strategies in order:
+    1. Exact match (case-sensitive)
+    2. Case-insensitive match
+    3. Alias lookup (Windows → Linux font equivalents)
+    4. Partial/prefix match
+
+    Args:
+        font_name: The requested font family name (e.g., "Arial", "Times New Roman")
+        system_fonts: Dictionary of available fonts (family name → path)
+
+    Returns:
+        Path to the matched font file, or None if no match found
+    """
+    if not system_fonts:
+        return None
+
+    # Strategy 1: Exact match (case-sensitive)
+    if font_name in system_fonts:
+        return system_fonts[font_name]
+
+    # Strategy 2: Case-insensitive match
+    if result := find_case_insensitive(font_name, system_fonts):
+        return result
+
+    # Strategy 3: Alias lookup
+    if font_name in FONT_ALIASES:
+        for alias in FONT_ALIASES[font_name]:
+            # Try exact match for alias
+            if alias in system_fonts:
+                return system_fonts[alias]
+            # Try case-insensitive match for alias
+            if result := find_case_insensitive(alias, system_fonts):
+                return result
+
+    # Strategy 4: Partial/prefix match as last resort
+    font_name_lower = font_name.lower()
+    for name, path in system_fonts.items():
+        if name.lower().startswith(font_name_lower):
+            return path
+
+    return None
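The four fallback strategies above can be exercised with a toy font table. This is a minimal standalone sketch that mirrors `find_font_with_fallbacks` (the real function lives in unshackle's utils module); the alias table is a trimmed copy and the font paths are made up for illustration.

```python
from pathlib import Path
from typing import Optional

# Trimmed copy of the FONT_ALIASES table above, for illustration only
ALIASES = {"Arial": ["Liberation Sans", "DejaVu Sans"]}


def resolve_font(name: str, fonts: dict[str, Path]) -> Optional[Path]:
    """Mirror of the four strategies: exact, case-insensitive, alias, prefix."""
    if name in fonts:  # 1. exact match
        return fonts[name]
    lower = name.lower()
    for n, p in fonts.items():  # 2. case-insensitive match
        if n.lower() == lower:
            return p
    for alias in ALIASES.get(name, ()):  # 3. alias lookup
        for n, p in fonts.items():
            if n.lower() == alias.lower():
                return p
    for n, p in fonts.items():  # 4. prefix match as last resort
        if n.lower().startswith(lower):
            return p
    return None


fonts = {
    "Liberation Sans": Path("/usr/share/fonts/LiberationSans.ttf"),
    "DejaVu Serif": Path("/usr/share/fonts/DejaVuSerif.ttf"),
}
print(resolve_font("Arial", fonts))         # resolved via alias table
print(resolve_font("dejavu serif", fonts))  # resolved case-insensitively
print(resolve_font("Wingdings", fonts))     # no match -> None
```

Note how "Arial" resolves even though no font named Arial is installed: the alias step substitutes a metric-compatible Linux family.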
+
+
+# Mapping of font families to system packages that provide them
+FONT_PACKAGES = {
+    "liberation": {
+        "debian": "fonts-liberation fonts-liberation2",
+        "fonts": ["Liberation Sans", "Liberation Serif", "Liberation Mono"],
+    },
+    "dejavu": {
+        "debian": "fonts-dejavu fonts-dejavu-core fonts-dejavu-extra",
+        "fonts": ["DejaVu Sans", "DejaVu Serif", "DejaVu Sans Mono"],
+    },
+    "noto": {
+        "debian": "fonts-noto fonts-noto-core",
+        "fonts": ["Noto Sans", "Noto Serif", "Noto Sans Mono", "Noto Sans Arabic", "Noto Sans Thai"],
+    },
+    "ubuntu": {
+        "debian": "fonts-ubuntu",
+        "fonts": ["Ubuntu", "Ubuntu Mono"],
+    },
+}
+
+
+def suggest_font_packages(missing_fonts: list[str]) -> dict[str, list[str]]:
+    """
+    Suggest system packages to install for missing fonts.
+
+    Args:
+        missing_fonts: List of font family names that couldn't be found
+
+    Returns:
+        Dictionary mapping package names to lists of fonts they would provide
+    """
+    suggestions = {}
+
+    # Check which fonts from aliases would help
+    needed_aliases = set()
+    for font in missing_fonts:
+        if font in FONT_ALIASES:
+            needed_aliases.update(FONT_ALIASES[font])
+
+    # Map needed aliases to packages
+    for package_name, package_info in FONT_PACKAGES.items():
+        provided_fonts = package_info["fonts"]
+        matching_fonts = [f for f in provided_fonts if f in needed_aliases]
+        if matching_fonts:
+            suggestions[package_info["debian"]] = matching_fonts
+
+    return suggestions
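The package-suggestion logic above is a two-step join: expand the missing fonts through the alias table, then intersect with what each package provides. A standalone sketch with trimmed copies of both tables (package names here are illustrative Debian packages, as in the source):

```python
# Trimmed copies of the FONT_ALIASES / FONT_PACKAGES tables above
FONT_ALIASES = {
    "Arial": ["Liberation Sans", "DejaVu Sans"],
    "Georgia": ["Gelasio", "DejaVu Serif"],
}
FONT_PACKAGES = {
    "liberation": {"debian": "fonts-liberation", "fonts": ["Liberation Sans", "Liberation Serif"]},
    "dejavu": {"debian": "fonts-dejavu", "fonts": ["DejaVu Sans", "DejaVu Serif"]},
}


def suggest(missing: list[str]) -> dict[str, list[str]]:
    """Mirror of suggest_font_packages: alias expansion, then package intersection."""
    needed = set()
    for font in missing:
        needed.update(FONT_ALIASES.get(font, []))
    out = {}
    for info in FONT_PACKAGES.values():
        hits = [f for f in info["fonts"] if f in needed]
        if hits:
            out[info["debian"]] = hits
    return out


print(suggest(["Arial"]))  # both packages can stand in for Arial
print(suggest(["Unknown Font"]))  # nothing to suggest -> {}
```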
@@ -457,3 +727,334 @@ class FPS(ast.NodeVisitor):
     @classmethod
     def parse(cls, expr: str) -> float:
         return cls().visit(ast.parse(expr).body[0])
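`FPS.parse` evaluates frame-rate expressions such as `"30000/1001"` through the `ast` module rather than `eval`, so arbitrary code in the expression cannot run. The visitor body is outside this hunk, so the sketch below is an assumed minimal equivalent (numbers and division only), not the project's exact implementation:

```python
import ast


class SafeFPS(ast.NodeVisitor):
    """Assumed minimal equivalent of the FPS visitor: numbers and division only."""

    def visit_Expr(self, node: ast.Expr) -> float:
        return self.visit(node.value)

    def visit_BinOp(self, node: ast.BinOp) -> float:
        if not isinstance(node.op, ast.Div):
            raise ValueError("only division is allowed in fps expressions")
        return self.visit(node.left) / self.visit(node.right)

    def visit_Constant(self, node: ast.Constant) -> float:
        return float(node.value)

    @classmethod
    def parse(cls, expr: str) -> float:
        # Same entry point as FPS.parse above: parse, then visit the expression
        return cls().visit(ast.parse(expr).body[0])


print(SafeFPS.parse("30000/1001"))  # NTSC rate, ~29.97
print(SafeFPS.parse("25"))
```

Unsupported node types simply fall through `NodeVisitor.generic_visit`, so anything other than a number or a division never produces a value.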
"""
|
||||
Structured JSON debug logging for unshackle.
|
||||
|
||||
Provides comprehensive debugging information for service developers and troubleshooting.
|
||||
When enabled, logs all operations, requests, responses, DRM operations, and errors in JSON format.
|
||||
"""
|
||||
|
||||
|
||||
class DebugLogger:
|
||||
"""
|
||||
Structured JSON debug logger for unshackle.
|
||||
|
||||
Outputs JSON Lines format where each line is a complete JSON object.
|
||||
This makes it easy to parse, filter, and analyze logs programmatically.
|
||||
"""
|
||||
|
||||
def __init__(self, log_path: Optional[Path] = None, enabled: bool = False, log_keys: bool = False):
|
||||
"""
|
||||
Initialize the debug logger.
|
||||
|
||||
Args:
|
||||
log_path: Path to the log file. If None, logging is disabled.
|
||||
enabled: Whether debug logging is enabled.
|
||||
log_keys: Whether to log decryption keys (for debugging key issues).
|
||||
"""
|
||||
self.enabled = enabled and log_path is not None
|
||||
self.log_path = log_path
|
||||
self.session_id = str(uuid4())[:8]
|
||||
self.file_handle = None
|
||||
self.log_keys = log_keys
|
||||
|
||||
if self.enabled:
|
||||
self.log_path.parent.mkdir(parents=True, exist_ok=True)
|
||||
self.file_handle = open(self.log_path, "a", encoding="utf-8")
|
||||
self.log_session_start()
|
||||
|
||||
def log_session_start(self):
|
||||
"""Log the start of a new session with environment information."""
|
||||
import platform
|
||||
|
||||
from unshackle.core import __version__
|
||||
|
||||
self.log(
|
||||
level="INFO",
|
||||
operation="session_start",
|
||||
message="Debug logging session started",
|
||||
context={
|
||||
"unshackle_version": __version__,
|
||||
"python_version": sys.version,
|
||||
"platform": platform.platform(),
|
||||
"platform_system": platform.system(),
|
||||
"platform_release": platform.release(),
|
||||
},
|
||||
)
|
||||
|
||||
+
+    def log(
+        self,
+        level: str = "DEBUG",
+        operation: str = "",
+        message: str = "",
+        context: Optional[dict[str, Any]] = None,
+        service: Optional[str] = None,
+        error: Optional[Exception] = None,
+        request: Optional[dict[str, Any]] = None,
+        response: Optional[dict[str, Any]] = None,
+        duration_ms: Optional[float] = None,
+        success: Optional[bool] = None,
+        **kwargs,
+    ):
+        """
+        Log a structured JSON entry.
+
+        Args:
+            level: Log level (DEBUG, INFO, WARNING, ERROR)
+            operation: Name of the operation being performed
+            message: Human-readable message
+            context: Additional context information
+            service: Service name (e.g., DSNP, NF)
+            error: Exception object if an error occurred
+            request: Request details (URL, method, headers, body)
+            response: Response details (status, headers, body)
+            duration_ms: Operation duration in milliseconds
+            success: Whether the operation succeeded
+            **kwargs: Additional fields to include in the log entry
+        """
+        if not self.enabled or not self.file_handle:
+            return
+
+        entry = {
+            "timestamp": datetime.now(timezone.utc).isoformat(),
+            "session_id": self.session_id,
+            "level": level,
+        }
+
+        if operation:
+            entry["operation"] = operation
+        if message:
+            entry["message"] = message
+        if service:
+            entry["service"] = service
+        if context:
+            entry["context"] = self.sanitize_data(context)
+        if request:
+            entry["request"] = self.sanitize_data(request)
+        if response:
+            entry["response"] = self.sanitize_data(response)
+        if duration_ms is not None:
+            entry["duration_ms"] = duration_ms
+        if success is not None:
+            entry["success"] = success
+
+        if error:
+            entry["error"] = {
+                "type": type(error).__name__,
+                "message": str(error),
+                "traceback": traceback.format_exception(type(error), error, error.__traceback__),
+            }
+
+        for key, value in kwargs.items():
+            if key not in entry:
+                entry[key] = self.sanitize_data(value)
+
+        try:
+            self.file_handle.write(json.dumps(entry, default=str) + "\n")
+            self.file_handle.flush()
+        except Exception as e:
+            print(f"Failed to write debug log: {e}", file=sys.stderr)
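Each call to `log` appends exactly one self-contained JSON object per line (JSON Lines). The round trip can be shown without the class itself; this standalone sketch assembles an entry with the same field names as the method above and re-parses it (the URL is a made-up example):

```python
import io
import json
from datetime import datetime, timezone
from uuid import uuid4

buffer = io.StringIO()  # stands in for the append-mode file handle
session_id = str(uuid4())[:8]

# Assemble an entry the way DebugLogger.log does: base fields, then optionals
entry = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "session_id": session_id,
    "level": "DEBUG",
    "operation": "service_call",
    "request": {"method": "GET", "url": "https://example.com/manifest"},
}
buffer.write(json.dumps(entry, default=str) + "\n")

# JSON Lines: every line parses independently, no surrounding array needed
lines = buffer.getvalue().splitlines()
parsed = json.loads(lines[0])
print(parsed["operation"])  # service_call
```

`default=str` matters in the real method: values such as `Path` objects or exceptions that survive sanitization are stringified instead of raising `TypeError`.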
+
+    def sanitize_data(self, data: Any) -> Any:
+        """
+        Sanitize data for JSON serialization.
+        Handles complex objects and removes sensitive information.
+        """
+        if data is None:
+            return None
+
+        if isinstance(data, (str, int, float, bool)):
+            return data
+
+        if isinstance(data, (list, tuple)):
+            return [self.sanitize_data(item) for item in data]
+
+        if isinstance(data, dict):
+            sanitized = {}
+            for key, value in data.items():
+                key_lower = str(key).lower()
+                has_prefix = key_lower.startswith("has_")
+
+                is_always_sensitive = not has_prefix and any(
+                    sensitive in key_lower for sensitive in ["password", "token", "secret", "auth", "cookie"]
+                )
+
+                is_key_field = (
+                    "key" in key_lower
+                    and not has_prefix
+                    and not any(safe in key_lower for safe in ["_count", "_id", "_type", "kid", "keys_", "key_found"])
+                )
+
+                should_redact = is_always_sensitive or (is_key_field and not self.log_keys)
+
+                if should_redact:
+                    sanitized[key] = "[REDACTED]"
+                else:
+                    sanitized[key] = self.sanitize_data(value)
+            return sanitized
+
+        if isinstance(data, bytes):
+            try:
+                return data.hex()
+            except Exception:
+                return "[BINARY_DATA]"
+
+        if isinstance(data, Path):
+            return str(data)
+
+        try:
+            return str(data)
+        except Exception:
+            return f"[{type(data).__name__}]"
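The redaction rules in `sanitize_data` are the subtle part: `has_` prefixed keys are always exempt, a fixed list of substrings (`password`, `token`, ...) is always redacted, and `key`-like fields are redacted unless `log_keys` is on or the key matches a safe pattern such as `_count`. A standalone mirror of just that decision, with its behavior pinned down:

```python
SENSITIVE = ("password", "token", "secret", "auth", "cookie")
SAFE_KEY_PARTS = ("_count", "_id", "_type", "kid", "keys_", "key_found")


def should_redact(key: str, log_keys: bool = False) -> bool:
    """Mirror of the redaction decision in sanitize_data above."""
    k = key.lower()
    has_prefix = k.startswith("has_")
    always = not has_prefix and any(s in k for s in SENSITIVE)
    key_field = "key" in k and not has_prefix and not any(s in k for s in SAFE_KEY_PARTS)
    return always or (key_field and not log_keys)


print(should_redact("auth_token"))               # True: sensitive substring
print(should_redact("has_auth"))                 # False: has_ prefix is exempt
print(should_redact("content_key"))              # True: key field, log_keys off
print(should_redact("content_key", log_keys=True))  # False: keys explicitly allowed
print(should_redact("key_count"))                # False: safe suffix
```

The `has_` exemption lets boolean flags like `has_auth` or `has_key` survive in logs, since they carry no secret material.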
+
+    def log_operation_start(self, operation: str, **kwargs) -> str:
+        """
+        Log the start of an operation and return an operation ID.
+
+        Args:
+            operation: Name of the operation
+            **kwargs: Additional context
+
+        Returns:
+            Operation ID that can be used to log the end of the operation
+        """
+        op_id = str(uuid4())[:8]
+        self.log(
+            level="DEBUG",
+            operation=f"{operation}_start",
+            message=f"Starting operation: {operation}",
+            operation_id=op_id,
+            **kwargs,
+        )
+        return op_id
+
+    def log_operation_end(
+        self, operation: str, operation_id: str, success: bool = True, duration_ms: Optional[float] = None, **kwargs
+    ):
+        """
+        Log the end of an operation.
+
+        Args:
+            operation: Name of the operation
+            operation_id: Operation ID from log_operation_start
+            success: Whether the operation succeeded
+            duration_ms: Operation duration in milliseconds
+            **kwargs: Additional context
+        """
+        self.log(
+            level="INFO" if success else "ERROR",
+            operation=f"{operation}_end",
+            message=f"Finished operation: {operation}",
+            operation_id=operation_id,
+            success=success,
+            duration_ms=duration_ms,
+            **kwargs,
+        )
+
+    def log_service_call(self, method: str, url: str, **kwargs):
+        """
+        Log a service API call.
+
+        Args:
+            method: HTTP method (GET, POST, etc.)
+            url: Request URL
+            **kwargs: Additional request details (headers, body, etc.)
+        """
+        self.log(level="DEBUG", operation="service_call", request={"method": method, "url": url, **kwargs})
+
+    def log_drm_operation(self, drm_type: str, operation: str, **kwargs):
+        """
+        Log a DRM operation (PSSH extraction, license request, key retrieval).
+
+        Args:
+            drm_type: DRM type (Widevine, PlayReady, etc.)
+            operation: DRM operation name
+            **kwargs: Additional context (PSSH, KIDs, keys, etc.)
+        """
+        self.log(
+            level="DEBUG", operation=f"drm_{operation}", message=f"{drm_type} {operation}", drm_type=drm_type, **kwargs
+        )
+
+    def log_vault_query(self, vault_name: str, operation: str, **kwargs):
+        """
+        Log a vault query operation.
+
+        Args:
+            vault_name: Name of the vault
+            operation: Vault operation (get_key, add_key, etc.)
+            **kwargs: Additional context (KID, key, success, etc.)
+        """
+        self.log(
+            level="DEBUG",
+            operation=f"vault_{operation}",
+            message=f"Vault {vault_name}: {operation}",
+            vault=vault_name,
+            **kwargs,
+        )
+
+    def log_error(self, operation: str, error: Exception, **kwargs):
+        """
+        Log an error with full context.
+
+        Args:
+            operation: Operation that failed
+            error: Exception that occurred
+            **kwargs: Additional context
+        """
+        self.log(
+            level="ERROR",
+            operation=operation,
+            message=f"Error in {operation}: {str(error)}",
+            error=error,
+            success=False,
+            **kwargs,
+        )
+
+    def close(self):
+        """Close the log file and clean up resources."""
+        if self.file_handle:
+            self.log(level="INFO", operation="session_end", message="Debug logging session ended")
+            self.file_handle.close()
+            self.file_handle = None
+
+
+# Global debug logger instance
+_debug_logger: Optional[DebugLogger] = None
+
+
+def get_debug_logger() -> Optional[DebugLogger]:
+    """Get the global debug logger instance."""
+    return _debug_logger
+
+
+def init_debug_logger(log_path: Optional[Path] = None, enabled: bool = False, log_keys: bool = False):
+    """
+    Initialize the global debug logger.
+
+    Args:
+        log_path: Path to the log file
+        enabled: Whether debug logging is enabled
+        log_keys: Whether to log decryption keys (for debugging key issues)
+    """
+    global _debug_logger
+    if _debug_logger:
+        _debug_logger.close()
+    _debug_logger = DebugLogger(log_path=log_path, enabled=enabled, log_keys=log_keys)
+
+
+def close_debug_logger():
+    """Close the global debug logger."""
+    global _debug_logger
+    if _debug_logger:
+        _debug_logger.close()
+        _debug_logger = None
+
+
+__all__ = (
+    "DebugLogger",
+    "get_debug_logger",
+    "init_debug_logger",
+    "close_debug_logger",
+)
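The module-level `init_debug_logger` / `get_debug_logger` / `close_debug_logger` trio is a classic module singleton. The lifecycle can be demonstrated without importing unshackle; `MiniLogger` below is a deliberately tiny stand-in for `DebugLogger` (real sessions also write session_start/session_end entries and sanitize fields):

```python
import json
import tempfile
from pathlib import Path


class MiniLogger:
    """Tiny stand-in for DebugLogger, enough to show the singleton lifecycle."""

    def __init__(self, log_path: Path):
        log_path.parent.mkdir(parents=True, exist_ok=True)
        self.fh = open(log_path, "a", encoding="utf-8")

    def log(self, **fields):
        self.fh.write(json.dumps(fields) + "\n")
        self.fh.flush()

    def close(self):
        self.fh.close()


_logger = None


def init_logger(path: Path):  # cf. init_debug_logger: closes any previous instance
    global _logger
    if _logger:
        _logger.close()
    _logger = MiniLogger(path)


def get_logger():  # cf. get_debug_logger
    return _logger


def close_logger():  # cf. close_debug_logger
    global _logger
    if _logger:
        _logger.close()
        _logger = None


log_file = Path(tempfile.mkdtemp()) / "debug.jsonl"
init_logger(log_file)
get_logger().log(level="INFO", operation="session_start")
close_logger()
print(log_file.read_text())
```

Re-initializing closes the previous handle first, so two sessions never hold the same file open; callers only ever touch the module functions, never the instance directly.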
@@ -1,7 +1,6 @@
 from __future__ import annotations

 import logging
-import os
 import re
 import subprocess
 import tempfile
@@ -44,7 +43,11 @@ def _get_session() -> requests.Session:


 def _api_key() -> Optional[str]:
-    return config.tmdb_api_key or os.getenv("TMDB_API_KEY")
+    return config.tmdb_api_key


+def _simkl_client_id() -> Optional[str]:
+    return config.simkl_client_id
+
+
 def _clean(s: str) -> str:
@@ -62,10 +65,44 @@ def fuzzy_match(a: str, b: str, threshold: float = 0.8) -> bool:
     return ratio >= threshold
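`fuzzy_match` compares two titles with a similarity ratio against a threshold (default 0.8); its body is outside this hunk, so the sketch below is an assumed `difflib`-based equivalent, not necessarily the project's exact implementation:

```python
from difflib import SequenceMatcher


def fuzzy_match(a: str, b: str, threshold: float = 0.8) -> bool:
    """Assumed sketch: case-insensitive similarity ratio against a threshold."""
    ratio = SequenceMatcher(None, a.lower(), b.lower()).ratio()
    return ratio >= threshold


print(fuzzy_match("Breaking Bad", "breaking bad"))      # case difference only
print(fuzzy_match("Breaking Bad", "Better Call Saul"))  # clearly different titles
```

A ratio threshold rather than strict equality is what lets Simkl/TMDB results with minor punctuation or casing differences still count as the same title.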

-def search_simkl(title: str, year: Optional[int], kind: str) -> Tuple[Optional[dict], Optional[str], Optional[int]]:
-    """Search Simkl API for show information by filename (no auth required)."""
+def search_simkl(
+    title: str,
+    year: Optional[int],
+    kind: str,
+    title_cacher=None,
+    cache_title_id: Optional[str] = None,
+    cache_region: Optional[str] = None,
+    cache_account_hash: Optional[str] = None,
+) -> Tuple[Optional[dict], Optional[str], Optional[int]]:
+    """Search Simkl API for show information by filename."""
+
+    if title_cacher and cache_title_id:
+        cached_simkl = title_cacher.get_cached_simkl(cache_title_id, cache_region, cache_account_hash)
+        if cached_simkl:
+            log.debug("Using cached Simkl data")
+            if cached_simkl.get("type") == "episode" and "show" in cached_simkl:
+                show_info = cached_simkl["show"]
+                show_title = show_info.get("title")
+                tmdb_id = show_info.get("ids", {}).get("tmdbtv")
+                if tmdb_id:
+                    tmdb_id = int(tmdb_id)
+                return cached_simkl, show_title, tmdb_id
+            elif cached_simkl.get("type") == "movie" and "movie" in cached_simkl:
+                movie_info = cached_simkl["movie"]
+                movie_title = movie_info.get("title")
+                ids = movie_info.get("ids", {})
+                tmdb_id = ids.get("tmdb") or ids.get("moviedb")
+                if tmdb_id:
+                    tmdb_id = int(tmdb_id)
+                return cached_simkl, movie_title, tmdb_id
+
     log.debug("Searching Simkl for %r (%s, %s)", title, kind, year)

+    client_id = _simkl_client_id()
+    if not client_id:
+        log.debug("No SIMKL client ID configured; skipping SIMKL search")
+        return None, None, None
+
     # Construct appropriate filename based on type
     filename = f"{title}"
     if year:
@@ -78,7 +115,8 @@ def search_simkl(title: str, year: Optional[int], kind: str) -> Tuple[Optional[d

     try:
         session = _get_session()
-        resp = session.post("https://api.simkl.com/search/file", json={"file": filename}, timeout=30)
+        headers = {"simkl-api-key": client_id}
+        resp = session.post("https://api.simkl.com/search/file", json={"file": filename}, headers=headers, timeout=30)
         resp.raise_for_status()
         data = resp.json()
         log.debug("Simkl API response received")
@@ -102,19 +140,23 @@ def search_simkl(title: str, year: Optional[int], kind: str) -> Tuple[Optional[d
                 log.debug("Simkl year mismatch: searched %d, got %d", year, show_year)
                 return None, None, None

+            if title_cacher and cache_title_id:
+                try:
+                    title_cacher.cache_simkl(cache_title_id, data, cache_region, cache_account_hash)
+                except Exception as exc:
+                    log.debug("Failed to cache Simkl data: %s", exc)
+
             tmdb_id = show_info.get("ids", {}).get("tmdbtv")
             if tmdb_id:
                 tmdb_id = int(tmdb_id)
             log.debug("Simkl -> %s (TMDB ID %s)", show_title, tmdb_id)
             return data, show_title, tmdb_id

         # Handle movie responses
         elif data.get("type") == "movie" and "movie" in data:
             movie_info = data["movie"]
             movie_title = movie_info.get("title")
             movie_year = movie_info.get("year")

             # Verify title matches and year if provided
             if not fuzzy_match(movie_title, title):
                 log.debug("Simkl title mismatch: searched %r, got %r", title, movie_title)
                 return None, None, None
@@ -122,6 +164,12 @@ def search_simkl(title: str, year: Optional[int], kind: str) -> Tuple[Optional[d
                 log.debug("Simkl year mismatch: searched %d, got %d", year, movie_year)
                 return None, None, None

+            if title_cacher and cache_title_id:
+                try:
+                    title_cacher.cache_simkl(cache_title_id, data, cache_region, cache_account_hash)
+                except Exception as exc:
+                    log.debug("Failed to cache Simkl data: %s", exc)
+
             ids = movie_info.get("ids", {})
             tmdb_id = ids.get("tmdb") or ids.get("moviedb")
             if tmdb_id:
@@ -135,18 +183,85 @@ def search_simkl(title: str, year: Optional[int], kind: str) -> Tuple[Optional[d
     return None, None, None
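The caching calls threaded through `search_simkl` follow a cache-aside pattern: consult `title_cacher` first, fall back to the API on a miss, and store the raw response on success. A dict-backed stand-in (the cacher interface and the fake API payload are simplified assumptions, mirroring the calls above) makes the flow testable:

```python
from typing import Optional


class DictTitleCacher:
    """Dict-backed stand-in for the title cacher interface used above."""

    def __init__(self):
        self.store = {}

    def get_cached_simkl(self, title_id: str) -> Optional[dict]:
        return self.store.get(title_id)

    def cache_simkl(self, title_id: str, data: dict) -> None:
        self.store[title_id] = data


api_calls = []


def fake_api_search(title: str) -> dict:
    """Stands in for the Simkl HTTP call; records each network hit."""
    api_calls.append(title)
    return {"type": "episode", "show": {"title": title, "ids": {"tmdbtv": "1396"}}}


def search(title_id: str, title: str, cacher: DictTitleCacher) -> dict:
    cached = cacher.get_cached_simkl(title_id)
    if cached:
        return cached                  # cache hit: no network call
    data = fake_api_search(title)      # cache miss: query the API
    cacher.cache_simkl(title_id, data)  # store for next time
    return data


cacher = DictTitleCacher()
first = search("t1", "Breaking Bad", cacher)
second = search("t1", "Breaking Bad", cacher)
print(len(api_calls))  # the API was hit only once
```

Note that the real code caches the full response and re-derives `show_title`/`tmdb_id` from it on each hit, so cached and fresh paths return identically shaped tuples.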
def search_show_info(title: str, year: Optional[int], kind: str) -> Tuple[Optional[int], Optional[str], Optional[str]]:
|
||||
def search_show_info(
|
||||
title: str,
|
||||
year: Optional[int],
|
||||
kind: str,
|
||||
title_cacher=None,
|
||||
cache_title_id: Optional[str] = None,
|
||||
cache_region: Optional[str] = None,
|
||||
cache_account_hash: Optional[str] = None,
|
||||
) -> Tuple[Optional[int], Optional[str], Optional[str]]:
|
||||
"""Search for show information, trying Simkl first, then TMDB fallback. Returns (tmdb_id, title, source)."""
|
||||
simkl_data, simkl_title, simkl_tmdb_id = search_simkl(title, year, kind)
|
||||
simkl_data, simkl_title, simkl_tmdb_id = search_simkl(
|
||||
title, year, kind, title_cacher, cache_title_id, cache_region, cache_account_hash
|
||||
)
|
||||
|
||||
if simkl_data and simkl_title and fuzzy_match(simkl_title, title):
|
||||
return simkl_tmdb_id, simkl_title, "simkl"
|
||||
|
||||
tmdb_id, tmdb_title = search_tmdb(title, year, kind)
|
||||
tmdb_id, tmdb_title = search_tmdb(title, year, kind, title_cacher, cache_title_id, cache_region, cache_account_hash)
|
||||
return tmdb_id, tmdb_title, "tmdb"
|
||||
|
||||
|
||||
def search_tmdb(title: str, year: Optional[int], kind: str) -> Tuple[Optional[int], Optional[str]]:
|
||||
def _fetch_tmdb_detail(tmdb_id: int, kind: str) -> Optional[dict]:
|
||||
"""Fetch full TMDB detail response for caching."""
|
||||
api_key = _api_key()
|
||||
if not api_key:
|
||||
return None
|
||||
|
||||
try:
|
||||
session = _get_session()
|
||||
r = session.get(
|
||||
f"https://api.themoviedb.org/3/{kind}/{tmdb_id}",
|
||||
params={"api_key": api_key},
|
||||
timeout=30,
|
||||
)
|
||||
r.raise_for_status()
|
||||
return r.json()
|
||||
except requests.RequestException as exc:
|
||||
log.debug("Failed to fetch TMDB detail: %s", exc)
|
||||
return None
|
||||
|
||||
|
||||
def _fetch_tmdb_external_ids(tmdb_id: int, kind: str) -> Optional[dict]:
|
||||
"""Fetch full TMDB external_ids response for caching."""
|
||||
api_key = _api_key()
|
||||
if not api_key:
|
||||
return None
|
||||
|
||||
try:
|
||||
session = _get_session()
|
||||
r = session.get(
|
||||
f"https://api.themoviedb.org/3/{kind}/{tmdb_id}/external_ids",
|
||||
params={"api_key": api_key},
|
||||
timeout=30,
|
||||
)
|
||||
r.raise_for_status()
|
||||
return r.json()
|
||||
except requests.RequestException as exc:
|
||||
log.debug("Failed to fetch TMDB external IDs: %s", exc)
|
||||
return None
|
||||
|
||||
|
||||
def search_tmdb(
|
||||
title: str,
|
||||
year: Optional[int],
|
||||
kind: str,
|
||||
title_cacher=None,
|
||||
cache_title_id: Optional[str] = None,
|
||||
cache_region: Optional[str] = None,
|
||||
cache_account_hash: Optional[str] = None,
|
||||
) -> Tuple[Optional[int], Optional[str]]:
|
||||
if title_cacher and cache_title_id:
|
||||
cached_tmdb = title_cacher.get_cached_tmdb(cache_title_id, kind, cache_region, cache_account_hash)
|
||||
if cached_tmdb and cached_tmdb.get("detail"):
|
||||
detail = cached_tmdb["detail"]
|
||||
tmdb_id = detail.get("id")
|
||||
tmdb_title = detail.get("title") or detail.get("name")
|
||||
log.debug("Using cached TMDB data: %r (ID %s)", tmdb_title, tmdb_id)
|
||||
return tmdb_id, tmdb_title
|
||||
|
||||
api_key = _api_key()
|
||||
if not api_key:
|
||||
return None, None
|
||||
@@ -205,15 +320,41 @@ def search_tmdb(title: str, year: Optional[int], kind: str) -> Tuple[Optional[in
|
||||
)
|
||||
|
||||
if best_id is not None:
|
||||
if title_cacher and cache_title_id:
|
||||
try:
|
||||
detail_response = _fetch_tmdb_detail(best_id, kind)
|
||||
external_ids_response = _fetch_tmdb_external_ids(best_id, kind)
|
||||
if detail_response and external_ids_response:
|
||||
title_cacher.cache_tmdb(
|
||||
cache_title_id, detail_response, external_ids_response, kind, cache_region, cache_account_hash
|
||||
)
|
||||
except Exception as exc:
|
||||
log.debug("Failed to cache TMDB data: %s", exc)
|
||||
|
||||
return best_id, best_title
|
||||
|
||||
first = results[0]
|
||||
return first.get("id"), first.get("title") or first.get("name")
|
||||
|
||||
|
||||
def get_title(tmdb_id: int, kind: str) -> Optional[str]:
|
||||
def get_title(
|
||||
tmdb_id: int,
|
||||
kind: str,
|
||||
title_cacher=None,
|
||||
cache_title_id: Optional[str] = None,
|
||||
cache_region: Optional[str] = None,
|
||||
cache_account_hash: Optional[str] = None,
|
||||
) -> Optional[str]:
|
||||
"""Fetch the name/title of a TMDB entry by ID."""
|
||||
|
||||
if title_cacher and cache_title_id:
|
||||
cached_tmdb = title_cacher.get_cached_tmdb(cache_title_id, kind, cache_region, cache_account_hash)
|
||||
if cached_tmdb and cached_tmdb.get("detail"):
|
||||
detail = cached_tmdb["detail"]
|
||||
tmdb_title = detail.get("title") or detail.get("name")
|
||||
log.debug("Using cached TMDB title: %r", tmdb_title)
|
||||
return tmdb_title
|
||||
|
||||
api_key = _api_key()
|
||||
if not api_key:
|
||||
return None
|
||||
@@ -226,17 +367,44 @@ def get_title(tmdb_id: int, kind: str) -> Optional[str]:
|
||||
timeout=30,
|
||||
)
|
||||
r.raise_for_status()
|
||||
js = r.json()
|
||||
|
||||
if title_cacher and cache_title_id:
|
||||
try:
|
||||
external_ids_response = _fetch_tmdb_external_ids(tmdb_id, kind)
|
||||
if external_ids_response:
|
||||
title_cacher.cache_tmdb(
|
||||
cache_title_id, js, external_ids_response, kind, cache_region, cache_account_hash
|
||||
)
|
||||
except Exception as exc:
|
||||
log.debug("Failed to cache TMDB data: %s", exc)
|
||||
|
||||
return js.get("title") or js.get("name")
|
||||
except requests.RequestException as exc:
|
||||
log.debug("Failed to fetch TMDB title: %s", exc)
|
||||
return None
|
||||
|
||||
js = r.json()
|
||||
return js.get("title") or js.get("name")
|
||||
|
||||
|
||||
def get_year(tmdb_id: int, kind: str) -> Optional[int]:
def get_year(
    tmdb_id: int,
    kind: str,
    title_cacher=None,
    cache_title_id: Optional[str] = None,
    cache_region: Optional[str] = None,
    cache_account_hash: Optional[str] = None,
) -> Optional[int]:
    """Fetch the release year of a TMDB entry by ID."""

    if title_cacher and cache_title_id:
        cached_tmdb = title_cacher.get_cached_tmdb(cache_title_id, kind, cache_region, cache_account_hash)
        if cached_tmdb and cached_tmdb.get("detail"):
            detail = cached_tmdb["detail"]
            date = detail.get("release_date") or detail.get("first_air_date")
            if date and len(date) >= 4 and date[:4].isdigit():
                year = int(date[:4])
                log.debug("Using cached TMDB year: %d", year)
                return year

    api_key = _api_key()
    if not api_key:
        return None
@@ -249,18 +417,41 @@ def get_year(tmdb_id: int, kind: str) -> Optional[int]:
            timeout=30,
        )
        r.raise_for_status()
        js = r.json()

        if title_cacher and cache_title_id:
            try:
                external_ids_response = _fetch_tmdb_external_ids(tmdb_id, kind)
                if external_ids_response:
                    title_cacher.cache_tmdb(
                        cache_title_id, js, external_ids_response, kind, cache_region, cache_account_hash
                    )
            except Exception as exc:
                log.debug("Failed to cache TMDB data: %s", exc)

        date = js.get("release_date") or js.get("first_air_date")
        if date and len(date) >= 4 and date[:4].isdigit():
            return int(date[:4])
        return None
    except requests.RequestException as exc:
        log.debug("Failed to fetch TMDB year: %s", exc)
        return None

    js = r.json()
    date = js.get("release_date") or js.get("first_air_date")
    if date and len(date) >= 4 and date[:4].isdigit():
        return int(date[:4])
    return None

def external_ids(
    tmdb_id: int,
    kind: str,
    title_cacher=None,
    cache_title_id: Optional[str] = None,
    cache_region: Optional[str] = None,
    cache_account_hash: Optional[str] = None,
) -> dict:
    if title_cacher and cache_title_id:
        cached_tmdb = title_cacher.get_cached_tmdb(cache_title_id, kind, cache_region, cache_account_hash)
        if cached_tmdb and cached_tmdb.get("external_ids"):
            log.debug("Using cached TMDB external IDs")
            return cached_tmdb["external_ids"]

def external_ids(tmdb_id: int, kind: str) -> dict:
    api_key = _api_key()
    if not api_key:
        return {}
@@ -277,13 +468,22 @@ def external_ids(tmdb_id: int, kind: str) -> dict:
        r.raise_for_status()
        js = r.json()
        log.debug("External IDs response: %s", js)

        if title_cacher and cache_title_id:
            try:
                detail_response = _fetch_tmdb_detail(tmdb_id, kind)
                if detail_response:
                    title_cacher.cache_tmdb(cache_title_id, detail_response, js, kind, cache_region, cache_account_hash)
            except Exception as exc:
                log.debug("Failed to cache TMDB data: %s", exc)

        return js
    except requests.RequestException as exc:
        log.warning("Failed to fetch external IDs for %s %s: %s", kind, tmdb_id, exc)
        return {}


def _apply_tags(path: Path, tags: dict[str, str]) -> None:
def apply_tags(path: Path, tags: dict[str, str]) -> None:
    if not tags:
        return
    if not binaries.Mkvpropedit:
@@ -334,83 +534,109 @@ def tag_file(path: Path, title: Title, tmdb_id: Optional[int] | None = None) ->
        name = title.title
        year = title.year
    else:
        _apply_tags(path, custom_tags)
        apply_tags(path, custom_tags)
        return

    if config.tag_imdb_tmdb:
        # If tmdb_id is provided (via --tmdb), skip Simkl and use TMDB directly
        if tmdb_id is not None:
            log.debug("Using provided TMDB ID %s for tags", tmdb_id)
        else:
            # Try Simkl first for automatic lookup
            simkl_data, simkl_title, simkl_tmdb_id = search_simkl(name, year, kind)

            if simkl_data and simkl_title and fuzzy_match(simkl_title, name):
                log.debug("Using Simkl data for tags")
                if simkl_tmdb_id:
                    tmdb_id = simkl_tmdb_id

                # Handle TV show data from Simkl
                if simkl_data.get("type") == "episode" and "show" in simkl_data:
                    show_ids = simkl_data.get("show", {}).get("ids", {})
                    if show_ids.get("imdb"):
                        standard_tags["IMDB"] = show_ids["imdb"]
                    if show_ids.get("tvdb"):
                        standard_tags["TVDB2"] = f"series/{show_ids['tvdb']}"
                    if show_ids.get("tmdbtv"):
                        standard_tags["TMDB"] = f"tv/{show_ids['tmdbtv']}"

                # Handle movie data from Simkl
                elif simkl_data.get("type") == "movie" and "movie" in simkl_data:
                    movie_ids = simkl_data.get("movie", {}).get("ids", {})
                    if movie_ids.get("imdb"):
                        standard_tags["IMDB"] = movie_ids["imdb"]
                    if movie_ids.get("tvdb"):
                        standard_tags["TVDB2"] = f"movies/{movie_ids['tvdb']}"
                    if movie_ids.get("tmdb"):
                        standard_tags["TMDB"] = f"movie/{movie_ids['tmdb']}"

        # Use TMDB API for additional metadata (either from provided ID or Simkl lookup)
        # Check if we have any API keys available for metadata lookup
        api_key = _api_key()
        if not api_key:
            log.debug("No TMDB API key set; applying basic tags only")
            _apply_tags(path, custom_tags)
        simkl_client = _simkl_client_id()

        if not api_key and not simkl_client:
            log.debug("No TMDB API key or Simkl client ID configured; skipping IMDB/TMDB tag lookup")
            apply_tags(path, custom_tags)
            return

        tmdb_title: Optional[str] = None
        if tmdb_id is None:
            tmdb_id, tmdb_title = search_tmdb(name, year, kind)
            log.debug("TMDB search result: %r (ID %s)", tmdb_title, tmdb_id)
        if not tmdb_id or not tmdb_title or not fuzzy_match(tmdb_title, name):
            log.debug("TMDB search did not match; skipping external ID lookup")
            _apply_tags(path, custom_tags)
            return

        prefix = "movie" if kind == "movie" else "tv"
        standard_tags["TMDB"] = f"{prefix}/{tmdb_id}"
        try:
            ids = external_ids(tmdb_id, kind)
        except requests.RequestException as exc:
            log.debug("Failed to fetch external IDs: %s", exc)
            ids = {}
        else:
            log.debug("External IDs found: %s", ids)

        imdb_id = ids.get("imdb_id")
        if imdb_id:
            standard_tags["IMDB"] = imdb_id
        tvdb_id = ids.get("tvdb_id")
        if tvdb_id:
            if kind == "movie":
                standard_tags["TVDB2"] = f"movies/{tvdb_id}"
        # If tmdb_id is provided (via --tmdb), skip Simkl and use TMDB directly
        if tmdb_id is not None:
            log.debug("Using provided TMDB ID %s for tags", tmdb_id)
        else:
                standard_tags["TVDB2"] = f"series/{tvdb_id}"
            # Try Simkl first for automatic lookup (only if client ID is available)
            if simkl_client:
                simkl_data, simkl_title, simkl_tmdb_id = search_simkl(name, year, kind)

                if simkl_data and simkl_title and fuzzy_match(simkl_title, name):
                    log.debug("Using Simkl data for tags")
                    if simkl_tmdb_id:
                        tmdb_id = simkl_tmdb_id

                    # Handle TV show data from Simkl
                    if simkl_data.get("type") == "episode" and "show" in simkl_data:
                        show_ids = simkl_data.get("show", {}).get("ids", {})
                        if show_ids.get("imdb"):
                            standard_tags["IMDB"] = show_ids["imdb"]
                        if show_ids.get("tvdb"):
                            standard_tags["TVDB2"] = f"series/{show_ids['tvdb']}"
                        if show_ids.get("tmdbtv"):
                            standard_tags["TMDB"] = f"tv/{show_ids['tmdbtv']}"

                    # Handle movie data from Simkl
                    elif simkl_data.get("type") == "movie" and "movie" in simkl_data:
                        movie_ids = simkl_data.get("movie", {}).get("ids", {})
                        if movie_ids.get("imdb"):
                            standard_tags["IMDB"] = movie_ids["imdb"]
                        if movie_ids.get("tvdb"):
                            standard_tags["TVDB2"] = f"movies/{movie_ids['tvdb']}"
                        if movie_ids.get("tmdb"):
                            standard_tags["TMDB"] = f"movie/{movie_ids['tmdb']}"

        # Use TMDB API for additional metadata (either from provided ID or Simkl lookup)
        if api_key:
            tmdb_title: Optional[str] = None
            if tmdb_id is None:
                tmdb_id, tmdb_title = search_tmdb(name, year, kind)
                log.debug("TMDB search result: %r (ID %s)", tmdb_title, tmdb_id)
                if not tmdb_id or not tmdb_title or not fuzzy_match(tmdb_title, name):
                    log.debug("TMDB search did not match; skipping external ID lookup")
                else:
                    prefix = "movie" if kind == "movie" else "tv"
                    standard_tags["TMDB"] = f"{prefix}/{tmdb_id}"
                    try:
                        ids = external_ids(tmdb_id, kind)
                    except requests.RequestException as exc:
                        log.debug("Failed to fetch external IDs: %s", exc)
                        ids = {}
                    else:
                        log.debug("External IDs found: %s", ids)

                    imdb_id = ids.get("imdb_id")
                    if imdb_id:
                        standard_tags["IMDB"] = imdb_id
                    tvdb_id = ids.get("tvdb_id")
                    if tvdb_id:
                        if kind == "movie":
                            standard_tags["TVDB2"] = f"movies/{tvdb_id}"
                        else:
                            standard_tags["TVDB2"] = f"series/{tvdb_id}"
            elif tmdb_id is not None:
                # tmdb_id was provided or found via Simkl
                prefix = "movie" if kind == "movie" else "tv"
                standard_tags["TMDB"] = f"{prefix}/{tmdb_id}"
                try:
                    ids = external_ids(tmdb_id, kind)
                except requests.RequestException as exc:
                    log.debug("Failed to fetch external IDs: %s", exc)
                    ids = {}
                else:
                    log.debug("External IDs found: %s", ids)

                imdb_id = ids.get("imdb_id")
                if imdb_id:
                    standard_tags["IMDB"] = imdb_id
                tvdb_id = ids.get("tvdb_id")
                if tvdb_id:
                    if kind == "movie":
                        standard_tags["TVDB2"] = f"movies/{tvdb_id}"
                    else:
                        standard_tags["TVDB2"] = f"series/{tvdb_id}"
        else:
            log.debug("No TMDB API key configured; skipping TMDB external ID lookup")

    merged_tags = {
        **custom_tags,
        **standard_tags,
    }
    _apply_tags(path, merged_tags)
    apply_tags(path, merged_tags)


__all__ = [

@@ -3,8 +3,11 @@ import sys
import typing
from typing import Optional

import pysubs2
from pycaption import Caption, CaptionList, CaptionNode, CaptionReadError, WebVTTReader, WebVTTWriter

from unshackle.core.config import config


class CaptionListExt(CaptionList):
    @typing.no_type_check
@@ -142,7 +145,24 @@ def merge_segmented_webvtt(vtt_raw: str, segment_durations: Optional[list[int]]
    """
    MPEG_TIMESCALE = 90_000

    vtt = WebVTTReaderExt().read(vtt_raw)
    # Check config for conversion method preference
    conversion_method = config.subtitle.get("conversion_method", "auto")
    use_pysubs2 = conversion_method in ("pysubs2", "auto")

    if use_pysubs2:
        # Try using pysubs2 first for more lenient parsing
        try:
            # Use pysubs2 to parse and normalize the VTT
            subs = pysubs2.SSAFile.from_string(vtt_raw)
            # Convert back to WebVTT string for pycaption processing
            normalized_vtt = subs.to_string("vtt")
            vtt = WebVTTReaderExt().read(normalized_vtt)
        except Exception:
            # Fall back to direct pycaption parsing
            vtt = WebVTTReaderExt().read(vtt_raw)
    else:
        # Use pycaption directly
        vtt = WebVTTReaderExt().read(vtt_raw)
    for lang in vtt.get_languages():
        prev_caption = None
        duplicate_index: list[int] = []

@@ -1,3 +1,10 @@
# API key for The Movie Database (TMDB)
tmdb_api_key: ""

# Client ID for SIMKL API (optional, improves metadata matching)
# Get your free client ID at: https://simkl.com/settings/developer/
simkl_client_id: ""

# Group or Username to postfix to the end of all download filenames following a dash
tag: user_tag

@@ -32,6 +39,26 @@ title_cache_enabled: true # Enable/disable title caching globally (default: true
title_cache_time: 1800 # Cache duration in seconds (default: 1800 = 30 minutes)
title_cache_max_retention: 86400 # Maximum cache retention for fallback when API fails (default: 86400 = 24 hours)

# Debug logging configuration
# Comprehensive JSON-based debug logging for troubleshooting and service development
debug: false # Enable structured JSON debug logging (default: false)
# When enabled with --debug flag or set to true:
# - Creates JSON Lines (.jsonl) log files with complete debugging context
# - Logs: session info, CLI params, service config, CDM details, authentication,
#   titles, tracks metadata, DRM operations, vault queries, errors with stack traces
# - File location: logs/unshackle_debug_{service}_{timestamp}.jsonl
# - Also creates text log: logs/unshackle_root_{timestamp}.log

debug_keys: false # Log decryption keys in debug logs (default: false)
# Set to true to include actual decryption keys in logs
# Useful for debugging key retrieval and decryption issues
# SECURITY NOTE: Passwords, tokens, cookies, and session tokens
# are ALWAYS redacted regardless of this setting
# Only affects: content_key, key fields (the actual CEKs)
# Never affects: kid, keys_count, key_id (metadata is always logged)

# Muxing configuration
muxing:
  set_title: false
@@ -109,6 +136,74 @@ cdm:
  default: netflix_standard_l3

# Use pywidevine Serve-compliant Remote CDMs

# Example: Custom CDM API Configuration
# This demonstrates the highly configurable custom_api type that can adapt to any CDM API format
# - name: "chrome"
#   type: "custom_api"
#   host: "http://remotecdm.test/"
#   timeout: 30
#   device:
#     name: "ChromeCDM"
#     type: "CHROME"
#     system_id: 34312
#     security_level: 3
#   auth:
#     type: "header"
#     header_name: "x-api-key"
#     key: "YOUR_API_KEY_HERE"
#   custom_headers:
#     User-Agent: "Unshackle/2.0.0"
#   endpoints:
#     get_request:
#       path: "/get-challenge"
#       method: "POST"
#       timeout: 30
#     decrypt_response:
#       path: "/get-keys"
#       method: "POST"
#       timeout: 30
#   request_mapping:
#     get_request:
#       param_names:
#         scheme: "device"
#         init_data: "init_data"
#       static_params:
#         scheme: "Widevine"
#     decrypt_response:
#       param_names:
#         scheme: "device"
#         license_request: "license_request"
#         license_response: "license_response"
#       static_params:
#         scheme: "Widevine"
#   response_mapping:
#     get_request:
#       fields:
#         challenge: "challenge"
#         session_id: "session_id"
#         message: "message"
#         message_type: "message_type"
#       response_types:
#         - condition: "message_type == 'license-request'"
#           type: "license_request"
#       success_conditions:
#         - "message == 'success'"
#     decrypt_response:
#       fields:
#         keys: "keys"
#         message: "message"
#       key_fields:
#         kid: "kid"
#         key: "key"
#         type: "type"
#       success_conditions:
#         - "message == 'success'"
#   caching:
#     enabled: true
#     use_vaults: true
#     check_cached_first: true

remote_cdm:
  - name: "chrome"
    device_name: chrome
@@ -239,27 +334,40 @@ headers:

# Override default filenames used across unshackle
filenames:
  log: "unshackle_{name}_{time}.log"
  debug_log: "unshackle_debug_{service}_{time}.jsonl" # JSON Lines debug log file
  config: "config.yaml"
  root_config: "unshackle.yaml"
  chapters: "Chapters_{title}_{random}.txt"
  subtitle: "Subtitle_{id}_{language}.srt"

# API key for The Movie Database (TMDB)
tmdb_api_key: ""

# conversion_method:
#  - auto (default): Smart routing - subby for WebVTT/SAMI, standard for others
#  - auto (default): Smart routing - subby for WebVTT/SAMI, pycaption for others
#  - subby: Always use subby with advanced processing
#  - pycaption: Use only pycaption library (no SubtitleEdit, no subby)
#  - subtitleedit: Prefer SubtitleEdit when available, fall back to pycaption
#  - pysubs2: Use pysubs2 library (supports SRT/SSA/ASS/WebVTT/TTML/SAMI/MicroDVD/MPL2/TMP)
subtitle:
  conversion_method: auto
  # sdh_method: Method to use for SDH (hearing impaired) stripping
  #  - auto (default): Try subby (SRT only), then SubtitleEdit (if available), then subtitle-filter
  #  - subby: Use subby library (SRT only)
  #  - subtitleedit: Use SubtitleEdit tool (Windows only, falls back to subtitle-filter)
  #  - filter-subs: Use subtitle-filter library directly
  sdh_method: auto
  # strip_sdh: Automatically create stripped (non-SDH) versions of SDH subtitles
  # Set to false to disable automatic SDH stripping entirely (default: true)
  strip_sdh: true
  # convert_before_strip: Auto-convert VTT/other formats to SRT before using subtitle-filter
  # This ensures compatibility when subtitle-filter is used as fallback (default: true)
  convert_before_strip: true
  # preserve_formatting: Preserve original subtitle formatting (tags, positioning, styling)
  # When true, skips pycaption processing for WebVTT files to keep tags like <i>, <b>, positioning intact
  # Combined with no sub_format setting, ensures subtitles remain in their original format (default: true)
  preserve_formatting: true

# Configuration for pywidevine's serve functionality
serve:
  api_secret: "your-secret-key-here"
  users:
    secret_key_for_user:
      devices:
@@ -273,9 +381,13 @@ services:
  # Service-specific configuration goes here
  # Profile-specific configurations can be nested under service names

  # Example: with profile-specific device configs
  # You can override ANY global configuration option on a per-service basis
  # This allows fine-tuned control for services with special requirements
  # Supported overrides: dl, aria2c, n_m3u8dl_re, curl_impersonate, subtitle, muxing, headers, etc.

  # Example: Comprehensive service configuration showing all features
  EXAMPLE:
    # Global service config
    # Standard service config
    api_key: "service_api_key"

    # Service certificate for Widevine L1/L2 (base64 encoded)
@@ -296,6 +408,42 @@ services:
      app_name: "AIV"
      device_model: "Fire TV Stick 4K"

    # NEW: Configuration overrides (can be combined with profiles and certificates)
    # Override dl command defaults for this service
    dl:
      downloads: 4 # Limit concurrent track downloads (global default: 6)
      workers: 8 # Reduce workers per track (global default: 16)
      lang: ["en", "es-419"] # Different language priority for this service
      sub_format: srt # Force SRT subtitle format

    # Override n_m3u8dl_re downloader settings
    n_m3u8dl_re:
      thread_count: 8 # Lower thread count for rate-limited service (global default: 16)
      use_proxy: true # Force proxy usage for this service
      retry_count: 10 # More retries for unstable connections
      ad_keyword: "advertisement" # Service-specific ad filtering

    # Override aria2c downloader settings
    aria2c:
      max_concurrent_downloads: 2 # Limit concurrent downloads (global default: 4)
      max_connection_per_server: 1 # Single connection per server
      split: 3 # Fewer splits (global default: 5)
      file_allocation: none # Faster allocation for this service

    # Override subtitle processing for this service
    subtitle:
      conversion_method: pycaption # Use specific subtitle converter
      sdh_method: auto

    # Service-specific headers
    headers:
      User-Agent: "Service-specific user agent string"
      Accept-Language: "en-US,en;q=0.9"

    # Override muxing options
    muxing:
      set_title: true

  # Example: Service with different regions per profile
  SERVICE_NAME:
    profiles:
@@ -306,6 +454,25 @@ services:
      region: "GB"
      api_endpoint: "https://api.uk.service.com"

  # Example: Rate-limited service
  RATE_LIMITED_SERVICE:
    dl:
      downloads: 2 # Limit concurrent downloads
      workers: 4 # Reduce workers to avoid rate limits
    n_m3u8dl_re:
      thread_count: 4 # Very low thread count
      retry_count: 20 # More retries for flaky service
    aria2c:
      max_concurrent_downloads: 1 # Download tracks one at a time
      max_connection_per_server: 1 # Single connection only

  # Notes on service-specific overrides:
  # - Overrides are merged with global config, not replaced
  # - Only specified keys are overridden, others use global defaults
  # - Reserved keys (profiles, api_key, certificate, etc.) are NOT treated as overrides
  # - Any dict-type config option can be overridden (dl, aria2c, n_m3u8dl_re, subtitle, etc.)
  # - CLI arguments always take priority over service-specific config

# External proxy provider services
proxy_providers:
  nordvpn:
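The merge behaviour described in the notes above ("merged with global config, not replaced"; only specified keys are overridden) amounts to a recursive dict merge. A minimal sketch of that idea under those stated semantics; the helper name and sample keys are illustrative, not unshackle's actual implementation:

```python
def merge_overrides(base: dict, override: dict) -> dict:
    """Recursively merge override into base, replacing only the keys it specifies."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            # Nested dicts merge key-by-key instead of wholesale replacement
            merged[key] = merge_overrides(merged[key], value)
        else:
            merged[key] = value
    return merged


global_cfg = {"dl": {"downloads": 6, "workers": 16}, "muxing": {"set_title": False}}
service_cfg = {"dl": {"downloads": 2}}
print(merge_overrides(global_cfg, service_cfg))
# {'dl': {'downloads': 2, 'workers': 16}, 'muxing': {'set_title': False}}
```

Note how `workers` and the whole `muxing` block survive from the global config even though the service only overrides `dl.downloads`, matching the note that unspecified keys use global defaults.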
@@ -320,6 +487,12 @@ proxy_providers:
      us: 3844 # force US server #3844 for US proxies
      gb: 2697 # force GB server #2697 for GB proxies
      au: 4621 # force AU server #4621 for AU proxies
  windscribevpn:
    username: your_windscribe_username # Service credentials from https://windscribe.com/getconfig/openvpn
    password: your_windscribe_password # Service credentials (not your login password)
    server_map:
      us: "us-central-096.totallyacdn.com" # force US server
      gb: "uk-london-055.totallyacdn.com" # force GB server
  basic:
    GB:
      - "socks5://username:password@bhx.socks.ipvanish.com:1080" # 1 (Birmingham)

@@ -37,7 +37,9 @@ class SQLite(Vault):
            if not self.has_table(service_name):
                continue

            cursor.execute(f"SELECT `id`, `key_` FROM `{service_name}` WHERE `kid`=? AND `key_`!=?", (kid, "0" * 32))
            cursor.execute(
                f"SELECT `id`, `key_` FROM `{service_name}` WHERE `kid`=? AND `key_`!=?", (kid, "0" * 32)
            )
            cek = cursor.fetchone()
            if cek:
                return cek[1]

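The reformatted query above is a standard parameterized sqlite3 lookup: the `kid` and the all-zero sentinel key are bound as `?` placeholders rather than interpolated into the SQL (only the table name, which cannot be parameterized, goes through the f-string). A self-contained sketch of the same pattern against a throwaway in-memory table; the table name and sample values are illustrative, not the vault code itself:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE `EXAMPLE` (`id` INTEGER PRIMARY KEY, `kid` TEXT, `key_` TEXT)")
# A "null" key (all zeros) that the vault query must skip, and a real key
cur.execute("INSERT INTO `EXAMPLE` (`kid`, `key_`) VALUES (?, ?)", ("abc123", "0" * 32))
cur.execute("INSERT INTO `EXAMPLE` (`kid`, `key_`) VALUES (?, ?)", ("abc123", "deadbeef"))

# Same shape as the vault query: bind kid, exclude the all-zero sentinel key
cur.execute("SELECT `id`, `key_` FROM `EXAMPLE` WHERE `kid`=? AND `key_`!=?", ("abc123", "0" * 32))
cek = cur.fetchone()
print(cek[1] if cek else None)  # deadbeef
```

Binding the values this way avoids SQL injection through `kid` and lets SQLite cache the prepared statement across services.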
104
uv.lock
generated
@@ -80,6 +80,22 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/2b/d8/fa65d2a349fe938b76d309db1a56a75c4fb8cc7b17a398b698488a939903/aiohttp-3.12.15-cp312-cp312-win_amd64.whl", hash = "sha256:b390ef5f62bb508a9d67cb3bba9b8356e23b3996da7062f1a57ce1a79d2b3d34", size = 450266, upload-time = "2025-07-29T05:51:17.239Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "aiohttp-swagger3"
|
||||
version = "0.10.0"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "aiohttp" },
|
||||
{ name = "attrs" },
|
||||
{ name = "fastjsonschema" },
|
||||
{ name = "pyyaml" },
|
||||
{ name = "rfc3339-validator" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/a1/06/00ccb2c8afdde4ca7c3cac424d54715c7d90cdd4e13e1ca71d68f5b2e665/aiohttp_swagger3-0.10.0.tar.gz", hash = "sha256:a333c59328f64dd64587e5f276ee84dc256f587d09f2da6ddaae3812fa4d4f33", size = 1839028, upload-time = "2025-02-11T10:51:26.974Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/0a/8f/db4cb843999a3088846d170f38eda2182b50b5733387be8102fed171c53f/aiohttp_swagger3-0.10.0-py3-none-any.whl", hash = "sha256:0ae2d2ba7dbd8ea8fe1cffe8f0197db5d0aa979eb9679bd699ecd87923912509", size = 1826491, upload-time = "2025-02-11T10:51:25.174Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "aiosignal"
|
||||
version = "1.4.0"
|
||||
@@ -468,6 +484,15 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/36/f4/c6e662dade71f56cd2f3735141b265c3c79293c109549c1e6933b0651ffc/exceptiongroup-1.3.0-py3-none-any.whl", hash = "sha256:4d111e6e0c13d0644cad6ddaa7ed0261a0b36971f6d23e7ec9b4b9097da78a10", size = 16674, upload-time = "2025-05-10T17:42:49.33Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "fastjsonschema"
|
||||
version = "2.19.1"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/ba/7f/cedf77ace50aa60c566deaca9066750f06e1fcf6ad24f254d255bb976dd6/fastjsonschema-2.19.1.tar.gz", hash = "sha256:e3126a94bdc4623d3de4485f8d468a12f02a67921315ddc87836d6e456dc789d", size = 372732, upload-time = "2023-12-28T14:02:06.823Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/9c/b9/79691036d4a8f9857e74d1728b23f34f583b81350a27492edda58d5604e1/fastjsonschema-2.19.1-py3-none-any.whl", hash = "sha256:3672b47bc94178c9f23dbb654bf47440155d4db9df5f7bc47643315f9c405cd0", size = 23388, upload-time = "2023-12-28T14:02:04.512Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "filelock"
|
||||
version = "3.19.1"
|
||||
@@ -477,6 +502,39 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/42/14/42b2651a2f46b022ccd948bca9f2d5af0fd8929c4eec235b8d6d844fbe67/filelock-3.19.1-py3-none-any.whl", hash = "sha256:d38e30481def20772f5baf097c122c3babc4fcdb7e14e57049eb9d88c6dc017d", size = 15988, upload-time = "2025-08-14T16:56:01.633Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "fonttools"
|
||||
version = "4.60.1"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/4b/42/97a13e47a1e51a5a7142475bbcf5107fe3a68fc34aef331c897d5fb98ad0/fonttools-4.60.1.tar.gz", hash = "sha256:ef00af0439ebfee806b25f24c8f92109157ff3fac5731dc7867957812e87b8d9", size = 3559823, upload-time = "2025-09-29T21:13:27.129Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/26/70/03e9d89a053caff6ae46053890eba8e4a5665a7c5638279ed4492e6d4b8b/fonttools-4.60.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:9a52f254ce051e196b8fe2af4634c2d2f02c981756c6464dc192f1b6050b4e28", size = 2810747, upload-time = "2025-09-29T21:10:59.653Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/6f/41/449ad5aff9670ab0df0f61ee593906b67a36d7e0b4d0cd7fa41ac0325bf5/fonttools-4.60.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c7420a2696a44650120cdd269a5d2e56a477e2bfa9d95e86229059beb1c19e15", size = 2346909, upload-time = "2025-09-29T21:11:02.882Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/9a/18/e5970aa96c8fad1cb19a9479cc3b7602c0c98d250fcdc06a5da994309c50/fonttools-4.60.1-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ee0c0b3b35b34f782afc673d503167157094a16f442ace7c6c5e0ca80b08f50c", size = 4864572, upload-time = "2025-09-29T21:11:05.096Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/ce/20/9b2b4051b6ec6689480787d506b5003f72648f50972a92d04527a456192c/fonttools-4.60.1-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:282dafa55f9659e8999110bd8ed422ebe1c8aecd0dc396550b038e6c9a08b8ea", size = 4794635, upload-time = "2025-09-29T21:11:08.651Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/10/52/c791f57347c1be98f8345e3dca4ac483eb97666dd7c47f3059aeffab8b59/fonttools-4.60.1-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:4ba4bd646e86de16160f0fb72e31c3b9b7d0721c3e5b26b9fa2fc931dfdb2652", size = 4843878, upload-time = "2025-09-29T21:11:10.893Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/69/e9/35c24a8d01644cee8c090a22fad34d5b61d1e0a8ecbc9945ad785ebf2e9e/fonttools-4.60.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:0b0835ed15dd5b40d726bb61c846a688f5b4ce2208ec68779bc81860adb5851a", size = 4954555, upload-time = "2025-09-29T21:11:13.24Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/f7/86/fb1e994971be4bdfe3a307de6373ef69a9df83fb66e3faa9c8114893d4cc/fonttools-4.60.1-cp310-cp310-win32.whl", hash = "sha256:1525796c3ffe27bb6268ed2a1bb0dcf214d561dfaf04728abf01489eb5339dce", size = 2232019, upload-time = "2025-09-29T21:11:15.73Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/40/84/62a19e2bd56f0e9fb347486a5b26376bade4bf6bbba64dda2c103bd08c94/fonttools-4.60.1-cp310-cp310-win_amd64.whl", hash = "sha256:268ecda8ca6cb5c4f044b1fb9b3b376e8cd1b361cef275082429dc4174907038", size = 2276803, upload-time = "2025-09-29T21:11:18.152Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/ea/85/639aa9bface1537e0fb0f643690672dde0695a5bbbc90736bc571b0b1941/fonttools-4.60.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:7b4c32e232a71f63a5d00259ca3d88345ce2a43295bb049d21061f338124246f", size = 2831872, upload-time = "2025-09-29T21:11:20.329Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/6b/47/3c63158459c95093be9618794acb1067b3f4d30dcc5c3e8114b70e67a092/fonttools-4.60.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3630e86c484263eaac71d117085d509cbcf7b18f677906824e4bace598fb70d2", size = 2356990, upload-time = "2025-09-29T21:11:22.754Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/94/dd/1934b537c86fcf99f9761823f1fc37a98fbd54568e8e613f29a90fed95a9/fonttools-4.60.1-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5c1015318e4fec75dd4943ad5f6a206d9727adf97410d58b7e32ab644a807914", size = 5042189, upload-time = "2025-09-29T21:11:25.061Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/d2/d2/9f4e4c4374dd1daa8367784e1bd910f18ba886db1d6b825b12edf6db3edc/fonttools-4.60.1-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:e6c58beb17380f7c2ea181ea11e7db8c0ceb474c9dd45f48e71e2cb577d146a1", size = 4978683, upload-time = "2025-09-29T21:11:27.693Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/cc/c4/0fb2dfd1ecbe9a07954cc13414713ed1eab17b1c0214ef07fc93df234a47/fonttools-4.60.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:ec3681a0cb34c255d76dd9d865a55f260164adb9fa02628415cdc2d43ee2c05d", size = 5021372, upload-time = "2025-09-29T21:11:30.257Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/0c/d5/495fc7ae2fab20223cc87179a8f50f40f9a6f821f271ba8301ae12bb580f/fonttools-4.60.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:f4b5c37a5f40e4d733d3bbaaef082149bee5a5ea3156a785ff64d949bd1353fa", size = 5132562, upload-time = "2025-09-29T21:11:32.737Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/bc/fa/021dab618526323c744e0206b3f5c8596a2e7ae9aa38db5948a131123e83/fonttools-4.60.1-cp311-cp311-win32.whl", hash = "sha256:398447f3d8c0c786cbf1209711e79080a40761eb44b27cdafffb48f52bcec258", size = 2230288, upload-time = "2025-09-29T21:11:35.015Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/bb/78/0e1a6d22b427579ea5c8273e1c07def2f325b977faaf60bb7ddc01456cb1/fonttools-4.60.1-cp311-cp311-win_amd64.whl", hash = "sha256:d066ea419f719ed87bc2c99a4a4bfd77c2e5949cb724588b9dd58f3fd90b92bf", size = 2278184, upload-time = "2025-09-29T21:11:37.434Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/e3/f7/a10b101b7a6f8836a5adb47f2791f2075d044a6ca123f35985c42edc82d8/fonttools-4.60.1-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:7b0c6d57ab00dae9529f3faf187f2254ea0aa1e04215cf2f1a8ec277c96661bc", size = 2832953, upload-time = "2025-09-29T21:11:39.616Z" },
{ url = "https://files.pythonhosted.org/packages/ed/fe/7bd094b59c926acf2304d2151354ddbeb74b94812f3dc943c231db09cb41/fonttools-4.60.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:839565cbf14645952d933853e8ade66a463684ed6ed6c9345d0faf1f0e868877", size = 2352706, upload-time = "2025-09-29T21:11:41.826Z" },
{ url = "https://files.pythonhosted.org/packages/c0/ca/4bb48a26ed95a1e7eba175535fe5805887682140ee0a0d10a88e1de84208/fonttools-4.60.1-cp312-cp312-manylinux1_x86_64.manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:8177ec9676ea6e1793c8a084a90b65a9f778771998eb919d05db6d4b1c0b114c", size = 4923716, upload-time = "2025-09-29T21:11:43.893Z" },
{ url = "https://files.pythonhosted.org/packages/b8/9f/2cb82999f686c1d1ddf06f6ae1a9117a880adbec113611cc9d22b2fdd465/fonttools-4.60.1-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:996a4d1834524adbb423385d5a629b868ef9d774670856c63c9a0408a3063401", size = 4968175, upload-time = "2025-09-29T21:11:46.439Z" },
{ url = "https://files.pythonhosted.org/packages/18/79/be569699e37d166b78e6218f2cde8c550204f2505038cdd83b42edc469b9/fonttools-4.60.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:a46b2f450bc79e06ef3b6394f0c68660529ed51692606ad7f953fc2e448bc903", size = 4911031, upload-time = "2025-09-29T21:11:48.977Z" },
{ url = "https://files.pythonhosted.org/packages/cc/9f/89411cc116effaec5260ad519162f64f9c150e5522a27cbb05eb62d0c05b/fonttools-4.60.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:6ec722ee589e89a89f5b7574f5c45604030aa6ae24cb2c751e2707193b466fed", size = 5062966, upload-time = "2025-09-29T21:11:54.344Z" },
{ url = "https://files.pythonhosted.org/packages/62/a1/f888221934b5731d46cb9991c7a71f30cb1f97c0ef5fcf37f8da8fce6c8e/fonttools-4.60.1-cp312-cp312-win32.whl", hash = "sha256:b2cf105cee600d2de04ca3cfa1f74f1127f8455b71dbad02b9da6ec266e116d6", size = 2218750, upload-time = "2025-09-29T21:11:56.601Z" },
{ url = "https://files.pythonhosted.org/packages/88/8f/a55b5550cd33cd1028601df41acd057d4be20efa5c958f417b0c0613924d/fonttools-4.60.1-cp312-cp312-win_amd64.whl", hash = "sha256:992775c9fbe2cf794786fa0ffca7f09f564ba3499b8fe9f2f80bd7197db60383", size = 2267026, upload-time = "2025-09-29T21:11:58.852Z" },
{ url = "https://files.pythonhosted.org/packages/c7/93/0dd45cd283c32dea1545151d8c3637b4b8c53cdb3a625aeb2885b184d74d/fonttools-4.60.1-py3-none-any.whl", hash = "sha256:906306ac7afe2156fcf0042173d6ebbb05416af70f6b370967b47f8f00103bbb", size = 1143175, upload-time = "2025-09-29T21:13:24.134Z" },
]
[[package]]
name = "frozenlist"
version = "1.7.0"
@@ -1101,6 +1159,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/f2/5f/af7da8e6f1e42b52f44a24d08b8e4c726207434e2593732d39e7af5e7256/pycryptodomex-3.23.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:14c37aaece158d0ace436f76a7bb19093db3b4deade9797abfc39ec6cd6cc2fe", size = 1806478, upload-time = "2025-05-17T17:23:26.066Z" },
]
[[package]]
name = "pyexecjs"
version = "1.5.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "six" },
]
sdist = { url = "https://files.pythonhosted.org/packages/ba/8e/aedef81641c8dca6fd0fb7294de5bed9c45f3397d67fddf755c1042c2642/PyExecJS-1.5.1.tar.gz", hash = "sha256:34cc1d070976918183ff7bdc0ad71f8157a891c92708c00c5fbbff7a769f505c", size = 13344, upload-time = "2018-01-18T04:33:55.126Z" }
[[package]]
name = "pygments"
version = "2.19.2"
@@ -1249,7 +1316,7 @@ wheels = [
[[package]]
name = "requests"
-version = "2.32.4"
+version = "2.32.5"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "certifi" },
@@ -1257,9 +1324,9 @@ dependencies = [
{ name = "idna" },
{ name = "urllib3" },
]
-sdist = { url = "https://files.pythonhosted.org/packages/e1/0a/929373653770d8a0d7ea76c37de6e41f11eb07559b103b1c02cafb3f7cf8/requests-2.32.4.tar.gz", hash = "sha256:27d0316682c8a29834d3264820024b62a36942083d52caf2f14c0591336d3422", size = 135258, upload-time = "2025-06-09T16:43:07.34Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/c9/74/b3ff8e6c8446842c3f5c837e9c3dfcfe2018ea6ecef224c710c85ef728f4/requests-2.32.5.tar.gz", hash = "sha256:dbba0bac56e100853db0ea71b82b4dfd5fe2bf6d3754a8893c3af500cec7d7cf", size = 134517, upload-time = "2025-08-18T20:46:02.573Z" }
wheels = [
-{ url = "https://files.pythonhosted.org/packages/7c/e4/56027c4a6b4ae70ca9de302488c5ca95ad4a39e190093d6c1a8ace08341b/requests-2.32.4-py3-none-any.whl", hash = "sha256:27babd3cda2a6d50b30443204ee89830707d396671944c998b5975b031ac2b2c", size = 64847, upload-time = "2025-06-09T16:43:05.728Z" },
+{ url = "https://files.pythonhosted.org/packages/1e/db/4254e3eabe8020b458f1a747140d32277ec7a271daf1d235b70dc0b4e6e3/requests-2.32.5-py3-none-any.whl", hash = "sha256:2462f94637a34fd532264295e186976db0f5d453d1cdd31473c85a6a161affb6", size = 64738, upload-time = "2025-08-18T20:46:00.542Z" },
]
[package.optional-dependencies]
@@ -1267,6 +1334,18 @@ socks = [
{ name = "pysocks" },
]
[[package]]
name = "rfc3339-validator"
version = "0.1.4"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "six" },
]
sdist = { url = "https://files.pythonhosted.org/packages/28/ea/a9387748e2d111c3c2b275ba970b735e04e15cdb1eb30693b6b5708c4dbd/rfc3339_validator-0.1.4.tar.gz", hash = "sha256:138a2abdf93304ad60530167e51d2dfb9549521a836871b88d7f4695d0022f6b", size = 5513, upload-time = "2021-05-12T16:37:54.178Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/7b/44/4e421b96b67b2daff264473f7465db72fbdf36a07e05494f50300cc7b0c6/rfc3339_validator-0.1.4-py2.py3-none-any.whl", hash = "sha256:24f6ec1eda14ef823da9e36ec7113124b39c04d50a4d3d3a3c2859577e7791fa", size = 3490, upload-time = "2021-05-12T16:37:52.536Z" },
]
[[package]]
name = "rich"
version = "13.9.4"
@@ -1373,6 +1452,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/a3/dc/17031897dae0efacfea57dfd3a82fdd2a2aeb58e0ff71b77b87e44edc772/setuptools-80.9.0-py3-none-any.whl", hash = "sha256:062d34222ad13e0cc312a4c02d73f059e86a4acbfbdea8f8f76b28c99f306922", size = 1201486, upload-time = "2025-05-27T00:56:49.664Z" },
]
[[package]]
name = "six"
version = "1.17.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/94/e7/b2c673351809dca68a0e064b6af791aa332cf192da575fd474ed7d6f16a2/six-1.17.0.tar.gz", hash = "sha256:ff70335d468e7eb6ec65b95b99d3a2836546063f63acc5171de367e834932a81", size = 34031, upload-time = "2024-12-04T17:35:28.174Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/b7/ce/149a00dd41f10bc29e5921b496af8b574d8413afcd5e30dfa0ed46c2cc5e/six-1.17.0-py2.py3-none-any.whl", hash = "sha256:4721f391ed90541fddacab5acf947aa0d3dc7d27b2e1e8eda2be8970586c3274", size = 11050, upload-time = "2024-12-04T17:35:26.475Z" },
]
[[package]]
name = "sniffio"
version = "1.3.1"
@@ -1514,9 +1602,10 @@ wheels = [
[[package]]
name = "unshackle"
-version = "1.4.8"
+version = "2.0.0"
source = { editable = "." }
dependencies = [
{ name = "aiohttp-swagger3" },
{ name = "appdirs" },
{ name = "brotli" },
{ name = "chardet" },
@@ -1525,6 +1614,7 @@ dependencies = [
{ name = "crccheck" },
{ name = "cryptography" },
{ name = "curl-cffi" },
{ name = "fonttools" },
{ name = "httpx" },
{ name = "jsonpickle" },
{ name = "langcodes" },
@@ -1533,6 +1623,7 @@ dependencies = [
{ name = "protobuf" },
{ name = "pycaption" },
{ name = "pycryptodomex" },
{ name = "pyexecjs" },
{ name = "pyjwt" },
{ name = "pymediainfo" },
{ name = "pymp4" },
@@ -1567,6 +1658,7 @@ dev = [
[package.metadata]
requires-dist = [
{ name = "aiohttp-swagger3", specifier = ">=0.9.0,<1" },
{ name = "appdirs", specifier = ">=1.4.4,<2" },
{ name = "brotli", specifier = ">=1.1.0,<2" },
{ name = "chardet", specifier = ">=5.2.0,<6" },
@@ -1575,6 +1667,7 @@ requires-dist = [
{ name = "crccheck", specifier = ">=1.3.0,<2" },
{ name = "cryptography", specifier = ">=45.0.0" },
{ name = "curl-cffi", specifier = ">=0.7.0b4,<0.8" },
{ name = "fonttools", specifier = ">=4.0.0,<5" },
{ name = "httpx", specifier = ">=0.28.1,<0.29" },
{ name = "jsonpickle", specifier = ">=3.0.4,<4" },
{ name = "langcodes", specifier = ">=3.4.0,<4" },
@@ -1583,6 +1676,7 @@ requires-dist = [
{ name = "protobuf", specifier = ">=4.25.3,<5" },
{ name = "pycaption", specifier = ">=2.2.6,<3" },
{ name = "pycryptodomex", specifier = ">=3.20.0,<4" },
{ name = "pyexecjs", specifier = ">=1.5.1,<2" },
{ name = "pyjwt", specifier = ">=2.8.0,<3" },
{ name = "pymediainfo", specifier = ">=6.1.0,<7" },
{ name = "pymp4", specifier = ">=1.4.0,<2" },
@@ -1591,7 +1685,7 @@ requires-dist = [
{ name = "pysubs2", specifier = ">=1.7.0,<2" },
{ name = "pywidevine", extras = ["serve"], specifier = ">=1.8.0,<2" },
{ name = "pyyaml", specifier = ">=6.0.1,<7" },
-{ name = "requests", extras = ["socks"], specifier = ">=2.31.0,<3" },
+{ name = "requests", extras = ["socks"], specifier = ">=2.32.5,<3" },
{ name = "rich", specifier = ">=13.7.1,<14" },
{ name = "rlaphoenix-m3u8", specifier = ">=3.4.0,<4" },
{ name = "ruamel-yaml", specifier = ">=0.18.6,<0.19" },