initial services

.gitignore (vendored, new file): 207 lines added
@@ -0,0 +1,207 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[codz]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py.cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# UV
# Similar to Pipfile.lock, it is generally recommended to include uv.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
#uv.lock

# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock
#poetry.toml

# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
# pdm recommends including project-wide configuration in pdm.toml, but excluding .pdm-python.
# https://pdm-project.org/en/latest/usage/project/#working-with-version-control
#pdm.lock
#pdm.toml
.pdm-python
.pdm-build/

# pixi
# Similar to Pipfile.lock, it is generally recommended to include pixi.lock in version control.
#pixi.lock
# Pixi creates a virtual environment in the .pixi directory, just like venv module creates one
# in the .venv directory. It is recommended not to include this directory in version control.
.pixi

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.envrc
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/

# Abstra
# Abstra is an AI-powered process automation framework.
# Ignore directories containing user credentials, local state, and settings.
# Learn more at https://abstra.io/docs
.abstra/

# Visual Studio Code
# Visual Studio Code specific template is maintained in a separate VisualStudioCode.gitignore
# that can be found at https://github.com/github/gitignore/blob/main/Global/VisualStudioCode.gitignore
# and can be added to the global gitignore or merged into this file. However, if you prefer,
# you could uncomment the following to ignore the entire vscode folder
# .vscode/

# Ruff stuff:
.ruff_cache/

# PyPI configuration file
.pypirc

# Cursor
# Cursor is an AI-powered code editor. `.cursorignore` specifies files/directories to
# exclude from AI features like autocomplete and code analysis. Recommended for sensitive data
# refer to https://docs.cursor.com/context/ignore-files
.cursorignore
.cursorindexingignore

# Marimo
marimo/_static/
marimo/_lsp/
__marimo__/
ADN/__init__.py (new file): 1015 lines added
File diff suppressed because it is too large
ADN/config.yaml (new file): 29 lines added
@@ -0,0 +1,29 @@
# Animation Digital Network API configuration

# API endpoints
endpoints:
  # Authentication
  login: "https://gw.api.animationdigitalnetwork.com/authentication/login"
  refresh: "https://gw.api.animationdigitalnetwork.com/authentication/refresh"

  # Catalog
  search: "https://gw.api.animationdigitalnetwork.com/show/catalog"
  show: "https://gw.api.animationdigitalnetwork.com/video/show/{show_id}?maxAgeCategory=18&limit=-1&order=asc"
  seasons: "https://gw.api.animationdigitalnetwork.com/video/show/{show_id}/seasons?maxAgeCategory=18&order=asc"

  # Player & playback
  player_config: "https://gw.api.animationdigitalnetwork.com/player/video/{video_id}/configuration"
  player_refresh: "https://gw.api.animationdigitalnetwork.com/player/refresh/token"
  player_links: "https://gw.api.animationdigitalnetwork.com/player/video/{video_id}/link"

# Default headers
headers:
  User-Agent: "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"
  Origin: "https://animationdigitalnetwork.com"
  Referer: "https://animationdigitalnetwork.com/"
  Content-Type: "application/json"
  X-Target-Distribution: "fr"

# Default parameters
params:
  locale: "fr"
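The catalog and player endpoints in this config carry `{show_id}`/`{video_id}` placeholders that are filled in at request time. A minimal sketch of how a client might expand them with `str.format()` (the `build_url` helper and the show id are assumptions for illustration, not part of the service code):

```python
# Hand-copied subset of the `endpoints` templates above (illustration only).
ENDPOINTS = {
    "show": "https://gw.api.animationdigitalnetwork.com/video/show/{show_id}?maxAgeCategory=18&limit=-1&order=asc",
    "player_config": "https://gw.api.animationdigitalnetwork.com/player/video/{video_id}/configuration",
}


def build_url(name: str, **ids) -> str:
    """Expand a templated endpoint with str.format()."""
    return ENDPOINTS[name].format(**ids)


# e.g. the show endpoint for a hypothetical show id:
url = build_url("show", show_id=1160)
```

Keeping the templates in the YAML (rather than hard-coding them) means an API path change only touches the config file.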
AMZN/__init__.py (new file): 2098 lines added
File diff suppressed because it is too large
AMZN/config.yaml (new file): 158 lines added
@@ -0,0 +1,158 @@
certificate: |
  CAUSwgUKvAIIAxIQCuQRtZRasVgFt7DIvVtVHBi17OSpBSKOAjCCAQoCggEBAKU2UrYVOSDlcXajWhpEgGhqGraJtFdUPgu6plJGy9ViaRn5mhyXON5PXm
  w1krQdi0SLxf00FfIgnYFLpDfvNeItGn9rcx0RNPwP39PW7aW0Fbqi6VCaKWlR24kRpd7NQ4woyMXr7xlBWPwPNxK4xmR/6UuvKyYWEkroyeIjWHAqgCjC
  mpfIpVcPsyrnMuPFGl82MMVnAhTweTKnEPOqJpxQ1bdQvVNCvkba5gjOTbEnJ7aXegwhmCdRQzXjTeEV2dO8oo5YfxW6pRBovzF6wYBMQYpSCJIA24ptAP
  /2TkneyJuqm4hJNFvtF8fsBgTQQ4TIhnX4bZ9imuhivYLa6HsCAwEAAToPYW1hem9uLmNvbS1wcm9kEoADETQD6R0H/h9fyg0Hw7mj0M7T4s0bcBf4fMhA
  Rpwk2X4HpvB49bJ5Yvc4t41mAnXGe/wiXbzsddKMiMffkSE1QWK1CFPBgziU23y1PjQToGiIv/sJIFRKRJ4qMBxIl95xlvSEzKdt68n7wqGa442+uAgk7C
  XU3uTfVofYY76CrPBnEKQfad/CVqTh48geNTb4qRH1TX30NzCsB9NWlcdvg10pCnWSm8cSHu1d9yH+2yQgsGe52QoHHCqHNzG/wAxMYWTevXQW7EPTBeFy
  SPY0xUN+2F2FhCf5/A7uFUHywd0zNTswh0QJc93LBTh46clRLO+d4RKBiBSj3rah6Y5iXMw9N9o58tCRc9gFHrjfMNubopWHjDOO3ATUgqXrTp+fKVCmsG
  uGl1ComHxXV9i1AqHwzzY2JY2vFqo73jR3IElr6oChPIwcNokmNc0D4TXtjE0BoYkbWKJfHvJJihzMOvDicWUsemVHvua9/FBtpbHgpbgwijFPjtQF9Ldb
  8Swf

dtid_dict: [
  "A3EFHJ9BGBJ8L2", "A3VN4E5F7BBC7S", "A28RQHJKHM2A2W", "AFOQV1TK6EU6O", "A1IJNVP3L4AY8B", "A2Z1NVLU6LCAUO",
  "A2E50Q8IVZ79QG", "A17PYKRA4ES6YB", "AFRP7VQQ7US69", "A2VZ790DVVI91K", "A2IK56KYGDHUVQ", "A1EXS4KWCX7GYC",
  "ANSTXZRNSPRXT", "A1KF4O3GA2MPZU", "A2XEBUI9EJ55OK", "A7YWW3KUHA54O", "A3DOA4P2WFIK9D", "ABJRG3JXAY5JL",
  "AI00TLC8V1PCT", "A2YWQEZX3UC3J1", "AM08S97P8ESGT", "A234HDVPYTUVNS", "AJ3B6LDE2HP5J", "A2GZIBBOG0DCV4",
  "A3MEKX9EL7SW8T", "A1GXILNJBV9CU7", "ANHPQG9GCMO4A", "A2CBAN119017AE", "A3FO6QR7E7PFQX", "A27OOP63XLO9TI",
  "A2EOVT31LL6KPV", "A1FNA83TYYU3QK", "A3JB7490MR9K86", "A2TIAYXTNQWU3T", "ADP5BND5THPTX", "A1H8RTR0E3Y362",
  "AUNIXHOL9EVMI", "AZKAGPPWORIRY", "A324X3KDTS7NYA", "A71I8788P1ZV8", "A2D0X18EHNKEOJ", "A3JN21B5ZOWUAN",
  "A1TG8VNKP4DSQR", "A2XZMRZUFPEDN4", "A2LJ4A527WOX9J", "A2RJLFEH0UEKI9", "AE5DW8GVLP9NX", "AIE8AMJ60B7OK",
  "A3ZKCWKG4097P", "A2H1I0AR67NWAC", "A3QXXOBP9MU5LY", "A25521KS9QCAMD", "AGQHFIWNI20PO", "A27XSKZJJPVQA4",
  "A1LCAPNEM1C36Z", "A31POKKHZJR1J4", "AOLDXB6WYN0UM", "A3SSWQ04XYPXBH", "A1BSQJM6E77NJE", "A2TX61L00VISA5",
  "AO4A5QLO9663Q", "A15MU3EQ4XZ3Y5", "A3IWJ2DYJQRA3T", "A1Q878J3NE8P81", "AAJB0R7QJO84W", "A2M1CHUCI6RHN8",
  "A6IUL9CVJZXRR", "A2RGJ95OVLR12U", "A1G2XVSR1VA5DI", "A1S15DUFSI8AUG", "ALYWZPYF4JAIT", "A2M4YX06LWP8WI",
  "A2O85NMVNLPKVN", "A3L0T0VL9A921N", "A1J16TEDOYCZTN", "A1Q7QCGNMXAKYW", "A38EHHIB10L47V", "A3R9S4ZZECZ6YL",
  "A1C66CX2XD756O", "A1ZB65LA390I4K", "AVU7CPPF2ZRAS", "ATNLRCEBX3W4P", "A2N49KXGVA18AR", "A271DR1789MXDS",
  "A1TD5Z1R8IWBHA", "A1DOD0Z74XEFYC", "A17AIVOKIKR4QQ", "A1S310WB67VFPY", "A2QCPPMSOLGVZE", "A1NPAGU1M4PA7Z",
  "A3ORLONYQTBTOZ", "A3M7PA8JXKE627", "A1D9UOSBPO2IVH", "A2V9UEGZ82H4KZ", "A3URJAABOST7NW", "A2E0SNTXJVT7WK",
  "ADVBD696BHNV5", "A12GXV8XMS007S", "A2LWARUGJLBYEW", "A2GFL5ZMWNE0PX", "AKPGW064GI9HE", "A3HF4YRA2L7XGC",
  "AGHZIK8D6X7QR", "A1F8D55J0FWDTN", "A1P7E7V3FCZKU6", "A1NL4BVLQ4L3N3", "A10A33FOX2NUBK", "AWZZ5CVHX2CD",
  "A4ZP7ZC4PI6TO", "A1Z88NGR2BK6A2", "A2MDL5WTJIU8SZ", "AP4RS91ZQ0OOI", "AFF5OAL5E3DIU", "A2HYAJ0FEWP6N3",
  "A3SUJTTQGF9GNF", "A346DYAAR4WSNS", "A93SQJNJQLDSS", "A2JKHJ0PX4J3L3", "A2WJI2JG7UW2O1", "ARJHEDRXLP6DM",
  "A1AGU0A4YA5RF9", "A2WV8TTM37P2CB", "AN630UQPG2CA4", "A30OJ8LMIAF6GP", "A8MCGN45KMHDH", "A33S43L213VSHQ",
  "A2NYIDFQSJW39B", "A31DTMEEVDDOIV", "A2FDUYD6UQ1BQ", "A3MTL1JKF2IXY3", "AK6OCP5ZLUJI1", "A3JTVZS31ZJ340",
  "A43PXU4ZN2AL1", "A1OTX5GMM5144Z", "A2Z1NVLU6LCAUO", "A43PXU4ZN2AL1", "A1OTX5GMM5144Z", "A2Z1NVLU6LCAUO"
]

# device:
#   new:
#     domain: Device
#     app_name: 'com.amazon.amazonvideo.livingroom'  # AIV, com.amazon.amazonvideo.livingroom
#     app_version: '1.1'  # AIV: 3.12.0, livingroom: 1.1
#     device_model: 'Nexus Player'  # SHIELD Android TV, Nexus Player
#     os_version: '8.0.0'  # 25, 8.0.0
#     device_type: 'A2SNKIF736WF4T'
#     device_name: '%FIRST_NAME%''s%DUPE_STRATEGY_1ST% Nexus Player'
#     device_serial: 'a906a7f9bfd6a7ab'  # f8eb5625fd608718

device:
  default:  # !<< take note that this is done per-profile
    manufacturer: Hisense
    device_chipset: mt7663
    domain: Device
    app_name: AIV
    os_name: Android
    app_version: 3.12.0
    device_model: HAT4KDTV
    os_version: VIDAA
    device_type: A2RGJ95OVLR12U
    device_name: "%FIRST_NAME%'s%DUPE_STRATEGY_1ST% Hisense TV"
    software_version: 248
    firmware: fmw:25-app:PVFTV-97.0580-L
    firmware_version: Hisense/HAT4KDTV/HAT4KDTV:VIDAA/VIDAA/VIDAA:user/release-keys
    user_agent: Dalvik/2.1.0 (Linux; U; Android 6.3.2; HAT4KDTV Build/PS7652.3556N)
    device_serial: '13f5b56b4a17de5d136f0e4c28236109'  # `os.urandom(8).hex()`; leave empty to auto-generate

device_types:
  browser: 'AOAGZA014O5RE'  # all browsers? all platforms?
  tv_generic: 'A2SNKIF736WF4T'  # type is shared among various random smart tvs
  pc_app: 'A1RTAM01W29CUP'
  mobile_app: 'A43PXU4ZN2AL1'
  echo: 'A7WXQPH584YP'  # echo Gen2
  echo_dot: 'A32DOYMUN6DTXA'  # echo dot Gen3
  echo_studio: 'A3RBAYBE7VM004'  # for audio stuff, this is probably the one to use
  fire_7: 'A2M4YX06LWP8WI'
  fire_7_again: 'A1Q7QCGNMXAKYW'  # not sure the difference
  fire_hd_8: 'A1C66CX2XD756O'
  fire_hd_8_again: 'A38EHHIB10L47V'  # not sure the difference
  fire_hd_8_plus_2020: 'AVU7CPPF2ZRAS'
  fire_hd_10: 'A1ZB65LA390I4K'
  fire_tv: 'A2E0SNTXJVT7WK'  # this is not the stick, this is the older stick-like diamond shaped one
  fire_tv_gen2: 'A12GXV8XMS007S'
  fire_tv_cube: 'A2JKHJ0PX4J3L3'  # this is the STB-style big bulky cube
  fire_tv_stick_gen1: 'ADVBD696BHNV5'  # non-4k fire tv stick
  fire_tv_stick_gen2: 'A2LWARUGJLBYEW'
  fire_tv_stick_with_alexa: 'A265XOI9586NML'
  fire_tv_stick_4k: 'A2GFL5ZMWNE0PX'  # 4k fire tv stick
  fire_tv_stick_4k_gen3: 'AKPGW064GI9HE'
  nvidia_shield: 'A1KAXIG6VXSG8Y'  # nvidia shield, unknown which one or if all

endpoints:
  #browse: '/cdp/catalog/Browse'  # Old endpoint
  configuration: '/acm/GetConfiguration/WebClient'  # For primevideo
  details: '/gp/video/api/getDetailPage'  # Old endpoint
  getDetailWidgets: '/gp/video/api/getDetailWidgets'  # Old endpoint
  playback: '/playback/prs/GetVodPlaybackResources'
  metadata: '/api/enrichItemMetadata'
  refreshplayback: '/playback/tags/getRefreshedPlaybackEnvelope'
  licence: '/playback/drm-vod/GetWidevineLicense'
  licence_pr: '/playback/drm-vod/GetPlayReadyLicense'
  # session
  opensession: '/cdp/playback/pes/StartSession'
  updatesession: '/cdp/playback/pes/UpdateSession'
  closesession: '/cdp/playback/pes/StopSession'
  # chapters/scenes
  xray: '/swift/page/xray'  # Old endpoint (removed)
  # device registration
  ontv: '/gp/video/ontv/code'
  devicelink: '/gp/video/api/codeBasedLinking'
  codepair: '/auth/create/codepair'
  register: '/auth/register'
  token: '/auth/token'

regions:
  us:
    base: 'www.amazon.com'
    base_api: 'api.amazon.com'
    base_manifest: 'atv-ps.amazon.com'
    marketplace_id: 'ATVPDKIKX0DER'

  gb:
    base: 'www.amazon.co.uk'
    base_api: 'api.amazon.co.uk'
    base_manifest: 'atv-ps-eu.amazon.co.uk'
    marketplace_id: 'A2IR4J4NTCP2M5'  # A1F83G8C2ARO7P is also another marketplace_id

  it:
    base: 'www.amazon.it'
    base_api: 'api.amazon.it'
    base_manifest: 'atv-ps-eu.primevideo.com'
    marketplace_id: 'A3K6Y4MI8GDYMT'

  de:
    base: 'www.amazon.de'
    base_api: 'api.amazon.de'
    base_manifest: 'atv-ps-eu.amazon.de'
    marketplace_id: 'A1PA6795UKMFR9'

  au:
    base: 'www.amazon.com.au'
    base_api: 'api.amazon.com.au'
    base_manifest: 'atv-ps-fe.amazon.com.au'
    marketplace_id: 'A3K6Y4MI8GDYMT'

  jp:
    base: 'www.amazon.co.jp'
    base_api: 'api.amazon.co.jp'
    base_manifest: 'atv-ps-fe.amazon.co.jp'
    marketplace_id: 'A1VC38T7YXB528'

  pl:
    base: 'www.amazon.com'
    base_api: 'api.amazon.com'
    base_manifest: 'atv-ps-eu.primevideo.com'
    marketplace_id: 'A3K6Y4MI8GDYMT'
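The `regions` block in this config maps each storefront to its web, API, and manifest hosts plus a marketplace ID, and the relative `endpoints` paths are joined onto one of those hosts at request time. A minimal sketch of that resolution (the `endpoint_url` helper and its host-selection rule are assumptions for illustration, not part of the service code):

```python
# Hand-copied subset of the `regions` mapping above (illustration only).
REGIONS = {
    "us": {"base_api": "api.amazon.com", "base_manifest": "atv-ps.amazon.com",
           "marketplace_id": "ATVPDKIKX0DER"},
    "de": {"base_api": "api.amazon.de", "base_manifest": "atv-ps-eu.amazon.de",
           "marketplace_id": "A1PA6795UKMFR9"},
}


def endpoint_url(region: str, path: str, manifest: bool = False) -> str:
    """Join an endpoint path onto a region's API (or manifest) host."""
    cfg = REGIONS[region]
    host = cfg["base_manifest"] if manifest else cfg["base_api"]
    return f"https://{host}{path}"


# e.g. the token endpoint for the German storefront:
url = endpoint_url("de", "/auth/token")
```

Which host serves which endpoint (auth vs. playback) is an assumption here; the service code itself decides that per request.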
BLBL/__init__.py (new file): 710 lines added

@@ -0,0 +1,710 @@
import re

from http.cookiejar import CookieJar
from typing import Optional

import click
from langcodes import Language

from unshackle.core.constants import AnyTrack
from unshackle.core.credential import Credential
from unshackle.core.manifests import DASH
from unshackle.core.service import Service
from unshackle.core.titles import Episode, Movie, Movies, Series, Title_T, Titles_T
from unshackle.core.tracks import Chapter, Subtitle, Tracks, Video, Audio

from urllib.parse import urlparse
from bs4 import BeautifulSoup
import requests
import time

import pysubs2
from subby import BilibiliJSONConverter, CommonIssuesFixer
from hashlib import md5

class BLBL(Service):
    """
    Service code for the BiliBili streaming service (https://www.bilibili.tv/).

    Author: SeFree
    Version: 1.0.0

    Authorization: Cookies or Android access_key

    Security: No CDM required; content is a direct download.

    Note: BiliBili has a lot of different regions, so you may need to specify the region in the config.
        The default region is "th".
        Cookies are required for authentication; you can get them from the browser.
        Cookies seem not to expire, so you can use them for a long time.
    """

    TITLE_RE = [
        r"^https?://(?:www\.)?bilibili\.tv/(?:[^/]+/)?play/(?:\d+/)?(?P<id>\d+)/?",
        r"^https?://(?:www\.)?bili\.im/(?P<id>[A-Za-z0-9]+)",
    ]

    # VIDEO_CODEC_MAP = {
    #     'h265': Video.Codec.HEVC,
    #     'h264': Video.Codec.AVC,
    # }

    VIDEO_RANGE_MAP = {
        "SDR": "sdr",
        "HDR10": "hdr10",
        "DV": "dolby_vision",
    }

    LANGUAGE_MAP = {
        'Japan': 'ja',
        'Chinese Mainland': 'zh',
        'Thai': 'th',
        'CN': 'zh',
    }

    @staticmethod
    @click.command(name="BLBL", short_help="https://www.bilibili.tv/")
    @click.argument("title", type=str)
    @click.option("-SE", "--season", default=1, required=False, type=int,
                  help="BiliBili does not provide the season in its info, so specify it manually.")
    @click.option("-tl", "--title_lang", default=None, required=False, type=str,
                  help="If the title is a foreign-language dub, specify the dub's audio language.")
    @click.option("-m", "--movie", default=False, is_flag=True, required=False, type=bool,
                  help="Treat the title as a movie.")
    @click.option("-o", "--original_url", default=None, required=False, type=str,
                  help="If the title is a foreign-language dub, specify the original title's URL.")
    @click.option("-ol", "--original_lang", default=None, required=False, type=str,
                  help="If the title is a foreign-language dub, specify the original language.")
    @click.pass_context
    def cli(ctx, **kwargs):
        return BLBL(ctx, **kwargs)

    def extract_id(self, url: str):
        for pattern in self.TITLE_RE:
            match = re.search(pattern, url)
            if match:
                return match.group("id")
        return None

    def __init__(self, ctx, title, season, original_url, title_lang, original_lang, movie):
        super().__init__(ctx)
        if "bili.im" in title:
            title = "https://bili.im/" + self.extract_id(title)
            self.log.warning("Short URL detected")
            title = self.session.get(title, allow_redirects=True).url

        title = self.extract_id(title)

        self.skip_dl = ctx.parent.params.get("skip_dl")

        self.title = f"https://www.bilibili.tv/play/{title}"

        self.Season = season
        self.title_lang = title_lang
        self.movie = movie
        self.android_access_key = self.config.get("access_key")

        if original_url and "bili.im" in original_url:
            original_url = "https://bili.im/" + self.extract_id(original_url)
            self.log.warning("Short URL detected")
            original_url = self.session.get(original_url, allow_redirects=True).url

        self.original_url = f"https://www.bilibili.tv/play/{self.extract_id(original_url)}" if original_url else None
        self.original_lang = original_lang

        self.appbuild = self.config['default_config']['appbuild']
        self.lang = self.config['default_config']['lang']
        self.region = self.config['default_config']['region']

        self.already_json_to_srt = []

    def authenticate(self, cookies: Optional[CookieJar] = None, credential: Optional[Credential] = None) -> None:
        super().authenticate(cookies, credential)
        if not cookies:
            raise EnvironmentError("Service requires Cookies for Authentication.")

    def get_titles(self) -> Titles_T:
        content_type = None
        media_title = None
        media_year = None
        want_ep = None

        if self.original_url and (len(urlparse(self.title).path.split('/')) != len(urlparse(self.original_url).path.split('/'))):
            self.log.error("Original URL is provided, but the title and original URL do not match in structure.")
            self.log.error(f"Foreign URL  : {self.title}")
            self.log.error(f"Original URL : {self.original_url}")
            exit(1)
        id = urlparse(self.title).path.split('/')[-1]
        org_id = urlparse(self.original_url).path.split('/')[-1] if self.original_url else None

        content_type = "SERIES"
        if urlparse(self.title).path.split('/')[-2] != 'play':
            self.log.info("Title is a single Episode")
            content_type = "SINGLE"
            id = urlparse(self.title).path.split('/')[-2]
            org_id = urlparse(self.original_url).path.split('/')[-2] if self.original_url else None
            want_ep = urlparse(self.title).path.split('/')[-1]
        else:
            self.log.info("Title is a Series, will fetch all episodes")
        self.original_lang = self.get_original_language(id) if not self.original_lang else self.original_lang

        Season_Info = self.getSeasonInfo(id)
        Season_Org_Info = self.getSeasonInfo(org_id) if org_id else None

        if self.movie:
            content_type = "MOVIE"
        if Season_Info and Season_Info.get('episode_info') and Season_Org_Info and Season_Org_Info.get('episode_info'):
            intersection = set(item['short_title_display'] for item in Season_Info['episode_info']) & \
                           set(item['short_title_display'] for item in Season_Org_Info['episode_info'])
            # Filter both lists to include only items with matching short_title_display
            Season_Info['episode_info'] = [item for item in Season_Info['episode_info'] if item['short_title_display'] in intersection]
            Season_Org_Info['episode_info'] = [item for item in Season_Org_Info['episode_info'] if item['short_title_display'] in intersection]

        media_title = Season_Org_Info['detail']['title'] if Season_Org_Info else Season_Info['detail']['title']

        if 'Dub' in Season_Info['detail']['title']:
            match = re.search(r'\((.*?)\)', Season_Info['detail']['title'])
            if match:
                result = match.group(1)
                lang = result.split(' ')
                lang.remove('Dub')
                # print(f"Dubbed Language: {lang[0]}")
                self.title_lang = self.LANGUAGE_MAP.get(lang[0], self.original_lang) if not self.title_lang else self.title_lang
        elif 'Thai' in Season_Info['detail']['title']:
            self.title_lang = 'th'
        else:
            self.title_lang = self.original_lang

        episode_list = [{"ep": s['short_title_display'].split('E')[-1],
                         "short_title_display": s['short_title_display'],
                         "ep_name": s['long_title_display'],
                         "id": s['episode_id']} for s in Season_Info['episode_info']]
        episode_original_list = [{"ep": s['short_title_display'].split('E')[-1],
                                  "id": s['episode_id'],
                                  "name": s['long_title_display']} for s in Season_Org_Info['episode_info']] if Season_Org_Info else []
        titles = []
        special_ep = 0
        for i, ep in enumerate(episode_list):

            if content_type == "SINGLE" or content_type == "MOVIE":
                # print(ep.get('streams', '').get('dash', '').get('auto', '')[0].get('subtitles', ''))
                if content_type == "SINGLE":
                    if ep.get('id') != want_ep:
                        self.log.info(f"Skipping episode {ep.get('id')} as it does not match the requested episode {want_ep}.")
                        continue
                    if not (isinstance(ep.get('ep', ''), (int, float)) or str(ep.get('ep', '')).isdigit()):
                        self.log.info(f"Skipping episode {ep.get('id')} as it is not a valid episode number.")
                        continue

                return Movies([
                    Movie(
                        id_=ep.get('id', ''),
                        service=self.__class__,
                        name=media_title,
                        language=self.original_lang,
                        data={
                            "Episode": ep,
                            "Original": episode_original_list[i] if Season_Org_Info else None,
                        },
                    )]
                )
            elif content_type == "SERIES":
                # ep_name=None

                ep_name = ep.get('ep_name', None)
                special_season = False

                # handle special episodes like SP1, SP2, 2.5, OVA, etc.
                if re.match(r"^\d+(\.\d+)?$", str(ep['ep'])):
                    ep['ep'] = float(ep.get('ep', ''))
                    if ep['ep'] % 1 > 0:
                        ep_name = f"special_{ep['ep']}"
                        special_season = True
                        special_ep = special_ep + 1
                    else:
                        special_season = False
                else:
                    if bool(re.match(r"(SP|SP\d|OVA*|OAD*)", ep['short_title_display'])):
                        special_season = True
                        special_ep = special_ep + 1
                    else:
                        special_season = False

                titles.append(Episode(
                    id_=ep.get('id', ''),
                    service=self.__class__,
                    title=self.shorten_filename(media_title),
                    season=self.Season if not special_season else 0,
                    number=int(ep.get('ep', '')) if not special_season else int(special_ep),
                    name=ep_name,
                    data={
                        "Episode": ep,
                        "Original": episode_original_list[i] if Season_Org_Info else None,
                    },
                    language=self.original_lang
                ))
        return Series(titles)

    # DASH Example: Service requires separate API calls per codec/range.
    # Uses _get_tracks_for_variants() which iterates codecs x ranges,
    # handles HYBRID (HDR10+DV), and best_available fallback.

    def get_tracks(self, title: Title_T) -> Tracks:
        tracks = Tracks()
        time.sleep(1)
        # print(title.id)
        media_url = self.getMediaURL(title.id)
        media_ori_url = self.getMediaURL(title.data['Original']['id']) if title.data.get('Original') else None

        videos = media_url['videos']

        for video in videos:
            video_track = Video(
                id_=md5(video["url"].encode()).hexdigest()[0:6],
                drm=None,
                url=video["url"],
                width=video["width"],
                height=video["height"],
                fps=video.get("framerate", None),
                # metadata
                # size=int(video["size"]),
                language=Language.get(self.title_lang),
                bitrate=video["bandwidth"],
                codec=video["codec"],
                # extra={'user-agent': True if self.android_access_key else False,
                #        'host': urlparse(video["url"]).hostname} if self.android else None
            )
            tracks.add(video_track)
        audios = media_url['audios']
        audio_tracks = []
        for audio in audios:
            audio_track = Audio(
                id_=md5(audio["url"].encode()).hexdigest()[0:6],
                drm=None,
                url=audio["url"],
                # metadata
                # size=int(audio["size"]),
                bitrate=audio["bandwidth"],
                codec=Audio.Codec.from_mime(audio['codec'].split(".")[0]),
                channels=audio['codec'].split(".")[-1],
                language=Language.get(self.title_lang),
                # extra={'user-agent': self.android,
                #        'host': urlparse(video["url"]).hostname} if self.android else None
            )
            audio_tracks.append(audio_track)
        audios_ori = media_ori_url['audios'] if media_ori_url else []
        for audio in audios_ori:
            audio_track = Audio(
                id_=md5(audio["url"].encode()).hexdigest()[0:6],
                drm=None,
                url=audio["url"],
                # metadata
                # size=int(audio["size"]) - 1,
                bitrate=audio["bandwidth"],
                codec=Audio.Codec.from_mime(audio['codec'].split(".")[0]),
                channels=audio['codec'].split(".")[-1],
                is_original_lang=True,
                language=Language.get(self.original_lang),
                # extra={'user-agent': self.android,
                #        'host': urlparse(video["url"]).hostname} if self.android else None
            )
            # audio_track.is_original_lang=True
            audio_tracks.append(audio_track)
        tracks.add(audio_tracks)

        subtitle_tracks = []
        get_sub = self.getSubtitle(title.id)['video_subtitle']

        if get_sub:
            for subtitle in get_sub:
                subtitle_track_srt = Subtitle(
                    id_=md5(subtitle['srt']["url"].encode()).hexdigest()[0:6],
                    drm=None,
                    url=subtitle['srt']["url"],
                    # metadata
                    codec=Subtitle.Codec.SubRip,
                    language=subtitle["lang_key"],
                )
                subtitle_track_srt.is_original_lang = False if media_ori_url else True
                subtitle_tracks.append(subtitle_track_srt)
                if subtitle['ass']:
                    subtitle_track_ass = Subtitle(
                        id_=md5(subtitle['ass']["url"].encode()).hexdigest()[0:6],
                        drm=None,
                        url=subtitle['ass']["url"],
                        # metadata
                        codec=Subtitle.Codec.SubStationAlphav4,
                        language=subtitle["lang_key"],
                    )
                    subtitle_track_ass.is_original_lang = False if media_ori_url else True
                    subtitle_tracks.append(subtitle_track_ass)
        if media_ori_url:
            get_ori_sub = self.getSubtitle(title.data['Original']['id'])['video_subtitle']
            if get_ori_sub:
                for subtitle in get_ori_sub:
                    subtitle_track_srt = Subtitle(
                        id_=md5(subtitle['srt']["url"].encode()).hexdigest()[0:6],
                        drm=None,
                        url=subtitle['srt']["url"],
                        # metadata
                        codec=Subtitle.Codec.SubRip,
                        language=subtitle["lang_key"],
                    )
                    subtitle_track_srt.is_original_lang = True
                    subtitle_tracks.append(subtitle_track_srt)
                    if subtitle['ass']:
                        subtitle_track_ass = Subtitle(
                            id_=md5(subtitle['ass']["url"].encode()).hexdigest()[0:6],
                            drm=None,
                            url=subtitle['ass']["url"],
                            # metadata
                            codec=Subtitle.Codec.SubStationAlphav4,
                            language=subtitle["lang_key"],
                        )
                        subtitle_track_ass.is_original_lang = True
                        subtitle_tracks.append(subtitle_track_ass)
        tracks.add(subtitle_tracks)
        return tracks

    def get_chapters(self, title: Title_T) -> list[Chapter]:
        chapters = []
        CHAPTER_TITLE_MAP = {
            'opening_start_time': 'intro',
            'opening_end_time': 'Chapter {chapter}',
            'ending_start_time': 'ending',
            'ending_end_time': 'Chapter {chapter}',
        }
        params = {
            'episode_id': title.data['Episode']['id'],
        }
        res = self.session.get(self.config['endpoints']['time_stampdetail'], params=params).json()
        i = 0
        c = 1
        for cpt in res.get("data").values():
            if cpt is None:
                continue
            for chapter, time_stamp in cpt.items():
                # print(CHAPTER_TITLE_MAP.get(chapter, chapter), time_stamp)
                if time_stamp == 0:
                    continue
                if "_start_" in chapter:
                    chapters.append(
                        Chapter(
                            name=CHAPTER_TITLE_MAP.get(chapter, chapter),
                            timestamp=time_stamp,
                        )
                    )
                    i += 1
                elif "_end_" in chapter:
                    chapters.append(
                        Chapter(
                            name=CHAPTER_TITLE_MAP.get(chapter, chapter).format(chapter=c),
                            timestamp=time_stamp,
                        )
                    )
                    i += 1
                    c += 1

        return chapters

    def getEpisodes(self, season_id: int):
        url = self.config['endpoints']['gateway_play'].format(season_id=season_id, lang=self.lang.lower(), region=self.region.upper())
        response = self.session.get(url).json()

        if response['code'] != 0:
            raise ValueError(f"API Error {response['message']}")

        episodes = []
        for section in response['data']['sections']:
            for episode in section['episodes']:
                if re.match(r"(E[0-9]+|\d+[AB]|SP|SP\d|OVA*|OAD)", episode['short_title_display']):
                    episodes.append(episode)
        return episodes

    def getSeasonInfo(self, season_id):
        agent = f"bilibili/{self.appbuild} CFNetwork/1.0 Darwin/23.0.0 os/ios model/iPhone 12 Pro Max mobi_app/bstar_i build/2450100 osVer/17.0 network/2 channel/AppStore"
        url = self.config['endpoints']['gateway_view'].format(appbuild=self.appbuild, season_id=season_id, lang=self.lang.lower(), region=self.region.upper())
        headers = {
            'User-Agent': agent
        }
        response = requests.get(url, headers=headers).json()

        if response['code'] != 0:
            raise ValueError(f"API Error {response['message']}")

        season_info = response['data']
        episode_info = self.getEpisodes(season_id=int(season_id))

        return {'detail': season_info, 'episode_info': episode_info}

    def getMediaURL(self, episode_id: int):
        qn = self.config['default_config']['res']
        responses = []

        url = self.config['endpoints']['get_media_url'].format(episode_id=episode_id, res=qn)
        res = self.session.get(url, headers={'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/137.0.0.0 Safari/537.36 Edg/137.0.0.0'}).json()

        if res['code'] != 0 and not self.android_access_key:
            if res['message'] in ["10004004", "10004005", "10023006"]:
                raise ValueError(f"API Code {res['message']}: This video is only available for registered users")
            elif res['message'] in ["10004001"]:
                raise ValueError(f"API Code {res['message']}: This video is not available from your location due to geo restriction")
            else:
                raise ValueError(f"API Error {res['message']}")
        elif res['code'] == 0:
            responses.append({"manifest": "web", "data": res['data']})

        if self.android_access_key:
            url = self.config['endpoints']['get_media_url_android']
            params = {
                'access_key': self.android_access_key,
                'ep_id': episode_id,
                'platform': 'android',
                'prefer_code_type': '1',
            }
            res = self.session.get(url, params=params).json()
            if res['code'] != 0:
                if res['message'] in ["10004004", "10004005", "10023006"]:
                    raise ValueError(f"API Code {res['message']}: This video is only available for registered users")
                elif res['message'] in ["10004001"]:
                    raise ValueError(f"API Code {res['message']}: This video is not available from your location due to geo restriction")
                else:
                    raise ValueError(f"API Error {res['message']}")
            responses.append({"manifest": "android", "data": res['data']})

        videos = []
        audios = []
        for response_data in responses:
            if response_data["manifest"] == "web" and response_data["data"]:
                response = response_data['data']
                for video in response['playurl']['video']:
                    video_url = video['video_resource']['url']
                    if not video_url:
                        continue
                    frame_rate = video['video_resource']['frame_rate']
                    videos.append({
                        "url": video_url,
                        "size": video['video_resource']['size'],
                        "bandwidth": video['video_resource']['bandwidth'],
                        "codec": Video.Codec.HEVC if "hev" in video['video_resource']['codecs'] else Video.Codec.AVC,
                        "width": video['video_resource']['width'],
                        "height": video['video_resource']['height'],
                        "framerate": int(frame_rate.split('/')[0]) / int(frame_rate.split('/')[-1])
                    })

                for audio in response['playurl']['audio_resource']:
                    audios.append({
                        "url": audio['url'],
                        "backup_url": audio['backup_url'],
                        "size": audio['size'],
                        "bandwidth": audio['bandwidth'],
                        "codec": audio['codecs'],
                    })

            if response_data["manifest"] == "android" and response_data["data"]:
                response = response_data['data']
                video_codec_map = {
                    7: Video.Codec.AVC,
                    12: Video.Codec.HEVC
                }
                audio_codec_map = {
                    30280: "mp4a.40.2",
                    30232: "mp4a.40.2",
                    30216: "mp4a.40.5"
                }
                quality_map = {
                    120: [3840, 2160],
                    112: [1920, 1080],
                    80: [1920, 1080],
                    64: [1280, 720],
                    32: [852, 480],
                    16: [640, 360],
                    6: [426, 240],
                    5: [256, 144]
                }
                if response.get('video_info') is None:
                    raise ValueError(f"API Error : {response.get('dialog')['title']}")
                for video in response['video_info']['stream_list']:
                    sel_res = int(video['stream_info']['quality'])
                    resolution = quality_map.get(sel_res, [None, None])
                    videos.append({
                        "url": video['dash_video']['base_url'],
                        "size": video['dash_video']['size'],
                        "bandwidth": video['dash_video']['bandwidth'],
                        "codec": video_codec_map.get(video['dash_video']['codecid'], Video.Codec.AVC),
                        "width": resolution[0],
                        "height": resolution[1],
                    })

                for audio in response['video_info']['dash_audio']:
                    audios.append({
                        "url": audio['base_url'],
                        "backup_url": audio['backup_url'][0],
                        "size": audio['size'],
                        "bandwidth": audio['bandwidth'],
                        "codec": audio_codec_map.get(audio['id'], "mp4a.40.2"),
                    })

        return {
            "videos": videos,
            "audios": audios,
        }

    def getSubtitle(self, episode_id: int):
        url = self.config['endpoints']['get_subtitle_url'].format(episode_id=episode_id, lang=self.lang.lower(), region=self.region.upper())
        response = requests.get(url, cookies=self.session.cookies).json()

        if response['code'] != 0:
            raise ValueError(f"API Error {response['message']}")

        return response['data']

    def get_original_language(self, id):
        """
        Scrape the BiliBili media homepage to determine the title's original language.
        """
        url = self.config['endpoints']['get_homepage'].format(media_id=id)
        response = requests.get(url, headers={'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/137.0.0.0 Safari/537.36 Edg/137.0.0.0'})

        if response.status_code != 200:
            raise ValueError(f"Failed to fetch homepage: {response.status_code}")

        soup = BeautifulSoup(response.text, 'html.parser')
        spans = soup.find_all(name='span', class_='detail-table__text')
        for span in spans:
            original_lang = self.LANGUAGE_MAP.get(span.contents[0])
            if original_lang:
                return original_lang
        return None

    @staticmethod
    def shorten_filename(name, max_length=150):
        # Truncate overly long names, appending an ellipsis marker
        if len(name) > max_length:
            return name[:max_length] + "..."
        return name

    def on_track_downloaded(self, track: AnyTrack) -> None:
        """
        Called when a Track has finished downloading.

        Parameters:
            track: The Track object that was downloaded.
        """
        if isinstance(track, Subtitle) and not self.skip_dl:
            if track.path.suffix == ".ass":
                # Swap in a Thai-capable font and scale the size down to compensate
                font_name = "Noto Sans Thai"
                ass_file = pysubs2.load(str(track.path.absolute()))
                for name, style in ass_file.styles.items():
                    if "Noto Sans" in style.fontname and "Thai" not in style.fontname:
                        style.fontname = style.fontname.replace("Noto Sans", font_name)
                        style.fontsize = style.fontsize / 1.25
                ass_file.save(str(track.path.absolute()))
            if track.path.suffix == ".srt" and track.path.name not in self.already_json_to_srt:
                # The downloaded "srt" is actually Bilibili's JSON format; convert it
                with open(str(track.path), 'rb') as fd:
                    data = fd.read()
                converter = BilibiliJSONConverter()
                srt = converter.from_bytes(data)
                fixer = CommonIssuesFixer()
                fixed, status = fixer.from_srt(srt)
                if status and fixed:
                    srt = fixed
                srt.save(track.path)
                self.already_json_to_srt.append(track.path.name)

19  BLBL/config.yaml  Normal file
@@ -0,0 +1,19 @@
endpoints:
  gateway_play: "https://api.bilibili.tv/intl/gateway/web/v2/ogv/play/episodes?platform=web&season_id={season_id}&s_locale={lang}_{region}"
  gateway_view: "https://app.biliintl.com/intl/gateway/v2/ogv/view/app/season2?build={appbuild}&platform=ios&s_locale={lang}_{region}&season_id={season_id}"
  get_media_url: "https://api.bilibili.tv/intl/gateway/web/playurl?ep_id={episode_id}&device=wap&platform=web&qn={res}&tf=0&type=0"
  get_media_url_android: "https://app.biliintl.com/intl/gateway/v2/app/playurl/player"
  get_subtitle_url: "https://api.bilibili.tv/intl/gateway/web/v2/subtitle?s_locale={lang}_{region}&platform=web&episode_id={episode_id}&spm_id=bstar-web.pgc-video-detail.0.0&from_spm_id=bstar-web.homepage.top-list.all"
  get_homepage: "https://www.bilibili.tv/en/media/{media_id}"
  get_playpage: "https://www.bilibili.tv/en/play/{media_id}"
  time_stampdetail: "https://api.bilibili.tv/intl/gateway/web/v2/ogv/play/episode"

default_config:
  appbuild: 2450100
  lang: "en"
  region: "th"
  user_agent: "UserAgentRedacted"
  agent: "bilibili/{appbuild} CFNetwork/1.0 Darwin/23.0.0 os/ios model/iPhone 12 Pro Max mobi_app/bstar_i build/2450100 osVer/17.0 network/2 channel/AppStore"
  res: 112

975  CR/__init__.py  Normal file
@@ -0,0 +1,975 @@
import re
import time
import uuid
from threading import Lock
from typing import Generator, Optional, Union

import click
import jwt
from langcodes import Language

from unshackle.core.manifests import DASH
from unshackle.core.search_result import SearchResult
from unshackle.core.service import Service
from unshackle.core.session import session
from unshackle.core.titles import Episode, Series
from unshackle.core.tracks import Chapters, Tracks
from unshackle.core.tracks.chapter import Chapter
from unshackle.core.tracks.subtitle import Subtitle


class CR(Service):
    """
    Service code for the Crunchyroll streaming service (https://www.crunchyroll.com).

    \b
    Version: 3.0.0
    Author: sp4rk.y
    Date: 2026-03-26
    Authorization: Credentials
    Robustness:
      Widevine:
        L3: 1080p, AAC2.0

    \b
    Tips:
    - Input should be a complete URL or series ID
      https://www.crunchyroll.com/series/GRMG8ZQZR/series-name OR GRMG8ZQZR
    - Supports multiple audio and subtitle languages
    - Device ID is cached for consistent authentication across runs

    \b
    Notes:
    - Emulates the Android TV client (v3.58.0) with Playback API v3
    - Uses password-based authentication with refresh token caching
    - Refresh tokens are cached for 30 days for cross-session reuse
    - Manages concurrent stream limits automatically
    """

    TITLE_RE = r"^(?:https?://(?:www\.)?crunchyroll\.com/(?:series|watch)/)?(?P<id>[A-Z0-9]+)"
    LICENSE_LOCK = Lock()
    MAX_CONCURRENT_STREAMS = 3
    ACTIVE_STREAMS: list[tuple[str, str]] = []

    @staticmethod
    def get_session():
        return session("okhttp4")

    @staticmethod
    @click.command(name="CR", short_help="https://crunchyroll.com")
    @click.argument("title", type=str, required=True)
    @click.pass_context
    def cli(ctx, **kwargs) -> "CR":
        return CR(ctx, **kwargs)

    def __init__(self, ctx, title: str):
        self.title = title
        self.account_id: Optional[str] = None
        self.access_token: Optional[str] = None
        self.refresh_token: Optional[str] = None
        self.profile_id: Optional[str] = None
        self.token_expiration: Optional[int] = None
        self.anonymous_id = str(uuid.uuid4())

        super().__init__(ctx)

        device_cache_key = "cr_device_id"
        cached_device = self.cache.get(device_cache_key)

        if cached_device and not cached_device.expired:
            self.device_id = cached_device.data["device_id"]
        else:
            self.device_id = str(uuid.uuid4())
            cached_device.set(
                data={"device_id": self.device_id},
                expiration=60 * 60 * 24 * 365 * 10,
            )

        self.device_name = self.config.get("device", {}).get("name", "SHIELD Android TV")
        self.device_type = self.config.get("device", {}).get("type", "ANDROIDTV")

        self.session.headers.update(self.config.get("headers", {}))
        self.session.headers["etp-anonymous-id"] = self.anonymous_id

    @property
    def auth_header(self) -> dict:
        """Return authorization header dict."""
        return {"authorization": f"Bearer {self.access_token}"}

    def ensure_authenticated(self) -> None:
        """Check if the token is expired and re-authenticate if needed."""
        if not self.token_expiration:
            cache_key = f"cr_auth_token_{self.credential.sha1 if self.credential else 'default'}"
            cached = self.cache.get(cache_key)

            if cached and not cached.expired:
                self.access_token = cached.data["access_token"]
                self.account_id = cached.data.get("account_id")
                self.profile_id = cached.data.get("profile_id")
                self.refresh_token = cached.data.get("refresh_token")
                self.token_expiration = cached.data.get("token_expiration")
                self.session.headers.update(self.auth_header)
            else:
                self.authenticate(credential=self.credential)
                return

        current_time = int(time.time())
        if current_time >= (self.token_expiration - 60):
            if self.refresh_token:
                self._refresh_access_token()
            else:
                self.authenticate(credential=self.credential)

    def authenticate(self, cookies=None, credential=None) -> None:
        """Authenticate using username and password credentials, with refresh token support."""
        super().authenticate(cookies, credential)

        cache_key = f"cr_auth_token_{credential.sha1 if credential else 'default'}"
        cached = self.cache.get(cache_key)

        if cached and not cached.expired:
            self.access_token = cached.data["access_token"]
            self.account_id = cached.data.get("account_id")
            self.profile_id = cached.data.get("profile_id")
            self.refresh_token = cached.data.get("refresh_token")
            self.token_expiration = cached.data.get("token_expiration")
        else:
            # Try a refresh token from a previous session first
            refresh_cache_key = f"cr_refresh_token_{credential.sha1 if credential else 'default'}"
            refresh_cached = self.cache.get(refresh_cache_key)

            if refresh_cached and not refresh_cached.expired and refresh_cached.data.get("refresh_token"):
                self.refresh_token = refresh_cached.data["refresh_token"]
                try:
                    self._refresh_access_token()
                    self._cache_auth(cache_key)
                    self.session.headers.update(self.auth_header)

                    if self.ACTIVE_STREAMS:
                        self.ACTIVE_STREAMS.clear()

                    try:
                        self.clear_all_sessions()
                    except Exception as e:
                        self.log.warning(f"Failed to clear previous sessions: {e}")
                    return
                except Exception:
                    self.log.warning("Refresh token expired or invalid, falling back to password login")
                    self.refresh_token = None

            if not credential:
                raise ValueError("No credential provided for authentication")

            response = self.session.post(
                url=self.config["endpoints"]["token"],
                headers={
                    "content-type": "application/x-www-form-urlencoded; charset=UTF-8",
                    "request-type": "SignIn",
                },
                data={
                    "grant_type": "password",
                    "username": credential.username,
                    "password": credential.password,
                    "scope": "offline_access",
                    "client_id": self.config["client"]["id"],
                    "client_secret": self.config["client"]["secret"],
                    "device_type": self.device_type,
                    "device_id": self.device_id,
                    "device_name": self.device_name,
                },
            )

            if response.status_code != 200:
                try:
                    error_data = response.json()
                    error_msg = error_data.get("error", "Unknown error")
                    error_code = error_data.get("code", "")
                    raise ValueError(f"Login failed: {response.status_code} - {error_msg} ({error_code})")
                except ValueError:
                    raise
                except Exception:
                    raise ValueError(f"Login failed: {response.status_code} - {response.text}")

            self._apply_token_response(response.json())
            self._cache_auth(cache_key)

            self.session.headers.update(self.auth_header)

            if self.ACTIVE_STREAMS:
                self.ACTIVE_STREAMS.clear()

            try:
                self.clear_all_sessions()
            except Exception as e:
                self.log.warning(f"Failed to clear previous sessions: {e}")

    def _apply_token_response(self, token_data: dict) -> None:
        """Extract and store auth fields from a token response."""
        self.access_token = token_data["access_token"]
        self.refresh_token = token_data.get("refresh_token", self.refresh_token)
        self.account_id = token_data.get("account_id", self.account_id)
        self.profile_id = token_data.get("profile_id", self.profile_id)

        try:
            decoded_token = jwt.decode(self.access_token, options={"verify_signature": False})
            self.token_expiration = decoded_token.get("exp")
        except Exception:
            self.token_expiration = int(time.time()) + token_data.get("expires_in", 300)

    def _cache_auth(self, cache_key: str) -> None:
        """Cache current auth state (access token + refresh token)."""
        cached = self.cache.get(cache_key)
        cached.set(
            data={
                "access_token": self.access_token,
                "account_id": self.account_id,
                "profile_id": self.profile_id,
                "refresh_token": self.refresh_token,
                "token_expiration": self.token_expiration,
            },
            expiration=self.token_expiration
            if isinstance(self.token_expiration, int) and self.token_expiration > int(time.time())
            else 300,
        )

        # Cache the refresh token separately with a long TTL for cross-session reuse
        if self.refresh_token:
            refresh_cache_key = cache_key.replace("cr_auth_token_", "cr_refresh_token_")
            refresh_cached = self.cache.get(refresh_cache_key)
            refresh_cached.set(
                data={"refresh_token": self.refresh_token},
                expiration=60 * 60 * 24 * 30,  # 30 days
            )

    def _refresh_access_token(self) -> None:
        """Refresh the access token using the stored refresh token."""
        if not self.refresh_token:
            raise ValueError("No refresh token available")

        response = self.session.post(
            url=self.config["endpoints"]["token"],
            headers={
                "content-type": "application/x-www-form-urlencoded; charset=UTF-8",
            },
            data={
                "grant_type": "refresh_token",
                "refresh_token": self.refresh_token,
                "scope": "offline_access",
                "client_id": self.config["client"]["id"],
                "client_secret": self.config["client"]["secret"],
                "device_type": self.device_type,
                "device_id": self.device_id,
                "device_name": self.device_name,
            },
        )

        if response.status_code != 200:
            self.refresh_token = None
            raise ValueError(f"Token refresh failed: {response.status_code}")

        self._apply_token_response(response.json())

        cache_key = f"cr_auth_token_{self.credential.sha1 if self.credential else 'default'}"
        self._cache_auth(cache_key)
        self.session.headers.update(self.auth_header)

    def get_titles(self) -> Series:
        """Fetch series and episode information."""
        series_id = self.parse_series_id(self.title)

        series_http = self.session.get(
            url=self.config["endpoints"]["series"].format(series_id=series_id),
            params={"locale": self.config["params"]["locale"]},
        )
        series_response = series_http.json()

        if "error" in series_response:
            raise ValueError(f"Series not found: {series_id} - {series_response.get('error')}")

        series_data = (
            series_response.get("data", [{}])[0] if isinstance(series_response.get("data"), list) else series_response
        )
        series_title = series_data.get("title", "Unknown Series")

        seasons_http = self.session.get(
            url=self.config["endpoints"]["seasons"].format(series_id=series_id),
            params={
                "locale": self.config["params"]["locale"],
                "preferred_audio_language": self.config.get("params", {}).get("preferred_audio_language", "en-US"),
            },
        )
        seasons_response = seasons_http.json()

        seasons_data = seasons_response.get("data", [])

        if not seasons_data:
            raise ValueError(f"No seasons found for series: {series_id}")

        used_season_numbers: set[int] = set()
        season_id_to_number: dict[str, int] = {}

        all_episode_data = []
        special_episodes = []

        for season in seasons_data:
            season_id = season["id"]
            season_number = season.get("season_number", 0)

            # Deduplicate season numbers: bump duplicates to the next free number
            effective_season_number = season_number
            if isinstance(season_number, int) and season_number > 0:
                if season_number in used_season_numbers:
                    candidate = season_number + 1
                    while candidate in used_season_numbers:
                        candidate += 1
                    effective_season_number = candidate
                used_season_numbers.add(effective_season_number)

            season_id_to_number[season_id] = effective_season_number

            episodes_http = self.session.get(
                url=self.config["endpoints"]["season_episodes"].format(season_id=season_id),
                params={"locale": self.config["params"]["locale"]},
            )
            episodes_response = episodes_http.json()

            episodes_data = episodes_response.get("data", [])

            for episode_data in episodes_data:
                episode_number = episode_data.get("episode_number")

                if episode_number is None or isinstance(episode_number, float):
                    special_episodes.append(episode_data)

                all_episode_data.append((episode_data, effective_season_number))

        if not all_episode_data:
            raise ValueError(f"No episodes found for series: {series_id}")

        series_year = None
        if all_episode_data:
            first_episode_data = all_episode_data[0][0]
            first_air_date = first_episode_data.get("episode_air_date")
            if first_air_date:
                series_year = int(first_air_date[:4])

        special_episodes.sort(key=lambda x: x.get("episode_air_date", ""))
        special_episode_numbers = {ep["id"]: idx + 1 for idx, ep in enumerate(special_episodes)}
        episodes = []
        season_episode_counts = {}

        for episode_data, season_number in all_episode_data:
            episode_number = episode_data.get("episode_number")

            if episode_number is None or isinstance(episode_number, float):
                # Specials go to season 0, numbered by air date
                final_season = 0
                final_number = special_episode_numbers[episode_data["id"]]
            else:
                final_season = season_number
                if final_season not in season_episode_counts:
                    season_episode_counts[final_season] = 0

                season_episode_counts[final_season] += 1
                final_number = season_episode_counts[final_season]

            original_language = None
            versions = episode_data.get("versions", [])
            for version in versions:
                if "main" in version.get("roles", []):
                    original_language = version.get("audio_locale")
                    break

            episode = Episode(
                id_=episode_data["id"],
                service=self.__class__,
                title=series_title,
                season=final_season,
                number=final_number,
                name=episode_data.get("title"),
                year=series_year,
                language=original_language,
                description=episode_data.get("description"),
                data=episode_data,
            )
            episodes.append(episode)

        return Series(episodes)

    def set_track_metadata(
        self, tracks: Tracks, episode_id: str, is_original: bool, audio_locale: Optional[str] = None
    ) -> None:
        """Set metadata for video and audio tracks."""
        for video in tracks.videos:
            video.needs_repack = True
            video.data["episode_id"] = episode_id
            video.is_original_lang = is_original
            if audio_locale:
                video.data["audio_locale"] = audio_locale
        for audio in tracks.audio:
            audio.data["episode_id"] = episode_id
            audio.is_original_lang = is_original
            if audio_locale:
                audio.data["audio_locale"] = audio_locale

    def get_tracks(self, title: Episode) -> Tracks:
        """Fetch video, audio, and subtitle tracks for an episode."""
        self.ensure_authenticated()

        episode_id = title.id

        if self.ACTIVE_STREAMS:
            self.ACTIVE_STREAMS.clear()

        self.clear_all_sessions()

        # Endpoints to try (tv/android_tv for video, android/phone/download for 192 kbps audio)
        endpoints_to_try = ["playback", "playback_download"]

        preferred_audio = self.config.get("params", {}).get("preferred_audio_language", "en-US")
        initial_response = self.get_playback_data(
            episode_id, track_stream=False, endpoint_key="playback", audio_locale=preferred_audio
        )
        versions = initial_response.get("versions", [])

        if not versions:
            self.log.warning("No versions found in playback response, using single version")
            versions = [{"audio_locale": initial_response.get("audioLocale", "ja-JP")}]

        tracks = None

        for idx, version in enumerate(versions):
            audio_locale = version.get("audio_locale")
            version_guid = version.get("guid")
            is_original = version.get("original", False)

            if not audio_locale:
                continue

            request_episode_id = version_guid if version_guid else episode_id

            # Try each endpoint for this version
            for endpoint_key in endpoints_to_try:
                try:
                    if idx == 0 and not version_guid and endpoint_key == "playback":
                        version_response = initial_response
                        version_token = version_response.get("token")
                    else:
                        if idx == 1 and not versions[0].get("guid") and endpoint_key == "playback":
                            initial_token = initial_response.get("token")
                            if initial_token:
                                self.close_stream(episode_id, initial_token)

                        try:
                            version_response = self.get_playback_data(
                                request_episode_id,
                                track_stream=False,
                                endpoint_key=endpoint_key,
                                audio_locale=audio_locale,
                            )
                        except ValueError as e:
                            self.log.warning(
                                f"Could not get playback info for audio {audio_locale} from {endpoint_key}: {e}"
                            )
                            continue

                        version_token = version_response.get("token")

                    hard_subs = version_response.get("hardSubs", {})

                    root_url = version_response.get("url")

                    if root_url and "/clean/" in root_url:
                        dash_url = root_url
                    elif "none" in hard_subs:
                        dash_url = hard_subs["none"].get("url")
                    elif "fr-FR" in hard_subs:
                        dash_url = hard_subs["fr-FR"].get("url")
                    elif hard_subs:
                        first_key = list(hard_subs.keys())[0]
                        dash_url = hard_subs[first_key].get("url")
                    else:
                        dash_url = None

                    if not dash_url:
                        if version_token:
                            self.close_stream(request_episode_id, version_token)
                        continue

                    try:
                        version_tracks = DASH.from_url(
                            url=dash_url,
                            session=self.session,
                        ).to_tracks(language=audio_locale)

                        if tracks is None:
                            tracks = version_tracks
                            self.set_track_metadata(tracks, request_episode_id, is_original, audio_locale)
                        else:
                            self.set_track_metadata(version_tracks, request_episode_id, is_original, audio_locale)
                            for video in version_tracks.videos:
                                if not any(v.id == video.id for v in tracks.videos):
                                    tracks.add(video)
                            for audio in version_tracks.audio:
                                existing_audio = next((a for a in tracks.audio if a.language == audio.language), None)
                                if existing_audio is None or (
                                    hasattr(audio, "bitrate")
                                    and hasattr(existing_audio, "bitrate")
                                    and audio.bitrate > existing_audio.bitrate
                                ):
                                    tracks.add(audio)

                    except Exception as e:
                        self.log.warning(
                            f"Failed to parse DASH manifest for audio {audio_locale} from {endpoint_key}: {e}"
                        )
                        if version_token:
                            self.close_stream(request_episode_id, version_token)
                        continue

                    if is_original and endpoint_key == "playback":
                        captions = version_response.get("captions", {})
                        subtitles_data = version_response.get("subtitles", {})
                        all_subs = {**captions, **subtitles_data}

                        for lang_code, sub_data in all_subs.items():
                            if lang_code == "none":
                                continue

                            if isinstance(sub_data, dict) and "url" in sub_data:
                                try:
                                    lang = Language.get(lang_code)
                                except (ValueError, LookupError):
                                    lang = Language.get("fr")

                                subtitle_format = sub_data.get("format", "vtt").lower()
                                if subtitle_format in ("ass", "ssa"):
                                    codec = Subtitle.Codec.SubStationAlphav4
                                else:
                                    codec = Subtitle.Codec.WebVTT

                                tracks.add(
                                    Subtitle(
                                        id_=f"subtitle-{audio_locale}-{lang_code}",
                                        url=sub_data["url"],
                                        codec=codec,
                                        language=lang,
                                        forced=False,
                                        sdh=False,
                                    ),
                                    warn_only=True,
                                )

                    if version_token:
                        self.close_stream(request_episode_id, version_token)

                except Exception as e:
                    self.log.warning(f"Error processing endpoint {endpoint_key} for version {idx}: {e}")
                    continue

        if versions and versions[0].get("guid"):
|
||||
initial_token = initial_response.get("token")
|
||||
if initial_token:
|
||||
self.close_stream(episode_id, initial_token)
|
||||
|
||||
if tracks is None:
|
||||
raise ValueError(f"Failed to fetch any tracks for episode: {episode_id}")
|
||||
|
||||
for track in tracks.audio + tracks.subtitles:
|
||||
if track.language:
|
||||
try:
|
||||
lang_obj = Language.get(str(track.language))
|
||||
base_lang = Language.get(lang_obj.language)
|
||||
lang_display = base_lang.language_name()
|
||||
track.name = lang_display
|
||||
except (ValueError, LookupError):
|
||||
pass
|
||||
|
||||
images = title.data.get("images", {})
|
||||
thumbnails = images.get("thumbnail", [])
|
||||
if thumbnails:
|
||||
thumb_variants = thumbnails[0] if isinstance(thumbnails[0], list) else [thumbnails[0]]
|
||||
if thumb_variants:
|
||||
thumb_index = min(7, len(thumb_variants) - 1)
|
||||
thumb = thumb_variants[thumb_index]
|
||||
if isinstance(thumb, dict) and "source" in thumb:
|
||||
pass
|
||||
|
||||
return tracks
|
||||
|
||||
    def get_widevine_license(self, challenge: bytes, title: Episode, track) -> bytes:
        """
        Get a Widevine license for decryption.

        Creates a fresh playback session for each track, gets the license, then immediately
        closes the stream. This prevents hitting the 3-concurrent-stream limit.
        CDN authorization is embedded in the manifest URLs, not tied to active sessions.
        """
        self.ensure_authenticated()

        track_episode_id = track.data.get("episode_id", title.id)

        with self.LICENSE_LOCK:
            playback_token = None
            try:
                audio_locale = track.data.get("audio_locale")
                playback_data = self.get_playback_data(track_episode_id, track_stream=True, audio_locale=audio_locale)
                playback_token = playback_data.get("token")

                if not playback_token:
                    raise ValueError(f"No playback token in response for {track_episode_id}")

                track.data["playback_token"] = playback_token

                license_response = self.session.post(
                    url=self.config["endpoints"]["license_widevine"],
                    params={"specConform": "true"},
                    data=challenge,
                    headers={
                        **self.auth_header,
                        "content-type": "application/octet-stream",
                        "accept": "application/octet-stream",
                        "x-cr-content-id": track_episode_id,
                        "x-cr-video-token": playback_token,
                    },
                )
                if license_response.status_code != 200:
                    try:
                        error_detail = license_response.text[:200]
                    except Exception:
                        error_detail = "Unknown error"
                    # The except handler below closes the stream exactly once
                    raise ValueError(f"License request failed: {license_response.status_code} - {error_detail}")

                self.close_stream(track_episode_id, playback_token)
                return license_response.content

            except Exception:
                if playback_token:
                    try:
                        self.close_stream(track_episode_id, playback_token)
                    except Exception:
                        pass
                raise

    def cleanup_active_streams(self) -> None:
        """
        Close all remaining active streams.

        Called to ensure no streams are left open.
        """
        if self.ACTIVE_STREAMS:
            try:
                self.authenticate()
            except Exception as e:
                self.log.warning(f"Failed to re-authenticate during cleanup: {e}")

        for episode_id, token in list(self.ACTIVE_STREAMS):
            try:
                self.close_stream(episode_id, token)
            except Exception as e:
                self.log.warning(f"Failed to close stream {episode_id}: {e}")
                if (episode_id, token) in self.ACTIVE_STREAMS:
                    self.ACTIVE_STREAMS.remove((episode_id, token))

    def __del__(self) -> None:
        """Clean up any remaining streams when the service is destroyed."""
        try:
            self.cleanup_active_streams()
        except Exception:
            pass

    def get_chapters(self, title: Episode) -> Chapters:
        """Get chapters/skip events for an episode."""
        chapters = Chapters()

        chapter_response = self.session.get(
            url=self.config["endpoints"]["skip_events"].format(episode_id=title.id),
        )
        special_chapters = []

        if chapter_response.status_code == 200:
            try:
                chapter_data = chapter_response.json()
            except Exception as e:
                self.log.warning(f"Failed to parse chapter data: {e}")
                return chapters

            for chapter_type in ["intro", "recap", "credits", "preview"]:
                if chapter_info := chapter_data.get(chapter_type):
                    try:
                        start_time = int(chapter_info["start"] * 1000)
                        end_time = int(chapter_info.get("end", chapter_info["start"]) * 1000)
                        special_chapters.append(
                            {
                                "start": start_time,
                                "end": end_time,
                                "name": chapter_info["type"].capitalize(),
                                "type": chapter_type,
                            }
                        )
                    except Exception:
                        pass

        special_chapters.sort(key=lambda x: x["start"])

        all_chapters = []
        chapter_counter = 1

        all_chapters.append({"timestamp": 0, "name": f"Chapter {chapter_counter}"})
        chapter_counter += 1

        for idx, special in enumerate(special_chapters):
            all_chapters.append({"timestamp": special["start"], "name": special["name"]})

            should_add_chapter_after = False

            if special["end"] > special["start"]:
                if idx + 1 < len(special_chapters):
                    next_special = special_chapters[idx + 1]
                    # Only start a regular chapter if there is a real gap before the next special
                    if next_special["start"] - special["end"] > 2000:
                        should_add_chapter_after = True
                else:
                    should_add_chapter_after = True

            if should_add_chapter_after:
                all_chapters.append({"timestamp": special["end"], "name": f"Chapter {chapter_counter}"})
                chapter_counter += 1

        for chapter in all_chapters:
            try:
                chapters.add(
                    Chapter(
                        timestamp=chapter["timestamp"],
                        name=chapter["name"],
                    )
                )
            except Exception:
                pass

        return chapters

    def search(self) -> Generator[SearchResult, None, None]:
        """Search for content on Crunchyroll."""
        try:
            response = self.session.get(
                url=self.config["endpoints"]["search"],
                params={
                    "q": self.title,
                    "type": "series",
                    "start": 0,
                    "n": 20,
                    "locale": self.config["params"]["locale"],
                },
            )

            if response.status_code != 200:
                raise ValueError(f"Search request failed with status {response.status_code}")

            search_data = response.json()
            for result_group in search_data.get("data", []):
                for series in result_group.get("items", []):
                    series_id = series.get("id")

                    if not series_id:
                        continue

                    title = series.get("title", "Unknown")
                    description = series.get("description", "")
                    year = series.get("series_launch_year")
                    if len(description) > 300:
                        description = description[:300] + "..."

                    url = f"https://www.crunchyroll.com/series/{series_id}"
                    label = f"SERIES ({year})" if year else "SERIES"

                    yield SearchResult(
                        id_=series_id,
                        title=title,
                        label=label,
                        description=description,
                        url=url,
                    )

        except Exception as e:
            raise ValueError(f"Search failed: {e}") from e

    def get_account_id(self) -> str:
        """Fetch and return the account ID."""
        response = self.session.get(url=self.config["endpoints"]["account_me"], headers=self.auth_header)

        if response.status_code != 200:
            raise ValueError(f"Failed to get account info: {response.status_code} - {response.text}")

        data = response.json()
        return data["account_id"]

    def close_stream(self, episode_id: str, token: str) -> None:
        """Close an active playback stream to free up concurrent stream slots."""
        should_remove = False
        try:
            response = self.session.delete(
                url=self.config["endpoints"]["playback_delete"].format(episode_id=episode_id, token=token),
                headers=self.auth_header,
            )
            if response.status_code in (200, 204, 403):
                should_remove = True
            else:
                self.log.warning(
                    f"Failed to close stream for {episode_id} (status {response.status_code}): {response.text[:200]}"
                )
        except Exception as e:
            self.log.warning(f"Error closing stream for {episode_id}: {e}")
        finally:
            if should_remove and (episode_id, token) in self.ACTIVE_STREAMS:
                self.ACTIVE_STREAMS.remove((episode_id, token))

    def get_active_sessions(self) -> list:
        """Get all active streaming sessions for the account."""
        try:
            response = self.session.get(
                url=self.config["endpoints"]["playback_sessions"],
                headers=self.auth_header,
            )
            if response.status_code == 200:
                return response.json().get("items", [])
            self.log.warning(f"Failed to get active sessions (status {response.status_code})")
            return []
        except Exception as e:
            self.log.warning(f"Error getting active sessions: {e}")
            return []

    def clear_all_sessions(self) -> int:
        """
        Clear all active streaming sessions created during this or previous runs.

        Tries multiple approaches to ensure all streams are closed:
        1. Clear tracked streams with known tokens.
        2. Query the active-sessions API and close every stream found.
        3. Try alternate token formats if needed.
        """
        cleared = 0

        if self.ACTIVE_STREAMS:
            streams_to_close = self.ACTIVE_STREAMS[:]
            for episode_id, playback_token in streams_to_close:
                try:
                    self.close_stream(episode_id, playback_token)
                    cleared += 1
                except Exception:
                    if (episode_id, playback_token) in self.ACTIVE_STREAMS:
                        self.ACTIVE_STREAMS.remove((episode_id, playback_token))

        for session_data in self.get_active_sessions():
            content_id = session_data.get("contentId")
            session_token = session_data.get("token")

            if not (content_id and session_token):
                continue

            # Sessions may be reported with an "08-" token prefix while the delete
            # endpoint expects an "11-" prefix, so try the swapped variant first.
            tokens_to_try = (
                ["11-" + session_token[3:], session_token]
                if session_token.startswith("08-")
                else [session_token]
            )

            session_closed = False
            for token in tokens_to_try:
                try:
                    response = self.session.delete(
                        url=self.config["endpoints"]["playback_delete"].format(
                            episode_id=content_id, token=token
                        ),
                        headers=self.auth_header,
                    )
                    if response.status_code in (200, 204):
                        cleared += 1
                        session_closed = True
                        break
                    elif response.status_code == 403:
                        session_closed = True
                        break
                except Exception:
                    pass

            if not session_closed:
                self.log.warning(f"Unable to close session {content_id} with any token format")

        return cleared

    def get_playback_data(
        self,
        episode_id: str,
        track_stream: bool = True,
        endpoint_key: str = "playback",
        audio_locale: Optional[str] = None,
    ) -> dict:
        """
        Get playback data for an episode, with automatic retry on stream limits.

        Args:
            episode_id: The episode ID to get playback data for.
            track_stream: Whether to track this stream in ACTIVE_STREAMS (False for temporary streams).
            endpoint_key: Which endpoint to use ("playback" or "playback_download").
            audio_locale: Preferred audio locale (e.g. "en-US", "ja-JP").

        Returns:
            dict: The playback response data.

        Raises:
            ValueError: If the playback request fails after retrying.
        """
        self.ensure_authenticated()

        params: dict[str, str] = {"queue": "false"}
        if audio_locale:
            params["audio"] = audio_locale

        max_retries = 2
        for attempt in range(max_retries + 1):
            http_response = self.session.get(
                url=self.config["endpoints"][endpoint_key].format(episode_id=episode_id),
                params=params,
            )

            try:
                response = http_response.json()
            except Exception as e:
                raise ValueError(f"Playback: failed to parse JSON (episode_id={episode_id}): {e}")

            if "error" in response:
                error_code = response.get("code", "")
                error_msg = response.get("message", response.get("error", "Unknown error"))

                if error_code == "TOO_MANY_ACTIVE_STREAMS" and attempt < max_retries:
                    self.log.warning(f"Hit stream limit: {error_msg}")
                    cleared = self.clear_all_sessions()

                    if cleared == 0 and attempt == 0:
                        wait_time = 30
                        self.log.warning(
                            f"Found orphaned sessions from a previous run. Waiting {wait_time}s for them to expire..."
                        )
                        time.sleep(wait_time)

                    continue

                raise ValueError(f"Could not get playback info for episode: {episode_id} - {error_msg}")

            playback_token = response.get("token")
            if playback_token and track_stream:
                self.ACTIVE_STREAMS.append((episode_id, playback_token))

            return response

        raise ValueError(f"Failed to get playback data for episode: {episode_id}")

    def parse_series_id(self, title_input: str) -> str:
        """Parse the series ID from a URL or direct ID input."""
        match = re.match(self.TITLE_RE, title_input, re.IGNORECASE)
        if not match:
            raise ValueError(f"Could not parse series ID from: {title_input}")
        return match.group("id")
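The token-format fallback used by `clear_all_sessions` above can be isolated into a small helper for clarity. This is an illustrative sketch, not part of the service code: the helper name `candidate_tokens` is made up here, but the logic mirrors the source, trying an `11-`-prefixed variant before the reported `08-` token.

```python
def candidate_tokens(session_token: str) -> list[str]:
    """Return the token formats to try when closing a session.

    Sessions reported by the sessions API may carry an "08-" prefix,
    while the delete endpoint expects an "11-" prefix, so the swapped
    variant is tried first before falling back to the original token.
    """
    if session_token.startswith("08-"):
        return ["11-" + session_token[3:], session_token]
    return [session_token]


print(candidate_tokens("08-abc123"))  # ['11-abc123', '08-abc123']
print(candidate_tokens("11-abc123"))  # ['11-abc123']
```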
47
CR/config.yaml
Normal file
@@ -0,0 +1,47 @@
# Crunchyroll API Configuration
client:
  id: "o7uowy7q4lgltbavyhjq"
  secret: "lqrjETNx6W7uRnpcDm8wRVj8BChjC1er"

# API Endpoints
endpoints:
  # Authentication
  token: "https://www.crunchyroll.com/auth/v1/token"

  # Account
  account_me: "https://www.crunchyroll.com/accounts/v1/me"
  multiprofile: "https://www.crunchyroll.com/accounts/v1/{account_id}/multiprofile"

  # Content Metadata
  series: "https://www.crunchyroll.com/content/v2/cms/series/{series_id}"
  seasons: "https://www.crunchyroll.com/content/v2/cms/series/{series_id}/seasons"
  season_episodes: "https://www.crunchyroll.com/content/v2/cms/seasons/{season_id}/episodes"
  skip_events: "https://static.crunchyroll.com/skip-events/production/{episode_id}.json"

  # Playback
  playback: "https://www.crunchyroll.com/playback/v3/{episode_id}/tv/android_tv/play"
  playback_download: "https://www.crunchyroll.com/playback/v3/{episode_id}/android/phone/download"
  playback_delete: "https://www.crunchyroll.com/playback/v1/token/{episode_id}/{token}"
  playback_sessions: "https://www.crunchyroll.com/playback/v1/sessions/streaming"
  license_widevine: "https://cr-license-proxy.prd.crunchyrollsvc.com/v1/license/widevine"
  search: "https://www.crunchyroll.com/content/v2/discover/search"

# Headers for the Android TV client
headers:
  user-agent: "Crunchyroll/ANDROIDTV/3.58.0_22336 (Android 11; en-US; SHIELD Android TV)"
  accept: "application/json"
  accept-charset: "UTF-8"
  accept-encoding: "gzip"
  connection: "Keep-Alive"
  content-type: "application/x-www-form-urlencoded; charset=UTF-8"

# Query parameters
params:
  locale: "en-US"
  preferred_audio_language: "en-US"

# Device parameters for authentication
device:
  type: "ANDROIDTV"
  name: "SHIELD Android TV"
  model: "SHIELD Android TV"
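The endpoint values in the config above are plain `str.format` templates; the service fills the placeholders at request time. A minimal sketch of how the playback URLs are built (the episode ID and token below are made-up placeholders, not real identifiers):

```python
playback = "https://www.crunchyroll.com/playback/v3/{episode_id}/tv/android_tv/play"
playback_delete = "https://www.crunchyroll.com/playback/v1/token/{episode_id}/{token}"

# Hypothetical values, for illustration only
url = playback.format(episode_id="GRDV0019R")
delete_url = playback_delete.format(episode_id="GRDV0019R", token="11-abc123")

print(url)         # https://www.crunchyroll.com/playback/v3/GRDV0019R/tv/android_tv/play
print(delete_url)  # https://www.crunchyroll.com/playback/v1/token/GRDV0019R/11-abc123
```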
342
EXAMPLE/__init__.py
Normal file
@@ -0,0 +1,342 @@
import base64
import hashlib
import json
import re
from collections.abc import Generator
from datetime import datetime
from http.cookiejar import CookieJar
from typing import Optional, Union

import click
from langcodes import Language

from unshackle.core.constants import AnyTrack
from unshackle.core.credential import Credential
from unshackle.core.manifests import DASH
# from unshackle.core.manifests import HLS
from unshackle.core.search_result import SearchResult
from unshackle.core.service import Service
from unshackle.core.titles import Episode, Movie, Movies, Series, Title_T, Titles_T
from unshackle.core.tracks import Chapter, Subtitle, Tracks, Video


class EXAMPLE(Service):
    """
    Service code for domain.com
    Version: 1.0.0

    Authorization: Cookies

    Security: FHD@L3

    Use the full URL (e.g. https://domain.com/details/20914) or the title ID (e.g. 20914).
    """

    TITLE_RE = r"^(?:https?://domain\.com/details/)?(?P<title_id>[^/]+)"
    GEOFENCE = ("US", "UK")
    NO_SUBTITLES = True

    VIDEO_RANGE_MAP = {
        "SDR": "sdr",
        "HDR10": "hdr10",
        "DV": "dolby_vision",
    }

    @staticmethod
    @click.command(name="EXAMPLE", short_help="https://domain.com")
    @click.argument("title", type=str)
    @click.option("-m", "--movie", is_flag=True, default=False, help="Specify if the title is a movie")
    @click.option("-d", "--device", type=str, default="android_tv", help="Select a device from the config file")
    @click.pass_context
    def cli(ctx, **kwargs):
        return EXAMPLE(ctx, **kwargs)

    def __init__(self, ctx, title, movie, device):
        super().__init__(ctx)

        self.title = title
        self.movie = movie
        self.device = device
        self.cdm = ctx.obj.cdm

        # self.track_request is set by Service.__init__() from CLI params.
        # It contains: codecs (list[Video.Codec]), ranges (list[Video.Range]), best_available (bool)

        # Override the codec for HDR ranges (HDR requires HEVC)
        if any(r != Video.Range.SDR for r in self.track_request.ranges):
            self.track_request.codecs = [Video.Codec.HEVC]

        # Override for L3 CDM limitations
        if self.cdm and self.cdm.security_level == 3:
            self.track_request.codecs = [Video.Codec.AVC]
            self.track_request.ranges = [Video.Range.SDR]

        if self.config is None:
            raise Exception("Config is missing!")

        profile_name = ctx.parent.params.get("profile")
        self.profile = profile_name or "default"

    def authenticate(self, cookies: Optional[CookieJar] = None, credential: Optional[Credential] = None) -> None:
        super().authenticate(cookies, credential)
        if not cookies:
            raise EnvironmentError("Service requires Cookies for Authentication.")

        jwt_token = next((cookie.value for cookie in cookies if cookie.name == "streamco_token"), None)
        if not jwt_token:
            raise EnvironmentError("Could not find the 'streamco_token' cookie.")
        payload = json.loads(base64.urlsafe_b64decode(jwt_token.split(".")[1] + "==").decode("utf-8"))
        profile_id = payload.get("profileId", None)
        self.session.headers.update({"user-agent": self.config["client"][self.device]["user_agent"]})

        cache = self.cache.get(f"tokens_{self.device}_{self.profile}")

        if cache:
            if cache.data["expires_in"] > int(datetime.now().timestamp()):
                self.log.info("Using cached tokens")
            else:
                self.log.info("Refreshing tokens")

                refresh = self.session.post(
                    url=self.config["endpoints"]["refresh"], data={"refresh_token": cache.data["refresh_data"]}
                ).json()

                cache.set(data=refresh)
        else:
            self.log.info("Retrieving new tokens")

            token = self.session.post(
                url=self.config["endpoints"]["login"],
                data={
                    "token": jwt_token,
                    "profileId": profile_id,
                },
            ).json()

            cache.set(data=token)

        self.token = cache.data["token"]
        self.user_id = cache.data["userId"]

    def search(self) -> Generator[SearchResult, None, None]:
        search = self.session.get(
            url=self.config["endpoints"]["search"], params={"q": self.title, "token": self.token}
        ).json()

        for result in search["entries"]:
            yield SearchResult(
                id_=result["id"],
                title=result["title"],
                label="SERIES" if result["programType"] == "series" else "MOVIE",
                url=result["url"],
            )

    def get_titles(self) -> Titles_T:
        self.title = re.match(self.TITLE_RE, self.title).group("title_id")

        metadata = self.session.get(
            url=self.config["endpoints"]["metadata"].format(title_id=self.title), params={"token": self.token}
        ).json()

        if metadata["programType"] == "movie":
            self.movie = True

        if self.movie:
            return Movies(
                [
                    Movie(
                        id_=metadata["id"],
                        service=self.__class__,
                        name=metadata["title"],
                        description=metadata["description"],
                        year=metadata["releaseYear"] if metadata["releaseYear"] > 0 else None,
                        language=Language.find(metadata["languages"][0]),
                        data=metadata,
                    )
                ]
            )

        episodes = []
        for season in metadata["seasons"]:
            if "Trailers" in season["title"]:
                continue
            season_data = self.session.get(url=season["url"], params={"token": self.token}).json()

            for episode in season_data["entries"]:
                episodes.append(
                    Episode(
                        id_=episode["id"],
                        service=self.__class__,
                        title=metadata["title"],
                        season=episode["season"],
                        number=episode["episode"],
                        name=episode["title"],
                        description=episode["description"],
                        year=metadata["releaseYear"] if metadata["releaseYear"] > 0 else None,
                        language=Language.find(metadata["languages"][0]),
                        data=episode,
                    )
                )
        return Series(episodes)

    # DASH example: the service requires separate API calls per codec/range.
    # Uses _get_tracks_for_variants(), which iterates codecs x ranges,
    # handles HYBRID (HDR10+DV), and falls back with best_available.

    def get_tracks(self, title: Title_T) -> Tracks:
        def _fetch_variant(
            title: Title_T,
            codec: Optional[Video.Codec],
            range_: Video.Range,
        ) -> Tracks:
            vcodec_str = "H265" if codec == Video.Codec.HEVC else "H264"
            range_str = range_.name
            video_format = self.VIDEO_RANGE_MAP.get(range_str, "sdr")

            self.log.info(f" + Fetching {vcodec_str} {range_str} manifest")
            tracks = self._fetch_dash_manifest(title, vcodec=vcodec_str, video_format=video_format)

            expected_range = {
                "HDR10": Video.Range.HDR10,
                "DV": Video.Range.DV,
            }.get(range_str)
            if expected_range and not any(v.range == expected_range for v in tracks.videos):
                raise ValueError(f"{range_str} requested but no {range_str} tracks available")

            return tracks

        return self._get_tracks_for_variants(title, _fetch_variant)

    # HLS example: the service returns all codecs/ranges in one master playlist.
    # No need for _get_tracks_for_variants; dl.py filters by user selection.
    #
    # def get_tracks(self, title: Title_T) -> Tracks:
    #     playback = self.session.get(
    #         url=self.config["endpoints"]["playback"].format(title_id=title.id),
    #         params={"token": self.token},
    #     ).json()
    #     return HLS.from_url(
    #         url=playback["manifest_url"],
    #         session=self.session,
    #     ).to_tracks(title.language)

    def _fetch_dash_manifest(
        self,
        title: Title_T,
        vcodec: str = "H264",
        video_format: str = "sdr",
    ) -> Tracks:
        streams = self.session.post(
            url=self.config["endpoints"]["streams"],
            params={
                "token": self.token,
                "guid": title.id,
            },
            data={
                "type": self.config["client"][self.device]["type"],
                "video_format": video_format,
                "video_codec": vcodec,
            },
        ).json()["media"]

        self.license_data = {
            "url": streams["drm"]["url"],
            "data": streams["drm"]["data"],
            "session": streams["drm"]["session"],
        }

        manifest_url = streams["url"].split("?")[0]
        self.log.debug(f"Manifest URL: {manifest_url}")
        tracks = DASH.from_url(url=manifest_url, session=self.session).to_tracks(language=title.language)

        range_enum = {
            "hdr10": Video.Range.HDR10,
            "dolby_vision": Video.Range.DV,
        }.get(video_format, Video.Range.SDR)
        for video in tracks.videos:
            video.range = range_enum

        # Drop clear (unencrypted) duplicate audio representations
        tracks.audio = [
            track for track in tracks.audio if "clear" not in track.data["dash"]["representation"].get("id", "")
        ]

        for track in tracks.audio:
            if track.channels == 6.0:
                track.channels = 5.1
            track_label = track.data["dash"]["adaptation_set"].get("label")
            if track_label and "Audio Description" in track_label:
                track.descriptive = True

        tracks.subtitles.clear()
        for subtitle in streams.get("captions") or []:
            tracks.add(
                Subtitle(
                    id_=hashlib.md5(subtitle["url"].encode()).hexdigest()[:6],
                    url=subtitle["url"],
                    codec=Subtitle.Codec.from_mime("vtt"),
                    language=Language.get(subtitle["language"]),
                    sdh=True,
                )
            )

        if not self.movie:
            title.data["chapters"] = self.session.get(
                url=self.config["endpoints"]["metadata"].format(title_id=title.id),
                params={"token": self.token},
            ).json()["chapters"]

        return tracks

    def get_chapters(self, title: Title_T) -> list[Chapter]:
        chapters = []

        for chapter in title.data.get("chapters", []):
            if chapter["name"] == "Intro":
                chapters.append(Chapter(timestamp=chapter["start"], name="Opening"))
                chapters.append(Chapter(timestamp=chapter["end"]))
            if chapter["name"] == "Credits":
                chapters.append(Chapter(timestamp=chapter["start"], name="Credits"))

        return chapters

    def get_widevine_service_certificate(self, **_: any) -> str:
        return self.config.get("certificate")

    def get_widevine_license(self, *, challenge: bytes, title: Title_T, track: AnyTrack) -> Optional[Union[bytes, str]]:
        license_url = self.license_data.get("url") or self.config["endpoints"].get("widevine_license")
        if not license_url:
            raise ValueError("Widevine license endpoint not configured")

        response = self.session.post(
            url=license_url,
            data=challenge,
            params={
                "session": self.license_data.get("session"),
                "userId": self.user_id,
            },
            headers={
                "dt-custom-data": self.license_data.get("data"),
                "user-agent": self.config["client"][self.device]["license_user_agent"],
            },
        )
        response.raise_for_status()
        try:
            return response.json().get("license")
        except ValueError:
            return response.content

    def get_playready_license(self, *, challenge: bytes, title: Title_T, track: AnyTrack) -> Optional[Union[bytes, str]]:
        license_url = self.config["endpoints"].get("playready_license")
        if not license_url:
            raise ValueError("PlayReady license endpoint not configured")

        response = self.session.post(
            url=license_url,
            data=challenge,
            headers={
                "user-agent": self.config["client"][self.device]["license_user_agent"],
            },
        )
        response.raise_for_status()
        return response.content
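The `authenticate` method in the file above extracts the profile ID from the `streamco_token` JWT by splitting on `.` and base64url-decoding the payload segment, appending `"=="` so that stripped padding still decodes. A self-contained sketch of that decoding step; the token here is constructed in the snippet purely for illustration (real tokens come from cookies):

```python
import base64
import json

# Build a fake JWT payload segment for illustration
payload = {"profileId": "profile-123"}
segment = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode().rstrip("=")
jwt_token = f"header.{segment}.signature"

# Decode as done in authenticate(): the extra "==" tolerates missing padding
decoded = json.loads(base64.urlsafe_b64decode(jwt_token.split(".")[1] + "==").decode("utf-8"))
print(decoded.get("profileId"))  # profile-123
```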
12
EXAMPLE/config.yaml
Normal file
@@ -0,0 +1,12 @@
endpoints:
  login: https://api.domain.com/v1/login
  metadata: https://api.domain.com/v1/metadata/{title_id}.json
  streams: https://api.domain.com/v1/streams
  playready_license: https://api.domain.com/v1/license/playready
  widevine_license: https://api.domain.com/v1/license/widevine

client:
  android_tv:
    user_agent: USER_AGENT
    license_user_agent: LICENSE_USER_AGENT
    type: DATA
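`_fetch_dash_manifest` in EXAMPLE/__init__.py above derives short, stable subtitle IDs by hashing each caption URL and keeping the first six hex characters. A quick sketch of that scheme; the URL is a placeholder, not a real endpoint:

```python
import hashlib

subtitle_url = "https://api.domain.com/captions/en.vtt"  # placeholder URL
subtitle_id = hashlib.md5(subtitle_url.encode()).hexdigest()[:6]

# The ID is deterministic: the same URL always yields the same six hex chars
print(subtitle_id)
```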
574
HMAX/__init__.py
Normal file
@@ -0,0 +1,574 @@
from __future__ import annotations

import json
import re
import sys
import uuid
from collections import defaultdict
from collections.abc import Generator
from copy import deepcopy
from http.cookiejar import CookieJar
from typing import Any
from urllib.parse import urljoin
from zlib import crc32

import click
from click import Context
from langcodes import Language
from lxml import etree
from unshackle.core.credential import Credential
from unshackle.core.manifests import DASH
from unshackle.core.search_result import SearchResult
from unshackle.core.service import Service
from unshackle.core.session import session as CurlSession
from unshackle.core.titles import Episode, Movie, Movies, Series
from unshackle.core.tracks import Audio, Chapter, Chapters, Subtitle, Track, Tracks
from unshackle.core.utilities import is_close_match


class HMAX(Service):
    """
    \b
    Service code for the HBO Max streaming service (https://www.hbomax.com).
    Credit to @sp4rk.y for the subtitle fix.

    \b
    Version: 1.0.1
    Author: stabbedbybrick
    Authorization: Cookies for subscription, none for freely available titles
    Robustness:
        Widevine:
            L1: 2160p, 1080p
            L3: 720p
        PlayReady:
            SL3000: 2160p
            SL2000: 1080p, 720p

    \b
    Notes:
        - Language tags can be mislabelled or missing on some titles. List tracks with --list to verify.
        - All qualities, codecs, and ranges are included when available. Use -v H.265, -r HDR10, -q 1080p, etc. to select.
    """

    ALIASES = ("hbomax",)
    TITLE_RE = (
        r"^(?:https?://play\.hbomax\.com)?/(?P<type>show|mini-series|video|movie|topical|standalone|sport)/(?P<id>[a-z0-9-/]+)"
    )
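TITLE_RE accepts both full play.hbomax.com URLs and bare paths, capturing the content type and slug as named groups. A quick standalone sketch of how it parses (the URL below is illustrative, not a real title):

```python
import re

# Same pattern as the service class above
TITLE_RE = (
    r"^(?:https?://play\.hbomax\.com)?/(?P<type>show|mini-series|video|movie|topical|standalone|sport)/(?P<id>[a-z0-9-/]+)"
)

m = re.match(TITLE_RE, "https://play.hbomax.com/show/some-show-slug")
print(m.group("type"), m.group("id"))  # show some-show-slug

# Bare paths work too, since the scheme/host group is optional
m2 = re.match(TITLE_RE, "/movie/another-slug")
print(m2.group("type"), m2.group("id"))  # movie another-slug
```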

    @staticmethod
    @click.command(name="HMAX", short_help="https://www.hbomax.com/", help=__doc__)
    @click.argument("title", type=str)
    @click.pass_context
    def cli(ctx: Context, **kwargs: Any) -> HMAX:
        return HMAX(ctx, **kwargs)

    def __init__(self, ctx: Context, title: str):
        super().__init__(ctx)
        self.title = title

        self.profile = ctx.parent.params.get("profile")
        if not self.profile:
            self.profile = "default"

        self.cdm = ctx.obj.cdm
        if self.cdm is not None:
            self.drm_system = "playready"
            self.security_level = "SL3000"

            if self.cdm.security_level <= 3:
                self.drm_system = "widevine"
                self.security_level = "L1"

        self.base_url = self.config["endpoints"]["default_url"]

    def get_session(self) -> CurlSession:
        return CurlSession("okhttp4", status_forcelist=[429, 502, 503, 504])

    def authenticate(self, cookies: CookieJar | None = None, credential: Credential | None = None) -> None:
        super().authenticate(cookies, credential)

        tokens = {}

        if cookies is not None:
            st_token = next((c.value for c in cookies if c.name == "st"), None)
            if not st_token:
                raise ValueError("- Unable to find token in cookies, try refreshing.")

            # Only use cache if cookies are present since it's not needed for free titles
            cache = self.cache.get(f"tokens_{self.profile}")
            if cache:
                self.log.info(" + Using cached Tokens...")
                tokens = cache.data
            else:
                self.log.info(" + Setting up new profile...")
                profile = {"token": st_token, "device_id": str(uuid.uuid1())}
                cache.set(profile)
                tokens = cache.data

        self.device_id = tokens.get("device_id") or str(uuid.uuid1())
        client_id = self.config["client_id"]

        self.session.headers.update({
            "user-agent": "androidtv beam/20.8.1.2 (android/9; en-US; SHIELD Android TV-NVIDIA; Build/1)",
            "x-disco-client": "ANDROIDTV:9:beam:20.8.1.2",
            "x-disco-params": "realm=bolt,bid=beam,features=ar",
            "x-device-info": f"beam/20.8.1.2 (NVIDIA/SHIELD Android TV; android/9-mdarcy; {self.device_id}/{client_id})",
        })

        access = self._request("GET", "/token", params={"realm": "bolt", "deviceId": self.device_id})
        self.access_token = access["data"]["attributes"]["token"]

        config = self._request("POST", "/session-context/headwaiter/v1/bootstrap")
        self.base_url = self.config["endpoints"]["template"].format(config["routing"]["tenant"], config["routing"]["homeMarket"])

    def search(self) -> Generator[SearchResult, None, None]:
        params = {
            "include": "default",
            "decorators": "viewingHistory,isFavorite,playbackAllowed,contentAction,badges",
            "contentFilter[query]": self.title,
            "page[items.number]": "1",
            "page[items.size]": "8",
        }
        data = self._request("GET", "/cms/routes/search/result", params=params)

        results = [x.get("attributes") for x in data["included"] if x.get("type") == "show"]

        for result in results:
            yield SearchResult(
                id_=f"/show/{result.get('alternateId')}",
                title=result.get("name"),
                description=result.get("description"),
                label="show",
                url=f"/show/{result.get('alternateId')}",
            )

    def get_titles(self) -> Movies | Series:
        try:
            entity, content_id = (re.match(self.TITLE_RE, self.title).group(i) for i in ("type", "id"))
        except Exception:
            raise ValueError("Could not parse ID from title - is the URL correct?")

        if entity in ("show", "mini-series", "topical"):
            episodes = self._show(content_id)
            return Series(episodes)

        elif entity in ("movie", "standalone"):
            movie = self._movie(content_id, entity)
            return Movies(movie)

        elif entity == "sport":
            sport = self._sport(content_id)
            return Movies(sport)

        elif entity == "video":
            episodes = self._episode(content_id)
            return Series(episodes)

        else:
            raise ValueError(f"Unknown content: {entity}")
    def get_tracks(self, title: Movie | Episode) -> Tracks:
        payload = {
            "appBundle": "com.wbd.stream",
            "applicationSessionId": self.device_id,
            "capabilities": {
                "codecs": {
                    "audio": {
                        "decoders": [
                            {"codec": "aac", "profiles": ["lc", "he", "hev2", "xhe"]},
                            {"codec": "eac3", "profiles": ["atmos"]},
                        ]
                    },
                    "video": {
                        "decoders": [
                            {
                                "codec": "h264",
                                "levelConstraints": {
                                    "framerate": {"max": 60, "min": 0},
                                    "height": {"max": 2160, "min": 48},
                                    "width": {"max": 3840, "min": 48},
                                },
                                "maxLevel": "5.2",
                                "profiles": ["baseline", "main", "high"],
                            },
                            {
                                "codec": "h265",
                                "levelConstraints": {
                                    "framerate": {"max": 60, "min": 0},
                                    "height": {"max": 2160, "min": 144},
                                    "width": {"max": 3840, "min": 144},
                                },
                                "maxLevel": "5.1",
                                "profiles": ["main10", "main"],
                            },
                        ],
                        "hdrFormats": ["hdr10", "hdr10plus", "dolbyvision", "dolbyvision5", "dolbyvision8", "hlg"],
                    },
                },
                "contentProtection": {
                    "contentDecryptionModules": [
                        {"drmKeySystem": self.drm_system, "maxSecurityLevel": self.security_level}
                    ]
                },
                "manifests": {"formats": {"dash": {}}},
            },
            "consumptionType": "streaming",
            "deviceInfo": {
                "player": {
                    "mediaEngine": {"name": "", "version": ""},
                    "playerView": {"height": 2160, "width": 3840},
                    "sdk": {"name": "", "version": ""},
                }
            },
            "editId": title.id,
            "firstPlay": False,
            "gdpr": False,
            "playbackSessionId": str(uuid.uuid4()),
            "userPreferences": {
                # "uiLanguage": "en"
            },
        }

        playback = self._request(
            "POST", "/playback-orchestrator/any/playback-orchestrator/v1/playbackInfo",
            headers={"Authorization": f"Bearer {self.access_token}"},
            json=payload,
        )

        original_language = next((
            x.get("language")
            for x in playback["videos"][0]["audioTracks"]
            if "Original" in x.get("displayName", "")
        ), "")

        manifest = (
            playback.get("fallback", {}).get("manifest", {}).get("url", "").replace("_fallback", "")
            or playback.get("manifest", {}).get("url")
        )

        license_url = (
            playback.get("fallback", {}).get("drm", {}).get("schemes", {}).get(self.drm_system, {}).get("licenseUrl")
            or playback.get("drm", {}).get("schemes", {}).get(self.drm_system, {}).get("licenseUrl")
        )

        title.data["license_url"] = license_url
        title.data["chapters"] = next((x.get("annotations") for x in playback["videos"] if x["type"] == "main"), None)

        dash = DASH.from_url(url=manifest, session=self.session)
        tracks = dash.to_tracks(language="en", period_filter=self._period_filter)

        for track in tracks:
            track.is_original_lang = str(track.language) == original_language
            track.name = "Original" if track.is_original_lang else track.name

            if isinstance(track, Audio):
                role = track.data["dash"]["representation"].find("Role")
                if role is not None and role.get("value") in ["description", "alternative", "alternate"]:
                    track.descriptive = True

            if isinstance(track, Subtitle):
                tracks.subtitles.remove(track)

        subtitles = self._process_subtitles(dash, original_language)
        tracks.add(subtitles)

        return tracks
    def get_chapters(self, title: Movie | Episode) -> Chapters:
        if not title.data.get("chapters"):
            return Chapters()

        chapters = []
        for chapter in title.data["chapters"]:
            if "recap" in chapter.get("secondaryType", "").lower():
                chapters.append(Chapter(name="Recap", timestamp=chapter["start"]))
                if chapter.get("end"):
                    chapters.append(Chapter(timestamp=chapter.get("end")))
            if "intro" in chapter.get("secondaryType", "").lower():
                chapters.append(Chapter(name="Intro", timestamp=chapter["start"]))
                if chapter.get("end"):
                    chapters.append(Chapter(timestamp=chapter.get("end")))
            elif "credits" in chapter.get("type", "").lower():
                chapters.append(Chapter(name="Credits", timestamp=chapter["start"]))

        if not any(c.timestamp == "00:00:00.000" for c in chapters):
            chapters.append(Chapter(timestamp=0))

        return sorted(chapters, key=lambda x: x.timestamp)
    def get_widevine_service_certificate(self, challenge: bytes, title: Episode | Movie, **_: Any) -> bytes | None:
        if not (license_url := title.data.get("license_url")):
            return None

        return self.session.post(url=license_url, data=challenge).content

    def get_widevine_license(self, *, challenge: bytes, title: Episode | Movie, track: Any) -> bytes | None:
        if not (license_url := title.data.get("license_url")):
            return None

        r = self.session.post(url=license_url, data=challenge)
        if r.status_code != 200:
            raise ConnectionError(r.status_code, r.text)

        return r.content

    def get_playready_license(self, *, challenge: bytes, title: Episode | Movie, track: Any) -> bytes | None:
        if not (license_url := title.data.get("license_url")):
            return None

        r = self.session.post(url=license_url, data=challenge)
        if r.status_code != 200:
            raise ConnectionError(r.status_code, r.text)

        return r.content
    # Service specific functions

    @staticmethod
    def _process_subtitles(dash: DASH, language: str) -> list[Subtitle]:
        subtitle_groups = defaultdict(list)
        manifest = dash.manifest

        for period in manifest.findall("Period"):
            for adapt_set in period.findall("AdaptationSet"):
                if adapt_set.get("contentType") != "text" or not adapt_set.get("lang"):
                    continue

                role = adapt_set.find("Role")
                label = adapt_set.find("Label")
                key = (
                    adapt_set.get("lang"),
                    role.get("value") if role is not None else "subtitle",
                    label.text if label is not None else "",
                )
                subtitle_groups[key].append((period, adapt_set))

        final_tracks = []
        for (lang, role_value, label_text), adapt_set_group in subtitle_groups.items():
            first_period, first_adapt = adapt_set_group[0]
            if first_adapt.find("Representation") is None:
                continue

            s_elements_with_context = []
            for _, adapt_set in adapt_set_group:
                rep = adapt_set.find("Representation")
                if rep is None:
                    continue

                # Explicit None checks are required here: lxml elements without
                # children are falsy, so `find(...) or find(...)` would skip
                # a valid but empty SegmentTemplate on the Representation.
                template = rep.find("SegmentTemplate")
                if template is None:
                    template = adapt_set.find("SegmentTemplate")
                timeline = template.find("SegmentTimeline") if template is not None else None

                if timeline is not None:
                    start_num = int(template.get("startNumber", 1))
                    s_elements_with_context.extend((start_num, s_elem) for s_elem in timeline.findall("S"))

            s_elements_with_context.sort(key=lambda x: x[0])

            combined_adapt = deepcopy(first_adapt)
            combined_rep = combined_adapt.find("Representation")

            seg_template = combined_rep.find("SegmentTemplate")
            if seg_template is None:
                template_at_adapt = combined_adapt.find("SegmentTemplate")
                if template_at_adapt is not None:
                    seg_template = deepcopy(template_at_adapt)
                    combined_rep.append(seg_template)
                    combined_adapt.remove(template_at_adapt)
                else:
                    continue

            if seg_template.find("SegmentTimeline") is not None:
                seg_template.remove(seg_template.find("SegmentTimeline"))

            new_timeline = etree.Element("SegmentTimeline")
            new_timeline.extend(deepcopy(s) for _, s in s_elements_with_context)
            seg_template.append(new_timeline)

            seg_template.set("startNumber", "1")
            if "endNumber" in seg_template.attrib:
                del seg_template.attrib["endNumber"]

            track_id = hex(crc32(f"sub-{lang}-{role_value}-{label_text}".encode()) & 0xFFFFFFFF)[2:]
            lang_obj = Language.get(lang)
            track_name = "Original" if (language and is_close_match(lang_obj, [language])) else lang_obj.display_name()

            final_tracks.append(
                Subtitle(
                    id_=track_id,
                    url=dash.url,
                    codec=Subtitle.Codec.WebVTT,
                    language=lang_obj,
                    is_original_lang=bool(language and is_close_match(lang_obj, [language])),
                    descriptor=Track.Descriptor.DASH,
                    sdh="sdh" in label_text.lower() or role_value == "caption",
                    forced="forced" in label_text.lower() or "forced" in role_value.lower(),
                    name=track_name,
                    data={
                        "dash": {
                            "manifest": manifest,
                            "period": first_period,
                            "adaptation_set": combined_adapt,
                            "representation": combined_rep,
                        }
                    },
                )
            )

        return final_tracks
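The stable subtitle track IDs above are derived from a CRC32 of the language/role/label tuple, so the same tuple always yields the same ID across runs. A minimal standalone sketch of that derivation:

```python
from zlib import crc32

def subtitle_track_id(lang: str, role: str, label: str) -> str:
    # Mask to 32 bits so the value is a stable unsigned int,
    # then format as a bare hex string (no "0x" prefix)
    return hex(crc32(f"sub-{lang}-{role}-{label}".encode()) & 0xFFFFFFFF)[2:]

print(subtitle_track_id("en", "subtitle", ""))
```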

    @staticmethod
    def _period_filter(period: Any) -> bool:
        """Shouldn't be needed for the fallback manifest."""
        if not (duration := period.get("duration")):
            return False

        return DASH.pt_to_sec(duration) < 120
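`DASH.pt_to_sec` (from unshackle, not shown here) converts an ISO-8601 duration such as `PT1M30S` into seconds so that short periods can be filtered out. A rough standalone equivalent, assuming only the H/M/S components of the `PT...` form ever appear:

```python
import re

def pt_to_sec(duration: str) -> float:
    """Convert an ISO-8601 duration of the form PT#H#M#S to seconds."""
    match = re.fullmatch(
        r"PT(?:(\d+(?:\.\d+)?)H)?(?:(\d+(?:\.\d+)?)M)?(?:(\d+(?:\.\d+)?)S)?",
        duration,
    )
    hours, minutes, seconds = (float(x) if x else 0.0 for x in match.groups())
    return hours * 3600 + minutes * 60 + seconds

print(pt_to_sec("PT1M30S"))  # 90.0
```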

    def _show(self, title: str) -> list[Episode]:
        params = {
            "include": "default",
            "decorators": "viewingHistory,badges,isFavorite,contentAction",
        }
        data = self._request("GET", "/cms/routes/show/{}".format(title), params=params)

        info = next(x for x in data["included"] if x.get("attributes", {}).get("alternateId", "") == title)
        content = next((x for x in data["included"] if "show-page-rail-episodes-tabbed-content" in x["attributes"].get("alias", "")), None)
        if not content:
            raise ValueError("Show not found")

        content_id = content.get("id")
        show_id = content["attributes"]["component"].get("mandatoryParams", "")
        season_params = [x.get("parameter") for x in content["attributes"]["component"]["filters"][0]["options"]]
        page = next(x for x in data["included"] if x.get("type", "") == "page")

        seasons = [
            self._request(
                "GET", "/cms/collections/{}?{}&{}".format(content_id, season, show_id),
                params={"include": "default", "decorators": "viewingHistory,badges,isFavorite,contentAction"},
            )
            for season in season_params
        ]

        videos = [[x for x in season["included"] if x["type"] == "video"] for season in seasons]

        return [
            Episode(
                id_=ep["relationships"]["edit"]["data"]["id"],
                service=self.__class__,
                title=page["attributes"].get("title") or info["attributes"].get("originalName"),
                year=int(ep["attributes"]["airDate"][:4]) if ep["attributes"].get("airDate") else None,
                season=ep["attributes"].get("seasonNumber"),
                number=ep["attributes"].get("episodeNumber"),
                name=ep["attributes"]["name"],
                data=ep,
            )
            for episodes in videos
            for ep in episodes
            if ep.get("attributes", {}).get("videoType", "") == "EPISODE"
        ]
    def _episode(self, title: str) -> list[Episode]:
        video_id = title.split("/")[1]

        params = {"decorators": "isFavorite", "include": "show"}
        content = self._request("GET", "/content/videos/{}".format(video_id), params=params)

        episode = content.get("data", {}).get("attributes")
        video_type = episode.get("videoType")
        relationships = content.get("data", {}).get("relationships")
        show = next((x for x in content["included"] if x.get("type", "") == "show"), {})

        show_title = show.get("attributes", {}).get("name") or show.get("attributes", {}).get("originalName")
        episode_name = episode.get("originalName") or episode.get("secondaryTitle")
        if video_type.lower() in ("clip", "standalone_event"):
            show_title = episode.get("originalName")
            episode_name = episode.get("secondaryTitle", "")

        return [
            Episode(
                id_=relationships.get("edit", {}).get("data", {}).get("id"),
                service=self.__class__,
                title=show_title,
                year=int(episode.get("airDate")[:4]) if episode.get("airDate") else None,
                season=episode.get("seasonNumber") or 0,
                number=episode.get("episodeNumber") or 0,
                name=episode_name,
                data=episode,
            )
        ]
    def _sport(self, title: str) -> list[Movie]:
        params = {
            "include": "default",
            "decorators": "viewingHistory,badges,isFavorite,contentAction",
        }
        data = self._request("GET", "/cms/routes/sport/{}".format(title), params=params)

        content = next((x for x in data["included"] if x.get("attributes", {}).get("alternateId", "") == title), None)
        if not content:
            raise ValueError(f"Content not found for title: {title}")

        movie = content.get("attributes")
        relationships = content.get("relationships")

        name = movie.get("name") or movie.get("originalName")
        year = int(movie.get("firstAvailableDate")[:4]) if movie.get("firstAvailableDate") else None

        return [
            Movie(
                id_=relationships.get("edit", {}).get("data", {}).get("id"),
                service=self.__class__,
                name=name + " - " + movie.get("secondaryTitle", ""),
                year=year,
                data=movie,
            )
        ]
    def _movie(self, title: str, entity: str) -> list[Movie]:
        params = {
            "include": "default",
            "decorators": "isFavorite,playbackAllowed,contentAction,badges",
        }
        data = self._request("GET", "/cms/routes/movie/{}".format(title), params=params)

        movie = next(
            (x for x in data["included"] if x.get("attributes", {}).get("videoType", "").lower() == entity), None
        )
        if not movie:
            raise ValueError("Movie not found")

        return [
            Movie(
                id_=movie["relationships"]["edit"]["data"]["id"],
                service=self.__class__,
                name=movie["attributes"].get("name") or movie["attributes"].get("originalName"),
                year=int(movie["attributes"]["airDate"][:4]) if movie["attributes"].get("airDate") else None,
                data=movie,
            )
        ]
    def _request(self, method: str, endpoint: str, **kwargs: Any) -> dict:
        url = urljoin(self.base_url, endpoint)

        response = self.session.request(method, url, **kwargs)

        try:
            data = json.loads(response.content)

            if errors := data.get("errors", []):
                code = next((x.get("code", "") for x in errors), "")
                if "missingpackage" in code.lower():
                    self.log.error("\nError: Subscription is required for this title.")
                    sys.exit(1)

            return data

        except Exception as e:
            raise ConnectionError(f"Request failed for {url}: {e}")
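`_request` builds its URLs with `urllib.parse.urljoin`, which is worth noting: an absolute-path endpoint like `/token` replaces any existing path on the base URL rather than appending to it, which is exactly why the endpoints in this service are written with leading slashes.

```python
from urllib.parse import urljoin

base = "https://default.any-any.prd.api.hbomax.com"
print(urljoin(base, "/token"))
# https://default.any-any.prd.api.hbomax.com/token

# An absolute-path endpoint replaces any existing path on the base:
print(urljoin("https://example.com/api/v1/", "/token"))
# https://example.com/token
```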
5
HMAX/config.yaml
Normal file
@@ -0,0 +1,5 @@
endpoints:
  template: "https://default.{}-{}.prd.api.hbomax.com"
  default_url: "https://default.any-any.prd.api.hbomax.com"

client_id: "b6746ddc-7bc7-471f-a16c-f6aaf0c34d26"  # androidtv
10
NF/MSL/MSLKeys.py
Normal file
@@ -0,0 +1,10 @@
from .MSLObject import MSLObject


class MSLKeys(MSLObject):
    def __init__(self, encryption=None, sign=None, rsa=None, mastertoken=None, cdm_session=None):
        self.encryption = encryption
        self.sign = sign
        self.rsa = rsa
        self.mastertoken = mastertoken
        self.cdm_session = cdm_session
6
NF/MSL/MSLObject.py
Normal file
@@ -0,0 +1,6 @@
import jsonpickle


class MSLObject:
    def __repr__(self):
        return "<{} {}>".format(self.__class__.__name__, jsonpickle.encode(self, unpicklable=False))
408
NF/MSL/__init__.py
Normal file
@@ -0,0 +1,408 @@
import base64
import gzip
import json
import logging
import os
import random
import re
import sys
import time
import zlib
from datetime import datetime
from io import BytesIO

import jsonpickle
import requests
from Cryptodome.Cipher import AES, PKCS1_OAEP
from Cryptodome.Hash import HMAC, SHA256
from Cryptodome.PublicKey import RSA
from Cryptodome.Random import get_random_bytes
from Cryptodome.Util import Padding

from unshackle.core.cacher import Cacher

from .MSLKeys import MSLKeys
from .schemes import EntityAuthenticationSchemes  # noqa: F401
from .schemes import KeyExchangeSchemes
from .schemes.EntityAuthentication import EntityAuthentication
from .schemes.KeyExchangeRequest import KeyExchangeRequest

# from vinetrimmer.utils.widevine.device import RemoteDevice


class MSL:
    log = logging.getLogger("MSL")

    def __init__(self, session, endpoint, sender, keys, message_id, user_auth=None):
        self.session = session
        self.endpoint = endpoint
        self.sender = sender
        self.keys = keys
        self.user_auth = user_auth
        self.message_id = message_id
    @classmethod
    def handshake(cls, scheme: KeyExchangeSchemes, session: requests.Session, endpoint: str, sender: str, cache: Cacher):
        cache = cache.get(sender)
        message_id = random.randint(0, pow(2, 52))
        msl_keys = MSL.load_cache_data(cache)

        if msl_keys is not None:
            cls.log.info("Using cached MSL data")
        else:
            msl_keys = MSLKeys()
            if scheme != KeyExchangeSchemes.Widevine:
                msl_keys.rsa = RSA.generate(2048)

            # if not cdm:
            #     raise cls.log.error("- No cached data and no CDM specified")

            # if not msl_keys_path:
            #     raise cls.log.error("- No cached data and no MSL key path specified")

            # Key Exchange Scheme Widevine currently not implemented
            # if scheme == KeyExchangeSchemes.Widevine:
            #     msl_keys.cdm_session = cdm.open(
            #         pssh=b"\x0A\x7A\x00\x6C\x38\x2B",
            #         raw=True,
            #         offline=True
            #     )
            #     keyrequestdata = KeyExchangeRequest.Widevine(
            #         keyrequest=cdm.get_license_challenge(msl_keys.cdm_session)
            #     )
            # else:
            keyrequestdata = KeyExchangeRequest.AsymmetricWrapped(
                keypairid="superKeyPair",
                mechanism="JWK_RSA",
                publickey=msl_keys.rsa.publickey().exportKey(format="DER")
            )

            data = jsonpickle.encode({
                "entityauthdata": EntityAuthentication.Unauthenticated(sender),
                "headerdata": base64.b64encode(MSL.generate_msg_header(
                    message_id=message_id,
                    sender=sender,
                    is_handshake=True,
                    keyrequestdata=keyrequestdata
                ).encode("utf-8")).decode("utf-8"),
                "signature": ""
            }, unpicklable=False)
            data += json.dumps({
                "payload": base64.b64encode(json.dumps({
                    "messageid": message_id,
                    "data": "",
                    "sequencenumber": 1,
                    "endofmsg": True
                }).encode("utf-8")).decode("utf-8"),
                "signature": ""
            })

            try:
                r = session.post(
                    url=endpoint,
                    data=data
                )
            except requests.HTTPError as e:
                raise ValueError(f"- Key exchange failed, response data is unexpected: {e.response.text}")

            key_exchange = r.json()  # expecting no payloads, so this is fine
            if "errordata" in key_exchange:
                raise ValueError("- Key exchange failed: " + json.loads(base64.b64decode(
                    key_exchange["errordata"]
                ).decode())["errormsg"])

            # parse the crypto keys
            key_response_data = json.JSONDecoder().decode(base64.b64decode(
                key_exchange["headerdata"]
            ).decode("utf-8"))["keyresponsedata"]

            if key_response_data["scheme"] != str(scheme):
                raise ValueError("- Key exchange scheme mismatch occurred")

            key_data = key_response_data["keydata"]
            # if scheme == KeyExchangeSchemes.Widevine:
            #     if isinstance(cdm.device, RemoteDevice):
            #         msl_keys.encryption, msl_keys.sign = cdm.device.exchange(
            #             cdm.sessions[msl_keys.cdm_session],
            #             license_res=key_data["cdmkeyresponse"],
            #             enc_key_id=base64.b64decode(key_data["encryptionkeyid"]),
            #             hmac_key_id=base64.b64decode(key_data["hmackeyid"])
            #         )
            #         cdm.parse_license(msl_keys.cdm_session, key_data["cdmkeyresponse"])
            #     else:
            #         cdm.parse_license(msl_keys.cdm_session, key_data["cdmkeyresponse"])
            #         keys = cdm.get_keys(msl_keys.cdm_session)
            #         msl_keys.encryption = MSL.get_widevine_key(
            #             kid=base64.b64decode(key_data["encryptionkeyid"]),
            #             keys=keys,
            #             permissions=["AllowEncrypt", "AllowDecrypt"]
            #         )
            #         msl_keys.sign = MSL.get_widevine_key(
            #             kid=base64.b64decode(key_data["hmackeyid"]),
            #             keys=keys,
            #             permissions=["AllowSign", "AllowSignatureVerify"]
            #         )
            # else:
            cipher_rsa = PKCS1_OAEP.new(msl_keys.rsa)
            msl_keys.encryption = MSL.base64key_decode(
                json.JSONDecoder().decode(cipher_rsa.decrypt(
                    base64.b64decode(key_data["encryptionkey"])
                ).decode("utf-8"))["k"]
            )
            msl_keys.sign = MSL.base64key_decode(
                json.JSONDecoder().decode(cipher_rsa.decrypt(
                    base64.b64decode(key_data["hmackey"])
                ).decode("utf-8"))["k"]
            )
            msl_keys.mastertoken = key_response_data["mastertoken"]

            MSL.cache_keys(msl_keys, cache)
        cls.log.info("MSL handshake successful")
        return cls(
            session=session,
            endpoint=endpoint,
            sender=sender,
            keys=msl_keys,
            message_id=message_id
        )
    @staticmethod
    def load_cache_data(cacher: Cacher):
        if not cacher or cacher == {}:
            return None
        # with open(msl_keys_path, encoding="utf-8") as fd:
        #     msl_keys = jsonpickle.decode(fd.read())
        msl_keys = jsonpickle.decode(cacher.data)
        if msl_keys.rsa:
            # noinspection PyTypeChecker
            # expects an RsaKey but is a string; jsonpickle can't pickle an RsaKey object,
            # so as a workaround it's exported to PEM on write and imported back on read
            msl_keys.rsa = RSA.importKey(msl_keys.rsa)
        # If the mastertoken is expired or close to it, return None as it's unusable
        if msl_keys.mastertoken and ((datetime.utcfromtimestamp(int(json.JSONDecoder().decode(
            base64.b64decode(msl_keys.mastertoken["tokendata"]).decode("utf-8")
        )["expiration"])) - datetime.utcnow()).total_seconds() / 60 / 60) < 10:
            return None
        return msl_keys
    @staticmethod
    def cache_keys(msl_keys, cache: Cacher):
        # os.makedirs(os.path.dirname(cache), exist_ok=True)
        if msl_keys.rsa:
            # jsonpickle can't pickle RsaKey objects :(
            msl_keys.rsa = msl_keys.rsa.export_key()
        # with open(cache, "w", encoding="utf-8") as fd:
        #     fd.write()
        cache.set(jsonpickle.encode(msl_keys))
        if msl_keys.rsa:
            # re-import now
            msl_keys.rsa = RSA.importKey(msl_keys.rsa)
    @staticmethod
    def generate_msg_header(message_id, sender, is_handshake, userauthdata=None, keyrequestdata=None,
                            compression="GZIP"):
        """
        The MSL header carries all MSL data used for entity and user authentication, message encryption
        and verification, and service tokens. Portions of the MSL header are encrypted.
        https://github.com/Netflix/msl/wiki/Messages#header-data

        :param message_id: number against which payload chunks are bound to protect against replay.
        :param sender: ESN
        :param is_handshake: This flag is set true if the message is a handshake message and will not include any
            payload chunks. It will include keyrequestdata.
        :param userauthdata: UserAuthData
        :param keyrequestdata: KeyRequestData
        :param compression: Supported compression algorithms.

        :return: The JSON string of the header (base64-encoding is done by the caller)
        """
        header_data = {
            "messageid": message_id,
            "renewable": True,  # MUST be True if is_handshake
            "handshake": is_handshake,
            "capabilities": {
                "compressionalgos": [compression] if compression else [],
                "languages": ["en-US"],  # bcp-47
                "encoderformats": ["JSON"]
            },
            "timestamp": int(time.time()),
            # undocumented or unused:
            "sender": sender,
            "nonreplayable": False,
            "recipient": "Netflix",
        }
        if userauthdata:
            header_data["userauthdata"] = userauthdata
        if keyrequestdata:
            header_data["keyrequestdata"] = [keyrequestdata]
        return jsonpickle.encode(header_data, unpicklable=False)
    @classmethod
    def get_widevine_key(cls, kid, keys, permissions):
        for key in keys:
            if key.kid != kid:
                continue
            if key.type != "OPERATOR_SESSION":
                cls.log.warning(f"Widevine Key Exchange: Wrong key type (not operator session) key {key}")
                continue
            if not set(permissions) <= set(key.permissions):
                cls.log.warning(f"Widevine Key Exchange: Incorrect permissions, key {key}, needed perms {permissions}")
                continue
            return key.key
        return None
    def send_message(self, endpoint, params, application_data, userauthdata=None):
        message = self.create_message(application_data, userauthdata)
        res = self.session.post(url=endpoint, data=message, params=params)
        header, payload_data = self.parse_message(res.text)
        if "errordata" in header:
            raise ValueError(
                "- MSL response message contains an error: {}".format(
                    json.loads(base64.b64decode(header["errordata"].encode("utf-8")).decode("utf-8"))
                )
            )
        return header, payload_data
    def create_message(self, application_data, userauthdata=None):
        self.message_id += 1  # a new message must use a new message id
        headerdata = self.encrypt(self.generate_msg_header(
            message_id=self.message_id,
            sender=self.sender,
            is_handshake=False,
            userauthdata=userauthdata
        ))

        header = json.dumps({
            "headerdata": base64.b64encode(headerdata.encode("utf-8")).decode("utf-8"),
            "signature": self.sign(headerdata).decode("utf-8"),
            "mastertoken": self.keys.mastertoken
        })

        payload_chunks = [self.encrypt(json.dumps({
            "messageid": self.message_id,
            "data": self.gzip_compress(json.dumps(application_data).encode("utf-8")).decode("utf-8"),
            "compressionalgo": "GZIP",
            "sequencenumber": 1,  # todo: use sequence_number from master token instead?
            "endofmsg": True
        }))]

        message = header
        for payload_chunk in payload_chunks:
            message += json.dumps({
                "payload": base64.b64encode(payload_chunk.encode("utf-8")).decode("utf-8"),
                "signature": self.sign(payload_chunk).decode("utf-8")
            })

        return message
|
||||

    def decrypt_payload_chunks(self, payload_chunks):
        """
        Decrypt and extract data from payload chunks

        :param payload_chunks: list of payload chunks
        :return: JSON object
        """
        raw_data = ""

        for payload_chunk in payload_chunks:
            # todo: verify payload_chunk["signature"] against payload_chunk["payload"]
            # expecting a base64-encoded JSON string
            payload_chunk = json.loads(base64.b64decode(payload_chunk["payload"]).decode("utf-8"))
            # decrypt the payload
            payload_decrypted = AES.new(
                key=self.keys.encryption,
                mode=AES.MODE_CBC,
                iv=base64.b64decode(payload_chunk["iv"])
            ).decrypt(base64.b64decode(payload_chunk["ciphertext"]))
            payload_decrypted = Padding.unpad(payload_decrypted, 16)
            payload_decrypted = json.loads(payload_decrypted.decode("utf-8"))
            # decode and decompress the data if compressed
            payload_data = base64.b64decode(payload_decrypted["data"])
            if payload_decrypted.get("compressionalgo") == "GZIP":
                payload_data = zlib.decompress(payload_data, 16 + zlib.MAX_WBITS)
            raw_data += payload_data.decode("utf-8")

        data = json.loads(raw_data)
        if "error" in data:
            error = data["error"]
            error_display = error.get("display")
            error_detail = re.sub(r" \(E3-[^)]+\)", "", error.get("detail", ""))

            if error_display:
                self.log.critical(f"- {error_display}")
            if error_detail:
                self.log.critical(f"- {error_detail}")

            if not (error_display or error_detail):
                self.log.critical(f"- {error}")

            # sys.exit(1)

        return data["result"]
|
||||

    def parse_message(self, message):
        """
        Parse an MSL message into a header and a list of payload chunks.

        :param message: MSL message
        :returns: 2-item tuple of the header and the list of payload chunks, if available
        """
        parsed_message = json.loads("[{}]".format(message.replace("}{", "},{")))

        header = parsed_message[0]
        encrypted_payload_chunks = parsed_message[1:] if len(parsed_message) > 1 else []
        if encrypted_payload_chunks:
            payload_chunks = self.decrypt_payload_chunks(encrypted_payload_chunks)
        else:
            payload_chunks = {}

        return header, payload_chunks
|
||||

    @staticmethod
    def gzip_compress(data):
        out = BytesIO()
        with gzip.GzipFile(fileobj=out, mode="w") as fd:
            fd.write(data)
        return base64.b64encode(out.getvalue())
|
||||

    @staticmethod
    def base64key_decode(payload):
        # re-add stripped base64 padding before decoding
        length = len(payload) % 4
        if length == 2:
            payload += "=="
        elif length == 3:
            payload += "="
        elif length != 0:
            raise ValueError("Invalid base64 string")
        return base64.urlsafe_b64decode(payload.encode("utf-8"))
|
||||

    def encrypt(self, plaintext):
        """
        Encrypt the given plaintext with the encryption key.

        :param plaintext: plaintext to encrypt
        :return: serialized JSON string of the encryption envelope
        """
        iv = get_random_bytes(16)
        return json.dumps({
            "ciphertext": base64.b64encode(
                AES.new(
                    self.keys.encryption,
                    AES.MODE_CBC,
                    iv
                ).encrypt(
                    Padding.pad(plaintext.encode("utf-8"), 16)
                )
            ).decode("utf-8"),
            "keyid": "{}_{}".format(self.sender, json.loads(
                base64.b64decode(self.keys.mastertoken["tokendata"]).decode("utf-8")
            )["sequencenumber"]),
            "sha256": "AA==",
            "iv": base64.b64encode(iv).decode("utf-8")
        })
|
||||

    def sign(self, text):
        """
        Calculate the HMAC-SHA256 signature for the given text with the current sign key.

        :param text: text to sign
        :return: Base64-encoded signature
        """
        return base64.b64encode(HMAC.new(self.keys.sign, text.encode("utf-8"), SHA256).digest())
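The `parse_message` method above splits an MSL response, which is a stream of back-to-back JSON objects (header followed by payload chunks), using a simple `"}{"` to `"},{"` rewrite. A minimal standalone sketch of that framing trick (the sample message values are made up for illustration):

```python
import json

def split_concatenated_json(stream: str) -> list:
    """Split a string of back-to-back JSON objects ("{...}{...}") into a list,
    using the same "}{" -> "},{" rewrite as parse_message. This simple rewrite
    assumes the literal sequence "}{" never occurs inside a string value, which
    holds for MSL envelopes since base64 payloads contain no braces."""
    return json.loads("[{}]".format(stream.replace("}{", "},{")))

# hypothetical message: one header object followed by two payload chunks
msg = '{"headerdata": "aGk=", "signature": "c2ln"}{"payload": "cDE="}{"payload": "cDI="}'
parts = split_concatenated_json(msg)
header, chunks = parts[0], parts[1:]
```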
59
NF/MSL/schemes/EntityAuthentication.py
Normal file
@@ -0,0 +1,59 @@
from .. import EntityAuthenticationSchemes
from ..MSLObject import MSLObject


# noinspection PyPep8Naming
class EntityAuthentication(MSLObject):
    def __init__(self, scheme, authdata):
        """
        Data used to identify and authenticate the entity associated with a message.
        https://github.com/Netflix/msl/wiki/Entity-Authentication-%28Configuration%29

        :param scheme: Entity Authentication Scheme identifier
        :param authdata: Entity Authentication data
        """
        self.scheme = str(scheme)
        self.authdata = authdata

    @classmethod
    def Unauthenticated(cls, identity):
        """
        The unauthenticated entity authentication scheme does not provide encryption or authentication and only
        identifies the entity. Therefore entity identities can be harvested and spoofed. The benefit of this
        authentication scheme is that the entity has control over its identity. This may be useful if the identity is
        derived from or related to other data, or if retaining the identity is desired across state resets or in the
        event of MSL errors requiring entity re-authentication.
        """
        return cls(
            scheme=EntityAuthenticationSchemes.Unauthenticated,
            authdata={"identity": identity}
        )

    @classmethod
    def Widevine(cls, devtype, keyrequest):
        """
        The Widevine entity authentication scheme is used by devices with the Widevine CDM. It does not provide
        encryption or authentication and only identifies the entity. Therefore entity identities can be harvested
        and spoofed. The entity identity is composed from the provided device type and Widevine key request data. The
        Widevine CDM properties can be extracted from the key request data.

        When coupled with the Widevine key exchange scheme, the entity identity can be cryptographically validated by
        comparing the entity authentication key request data against the key exchange key request data.

        Note that the local entity will not know its entity identity when using this scheme.

        > Devtype

        An arbitrary value identifying the device type the local entity wishes to assume. The data inside the Widevine
        key request may optionally be used to validate the claimed device type.

        :param devtype: Local entity device type
        :param keyrequest: Widevine key request
        """
        return cls(
            scheme=EntityAuthenticationSchemes.Widevine,
            authdata={
                "devtype": devtype,
                "keyrequest": keyrequest
            }
        )
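`EntityAuthentication` instances are ultimately serialized to plain attribute dicts (the MSL client encodes `MSLObject`s via `jsonpickle` with `unpicklable=False`, which yields the same shape as dumping `__dict__`). A minimal stand-in showing what `Unauthenticated("device-id")` serializes to; the `EntityAuth` class and identity value here are illustrative, not the real `MSLObject`:

```python
import json

class EntityAuth:
    """Stand-in mirroring EntityAuthentication's two attributes."""
    def __init__(self, scheme, authdata):
        self.scheme = scheme
        self.authdata = authdata

ea = EntityAuth("NONE", {"identity": "device-id"})
# plain json on __dict__ matches jsonpickle.encode(..., unpicklable=False) here
encoded = json.dumps(ea.__dict__, sort_keys=True)
# → {"authdata": {"identity": "device-id"}, "scheme": "NONE"}
```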
80
NF/MSL/schemes/KeyExchangeRequest.py
Normal file
@@ -0,0 +1,80 @@
import base64

from .. import KeyExchangeSchemes
from ..MSLObject import MSLObject


# noinspection PyPep8Naming
class KeyExchangeRequest(MSLObject):
    def __init__(self, scheme, keydata):
        """
        Session key exchange data from a requesting entity.
        https://github.com/Netflix/msl/wiki/Key-Exchange-%28Configuration%29

        :param scheme: Key Exchange Scheme identifier
        :param keydata: Key Request data
        """
        self.scheme = str(scheme)
        self.keydata = keydata

    @classmethod
    def AsymmetricWrapped(cls, keypairid, mechanism, publickey):
        """
        Asymmetric wrapped key exchange uses a generated ephemeral asymmetric key pair for key exchange. It will
        typically be used when there is no other data or keys from which to base secure key exchange.

        This mechanism provides perfect forward secrecy, but does not guarantee that session keys will only be
        available to the requesting entity if the requesting MSL stack has been modified to perform the operation
        on behalf of a third party.

        > Key Pair ID

        The key pair ID is included as a sanity check.

        > Mechanism & Public Key

        The following mechanisms and associated public key formats are currently supported:

        Field      Public Key Format  Description
        RSA        SPKI               RSA-OAEP encrypt/decrypt
        ECC        SPKI               ECIES encrypt/decrypt
        JWEJS_RSA  SPKI               RSA-OAEP JSON Web Encryption JSON Serialization
        JWE_RSA    SPKI               RSA-OAEP JSON Web Encryption Compact Serialization
        JWK_RSA    SPKI               RSA-OAEP JSON Web Key
        JWK_RSAES  SPKI               RSA PKCS#1 JSON Web Key

        :param keypairid: key pair ID
        :param mechanism: asymmetric key type
        :param publickey: public key
        """
        return cls(
            scheme=KeyExchangeSchemes.AsymmetricWrapped,
            keydata={
                "keypairid": keypairid,
                "mechanism": mechanism,
                "publickey": base64.b64encode(publickey).decode("utf-8")
            }
        )

    @classmethod
    def Widevine(cls, keyrequest):
        """
        Google Widevine provides a secure key exchange mechanism. When requested, the Widevine component will issue a
        one-time-use key request. The Widevine server library can be used to authenticate the request and return
        randomly generated symmetric keys in a protected key response bound to the request and Widevine client library.
        The key response also specifies the key identities, types and their permitted usage.

        The Widevine key request also contains a model identifier and a unique device identifier with an expectation of
        long-term persistence. These values are available from the Widevine client library and can be retrieved from
        the key request by the Widevine server library.

        The Widevine client library will protect the returned keys from inspection or misuse.

        :param keyrequest: Base64-encoded Widevine CDM license challenge (PSSH: b'\x0A\x7A\x00\x6C\x38\x2B')
        """
        if not isinstance(keyrequest, str):
            keyrequest = base64.b64encode(keyrequest).decode()
        return cls(
            scheme=KeyExchangeSchemes.Widevine,
            keydata={"keyrequest": keyrequest}
        )
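`KeyExchangeRequest.Widevine` above accepts either a raw challenge (`bytes`) or an already base64-encoded string and normalizes to the latter. The input handling can be sketched standalone (the two-byte challenge here is just a sample value):

```python
import base64

def normalize_keyrequest(keyrequest) -> str:
    """Mirror of KeyExchangeRequest.Widevine's input handling: accept either
    a raw challenge (bytes) or an already base64-encoded string, and always
    return the base64 string form used in the keydata."""
    if not isinstance(keyrequest, str):
        keyrequest = base64.b64encode(keyrequest).decode()
    return keyrequest

# bytes are encoded; strings pass through unchanged (idempotent)
assert normalize_keyrequest(b"\x0a\x7a") == "Cno="
assert normalize_keyrequest("Cno=") == "Cno="
```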
59
NF/MSL/schemes/UserAuthentication.py
Normal file
@@ -0,0 +1,59 @@
from ..MSLObject import MSLObject
from . import UserAuthenticationSchemes


# noinspection PyPep8Naming
class UserAuthentication(MSLObject):
    def __init__(self, scheme, authdata):
        """
        Data used to identify and authenticate the user associated with a message.
        https://github.com/Netflix/msl/wiki/User-Authentication-%28Configuration%29

        :param scheme: User Authentication Scheme identifier
        :param authdata: User Authentication data
        """
        self.scheme = str(scheme)
        self.authdata = authdata

    @classmethod
    def EmailPassword(cls, email, password):
        """
        Email and password is a standard user authentication scheme in wide use.

        :param email: user email address
        :param password: user password
        """
        return cls(
            scheme=UserAuthenticationSchemes.EmailPassword,
            authdata={
                "email": email,
                "password": password
            }
        )

    @classmethod
    def NetflixIDCookies(cls, netflixid, securenetflixid):
        """
        Netflix ID HTTP cookies are used when the user has previously logged in to a web site. Possession of the
        cookies serves as proof of user identity, in the same manner as they do when communicating with the web site.

        The Netflix ID cookie and Secure Netflix ID cookie are HTTP cookies issued by the Netflix web site after
        subscriber login. The Netflix ID cookie is encrypted and identifies the subscriber, analogous to a
        subscriber's username. The Secure Netflix ID cookie is tied to a Netflix ID cookie, is only sent over HTTPS,
        and is analogous to a subscriber's password.

        In some cases the Netflix ID and Secure Netflix ID cookies will be unavailable to the MSL stack or application.
        If either or both of the Netflix ID or Secure Netflix ID cookies are absent from the above data structure, the
        HTTP cookie headers will be queried for them; this is only acceptable when HTTPS is used as the underlying
        transport protocol.

        :param netflixid: Netflix ID cookie
        :param securenetflixid: Secure Netflix ID cookie
        """
        return cls(
            scheme=UserAuthenticationSchemes.NetflixIDCookies,
            authdata={
                "netflixid": netflixid,
                "securenetflixid": securenetflixid
            }
        )
24
NF/MSL/schemes/__init__.py
Normal file
@@ -0,0 +1,24 @@
from enum import Enum


class Scheme(Enum):
    def __str__(self):
        return str(self.value)


class EntityAuthenticationSchemes(Scheme):
    """https://github.com/Netflix/msl/wiki/Entity-Authentication-%28Configuration%29"""
    Unauthenticated = "NONE"
    Widevine = "WIDEVINE"


class UserAuthenticationSchemes(Scheme):
    """https://github.com/Netflix/msl/wiki/User-Authentication-%28Configuration%29"""
    EmailPassword = "EMAIL_PASSWORD"
    NetflixIDCookies = "NETFLIXID"


class KeyExchangeSchemes(Scheme):
    """https://github.com/Netflix/msl/wiki/Key-Exchange-%28Configuration%29"""
    AsymmetricWrapped = "ASYMMETRIC_WRAPPED"
    Widevine = "WIDEVINE"
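The `Scheme` base class above exists so that enum members stringify to their wire identifier, letting the scheme classes pass `str(scheme)` straight into MSL JSON. A quick standalone check mirroring the classes above:

```python
from enum import Enum

class Scheme(Enum):
    def __str__(self):
        return str(self.value)

class KeyExchangeSchemes(Scheme):
    AsymmetricWrapped = "ASYMMETRIC_WRAPPED"
    Widevine = "WIDEVINE"

# __str__ yields the protocol identifier, so str(scheme) can be embedded
# directly in serialized authdata/keydata (as the __init__ methods above do)
assert str(KeyExchangeSchemes.Widevine) == "WIDEVINE"
assert str(KeyExchangeSchemes.AsymmetricWrapped) == "ASYMMETRIC_WRAPPED"
```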
770
NF/__init__.py
Normal file
@@ -0,0 +1,770 @@
import base64
from datetime import datetime
import json
from math import e

import random
import sys
import time
import typing
from uuid import UUID
import click
import re
from typing import List, Literal, Optional, Set, Union, Tuple
from http.cookiejar import CookieJar
from itertools import zip_longest
from Crypto.Random import get_random_bytes

import jsonpickle
from pymp4.parser import Box
from pywidevine import PSSH, Cdm
import requests
from langcodes import Language

from unshackle.core.constants import AnyTrack
from unshackle.core.credential import Credential
from unshackle.core.drm.widevine import Widevine
from unshackle.core.service import Service
from unshackle.core.titles import Titles_T, Title_T
from unshackle.core.titles.episode import Episode, Series
from unshackle.core.titles.movie import Movie, Movies
from unshackle.core.titles.title import Title
from unshackle.core.tracks import Tracks, Chapters
from unshackle.core.tracks.audio import Audio
from unshackle.core.tracks.chapter import Chapter
from unshackle.core.tracks.subtitle import Subtitle
from unshackle.core.tracks.track import Track
from unshackle.core.tracks.video import Video
from unshackle.core.utils.collections import flatten, as_list

from unshackle.core.tracks.attachment import Attachment
from unshackle.core.drm.playready import PlayReady
from unshackle.core.titles.song import Song
from unshackle.utils.base62 import decode
from .MSL import MSL, KeyExchangeSchemes
from .MSL.schemes.UserAuthentication import UserAuthentication

class NF(Service):
    """
    Service for https://netflix.com
    Version: 1.0.0

    Authorization: Cookies
    Security: UHD@SL3000/L1 FHD@SL3000/L1
    """
    TITLE_RE = [
        r"^(?:https?://(?:www\.)?netflix\.com(?:/[a-z0-9]{2})?/(?:title/|watch/|.+jbv=))?(?P<id>\d+)",
        r"^https?://(?:www\.)?unogs\.com/title/(?P<id>\d+)",
    ]
    ALIASES = ("NF", "Netflix")
    NF_LANG_MAP = {
        "es": "es-419",
        "pt": "pt-PT",
    }

    @staticmethod
    @click.command(name="NF", short_help="https://netflix.com")
    @click.argument("title", type=str)
    @click.option("-drm", "--drm-system", type=click.Choice(["widevine", "playready"], case_sensitive=False),
                  default="widevine",
                  help="Which DRM system to use.")
    @click.option("-p", "--profile", type=click.Choice(["MPL", "HPL", "QC", "MPL+HPL", "MPL+HPL+QC", "MPL+QC"], case_sensitive=False),
                  default=None,
                  help="H.264 profile to use. Default is best available.")
    @click.option("--meta-lang", type=str, help="Language to use for metadata.")
    @click.option("-ht", "--hydrate-track", is_flag=True, default=False, help="Hydrate missing audio and subtitle tracks.")
    @click.option("-hb", "--high-bitrate", is_flag=True, default=False, help="Request higher video bitrates.")
    @click.pass_context
    def cli(ctx, **kwargs):
        return NF(ctx, **kwargs)

    def __init__(self, ctx: click.Context, title: str, drm_system: Literal["widevine", "playready"], profile: str, meta_lang: str, hydrate_track: bool, high_bitrate: bool):
        super().__init__(ctx)
        # General
        self.title = title
        self.profile = profile
        self.meta_lang = meta_lang
        self.hydrate_track = hydrate_track
        self.drm_system = drm_system
        self.profiles: List[str] = []
        self.requested_profiles: List[str] = []
        self.high_bitrate = high_bitrate

        # MSL
        self.esn = self.cache.get("ESN")
        self.msl: Optional[MSL] = None
        self.userauthdata = None

        # Download options
        self.range = ctx.parent.params.get("range_") or [Video.Range.SDR]
        self.vcodec = ctx.parent.params.get("vcodec") or Video.Codec.AVC  # defaults to H.264
        self.acodec: Audio.Codec = ctx.parent.params.get("acodec") or Audio.Codec.EC3
        self.quality: List[int] = ctx.parent.params.get("quality")
        self.audio_only = ctx.parent.params.get("audio_only")
        self.subs_only = ctx.parent.params.get("subs_only")
        self.chapters_only = ctx.parent.params.get("chapters_only")

        self.vcodec = self.vcodec[0] if isinstance(self.vcodec, list) else self.vcodec

    def authenticate(self, cookies: Optional[CookieJar] = None, credential: Optional[Credential] = None) -> None:
        # configure the MSL client before any downloads
        self.log.debug("Authenticating Netflix service")
        auth = super().authenticate(cookies, credential)
        if not cookies:
            raise EnvironmentError("Service requires Cookies for Authentication.")
        self.configure()
        return auth

    def get_titles(self) -> Titles_T:
        metadata = self.get_metadata(self.title)
        if "video" not in metadata:
            self.log.error(f"Failed to get metadata: {metadata}")
            sys.exit(1)
        titles: Titles_T | None = None
        if metadata["video"]["type"] == "movie":
            movie = Movie(
                id_=self.title,
                name=metadata["video"]["title"],
                year=metadata["video"]["year"],
                service=self.__class__,
                data=metadata["video"],
                description=metadata["video"]["synopsis"]
            )
            titles = Movies([
                movie
            ])
        else:
            episode_list: List[Episode] = []
            for season in metadata["video"]["seasons"]:
                for episode in season["episodes"]:
                    episode_list.append(
                        Episode(
                            id_=self.title,
                            title=metadata["video"]["title"],
                            year=season["year"],
                            service=self.__class__,
                            season=season["seq"],
                            number=episode["seq"],
                            name=episode["title"],
                            data=episode,
                            description=episode["synopsis"],
                        )
                    )

            titles = Series(episode_list)

        return titles


    def get_tracks(self, title: Title_T) -> Tracks:
        tracks = Tracks()

        # If H.264 is selected but self.profile is None, the QC and MPL profiles have to be requested separately
        if self.vcodec == Video.Codec.AVC:
            try:
                manifest = self.get_manifest(title, self.profiles)
                movie_track = self.manifest_as_tracks(manifest, title, self.hydrate_track)
                tracks.add(movie_track)

                if self.profile is not None:
                    self.log.info(f"Requested profiles: {self.profile}")
                else:
                    qc_720_profile = [x for x in self.config["profiles"]["video"][self.vcodec.extension.upper()]["QC"] if "l40" not in x and 720 in self.quality]
                    qc_manifest = self.get_manifest(title, qc_720_profile if 720 in self.quality else self.config["profiles"]["video"][self.vcodec.extension.upper()]["QC"])
                    qc_tracks = self.manifest_as_tracks(qc_manifest, title, False)
                    tracks.add(qc_tracks.videos)

                    mpl_manifest = self.get_manifest(title, [x for x in self.config["profiles"]["video"][self.vcodec.extension.upper()]["MPL"] if "l40" not in x])
                    mpl_tracks = self.manifest_as_tracks(mpl_manifest, title, False)
                    tracks.add(mpl_tracks.videos)
            except Exception as e:
                self.log.error(e)
        else:
            if self.high_bitrate:
                split_profile_groups = self.split_profiles(self.profiles)
                for index, profile_list in enumerate(split_profile_groups):
                    try:
                        self.log.debug(f"Index: {index}. Getting profiles: {profile_list}")
                        manifest = self.get_manifest(title, profile_list)
                        manifest_tracks = self.manifest_as_tracks(manifest, title, self.hydrate_track if index == 0 else False)
                        tracks.add(manifest_tracks if index == 0 else manifest_tracks.videos)
                    except Exception:
                        self.log.error(f"Error getting profile: {profile_list}. Skipping")
                        continue
            else:
                try:
                    manifest = self.get_manifest(title, self.profiles)
                    manifest_tracks = self.manifest_as_tracks(manifest, title, self.hydrate_track)
                    tracks.add(manifest_tracks)
                except Exception as e:
                    self.log.error(e)

        # Add an attachment for the title's artwork
        if isinstance(title, Movie):
            tracks.add(
                Attachment.from_url(
                    url=title.data["boxart"][0]["url"]
                )
            )
        else:
            tracks.add(
                Attachment.from_url(title.data["stills"][0]["url"])
            )

        return tracks


    def split_profiles(self, profiles: List[str]) -> List[List[str]]:
        """
        Split profiles into groups by the level pattern in their names, based on video codec.
        For H.264: uses patterns "l30", "l31", "l40" (lowercase).
        For non-H.264: uses patterns "L30", "L31", "L40", "L41", "L50", "L51" (uppercase).
        Returns the profiles grouped by pattern as a List[List[str]].
        """
        if self.vcodec == Video.Codec.AVC:  # H.264
            patterns = ["l30", "l31", "l40"]
        else:
            patterns = ["L30", "L31", "L40", "L41", "L50", "L51"]

        result: List[List[str]] = []
        for pattern in patterns:
            pattern_group = []
            for profile in profiles:
                if pattern in profile:
                    pattern_group.append(profile)
            if pattern_group:
                result.append(pattern_group)

        return result


    def get_chapters(self, title: Title_T) -> Chapters:
        chapters: Chapters = Chapters()
        credits = title.data["skipMarkers"]["credit"]
        if credits["start"] > 0 and credits["end"] > 0:
            chapters.add(Chapter(
                timestamp=credits["start"],  # milliseconds
                name="Intro"
            ))
            chapters.add(
                Chapter(
                    timestamp=credits["end"],  # milliseconds
                    name="Part 01"
                )
            )

        chapters.add(Chapter(
            timestamp=float(title.data["creditsOffset"]),  # seconds; cast to float
            name="Outro"
        ))

        return chapters

    def get_widevine_license(self, *, challenge: bytes, title, track: AnyTrack, session_id=None):
        if not self.msl:
            self.log.error("MSL Client is not initialized!")
            sys.exit(1)
        application_data = {
            "version": 2,
            "url": track.data["license_url"],
            "id": int(time.time() * 10000),
            "esn": self.esn.data,
            "languages": ["en-US"],
            "clientVersion": "6.0026.291.011",
            "params": [{
                "sessionId": base64.b64encode(get_random_bytes(16)).decode("utf-8"),
                "clientTime": int(time.time()),
                "challengeBase64": base64.b64encode(challenge).decode("utf-8"),
                "xid": str(int((int(time.time()) + 0.1612) * 1000)),
            }],
            "echo": "sessionId"
        }
        header, payload_data = self.msl.send_message(
            endpoint=self.config["endpoints"]["license"],
            params={
                "reqAttempt": 1,
                "reqName": "license",
            },
            application_data=application_data,
            userauthdata=self.userauthdata
        )
        if not payload_data:
            self.log.error(f" - Failed to get license: {header['message']} [{header['code']}]")
            sys.exit(1)
        if "error" in payload_data[0]:
            error = payload_data[0]["error"]
            error_display = error.get("display")
            error_detail = re.sub(r" \(E3-[^)]+\)", "", error.get("detail", ""))

            if error_display:
                self.log.critical(f" - {error_display}")
            if error_detail:
                self.log.critical(f" - {error_detail}")

            if not (error_display or error_detail):
                self.log.critical(f" - {error}")

            sys.exit(1)
        return payload_data[0]["licenseResponseBase64"]

    def get_playready_license(self, *, challenge: bytes, title, track: AnyTrack, session_id=None):
        return None  # PlayReady is not implemented

    def configure(self):
        # Make sure the video codec is supported by Netflix before any profile lookups
        if self.vcodec.extension.upper() not in self.config["profiles"]["video"]:
            raise ValueError(f"Video Codec {self.vcodec} is not supported by Netflix")

        if self.profile is not None:
            self.requested_profiles = self.profile.split("+")
            self.log.info(f"Requested profile: {self.requested_profiles}")
        else:
            self.requested_profiles = self.config["profiles"]["video"][self.vcodec.extension.upper()]

        if self.range[0].name not in list(self.config["profiles"]["video"][self.vcodec.extension.upper()].keys()) and self.vcodec != Video.Codec.AVC and self.vcodec != Video.Codec.VP9:
            self.log.error(f"Video range {self.range[0].name} is not supported by Video Codec: {self.vcodec}")
            sys.exit(1)

        if len(self.range) > 1:
            self.log.error("Multiple video ranges are not supported right now.")
            sys.exit(1)

        if self.vcodec == Video.Codec.AVC and self.range[0] != Video.Range.SDR:
            self.log.error("The H.264 Video Codec only supports SDR")
            sys.exit(1)

        self.profiles = self.get_profiles()
        self.log.info("Initializing a MSL client")
        self.get_esn()
        scheme = KeyExchangeSchemes.AsymmetricWrapped
        self.log.info(f"Scheme: {scheme}")

        self.msl = MSL.handshake(
            scheme=scheme,
            session=self.session,
            endpoint=self.config["endpoints"]["manifest"],
            sender=self.esn.data,
            cache=self.cache.get("MSL")
        )
        cookie = self.session.cookies.get_dict()
        self.userauthdata = UserAuthentication.NetflixIDCookies(
            netflixid=cookie["NetflixId"],
            securenetflixid=cookie["SecureNetflixId"]
        )


    def get_profiles(self):
        result_profiles = []

        if self.vcodec == Video.Codec.AVC:
            if self.requested_profiles is not None:
                for requested_profile in self.requested_profiles:
                    result_profiles.extend(flatten(list(self.config["profiles"]["video"][self.vcodec.extension.upper()][requested_profile])))
                return result_profiles

            result_profiles.extend(flatten(list(self.config["profiles"]["video"][self.vcodec.extension.upper()].values())))
            return result_profiles

        # Handle case for codec VP9
        if self.vcodec == Video.Codec.VP9 and self.range[0] != Video.Range.HDR10:
            result_profiles.extend(self.config["profiles"]["video"][self.vcodec.extension.upper()].values())
            return result_profiles

        for profiles in self.config["profiles"]["video"][self.vcodec.extension.upper()]:
            for range_ in self.range:
                if range_ in profiles:
                    result_profiles.extend(self.config["profiles"]["video"][self.vcodec.extension.upper()][range_.name])

        self.log.debug(f"Result profiles: {result_profiles}")
        return result_profiles


    def get_esn(self):
        esn_suffix = "".join(random.choice("0123456789ABCDEF") for _ in range(30))
        esn_value = f"NFCDIE-03-{esn_suffix}"
        if self.esn.data is None or self.esn.data == {} or (hasattr(self.esn, "expired") and self.esn.expired):
            self.esn.set(esn_value, 1 * 60 * 60)  # 1 hour in seconds
            self.log.info("Generated new ESN with 1-hour expiration")
        else:
            self.log.info("Using cached ESN.")
        self.log.info(f"ESN: {self.esn.data}")


    def get_metadata(self, title_id: str):
        """
        Obtain metadata information about a title by its ID.

        :param title_id: Title's ID.
        :returns: Title Metadata.
        """
        try:
            metadata = self.session.get(
                self.config["endpoints"]["metadata"].format(build_id="release"),
                params={
                    "movieid": title_id,
                    "drmSystem": self.config["configuration"]["drm_system"],
                    "isWatchlistEnabled": False,
                    "isShortformEnabled": False,
                    "languages": self.meta_lang
                }
            ).json()
        except requests.HTTPError as e:
            if e.response.status_code == 500:
                self.log.warning(
                    " - Received a HTTP 500 error while getting metadata, deleting cached reactContext data"
                )
            raise Exception(f"Error getting metadata: {e}")
        except json.JSONDecodeError:
            self.log.error(" - Failed to get metadata, title might not be available in your region.")
            sys.exit(1)
        else:
            if "status" in metadata and metadata["status"] == "error":
                self.log.error(
                    f" - Failed to get metadata, cookies might be expired. ({metadata['message']})"
                )
                sys.exit(1)
            return metadata

|
||||
    def get_manifest(self, title: Title_T, video_profiles: List[str], required_text_track_id: Optional[str] = None, required_audio_track_id: Optional[str] = None):
        audio_profiles = self.config["profiles"]["audio"].values()
        video_profiles = sorted(set(flatten(as_list(
            video_profiles,
            audio_profiles,
            self.config["profiles"]["video"]["H264"]["BPL"] if self.vcodec == Video.Codec.AVC else [],
            self.config["profiles"]["subtitles"],
        ))))

        self.log.debug("Profiles:\n\t" + "\n\t".join(video_profiles))

        if not self.msl:
            raise Exception("MSL Client is not initialized.")

        params = {
            "reqAttempt": 1,
            "reqPriority": 10,
            "reqName": "manifest",
        }
        challenge = self.config["payload_challenge_pr"] if self.drm_system == "playready" else self.config["payload_challenge"]
        _, payload_chunks = self.msl.send_message(
            endpoint=self.config["endpoints"]["manifest"],
            params=params,
            application_data={
                "version": 2,
                "url": "manifest",
                "id": int(time.time()),
                "esn": self.esn.data,
                "languages": ["en-US"],
                "clientVersion": "6.0026.291.011",
                "params": {
                    "clientVersion": "6.0051.090.911",
                    "challenge": challenge,
                    "challanges": {  # (sic) key name expected by the endpoint
                        "default": challenge
                    },
                    "contentPlaygraph": ["v2"],
                    "deviceSecurityLevel": "3000",
                    "drmVersion": 25,
                    "desiredVmaf": "plus_lts",
                    "desiredSegmentVmaf": "plus_lts",
                    "flavor": "STANDARD",
                    "drmType": self.drm_system,
                    "imageSubtitleHeight": 1080,
                    "isBranching": False,
                    "isNonMember": False,
                    "isUIAutoPlay": False,
                    "licenseType": "standard",
                    "liveAdsCapability": "remove",
                    "liveMetadataFormat": "INDEXED_SEGMENT_TEMPLATE",
                    "manifestVersion": "v2",
                    "osName": "windows",
                    "osVersion": "10.0",
                    "platform": "138.0.0.0",
                    "profilesGroups": [{
                        "name": "default",
                        "profiles": video_profiles
                    }],
                    "profiles": video_profiles,
                    "preferAssistiveAudio": False,
                    "requestSegmentVmaf": False,
                    "requiredAudioTrackId": required_audio_track_id,
                    "requiredTextTrackId": required_text_track_id,
                    "supportsAdBreakHydration": False,
                    "supportsNetflixMediaEvents": True,
                    "supportsPartialHydration": True,
                    "supportsPreReleasePin": True,
                    "supportsUnequalizedDownloadables": True,
                    "supportsWatermark": True,
                    "titleSpecificData": {
                        title.data.get("episodeId", title.data["id"]): {"unletterboxed": False}
                    },
                    "type": "standard",
                    "uiPlatform": "SHAKTI",
                    "uiVersion": "shakti-v49577320",
                    "useBetterTextUrls": True,
                    "useHttpsStreams": True,
                    "usePsshBox": True,
                    "videoOutputInfo": [{
                        "type": "DigitalVideoOutputDescriptor",
                        "outputType": "unknown",
                        "supportedHdcpVersions": self.config["configuration"]["supported_hdcp_versions"],
                        "isHdcpEngaged": self.config["configuration"]["is_hdcp_engaged"]
                    }],
                    "viewableId": title.data.get("episodeId", title.data["id"]),
                    "xid": str(int((int(time.time()) + 0.1612) * 1000)),
                    "showAllSubDubTracks": True,
                }
            },
            userauthdata=self.userauthdata
        )
        if "errorDetails" in payload_chunks:
            raise Exception(f"Manifest call failed: {payload_chunks['errorDetails']}")
        return payload_chunks

    @staticmethod
    def get_original_language(manifest) -> Language:
        for language in manifest["audio_tracks"]:
            if language["languageDescription"].endswith(" [Original]"):
                return Language.get(language["language"])
        # e.g. get `en` from "A:1:1;2;en;0;|V:2:1;[...]"
        return Language.get(manifest["defaultTrackOrderList"][0]["mediaId"].split(";")[2])

    def get_widevine_service_certificate(self, *, challenge: bytes, title, track: AnyTrack):
        return self.config["certificate"]

    def manifest_as_tracks(self, manifest, title: Title_T, hydrate_tracks: bool = False) -> Tracks:
        tracks = Tracks()
        original_language = self.get_original_language(manifest)
        self.log.debug(f"Original language: {original_language}")
        license_url = manifest["links"]["license"]["href"]

        for video in reversed(manifest["video_tracks"][0]["streams"]):
            tracks.add(
                Video(
                    id_=video["downloadable_id"],
                    url=video["urls"][0]["url"],
                    codec=Video.Codec.from_netflix_profile(video["content_profile"]),
                    bitrate=video["bitrate"] * 1000,
                    width=video["res_w"],
                    height=video["res_h"],
                    fps=(float(video["framerate_value"]) / video["framerate_scale"]) if "framerate_value" in video else None,
                    language=Language.get(original_language),
                    edition=video["content_profile"],
                    range_=self.parse_video_range_from_profile(video["content_profile"]),
                    drm=[Widevine(
                        pssh=PSSH(
                            manifest["video_tracks"][0]["drmHeader"]["bytes"]
                        ),
                        kid=video["drmHeaderId"]
                    )],
                    data={
                        "license_url": license_url
                    }
                )
            )

        # Audio
        # store unavailable tracks for hydrating later
        unavailable_audio_tracks: List[Tuple[str, str]] = []
        for audio in manifest["audio_tracks"]:
            if len(audio["streams"]) < 1:
                unavailable_audio_tracks.append((audio["new_track_id"], audio["id"]))
                continue
            is_original_lang = audio["language"] == original_language.language
            for stream in audio["streams"]:
                tracks.add(
                    Audio(
                        id_=stream["downloadable_id"],
                        url=stream["urls"][0]["url"],
                        codec=Audio.Codec.from_netflix_profile(stream["content_profile"]),
                        language=Language.get(self.NF_LANG_MAP.get(audio["language"]) or audio["language"]),
                        is_original_lang=is_original_lang,
                        bitrate=stream["bitrate"] * 1000,
                        channels=stream["channels"],
                        descriptive=audio.get("rawTrackType", "").lower() == "assistive",
                        name="[Original]" if Language.get(audio["language"]).language == original_language.language else None,
                        joc=6 if "atmos" in stream["content_profile"] else None
                    )
                )

        # Subtitle
        unavailable_subtitle: List[Tuple[str, str]] = []
        for subtitle in manifest["timedtexttracks"]:
            if subtitle.get("isNoneTrack"):
                continue
            if not subtitle["hydrated"]:
                unavailable_subtitle.append((subtitle["new_track_id"], subtitle["id"]))
                continue
            if subtitle["languageDescription"] == "Off":
                continue

            downloadable_ids = list(subtitle["downloadableIds"].values())
            language = Language.get(subtitle["language"])
            profile = next(iter(subtitle["ttDownloadables"].keys()))
            tt_downloadables = next(iter(subtitle["ttDownloadables"].values()))
            is_original_lang = subtitle["language"] == original_language.language
            tracks.add(
                Subtitle(
                    id_=downloadable_ids[0],
                    url=tt_downloadables["urls"][0]["url"],
                    codec=Subtitle.Codec.from_netflix_profile(profile),
                    language=language,
                    forced=subtitle["isForcedNarrative"],
                    cc=subtitle["rawTrackType"] == "closedcaptions",
                    sdh=subtitle.get("trackVariant") == "STRIPPED_SDH",
                    is_original_lang=is_original_lang,
                    name=(
                        "[Original]" if language.language == original_language.language
                        else "[Dubbing]" if subtitle.get("trackVariant") == "DUBTITLE"
                        else None
                    ),
                )
            )

        # Return early if hydration was not requested
        if not hydrate_tracks:
            return tracks

        # Hydrate missing tracks
        self.log.info("Getting all missing audio and subtitle tracks")

        # Netflix API (playapi-459) requires BOTH requiredAudioTrackId AND requiredTextTrackId
        # to be present together when using partial hydration; you cannot send just one.
        # When one list is shorter, we fill the missing side with the new_track_id of any
        # already-hydrated track of that type as a harmless "dummy". The API will simply
        # return that track hydrated again (already in our tracks), which we skip below.
        fallback_audio_id: Optional[str] = next(
            (a["new_track_id"] for a in manifest["audio_tracks"] if a.get("streams")),
            None
        )
        fallback_subtitle_id: Optional[str] = next(
            (s["new_track_id"] for s in manifest["timedtexttracks"]
             if s.get("hydrated") and not s.get("isNoneTrack") and s.get("languageDescription") != "Off"),
            None
        )

        for audio_hydration, subtitle_hydration in zip_longest(unavailable_audio_tracks, unavailable_subtitle, fillvalue=("N/A", "N/A")):
            is_audio_real = audio_hydration[0] != "N/A"
            is_subtitle_real = subtitle_hydration[0] != "N/A"

            # Skip entirely if both are exhausted (safety guard)
            if not is_audio_real and not is_subtitle_real:
                continue

            # Resolve the actual IDs to send; always send both.
            # If one side is N/A, substitute the fallback so the API doesn't reject the request.
            audio_track_id_param = audio_hydration[0] if is_audio_real else fallback_audio_id
            subtitle_track_id_param = subtitle_hydration[0] if is_subtitle_real else fallback_subtitle_id

            # If we have no fallback at all for a side, skip rather than send None
            if audio_track_id_param is None or subtitle_track_id_param is None:
                self.log.warning(
                    f"Cannot hydrate pair (audio={audio_hydration[0]}, sub={subtitle_hydration[0]}): "
                    f"no fallback ID available for the missing side. Skipping."
                )
                continue

            manifest_hydrated = self.get_manifest(
                title,
                self.profiles,
                subtitle_track_id_param,
                audio_track_id_param
            )

            # Only add audio if this iteration was actually fetching a real audio track
            if is_audio_real:
                audios = next(
                    (item for item in manifest_hydrated["audio_tracks"] if "id" in item and item["id"] == audio_hydration[1]),
                    None
                )
                if audios and audios.get("streams"):
                    for stream in audios["streams"]:
                        tracks.add(
                            Audio(
                                id_=stream["downloadable_id"],
                                url=stream["urls"][0]["url"],
                                codec=Audio.Codec.from_netflix_profile(stream["content_profile"]),
                                language=Language.get(self.NF_LANG_MAP.get(audios["language"]) or audios["language"]),
                                is_original_lang=Language.get(audios["language"]).language == original_language.language,
                                bitrate=stream["bitrate"] * 1000,
                                channels=stream["channels"],
                                descriptive=audios.get("rawTrackType", "").lower() == "assistive",
                                name="[Original]" if Language.get(audios["language"]).language == original_language.language else None,
                                joc=6 if "atmos" in stream["content_profile"] else None
                            )
                        )

            # Only add a subtitle if this iteration was actually fetching a real subtitle track
            if not is_subtitle_real:
                continue

            subtitles = next(
                (item for item in manifest_hydrated["timedtexttracks"] if "id" in item and item["id"] == subtitle_hydration[1]),
                None
            )

            if not subtitles:
                self.log.warning(f"Could not find hydrated subtitle track {subtitle_hydration[1]} in manifest")
                continue

            # Make sure ttDownloadables is present and has URLs
            if "ttDownloadables" not in subtitles or not subtitles["ttDownloadables"]:
                self.log.warning(f"Hydrated subtitle track {subtitle_hydration[1]} has no ttDownloadables")
                continue

            sub_tt_downloadables = next(iter(subtitles["ttDownloadables"].values()))
            if "urls" not in sub_tt_downloadables or not sub_tt_downloadables["urls"]:
                self.log.warning(f"Hydrated subtitle track {subtitle_hydration[1]} has no URLs")
                continue

            downloadable_ids = list(subtitles["downloadableIds"].values())
            language = Language.get(subtitles["language"])
            profile = next(iter(subtitles["ttDownloadables"].keys()))
            tracks.add(
                Subtitle(
                    id_=downloadable_ids[0],
                    url=sub_tt_downloadables["urls"][0]["url"],
                    codec=Subtitle.Codec.from_netflix_profile(profile),
                    language=language,
                    forced=subtitles["isForcedNarrative"],
                    cc=subtitles["rawTrackType"] == "closedcaptions",
                    sdh=subtitles.get("trackVariant") == "STRIPPED_SDH",
                    is_original_lang=subtitles["language"] == original_language.language,
                    name=(
                        "[Original]" if language.language == original_language.language
                        else "[Dubbing]" if subtitles.get("trackVariant") == "DUBTITLE"
                        else None
                    ),
                )
            )

        return tracks

    def parse_video_range_from_profile(self, profile: str) -> Video.Range:
        """
        Parse the video range from a Netflix profile string.

        Args:
            profile (str): The Netflix profile string (e.g., "hevc-main10-L30-dash-cenc")

        Returns:
            Video.Range: The corresponding Video.Range enum value

        Examples:
            >>> parse_video_range_from_profile("hevc-main10-L30-dash-cenc")
            <Video.Range.SDR: 'SDR'>
            >>> parse_video_range_from_profile("hevc-dv5-main10-L30-dash-cenc")
            <Video.Range.DV: 'DV'>
        """
        video_profiles = self.config.get("profiles", {}).get("video", {})

        for codec, ranges in video_profiles.items():
            for range_name, profiles in ranges.items():
                if profile in profiles:
                    try:
                        return Video.Range(range_name)
                    except ValueError:
                        self.log.debug(f"Video range '{range_name}' is not valid")
                        return Video.Range.SDR

        return Video.Range.SDR
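The hydration loop in `manifest_as_tracks` pairs the two unavailable-track lists with `itertools.zip_longest` and a `("N/A", "N/A")` sentinel. A minimal, self-contained sketch of that pairing behavior (the track IDs below are made up for illustration):

```python
from itertools import zip_longest

# Hypothetical (new_track_id, track_id) pairs still needing hydration
unavailable_audio = [("A:1", "a1"), ("A:2", "a2"), ("A:3", "a3")]
unavailable_subs = [("T:1", "t1")]

# The shorter list is padded with the sentinel, so each iteration can
# test `x[0] != "N/A"` to know whether that side is a real missing track.
pairs = list(zip_longest(unavailable_audio, unavailable_subs, fillvalue=("N/A", "N/A")))
real_sides = [(a[0] != "N/A", s[0] != "N/A") for a, s in pairs]
```

Each iteration therefore always has both an audio and a subtitle slot, which is what lets the loop send both required track IDs (real or fallback) on every request.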
216	NF/config.yaml	Normal file
File diff suppressed because one or more lines are too long
636	TID/__init__.py	Normal file
@@ -0,0 +1,636 @@
import base64
import json
import re
import time
from http.cookiejar import CookieJar
from typing import Any, Callable, Optional, Union

import click
import requests
from bs4 import BeautifulSoup
from Crypto.Cipher import AES
from Crypto.Hash import MD5

from unshackle.core.constants import AnyTrack
from unshackle.core.credential import Credential
from unshackle.core.manifests import DASH, HLS
from unshackle.core.service import Service
from unshackle.core.titles import Episode, Movie, Movies, Series, Title_T, Titles_T
from unshackle.core.tracks import Audio, Chapter, Subtitle, Tracks, Video


class TID(Service):
    """
    Service code for the TrueID streaming service (https://www.trueid.net/).

    \b
    Original-Author: [SeFree]
    Version: 1.0.0
    Authorization: Cookies
    Security: FHD@L3; can be bypassed by using HLS instead of DASH.
    """
    TITLE_RE = r'https://(?:www|movie)\.trueid\.net/(?:watch/)?(?:[a-z]{2}-[a-z]{2}/)?(?P<type>series|movie)/(?P<id>[a-zA-Z0-9]+)(?:/(?P<season_id>[a-zA-Z0-9]+))?'

    VIDEO_RANGE_MAP = {
        "SDR": "sdr",
        "HDR10": "hdr10",
        "DV": "dolby_vision",
    }
    LANGUAGE_MAP = {
        "en": "English",
        "th": "Thai",
        "jp": "Japanese",
        "ko": "Korean",
        "zh": "Chinese",
    }
    _DRM_MAP = {
        "wv": "WV_FPS",
        "aes": "AES_128",
    }

    @staticmethod
    @click.command(name="TID", short_help="https://www.trueid.net")
    @click.argument("title", type=str)
    @click.option("-SE", "--season", default=None, required=False, type=int,
                  help="TrueID sometimes does not provide the season in its info, so specify it manually.")
    @click.option("-d", "--drm", default="wv", type=click.Choice(["aes", "wv", "None"]), required=False,
                  help="Force a delivery type: 'wv' uses DASH (Widevine), 'aes' uses HLS (AES-128). Default: wv.")
    @click.option("-tl", "--title_lang", default="ja", required=False, type=str,
                  help="If the title's audio is in a foreign language, specify that language.")
    @click.pass_context
    def cli(ctx, **kwargs):
        return TID(ctx, **kwargs)

    def extract_id(self, url: str):
        match = re.compile(self.TITLE_RE).search(url)
        if not match:
            return None
        return match.groupdict()
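The `TITLE_RE` pattern above drives `extract_id`; a standalone sketch showing what it captures (the URLs below are hypothetical examples matching the observed shape):

```python
import re

# Same pattern as TITLE_RE above
TITLE_RE = r'https://(?:www|movie)\.trueid\.net/(?:watch/)?(?:[a-z]{2}-[a-z]{2}/)?(?P<type>series|movie)/(?P<id>[a-zA-Z0-9]+)(?:/(?P<season_id>[a-zA-Z0-9]+))?'

def extract_id(url: str):
    # Mirrors TID.extract_id: groupdict() on a match, None otherwise
    match = re.compile(TITLE_RE).search(url)
    return match.groupdict() if match else None

# A series watch URL with a locale segment and a season ID
series_info = extract_id("https://www.trueid.net/watch/th-th/series/abc123/xyz789")
# A movie URL on the movie subdomain, no locale or season
movie_info = extract_id("https://movie.trueid.net/movie/m0001")
```

The optional `(?:watch/)?` and locale groups mean both URL shapes resolve to the same named groups; `season_id` is simply `None` for movies.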
    def __init__(self, ctx, title, drm, season, title_lang):
        super().__init__(ctx)

        self.data = self.extract_id(title) or {}
        self.title = self.data.get("id")
        # The CLI choice "None" means "let the API decide", so normalize it to None
        self.drm = None if drm == "None" else drm
        self.type = self.data.get("type") if self.data else "series"
        self.season = season
        self.title_lang = title_lang
        self.license_api = None
        self.auth_token = None

    def authenticate(self, cookies: Optional[CookieJar] = None, credential: Optional[Credential] = None) -> None:
        """
        Refreshes the cookies for the TrueID service.
        This is necessary to maintain a valid session.
        """
        super().authenticate(cookies, credential)
        self.auth_token = self.retry(self.get_auth_token, args=(self.data,))

        params = {
            'client_id': '893',
            'browser_id': '*****.*****',
            'ver': '1.5.0',
            'scope': 'public_profile,mobile,email,references,identity_account_read,identity_token_bridge',
        }
        headers = {
            'origin': 'https://www.trueid.net',
            'referer': 'https://www.trueid.net/',
            'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/145.0.0.0 Safari/537.36 Edg/145.0.0.0',
        }

        response = self.session.get(self.config['endpoints']['check_sso'], params=params, headers=headers)
        access_token = response.json().get('access_token')
        self.log.debug(f"SSO Check Response: {response.json()}")

        if not response.json().get('status') == 'connected':
            self.log.error("SSO check failed, cannot refresh cookies.")
            exit()

        headers = {
            'authorization': self.auth_token,
        }
        self.session.headers.update(headers)
        json_data = {
            'accessToken': access_token,
            'ssoId': self.session.cookies.get_dict().get("sso_u"),
        }
        response = self.session.post(self.config['endpoints']['me'], json=json_data)

        if 200 <= response.status_code < 300 and response.json().get('success') is True:
            data = response.json()['data']
            json_data = {
                'USER_PROFILE': {
                    'avatar': data['avatar'],
                    'displayName': data['display_name'],
                    'uid': data['ssoid'],
                    'user_profile_id': data['id'],
                    'trueid': '',
                },
                'isLogin': True,
                'avatar': data['avatar'],
                'displayName': data['display_name'],
                'AAA_PROFILE': {
                    'access_token': access_token,
                    'status': 'connected',
                    'income': 'trueid',
                    'hashed': self.config.get('check_sso_params', {}).get('hashed'),
                    'uid': data['ssoid'],
                },
                'SUB_RES': [],
                'uniqueUserId': '*****.*****',
                'PPID': '',
                'UID_ads': '',
            }
            response = self.session.post(self.config['endpoints']['storelogin'], headers=headers, json=json_data)
            if response.text == 'Successfully':
                return True
            else:
                self.log.error("Failed to refresh cookies.")
                exit()

        # Reached only if the /me request did not succeed
        exit()

    def get_titles(self) -> Titles_T:
        drm = None

        if self.type == "movie":
            headers = {
                'authorization': self.auth_token,
            }
            self.session.headers.update(headers)

            data = {
                "id": self.title,
                "lang": "en",
            }
            response = self.session.post(url=self.config['endpoints']['getDetailById'], json=data)
            response.raise_for_status()
            res_json = response.json()

            chapters = self.find_hhmmss_in_dict(res_json)
            manifest, license = None, None

            if res_json.get("drm") == "WV_FPS":
                drm = "wv"
            elif res_json.get("drm") == "AES_128":
                drm = "aes"
            else:
                drm = "aes"

            if self.drm:
                drm = self.drm

            return Movies(
                [
                    Movie(
                        id_=res_json.get('id', ''),
                        service=self.__class__,
                        name=res_json.get('title', ''),
                        year=res_json.get('release_year', ''),
                        data={
                            "title": res_json.get('title', ''),
                            "url": self.title,
                            "drm": drm,
                            "season_no": res_json.get('season_no', ''),
                            "episode_no": res_json.get('episode_no', ''),
                            "audio": res_json.get('audio', ''),
                            "subtitle": res_json.get('subtitle', ''),
                            "id": res_json.get('id', ''),
                            "year": res_json.get('release_year', ''),
                            'manifest': manifest,
                            'license': license,
                            'chapters': chapters,
                        }
                    )
                ]
            )
        else:
            titles = []
            headers = {
                'authorization': self.auth_token,
                "referer": f'https://www.trueid.net/watch/th-th/series/{self.title}/{self.data.get("season_id")}',
            }

            params = {
                "id": self.title,
                "lang": "en",
            }
            response = self.session.get(url=self.config['endpoints']['seasonsSerise'], headers=headers, params=params)

            if not response.json():
                # Some titles are only available with the Thai locale
                params["lang"] = "th"
                response = self.session.get(url=self.config['endpoints']['seasonsSerise'], headers=headers, params=params)

            seasonItems = response.json()['seasonItems']
            series_title = response.json().get('seasonShelf', {}).get('title', '')

            previous_season_id = self.data.get('season_id', '')

            for season in seasonItems:
                params = {
                    "id": self.title,
                    "lang": "en",
                    "seasonId": season['id'],
                }
                response = self.session.get(url=self.config['endpoints']['seasonsSerise'], params=params).json()
                if response is None or len(response) == 0:
                    self.log.warning("Season not found, retrying with the Thai locale")
                    params["lang"] = "th"
                    response = self.session.get(url=self.config['endpoints']['seasonsSerise'], params=params).json()

                response = response.get("seasonShelf", [])

                for ss in response.get('season', []):
                    for s in ss.get('episode', []):
                        for ep_item in s['subEpisode']:
                            if ep_item.get("episode_no") is None:
                                continue
                            if ep_item.get('vod_season_id', '') != previous_season_id:
                                self.season = int(self.season) + 1 if self.season else ep_item.get('season_no', '')
                                previous_season_id = ep_item.get('vod_season_id', '')

                            manifest, license = None, None
                            chapters = self.find_hhmmss_in_dict(ep_item)
                            titles.append({
                                "title": ep_item.get('title', ''),
                                "series_title": series_title,
                                "drm": self._DRM_MAP.get(ep_item.get('drm', 'AES_128')) if self.drm is None else self.drm,
                                "season_no": ep_item.get('season_no', '') if not self.season else self.season,
                                "episode_no": ep_item.get('episode_no', ''),
                                "sub_ep_no": ep_item.get('sub_ep_no', ''),
                                "audio": ep_item.get('audio', ''),
                                'sub_ep_included': ep_item.get('sub_ep_included', False),
                                "subtitle": ep_item.get('subtitle', ''),
                                "id": ep_item.get('id', ''),
                                'manifest': manifest,
                                'license': license,
                                'chapters': chapters,
                            })
            return Series([Episode(
                id_=x["id"],
                service=self.__class__,
                title=x["series_title"],
                season=x.get("season_no"),
                number=x.get("episode_no"),
                name=x.get("title"),
                data=x,
            ) for x in titles])

    # DASH Example: Service requires separate API calls per codec/range.
    # Uses _get_tracks_for_variants() which iterates codecs x ranges,
    # handles HYBRID (HDR10+DV), and best_available fallback.

    def get_tracks(self, title: Title_T) -> Tracks:
        manifest, title.data["license_api"] = self._get_manifest(title.id, title.data['drm'])
        res = self.session.get(url=manifest).text
        tracks = Tracks()
        if title.data['drm'] == "aes":
            tracks.add(HLS.from_text(res, manifest).to_tracks(self.title_lang))
        else:
            tracks.add(DASH.from_text(
                text=res,
                url=manifest
            ).to_tracks(self.title_lang))

        return tracks

    # HLS Example: Service returns all codecs/ranges in one master playlist.
    # No need for _get_tracks_for_variants, dl.py filters by user selection.
    #
    # def get_tracks(self, title: Title_T) -> Tracks:
    #     playback = self.session.get(
    #         url=self.config["endpoints"]["playback"].format(title_id=title.id),
    #         params={"token": self.token},
    #     ).json()
    #     return HLS.from_url(
    #         url=playback["manifest_url"],
    #         session=self.session,
    #     ).to_tracks(title.language)

    def get_chapters(self, title: Title_T) -> list[Chapter]:
        chapters = []

        list_chapter = title.data.get('chapters', [])
        already_added = []
        for chapter in list_chapter:
            for key, value in chapter.items():
                if not self.is_valid_nonzero_time(value) or value in already_added:
                    continue
                chapters.append(Chapter(
                    name=key.replace('_', ' ').replace('skip', '').strip(),
                    timestamp=value,
                ))
                already_added.append(value)

        return chapters

    def get_widevine_service_certificate(self, **_: Any) -> str:
        return self.config.get("certificate")

    def get_widevine_license(self, *, challenge: bytes, title: Title_T, track: AnyTrack) -> Optional[Union[bytes, str]]:
        return self.session.post(
            url=title.data["license_api"],
            data=challenge  # the license server expects the raw challenge bytes
        ).content

    def get_playready_license(self, *, challenge: bytes, title: Title_T, track: AnyTrack) -> Optional[Union[bytes, str]]:
        license_url = self.config["endpoints"].get("playready_license")
        if not license_url:
            raise ValueError("PlayReady license endpoint not configured")

        response = self.session.post(
            url=license_url,
            data=challenge,
            headers={
                "user-agent": self.config["client"][self.device]["license_user_agent"],
            },
        )
        response.raise_for_status()
        return response.content

    @staticmethod
    def find_hhmmss_in_dict(data) -> list[dict]:
        hhmmss_pattern = re.compile(r'^\d{2}:\d{2}:\d{2}$')
        matches = []

        def search(d):
            if isinstance(d, dict):
                for k, v in d.items():
                    if isinstance(v, (dict, list)):
                        search(v)
                    elif isinstance(v, str) and hhmmss_pattern.match(v):
                        matches.append({k: v})
            elif isinstance(d, list):
                for item in d:
                    search(item)

        search(data)

        # Sort by time value
        def to_seconds(hms):
            h, m, s = map(int, hms.split(':'))
            return h * 3600 + m * 60 + s

        matches.sort(key=lambda d: to_seconds(list(d.values())[0]))
        return matches

    @staticmethod
    def is_valid_nonzero_time(value: str) -> bool:
        # Match HH:MM:SS where each part is two digits
        if not re.match(r'^\d{2}:\d{2}:\d{2}$', value):
            return False
        # Check that it's not 00:00:00
        return value != "00:00:00"
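`find_hhmmss_in_dict` and `is_valid_nonzero_time` together turn arbitrary nested API responses into chapter candidates. A standalone sketch mirroring that traversal (the sample payload is invented):

```python
import re

def find_hhmmss(data) -> list:
    """Collect {key: "HH:MM:SS"} pairs from an arbitrarily nested structure, sorted by time."""
    pattern = re.compile(r'^\d{2}:\d{2}:\d{2}$')
    matches = []

    def search(d):
        if isinstance(d, dict):
            for k, v in d.items():
                if isinstance(v, (dict, list)):
                    search(v)
                elif isinstance(v, str) and pattern.match(v):
                    matches.append({k: v})
        elif isinstance(d, list):
            for item in d:
                search(item)

    search(data)

    def to_seconds(hms):
        h, m, s = map(int, hms.split(':'))
        return h * 3600 + m * 60 + s

    matches.sort(key=lambda d: to_seconds(list(d.values())[0]))
    return matches

# Invented episode payload: only well-formed HH:MM:SS strings are picked up
episode = {"title": "Ep 1", "skip_intro": "00:01:05", "meta": {"end_credit": "00:42:10"}}
chapters = find_hhmmss(episode)
```

The key names (`skip_intro`, `end_credit`) then become the chapter names in `get_chapters` after the underscore/`skip` cleanup.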
    def _get_manifest(self, cmsId: str, drm: str = "wv"):

        def get_stream():
            data = {
                "drm": drm,
                "cmsId": cmsId,
                "contentType": "movie",
                "lang": "en"
            }
            response = self.session.post(self.config['endpoints']['stream'], json=data)
            retry_lang = ["th"]
            for lang in retry_lang:
                if response.status_code != 200:
                    self.log.warning(f"Request with the English locale failed, retrying with '{lang}'")
                    data["lang"] = lang
                    response = self.session.post(self.config['endpoints']['stream'], json=data)
                    if response.status_code == 200:
                        return response
            return response

        response: requests.Response | None = self.retry(get_stream)

        stream = response.json().get("stream")

        manifest = None
        license = None

        if drm == "wv":
            manifest = stream.get("streamurl")
            license = stream.get("license")
        elif drm == "aes":
            manifest = stream.get("result")
        return manifest, license

    def retry(self, func: Callable, args=None, kwargs=None, count: int = 5, delay: float = 5.0) -> Any:
        args = args or ()
        kwargs = kwargs or {}

        last_exception = None

        for attempt in range(1, count + 1):
            try:
                return func(*args, **kwargs)
            except Exception as e:
                last_exception = e
                if attempt < count:
                    time.sleep(delay)
                else:
                    raise last_exception
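The `retry` helper above is a plain fixed-delay retry: it re-raises the last exception only after the final attempt. A standalone copy demonstrating its behavior against a function that fails twice (the flaky function is contrived):

```python
import time
from typing import Any, Callable

def retry(func: Callable, args=None, kwargs=None, count: int = 5, delay: float = 5.0) -> Any:
    """Mirror of the retry helper above: fixed delay, re-raise after the last attempt."""
    args = args or ()
    kwargs = kwargs or {}
    last_exception = None
    for attempt in range(1, count + 1):
        try:
            return func(*args, **kwargs)
        except Exception as e:
            last_exception = e
            if attempt < count:
                time.sleep(delay)
    raise last_exception

calls = {"n": 0}

def flaky():
    # Contrived endpoint that succeeds only on the third call
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

result = retry(flaky, delay=0.0)
```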
    def get_auth_token(self, title_data):
        authen = None
        self.log.warning("Getting authorization token")
        r = self.session.get(self.config["endpoints"]["title"].format(id=title_data.get("id"), season_id=title_data.get("season_id")))
        data = json.loads(
            BeautifulSoup(r.content, features="lxml")
            .find("script", {"id": "__NEXT_DATA__"})
            .text
        )

        if self.type == "movie":
            url = self.config["endpoints"]["movie_video"].format(id=data.get("id"))
        else:
            def get_id(d):
                # Walk the season tree until a sub-season and episode with ids are found
                for i in d:
                    for j in i["vod_items"]:
                        sub_season_id = j.get("id")
                        if sub_season_id:
                            for k in j["ep_items"]:
                                ep_id = k.get("id")
                                if ep_id:
                                    return sub_season_id, ep_id

            sub_season_id, ep_id = get_id(data["props"]["pageProps"]["resultTitleDetail"]["seriesData"]["vod_items"])
            url = self.config["endpoints"]["series_video"].format(id=title_data.get("id"), season_id=title_data.get("season_id"), sub_season_id=sub_season_id, ep_id=ep_id)

        r = self.session.get(url)
        match = re.search(r'__NEXT_DATA__\s*=\s*(\{.*?\});', r.text, re.DOTALL)
        if match:
            json_str = match.group(1)
            try:
                data = json.loads(json_str)
                secret = "ads-top-position"

                username = decrypt_cryptojs(data["runtimeConfig"]["baseAuth"]['username'], secret)
                password = decrypt_cryptojs(data["runtimeConfig"]["baseAuth"]['password'], secret)
                data_string = username + ":" + password

                authen = "Basic " + base64.b64encode(data_string.encode('utf-8')).decode()
                self.log.warning(f"Got token: {authen}")
            except json.JSONDecodeError as e:
                self.log.error(f"JSON parsing error: {e}")
            except requests.exceptions.HTTPError as e:
                self.log.error(f"HTTP error: {e}")
        else:
            self.log.error("Cannot parse __NEXT_DATA__ from title page.")
            exit(1)

        if not authen:
            self.log.error("Cannot extract required token.")
            exit(1)

        return authen

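The `__NEXT_DATA__` extraction above can be checked against a synthetic page body (the HTML string below is made up for illustration). Note that the non-greedy pattern stops at the first `};`, so it assumes the embedded JSON contains no literal `};` sequence inside a string value. The credential decryption step is skipped here; plain placeholder values stand in to show the Basic token construction.

```python
import base64
import json
import re

# Illustrative stand-in for the title page body (real values are CryptoJS-encrypted)
page = 'window.__NEXT_DATA__ = {"runtimeConfig": {"baseAuth": {"username": "u", "password": "p"}}};'

match = re.search(r'__NEXT_DATA__\s*=\s*(\{.*?\});', page, re.DOTALL)
data = json.loads(match.group(1))

# In the real flow, username/password are decrypted with decrypt_cryptojs first
auth = data["runtimeConfig"]["baseAuth"]
authen = "Basic " + base64.b64encode(f'{auth["username"]}:{auth["password"]}'.encode()).decode()
```

The final header value is standard HTTP Basic auth: `base64("username:password")` prefixed with `Basic `.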
def evp_bytes_to_key(password, salt, key_len=32, iv_len=16):
    # OpenSSL EVP_BytesToKey with MD5, as used by CryptoJS passphrase mode
    dtot = b""
    d = b""

    while len(dtot) < (key_len + iv_len):
        d = MD5.new(d + password + salt).digest()
        dtot += d

    return dtot[:key_len], dtot[key_len:key_len + iv_len]


def decrypt_cryptojs(ciphertext_b64, passphrase):
    data = base64.b64decode(ciphertext_b64)

    # CryptoJS/OpenSSL layout: b"Salted__" + 8-byte salt + ciphertext
    assert data[:8] == b"Salted__"
    salt = data[8:16]
    ciphertext = data[16:]

    key, iv = evp_bytes_to_key(passphrase.encode(), salt)

    cipher = AES.new(key, AES.MODE_CBC, iv)
    decrypted = cipher.decrypt(ciphertext)

    # Remove PKCS7 padding
    pad_len = decrypted[-1]
    return decrypted[:-pad_len].decode("utf-8")

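`evp_bytes_to_key` mirrors OpenSSL's `EVP_BytesToKey` key derivation with MD5, which is what CryptoJS applies in passphrase mode. A stdlib-only sketch (using `hashlib` instead of pycryptodome, with an arbitrary all-zero salt for illustration) makes the derivation easy to verify: by construction, the first digest block is `md5(password + salt)` and each subsequent block chains in the previous digest.

```python
import hashlib

def evp_bytes_to_key(password: bytes, salt: bytes, key_len: int = 32, iv_len: int = 16):
    # Repeated MD5 over (previous digest + password + salt), per EVP_BytesToKey
    dtot = b""
    d = b""
    while len(dtot) < key_len + iv_len:
        d = hashlib.md5(d + password + salt).digest()
        dtot += d
    return dtot[:key_len], dtot[key_len:key_len + iv_len]

# 32-byte AES key and 16-byte IV require two and a half MD5 blocks (3 rounds)
key, iv = evp_bytes_to_key(b"ads-top-position", b"\x00" * 8)
```

Since MD5 digests are 16 bytes, a 32+16 byte request takes three rounds: `key = d1 + d2`, `iv = d3`.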
37
TID/config.yaml
Normal file
@@ -0,0 +1,37 @@
endpoints:
  stream: https://movie.trueid.net/apis/stream
  seasonsSerise: https://movie.trueid.net/apis/titleDetail/seasonsSerise
  getDetailById: https://movie.trueid.net/apis/titleDetail/getDetailById
  base_series_url: https://movie.trueid.net/th-en/series/
  storelogin: https://www.trueid.net/watch/api/session/storelogin
  check_sso: https://identity.trueid.net/api/session/check-sso
  me: https://www.trueid.net/watch/api/headerwidgets/single-profile/me
  title: https://www.trueid.net/watch/th-th/series/{id}/{season_id}
  series_video: https://movie.trueid.net/series/{id}/{season_id}/{sub_season_id}/{ep_id}
  movie_video: https://movie.trueid.net/movie/{id}

# token: Basic ODk1OGMzZjJkYTVjMmRkMmMzMDVmM2U3M2M4YmZiYzU2NTljZjRlZjo1YzJkZDJjMzA1ZjNlNzNjOGJmYmM1NjU5Y2Y0ZWY=

# check_sso_params:
#   hashed: 'U2FsdGVkX1+w+kREB4BPR4NccQTS1/LUfjgYj9PqEF2DCMlOVIuGB44lp/oQrx3tt+C4p5sxParXPQlrxQkPCw==' # from https://movie.trueid.net/th-th/series/{title_id}/{season_id} "baseAuth": {"username"
#   browser_id: '*****.****' # from check_sso

# me_data:
#   ssoId: '24880351' # from cookies

# trueid.yml
# https://movie.trueid.net/apis/cookie/getUniqueUserId
#   token: get from "authorization" in the Response Header
#   ssoId: get from "cookie" after "sso_u=" in the Response Header

# https://identity.trueid.net/api/session/check-sso
#   hashed: get from "hashed" in the Response tab
68561
iQ/CDM/libmonalisa-v3.0.6-browser.wat
Normal file
File diff suppressed because one or more lines are too long
8
iQ/CDM/monalisa.mld
Normal file
@@ -0,0 +1,8 @@
{
    "wasm_path": "libmonalisa-v3.0.6-browser.wat",
    "metadata": {
        "version": "3.0.6",
        "name": "MonaLisa Browser Module",
        "description": "MonaLisa WASM module for browser environment"
    }
}
1124
iQ/__init__.py
Normal file
File diff suppressed because it is too large
84
iQ/config.yaml
Normal file
@@ -0,0 +1,84 @@
endpoint:
  mode: "http://ip.geo.iqiyi.com/queryZone"
  pck: "https://pcw-api.iq.com/api/vip/show"
  ptid: "https://pcw-api.iq.com/api/conf-id"
  vip: "https://pcw-api.iq.com/api/pvvp"
  album: "https://www.iq.com/album/{id}?lang={lang_code}"
  episode: "https://pcw-api.iq.com/api/v2/episodeListSource/{list_id}"
  stream: "https://cache-video.iq.com{path}&vf={vf}"
  audio: "https://data6.video.iqiyi.com/videos/{path}"
  subtitle: "http://meta.video.iqiyi.com{path}"

device:
  user_agent: "QYPlayer/Android/5.2.0500;BT/mcto;Pt/Mobile;NetType/4G;Hv/10.1.25.5051;QTP/2.1.117.104"
  user_agent_bws: "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/144.0.0.0 Safari/537.36"

key:
  dash: "o1cmec973m7qqw5cg93u5k1gsuw1i7uk"
  ccsn: "c665dfa2f8cdabf239867216ac758a8a"
  ml: "66647d33d9feca3defff5e56424600cd"

quality:
  video:
    8k:
      k_ft2: "2147483647"
      k_ft4: ""
      k_ft5: "137"
      k_ft7: "52068007"
      k_ft8: "_"
    h264:
      k_ft1: "549755814020"
      k_ft4: "4"
      k_ft5: "1"
      k_ft7: "0"
    h265:
      k_ft1: "748218484789764"
      k_ft2: "2147483647"
      k_ft4: ""
      k_ft5: "137"
      k_ft7: "52068007"
      k_ft8: "_"
    h265_edr:
      k_ft1: "748218485838340"
      k_ft2: "2147483647"
      k_ft4: ""
      k_ft5: "137"
      k_ft7: "52068007"
      k_ft8: "_"
    hdr_edr:
      k_ft1: "748218486362628"
      k_ft2: "2147483647"
      k_ft4: ""
      k_ft5: "137"
      k_ft7: "52068007"
      k_ft8: "_"
    hdr:
      k_ft1: "748218485314052"
      k_ft2: "2147483647"
      k_ft4: ""
      k_ft5: "137"
      k_ft7: "52068007"
      k_ft8: "_"
    dv_edr:
      k_ft1: "748218485838596"
      k_ft2: "2147483647"
      k_ft4: ""
      k_ft5: "137"
      k_ft7: "52068007"
      k_ft8: "_"
    dv:
      k_ft1: "748218484790020"
      k_ft2: "2147483647"
      k_ft4: ""
      k_ft5: "137"
      k_ft7: "52068007"
      k_ft8: "_"
  audio:
    dolby:
      abid: "500"
      cf: "1"
      ct: ""
    aac:
      abid: "500"
      cf: "2"
      ct: ""