KeyError: 'chapter_info' #156
-
I'm trying to process an audiobook and it keeps getting hung up on the update-chapters step. It displays this error during the process: Converting old metadata (you may need to manually enter Y). When I try to just download the chapter JSON file through audible-cli, I do get a file, but it contains no chapter information. When I play the audiobook through Audible, it has proper chapter data. Here is the output from the chapter JSON file: { The ASIN for the title is B003ZWFO7E. Any ideas? Thank you.
-
My first thought is that the book simply does not include chapter information in the API. Maybe you can extract the metadata from the aaxc file with …
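If the file is already downloaded, something along these lines can dump whatever chapter atoms are embedded in it. This is a rough sketch, not from this thread's code: it assumes ffmpeg/ffprobe ≥ 4.4 for .aaxc support, and the file name plus the key/iv values are placeholders to be taken from the matching .voucher file.

import json
import subprocess

# Placeholders: adjust the file name and take key/iv from the book's .voucher file.
cmd = [
    "ffprobe", "-v", "quiet",
    "-audible_key", "<key from voucher>",
    "-audible_iv", "<iv from voucher>",
    "-print_format", "json", "-show_chapters",
    "The_Way_of_Kings-AAX_22_64.aaxc",
]
result = subprocess.run(cmd, capture_output=True, text=True)
chapters = json.loads(result.stdout or "{}").get("chapters", [])
print(f"{len(chapters)} chapters found")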
-
There are some keys missing, but you can use the data below and save it as the chapter file: {
"content_metadata": {
"chapter_info": {
"brandIntroDurationMs": 1904,
"brandOutroDurationMs": 4969,
"chapters": [
{
"length_ms": 57167,
"start_offset_ms": 0,
"start_offset_sec": 0,
"title": "Opening Credits"
},
{
"length_ms": 8370,
"start_offset_ms": 57167,
"start_offset_sec": 57,
"title": "Dedication"
},
{
"length_ms": 632491,
"start_offset_ms": 65537,
"start_offset_sec": 66,
"title": "Prelude to the Stormlight Archive"
},
{
"length_ms": 2531727,
"start_offset_ms": 698028,
"start_offset_sec": 698,
"title": "Prologue: To Kill"
},
{
"chapters": [
{
"length_ms": 1909731,
"start_offset_ms": 3238595,
"start_offset_sec": 3239,
"title": "1: Stormblessed"
},
{
"length_ms": 2200508,
"start_offset_ms": 5148326,
"start_offset_sec": 5148,
"title": "2: Honor Is Dead"
},
{
"length_ms": 2166468,
"start_offset_ms": 7348834,
"start_offset_sec": 7349,
"title": "3: City of Bells"
},
{
"length_ms": 1871110,
"start_offset_ms": 9515302,
"start_offset_sec": 9515,
"title": "4: The Shattered Plains"
},
{
"length_ms": 1727564,
"start_offset_ms": 11386412,
"start_offset_sec": 11386,
"title": "5: Heretic"
},
{
"length_ms": 3031783,
"start_offset_ms": 13113976,
"start_offset_sec": 13114,
"title": "6: Bridge Four"
},
{
"length_ms": 2647539,
"start_offset_ms": 16145759,
"start_offset_sec": 16146,
"title": "7: Anything Reasonable"
},
{
"length_ms": 2952138,
"start_offset_ms": 18793298,
"start_offset_sec": 18793,
"title": "8: Nearer the Flame"
},
{
"length_ms": 1066213,
"start_offset_ms": 21745436,
"start_offset_sec": 21745,
"title": "9: Damnation"
},
{
"length_ms": 996925,
"start_offset_ms": 22811649,
"start_offset_sec": 22812,
"title": "10: Stories of Surgeons"
},
{
"length_ms": 1426169,
"start_offset_ms": 23808574,
"start_offset_sec": 23809,
"title": "11: Droplets"
}
],
"length_ms": 8840,
"start_offset_ms": 3229755,
"start_offset_sec": 3230,
"title": "Part One: Above Silence"
},
{
"chapters": [
{
"length_ms": 884414,
"start_offset_ms": 25241743,
"start_offset_sec": 25242,
"title": "I-1: Ishkk"
},
{
"length_ms": 575947,
"start_offset_ms": 26126157,
"start_offset_sec": 26126,
"title": "I-2: Nan Balat"
},
{
"length_ms": 871259,
"start_offset_ms": 26702104,
"start_offset_sec": 26702,
"title": "I-3: The Glory of Ignorance"
}
],
"length_ms": 7000,
"start_offset_ms": 25234743,
"start_offset_sec": 25235,
"title": "Interludes"
},
{
"chapters": [
{
"length_ms": 3326057,
"start_offset_ms": 27583363,
"start_offset_sec": 27583,
"title": "12: Unity"
},
{
"length_ms": 1470705,
"start_offset_ms": 30909420,
"start_offset_sec": 30909,
"title": "13: Ten Heartbeats"
},
{
"length_ms": 1698261,
"start_offset_ms": 32380125,
"start_offset_sec": 32380,
"title": "14: Payday"
},
{
"length_ms": 3969636,
"start_offset_ms": 34078386,
"start_offset_sec": 34078,
"title": "15: The Decoy"
},
{
"length_ms": 2275880,
"start_offset_ms": 38048022,
"start_offset_sec": 38048,
"title": "16: Cocoons"
},
{
"length_ms": 2890141,
"start_offset_ms": 40323902,
"start_offset_sec": 40324,
"title": "17: A Bloody, Red Sunset"
},
{
"length_ms": 3599000,
"start_offset_ms": 43214043,
"start_offset_sec": 43214,
"title": "18: Highprince of War"
},
{
"length_ms": 2533297,
"start_offset_ms": 46813043,
"start_offset_sec": 46813,
"title": "19: Starfalls"
},
{
"length_ms": 352989,
"start_offset_ms": 49346340,
"start_offset_sec": 49346,
"title": "20: Scarlet"
},
{
"length_ms": 1854392,
"start_offset_ms": 49699329,
"start_offset_sec": 49699,
"title": "21: Why Men Lie"
},
{
"length_ms": 2373590,
"start_offset_ms": 51553721,
"start_offset_sec": 51554,
"title": "22: Eyes, Hands, or Spheres?"
},
{
"length_ms": 2286283,
"start_offset_ms": 53927311,
"start_offset_sec": 53927,
"title": "23: Many Uses"
},
{
"length_ms": 1462021,
"start_offset_ms": 56213594,
"start_offset_sec": 56214,
"title": "24: The Gallery of Maps"
},
{
"length_ms": 1334729,
"start_offset_ms": 57675615,
"start_offset_sec": 57676,
"title": "25: The Butcher"
},
{
"length_ms": 2691703,
"start_offset_ms": 59010344,
"start_offset_sec": 59010,
"title": "26: Stillness"
},
{
"length_ms": 3681756,
"start_offset_ms": 61702047,
"start_offset_sec": 61702,
"title": "27: Chasm Duty"
},
{
"length_ms": 4052207,
"start_offset_ms": 65383803,
"start_offset_sec": 65384,
"title": "28: Decision"
}
],
"length_ms": 10000,
"start_offset_ms": 27573363,
"start_offset_sec": 27573,
"title": "Part Two: The Illuminating Storms"
},
{
"chapters": [
{
"length_ms": 1196883,
"start_offset_ms": 69444010,
"start_offset_sec": 69444,
"title": "I-4: Rysn"
},
{
"length_ms": 891832,
"start_offset_ms": 70640893,
"start_offset_sec": 70641,
"title": "I-5: Axies the Collector"
},
{
"length_ms": 1476556,
"start_offset_ms": 71532725,
"start_offset_sec": 71533,
"title": "I-6: A Work of Art"
}
],
"length_ms": 8000,
"start_offset_ms": 69436010,
"start_offset_sec": 69436,
"title": "Interludes"
},
{
"chapters": [
{
"length_ms": 2878401,
"start_offset_ms": 73018281,
"start_offset_sec": 73018,
"title": "29: Errorgance"
},
{
"length_ms": 1516030,
"start_offset_ms": 75896682,
"start_offset_sec": 75897,
"title": "30: Darkness Unseen"
},
{
"length_ms": 845856,
"start_offset_ms": 77412712,
"start_offset_sec": 77413,
"title": "31: Beneath the Skin"
},
{
"length_ms": 1843943,
"start_offset_ms": 78258568,
"start_offset_sec": 78259,
"title": "32: Side Carry"
},
{
"length_ms": 2445525,
"start_offset_ms": 80102511,
"start_offset_sec": 80103,
"title": "33: Cymatics"
},
{
"length_ms": 888302,
"start_offset_ms": 82548036,
"start_offset_sec": 82548,
"title": "34: Stormwall"
},
{
"length_ms": 796444,
"start_offset_ms": 83436338,
"start_offset_sec": 83436,
"title": "35: A Light by Which to See"
},
{
"length_ms": 2383992,
"start_offset_ms": 84232782,
"start_offset_sec": 84233,
"title": "36: The Lesson"
},
{
"length_ms": 2446407,
"start_offset_ms": 86616774,
"start_offset_sec": 86617,
"title": "37: Sides"
},
{
"length_ms": 810469,
"start_offset_ms": 89063181,
"start_offset_sec": 89063,
"title": "38: Envisager"
},
{
"length_ms": 1582486,
"start_offset_ms": 89873650,
"start_offset_sec": 89874,
"title": "39: Burned into Her"
},
{
"length_ms": 2204548,
"start_offset_ms": 91456136,
"start_offset_sec": 91456,
"title": "40: Eyes of Red and Blue"
},
{
"length_ms": 906785,
"start_offset_ms": 93660684,
"start_offset_sec": 93661,
"title": "41: Of Alds and Milp"
},
{
"length_ms": 2387847,
"start_offset_ms": 94567469,
"start_offset_sec": 94567,
"title": "42: Beggars and Barmaids"
},
{
"length_ms": 2101266,
"start_offset_ms": 96955316,
"start_offset_sec": 96955,
"title": "43: The Wretch"
},
{
"length_ms": 2096390,
"start_offset_ms": 99056582,
"start_offset_sec": 99057,
"title": "44: The Weeping"
},
{
"length_ms": 3317249,
"start_offset_ms": 101152972,
"start_offset_sec": 101153,
"title": "45: Shadesmar"
},
{
"length_ms": 3156799,
"start_offset_ms": 104470221,
"start_offset_sec": 104470,
"title": "46: Child of Tanavast"
},
{
"length_ms": 2007643,
"start_offset_ms": 107627020,
"start_offset_sec": 107627,
"title": "47: Stormblessings"
},
{
"length_ms": 1457795,
"start_offset_ms": 109634663,
"start_offset_sec": 109635,
"title": "48: Strawberry"
},
{
"length_ms": 1459652,
"start_offset_ms": 111092458,
"start_offset_sec": 111092,
"title": "49: To Care"
},
{
"length_ms": 541396,
"start_offset_ms": 112552110,
"start_offset_sec": 112552,
"title": "50: Backbreaker Powder"
},
{
"length_ms": 836011,
"start_offset_ms": 113093506,
"start_offset_sec": 113094,
"title": "51: Sas Nahn"
}
],
"length_ms": 9000,
"start_offset_ms": 73009281,
"start_offset_sec": 73009,
"title": "Part Three: Dying"
},
{
"chapters": [
{
"length_ms": 539537,
"start_offset_ms": 113938017,
"start_offset_sec": 113938,
"title": "I-7: Baxil"
},
{
"length_ms": 632093,
"start_offset_ms": 114477554,
"start_offset_sec": 114478,
"title": "I-8: Geranid"
},
{
"length_ms": 1127793,
"start_offset_ms": 115109647,
"start_offset_sec": 115110,
"title": "I-9: Death Wears White"
}
],
"length_ms": 8500,
"start_offset_ms": 113929517,
"start_offset_sec": 113930,
"title": "Interludes"
},
{
"chapters": [
{
"length_ms": 2942392,
"start_offset_ms": 116248440,
"start_offset_sec": 116248,
"title": "52: A Highway to the Sun"
},
{
"length_ms": 1051306,
"start_offset_ms": 119190832,
"start_offset_sec": 119191,
"title": "53: Dunny"
},
{
"length_ms": 2427553,
"start_offset_ms": 120242138,
"start_offset_sec": 120242,
"title": "54: Gibletish"
},
{
"length_ms": 2528513,
"start_offset_ms": 122669691,
"start_offset_sec": 122670,
"title": "55: An Emerald Broam"
},
{
"length_ms": 1923169,
"start_offset_ms": 125198204,
"start_offset_sec": 125198,
"title": "56: That Storming Book"
},
{
"length_ms": 4026572,
"start_offset_ms": 127121373,
"start_offset_sec": 127121,
"title": "57: Wandersail"
},
{
"length_ms": 3194972,
"start_offset_ms": 131147945,
"start_offset_sec": 131148,
"title": "58: The Journey"
},
{
"length_ms": 2725511,
"start_offset_ms": 134342917,
"start_offset_sec": 134343,
"title": "59: An Honor"
},
{
"length_ms": 1901574,
"start_offset_ms": 137068428,
"start_offset_sec": 137068,
"title": "60: That Which We Cannot Have"
},
{
"length_ms": 1423568,
"start_offset_ms": 138970002,
"start_offset_sec": 138970,
"title": "61: Right for Wrong"
},
{
"length_ms": 2103867,
"start_offset_ms": 140393570,
"start_offset_sec": 140394,
"title": "62: Three Glyphs"
},
{
"length_ms": 914959,
"start_offset_ms": 142497437,
"start_offset_sec": 142497,
"title": "63: Fear"
},
{
"length_ms": 1367144,
"start_offset_ms": 143412396,
"start_offset_sec": 143412,
"title": "64: A Man of Extremes"
},
{
"length_ms": 1581278,
"start_offset_ms": 144779540,
"start_offset_sec": 144780,
"title": "65: The Tower"
},
{
"length_ms": 1033938,
"start_offset_ms": 146360818,
"start_offset_sec": 146361,
"title": "66: Codes"
},
{
"length_ms": 3217078,
"start_offset_ms": 147394756,
"start_offset_sec": 147395,
"title": "67: Words"
},
{
"length_ms": 2738747,
"start_offset_ms": 150611834,
"start_offset_sec": 150612,
"title": "68: Eshonai"
},
{
"length_ms": 3383983,
"start_offset_ms": 153350581,
"start_offset_sec": 153351,
"title": "69: Justice"
}
],
"length_ms": 11000,
"start_offset_ms": 116237440,
"start_offset_sec": 116237,
"title": "Part Four: Storm's Illuminations"
},
{
"chapters": [
{
"length_ms": 1013551,
"start_offset_ms": 156749564,
"start_offset_sec": 156750,
"title": "70: Sea of Glass"
},
{
"length_ms": 1250208,
"start_offset_ms": 157763115,
"start_offset_sec": 157763,
"title": "71: Recorded in Blood"
},
{
"length_ms": 379785,
"start_offset_ms": 159013323,
"start_offset_sec": 159013,
"title": "72: Veristitalian"
},
{
"length_ms": 1513290,
"start_offset_ms": 159393108,
"start_offset_sec": 159393,
"title": "73: Trust"
},
{
"length_ms": 407524,
"start_offset_ms": 160906398,
"start_offset_sec": 160906,
"title": "74: Ghostblood"
},
{
"length_ms": 939883,
"start_offset_ms": 161313922,
"start_offset_sec": 161314,
"title": "75: In the Top Room"
}
],
"length_ms": 15000,
"start_offset_ms": 156734564,
"start_offset_sec": 156735,
"title": "Part Five: The Silence Above"
},
{
"length_ms": 769695,
"start_offset_ms": 162253805,
"start_offset_sec": 162254,
"title": "Epilogue: Of Most Worth"
},
{
"length_ms": 104350,
"start_offset_ms": 163023500,
"start_offset_sec": 163024,
"title": "Endnote"
},
{
"length_ms": 647233,
"start_offset_ms": 163127850,
"start_offset_sec": 163128,
"title": "Ars Arcanum"
},
{
"length_ms": 52941,
"start_offset_ms": 163775083,
"start_offset_sec": 163775,
"title": "End Credits"
}
],
"is_accurate": true,
"runtime_length_ms": 163828024,
"runtime_length_sec": 163828
},
"content_reference": {
"acr": "XXX",
"asin": "B003ZWFO7E",
"codec": "mp4a.40.2",
"content_format": "AAX_22_64",
"content_size_in_bytes": 1324501805,
"file_version": "5",
"marketplace": "AF2M0KC94RCEA",
"sku": "BK_AREN_001117",
"tempo": "1.0",
"version": "19541071"
},
"last_position_heard": {
"status": "DoesNotExist"
}
},
"response_groups": [
"always-returned",
"last_position_heard",
"chapter_info",
"content_reference"
]
}
-
Do you have this issue only with the book above? Can you download chapter files for other audiobooks?
-
The API can provide the chapter_info.
The right question now is why the chapter file downloaded from the API does not contain the chapter_info key.
For the 1st reason, the source code of my package must have been changed. Therefore please download the standalone binary for v0.2.4 (or v0.2.3) and change the plugin dir env variable, or temporarily move the files in the default plugin dir to another place. Then try downloading chapters for some books again. If they contain the required chapter information, then something must have changed the source code.
-
Sorry I was unable to troubleshoot yesterday. I don't know what happened, but everything is working today. I retraced the steps I took when installing new versions of audible and audible-cli before I submitted this question, and I was confident I had a good install (compared with the instructions you later provided). So I decided to try running the commands again today without changing my setup, and I got valid chapter JSON files for the book in question as well as for other books I tried. I don't think the original problem was my commands, because I used the same commands today; when I tried before, the commands did produce chapter JSON files, it's just that the files were missing the chapter data (as you saw in my submitted samples). So my guess is that something wonky was going on with my account on Audible's end. Thank you for your time helping me troubleshoot!
-
Is there a way for me to modify my copy of audible-cli to set this preference? I was looking through many of the .py files in the package and it seemed like cmd_download.py was the most appropriate one, but I am at a loss as to how to specify this preference. I tried adding chapter_titles_type under the kwargs for get_chapters, but that just broke the script. I'm obviously an amateur.
…On Fri, Mar 29, 2024, 11:26 PM mkb79 wrote:
You can get the chapter info when requesting the licenserequest
<https://audible.readthedocs.io/en/latest/misc/external_api.html#post--1.0-content-(string-asin)-licenserequest>
or metadata
<https://audible.readthedocs.io/en/latest/misc/external_api.html#get--1.0-content-(string-asin)-metadata>
endpoint.
You have to put chapter_info into the response_groups. With the param chapter_titles_type you can choose between Tree and Flat.
Tree is the default response by the Audible server. I do not know why!
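For reference, a minimal sketch with the audible package (not audible-cli) that requests flat chapters from the metadata endpoint; the auth file name is an example, and the ASIN is the one from this thread.

import audible

auth = audible.Authenticator.from_file("audible_credentials.json")  # example path
with audible.Client(auth=auth) as client:
    metadata = client.get(
        "content/B003ZWFO7E/metadata",
        response_groups="chapter_info, content_reference",
        chapter_titles_type="Flat",  # the server default is "Tree"
    )

chapters = metadata["content_metadata"]["chapter_info"]["chapters"]
print(len(chapters), chapters[0]["title"])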
-
Thank you! I can't wait to try it out!
…On Sat, Mar 30, 2024 at 1:13 AM mkb79 wrote:
If you use v0.3.1 of audible-cli, copy the content below to a file named
cmd_download-chapters-flat.py in your plugin dir. Then run audible
download-flat […] --chapters to download chapters in flat layout. I haven't
tested the output yet.
import asyncio
import asyncio.log
import asyncio.sslproto
import json
import pathlib
import logging
from datetime import datetime
import aiofiles
import click
import httpx
import questionary
from audible.exceptions import NotFoundError
from click import echo
from audible_cli.decorators import (
bunch_size_option,
end_date_option,
start_date_option,
timeout_option,
pass_client,
pass_session
)
from audible_cli.downloader import Downloader as NewDownloader, Status
from audible_cli.exceptions import (
AudibleCliException,
DirectoryDoesNotExists,
DownloadUrlExpired,
NotDownloadableAsAAX,
VoucherNeedRefresh
)
from audible_cli.models import Library
from audible_cli.utils import datetime_type, Downloader
logger = logging.getLogger("audible_cli.cmds.cmd_download")
CLIENT_HEADERS = {
"User-Agent": "Audible/671 CFNetwork/1240.0.4 Darwin/20.6.0"
}
QUEUE = None
class DownloadCounter:
def __init__(self):
self._aax: int = 0
self._aaxc: int = 0
self._annotation: int = 0
self._chapter: int = 0
self._cover: int = 0
self._pdf: int = 0
self._voucher: int = 0
self._voucher_saved: int = 0
self._aycl = 0
self._aycl_voucher = 0
    @property
def aax(self):
return self._aax
def count_aax(self):
self._aax += 1
logger.debug(f"Currently downloaded aax files: {self.aax}")
    @property
def aaxc(self):
return self._aaxc
def count_aaxc(self):
self._aaxc += 1
logger.debug(f"Currently downloaded aaxc files: {self.aaxc}")
    @property
def aycl(self):
return self._aycl
def count_aycl(self):
self._aycl += 1
# log as error to display this message in any cases
logger.debug(f"Currently downloaded aycl files: {self.aycl}")
    @property
def aycl_voucher(self):
return self._aycl_voucher
def count_aycl_voucher(self):
self._aycl_voucher += 1
# log as error to display this message in any cases
logger.debug(f"Currently downloaded aycl voucher files: {self.aycl_voucher}")
    @property
def annotation(self):
return self._annotation
def count_annotation(self):
self._annotation += 1
logger.debug(f"Currently downloaded annotations: {self.annotation}")
    @property
def chapter(self):
return self._chapter
def count_chapter(self):
self._chapter += 1
logger.debug(f"Currently downloaded chapters: {self.chapter}")
    @property
def cover(self):
return self._cover
def count_cover(self):
self._cover += 1
logger.debug(f"Currently downloaded covers: {self.cover}")
    @property
def pdf(self):
return self._pdf
def count_pdf(self):
self._pdf += 1
logger.debug(f"Currently downloaded PDFs: {self.pdf}")
    @property
def voucher(self):
return self._voucher
def count_voucher(self):
self._voucher += 1
logger.debug(f"Currently downloaded voucher files: {self.voucher}")
    @property
def voucher_saved(self):
return self._voucher_saved
def count_voucher_saved(self):
self._voucher_saved += 1
logger.debug(f"Currently saved voucher files: {self.voucher_saved}")
def as_dict(self) -> dict:
return {
"aax": self.aax,
"aaxc": self.aaxc,
"annotation": self.annotation,
"chapter": self.chapter,
"cover": self.cover,
"pdf": self.pdf,
"voucher": self.voucher,
"voucher_saved": self.voucher_saved,
"aycl": self.aycl,
"aycl_voucher": self.aycl_voucher
}
def has_downloads(self):
for _, v in self.as_dict().items():
if v > 0:
return True
return False
counter = DownloadCounter()
async def download_cover(
client, output_dir, base_filename, item, res, overwrite_existing
):
filename = f"{base_filename}_({str(res)}).jpg"
filepath = output_dir / filename
url = item.get_cover_url(res)
if url is None:
logger.error(
f"No COVER with size {res} found for {item.full_title}"
)
return
dl = Downloader(url, filepath, client, overwrite_existing, "image/jpeg")
downloaded = await dl.run(stream=False, pb=False)
if downloaded:
counter.count_cover()
async def download_pdf(
client, output_dir, base_filename, item, overwrite_existing
):
url = item.get_pdf_url()
if url is None:
logger.info(f"No PDF found for {item.full_title}")
return
filename = base_filename + ".pdf"
filepath = output_dir / filename
dl = Downloader(
url, filepath, client, overwrite_existing,
["application/octet-stream", "application/pdf"]
)
downloaded = await dl.run(stream=False, pb=False)
if downloaded:
counter.count_pdf()
async def _get_content_metadata(item, quality: str = "high"):
assert quality in ("best", "high", "normal",)
url = f"content/{item.asin}/metadata"
params = {
"response_groups": "last_position_heard, content_reference, "
"chapter_info",
"quality": "High" if quality in ("best", "high") else "Normal",
"drm_type": "Adrm",
"chapter_titles_type": "Flat"
}
metadata = await item._client.get(url, params=params)
return metadata
async def download_chapters(
output_dir, base_filename, item, quality, overwrite_existing
):
if not output_dir.is_dir():
raise DirectoryDoesNotExists(output_dir)
filename = base_filename + "-chapters.json"
file = output_dir / filename
if file.exists() and not overwrite_existing:
logger.info(
f"File {file} already exists. Skip saving chapters"
)
return True
try:
metadata = await _get_content_metadata(item, quality)
except NotFoundError:
logger.info(
f"No chapters found for {item.full_title}."
)
return
metadata = json.dumps(metadata, indent=4)
async with aiofiles.open(file, "w") as f:
await f.write(metadata)
logger.info(f"Chapter file saved to {file}.")
counter.count_chapter()
async def download_annotations(
output_dir, base_filename, item, overwrite_existing
):
if not output_dir.is_dir():
raise DirectoryDoesNotExists(output_dir)
filename = base_filename + "-annotations.json"
file = output_dir / filename
if file.exists() and not overwrite_existing:
logger.info(
f"File {file} already exists. Skip saving annotations"
)
return True
try:
annotation = await item.get_annotations()
except NotFoundError:
logger.info(
f"No annotations found for {item.full_title}."
)
return
annotation = json.dumps(annotation, indent=4)
async with aiofiles.open(file, "w") as f:
await f.write(annotation)
logger.info(f"Annotation file saved to {file}.")
counter.count_annotation()
async def _get_audioparts(item):
parts = []
child_library: Library = await item.get_child_items()
if child_library is not None:
for child in child_library:
if (
child.content_delivery_type is not None
and child.content_delivery_type == "AudioPart"
):
parts.append(child)
return parts
async def _add_audioparts_to_queue(
client, output_dir, filename_mode, item, quality, overwrite_existing,
aax_fallback, download_mode
):
parts = await _get_audioparts(item)
if download_mode == "aax":
get_aax = True
get_aaxc = False
else:
get_aax = False
get_aaxc = True
for part in parts:
queue_job(
get_cover=None,
get_pdf=None,
get_annotation=None,
get_chapters=None,
get_aax=get_aax,
get_aaxc=get_aaxc,
client=client,
output_dir=output_dir,
filename_mode=filename_mode,
item=part,
cover_sizes=None,
quality=quality,
overwrite_existing=overwrite_existing,
aax_fallback=aax_fallback
)
async def download_aax(
client, output_dir, base_filename, item, quality, overwrite_existing,
aax_fallback, filename_mode
):
# url, codec = await item.get_aax_url(quality)
try:
url, codec = await item.get_aax_url_old(quality)
except NotDownloadableAsAAX:
if aax_fallback:
logger.info(f"Fallback to aaxc for {item.full_title}")
return await download_aaxc(
client=client,
output_dir=output_dir,
base_filename=base_filename,
item=item,
quality=quality,
overwrite_existing=overwrite_existing,
filename_mode=filename_mode
)
raise
filename = base_filename + f"-{codec}.aax"
filepath = output_dir / filename
dl = NewDownloader(
source=url,
client=client,
expected_types=[
"audio/aax", "audio/vnd.audible.aax", "audio/audible"
]
)
downloaded = await dl.run(target=filepath, force_reload=overwrite_existing)
if downloaded.status == Status.Success:
counter.count_aax()
elif downloaded.status == Status.DownloadIndividualParts:
logger.info(
f"Item {filepath} must be downloaded in parts. Adding parts to queue"
)
await _add_audioparts_to_queue(
client=client,
output_dir=output_dir,
filename_mode=filename_mode,
item=item,
quality=quality,
overwrite_existing=overwrite_existing,
download_mode="aax",
aax_fallback=aax_fallback,
)
async def _reuse_voucher(lr_file, item):
logger.info(f"Loading data from voucher file {lr_file}.")
async with aiofiles.open(lr_file, "r") as f:
lr = await f.read()
lr = json.loads(lr)
content_license = lr["content_license"]
assert content_license["status_code"] == "Granted", "License not granted"
# try to get the user id
user_id = None
if item._client is not None:
auth = item._client.auth
if auth.customer_info is not None:
user_id = auth.customer_info.get("user_id")
# Verification of allowed user
if user_id is None:
logger.debug("No user id found. Skip user verification.")
else:
if "allowed_users" in content_license:
allowed_users = content_license["allowed_users"]
if allowed_users and user_id not in allowed_users:
# Don't proceed here to prevent overwriting voucher file
msg = f"The current user is not entitled to use the voucher {lr_file}."
raise AudibleCliException(msg)
else:
logger.debug(f"{lr_file} does not contain allowed users key.")
# Verification of voucher validity
if "refresh_date" in content_license:
refresh_date = content_license["refresh_date"]
refresh_date = datetime_type.convert(refresh_date, None, None)
if refresh_date < datetime.utcnow():
raise VoucherNeedRefresh(lr_file)
content_metadata = content_license["content_metadata"]
url = httpx.URL(content_metadata["content_url"]["offline_url"])
codec = content_metadata["content_reference"]["content_format"]
expires = url.params.get("Expires")
if expires:
expires = datetime.utcfromtimestamp(int(expires))
now = datetime.utcnow()
if expires < now:
raise DownloadUrlExpired(lr_file)
return lr, url, codec
async def download_aaxc(
client, output_dir, base_filename, item, quality, overwrite_existing,
filename_mode
):
lr, url, codec = None, None, None
# #60
if not overwrite_existing:
codec, _ = item._get_codec(quality)
if codec is not None:
filepath = pathlib.Path(
output_dir) / f"{base_filename}-{codec}.aaxc"
lr_file = filepath.with_suffix(".voucher")
if lr_file.is_file():
if filepath.is_file():
logger.info(
f"File {lr_file} already exists. Skip download."
)
logger.info(
f"File {filepath} already exists. Skip download."
)
return
try:
lr, url, codec = await _reuse_voucher(lr_file, item)
except DownloadUrlExpired:
logger.debug(f"Download url in {lr_file} is expired. Refreshing license.")
overwrite_existing = True
except VoucherNeedRefresh:
logger.debug(f"Refresh date for voucher {lr_file} reached. Refreshing license.")
overwrite_existing = True
is_aycl = item.benefit_id == "AYCL"
if lr is None or url is None or codec is None:
url, codec, lr = await item.get_aaxc_url(quality)
counter.count_voucher()
if is_aycl:
counter.count_aycl_voucher()
if codec.lower() == "mpeg":
ext = "mp3"
else:
ext = "aaxc"
filepath = pathlib.Path(
output_dir) / f"{base_filename}-{codec}.{ext}"
lr_file = filepath.with_suffix(".voucher")
if lr_file.is_file() and not overwrite_existing:
logger.info(
f"File {lr_file} already exists. Skip download."
)
else:
lr = json.dumps(lr, indent=4)
async with aiofiles.open(lr_file, "w") as f:
await f.write(lr)
logger.info(f"Voucher file saved to {lr_file}.")
counter.count_voucher_saved()
dl = NewDownloader(
source=url,
client=client,
expected_types=[
"audio/aax", "audio/vnd.audible.aax", "audio/mpeg", "audio/x-m4a",
"audio/audible"
],
)
downloaded = await dl.run(target=filepath, force_reload=overwrite_existing)
if downloaded.status == Status.Success:
counter.count_aaxc()
if is_aycl:
counter.count_aycl()
elif downloaded.status == Status.DownloadIndividualParts:
logger.info(
f"Item {filepath} must be downloaded in parts. Adding parts to queue"
)
await _add_audioparts_to_queue(
client=client,
output_dir=output_dir,
filename_mode=filename_mode,
item=item,
quality=quality,
overwrite_existing=overwrite_existing,
aax_fallback=False,
download_mode="aaxc"
)
async def consume(ignore_errors):
while True:
cmd, kwargs = await QUEUE.get()
try:
await cmd(**kwargs)
except Exception as e:
logger.error(e)
if not ignore_errors:
raise
finally:
QUEUE.task_done()
def queue_job(
get_cover,
get_pdf,
get_annotation,
get_chapters,
get_aax,
get_aaxc,
client,
output_dir,
filename_mode,
item,
cover_sizes,
quality,
overwrite_existing,
aax_fallback
):
base_filename = item.create_base_filename(filename_mode)
if get_cover:
for cover_size in cover_sizes:
cmd = download_cover
kwargs = {
"client": client,
"output_dir": output_dir,
"base_filename": base_filename,
"item": item,
"res": cover_size,
"overwrite_existing": overwrite_existing
}
QUEUE.put_nowait((cmd, kwargs))
if get_pdf:
cmd = download_pdf
kwargs = {
"client": client,
"output_dir": output_dir,
"base_filename": base_filename,
"item": item,
"overwrite_existing": overwrite_existing
}
QUEUE.put_nowait((cmd, kwargs))
if get_chapters:
cmd = download_chapters
kwargs = {
"output_dir": output_dir,
"base_filename": base_filename,
"item": item,
"quality": quality,
"overwrite_existing": overwrite_existing
}
QUEUE.put_nowait((cmd, kwargs))
if get_annotation:
cmd = download_annotations
kwargs = {
"output_dir": output_dir,
"base_filename": base_filename,
"item": item,
"overwrite_existing": overwrite_existing
}
QUEUE.put_nowait((cmd, kwargs))
if get_aax:
cmd = download_aax
kwargs = {
"client": client,
"output_dir": output_dir,
"base_filename": base_filename,
"item": item,
"quality": quality,
"overwrite_existing": overwrite_existing,
"aax_fallback": aax_fallback,
"filename_mode": filename_mode
}
QUEUE.put_nowait((cmd, kwargs))
if get_aaxc:
cmd = download_aaxc
kwargs = {
"client": client,
"output_dir": output_dir,
"base_filename": base_filename,
"item": item,
"quality": quality,
"overwrite_existing": overwrite_existing,
"filename_mode": filename_mode
}
QUEUE.put_nowait((cmd, kwargs))
def display_counter():
if counter.has_downloads():
echo("The download ended with the following result:")
for k, v in counter.as_dict().items():
if v == 0:
continue
if k == "voucher_saved":
k = "voucher"
elif k == "aycl_voucher":
k = "aycl voucher"
elif k == "voucher":
diff = v - counter.voucher_saved
if diff > 0:
echo(f"Unsaved voucher: {diff}")
continue
echo(f"New {k} files: {v}")
else:
echo("No new files downloaded.")
@click.command("download-flat")
@click.option("--output-dir", "-o", type=click.Path(exists=True, dir_okay=True), default=pathlib.Path().cwd(), help="output dir, uses current working dir as default")
@click.option("--all", is_flag=True, help="download all library items, overrides --asin and --title options")
@click.option("--asin", "-a", multiple=True, help="asin of the audiobook")
@click.option("--title", "-t", multiple=True, help="title of the audiobook (partial search)")
@click.option("--aax", is_flag=True, help="Download book in aax format")
@click.option("--aaxc", is_flag=True, help="Download book in aaxc format incl. voucher file")
@click.option("--aax-fallback", is_flag=True, help="Download book in aax format and fallback to aaxc, if former is not supported")
@click.option("--quality", "-q", default="best", show_default=True, type=click.Choice(["best", "high", "normal"]), help="download quality")
@click.option("--pdf", is_flag=True, help="downloads the pdf in addition to the audiobook")
@click.option("--cover", is_flag=True, help="downloads the cover in addition to the audiobook")
@click.option("--cover-size", type=click.Choice(["252", "315", "360", "408", "500", "558", "570", "882", "900", "1215"]), default=["500"], multiple=True, help="The cover pixel size. This option can be provided multiple times")
@click.option("--chapter", is_flag=True, help="saves chapter metadata as JSON file")
@click.option("--annotation", is_flag=True, help="saves the annotations (e.g. bookmarks, notes) as JSON file")
@start_date_option
@end_date_option
@click.option("--no-confirm", "-y", is_flag=True, help="start without confirm")
@click.option("--overwrite", is_flag=True, help="rename existing files")
@click.option("--ignore-errors", is_flag=True, help="ignore errors and continue with the rest")
@click.option("--jobs", "-j", type=int, default=3, show_default=True, help="number of simultaneous downloads")
@click.option("--filename-mode", "-f", type=click.Choice(["config", "ascii", "asin_ascii", "unicode", "asin_unicode"]), default="config", help="Filename mode to use. [default: config]")
@timeout_option
@click.option("--resolve-podcasts", is_flag=True, help="Resolve podcasts to download a single episode via asin or title")
@click.option("--ignore-podcasts", is_flag=True, help="Ignore a podcast if it have episodes")
@bunch_size_option
@pass_session
@pass_client(headers=CLIENT_HEADERS)
async def cli(session, api_client, **params):
"""download audiobook(s) from library"""
client = api_client.session
output_dir = pathlib.Path(params.get("output_dir")).resolve()
# which item(s) to download
get_all = params.get("all") is True
asins = params.get("asin")
titles = params.get("title")
if get_all and (asins or titles):
logger.error("Do not mix *asin* or *title* option with *all* option.")
click.Abort()
# what to download
get_aax = params.get("aax")
get_aaxc = params.get("aaxc")
aax_fallback = params.get("aax_fallback")
if aax_fallback:
if get_aax:
logger.info(
"Using --aax is redundant and can be left when using --aax-fallback"
)
get_aax = True
if get_aaxc:
logger.warning("Do not mix --aaxc with --aax-fallback option.")
get_annotation = params.get("annotation")
get_chapters = params.get("chapter")
get_cover = params.get("cover")
get_pdf = params.get("pdf")
if not any(
[get_aax, get_aaxc, get_annotation, get_chapters, get_cover, get_pdf]
):
logger.error("Please select an option what you want download.")
raise click.Abort()
# additional options
sim_jobs = params.get("jobs")
quality = params.get("quality")
cover_sizes = list(set(params.get("cover_size")))
overwrite_existing = params.get("overwrite")
ignore_errors = params.get("ignore_errors")
no_confirm = params.get("no_confirm")
resolve_podcats = params.get("resolve_podcasts")
ignore_podcasts = params.get("ignore_podcasts")
bunch_size = session.params.get("bunch_size")
start_date = session.params.get("start_date")
end_date = session.params.get("end_date")
if all([start_date, end_date]) and start_date > end_date:
logger.error("start date must be before or equal the end date")
raise click.Abort()
if start_date is not None:
logger.info(
f"Selected start date: {start_date.strftime('%Y-%m-%dT%H:%M:%S.%fZ')}"
)
if end_date is not None:
logger.info(
f"Selected end date: {end_date.strftime('%Y-%m-%dT%H:%M:%S.%fZ')}"
)
filename_mode = params.get("filename_mode")
if filename_mode == "config":
filename_mode = session.config.get_profile_option(
session.selected_profile, "filename_mode") or "ascii"
# fetch the user library
library = await Library.from_api_full_sync(
api_client,
image_sizes=", ".join(cover_sizes),
bunch_size=bunch_size,
response_groups=(
"product_desc, media, product_attrs, relationships, "
"series, customer_rights, pdf_url"
),
start_date=start_date,
end_date=end_date,
status="Active",
)
if resolve_podcats:
await library.resolve_podcats(start_date=start_date, end_date=end_date)
# collect jobs
jobs = []
if get_all:
asins = []
titles = []
for i in library:
jobs.append(i.asin)
for asin in asins:
if library.has_asin(asin):
jobs.append(asin)
else:
if not ignore_errors:
logger.error(f"Asin {asin} not found in library.")
click.Abort()
logger.error(
f"Skip asin {asin}: Not found in library"
)
for title in titles:
match = library.search_item_by_title(title)
full_match = [i for i in match if i[1] == 100]
if match:
if no_confirm:
[jobs.append(i[0].asin) for i in full_match or match]
else:
choices = []
for i in full_match or match:
a = i[0].asin
t = i[0].full_title
c = questionary.Choice(title=f"{a} # {t}", value=a)
choices.append(c)
answer = await questionary.checkbox(
f"Found the following matches for '{title}'. Which you want to download?",
choices=choices
).unsafe_ask_async()
if answer is not None:
[jobs.append(i) for i in answer]
else:
logger.error(
f"Skip title {title}: Not found in library"
)
# set queue
global QUEUE
QUEUE = asyncio.Queue()
for job in jobs:
item = library.get_item_by_asin(job)
items = [item]
odir = pathlib.Path(output_dir)
if not ignore_podcasts and item.is_parent_podcast():
items.remove(item)
if item._children is None:
await item.get_child_items(
start_date=start_date, end_date=end_date
)
for i in item._children:
if i.asin not in jobs:
items.append(i)
podcast_dir = item.create_base_filename(filename_mode)
odir = output_dir / podcast_dir
if not odir.is_dir():
odir.mkdir(parents=True)
for item in items:
queue_job(
get_cover=get_cover,
get_pdf=get_pdf,
get_annotation=get_annotation,
get_chapters=get_chapters,
get_aax=get_aax,
get_aaxc=get_aaxc,
client=client,
output_dir=odir,
filename_mode=filename_mode,
item=item,
cover_sizes=cover_sizes,
quality=quality,
overwrite_existing=overwrite_existing,
aax_fallback=aax_fallback
)
# schedule the consumer
consumers = [
asyncio.ensure_future(consume(ignore_errors)) for _ in range(sim_jobs)
]
try:
# wait until the consumer has processed all items
await QUEUE.join()
finally:
# the consumer is still awaiting an item, cancel it
for consumer in consumers:
consumer.cancel()
await asyncio.gather(*consumers, return_exceptions=True)
display_counter()
—
Reply to this email directly, view it on GitHub
<#156 (reply in thread)>,
or unsubscribe
<https://github.com/notifications/unsubscribe-auth/AIHG7UNJTXPKJW2ALD6XYJLY2ZQ2BAVCNFSM6AAAAAA5ERDPHOVHI2DSMVQWIX3LMV43SRDJONRXK43TNFXW4Q3PNVWWK3TUHM4DSNJYGAYDO>
.
You are receiving this because you were mentioned.Message ID:
***@***.***>
|
Beta Was this translation helpful? Give feedback.
-
Thank you!
I placed the command file in that directory and this is the error I get:
---
Warning: entry point could not be loaded. Contact its author for help.
Traceback (most recent call last):
  File "C:\Users\Travis\AppData\Local\pipx\pipx\venvs\audible-cli\Lib\site-packages\audible_cli\plugins.py", line 41, in decorator
    mod = import_module(mod_name)
          ^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Program Files\Python312\Lib\importlib\__init__.py", line 90, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 995, in exec_module
  File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
  File "C:\Users\Travis\AppData\Local\Audible\plugins\cmd_download-chapters-flat.py", line 24, in <module>
    from audible_cli.downloader import Downloader as NewDownloader, Status
ModuleNotFoundError: No module named 'audible_cli.downloader'
---
I checked and I am running audible-cli ver. 0.2.6. I will try updating to the latest version you referenced in an earlier email and see if that makes a difference.
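A quick way to confirm what the traceback points at (a sketch; it assumes audible_cli exposes __version__, which recent releases do):

import importlib.util

import audible_cli

print(audible_cli.__version__)                              # e.g. 0.2.6
print(importlib.util.find_spec("audible_cli.downloader"))   # None on 0.2.x, a module spec on 0.3.x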
…On Sat, Mar 30, 2024 at 1:46 PM mkb79 wrote:
Run audible -v debug wishlist list and it will print out a line like debug:
Plugin dir: /home/marcel/.audible/plugins. This is the dir.
-
I agree with your suggestion of --chapter and --chapter-flat. But, in my opinion, I would have flat be the default and chapter-tree be the option. I know tree is Audible's default, but I can't think of a scenario where someone, using your tools, would prefer the tree over the flat version. Dealing with nested files has only caused more work for me. But I respect your preference and am overjoyed that you are building in an option for flat files, regardless of implementation.
…On Sun, Mar 31, 2024, 9:38 AM mkb79 wrote:
@Travillion <https://github.com/Travillion>
There are currently two chapter types which are delivered by the API. In my implementation --chapter-type flat can be provided to output flat chapters. If you leave out this option or provide --chapter-type tree, the output is tree. This is not really intuitive. Maybe I should switch to --chapter and --chapter-flat as options, which can't be mixed together? This is more logical in my opinion.
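For what it's worth, a small sketch of how two mutually exclusive flags could map to the API parameter; the option names follow the discussion above, not an actual audible-cli release.

import click

@click.command("download")
@click.option("--chapter", is_flag=True, help="save chapter metadata (tree layout)")
@click.option("--chapter-flat", is_flag=True, help="save chapter metadata (flat layout)")
def cli(chapter, chapter_flat):
    # Reject the combination up front, then translate to the API parameter.
    if chapter and chapter_flat:
        raise click.UsageError("--chapter and --chapter-flat can't be mixed.")
    chapter_titles_type = "Flat" if chapter_flat else "Tree"
    click.echo(f"chapter_titles_type={chapter_titles_type}")

if __name__ == "__main__":
    cli()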