Quote:
Originally Posted by hiperlink
What I'm asking is: is there a possibility to reuse the already downloaded news/HTML files for conversion to different output formats without re-downloading them? (E.g. I first create a .mobi file from the recipe via the CLI/ebook-convert, then want to create an .epub from the same recipe and don't want to waste bandwidth; I'm currently on an EC2 instance.)
This is not an uncommon request. The usual answer is (a sketch of steps 3-5 follows the list):
1) use the Windows Task Scheduler or cron
2) to run a script or batch file
3) that runs ebook-convert first to build the recipe-created book,
4) then runs ebook-convert again to convert the recipe-created ebook to the 2nd, 3rd, 4th formats,
5) then runs calibredb with the add option to add the books to calibre.
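For example, the script in step 2 could look something like this (a minimal sketch only; the recipe and output file names are placeholders, and cron or Task Scheduler would simply run this Python file on whatever schedule you like):
Code:
import subprocess

# Step 3: run the recipe once; calibre downloads the articles and builds news.mobi
subprocess.check_call(['ebook-convert', 'mynews.recipe', 'news.mobi'])

# Step 4: convert the already-built book to further formats (no re-download happens here)
subprocess.check_call(['ebook-convert', 'news.mobi', 'news.epub'])

# Step 5: add the results to the calibre library
subprocess.check_call(['calibredb', 'add', 'news.mobi', 'news.epub'])
The second ebook-convert run works on the already-built news.mobi, so nothing is downloaded again.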
Quote:
My 2nd question is: how does one create a recipe/parse_index for a page that has no RSS AND has multiple section pages? E.g. there is a technology section on a site, and the last link is "next page" (on every page but the last), and I want to add the "h2" article items with the same article date to the feed from every page...
Thanks for any advice!
parse_index is for non-RSS feed sites. If you have an RSS feed, you don't need parse_index. To do what you want, just grab the 2nd and 3rd pages with index_to_soup and build the feed.
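Something along these lines, for instance (a rough sketch, not tested against any real site: the recipe name, the INDEX URL, the section title and the "next page" link text are all placeholders, and you may need to make relative links absolute and filter the h2 items by date for your site):
Code:
from calibre.web.feeds.news import BasicNewsRecipe

class TechSection(BasicNewsRecipe):
    title = 'Example Tech Section'
    INDEX = 'http://www.example.com/technology/'  # placeholder section URL

    def parse_index(self):
        articles = []
        url = self.INDEX
        while url:
            soup = self.index_to_soup(url)
            # collect every h2 headline on the current page
            for h2 in soup.findAll('h2'):
                a = h2.find('a', href=True)
                if a is not None:
                    articles.append({'title': self.tag_to_string(a),
                                     'url': a['href'],
                                     'date': '', 'description': ''})
            # follow the "next page" link if there is one, otherwise stop
            url = None
            for a in soup.findAll('a', href=True):
                if self.tag_to_string(a).strip().lower() == 'next page':
                    url = a['href']
                    break
        return [('Technology', articles)]
parse_index keeps calling index_to_soup on each page until there is no "next page" link, then returns a single feed built from all the collected articles.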