10-15-2022, 12:40 PM   #1010
chaley
Grand Sorcerer
chaley ought to be getting tired of karma fortunes by now.
 
Posts: 12,500
Karma: 8065348
Join Date: Jan 2010
Location: Notts, England
Device: Kobo Libra 2
Quote:
Originally Posted by kiwidude
Iterating just across the author names would ordinarily be the faster approach. But the downside is that I then do not have the details of the books for that author, and instead have to run an additional search to retrieve them whenever an author shows up as a duplicate.
The calibre db method new_api.books_for_field(field_name, item_id) might help here. For example, given the field 'authors' and an author item id NNN, it returns all the book ids that have that author. It is significantly faster than searching.
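As an illustration, here is a minimal sketch of that pattern: collect the book ids per author with books_for_field and keep only the authors linked to more than one book. The FakeNewApi class below is a hypothetical stand-in for calibre's real db.new_api (in a plugin you would use the actual db object); only the call shape of books_for_field is mimicked here.

```python
class FakeNewApi:
    """Hypothetical stand-in mimicking calibre's Cache.books_for_field call shape."""

    def __init__(self, author_to_books):
        self._author_to_books = author_to_books

    def books_for_field(self, field_name, item_id):
        # calibre returns the set of book ids linked to the given item
        return frozenset(self._author_to_books.get(item_id, ()))


def books_for_duplicate_authors(new_api, author_ids):
    """Return {author_id: book_ids} for authors that appear on more than one book."""
    result = {}
    for author_id in author_ids:
        book_ids = new_api.books_for_field('authors', author_id)
        if len(book_ids) > 1:  # author is linked to multiple books
            result[author_id] = book_ids
    return result


api = FakeNewApi({1: [10, 11], 2: [12], 3: [13, 14, 15]})
dups = books_for_duplicate_authors(api, [1, 2, 3])
# Authors 1 and 3 are linked to more than one book; author 2 is not
```

With the real new_api this avoids building and running a search expression per author: the book ids come straight from the field map.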

Then, to get the full metadata for a book with id book_id, you can use
Code:
new_api.get_proxy_metadata(book_id)
This returns a 'lazy' metadata object that evaluates metadata fields only when they are requested. If you need only a few metadata fields, the lazy fetch is much faster.
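To make the 'lazy' behaviour concrete, here is a toy sketch (not calibre's actual ProxyMetadata implementation) that resolves a field only on first attribute access and caches the result, which is why requesting just a few fields stays cheap:

```python
class LazyMetadata:
    """Toy lazy metadata object: fields are loaded only on first access."""

    def __init__(self, book_id, loader):
        self._book_id = book_id
        self._loader = loader        # callable(book_id, field) -> value
        self._fetch_count = 0        # how many simulated db lookups occurred

    def __getattr__(self, field):
        # Only called when 'field' is not already an instance attribute,
        # so each field is fetched at most once
        value = self._loader(self._book_id, field)
        self._fetch_count += 1
        setattr(self, field, value)  # cache: later accesses skip the loader
        return value


def fake_db_lookup(book_id, field):
    # Hypothetical stand-in for a per-field database fetch
    return {'title': 'Example Book', 'authors': ['A. Writer']}[field]


mi = LazyMetadata(42, fake_db_lookup)
title = mi.title   # first access triggers exactly one lookup
again = mi.title   # cached: no further lookup
```

The real get_proxy_metadata works against the calibre db rather than a loader callable, but the payoff is the same: fields you never touch are never fetched.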