Quote:
Originally Posted by kiwidude
The only other option is to fix the memory leaks. However having helped Kovid track down some memory leak issues in the metadata download over a 5 hour period one Sunday night I know just how painful and difficult this is.
Thanks for your timely and well-written reply! I fully understand the problems you outlined, and I am completely fine with waiting however long optimization takes; after all, the plugin already works like a charm, and not everyone will find keeping extraction batches under 1,000 a problem, lol.
I am definitely a loyal Calibre user... nothing like it... so no complaints from me about waiting. On the same note, contributions from plugin developers matter just as much to my loyalty as the viability of the main platform itself.
In the meantime, I have just one more humble suggestion, which floated in from the ether overnight: how about a new tag created by and for ExtractISBN, basically the antithesis of identifier_updated? Something like extract_failed, to mark documents for which ExtractISBN came back negative. A search such as [extract_failed:false & identifier:false] could then select a fresh batch for extraction. As things stand, I find myself rehashing the same files, with little in the way of keeping track. The tag wouldn't even have to be persistent: it could have a half-life, reset when calibre restarts, or be cleared in one batch with a command once it's no longer needed. Hell, as long as the failed files stay marked until the next invocation of ExtractISBN, they could be selected and copy-deleted to a container library to get them out of the way. That would cost some time, but it would still be more efficient than the process is at the moment. Just an idea.
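To illustrate what I mean, here is a rough, plain-Python sketch of the batch-selection logic: skip anything that already has an ISBN or already carries the proposed extract_failed marker, and take the rest up to a batch limit. (The function name, the dict-based library, and extract_failed itself are all hypothetical; this is not actual calibre or ExtractISBN API.)

```python
def next_batch(books, limit=1000):
    """Return up to `limit` book ids whose identifiers contain neither
    an ISBN nor the hypothetical 'extract_failed' marker, i.e. books
    that still need an extraction attempt."""
    batch = []
    for book_id, identifiers in books.items():
        # Already has an ISBN, or already tried and failed: skip it.
        if 'isbn' in identifiers or 'extract_failed' in identifiers:
            continue
        batch.append(book_id)
        if len(batch) >= limit:
            break
    return batch

# Toy library: book 2 already failed, book 3 already has an ISBN,
# so only books 1 and 4 belong in the next batch.
library = {
    1: {},
    2: {'extract_failed': 'true'},
    3: {'isbn': '9780316769488'},
    4: {'mobi-asin': 'B000FC0SIM'},
}
print(next_batch(library))  # -> [1, 4]
```

Something equivalent, expressed as a saved search on the marker, would let a batch be re-selected with one click instead of eyeballing the library each time.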
Anyway, I must apologize for raising an issue that was previously discussed; I only skimmed the thread. On the other hand, I was taught it never hurts to ask.
Viva Calibre!