Yesterday, 06:29 AM   #16
un_pogaz
Chalut o/
un_pogaz ought to be getting tired of karma fortunes by now.
 
 
Posts: 641
Karma: 718860
Join Date: Dec 2017
Device: Kobo
Quote:
Originally Posted by BetterRed
The Shelf 'pages' calculator uses the book format file size to compute its value… that generates disk i/o

But that information is already available in the 'data' table of metadata.db. If the bookshelf 'page' calculator used that information, I suspect it would speed up the initial scan by quite a lot… especially if that table is already loaded into calibre's in-memory rendition of the database.

Attachment 220757

BR
Calibre does precisely that.
Also, the format file size stored in the 'pages' table is not used to compute the value; it serves as a guard. When you trigger a page recount for a book, Calibre compares the file size in the 'data' table with the size saved in the 'pages' table, and if they are equal it skips the actual recount, on the assumption that the book is identical to the last time it was counted.
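To make the guard concrete, here is a minimal Python sketch of that comparison. The 'data' table and its uncompressed_size column do exist in metadata.db, but the 'pages' table layout, the function name and the connection handling are my own assumptions for illustration, not the actual plugin code.

Code:
import sqlite3

def needs_recount(conn: sqlite3.Connection, book_id: int, fmt: str) -> bool:
    """Return True only when the stored size no longer matches the current format size."""
    # Current size of the format file, as recorded in calibre's 'data' table.
    row = conn.execute(
        "SELECT uncompressed_size FROM data WHERE book = ? AND format = ?",
        (book_id, fmt.upper()),
    ).fetchone()
    if row is None:
        return False  # format is gone, nothing to recount
    current_size = row[0]

    # Size remembered at the time of the last count (the 'pages' schema is assumed here).
    row = conn.execute(
        "SELECT size FROM pages WHERE book = ? AND format = ?",
        (book_id, fmt.upper()),
    ).fetchone()
    saved_size = row[0] if row else None

    # Equal sizes -> the file is assumed unchanged and the expensive count is skipped.
    return saved_size != current_size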

As a result, if you request a recount of the entire library (without checking the force option), the second scan will be even faster and will generate less disk i/o, because unchanged books are skipped.
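A whole-library pass then reduces to something like the sketch below, where count_pages_for stands in for whatever routine actually opens the file and counts pages (a hypothetical name, as is the exact behaviour of the force flag):

Code:
def recount_library(conn, count_pages_for, fmt="EPUB", force=False):
    for (book_id,) in conn.execute("SELECT id FROM books"):
        if force or needs_recount(conn, book_id, fmt):
            count_pages_for(book_id, fmt)  # heavy path: opens the file, causes disk i/o
        # otherwise the book is skipped without touching the file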

Last edited by un_pogaz; Yesterday at 06:37 AM.