Originally Posted by itimpi
It is likely that most conversions will be CPU bound rather than RAM limited.
One thing that may be a limiting factor is disc speed combined with disc cache, which is why 32-bit calibre on 64-bit Windows isn't all that bad: unused memory goes to the disc cache, so having plenty of memory can compensate quite a bit for slow discs. This might even mean that 64-bit calibre could sometimes be slower than 32-bit calibre on computers with slow discs and limited memory (less than 8GB?).
When doing very large bulk conversions, the disc cache may currently be poorly utilized?
I don't know exactly how calibre handles large bulk jobs, but it seems that when you transfer to a device or convert, all the books are read, copied to the temp folder, and modified there. Only when every book has been converted or updated, and is sitting ready in temp, does the transfer to the device or back to the library start. With many books, the cache may already have discarded the first books in temp before that happens. Perhaps calibre would be perceived as much faster if each book were processed fully, one at a time: copy to temp, modify, copy to the destination, delete from temp, then repeat for the next book. The cache would then be more likely to be used fully, and if temp and the library are on different discs, more of the file transfer could happen in parallel. But perhaps this would increase the risk of corrupt books or metadata?
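The one-book-at-a-time idea above can be sketched roughly like this. To be clear, this is just an illustration of the proposed processing order, not calibre's actual code: the function name and the "conversion" step (which merely upper-cases the text) are made up for the example, while the copy-to-temp / modify / copy-out / delete sequence is the scheme described above.

```python
import shutil
import tempfile
from pathlib import Path

def process_books_one_by_one(library_books, destination):
    """Hypothetical sketch: process each book fully before starting the
    next, so the freshly written file is still hot in the OS disc cache
    when it is read back for the transfer. The 'conversion' here is a
    placeholder (it just upper-cases the text)."""
    destination = Path(destination)
    destination.mkdir(parents=True, exist_ok=True)
    for book in library_books:
        book = Path(book)
        with tempfile.TemporaryDirectory() as tmp:
            # 1. copy the book from the library to temp
            work_copy = Path(tmp) / book.name
            shutil.copy2(book, work_copy)
            # 2. modify it in temp (stand-in for real conversion)
            work_copy.write_text(work_copy.read_text().upper())
            # 3. copy the result to the destination; the temp copy is
            #    deleted automatically when the context manager exits
            shutil.copy2(work_copy, destination / book.name)
```

Whether this would actually be faster depends on the cache behaviour of the OS and on how calibre parallelises jobs; it only changes the order of I/O, not the total amount of work.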