Large Database Issues
I now have over 9k books, in both epub and mobi formats. The metadata db is 52 MB in size, and the library itself is 36k files and 16.2 GB.
1. Well, calibre (v0.8.11) is sluggish, to say the least. All commands that rely on the db seem to take about 5-30 seconds. Some functions are painful in the extreme. I assume this all comes from the metadata.db being so large. I simply DO NOT recommend this program for anything larger than this unless you have incredible patience and are very happy to risk a lot of heartache.
- can the metadata.db be compacted? how? (a sketch of what I mean is just after this list)
- what is the metadata.db size limit for calibre?
- is the sluggishness due to the db or some other issue?
- is there some way to get back the program responsiveness?
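To be concrete about the first question: as I understand it, metadata.db is an ordinary SQLite file, so by "compacting" I mean something like a SQLite VACUUM. A rough sketch of what I have in mind is below - the path is just a placeholder, and calibre should be closed before running it:

```python
import sqlite3

# Example path only -- point this at your own library's metadata.db
DB_PATH = r"C:\Calibre Library\metadata.db"

# Close calibre first, then rebuild the file in place.
# VACUUM rewrites the whole database, reclaiming space freed by deletes and edits.
conn = sqlite3.connect(DB_PATH, isolation_level=None)  # autocommit; VACUUM cannot run inside a transaction
conn.execute("VACUUM")
conn.close()
```

Whether that would actually help with the sluggishness, I don't know - which is really what I'm asking.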
It seems to me there are plenty of other people out there with dbs of this size, and this one is still growing - I would expect it to double in 2 years. Is this simply the limit of calibre? It's quite disappointing if it is, since the memory available under Windows is extraordinary by comparison. These are relatively early days for ebook popularity, just at the start of a log phase of growth - and this is the only available app out there of any quality.
2. I have used this for about 9 months. The program has crashed three times in that period, and each time it was unable to recover the database. So I put in some self-protection mechanisms, and the latest crash (2 days ago, after a program update) was recovered from an older database. However, I lost all the new files I had added in that period (about 20) and had no simple way to work out what was missing. The process of getting going again is stressful, clunky and unreliable. I did get going again, but it's such a time waster.
So, am I missing something, or should the author seriously consider a more effective backup/restore procedure? At the very simplest, why not implement a system of rolling copies of the metadata.db file? It's so simple (well, of course it's easy for me to say so) - every time the app opens, it could save the db as a rolling copy such as metadata.01, metadata.02, and so on, then on reaching .09 start deleting .01 and keep the cycle going. This way any of 10 different databases could be restored simply and easily. The strategy is clearly rudimentary, but it sounds so easy.
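Just to illustrate the kind of thing I mean, here is a rough sketch of one way to do the rolling copies. The paths and file names are placeholders, and this is something I'd run (or calibre could run internally) just before the library is opened:

```python
import shutil
from pathlib import Path

LIBRARY = Path(r"C:\Calibre Library")   # example path, adjust to your own library folder
DB = LIBRARY / "metadata.db"
KEEP = 9                                # keep metadata.01 .. metadata.09

def rolling_backup():
    # Drop the oldest copy, then shift the rest up by one
    # (.08 -> .09, ..., .01 -> .02) so the cycle keeps going.
    oldest = LIBRARY / f"metadata.{KEEP:02d}"
    if oldest.exists():
        oldest.unlink()
    for i in range(KEEP - 1, 0, -1):
        src = LIBRARY / f"metadata.{i:02d}"
        if src.exists():
            src.rename(LIBRARY / f"metadata.{i + 1:02d}")
    # The freshest copy always ends up as metadata.01
    shutil.copy2(DB, LIBRARY / "metadata.01")

if __name__ == "__main__":
    rolling_backup()
```

That's all a backup strategy of last resort would need - nothing clever, just a known-good copy from each of the last several sessions to fall back on.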
I hugely appreciate everything done to make this program the joy it currently is, but these two issues are really highly limiting (large db size, and the high risk of dependence on a single file that gets easily corrupted). I am hoping I am simply wrong and there is already a solution out there? Please?