Quote:
Originally Posted by canpolat
BetterRed, davidfor: Thanks for your replies. I started playing with Import List. The problem I face is: it is taking forever to import 4 fields from ~6500 books. It has been running for around an hour now and the import still has not finished. So, I assume it got stuck. Bad luck!
I assume you were importing into an empty library cloned from your actual library - yes/no? On what
evidence did you make the judgement that Import List PI was 'stuck'?
The process of importing a ~6500 row CSV into an empty database would create thousands of Author and Book folders, write ~6500 metadata.opf files into those book folders, and insert many more thousands of rows across dozens of tables in the database. It doesn't surprise me that it wasn't finished after an hour.
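For what it's worth, a cheap way to tell 'slow' from 'stuck' is to watch the metadata.opf count on disk grow while the import runs. A minimal Python sketch, assuming the standard library layout; the library path and the 6500 total are examples you would substitute:

Code:
# Watch import progress by counting metadata.opf files in the library.
# LIBRARY is an assumed example path - substitute your own.
import time
from pathlib import Path

LIBRARY = Path.home() / "Calibre Library"

while True:
    done = sum(1 for _ in LIBRARY.rglob("metadata.opf"))
    print(f"{done} of ~6500 books written so far")
    if done >= 6500:
        break
    time.sleep(60)  # re-check once a minute

If the count keeps climbing between checks, the plugin is still working; if it sits on the same number for a long stretch, then 'stuck' becomes a fair judgement.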
Quote:
Originally Posted by canpolat
davidfor: About zipping the library directory... BetterRed also suggested that, but I thought it would be more difficult for my friend to work that way. But now that both of you are suggesting the same thing (and Import List is stuck), I think I will re-evaluate this idea. So this is basically zipping and unzipping metadata_db_prefs_backup.json and metadata.db, isn't it?
The unexpressed reasons I suggested that you create a 'book-less' copy of your library (i.e. one with no format files) were:
- obviously, to reduce the amount of data needing to be transferred to your friend, and less obviously
- to protect you from breaching the copyright terms of your books, and
- to make sure your friend was starting out with a recoverable library, which means the metadata_db_prefs_backup.json would be included in the archive.
A 'bonus' would be that your friend wouldn't be looking at a cover-less (boring) library. And the metadata.opf files, which are a recovery source, would be in the book folders. And your friend might also be able to source 'better' covers than you already have.
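So the zip would hold a bit more than the two files you named. A minimal sketch of building such a 'book-less' archive, assuming the standard calibre layout (metadata.db and the prefs backup at the library root, metadata.opf and cover.jpg inside each book folder); the paths and archive name are examples:

Code:
# Build a 'book-less' zip of a calibre library: keep the database, the
# prefs backup, the per-book metadata.opf files and covers, but skip
# the (potentially copyrighted) format files such as .epub/.mobi/.azw3.
import zipfile
from pathlib import Path

LIBRARY = Path.home() / "Calibre Library"   # assumed path - adjust
KEEP = {"metadata.db", "metadata_db_prefs_backup.json",
        "metadata.opf", "cover.jpg"}

with zipfile.ZipFile("bookless_library.zip", "w",
                     zipfile.ZIP_DEFLATED) as zf:
    for path in LIBRARY.rglob("*"):
        if path.is_file() and path.name in KEEP:
            # store paths relative to the library root so the zip
            # unpacks as a self-contained library folder
            zf.write(path, path.relative_to(LIBRARY))

Run it with calibre closed so metadata.db isn't being written to mid-archive; your friend then just unzips the result and opens the folder as a library.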
I urge you to create a subset of your library with a couple of hundred books and experiment with that - in the long run it will save you time.
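Since you already have the full CSV, one way to set that experiment up is to slice off the first couple of hundred rows and point Import List at the slice in a test library. A rough sketch; the file names are made up:

Code:
# Take the first 200 data rows of the full CSV to make a small test
# import file. "books_full.csv" / "books_test.csv" are example names.
import csv

with open("books_full.csv", newline="", encoding="utf-8") as src, \
     open("books_test.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    writer.writerow(next(reader))   # keep the header row
    for i, row in enumerate(reader):
        if i >= 200:
            break
        writer.writerow(row)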
BR