Actually, I just ran a test on a library of 20k books with random metadata, adding 100 books (also with random metadata), and the check for duplicates is definitely not causing the slowdown. In fact, the cumulative time spent in the duplicate-check function is not even in the top 10.
As further corroboration, adding the 100 books took less than a minute. So the slowdown is elsewhere.
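For anyone who wants to reproduce this, here is a minimal sketch of the kind of profiling run I mean, using Python's cProfile and ranking functions by cumulative time. `add_books` is a hypothetical stand-in for the real bulk-add entry point (including its duplicate check), not an actual API:

```python
import cProfile
import io
import pstats

def add_books(library, books):
    # Hypothetical stand-in: a naive bulk add with a duplicate check.
    # Replace with the real import call when profiling for real.
    for book in books:
        if book not in library:
            library.append(book)

def profile_add(library, books, top=10):
    """Profile a bulk add and print the top functions by cumulative time."""
    profiler = cProfile.Profile()
    profiler.enable()
    add_books(library, books)
    profiler.disable()
    stream = io.StringIO()
    pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(top)
    print(stream.getvalue())

# Toy data standing in for 20k existing books plus 100 new ones.
profile_add(list(range(20_000)), list(range(19_950, 20_050)))
```

If the duplicate check were the bottleneck, it would dominate the cumulative-time column in that output; in my test it didn't even appear in the top 10.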