@windom
That's pretty much my workflow too, and I find it easy to work with the one-library-at-a-time limitation:
When I don't think a batch has many books that will duplicate/replace existing ones, I fix up the metadata, etc., and move them into the main library as you described.
When I think a batch has many duplicates, and I don't want to waste time unnecessarily (re)processing them, here's what I do:
Clean up the authors & titles, and run Extract ISBN & Count Pages (words).
Then I move them to the main library and run Find Duplicates.
Then I can make pretty good decisions about what to do with the new duplicates -- junk them, replace the old copy & preserve the metadata (merge them), or send them back to the templib for more processing, along with the non-duplicate new books.
It's pretty easy to sort the library by Date [added] to identify the books I just added and now want to move back to the templib for further processing.
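For what it's worth, that last step can also be scripted if you ever want it outside the GUI -- here's a rough sketch using calibredb's JSON output (the library path is a placeholder, and calibredb wants the GUI closed or pointed at the content server):

```python
# Rough sketch: list books added to the main library in the last week,
# using calibredb's date-added search. Adjust the library path for your setup.
import json
import subprocess

LIBRARY = "/path/to/main-library"  # placeholder path

out = subprocess.run(
    [
        "calibredb", "list",
        "--with-library", LIBRARY,
        "--search", "date:>7daysago",     # calibre's date-added search syntax
        "--fields", "title,authors,isbn",
        "--for-machine",                  # emit JSON instead of a text table
    ],
    capture_output=True, text=True, check=True,
).stdout

for book in json.loads(out):
    print(book.get("id"), book.get("title"), "--", book.get("authors"))
```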
Having said that, I would certainly use a multi-library duplicate finder if it existed -- but I'm pretty comfortable with the current setup and adding multi-library capability seems like a lot of work.
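In the meantime, if anyone wants a quick-and-dirty cross-library pre-check before deciding whether a batch is worth the templib round-trip, something like the sketch below works directly against the two libraries' metadata.db files (read-only; the paths are placeholders, and it only catches exact title/author matches -- nothing like Find Duplicates' fuzzy/ISBN matching):

```python
# Very rough cross-library duplicate pre-check: compare title/author pairs
# from two calibre libraries' metadata.db files. Read-only; paths are placeholders.
import sqlite3

def title_author_pairs(library_path):
    # Each calibre library keeps its data in metadata.db at the library root.
    con = sqlite3.connect(f"file:{library_path}/metadata.db?mode=ro", uri=True)
    try:
        rows = con.execute("SELECT title, author_sort FROM books").fetchall()
    finally:
        con.close()
    # Normalise case/whitespace so trivial differences don't hide matches.
    return {(t.strip().lower(), (a or "").strip().lower()) for t, a in rows}

main = title_author_pairs("/path/to/main-library")  # placeholder paths
temp = title_author_pairs("/path/to/templib")

for title, author in sorted(temp & main):
    print(f"possible duplicate: {title} / {author}")
```

It's obviously no substitute for the plugin, but it's enough to tell roughly how many duplicates a batch is carrying before you start processing it.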