Some quick notes:
1) To be honest, we never considered that people would transfer a thousand or more books. The app is designed to be very responsive to sorts and groups at the 50-200 book level.
2) This weekend we are going to redo how memory is used. My suspicion is that we will improve response time (the time before something appears on screen) but reduce throughput (scrolling speed, etc). Unfortunately there is no magic wand: sorting and grouping require *something* to look at all the metadata, and that looking takes time.
3) When CC connects to calibre, it must send the metadata for *every* book back to calibre. This is not an option -- it is what calibre demands. Each network transaction (each book's metadata) requires around 1/3 second (wifi turnaround -- we haven't found a way to improve this), so 1000 books takes around 333 seconds, or about 5.5 minutes total. We might be able to improve this, perhaps by sending more than one book per transaction, but such a fix is not likely to appear in the next few days.
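To make the batching idea concrete, here is a hypothetical back-of-the-envelope sketch (not CC's actual code): if each transaction carries a fixed wifi round-trip cost, putting more than one book's metadata in a transaction divides that overhead by the batch size.

```python
# Hypothetical illustration only: models the fixed per-transaction
# round-trip cost, ignoring the (small) payload transfer time.

ROUND_TRIP_S = 1.0 / 3.0  # observed per-transaction wifi turnaround

def turnaround_overhead(num_books, batch_size):
    """Seconds spent on round-trip overhead for the whole transfer."""
    transactions = -(-num_books // batch_size)  # ceiling division
    return transactions * ROUND_TRIP_S

print(turnaround_overhead(1000, 1))   # one book per transaction: ~333 s
print(turnaround_overhead(1000, 50))  # fifty per transaction: ~6.7 s
```

The numbers are why batching looks attractive: the same 1000 books drop from minutes of turnaround overhead to a few seconds, at the cost of reworking the protocol on both ends.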
4) We should be able to improve the time to send a book. That is in the todo queue.
5) Calibre does not support the notion of cancel from a device. It thinks that a device is a passive disk, and disks that "cancel" are broken disks. We could disconnect, but that is a very large hammer for a potentially small nail.
6) Sorts are already saved. I agree about saving the grouping. That said, saving the group and sort won't save any time: Android can kill the app when it is shoved to the background, and when the app is restarted it must rebuild the displays.
7) Timeouts. Calibre waits 60 seconds for the device to respond. Very large libraries might prevent responding for longer than that. It seems that your large number of books is causing CC to take longer than that to say "hello" back to calibre. We will see if we can do better at "responding" even if CC is busy.
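One common way to keep "responding even while busy" is to answer protocol messages from a network thread and push the slow metadata work onto a worker thread. This is a hypothetical sketch of that pattern, not CC's actual implementation:

```python
import queue
import threading
import time

# Sketch: reply to calibre immediately and queue the expensive work,
# so the reply never waits behind it and the 60-second timeout is safe.

work_queue = queue.Queue()

def worker():
    # Drains queued jobs; slow metadata processing would happen here.
    while True:
        job = work_queue.get()
        if job is None:
            break
        job()
        work_queue.task_done()

def handle_message(msg):
    # Respond at once; defer the expensive part instead of blocking on it.
    if msg == "hello":
        work_queue.put(lambda: time.sleep(0.01))  # stand-in for slow work
        return "ok"  # goes out well inside calibre's 60-second window
    return "unknown"

threading.Thread(target=worker, daemon=True).start()
print(handle_message("hello"))  # replies without waiting for the work
```

The trade-off is that the reply may say "I heard you" before the work is done, so the protocol has to tolerate the real answer arriving later.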
We understand the frustration of people who want to manipulate large numbers of books on their phone. It doesn't work well today, primarily because grouping and sorting operations do not scale well to large data sets on a tiny device. Our problem: it isn't clear whether we can make it work well in both the "normal" and "large" library cases. We will try, but I can't promise anything.