Some time ago I was poking around Kobo's filesystem (https://www.mobileread.com/forums/sho...d.php?t=211550) and as part of that I used the e4defrag tool.
However, I didn't even mention doing this in my post, because it didn't really do much. In my humble opinion, there are two reasons why it can't do much:
First reason: it's an SD card we're talking about here, which has no seek time (only a per-block access time). A large file cut into fragments, as long as those fragments aren't smaller than the block size (~32 KiB?), should be read at the same speed regardless of whether its blocks are consecutive or not, because there is no "head" whose "seek" time depends on the "distance" it has to travel.
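If you want to test that claim rather than take my word for it, here's a minimal Python sketch of the idea (the path is a made-up placeholder -- point it at any large file on the card; readahead will still favour the sequential pass somewhat, and the file has to be out of the page cache or you're just timing RAM, which is what the posix_fadvise call tries to ensure):

import os, random, time

PATH = "/mnt/onboard/some_large_file"  # hypothetical: any big file on the card
BLOCK = 32 * 1024                      # the ~32 KiB block size guessed above

fd = os.open(PATH, os.O_RDONLY)
size = os.fstat(fd).st_size
offsets = list(range(0, size - BLOCK, BLOCK))

def timed_read(order):
    # evict this file's pages from the cache, so we time the card and not RAM
    os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
    start = time.time()
    for off in order:
        os.pread(fd, BLOCK, off)       # read one block at the given offset
    return time.time() - start

sequential = timed_read(offsets)
random.shuffle(offsets)                # same blocks, scattered order
scattered = timed_read(offsets)
os.close(fd)

print("sequential: %.1fs  scattered: %.1fs" % (sequential, scattered))

On flash the two times should come out close; on a spinning disk the scattered pass would be dramatically slower.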
Second reason: due to an advantage EXT filesystems had over FAT filesystems in the 90s, a myth developed that Linux filesystems somehow don't suffer from fragmentation at all. NTFS gave Windows the same kind of resistance, but there the defragmenters kept evolving into complex tools that try to place files in an optimised order (as opposed to simply keeping files unfragmented). Linux never got such tools.
As a result, any EXT defragmenter you'll find today will be incredibly simplistic, and therefore useless (which just reinforces the myth, really).
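For what it's worth, you can at least measure how fragmented a file actually is before deciding whether any of this matters: filefrag (from e2fsprogs, the same package that ships e4defrag) prints how many extents a file occupies, and 1 extent means fully contiguous. A small Python wrapper, with a placeholder path:

import re, subprocess

def extent_count(path):
    # filefrag prints something like: "/path/to/file: 3 extents found"
    out = subprocess.run(["filefrag", path], capture_output=True,
                         text=True, check=True).stdout
    match = re.search(r"(\d+) extents? found", out)
    return int(match.group(1)) if match else None

print(extent_count("/path/to/some/file"))  # 1 = contiguous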
As for the database on the FAT partition -- you can defragment the whole FAT partition by connecting the Kobo to a Windows PC and running the built-in defragmenter on it.
But to defragment any single file (such as the database) you can just connect the device to the PC, move the file away from the Kobo, then move it back. The new copy should be defragmented, because it gets written into fresh free space.
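If you'd rather script that than drag files around by hand, the same trick looks roughly like this in Python (the mount point and the database name are assumptions -- on a Linux PC the Kobo typically shows up under something like /media/KOBOeReader; verify the copy before deleting anything, and unmount cleanly afterwards):

import os, shutil

DB  = "/media/KOBOeReader/.kobo/KoboReader.sqlite"   # assumed mount point / name
TMP = os.path.expanduser("~/KoboReader.sqlite.tmp")

size_before = os.path.getsize(DB)
shutil.copy2(DB, TMP)                    # 1. copy the file off the device
assert os.path.getsize(TMP) == size_before, "copy failed, aborting"
os.remove(DB)                            # 2. delete the original, freeing its
                                         #    fragmented clusters
shutil.copy2(TMP, DB)                    # 3. copy it back; FAT allocates the new
                                         #    copy in fresh free space, contiguous
                                         #    if a large enough run is available
os.remove(TMP)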
The FAT filesystem also has a very serious problem with fragmentation of directory structures. If you ever find a tool that runs on NT-based Windows (as opposed to 95/98/ME) and can defragment FAT directories, that could be a huge thing. I personally never found such a tool, and Windows' own defragmentation API will never touch FAT directories (so all "generic" defragmenters that just use the API are powerless).