Is a multi-GB Wikipedia ePub practical? Unfortunately, I think the answer is no.
At one level everything is ok: internally an ePub is just a directory tree of files, and for Wikipedia each article would be one file. The ePub's metafiles (the .opf and .ncx) would be much bigger than usual, but perhaps you could still design a reasonable TOC. Adobe Digital Editions (ADE) only processes each file (article) when it needs to display it on screen, so that part is fine (although other ePub readers might work differently).
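To make "bigger than usual" concrete, here is a rough back-of-envelope sketch in Python of what a single manifest entry in the .opf might look like and how it scales. The entry format, file naming, and the article count are my assumptions for illustration, not measurements of any real Wikipedia dump.

```python
# Rough sketch of why the .opf metafile balloons: the manifest needs one
# <item> entry per article. Entry format, file names, and the article
# count below are assumptions, not taken from a real build.

def manifest_item(index: int) -> str:
    # Roughly what an ePub packager would emit for one article.
    return (f'<item id="a{index}" href="articles/{index:07d}.xhtml" '
            f'media-type="application/xhtml+xml"/>')

sample = manifest_item(0)
articles = 6_000_000                      # assumed article count
print(sample)
print(f"~{len(sample)} bytes per manifest entry")
print(f"~{articles * len(sample) / 1e6:.0f} MB of manifest XML alone")
```

At roughly 80 bytes per entry and millions of articles, the manifest alone runs to hundreds of MB of XML before you even get to the NCX, which is why I say "perhaps" about the TOC.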
The basic problem is with the container, i.e. the ZIP file that is the .epub. As far as I can tell, ZIP was never designed to be an efficient replacement for a full filesystem. One approach an ePub reader could take is to uncompress the entire document to a temporary directory tree when the document is opened and then work from that; that is clearly impractical for a multi-GB ePub. So assume ADE isn't doing this but is instead doing on-the-fly decompression. What happens when you follow a link to another article? ADE has to locate the relevant file inside the huge ZIP archive without scanning the entire file, and I don't think that can be done efficiently enough at this scale.
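For what it's worth, here is a minimal sketch of that on-the-fly access pattern, using Python's zipfile module as a stand-in for whatever ADE does internally; the file and member names are hypothetical. It also hints at where the cost hides: a ZIP reader normally locates members via the central directory at the end of the archive, and with millions of entries that directory could itself easily run to hundreds of MB that has to be parsed and held before any single article can be pulled out.

```python
# Minimal sketch of on-the-fly access to one article in a huge .epub,
# with Python's zipfile standing in for the reader's own ZIP code.
# File and member names are hypothetical.
import zipfile

def read_article(epub_path: str, member: str) -> bytes:
    # Opening the archive parses the ZIP central directory, which lists
    # every member. With millions of articles that directory alone is
    # very large, and it must be read before any entry can be located.
    with zipfile.ZipFile(epub_path) as zf:
        with zf.open(member) as f:   # seek to the entry and inflate just it
            return f.read()

# Example usage (hypothetical names):
# html = read_article("wikipedia.epub", "OEBPS/articles/0000042.xhtml")
```

Even if the lookup itself is cheap once the directory is in memory, doing that on a resource-limited e-reader is exactly the part I doubt.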
I am hand-waving a bit in the above, because I don't know all the details of how ZIP and ADE work. However, I am certain that any ebook reader (computer code) designed for books in the MB range will fail on books in the GB range, particularly when running on resource-limited hardware.