Quote:
Originally Posted by lumpynose
How do the e-readers process a book when it's being read? I was thinking that they'd extract the css files and have them on the drive or in ram and then stream into ram (unzipping the bytes as needed) the xhtml chapter files. I would think that with doing that that there would be minimal penalty for huge xhtml files.
The reader would need to hold the data structure built from parsing the entire html file in memory; otherwise it would have to reparse from the beginning each time the reader paged backwards (since html generally can't be parsed in reverse).
Although it is not strictly necessary to load the entire html file before beginning to display the results, it is necessary to retain everything in memory until the end of the file is reached. So the size of the html file (or more precisely, the size of the final data structure built from parsing it, which is probably much larger than the file itself) is a limiting factor.
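A minimal sketch of that idea, using Python's standard-library `html.parser` (not anything a real e-reader uses, just an illustration): the parser can be fed the file in small chunks and "display" pages as soon as enough text has arrived, but the stack of open tags (standing in for the full parse state) has to be kept around until the end of the file. The `PageCounter` class and the 1000-characters-per-page figure are made up for the example.

```python
from html.parser import HTMLParser

class PageCounter(HTMLParser):
    # Hypothetical sketch: "renders" (here, just counts) text as it
    # streams in, but must keep the stack of open tags until EOF.
    def __init__(self, chars_per_page=1000):
        super().__init__()
        self.page_size = chars_per_page
        self.open_tags = []   # grows with nesting; freed only at EOF
        self.chars = 0        # characters on the current page
        self.pages = 0        # pages already "displayed"

    def handle_starttag(self, tag, attrs):
        self.open_tags.append(tag)

    def handle_endtag(self, tag):
        if self.open_tags and self.open_tags[-1] == tag:
            self.open_tags.pop()

    def handle_data(self, data):
        self.chars += len(data)
        while self.chars >= self.page_size:
            self.chars -= self.page_size
            self.pages += 1   # a full page could be shown now

doc = "<html><body><p>" + "x" * 2500 + "</p></body></html>"
pc = PageCounter()
for i in range(0, len(doc), 256):  # stream the file in small chunks
    pc.feed(doc[i:i + 256])
pc.close()
print(pc.pages, pc.chars)  # 2 full pages shown, 500 chars left over
```

Note that the first page can be shown long before the file ends, which matches forward paging being fast; but there's no way to jump straight to the last page without having fed the whole file through first, which is why paging backwards into a chapter forces a full reparse.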
From what I have observed on my Glo, there is a noticeable delay in the KePub reader when starting a new chapter, more noticeable when the html file is large, so the KePub reader is probably parsing the entire html file before it displays the first page. But there is no extra delay when starting a new chapter in the ePub reader, so I think the ePub reader parses the html file as it goes. (There is a delay when paging backward to the end of the previous chapter in both readers though, because in that case they both need to parse the whole file from the beginning before they can display the final page.)