08-04-2024, 03:44 PM   #16
Eric Muller
Connoisseur
 
Posts: 87
Karma: 33940
Join Date: May 2010
Device: Opus
Quote:
Originally Posted by JSWolf
Some say it's 1024 compressed characters and you say 1024 uncompressed characters. Do you have a link to show if it's compressed or uncompressed? I cannot seem to find anything.
I don't have evidence, but I worked at Adobe back then; some of the code I worked on was incorporated into the RMSDK, and I had discussions with the core RMSDK engineers.

You can also think about how you would implement "go to page #". First, you sum the number of synthetic pages of each spine element until you find the element that contains the page of interest. For those elements, the zip headers give you both the compressed and the uncompressed size, so you could use either one. Then you need to uncompress that element and do layout until you reach the "screen" that contains the page boundary. For example, you generate the first "screen" and then know that it consumes bytes 0 through s1-1 of the uncompressed HTML; the second "screen" consumes bytes s1 through s2-1, still of the uncompressed HTML. Relating those positions back to compressed offsets would be an unnecessary complication, and most likely a further distortion of the intuitive notion that a page corresponds to some fixed amount of content (e.g. one page may compress well while another does not).
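
To make the first half of that concrete, here is a minimal sketch in Python (not RMSDK code) of mapping a synthetic page number to a spine item and an uncompressed byte offset, assuming the 1024-uncompressed-bytes-per-page rule. PAGE_SIZE, the function name, and the spine list are illustrative assumptions; a real reader would take the spine order from the OPF.

[CODE]
# A minimal sketch, not RMSDK code: map a synthetic page number to a spine
# item and an uncompressed byte offset, assuming 1024 uncompressed bytes of
# (X)HTML per synthetic page.
import math
import zipfile

PAGE_SIZE = 1024  # assumed: uncompressed bytes per synthetic page

def locate_page(epub_path, spine_hrefs, page_number):
    """Return (href, uncompressed_byte_offset) for a 1-based synthetic page."""
    with zipfile.ZipFile(epub_path) as zf:
        pages_so_far = 0
        for href in spine_hrefs:
            info = zf.getinfo(href)
            # file_size is the uncompressed size; compress_size is the stored size.
            pages_here = math.ceil(info.file_size / PAGE_SIZE)
            if page_number <= pages_so_far + pages_here:
                # Byte offset of the page boundary within this element's
                # uncompressed HTML.
                offset = (page_number - pages_so_far - 1) * PAGE_SIZE
                return href, offset
            pages_so_far += pages_here
    raise ValueError("page number is past the end of the book")
[/CODE]

The remaining step, laying out screens and checking which byte range s_k through s_(k+1)-1 covers the returned offset, belongs to the rendering engine and is not shown; note that nothing on this path ever needs a compressed position.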

Anyway, the matter can be settled easily, I think. Just look at the number of synthetic pages a reader reports, and at the compressed and uncompressed sizes, for an ordinary EPUB (i.e. one that compresses well).
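
Here is a rough sketch of that check, again in Python; the extension-based filter for the content documents is a simplification I'm assuming for brevity (a careful check would read the spine from the OPF instead):

[CODE]
# A rough check, not a definitive test: sum the compressed and uncompressed
# sizes of the content documents and divide each by 1024.
import zipfile

def page_estimates(epub_path):
    compressed = uncompressed = 0
    with zipfile.ZipFile(epub_path) as zf:
        for info in zf.infolist():
            if info.filename.lower().endswith((".xhtml", ".html", ".htm")):
                compressed += info.compress_size
                uncompressed += info.file_size
    return compressed // 1024, uncompressed // 1024
[/CODE]

For an EPUB that compresses well the two estimates differ by a factor of several, so whichever one matches the reader's synthetic page count answers the question.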