Quote:
Originally Posted by ibu
Let's think about the worst case of pre-parsing for an example book:
* It's a book of 400 pages (the number shown in the footer).
* The book has 15 chapters in 15 XHTML documents.
* The book contains 80 endnotes.
* The book has 40 pages that contain links to endnotes.
Some of these pages contain 1, some 2, others 3 endnote links (the maximum).
* All endnotes are in the file endnotes.xhtml. That document has 8 pages.
That means there are 10 endnotes per page on average.
In such a book you can expect that pre-parsing means endnotes.xhtml is parsed 40 times, on top of what a firmware that does no pre-parsing would do.
But maybe I'm completely wrong here; I'm not a developer, as I mentioned.
I just suppose that there are smart ways to implement pre-parsing.
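The 40 repeated parses would only happen if the firmware threw the parse tree away after every lookup. One of the "smart ways" ibu supposes could be as simple as caching each parsed file for the session. A minimal sketch in Python (the cache, the function name, and the id-based anchor lookup are my assumptions, not any firmware's actual code):

Code:
from xml.etree import ElementTree

_parsed_docs = {}  # hypothetical cache: filename -> parsed tree

def resolve_link(target):
    # target looks like "endnotes.xhtml#note17"
    filename, _, anchor = target.partition('#')
    if filename not in _parsed_docs:
        # first link into this file: parse once and keep the tree
        _parsed_docs[filename] = ElementTree.parse(filename)
    # every later link into the same file reuses the cached tree
    return _parsed_docs[filename].find(".//*[@id='%s']" % anchor)

With a cache like that, the quoted scenario costs one parse of endnotes.xhtml instead of 40.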
Here's a worse case:
- A 15-chapter book.
- Each chapter in a separate flow.
- Chapters between 50K and 250K in size.
- Chapter 3 (50K) has a link in its first paragraph (the first link in the flow) to a quotation in the last paragraph of chapter 7.
- Chapter 7 is 250K long and the last paragraph is 249K into this flow.
So an extra 250K has to be parsed (a 500% overhead relative to chapter 3's own 50K) to pre-cache the linked text.
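And a streaming parser that bails out at the anchor doesn't escape this case: the cost is bounded by where the target sits in the flow. A rough sketch of that idea (iterparse is standard Python ElementTree; the function name and id-based anchors are my assumptions):

Code:
from xml.etree.ElementTree import iterparse

def pre_cache_target(filename, anchor):
    # Stream-parse the flow and stop as soon as the anchor is found.
    for _, elem in iterparse(filename, events=('end',)):
        if elem.get('id') == anchor:
            return elem  # element is complete at its 'end' event
    return None

The early-out helps when the target sits near the top of a flow, but this worst case is exactly the one where it can't: the anchor is in the last paragraph, so nearly the whole 250K gets parsed regardless.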