The technical term for what is occurring is posterization; you can read more about it here:
http://www.cambridgeincolour.com/tut...terization.htm
The various error distribution schemes (Floyd–Steinberg and its relatives) work in a manner similar to MP3, except for the eyes instead of the ears: they attempt to hide the loss of data through perceptual tricks. The various methods work better with some documents than with others, and their effectiveness varies from one person to the next.
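For the curious, here is a minimal sketch of classic Floyd–Steinberg error diffusion (the function and parameter names are my own, not from any iLiad or poppler code): quantize each pixel to the nearest available grey level, then push the leftover error onto the not-yet-visited neighbours so the average brightness survives.

```python
def floyd_steinberg(pixels, width, height, levels=16):
    """Quantize a flat 8-bit greyscale image to `levels` grey levels,
    diffusing each pixel's quantization error to unvisited neighbours."""
    step = 255.0 / (levels - 1)
    img = [float(p) for p in pixels]   # work in floats to carry error
    out = [0] * (width * height)
    for y in range(height):
        for x in range(width):
            i = y * width + x
            old = img[i]
            new = round(old / step) * step   # nearest available level
            out[i] = int(min(255, max(0, new)))
            err = old - new
            # Classic Floyd-Steinberg weights: 7/16, 3/16, 5/16, 1/16
            if x + 1 < width:
                img[i + 1] += err * 7 / 16
            if y + 1 < height:
                if x > 0:
                    img[i + width - 1] += err * 3 / 16
                img[i + width] += err * 5 / 16
                if x + 1 < width:
                    img[i + width + 1] += err * 1 / 16
    return out
```

Run it over a flat mid-grey patch and you get a checkerboard-ish mix of the two nearest levels rather than one flat wrong value, which is exactly the perceptual trick.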
One thing though is generally accepted: using any of them on text glyphs is not a "good thing".
As others have already commented elsewhere in this thread, text in PDFs was at times not looking too sharp on the iLiad. When I removed the error distribution I immediately noticed the extra sharpness. After one struggles as hard as I did to get a working unit, one tends to think more fondly of one's sickly device, eh what?
Rather than applying error distribution to black text, where there are no errors needing to be distributed (but we find something to distribute anyway), these methods should be applied where they make the most sense: on images.
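The arithmetic behind that remark is worth spelling out. With 16 grey levels spaced 17 apart, pure black (0) and pure white (255) already sit exactly on levels, so the residual error is zero and diffusion has literally nothing to do for clean text; a mid-grey photo pixel, by contrast, does leave a residual worth spreading. A tiny check:

```python
# 16 grey levels spaced 255/15 = 17 apart: 0, 17, 34, ..., 255.
step = 255 / 15

def quantize(v):
    """Snap an 8-bit value to the nearest of the 16 grey levels."""
    return round(v / step) * step

# Text glyphs are (ideally) pure black or pure white: both are
# already exact grey levels, so the error to diffuse is zero.
for v in (0, 255):
    assert v - quantize(v) == 0

# A mid-grey image pixel leaves a residual that error diffusion
# would spread to its neighbours.
print(128 - quantize(128))  # -> -8.0
```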
Everyone needs to keep in mind what happened yesterday. I took a cold toolchain and got it working, took a cold tarball and got it to compile. I then walked into that cold source and found a big flashing sign that read "big bottleneck right here". The source didn't say "This code is sub-optimal because we found trying to render to 8-bit greyscale resulted in the following ...."
I fixed the bottleneck, then popped out two more revisions.
Yes, I noticed some things were better; in particular, some scanned magazines I have looked much better. But other scanned items looked worse. I also noticed that these issues seemed to track with the compression method being employed.
I marked this one down for "further investigation", because as others have already commented: when you go from 16.7 million colors down to 16 (roughly a million to one), some things just aren't going to look right.
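You can see why straight quantization bands so badly with a two-line experiment: a smooth 8-bit ramp holds 256 distinct greys, and collapsing to 4 bits leaves only 16 flat bands where the gradient used to be.

```python
# A smooth 8-bit ramp: 256 distinct grey values.
ramp = list(range(256))

# Collapse to 16 levels (4 bits) by dropping the low 4 bits --
# the straight quantization a renderer does with no dithering.
posterized = [(v >> 4) << 4 for v in ramp]

print(len(set(ramp)), "->", len(set(posterized)))  # prints: 256 -> 16
```

Those 16 surviving values are the visible "steps" in what should be a continuous gradient, which is posterization in a nutshell.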
I've had an iLiad benchmark I've been calling the Jack Rabbit Benchmark. It's a PDF of a menu from a local catering outfit. Performance of this simple two-page document on the iLiad can kindly be described as "sub-optimal". In the process of fixing this document's performance issues, I believe I will turn up the source of some of the image issues.
I think I'm going to find that poppler has some "growth opportunities" in its code when it comes to rendering images into 8bpp grey.
Can I create a faster error distribution routine for the iLiad? You bet: I'll get out my PXA-255 manual and throw some Wireless MMX opcodes into the code to speed it up nicely. But I only want to apply that to images, not glyphs.
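The "images yes, glyphs no" policy boils down to a dispatch on what the PDF drawing operation actually is. I don't know poppler's internals well enough yet to say where that hook lives, so everything below is a hypothetical sketch with made-up names, not poppler's API:

```python
def threshold(pixels):
    """Hard black/white cut: keeps glyph edges crisp."""
    return [0 if p < 128 else 255 for p in pixels]

def dither_to_16(pixels):
    """Stand-in for a real error-diffusion pass (placeholder:
    plain 4-bit truncation; a real routine would diffuse error)."""
    return [(p >> 4) << 4 for p in pixels]

def render_run(pixels, kind):
    """Hypothetical dispatch: glyph runs get a hard threshold,
    image runs get the expensive perceptual treatment."""
    return dither_to_16(pixels) if kind == "image" else threshold(pixels)
```

The point is only the shape of the decision: the costly routine runs where it pays off, and text never goes near it.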
In the meantime no one is paying me to work on this stuff: I owe, I owe, it's off to work I go...