05-03-2012, 10:41 AM   #44
geekmaster
Carpe diem, c'est la vie.
Posts: 6,433
Karma: 10773668
Join Date: Nov 2011
Location: Multiverse 6627A
Device: K1 to PW3
I discovered (the hard way, later confirmed in GPL source code) that although the K4 and K5 use an 8-bit eink framebuffer, the pixels inside it must still be packed two pixels per byte JUST LIKE on the K3 and earlier. The only difference is that each byte must contain the SAME 4-bit value in both packed pixel positions. In other words, each byte holds a 4-bit gray value multiplied by 17, which replicates the bottom 4 bits into the top 4 bits.

According to the GPL source code for the K5 eink drivers, some eink panels REQUIRE that the bottom 4 bits be identical to the top 4 bits, or strange behavior can be expected (as can be seen in the screenshots of my paldemo program in another thread).

I added the "(c>>4)*17" term to my blitter function and it works a lot better now...
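Roughly like this -- a minimal sketch for illustration, NOT my actual blitter (the blit_row_gray8 name and the flat row buffers are just placeholders):

Code:
#include <stdint.h>
#include <stddef.h>

/* Sketch: copy one row of 8-bit grayscale pixels into a K4/K5-style
 * 8-bit eink framebuffer row, forcing both nibbles of every byte to
 * hold the same 4-bit gray level.  (c >> 4) keeps the top 4 bits of
 * the source pixel; multiplying by 17 (0x11) replicates them into the
 * low nibble, so e.g. 0xF3 becomes 0xFF and 0x47 becomes 0x44. */
static void blit_row_gray8(uint8_t *dst, const uint8_t *src, size_t npix)
{
    for (size_t i = 0; i < npix; i++)
        dst[i] = (uint8_t)((src[i] >> 4) * 17);   /* same as (c>>4)*0x11 */
}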

That also explains why we need to dither the framebuffer even in "8-bit grayscale" mode: with both nibbles forced equal, only 16 distinct gray levels are actually available...

Also, when I updated it, I converted my dither tables to logical expressions using Karnaugh maps, and added the dither terms to my "dither/pack/blit" function. Interestingly, it now runs FASTER than it did with a dither lookup table -- cache effects, no doubt. Modern CPUs can often compute a value faster than they can fetch it from memory. Back in the day, pre-computed lookup tables were THE WAY to optimize code, but these days it is often faster to compute in real time than to hit memory, even on embedded processors like the ones in the Kindles.
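To give the general idea, here is a simplified sketch -- NOT my actual Karnaugh-map expressions, just a plain 4x4 ordered (Bayer) dither whose threshold is computed from the pixel coordinates with a few shifts and XORs instead of a table lookup, folded into the same pack-and-replicate loop (the bayer4 and dither_pack_blit_row names are made up for the example):

Code:
#include <stdint.h>
#include <stddef.h>

/* Classic 4x4 Bayer threshold (0..15) computed from the low two bits of
 * the pixel coordinates -- pure register arithmetic, no table in memory. */
static inline unsigned bayer4(unsigned x, unsigned y)
{
    unsigned x0 = x & 1, y0 = y & 1;
    unsigned x1 = (x >> 1) & 1, y1 = (y >> 1) & 1;
    return ((((x0 ^ y0) << 1) | y0) << 2) | (((x1 ^ y1) << 1) | y1);
}

/* Sketch of a combined dither/pack/blit for one row: threshold-dither the
 * 8-bit gray value, then replicate the resulting 4-bit level into both
 * nibbles of the framebuffer byte as before. */
static void dither_pack_blit_row(uint8_t *dst, const uint8_t *src,
                                 size_t npix, unsigned y)
{
    for (size_t x = 0; x < npix; x++) {
        unsigned c = src[x];
        unsigned g = c >> 4;                      /* 16 real gray levels */
        /* bump to the next level when the discarded low bits exceed the
         * local threshold -- this is what simulates in-between shades */
        if ((c & 0x0f) > bayer4((unsigned)x, y) && g < 15)
            g++;
        dst[x] = (uint8_t)(g * 17);               /* same nibble trick */
    }
}

The threshold comes out of a handful of ALU operations that never leave the registers, so there is nothing competing for cache lines with the image data itself -- which is presumably why computing beats the lookup table here.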



Last edited by geekmaster; 05-03-2012 at 10:51 AM.