07-02-2012, 12:22 AM   #87
geekmaster
Carpe diem, c'est la vie.
Posts: 6,433
Karma: 10773668
Join Date: Nov 2011
Location: Multiverse 6627A
Device: K1 to PW3
Quote:
Originally Posted by twobob
I get about 6 to 7.5 FPS, which is surprisingly effective on the eink screens.

Undecided whether procedural (audio) or wavetable is the way to go. Need to start spitting out some data to see how the device performs. In a way, writing for embedded reminds me of writing back in the day.

Anyway. :) Would seriously love to get my hands on that, when you have time.
Many thanks
The K4 (in diags mode) and the K5 can easily do 50 FPS because their drivers do not wait for the eink display to "catch up", but at faster speeds the result becomes MUCH more content-dependent before it shows too many artifacts to be useful. The K3 was limited to 7 FPS by its eink drivers, so I used 7 FPS (fullscreen, or 12 FPS partial-screen) to encode the dithered video (to save space). A lot of testing showed that 7 FPS works well on eink because it is a rather long-persistence medium AND, mostly, because it is NOT a light-emitting display (on which you would see flicker and other artifacts much more easily).
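For anyone experimenting with playback timing, here is a minimal frame-pacing sketch in C. The show_frame() stub is hypothetical (a stand-in for whatever blits a frame to the framebuffer and kicks the eink controller); the point is the deadline-based pacing, since sleeping a fixed amount per frame would drift:

Code:
#define _POSIX_C_SOURCE 200112L
#include <time.h>

#define FPS 7  /* K3 fullscreen rate; use 12 for partial-screen updates */

/* hypothetical stub: real code would blit frame n to the framebuffer
   and trigger the eink update here */
static void show_frame(int n) { (void)n; }

int main(void) {
    const long frame_ns = 1000000000L / FPS;
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);
    for (int n = 0; ; n++) {
        show_frame(n);
        next.tv_nsec += frame_ns;            /* schedule the next deadline */
        if (next.tv_nsec >= 1000000000L) {   /* carry nanoseconds into seconds */
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
        /* sleep to an ABSOLUTE deadline so timing errors do not accumulate */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}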

Regarding sound, I am a huge fan of procedural encoding. Using procedures to generate video (such as procedural textures) is about as dense a form of compression as you can get, and the same goes for audio (such as MOD files). In fact, I used procedural video to generate the first video that I posted: I ran the program on a linux host PC and sent its output through my raw2gmv program, then into gzip, to create that video file. That program can also run on a kindle, and the "procedural video" program is MUCH smaller than any video compression that you could apply to its output. Creating content in real time is definitely the way to go (unless you want to preserve natural content). Of course, modern codecs are BECOMING more procedural: they do feature extraction and object recognition, saving the instructions (i.e. a display list) needed to recreate a close model of the original content on playback (much like how the human brain operates). That requires a lot of processing power, but we are getting there. For sound, some very simple algorithms can simulate rather natural sounds (as used in simple non-wavetable music synths, like the ones in greeting cards).
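As a tiny illustration of how far procedural audio can go, here is a classic demoscene "bytebeat" in C (the formula is a well-known one from the chiptune crowd, not mine). The entire "song" is a single expression of the sample counter:

Code:
#include <stdio.h>

/* classic "bytebeat": every output byte is a pure function of the
   sample counter t, so the whole tune fits in one expression */
int main(void) {
    for (unsigned t = 0; ; t++)
        putchar(t * (42 & t >> 10));  /* play as 8kHz unsigned 8-bit */
    return 0;
}

On a desktop linux box, piping the output through something like "aplay -r 8000 -f U8" makes a handy listener.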

Regarding the raw2gmv transcoder, I will publish that when I get back (in a week or so). I have not finished implementing my parallel (4-pixels-per-operation) dither algorithms, but I have proven them "on paper". On modern multilevel-cache processors I try to avoid conditional branching by using complex logic expressions (which is how I do my recent "NOT formula 42" dithering without a dither table). Essentially, I figured out how to generate a dither table in realtime in a cache-friendly way, and logically interleaved that with the pixel-dithering code. You can see the non-parallel version in the gmplay source code. For a hint of how I procedurally created my first video, see the goodbye() function in the newtrix demo.
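To make the "no dither table, no branches" idea concrete, here is a sketch using the standard recursive Bayer construction (the textbook method, NOT my formula 42): the 4x4 threshold is computed straight from the pixel coordinates with shifts and XORs, and the 1-bit result falls out of an add and a shift with no conditionals to stall the pipeline:

Code:
/* standard 2x2 Bayer cell: M2(x,y) = ((x^y)<<1) | y, values 0..3;
   the 4x4 matrix is built recursively, fine cell in the high bits */
static inline unsigned bayer4(unsigned x, unsigned y) {
    unsigned hi = (((x ^ y) & 1) << 1) | (y & 1);
    unsigned lo = ((((x >> 1) ^ (y >> 1)) & 1) << 1) | ((y >> 1) & 1);
    return (hi << 2) | lo;                      /* threshold index 0..15 */
}

/* branchless 1-bit threshold for an 8-bit gray value: the sum only
   carries past bit 7 when the pixel should be white, so bit = sum>>8 */
static inline unsigned dither1(unsigned gray, unsigned x, unsigned y) {
    return (gray + (bayer4(x, y) << 4) + 8) >> 8;
}

A parallel version would pack four 8-bit pixels into one 32-bit word and apply the same add-and-shift to all four lanes at once (with the inter-lane carries masked off), which is roughly the 4-pixels-per-operation idea.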

TTYL

Last edited by geekmaster; 07-02-2012 at 12:24 AM.