06-06-2008, 04:08 AM   #14
alex_d
Addict
alex_d doesn't litter
 
Posts: 303
Karma: 187
Join Date: Dec 2006
Device: Sony Reader
Yes, finding the right resolution will need a bit of work. I did it for the Reader using a test pattern and patience. I'll need a volunteer to do this for the Kindle and others, but that's for later.
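
The test pattern itself doesn't have to be fancy. Something like this would do (a rough C# sketch, not the exact code I used; the names are made up): bands of one-pixel lines at increasing pitch, and the tightest band the device can still resolve as distinct lines tells you its effective resolution.

[code]
// Sketch only: writes an 8-band test pattern as a PNG. Each band draws
// 1-px black lines at a wider pitch than the last. Assumes the height is
// a multiple of 8.
using System.IO;
using System.Windows.Media;
using System.Windows.Media.Imaging;

static void WriteTestPattern(int width, int height, string path)
{
    int stride = width;                        // Gray8: one byte per pixel
    var pixels = new byte[stride * height];
    for (int y = 0; y < height; y++)
    {
        int pitch = 2 + y / (height / 8);      // pitch 2..9 rows per line
        byte value = (y % pitch == 0) ? (byte)0 : (byte)255;
        for (int x = 0; x < width; x++)
            pixels[y * stride + x] = value;
    }
    var bmp = BitmapSource.Create(width, height, 96, 96,
                                  PixelFormats.Gray8, null, pixels, stride);
    var enc = new PngBitmapEncoder();
    enc.Frames.Add(BitmapFrame.Create(bmp));
    using (var stream = File.Create(path))
        enc.Save(stream);
}
[/code]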

Interactive cropping is something I'm really hoping for. We'll see how much .net skillz it'll take.
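
For scale, the bare minimum rubber-band selection in WPF looks something like this (pure sketch, nothing is actually written yet; the class and names are made up):

[code]
// Hypothetical sketch: a Canvas you'd lay over the page preview. Drag to
// draw a dashed rubber-band rectangle; CropBox holds the resulting crop.
using System.Windows;
using System.Windows.Controls;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Shapes;

public class CropCanvas : Canvas
{
    private Point _start;
    private readonly Rectangle _band = new Rectangle
    {
        Stroke = Brushes.Red,
        StrokeDashArray = new DoubleCollection { 4, 2 }
    };

    public Rect CropBox { get; private set; }

    public CropCanvas()
    {
        Background = Brushes.Transparent;   // hit-testable over empty space
        Children.Add(_band);
        MouseLeftButtonDown += (s, e) =>
        {
            _start = e.GetPosition(this);
            CaptureMouse();
        };
        MouseMove += (s, e) =>
        {
            if (!IsMouseCaptured) return;
            // Rect(Point, Point) normalizes whichever corner you drag from.
            CropBox = new Rect(_start, e.GetPosition(this));
            SetLeft(_band, CropBox.X);
            SetTop(_band, CropBox.Y);
            _band.Width = CropBox.Width;
            _band.Height = CropBox.Height;
        };
        MouseLeftButtonUp += (s, e) => ReleaseMouseCapture();
    }
}
[/code]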


Ok, some news: I started playing around writing code and tried that GPU offloading I was talking about. Frankly, it sounded a bit crazy at first, but I've actually got it working (at least, I've got the dilate filter working). It was way easier than I expected, and when the rendering happens on the GPU it runs orders of magnitude faster. BUT, as soon as I want to save the image instead of just displaying it onscreen, it automatically switches to software rendering.
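
For the curious, the WPF side is just a ShaderEffect wrapper around a compiled HLSL pixel shader. Here's a minimal sketch of the shape of it (this assumes the .net 3.5 SP1 bits, a shader compiled to Dilate.ps with fxc.exe, and a dilate that thickens text by taking the darkest pixel in a 3x3 neighborhood; the file and class names are placeholders, not final):

[code]
// The HLSL (ps_2_0) compiled separately to Dilate.ps might look like:
//
//   sampler2D input : register(s0);
//   float2 texelSize : register(c0);    // 1/width, 1/height, set from C#
//
//   float4 main(float2 uv : TEXCOORD) : COLOR
//   {
//       float4 darkest = tex2D(input, uv);
//       for (int dy = -1; dy <= 1; dy++)         // 3x3 minimum: dark
//           for (int dx = -1; dx <= 1; dx++)     // (text) pixels grow
//               darkest = min(darkest,
//                   tex2D(input, uv + float2(dx, dy) * texelSize));
//       return darkest;
//   }

using System;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Effects;

public class DilateEffect : ShaderEffect
{
    // Binds the effect's input image to sampler register s0.
    public static readonly DependencyProperty InputProperty =
        RegisterPixelShaderSamplerProperty("Input", typeof(DilateEffect), 0);

    // Passed to the shader as constant register c0 (Point maps to float2).
    public static readonly DependencyProperty TexelSizeProperty =
        DependencyProperty.Register("TexelSize", typeof(Point),
            typeof(DilateEffect),
            new UIPropertyMetadata(new Point(1.0 / 600, 1.0 / 800),
                                   PixelShaderConstantCallback(0)));

    public DilateEffect()
    {
        PixelShader = new PixelShader
        {
            UriSource = new Uri("pack://application:,,,/Dilate.ps")
        };
        UpdateShaderValue(InputProperty);
        UpdateShaderValue(TexelSizeProperty);
    }

    public Brush Input
    {
        get { return (Brush)GetValue(InputProperty); }
        set { SetValue(InputProperty, value); }
    }

    public Point TexelSize
    {
        get { return (Point)GetValue(TexelSizeProperty); }
        set { SetValue(TexelSizeProperty, value); }
    }
}
[/code]

After that you apply it like any other WPF effect: image.Effect = new DilateEffect();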

Software rendering, despite the whole circuitous route, is actually no slower than calling netpbm to handle the dilate, but it's no faster either. So much for my hopes. Still, the question now is: do I continue down this path of bleeding-edge tech instead of just a command-line tool or plain C code? I think so. First of all, this shader business is actually easy: going out to the shell is dirty, and writing the loop by hand takes more work. Even when the pixel shaders are software-rendered, I don't have to worry about shuffling data or even writing a for loop, since it's all handled beneath the surface of WPF. Second, at some point in the future GPU acceleration might get turned on for this path. Third, using .net 3.5 tech means it's easier to plug the filter back into a 3.5 UI for interactive previewing and cropping.
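
The save path that flips it into software rendering looks roughly like this (a sketch again, reusing the DilateEffect class from above; the names are placeholders): you render the element tree into a RenderTargetBitmap, which WPF rasterizes in software, then encode the result.

[code]
// Sketch of the save path. RenderTargetBitmap is what forces the software
// rasterizer, but the shader still runs with no hand-written pixel loop.
using System.IO;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Media.Imaging;

static void SaveDilated(BitmapSource page, string outputPath)
{
    // Host the page in an Image element and attach the effect.
    var image = new Image
    {
        Source = page,
        Effect = new DilateEffect
        {
            TexelSize = new Point(1.0 / page.PixelWidth,
                                  1.0 / page.PixelHeight)
        }
    };
    image.Measure(new Size(page.PixelWidth, page.PixelHeight));
    image.Arrange(new Rect(0, 0, page.PixelWidth, page.PixelHeight));

    // Rasterize the element (software path) and write a PNG.
    var rtb = new RenderTargetBitmap(page.PixelWidth, page.PixelHeight,
                                     96, 96, PixelFormats.Pbgra32);
    rtb.Render(image);

    var encoder = new PngBitmapEncoder();
    encoder.Frames.Add(BitmapFrame.Create(rtb));
    using (var stream = File.Create(outputPath))
        encoder.Save(stream);
}
[/code]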


Tomorrow I'll try to package it up in an installer.