11-10-2013, 03:17 PM | #16
Wizard
Posts: 2,297
Karma: 12126329
Join Date: Jul 2012
Device: Kobo Forma, Nook
Quote:
https://www.mobileread.com/forums/sho...d.php?t=222916 Quote:
Also, do you have any more samples of you using SVG? (I am gathering intel/lots of information.)
11-11-2013, 03:33 AM | #17
frumious Bandersnatch
Posts: 7,514
Karma: 18512745
Join Date: Jan 2008
Location: Spaniard in Sweden
Device: Cybook Orizon, Kobo Aura
I've converted it to grayscale with: Code:
pngcrush -c 0 -bit_depth 8 -force 019_indexed.png 019_grayscale.png

019_indexed.png     -> original image (65792)
019_grayscale.png   -> pngcrushed (67688)
019_grayscale_2.png -> gimped and pngcrushed (67510)

(the actual pixel data of all three images should be exactly the same)
11-11-2013, 06:05 AM | #18
temp. out of service
Posts: 2,787
Karma: 24285242
Join Date: May 2010
Location: Duisburg (DE)
Device: PB 623
Rezipping it with the zopfli algorithm should make it even smaller too, as most tools still use zlib for the internally deflated data.
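The idea behind rezipping can be sketched in a few lines of stdlib Python: walk the PNG chunks, pool the IDAT payloads, and re-deflate them with a stronger pass. Zopfli itself is not in the standard library, so zlib at level 9 stands in here (a real zopfli pass would usually shave a little more), and the function names are mine. Filters are left untouched, which is one reason dedicated tools like pngcrush can still beat a sketch like this.

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def make_chunk(tag, data):
    # A PNG chunk is: 4-byte big-endian length, 4-byte tag, data,
    # then a CRC-32 computed over tag + data.
    return (struct.pack(">I", len(data)) + tag + data
            + struct.pack(">I", zlib.crc32(tag + data) & 0xFFFFFFFF))

def recompress_idat(png):
    # Re-deflate the pooled IDAT stream at zlib's maximum effort;
    # every other chunk (IHDR, PLTE, ...) is copied through verbatim.
    assert png.startswith(PNG_SIG), "not a PNG"
    out, idat, pos = [], b"", 8
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        tag = png[pos + 4:pos + 8]
        data = png[pos + 8:pos + 8 + length]
        end = pos + 12 + length          # length + tag + data + crc
        if tag == b"IDAT":
            idat += data                 # pool all IDAT payloads
        elif tag == b"IEND":
            raw = zlib.decompress(idat)  # filtered scanlines, unchanged
            out.append(make_chunk(b"IDAT", zlib.compress(raw, 9)))
            out.append(make_chunk(b"IEND", b""))
        else:
            out.append(png[pos:end])     # pass other chunks through
        pos = end
    return PNG_SIG + b"".join(out)
```

Because only the deflate stream is rebuilt, the decoded image is bit-identical before and after; only the container shrinks (or, for already well-crunched files, stays the same).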
11-12-2013, 12:38 AM | #19
Obsessively Dedicated...
Posts: 3,200
Karma: 34977556
Join Date: May 2011
Location: JAPAN (US expatriate)
Device: Sony PRS-T2, ADE on PC
I played around with the indexed 16-shade png converted to 8-bit greyscale, and now I am torn between two options.
The book I last uploaded had over 300 images, saved to jpeg with 25% compression (that's 75 quality for you folks who see the glass as half-full, not half-empty). The two volumes of the book totaled about 16.25 mb. I re-converted the source images to 16-shade pngs in 8-bit greyscale. The two volumes now total almost 22 mb.

So I have to decide whether to go with the much cleaner images and accept the requisite size increase. I wonder how many users would even notice the improvement? I'm just throwing this out there for anybody who would like to chime in with an opinion.

PS -- @Jellby -- the Burn overlay is working really well, I was stunned to see how much grunge I had missed. Thank you again, this is absolutely faster than hunting specks by eyeball.

Last edited by GrannyGrump; 11-12-2013 at 12:43 AM.
11-12-2013, 02:26 AM | #20
Addict
Posts: 239
Karma: 1280000
Join Date: Oct 2010
Location: USA
Device: None
Hi GG,
I have downloaded and purchased epubs up to 75mb. They are slow to load initially, but then they seem to work fine (on a variety of Nooks and a Nexus 7 running Moon+). My preference would be one epub, containing the larger images, since my experience has shown little drawback. 22mb is about the same size as 4 or 5 mp3 songs, so for readers the value per mb is there. If my division is correct, that's still less than 100kb per image, so they are, on average, quite small.

How does the book look on a 1920x1200 display? Perhaps they need to be larger.

Few books will have 300 illustrations, so the impact on total storage requirements isn't an issue.
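The division does check out; a quick back-of-envelope using the figures from the posts above (22 mb across roughly 300 images):

```python
total_mb = 22        # approximate size of the two re-converted volumes
images = 300         # illustration count mentioned above
kb_per_image = total_mb * 1024 / images
print(round(kb_per_image, 1))  # -> 75.1, comfortably under 100 KB each
```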
11-12-2013, 03:37 AM | #21
frumious Bandersnatch
Posts: 7,514
Karma: 18512745
Join Date: Jan 2008
Location: Spaniard in Sweden
Device: Cybook Orizon, Kobo Aura
11-12-2013, 07:07 AM | #22
Addict
Posts: 239
Karma: 1280000
Join Date: Oct 2010
Location: USA
Device: None
Quote:
Always get the source in, edit in, and save an archival copy in, max bit depth and max resolution and max color space (this last within reason / as appropriate) in a lossless format. Quote:
I saw the max image size is indeed about 100k, and limited by a max width of 600 pixels. The images looked pretty nice on my Lenovo X220 (1280x800), Calibre viewer, at about my normal half-screen column width, about 650 pixels. They were displayed at the native image resolution, not enlarged. As I increased the column width, the image formatting showed wider margins, which start to look odd at some point. But overall, quite acceptable; IMHO, very nice images at that size.

The result on the Nexus 7 is less attractive. The high resolution means the images must be interpolated to fill the standard reading width (1200 pixels minus margins). Some still look fine, but most already show softness on casual inspection, exaggerating the difference between the crisp text and the illustrations.

My request would be that you use images with larger dimensions, as a necessary first step for display on what must be the continued direction of devices: higher-resolution screens. I'd like to see images at least 1100 pixels wide. If the book is four times the size, I care not one whit. I suspect that larger-scale images might tolerate some jpeg compression, if required.

Actually, I will widen that and say that while manual compression methods like squeezing to 16 shades or shrinking images may be of use, my general preference would be to keep 256 shades (or 3x256+transparency) and sufficient resolution, and let the compression algorithm do the work. I would expect better results doing it that way more often, and it's going to be much faster.
To be specific about what I mean by "better results" and "manual compression": I would expect a 1kx1k image with 256 shades of gray, reduced with whatever jpg compression gives a certain file size (say 100k), to give a better image when displayed at full resolution than one produced by force-reducing the shades of gray to 16, force-reducing the resolution to 512x512, and again picking a level of compression to give the same file size. The "forced" operations are what I am calling manual compression.

Once the images fill the display, then I (or anyone) could compare the differences between a jpg and png, or 2 versus 16 versus 256 actual shades. If you care to attach sample images to posts, I can put them in the book and have a look.

Last comment: the large patches of light in the images are glaring in Moon+ "night mode" (light text on dark background), but I don't know if anything can be done about that.
11-14-2013, 03:14 AM | #23
Obsessively Dedicated...
Posts: 3,200
Karma: 34977556
Join Date: May 2011
Location: JAPAN (US expatriate)
Device: Sony PRS-T2, ADE on PC
@Jellby -- yes, I always keep the full-size source AND the down-sized source in uncompressed format -- in my case, PSP's native proprietary multi-layer format (which is not editable or even openable in any other software that I have investigated) and in uncompressed 24-bit png. I am now making sure to also keep a backup on another drive, after a data-loss disaster earlier this year -- no backups, and weeks and weeks of work lost, but my own stupid fault -- the first time I have lost data like that *ever* in almost 30 years of computing.
I am intrigued by ImageMagick for post-processing, but it sounds like you have to know a lot about formats and compression levels to use it to its full capacity. I will have to search for a good tutorial. (Even though I always keep a cheat-sheet for CLI programs, the switches fall right out of my memory in 5 minutes.)

@derangedhermit -- I will whomp up a few samples to post tomorrow. I like your thinking about large image dimensions, but after reading so many posts about shaving 5 kb off an image, or reducing an epub by 350 kb, it feels like there are lots of folks out there who are still very much concerned about file size. I even remember a post from a member whose reader is not strong enough to handle images at all, so they can only download unillustrated books.

I also need to see how my Sony reader handles scaling for very large resolutions. I know that the Sony Reader app for PC does not do very well, and ADE looks pretty poor as well. But ADE doesn't handle images well in my opinion; they always look pixelated even when displayed at native resolution.
11-14-2013, 04:25 AM | #24
Addict
Posts: 239
Karma: 1280000
Join Date: Oct 2010
Location: USA
Device: None
Quote:
I would encourage you to make the best book you can without limiting yourself to the typical e-ink reader display (or emulation software), and publish that. Making nice images is too hard and too time-consuming to do otherwise. Then, if you want to make a version with a smaller file size or lower-resolution images for whatever reason, use an automated tool, and move on.
11-15-2013, 02:02 AM | #25
Obsessively Dedicated...
Posts: 3,200
Karma: 34977556
Join Date: May 2011
Location: JAPAN (US expatriate)
Device: Sony PRS-T2, ADE on PC
@derangedhermit
Well, here are a pair of images for you to fiddle with if you want. One pretty dark-toned, the other with lots of white. They are both 1200px wide. File sizes seem huge after using images 600px wide!

Each one was saved with 4 different settings:
---- Source decreased to 256 shades using "Optimized Octree" algorithm, saved to PNG using "Existing Palette".
---- Source saved to PNG using PSP default 8-bit greyscale algorithm.
---- Source decreased to 8-bit greyscale, saved to jpeg with 20% compression.
---- Source decreased to 8-bit greyscale, saved to jpeg with 1% (minimum) compression.

You might also find interest in some of the screenshots and samples in a thread I posted earlier: https://www.mobileread.com/forums/sho...d.php?t=222916

You commented earlier about the "glare" of white areas while using Night Mode. I can't think of anything to be done about that from the user end. I imagine that the image could be created using off-white instead of pure white (e.g.: RGB 250-250-250 instead of 255-255-255), but then it would look grimy when displayed on a white background.
11-15-2013, 02:06 AM | #26
Obsessively Dedicated...
Posts: 3,200
Karma: 34977556
Join Date: May 2011
Location: JAPAN (US expatriate)
Device: Sony PRS-T2, ADE on PC
My image-cleaning routine (if anyone cares)
I'm using Paint Shop Pro v9, using its native format for processing; I think it is similar to Photoshop's PSD files. PSP can also work with PSD files (and though I don't often use them, it can use many Photoshop plugins and filters). I set up scripts for some steps, so --click-- it's done (instead of using the settings dialog for that particular tool every time). Many of my scripts are assigned to special toolbar buttons, rather than using the lengthy dropdown list.

1--SAVE the source file as a psp file. These are usually jpg or jp2/jpeg2000, and usually in the neighborhood of 2500 x 4000 px.

2--CROP the image to leave a uniform white border. Done by manually drawing a rectangular frame just at the edge of the image, then Magic Wand with match-mode set to "opacity" to select the transparent interior of the frame. Then run my one-click script to expand the selection by 10 pixels and automatically "crop to selection."

3--INITIAL REPAIRS. Manually fix blatant dirt and damage with Clone and Smudge brushes. This is often easier before changing everything to shades of grey.

4--RECOLOR. Use the "Manual Color Correction" tool to change the age-darkened background (usually dark sepia tone, sometimes grey or yellow) to as close to white as possible -- it "colorizes" the image, keeping all the tones and values synced.

5--DESATURATE 100%. Use the "Hue-Saturation-Lightness" tool, but I don't change lightness because I don't want to wash out the black. I don't greyscale yet, because some PSP tools don't work well --or at all-- with only 256 shades. (One-click script.)

6--IMPROVE BRIGHTNESS AND CONTRAST. Use the "Highlight-Midtone-Shadow" adjustment tool to lighten the white areas, and maybe adjust midtones and/or shadows. This tool lets you adjust the light/darkness of the three levels independently. Sometimes it instead needs to be run twice with gentler settings. I prefer this tool to the Brightness & Contrast adjustment -- more discrete control.

7--SWAT SPECKS. ----New addition to the speck-killing routine---- Add Jellby's suggested Burn overlay to display unwanted specks, spots, and smudges. (One-click script for this.)
----If lots of specks exist (and they always do), use the "Manual Color Replacement" tool to replace all pixels in a certain color range with a desired color. I use very gentle settings: for example, replace RGB 253-253-253 with pure white, using a 3% tolerance. This replaces all pixels 250-250-250 and up with pure white, and gets rid of the majority of specks. (One-click script.)
----If a lot of specks still remain, run the HMS tool with very low settings. (One-click script.)
----Manually paint out remaining stray specks with paintbrush and/or Dodge brush.
----Delete the Burn overlay layer.

8--MORE REPAIRS.
----I use the Dodge brush on small areas to lighten shadows and bring out details, the Burn brush on small areas to darken details or shadows, the Clone brush to cover over damaged areas, and the Smudge brush to blend.
----I also have a set of scripts for various Curves settings to lighten or darken or improve contrast. One that I'm fond of darkens only the dark areas -- helpful when I want more contrast without brightening the lighter areas.
----I find that these 19th-century woodcut/engraving illustrations are by nature rather monotone. Often when you downsize a black-and-white image, it appears darker, and with these the sky is the same tone as the landscape. Also, it appears that those artists were obsessed with filling every inch of space with cross-hatching and curlicues, so the background sometimes seems very dark. So I sometimes make a cutout "puzzle-piece" of the offending area as a separate layer, and lighten that some more so it doesn't obscure the foreground figures or look as though a storm is constantly brewing.

9--SAVE my rehabilitated full-size source file.

10--DOWNSIZE to desired dimensions, using the Weighted Average algorithm. Bicubic, Bilinear, and Pixel Resize often appear more jagged/pixelated/grainy -- though everything looks a bit grainy when you are dropping down to 20~30% of original size. I save this resized file in a separate folder.

11--CONVERT TO FINAL FORMAT FOR USE.
----Greyscale [at last!]. Shaves a bit of file size for jpegs.
----Use the Save-A-Copy-As routine to convert a copy of the resized image to jpg or png at the desired compression and/or color depth, and save to a third folder. I'm doing this now with batch processing, but sometimes certain images want manual processing for different compression or color settings.

----------

I'm probably spending about 15 minutes per image, but some might take an hour -- or several -- if they are extremely dark or badly damaged, or the source file is particularly fuzzy and awful. Trying to repair banding caused by scanning is particularly time-consuming. The decision-making process eats some time -- use Curves or use HMS? Use paintbrush or Dodge brush? Is that funny squiggle a scanner artifact? A drop of spilled soup? A printing goof? A hung-over engraver's shaky hand? Part of the artist's vision? But in the end, it's the manual stuff that takes the time.

I keep hoping to learn more efficient ways to do all this. So if anybody has comments or suggestions, I am all ears. Well, all eyes -- I would have to read, not listen to, your ideas. Thanks for any advice or feedback.
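As a footnote to step 7: the "replace near-white with pure white" pass is easy to reason about as a simple threshold. A minimal sketch in Python (the function name and the flat-list representation are mine; PSP's tolerance math may differ in detail, but per the description above, the 253-with-3%-tolerance setting ends up pushing everything from 250 up to pure white):

```python
def swat_specks(pixels, threshold=250):
    # Push every gray value at or above `threshold` to pure white (255),
    # wiping out faint background specks; darker pixels are untouched.
    return [255 if v >= threshold else v for v in pixels]

print(swat_specks([0, 128, 249, 250, 253, 255]))
# -> [0, 128, 249, 255, 255, 255]
```

The gentle threshold is the point: anything at 249 or below is assumed to be real tone and survives, so only the faintest specks vanish.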
11-15-2013, 12:22 PM | #27
Addict
Posts: 239
Karma: 1280000
Join Date: Oct 2010
Location: USA
Device: None
I have looked at the files using PSE, but not on the Nexus yet. Preliminary comments:
1. I do not know why the 256-color indexed pngs are 60%-65% of the size of the 256-shade grayscale pngs. That seems like something isn't equal, and I want to know the explanation. I prefer to use a grayscale format for grayscale images.

2. Neither of the images has true black (and both should), nor enough true white. Before going down to 256 values, the image needs to be stretched so that there is black and white. Just use your eye to estimate how much. In PSE, if you select part of the histogram, it will tell you what percentage of pixels you have selected. Like this: "I think the image needs 5% black and is about 65% white" (this is perhaps the first image, with more white in it). Select the areas on the histogram showing where that many pixels on each end fall. Use Levels to stretch the values to match what you saw on the histogram. Most scans don't use the full scale from 0 to max, so most images need this stretching. It should be part of your process, while at full color depth. It makes images pop properly (i.e. they look dull without this operation).

3. There's little to choose so far between the different formats. By far the most visible artifact (to me) is the aliasing in diagonal lines, and those are obvious in all the images. Please go back to the original and see if they are present there. I have found no specific tool to smooth them, but something needs to be done. A standard smoothing or blur filter can certainly improve it to some degree.

On the bright nighttime thing, I would see if I could add partial transparency and see what that looked like. That might be nice when people pick textured or tinted backgrounds in general (or not).
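Point 2, the levels stretch, is a plain linear remap. A sketch in stdlib Python on a flat list of 8-bit gray values (function name mine; as recommended above, a real workflow would do this at full color depth before quantizing):

```python
def stretch_levels(pixels, black_point, white_point):
    # Linearly remap [black_point, white_point] onto the full 0..255
    # range, clipping anything outside the chosen window. Afterwards,
    # black_point lands on true black and white_point on true white.
    scale = 255.0 / (white_point - black_point)
    return [max(0, min(255, round((v - black_point) * scale)))
            for v in pixels]
```

For example, with the window set at 30..220, a scan whose darkest tone is 30 gains true black, its lightest tone 220 becomes true white, and everything between spreads out proportionally, which is exactly the "pop" described above.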
11-15-2013, 04:04 PM | #28
Grand Sorcerer
Posts: 11,470
Karma: 13095790
Join Date: Aug 2007
Location: Grass Valley, CA
Device: EB 1150, EZ Reader, Literati, iPad 2 & Air 2, iPhone 7
In answer to 1: indexed formats are always more efficient than regular formats. This is why GIF is always indexed. With an index the color is only defined once and then referenced multiple times. I suspect there is a need for more than one byte to directly reference a pixel even though you only have 256 shades. However, I am not sure of the internal formats used in PNG. You might try reducing below 256 shades and see if the ratio changes.
Some sort of crunching would seem in order. PNG seems not to compress itself very well.

Dale
11-15-2013, 04:26 PM | #29
Wizard
Posts: 2,297
Karma: 12126329
Join Date: Jul 2012
Device: Kobo Forma, Nook
As to the grayscale question... I am also not TOO sure of the technical specifics, but the larger filesize could come from two things that I can think of:

1. Grayscale IS a different beast from Indexed; perhaps it is stored in a way that does not compress as efficiently. http://www.libpng.org/pub/png/book/c...g.ch08.div.5.3 When saved as "Grayscale", the image is stored using luminance (Y) instead of an Indexed table of colors.

2. Perhaps the program used to create the images uses different libraries/code to compress these two types. This is part of the reason why most of us typically run PNGs through other, more powerful optimization programs (optipng, pngcrush, scriptpng, ...). Many image programs, when saving PNGs, will also include A LOT of extra bloat (which is partly why you always want to save with something along the lines of "Save Image for Web").
See the serious bug I ran into in my Formula Tutorial topic here: https://www.mobileread.com/forums/sho...5&postcount=12

I would recommend steering clear of transparency for now (although it would be GREAT, since you wouldn't have a glaring white background).
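On the grayscale-versus-indexed point: the distinction is recorded right in the IHDR chunk's color-type byte, so it is easy to check which variety a file actually is. A small stdlib-Python probe (chunk layout per the PNG specification; the function name is mine):

```python
import struct

# Color types defined by the PNG specification.
COLOR_TYPES = {0: "grayscale", 2: "truecolor", 3: "indexed",
               4: "grayscale+alpha", 6: "truecolor+alpha"}

def png_header_info(png):
    # IHDR is always the first chunk: 8-byte signature, 4-byte length,
    # b"IHDR", then width(4), height(4), bit depth(1), color type(1), ...
    assert png[:8] == b"\x89PNG\r\n\x1a\n" and png[12:16] == b"IHDR"
    width, height = struct.unpack(">II", png[16:24])
    return width, height, png[24], COLOR_TYPES[png[25]]
```

A grayscale PNG (type 0) stores one luminance sample per pixel, while an indexed one (type 3) stores palette offsets plus a PLTE chunk, so the two produce quite different byte streams for the deflate stage even when they render identically; that difference, plus the saving program's compression effort, plausibly accounts for the size gap discussed above.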
11-15-2013, 05:16 PM | #30
Grand Sorcerer
Posts: 11,470
Karma: 13095790
Join Date: Aug 2007
Location: Grass Valley, CA
Device: EB 1150, EZ Reader, Literati, iPad 2 & Air 2, iPhone 7
By crunching I meant crunching (who would have thought), not jpg lossy. I was talking about tools like:
pngcrush [Glenn Randers-Pehrson] (Unix, DOS, Win32, RISC OS, BeOS/x86, etc.) - all versions; read/write; freeware (BSD) with source, as of version 1.2.0. (This is a command-line utility to compress PNG images better--i.e., it converts PNGs into smaller PNGs, completely losslessly, by optimizing the filtering and compression strategies. It can also remove specified chunks and fix PNG images affected by the Photoshop 5.0 gamma bug or the Photoshop 5.5 iCCP bug. It's especially handy in conjunction with apps like PS that are a bit weak on compression. See also OptiPNG, PNGGauntlet, PNGOUT, pngrewrite, and pngwolf.)