Quote:
Originally Posted by ownedbycats
Yeah, I'm a bit baffled by all this. Compressing an already-compressed file usually doesn't do much, or sometimes even paradoxically makes the file bigger.
Various JPEG optimization applications use more compact representations of the encoding tables and more efficient encoding modes (for example, optimized Huffman tables or progressive encoding). The actual compressed image data does not change, but the total file size does. JPEG images also often contain metadata, comments, and other extraneous information; removing some or all of that reduces file size as well.
https://en.wikipedia.org/wiki/Jpegtran
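As a minimal sketch of that kind of lossless optimization, here is how you might drive jpegtran from Python. This assumes jpegtran is installed and on the PATH; the filenames are placeholders.

Code:
# Lossless JPEG optimization via jpegtran: rebuild Huffman tables and strip
# metadata without touching the compressed image data itself.
import os
import subprocess

src, dst = "photo.jpg", "photo_opt.jpg"   # placeholder filenames

# -optimize rebuilds the entropy-coding tables; -copy none drops metadata
# and comments. The decoded pixels are identical to the original.
subprocess.run(["jpegtran", "-optimize", "-copy", "none",
                "-outfile", dst, src], check=True)

print(src, os.path.getsize(src), "bytes")
print(dst, os.path.getsize(dst), "bytes")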
Decompressing lossily compressed data and then lossily recompressing it will result in additional loss relative to the original data.
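You can see that generation loss directly with a rough sketch like the one below, assuming Pillow is available and "original.jpg" is a placeholder input. Each save re-runs the lossy step, so the pixels drift further from the first generation.

Code:
# Repeatedly decode and re-encode a JPEG, then measure how far the pixels
# have drifted from the first generation.
from PIL import Image, ImageChops, ImageStat

img = Image.open("original.jpg").convert("RGB")   # placeholder file
first = img.copy()

for generation in range(10):
    img.save("tmp.jpg", quality=75)               # lossy recompression
    img = Image.open("tmp.jpg").convert("RGB")

diff = ImageChops.difference(first, img)
print("mean drift per channel:", ImageStat.Stat(diff).mean)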
Compression algorithms usually make assumptions about the nature of the uncompressed data. It is not unusual for file size to increase when there is not a good match between those assumptions and the actual data. Examples of different algorithms being better suited to different data types are JPEG for photographs and PNG for flat regions of color and simple line drawings.
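A quick way to see the mismatch is to save the same images in both formats and compare sizes. This sketch assumes Pillow and uses a placeholder photograph "photo.jpg"; the flat-color image favors PNG, the photograph favors JPEG.

Code:
# Compare PNG vs JPEG sizes for a flat-color image and a photograph.
import os
from PIL import Image

flat = Image.new("RGB", (800, 600), (40, 90, 160))   # one solid color
flat.save("flat.png")
flat.save("flat.jpg", quality=85)

photo = Image.open("photo.jpg").convert("RGB")        # placeholder photo
photo.save("photo_copy.png")
photo.save("photo_copy.jpg", quality=85)

for name in ("flat.png", "flat.jpg", "photo_copy.png", "photo_copy.jpg"):
    print(name, os.path.getsize(name), "bytes")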
Compressing is an exercise in reducing redundancy. (Nothing compresses better than all zeros.) Well-compressed data has little or no redundancy, so the overhead of describing the compressed data increases the file size when you try to compress already-compressed data. (Try compressing random noise.)
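Here is a tiny demonstration of that redundancy argument using zlib: the all-zeros buffer collapses to almost nothing, while the random buffer comes out slightly larger than it went in because of the bookkeeping overhead.

Code:
# Highly redundant data compresses well; incompressible data grows slightly.
import os
import zlib

size = 1_000_000
zeros = bytes(size)        # maximally redundant
noise = os.urandom(size)   # effectively no redundancy

print("zeros:", size, "->", len(zlib.compress(zeros, 9)))   # far smaller
print("noise:", size, "->", len(zlib.compress(noise, 9)))   # slightly larger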
Using general-purpose compression on specially compressed data can result in a smaller file than either method used alone. For example, zipping a JPEG made from a BMP might be significantly smaller than a zip of the BMP.
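A hedged sketch of that comparison, assuming Pillow and a placeholder "image.bmp": the JPEG step removes the photographic redundancy first, so zipping the JPEG typically beats zipping the raw BMP by a wide margin.

Code:
# Compare zip(BMP) with zip(JPEG made from the BMP).
import os
import zipfile
from PIL import Image

Image.open("image.bmp").convert("RGB").save("image.jpg", quality=85)

for name in ("image.bmp", "image.jpg"):
    zip_name = name + ".zip"
    with zipfile.ZipFile(zip_name, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(name)
    print(name, os.path.getsize(name), "->", zip_name, os.path.getsize(zip_name))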