When not to do maximum compression in PNG?

Intro
When saving PNG images through GIMP, I've always used level 9 (maximum) compression, since I knew it's lossless. Now I have to specify a compression level when saving PNG images through PHP's GD extension.
Question
Is there any case when I shouldn't compress a PNG to the maximum level, such as compatibility issues? If there's no problem, why ask the user at all; why not automatically compress to the maximum?

Each higher PNG compression level requires significantly more memory and processing power to compress (and, to a lesser degree, to decompress).
There is a rapid tail-off in the compression gains from each successive level, however, so choose one that balances the web server resources available for compression against your need to reduce bandwidth.
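For illustration, here is a minimal GD sketch that writes the same image at a few levels and reports size and time; photo.png is just a placeholder file name, and the third argument to imagepng() is the zlib compression level, 0-9:

<?php
// Write the same image at several compression levels and report the
// resulting file size and the time spent compressing.
$src = imagecreatefrompng('photo.png');          // placeholder source image

foreach ([1, 6, 9] as $level) {
    $out   = "photo_level{$level}.png";
    $start = microtime(true);
    imagepng($src, $out, $level);                // third argument: level 0-9
    printf("level %d: %d bytes in %.3f s\n",
           $level, filesize($out), microtime(true) - $start);
}
imagedestroy($src);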

Related

How do you prepare deflate streams for PIGZ (parallel gzip)?

I am using the PIGZ library. https://zlib.net/pigz/
I compressed large files using multiple threads per file with this library and now I want to decompress those files using multiple threads per file too. As per the documentation:
Decompression can’t be parallelized, at least not without specially prepared deflate streams for that purpose.
However, the documentation doesn't specify how to do that, and I'm finding it difficult to find information on this.
How would I create these "specially prepared deflate streams" that PIGZ can utilise for decompression?
pigz does not currently support parallel decompression, so it wouldn't help to specially prepare such a deflate stream.
The main reason this has not been implemented is that, in most situations, decompression is fast enough to be i/o bound, not processor bound. This is not the case for compression, which can be much slower than decompression, and where parallel compression can speed things up quite a bit.
You could write your own parallel decompressor using zlib and pthread. pigz 2.3.4 and later will in fact make a specially prepared stream for parallel decompression by using the --independent (-i) option. That makes the blocks independently decompressible, and puts two sync markers in front of each to make it possible to find them quickly by scanning the compressed data. The uncompressed size of a block is set with --blocksize or -b. You might want to make that size larger than the default, e.g. 1M instead of 128K, to reduce the compression impact of using -i. Some testing will tell you how much your compression is reduced by using -i.
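For reference, a hedged sketch of invoking pigz with those flags from PHP; the file name and block size are placeholders, and only the flags themselves come from the description above:

<?php
// Compress with independent blocks (-i) and a larger block size (-b, in KiB)
// so the deflate stream can later be split at the sync markers.
$file      = 'bigfile';       // placeholder file name
$blockSize = 1024;            // 1 MiB blocks instead of the 128 KiB default

exec(sprintf('pigz -i -b %d %s', $blockSize, escapeshellarg($file)), $output, $status);
if ($status !== 0) {
    fwrite(STDERR, "pigz exited with code $status\n");
}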
(By the way, pigz is not a library, it is a command-line utility.)

JPG and PNG compression, PHP and Ubuntu with jpegtran and pngcrush

I have several hundred images I need to optimise and compress. I have found the following script on GitHub: https://gist.github.com/ryansully/1720244 which works OK. However, the file sizes of the JPGs are not much smaller after compression.
For example, one file is 870.24 KB before compression and 724.97 KB after. But if I run the same image through compressjpg.com, it reduces the file size to around 290 KB.
How can I achieve this level of compression with jpegtran? Is it even possible?
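For comparison, savings of that magnitude usually come from lossy re-encoding rather than the lossless transformations jpegtran applies. A rough GD sketch of lossy recompression, where the file names and the quality value 60 are arbitrary assumptions:

<?php
// Lossy re-encode with GD: decode the JPEG and write it back at a lower
// quality. Unlike jpegtran this discards image data, so check the result.
$img = imagecreatefromjpeg('input.jpg');         // placeholder file name
imagejpeg($img, 'output.jpg', 60);               // quality 0-100; 60 is arbitrary
imagedestroy($img);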

Image sanitization library

I have a website that displays images submitted by users. I am concerned about
some wiseguy uploading an image which may exploit some 0-day vulnerability in a
browser rendering engine. Moreover, I would like to purge images of metadata
(like EXIF data), and attempt to compress them further in a lossless manner
(there are several such command line utilities for PNG and JPEG).
With the above in mind, my question is as follows: is there some C/C++
library out there that caters to the above scenario? And even if the
full pipeline of parsing -> purging -> sanitizing -> compressing -> writing
is not available in any single library, can I at least implement the
parsing -> purging -> sanitizing -> writing pipeline (without compressing) in a
library that supports JPEG/PNG/GIF?
Your requirement is impossible to fulfill: if there is a 0-day vulnerability in one of the image reading libraries you use, then your code may be exploitable when it tries to parse and sanitize the incoming file. By "presanitizing" as soon as the image is received, you'd just be moving the point of exploitation earlier rather than later.
The only thing that would help is to parse and sanitize incoming images in a sandbox, so that, at least, if there was a vulnerability, it would be contained to the sandbox. The sandbox could be a separate process running as an unprivileged user in a chroot environment (or VM, for the very paranoid), with an interface consisting only of bytestream in, sanitized image out.
The sanitization itself could be as simple as opening the image with ImageMagick, decoding it to a raster, and re-encoding and emitting it in a standard format (say, PNG or JPEG). Note that if the input and output are both lossy formats (like JPEG) then this transformation will be lossy.
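A minimal sketch of that decode-and-re-encode step using PHP's Imagick binding, assuming the Imagick extension is installed; in practice this would run inside the sandboxed process, and the file names are placeholders:

<?php
// Decode the untrusted bytes to a raster, drop metadata, and re-emit the
// pixels in a known format. The parsing step is the risky one, so run it
// inside the sandbox rather than in the web application itself.
$untrusted = file_get_contents('uploaded_blob'); // placeholder input

$img = new Imagick();
$img->readImageBlob($untrusted);                 // decode (the dangerous step)
$img->stripImage();                              // remove EXIF and other metadata
$img->setImageFormat('png');                     // re-encode to one standard format
file_put_contents('sanitized.png', $img->getImageBlob());
$img->clear();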
I know, I'm 9 years late, but...
You could use an idea similar to the PDF sanitizer in Qubes OS, which copies a PDF to a disposable virtual machine and runs a PDF parser that converts the PDF to what are basically TIFF images; these are sent back to the originating VM and reassembled into a PDF there. This way you reduce your attack surface to TIFF files, which is tiny.
(image taken from this article: https://blog.invisiblethings.org/2013/02/21/converting-untrusted-pdfs-into-trusted.html)
If there really is a 0-day exploit for your specific parser in that PDF, it compromises the disposable VM, but since only valid TIFF is accepted by the originating VM and the disposable VM is discarded once the process is done, this is pointless. Unless, of course, the attacker also has either a Xen exploit at hand to break out of the disposable VM, or a Spectre-type full memory read primitive coupled with a side channel to leak data to their machines. Since the disposable VM is not connected to the internet and has no audio hardware assigned, this boils down to creating EM interference by modulating the CPU power consumption, so the attacker probably needs a big antenna and a location close to your server.
It would be an expensive attack.

Loading website Images faster

Is it possible to make my website's background image load faster than it does now? The background image is 1258x441 pixels and 656 KB in size. It takes too long for the complete background image to load when accessing my website. Is there any way, other than compressing it (the image is already compressed), to improve its loading speed?
Choose one of the following (2 is my main suggestion, together with 3):
1. Compress the image even more. Because the image is a background it doesn't matter that much; users don't focus on the background as much as they do on the content.
2. Since the background is partially covered by content, you can paint the part of the background that is not visible (behind the content) black, or any other solid colour. This will make the image compress more efficiently, saving some space.
3. Save the image with progressive JPEG compression (see the sketch below). That will make the background display in gradually increasing quality as it loads.
4. Get rid of the background image. (Obvious.) :)
5. Today's web speeds are high; don't change anything.
ALSO: If your PNG image has any repeating parts, you can slice the image into three parts and save a lot of space.
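For option 3, a progressive JPEG can be produced with GD by enabling interlacing before saving; the file names and quality value are placeholders:

<?php
// Enable interlacing so imagejpeg() writes a progressive JPEG that renders
// in gradually increasing quality while it downloads.
$img = imagecreatefromjpeg('background.jpg');       // placeholder file name
imageinterlace($img, true);
imagejpeg($img, 'background_progressive.jpg', 75);  // quality 75 is arbitrary
imagedestroy($img);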
The speed of loading the background image is determined (ignoring latency) by the bandwidth of your connection and the size of the image. So if you have, let's say, 128 KB/s and an image of 4096 KB, you need at least
4096 / 128 = 32 s
for it to load.
Since you can't change the bandwidth, the only thing you can do is change the size of the picture. That is, if you can't compress it more, lower the resolution.
If you don't want to lose quality, you could put several layers of background images on your website with different qualities, the better ones over the worse ones. While your page is loading, the low-quality images load fast and you get some background; over time the better-quality images load and the background improves.
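One way to produce such a low-quality first layer with GD; the file names, scale factor, and quality are arbitrary assumptions:

<?php
// Generate a small, heavily compressed preview to use as the first
// background layer; the full-quality original is loaded later on top.
$src     = imagecreatefromjpeg('background.jpg');        // placeholder file name
$preview = imagescale($src, intdiv(imagesx($src), 4));   // quarter-width preview
imagejpeg($preview, 'background_preview.jpg', 30);       // low quality, small file
imagedestroy($preview);
imagedestroy($src);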
Load your images into a database and retrieve them whenever required. The benefit of this is that the database is loaded once when you initialise it and can then return the images whenever they are needed; it is fast compared to any other technique.

Is there a way to make zip or other compressed files that extract more quickly?

I'd like to know if there's a way to make a zip file, or any other compressed file (tar, gz, etc.), that will extract as quickly as possible. I'm just trying to move one folder to another computer, so I'm not concerned with the size of the file. However, I'm zipping up a large folder (~100 MB), and I was wondering if there's a method to extract a zip file quicker, or if another standard can decompress files more quickly.
Thanks!
The short answer is that compression is always a trade-off between speed and size, i.e. faster compression usually means larger files - but unless you're using floppy disks to transfer the data, the time you gain by using a faster compression method means more network time to haul the data about. That said, the speed and compression ratio of different methods vary depending on the structure of the file(s) you are compressing.
You also have to consider availability of software - is it worth spending the time downloading and compiling a compression program? I guess if it's worth the time waiting for an answer here then either you're using an RFC 1149 network or you're going to be doing this a lot.
In which case the answer is simple: test the programs yourself using a representative dataset.
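A rough way to run such a test from PHP, timing the extraction of the same data packed in different formats; the archive names are placeholders, and ZipArchive and PharData ship with stock PHP:

<?php
// Extract each candidate archive of a representative dataset and report
// how long it takes, so the speed/size trade-off can be measured directly.
$archives = ['folder.zip', 'folder.tar.gz'];             // placeholder archive names

foreach ($archives as $archive) {
    $dest  = 'extracted_' . pathinfo($archive, PATHINFO_FILENAME);
    $start = microtime(true);

    if (substr($archive, -4) === '.zip') {
        $zip = new ZipArchive();
        if ($zip->open($archive) === true) {
            $zip->extractTo($dest);
            $zip->close();
        }
    } else {
        (new PharData($archive))->extractTo($dest, null, true); // handles .tar.gz
    }

    printf("%-15s %.2f s\n", $archive, microtime(true) - $start);
}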
