Image size optimisation in Linux

There is a Yahoo service, "Smush.it", that optimises image size.
I am looking for an alternative way to reach the same goal using a Linux application.
A lot of images need to be processed, and uploading them manually one by one does not seem like a good idea.
How can this be done on Linux?

For JPEGs I use JPEGmini, and from what I have tested it gives the best results, keeping the same visible image quality while greatly reducing file size. They have a server version for Linux, which is not cheap and which I have never used.
There's also Mozilla's mozjpeg, which you can use directly from the terminal, but it also reduces image quality.
In some tests I did, mozjpeg produced slightly smaller files than JPEGmini, but with lower image quality.
If you need to shrink PNGs, you could try Trimage or some of the alternatives listed at the same link.
Smush.it's FAQ lists all the tools they are using in their service.
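For batch work, mozjpeg's cjpeg binary can be driven from a small script. Below is a minimal Node.js sketch, assuming mozjpeg is installed so cjpeg is on the PATH; newer mozjpeg builds' cjpeg can read JPEG input directly (if yours cannot, decompress with djpeg first). The folder names and quality setting are illustrative:

const { execFile } = require('child_process');
const { promisify } = require('util');
const fs = require('fs');
const path = require('path');

const run = promisify(execFile);

(async () => {
  // Re-encode every .jpg in ./in to ./out at quality 80.
  for (const name of fs.readdirSync('in').filter(f => f.endsWith('.jpg'))) {
    await run('cjpeg', ['-quality', '80', '-outfile', path.join('out', name), path.join('in', name)]);
  }
})();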

Related

nodejs choosing between JIMP and MOZJPEG

I was wondering if there is a glaring reason to use jimp vs. imagemin-mozjpeg for compressing JPEGs (I am already using both imagemin and jimp in my project: imagemin-webp to serve next-gen images, and jimp to convert PNGs to JPEGs in rare cases). So I am looking for reasoning based on the following:
Performance
Reliability (I have noticed that there are some JPEGs mozjpeg has trouble with and fails on, specifically ones that I have edited with the GNU Image Manipulation Program [GIMP].)
However, if someone has good reasons that don't align with the two aforementioned, I would still like to hear them.
Here are some quick links to the NPM packages mentioned, if anyone needs them:
imagemin-mozjpeg
jimp
Performance
imagemin-mozjpeg uses mozjpeg to process images, and mozjpeg itself is written in C, while jimp does its processing in JavaScript.
As mentioned in jimp's main repository:
An image processing library for Node written entirely in JavaScript, with zero native dependencies.
We know the performance difference between JavaScript and C.
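For reference, here is a minimal sketch of how each library is typically invoked, assuming CommonJS-era releases (imagemin 7.x with its destination option, imagemin-mozjpeg 9.x) so that require() works; the glob, paths, and quality value are illustrative:

const imagemin = require('imagemin');
const imageminMozjpeg = require('imagemin-mozjpeg');
const Jimp = require('jimp');

(async () => {
  // Native route: imagemin drives mozjpeg under the hood.
  await imagemin(['images/*.jpg'], {
    destination: 'build/mozjpeg',
    plugins: [imageminMozjpeg({ quality: 75 })],
  });

  // Pure-JavaScript route: jimp decodes and re-encodes on the JS thread.
  const image = await Jimp.read('images/photo.jpg');
  await image.quality(75).writeAsync('build/jimp/photo.jpg');
})();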
Reliability
I do not want to put much opinion in this section, but we can look directly at each repository's statistics.
mozjpeg:
Stars: 4.1k
Open issues: 76
Closed issues: 186
jimp:
Stars: 10.3k
Open issues: 157
Closed issues: 430
I do not side with either. Both have worked well. I really appreciate the work of the maintainers and contributors of these libraries.
Yes, and it goes far beyond the performance of the compression process (i.e. how long it takes to compress an image, which is also important) or the relative development activity of the library (which is arguably less important).
I highly recommend reading Is WebP really better than JPEG? (and this discussion), which shows that even among JPEG compression libraries, the implementation can have a significant impact on compression ratio.
In short, MozJPEG produces JPEG files that are 10% smaller than those produced by the reference JPEG implementation (libjpeg). Even more interesting, for images larger than 500px, MozJPEG actually produces JPEG files that are smaller than WebP.
This leads to an interesting question. It will depend on your exact use case and priorities, but it might actually make sense to simplify and use MozJPEG for everything, and ditch WebP entirely.
Looking forward, AVIF might make sense as a true next-gen format (delivering 30% smaller images), but browser support is "coming soon". Alternatively, JPEG XL also looks promising, but the standard hasn't been finalized yet. HEIC is problematic and I wouldn't count on wide support.
Warning regarding jimp:
As jimp is implemented in pure JavaScript, all image operations end up blocking the JS thread. This is catastrophic in node.js.
You must use the new Worker Threads API manually to run jimp on a thread.
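A minimal sketch of that pattern, with both files shown together; the file names and quality setting are illustrative:

// main.js: dispatch the jimp work so the event loop stays responsive.
const { Worker } = require('worker_threads');

function compressOnWorker(input, output) {
  return new Promise((resolve, reject) => {
    const worker = new Worker('./jimp-worker.js', { workerData: { input, output } });
    worker.once('message', resolve);
    worker.once('error', reject);
  });
}

// jimp-worker.js: blocks its own thread, not the main one.
const { parentPort, workerData } = require('worker_threads');
const Jimp = require('jimp');

Jimp.read(workerData.input)
  .then(img => img.quality(75).writeAsync(workerData.output))
  .then(() => parentPort.postMessage('done'));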
Finally, a warning regarding selecting image manipulation libraries generally in the node.js world:
From what I've seen, a majority of them end up writing temp files to disk, invoking a child process to do the actual work, and then reading the result back in (e.g. something like child_process.exec('imageresizer -in temp/file.jpg -out temp/resized.jpg')).
This is not an ideal way to do this, and it may be especially surprising when the API looks something like var img = await resizeImg(buffer), which does not look like it writes to disk.
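To illustrate why that is surprising, here is a hypothetical wrapper (all names made up) whose signature looks purely in-memory but which round-trips through the filesystem and a child process:

const { execFile } = require('child_process');
const { promisify } = require('util');
const fs = require('fs/promises');

// Looks like an in-memory transform from the caller's side...
async function resizeImg(buffer) {
  await fs.writeFile('temp/file.jpg', buffer);
  // ...but actually shells out and touches the disk twice.
  await promisify(execFile)('imageresizer', ['-in', 'temp/file.jpg', '-out', 'temp/resized.jpg']);
  return fs.readFile('temp/resized.jpg');
}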
imagemin is one such library; I would avoid it where performance matters.
Instead, search for modules that implement bindings to native code on the libuv thread pool. This will usually be the most performant way to work with images, since the operations happen on a thread in your node process and with minimal memory copying — and no disk I/O at all.
I've never used it, but node-mozjpeg looks like a good candidate.
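As an illustration of the native-bindings approach, here is how a libuv-thread-pool-backed library such as sharp is typically used (sharp is my example here, not one the answer endorses; recent versions even expose a mozjpeg flag on their JPEG encoder):

const sharp = require('sharp');

(async () => {
  // Decoding, resizing, and re-encoding happen in native code on libuv's
  // thread pool; the JS thread only coordinates.
  await sharp('images/photo.jpg')
    .resize({ width: 800 })
    .jpeg({ quality: 75, mozjpeg: true })
    .toFile('build/photo.jpg');
})();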

Compress images in smartest way

I have many images in the source folder, each around 10 MB. I need to perform two operations on each image:
Compress the image and place it in destination folder 1
Create a thumbnail and place it in destination folder 2
As there is a large number of images, and they are all huge, can you guide me to the fastest way to achieve this while consuming the least memory?
Disclaimer: I'm the author.
The http://imageresizing.net/ library does very memory-efficient image resizing; it's designed for server-side use, so it is quite fast while keeping memory use minimal.
It's also simple to use.
// Compress the original to destination folder 1 as a 90%-quality JPEG.
ImageBuilder.Current.Build(sourceFile, destFile, new ResizeSettings("format=jpg;quality=90"));
// Create a thumbnail of at most 100x100 pixels in destination folder 2.
ImageBuilder.Current.Build(sourceFile, destFile, new ResizeSettings("maxwidth=100;maxheight=100;format=jpg"));
There are 50+ different options - so pretty much any kind of automatic cropping, padding, seam carving, rotation, flipping, watermarking, etc. is possible.
I'm also working on a plugin which uses WIC for simple resize operations, which might give you a 2x speed boost. Let me know if you're interested in beta-testing it.

Need to know standards for PNG files in web graphics?

I'm starting to venture out from using JPEG and GIF files to PNG, and I was wondering if there are any standards for using PNG besides IE's lack of support for it. I also want to know if there are any current articles about the settings I should be using when optimizing for the web. Right now I'm using Photoshop to do this; should I be using Fireworks instead?
Which optimizations you use depends on the type of image. If your image contains only a few colors, you might use PNG-8; otherwise you may need PNG-24. The same goes for the use of transparency/alpha blending.
Photoshop's Save for Web feature does a fine job, but when your website has a lot of visitors, you may benefit from using PNGCrush to compress your images further. You can use the YSlow plugin for Firefox to test how much bandwidth you can save by crushing your images.
Also, you can make use of CSS sprites if your design allows it. This can result in fewer (but larger) images and therefore fewer requests and sometimes less bandwidth. But this doesn't depend on the type of images you use.
PNG is supported by IE, by the way. Only alpha transparency is not supported by IE 6, but there are CSS/JavaScript tricks to work around that, although they do not work for background images.
I wouldn't quit using JPEG. JPEG is very useful when it comes to photos. PNG files are convenient for small images like buttons and graphical elements, and for images with large flat areas, like screenshots.

High quality image re-sampling in Mono/C#/ASP.NET

I've developed a site that requires high quality resizing of uploaded photos. The site works perfectly under ASP.NET on Windows. This afternoon I tried running it under Mono/Apache/Ubuntu 10.10. To my surprise, it worked - except for the image resampling.
It seems the libraries underlying Mono's Graphics/GDI+ implementation don't implement the bi-cubic interpolation mode (see Mono Ignores Graphics.InterpolationMode?).
So I'm looking for a library that can do high quality image resizing. I'm willing to put in the effort to interop to it from C# since this is important functionality and I'd like to be able to run under mono if at all possible. I don't really need any other graphics processing capabilities - just resize.
Follow up: as suggested below ImageMagick works really well for this and was pretty easy to interop to. More details here: http://www.toptensoftware.com/blog/posts/17/high-quality-image-resampling-in-monolinux
ImageMagick is both a command line tool and a library with high quality interpolation and antialiasing algorithms.

SVG to PDF on a shared linux server

I have a website which uses SVG for an interactive client side thingamabob. I would like to provide the option to download a PDF of the finished output. I can pass the final SVG output back to the server, where I want to convert to PDF, then return it to the client for download.
This would need to work on a headless shared Linux server, where installation or compilation is either an enormous pain or impossible. The website is PHP, so the ideal solution would be PHP, or use software that's easily installed on a shared webserver. Python, Perl and Ruby are available, along with the usual things you might expect on a Linux box. Solutions that involve Cairo, scripting Inkscape, or installation more complex than 'FTP it up' are probably out. Spending large amounts of money is also out, naturally. As this is a shared server, memory- and/or CPU-hungry solutions are also out, as they will tend to get killed; this more or less rules out Batik.
The nearest that I've got so far is this XSL transform which I can drive from PHP and then squirt the resulting postscript through ps2pdf (which is already installed). The only problem with this is that it doesn't support SVG paths - if it did, it would be perfect.
There are a bunch of related questions on StackOverflow, all of which I've read through, but they all assume that you can either install stuff, spend money, or both.
Does anyone have an off-the-shelf solution to this, or should I just spend some downtime trying to add paths support to that XSL transform?
Thanks,
Dunc
I stumbled across TCPDF today, which would have been perfect for this had I known about it at the time. It's just a collection of pure PHP classes, with no external dependencies for most things.
It can build PDFs from scratch, and you can include pretty much anything you want in there, including SVG (amongst many, many other things), as shown in these examples:
http://www.tcpdf.org/examples.php
Main project page is here:
http://www.tcpdf.org/
Sourceforge page is here:
http://sourceforge.net/projects/tcpdf/
You can use Apache FOP's free Batik SVG toolkit, which has a transcoder API to transform SVG to PDF.
download link
You will need to write a tiny bit of Java. There are code examples here; note you will need to use org.apache.fop.svg.PDFTranscoder rather than the transcoder used in the examples.
You should be able to do this without installing anything on your machine – just drag the jars on there and run a script. I quote:
All other libraries needed by Batik are included in the distribution. As a consequence the Batik archive is quite big, but after you have downloaded it, you will not need anything else.
Have you looked at ImageMagick? I suspect you would also need Ghostscript to complete the loop, which might make installation difficulty and performance a problem.
I'd suggest giving PrinceXML a try; they provide various addons (including one for PHP) and it can output PDF from SVG/HTML/XML.
I have used TCPDF (http://www.tcpdf.org/) in many projects and it works in almost every use case.
Here is the SVG example: https://tcpdf.org/examples/example_058/
The following code should help you:
// Draw an SVG at (15, 30), auto-sized ($w and $h empty), with a border and a link.
$pdf->ImageSVG($file='images/testsvg.svg', $x=15, $y=30, $w='', $h='', $link='http://www.tcpdf.org', $align='', $palign='', $border=1, $fitonpage=false);
// Draw a second SVG at (30, 100), scaled to a height of 100 units.
$pdf->ImageSVG($file='images/tux.svg', $x=30, $y=100, $w='', $h=100, $link='', $align='', $palign='', $border=0, $fitonpage=false);
