JPG and PNG compression, PHP and Ubuntu with jpegtran and pngcrush

I have several hundred images I need to optimise and compress. I have found the following script on GitHub: https://gist.github.com/ryansully/1720244 which works OK. However, the file sizes of the JPGs are not much smaller after compression.
For example, one file is 870.24 KB before compression and 724.97 KB after. But if I run the same image through compressjpg.com, it reduces the file size to around 290 KB.
How can I achieve this level of compression with jpegtran? Is it even possible?
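For what it's worth, jpegtran only performs lossless transformations (Huffman optimization, progressive re-encoding, metadata stripping), so it will never reach the ~290 KB that a lossy service produces; that requires re-encoding at a lower quality. Below is a minimal sketch of both passes, assuming an Ubuntu box with jpegoptim installed; the choice of jpegoptim and the quality ceiling of 75 are my assumptions, not part of the gist.

# Lossless pass, roughly what the gist's jpegtran call does:
# optimize Huffman tables, produce a progressive JPEG, strip markers
jpegtran -copy none -optimize -progressive -outfile out.jpg in.jpg

# Lossy pass (assumed alternative): re-encode in place with a quality cap,
# which is what gets file sizes down into the ~290 KB range
jpegoptim --max=75 --strip-all *.jpg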


When not to do maximum compression in PNG?

Intro
When saving PNG images through GIMP, I've always used level 9 (maximum) compression, as I knew that it's lossless. Now I have to specify a compression level when saving a PNG image through the GD extension of PHP.
Question
Is there any case where I shouldn't compress a PNG to the maximum level? Are there any compatibility issues? If there is no problem, why ask the user at all; why not automatically compress to the maximum?
Each higher PNG compression level requires significantly more memory and processing power to compress (and, to a lesser degree, to decompress).
There is a rapid tail-off in the compression gains from each level, however, so choose one that balances the web server resources available for compression against your need to reduce bandwidth.
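If you want to see the trade-off for yourself, it is easy to measure from the shell. A rough sketch (bash, ImageMagick; input.png is a placeholder file) — the same 0-9 level is what GD's imagepng() accepts as its third argument:

# Re-save the same image at a few zlib levels and compare time and size
for level in 1 6 9; do
  time convert input.png -define png:compression-level=$level out-$level.png
  ls -lh out-$level.png
done

Typically the size difference between levels 6 and 9 is small, while the time keeps climbing.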

Google Earth crashed or still loading?

How do you know whether the (free) Google Earth app is still loading data or has crashed for some reason?
I'm loading a huge KML file; CPU usage is at 100%, but it never stops processing.
Are there any limits on the size of the displayed KML file?
How many megabytes of KML can the Google Earth PC application display without crashing?
I don't think there are any file size limitations in Google Earth when using .kml files. There are, however, limits on the size of images, so if your KML is trying to load images (such as screen overlays), then perhaps your problem lies there.
This is related to the specifications of your computer; you can find the maximum resolution of images you can import under the 'About' dialog. I am not sure where to find info about the KB size limit for images.
Next, you can try creating a .kmz out of the .kml. A KMZ is simply a compressed archive, just like a .zip; learn more about it here:
http://code.google.com/apis/kml/documentation/kmzarchives.html
You can also try breaking the file up into smaller ones, either by using network links
http://code.google.com/apis/kml/documentation/kmlreference.html#networklink
or Regions
http://code.google.com/apis/kml/documentation/regions.html
Breaking the file up into smaller pieces might also help you discover which part or element of the KML is causing trouble, if there is some kind of format error.
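If you go the KMZ route, no special tool is needed; a KMZ is an ordinary zip archive whose main file is conventionally named doc.kml. A quick sketch from the command line (the file and folder names here are placeholders):

# Rename the main KML and zip it together with any local images it references
mv huge_layer.kml doc.kml
zip -r mylayer.kmz doc.kml images/

Google Earth opens the resulting .kmz directly.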

My J2ME application takes a long time to start running

I made a J2ME application that consists almost entirely of text files.
Size: 3 MB.
The problem is that when I run it on my mobile, it takes about 10 seconds to start. I do nothing on startup. I have another app that is 7 MB, but it runs without any delay!
JAR file links:
mine: http://s1.picofile.com/file/7252355799/mine.jar.html
correct one: http://s1.picofile.com/file/7252346448/correctone.jar.html
Install both of them and run them; mine takes a few seconds to show up, but the other appears immediately.
You need to take into account that JAR is a compressed file format.
To use the JAR file's contents, the device first has to decompress it. How long decompression takes depends very much on the JAR's contents, so JAR file size may not be directly related to startup delay.
You are better off using a zip tool (most, if not all, such tools can handle the JAR format) to inspect the contents of the JAR files you work with; this might give you a better indication of what to expect at startup.
For example, I can easily imagine your "7 MB" JAR file containing just a handful of JPEG images with a total size of, well, about the same 7 MB, and decompressing very quickly.
As for the "3 MB of text files": if these decompress to something like a few hundred files with a total size of 50 MB, then I would not be surprised if it takes a long time to unpack at startup.

Is there a way to make zip or other compressed files that extract more quickly?

I'd like to know if there's a way to make a zip file, or any other compressed file (tar, gz, etc.), that will extract as quickly as possible. I'm just trying to move one folder to another computer, so I'm not concerned with the size of the file. However, I'm zipping up a large folder (~100 MB), and I was wondering whether there's a way to extract a zip file more quickly, or whether another format can decompress files faster.
Thanks!
The short answer is that compression is always a trade-off between speed and size, i.e. faster compression usually means a larger file; but unless you're using floppy disks to transfer the data, the time you gain by using a faster compression method means more network time to haul the data about. Having said that, the speed and compression ratio of different methods vary depending on the structure of the file(s) you are compressing.
You also have to consider the availability of software: is it worth spending the time downloading and compiling a compression program? I guess if it's worth the time waiting for an answer here, then either you're using an RFC 1149 network or you're going to be doing this a lot.
In which case the answer is simple: test the programs yourself using a representative dataset.
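If you do want a quick test along those lines, something like the following compares a plain (uncompressed) tar, gzip, and zip on the folder you actually plan to move; the folder and file names are placeholders:

# Build each archive from the same ~100 MB folder
tar -cf test.tar myfolder
tar -czf test.tgz myfolder
zip -r -q test.zip myfolder

# Time the extractions
mkdir -p /tmp/out-tar /tmp/out-tgz
time tar -xf test.tar -C /tmp/out-tar
time tar -xzf test.tgz -C /tmp/out-tgz
time unzip -q test.zip -d /tmp/out-zip

For a folder of already-compressed or incompressible data, the plain tar usually extracts fastest, since it skips the decompression work entirely, at the cost of a larger transfer.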

Break a zip file into INDIVIDUAL pieces

What I am trying to do is this;
I get zip files from clients that are generally about 1.5 GB. They contain pictures only. I need to turn them into 100 MB files to actually upload them to my server. The problem is that if I split my 1.5 GB zip file, I need to re-attach all the pieces before I can use any one of them.
When I break the 1.5 GB zip file into 100 MB zip files, I need each 100 MB file to act as a separate, new archive so the server can unzip it and load the pictures into the database. I have looked into this problem, but most of the threads are about how to split a zip file. That is partially what I want to do, and I can do it now, but I also need each of those smaller pieces to unzip on its own. Is it possible to break a zip file into smaller pieces that act as new, standalone zip files?
Thanks.
I have the same question. I think unzip in the Linux shell cannot handle a zip file larger than 1 GB, and I need to unzip these files unattended on a headless NAS. What I do for now is unzip everything on the desktop HD, select files until they almost reach 1 GB, archive and delete them, then select the next set of files until I reach 1 GB again.
Your question is not clear, but I will try to answer it based on my understanding of your dilemma.
Questions
Why does the file size need to be limited?
Is it the transfer to the server that is the constraining factor?
Is the application (on the server) unable to process files over a certain size?
Can the process be altered so that image file fragments can be recombined on the server before processing?
What operating systems are in use on the client and the server?
Do you have shell access to the server?
A few options
Use ImageMagick to reduce the images so they fit within the file size constraints
On Linux/Mac, this is relatively straightforward to do:
split -b 1m my_large_image.jpg (you need the -b option for it to work on binary files)
Compress each file into its own zip
Upload to the server
Unzip
Concatenate the fragments back into an image file:
cat xaa xab xac xad (etc) > my_large_image.jpg
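If the real requirement is standalone archives rather than split volumes, one approach is to unpack the client's 1.5 GB zip and re-pack the extracted pictures into batches of roughly 100 MB, each of which is a normal zip that the server can unzip on its own. A rough sketch, assuming a Linux shell with zip/unzip available and the pictures sitting in a single flat folder; client.zip, extracted/ and the batch names are placeholders:

# Unpack the original archive
unzip -q client.zip -d extracted

# Group files into ~100 MB batches, each written to its own standalone zip
limit=$((100 * 1024 * 1024)); batch=1; size=0
mkdir -p batches
for f in extracted/*; do
  s=$(stat -c %s "$f")                      # file size in bytes (GNU stat)
  if [ $size -gt 0 ] && [ $((size + s)) -gt $limit ]; then
    batch=$((batch + 1)); size=0            # start a new batch
  fi
  zip -q -j "batches/part$batch.zip" "$f"   # -j drops the directory path
  size=$((size + s))
done

Since JPEGs barely compress further, the uncompressed totals used here track the final zip sizes closely enough for a 100 MB upload limit.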
