Compress .ipa MonoTouch - Xamarin.iOS

Starting from the assumption that I have already deleted all unnecessary files, my app contains a folder of JPG images (1024x700 is the minimum permitted resolution) whose total size is 400 MB. When I generate the .ipa, its size is 120 MB. I have tried converting those images to PNG and regenerating the .ipa, but the size grows beyond 120 MB (to 140 MB) and the quality is a bit worse.
Which best practices are recommended to reduce the size of the application?
P.S. These files are shown as a gallery.

One tool we used in our game, Draw a Stickman: EPIC, is smusher.
To install it (you need Ruby or the Xcode command line tools):
sudo gem install smusher
It might print some errors during installation that you can ignore.
To use it:
smusher mypng.png
smusher myjpg.jpg
The tool sends the picture off to Yahoo's smush.it web service and compresses the image losslessly.
Generally you can save maybe 20% file size with no loss in quality.
There are definitely other techniques we used, like indexed PNGs, but you are already using JPGs, which are smaller.
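If you prefer something local and scriptable rather than a web service, a quality-capped re-encoding pass over the gallery folder before building the .ipa can achieve a similar reduction. Below is a rough sketch using Pillow instead of smusher (a different tool from the one above, and mildly lossy unlike smusher's lossless pass); the folder paths and the quality value are placeholders to adapt:

import os
from PIL import Image

src = "Resources/gallery"        # hypothetical folder holding the gallery JPGs
dst = "Resources/gallery_opt"    # optimized copies are written here
os.makedirs(dst, exist_ok=True)

for name in os.listdir(src):
    if not name.lower().endswith((".jpg", ".jpeg")):
        continue
    with Image.open(os.path.join(src, name)) as im:
        # Re-encode with optimized Huffman tables and a progressive scan;
        # quality=80 is slightly lossy but usually invisible for photo galleries.
        im.save(os.path.join(dst, name), "JPEG",
                quality=80, optimize=True, progressive=True)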

Related

How can I avoid a "Segmentation Fault (core dumped)" error when loading large .JP2 images with PIL/OpenCV/Matplotlib?

I am running the following simple line in a short script without any issues (environment: Python 3.5.2, PIL 1.1.7, OpenCV 2.4.9.1, Matplotlib 3.0.1):
...
# for example:
img = plt.imread(i1)
...
However, if the size of the loaded .JP2 exceeds roughly 500 MB, Python 3 throws the following error when attempting to load the image:
"Segmentation Fault (core dumped)"
It should not be a RAM issue, as only ~40% of the available RAM is in use when the error occurs, and the error remains the same when RAM is removed from or added to the computer. The error also remains the same when using other ways to load the image, e.g. with PIL.
Is there a way to avoid this error or to work around it?
Thanks a lot!
Not really a solution, more of an idea that may work or help other folks think up similar or further developments...
If you want to do several operations or crops on each monster JP2 image, it may be worth paying the price up-front, just once, to convert it to a format that ImageMagick can subsequently handle more easily. Your image is 20048x80000 of 2-byte shorts, so you can expand it out to a 16-bit PGM file like this:
convert monster.jp2 -depth 16 image.pgm
and that takes around 3 minutes. However, if you then want to extract part of the image some way down its height, you can extract it from the PGM:
convert image.pgm -crop 400x400+0+6000 tile.tif
in 18 seconds, instead of from the monster JP2:
convert monster.jp2 -crop 400x400+0+6000 tile.tif
which takes 153 seconds.
Note that the PGM will take lots of disk space... I guess you could try the same thing with a TIFF, which can hold 16-bit data too and could maybe be LZW-compressed. I guess you could also use libvips to extract tiles even faster from the PGM file.
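For the libvips suggestion, a minimal sketch with the pyvips binding (assuming pip install pyvips; the file name and crop coordinates simply mirror the ImageMagick commands above):

import pyvips

# Open the intermediate 16-bit PGM produced by the convert step above.
image = pyvips.Image.new_from_file("image.pgm")

# crop(left, top, width, height) matches -crop 400x400+0+6000.
tile = image.crop(0, 6000, 400, 400)
tile.write_to_file("tile.tif")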

Node.js: How do I extract an embedded thumbnail from a jpg without loading the full jpg first?

I'm creating a Raspberry Pi Zero W security camera and am attempting to integrate motion detection using Node.js. Images are being taken with the Pi camera module at 8 megapixels (3280x2464 pixels, roughly 5 MB per image).
On a Pi Zero, resources are limited, so loading an entire image from file into Node.js may limit how fast I can capture and then evaluate large photographs. Surprisingly, I can capture about two 8MB images per second in a background time-lapse process, and I hope to keep capturing the largest-sized images at least roughly once per second. One thing that could help with this is extracting the embedded thumbnail from the large image (the thumbnail size is customizable in the raspistill application).
Do you have thoughts on how I could quickly extract the thumbnail from a large image without loading the full image in Node.js? So far I've found a partial answer here. I'm guessing I would manage this through a buffer somehow?
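To picture the "buffer somehow" idea: the EXIF thumbnail lives in the APP1 segment near the start of the JPEG, so only the first few tens of kilobytes of the file ever need to be read. Below is a simplistic sketch in Python (chosen to match the other snippets here; in Node.js the same partial read could be done into a Buffer and handed to an EXIF-parsing module). The file names are placeholders and the marker scan is a heuristic, not a full EXIF parser:

def extract_thumbnail(path, head_size=64 * 1024):
    # APP1 segments are capped at 64 KB, so the embedded thumbnail, which is
    # itself a complete JPEG (SOI ... EOI), sits inside this file prefix.
    with open(path, "rb") as f:
        head = f.read(head_size)
    # Skip the outer SOI at offset 0 and look for the thumbnail's own SOI.
    soi = head.find(b"\xff\xd8\xff", 2)
    if soi == -1:
        return None                      # no embedded thumbnail found
    eoi = head.find(b"\xff\xd9", soi)
    if eoi == -1:
        return None
    return head[soi:eoi + 2]             # bytes of a standalone JPEG thumbnail

thumb = extract_thumbnail("capture.jpg") # "capture.jpg" is a placeholder
if thumb:
    with open("capture_thumb.jpg", "wb") as out:
        out.write(thumb)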

Azure Media Services encoded file size

I have a similar problem to this one: Azure Media Services encoded mp4 file size is 10x the original. I have a 500 MB mp4 file. After encoding with 'H264 Multiple Bitrate 720p', the file size is 11.5 GB. Processing cost a lot too. It's no problem with one file, but I have to be prepared to share about 100 1 GB mp4 files. Does it have to cost this much? Maybe I'm missing something? I'd like to share the files with AES encryption.
A lot depends on what your final output files will be.
1) You have a 500 MB mp4 file. Is it SD, HD, 720p, 1080p? The resolution you started at matters. If your source was SD, the 720p preset will scale your video up and use more data.
2) The bitrate you started out at matters. If you have such a small file, it is likely encoded at a very low bitrate. Using the 720p multi-bitrate preset will also throw more bits at the file, basically creating bits that are not necessary for your source, as encoding cannot make things look better.
3) You started with a single bitrate; the preset you are using generates multiple bitrates (several MP4 files at different bitrates and resolutions). The combination of starting with a small, low-bitrate file and blowing it up to larger resolutions and bitrates, multiplied by 6 or more files, tends to add a lot of data to the output (see the rough calculation below).
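For a rough sanity check on those numbers (all values below are hypothetical, not the actual preset settings): MP4 size is approximately bitrate times duration, so an output ladder whose combined bitrate is many times the source bitrate produces a correspondingly larger total.

# Back-of-the-envelope only; the duration and ladder bitrates are made up.
duration_s = 60 * 60                           # assume a one-hour video
source_mb = 500
source_mbps = source_mb * 8 / duration_s       # ~1.1 Mbps source bitrate

ladder_mbps = [6.0, 4.7, 3.4, 2.25, 1.5, 1.0]  # illustrative 6-layer 720p-style ladder
output_gb = sum(ladder_mbps) * duration_s / 8 / 1000
print(f"source ~{source_mbps:.1f} Mbps -> output ~{output_gb:.1f} GB")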
The solution to your problem is to use "custom" presets. You don't have to use the built-in presets that we define. You can modify them to suit your needs.
I recommend downloading the Azure Media Services Explorer tool at http://aka.ms/amse and using that to modify and submit your own custom JSON presets that match your output requirements better.

How to load large PDF files fast? [duplicate]

I am hosting an ~18 MB PDF file in an S3 bucket and trying to fetch it, but it takes a long time on a slow network. I also tried converting the file to HTML and rendering that instead, but it grows to around 48 MB, which makes the phone hang. I have also moved the S3 bucket to the Singapore region to reduce latency and have tried piping the file through my server. Now the only option I'm left with is to split the PDF into one image per page and load them on request. Is there anything I am missing to make the PDF load time bearable?
You have the following options, since you are facing limitations on end users' devices:
Split large PDF files into several parts and allow users to download these parts separately (see the sketch after this list).
Linearize the PDF files; this changes how the files are loaded but does not decrease their size, so you may still face crashes on end-user devices.
Optimize the file size of the PDFs by re-compressing the images inside them.
Render low-resolution JPEG images of the PDF pages (with Ghostscript or ImageMagick), but please do not use JPEG as the main format: JPEG compression is designed for photographic content, not for text.
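A minimal sketch of the splitting option above, assuming the pypdf package is available (pip install pypdf); the file names and the pages-per-part value are placeholders:

from pypdf import PdfReader, PdfWriter

reader = PdfReader("large.pdf")
pages_per_part = 10                            # arbitrary chunk size

for start in range(0, len(reader.pages), pages_per_part):
    writer = PdfWriter()
    for i in range(start, min(start + pages_per_part, len(reader.pages))):
        writer.add_page(reader.pages[i])
    # Each part can then be uploaded to S3 and downloaded on demand.
    with open(f"large_part_{start // pages_per_part + 1}.pdf", "wb") as out:
        writer.write(out)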

How to check png file if it's a decompression bomb

I am playing with image uploads to a website and I found out about the decompression bomb attacks that can occur when uploading PNG files (and some other formats) is allowed. Since I am going to transform the uploaded images, I want to make sure I don't become a victim of this attack. So when it comes to checking whether a PNG file is a bomb, can I just read the file's headers and make sure that the width and height do not exceed a set limit, like 4000x4000 or whatever? Is that a valid method, or is there a better way?
Besides large width and height, decompression bombs can also have excessively large iCCP, zTXt, and iTXt chunks. By default, libpng defends against those to some degree.
Your "imagemagick" tag indicates that you are asking how to do this with ImageMagick. ImageMagick's default width and height limits are very large; "convert -list resource" reports
Resource limits: Width: 214.7MP Height: 214.7MP Area: 8.135GP
Image width and height limits in ImageMagick come from the command-line "-limit" option, which I suppose can also be set via an equivalent directive in the various ImageMagick APIs. ImageMagick inherits the limits on iCCP chunks, etc., from libpng.
Forged smaller width and height values in the IHDR chunk don't fool either libpng or ImageMagick; they just issue an "Extra compressed data" warning and skip the remainder of the IDAT data without decompressing it.
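As for the header check the question asks about, reading just the IHDR chunk looks like this; a minimal sketch in Python (the 4000x4000 limit comes from the question, and per the answer above it guards only against pixel-dimension bombs, not oversized iCCP/zTXt/iTXt chunks):

import struct

MAX_PIXELS = 4000 * 4000                       # policy limit from the question

def png_dimensions(path):
    # 8-byte PNG signature + 8-byte chunk header + first 8 bytes of IHDR data
    with open(path, "rb") as f:
        header = f.read(24)
    if header[:8] != b"\x89PNG\r\n\x1a\n" or header[12:16] != b"IHDR":
        raise ValueError("not a valid PNG file")
    width, height = struct.unpack(">II", header[16:24])
    return width, height

w, h = png_dimensions("upload.png")            # "upload.png" is a placeholder
if w * h > MAX_PIXELS:
    raise ValueError(f"rejecting upload: {w}x{h} exceeds the limit")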
