Recompress JPEG images only where the quality is > 80, using mozjpeg/jpegtran/jpegoptim

Hi, I want to recompress JPEG images that were uploaded already optimized but, because of a software error, were all re-encoded at 100% JPEG quality. (So we upload, say, an 80%-quality image, and after upload it is 100% quality and the file size is enormous!)
So how can I
- first, find only the JPEG files that have a quality setting > 80 (or, better, a quality setting == 100)
- and then optimize only these images down to a quality setting of 80?
I appreciate your help.

You can use jpegoptim to compress the images with the command:
jpegoptim --all-progressive --strip-com --strip-exif --strip-iptc --strip-xmp -T1 -m80
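Putting the two steps together, here is a minimal sketch, assuming ImageMagick's `identify` is available to read the stored quality estimate (`%Q`) and jpegoptim to recompress; the `/var/www/uploads` path is just an example:

```shell
#!/bin/sh
# Sketch: recompress only JPEGs whose estimated quality is above 80.
# Assumes ImageMagick (identify) and jpegoptim are installed;
# /var/www/uploads is an example path.
find /var/www/uploads -type f \( -iname '*.jpg' -o -iname '*.jpeg' \) |
while IFS= read -r f; do
    q=$(identify -format '%Q' "$f" 2>/dev/null) || continue
    if [ "${q:-0}" -gt 80 ]; then
        jpegoptim --all-progressive --strip-com --strip-exif \
                  --strip-iptc --strip-xmp -T1 -m80 "$f"
    fi
done
```

Note that `%Q` is ImageMagick's estimate of the quality the file was saved at, so files at or below 80 are left untouched.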

Related

Video size optimization

I'm working on a task that should optimize a video's size before uploading it to the server in a web application, so I want my model to automatically optimize the size of each input video.
I have tried several approaches, such as FFmpeg:
I used libx264 and libx265 as codecs; with some videos this increases the video size, with others it shrinks it only by a small ratio, and it takes very long to generate the output file.
For example, with a video of size 8 MB:
input = {input_name: None}
output = {output_name: '-n -preset faster -vcodec libx265 -crf 28'}
The output file is 10 MB.
I also tried OpenCV, but the output videos aren't written; the file appears with 0 KB size.
For example, with an input video of resolution 1280×544, I want to downscale it with:
cap = cv2.VideoCapture(file_name)
codec = cv2.VideoWriter_fourcc(*'XVID')  # 'XDIV' is not a valid FourCC, so the writer silently fails
out = cv2.VideoWriter(output_file, codec, 28.0, (640, 480))
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (640, 480))  # cap.set(3, ...) / cap.set(4, ...) does not rescale frames read from a file
    out.write(frame)
    cv2.imshow('video', frame)
cap.release()
out.release()
I'm a little confused: which parameters of the input and output videos should I consider in order to achieve a real size reduction for each specific video? Is it only the codec, width, and height?
What is the most effective approach to do this?
Should I build a predictive model to estimate suitable output-file parameters, or is there a method that adjusts them automatically?
If there's an illustrative example, please provide it.
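Two notes on the FFmpeg attempt above: the option string is missing the dash on `-preset`, and H.265 at CRF 28 can still produce a larger file when the source is already heavily compressed, because CRF targets constant quality, not a size. A plain command-line sketch (assuming an ffmpeg build with libx265; the file names are examples) that also downscales, which is usually the biggest lever on size:

```shell
# Sketch: quality-based (CRF) re-encode with a downscale.
# Assumes ffmpeg compiled with libx265; input.mp4/output.mp4 are example names.
# scale=640:-2 fixes the width at 640 and keeps the aspect ratio (even height).
ffmpeg -n -i input.mp4 -vf scale=640:-2 \
       -c:v libx265 -preset faster -crf 28 \
       -c:a aac -b:a 96k output.mp4
```

If the output must never exceed the input size, compare the two file sizes afterwards and keep the smaller one; CRF alone does not guarantee shrinkage.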

Node.js: How do I extract an embedded thumbnail from a jpg without loading the full jpg first?

I'm creating a Raspberry Pi Zero W security camera and am attempting to integrate motion detection using Node.js. Images are taken with the Pi camera module at 8 megapixels (3280×2464 pixels, roughly 5 MB per image).
On a Pi Zero, resources are limited, so loading an entire image from file into Node.js may limit how fast I can capture and then evaluate large photographs. I currently capture about two images per second in a background time-lapse process and hope to keep capturing the largest-size images roughly once per second at least. One thing that could help is extracting the embedded thumbnail from the large image (the thumbnail size is customizable in the raspistill application).
Do you have thoughts on how I could quickly extract the thumbnail from a large image without loading the full image into Node.js? So far I've found a partial answer here; I'm guessing I would manage this through a buffer somehow?
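One low-overhead option (a sketch, not Node-specific) is exiftool, which parses only the EXIF segment at the head of the file rather than decoding the full 8 MP image; from Node.js it could be spawned with child_process. This assumes the capture embeds an EXIF thumbnail, which raspistill does (its size is the customizable setting mentioned above); `capture.jpg` is an example file name:

```shell
# Sketch: extract the embedded EXIF thumbnail without decoding the main image.
# Assumes exiftool is installed; capture.jpg is an example filename.
exiftool -b -ThumbnailImage capture.jpg > thumb.jpg
```

The `-b` flag makes exiftool write the raw binary tag value, so `thumb.jpg` is itself a small, complete JPEG you can feed to your motion-detection step.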

Azure Media Services encoded file size

I have a similar problem to this: Azure Media Services encoded mp4 file size is 10x the original. I have a 500 MB mp4 file; after encoding with 'H264 Multiple Bitrate 720p' the file size is 11.5 GB, and the processing cost a lot too. That's no problem for one file, but I have to be prepared to share about 100 mp4 files of roughly 1 GB each. Does it have to cost so much, or am I missing something? I'd like to share the files with AES encryption.
There are a lot of factors that determine the size of your final output file.
1) You have a 500 MB mp4 file. Is it SD, HD, 720p, 1080p? The resolution you started at matters: if you had SD originally, the 720p preset will scale your video up and use more data.
2) The bitrate you started at matters. If you have such a small file, it is likely encoded at a very low bitrate. The 720p multi-bitrate preset will also throw more bits at the file, essentially creating bits that are not necessary for your source, as encoding cannot make things look better.
3) You started with a single bitrate, and the preset you are using generates multiple bitrates (several MP4 files at different bitrates and resolutions). The combination of starting with a small, low-bitrate file, blowing it up to larger resolutions and bitrates, and multiplying that by 6 or more output files tends to add a lot of data to the output.
The solution to your problem is to use "custom" presets. You don't have to use the built-in presets that we define; you can modify them to suit your needs.
I recommend downloading the Azure Media Services Explorer tool at http://aka.ms/amse and using that to modify and submit your own custom JSON presets that match your output requirements better.
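As a sketch of what such a custom preset might look like, the fragment below keeps a single 720p H.264 layer instead of the multi-bitrate ladder. The field names follow the v2 preset JSON schema, but the bitrate, resolution, and file-name pattern here are example values to check against the current documentation before use:

```json
{
  "Version": 1.0,
  "Codecs": [
    {
      "Type": "H264Video",
      "KeyFrameInterval": "00:00:02",
      "H264Layers": [
        {
          "Type": "H264Layer",
          "Profile": "Auto",
          "Level": "auto",
          "Bitrate": 1800,
          "MaxBitrate": 1800,
          "BufferWindow": "00:00:05",
          "Width": 1280,
          "Height": 720,
          "ReferenceFrames": 3,
          "FrameRate": "0/1"
        }
      ]
    },
    { "Type": "AACAudio", "Channels": 2, "SamplingRate": 48000, "Bitrate": 128 }
  ],
  "Outputs": [
    {
      "FileName": "{Basename}_{Width}x{Height}_{VideoBitrate}.mp4",
      "Format": { "Type": "MP4Format" }
    }
  ]
}
```

A single layer at a bitrate close to your source avoids both the 6x file multiplication and the upscaled bits.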

What is the best way/tool to convert Bulk BMP images to .pk and .raw images on Linux?

I am working on a project that involves converting .bmp files to .pk and .raw files (for each .bmp file there will be one .pk and one .raw file). Can anyone suggest a tool or process for the conversion? It should be able to handle around 1.5 million .bmp images, and I am working on a Linux server.

Compress .ipa MonoTouch

Starting from the assumption that I have deleted all unnecessary files: my app contains a folder of JPG images (1024×700 is the minimum permitted resolution) whose total size is 400 MB. When I generate my .ipa, its size is 120 MB. I have tried converting those images to PNG and then generating the .ipa, but the size is more than 120 MB (140 MB) and the quality is a bit worse.
Which best practices are recommended to reduce the size of the application?
P.S. Those files are shown as a gallery.
One tool we used in our game, Draw a Stickman: EPIC, is smusher.
To install (you have to have ruby or XCode command line tools):
sudo gem install smusher
It might print some errors during installation that you can ignore.
To use it:
smusher mypng.png
smusher myjpg.jpg
The tool sends the picture off to Yahoo's Smush.it web service and compresses the image losslessly.
Generally you can save maybe 20% in file size with no loss in quality.
There are definitely other techniques we used like using indexed PNGs, but you are already using JPGs, which are smaller.
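Since the gallery images are already JPEGs, one practical lever beyond lossless smushing is a lossy recompression pass before building the .ipa. A sketch with jpegoptim, assuming a maximum quality of 80 is acceptable for your gallery and `Images/` is your asset folder name:

```shell
# Sketch: lossy-recompress all gallery JPEGs to quality <= 80 before packaging.
# Assumes jpegoptim is installed; Images/ is an example folder name.
find Images -iname '*.jpg' -exec jpegoptim --strip-all -m80 {} +
```

This trades a small amount of quality for much larger savings than lossless tools, so spot-check a few gallery images afterwards.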
