How to reduce the size of several photos in bash? - linux

I want to leave feedback on aliexpress.com, but the site does not accept photos larger than 5 MB. Can I write a simple bash script to reduce the size of several photos at the same time?
for file in *.JPG; do echo 'reduce size image here'; done
Thanks for any help.

for img in *.JPG; do
  # ">" resizes only when the image is larger than 1280x960
  convert "$img" -resize "1280x960>" "$(basename "$img" .JPG)_new.jpg"
done
Here are typical dimensions that keep a photo under 5 MB:
Image Dimensions in Pixels   Printed Size (W x H)   Approximate File Size (CMYK TIFF)
1024 x 768 pixels            3.41" x 2.56"          3 MB
1280 x 960 pixels            4.27" x 3.20"          4.7 MB
convert is part of ImageMagick. The ">" geometry flag means the image is resized only if it is larger than the given dimensions; see the ImageMagick documentation for its other options.
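If you want to target the 5 MB limit directly instead of guessing at dimensions, ImageMagick's JPEG encoder can also cap the output file size. A minimal sketch (the _new.jpg naming follows the loop above):
for img in *.JPG; do
  # Resize only if larger, then ask the JPEG encoder to keep the file just under the 5 MB limit
  convert "$img" -resize "1280x960>" -define jpeg:extent=4900KB "$(basename "$img" .JPG)_new.jpg"
done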

Related

JPEG-XL: Handling of palette in libjxl command line tools

I am trying to make sense of the following presentation, see page 27:
Could someone please describe the command line tools available in libjxl that can help me work with existing palettes?
I tried a naive:
% convert -size 512x512 -depth 8 xc:white PNG8:white8.png
% convert -size 512x512 -depth 8 xc:white PNG24:white24.png
which gives me the expected:
% file white8.png white24.png
white8.png: PNG image data, 512 x 512, 8-bit colormap, non-interlaced
white24.png: PNG image data, 512 x 512, 8-bit/color RGB, non-interlaced
But then:
% cjxl -d 0 white8.png white8.jxl
% cjxl -d 0 white24.png white24.jxl
Gives:
% md5sum white8.jxl white24.jxl
68c88befec21604eab33f5e691a2a667 white8.jxl
68c88befec21604eab33f5e691a2a667 white24.jxl
where
% jxlinfo white8.jxl
dimensions: 512x512
have_container: 0
uses_original_profile: 1
bits_per_sample: 8
have_preview: 0
have_animation: 0
intrinsic xsize: 512
intrinsic ysize: 512
orientation: 1 (Normal)
num_color_channels: 3
num_extra_channels: 0
color profile:
format: JPEG XL encoded color profile
color_space: 0 (RGB color)
white_point: 1 (D65)
primaries: 1 (sRGB)
transfer_function: gamma: 0.454550
rendering_intent: 0 (Perceptual)
frame:
still frame, unnamed
I also tried:
% cjxl -d 0 --palette=1024 white24.png palette.jxl
which also gives:
% md5sum palette.jxl
68c88befec21604eab33f5e691a2a667 palette.jxl
The libjxl encoder takes either a JPEG bitstream as input (for the special case of lossless JPEG recompression) or pixels. It makes no difference whether those pixels are given via a PPM file, a PNG8 file, a PNG24 file, an RGB memory buffer, or any other way: if the pixels are the same, the result will be the same.
In your example, you have an image that is just solid white, so it will be encoded the same way regardless of how you pass it to cjxl.
Now if those pixels happen to use only a few colors, as will be the case for PNG8 (which can hold at most 256 colors), the encoder (at the default effort setting) will detect this and use the jxl Palette transform to represent the image more compactly. In jxl, palettes can have arbitrary sizes; there is no 256-color limit. The --palette option in cjxl sets the maximum number of colors for which the encoder will still use the Palette transform; if the input image has more colors than that, it will not use Palette.
The use of Palette is considered an internal encoding tool in jxl, not part of the externally exposed image metadata. The encoder can use it to effectively recompress PNG8 files, but it will not necessarily use that tool whenever the input is PNG8, and it might also use Palette when the input has more than 256 colors. The Palette transform of jxl is quite versatile: it can be applied to individual channels, to more or fewer than 3 channels, and palette entries can be not only specific colors but also so-called "delta palette entries", which are not colors but signed pixel values that get added to the predicted pixel value.
As explained by Jon Sneyers just above, the palette is an internal encoding tool. I was confused by this, as I could not see any difference in the output of the jxlinfo command line.
So I ran the following experiment on my side to convince myself:
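(The palette.png input is not included in the question; one way to produce a comparable 256-color PNG8 test image, an assumption rather than the original file, is:
# Generate a 256x256 image, quantize it to exactly 256 colors, and force PNG8 output
convert -size 256x256 plasma:fractal -colors 256 PNG8:palette.png
)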
$ cjxl -d 0 --palette=257 palette.png palette.257.jxl
$ cjxl -d 0 --palette=256 palette.png palette.256.jxl
$ cjxl -d 0 --palette=255 palette.png palette.255.jxl
This led to:
% md5sum palette.*.jxl
e925521cbb976dce2646354ea3deee3b palette.255.jxl
8d241b94d67aeb2706a1aad7aed55cc7 palette.256.jxl
8d241b94d67aeb2706a1aad7aed55cc7 palette.257.jxl
Where:
% du -sb palette.*.jxl
89616 palette.255.jxl
45627 palette.256.jxl
45627 palette.257.jxl
With --palette=255 the limit is below the image's 256 colors, so the encoder skips the Palette transform and the file comes out roughly twice as large. In all cases jxlinfo reveals:
% jxlinfo palette.255.jxl
dimensions: 256x256
have_container: 0
uses_original_profile: 1
bits_per_sample: 8
have_preview: 0
have_animation: 0
intrinsic xsize: 256
intrinsic ysize: 256
orientation: 1 (Normal)
num_color_channels: 3
num_extra_channels: 0
color profile:
format: JPEG XL encoded color profile
color_space: 0 (RGB color)
white_point: 1 (D65)
primaries: 1 (sRGB)
transfer_function: 13 (sRGB)
rendering_intent: 0 (Perceptual)
frame:
still frame, unnamed
With:
% pnginfo palette.png
palette.png...
Image Width: 256 Image Length: 256
Bitdepth (Bits/Sample): 8
Channels (Samples/Pixel): 1
Pixel depth (Pixel Depth): 8
Colour Type (Photometric Interpretation): PALETTED COLOUR (0 colours, 0 transparent)
Image filter: Single row per byte filter
Interlacing: No interlacing
Compression Scheme: Deflate method 8, 32k window
Resolution: 0, 0 (unit unknown)
FillOrder: msb-to-lsb
Byte Order: Network (Big Endian)
Number of text strings: 0

Indexed color memory size vs raw image

In this article https://en.m.wikipedia.org/wiki/Indexed_color
It says this:
Indexed color images with palette sizes beyond 256 entries are rare. The practical limit is around 12-bit per pixel, 4,096 different indices. To use indexed 16 bpp or more does not provide the benefits of the indexed color images' nature, due to the color palette size in bytes being greater than the raw image data itself. Also, useful direct RGB Highcolor modes can be used from 15 bpp and up.
I don't understand why indexed color at 16 bpp or more is inefficient in terms of memory, because the same article also says this:
Indexed color saves a lot of memory, storage space, and transmission time: using truecolor, each pixel needs 24 bits, or 3 bytes. A typical 640×480 VGA resolution truecolor uncompressed image needs 640×480×3 = 921,600 bytes (900 KiB). Limiting the image colors to 256, every pixel needs only 8 bits, or 1 byte each, so the example image now needs only 640×480×1 = 307,200 bytes (300 KiB), plus 256×3 = 768 additional bytes to store the palette map in itself (assuming RGB), approximately one third of the original size. Smaller palettes (4-bit 16 colors, 2-bit 4 colors) can pack the pixels even more (to one sixth or one twelfth), obviously at cost of color accuracy.
If I have 640x480 resolution and I want to use a 16-bit palette:
640 x 480 x 2 (16 bits = 2 bytes) + 65536 (2^16) x 3 (RGB)
= 614400 + 196608 = 811008 bytes
Raw image memory size:
640 x 480 x 3 (RGB) = 921600 bytes
So 811008 < 921600.
And if I have 1920x1080 resolution:
Raw image: 1920 x 1080 x 3 = 6220800 bytes
Indexed color: 1920 x 1080 x 2 + palette size (2^16 x 3)
= 4147200 + 196608 = 4343808 bytes
So again indexed color is more efficient in terms of memory. I don't get why the article says it is inefficient.
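The arithmetic above is easy to reproduce in the shell:
# 640x480, 16-bit palette vs raw 24-bit RGB
echo $(( 640*480*2 + 65536*3 ))    # 811008 bytes indexed
echo $(( 640*480*3 ))              # 921600 bytes raw
# 1920x1080
echo $(( 1920*1080*2 + 65536*3 ))  # 4343808 bytes indexed
echo $(( 1920*1080*3 ))            # 6220800 bytes raw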
It really depends upon the size of the image. As you said, if b is the number of bytes per pixel and p is the number of pixels, then the image data size i is:
i = p * b
And the color table size t is:
t = 2^(b * 8) * 3
So the point where a raw image would take the same space as an indexed image is:
p * 3 = p * b + 2^(b * 8) * 3
Which I'll now solve for p:
p * 3 - p * b = 2^(b * 8) * 3
p * (3 - b) = 2^(b * 8) * 3
p = (2^(b * 8) * 3) / (3 - b)
So for various bytes per pixel, here is the minimum image size at which an indexed image breaks even:
1 bytepp (8 bit) - 384 pixels (like an image of 24 x 16)
1.5 bytepp (12 bit) - 8192 pixels (like an image of 128 x 64)
2 bytepp (16 bit) - 196,608 pixels (like an image of 512 x 384)
2.5 bytepp (20 bit) - 6,291,456 pixels (like an image of 3072 x 2048)
2.875 bytepp (23 bit) - 201,326,592 pixels (like an image of 16,384 x 12,288)
If you are using an image smaller than 512 x 384, 16 bit per pixel indexed color would take up more space than raw 24 bit image data.
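These break-even points can be checked with a few lines of shell, expressing the index width in bits so the arithmetic stays integral (p = 2^bits * 24 / (24 - bits), the same formula with b = bits/8):
#!/usr/bin/env bash
# Break-even pixel count: raw 24-bit RGB vs indexed color with the given index width
for bits in 8 12 16 20 23; do
  p=$(( (2 ** bits) * 24 / (24 - bits) ))
  echo "$bits-bit indices: break-even at $p pixels"
done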

PNG truecolor, 8 bit depth, how to read IDAT chunk

I have a question regarding a PNG file that I am trying to read (I have attached it in this question):
File size: 328750 bytes
Width: 660
Height: 330
Color type: truecolor
Bit depth: 24 bits
So here's my question: if it's truecolor, I assume it's RGB, which is 24 bits. But when you do the math, the numbers don't add up: 660 (width) * 330 (height) * 3 bytes (from 24 bits) = 653400 bytes, which is roughly double the actual file size.
Why is that?
I tried to read the IDAT chunk, treating each pixel as 3 bytes, and when I checked the colours they didn't match what is displayed.
PNG is a compressed image format, so the IDAT chunk(s) contain a zlib-compressed representation of the RGB pixels. Probably the easiest way for you to access the pixel data is to use a converter such as ImageMagick or GraphicsMagick to decompress the image into the Netpbm "PPM" format.
magick image.png image.ppm
or
gm convert image.png image.ppm
Then read "image.ppm" the same way you tried to read the PNG. Just skip over the short header, which in the case of your image is the 15 bytes
P6\n660 330\n255\n
where "P6" is the magic number, 660 and 330 are the dimensions (separated by a space), and 255 is the image depth (the maximum value for R, G, and B is 255, or 0xff). The remainder of the file is just the R,G,B values you were expecting.

Confused about actual resolution vs. interpolation in bitmap format

I am using a Logitech Pro 9000 HD webcam, which has a 2 MP Zeiss lens and can capture HD video, and so on.
Now the problem: if I use resolutions up to 1600 x 1200, everything works fine, and the received byte sizes are as follows:
for 640 x 480, VideoHeader.dwBytesUsed is 921600
for 1600 x 1200, VideoHeader.dwBytesUsed is 5760000
from 1600 x 1200 up to 3264 x 2448, VideoHeader.dwBytesUsed is still 5760000
But for resolutions higher than 1600 x 1200, the byte size is the same as for 1600 x 1200, and my program can't convert that data to a bitmap. I even tried setting the bitmap size to 1600 x 1200, but nothing works: I only get fuzz at the bottom, or stretched multiple images at the bottom of the preview bitmap.
I know this is called interpolation.
My question is: where is the interpolation actually implemented, in the driver I am accessing or in the camera application supplied by the company? In other words, am I getting interpolated data, or do I have to implement the algorithm in my program?
What confuses me is this: if the driver still returns 1600 x 1200 images and the Logitech software interpolates them to 3264 x 2448, then why am I not getting a 1600 x 1200 image from the device even when I set the video format to 3264 x 2448 in my init code?
(I have set the bit depth to 24 and the camera is using the Format24bppRgb pixel format.)
Can anyone help me?
My code (not exactly the original, but integrated into a single function) is:
Private Sub FrameCallBack(ByVal lwnd As IntPtr, ByVal lpVHdr As IntPtr)
    'Dim _SnapSize As Size = New Size(640, 480)
    'Dim _SnapSize As Size = New Size(1600, 1200)
    Dim _SnapSize As Size = New Size(3264, 2448)
    Dim VideoHeader As New Avicap.VIDEOHDR
    Dim VideoData(-1) As Byte
    ' Read the native VIDEOHDR structure and copy the frame bytes out of it
    VideoHeader = CType(Avicap.GetStructure(lpVHdr, VideoHeader), Avicap.VIDEOHDR)
    VideoData = New Byte(VideoHeader.dwBytesUsed - 1) {}
    Marshal.Copy(VideoHeader.lpData, VideoData, 0, VideoData.Length)
    Dim _SnapFormat As System.Drawing.Imaging.PixelFormat = PixelFormat.Format24bppRgb
    Dim outBit As Bitmap
    If Me.IsValidData Then
        outBit = New Bitmap(_SnapSize.Width, _SnapSize.Height, _SnapFormat)
        Dim bitData As BitmapData
        bitData = outBit.LockBits(New Rectangle(Point.Empty, _SnapSize), ImageLockMode.WriteOnly, _SnapFormat)
        ' Copy the captured frame into the bitmap's pixel buffer
        ' (this copy was missing from the original snippet)
        Marshal.Copy(VideoData, 0, bitData.Scan0, Math.Min(VideoData.Length, Math.Abs(bitData.Stride) * _SnapSize.Height))
        outBit.UnlockBits(bitData)
        GC.Collect()
        GC.WaitForPendingFinalizers()
    End If
End Sub
First, I am really sorry that I completely forgot about this question.
The answer is: these structures are for native APIs.
My camera has a 2-megapixel sensor, and I was getting a proper image at a resolution of 1600 x 1200.
The math is simple:
1600 x 1200 = 1920000 total pixels
The pixel format is 24 bpp, i.e. 3 bytes per pixel, so the total image size is 5760000 bytes.
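That same arithmetic matches the dwBytesUsed values reported above; a quick check:
# Bytes per frame = width * height * 3 (24 bpp)
echo $(( 640*480*3 ))    # 921600  - matches 640 x 480
echo $(( 1600*1200*3 ))  # 5760000 - matches 1600 x 1200 and every mode above it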
The sensor cannot produce more than 2 megapixels of data, which is why 1600 x 1200 is the hardware resolution limit for this camera; the hardware is not responsible for interpolating higher-resolution images, so I have to do it manually after getting the original image from the camera.
This is exactly what I did: I captured 1600 x 1200 images and wrote image-processing algorithms to interpolate and improve the quality of those images.
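For comparison, a plain interpolated upscale of that kind can be done with ImageMagick (a sketch only, not the actual algorithm used in the project; capture_1600x1200.jpg is a placeholder name):
# Upscale a 1600x1200 capture to 3264x2448 with Lanczos interpolation
convert capture_1600x1200.jpg -filter Lanczos -resize 3264x2448 upscaled.jpg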
The project was a cheap book-scanning device for document scanning. It was completed successfully and is in use by our clients.

Flickr image url suffix medium 500

The documentation says to use a '-', but that doesn't work for me.
Size Suffixes
The letter suffixes are as follows:
s small square 75x75
t thumbnail, 100 on longest side
m small, 240 on longest side
- medium, 500 on longest side
z medium 640, 640 on longest side
b large, 1024 on longest side
o original image, either a jpg, gif or png, depending on source format
http://farm6.static.flickr.com/5025/5680710399_b609135279_-.jpg
http://farm{farm-id}.static.flickr.com/{server-id}/{id}_{secret}_[mstzb].jpg
In the explanation, they are using - as a symbol for "no suffix at all", so the URL
http://farm6.static.flickr.com/5025/5680710399_b609135279.jpg
is what you're looking for if you want the 500 px version.
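A small shell helper makes the rule concrete (a sketch; the farm/server/id/secret values are taken from the example above):
# Build a Flickr photo URL; pass "" for medium 500, or one of s/t/m/z/b
flickr_url() {
  local suffix=$1
  local base="http://farm6.static.flickr.com/5025/5680710399_b609135279"
  if [ -z "$suffix" ]; then
    echo "${base}.jpg"        # no suffix: medium, 500 on longest side
  else
    echo "${base}_${suffix}.jpg"
  fi
}
flickr_url ""   # medium 500
flickr_url z    # medium 640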