Faster web experience when available is gold - node.js

In node.js, can I take the binary straight from the canvas without calling toDataURL (which would base64-encode it and inflate it by roughly a third)?
I would then have the raw binary (example: what you see if you open any image in a text editor).
Then I would convert that into WebP base64 (?)
I've found that converting base64 PNG to base64 WebP is very slow, but transporting WebP via websockets is very fast. After a lot of tests, though, I see that converting from PNG to WebP and then transporting is in fact much slower than just transporting PNGs (one example is https://github.com/lovell/sharp, which can do the base64 conversion).
If I could just go from canvas to WebP, I would be transporting 80% less data in the same time. I would be decreasing the transport time and saving the user mobile bandwidth (if they have a WebP-supporting browser); if not (their choice - you-snooze-you-lose), I fall back to PNG.
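For what it's worth, a minimal node.js sketch of that idea, assuming the node-canvas and sharp npm packages and a ws-style websocket (sendFrame and clientAcceptsWebp are made-up names): canvas.toBuffer() returns the encoded PNG bytes directly, with no base64 detour through toDataURL, and sharp turns that buffer into WebP before it goes over the socket.

    // Sketch only: node-canvas + sharp + a ws-style socket; names below are made up.
    const { createCanvas } = require('canvas');
    const sharp = require('sharp');

    const canvas = createCanvas(640, 480);
    // ... draw on canvas.getContext('2d') here ...

    async function sendFrame(ws, clientAcceptsWebp) {
      // toBuffer() hands back the encoded PNG bytes - no base64 involved
      const png = canvas.toBuffer('image/png');
      if (!clientAcceptsWebp) {
        ws.send(png, { binary: true });   // fallback: plain PNG
        return;
      }
      const webp = await sharp(png).webp({ quality: 80 }).toBuffer();
      ws.send(webp, { binary: true });    // smaller payload for WebP-capable clients
    }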

Related

JPEG compression quality in a TIFF image file

I built the TIFF library from source, including the zlib and JPEG compression options. I can save with either compression type, depending on whether I want lossless images at a higher file size or lossy images at a smaller file size.
The issue I have is: how do I control the JPEG quality? The program I wrote needs to be able to create both and to change the image quality when using JPEG compression. I expected there to be a TIFF tag where the quality can be set, but I have yet to find one when searching. I would like to try a few different qualities between 50 and 100.
The TIFF image container must be used, so I cannot just use libjpeg directly to make JPEG images.
I searched the whole TIFF source code and found the tag TIFFTAG_JPEGQUALITY, which allows setting the JPEG quality with an int between 0 and 100. I tested it and it does in fact change the quality.
I searched the TIFF tags website and this tag is not listed as a valid tag, but it is supported natively by the latest version.
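As an aside, if you ever need the same thing from node.js rather than through the libtiff C API, sharp (which wraps libtiff) exposes that quality setting too; a rough sketch, with placeholder file names:

    // Sketch only: sharp's TIFF output options cover JPEG-in-TIFF quality.
    const sharp = require('sharp');

    async function writeTiffs() {
      // Lossy: JPEG-compressed TIFF, quality 1-100
      await sharp('input.png')
        .tiff({ compression: 'jpeg', quality: 70 })
        .toFile('lossy.tiff');

      // Lossless: deflate-compressed TIFF, larger but exact
      await sharp('input.png')
        .tiff({ compression: 'deflate' })
        .toFile('lossless.tiff');
    }

    writeTiffs().catch(console.error);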

Ultimate JPEG-2000 compression

I have done research comparing JPEG and JPEG-2000 and couldn't reach the same or better perceptual quality with JPEG-2000 than with JPEG at extreme compression levels (for the web).
Although it is announced as ~20+% better perceptual quality at the same size as original JPEG, when using the available tools to reconvert existing JPEGs or even lossless PNGs, original JPEG was still superior. JPEG-2000 managed to get arguably better results only for huge images, which are not widely used on the web (Full HD and bigger).
ImageMagick, GraphicsMagick and OpenJPEG all showed identical results (I assume due to using JasPer to encode JPEG-2000) and all lacked encoding options. Using Kakadu or online/basic converters didn't help either. In the current status quo, tools like imagemin with plugins can produce much better quality JPEGs than JPEG-2000 when maximally compressed for the web. So JPEG-2000, being useful mostly for Safari, has little point as another format to encode, since regular JPEG provides better results.
What am I doing wrong, and are there other tools/tricks with more advanced options for JPEG-2000 encoding that could finally beat JPEG?
It's not just you. JPEG 2000 isn't that much better. It's difficult to encode well, and we've got more mature JPEG encoders nowadays (MozJPEG, Guetzli).
JPEG 2000 is wavelet-based, which gives it better scores in the PSNR metric. However, JPEG 2000 excels in exactly the thing the PSNR metric rewards too much: blurring. That makes JPEG 2000 look great in the low-quality range, where the older JPEG breaks down, but it's not helpful in practice for higher-quality images. Wavelets also struggle with text and sharp edges, just like the older JPEG.
JPEG 2000 has very nice progressive loading, but Safari stopped supporting progressive rendering at some point.
The only practical use for JPEG 2000 is delivering photographic-like images with an alpha channel, because JPEG can't do that, and JPEG 2000 still wins compared to PNG.

Why does Google serve JPEG instead of WebP in image search?

With all the fuss about WebP and how cool it is, I'm still getting JPEG thumbnail images in image search on google.com in 2016, even though the Chrome browser says in its HTTP header that it accepts WebP images: Accept: image/webp,image/*,*/*;q=0.8
Why is that so?
Answering myself.
The reason may be "just not adopted yet", although it may as well be "not worth adopting", because of the following:
WebP gives better overall quality, but its image distortions at low-quality encoder settings differ from classical JPEG's:
JPEG gives uniform distortion all over the picture, on hard edges, soft edges and soft gradients;
while WebP handles soft gradients and hard edges better than JPEG, it gives more distortion on soft edges, and because of that the image looks deformed.
Example: the moon in the following image: http://xooyoozoo.github.io/yolo-octo-bugfixes/#pont-de-quebec-at-night&jpg=s&webp=s
As a side note: WebP is used for video thumbnails on YouTube, but given that the source is video, WebP is more acceptable in that scenario than it is for re-encoding thumbnails of JPEG images.

dwebp increasing the original JPEG file size

I used cwebp to convert my JPG image to WebP. Now I am using dwebp to convert it back, but it is increasing in size compared to the original. Is there any way to control the file size in dwebp?
Transcoding between lossy formats tends to increase the size unless the representation of the data happens to be extremely compatible between the formats, be it audio, pictures, video or other lossy data. WebP uses a 4x4 Hadamard transform, whereas JPEG uses an 8x8 Discrete Cosine Transform (DCT). Quantization, which is the main form of data loss in these formats, produces different kinds of artefacts under these transforms, and transcoding cannot be optimal. In particular, if either WebP or JPEG was saved with extremely low quality, the other format will struggle to compete with it after transcoding -- the format applied later not only has to codify the image signal, but the artefacts resulting from the other format, too.
So, while there is an inherent tendency for an increase in file size in such back-and-forth conversion, the exact amount of loss happening at every stage can be controlled. Which flags and tools (including versions) are you using exactly?
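To illustrate where the size is actually decided: dwebp only decodes, so the size of the round trip is governed by the settings of the re-encode step. A node.js sketch with sharp rather than the command line tools (file names are placeholders):

    // Sketch only: the decoder has no say in the final size; the JPEG re-encode does.
    const sharp = require('sharp');

    async function webpBackToJpeg() {
      await sharp('photo.webp')            // decode the WebP
        .jpeg({ quality: 75 })             // lower quality -> smaller output file
        .toFile('photo-again.jpg');
    }

    webpBackToJpeg().catch(console.error);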

Base64 - the difference in size

I'm using patternify and pixieengine when I need to make small graphic elements for my websites. It didn't bother me till now, but the pixel editor has been dead for a few days. Why these websites? Because of the base64 code compression.
Example:
Patternify - I fill a 5x5 px pattern with black color, and this is the base64 code I get:
data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAUAAAAFCAYAAACNbyblAAAAEUlEQVQImWNgYGD4jwVTXRAA9qoY6Kb21uEAAAAASUVORK5CYII=
It's short and everything works as I expected.
Now I'll try to make a short base64 code without these sites. I made a black 5x5 square in Photoshop, the same as above, and saved it in every possible format. Next I found a few online encoders, but this is what they gave me:
iVBORw0KGgoAAAANSUhEUgAAAAUAAAAFCAMAAAC6sdbXAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAyJpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADw/eHBhY2tldCBiZWdpbj0i77u/IiBpZD0iVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkIj8+IDx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IkFkb2JlIFhNUCBDb3JlIDUuMy1jMDExIDY2LjE0NTY2MSwgMjAxMi8wMi8wNi0xNDo1NjoyNyAgICAgICAgIj4gPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIgeG1sbnM6eG1wPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvIiB4bWxuczp4bXBNTT0iaHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wL21tLyIgeG1sbnM6c3RSZWY9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9zVHlwZS9SZXNvdXJjZVJlZiMiIHhtcDpDcmVhdG9yVG9vbD0iQWRvYmUgUGhvdG9zaG9wIENTNiAoV2luZG93cykiIHhtcE1NOkluc3RhbmNlSUQ9InhtcC5paWQ6RTM1QjVGOEU0MDkxMTFFM0E5MDlGOUFDNDM5REVCMUQiIHhtcE1NOkRvY3VtZW50SUQ9InhtcC5kaWQ6RTM1QjVGOEY0MDkxMTFFM0E5MDlGOUFDNDM5REVCMUQiPiA8eG1wTU06RGVyaXZlZEZyb20gc3RSZWY6aW5zdGFuY2VJRD0ieG1wLmlpZDpFMzVCNUY4QzQwOTExMUUzQTkwOUY5QUM0MzlERUIxRCIgc3RSZWY6ZG9jdW1lbnRJRD0ieG1wLmRpZDpFMzVCNUY4RDQwOTExMUUzQTkwOUY5QUM0MzlERUIxRCIvPiA8L3JkZjpEZXNjcmlwdGlvbj4gPC9yZGY6UkRGPiA8L3g6eG1wbWV0YT4gPD94cGFja2V0IGVuZD0iciI/Pg8gB7gAAAAGUExURQAAAP///6XZn90AAAAOSURBVHjaYmDABwACDAAAHgABzCCyiwAAAABJRU5ErkJggg==
Much longer code, and the weight of the file was similar to the PNG from patternify, ~950 B.
Patternify has a limitation of 10x10 px, so for larger elements I have to use pixieengine; it has exactly the same compression level as patternify and no limitation. Unfortunately it's dead, which is why I now need to understand how it really works. Is there any "offline" way to achieve the patternify/pixieengine compression level?
This isn't really a question about base64 encoding; it's about image compression. Base64 encoding is not going to implicitly make your image take up fewer bytes - in fact, it makes it take up more (binary vs. a string representation of that binary). Most of the long blob above is Photoshop's embedded XMP metadata rather than pixel data, which the online tools simply don't emit. Run your original PNG through a good compression tool such as pngcrush (removing ancillary chunks while you're at it) and then encode it as base64.
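A rough node.js sketch of that workflow, using sharp in place of pngcrush (the file name is a placeholder): re-compress the PNG, which also drops the Photoshop metadata, then base64-encode the result yourself.

    // Sketch only: optimize the PNG, then build the data URI by hand.
    const sharp = require('sharp');

    async function pngToDataUri(file) {
      const optimized = await sharp(file)
        .png({ compressionLevel: 9, palette: true })  // max zlib effort, 8-bit palette
        .toBuffer();                                   // metadata is stripped by default
      return 'data:image/png;base64,' + optimized.toString('base64');
    }

    pngToDataUri('square-5x5.png').then(console.log);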

Resources