Node/Sharp: Resizing thumbnails produces much larger images than expected

I am creating a function to resize thumbnails downloaded by youtube-dl. youtube-dl downloads 4 thumbnails, but they are not in the exact dimensions I need so they must be resized. Because the images I need are smaller than the largest thumbnail downloaded by youtube-dl, that large thumbnail is used as a base to create the resized thumbnails.
This is the function I am using to resize my thumbnails:
import fs from 'fs-extra';
import sharp from 'sharp';

const generateThumbnailFile = async (sourceData, width, height, filepath) => {
    // resize to fit within width x height, re-encode as JPEG, and write to disk
    const data = await sharp(sourceData)
        .resize({ width: width, height: height, fit: sharp.fit.inside })
        .jpeg({ quality: 100, chromaSubsampling: '4:4:4' })
        .toBuffer();
    fs.writeFileSync(filepath, data);
};

let thumbnailData = fs.readFileSync('D:/original 1280x720 (122,346 bytes).jpg');
await generateThumbnailFile(thumbnailData, 720, 405, 'D:/medium resized 720x405 (329,664 bytes).jpg');
await generateThumbnailFile(thumbnailData, 168, 95, 'D:/small resized 168x95 (26,246 bytes).jpg');
However, the size of my resized thumbnails is much greater than I would expect. The file size of the medium resized thumbnail is nearly three times larger than that of the largest original thumbnail, and the file size of the small resized thumbnail is nearly three times larger than that of its comparable original thumbnail.
What can I do to reduce the filesize of my resized thumbnails?

The answer is very easy: you used quality: 100; change it to a smaller number (80-95) and the generated file should be smaller. A quality of 100 is a waste of space in JPEGs.
Here is a link to a page that shows the same image compressed at different quality levels: http://regex.info/blog/lightroom-goodies/jpeg-quality. You can see there that a file at 100% quality is usually significantly bigger than the others (sometimes even twice as big).
Therefore, using just 80-90% is enough, and the resulting file size is not really that large. (As the input file's quality was probably not set to 100, resizing with quality set to 100 could produce files bigger in size than the source files.)
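For example, here is a sketch of the resize call from the question's generateThumbnailFile with a more typical quality setting (the mozjpeg flag is an optional sharp setting, not from the original snippet; how much it helps depends on your sharp/libvips build):

const data = await sharp(sourceData)
    .resize({ width: width, height: height, fit: sharp.fit.inside })
    // quality 80 with sharp's default 4:2:0 chroma subsampling stays visually
    // close to the original at a fraction of the size of quality 100 + 4:4:4
    .jpeg({ quality: 80, mozjpeg: true })
    .toBuffer();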

Related

NodeJS image processing: how to merge two images with a polygon mask?

I am trying to do the same thing as this Merge images by mask, but with the NodeJS Sharp library instead of Python.
I have two images and a polygon. I want the result to be the merged image where all the pixels inside the polygon come from image1 and the rest from image2; more specifically, I want to implement the 'mergeImagesWithPolygonMask' function below:
const sharp = require('sharp')
let image1 = sharp('image1.jpg')
let image2 = sharp('image2.jpg')
let polygon = [[0,0], [0, 100], [100, 100], [100, 0]]
let newImage = mergeImagesWithPolygonMask(image1, image2, polygon) // how to do this?
newImage.toFile('out.jpg')
After playing with the Sharp library for a while, I came up with a solution, which I think will help other people too, as this is quite a common use case.
async function mergeImagesWithPolygonMask(image1, image2, polygonMask, config) {
    // merge two images with the polygon mask: pixels inside the polygon come
    // from image1, pixels outside the polygon come from image2
    const { height, width } = config;
    const mask = Buffer.from(
        `<svg height="${height}" width="${width}"><path d="M${polygonMask.join(
            "L"
        )}Z" fill-opacity="1"/></svg>`
    );
    const maskImage = await sharp(mask, { density: 72 }).png().toBuffer();
    const frontImage = await image1.toBuffer();
    const upperLayer = await sharp(frontImage)
        // "dest-in" keeps the front image only where the mask is opaque,
        // i.e. inside the polygon
        .composite([{ input: maskImage, blend: "dest-in", gravity: "northwest" }])
        .png()
        .toBuffer();
    // lay the masked front image over the background image
    return image2.composite([{ input: upperLayer, gravity: "northwest" }]);
}
This function first creates an SVG image and uses sharp's composite method to apply the polygon mask to the front image. It then composites the masked front image onto the background image. The size passed in config specifies the size of the SVG mask.
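One detail worth noting: polygonMask.join("L") relies on each [x, y] pair stringifying to "x,y", which yields valid SVG path commands. A quick illustration (not part of the original answer):

const polygon = [[0, 0], [0, 100], [100, 100], [100, 0]];
// Array.prototype.join calls toString on each element, so the nested pairs
// flatten to "x,y" and the joined string forms the path:
console.log(`M${polygon.join("L")}Z`);
// -> M0,0L0,100L100,100L100,0Z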
To use this function:
const sharp = require('sharp')
const image1 = sharp('image1')
const image2 = sharp('image2')
const polygon = [[682, 457], [743, 437], [748, 471], [689, 477]]
mergeImagesWithPolygonMask(image1, image2, polygon, { height: 720, width: 1280 })
    .then((merged) => merged.toFile('out.jpg'))
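A small convenience sketch (an assumption, not part of the original answer): the config dimensions can be read from the image itself via sharp's metadata() instead of being hardcoded. mergeWithAutoSize is a hypothetical helper name, and it assumes both images share the same dimensions.

const sharp = require('sharp');

async function mergeWithAutoSize(path1, path2, polygon) {
    const image1 = sharp(path1);
    // metadata() reports the actual pixel dimensions of the input image
    const { width, height } = await image1.metadata();
    return mergeImagesWithPolygonMask(image1, sharp(path2), polygon, { width, height });
}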

How can I load an exported Tileset (image collection) from Tiled in Phaser 3?

I want to load an image collection tileset into my phaser game. I know that with tilesets that are just one image you can just load that image into phaser, but what about an image collection? In Tiled I saw the options to export that tileset as either .tsx or .json, but I don't know if that helps in my case. The reason I need this is because I have some objects that are too big to be used as tiles. I can load them into Tiled and place them like I want to, but obviously they don't show up in my game, unless I can import that tileset into phaser. Does anyone know how to do that, or maybe you know a better option than using an image collection?
Well, after some tests and updating my Tiled version to 1.9.2, it seems there is a pretty simple way.
As long as the tileset collection is marked as "Embed in map"
(I could have sworn this checkbox was hidden/deactivated when selecting "Collection of Images" in my earlier Tiled version):
Export the map as JSON
Load the map and tile images
preload() {
    this.load.tilemapTiledJSON('map', 'export.tmj');
    this.load.image('tile01', 'tile01.png');
    this.load.image('tile02', 'tile02.png');
    ...
}
Create the Phaser tilesets; just use the filename from the JSON as the tileset name (this is the "tricky" part, at least for me):
create() {
    var map = this.make.tilemap({ key: 'map' });
    var img1 = map.addTilesetImage('tile01.png', 'tile01');
    var img2 = map.addTilesetImage('tile02.png', 'tile02');
    ...
    // create the layer with all tilesets
    map.createLayer('Tile Layer 1', [img1, img2, ...]);
    ...
}
This should work, at least with a "Collection of Images" whose images have a size of 8x8 pixels (since I don't know the needed/intended image size, I didn't want to waste time testing various image sizes needlessly).
Here is a small demo:
(due to CORS issues, the map data is inserted as a JSON object and the textures are generated rather than loaded)
const mapJsonExport = {"compressionlevel":-1,"height":10,"infinite":false,"layers":[{"compression":"","data":"AQAAAAEAAAACAAAAAgAAAAEAAAACAAAAAgAAAAIAAAACAAAAAgAAAAIAAAACAAAAAgAAAAEAAAABAAAAAQAAAAEAAAABAAAAAQAAAAIAAAABAAAAAQAAAAEAAAABAAAAAQAAAAEAAAABAAAAAQAAAAEAAAACAAAAAQAAAAEAAAACAAAAAgAAAAEAAAABAAAAAgAAAAIAAAABAAAAAgAAAAIAAAACAAAAAgAAAAEAAAACAAAAAQAAAAEAAAABAAAAAQAAAAEAAAABAAAAAQAAAAIAAAACAAAAAgAAAAEAAAACAAAAAgAAAAEAAAACAAAAAQAAAAEAAAACAAAAAQAAAAEAAAABAAAAAQAAAAEAAAACAAAAAgAAAAEAAAABAAAAAgAAAAEAAAABAAAAAQAAAAEAAAACAAAAAQAAAAIAAAACAAAAAgAAAAEAAAABAAAAAgAAAAEAAAACAAAAAgAAAAIAAAACAAAAAgAAAAEAAAABAAAAAgAAAAIAAAACAAAAAgAAAAEAAAACAAAAAQAAAA==","encoding":"base64","height":10,"id":1,"name":"Tile Layer 1","opacity":1,"type":"tilelayer","visible":true,"width":10,"x":0,"y":0}],"nextlayerid":2,"nextobjectid":1,"orientation":"orthogonal","renderorder":"right-down","tiledversion":"1.9.2","tileheight":8,"tilesets":[{"columns":0,"firstgid":1,"grid":{"height":1,"orientation":"orthogonal","width":1},"margin":0,"name":"tiles","spacing":0,"tilecount":2,"tileheight":8,"tiles":[{"id":0,"image":"tile01.png","imageheight":8,"imagewidth":8},{"id":1,"image":"tile02.png","imageheight":8,"imagewidth":8}],"tilewidth":8}],"tilewidth":8,"type":"map","version":"1.9","width":10};
var config = {
    width: 8 * 10,
    height: 8 * 10,
    zoom: 2.2,
    scene: { preload, create }
};

function preload() {
    // loading inline JSON due to CORS issues with the code snippets
    this.load.tilemapTiledJSON('map', mapJsonExport);
    // generating textures instead of loading them due to CORS issues with the code snippets
    let graphics = this.make.graphics({ add: false });
    graphics.fillStyle(0xff0000);
    graphics.fillRect(0, 0, 8, 8);
    graphics.generateTexture('tile01', 8, 8);
    graphics.fillStyle(0x000000);
    graphics.fillRect(0, 0, 8, 8);
    graphics.generateTexture('tile02', 8, 8);
}

function create() {
    let map = this.make.tilemap({ key: 'map' });
    let img1 = map.addTilesetImage('tile01.png', 'tile01');
    let img2 = map.addTilesetImage('tile02.png', 'tile02');
    map.createLayer('Tile Layer 1', [img1, img2], 0, 0);
}

new Phaser.Game(config);
<script src="https://cdn.jsdelivr.net/npm/phaser@3.55.2/dist/phaser.js"></script>

How to crop an image after upload with Cloudinary?

How can I crop an image after upload and send the edited response URL to the frontend?
I will be grateful for an answer.
MY CODE:
const stream = cloudinary.uploader.upload_stream(
    {
        folder,
    },
    (error: UploadApiErrorResponse | undefined, result: UploadApiResponse | undefined): void => {
        console.log(error, result)
        if (result) {
            resolve({
                url: result.url,
                size: Math.round(result.bytes / 1024),
                height: result.height,
                width: result.width,
            })
        } else {
            reject(error)
        }
    }
)
streamifier.createReadStream(buffer).pipe(stream)
The most common method of integrating Cloudinary is that you upload the original file to your Cloudinary account and store the Upload API response to your database which contains the details for the image: https://cloudinary.com/documentation/image_upload_api_reference#sample_response
If you don't want to store the entire response, you should store at least the fields needed to create a URL for that image later: https://cloudinary.com/documentation/image_transformations#transformation_url_structure
(Which are public_id, resource_type, type, format, and timestamp), though strictly speaking most of those are optional if your assets are images of type 'upload'; you certainly need the public_id though.
Then, in your frontend code, when adding the image to your page or application, you add transformation parameters when building the URL, asking that the image be returned with transformations applied to match where/how you're using it.
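For example, a minimal record you might persist per upload (a sketch; the fields are taken from the shape of the upload response held in result above):

const imageRecord = {
    public_id: result.public_id,
    resource_type: result.resource_type,
    type: result.type,
    format: result.format,
    version: result.version, // the timestamp component used in delivery URLs
};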
A common option is to set the width and height to exactly match the image tag or image view, then apply automatic cropping if the aspect ratio of the original doesn't match, with the crop selection being automatic: https://cloudinary.com/documentation/resizing_and_cropping
A JavaScript example that adds those parameters, if the image should be 500x500:
cloudinary.url(public_id,
    {
        resource_type: 'image', // these are the defaults and can be omitted
        type: 'upload',         // these are the defaults and can be omitted
        height: 500,
        width: 500,
        crop: 'lfill',          // try to fill the requested width and height without scaling up, crop if needed
        gravity: 'auto',        // select which area to keep automatically
        fetch_format: 'auto',
        quality: 'auto',
        format: 'jpg',          // sets the file extension on the URL, and will convert to that format if needed and no fetch_format was set to override that
    });
The resulting URL will be something like: http://res.cloudinary.com/demo/image/upload/c_lfill,f_auto,g_auto,h_500,q_auto,w_500/sample.jpg
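Alternatively, if you want Cloudinary to create the cropped version during the upload itself and return its URL in the upload response, you can pass eager transformations in the upload options. A sketch based on the question's snippet (the 500x500 values are illustrative; resolve/reject come from the surrounding Promise executor as in the original code):

const stream = cloudinary.uploader.upload_stream(
    {
        folder,
        // ask Cloudinary to generate a 500x500 cropped derivative at upload time
        eager: [{ width: 500, height: 500, crop: 'lfill', gravity: 'auto' }],
    },
    (error: UploadApiErrorResponse | undefined, result: UploadApiResponse | undefined): void => {
        if (result) {
            // result.eager[0] describes the transformed version; its secure_url
            // can be sent to the frontend directly
            resolve({ url: result.eager[0].secure_url })
        } else {
            reject(error)
        }
    }
)
streamifier.createReadStream(buffer).pipe(stream)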

Azure Media Services - encoding 4K UHD video with v3

I made a library that encodes video in Azure using the v3 API (.NET Core). I successfully implemented encoding up to FHD.
But then I tried to encode 4K UHD video (based on the How to encode with a custom Transform and H264 Multiple Bitrate 4K articles).
So, here's my code to create this Transform:
private static async Task<Transform> Ensure4kTransformExistsAsync(IAzureMediaServicesClient client,
    string resourceGroupName,
    string accountName)
{
    H264Layer CreateH264Layer(int bitrate, int width, int height)
    {
        return new H264Layer(
            profile: H264VideoProfile.Auto,
            level: "auto",
            bitrate: bitrate, // Note that the unit is bits per second
            maxBitrate: bitrate,
            //bufferWindow: TimeSpan.FromSeconds(5), // this is the default
            width: width.ToString(),
            height: height.ToString(),
            bFrames: 3,
            referenceFrames: 3,
            adaptiveBFrame: true,
            frameRate: "0/1"
        );
    }

    // Does a Transform already exist with the desired name? Assume that an existing Transform with the desired name
    // also uses the same recipe or Preset for processing content.
    Transform transform = await client.Transforms.GetAsync(resourceGroupName, accountName, TRANSFORM_NAME_H264_MULTIPLE_4K_S);
    if (transform != null) return transform;

    // Create a new Transform Outputs array - this defines the set of outputs for the Transform
    TransformOutput[] outputs =
    {
        // Create a new TransformOutput with a custom Standard Encoder Preset
        new TransformOutput(
            new StandardEncoderPreset(
                codecs: new Codec[]
                {
                    // Add an AAC audio layer for the audio encoding
                    new AacAudio(
                        channels: 2,
                        samplingRate: 48000,
                        bitrate: 128000,
                        profile: AacAudioProfile.AacLc
                    ),
                    // Next, add an H264Video for the video encoding
                    new H264Video(
                        // Set the GOP interval to 2 seconds for all H264Layers
                        keyFrameInterval: TimeSpan.FromSeconds(2),
                        // Add H264Layers
                        layers: new[]
                        {
                            CreateH264Layer(20000000, 4096, 2304),
                            CreateH264Layer(18000000, 3840, 2160),
                            CreateH264Layer(16000000, 3840, 2160),
                            CreateH264Layer(14000000, 3840, 2160),
                            CreateH264Layer(12000000, 2560, 1440),
                            CreateH264Layer(10000000, 2560, 1440),
                            CreateH264Layer(8000000, 2560, 1440),
                            CreateH264Layer(6000000, 1920, 1080),
                            CreateH264Layer(4700000, 1920, 1080),
                            CreateH264Layer(3400000, 1280, 720),
                            CreateH264Layer(2250000, 960, 540),
                            CreateH264Layer(1000000, 640, 360)
                        }
                    ),
                    // Also generate a set of PNG thumbnails
                    new PngImage(
                        start: "25%",
                        step: "25%",
                        range: "80%",
                        layers: new[]
                        {
                            new PngLayer(
                                "50%",
                                "50%"
                            )
                        }
                    )
                },
                // Specify the format for the output files - one for video+audio, and another for the thumbnails
                formats: new Format[]
                {
                    // Mux the H.264 video and AAC audio into MP4 files, using basename, label, bitrate and extension macros
                    // Note that since you have multiple H264Layers defined above, you have to use a macro that produces unique names per H264Layer
                    // Either {Label} or {Bitrate} should suffice
                    new Mp4Format(
                        "Video-{Basename}-{Label}-{Bitrate}{Extension}"
                    ),
                    new PngFormat(
                        "Thumbnail-{Basename}-{Index}{Extension}"
                    )
                }
            ),
            OnErrorType.StopProcessingJob,
            Priority.Normal
        )
    };

    const string DESCRIPTION = "Multiple 4k";
    // Create the custom Transform with the outputs defined above
    transform = await client.Transforms.CreateOrUpdateAsync(resourceGroupName, accountName, TRANSFORM_NAME_H264_MULTIPLE_4K_S,
        outputs,
        DESCRIPTION);
    return transform;
}
But the job ends up with the following error:
Job ended with error: Fatal service error, please contact support.
An error has occurred. Stage: ProcessSubtaskRequest. Code: System.Net.WebException.
And I did use an S3 Media Reserved Unit for encoding. So, is there any way to make it work?
Posting the solutions back to this thread for completeness:
There was a bug in the sample code (the RunAsync() method) which resulted in Jobs using an incorrect output Asset. The bug has now been fixed.
There was a related bug in error handling that is being addressed.

Failing to write tiles with YCBCR photometric in Libtiff.net

I'm using Libtiff.net to write tiled images into a TIFF file. It works fine if the photometric is set to RGB, but if I use YCbCr (in order to reduce file size), I get the error "Application transferred too few scanlines". Relevant pieces of code:
private static Tiff CreateTiff()
{
    var t = Tiff.Open(@"d:\test.tif", "w");
    const long TIFF_SIZE = 256;
    t.SetField(TiffTag.IMAGEWIDTH, TIFF_SIZE);
    t.SetField(TiffTag.IMAGELENGTH, TIFF_SIZE);
    t.SetField(TiffTag.TILEWIDTH, TIFF_SIZE);
    t.SetField(TiffTag.TILELENGTH, TIFF_SIZE);
    t.SetField(TiffTag.BITSPERSAMPLE, 8);
    t.SetField(TiffTag.SAMPLESPERPIXEL, 3);
    t.SetField(TiffTag.COMPRESSION, Compression.JPEG);
    t.SetField(TiffTag.JPEGQUALITY, 80L);
    t.SetField(TiffTag.PHOTOMETRIC, Photometric.YCBCR);
    t.SetField(TiffTag.PLANARCONFIG, PlanarConfig.CONTIG);
    return t;
}
private static void Go()
{
    var tif = CreateTiff();
    var bmp = Image.FromFile(@"d:\tile.bmp") as Bitmap;
    var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
    var size = bmp.Width;
    var bands = 3;
    var bytes = new byte[size * size * bands];
    var bmpdata = bmp.LockBits(rect, ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
    try
    {
        System.Runtime.InteropServices.Marshal.Copy(bmpdata.Scan0, bytes, 0, bytes.Length);
    }
    finally
    {
        bmp.UnlockBits(bmpdata);
        bmp.Dispose();
    }
    tif.WriteEncodedTile(0, bytes, bytes.Length);
    tif.Dispose();
    Console.ReadLine();
}
This same code works if I use Photometric.RGB. And even if I write a striped image (t.SetField(TiffTag.ROWSPERSTRIP, ROWS_PER_STRIP) and tif.WriteEncodedStrip(count, bytes, bytes.Length)) with Photometric.YCbCr, everything works just fine.
Why can't I use YCbCr with a tiled image? Thanks in advance.
I finally found a workaround. I can set the TIFF as RGB, encode the pixel data as YCbCr myself, and then write it. The size of the output image is greatly reduced, as I desired, and I can recover the RGB values when I read the tiles back. But it still isn't a perfect solution: any other software will recognize this TIFF as RGB, so the colors are incorrect.
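The per-pixel math behind that workaround is the standard JPEG full-range BT.601 RGB-to-YCbCr conversion; a sketch of the formulas (illustration only, written as JavaScript-style pseudocode; the actual code would run over the C# bytes array):

function rgbToYCbCr(r, g, b) {
    // full-range BT.601 coefficients as used by JPEG
    const y  =       0.299 * r + 0.587 * g + 0.114 * b;
    const cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b;
    const cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b;
    return [Math.round(y), Math.round(cb), Math.round(cr)];
}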
I thought of editing the photometric tag after writing the data, but every program I've tried either ignores the command or corrupts the file. How can I modify this tag without corrupting the image?
