How to crop an image after upload with Cloudinary? - node.js

How can I crop an image after upload and send the edited response URL to the frontend?
I will be grateful for an answer.
MY CODE:
import streamifier from 'streamifier'
import { v2 as cloudinary, UploadApiErrorResponse, UploadApiResponse } from 'cloudinary'

// Runs inside a Promise executor, so resolve/reject are in scope
const stream = cloudinary.uploader.upload_stream(
  {
    folder,
  },
  (error: UploadApiErrorResponse | undefined, result: UploadApiResponse | undefined): void => {
    console.log(error, result)
    if (result) {
      resolve({
        url: result.url,
        size: Math.round(result.bytes / 1024),
        height: result.height,
        width: result.width,
      })
    } else {
      reject(error)
    }
  }
)
streamifier.createReadStream(buffer).pipe(stream)

The most common method of integrating Cloudinary is to upload the original file to your Cloudinary account and store the Upload API response in your database, which contains the details for the image: https://cloudinary.com/documentation/image_upload_api_reference#sample_response
If you don't want to store the entire response, you should store at least the fields needed to build a URL for that image later: https://cloudinary.com/documentation/image_transformations#transformation_url_structure
(Those are public_id, resource_type, type, format, and timestamp.) Strictly speaking, most of them are optional if your assets are images of type 'upload'; you certainly need the public_id, though.
Then, in your frontend code, when adding the image to your page or application, you add transformation parameters when building the URL, asking for the image to be returned with transformations applied so that it matches where/how you're using it.
A common option is to set the width and height to exactly match the image tag or image view, then apply automatic cropping if the aspect ratio of the original doesn't match, with the crop selection being automatic: https://cloudinary.com/documentation/resizing_and_cropping
A JavaScript example to add those parameters, if the image should be 500x500, is:
cloudinary.url(public_id, {
    resource_type: 'image', // this is the default and can be omitted
    type: 'upload', // this is the default and can be omitted
    height: 500,
    width: 500,
    crop: 'lfill', // try to fill the requested width and height without scaling up, crop if needed
    gravity: 'auto', // select which area to keep automatically
    fetch_format: 'auto',
    quality: 'auto',
    format: 'jpg', // sets the file extension on the URL, and converts to that format if needed, unless fetch_format overrides it
});
The resulting URL will be something like: http://res.cloudinary.com/demo/image/upload/c_lfill,f_auto,g_auto,h_500,q_auto,w_500/sample.jpg
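To answer the original question of sending an edited URL to the frontend, you can also build that same transformation URL on the Node side, inside the upload callback, before resolving. A minimal sketch against the callback from the question (the 500x500 size and the secure flag are example choices, not anything the question requires):

// Inside the upload_stream callback, once `result` is available:
if (result) {
  // Build a cropped delivery URL from the stored public_id
  const croppedUrl = cloudinary.url(result.public_id, {
    width: 500,   // example target size
    height: 500,
    crop: 'lfill',
    gravity: 'auto',
    fetch_format: 'auto',
    quality: 'auto',
    secure: true, // https delivery URL
  })
  resolve({
    url: croppedUrl, // the frontend receives the cropped URL directly
    size: Math.round(result.bytes / 1024),
    height: result.height,
    width: result.width,
  })
} else {
  reject(error)
}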

Related

Display images from Node (MERN)

I’m building a MERN App and trying to display an image saved in MongoDB on the frontend. I use Multer and Sharp to process images when saving to the database, like so:
const buffer = await sharp(req.file.path)
  .resize({ width: 160, height: 160 })
  .png()
  .toBuffer();
group.groupImgUrl = Buffer.from(buffer, "binary").toString("base64");
await group.save();

When I console.log(group.groupImgUrl), it gives me the following output: new Binary(Buffer.from("6956….73d3d", "hex"), 0)
On the frontend, I receive groupImgUrl in different formats: sometimes a typed array, sometimes a string. When the response has the shape groupImgUrl: { type: Buffer; data: Array }, I can transform the typed array and display it properly:
let base64Str = "";
if ("data" in group.groupImgUrl) {
  response.group.groupImgUrl.data.forEach((index: number) => {
    base64Str += String.fromCharCode(index);
  });
}
<img src={`data:image/png;base64,${base64Str}`} />
However, when I receive groupImgUrl as a string, it is somehow mangled and cannot be displayed correctly using the same data:image/png;base64,... prefix.
My question is: how come I receive the same image in different formats, and how do I display them properly?
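One way to handle both cases is to normalize the value before building the data URL. A minimal sketch (the toBase64 helper is hypothetical; it assumes the server stored a base64 string as above, so a JSON-serialized Buffer's data array holds the character codes of that string):

// Hypothetical helper: accept either a JSON-serialized Buffer or a plain string
function toBase64(groupImgUrl) {
  if (typeof groupImgUrl === "string") {
    // Already the base64 string that was stored on the server
    return groupImgUrl;
  }
  if (groupImgUrl && Array.isArray(groupImgUrl.data)) {
    // Serialized Buffer: { type: "Buffer", data: [...] }, where each entry
    // is the char code of one character of the stored base64 string
    return String.fromCharCode(...groupImgUrl.data);
  }
  return "";
}

// Usage: <img src={`data:image/png;base64,${toBase64(group.groupImgUrl)}`} />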

How can I load an exported Tileset (image collection) from Tiled in Phaser 3?

I want to load an image collection tileset into my Phaser game. I know that with tilesets that are just one image you can simply load that image into Phaser, but what about an image collection? In Tiled I saw the options to export that tileset as either .tsx or .json, but I don't know if that helps in my case. The reason I need this is that I have some objects that are too big to be used as tiles. I can load them into Tiled and place them as I want to, but obviously they don't show up in my game unless I can import that tileset into Phaser. Does anyone know how to do that, or maybe you know a better option than using an image collection?
Well, after some tests and updating my Tiled version to 1.9.2, it seems there is a pretty simple way.
As long as the tileset collection is marked as "Embedded in map"
(I could have sworn this checkbox was hidden/deactivated when selecting "Collection of Images" in my earlier Tiled version)
Export the map as JSON
Load the map and tile images:
preload() {
    this.load.tilemapTiledJSON('map', 'export.tmj');
    this.load.image('tile01', 'tile01.png');
    this.load.image('tile02', 'tile02.png');
    ...
}
Create the Phaser TileSets; just use the filename from the JSON as the tilesetName (this is the "tricky" part, at least for me):
create() {
    var map = this.make.tilemap({ key: 'map' });
    var img1 = map.addTilesetImage('tile01.png', 'tile01');
    var img2 = map.addTilesetImage('tile02.png', 'tile02');
    ...
    // create the layer with all tilesets
    map.createLayer('Tile Layer 1', [img1, img2, ...]);
    ...
}
This should work, at least with a "Collection of Images" whose images have a size of 8x8 pixels (since I don't know the needed/intended image size, I didn't want to waste time testing various image sizes needlessly).
Here is a small demo:
(due to CORS issues, the map data is inserted as a JSON object and the textures are generated rather than loaded)
const mapJsonExport = {"compressionlevel":-1,"height":10,"infinite":false,"layers":[{"compression":"","data":"AQAAAAEAAAACAAAAAgAAAAEAAAACAAAAAgAAAAIAAAACAAAAAgAAAAIAAAACAAAAAgAAAAEAAAABAAAAAQAAAAEAAAABAAAAAQAAAAIAAAABAAAAAQAAAAEAAAABAAAAAQAAAAEAAAABAAAAAQAAAAEAAAACAAAAAQAAAAEAAAACAAAAAgAAAAEAAAABAAAAAgAAAAIAAAABAAAAAgAAAAIAAAACAAAAAgAAAAEAAAACAAAAAQAAAAEAAAABAAAAAQAAAAEAAAABAAAAAQAAAAIAAAACAAAAAgAAAAEAAAACAAAAAgAAAAEAAAACAAAAAQAAAAEAAAACAAAAAQAAAAEAAAABAAAAAQAAAAEAAAACAAAAAgAAAAEAAAABAAAAAgAAAAEAAAABAAAAAQAAAAEAAAACAAAAAQAAAAIAAAACAAAAAgAAAAEAAAABAAAAAgAAAAEAAAACAAAAAgAAAAIAAAACAAAAAgAAAAEAAAABAAAAAgAAAAIAAAACAAAAAgAAAAEAAAACAAAAAQAAAA==","encoding":"base64","height":10,"id":1,"name":"Tile Layer 1","opacity":1,"type":"tilelayer","visible":true,"width":10,"x":0,"y":0}],"nextlayerid":2,"nextobjectid":1,"orientation":"orthogonal","renderorder":"right-down","tiledversion":"1.9.2","tileheight":8,"tilesets":[{"columns":0,"firstgid":1,"grid":{"height":1,"orientation":"orthogonal","width":1},"margin":0,"name":"tiles","spacing":0,"tilecount":2,"tileheight":8,"tiles":[{"id":0,"image":"tile01.png","imageheight":8,"imagewidth":8},{"id":1,"image":"tile02.png","imageheight":8,"imagewidth":8}],"tilewidth":8}],"tilewidth":8,"type":"map","version":"1.9","width":10};
var config = {
    width: 8 * 10,
    height: 8 * 10,
    zoom: 2.2,
    scene: { preload, create }
};

function preload() {
    // loading inline JSON due to CORS issues with the code snippets
    this.load.tilemapTiledJSON('map', mapJsonExport);
    // generating textures instead of loading them due to CORS issues with the code snippets
    let graphics = this.make.graphics({ add: false });
    graphics.fillStyle(0xff0000);
    graphics.fillRect(0, 0, 8, 8);
    graphics.generateTexture('tile01', 8, 8);
    graphics.fillStyle(0x000000);
    graphics.fillRect(0, 0, 8, 8);
    graphics.generateTexture('tile02', 8, 8);
}

function create() {
    let map = this.make.tilemap({ key: 'map' });
    let img1 = map.addTilesetImage('tile01.png', 'tile01');
    let img2 = map.addTilesetImage('tile02.png', 'tile02');
    map.createLayer('Tile Layer 1', [img1, img2], 0, 0);
}

new Phaser.Game(config);
<script src="https://cdn.jsdelivr.net/npm/phaser@3.55.2/dist/phaser.js"></script>

Node/Sharp: Resizing thumbnails produces much larger images than expected

I am creating a function to resize thumbnails downloaded by youtube-dl. youtube-dl downloads 4 thumbnails, but they are not the exact dimensions I need, so they must be resized. Because the images I need are smaller than the largest thumbnail downloaded by youtube-dl, that large thumbnail is used as the base for creating the resized thumbnails.
This is the function I am using to resize my thumbnails:
import fs from 'fs-extra';
import sharp from 'sharp';

const generateThumbnailFile = async (sourceData, width, height, filepath) => {
    const data = await sharp(sourceData)
        .resize({ width: width, height: height, fit: sharp.fit.inside })
        .jpeg({ quality: 100, chromaSubsampling: '4:4:4' })
        .toBuffer();
    fs.writeFileSync(filepath, data);
};

let thumbnailData = fs.readFileSync('D:/original 1280x720 (122,346 bytes).jpg');
await generateThumbnailFile(thumbnailData, 720, 405, 'D:/medium resized 720x405 (329,664 bytes).jpg');
await generateThumbnailFile(thumbnailData, 168, 95, 'D:/small resized 168x95 (26,246 bytes).jpg');
However, the size of my resized thumbnails is much greater than I would expect. The file size of the medium resized thumbnail is nearly three times larger than the largest original thumbnail, and the file size of the small resized thumbnail is nearly three times larger than its comparable original thumbnail.
What can I do to reduce the file size of my resized thumbnails?
The answer is very easy: you used quality: 100; change it to a smaller number (80-95) and the generated files should be smaller. Using a quality of 100 is a waste of space for JPEGs.
Here is a link to a page that shows the same images compressed at different quality levels: http://regex.info/blog/lightroom-goodies/jpeg-quality - you can see there that 100% is usually significantly bigger than the other files (sometimes even twice as big).
Therefore, using just 80-90% is enough, and the loss in quality is not really that great. (The input file's quality was probably not 100 either, so resizing with quality set to 100 can produce files bigger than the source files.)
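Applied to the function from the question, the change is just the jpeg() options. Something like this (quality 80 is an example value, and mozjpeg: true is an optional extra that sharp supports for smaller output):

const data = await sharp(sourceData)
    .resize({ width: width, height: height, fit: sharp.fit.inside })
    // quality 80 is a reasonable default; raise it towards 95 if artifacts appear
    .jpeg({ quality: 80, mozjpeg: true })
    .toBuffer();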

Azure Media Services - encoding 4K UHD video with v3

I made a library that encodes video in Azure using the v3 API (.NET Core). I successfully implemented encoding up to FHD.
But then I tried to encode 4K UHD video (based on the How to encode with a custom Transform and H264 Multiple Bitrate 4K articles).
So, here's my code to create this Transform:
private static async Task<Transform> Ensure4kTransformExistsAsync(IAzureMediaServicesClient client,
    string resourceGroupName,
    string accountName)
{
    H264Layer CreateH264Layer(int bitrate, int width, int height)
    {
        return new H264Layer(
            profile: H264VideoProfile.Auto,
            level: "auto",
            bitrate: bitrate, // Note that the units are bits per second
            maxBitrate: bitrate,
            //bufferWindow: TimeSpan.FromSeconds(5), // this is the default
            width: width.ToString(),
            height: height.ToString(),
            bFrames: 3,
            referenceFrames: 3,
            adaptiveBFrame: true,
            frameRate: "0/1"
        );
    }

    // Does a Transform already exist with the desired name? Assume that an existing Transform with the desired name
    // also uses the same recipe or Preset for processing content.
    Transform transform = await client.Transforms.GetAsync(resourceGroupName, accountName, TRANSFORM_NAME_H264_MULTIPLE_4K_S);
    if (transform != null) return transform;

    // Create a new Transform Outputs array - this defines the set of outputs for the Transform
    TransformOutput[] outputs =
    {
        // Create a new TransformOutput with a custom Standard Encoder Preset
        new TransformOutput(
            new StandardEncoderPreset(
                codecs: new Codec[]
                {
                    // Add an AAC audio layer for the audio encoding
                    new AacAudio(
                        channels: 2,
                        samplingRate: 48000,
                        bitrate: 128000,
                        profile: AacAudioProfile.AacLc
                    ),
                    // Next, add an H264Video for the video encoding
                    new H264Video(
                        // Set the GOP interval to 2 seconds for all H264Layers
                        keyFrameInterval: TimeSpan.FromSeconds(2),
                        // Add H264Layers
                        layers: new[]
                        {
                            CreateH264Layer(20000000, 4096, 2304),
                            CreateH264Layer(18000000, 3840, 2160),
                            CreateH264Layer(16000000, 3840, 2160),
                            CreateH264Layer(14000000, 3840, 2160),
                            CreateH264Layer(12000000, 2560, 1440),
                            CreateH264Layer(10000000, 2560, 1440),
                            CreateH264Layer(8000000, 2560, 1440),
                            CreateH264Layer(6000000, 1920, 1080),
                            CreateH264Layer(4700000, 1920, 1080),
                            CreateH264Layer(3400000, 1280, 720),
                            CreateH264Layer(2250000, 960, 540),
                            CreateH264Layer(1000000, 640, 360)
                        }
                    ),
                    // Also generate a set of PNG thumbnails
                    new PngImage(
                        start: "25%",
                        step: "25%",
                        range: "80%",
                        layers: new[]
                        {
                            new PngLayer(
                                "50%",
                                "50%"
                            )
                        }
                    )
                },
                // Specify the format for the output files - one for video+audio, and another for the thumbnails
                formats: new Format[]
                {
                    // Mux the H.264 video and AAC audio into MP4 files, using basename, label, bitrate and extension macros
                    // Note that since you have multiple H264Layers defined above, you have to use a macro that produces unique names per H264Layer
                    // Either {Label} or {Bitrate} should suffice
                    new Mp4Format(
                        "Video-{Basename}-{Label}-{Bitrate}{Extension}"
                    ),
                    new PngFormat(
                        "Thumbnail-{Basename}-{Index}{Extension}"
                    )
                }
            ),
            OnErrorType.StopProcessingJob,
            Priority.Normal
        )
    };

    const string DESCRIPTION = "Multiple 4k";
    // Create the custom Transform with the outputs defined above
    transform = await client.Transforms.CreateOrUpdateAsync(resourceGroupName, accountName, TRANSFORM_NAME_H264_MULTIPLE_4K_S,
        outputs,
        DESCRIPTION);
    return transform;
}
But the job ends up with the following error:
Job ended with error: Fatal service error, please contact support.
An error has occurred. Stage: ProcessSubtaskRequest. Code: System.Net.WebException.
And I did use an S3 Media Reserved Unit for encoding. So, is there any way to make this work?
Posting back the solutions to this thread for completeness:
There was a bug in the sample code (the RunAsync() method) which resulted in Jobs using an incorrect output Asset. The bug has now been fixed.
There was a related bug in error handling that is being addressed.

Kohana 3.2 Upload Exception

I'm using Kohana 3.2.
I have a category form with two upload fields: one is an image, and one is a banner. In my controller I have:
try {
    $model_category->save();
} catch (ORM_Validation_Exception $e) {
    $errors = $e->errors('forms');
    //echo Debug::vars($errors);
} catch (Exception $e) {
    $upload_errors = $e->getMessage();
}
Rules for my images in the model:
'photo' => array(
    array('Upload::valid'),
    array('Upload::type', array(array(':value'), array('jpeg', 'jpg', 'png', 'gif'))),
    array('Upload::size', array(array(':value'), array('500000')))
),
'banner' => array(
    //array(array($this, 'validate_photo'), array(':validation', ':field', ':value', 500, 100)),
    array('Upload::valid'),
    array('Upload::type', array(array(':value'), array('jpeg', 'jpg', 'png', 'gif'))),
    array('Upload::size', array(array(':value'), array('5000000')))
),
I ran into the following problem: if I leave a required field empty (for example "name") and upload a txt file to force both exceptions to occur, only the ORM_Validation_Exception is caught. So my question is how to merge the two error arrays. And, very importantly, how can I know whether the exception is for the image field or the banner field?
I have been trying for days but ended up with nothing. Please help me out!
You can use the Validation class to validate your uploads, and then, if validation passes, save the model.
Something like:
$validate_image = Validation::factory($_FILES);
$validate_image->rule('photo', 'Upload::valid');
$validate_image->rule('photo', 'Upload::type', array(':value', array('jpeg', 'jpg', 'png', 'gif')));
$validate_image->rule('photo', 'Upload::size', array(':value', '500000'));

$validate_banner = Validation::factory($_FILES);
$validate_banner->rule('banner', 'Upload::valid');
$validate_banner->rule('banner', 'Upload::type', array(':value', array('jpeg', 'jpg', 'png', 'gif')));
$validate_banner->rule('banner', 'Upload::size', array(':value', '5000000'));

if ($validate_image->check() && $validate_banner->check()) {
    $model_category->save();
}
