Looking through the docs, there are two ways of uploading files (images, videos, ...) to Cloudinary using the Node.js SDK.
Is there some way of getting progress reports when using either of the methods below? E.g. 1 of 100 MB has been uploaded.
cloudinary.v2.uploader.upload_large(filePath, options, (err, results) => {});
cloudinary.v2.uploader.upload(filePath, options, (err, results) => {});
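One workaround I'm considering is to stream the file myself and count the bytes as they're piped into upload_stream, since the SDK doesn't seem to emit progress events. A rough sketch (the helper name and callback shape are my own, and this measures bytes sent, not bytes acknowledged by Cloudinary):
const fs = require("fs");
const cloudinary = require("cloudinary").v2;

// Hypothetical helper: approximates progress by counting bytes piped into
// upload_stream. It tracks bytes read from disk, not bytes confirmed by
// Cloudinary, so treat the numbers as an estimate.
function uploadWithProgress(filePath, options, onProgress) {
  const totalBytes = fs.statSync(filePath).size;
  let sentBytes = 0;

  return new Promise((resolve, reject) => {
    const uploadStream = cloudinary.uploader.upload_stream(options, (err, result) =>
      err ? reject(err) : resolve(result)
    );

    fs.createReadStream(filePath)
      .on("data", (chunk) => {
        sentBytes += chunk.length;
        onProgress(sentBytes, totalBytes); // e.g. "1 of 100 MB uploaded"
      })
      .pipe(uploadStream);
  });
}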
For assets larger than this limit (100MB), you must request that the derived versions are created before they're requested, which we call 'eagerly', and that the processing takes place in the background ('asynchronously'). When using asynchronous eager transformations you can manipulate assets as large as your account's maximum video/image file size limit.
Eager transformations can be requested for a new asset in the upload API call or configured in an upload preset, including an upload preset that is used when you upload to the Media Library.
For existing videos, you can request eager transformations via the explicit API method.
Once the video is transformed eagerly/asynchronously it will be available via the URL as normal.
For example in node:
cloudinary.v2.uploader.upload("sample.jpg",
{ eager: [
{ width: 300, height: 300, crop: "pad" },
{ width: 160, height: 100, crop: "crop", gravity: "south"} ],
eager_async: true,
eager_notification_url: "https://mysite.example.com/eager_endpoint",
notification_url: "https://mysite.example.com/upload_endpoint" },
function(error, result) {console.log(result, error); });
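For an existing video, a similar request through the explicit method might look like the sketch below (the public ID and the eager transformation values are placeholders):
// A sketch of requesting an eager, asynchronous transformation for a video
// that is already uploaded. "my-video" and the eager values are placeholders.
cloudinary.v2.uploader.explicit("my-video",
  { resource_type: "video",
    type: "upload",
    eager: [ { width: 640, crop: "scale" } ],
    eager_async: true,
    eager_notification_url: "https://mysite.example.com/eager_endpoint" },
  function(error, result) { console.log(result, error); });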
Using Cloudinary, I would like to limit the width and height of uploaded PDFs, just as I can for an image.
This is how I upload the file:
const res = await new Promise((resolve, reject) => {
  // `cloud` is the Cloudinary uploader (e.g. cloudinary.v2.uploader) and
  // `streamifier` is require('streamifier')
  const cld_upload_stream = cloud.upload_stream(
    {
      folder: process.env.CLOUD_FOLDER,
    },
    function (err, res) {
      if (res) {
        resolve(res);
      } else {
        reject(err);
      }
    }
  );
  streamifier.createReadStream(file.data).pipe(cld_upload_stream);
});
return {
  url: res.url,
  location: res.public_id,
};
Are there any options to limit the width and height that work on PDF files?
I tried:
{ responsive_breakpoints:
{ create_derived: true,
bytes_step: 20000,
min_width: 200,
max_width: 1000 }}
but it does not seem to work.
The responsive breakpoints feature you mentioned analyses an image and decides which sizes to create for a responsive design. It balances the problems you can run into choosing sizes manually: generating 'too many' images with very similar dimensions, or leaving large gaps between the byte sizes of the different versions, so that more bandwidth is used than necessary for often-requested files.
There's a web interface that uses that feature and provides examples of what it does: https://www.responsivebreakpoints.com/
This is not related to validating uploaded files or editing the original assets that you upload to Cloudinary. There's no server-side validation available related to the dimensions of an uploaded file, but you could either:
Use an Incoming Transformation to resize the asset before it's saved into your account (see the sketch after these options): https://cloudinary.com/documentation/transformations_on_upload#incoming_transformations
Use the upload API response after the file is uploaded and validated, and if it's "too big", show an error to your user and delete the file again.
You could also use a webhooks notification to receive the uploaded file metadata: https://cloudinary.com/documentation/notifications
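For the first option, an incoming transformation can be passed in the upload_stream options from your snippet; a minimal sketch, assuming a 1000px cap (adjust the values to your needs):
// A sketch: the same upload_stream call with an incoming transformation,
// so the asset is resized before it is stored. The 1000px limit is assumed.
let cld_upload_stream = cloud.upload_stream(
  {
    folder: process.env.CLOUD_FOLDER,
    transformation: [{ width: 1000, height: 1000, crop: "limit" }],
  },
  function (err, res) {
    // resolve/reject as in your original code
  }
);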
Cross-post from the Slack #help channel...
I'm creating a Sanity/Nuxt site for a client that includes a blog component. His primary marketing source is Instagram and since Instagram's API only allows for single-image posts I'm trying to go about it in the reverse way. I'm setting up a Netlify function where the client will paste in the link to the Instagram post and the function will fetch all associated images via URL using the /?__a=1 trick to fetch public data from Instagram. What I would like to do is fetch all of the images from said Instagram post, upload them as assets, and then create a blog post utilizing said uploaded images. I've modified the built-in Netlify function to create a Sanity document, where I'm pulling the image as an arrayBuffer, converting it to 'base64', then trying to upload.
When I try to run the file, found at https://gist.github.com/jeffpohlmeyer/d9824920fc1eb522ceff6380edb9de80 , I get the following error:
body: {
statusCode: 400,
error: 'Bad Request',
message: 'Invalid image, could not read metadata',
details: 'Input buffer contains unsupported image format'
},
Can anyone suggest a way I can do this? As an alternative, I figured I could just link to the URLs hosted on Instagram instead of hosting the images within Sanity, but that would be difficult for the client to maintain: if, for example, the Instagram post changes or he wants to change the cover image, working with URLs instead of images would be awkward.
I tried the same in my repl:
const axios = require('axios');
// `client` is assumed to be a configured Sanity client, e.g.
// const client = require('@sanity/client')({ projectId, dataset, token });

async function uploadImage() {
  // Fetch the raw image bytes (an ArrayBuffer, not base64)
  const image = await axios.get(
    '<<url>>',
    { responseType: 'arraybuffer' }
  );
  // Wrap the bytes in a Buffer and upload it directly
  const data = Buffer.from(image.data, 'binary');
  client.assets
    .upload('image', data)
    .then(() => {
      console.log('Done!');
    })
    .catch((err) => {
      console.log('err', err);
      return { statusCode: 400 };
    });
}

uploadImage();
Just remove the base64 conversion and this should work: client.assets.upload accepts the raw Buffer directly.
I want to post a large amount of data (around 10 MB) to a Node.js (LoopBack) API server. My requirement is to ensure that the Node server does not miss any API request coming towards it, even if other data is being processed at the same time. This API will be called frequently from the scheduler.
There is a limit in config.json in the LoopBack folder structure that specifies the maximum amount of data that can be sent. Are there any challenges in posting this much data to an API URL (POST method)?
Or is there any mechanism for dealing with the large amount of data, so that processing it will not affect server performance?
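For reference, in LoopBack 3 the request-size limit mentioned above can usually be raised in server/middleware.json; a sketch (the 50mb value is an assumption, the body-parser package must be installed, and key names may vary by version):
{
  "parse": {
    "body-parser#json": { "params": { "limit": "50mb" } },
    "body-parser#urlencoded": { "params": { "limit": "50mb", "extended": true } }
  }
}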
Check out TUS, an open protocol for resumable file uploads. It also has an official JavaScript client library. Here is an upload example borrowed from the JS library's GitHub page:
var input = document.querySelector("input[type=file]") // assumed file input

input.addEventListener("change", function (e) {
  // Get the selected file from the input element
  var file = e.target.files[0]

  // Create a new tus upload
  var upload = new tus.Upload(file, {
    endpoint: "http://localhost:1080/files/",
    retryDelays: [0, 1000, 3000, 5000],
    metadata: {
      filename: file.name,
      filetype: file.type
    },
    onError: function (error) {
      console.log("Failed because: " + error)
    },
    onProgress: function (bytesUploaded, bytesTotal) {
      var percentage = (bytesUploaded / bytesTotal * 100).toFixed(2)
      console.log(bytesUploaded, bytesTotal, percentage + "%")
    },
    onSuccess: function () {
      console.log("Download %s from %s", upload.file.name, upload.url)
    }
  })

  // Start the upload
  upload.start()
})
For handling the upload on the server side, you can find multiple Node packages on npm; tus-node-server, for example.
When using the sharp image resize library (https://github.com/lovell/sharp) for Node.js, the image is being rotated.
I have no code that says .rotate(), so why is it being rotated, and how can I stop it from rotating?
I'm using the serverless-image-resizing example provided by AWS (https://github.com/awslabs/serverless-image-resizing), which uses Lambda to resize images on the fly if the thumbnail does not exist:
S3.getObject({ Bucket: BUCKET, Key: originalKey }).promise()
  .then(data => Sharp(data.Body)
    .resize(width, height)
    .toFormat('png')
    .toBuffer()
  )
  .then(buffer => S3.putObject({
      Body: buffer,
      Bucket: BUCKET,
      ContentType: 'image/png',
      Key: key,
    }).promise()
  )
  .then(() => callback(null, {
      statusCode: '301',
      headers: { 'location': `${URL}/${key}` },
      body: '',
    })
  )
  .catch(err => callback(err))
Original large image:
Resized image (note it has been rotated as well):
The problem actually turned out to be this: when you resize an image, the EXIF data is lost. The EXIF data includes the correct orientation of the image, i.e. which way is up.
Fortunately sharp does have a feature that retains the EXIF data, .withMetadata(). So the code above needs to be changed to read:
S3.getObject({ Bucket: BUCKET, Key: originalKey }).promise()
  .then(data => Sharp(data.Body)
    .resize(width, height)
    .withMetadata() // add this line here
    .toBuffer()
  )
(Note that you also need to remove the .toFormat('png') call, because PNG does not have the same support for EXIF that JPEG does.)
And now it works properly, and the resized image is the correct way up.
The alternative solution is to actually call .rotate() before resize. This will auto-orient the image based on the EXIF data.
.then(data => Sharp(data.Body)
  .rotate()
  .resize(width, height)
  .toBuffer()
)
More details in the docs.
This way you don't need to retain the original metadata, keeping the overall image size smaller.
const data = await sharp(file_local)
  .resize({
    width: px,
  })
  .jpeg({
    quality: quality,
    progressive: true,
    chromaSubsampling: "4:4:4",
  })
  .withMetadata()
  .toFile(file_local_thumb);
Use .withMetadata() to prevent the rotation of the image. Additionally, you can pass only the width parameter; you don't need the height.
I fixed this in a related way, specific to the AWS Serverless Image Handler, without changing code: I'm passing in "rotate":null in the list of edits.
Reading the latest (5.2.0) code, it looks like they tried to fix this, but it still wasn't working for me until I added "rotate":null.
Here is a related issue on Github: https://github.com/awslabs/serverless-image-handler/issues/236
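For illustration, a request body with that edit might look like the following sketch (the bucket, key, and resize values are placeholders; the handler expects this JSON base64-encoded in the request path):
{
  "bucket": "my-bucket",
  "key": "photo.jpg",
  "edits": {
    "rotate": null,
    "resize": { "width": 300, "height": 300 }
  }
}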
Updated answer for Serverless Image Handler 5.0, deploying with the CloudFormation Stack template as of 10/2020:
I appended .rotate() to line 50 of image-handler.js and it worked like a charm:
const image = sharp(originalImage, { failOnError: false }).rotate();
In case you landed here using the Nuxt Image component with the IPX provider, this is how I solved it: in the nuxt.config.js file, add this:
buildModules: [
[
'@nuxt/image',
{
sharp: {
withMetadata: true,
},
},
],
],
Note there are more options in the module than the ones that are documented: https://github.com/nuxt/image/blob/61bcb90f0403df804506ccbecebfe13605ae56b4/src/module.ts#L20
I am creating an image uploading system (file sizes usually between 20 MB and 50 MB), and I want to crop those images to various sizes (for viewing in mobile, web, and desktop applications). All images are stored in AWS S3.
Here is a snapshot of the crop sizes:
[
  {
    width: 200,
    height: 200,
    type: "small",
    platform: "web"
  },
  {
    width: 300,
    height: 400,
    type: "small",
    platform: "mobile-android"
  }
  ....
  ....
]
Here is what I am planning to do:
1. First upload the image to S3.
2. Run all the crop operations in an async task:
upload: function (req, res) {
  // various crop sizes
  var cropSizes = [];
  // upload image to S3
  uploadImageToS3(req.file, function (err, result) {
    if (!err) {
      // create crops
      cropImage({
        'cropsizes': cropSizes,
        'file': req.file
      }, function (err, result) {
        console.log('all crops completed', result);
      });
      res.send('running crop in background');
    }
  });
}
But is this the correct method? Does anyone have a better approach?
Since you are already using S3, I would recommend trying AWS Lambda to resize your images and add them back to the S3 bucket with the new sizes.
There is a detailed explanation in this link: https://aws.amazon.com/blogs/compute/resize-images-on-the-fly-with-amazon-s3-aws-lambda-and-amazon-api-gateway/
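A minimal sketch of that approach (the key layout, SDK version, and crop values are assumptions): an S3-triggered Lambda that downloads the original upload, produces each crop size with sharp, and writes the results back under a size-specific prefix.
// Assumes aws-sdk v2 (bundled in the Node Lambda runtime) and a sharp layer.
// Configure the S3 trigger to exclude the resized/ prefix to avoid recursion.
const S3 = require("aws-sdk/clients/s3");
const sharp = require("sharp");
const s3 = new S3();

// Same shape as the crop-size snapshot above (placeholder values).
const CROP_SIZES = [
  { width: 200, height: 200, type: "small", platform: "web" },
  { width: 300, height: 400, type: "small", platform: "mobile-android" },
];

exports.handler = async (event) => {
  // Read the bucket/key of the object that triggered the function
  const bucket = event.Records[0].s3.bucket.name;
  const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));

  const original = await s3.getObject({ Bucket: bucket, Key: key }).promise();

  // Create and upload every crop in parallel
  await Promise.all(
    CROP_SIZES.map(async ({ width, height, platform }) => {
      const body = await sharp(original.Body).resize(width, height).toBuffer();
      await s3
        .putObject({
          Bucket: bucket,
          Key: `resized/${platform}/${key}`,
          Body: body,
          ContentType: original.ContentType,
        })
        .promise();
    })
  );
};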