Getting ENOENT error when extracting the path of an image in an S3 bucket - node.js

My requirement is to resize an image stored in an S3 bucket by 50%. I found an npm package named lwip which resizes local images.
My code is as follows:
var lwip = require('lwip');

// Format of the path is as follows: "https://s3bucketName.s3.amazonaws.com/filename.jpg"
lwip.open(imagePath, function (err, image) {
  if (err) {
    // error handling
  } else {
    // Some logic for resizing the image
  }
});
I am getting the following error:
ENOENT, open 'https://s3bucketName.s3.amazonaws.com/filename.jpg'
Can somebody help with this issue? I am unable to understand why I am getting this error.
I have also made my S3 bucket public so that anybody can use the image paths.

You are trying to pass the S3 URL of an image, i.e. a remote file. lwip works with local files.
Instead, you need to:
copy the file from S3 to your local disk
perform any resizing operations
upload the resized file to S3
... or use a module that supports resizing directly on S3.
ENOENT just means "file does not exist", which is accurate here: there is no local file at that URL. A rough sketch of the download-resize-upload flow is below.
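A minimal sketch of that flow, assuming the aws-sdk v2 S3 client; the bucket name, key, and local path are placeholders and error handling is kept deliberately thin:

var AWS = require('aws-sdk');
var fs = require('fs');
var lwip = require('lwip');

var s3 = new AWS.S3();
var localPath = '/tmp/filename.jpg'; // hypothetical local working copy

// 1. Copy the object from S3 to the local disk
s3.getObject({ Bucket: 's3bucketName', Key: 'filename.jpg' }, function (err, data) {
  if (err) throw err;
  fs.writeFileSync(localPath, data.Body);

  // 2. Resize the local copy by 50%
  lwip.open(localPath, function (err, image) {
    if (err) throw err;
    image.scale(0.5, function (err, resized) {
      if (err) throw err;
      resized.toBuffer('jpg', { quality: 90 }, function (err, buffer) {
        if (err) throw err;

        // 3. Upload the resized image back to S3
        s3.putObject(
          { Bucket: 's3bucketName', Key: 'filename-small.jpg', Body: buffer },
          function (err) {
            if (err) throw err;
          }
        );
      });
    });
  });
});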

Related

Download xlsx file (or any formats that cannot be read by notepad) from Google Storage and store it locally

I currently have a Node.js server running where I can grab a csv file stored in a storage bucket and store it to a local file.
However, when I try to do the same thing with an xlsx file, the file seems to get messed up and cannot be read when I download it to a local directory.
Here is my code for getting the file to a stream:
async function getFileFromBucket(fileName) {
  var fileTemp = await storage.bucket(bucketName).file(fileName);
  return await fileTemp.download();
}
With the data returned from the above code, I store it into a local directory by doing the following:
fs.promises.writeFile('local_directory', DataFromAboveCode)
This seems to work fine with a .csv file but not with an .xlsx file: I can open the csv file, but the xlsx file gets corrupted and cannot be opened.
I tried downloading the xlsx file directly from the storage bucket in the Google Cloud console and it works fine, so something must be going wrong in the downloading / saving process.
Could someone guide me to what I am doing wrong here?
Thank you
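For reference, a minimal binary-safe sketch assuming the @google-cloud/storage client: download() resolves to an array whose first element is a Buffer, and it also accepts a destination option that writes straight to disk. Bucket and path names here are placeholders.

const { Storage } = require('@google-cloud/storage');
const fs = require('fs');

const storage = new Storage();
const bucketName = 'my-bucket'; // hypothetical bucket name

async function saveFileFromBucket(fileName, localPath) {
  // download() resolves to [Buffer]; write the Buffer itself, not the wrapping array
  const [contents] = await storage.bucket(bucketName).file(fileName).download();
  await fs.promises.writeFile(localPath, contents);

  // Alternatively, let the client write directly to disk:
  // await storage.bucket(bucketName).file(fileName).download({ destination: localPath });
}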

How to get the file path in AWS Lambda?

I would like to send a file to Google Cloud Platform using their client library, as in this example (Node.js code sample): https://cloud.google.com/storage/docs/uploading-objects
My current code looks like this:
const s3Bucket = 'bucket_name';
const s3Key = 'folder/filename.extension';
const filePath = s3Bucket + "/" + s3Key;

await storage.bucket(s3Bucket).upload(filePath, {
  gzip: true,
  metadata: {
    cacheControl: 'public, max-age=31536000',
  },
});
But when I do this there is an error:
"ENOENT: no such file or directory, stat
'ch.ebu.mcma.google.eu-west-1.ibc.websiteExtract/AudioJobResults/audioGoogle.flac'"
I also tried to send the path I got in the AWS Console (Copy path button), "s3://s3-eu-west-1.amazonaws.com/ch.ebu.mcma.google.eu-west-1.ibc.website/ExtractAudioJobResults/audioGoogle.flac", but it did not work.
You seem to be trying to copy data from S3 to Google Cloud Storage directly. This is not what your example/tutorial shows. The sample code assumes that you upload a local copy of the data to Google Cloud Storage. S3 is not local storage.
How you could do it:
Download the data to /tmp in your Lambda function
Use the sample code above to upload the data from /tmp
(Optionally) Remove the uploaded data from /tmp
A word of caution: the available storage under /tmp is limited to 512 MB by default. If you want to upload/copy files larger than that, this won't work. Also beware that the Lambda execution environment might be re-used, so cleaning up after yourself (i.e. step 3) is probably a good idea if you plan to copy lots of files. A sketch of these three steps is below.
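A minimal sketch of those steps, assuming the aws-sdk v2 S3 client and the @google-cloud/storage client; bucket and key names are placeholders based on the question:

const AWS = require('aws-sdk');
const fs = require('fs');
const path = require('path');
const { Storage } = require('@google-cloud/storage');

const s3 = new AWS.S3();
const storage = new Storage();

exports.handler = async (event) => {
  const s3Bucket = 'bucket_name';
  const s3Key = 'folder/filename.extension';
  const localPath = path.join('/tmp', path.basename(s3Key));

  // 1. Download the object from S3 into /tmp
  const obj = await s3.getObject({ Bucket: s3Bucket, Key: s3Key }).promise();
  await fs.promises.writeFile(localPath, obj.Body);

  // 2. Upload the local copy to Google Cloud Storage
  await storage.bucket('gcs_bucket_name').upload(localPath, {
    gzip: true,
    metadata: { cacheControl: 'public, max-age=31536000' },
  });

  // 3. Clean up /tmp in case the execution environment is re-used
  await fs.promises.unlink(localPath);
};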

Trying to pull in https paths into ffmpeg for node-js running on aws ec2 fails with 'input file does not exist'

I have a script set up to load a video file from an S3 bucket into a Node script running on an EC2 instance, but I am having issues getting ffmpeg to accept the URL. I know that my aws-sdk integration is working, as I can read and write objects in the bucket, and I am generating a signed URL to the object in order to pass the path through easily, but it fails with an 'input file does not exist' error.
If I use this generated signed path in a browser, however, the video file can be found.
Has anyone else come across this issue? I could probably pipe the external file to a new file local to the EC2 instance, but if I can get around that extra step that would be great!
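As a point of reference, a sketch of the fallback the question mentions (copying the object to a local file before handing it to ffmpeg), assuming the aws-sdk v2 S3 client; bucket, key, and local path are placeholders:

const AWS = require('aws-sdk');
const fs = require('fs');

const s3 = new AWS.S3();
const localPath = '/tmp/input.mp4'; // hypothetical local copy

// Stream the object from S3 to a local file, then point ffmpeg at localPath
s3.getObject({ Bucket: 'my-bucket', Key: 'videos/input.mp4' })
  .createReadStream()
  .pipe(fs.createWriteStream(localPath))
  .on('finish', function () {
    // e.g. spawn ffmpeg with localPath as the input file
  });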

How to save the binary of an image from a multipart form in Node.js

I am trying to implement image upload in pure Node.js (please don't ask why; I am obliged to do it in pure Node.js). I parsed the request headers and checked whether an image or any other file is being uploaded, and I can see the bytes of the image request data in the console. I tried to save it with the fs module and the image saved successfully, but when opening it I get "this file format can't be opened".
How can I save the uploaded image?
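A minimal sketch of one way to do this in plain Node.js, assuming a single file field in the multipart body; the boundary handling is simplified and the output filename is a placeholder. The key point is to keep the request data as Buffers rather than strings, so the binary bytes are not mangled by a text encoding:

const http = require('http');
const fs = require('fs');

http.createServer(function (req, res) {
  if (req.method === 'POST') {
    const chunks = [];
    req.on('data', function (chunk) { chunks.push(chunk); }); // keep raw Buffers, never strings
    req.on('end', function () {
      const body = Buffer.concat(chunks);
      const boundary = '--' + req.headers['content-type'].split('boundary=')[1];

      // The file's bytes sit between the blank line after the part headers
      // and the closing boundary; locate them with Buffer methods, not string ops.
      const start = body.indexOf('\r\n\r\n') + 4;
      const end = body.lastIndexOf(boundary) - 2; // strip the trailing \r\n before the boundary
      fs.writeFileSync('upload.jpg', body.slice(start, end));
      res.end('saved');
    });
  } else {
    res.end();
  }
}).listen(3000);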

Amazon s3 bucket image access issue: Access Denied

I am getting the following error when putting the image src.
I am using the following modules to upload an image in Node:
var aws = require('aws-sdk'),
    multer = require('multer'),
    multerS3 = require('multer-s3');
The image uploads successfully to the bucket, but when I put the same URL in <img src="https://meditationimg.s3.us-east-2.amazonaws.com/profilepic/1507187706799Penguins.jpg" /> it returns the above error.
Does anyone know the solution?
No Such Key is S3's way of saying "404 Not Found."
The request was authorized and syntactically valid, but there's no file in the bucket at the specified path.
You may want to inspect the contents of your bucket from the AWS console.
Make sure you access the image using the same key case it was uploaded and stored with on S3. (Generally it should be lower case.) One way to check the exact key is to list the bucket contents, as in the sketch below.
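A small sketch of listing the keys under the prefix, assuming the aws-sdk v2 S3 client; the bucket name and prefix are taken from the URL in the question:

const AWS = require('aws-sdk');

const s3 = new AWS.S3({ region: 'us-east-2' });

// Print the keys actually stored under the prefix, to compare against the requested URL
s3.listObjectsV2({ Bucket: 'meditationimg', Prefix: 'profilepic/' }, function (err, data) {
  if (err) throw err;
  data.Contents.forEach(function (obj) { console.log(obj.Key); });
});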
