aws-sdk not deploying image to s3 bucket - node.js

I am using AWS Lambda to resize images in Node.js using aws-sdk and sharp.
The issue I face is that the function reads the file successfully and applies the resize operations, but does not put the object after resizing.
It doesn't throw any error either. I checked CloudWatch, where everything looks fine, but the image is not placed in the resize folder.
It only creates the key folders; the image itself is not there.
return Promise.all(_croppedFiles.map(_cropFile => {
  return S3.putObject({
    Body: _cropFile.buffer,
    Bucket: dstBucket,
    ContentType: _cropFile.config.contentType,
    Key: dstKey
  }).promise()
}))

There is actually no extension in the key name, which makes it just a bare name that gets treated as a folder. Provide your key name as dstKey.jpeg (or whatever extension you want), and set your content type to image/jpeg.
With that approach, no matter what format your input image has, the output image will always be stored as "jpeg".
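A minimal sketch of the corrected call, following the answer's suggestion of a hardcoded JPEG output (note the question maps every crop to the same dstKey; deriving a unique key per crop, e.g. from _cropFile.config, is left as a hypothetical):
return Promise.all(_croppedFiles.map(_cropFile => {
  return S3.putObject({
    Body: _cropFile.buffer,
    Bucket: dstBucket,
    ContentType: 'image/jpeg',  // must match the extension below
    Key: `${dstKey}.jpeg`       // key now carries an extension, as the answer suggests
  }).promise()
}))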

Related

Play audio directly from Lambda /tmp folder

I'm currently building an Alexa application in Node with Lambda. I need to convert and merge several audio files. I'm currently creating an audio file using Google text-to-speech (long story on the need for it), which I write to /tmp, and pulling an audio file from S3, which I also write to /tmp. I then use sox to merge the two files (see below) and write the result back to S3 (currently public), which I then have hard-coded to play that particular clip.
My question is whether it is possible to play audio directly from the /tmp folder, as opposed to having to write the file back to S3.
await lambdaAudio.sox('-m /tmp/google-formatted.mp3 /tmp/audio.mp3 /tmp/result.mp3')
// get data from resulting mp3
const data = await readFile('/tmp/result.mp3');
const base64data = Buffer.from(data, 'binary'); // new Buffer() is deprecated
// put file back on AWS for playing
s3.putObject({
  Bucket: 'my-bucket',
  Key: 'result.mp3',
  Body: base64data,
  ACL: 'public-read'
}, function (err, resp) { // putObject's callback receives (err, data)
  console.log('Done');
});
return `<audio src="https://s3.amazonaws.com/my-bucket/result.mp3" />`;
I usually upload the Lambda function by zipping the code and modules, and in general all the files that my code requires.
https://developer.amazon.com/blogs/post/Tx1UE9W1NQ0GYII/Publishing-Your-Skill-Code-to-Lambda-via-the-Command-Line-Interface
So if you include the audio file in the zip and publish it as part of your Lambda code, the audio file will be accessible to your Lambda function.
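For example, a file bundled in the deployment package ships alongside the handler and can be read from the function's own directory (the file name here is hypothetical):
const fs = require('fs');
const path = require('path');

// 'audio.mp3' is assumed to be packaged in the deployment zip next to the handler
const bundled = fs.readFileSync(path.join(__dirname, 'audio.mp3'));
Keep in mind the deployment package is read-only at runtime, so anything sox generates still has to be written to /tmp.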

How to get the image format when I get an image via HTTP request using Node.js

I'm getting an image via the LINE API. When a user sends an image to my bot, I fetch it and upload it to an AWS S3 bucket: GET https://api.line.me/v2/bot/message/{messageId}/content
I get the image successfully. I'm using file = fs.createWriteStream("file.jpg") and pipe(file). However, the file extension is fixed at "jpg". For example, if the file I get is actually a "gif", it will still be saved as "jpg".
So, how do I get the image format? If I know the format, I can use fs.createWriteStream to create a file with the matching extension.
You can simply read the image's headers first; the image format is in the Content-Type header, and you can save it to the S3 bucket accordingly.
Here is the sample code.
const fs = require('fs');
const request = require('request');

const uri = 'https://www.google.com/images/branding/googlelogo/2x/googlelogo_color_272x92dp.png';
request.head(uri, function (err, res, body) {
  console.log('content-type:', res.headers['content-type']);
  console.log('content-length:', res.headers['content-length']);
  let filename = 'image.png'; // choose according to content-type
  request(uri).pipe(fs.createWriteStream(filename)).on('close', callback); // callback: your completion handler
});
The filename can be decided according to the Content-Type, as in the sketch below.
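For instance, a minimal (hypothetical) lookup from Content-Type to file extension:
// hypothetical mapping from Content-Type to file extension
const extensions = {
  'image/jpeg': '.jpg',
  'image/png': '.png',
  'image/gif': '.gif'
};

function filenameFor(contentType) {
  return 'image' + (extensions[contentType] || '.bin'); // fall back for unknown types
}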

Nodejs delete folder on Amazon S3 with aws-sdk

I'm facing an issue deleting a folder on Amazon S3 that contains photos.
1. Create folder
var params = {Bucket: S3_BUCKET, Key: "test/", ACL: "public-read"};
s3.putObject(params, function(err, data) {
});
2. Upload photo
var body = fs.createReadStream(filePath);
var params = {Bucket: S3_BUCKET, Key: "test/flower.jpg", Body: body, ContentType: "image/jpeg", ACL: "public-read"};
s3.upload(params, function(err, data) {
});
3. Delete folder
var params = {Bucket: S3_BUCKET, Key: "test/"};
s3.deleteObject(params, function(err, data) {
});
If the folder has no photos, the delete works well. But if it contains photos, the delete does not work.
Please help. Thanks for all the support.
The problem here is a conceptual one, and starts at step 1.
This does not create a folder. It creates a placeholder object that the console will display as a folder.
An object named with a trailing "/" displays as a folder in the Amazon S3 console.
http://docs.aws.amazon.com/AmazonS3/latest/UG/FolderOperations.html
It's not necessary to do this -- creating objects with this key prefix will still cause the console to display a folder, even without creating this object. From the same page:
Amazon S3 has a flat structure with no hierarchy like you would see in a typical file system. However, for the sake of organizational simplicity, the Amazon S3 console supports the folder concept as a means of grouping objects. Amazon S3 does this by using key name prefixes for objects.
Since, at step 1, you are not actually creating a folder, it makes sense that removing the placeholder object also does not delete the folder.
Folders do not actually exist in S3 -- they're just used for display purposes in the console -- so objects cannot properly be said to be "in" folders. The only way to remove all the objects "in" a folder is to explicitly remove the objects individually, as sketched below. Similarly, the only way to rename a folder is to rename the objects in it... and the only way to rename an object is to make a copy of the object with a new key and then delete the old object.
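A minimal sketch of that pattern with the aws-sdk used in the question (assuming fewer than 1,000 keys under the prefix; otherwise you would page through the listing with ContinuationToken):
s3.listObjectsV2({Bucket: S3_BUCKET, Prefix: "test/"}, function(err, data) {
  if (err) return console.error(err);
  var objects = data.Contents.map(function(obj) { return {Key: obj.Key}; });
  if (objects.length === 0) return; // nothing under the prefix
  // delete every listed object, including the "test/" placeholder itself
  s3.deleteObjects({Bucket: S3_BUCKET, Delete: {Objects: objects}}, function(err, data) {
    if (err) console.error(err);
  });
});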

NodeJS + AWS SDK + S3 - how do I successfully upload a zip file?

I've been trying to upload gzipped log files to S3 using the AWS NodeJS sdk, and occasionally find that the uploaded file in S3 is corrupted/truncated. When I download the file and decompress using gunzip in a bash terminal, I get:
01-log_2014-09-22.tsv.gz: unexpected end of file.
When I compare file sizes, the downloaded file comes up just a tiny bit short of the original file size (which unzips fine).
This doesn't happen consistently: one out of every three files or so is truncated. Re-uploading can fix the problem. Uploading through the S3 Web UI also works fine.
Here's the code I'm using...
var stream = fs.createReadStream(localFilePath);
this.s3 = new AWS.S3();
this.s3.putObject({
  Bucket: bucketName,
  Key: folderName + filename,
  ACL: "bucket-owner-full-control",
  Body: stream,
}, function(err) {
  // stream.close();
  callback(err);
});
I shouldn't have to close the stream since it defaults to autoclose, but the problem seems to occur either way.
The fact that it's intermittent suggests it's some sort of timing or buffering issue, but I can't find any controls to fiddle with that might affect it. Any suggestions?
Thanks.
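One commonly suggested workaround (not from this thread, so treat it as an assumption) is to hand the stream to the SDK's managed uploader, s3.upload, which sends the body in multipart chunks instead of trying to size the stream up front:
var stream = fs.createReadStream(localFilePath);
var s3 = new AWS.S3();
s3.upload({
  Bucket: bucketName,
  Key: folderName + filename,
  ACL: "bucket-owner-full-control",
  Body: stream
}, function(err, data) { // the managed uploader retries failed parts automatically
  callback(err);
});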

content-type to be used for uploading svg images to AWS S3

I was trying to upload *.svg images to S3 without specifying any Content-Type. The upload works successfully, and AWS sets the Content-Type to binary/octet-stream by default. Now, when I try to use the S3 URL of the image in my browser, the browser does not render the image and throws an incorrect mime-type warning.
To set the correct mime-type, I checked the list of Content-Types that AWS offers, but it does not include "image/svg+xml".
So I wanted to know: has anyone tried to upload SVG images to S3? What Content-Type is set in that case? Or is there any other compatible Content-Type that can be used for uploading SVG images to S3?
Thanks in advance.
As you mentioned, the correct Content-Type for SVG files is "image/svg+xml".
Even if the AWS console does not provide that value in the Content-Type selection field, you can enter it anyway and S3 will accept it.
AWS specifies the following in their API docs for the Content-Type header:
A standard MIME type describing the format of the contents. For more information, go to http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.17.
Type: String
Default: binary/octet-stream
Valid Values: MIME types
Constraints: None
For additional details see http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html
For those using an SDK, here's an example code snippet that I used to solve this problem. I'm using JavaScript (Node.js). This complements the accepted answer above: explicitly define the ContentType as 'image/svg+xml' in the params before uploading.
const params = {
  Bucket: 'bucket',
  Key: 'key',
  Body: stream,
  ACL: 'public-read',
  ContentType: 'image/svg+xml',
};

s3.upload(params, function(err, data) {
  console.log(err, data);
});
If you want to pass a static content type and read the file from the public path rather than from the request, use this (Laravel/PHP):
$destinationPath = public_path("/images/".$image_name);
if (File::exists($destinationPath))
{
    $contents = File::get($destinationPath);
    Storage::disk('s3')->put($image_name, $contents, ['mimetype' => 'image/svg+xml'], 'public');
}
I had problems uploading SVG files to S3 in Laravel, but the following code works:
$image = Storage::disk('s3')->put(
    $filePath,
    file_get_contents($file),
    $file->getClientOriginalExtension() === 'svg' ? ['mimetype' => 'image/svg+xml'] : []
);
I understood from one of the other answers that you're using s3cmd to upload the files to S3. The package makes some educated guesses about MIME type for each file based on the file extension (there's a useful explanation here).
There are also a range of options to s3cmd including the --no-mime-magic command flag (see https://s3tools.org/usage for info), which helped with CSS and other files, but not with SVG.
To get this to work for my own purposes I had to set a specific Content-Type for only SVG files using include/exclude command flags:
s3cmd sync --content-type 'image/svg+xml' --exclude '*' --include '*.svg' {local source} {S3 bucket URL}
Because I also have other content I had to have a separate s3cmd command to sync everything else:
s3cmd sync --no-mime-magic --exclude '*.svg' {local source} {S3 bucket URL}
