Node.js: delete a folder on Amazon S3 with aws-sdk

I'm facing an issue deleting a folder on Amazon S3 that contains photos.
1. Create folder
var params = {Bucket: S3_BUCKET, Key: "test/", ACL:"public-read"};
s3.putObject(params, function(err, data) {
});
2. Upload photo
var body = fs.createReadStream(filePath);
var params = {Bucket: S3_BUCKET, Key: "test/flower.jpgg", Body: body, ContentType:"image/jpeg", ACL:"public-read"};
s3.upload(params, function(err, data) {
});
3. Delete folder
var params = {Bucket: S3_BUCKET, Key: "test/"};
s3.deleteObject(params, function(err, data) {
});
If the folder contains no photos, the delete call works fine. But if it contains photos, the delete does not work.
Please help. Thanks for any support.

The problem here is a conceptual one, and starts at step 1.
This does not create a folder. It creates a placeholder object that the console will display as a folder.
An object named with a trailing "/" displays as a folder in the Amazon S3 console.
http://docs.aws.amazon.com/AmazonS3/latest/UG/FolderOperations.html
It's not necessary to do this -- creating objects with this key prefix will still cause the console to display a folder, even without creating this object. From the same page:
Amazon S3 has a flat structure with no hierarchy like you would see in a typical file system. However, for the sake of organizational simplicity, the Amazon S3 console supports the folder concept as a means of grouping objects. Amazon S3 does this by using key name prefixes for objects.
Since, at step 1, you are not actually creating a folder, it makes sense that removing the placeholder object also does not delete the folder.
Folders do not actually exist in S3 -- they're just used for display purposes in the console -- so objects cannot properly be said to be "in" folders. The only way to remove all the objects "in" a folder is to explicitly remove the objects individually. Similarly, the only way to rename a folder is to rename the objects in it... and the only way to rename an object is to make a copy of the object with a new key and then delete the old object.
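The practical consequence: to "delete the folder", list every object under the test/ prefix and delete those objects in a batch. A minimal sketch in the same callback style as the question (the bucket name and client setup are placeholders):
var AWS = require('aws-sdk');
var s3 = new AWS.S3();
var S3_BUCKET = 'my-bucket'; // placeholder

function emptyPrefix(prefix, callback) {
    s3.listObjectsV2({Bucket: S3_BUCKET, Prefix: prefix}, function(err, data) {
        if (err) return callback(err);
        if (data.Contents.length === 0) return callback(null);
        var toDelete = data.Contents.map(function(obj) { return {Key: obj.Key}; });
        s3.deleteObjects({Bucket: S3_BUCKET, Delete: {Objects: toDelete}}, function(err) {
            if (err) return callback(err);
            // listObjectsV2 returns at most 1000 keys per call; repeat if truncated.
            if (data.IsTruncated) return emptyPrefix(prefix, callback);
            callback(null);
        });
    });
}

emptyPrefix('test/', function(err) {
    if (err) console.log(err);
});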

Related

Amazon S3 + Lambda (Node.JS) clarification on the s3.upload() method

I am following this tutorial wherein the programmer used this code:
await s3
.upload({ Bucket: bucket, Key: target_filename, Body: file_stream })
.promise();
Now, I understand that the method above would use the initialized variables file_stream, bucket, and target_filename (which he didn't bother typing out in his tutorial).
But the tutorial is hard to follow since (as far as I know) the Key parameter inside upload() is the destination path of the file to be uploaded back to S3.
This is confusing because the file_stream variable comes from another call, getObject(), which has its own Key parameter.
So, should the Key inside the getObject() method be the same as the target_filename of the upload() method? And can you initialize the variables mentioned, just to make this clearer? Thank you.
No, the filename inside the getObject() method may not be the same as the target_filename in upload(). Let's look at a concrete example. Suppose you have a photo.zip file stored on S3 and its key is a/b/photo.zip, and you want to unzip it and reupload it to c/d/photo.jpg assuming that the photo.zip only contains one file. Then, the filename should be a/b/photo.zip, and the target_filename should be c/d/photo.jpg. As you can see, they are clearly different.
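For illustration, here is a sketch with all three variables written out, using the hypothetical keys from the example above (a/b/photo.zip -> c/d/photo.jpg); the tutorial's unzip step is elided:
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

const bucket = 'my-bucket';              // placeholder bucket name
const source_key = 'a/b/photo.zip';      // the Key passed to getObject()
const target_filename = 'c/d/photo.jpg'; // the Key passed to upload()

async function run() {
    // getObject(...).createReadStream() yields a readable stream of the source object.
    // In the tutorial this stream would be piped through an unzip step first (elided here).
    const file_stream = s3
        .getObject({ Bucket: bucket, Key: source_key })
        .createReadStream();

    await s3
        .upload({ Bucket: bucket, Key: target_filename, Body: file_stream })
        .promise();
}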

How can I move (not copy) a file in AWS S3 using their sdk?

I need to move a file in an AWS S3 bucket to another location, for example:
From: http://aws.xxxxx/xxxx/locationA/file.png
To: http://aws.xxxxx/xxxx/locationB/file.png
I've looked over the documentation: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html, but haven't found any mention of either moving or updating the file (I'm thinking I could update the file's Key path...).
So far, it seems I need to copy the file then remove the old one? Is there a more straightforward way of doing it?
My current code which copies then removes old file:
function moveFileInAws(fromLocation, toLocation, callback) {
    awsSdk.copyObject({
        Bucket: BUCKET_NAME,
        ACL: 'public-read',
        // CopySource must take the form "<source-bucket>/<source-key>"
        CopySource: fromLocation,
        Key: toLocation
    }, (err, data) => {
        if (err) {
            console.log(err)
            return callback("Couldn't copy files in directory")
        }
        // deleteObject requires Bucket as well as Key
        awsSdk.deleteObject({ Bucket: BUCKET_NAME, Key: fromLocation }, (err, data) => {
            if (err) {
                console.log("Couldn't delete files in directory")
                console.log(err)
                return callback("Couldn't delete files in directory")
            }
            callback()
        })
    })
}
Based on the answer here: AWS S3 - Move an object to a different folder, in which user @Michael-sqlbot comments:
That's because S3, itself, doesn't have an atomic move or rename operation... so there isn't really an alternative
And the docs here: https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/examples-s3-objects.html#copy-object (for the Java SDK, but seems to be useful here) which notes:
You can use copyObject with deleteObject to move or rename an object, by first copying the object to a new name (you can use the same bucket as both the source and destination) and then deleting the object from its old location.
It sounds like S3's infrastructure simply doesn't support moving or renaming in a single operation. You must copy, then delete.
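A minimal promise-based sketch of that copy-then-delete pattern (client setup and bucket name are placeholders; note that CopySource must include the source bucket):
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
const BUCKET_NAME = 'my-bucket'; // placeholder

async function moveObject(fromKey, toKey) {
    // CopySource takes the form "<source-bucket>/<source-key>", not just the key.
    await s3.copyObject({
        Bucket: BUCKET_NAME,
        CopySource: `${BUCKET_NAME}/${fromKey}`,
        Key: toKey,
        ACL: 'public-read'
    }).promise();
    // Only remove the original once the copy has succeeded.
    await s3.deleteObject({ Bucket: BUCKET_NAME, Key: fromKey }).promise();
}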
Amazon S3 doesn't provide an API to move or rename an object in a single step.
As in your example, you can use copyObject with deleteObject to move or rename an object: first copy the object to a new name (you can use the same bucket as both the source and destination), then delete the object from its old location.
For more information, see Performing Operations on Amazon S3 Objects.
I'm not at all familiar with the JavaScript SDK you're using, but the AWS CLI has:
aws s3 mv s3://bucket/folder/file s3://bucket/folder2/file
which seems to do what you want.

Play audio directly from Lambda /tmp folder

I'm currently building an Alexa application in Node with Lambda. I need to convert and merge several audio files. I'm currently creating an audio file using Google text-to-speech (long story on why it's needed), which I write to /tmp, and pulling an audio file from S3, which I also write to /tmp. I'm then using sox to merge the two files (see below) and writing the result back to S3 (currently public), which I then have hard-coded to play that particular clip.
My question is whether it is possible to play audio directly from the /tmp folder, as opposed to having to write the file back to S3.
await lambdaAudio.sox('-m /tmp/google-formatted.mp3 /tmp/audio.mp3 /tmp/result.mp3')
// get data from resulting mp3
const data = await readFile('/tmp/result.mp3');
const base64data = Buffer.from(data, 'binary'); // new Buffer() is deprecated
// put file back on AWS for playing
s3.putObject({
    Bucket: 'my-bucket',
    Key: 'result.mp3',
    Body: base64data,
    ACL: 'public-read'
}, function (err, data) { // the callback receives (err, data), not a single response
    if (err) console.log(err);
    else console.log('Done');
});
return `<audio src="https://s3.amazonaws.com/my-bucket/result.mp3" />`;
I usually upload the Lambda function by zipping the code and modules -- in general, all the files that my code requires.
https://developer.amazon.com/blogs/post/Tx1UE9W1NQ0GYII/Publishing-Your-Skill-Code-to-Lambda-via-the-Command-Line-Interface
So if you include the audio files in the zip that you publish as part of your Lambda code, they will be accessible to your Lambda function.
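For instance, if an audio file ships inside the deployment zip next to the handler, it can be read with a path relative to __dirname (a sketch; the sounds/audio.mp3 path is hypothetical):
const fs = require('fs');
const path = require('path');

// Files bundled in the deployment zip live alongside the code and are read-only at runtime.
const bundledAudio = path.join(__dirname, 'sounds', 'audio.mp3'); // hypothetical path
const data = fs.readFileSync(bundledAudio);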

aws-sdk not deploying image to s3 bucket

I am using AWS Lambda to resize images in Node.js using aws-sdk and sharp.
The issue I face is that it reads the file successfully and applies the resize operations, but does not put the object after resizing.
It doesn't give any error either. I checked CloudWatch, where everything looks alright, but the image is not placed in the resize folder.
It only creates the key folders, but the image isn't there:
return Promise.all(_croppedFiles.map(_cropFile => {
    return S3.putObject({
        Body: _cropFile.buffer,
        Bucket: dstBucket,
        ContentType: _cropFile.config.contentType,
        Key: dstKey
    }).promise()
}))
There is actually no extension in your key name, which makes it just a name that is treated as a folder. Provide your key name as dstKey.jpeg (or whatever extension you want) and set your content type to image/jpeg.
No matter what the format of your input image is, the output image will then always be stored in JPEG format.
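A sketch applying that advice to the question's snippet (the size suffix is a hypothetical way to keep each cropped file's key distinct):
return Promise.all(_croppedFiles.map(_cropFile => {
    return S3.putObject({
        Body: _cropFile.buffer,
        Bucket: dstBucket,
        ContentType: 'image/jpeg',                     // match the extension
        Key: `${dstKey}-${_cropFile.config.size}.jpeg` // hypothetical, e.g. "photo-small.jpeg"
    }).promise()
}))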

How to avoid performing a firebase function on folders on cloud storage events

I'm trying to organize assets (images) into folders with a unique id for each asset, the reason being that each asset will have multiple formats (thumbnails, and formats optimized for web and different viewports).
So every asset that I upload to the folder assets-temp/ is then moved and renamed by the functions to assets/{unique-id}/original{extension}.
example: assets-temp/my-awesome-image.jpg should become assets/489023840984/original.jpg.
note: I also keep track of the files with their original name in the DB and in the original's file metadata.
The issue: The function runs and performs what I want, but it also adds a folder named assets/{uuid}/original/ with nothing in it...
The function:
// Setup assumed by this snippet (not shown in the question):
const functions = require('firebase-functions');
const admin = require('firebase-admin');
const path = require('path');
admin.initializeApp();
const bucket = admin.storage().bucket();

exports.process_new_assets = functions.storage.object().onFinalize(async (object) => {
    // Run this function only for files uploaded to the "assets-temp/" folder.
    if (!object.name.startsWith('assets-temp/')) return null;
    const file = bucket.file(object.name);
    const fileExt = path.extname(object.name);
    const id = generateUniqueId(); // hypothetical -- the question elides how the unique id is generated
    const destination = bucket.file(`assets/${id}/original${fileExt}`);
    const metadata = {
        id,
        name: object.name.split('/').pop()
    };
    // Move the file to the new location.
    return file.move(destination, {metadata});
});
I am guessing that this might happen if the operation of uploading the original image triggers two separate events: one that creates the directory assets-temp and one that creates the file assets-temp/my-awesome-image.jpg.
If I guessed right, the first operation will trigger your function with a directory object (named "assets-temp/"). This matches your first if, so the code will proceed and do
destination = bucket.file(`assets/${id}/original`) // fileExt being empty
and then call file.move - this will create the assets/{id}/original/ directory.
Simply improve your 'if' to exclude a file named "assets-temp/".
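A one-line sketch of that improved guard, which skips both other prefixes and the zero-byte "folder" placeholder (whose name ends with a slash):
// Ignore anything outside assets-temp/ as well as the placeholder object itself.
if (!object.name.startsWith('assets-temp/') || object.name.endsWith('/')) return null;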
According to the documentation, there is no such thing as folders in Cloud Storage; however, it is possible to emulate them, as the console GUI does. When you create a folder, what really happens is that an empty object (zero bytes) is created whose name ends with a forward slash. Folder names can also end with _$folder$, but my understanding is that that is how things worked in older versions, so for newer buckets the forward slash is enough.
