how do I rename a folder? - node.js

I want to do this with the aws-sdk library.
I have a folder in my S3 bucket called "abcd/"; it has 3 files in it (e.g. abcd/1.jpg, abcd/2.jpg).
I want to rename the folder to 1234/, so that afterwards only 1234/ exists.
const awsMove = async (path) => {
try {
const s3 = new AWS.S3();
const AWS_BUCKET = 'my-bucket-test';
const copyParams = {
Key: path.newPath,
Bucket: AWS_BUCKET,
CopySource: encodeURI(`/${AWS_BUCKET}/${path.oldPath}`),
};
await s3.copyObject(copyParams).promise();
const deleteParams = {
Key: path.oldPath,
Bucket: AWS_BUCKET,
};
await s3.deleteObject(deleteParams).promise();
} catch (err) {
console.log(err);
}
};
const changePath = { oldPath: 'abcd/', newPath: '1234/' };
awsMove(changePath);
The above code errors with "The specified key does not exist". What am I doing wrong?

AWS S3 does not have the concept of folders as in a file system. You have a bucket, and a key identifies the object/file stored at that location. The pattern of the key is usually a/b/c/d/some_file, and the way it is shown in the AWS console might give you the impression that a, b, c and d are folders, but they aren't.
Now, you can't change the key of an object, since it is immutable. You have to copy the file existing at the current key to the new key and then delete the file at the current key.
This implies that renaming a folder, say folder/, is the same as copying all files located at keys folder/* and creating new ones at newFolder/*. The error:
The specified key does not exist
says that you haven't specified a full object key during the copy from source or during the deletion (the prefix abcd/ by itself is not an object key). The correct implementation is to list all files at folder/*, then copy and delete them one by one. So your function should do something like this:
const awsMove = async (path) => {
  try {
    const s3 = new AWS.S3();
    const AWS_BUCKET = 'my-bucket-test';
    const listParams = {
      Bucket: AWS_BUCKET,
      Delimiter: '/',
      Prefix: path.oldPath
    };
    const listedObjects = await s3.listObjects(listParams).promise();
    for (const elem of listedObjects.Contents) {
      // elem.Key is already the full key, e.g. "abcd/1.jpg"
      const copyParams = {
        Key: elem.Key.replace(path.oldPath, path.newPath),
        Bucket: AWS_BUCKET,
        CopySource: encodeURI(`/${AWS_BUCKET}/${elem.Key}`),
      };
      await s3.copyObject(copyParams).promise();
      const deleteParams = {
        Key: elem.Key,
        Bucket: AWS_BUCKET,
      };
      await s3.deleteObject(deleteParams).promise();
    }
  } catch (err) {
    console.log(err);
  }
};

Unfortunately, you will need to copy each object to the new key and then delete it at the old key.
BOTO 3:
AWS_BUCKET = 'my-bucket-test'
s3 = boto3.resource('s3')
s3.Object(AWS_BUCKET, 'new_file').copy_from(CopySource=f'{AWS_BUCKET}/old_file')
s3.Object(AWS_BUCKET, 'old_file').delete()
Node:
const s3 = new AWS.S3();
const BUCKET_NAME = 'my-bucket-test';
const OLD_KEY = 'old-file.json';
const NEW_KEY = 'new-file.json';
s3.copyObject({
  Bucket: BUCKET_NAME,
  CopySource: `${BUCKET_NAME}/${OLD_KEY}`,
  Key: NEW_KEY
})
  .promise()
  .then(() =>
    s3.deleteObject({
      Bucket: BUCKET_NAME,
      Key: OLD_KEY
    }).promise()
  )
  .catch((e) => console.error(e))

Related

Read content of txt file from s3 bucket with Node

I would like to read the content of a .txt file stored within an s3 bucket.
I tried :
var s3 = new AWS.S3({apiVersion: '2006-03-01'});
var params = {Bucket: 'My-Bucket', Key: 'MyFile.txt'};
var s3file = s3.getObject(params)
But the s3file object that I get does not contain the content of the file.
Do you have an idea of what to do?
I agree with zishone, and here is the code with error handling:
var s3 = new AWS.S3({apiVersion: '2006-03-01'});
var params = {Bucket: 'My-Bucket', Key: 'MyFile.txt'};
s3.getObject(params , function (err, data) {
if (err) {
console.log(err);
} else {
console.log(data.Body.toString());
}
})
According to the docs, the contents of your file will be in the Body field of the result, and it will be a Buffer.
Another problem is that s3.getObject() needs a callback (or you can use .promise()).
s3.getObject(params, (err, s3file) => {
const text = s3file.Body.toString();
})
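For completeness, here is a minimal promise-based sketch (aws-sdk v2, inside an async function), using the same placeholder bucket and key as above:
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ apiVersion: '2006-03-01' });
const params = { Bucket: 'My-Bucket', Key: 'MyFile.txt' };
// getObject(...).promise() resolves with the same result object the callback receives,
// so the file content is again a Buffer in data.Body
const data = await s3.getObject(params).promise();
const text = data.Body.toString('utf-8');
console.log(text);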

Cannot upload to AWS S3 inside my Lambda function

I have the following lambda function. It receives an XML payload, looks through it, finds a base64-encoded PDF file and tries to upload it to S3.
index.js
const AWS = require('aws-sdk');
const xml2js = require('xml2js');
const pdfUpload = require('./upload_pdf');
const s3 = new AWS.S3();
exports.handler = async (event, context, callback) => {
let attachment;
xml2js.parseString(event.body, function(err, result) {
attachment =
result.Attachment[0].Data[0];
if (attachment) {
pdfUpload(attachment);
}
});
return {
statusCode: 200
}
};
upload_pdf.js
/**
 *
 * @param {string} base64 Data
 * @return {string} Image url
 */
const pdfUpload = async (base64) => {
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
const base64Data = new Buffer.from(base64, 'base64');
// With this setup, each time your user uploads an image, will be overwritten.
// To prevent this, use a different Key each time.
// This won't be needed if they're uploading their avatar, hence the filename, userAvatar.js.
const params = {
Bucket: 'mu-bucket',
Key: `123.pdf`,
Body: base64Data,
ACL: 'public-read',
ContentEncoding: 'base64',
ContentType: `application/pdf`
}
let location = '';
let key = '';
try {
const { Location, Key } = await s3.upload(params).promise();
location = Location;
key = Key;
} catch (error) {
// console.log(error)
}
console.log(location, key);
return location;
}
module.exports = pdfUpload;
No matter what I do, the file does not get uploaded. I have checked the permissions, and the lambda has access to the bucket. Running the lambda I'm not receiving any errors either. Can anybody see what might be wrong here?
First, as a piece of advice, I think you should add more logging to see at which step the function is getting stuck or failing.
The second thing you can try is to await the upload:
await pdfUpload(attachment);
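For example, a minimal sketch of a handler that awaits the whole chain, so the Lambda does not return before the upload finishes. This assumes xml2js's promise API (parseStringPromise) is available in the version you use:
const xml2js = require('xml2js');
const pdfUpload = require('./upload_pdf');

exports.handler = async (event) => {
  // parseStringPromise returns a promise, so the handler can await the parse
  const result = await xml2js.parseStringPromise(event.body);
  const attachment = result.Attachment[0].Data[0];
  if (attachment) {
    // awaiting here keeps the Lambda alive until the S3 upload completes
    await pdfUpload(attachment);
  }
  return { statusCode: 200 };
};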

Serverless lambda trigger read json file

I have lambda (Node) which has trigger to fire when a new JSON file added to our S3 bucket. Here is my lambda code
module.exports.bookInfo = (event, context) => {
console.log('Events ', JSON.stringify(event));
event.Records.forEach((record) => {
const filename = record.s3.object.key;
const bucketname = record.s3.bucket.name;
let logMsg = [];
const s3File = `BucketName: [${bucketname}] FileName: [${filename}]`;
console.log(s3File)
logMsg.push(`Lambda execution started for ${s3File}, Trying to download file from S3`);
try {
s3.getObject({
Bucket: bucketname,
Key: filename
}, function(err, data) {
logMsg.push('Data is ', JSON.stringify(data.Body))
if (err) {
logMsg.push('Generate Error :', err);
console.log(logMsg)
return null;
}
logMsg.push(`File downloaded successfully. Processing started for ${s3File}`);
logMsg.push('Data is ', JSON.stringify(data.Body))
});
} catch (e) {console.log(e)}
});
}
When I run this, I don't get the file content, and I suspect that the Lambda finishes execution before the file read operation completes. I tried async/await without success. What am I missing here? I was able to read a small file of 1 KB, but when my file grows to something like 100 MB, it causes issues.
Thanks in advance
I was able to do it with async/await. Here is my code:
module.exports.bookInfo = async (event, context) => {
  await Promise.all(event.Records.map(async (record) => {
    const filename = record.s3.object.key;
    const bucketname = record.s3.bucket.name;
    const s3File = `BucketName: [${bucketname}] FileName: [${filename}]`;
    console.log(`Lambda execution started for ${s3File}, trying to download file from S3`);
    // awaiting the promise keeps the handler alive until the download finishes
    const response = await s3.getObject({
      Bucket: bucketname,
      Key: filename
    }).promise();
    console.log('Data is', response.Body.toString());
  }));
}
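If buffering the whole object becomes a problem for very large files (like the 100 MB case mentioned above), a streaming sketch using aws-sdk v2's createReadStream and Node's built-in readline module could look like this; s3, bucketname and filename are taken from the handler code above:
const readline = require('readline');

// Stream the S3 object instead of loading it all into memory
const s3Stream = s3.getObject({ Bucket: bucketname, Key: filename }).createReadStream();
const rl = readline.createInterface({ input: s3Stream });

rl.on('line', (line) => {
  // process one line of the file at a time
  console.log(line);
});
rl.on('close', () => console.log(`Finished reading ${filename}`));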

AWS Lambda gives error on putting s3 object

I am working on a function which creates a thumbnail: when any image is uploaded to the images folder in the bucket, it saves a thumbnail version of the image in the screenshot folder. I am using the Serverless Framework. I keep getting the error shown below. I have pasted the exact code so anyone can copy, paste and reproduce this; the serverless.yml, the handler file and the supporting files are all included.
I can't figure out why, when I am passing a buffer, I get this error saying the object type is not a buffer.
{ InvalidParameterType: Expected params.Body to be a string, Buffer, Stream, Blob, or typed array object
at ParamValidator.fail (/var/runtime/node_modules/aws-sdk/lib/param_validator.js:50:37)
at ParamValidator.validatePayload (/var/runtime/node_modules/aws-sdk/lib/param_validator.js:255:10)
at ParamValidator.validateScalar (/var/runtime/node_modules/aws-sdk/lib/param_validator.js:133:21)
at ParamValidator.validateMember (/var/runtime/node_modules/aws-sdk/lib/param_validator.js:94:21)
at ParamValidator.validateStructure (/var/runtime/node_modules/aws-sdk/lib/param_validator.js:75:14)
at ParamValidator.validateMember (/var/runtime/node_modules/aws-sdk/lib/param_validator.js:88:21)
at ParamValidator.validate (/var/runtime/node_modules/aws-sdk/lib/param_validator.js:34:10)
at Request.VALIDATE_PARAMETERS (/var/runtime/node_modules/aws-sdk/lib/event_listeners.js:125:42)
at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:105:20)
at callNextListener (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:95:12)
message: 'Expected params.Body to be a string, Buffer, Stream, Blob, or typed array object',
code: 'InvalidParameterType',
time: 2019-03-12T16:37:26.910Z }
Code:
Handler.js
'use strict';
const resizer = require('./resizer');
module.exports.resizer = (event, context, callback) => {
console.log(event.Records[0].s3);
const bucket = event.Records[0].s3.bucket.name;
const key = event.Records[0].s3.object.key;
console.log(`A file named ${key} was put in a bucket ${bucket}`);
resizer(bucket, key)
.then(() => {
console.log(`The thumbnail was created`);
callback(null, {
message: 'The thumbnail was created'
});
})
.catch(error => {
console.log(error);
callback(error);
});
};
module.exports.thumbnails = (event, context, callback) => {
const bucket = event.Records[0].s3.bucket.name;
const key = event.Records[0].s3.object.key;
console.log(bucket);
console.log(key);
console.log(`A new file ${key} was created in the bucket ${bucket}`);
callback(null, {
message: `A new file ${key} was created in the bucket ${bucket}`
});
};
Resizer.js
'use strict';
const AWS = require('aws-sdk');
const S3 = new AWS.S3();
const Jimp = require('jimp'); //https://github.com/oliver-moran/jimp
module.exports = (bucket, key) => {
const newKey = replacePrefix(key);
const height = 512;
return getS3Object(bucket, key).then(data => resizer(data.Body, height)).then(buffer => putS3Object(bucket, newKey, buffer));
};
function getS3Object(bucket, key) {
return S3.getObject({
Bucket: bucket,
Key: key
}).promise();
}
function putS3Object(bucket, key, body) {
return S3.putObject({
Body: body,
Bucket: bucket,
ContentType: 'image/jpg',
Key: key
}).promise();
}
function replacePrefix(key) {
const uploadPrefix = 'uploads/';
const thumbnailsPrefix = 'thumbnails/';
return key.replace(uploadPrefix, thumbnailsPrefix);
}
function resizer(data, height) {
return Jimp.read(data)
.then(image => {
return image
.resize(Jimp.AUTO, height)
.quality(100) // set JPEG quality
.getBuffer(Jimp.MIME_JPEG, (err, buffer) => {
return buffer;
});
})
.catch(err => err);
}
Serverless.yml
service: serverless-resizer-project # NOTE: update this with your service name
provider:
name: aws
runtime: nodejs6.10
profile: student1
iamRoleStatements:
- Effect: "Allow"
Action:
- "s3:ListBucket"
- "s3:GetObject"
- "s3:PutObject"
Resource: "arn:aws:s3:::serverless-resizer-project-images/*"
functions:
resizer:
handler: handler.resizer
events:
- s3:
bucket: serverless-resizer-project-images
event: s3:ObjectCreated:*
rules:
- prefix: uploads/
- suffix: .jpg
thumbnails:
handler: handler.thumbnails
events:
- s3:
bucket: serverless-resizer-project-images
event: s3:ObjectCreated:*
rules:
- prefix: thumbnails/
- suffix: .jpg
The return value of your resizer function is not what you expect. You're using the getBuffer function with a callback, which means that the buffer of the image is not resolved by the promise, but instead is used in the callback, which is not your intention. You should instead use getBufferAsync, which returns a promise that resolves to the image buffer. Your resizer function should look something like this:
function resizer(data, height) {
return Jimp.read(data)
.then(image => image
.resize(Jimp.AUTO, height)
.quality(100) // set JPEG quality
.getBufferAsync(Jimp.MIME_JPEG)
)
.catch(err => err);
}

How can I delete folder on s3 with node.js?

Yes, I know: there is no folder concept in S3 storage. But I really want to delete a specific folder from S3 with node.js. I tried two solutions, but neither worked.
My code is below:
Solution 1:
Deleting folder directly.
var key='level/folder1/folder2/';
var strReturn;
var params = {Bucket: MyBucket};
var s3 = new AWS.S3(params);
s3.client.listObjects({
Bucket: MyBucket,
Key: key
}, function (err, data) {
if(err){
strReturn="{\"status\":\"1\"}";
}else{
strReturn=+"{\"status\":\"0\"}";
}
res.send(returnJson);
console.log('error:'+err+' data:'+JSON.stringify(data));
});
Actually, I have a lot of files under folder2. I can delete a single file from folder2 if I define the key like this:
var key='level/folder1/folder2/file1.txt', but it didn't work when I tried to delete the folder (key='level/folder1/folder2/').
Solution 2:
I tried to set an expiration on the object when I uploaded the file to S3. The code is below:
s3.client.putObject({
Bucket: Camera_Bucket,
Key: key,
ACL:'public-read',
Expires: 60
}
But that didn't work either. After the upload finished, I checked the properties of that file; it showed no value for the expiry date:
Expiry Date: none
Expiration Rule: N/A
How can I delete folder on s3 with node.js?
Here is an implementation in ES7 with an async function and using listObjectsV2 (the revised List Objects API):
async function emptyS3Directory(bucket, dir) {
const listParams = {
Bucket: bucket,
Prefix: dir
};
const listedObjects = await s3.listObjectsV2(listParams).promise();
if (listedObjects.Contents.length === 0) return;
const deleteParams = {
Bucket: bucket,
Delete: { Objects: [] }
};
listedObjects.Contents.forEach(({ Key }) => {
deleteParams.Delete.Objects.push({ Key });
});
await s3.deleteObjects(deleteParams).promise();
if (listedObjects.IsTruncated) await emptyS3Directory(bucket, dir);
}
To call it:
await emptyS3Directory(process.env.S3_BUCKET, 'images/')
You can use aws-sdk module for deleting folder. Because you can only delete a folder when it is empty, you should first delete the files in it. I'm doing it like this :
function emptyBucket(bucketName,callback){
var params = {
Bucket: bucketName,
Prefix: 'folder/'
};
s3.listObjects(params, function(err, data) {
if (err) return callback(err);
if (data.Contents.length == 0) return callback();
params = {Bucket: bucketName};
params.Delete = {Objects:[]};
data.Contents.forEach(function(content) {
params.Delete.Objects.push({Key: content.Key});
});
s3.deleteObjects(params, function(err, data) {
if (err) return callback(err);
if (data.IsTruncated) {
emptyBucket(bucketName, callback);
} else {
callback();
}
});
});
}
A much simpler way is to fetch all objects (keys) at that path and delete them. Each list call fetches up to 1000 keys, and s3 deleteObjects can also delete up to 1000 keys per request. Do that recursively to achieve the goal.
Written in TypeScript:
/**
* delete a folder recursively
* #param bucket
* #param path - without end /
*/
deleteFolder(bucket: string, path: string) {
return new Promise((resolve, reject) => {
// get all keys and delete objects
const getAndDelete = (ct: string = null) => {
this.s3
.listObjectsV2({
Bucket: bucket,
MaxKeys: 1000,
ContinuationToken: ct,
Prefix: path + "/",
Delimiter: "",
})
.promise()
.then(async (data) => {
// params for delete operation
let params = {
Bucket: bucket,
Delete: { Objects: [] },
};
// add keys to Delete Object
data.Contents.forEach((content) => {
params.Delete.Objects.push({ Key: content.Key });
});
// delete all keys
await this.s3.deleteObjects(params).promise();
// check if ct is present
if (data.NextContinuationToken) getAndDelete(data.NextContinuationToken);
else resolve(true);
})
.catch((err) => reject(err));
};
// init call
getAndDelete();
});
}
According to the docs at https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html:
A response can contain CommonPrefixes only if you specify a delimiter.
CommonPrefixes contains all (if there are any) keys between Prefix and the next occurrence of the string specified by the delimiter.
Omitting the Delimiter parameter makes ListObjects return all keys starting with the Prefix parameter.
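For illustration, a minimal sketch of the difference (aws-sdk v2, inside an async function; the bucket name and prefix are placeholders):
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Without Delimiter: every key under the prefix is returned, including "subfolders"
const all = await s3.listObjectsV2({ Bucket: 'my-bucket', Prefix: 'folder/' }).promise();
console.log(all.Contents.map(o => o.Key)); // e.g. folder/a.txt, folder/sub/b.txt

// With Delimiter '/': only keys directly under the prefix appear in Contents,
// deeper levels are summarized in CommonPrefixes
const top = await s3.listObjectsV2({ Bucket: 'my-bucket', Prefix: 'folder/', Delimiter: '/' }).promise();
console.log(top.Contents.map(o => o.Key)); // e.g. folder/a.txt
console.log(top.CommonPrefixes.map(p => p.Prefix)); // e.g. folder/sub/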
Based on the accepted answer, I created a promise-returning function, so you can chain it.
function emptyBucket(bucketName){
let currentData;
let params = {
Bucket: bucketName,
Prefix: 'folder/'
};
return S3.listObjects(params).promise().then(data => {
if (data.Contents.length === 0) {
throw new Error('List of objects empty.');
}
currentData = data;
params = {Bucket: bucketName};
params.Delete = {Objects:[]};
currentData.Contents.forEach(content => {
params.Delete.Objects.push({Key: content.Key});
});
return S3.deleteObjects(params).promise();
}).then(() => {
if (currentData.Contents.length === 1000) {
return emptyBucket(bucketName);
} else {
return true;
}
});
}
The accepted answer throws an error when used in TypeScript. I made it work by modifying the code in the following way. I'm very new to TypeScript, but at least it is working now.
async function emptyS3Directory(prefix: string) {
const listParams = {
Bucket: "bucketName",
Prefix: prefix, // ex. path/to/folder
};
const listedObjects = await s3.listObjectsV2(listParams).promise();
if (listedObjects.Contents.length === 0) return;
const deleteParams = {
Bucket: "bucketName",
Delete: { Objects: [] as any },
};
listedObjects.Contents.forEach((content: any) => {
deleteParams.Delete.Objects.push({ Key: content.Key });
});
await s3.deleteObjects(deleteParams).promise();
if (listedObjects.IsTruncated) await emptyS3Directory(prefix);
}
Better solution with the @aws-sdk/client-s3 module:
private async _deleteFolder(key: string, bucketName: string): Promise<void> {
const DeletePromises: Promise<DeleteObjectCommandOutput>[] = [];
const { Contents } = await this.client.send(
new ListObjectsCommand({
Bucket: bucketName,
Prefix: key,
}),
);
if (!Contents) return;
Contents.forEach(({ Key }) => {
DeletePromises.push(
this.client.send(
new DeleteObjectCommand({
Bucket: bucketName,
Key,
}),
),
);
});
await Promise.all(DeletePromises);
}
ListObjectsCommand returns the keys of files in the folder, even within subfolders.
listObjectsV2 with a Delimiter lists files only under the current directory Prefix, not under subfolder prefixes. If you want to delete a folder with subfolders recursively, this is the source code: https://github.com/tagspaces/tagspaces-common/blob/develop/packages/common-aws/io-objectstore.js#L1060
deleteDirectoryPromise = async (path: string): Promise<Object> => {
const prefixes = await this.getDirectoryPrefixes(path);
if (prefixes.length > 0) {
const deleteParams = {
Bucket: this.config.bucketName,
Delete: { Objects: prefixes }
};
return this.objectStore.deleteObjects(deleteParams).promise();
}
return this.objectStore
.deleteObject({
Bucket: this.config.bucketName,
Key: path
})
.promise();
};
/**
* get recursively all aws directory prefixes
* @param path
*/
getDirectoryPrefixes = async (path: string): Promise<any[]> => {
const prefixes = [];
const promises = [];
const listParams = {
Bucket: this.config.bucketName,
Prefix: path,
Delimiter: '/'
};
const listedObjects = await this.objectStore
.listObjectsV2(listParams)
.promise();
if (
listedObjects.Contents.length > 0 ||
listedObjects.CommonPrefixes.length > 0
) {
listedObjects.Contents.forEach(({ Key }) => {
prefixes.push({ Key });
});
listedObjects.CommonPrefixes.forEach(({ Prefix }) => {
prefixes.push({ Key: Prefix });
promises.push(this.getDirectoryPrefixes(Prefix));
});
// if (listedObjects.IsTruncated) await this.deleteDirectoryPromise(path);
}
const subPrefixes = await Promise.all(promises);
subPrefixes.map(arrPrefixes => {
arrPrefixes.map(prefix => {
prefixes.push(prefix);
});
});
return prefixes;
};
You can try this:
import { s3DeleteDir } from '@zvs001/s3-utils'
import { S3 } from 'aws-sdk'
const s3Client = new S3()
await s3DeleteDir(s3Client, {
Bucket: 'my-bucket',
Prefix: `folder/`,
})
I like the list-objects-then-delete approach, which is what the aws command line does behind the scenes, by the way. But I didn't want to await the list (a few seconds) before deleting the objects, so I use this one-step (background) process, which I found slightly faster. You can await the child process if you really want to confirm deletion, but I found that took around 10 seconds, so I don't bother; I just fire and forget and check the logs instead. The entire API call with other stuff now takes 1.5 s, which is fine for my situation.
var CHILD = require("child_process").exec;
function removeImagesAndTheFolder(folder_name_str, callback){
var cmd_str = "aws s3 rm s3://"
+ IMAGE_BUCKET_STR
+ "/" + folder_name_str
+ "/ --recursive";
if(process.env.NODE_ENV === "development"){
//When not on an EC2 with a role I use my profile
cmd_str += " " + "--profile " + LOCAL_CONFIG.PROFILE_STR;
}
// In my situation I return early for the user. You could make them wait tho'.
callback(null, {"msg_str": "Check later that these images were actually removed."});
//do not return yet still stuff to do
CHILD(cmd_str, function(error, stdout, stderr){
if(error || stderr){
console.log("Problem removing this folder with a child process:" + stderr);
}else{
console.log("Child process completed, here are the results", stdout);
}
});
}
I suggest you do it in 2 steps, so you can follow what's happening (with a progress bar, etc.):
Get all keys to remove
Remove the keys
Of course, step 1 is a recursive function, such as:
https://gist.github.com/ebuildy/7ac807fd017452dfaf3b9c9b10ff3b52#file-my-s3-client-ts
import { ListObjectsV2Command, S3Client, S3ClientConfig } from "@aws-sdk/client-s3"
/**
 * Get all keys recursively
 * @param Prefix
 * @returns
 */
public async listObjectsRecursive(Prefix: string, ContinuationToken?: string): Promise<
any[]
> {
// Get objects for current prefix
const listObjects = await this.client.send(
new ListObjectsV2Command({
Delimiter: "/",
Bucket: this.bucket.name,
Prefix,
ContinuationToken
})
);
let deepFiles, nextFiles
// Recursive call to get sub prefixes
if (listObjects.CommonPrefixes) {
const deepFilesPromises = listObjects.CommonPrefixes.flatMap(({Prefix}) => {
return this.listObjectsRecursive(Prefix)
})
deepFiles = (await Promise.all(deepFilesPromises)).flatMap(t => t)
}
// If we must paginate
if (listObjects.IsTruncated) {
nextFiles = await this.listObjectsRecursive(Prefix, listObjects.NextContinuationToken)
}
return [
...(listObjects.Contents || []),
...(deepFiles || []),
...(nextFiles || [])
]
}
Then, delete all objects:
public async deleteKeys(keys: string[]): Promise<any[]> {
function spliceIntoChunks(arr: any[], chunkSize: number) {
const res = [];
while (arr.length > 0) {
const chunk = arr.splice(0, chunkSize);
res.push(chunk);
}
return res;
}
const allKeysToRemovePromises = keys.map(k => this.listObjectsRecursive(k))
const allKeysToRemove = (await Promise.all(allKeysToRemovePromises)).flatMap(k => k)
const allKeysToRemoveGroups = spliceIntoChunks(allKeysToRemove, 3)
const deletePromises = allKeysToRemoveGroups.map(group => {
return this.client.send(
new DeleteObjectsCommand({
Bucket: this.bucket.name,
Delete: {
Objects: group.map(({Key}) => {
return {
Key
}
})
}
})
)
})
const results = await Promise.all(deletePromises)
return results.flatMap(({$metadata, Deleted}) => {
return Deleted.map(({Key}) => {
return {
status: $metadata.httpStatusCode,
key: Key
}
})
})
}
Based on Emi's answer, I made an npm package so you don't need to write the code yourself. The code is written in TypeScript.
See https://github.com/bingtimren/s3-commons/blob/master/src/lib/deleteRecursive.ts
You can delete an empty folder the same way you delete a file. In order to delete a non-empty folder on AWS S3, you'll need to empty it first by deleting all files and folders inside. Once the folder is empty, you can delete it as a regular file. The same applies to the bucket deletion. We've implemented it in this app called Commandeer so you can do it from a GUI.
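As a minimal sketch (aws-sdk v2): once the prefix has been emptied with one of the snippets above, the zero-byte "folder" placeholder object (if the folder was created via the console) can be deleted like any other object; the bucket and key here are placeholders:
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Delete the now-empty "folder" object itself (note the trailing slash in the key)
await s3.deleteObject({ Bucket: 'my-bucket', Key: 'images/' }).promise();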
