Yes, I know there is no folder concept on S3 storage, but I really want to delete a specific folder from S3 with Node.js. I tried two solutions, but neither worked.
My code is below:
Solution 1:
Deleting folder directly.
var key = 'level/folder1/folder2/';
var strReturn;
var params = {Bucket: MyBucket};
var s3 = new AWS.S3(params);
s3.client.listObjects({
    Bucket: MyBucket,
    Key: key
}, function (err, data) {
    if (err) {
        strReturn = "{\"status\":\"1\"}";
    } else {
        strReturn = "{\"status\":\"0\"}";
    }
    res.send(strReturn);
    console.log('error:' + err + ' data:' + JSON.stringify(data));
});
Actually, I have a lot of files under folder2. I can delete a single file from folder2 if I define the key like this:
var key='level/folder1/folder2/file1.txt', but it didn't work when I tried to delete the folder (key='level/folder1/folder2/').
Solution 2:
I tried to set an expiration on the object when uploading the file or folder to S3. The code is below:
s3.client.putObject({
    Bucket: Camera_Bucket,
    Key: key,
    ACL: 'public-read',
    Expires: 60
});
But that didn't work either. After the upload finished, I checked the properties of that file; it showed no value for the expiry date:
Expiry Date: none
Expiration Rule: N/A
How can I delete a folder on S3 with Node.js?
Here is an implementation in ES7 with an async function and using listObjectsV2 (the revised List Objects API):
async function emptyS3Directory(bucket, dir) {
    const listParams = {
        Bucket: bucket,
        Prefix: dir
    };

    const listedObjects = await s3.listObjectsV2(listParams).promise();

    if (listedObjects.Contents.length === 0) return;

    const deleteParams = {
        Bucket: bucket,
        Delete: { Objects: [] }
    };

    listedObjects.Contents.forEach(({ Key }) => {
        deleteParams.Delete.Objects.push({ Key });
    });

    await s3.deleteObjects(deleteParams).promise();

    if (listedObjects.IsTruncated) await emptyS3Directory(bucket, dir);
}
To call it:
await emptyS3Directory(process.env.S3_BUCKET, 'images/')
You can use the aws-sdk module to delete a folder. Because you can only delete a folder when it is empty, you first have to delete the files in it. I'm doing it like this:
function emptyBucket(bucketName, callback) {
    var params = {
        Bucket: bucketName,
        Prefix: 'folder/'
    };

    s3.listObjects(params, function (err, data) {
        if (err) return callback(err);

        if (data.Contents.length == 0) return callback();

        params = {Bucket: bucketName};
        params.Delete = {Objects: []};

        data.Contents.forEach(function (content) {
            params.Delete.Objects.push({Key: content.Key});
        });

        s3.deleteObjects(params, function (err, data) {
            if (err) return callback(err);
            if (data.IsTruncated) {
                emptyBucket(bucketName, callback);
            } else {
                callback();
            }
        });
    });
}
A much simpler way is to fetch all objects (keys) at that path and delete them. Each list call fetches up to 1,000 keys, and S3 deleteObjects can delete up to 1,000 keys per request too. Do that recursively to achieve the goal.
Written in TypeScript:
/**
 * delete a folder recursively
 * @param bucket
 * @param path - without end /
 */
deleteFolder(bucket: string, path: string) {
    return new Promise((resolve, reject) => {
        // get all keys and delete objects
        const getAndDelete = (ct: string = null) => {
            this.s3
                .listObjectsV2({
                    Bucket: bucket,
                    MaxKeys: 1000,
                    ContinuationToken: ct,
                    Prefix: path + "/",
                    Delimiter: "",
                })
                .promise()
                .then(async (data) => {
                    // params for delete operation
                    let params = {
                        Bucket: bucket,
                        Delete: { Objects: [] },
                    };
                    // add keys to Delete Object
                    data.Contents.forEach((content) => {
                        params.Delete.Objects.push({ Key: content.Key });
                    });
                    // delete all keys
                    await this.s3.deleteObjects(params).promise();
                    // check if ct is present
                    if (data.NextContinuationToken) getAndDelete(data.NextContinuationToken);
                    else resolve(true);
                })
                .catch((err) => reject(err));
        };

        // init call
        getAndDelete();
    });
}
According to the doc at https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html:
A response can contain CommonPrefixes only if you specify a delimiter.
CommonPrefixes contains all (if there are any) keys between Prefix and the next occurrence of the string specified by the delimiter.
Omitting the Delimiter parameter makes ListObjects return all keys starting with the Prefix parameter.
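To illustrate the difference, here is a minimal sketch (the bucket name and prefix are made up) using listObjectsV2 from aws-sdk v2:

// Minimal sketch, assuming aws-sdk v2 and a hypothetical bucket/prefix.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Without Delimiter: Contents holds every key under the prefix, including "subfolders".
s3.listObjectsV2({ Bucket: 'my-bucket', Prefix: 'level/folder1/' }).promise()
    .then(data => console.log(data.Contents.map(o => o.Key)));

// With Delimiter '/': Contents holds only the keys at this level,
// and CommonPrefixes holds the "subfolder" prefixes.
s3.listObjectsV2({ Bucket: 'my-bucket', Prefix: 'level/folder1/', Delimiter: '/' }).promise()
    .then(data => console.log(data.Contents.map(o => o.Key), data.CommonPrefixes));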
Based on the accepted answer, I created a promise-returning function, so you can chain it.
function emptyBucket(bucketName) {
    let currentData;
    let params = {
        Bucket: bucketName,
        Prefix: 'folder/'
    };

    return S3.listObjects(params).promise().then(data => {
        if (data.Contents.length === 0) {
            throw new Error('List of objects empty.');
        }

        currentData = data;
        params = {Bucket: bucketName};
        params.Delete = {Objects: []};

        currentData.Contents.forEach(content => {
            params.Delete.Objects.push({Key: content.Key});
        });

        return S3.deleteObjects(params).promise();
    }).then(() => {
        if (currentData.Contents.length === 1000) {
            return emptyBucket(bucketName);
        } else {
            return true;
        }
    });
}
The accepted answer throws an error when used in TypeScript. I made it work by modifying the code in the following way. I'm very new to TypeScript, but at least it is working now.
async function emptyS3Directory(prefix: string) {
    const listParams = {
        Bucket: "bucketName",
        Prefix: prefix, // ex. path/to/folder
    };

    const listedObjects = await s3.listObjectsV2(listParams).promise();

    if (listedObjects.Contents.length === 0) return;

    const deleteParams = {
        Bucket: "bucketName",
        Delete: { Objects: [] as any },
    };

    listedObjects.Contents.forEach((content: any) => {
        deleteParams.Delete.Objects.push({ Key: content.Key });
    });

    await s3.deleteObjects(deleteParams).promise();

    if (listedObjects.IsTruncated) await emptyS3Directory(prefix);
}
A better solution with the @aws-sdk/client-s3 module:
private async _deleteFolder(key: string, bucketName: string): Promise<void> {
    const DeletePromises: Promise<DeleteObjectCommandOutput>[] = [];
    const { Contents } = await this.client.send(
        new ListObjectsCommand({
            Bucket: bucketName,
            Prefix: key,
        }),
    );
    if (!Contents) return;

    Contents.forEach(({ Key }) => {
        DeletePromises.push(
            this.client.send(
                new DeleteObjectCommand({
                    Bucket: bucketName,
                    Key,
                }),
            ),
        );
    });

    await Promise.all(DeletePromises);
}
ListObjectsCommand returns the keys of the files in the folder, including those in subfolders.
listObjectsV2 lists files only under the current directory Prefix, not under subfolder prefixes (when a delimiter is used). If you want to delete a folder together with its subfolders recursively, here is the source code: https://github.com/tagspaces/tagspaces-common/blob/develop/packages/common-aws/io-objectstore.js#L1060
deleteDirectoryPromise = async (path: string): Promise<Object> => {
    const prefixes = await this.getDirectoryPrefixes(path);
    if (prefixes.length > 0) {
        const deleteParams = {
            Bucket: this.config.bucketName,
            Delete: { Objects: prefixes }
        };
        return this.objectStore.deleteObjects(deleteParams).promise();
    }
    return this.objectStore
        .deleteObject({
            Bucket: this.config.bucketName,
            Key: path
        })
        .promise();
};

/**
 * get recursively all aws directory prefixes
 * @param path
 */
getDirectoryPrefixes = async (path: string): Promise<any[]> => {
    const prefixes = [];
    const promises = [];
    const listParams = {
        Bucket: this.config.bucketName,
        Prefix: path,
        Delimiter: '/'
    };
    const listedObjects = await this.objectStore
        .listObjectsV2(listParams)
        .promise();

    if (
        listedObjects.Contents.length > 0 ||
        listedObjects.CommonPrefixes.length > 0
    ) {
        listedObjects.Contents.forEach(({ Key }) => {
            prefixes.push({ Key });
        });
        listedObjects.CommonPrefixes.forEach(({ Prefix }) => {
            prefixes.push({ Key: Prefix });
            promises.push(this.getDirectoryPrefixes(Prefix));
        });
        // if (listedObjects.IsTruncated) await this.deleteDirectoryPromise(path);
    }

    const subPrefixes = await Promise.all(promises);
    subPrefixes.map(arrPrefixes => {
        arrPrefixes.map(prefix => {
            prefixes.push(prefix);
        });
    });
    return prefixes;
};
You can try this:
import { s3DeleteDir } from '@zvs001/s3-utils'
import { S3 } from 'aws-sdk'

const s3Client = new S3()

await s3DeleteDir(s3Client, {
    Bucket: 'my-bucket',
    Prefix: `folder/`,
})
I like the list-objects-then-delete approach, which is what the AWS command line does behind the scenes, by the way. But I didn't want to await the list (a few seconds) before deleting the objects, so I use this one-step (background) process, which I found slightly faster. You can await the child process if you really want to confirm deletion, but I found that took around 10 seconds, so I don't bother; I just fire and forget and check the logs instead. The entire API call, together with the other work it does, now takes about 1.5 s, which is fine for my situation.
var CHILD = require("child_process").exec;

function removeImagesAndTheFolder(folder_name_str, callback) {

    var cmd_str = "aws s3 rm s3://"
        + IMAGE_BUCKET_STR
        + "/" + folder_name_str
        + "/ --recursive";

    if (process.env.NODE_ENV === "development") {
        //When not on an EC2 with a role I use my profile
        cmd_str += " " + "--profile " + LOCAL_CONFIG.PROFILE_STR;
    }

    // In my situation I return early for the user. You could make them wait tho'.
    callback(null, {"msg_str": "Check later that these images were actually removed."});

    //do not return yet still stuff to do
    CHILD(cmd_str, function (error, stdout, stderr) {
        if (error || stderr) {
            console.log("Problem removing this folder with a child process:" + stderr);
        } else {
            console.log("Child process completed, here are the results", stdout);
        }
    });
}
I suggest doing it in two steps, so you can "follow" what's happening (with a progress bar, etc.):
Get all keys to remove
Remove keys
Of course, step 1 is a recursive function, such as:
https://gist.github.com/ebuildy/7ac807fd017452dfaf3b9c9b10ff3b52#file-my-s3-client-ts
import { ListObjectsV2Command, S3Client, S3ClientConfig } from "@aws-sdk/client-s3"

/**
 * Get all keys recursively
 * @param Prefix
 * @returns
 */
public async listObjectsRecursive(Prefix: string, ContinuationToken?: string): Promise<
    any[]
> {
    // Get objects for current prefix
    const listObjects = await this.client.send(
        new ListObjectsV2Command({
            Delimiter: "/",
            Bucket: this.bucket.name,
            Prefix,
            ContinuationToken
        })
    );

    let deepFiles, nextFiles

    // Recursive call to get sub prefixes
    if (listObjects.CommonPrefixes) {
        const deepFilesPromises = listObjects.CommonPrefixes.flatMap(({Prefix}) => {
            return this.listObjectsRecursive(Prefix)
        })

        deepFiles = (await Promise.all(deepFilesPromises)).flatMap(t => t)
    }

    // If we must paginate
    if (listObjects.IsTruncated) {
        nextFiles = await this.listObjectsRecursive(Prefix, listObjects.NextContinuationToken)
    }

    return [
        ...(listObjects.Contents || []),
        ...(deepFiles || []),
        ...(nextFiles || [])
    ]
}
Then, delete all objects:
public async deleteKeys(keys: string[]): Promise<any[]> {
    function spliceIntoChunks(arr: any[], chunkSize: number) {
        const res = [];
        while (arr.length > 0) {
            const chunk = arr.splice(0, chunkSize);
            res.push(chunk);
        }
        return res;
    }

    const allKeysToRemovePromises = keys.map(k => this.listObjectsRecursive(k))
    const allKeysToRemove = (await Promise.all(allKeysToRemovePromises)).flatMap(k => k)
    const allKeysToRemoveGroups = spliceIntoChunks(allKeysToRemove, 3)

    const deletePromises = allKeysToRemoveGroups.map(group => {
        return this.client.send(
            new DeleteObjectsCommand({
                Bucket: this.bucket.name,
                Delete: {
                    Objects: group.map(({Key}) => {
                        return {
                            Key
                        }
                    })
                }
            })
        )
    })

    const results = await Promise.all(deletePromises)

    return results.flatMap(({$metadata, Deleted}) => {
        return Deleted.map(({Key}) => {
            return {
                status: $metadata.httpStatusCode,
                key: Key
            }
        })
    })
}
Based on Emi's answer, I made an npm package so you don't need to write the code yourself. The code is also written in TypeScript.
See https://github.com/bingtimren/s3-commons/blob/master/src/lib/deleteRecursive.ts
You can delete an empty folder the same way you delete a file. In order to delete a non-empty folder on AWS S3, you'll need to empty it first by deleting all files and folders inside. Once the folder is empty, you can delete it as a regular file. The same applies to the bucket deletion. We've implemented it in this app called Commandeer so you can do it from a GUI.
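For reference, a minimal sketch of that empty-then-delete pattern with the v2 SDK (the bucket, prefix, and function name are made up; an existing AWS.S3 client is assumed):

// Minimal sketch: empty the "folder" first, then remove the placeholder key itself.
async function deleteFolderCompletely(s3, bucket, prefix) {
    const listed = await s3.listObjectsV2({ Bucket: bucket, Prefix: prefix }).promise();
    if (listed.Contents.length > 0) {
        await s3.deleteObjects({
            Bucket: bucket,
            Delete: { Objects: listed.Contents.map(({ Key }) => ({ Key })) }
        }).promise();
    }
    if (listed.IsTruncated) return deleteFolderCompletely(s3, bucket, prefix);
    // The zero-byte "folder/" object (if one was created) is deleted like any other key.
    await s3.deleteObject({ Bucket: bucket, Key: prefix }).promise();
}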
Related
I want to do this with the aws-sdk library.
I have a folder in my S3 bucket called "abcd/"; it has 3 files in it (e.g. abcd/1.jpg, abcd/2.jpg).
I want to rename the folder to 1234/, so that afterwards only 1234/ exists.
const awsMove = async (path) => {
    try {
        const s3 = new AWS.S3();
        const AWS_BUCKET = 'my-bucket-test';
        const copyParams = {
            Key: path.newPath,
            Bucket: AWS_BUCKET,
            CopySource: encodeURI(`/${AWS_BUCKET}/${path.oldPath}`),
        };
        await s3.copyObject(copyParams).promise();
        const deleteParams = {
            Key: path.oldPath,
            Bucket: AWS_BUCKET,
        };
        await s3.deleteObject(deleteParams).promise();
    } catch (err) {
        console.log(err);
    }
};

const changePath = { oldPath: 'abcd/', newPath: '1234/' };
awsMove(changePath);
The above code errors with "The specified key does not exist". What am I doing wrong?
AWS S3 does not have the concept of folders as in a file system. You have a bucket and a key that identifies the object/file stored at that location. The pattern of the key is usually a/b/c/d/some_file, and the way it is shown in the AWS console might give you the impression that a, b, c, or d are folders, but they aren't.
Now, you can't change the key of an object since it is immutable. You'll have to copy the file existing at the current key to the new key and delete the file at the current key.
This implies that renaming a folder, say folder/, is the same as copying all files located at keys folder/* to new keys newFolder/* and deleting the originals. The error:
The specified key does not exist
says that you haven't specified the full object key during the copy from the source or during the deletion. The correct implementation is to list all files at folder/*, then copy and delete them one by one. So your function should be doing something like this:
const awsMove = async (path) => {
    try {
        const s3 = new AWS.S3();
        const AWS_BUCKET = 'my-bucket-test';
        const listParams = {
            Bucket: AWS_BUCKET,
            Delimiter: '/',
            Prefix: `${path.oldPath}`
        };
        const listedObjects = await s3.listObjects(listParams).promise();
        await Promise.all(
            listedObjects.Contents.map(async (elem) => {
                // elem.Key is the full key, e.g. "abcd/1.jpg"
                const copyParams = {
                    Key: elem.Key.replace(path.oldPath, path.newPath),
                    Bucket: AWS_BUCKET,
                    CopySource: encodeURI(`/${AWS_BUCKET}/${elem.Key}`),
                };
                await s3.copyObject(copyParams).promise();
                const deleteParams = {
                    Key: elem.Key,
                    Bucket: AWS_BUCKET,
                };
                await s3.deleteObject(deleteParams).promise();
            })
        );
    } catch (err) {
        console.log(err);
    }
};
Unfortunately, you will need to copy the objects to the new names and then delete them from the old ones.
Boto3:

import boto3

AWS_BUCKET = 'my-bucket-test'
s3 = boto3.resource('s3')
s3.Object(AWS_BUCKET, 'new_file').copy_from(CopySource=f'{AWS_BUCKET}/old_file')
s3.Object(AWS_BUCKET, 'old_file').delete()
Node:

var s3 = new AWS.S3();
var AWS_BUCKET = 'my-bucket-test';
var OLD_S3_KEY = '/old-file.json';
var NEW_S3_KEY = '/new-file.json';

s3.copyObject({
    Bucket: AWS_BUCKET,
    CopySource: `${AWS_BUCKET}${OLD_S3_KEY}`,
    Key: NEW_S3_KEY
})
    .promise()
    .then(() =>
        s3.deleteObject({
            Bucket: AWS_BUCKET,
            Key: OLD_S3_KEY
        }).promise()
    )
    .catch((e) => console.error(e))
I'm using a library called sitemap to generate files from an array of objects constructed during runtime. My goal is to upload these generated sitemaps to an S3 bucket.
So far, the function is hosted on AWS Lambda and uploads the generated files to the bucket correctly.
My problem is that the generated sitemaps are corrupted. When I run the function locally, they are generated correctly without any issues.
Here's my handler:
module.exports.handler = async () => {
    try {
        console.log("inside handler....");
        await clearGeneratedSitemapsFromTmpDir();
        const sms = new SitemapAndIndexStream({
            limit: 10000,
            getSitemapStream: (i) => {
                const sitemapStream = new SitemapStream({
                    lastmodDateOnly: true,
                });
                const linkPath = `/sitemap-${i + 1}.xml`;
                const writePath = `/tmp/${linkPath}`;
                sitemapStream.pipe(createWriteStream(resolve(writePath)));
                return [new URL(linkPath, hostName).toString(), sitemapStream];
            },
        });

        const data = await generateSiteMap();

        sms.pipe(createWriteStream(resolve("/tmp/sitemap-index.xml")));

        // data.forEach((item) => sms.write(item));
        Readable.from(data).pipe(sms);
        sms.end();

        await uploadToS3();
        await clearGeneratedSitemapsFromTmpDir();
    } catch (error) {
        console.log("🚀 ~ file: index.js ~ line 228 ~ exec ~ error", error);
        Sentry.captureException(error);
    }
};
The data variable holds an array of around 11k items, so according to the code above, two sitemap files should be generated (the first 10k items, then the rest in a second sitemap), in addition to a sitemap index that lists the two generated sitemaps.
Here's my uploadToS3 function:
const uploadToS3 = async () => {
    try {
        console.log("uploading to s3....");
        const files = await getGeneratedXmlFilesNames();
        for (let i = 0; i < files.length; i += 1) {
            const file = files[i];
            const filePath = `/tmp/${file}`;
            // const stream = createReadStream(resolve(filePath));
            const fileRead = await readFileAsync(filePath, { encoding: "utf-8" });
            const params = {
                Body: fileRead,
                Key: `${file}`,
                ACL: "public-read",
                ContentType: "application/xml",
                ContentDisposition: "inline",
            };
            // const result = await s3Client.upload(params).promise();
            const result = await s3Client.putObject(params).promise();
            console.log(
                "🚀 ~ file: index.js ~ line 228 ~ uploadToS3 ~ result",
                result
            );
        }
    } catch (error) {
        console.log("uploadToS3 => error", error);
        // Sentry.captureException(error);
    }
};
And here's the function that cleans up the generated files from lambda's /tmp directory after upload to S3:
const clearGeneratedSitemapsFromTmpDir = async () => {
    try {
        console.log("cleaning up....");
        const readLocalTempDirDir = await readDirAsync("/tmp");
        const xmlFiles = readLocalTempDirDir.filter((file) =>
            file.includes(".xml")
        );

        for (const file of xmlFiles) {
            await unlinkAsync(`/tmp/${file}`);
            console.log("deleting file....");
        }
    } catch (error) {
        console.log(
            "🚀 ~ file: index.js ~ line 207 ~ clearGeneratedSitemapsFromTmpDir ~ error",
            error
        );
    }
};
My hunch is that the issue is related to streams, as I haven't fully understood them yet.
Any help here is highly appreciated.
Side note: I tried to sleep for 10s before uploading, but that didn't work either.
As a workaround, I did this:
const data = await generateSiteMap();
const logger = createWriteStream(resolve("/tmp/all-urls.json.txt"), {
    flags: "a",
});
data.forEach((el) => {
    logger.write(JSON.stringify(el));
    logger.write("\n");
});
logger.end();

const stream = lineSeparatedURLsToSitemapOptions(
    createReadStream(resolve("/tmp/all-urls.json.txt"))
)
    .pipe(sms)
    .pipe(createWriteStream(resolve("/tmp/sitemap-index.xml")));

await new Promise((fulfill) => stream.on("finish", fulfill));
await uploadToS3();
await clearGeneratedSitemapsFromTmpDir();
I'll keep the question open in case somebody answers it properly.
I'm using the @google-cloud/storage package and generating a signed URL to upload a file like this:
const path = require("path");
const { Storage } = require("@google-cloud/storage");

const GOOGLE_CLOUD_KEYFILE = path.resolve(
    __dirname + "/../gcloud_media_access.json"
);

const storage = new Storage({
    keyFilename: GOOGLE_CLOUD_KEYFILE,
});

exports.uploadUrlGCloud = async (bucketName, key, isPrivate = false) => {
    let bucket = storage.bucket(bucketName);
    let file = bucket.file(key);

    const options = {
        version: "v4",
        action: "write",
        expires: Date.now() + 15 * 60 * 1000 // 15 minutes
    };

    let signedUrl = (await file.getSignedUrl(options))[0];

    if (isPrivate) {
        await file.makePrivate({strict: true});
    }

    return signedUrl;
};
However, when I call this function like this:
const url = await uploadUrlGCloud(bucket, key, true);
I'm getting a 404 API error like this:
ApiError: No such object: testbucket/account/upload/4aac0fb0-92dd-11eb-8723-6b3ad09f80fa_demo.jpg
What I want to ask is: is there a way to generate the signed URL as private? Before the file is uploaded, I want to mark it as private and prevent public access.
Edit:
I uploaded a file to the created signed URL and called makePrivate again on the uploaded file. This time I didn't get any errors. However, when I checked the file again, I realized that it is still public.
This is the function I tried to make file private:
const makeFilePrivate = async (bucketName, key) => {
    return new Promise((resolve, reject) => {
        let bucket = storage.bucket(bucketName);
        let file = bucket.file(key);
        try {
            file.makePrivate({strict: true}, err => {
                if (!err) {
                    resolve(file.isPublic());
                } else
                    reject(err);
            })
        } catch (err) {
            reject(err);
        }
    })
};
console.log(await makeFilePrivate(bucket, remotePath));
// True
You can't make the objects of a public bucket private, due to the way IAM and ACLs interact with one another.
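To illustrate: if the bucket itself grants allUsers read access via IAM, an object-level makePrivate() (an ACL change) cannot override that bucket-wide grant. A hypothetical sketch of inspecting and removing such a bucket-level binding is below; the role and member names are the standard public-access ones, but verify them against your own bucket's policy before changing anything:

// Hypothetical sketch: remove a bucket-wide allUsers grant so that
// per-object ACLs (e.g. makePrivate) can take effect again.
const { Storage } = require("@google-cloud/storage");
const storage = new Storage();

async function removePublicBucketAccess(bucketName) {
    const bucket = storage.bucket(bucketName);
    const [policy] = await bucket.iam.getPolicy({ requestedPolicyVersion: 3 });

    // Drop allUsers from every binding (e.g. roles/storage.objectViewer)
    // and discard bindings that end up empty.
    policy.bindings = policy.bindings
        .map((binding) => ({
            ...binding,
            members: binding.members.filter((m) => m !== "allUsers"),
        }))
        .filter((binding) => binding.members.length > 0);

    await bucket.iam.setPolicy(policy);
}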
I am trying to upload large files to an S3 bucket using the Node.js aws-sdk.
In v2, the upload method handles large files by doing a multipart upload internally.
I want to use the new v3 aws-sdk. What is the way to upload large files in the new version? The PutObjectCommand method doesn't seem to be doing it.
I've seen there are methods such as CreateMultipartUpload, but I can't seem to find a full working example using them.
Thanks in advance.
As of 2021, I would suggest using the lib-storage package, which abstracts a lot of the implementation details.
Sample code:
import { Upload } from "@aws-sdk/lib-storage";
import { S3Client, S3 } from "@aws-sdk/client-s3";

const target = { Bucket, Key, Body };
try {
    const parallelUploads3 = new Upload({
        client: new S3({}) || new S3Client({}),
        tags: [...], // optional tags
        queueSize: 4, // optional concurrency configuration
        partSize: 1024 * 1024 * 5, // optional size of each part (5 MB)
        leavePartsOnError: false, // optional manually handle dropped parts
        params: target,
    });

    parallelUploads3.on("httpUploadProgress", (progress) => {
        console.log(progress);
    });

    await parallelUploads3.done();
} catch (e) {
    console.log(e);
}
Source: https://github.com/aws/aws-sdk-js-v3/blob/main/lib/lib-storage/README.md
Here's what I came up with to upload a Buffer as a multipart upload, using aws-sdk v3 for Node.js and TypeScript.
Error handling still needs some work (you might want to abort/retry in case of an error), but it should be a good starting point... I have tested this with XML files up to 15MB, and so far so good. No guarantees, though! ;)
import {
    CompleteMultipartUploadCommand,
    CompleteMultipartUploadCommandInput,
    CreateMultipartUploadCommand,
    CreateMultipartUploadCommandInput,
    S3Client,
    UploadPartCommand,
    UploadPartCommandInput
} from '@aws-sdk/client-s3'

const client = new S3Client({ region: 'us-west-2' })

export const uploadMultiPartObject = async (file: Buffer, createParams: CreateMultipartUploadCommandInput): Promise<void> => {
    try {
        const createUploadResponse = await client.send(
            new CreateMultipartUploadCommand(createParams)
        )
        const { Bucket, Key } = createParams
        const { UploadId } = createUploadResponse
        console.log('Upload initiated. Upload ID: ', UploadId)

        // 5MB is the minimum part size
        // Last part can be any size (no min.)
        // Single part is treated as last part (no min.)
        const partSize = (1024 * 1024) * 5 // 5MB
        const fileSize = file.length
        const numParts = Math.ceil(fileSize / partSize)

        const uploadedParts = []
        let remainingBytes = fileSize

        for (let i = 1; i <= numParts; i++) {
            let startOfPart = fileSize - remainingBytes
            let endOfPart = Math.min(partSize, startOfPart + remainingBytes)

            if (i > 1) {
                endOfPart = startOfPart + Math.min(partSize, remainingBytes)
                startOfPart += 1
            }

            const uploadParams: UploadPartCommandInput = {
                // add 1 to endOfPart due to slice end being non-inclusive
                Body: file.slice(startOfPart, endOfPart + 1),
                Bucket,
                Key,
                UploadId,
                PartNumber: i
            }
            const uploadPartResponse = await client.send(new UploadPartCommand(uploadParams))
            console.log(`Part #${i} uploaded. ETag: `, uploadPartResponse.ETag)

            remainingBytes -= Math.min(partSize, remainingBytes)

            // For each part upload, you must record the part number and the ETag value.
            // You must include these values in the subsequent request to complete the multipart upload.
            // https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html
            uploadedParts.push({ PartNumber: i, ETag: uploadPartResponse.ETag })
        }

        const completeParams: CompleteMultipartUploadCommandInput = {
            Bucket,
            Key,
            UploadId,
            MultipartUpload: {
                Parts: uploadedParts
            }
        }
        console.log('Completing upload...')
        const completeData = await client.send(new CompleteMultipartUploadCommand(completeParams))
        console.log('Upload complete: ', completeData.Key, '\n---')
    } catch (e) {
        throw e
    }
}
Here is fully working code with AWS SDK v3:
import { Upload } from "@aws-sdk/lib-storage";
import { S3Client, S3 } from "@aws-sdk/client-s3";
import { createReadStream } from 'fs';

const inputStream = createReadStream('clamav_db.zip');
const Bucket = process.env.DB_BUCKET
const Key = process.env.FILE_NAME
const Body = inputStream

const target = { Bucket, Key, Body };

try {
    const parallelUploads3 = new Upload({
        client: new S3Client({
            region: process.env.AWS_REGION,
            credentials: { accessKeyId: process.env.AWS_ACCESS_KEY, secretAccessKey: process.env.AWS_SECRET_KEY }
        }),
        queueSize: 4, // optional concurrency configuration
        partSize: 5242880, // optional size of each part
        leavePartsOnError: false, // optional manually handle dropped parts
        params: target,
    });

    parallelUploads3.on("httpUploadProgress", (progress) => {
        console.log(progress);
    });

    await parallelUploads3.done();
} catch (e) {
    console.log(e);
}
I've got a storage trigger function which resizes and replaces the uploaded image in storage and then updates the URL in my database:
}).then(() => {
    console.log('Original file deleted', filePath)
    const logo = storageRef.file(JPEGFilePath)
    return logo.getSignedUrl({ action: 'read', expires: date })
    // const logo = storageRef.child(JPEGFilePath)
    // return logo.getDownloadURL()
    // return storageUrl.getDownloadURL(JPEGFilePath)
}).then((url) => {
    const newRef = db.collection("user").doc(uid)
    return newRef.set({
        profile: { profileImg: url[0] }
    }, {
        merge: true
    })
})
Here is how I set the expiry date:
const d = new Date()
const date = new Date(d.setFullYear(d.getFullYear() + 200)).toString()
However, the image URL expires in a few weeks (roughly 2 weeks). Does anyone know how to fix that? I have even played with getDownloadURL, as you can see from the commented code, but that doesn't seem to work in a trigger.
Per the following links:
https://stackoverflow.com/a/42959262/370321
https://cloud.google.com/nodejs/docs/reference/storage/2.5.x/File#getSignedPolicy
Not sure which version of @google-cloud/storage you're using, but assuming it's 2.5.x, it looks like any value you pass in the expires field is passed into new Date(), so your code should work; I tried it in my dev tools. The only thing I can guess is that it doesn't like that you want a URL to live for 200 years.
Per the source code:
https://github.com/googleapis/nodejs-storage/blob/master/src/file.ts#L2358
Have you tried a shorter amount of time, or formatting it as a date string in the form mm-dd-yyyy?
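For example, a minimal sketch of both variants (the bucket and file names are made up); getSignedUrl accepts a Date, a timestamp, or a date string for expires:

// Minimal sketch, assuming an initialized @google-cloud/storage client.
const file = storage.bucket('my-bucket').file('profileImg.jpg');

// A shorter lifetime, passed as a Date:
const oneYear = new Date();
oneYear.setFullYear(oneYear.getFullYear() + 1);
const [urlFromDate] = await file.getSignedUrl({ action: 'read', expires: oneYear });

// Or as an mm-dd-yyyy date string:
const [urlFromString] = await file.getSignedUrl({ action: 'read', expires: '03-09-2030' });

Note that v4 signed URLs are capped at 7 days; only the default (v2) signing allows longer lifetimes.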
OK, so I have tried something, but I have no idea whether it will work or not, so I'll come back in 2 weeks and mark my question as answered if it does. For those with the same problem, I'll recapitulate what I've done.
1/ Download the service account key from console. Here is the link
https://console.firebase.google.com/project/_/settings/serviceaccounts/adminsdk
2/ Save the downloaded JSON file in your function directory
3/ Include the key in the Storage client inside your function. But be careful how you set the path to the file. Here is my question about it:
https://stackoverflow.com/a/56407592/11486115
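A minimal sketch of step 3, initializing the client with the downloaded key (the project ID, file name, and relative path are just examples):

// Hypothetical paths: the service account JSON saved next to the function code.
const path = require('path');
const { Storage } = require('@google-cloud/storage');

const storage = new Storage({
    projectId: 'YOUR-PROJECT-ID',
    keyFilename: path.join(__dirname, 'service-account.json'),
});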
UPDATE
I just found a mistake in my function. My URL was being provided by the cloud function by mistake (see the commented code).
Here is the complete function:
const {
    db
} = require('../../admin')

const projectId = "YOUR-PROJECT-ID"
const { Storage } = require('@google-cloud/storage');
const storage = new Storage({ projectId: projectId, keyFilename: 'PATH-TO-SERVICE-ACCOUNT' })

const os = require('os');
const fs = require('fs');
const path = require('path');
const spawn = require('child-process-promise').spawn

const JPEG_EXTENSION = '.jpg'

exports.handler = ((object) => {
    const bucket = object.bucket;
    const contentType = object.contentType;
    const filePath = object.name
    const JPEGFilePath = path.normalize(path.format({ dir: path.dirname(filePath), name: 'profileImg', ext: JPEG_EXTENSION }))
    const destBucket = storage.bucket(bucket)
    const tempFilePath = path.join(os.tmpdir(), path.basename(filePath))
    const tempLocalJPEGFile = path.join(os.tmpdir(), path.basename(JPEGFilePath))
    const metadata = {
        contentType: contentType
    }
    const uid = filePath.split("/").slice(1, 2).join("")
    const d = new Date()
    const date = new Date(d.setFullYear(d.getFullYear() + 200)).toString()

    if (!object.contentType.startsWith('image/')) {
        return destBucket.file(filePath).delete().then(() => {
            console.log('File is not an image ', filePath, ' DELETED')
            return null
        });
    }
    if (object.metadata.modified) {
        console.log('Image processed')
        return null
    }

    return destBucket.file(filePath).download({
        destination: tempFilePath
    })
        .then(() => {
            console.log('The file has been downloaded to', tempFilePath)
            return spawn('convert', [tempFilePath, '-resize', '100x100', tempLocalJPEGFile])
        }).then(() => {
            console.log('JPEG image created at', tempLocalJPEGFile)
            metadata.modified = true
            return destBucket.upload(tempLocalJPEGFile,
                {
                    destination: JPEGFilePath,
                    metadata: { metadata: metadata }
                })
        }).then(() => {
            console.log('JPEG image uploaded to Storage at', JPEGFilePath)
            return destBucket.file(filePath).delete()
        }).then(() => {
            console.log('Original file deleted', filePath)
            //const logo = storageRef.file(JPEGFilePath)
            const logo = destBucket.file(JPEGFilePath)
            return logo.getSignedUrl({ action: 'read', expires: date })
        }).then((url) => {
            const newRef = db.collection("user").doc(uid)
            return newRef.set({
                profile: { profileImg: url[0] }
            }, {
                merge: true
            })
        }).then(() => {
            fs.unlinkSync(tempFilePath);
            fs.unlinkSync(tempLocalJPEGFile)
            console.log(uid, 'user database updated ')
            return null
        })
})
I'm pretty confident that this will work now.