Deleting file from DigitalOcean Space with Next.js (Node.js)

I encountered weird behavior while trying to delete a file from an S3 bucket on DigitalOcean Spaces. I use aws-sdk and follow the official example. However, the method doesn't delete the file, no error occurs, and the returned data object (which should contain the key of the deleted item) is empty. Below is the code:
import AWS from "aws-sdk";

export default async function handler(req, res) {
  const key = req.query.key;
  const spacesEndpoint = new AWS.Endpoint("ams3.digitaloceanspaces.com");
  const s3 = new AWS.S3({
    endpoint: spacesEndpoint,
    secretAccessKey: process.env.AWS_SECRET_KEY,
    accessKeyId: process.env.AWS_ACCESS_KEY,
  });
  const params = {
    Bucket: process.env.BUCKET_NAME,
    Key: key,
  };
  s3.deleteObject(params, function (error, data) {
    if (error) {
      return res.status(500).json({ error: "Something went wrong" });
    }
    console.log("Successfully deleted file", data);
    res.status(200).json({ deleted: key });
  });
}
The environment variables are correct, and another upload method (not shown above) works just fine.
The key passed into params has the format 'folder/file.ext' and definitely exists.
All the callback logs is: 'Successfully deleted file {}'
Any ideas what is happening here?

Please make sure you don't have any spaces in your key (file name); otherwise it won't work. I was facing a similar problem when I saw there was a space in the file name I was trying to delete.
For example, if we upload a file named "new file.jpeg", DigitalOcean Spaces stores it as "new%20file.jpeg", so a delete request for the key "new file.jpeg" won't find any matching object. That's why we need to trim whitespace, or replace spaces between the words, in the file name.
Hope it helps.
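To make the workaround concrete, here is a minimal sketch of a hypothetical helper that normalizes a key before uploading; the replacement character is an assumption, any non-space character would do:

```javascript
// Hypothetical helper: strip leading/trailing whitespace and replace
// inner runs of whitespace so the stored key never contains raw spaces.
function sanitizeKey(key) {
  return key.trim().replace(/\s+/g, "-");
}

console.log(sanitizeKey("folder/new file.jpeg")); // "folder/new-file.jpeg"
```

Note that if the object was already stored with a space in its key, the delete call has to use the exact stored key rather than a sanitized one.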

Related

Getting "NoSuchKey" error when creating S3 signedUrl with NodeJS

I'm trying to access an S3 bucket with Node.js using aws-sdk.
When I call the s3.getSignedUrl method and use the URL it provides, the URL returns a "NoSuchKey" error:
<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
<Key>{MY_BUCKET_NAME}/{REQUESTED_FILENAME}</Key>
My theory is that the request path I'm passing is wrong. Comparing my request:
{BUCKET_NAME}.s3.{BUCKET_REGION}.amazonaws.com/{BUCKET_NAME}/{KEY}
With the url created from the AWS console:
{BUCKET_NAME}.s3.{BUCKET_REGION}.amazonaws.com/{KEY}
Why is aws-sdk adding the "{BUCKET_NAME}" at the end?
NodeJS code:
// s3 instance setup
const s3 = new AWS.S3({
  region: BUCKET_REGION,
  endpoint: BUCKET_ENDPOINT, // {MY_BUCKET_NAME}.s3.{REGION}.amazonaws.com
  s3ForcePathStyle: true,
  signatureVersion: "v4",
});

const getSignedUrlFromS3 = async (filename) => {
  const s3Params = {
    Bucket: BUCKET_NAME,
    Key: filename,
    Expires: 60,
  };
  const signedUrl = await s3.getSignedUrl("getObject", s3Params);
  return { name: filename, url: signedUrl };
};
The SDK adds the bucket name in the path because you specifically ask it to:
s3ForcePathStyle: true,
However, according to your comment, you use the bucket name in the endpoint already ("I have my endpoint as {MY_BUCKET_NAME}.s3.{REGION}.amazonaws.com") so your endpoint isn't meant to use path style...
Path style means using s3.amazonaws.com/bucket/key instead of bucket.s3.amazonaws.com/key. Forcing path style with an endpoint that actually already contains the bucket name ends up with bucket.s3.amazonaws.com/bucket/key which is interpreted as key bucket/key instead of key.
The fix should be to disable s3ForcePathStyle and instead to set s3BucketEndpoint: true because you specified an endpoint for an individual bucket.
However, in my opinion it's unnecessary to specify an endpoint in the first place - just let the SDK handle these things for you! I'd remove both s3ForcePathStyle and endpoint (then s3BucketEndpoint isn't needed either).
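To see why the doubled bucket name appears, here is a pure-JS illustration of the two addressing styles (bucket, region, and key values are placeholders):

```javascript
// Virtual-hosted style: the bucket is part of the hostname.
function virtualHostedUrl(bucket, region, key) {
  return `https://${bucket}.s3.${region}.amazonaws.com/${key}`;
}

// Path style: the bucket is the first path segment.
function pathStyleUrl(bucket, region, key) {
  return `https://s3.${region}.amazonaws.com/${bucket}/${key}`;
}

// Forcing path style on a bucket-specific endpoint mixes the two styles,
// producing the broken "bucket in host AND path" URL from the question:
console.log(virtualHostedUrl("my-bucket", "us-east-1", "my-bucket/file.txt"));
```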

Unable to fetch file from S3 in node js

I have stored video files in an S3 bucket and now I want to serve them to clients through an API. Here is my code for it:
app.get('/vid', async (req, res) => {
  AWS.config.update({
    accessKeyId: config.awsAccessKey,
    secretAccessKey: config.awsSecretKey,
    region: "ap-south-1"
  });
  let s3 = new AWS.S3();
  var p = req.query.p;
  res.attachment(p);
  var options = {
    Bucket: BUCKET_NAME,
    Key: p,
  };
  console.log(p, "name");
  try {
    s3.getObject(options).createReadStream().pipe(res);
  } catch (e) {
    console.log(e);
  }
});
This is the output I am getting even though this file is available in the S3 bucket:
vid_kdc5stoqnrIjEkL9M.mp4 name
NoSuchKey: The specified key does not exist.
This is likely caused by invalid parameters being passed into the function.
To check for invalid parameters, double-check the strings being passed in. For the object, check the following:
Check the value of p; ensure it exactly matches the full object key.
Validate that the correct BUCKET_NAME is being used.
Ensure there are no trailing characters (such as /).
Perform any necessary decoding before passing parameters in.
If in doubt, use logging to output the exact value; also try the function with hard-coded values to validate that you can actually retrieve the objects.
For more information take a look at the How can I troubleshoot the 404 "NoSuchKey" error from Amazon S3? page.
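As a sketch of the decoding and trimming checks above (the helper name is made up for illustration):

```javascript
// Hypothetical pre-check: decode a query-supplied key and strip
// trailing slashes before handing it to getObject.
function normalizeKey(raw) {
  return decodeURIComponent(raw).trim().replace(/\/+$/, "");
}

console.log(normalizeKey("vid_kdc5stoqnrIjEkL9M.mp4/")); // "vid_kdc5stoqnrIjEkL9M.mp4"
console.log(normalizeKey("my%20video.mp4"));             // "my video.mp4"
```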
Having certain characters in the bucket name leads to this error.
In your case, there is an underscore. Try renaming the file.
Also refer to the S3 Bucket Naming Requirements docs.
Example from Docs:
The following example bucket names are not valid:
aws_example_bucket (contains underscores)
AwsExampleBucket (contains uppercase letters)
aws-example-bucket- (ends with a hyphen)
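The rules above can be approximated with a simple check; this regex is a simplification of the full documented rules (it does not cover every restriction, such as IP-address-like names):

```javascript
// Rough validation: 3-63 characters, lowercase letters/digits/dots/hyphens,
// must begin and end with a letter or digit.
function looksLikeValidBucketName(name) {
  return /^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$/.test(name);
}

console.log(looksLikeValidBucketName("aws_example_bucket"));  // false (underscores)
console.log(looksLikeValidBucketName("AwsExampleBucket"));    // false (uppercase)
console.log(looksLikeValidBucketName("aws-example-bucket-")); // false (ends with hyphen)
console.log(looksLikeValidBucketName("aws-example-bucket"));  // true
```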

Copy file from one AWS S3 Bucket to another bucket with Node

I am trying to copy a file from one AWS S3 bucket to another bucket using Node. If the file name has no white space, for example "abc.csv", it works fine.
But if the file name contains white space, for example "abc xyz.csv", it throws the error below:
NoSuchKey: The specified key does not exist.
at Request.extractError (d:\Projects\Other\testproject\s3filetoarchieve\node_modules\aws-sdk\lib\services\s3.js:577:35)
Below is the code provided.
return Promise.each(files, file => {
  var params = {
    Bucket: process.env.CR_S3_BUCKET_NAME,
    CopySource: `/${process.env.CR_S3_BUCKET_NAME}/${prefix}${file.name}`,
    Key: `${archieveFolder}${file.name}`
  };
  console.log(params);
  return new Promise((resolve, reject) => {
    s3bucket.copyObject(params, function (err, data) {
      if (err) {
        console.log(err, err.stack);
        reject(err);
      } else {
        console.log(data);
        resolve(data);
      }
    });
  });
}).then(result => {
  debugger;
});
Early help would be highly appreciated. Thank you.
I think the problem is exactly that space in the file name.
S3 keys must be URL-encoded, as they need to be accessible in URL form.
There are some packages that help with URL formatting, like speakingurl,
or you can write something of your own, maybe simply replacing whitespace (\s) with underscores or dashes (_ or -) if you want to keep it friendly.
If you don't mind about that, you can simply use encodeURIComponent(file.name).
Hope it helps!
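A minimal sketch of how the CopySource could be encoded; the bucket, prefix, and destination folder names here are placeholders for illustration, not values from the question:

```javascript
const bucket = "my-bucket";  // placeholder
const prefix = "uploads/";   // placeholder
const file = { name: "abc xyz.csv" };

const params = {
  Bucket: bucket,
  // CopySource must be URL-encoded; the destination Key is passed as-is.
  CopySource: encodeURIComponent(`${bucket}/${prefix}${file.name}`),
  Key: `archive/${file.name}`,
};

console.log(params.CopySource); // "my-bucket%2Fuploads%2Fabc%20xyz.csv"
```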

aws-sdk s3.getObject with redirect

In short, I'm trying to resize an image through a redirect, AWS Lambda, and the aws-sdk.
Following along with the AWS tutorial on resizing images on the fly (AWS - resize-images-on-the-fly), I've managed to make everything work according to the walkthrough; however, my question is about making the call to the bucket.
Currently the only way I can get this to work is by calling
http://MY_BUCKET_WEBSITE_HOSTNAME/25x25/blue_marble.jpg.
If the image isn't available, the request is redirected, the image is resized, and then it is placed back in the bucket.
What I would like to do is access the bucket through the aws-sdk s3.getObject() call, rather than through that direct link.
As of now, I can only access images that already exist in the bucket, so the redirect never happens.
My thought was that the request wasn't being sent to the correct endpoint, and from what I found online, I changed the way the SDK was created to this:
s3 = new aws.S3({
  accessKeyId: "myAccessKeyId",
  secretAccessKey: "mySecretAccessKey",
  region: "us-west-2",
  endpoint: '<MYBUCKET>.s3-website-us-west-2.amazonaws.com',
  s3BucketEndpoint: true,
  sslEnabled: false,
  signatureVersion: 'v4'
});

params = {
  Bucket: 'MY_BUCKET',
  Key: '85x85/blue_marble.jpg'
};

s3.getObject(params, (error, data) => data);
From what I can tell, the endpoints in the request look correct.
When I visit the endpoints directly in the browser, everything works as expected.
But when using the SDK, only existing images are returned; there is no redirect, no data comes back, and I get the error:
XMLParserError: Non-whitespace before first tag.
Not sure if it's possible with s3.getObject(); it seems like it may be, but I can't figure it out.
Use headObject to check if the object exists. If not, call your resize API and then retry the get after the resize.
var params = {
  Bucket: config.get('s3bucket'),
  Key: path
};

s3.headObject(params, function (err, metadata) {
  if (err && err.code === 'NotFound') {
    // Call your resize API here. Once your resize API returns success, you can get the object's URL.
  } else {
    s3.getSignedUrl('getObject', params, callback); // Use this secure URL to access the object.
  }
});

S3 file upload stream using node js

I am trying to find a solution to stream files to Amazon S3 from a Node.js server, with these requirements:
Don't store a temp file on the server or in memory. Buffering up to some limit (but not the complete file) is acceptable during upload.
No restriction on uploaded file size.
Don't block the server until the complete file uploads, because during a heavy file upload other requests' waiting time would increase unexpectedly.
I don't want to use direct file upload from the browser, because the S3 credentials would need to be shared in that case. Another reason to upload from the Node.js server is that some authentication may also need to be applied before uploading.
I tried to achieve this using node-multiparty, but it was not working as expected. You can see my solution and the issue at https://github.com/andrewrk/node-multiparty/issues/49. It works fine for small files but fails for a file of size 15 MB.
Any solution or alternative?
You can now use streaming with the official Amazon SDK for Node.js; see the section "Uploading a File to an Amazon S3 Bucket" or their example on GitHub.
What's even more awesome, you can finally do so without knowing the file size in advance. Simply pass the stream as the Body:
var fs = require('fs');
var zlib = require('zlib');

var body = fs.createReadStream('bigfile').pipe(zlib.createGzip());
var s3obj = new AWS.S3({ params: { Bucket: 'myBucket', Key: 'myKey' } });
s3obj.upload({ Body: body })
  .on('httpUploadProgress', function (evt) { console.log(evt); })
  .send(function (err, data) { console.log(err, data); });
For your information, the v3 SDK was published with a dedicated module to handle this use case: https://www.npmjs.com/package/@aws-sdk/lib-storage
Took me a while to find it.
Give https://www.npmjs.org/package/streaming-s3 a try.
I used it for uploading several big files in parallel (>500 MB), and it worked very well.
It is very configurable and also lets you track upload statistics.
You don't need to know the total size of the object, and nothing is written to disk.
If it helps anyone, I was able to stream from the client to S3 successfully (without memory or disk storage):
https://gist.github.com/mattlockyer/532291b6194f6d9ca40cb82564db9d2a
The server endpoint assumes req is a stream object. I sent a File object from the client, which modern browsers can send as binary data, with the file info set in the headers.
const fileUploadStream = (req, res) => {
  // get "body" args from header
  const { id, fn } = JSON.parse(req.get('body'));
  const Key = id + '/' + fn; // upload to s3 folder "id" with filename === fn
  const params = {
    Key,
    Bucket: bucketName, // set somewhere
    Body: req, // req is a stream
  };
  s3.upload(params, (err, data) => {
    if (err) {
      res.send('Error Uploading Data: ' + JSON.stringify(err) + '\n' + JSON.stringify(err.stack));
    } else {
      res.send(Key);
    }
  });
};
Yes, putting the file info in the headers breaks convention, but if you look at the gist it's much cleaner than anything else I found using streaming libraries, multer, busboy, etc.
+1 for pragmatism, and thanks to @SalehenRahman for his help.
I'm using the s3-upload-stream module in a working project here.
There are also some good examples from @raynos in his http-framework repository.
Alternatively you can look at https://github.com/minio/minio-js. It has a minimal set of abstracted APIs implementing the most commonly used S3 calls.
Here is an example of a streaming upload:
$ npm install minio
$ cat >> put-object.js << EOF

var Minio = require('minio')
var fs = require('fs')

// find out your s3 endpoint here:
// http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
var s3Client = new Minio({
  url: 'https://<your-s3-endpoint>',
  accessKey: 'YOUR-ACCESSKEYID',
  secretKey: 'YOUR-SECRETACCESSKEY'
})

var file = 'your_localfile.zip'
fs.stat(file, function(e, stat) {
  if (e) {
    return console.log(e)
  }
  var fileStream = fs.createReadStream(file)
  s3Client.putObject('mybucket', 'hello/remote_file.zip', 'application/octet-stream', stat.size, fileStream, function(e) {
    return console.log(e) // should be null
  })
})
EOF
putObject() here is a fully managed single function call: for file sizes over 5 MB it automatically does multipart internally. You can resume a failed upload as well, and it will start from where it left off by verifying previously uploaded parts.
Additionally, this library is isomorphic and can be used in browsers as well.
