I have stored video files in an S3 bucket and now I want to serve them to clients through an API. Here is my code:
app.get('/vid', async (req, res) => {
  AWS.config.update({
    accessKeyId: config.awsAccessKey,
    secretAccessKey: config.awsSecretKey,
    region: "ap-south-1"
  });
  let s3 = new AWS.S3();
  var p = req.query.p;
  res.attachment(p);
  var options = {
    Bucket: BUCKET_NAME,
    Key: p,
  };
  console.log(p, "name");
  try {
    await s3.getObject(options).createReadStream().pipe(res);
  } catch (e) {
    console.log(e);
  }
});
This is the output I am getting when the file is available in the S3 bucket:
vid_kdc5stoqnrIjEkL9M.mp4 name
NoSuchKey: The specified key does not exist.
This is likely caused by invalid parameters being passed into the function.
To check for invalid parameters, double-check the strings that are being passed in. For the object, check the following:
Check the value of p and ensure it exactly matches the full object key.
Validate that the correct BUCKET_NAME is being used.
Make sure there are no trailing characters (such as /).
Perform any necessary decoding before passing parameters in.
If in doubt, use logging to output the exact value; also try the function with hard-coded values to validate that you can actually retrieve the objects.
For more information, take a look at the How can I troubleshoot the 404 "NoSuchKey" error from Amazon S3? page.
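The decoding advice above can be sketched as a small helper. This is a hypothetical function (not part of the asker's code) for normalizing a query-string value before using it as an S3 key, assuming an Express-style handler where the key arrives via req.query:

```javascript
// Hypothetical helper: normalize a query-string value before using it as an
// S3 object key. Keys can arrive URL-encoded and sometimes carry stray
// slashes or whitespace, any of which produces NoSuchKey.
function normalizeKey(raw) {
  const decoded = decodeURIComponent(raw); // "%2F" -> "/", "%20" -> " "
  return decoded.trim().replace(/^\/+/, ''); // strip leading slashes
}

console.log(normalizeKey('vid_kdc5stoqnrIjEkL9M.mp4')); // unchanged
console.log(normalizeKey('%2Fvideos%2Fclip%201.mp4'));  // "videos/clip 1.mp4"
```

With something like this in place, passing `Key: normalizeKey(req.query.p)` makes key mismatches much easier to spot in the logs.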
Having certain characters in the bucket name leads to this error. In your case, there is an underscore. Try renaming the file.
Also refer to this
S3 Bucket Naming Requirements Docs
Example from Docs:
The following example bucket names are not valid:
aws_example_bucket (contains underscores)
AwsExampleBucket (contains uppercase letters)
aws-example-bucket- (ends with a hyphen)
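Those rules can be approximated with a quick client-side check. This is a rough sketch, not an exhaustive validator (it does not reject IP-address-shaped names, for example):

```javascript
// Rough check against the S3 bucket naming rules quoted above: 3-63 chars,
// lowercase letters, digits, dots, and hyphens only, starting and ending
// with a letter or digit. Underscores and uppercase letters are rejected.
function looksLikeValidBucketName(name) {
  return /^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$/.test(name);
}

console.log(looksLikeValidBucketName('aws_example_bucket'));  // false (underscore)
console.log(looksLikeValidBucketName('AwsExampleBucket'));    // false (uppercase)
console.log(looksLikeValidBucketName('aws-example-bucket-')); // false (ends with hyphen)
console.log(looksLikeValidBucketName('aws-example-bucket'));  // true
```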
Related
I encountered weird behavior while trying to delete a file from an S3 bucket on DigitalOcean Spaces. I use aws-sdk and I follow the official example. However, the method doesn't delete the file, no error occurs, and the returned data object (which should contain the key of the deleted item) is empty. Below is the code:
import AWS from "aws-sdk";

export default async function handler(req, res) {
  const key = req.query.key;
  const spacesEndpoint = new AWS.Endpoint("ams3.digitaloceanspaces.com");
  const s3 = new AWS.S3({
    endpoint: spacesEndpoint,
    secretAccessKey: process.env.AWS_SECRET_KEY,
    accessKeyId: process.env.AWS_ACCESS_KEY,
  });
  const params = {
    Bucket: process.env.BUCKET_NAME,
    Key: key,
  };
  s3.deleteObject(params, function (error, data) {
    if (error) {
      res.status({ error: "Something went wrong" });
    }
    console.log("Successfully deleted file", data);
  });
}
The environment variables are correct, and the other upload-file method (not shown above) works just fine.
The key passed to the params has format 'folder/file.ext' and it exists for sure.
What is returned from the callback is log: 'Successfully deleted file {}'
Any ideas what is happening here?
Please make sure you don't have any spaces in your key (filename); otherwise, it won't work. I was facing a similar problem when I noticed there was a space in the filename I was trying to delete.
For example, if we try to upload a file named "new file.jpeg", DigitalOcean Spaces will save it as "new%20file.jpeg", and when you try to delete a file named "new file.jpeg", it won't find any file with that name. That's why we need to trim any whitespace or spaces between the words in the file name.
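One way to avoid the mismatch entirely is to sanitize file names before upload, so the key you later pass to deleteObject is exactly what was stored. A minimal sketch (the hyphen replacement is just one possible policy):

```javascript
// Replace runs of whitespace with single hyphens so the stored key never
// needs URL-encoding. Apply the same function at upload and delete time
// so both sides agree on the key.
function sanitizeKey(key) {
  return key.trim().replace(/\s+/g, '-');
}

console.log(sanitizeKey('new file.jpeg')); // "new-file.jpeg"
```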
Hope it helps.
I'm trying to access an S3 bucket with nodejs using aws-sdk.
When I call the s3.getSignedUrl method and open the URL it provides, I get a "NoSuchKey" error:
<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
<Key>{MY_BUCKET_NAME}/{REQUESTED_FILENAME}</Key>
My theory is that the request path I'm passing is wrong. Comparing my request:
{BUCKET_NAME}.s3.{BUCKET_REGION}.amazonaws.com/{BUCKET_NAME}/{KEY}
With the url created from the AWS console:
{BUCKET_NAME}.s3.{BUCKET_REGION}.amazonaws.com/{KEY}
Why is aws-sdk adding the "{BUCKET_NAME}" at the end?
NodeJS code:
// s3 instance setup
const s3 = new AWS.S3({
  region: BUCKET_REGION,
  endpoint: BUCKET_ENDPOINT, // {MY_BUCKET_NAME}.s3.{REGION}.amazonaws.com
  s3ForcePathStyle: true,
  signatureVersion: "v4",
});

const getSignedUrlFromS3 = async (filename) => {
  const s3Params = {
    Bucket: BUCKET_NAME,
    Key: filename,
    Expires: 60,
  };
  const signedUrl = await s3.getSignedUrl("getObject", s3Params);
  return { name: filename, url: signedUrl };
};
The SDK adds the bucket name in the path because you specifically ask it to:
s3ForcePathStyle: true,
However, according to your comment, you use the bucket name in the endpoint already ("I have my endpoint as {MY_BUCKET_NAME}.s3.{REGION}.amazonaws.com") so your endpoint isn't meant to use path style...
Path style means using s3.amazonaws.com/bucket/key instead of bucket.s3.amazonaws.com/key. Forcing path style with an endpoint that actually already contains the bucket name ends up with bucket.s3.amazonaws.com/bucket/key which is interpreted as key bucket/key instead of key.
The fix should be to disable s3ForcePathStyle and instead to set s3BucketEndpoint: true because you specified an endpoint for an individual bucket.
However, in my opinion it's unnecessary to specify an endpoint in the first place - just let the SDK handle these things for you! I'd remove both s3ForcePathStyle and endpoint (then s3BucketEndpoint isn't needed either).
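For illustration, the simplified setup could look like the sketch below. The options object is built by a plain function so it is easy to inspect; the region value is a placeholder:

```javascript
// Build the S3 client options the answer recommends: region only, with no
// `endpoint`, no `s3ForcePathStyle`, and no `s3BucketEndpoint`. The SDK then
// derives the virtual-hosted URL {bucket}.s3.{region}.amazonaws.com itself.
function makeS3Options(region) {
  return {
    region: region,
    signatureVersion: 'v4',
  };
}

// Usage (sketch): const s3 = new AWS.S3(makeS3Options(BUCKET_REGION));
console.log(makeS3Options('ap-south-1'));
```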
I'm searching for a way to check whether an object exists in my AWS S3 bucket in Node.js, without listing all my objects (~1500) and checking the prefix of each one, but I cannot find how.
The format is like that:
<prefix I want to search>.<random string>/
Ex:
tutturuuu.dhbsfd7z63hd7833u/
Because you don't know the entire object Key, you will need to perform a list and filter by prefix. The AWS nodejs sdk provides such a method. Here is an example:
s3.listObjectsV2({
  Bucket: 'yourBucket',
  MaxKeys: 1,
  Prefix: 'tutturuuu.'
}, function (err, data) {
  if (err) throw err;
  const objectExists = data.Contents.length > 0;
  console.log(objectExists);
});
Note that it is important to use MaxKeys in order to reduce network usage. If more than one object has the prefix, then you will need to return everything and decide which one you need.
This API call will return metadata only. After you have the full key you can use getObject to retrieve the object contents.
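The same check can be wrapped in a promise-based helper (a sketch assuming aws-sdk v2's `.promise()`; the client is passed in as a parameter so the function stays easy to test):

```javascript
// Resolve to true if at least one object in `bucket` has a key starting
// with `prefix`. MaxKeys: 1 keeps the response as small as possible.
async function prefixExists(s3, bucket, prefix) {
  const data = await s3
    .listObjectsV2({ Bucket: bucket, MaxKeys: 1, Prefix: prefix })
    .promise();
  return data.Contents.length > 0;
}

// Usage (sketch): const exists = await prefixExists(s3, 'yourBucket', 'tutturuuu.');
```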
I am using AWS Transcribe to get the text of a video using Node.js. I can specify a particular destination bucket in the params, but not a particular folder. Can anyone help me with this? This is my code:
var params = {
  LanguageCode: "en-US",
  Media: { /* required */
    MediaFileUri: "s3://bucket-name/public/events/545/videoplayback1.mp4"
  },
  TranscriptionJobName: 'STRING_VALUE', /* required */
  MediaFormat: "mp4", // mp3 | mp4 | wav | flac
  OutputBucketName: 'test-rekognition',
};

transcribeservice.startTranscriptionJob(params, function (err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});
I have specified the destination bucket name in OutputBucketName field. But how to specify a particular folder ?
I would recommend creating a designated S3 bucket for Transcribe output and adding a trigger on 'Object create (All)' with a lambda function that responds to it. Essentially, as soon as a new object is added to your S3 bucket, a lambda function is invoked to move/process that output by placing it in a specific 'folder' of your choice.
This doesn't solve the API issue, but I hope it serves as a good workaround; you could look at this article ( https://linuxacademy.com/hands-on-lab/0e291fc6-52a4-4ed3-ad65-8cf2fd84e0df/ ) as a guide.
Have a good one.
Very late to the party, but I just had the same issue.
It appears that Amazon has added a new parameter called OutputKey that allows you to save your data in a specific folder of your bucket:
You can use output keys to specify the Amazon S3 prefix and file name of the transcription output. For example, specifying the Amazon S3 prefix "folder1/folder2/" as an output key would lead to the output being stored as "folder1/folder2/your-transcription-job-name.json". If you specify "my-other-job-name.json" as the output key, the object key is changed to "my-other-job-name.json". You can use an output key to change both the prefix and the file name, for example "folder/my-other-job-name.json".
Just make sure to put 'folder/' as the OutputKey (with the '/' symbol at the end); otherwise it will be interpreted as a name for your file rather than the folder in which to store it.
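A sketch of the earlier question's params with OutputKey added (the folder name here is hypothetical, and the helper is just for illustration):

```javascript
// Build StartTranscriptionJob params whose output lands under a prefix.
// The trailing '/' is what makes OutputKey act as a folder rather than a
// file name: the result is stored as <folder>/<job-name>.json.
function buildTranscribeParams(jobName, mediaUri, bucket, folder) {
  return {
    TranscriptionJobName: jobName,
    LanguageCode: 'en-US',
    MediaFormat: 'mp4',
    Media: { MediaFileUri: mediaUri },
    OutputBucketName: bucket,
    OutputKey: folder.replace(/\/?$/, '/'), // add the trailing '/' if missing
  };
}

const params = buildTranscribeParams(
  'my-job', 's3://bucket-name/public/events/545/videoplayback1.mp4',
  'test-rekognition', 'transcripts');
console.log(params.OutputKey); // "transcripts/"
```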
Hope it'll be useful to someone.
I'm using a signed URL to upload a file from my React application to an S3 bucket. I specify the path as part of the Key, and the folders are getting created properly:
let params = {
  Bucket: vars.aws.bucket,
  Key: `${req.body.path}/${req.body.fileName}`,
  Expires: 5000,
  ACL: 'public-read-write',
  ContentType: req.body.fileType,
};

s3.getSignedUrl('putObject', params, (err, data) => {...
However, when I use s3.listObjects, the folders that were created this way are not returned. Here is my Node API code:
const getFiles = (req, res) => {
  let params = {
    s3Params: {
      Bucket: vars.aws.bucket,
      Delimiter: '',
      Prefix: req.body.path
    }
  };
  s3.listObjects(params.s3Params, function (err, data) {
    if (err) {
      res.status(401).json(err);
    } else {
      res.status(200).json(data);
    }
  });
};
The folders that are getting created through the portal are showing in the returned object properly. Is there any attribute I need to set as part of generating the signed URL to make the folder recognized as an object?
I specify the path as part of my Key and the folders are getting created properly
Actually, they aren't.
Rule 1: The console displays a folder icon for the folder foo because one or more objects exists in the bucket with the prefix foo/.
The console appears to allow you to create "folders," but that isn't what's happening when you do that. If you create a folder named foo in the console, what actually happens is that an ordinary object, zero bytes in length, with the name foo/ is created. Because this now means there is at least one object that exists in the bucket with the prefix foo/, a folder is displayed in the console (see Rule 1).
But that folder is not really a folder. It's just one feature of the console interacting with another feature of the console. You can actually delete the foo/ object using the API/SDK and nothing appears to happen, because the console still shows the folder as long as at least one object remains in the bucket with the prefix foo/. (Deleting the folder in the console sends delete requests for all objects with that prefix. Deleting the dummy object via the API does not.)
In short, the behavior you are observing is normal.
If you set the delimiter to /, then the listObjects response will include CommonPrefixes -- and this is where you should be looking if you want to see "folders." Objects ending with / are just the dummy objects the console creates. CommonPrefixes does not depend on these.
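To make the CommonPrefixes point concrete, here is a sketch of how you might pull "folder" names out of a listObjects response when Delimiter is '/'. The sample response below is hypothetical; its shape follows aws-sdk v2's listObjects output:

```javascript
// Extract "folder" names from a listObjects response made with
// Delimiter: '/'. Each CommonPrefixes entry looks like { Prefix: 'foo/' }.
function foldersFromListing(data) {
  return (data.CommonPrefixes || []).map((p) => p.Prefix);
}

// Hypothetical response for a bucket containing root.txt, images/..., videos/...
const sampleResponse = {
  Contents: [{ Key: 'root.txt' }],
  CommonPrefixes: [{ Prefix: 'images/' }, { Prefix: 'videos/' }],
};
console.log(foldersFromListing(sampleResponse)); // [ 'images/', 'videos/' ]
```

Note that this lists folders whether or not the console's zero-byte dummy objects exist, which is exactly the point of using CommonPrefixes.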