I am trying to copy a file from one AWS S3 bucket to another using Node. The problem is that if the file name doesn't have any white space (for example: "abc.csv"), it works fine.
But if the file I want to copy has white space in its name (for example: "abc xyz.csv"), it throws the error below.
"The specified key does not exist."
"NoSuchKey: The specified key does not exist.
at Request.extractError (d:\Projects\Other\testproject\s3filetoarchieve\node_modules\aws-sdk\lib\services\s3.js:577:35)
Below is the code provided.
return Promise.each(files, file => {
  var params = {
    Bucket: process.env.CR_S3_BUCKET_NAME,
    CopySource: `/${ process.env.CR_S3_BUCKET_NAME }/${ prefix }${ file.name }`,
    Key: `${ archieveFolder }${ file.name }`
  };
  console.log(params);
  return new Promise((resolve, reject) => {
    s3bucket.copyObject(params, function(err, data) {
      if (err) {
        console.log(err, err.stack);
        reject(err); // settle the promise so Promise.each can continue or fail
      } else {
        console.log(data);
        resolve(data);
      }
    });
  });
}).then(result => {
  debugger;
});
Any early help would be highly appreciated. Thank you.
I think the problem is exactly that space in the filename.
S3 keys must be URL-encoded, as they need to be accessible in URL form.
There are some packages that help you with URL formatting, like speakingUrl,
or you can try writing something of your own, maybe simply replacing whitespace (\s) with dashes (_ or -) if you want to keep the names friendly.
If you don't mind about that, you can simply use encodeURIComponent(file.name).
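For example, a minimal sketch of the copy above with the source encoded (same variables as in the question; note that encodeURIComponent also escapes "/", so apply it to the file name only, not the whole prefix):
var params = {
  Bucket: process.env.CR_S3_BUCKET_NAME,
  // CopySource must be URL-encoded; the destination Key can stay as-is
  CopySource: `/${ process.env.CR_S3_BUCKET_NAME }/${ prefix }${ encodeURIComponent(file.name) }`,
  Key: `${ archieveFolder }${ file.name }`
};
s3bucket.copyObject(params, function(err, data) {
  if (err) console.log(err, err.stack);
  else console.log(data);
});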
Hope it helps!
This is my first Stack post, sorry if it's a little blurry. :/
So basically I have an Angular project with Firestore behind it. I have a cloud function which generates an .xlsx file and uploads it to my Firebase Storage.
const path = 'hellothere/excels';
return workBook.xlsx.writeFile(`/tmp/myExcel.xlsx`).then(() => {
  // note: the path casing must match the file written above
  return storageFb.upload(`/tmp/myExcel.xlsx`, {
    destination: path + '/myExcel.xlsx',
  });
}).then(() => path);
Where storageFb is the bucket of my storage.
Actually it's working: it uploads my .xlsx file under /hellothere/excels/ with the name myExcel.xlsx. But when I download it (via the admin panel or my Angular client), it is named hellothere_excels_myExcel.xlsx.
Here is my client code:
this.fireStorage.ref('hellothere/excels/myExcel.xlsx').getDownloadURL().subscribe((url) => {
  window.open(url, '_blank');
});
return Promise.resolve();
I know the code is messy, but I'm testing every solution I can find, so I'll clean it up afterwards.
(Screenshots omitted: the admin panel path and the downloaded file name.)
So I'm kind of stuck, since I don't know why those files won't download with just the 'myExcel' name.
If anyone has a clue, you'll save my week ahah! Thanks!
You need to set the content disposition to define the filename. Try this:
const path = 'hellothere/excels';
return workBook.xlsx.writeFile(`/tmp/myExcel.xlsx`).then(() => {
  return storageFb.upload(`/tmp/myExcel.xlsx`, {
    destination: path + '/myExcel.xlsx',
    contentDisposition: 'filename=myExcel.xlsx'
  });
}).then(() => path);
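If the top-level option is not picked up, note that the Firebase Admin SDK uses @google-cloud/storage underneath, where object headers such as the disposition are normally set through the metadata upload option. A hedged variant of the same call, assuming that API:
return storageFb.upload(`/tmp/myExcel.xlsx`, {
  destination: path + '/myExcel.xlsx',
  // assumption: the header is set via metadata, as in @google-cloud/storage
  metadata: {
    contentDisposition: 'attachment; filename="myExcel.xlsx"'
  }
});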
I encountered weird behavior while trying to delete a file from an S3 bucket on DigitalOcean Spaces. I use aws-sdk and I follow the official example. However, the method doesn't delete the file, no error occurs, and the returned data object (which should be the key of the deleted item) is empty. Below is the code:
import AWS from "aws-sdk";

export default async function handler(req, res) {
  const key = req.query.key;
  const spacesEndpoint = new AWS.Endpoint("ams3.digitaloceanspaces.com");
  const s3 = new AWS.S3({
    endpoint: spacesEndpoint,
    secretAccessKey: process.env.AWS_SECRET_KEY,
    accessKeyId: process.env.AWS_ACCESS_KEY,
  });
  const params = {
    Bucket: process.env.BUCKET_NAME,
    Key: key,
  };
  s3.deleteObject(params, function (error, data) {
    if (error) {
      // return so the success log below doesn't also run
      return res.status(500).json({ error: "Something went wrong" });
    }
    // note: S3's DeleteObject has no response body, so data is {} even on success
    console.log("Successfully deleted file", data);
    res.status(200).json({ deleted: key });
  });
}
The environment variables are correct, and the other upload-file method (not shown above) works just fine.
The key passed to the params has the format 'folder/file.ext' and it exists for sure.
What is returned from the callback is the log: 'Successfully deleted file {}'
Any ideas what is happening here?
Please make sure you don't have any spaces in your key (filename); otherwise it won't work. I was facing a similar problem until I saw there was a space in the filename I was trying to delete.
For example, if we upload a file named "new file.jpeg", DigitalOcean Spaces saves it as "new%20file.jpeg", and when you try to delete a file named "new file.jpeg" it won't find any file with that name. That's why we need to trim any whitespace between the words in the file name, or address the object by its percent-encoded key.
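A minimal sketch of the second option, assuming (as described above) that the stored key literally contains the percent-encoded spaces:
// match the percent-encoded name the spaced file was stored under
const safeKey = key.replace(/ /g, "%20");
s3.deleteObject({ Bucket: process.env.BUCKET_NAME, Key: safeKey }, function (error, data) {
  if (error) return console.log(error);
  console.log("Deleted", safeKey);
});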
Hope it helps.
I have stored video files in an S3 bucket and now I want to serve the files to clients through an API. Here is my code for it:
app.get('/vid', (req, res) => {
  AWS.config.update({
    accessKeyId: config.awsAccessKey,
    secretAccessKey: config.awsSecretKey,
    region: "ap-south-1"
  });
  let s3 = new AWS.S3();
  var p = req.query.p;
  res.attachment(p);
  var options = {
    Bucket: BUCKET_NAME,
    Key: p,
  };
  console.log(p, "name");
  // a try/catch does not catch stream errors; listen on the stream instead
  s3.getObject(options)
    .createReadStream()
    .on('error', (e) => console.log(e))
    .pipe(res);
});
This is the output I am getting when this file is available in the S3 bucket:
vid_kdc5stoqnrIjEkL9M.mp4 name
NoSuchKey: The specified key does not exist.
This is likely caused by invalid parameters being passed into the function.
To check for invalid parameters you should double-check the strings that are being passed in. For the object, check the following:
Check the value of p; ensure it is exactly the same as the full object key.
Validate that the correct BUCKET_NAME is being used.
No trailing characters (such as /).
Perform any necessary decoding before passing parameters in.
If in doubt, use logging to output the exact value; also, to test the function, try it with hard-coded values to validate that you can actually retrieve the objects.
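For example, a quick hard-coded check (the bucket name here is a placeholder, and the key is taken from the question's log):
// hypothetical test values; replace with a key visible in the S3 console
var testOptions = {
  Bucket: "my-test-bucket",
  Key: "vid_kdc5stoqnrIjEkL9M.mp4",
};
s3.headObject(testOptions, function (err, data) {
  if (err) console.log("Key not found:", err.code);
  else console.log("Key exists:", data.ContentLength, "bytes");
});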
For more information take a look at the How can I troubleshoot the 404 "NoSuchKey" error from Amazon S3? page.
Having certain characters in the bucket name leads to this error.
In your case, there is an underscore. Try renaming the file.
Also refer to the S3 Bucket Naming Requirements docs.
Example from the docs:
The following example bucket names are not valid:
aws_example_bucket (contains underscores)
AwsExampleBucket (contains uppercase letters)
aws-example-bucket- (ends with a hyphen)
I was having trouble with this, though it is probably a rookie issue. I want to store a file at ../images/icon.png as a File object in my database. I had trouble accessing the actual data and storing it properly. The documentation for Parse.File says you can provide the data as:
1. an Array of byte value Numbers, or
2. an Object like { base64: "..." } with a base64-encoded String, or
3. a File object selected with a file upload control,
but I couldn't figure out how to actually do this.
What I ended up doing was:
let iconFile = undefined;
const filePath = path.resolve(__dirname, '..', 'images/icon.png');
// read the image as a base64 string and wrap it for Parse
fs.readFile(filePath, 'base64', function(err, data) {
  if (err) {
    console.log(err);
  } else {
    // note: this callback runs asynchronously, so iconFile is only set once the read completes
    iconFile = new Parse.File('icon', { base64: data });
  }
});
I was getting errors where the path wasn't pointing to the image correctly, so I used Node's path module (as in require('path')) to get it to point correctly.
This code should work for any file type, as far as I can tell.
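As a hedged follow-up: once the file object exists, you would typically attach it to an object and save it. Saving a Parse.File this way is standard Parse JS SDK usage, though the 'Icon' class and 'image' field below are made-up examples:
// hypothetical class and field names, for illustration only
const icon = new Parse.Object('Icon');
icon.set('image', iconFile);
icon.save().then(
  (obj) => console.log('saved', obj.id),
  (err) => console.log(err)
);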
I have a bucket on Google Cloud Platform where part of my application adds small text files with unique names (no extension).
A second app needs to retrieve individual text files (only one at a time) for insertion into a template.
I cannot find the correct API call for this.
Configuration is as required:
var gcloud = require('gcloud');
var gcs = gcloud.storage({
  projectId: settings.bucketName,
  keyFilename: settings.bucketKeyfile
});
var textBucket = gcs.bucket(settings.bucketTitle);
Saving to the bucket works well:
textBucket.upload(fileLocation, function(err, file) {
  if (err) {
    console.log("File not uploaded: " + err);
  } else {
    // console.log("File uploaded: " + file);
  }
});
The following seems logical but returns only metadata, not the actual file contents, in the callback:
textBucket.get(fileName, function(err, file) {
  if (err) {
    console.log("File not retrieved: " + err);
  } else {
    callback(file);
  }
});
Probably no surprise this doesn't work, since it's not actually in the official documentation; but then again, neither is a simple async function which returns a document you ask for.
The method get on a Bucket object is documented here: https://googlecloudplatform.github.io/gcloud-node/#/docs/v0.29.0/storage/bucket?method=get
If you want to simply download the file into memory, try the method download on a File object: https://googlecloudplatform.github.io/gcloud-node/#/docs/v0.29.0/storage/file?method=download. You can also use createReadStream if using a stream workflow.
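For example, a minimal sketch following the download method linked above (fileName and the callback shape are taken from the question; contents arrives as a Buffer):
var remoteFile = textBucket.file(fileName);
// downloads the object into memory and hands back its contents
remoteFile.download(function(err, contents) {
  if (err) {
    console.log("File not retrieved: " + err);
  } else {
    callback(contents.toString());
  }
});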
If you have ideas for improving the docs, it would be great if you opened an issue on https://github.com/googlecloudplatform/gcloud-node so we can make it easier for the next person.