Upload file onto AWS S3 with specific path in NodeJS

I've been taking a crack at uploading files onto S3 via NodeJS, but with a specific path where they have to be stored.
return s3fsImpl.writeFile(file_name.originalFilename, stream).then(function() {
  fs.unlink(file_name.path, function(err) {
    if (err) {
      console.error(err);
    } else { /** success **/ }
  });
});
I'm not sure how to give a path like /project_name/file_name.
I have been following this tutorial

In this scenario you are using a stream as the target. When you created that stream you should have specified the path at that point.
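If the goal is simply to store the object under a folder-like prefix, one approach (a sketch, assuming s3fsImpl comes from the s3fs module and is already bound to your bucket) is to include the prefix in the name you pass to writeFile. S3 has no real directories; the key itself carries the "path":

// Sketch: prepend the desired "folder" to the S3 key.
// "project_name/" is illustrative; use whatever prefix you need.
var key = 'project_name/' + file_name.originalFilename;

return s3fsImpl.writeFile(key, stream).then(function () {
  fs.unlink(file_name.path, function (err) {
    if (err) console.error(err);
  });
});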

Related

How to download zip file from server to client (nodejs)

I have created a zip file on the server side, and I would like to pass it to the client side so that I can put it into a new Blob() and download it with the saveAs() function. How can I do that?
const blob = new Blob([res.file], { type: 'application/zip' });
saveAs(blob, res.filename);
I wrote code like this, but I can't produce the right type of buffer for the zip on the server.
How should I send the zip file so that the client side receives the right input type for the Blob constructor?
Once your zip is ready, you can serve the file using the res.download() method.
The snippet below will help you:
res.download('/report-12345.pdf', 'report.pdf', function (err) {
  if (err) {
    // Handle error, but keep in mind the response may be partially-sent
    // so check res.headersSent
  } else {
    // decrement a download credit, etc.
  }
})
You can read more details here
http://expressjs.com/en/5x/api.html#res.download
Hope that will help you :)
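On the client side you then just need to read the response as binary instead of text. A sketch using the Fetch API (the '/download/report' URL is hypothetical; point it at whatever route calls res.download):

// Sketch: request the zip and read the body as a Blob, then hand it to FileSaver's saveAs().
fetch('/download/report')
  .then(function (res) { return res.blob(); })   // binary body, no manual Blob conversion needed
  .then(function (blob) {
    saveAs(blob, 'report.zip');
  });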

How to get the thumbnail of base64 encoded video file in Nodejs?

I am developing a web application using Nodejs. I am using an Amazon S3 bucket to store files. What I am doing now is that when I upload a video file (mp4) to the S3 bucket, I get the thumbnail photo of the video file from a Lambda function. For fetching the thumbnail photo of the video file, I am using this package - https://www.npmjs.com/package/ffmpeg. I tested the package locally on my laptop and it is working.
Here is my code, tested on my laptop:
var ffmpeg = require('ffmpeg');
module.exports.createVideoThumbnail = function(req, res)
{
  try {
    var process = new ffmpeg('public/lalaland.mp4');
    process.then(function (video) {
      video.fnExtractFrameToJPG('public', {
        frame_rate : 1,
        number : 5,
        file_name : 'my_frame_%t_%s'
      }, function (error, files) {
        if (!error)
          console.log('Frames: ' + files);
        else
          console.log(error);
      });
    }, function (err) {
      console.log('Error: ' + err);
    });
  } catch (e) {
    console.log(e.code);
    console.log(e.msg);
  }
  res.json({ status : true , message: "Video thumbnail created." });
}
The above code works well. It gave me the thumbnail photos of the video file (mp4). Now, I am trying to use that code in the AWS Lambda function. The issue is that the above code uses a video file path as the parameter to fetch the thumbnails. In the Lambda function, I can only fetch the base64-encoded format of the file. I can get the id (S3 path) of the file, but I cannot use it as the parameter (file path) to fetch the thumbnails, as my S3 bucket does not allow public access.
So, what I tried to do was save the base64-encoded video file locally in the Lambda function project itself and then pass the file path as the parameter for fetching the thumbnails. But the issue was that the AWS Lambda function file system is read-only, so I cannot write any file to it. So what I am trying to do right now is to retrieve the thumbnails directly from the base64-encoded video file. How can I do it?
It looks like you are using the wrong file location.
/tmp/* is your writable location for temporary files and is limited to 512 MB.
Check out this tutorial, which does the same thing you are trying to do:
https://concrete5.co.jp/blog/creating-video-thumbnails-aws-lambda-your-s3-bucket
Lambda Docs:
https://docs.aws.amazon.com/lambda/latest/dg/limits.html
Ephemeral disk capacity ("/tmp" space) 512 MB
Hope it helps.
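Putting that together, a minimal sketch (videoBase64 is a hypothetical variable holding the base64 string you already receive in the Lambda) is to decode the payload into /tmp and point the ffmpeg package at that file:

// Sketch: decode the base64 video into Lambda's writable /tmp area,
// then extract frames exactly as in the local version.
var fs = require('fs');
var ffmpeg = require('ffmpeg');

var videoPath = '/tmp/input.mp4';
fs.writeFileSync(videoPath, Buffer.from(videoBase64, 'base64'));

new ffmpeg(videoPath).then(function (video) {
  video.fnExtractFrameToJPG('/tmp', {
    frame_rate : 1,
    number : 5,
    file_name : 'my_frame_%t_%s'
  }, function (error, files) {
    if (error) console.log(error);
    else console.log('Frames: ' + files);
  });
}, function (err) {
  console.log('Error: ' + err);
});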

Google Cloud Platform file to node server via gcloud

I have a bucket on Google Cloud Platform where part of my application adds small text files with unique names (no extension).
A second app needs to retrieve individual text files (only one at a time) for insertion into a template.
I cannot find the correct API call for this.
Configuration is as required:
var gcloud = require('gcloud');
var gcs = gcloud.storage({
  projectId: settings.bucketName,
  keyFilename: settings.bucketKeyfile
});
var textBucket = gcs.bucket(settings.bucketTitle);
Saving to the bucket works well:
textBucket.upload(fileLocation, function(err, file) {
  if (err) {
    console.log("File not uploaded: " + err);
  } else {
    // console.log("File uploaded: " + file);
  }
});
The following seems logical but returns only metadata and not the actual file for use in the callback:
textBucket.get(fileName, function(err, file) {
  if (err) {
    console.log("File not retrieved: " + err);
  } else {
    callback(file);
  }
});
It's probably no surprise this doesn't work, since it's not actually in the official documentation; but then again, neither is a simple async function that returns the document you ask for.
The method get on a Bucket object is documented here: https://googlecloudplatform.github.io/gcloud-node/#/docs/v0.29.0/storage/bucket?method=get
If you want to simply download the file into memory, try the method download on a File object: https://googlecloudplatform.github.io/gcloud-node/#/docs/v0.29.0/storage/file?method=download. You can also use createReadStream if using a stream workflow.
If you have ideas for improving the docs, it would be great if you opened an issue on https://github.com/googlecloudplatform/gcloud-node so we can make it easier for the next person.
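Following that advice, a sketch of the download approach (assuming fileName is the object's name inside the bucket) would be:

// Sketch: fetch the object's contents into memory via a File object.
textBucket.file(fileName).download(function (err, contents) {
  if (err) {
    console.log("File not retrieved: " + err);
  } else {
    // `contents` is a Buffer holding the text file's data.
    callback(contents.toString('utf8'));
  }
});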

Access uploaded image in Sails.js backend project

I am trying to do an upload and then access the image. The upload goes well, storing the image in assets/images, but when I try to access the image from the browser at http://localhost:1337/images/image-name.jpg it gives me a 404. I use Sails.js only for backend purposes (for the API), and the project was created with the --no-front-end option. My front end is on AngularJS.
My upload function:
avatarUpload: function(req, res) {
  req.file('avatar').upload({
    // don't allow the total upload size to exceed ~10MB
    maxBytes: 10000000,
    dirname: '../../assets/images'
  }, function whenDone(err, uploadedFiles) {
    console.log(uploadedFiles);
    if (err) {
      return res.negotiate(err);
    }
    // If no files were uploaded, respond with an error.
    if (uploadedFiles.length === 0) {
      return res.badRequest('No file was uploaded');
    }
    // Save the "fd" and the url where the avatar for a user can be accessed
    User
      .update(req.userId, {
        // Generate a unique URL where the avatar can be downloaded.
        avatarUrl: require('util').format('%s/user/avatar/%s', sails.getBaseUrl(), req.userId),
        // Grab the first file and use its `fd` (file descriptor)
        avatarFd: uploadedFiles[0].fd
      })
      .exec(function (err) {
        if (err) return res.negotiate(err);
        return res.ok();
      });
  });
}
I see the image in the assets/images folder - something like this - 54cd1fc5-89e8-477d-84e4-dd5fd048abc0.jpg
http://localhost:1337/assets/images/54cd1fc5-89e8-477d-84e4-dd5fd048abc0.jpg - gives 404
http://localhost:1337/images/54cd1fc5-89e8-477d-84e4-dd5fd048abc0.jpg - gives 404
This happens because the resources your application serves are not read directly from the assets directory but from the .tmp directory in the project root.
The assets are copied to the .tmp directory when sails is lifted, so anything added after the lift isn't present in .tmp.
What I usually do is upload to .tmp and copy the file to assets on completion. This way assets isn't polluted in case the upload fails for any reason.
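As a sketch of that approach (the .tmp/public/images and assets/images paths are assumptions; adjust them to your project layout), upload into .tmp first and copy to assets once the upload succeeds:

// Sketch: upload into .tmp (which Sails serves from), then copy the file
// into assets/images so it survives the next lift.
var fs = require('fs');
var path = require('path');

req.file('avatar').upload({
  maxBytes: 10000000,
  dirname: path.resolve(sails.config.appPath, '.tmp/public/images')
}, function whenDone(err, uploadedFiles) {
  if (err) return res.negotiate(err);
  if (uploadedFiles.length === 0) return res.badRequest('No file was uploaded');

  var uploadedPath = uploadedFiles[0].fd;
  var targetPath = path.resolve(sails.config.appPath, 'assets/images', path.basename(uploadedPath));

  // Copy from .tmp/public/images into assets/images.
  var read = fs.createReadStream(uploadedPath);
  var write = fs.createWriteStream(targetPath);
  read.on('error', res.negotiate);
  write.on('error', res.negotiate);
  write.on('finish', function () { return res.ok(); });
  read.pipe(write);
});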
Let us know if this works. Good luck!
Update
Found a relevant link for this.

Advice: flatiron, formidable and aws s3

I'm new to server-side programming with node.js. I'm putting together a tiny webapp with it right now and have the usual startup learning to do. The following piece of code WORKS. But I would love to know if it's more or less the right way to do a simple file upload from a form and throw it into AWS S3:
app.router.post('/form', { stream: true }, function () {
  var req = this.req,
      res = this.res,
      form = new formidable.IncomingForm();
  form
    .parse(req, function(err, fields, files) {
      console.log('Parsed file upload' + err);
      if (err) {
        res.end('error: Upload failed: ' + err);
      } else {
        var img = fs.readFileSync(files.image.path);
        var data = {
          Bucket: 'le-bucket',
          Key: files.image.name,
          Body: img
        };
        s3.client.putObject(data, function() {
          console.log("Successfully uploaded data to myBucket/myKey");
        });
        res.end('success: Uploaded file(s)');
      }
    });
});
Note: I had to turn buffer off in union / flatiron.plugins.http.
What I would like to learn is when to stream-load a file and when to sync-load it. It will be a really tiny webapp with little traffic.
If it's more or less good, then please consider this a token of working code, which I would also throw into a gist. It's not that easy to find documentation and working examples of this kind of stuff. I like flatiron a lot, but its small-module approach leads to lots of scattered docs and examples all over the net, let alone tutorials.
You should use a module other than formidable because, as far as I know, formidable does not have an S3 storage option, so you must save the files on your server before uploading them.
I would recommend you use multiparty.
Use this example to upload directly to S3 without saving the file locally on your server.
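A minimal sketch of that streaming approach (assuming an aws-sdk v2 S3 client named s3 and the same 'le-bucket' bucket as in the question; s3.upload accepts a readable stream as Body):

// Sketch: stream each uploaded part straight to S3 with multiparty,
// so the file is never written to the local filesystem.
var multiparty = require('multiparty');

app.router.post('/form', { stream: true }, function () {
  var req = this.req,
      res = this.res,
      form = new multiparty.Form();

  form.on('part', function (part) {
    if (!part.filename) return part.resume();   // skip non-file fields

    s3.upload({
      Bucket: 'le-bucket',
      Key: part.filename,
      Body: part                                // the part itself is a readable stream
    }, function (err) {
      if (err) return res.end('error: Upload failed: ' + err);
      res.end('success: Uploaded file(s)');
    });
  });

  form.parse(req);
});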
