How to generate a one-time download link in Node.js?

I would like to generate a one-time download link in Node.js and email it to the user so they can download the file. I want the link to expire after a while, say one day or one week for example. How can I do this using Node.js?
Thanks!
I can download the file using res.download, but that sends the file directly to the client and does not generate a download link.

This depends on where you are saving the file.
If you save the file on your own server in a static folder, you can expose it through a URL on your server.
This guide explains it under "uploading File to a static folder in the Server": https://www.bezkoder.com/node-js-express-file-upload/
Now, you also want the link to expire. This is more complicated, since you now have to store an expiry timestamp for each link in the database and make the link invalid once the allocated duration passes.
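A minimal sketch of that approach, assuming Express and an in-memory token store (a real app would keep the tokens and timestamps in a database); the file path and domain are illustrative:

const express = require("express");
const crypto = require("crypto");
const path = require("path");

const app = express();
const links = new Map(); // token -> { file, expiresAt }

// create a one-time link that expires after one day
app.post("/links", (req, res) => {
  const token = crypto.randomBytes(32).toString("hex");
  links.set(token, {
    file: path.join(__dirname, "files", "report.pdf"), // hypothetical file
    expiresAt: Date.now() + 24 * 60 * 60 * 1000,
  });
  // email this URL to the user instead of just returning it
  res.json({ url: "https://example.com/download/" + token });
});

app.get("/download/:token", (req, res) => {
  const entry = links.get(req.params.token);
  if (!entry || Date.now() > entry.expiresAt) {
    return res.status(410).send("Link expired");
  }
  links.delete(req.params.token); // one-time: invalidate on first use
  res.download(entry.file); // the same res.download, but behind a token check
});

app.listen(3000);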
Amazon S3 does all of this for you, so if it is possible to use it, you should do so instead of implementing everything yourself.
In AWS S3, you can store your file and generate pre-signed links that expire after a chosen duration.
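A minimal sketch with the AWS SDK v3, assuming the region, bucket, and key (note that S3 caps pre-signed URLs at 7 days):

const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");

const s3 = new S3Client({ region: "us-east-1" }); // assumed region

// returns a link that stops working after 24 hours
async function createDownloadLink(bucket, key) {
  const command = new GetObjectCommand({ Bucket: bucket, Key: key });
  return getSignedUrl(s3, command, { expiresIn: 24 * 60 * 60 });
}

You can email the returned URL directly; S3 rejects requests after the expiry, so no bookkeeping is needed on your side.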

Related

Script in server saving locally

I wrote a script that uses the Slack API to parse AWS S3 files, looking for strings or samples. As this is in testing, I'm using my local machine and ngrok to forward localhost traffic.
The thing is that the generated files are getting stored on my machine, and they will be stored on the server once the script is ready for production.
Ideally, I'd like to avoid users needing to grab files from the server. Do you think it's possible to store them directly on the user's local machine?
No. Slack does not allow you to access the local machine of their users through a Slack app / API.
Solution 1: Download via browser
The easiest solution would be to offer a direct download link in a Slack message, e.g. via a link button. Once the user clicks it, they are prompted to download the file to their local machine.
To enable downloading via browser you need to set appropriate headers and send the file contents to the browser.
One approach is to have a helper script do the actual download and include a link to that script in the link button (you may also want to include parameters in the link that define which file is downloaded).
The helper script then does the following:
Fetch the file to be downloaded (e.g. a PNG image)
Set headers to enable downloading via browser
Send the file to the browser
Here is an example in PHP:
<?php
// name of the file to send (hypothetical example file)
$filename = "demo.png";
// read the file contents into memory
$file = file_get_contents($filename);
// tell the browser to download the response instead of rendering it
header('Content-Disposition: attachment; filename="' . $filename . '"');
header('Content-Type: image/png');
header('Content-Length: ' . strlen($file));
// send the file contents and stop
echo $file;
die();
For more info on download headers, see also this answer on SO.
Solution 2: Upload to Slack
Alternatively, you could upload the file to the user's Slack workspace via the files.upload API method. That way the user does not need to download anything, and you can remove the file from your server after your app has finished processing.
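A minimal sketch of that upload using the official @slack/web-api package, assuming a bot token in SLACK_TOKEN and an illustrative channel ID and filename:

const fs = require("fs");
const { WebClient } = require("@slack/web-api");

const slack = new WebClient(process.env.SLACK_TOKEN); // assumed bot token

async function shareFile(channelId) {
  // uploads the file into the workspace so the user never leaves Slack
  await slack.files.upload({
    channels: channelId,
    file: fs.createReadStream("demo.png"), // hypothetical file
    filename: "demo.png",
    initial_comment: "Here is your processed file",
  });
}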

NodeJS, how to handle image uploading with MongoDB?

I would like to know the best way to handle image uploading and saving the reference to the database. What I'm mostly interested in is which order to do the process in.
Should you upload the images first in the front-end (say Cloudinary), and then call the API with result links to the images and save it to the database?
Or should you upload the images to your own server first, push them to storage from the back-end, and save the reference afterwards?
OR, should you save the record in the database first and then update it once the images are uploaded?
It really depends on the resources, timeline, and number of images you need to upload daily.
Basically, if you have very few images to upload, you can upload each image to your server and then upload it to whatever cloud storage (S3, Cloudinary, ...) you are using. This is very easy to implement (you can find code snippets all over the internet), and you can securely keep the secret keys/credentials for your cloud platform on the server side.
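A minimal sketch of that server-side flow, assuming Express, Multer, and the Cloudinary Node SDK (which reads its credentials from the CLOUDINARY_URL environment variable); routes and paths are illustrative:

const express = require("express");
const multer = require("multer");
const cloudinary = require("cloudinary").v2; // credentials stay on the server

const app = express();
const upload = multer({ dest: "uploads/" }); // temporary local copy

app.post("/upload", upload.single("image"), async (req, res) => {
  // re-upload the received file to cloud storage
  const result = await cloudinary.uploader.upload(req.file.path);
  res.json({ url: result.secure_url }); // save this URL in your database
});

app.listen(3000);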
But in my opinion, the best way of doing this is something like the following. I am taking user registration as an example:
Make a server call to get temporary credentials to upload files to the cloud (generally, all providers give this functionality, e.g. STS or signed URLs in AWS); see the sketch after these steps.
The user fills out the form and selects the image on the client side. When the user clicks the submit button, make one call to save the user in the database and start the upload with those credentials. If possible, keep a predictable upload path, like /users/:userId or something similar; this highly depends on your use case.
When the upload finishes, make a server call for acknowledgment and store a flag in the database.
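A minimal sketch of step 1 with AWS SDK v3 and Express; a pre-signed PUT URL plays the role of the temporary credentials, and the bucket name and path scheme are illustrative:

const express = require("express");
const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");

const app = express();
const s3 = new S3Client({ region: "us-east-1" }); // assumed region

// client asks for a short-lived upload URL before submitting the form
app.get("/users/:userId/upload-url", async (req, res) => {
  const command = new PutObjectCommand({
    Bucket: "my-app-uploads",                          // hypothetical bucket
    Key: "users/" + req.params.userId + "/avatar.jpg", // predictable path
  });
  const url = await getSignedUrl(s3, command, { expiresIn: 15 * 60 }); // 15 min
  res.json({ url });
});

app.listen(3000);

The client then uploads straight to storage with fetch(url, { method: "PUT", body: file }) and calls your acknowledgment endpoint when it succeeds.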
The advantages of this approach are:
You completely offload your server from handling file operations, which are pretty heavy and I/O-blocking, and you distribute that load across all clients.
If you want to post-process the files after upload, you can easily integrate this with serverless platforms and offload that work there too.
You can easily provide a retry mechanism to your users in case the file upload fails; they won't need to refill the data, just upload the image/file again.
You don't need to expose the storage URL directly to the client for file upload, as you are using temporary credentials.
If the significance of the images in your app is high, then ideally you should not complete the transaction until the image is saved. The approach: create the object in your code that you will eventually insert into MongoDB, start the image upload to the cloud, and add the resulting link to this object. Finally, insert this object into MongoDB in one go; do not make repeated calls. If anything fails before that, raise an error and catch the exception.
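A minimal sketch of that single-insert flow; uploadToCloud() is a hypothetical helper that uploads the image and resolves to its public URL:

const { MongoClient } = require("mongodb");

async function registerUser(name, imageBuffer) {
  // hypothetical helper; if the upload throws, the insert never happens
  const imageUrl = await uploadToCloud(imageBuffer);

  const client = new MongoClient("mongodb://localhost:27017");
  try {
    await client.connect();
    // one insert with the link already attached: no repeated calls
    await client.db("app").collection("users").insertOne({ name, imageUrl });
  } finally {
    await client.close();
  }
}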
There can be many answers.
If you are working with big files, greater than 16 MB, go with GridFS and Multer (MongoDB documents are capped at 16 MB, so larger files must be stored in chunks).
If your files are actually less than 16 MB, try using this converter that changes a JPEG/PNG image into a format that can be saved directly to MongoDB; you can see it as an easy alternative to GridFS.
Please check this GitHub repo for more details.
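A minimal sketch of the GridFS route with Multer and the official mongodb driver; database and route names are illustrative:

const express = require("express");
const multer = require("multer");
const { MongoClient, GridFSBucket } = require("mongodb");

const app = express();
const upload = multer({ storage: multer.memoryStorage() }); // keep the file in memory

async function main() {
  const client = await MongoClient.connect("mongodb://localhost:27017");
  const bucket = new GridFSBucket(client.db("app"));

  app.post("/images", upload.single("image"), (req, res) => {
    // GridFS splits files larger than 16 MB into chunked documents
    const stream = bucket.openUploadStream(req.file.originalname, {
      contentType: req.file.mimetype,
    });
    stream.end(req.file.buffer);
    stream.on("finish", () => res.json({ id: stream.id })); // store this id as the reference
    stream.on("error", () => res.status(500).send("upload failed"));
  });

  app.listen(3000);
}
main();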

Angular 5 Download file from dropbox and Upload it to AWS s3

I am currently using the Dropbox file picker to download the file. I get the download link after selecting a file with the Dropbox picker.
Is there any possibility to save it into a byte stream in the browser and upload it to the server (Node.js) using an HTTP POST call?
Or is there any alternative to this scenario?
Any help would be appreciated.
Instead of downloading and re-uploading the file in the browser, I would process this step on the server side.
You can use the Dropbox and S3 SDKs and follow the steps below:
Make a call to the server that returns the IDs of the files available in Dropbox.
Let the user select a file in the Angular app and send the selected file's resource identifier back to the server.
Download the file from Dropbox and then re-upload it to S3 on the server side (see the sketch after these steps).
Display the result/status back to the user.
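A minimal sketch of that download/re-upload step, assuming Node 18+ (for the global fetch) and that the picker returned a direct download link; bucket and key are illustrative:

const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");

const s3 = new S3Client({ region: "us-east-1" }); // assumed region

async function transferToS3(dropboxUrl, key) {
  const response = await fetch(dropboxUrl);             // download from Dropbox
  const body = Buffer.from(await response.arrayBuffer());
  await s3.send(new PutObjectCommand({
    Bucket: "my-upload-bucket",                         // hypothetical bucket
    Key: key,
    Body: body,
  }));
}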
Is there any reason you want this to be done in the frontend?

What are the possible ways to transfer a file of more than 100MB generated via shell script, to any URL or server?

I have a shell script that generates an Excel file of more than 100 MB. Now I want to transfer, or rather upload, the file to a URL of an online storage server. This server generates a URL for the file after uploading.
The question is: if we are able to upload the file using cURL to a given URL, how do we get the generated URL from that web page?
The URL generated after uploading the file is dynamic (Dropbox-like storage).
If it is not possible to get that URL, then how else can such a big file be transferred?
Note: this is a kind of automation, so please answer keeping automation in mind.
Thank you in advance.

Amazon S3 Browser Based Upload - Prevent Overwrites

We are using Amazon S3 for images on our website, and users upload the images/files directly to S3 through our website. In our policy file we ensure it "begins-with" "upload/". Anyone is able to see the full URLs of these images, since they are publicly readable after they are uploaded. Could a hacker use the policy data in the JavaScript and the URL of the image to overwrite these images with their own data? I see no way to prevent overwrites after uploading once. The only solution I've seen is to copy/rename the file to a folder that is not publicly writable, but that requires downloading the image and then uploading it again to S3 (since Amazon can't really rename in place).
If I understood you correctly, the images are uploaded to Amazon S3 via your server application.
So only your application has Amazon S3 write permission; clients can upload images only through your application (which stores them on S3). A hacker could only force your application to upload an image with the same name and overwrite the original one.
How do you handle the situation when a user uploads an image with a name that already exists in your S3 storage?
Consider the following actions:
The first user uploads an image some-name.jpg
Your app stores that image in S3 under the name upload-some-name.jpg
A second user uploads another image called some-name.jpg
Will your application overwrite the original one stored in S3?
I think the question implies the content goes directly to S3 from the browser, using a policy file supplied by the server. If that policy file sets an expiration, for example one day in the future, then the policy becomes invalid after that. Additionally, you can set a starts-with condition on the writable path.
So the only way a hacker could use your policy files to maliciously overwrite files is to obtain a fresh policy file and then overwrite files only under the path it specifies. But by that point you will have had the chance to refuse to provide the policy file, since I assume you hand it out only after authenticating your users.
So in short, I don't see a danger here if you are handing out properly constructed policy files and authenticating users before doing so. There is no need to make copies of anything; a sketch of such a policy follows.
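A minimal sketch of handing out such a policy with AWS SDK v3's createPresignedPost, assuming the bucket name and prefix; the returned url and fields go into the browser's upload form:

const { S3Client } = require("@aws-sdk/client-s3");
const { createPresignedPost } = require("@aws-sdk/s3-presigned-post");

const s3 = new S3Client({ region: "us-east-1" }); // assumed region

// call this only after authenticating the user
async function getUploadPolicy(filename) {
  return createPresignedPost(s3, {
    Bucket: "my-site-images",                         // hypothetical bucket
    Key: "upload/" + filename,
    Conditions: [["starts-with", "$key", "upload/"]], // constrain the writable path
    Expires: 600,                                     // policy is invalid after 10 minutes
  });
}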
Actually, S3 does have a copy feature that works great: Copying Amazon S3 Objects.
But as amra stated above, doubling your space by copying sounds inefficient.
Maybe it'll be better to give the object some kind of unique ID, like a GUID, and set additional user metadata beginning with "x-amz-meta-" for more information about the object, like the user that uploaded it, the display name, etc.
On the other hand, you could always check if the key already exists and prompt an error; both ideas are sketched below.
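A minimal sketch of both ideas with AWS SDK v3; bucket and metadata keys are illustrative:

const { randomUUID } = require("crypto");
const { S3Client, PutObjectCommand, HeadObjectCommand } = require("@aws-sdk/client-s3");

const s3 = new S3Client({ region: "us-east-1" }); // assumed region

// a unique key per object means uploads can never overwrite each other
async function storeImage(body, originalName, userId) {
  const key = "upload/" + randomUUID() + ".jpg";
  await s3.send(new PutObjectCommand({
    Bucket: "my-site-images",                    // hypothetical bucket
    Key: key,
    Body: body,
    // stored as x-amz-meta-* headers on the object
    Metadata: { "display-name": originalName, "uploaded-by": userId },
  }));
  return key;
}

// or: check whether the key already exists before accepting the upload
async function keyExists(bucket, key) {
  try {
    await s3.send(new HeadObjectCommand({ Bucket: bucket, Key: key }));
    return true;
  } catch (err) {
    if (err.name === "NotFound") return false;
    throw err;
  }
}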
