downloading S3 files using express [duplicate] - node.js

I am currently trying to download a file from the S3 bucket using a button on the front-end. How is it possible to do this? I don't have any idea how to start. I have tried researching and researching, but no luck -- everything I have found is about UPLOADING files to the S3 bucket, not DOWNLOADING them. Thanks in advance.
NOTE: I am using ReactJS (frontend) and NodeJS (backend); also, the file is uploaded using Webmerge.
UPDATE: I am trying to generate a download link with this (tried Node even though I'm not a backend dev) (lol)
[Screenshots attached to the original question: what I have tried so far, and the onClick function]

If the file you are trying to download is not public, then you have to create a signed URL to get that file.
The solution is here: Javascript to download a file from amazon s3 bucket?
It covers getting non-public files, and revolves around creating a Lambda function that generates a signed URL for you; you then use that URL to download the file on button click.
BUT if the file you are trying to download is public, then you don't need a signed URL -- you just need to know the path to the file. The URLs are structured like: https://s3.amazonaws.com/[file path]/[filename]
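As a rough illustration of the Lambda approach (a sketch assuming the Node.js AWS SDK v2; the bucket name and the way the key is passed in are placeholders):
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
    // Generate a time-limited GET URL for the requested object
    const url = s3.getSignedUrl('getObject', {
        Bucket: 'my-bucket', // placeholder bucket name
        Key: event.key,      // placeholder: however your frontend passes the file key
        Expires: 60          // seconds the link stays valid
    });
    return { statusCode: 200, body: JSON.stringify({ url }) };
};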
There is also AWS Amplify, which is created and maintained by the AWS team.
Just follow Get Started, and downloading the file from your React app is as simple as:
Storage.get('hello.png', {expires: 60})
    .then(result => console.log(result))
    .catch(err => console.log(err));
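Note that Storage.get resolves with a pre-signed URL (the expires option is in seconds), so you can open the result with window.open or set it as the href of a download link.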

Here is my solution:
let downloadImage = url => {
    let urlArray = url.split("/")
    let bucket = urlArray[3]                  // bucket name is the first path segment
    let key = `${urlArray[4]}/${urlArray[5]}` // object key is the rest of the path
    let s3 = new AWS.S3({ params: { Bucket: bucket }})
    let params = {Bucket: bucket, Key: key}
    s3.getObject(params, (err, data) => {
        if (err) return console.log(err)
        let blob = new Blob([data.Body], {type: data.ContentType})
        let link = document.createElement('a')
        link.href = window.URL.createObjectURL(blob)
        link.download = url
        link.click()
    })
}
The url argument refers to the URL of the S3 file.
Just call this in the onClick handler of your button. You will also need the AWS SDK.
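A rough usage sketch (the region, identity pool id, and file URL are placeholder values; the SDK needs credentials that can read the bucket, for example a Cognito identity pool):
AWS.config.update({
    region: 'us-east-1', // placeholder region
    credentials: new AWS.CognitoIdentityCredentials({
        IdentityPoolId: 'us-east-1:xxxx-xxxx' // placeholder identity pool id
    })
});

// then, in the React component:
<button onClick={() => downloadImage('https://s3.amazonaws.com/my-bucket/images/photo.png')}>
    Download
</button>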

Related

Disable Caching on Google Cloud Storage

I have been using GCS to store my images and also use the NodeJS package to upload these images to my bucket. I have noticed that if I frequently change an image, one of the following happens:
It changes
It serves an old image
It doesn't change
This seems to happen pretty randomly despite setting all of the options properly and even cross-referencing that with GCS.
I upload my images like this:
const options = {
    destination,
    public: true,
    resumable: false,
    metadata: {
        cacheControl: 'no-cache, max-age=0',
    },
};
const file = await this.bucket.upload(tempImageLocation, options);
const { bucket, name, generation } = file[0].metadata;
const imageUrl = `https://storage.googleapis.com/${bucket}/${name}`;
I have debated whether to use the base URL you see there or use this one: https://storage.cloud.google.com.
I can't seem to figure out what I am doing wrong and how to always serve a fresh image. I have also tried ?ignoreCache=1 and other query parameters.
As the official API documentation - accessible here - shows, you should not need the await. This might be affecting your uploads sometimes. If you want to use await, you need to declare your function as async, as shown in the second example from the documentation. Your code should look like this:
const bucketName = 'Name of a bucket, e.g. my-bucket';
const filename = 'Local file to upload, e.g. ./local/path/to/file.txt';
const {Storage} = require('@google-cloud/storage');
const storage = new Storage();

async function uploadFile() {
    // Uploads a local file to the bucket
    await storage.bucket(bucketName).upload(filename, {
        // Support for HTTP requests made with `Accept-Encoding: gzip`
        gzip: true,
        // By setting the option `destination`, you can change the name of the
        // object you are uploading to a bucket.
        metadata: {
            // Enable long-lived HTTP caching headers
            // Use only if the contents of the file will never change
            // (If the contents will change, use cacheControl: 'no-cache')
            cacheControl: 'public, max-age=31536000',
        },
    });
    console.log(`${filename} uploaded to ${bucketName}.`);
}

uploadFile().catch(console.error);
While this is untested, it should help you avoid the issue of the images not always being uploaded.
Besides that, as explained in the official documentation on Editing Metadata, you can change the way that metadata - which includes the cache control - is used and managed by your project. This way, you can change your cache configuration as well.
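If the objects already exist, a minimal sketch of updating their cache settings in place could look like this (the bucket and object names are placeholders):
const {Storage} = require('@google-cloud/storage');
const storage = new Storage();

// Turn off caching for an existing object without re-uploading it
storage.bucket('my-bucket')        // placeholder bucket name
    .file('images/profile.png')    // placeholder object name
    .setMetadata({ cacheControl: 'no-cache, max-age=0' })
    .catch(console.error);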
I would also like to include the link below for a complete tutorial on how to send images to Cloud Storage with Node.js, in case you want to check a different approach.
Image Upload With Google Cloud Storage and Node.js
Let me know if the information helped you!
You can try changing ?ignoreCache=1 to ?ignoreCache=0.

How to upload files to Cloud Storage from an App Engine application created using Node.js on the backend?

I have created an app in Node.js that lets users upload files such as profile pictures and other media; these files are stored in certain folders within my web application folder.
This works well locally, but after deploying my app, when a user uploads these pictures and other files it returns an error. I suppose I should be doing some configuration on Cloud Storage to let my App Engine application read and write to such a folder using Node.js. Please help me with how I can achieve this.
I use the following code to create these files from Base64 data, which is uploaded using a REST API:
var data = req.body.image; // base64 image data
const bucket = storage.bucket(process.env.GCLOUD_STORAGE_BUCKET); // declare bucket
var blob = bucket.file('new_image.png');
writeScreenShot(blob, data);

blobStream.on('finish', () => {
    res.send("Success");
});
blobStream.end(req.file.buffer);

function writeScreenShot(blob, data) {
    var strm = blob.createWriteStream();
    strm.write(new Buffer(data, 'base64'));
    strm.end();
}
This is how I have implemented it so far, but it returns an image that says "it looks like we don't support this image type", even though the file extension is .png. It seems like I have created the image incorrectly, which has affected its metadata. Thanks in advance.

Piping a file straight to the client using Node.js and Amazon S3

So I want to pipe a file straight to the client; the way I am currently doing it is to create a file on disk, then send that file to the client.
router.get("/download/:name", async (req, res) => {
    const s3 = new aws.S3();
    const dir = "uploads/" + req.params.name + ".apkg"
    let file = fs.createWriteStream(dir);
    await s3.getObject({
        Bucket: <bucket-name>,
        Key: req.params.name + ".apkg"
    }).createReadStream().pipe(file);
    await res.download(dir);
});
I just read that res.download() only serves local files. Is there a way to do it directly from AWS S3 to the client, i.e. pipe the file straight to the user? Thanks in advance.
As described in this SO thread:
You can simply pipe the read stream into the response instead of piping it to the file; just make sure to supply the correct Content-Type and to set it as an attachment, so the browser will know how to handle the response properly.
res.attachment(req.params.name);
await s3.getObject({
    Bucket: <bucket-name>,
    Key: req.params.name + ".apkg"
}).createReadStream().pipe(res);
One more pattern for this is to create a signed URL directly to the S3 object and then let the client download straight from S3, instead of streaming it from your Node web server. This reduces the workload on your web server.
You will need to use the getSignedUrl method from the AWS S3 SDK for JS.
Then, once you have the URL, just return it to your client so they can download the file themselves.
You should take into account that once you give the client a signed URL that has download permissions for, say, 5 minutes, they will only be able to download that file during those next 5 minutes. You should also take into account that they will be able to pass that URL to anyone else for download during those 5 minutes, so it depends on how secure you need this to be.
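A rough sketch of that approach in an Express route (SDK v2; the route path is illustrative and <bucket-name> is a placeholder as above):
router.get("/download-url/:name", (req, res) => {
    const s3 = new aws.S3();
    const url = s3.getSignedUrl("getObject", {
        Bucket: <bucket-name>,
        Key: req.params.name + ".apkg",
        Expires: 300 // the link stays valid for 5 minutes
    });
    res.json({ url }); // the client fetches this and downloads straight from S3
});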
S3 can also be used to serve content directly, so I would do the following (see the sketch below):
Add CORS headers to your Node response. This will enable the browser to download from another origin, i.e. S3.
Enable S3 static website hosting on your bucket.
Redirect the download from your script to S3 - you can achieve this in JS.
Use a signed URL as suggested in the other post if you need to protect the S3 content.
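A minimal sketch of points 1 and 3 in Express (the origin and object URL are placeholders; combine this with a signed URL if the content is private):
// Allow the browser to follow up with a cross-origin download from S3
app.use((req, res, next) => {
    res.set("Access-Control-Allow-Origin", "https://your-frontend.example"); // placeholder origin
    next();
});

// Redirect the client straight to the object on S3
app.get("/download/:name", (req, res) => {
    res.redirect("https://<bucket-name>.s3.amazonaws.com/" + req.params.name + ".apkg");
});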

Switching from Cloudinary to S3 - Unable to upload

I have a KeystoneJS project that will be switching from Cloudinary to S3. The project is working out of the box with Cloudinary. However, we would like to migrate over to using S3 for storage.
I currently have S3 set up in my keystone.js file:
var config = require('./config.json');

keystone.set('s3 config', {
    bucket: config.s3.bucket,
    key: config.s3.key,
    secret: config.s3.secret
});
I have another model called Page that I would like to upload images with. There is a field called heroImage which was previously a simple { type: Types.Cloudinary }
However, in order for it to use S3, I had to change it to:
heroImage: {
    type: Types.S3File,
    format: function(item, file) {
        return '<img src="' + file.url + '" style="max-width: 300px">';
    }
}
When entering the AdminUI, everything looks fine. I am able to click Upload File and after pressing Save, I get a success message. I checked the S3 bucket with Transmit and found that the file was never uploaded. Additionally, going to the file URL shows this:
<Error>
    <Code>PermanentRedirect</Code>
    <Message>
        The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.
    </Message>
    <Bucket>MY_BUCKET</Bucket>
    <Endpoint>MY_BUCKET.s3.amazonaws.com</Endpoint>
    <RequestId>5CBD0F317C517254</RequestId>
    <HostId>
        VCoEc5PFevAecvyYC79ta7CIzWBewQ90kribJ59NAQ5JHn8dNEwMV+Ncv9cSfT1l
    </HostId>
</Error>
Any help switching this from Cloudinary to S3 would be greatly appreciated.
I simply had to remove any . from the S3 bucket name. For example, instead of bucketname.com, I just renamed it to bucketnamecms.
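For context, bucket names that contain dots are known to cause trouble with virtual-hosted-style HTTPS URLs (the wildcard certificate for *.s3.amazonaws.com only covers a single label), which is likely why renaming the bucket resolved the upload issue here.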

NodeJs - User upload to s3

I'm quite new to node.js and would like to do the following:
user can upload one file
upload should be saved to amazon s3
file information should be saved to a database
script shouldn't be limited to specific file size
As I've never used S3 or done uploads before, I might have some wrong ideas - please correct me if I'm wrong.
So in my opinion the original file name should be saved to the db and returned for download, but the file on S3 should be renamed to my database entry id to prevent overwriting files. Next, should the files be streamed or something? I've never done this, but it just doesn't seem smart to cache files on the server only to then push them to S3, does it?
Thanks for your help!
First of all, I recommend looking at the knox module for NodeJS. It is from a quite reliable source. https://github.com/LearnBoost/knox
I wrote the code below for the Express module, but if you do not use it or use another framework, you should still understand the basics. Take a look at the CAPS_CAPTIONS in the code; you will want to change them according to your needs / configuration. Please also read the comments to understand the pieces of code.
app.post('/YOUR_REQUEST_PATH', function(req, res, next){
    var fs = require("fs")
    var knox = require("knox")

    var s3 = knox.createClient({
        key: 'YOUR PUBLIC KEY HERE' // take it from AWS S3 configuration
      , secret: 'YOUR SECRET KEY HERE' // take it from AWS S3 configuration
      , bucket: 'YOUR BUCKET' // create a bucket on AWS S3 and put the name here. Configure it to your needs beforehand. Allow uploads (in the AWS management console) and possibly view/download. This can be done via bucket policies.
    })

    fs.readFile(req.files.NAME_OF_FILE_FIELD.path, function(err, buf){ // read the file submitted from the form on the fly
        var s3req = s3.put("/ABSOLUTE/FOLDER/ON/BUCKET/FILE_NAME.EXTENSION", { // configure putting a file. Write an algorithm to name your file
            'Content-Length': buf.length
          , 'Content-Type': 'FILE_MIME_TYPE'
        })

        s3req.on('response', function(s3res){ // write code for the response
            if (200 == s3res.statusCode) {
                // play with the database here, use the s3req and s3res variables here
            } else {
                // handle errors here
            }
        })

        s3req.end(buf) // execute the upload
    })
})
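Note that req.files.NAME_OF_FILE_FIELD in the snippet above assumes multipart form parsing is already set up; in the Express 3 era that came from the bodyParser middleware, while on newer Express you would use something like multer and read req.file.path instead. A rough sketch of both options (paths and field names are placeholders):
// Express 3 era: bodyParser populated req.files as used above
app.use(express.bodyParser());

// Newer Express: use multer and read req.file.path in the handler instead
// var upload = require('multer')({ dest: 'uploads/' });
// app.post('/YOUR_REQUEST_PATH', upload.single('NAME_OF_FILE_FIELD'), handler);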
