How to upload image directly to S3 with NodeJS

I went on NPMJS but couldn't find any useful libraries. It looks like nearly all of them require the file to be stored on the server first and then uploaded to S3. Is there any chance the file can be uploaded directly to S3?

These two links are good resources for uploading to S3:
http://blog.katworksgames.com/2014/01/26/nodejs-deploying-files-to-aws-s3/
http://docs.aws.amazon.com/AWSJavaScriptSDK/guide/node-examples.html
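
For example, with the official aws-sdk package (as in the second link) you can hand the SDK a Buffer or readable stream you already hold in memory, so nothing has to be written to disk first. A minimal sketch; the bucket name, key, and upload middleware are placeholders:

var AWS = require('aws-sdk');
var s3 = new AWS.S3(); // region and credentials come from the environment

// Uploads a Buffer or readable stream straight to S3; no temp file needed.
// 'my-bucket' is a placeholder bucket name.
function uploadImage(key, body, done) {
    s3.upload({
        Bucket: 'my-bucket',
        Key: key,
        Body: body,               // Buffer or readable stream
        ContentType: 'image/jpeg'
    }, done);
}

// e.g. uploadImage('uploads/avatar.jpg', someBufferFromYourUploadMiddleware, console.log);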

This blog post explains in detail how you can solve this problem:
https://www.terlici.com/2015/05/23/uploading-files-S3.html
You need to create a signed URL first; after that the user can upload the file directly to S3.
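
The server-side part is just a call to getSignedUrl in the aws-sdk package. A minimal sketch; bucket, key, and expiry are placeholders:

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

var params = {
    Bucket: 'my-bucket',        // placeholder bucket
    Key: 'uploads/image.jpg',   // placeholder key
    Expires: 60,                // URL is valid for 60 seconds
    ContentType: 'image/jpeg'
};

// The client PUTs the file to this URL, so the bytes never touch your server.
s3.getSignedUrl('putObject', params, function (err, url) {
    if (err) return console.error(err);
    console.log(url); // hand this to the browser
});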

Related

how to upload client files directly to minio using nodejs?

Can anyone help me implement direct upload of client files to MinIO in Node.js? I saw that there are two methods for this, presignedPostPolicy and presignedPutObject, but I couldn't find an explanation of how the two methods work or how they differ.
I want the link given to the user to enforce limitations such as a maximum upload size.
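Of the two methods, presignedPostPolicy is the one that can enforce a size limit, via a content-length-range condition. A minimal sketch with the minio client; endpoint, credentials, bucket, and key are placeholders:

var Minio = require('minio');

var client = new Minio.Client({
    endPoint: 'minio.example.com', // placeholder endpoint
    accessKey: 'ACCESS_KEY',
    secretKey: 'SECRET_KEY'
});

var policy = client.newPostPolicy();
policy.setBucket('my-bucket');                            // placeholder bucket
policy.setKey('uploads/image.jpg');                       // placeholder key
policy.setExpires(new Date(Date.now() + 10 * 60 * 1000)); // link valid for 10 minutes
policy.setContentLengthRange(1, 5 * 1024 * 1024);         // reject uploads over 5 MB

client.presignedPostPolicy(policy, function (err, data) {
    if (err) return console.error(err);
    // Send data.postURL and data.formData to the user; the browser submits
    // a multipart/form-data POST with the file field appended last.
    console.log(data.postURL, data.formData);
});

presignedPutObject, by contrast, only signs a plain PUT and cannot restrict the object size.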

How to upload bigger files (200 MB+) to blob storage using React and Node.js (server side), without saving the file on the server?

Earlier we were using AWS multipart upload, so the approach was simple and straightforward: I had to call three APIs from my React frontend (initiate multipart upload, get chunk URL, complete upload).
I found a great tutorial on YouTube.
I want to upload a large file to Azure, but I am not able to find a straightforward answer or tutorial.
If you want to upload the file to blob storage programmatically, you can do it by dividing the file into chunks; each chunk gets an ID, and the chunks are uploaded separately.
Each chunk of data is uploaded with its ID, the IDs are collected in a list, and finally the list itself is committed.
Refer to the following article by Gaurav Mantri.
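A minimal sketch of that chunk-and-commit approach using the @azure/storage-blob SDK; the container and blob names are placeholders, and chunks is assumed to be an ordered array of Buffers received from the frontend:

const { BlobServiceClient } = require('@azure/storage-blob');

const blockBlob = BlobServiceClient
    .fromConnectionString(process.env.AZURE_STORAGE_CONNECTION_STRING)
    .getContainerClient('uploads')        // placeholder container
    .getBlockBlobClient('big-file.bin');  // placeholder blob name

async function uploadInBlocks(chunks) {
    const blockIds = [];
    for (let i = 0; i < chunks.length; i++) {
        // Block IDs must be base64 strings of equal length.
        const blockId = Buffer.from(String(i).padStart(6, '0')).toString('base64');
        await blockBlob.stageBlock(blockId, chunks[i], chunks[i].length);
        blockIds.push(blockId);
    }
    // Committing the ordered list of IDs assembles the blob server-side.
    await blockBlob.commitBlockList(blockIds);
}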

JavaScript: How can I upload to S3 directly without saving the output file locally?

I have a problem developing a crawler using Node.js/Puppeteer. The old crawler was:
Crawl Pages
Store the output file locally with the fs module
Since I am going to introduce a UI on the server, I have set up the following scenario instead, which uploads the output to S3 rather than storing it locally and shows the result in the UI:
Crawl Pages
Stream output files to the server with the fs module
Get the output file back and upload it to the S3 bucket
The above is the scenario I know of; I'd like to know whether the following is possible:
Crawl Pages
Upload the data held in memory directly to the S3 bucket
If you have handled a scenario like this, I would appreciate some guidance. I would really appreciate a comment or reply :)
This is definitely possible. If you pipe from your input stream to your server and then pipe up to S3, it completes the loop.
This is possible because you can stream uploads to S3, even without knowing the size of the file beforehand.
This answer should help you out: S3 file upload stream using node js
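A minimal sketch of that streaming approach with the aws-sdk package; the bucket name and key are placeholders:

var AWS = require('aws-sdk');
var stream = require('stream');

var s3 = new AWS.S3();

// Returns a writable stream; whatever the crawler pipes into it goes to S3
// without ever touching the local filesystem. 'my-bucket' is a placeholder.
function uploadFromStream(key) {
    var pass = new stream.PassThrough();
    s3.upload({ Bucket: 'my-bucket', Key: key, Body: pass }, function (err, data) {
        if (err) return console.error(err);
        console.log('Uploaded to', data.Location);
    });
    return pass;
}

// e.g. crawlerOutput.pipe(uploadFromStream('results/page-1.json'));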
If you post some code, we can answer this a little better, but I hope this puts you in the right direction.

Using backend files in Node.js

Sorry, it might be a very novice problem, but I am new to Node and web apps and have been stuck on this for a couple of days.
I have been working with an API called "Face++" that requires the user to upload images to detect faces. So basically users need to upload images to my web app's backend, and my backend then makes an API request with that image. I managed to upload the files to my Node backend using the tutorial linked below, but now I am struggling with how to use those image files. I really don't know how to access them; I thought just passing the filepath/filename would work, but it did not. I am really new to web apps.
I used the tutorial from here: https://coligo.io/building-ajax-file-uploader-with-node/
to upload my files at the backend.
Thanks
You can also use the Face++ REST API node client
https://www.npmjs.com/package/faceppsdk
As per the documentation, it requires a live URL on the web, so you would have to upload your files to a remote location first (you may upload files to an Amazon S3 bucket).
You can also check the sample code in the documentation, which shows how to upload a file directly to Face++.
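If you'd rather skip the hosted-URL step, the Face++ v3 detect endpoint also accepts the image itself as multipart form data. A minimal sketch using the form-data package; the endpoint and field names follow the Face++ v3 docs, and the file path is a placeholder for wherever your upload handler stored the image:

var fs = require('fs');
var FormData = require('form-data'); // npm install form-data

var form = new FormData();
form.append('api_key', process.env.FACEPP_API_KEY);       // from your Face++ console
form.append('api_secret', process.env.FACEPP_API_SECRET);
form.append('image_file', fs.createReadStream('/path/to/uploaded/image.jpg'));

form.submit('https://api-us.faceplusplus.com/facepp/v3/detect', function (err, res) {
    if (err) return console.error(err);
    res.setEncoding('utf8');
    res.on('data', function (chunk) { console.log(chunk); }); // JSON detection result
});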

Saving Images to S3 from External URL with Node.js and MongoDB

I'm trying to save the images from a third-party API to my own S3 bucket using Node.js and MongoDB. The API provides a URL to the image on the third-party servers. I've never done this before but I'm assuming I have to download the image to my own server and then upload it to S3?
Should I save the image to mongodb with GridFS and then delete it once it is on S3? If so, what's the best way to do that?
I've read and re-read this previous question:
Problem with MongoDB GridFS Saving Files with Node.JS
But I can't find good documentation on how I should determine buffer/chunk size and other attributes for a JPEG image file.
Once I've saved the file on my server/database, it seems like I should use:
https://github.com/appsattic/node-awssum
To upload it to S3. Is that a good idea?
I apologize if this is an obvious answer, I'm pretty green when it comes to databases and server scripting.
The easiest thing to do would be to save the image onto disk and then stream the upload from there using AwsSum's S3 PutObject operation. Alternatively, if you have the file contents in a Buffer you can just use that.
Have a look at the following two examples and they should help you figure out what to do:
https://github.com/appsattic/node-awssum/blob/master/examples/amazon/s3/put-bucket.js
https://github.com/appsattic/node-awssum/blob/master/examples/amazon/s3/put-object-streaming.js
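Based on those examples, a rough sketch of streaming the third-party image straight into S3 might look like this; the bucket name and image URL are placeholders, and the AwsSum setup follows the linked examples:

var http = require('http');
var awssum = require('awssum');
var amazon = awssum.load('amazon/amazon');
var S3 = awssum.load('amazon/s3').S3;

var s3 = new S3({
    accessKeyId     : process.env.ACCESS_KEY_ID,
    secretAccessKey : process.env.SECRET_ACCESS_KEY,
    region          : amazon.US_EAST_1
});

// Stream the third-party image straight into S3; ContentLength comes from
// the response headers, so nothing is written to disk or GridFS first.
http.get('http://example.com/image.jpg', function (res) {
    s3.PutObject({
        BucketName    : 'my-bucket',         // placeholder
        ObjectName    : 'images/image.jpg',  // placeholder
        ContentLength : res.headers['content-length'],
        Body          : res
    }, function (err, data) {
        if (err) return console.error(err);
        console.log('uploaded');
    });
});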
Let me know if you need any more help. :)
Cheers,
Andy
Disclaimer: I'm the author of AwsSum.
