What is the best way to send files through HTTP? - node.js

I am working on a web API in node.js and express and I want to enable users to upload images.
My API uses JSON requests and responses, but when it comes to uploading images I don't know which option is better. I can think of two ideas:
encode images as base64 strings and send them as JSON (like {"image": "base64_encoded_image"})
use a multipart/form-data request and handle it with the help of packages like multer
I've been reading some articles and other questions related to my issue and I'm still struggling to choose one approach over the other. Encoding an image and sending it with JSON increases the size of the data by about 25% (that's what I've read), but using multipart seems weird to me as all other endpoints on my API use JSON.
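For reference, a minimal sketch of the first idea (base64 inside a JSON body) might look like this on the Express side; the /api/images route, the 10mb body limit and the output filename are made-up assumptions for illustration:

```js
// Hypothetical sketch of the base64-in-JSON idea; route, limit and filename are assumptions.
const express = require('express');
const fs = require('fs');

const app = express();
// The JSON body limit has to allow for the base64 inflation of the image data.
app.use(express.json({ limit: '10mb' }));

app.post('/api/images', (req, res) => {
  const { image } = req.body;                    // expected shape: { "image": "<base64 string>" }
  if (!image) return res.status(400).json({ error: 'image field is required' });

  const buffer = Buffer.from(image, 'base64');   // decode back to raw bytes on the server
  fs.writeFileSync('upload.jpg', buffer);        // illustrative only; real code would validate first
  res.json({ size: buffer.length });
});

app.listen(3000);
```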

The multipart/form-data approach has certain advantages over the Base64 one.
The first and foremost disadvantage of the Base64 approach is the roughly 33% increase in data size (base64 turns every 3 bytes of binary into 4 characters of text). This may not be significant for small files, but it will definitely matter if you are sending large files and storing them, since it increases your storage costs and data consumption. Packages like multer also provide useful functionality, such as checking the file type (jpg, png, etc.) and setting size limits on files, and they are quite easy to implement, with plenty of tutorials and guides available online.
Furthermore, converting an image to a Base64 string adds computation overhead on the user's machine, especially if the file is large.
I would advise you to use the multipart/form-data approach for your case; a minimal example is sketched below.
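A minimal multer setup along those lines might look like this; the field name "image", the 5 MB cap and the uploads/ destination are assumptions for the sketch:

```js
const express = require('express');
const multer = require('multer');

// Store uploads on disk; destination and limits are illustrative assumptions.
const upload = multer({
  dest: 'uploads/',
  limits: { fileSize: 5 * 1024 * 1024 },          // reject files over 5 MB
  fileFilter: (req, file, cb) => {
    // Accept only jpeg/png based on the reported MIME type.
    const ok = ['image/jpeg', 'image/png'].includes(file.mimetype);
    cb(null, ok);
  },
});

const app = express();

// "image" is the multipart field name the client must use.
app.post('/api/images', upload.single('image'), (req, res) => {
  if (!req.file) return res.status(400).json({ error: 'no image uploaded' });
  res.json({ filename: req.file.filename, size: req.file.size });
});

app.listen(3000);
```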

Related

How to upload video to mongodb using expressjs?

I want to make a web app where users are able to upload videos.
So how can I make the API where the video will be stored in mongodb?
Storing an entire file directly is not really an option; you can store it as binary data, but that is not the best approach.
As Kris said, there are better options like AWS S3. On the other hand, remember the limitation of 16MB per document, which is the maximum amount of data a MongoDB document can store, so if you decide to store your videos as binary data you need to keep this limit in mind.
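If you do decide to store the binary data in a document anyway, a rough sketch with the official mongodb driver might look like this; the connection string, database, collection and field names are assumptions:

```js
const { MongoClient, Binary } = require('mongodb');
const fs = require('fs');

const SIXTEEN_MB = 16 * 1024 * 1024; // hard cap on a single MongoDB document

async function storeVideo(path) {
  const data = fs.readFileSync(path);
  if (data.length >= SIXTEEN_MB) {
    // Anything this large cannot live in one document; use external storage like S3 instead.
    throw new Error('video too large to store in a single document');
  }

  const client = await MongoClient.connect('mongodb://localhost:27017');
  try {
    const videos = client.db('app').collection('videos');
    await videos.insertOne({ name: path, data: new Binary(data) });
  } finally {
    await client.close();
  }
}
```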

Parsing a very long JSON file in NodeJS

I'm using a Nuxt app that has its own Express-based API. It looks to be working fine, but I'm scared of overusing someone else's services.
And I found a JSON file that has everything I need to deliver beforehand.
The problem is that it is 4MB, and I don't think retrieving and serving data from it would be a very efficient process.
If I want to efficiently parse a huge JSON file and use it on the server (as in, serve parts of it according to the requests I receive),
how would you go about it? Any ideas?
It's only 4MB; you should just load it into memory. Anything else you want to do is probably overkill: literally fs.readFile, then JSON.parse, and that's it, it's in memory.
Trying to come up with a more sophisticated and "efficient" way in this context is probably not worth the trouble, and may just not be possible. If you end up using another service just to store and manage those 4MB of data, the I/O needed for that is orders of magnitude more expensive than just keeping it in RAM.
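Concretely, a sketch of that approach could look like this (the data.json path and the /items/:id route are assumptions, and it presumes the JSON is keyed by id):

```js
const fs = require('fs');
const express = require('express');

// Load and parse the file once at startup; 4MB comfortably fits in memory.
const data = JSON.parse(fs.readFileSync('data.json', 'utf8'));

const app = express();

// Serve just the parts a request asks for, straight from the in-memory object.
app.get('/items/:id', (req, res) => {
  const item = data[req.params.id];
  if (!item) return res.status(404).json({ error: 'not found' });
  res.json(item);
});

app.listen(3000);
```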

How do I upload a file to a REST endpoint?

Using Twitter as an example: Twitter has an endpoint for uploading file data. https://developer.twitter.com/en/docs/media/upload-media/api-reference/post-media-upload-append
Can anyone provide an example of a real HTTP message containing, for example, image file data, showing how it is supposed to be structured? I'm fairly sure Twitter's documentation is nonsense, as their "example request" is the following:
POST https://upload.twitter.com/1.1/media/upload.json?command=APPEND&media_id=123&segment_index=2&media_data=123
Is the media_data really supposed to go in the URL? What if you have raw binary media data? Would it go in the body? How is the REST service to know how the data is encoded?
You're looking at the chunked uploader - it's intended for sending large files, breaking them into chunks, so a network failure doesn't mean you have to re-upload a 100 MB .mp4. It is, as a result, fairly complicated. (Side note: The file data goes in the request body, not the URL as a GET parameter... as indicated by "Requests should be multipart/form-data POST format.")
There's a far less complicated unchunked uploader that'll be easier to work with if you're just uploading a regular old image.
All of this gets a lot easier if you use one of Twitter's recommended libraries for your language.
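To make the wire format concrete, here is a rough sketch of building a multipart/form-data request body by hand in Node.js; the endpoint URL, field name "media", file name and boundary string are all invented for illustration (in practice a client library does this for you):

```js
const https = require('https');
const fs = require('fs');

// An arbitrary boundary string; it just has to not appear in the file data.
const boundary = '----ExampleBoundary1234';
const fileData = fs.readFileSync('photo.jpg');

// The raw binary bytes go in the body, framed by the boundary, not in the URL.
const body = Buffer.concat([
  Buffer.from(
    `--${boundary}\r\n` +
    'Content-Disposition: form-data; name="media"; filename="photo.jpg"\r\n' +
    'Content-Type: image/jpeg\r\n\r\n'
  ),
  fileData,
  Buffer.from(`\r\n--${boundary}--\r\n`),
]);

const req = https.request('https://example.com/upload', {
  method: 'POST',
  headers: {
    'Content-Type': `multipart/form-data; boundary=${boundary}`,
    'Content-Length': body.length,
  },
}, (res) => console.log(res.statusCode));

req.end(body);
```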
To upload a file, you need to send it in a multipart form; on the node.js server you can accept the incoming file using formidable.
You can also use express-fileupload or multer.
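On the receiving side, a minimal formidable sketch (assuming formidable v2+; the /upload route and an existing uploads/ directory are arbitrary choices) might look like:

```js
const http = require('http');
const formidable = require('formidable');

http.createServer((req, res) => {
  if (req.method === 'POST' && req.url === '/upload') {
    // Parse the incoming multipart/form-data request; files are written to uploadDir.
    const form = formidable({ uploadDir: 'uploads/', keepExtensions: true });
    form.parse(req, (err, fields, files) => {
      if (err) {
        res.writeHead(400);
        return res.end('upload failed');
      }
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ received: Object.keys(files) }));
    });
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(3000);
```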

Manage security on file upload to nodejs

I have an image upload view on my client (ember.js) that sends the resized image to a nodejs rest api;
it works well, but it is easy for an expert user to force the upload of a non-resized image;
I would like to keep the resize process on the client because this allows users to select heavyweight images that are resized locally and uploaded only afterwards, when they are lightweight;
If someone else uses something like this, I'm interested in how to make it as safe as possible.
A rule of thumb when developing web applications: never ever trust any data coming from the client side; always do a check on your server side!
Use authentication; this ensures that users are only allowed to upload data to their own account and cannot fiddle with other people's files.
Add a special message-passing step between your server and client. A simple example would be:
i. send a POST API request first (containing the image information and targeted compressed size) to your server, indicating that your client is starting to compress the picture
ii. when uploading, include metadata describing the complete compressed image, and have your server check whether the uploaded image is within the accepted threshold, discarding it otherwise
You could make the message passing more elaborate to harden it further!
This would be my simple security approach (sketched below); anyone else got a better solution? :)
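A bare-bones sketch of that two-step flow on an Express server might look like this; the routes, the in-memory pendingUploads map and the 20% tolerance are all made up for illustration:

```js
const express = require('express');
const multer = require('multer');

const app = express();
app.use(express.json());

const upload = multer({ dest: 'uploads/' });
const pendingUploads = new Map(); // uploadId -> expected size (in-memory, sketch only)

// Step i: the client announces the upload and the size it expects after resizing.
app.post('/uploads/intent', (req, res) => {
  const { expectedSize } = req.body;
  const uploadId = Date.now().toString(36) + Math.random().toString(36).slice(2);
  pendingUploads.set(uploadId, expectedSize);
  res.json({ uploadId });
});

// Step ii: the actual upload is checked against the announced size.
app.post('/uploads/:uploadId', upload.single('image'), (req, res) => {
  const expected = pendingUploads.get(req.params.uploadId);
  pendingUploads.delete(req.params.uploadId);
  if (!expected || !req.file) return res.status(400).json({ error: 'unknown upload' });

  // Discard anything much larger than the client claimed (20% tolerance is arbitrary).
  if (req.file.size > expected * 1.2) {
    return res.status(413).json({ error: 'image larger than announced' });
  }
  res.json({ ok: true });
});

app.listen(3000);
```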
Approaches here also work for file uploads. You can use a combination of checking:
the content-length header (i.e. req.headers['content-length'] > x), and/or
the stream size as it's being read by the server (i.e. req.on('data'))
If the stream data exceeds a certain size you can respond accordingly. Check out something like Multer for file uploads, specifically the limits section. The best approach would probably be the second option, as sketched below.
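A sketch of the second option in plain Express might look like this (the 2 MB cap is arbitrary):

```js
const express = require('express');
const app = express();

const MAX_BYTES = 2 * 1024 * 1024; // arbitrary 2 MB cap for the sketch

app.post('/upload', (req, res) => {
  // Cheap first check: reject early when the declared length is already too big.
  if (Number(req.headers['content-length']) > MAX_BYTES) {
    return res.status(413).json({ error: 'file too large' });
  }

  // Authoritative check: count the bytes actually received on the stream.
  let received = 0;
  req.on('data', (chunk) => {
    received += chunk.length;
    if (received > MAX_BYTES && !res.headersSent) {
      res.status(413).json({ error: 'file too large' });
      req.destroy(); // tear down the connection; the client may see a reset rather than the 413
    }
  });
  req.on('end', () => {
    if (!res.headersSent) res.json({ receivedBytes: received });
  });
});

app.listen(3000);
```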

Expressjs File Upload Customization

Expressjs has bodyParser middleware which can handle file uploads and can even store them in a directory given in the options. But in my app I want to store the files in Amazon S3, so I basically want to stream the file straight to S3 without having to store it locally at all.
But the problem is validation of the file. How can I be sure that these files are all images? Checking the content-type isn't a good enough option because that can be faked. I want to know: is it OK if I do the validation after streaming the file to S3? I am asking from the security point of view.
After storing the image, I need to retrieve it to create thumbnails. How can I do that asynchronously, after sending the response to the file upload?
You have contradictory goals of not wanting to store it locally during upload but then also wanting to download it needlessly again to make thumbnails. If you want to go for technical slickness awards, you can simultaneously stream the file upload request body to a local temporary file as well as S3. Or you can do what the rest of the industry does and store it in a local temporary file and then thumbnail it, and then upload all sizes to S3. Either of these approaches alleviates any need to immediately download it from S3 to make thumbnails.
How exactly do you intend to validate that it's really an image? You could look at the first chunk of file data and validate for the file type's magic number if that gives you warm fuzzies, but ultimately it's untrusted user data. The second half of the supposed image file could be virus code, and that is just as easily faked as the Content-Type header. Sounds like your security concerns are mostly driven by FUD as opposed to specific threats you intend to defend against. As long as you don't take the user's uploaded data, mark it executable, and run it as root on your server, any non-image data is just going to be corrupt and fail to render correctly in a browser (and/or cause your thumbnailer program to exit with an error, or perhaps crash in an extreme case).
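If a magic-number check does give you warm fuzzies, a hypothetical sketch over the first bytes of the upload could look like this (it only covers a few common formats and, as noted above, proves nothing about the rest of the file):

```js
// Inspect the first bytes of a buffer against well-known image signatures.
// This only shows the header looks like an image, nothing more.
function sniffImageType(buffer) {
  if (buffer.length >= 3 &&
      buffer[0] === 0xff && buffer[1] === 0xd8 && buffer[2] === 0xff) {
    return 'jpeg';
  }
  if (buffer.length >= 8 &&
      buffer.slice(0, 8).equals(Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]))) {
    return 'png';
  }
  if (buffer.length >= 4 && buffer.slice(0, 4).toString('ascii') === 'GIF8') {
    return 'gif';
  }
  return null; // unknown or not an image
}

// Usage: read only the first chunk of the upload and sniff it.
// const type = sniffImageType(firstChunk);
```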
Regarding validation, can I just try to create a thumbnail, and if I can't then it's not a valid image and I delete it? Is this approach fine?
Most of the time, yes. There will be edge cases where your thumbnailer cannot process an image but a browser can as thumbnailers are not perfect and some images are partially corrupt. For example, I have found some animated GIFs that render and animate fine in a web browser but graphicsmagick crashes trying to process them. Not sure there's anything that can be done about those 0.01% edge cases.
And for the upload part, can I send a response to the user and then carry on with the thumbnail creation and storing it in S3?
Yes, that is generally the best approach so the user knows their upload succeeded. Image processing is usually architected as a "work queue" model, where you just record that there's work to do and move on, and a separate process or processes take work off the queue and complete it.
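A stripped-down sketch of that pattern, with a naive in-memory queue standing in for a real job queue, might look like this; sharp is just one possible thumbnailer and the S3 upload itself is left as a comment:

```js
const express = require('express');
const multer = require('multer');
const sharp = require('sharp'); // assumption: any thumbnailer would do here

const app = express();
const upload = multer({ dest: 'uploads/' });

// Naive in-memory work queue; a real system would use something durable.
const queue = [];
setInterval(async () => {
  const job = queue.shift();
  if (!job) return;
  try {
    // Create the thumbnail from the local temp file, then push everything to S3 (omitted).
    await sharp(job.path).resize(200).toFile(job.path + '.thumb.jpg');
    // ...upload original + thumbnail to S3 here...
  } catch (err) {
    console.error('thumbnailing failed (possibly not a valid image):', err.message);
  }
}, 1000);

app.post('/images', upload.single('image'), (req, res) => {
  if (!req.file) return res.status(400).json({ error: 'no image uploaded' });
  // Respond immediately so the user knows the upload succeeded...
  res.status(202).json({ status: 'processing' });
  // ...and record that there is thumbnailing work left to do.
  queue.push({ path: req.file.path });
});

app.listen(3000);
```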
