Serving images based on :foo in URL - node.js

I'm trying to limit data usage when serving images to ensure the user isn't loading bloated pages on mobile while still maintaining the ability to serve larger images on desktop.
I was looking at Twitter and noticed they append :large to the end of the URL,
e.g. https://pbs.twimg.com/media/CDX2lmOWMAIZPw9.jpg:large
I'm really just curious how this request is being handled. If you go directly to that link there are no scripts on the page, so I'm assuming it's done server-side.
Can this be done using something like Restify/Express on a Node instance? More than anything I'm really just curious how it is done.

Yes, it can be done using Express in Node. However, it can't be done using express.static(), since it is not a static request. Rather, a route handler must parse the URL (the :large suffix is part of the path, not a querystring) in order to dynamically respond with the appropriate image.
Generally the images will have already been pre-generated during the user-upload phase at a set of sizes (e.g. small, medium, large, original), and the handler checks the suffix to determine which static file to respond with.
That is a much higher-performing solution than resizing the original image server-side on every request, though that dynamic approach is sometimes necessary if the server must produce an unbounded set of image sizes.
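A minimal sketch of that routing in Express (the directory layout, size names and port are assumptions, not Twitter's actual scheme):

const express = require('express');
const path = require('path');

const app = express();
const SIZES = new Set(['small', 'medium', 'large', 'orig']);

// Matches e.g. /media/CDX2lmOWMAIZPw9.jpg:large
app.get('/media/:file', (req, res) => {
  // ':large' is part of the path segment, so split it off manually
  const [name, size = 'medium'] = req.params.file.split(':');
  if (!SIZES.has(size)) return res.sendStatus(404);
  // Serve the variant pre-generated at upload time,
  // e.g. images/large/CDX2lmOWMAIZPw9.jpg
  res.sendFile(path.join(__dirname, 'images', size, name));
});

app.listen(3000);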

Related

What is the difference between server side rendering (Next.js) and Static Site rendering (Gatsby.js)?

I'm looking to create a website that does not rely on client-side JavaScript; however, I still want SPA features like client-side routing, so I am looking at using a framework that does not render on the client side. These two seem to be the top options for this type of thing, but I'm unsure of the differences between the two types of server processing.
Server-side rendering is where a request is made from the client/browser to the server, and at that point the HTML is generated on the fly, at run-time, and sent back to the browser to be rendered.
Static site rendering is very similar, except the rendering is carried out at build time instead. When a request is made, the HTML already exists as a static file and can be sent straight back to the client.
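To make the distinction concrete, here is roughly how the two models look in Next.js, which supports both (the two functions would live in separate pages, and fetchPosts is an assumed data-fetching helper):

// Server-side rendering: runs on every request
export async function getServerSideProps() {
  const posts = await fetchPosts();
  return { props: { posts } };
}

// Static generation: runs once, at build time
export async function getStaticProps() {
  const posts = await fetchPosts();
  return { props: { posts } };
}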
They both have their pros and cons:
Although static sites will be faster at run-time, as no server-side processing is required, any change to the data requires a full rebuild of the application on the server side.
Alternatively, with the server-side approach (any caching aside), the data is processed on the fly and sent straight to the client.
Often the decision is best made depending on how dynamic and real-time your content must be vs how performant the application needs to be.
For example, Stack Overflow most likely uses a server-side rendering approach. There are far too many questions for it to rebuild static versions of every question page each time a new post is submitted. The data also needs to be very real-time, with users able to see posts submitted only seconds ago.
However, a blog site, or a promo site, which hardly has any content changes, would benefit much more from a static site setup. Responses would be much faster and the server costs would be much lower.

Express response interception. Use body to modify the outgoing headers

I would like to try and improve site render times by making use of preload/push headers.
We have various assets which are required up front that I would like to preload, and various assets which are marked up in data attributes etc which will be required later via JS but not for initial paint. It would be good to get these flowing to the client early.
Our application is a bit of a hybrid: it uses http-proxy-middleware connected to various different applications, plus directly renders pages itself. I would like the middleware to be agnostic and work regardless of how the page is produced.
I've seen express-mung, but this doesn't hold back the headers, so it executes too late, and it works with chunked buffers rather than the entire response anyway. Next up was express-interceptor, which works perfectly for pages rendered directly in Express but causes request failures for pages run through the proxy. My next best idea is pulling apart the compression module to figure out how it works.
Does anyone have a better suggestion, or even better know of a working module for this kind of thing?
Thanks.
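For reference, the technique compression relies on boils down to patching res.write and res.end so the full body can be inspected before any headers are flushed; because the proxy also writes through res, it would be intercepted too. A minimal, untested sketch (findAssets is an assumed callback that extracts asset URLs from the HTML):

const preloadLinkHeaders = (findAssets) => (req, res, next) => {
  const chunks = [];
  const write = res.write.bind(res);
  const end = res.end.bind(res);

  res.write = (chunk) => {
    chunks.push(Buffer.from(chunk)); // buffer instead of flushing
    return true;
  };

  res.end = (chunk) => {
    if (chunk) chunks.push(Buffer.from(chunk));
    const body = Buffer.concat(chunks);
    const links = findAssets(body.toString()).map((url) => `<${url}>; rel=preload`);
    if (links.length) res.setHeader('Link', links); // headers not yet sent
    write(body);
    end();
  };

  next();
};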

Manage security on file upload to nodejs

I have an image upload view on my client (Ember.js) that sends the resized image to a Node.js REST API.
It works well, but it is easy for an expert user to force the upload of a non-resized image.
I would like to keep the resize process on the client, because it allows users to select heavyweight images that are resized locally and only uploaded once they are lightweight.
If someone else uses something like this, I'm interested in how it is possible to make it as safe as possible.
A rule of thumb when developing web applications: never, ever trust any data coming from the client side; always check it on your server side!
Use authentication. This ensures that users are only allowed to upload data to their own account and can't fiddle with other users' files.
Add some message passing between your server and client. A simple example (see the sketch after this list) would be:
i. send a POST request first (containing the image information and targeted compressed size) to your server, indicating that your client is starting to compress the picture
ii. when uploading, add metadata describing the completed compressed image, and have your server check that the upload is within the accepted threshold, discarding it otherwise
You could make the message passing more elaborate to harden it further!
This would be my simple take on security; anyone else got a better solution? :)
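A hedged sketch of that two-step handshake in Express (route names, the in-memory store and the 2x tolerance are all assumptions):

const express = require('express');
const multer = require('multer');
const crypto = require('crypto');

const app = express();
app.use(express.json());
const upload = multer({ dest: 'uploads/' });

// uploadId -> expected compressed size; in-memory for illustration only
const pending = {};

// Step i: the client announces the upload and its target size
app.post('/uploads/intent', (req, res) => {
  const id = crypto.randomBytes(8).toString('hex');
  pending[id] = req.body.targetSize; // expected size in bytes
  res.json({ uploadId: id });
});

// Step ii: the client uploads; the server checks the size threshold
app.post('/uploads/:id', upload.single('image'), (req, res) => {
  const expected = pending[req.params.id];
  delete pending[req.params.id];
  if (!expected || req.file.size > expected * 2) { // assumed 2x tolerance
    return res.status(400).send('Upload rejected'); // real code would also unlink the file
  }
  res.json({ status: 'accepted' });
});

app.listen(3000);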
The approaches here also work for file uploads. You can use a combination of checking:
the Content-Length header (i.e. req.headers['content-length'] > x), and/or
the stream size as it's being read by the server (i.e. counting the bytes arriving in req.on('data') handlers).
If the stream data exceeds a certain size you can respond accordingly. Check out something like Multer for file uploads, specifically the limits section. The best approach would probably be the second option.
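For instance, Multer's limits option enforces that second check while the stream is being read (the 2 MB cap and the field name are assumptions):

const express = require('express');
const multer = require('multer');

const app = express();
const upload = multer({
  dest: 'uploads/',
  limits: { fileSize: 2 * 1024 * 1024 } // abort any file over 2 MB
});

app.post('/upload', upload.single('image'), (req, res) => {
  res.json({ ok: true, size: req.file.size });
});

// Multer reports an oversized stream as a LIMIT_FILE_SIZE error
app.use((err, req, res, next) => {
  if (err.code === 'LIMIT_FILE_SIZE') return res.status(413).send('File too large');
  next(err);
});

app.listen(3000);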

With ExpressJS or Node, Is there an easy way to read an external image into memory and serve it?

I'm using an external service to create images. I'd like my users to be able to hit my API and ask for the image. Then my Express server would retrieve it from the external service, then serve it to the user. Sort of like a proxy I suppose, but not exactly.
Is there an easy way to do this, preferably one that doesn't involve downloading the image to the hard drive, then reading it back in and serving it?
Using the request library, I was able to come up with this:
var request = require("request");

exports.relayImage = function (req, res) {
  request(req.params.url).pipe(res);
};
That seems to work. If there is a more efficient way to do this (more efficient in terms of server resources, not lines of code), speak up!
What you are doing is exactly what you should be doing, and it is the most efficient method. Using pipe, the data is sent as it comes in, requiring no more resources than are needed to buffer and transmit.
Also be mindful of the content type and other response headers that you may want to relay. Finally, realize that you've effectively built an open proxy where anyone can request anything they want through your server. This is a bit dangerous, so be sure to lock it down in your final application.
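One way to lock it down is to only relay URLs on a known host; a sketch (ALLOWED_HOST is an assumed constant for the external image service):

var request = require("request");
var url = require("url");

var ALLOWED_HOST = "images.example-service.com"; // assumed external service

exports.relayImage = function (req, res) {
  var parsed = url.parse(req.params.url);
  if (parsed.hostname !== ALLOWED_HOST) {
    return res.status(403).send("Forbidden"); // refuse to proxy anything else
  }
  request(req.params.url).pipe(res); // relays status and headers as before
};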
You should be able to use the http module to make a request to the external image service with a callback that returns the image as the response. It won't write to disk unless you explicitly tell it to.

Expressjs File Upload Customization

Expressjs has the bodyParser middleware, which can handle file uploads and can even store them in a directory given in the options. But in my app I want to store the files in Amazon S3, so I basically want to stream the file straight to S3 without having to store it locally at all.
The problem is validating the file. How can I be sure that these files are all images? Checking the content-type isn't a good enough option because that can be faked. Is it OK, from a security point of view, to do the validation after streaming the file to S3?
After storing the image, I need to retrieve it to create thumbnails. How can I do that asynchronously, after sending the response for the file upload?
You have contradictory goals of not wanting to store it locally during upload but then also wanting to download it needlessly again to make thumbnails. If you want to go for technical slickness awards, you can simultaneously stream the file upload request body to a local temporary file as well as S3. Or you can do what the rest of the industry does and store it in a local temporary file and then thumbnail it, and then upload all sizes to S3. Either of these approaches alleviates any need to immediately download it from S3 to make thumbnails.
How exactly do you intend to validate that it's really an image? You could look at the first chunk of file data and validate the file type's magic number if that gives you warm fuzzies, but ultimately it's untrusted user data. The second half of the supposed image file could be virus code, and that is just as easily faked as the Content-Type header. It sounds like your security concerns are mostly driven by FUD as opposed to specific threats you intend to defend against. As long as you don't take the user's uploaded data, mark it executable, and run it as root on your server, any non-image data is just going to be corrupt and fail to render correctly in a browser (and/or cause your thumbnailer program to exit with an error, or perhaps crash in an extreme case).
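The magic-number check mentioned above amounts to comparing the first bytes of the stream against known signatures; a small sketch covering JPEG, PNG and GIF:

// Returns the detected type for a file's first chunk, or null
function sniffImageType(firstChunk) {
  if (firstChunk.slice(0, 3).equals(Buffer.from([0xff, 0xd8, 0xff]))) return 'jpeg';
  if (firstChunk.slice(0, 8).equals(Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]))) return 'png';
  if (firstChunk.slice(0, 3).toString('ascii') === 'GIF') return 'gif';
  return null; // not a recognized image signature
}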
Regarding validation, can I just try to create a thumbnail, and if I can't then it's not a valid image and I delete it? Is this way fine?
Most of the time, yes. There will be edge cases where your thumbnailer cannot process an image but a browser can as thumbnailers are not perfect and some images are partially corrupt. For example, I have found some animated GIFs that render and animate fine in a web browser but graphicsmagick crashes trying to process them. Not sure there's anything that can be done about those 0.01% edge cases.
And for the uploads part, can I send a response to the user and then carry on with the thumbnail creation and storing it in S3?
Yes, that is generally the best approach, so the user knows their upload succeeded. Image processing is usually architected as a "work queue" model, where you just record that there's work to do and then proceed, and a separate process or processes take work off the queue and complete it.
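A hedged sketch of that work-queue shape (the in-memory queue and the makeThumbnails helper are stand-ins; a real system would use a persistent queue and a proper image library):

const queue = [];

// Called from the upload handler after responding to the user,
// e.g. enqueue({ imagePath: req.file.path });
function enqueue(job) {
  queue.push(job); // just record that there's work to do
}

// A separate worker loop drains the queue
setInterval(() => {
  const job = queue.shift();
  if (job) makeThumbnails(job.imagePath);
}, 500);

// Assumed helper: would generate the thumbnail sizes and push them to S3
function makeThumbnails(imagePath) {
  console.log('generating thumbnails for', imagePath); // stand-in
}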
