Serve static files from a separate server - node.js

I have a simple web server in Node.js/Express.
I would like to somehow dispatch all requests for static HTML content to another instance of an Express web server (running on the same machine). The idea is to free the main server as soon as possible for new requests and leave the handling of all static content to this second instance.
Is it possible for the second instance to return the static content directly to the caller, i.e. without passing it back through the main server?

This could easily be handled by putting a reverse proxy in front, which routes all requests for URLs with a /static prefix (for example) to one server, and all other URLs to your Node.js server.
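A minimal nginx sketch of that split might look like the following (the ports, the /static prefix and the two local Express instances are assumptions for illustration, not details from the question):

```nginx
# Sketch only: route /static/* to a dedicated Express instance serving
# static files; everything else goes to the main Express app.
server {
    listen 80;

    # Requests for static content go straight to the second instance;
    # its responses return to the browser through nginx, never through
    # the main application server.
    location /static/ {
        proxy_pass http://127.0.0.1:3001;
    }

    # All remaining requests hit the main Express server.
    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}
```

With this in place, the second instance only needs something like app.use('/static', express.static('public')) and the main server never touches static requests at all.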

Related

Separating front-end and API for JHipster

Does anyone have any idea why these three points are introduced, and how we can implement them if we decide to deploy Spring Boot and the SPA separately?
- Calls to / serve static assets (from the front-end), which should not be cached by the browser.
- Calls to /app (which contains the client-side application) and to /content (which contains the static content, like images and CSS) should be cached in production, as those assets are hashed.
- Calls to a non-existent route should forward the request to index.html. This is normally handled in the backend through ClientForwardController.
They explain how you should handle the HTTP requests that point to the static parts.
The first two points can be configured in your reverse proxy, for example (see the sketch below).
The last point is partly reverse-proxy configuration and partly configuration of the router in your JS framework.
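As a hedged illustration, assuming nginx as the reverse proxy and /var/www/spa as the directory holding the built front-end (both are assumptions, not JHipster defaults stated here), the three points could be configured roughly like this:

```nginx
# Sketch only: cache hashed assets hard, keep index.html uncacheable,
# and send unknown routes back to index.html for the SPA router.
server {
    listen 80;
    root /var/www/spa;   # assumed location of the built front-end

    # Point 2: /app and /content hold hashed assets, safe to cache long-term.
    location ~ ^/(app|content)/ {
        add_header Cache-Control "public, max-age=31536000, immutable";
        try_files $uri =404;
    }

    # Point 1: everything else (including index.html) must not be cached.
    # Point 3: unknown routes fall back to index.html instead of a 404,
    # replacing what ClientForwardController does in the backend.
    location / {
        add_header Cache-Control "no-cache";
        try_files $uri /index.html;
    }

    # Calls to the Spring Boot API would need their own proxy_pass
    # location, omitted here.
}
```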

web-app using nginx and node - which is the web-server?

I have a web application using nginx as a reverse proxy and the Express framework as my backend in Node.js. I am confused about which one is the web server. I use React, so the application is rendered client-side, and nginx holds those files, if that makes a difference.
According to developer.mozilla.org:
On the software side, a web server includes several parts that control how web users access hosted files, at minimum an HTTP server. An HTTP server is a piece of software that understands URLs (web addresses) and HTTP (the protocol your browser uses to view webpages). It can be accessed through the domain names (like mozilla.org) of websites it stores, and delivers their content to the end-user's device.
&
A web server first has to store the website's files, namely all HTML documents and their related assets, including images, CSS stylesheets, JavaScript files, fonts, and videos.
Taking this into consideration, I would say that nginx is the web server, since it holds the HTML files. However, I really am not sure. Is it one of the two, both, or is it a grey area?
Web servers provide web pages (HTML) along with the CSS and JS files required to render those pages. In your case, nginx acts as the web server, since it is the one serving the HTML files.
Node.js has a built-in HTTP module for working with HTTP, so Node.js can certainly be used to build web servers. But in this setup, Node.js acts as an API that exposes an HTTP interface for the front-end to interact with.
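To make the distinction concrete, here is a minimal sketch (the route and port are illustrative, not taken from the question) of Node's built-in http module acting as the API tier while nginx serves the React files:

```javascript
// Sketch: Node as the API tier, not the web server.
// nginx serves the React build from disk; this process only answers
// the /api requests that nginx proxies to it.
const http = require('http');

const server = http.createServer((req, res) => {
  if (req.url === '/api/health') {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ status: 'ok' }));
    return;
  }
  res.writeHead(404, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ error: 'not found' }));
});

server.listen(3000); // nginx's proxy_pass points here
```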

Displaying many images with NodeJS and Express

I am creating a web application where I want to display hundreds of images. I am using Node.js with the Express framework.
How do I send images from the server to the client?
Edit: If I place all images in the public directory, are they automatically sent to the browser when the page is rendered, or are GET requests issued only when those images are needed?
Are you required to use Express? Usually, static files are better served by a proper web server (like nginx or Apache) alongside your Node/Express application, or by some kind of CDN. On the client you can control how your images are requested to avoid loading all of them at the beginning, either downloading them on demand or making non-blocking requests.
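If you do serve them from Express, a small sketch (the directory and URL prefix are illustrative assumptions) also answers the edit above: nothing is pushed to the browser automatically, the browser issues one GET per image it actually renders.

```javascript
// Sketch: serving an image folder with Express. "public/images" and
// "/images" are illustrative names, not from the original question.
const express = require('express');
const app = express();

// Files under ./public/images become reachable at /images/<file>.
// They are NOT sent automatically when the page renders; the browser
// issues a separate GET for each <img src="/images/..."> it needs.
app.use('/images', express.static('public/images', { maxAge: '7d' }));

app.listen(3000);
```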

Node and React Isomorphic Rendering Architecture

So I understand the fundamentals of isomorphic rendering with React/Node, but I'm confused about how I would fit Apache or NGINX into my landscape.
Typically, with a client-side page I would just serve the static content from Apache or NGINX and the client-side page would make AJAX calls (which are reverse proxied through Apache or NGINX) to Node. Node would serve up the data and the page would change accordingly.
Looking at an isomorphic page with React, the page is initially rendered on the Node server and changes are served up to the client from the server. Can I still use Apache or NGINX to load balance and reverse proxy my requests?
As an example, I would have one Node instance serving my API and one Node instance rendering React and serving it to the client. In this example, could I load balance, reverse proxy my calls, and serve my .js and .css bundles from Apache/NGINX? The user would access www.example.com/; that request would first hit Apache/NGINX, which would reverse proxy the call to the Node server, which would render the page and serve it up to the client. Then, on the page, the client would click some button and access www.example.com/api/test; that would also go to Apache/NGINX and be reverse proxied to the second Node instance, which would serve the data back to the client. Or should that button click go back to the first Node instance (where rendering takes place), and should that Node instance call the second Node instance to get the data, render the new piece, and serve it back to the client?
Basically I want an isomorphic app with all the benefits of having Apache or NGINX in front of my Node servers. Is that possible and/or best practice? If not, what is the ideal landscape for an isomorphic app so that I can still maintain all the benefits of Apache or NGINX as my entry point to my webapp?
Yes, that should all work fine. The React/Node server just renders HTML, and is reverse-proxyable like any other HTML backend.
And yes, putting a reverse proxy / load balancer in front of your servers is a great idea if you're planning on running something at scale.
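A rough nginx sketch of that landscape (ports, paths and upstream names are assumptions, not details from the question):

```nginx
# Sketch only: one entry point, two Node tiers behind it.
upstream render_app { server 127.0.0.1:3000; }  # Node instance doing the React SSR
upstream api_app    { server 127.0.0.1:4000; }  # Node instance serving the API

server {
    listen 80;

    # Hashed .js/.css bundles served straight from disk by nginx.
    location /assets/ {
        root /var/www/example;
        add_header Cache-Control "public, max-age=31536000, immutable";
    }

    # Browser calls to /api/* go directly to the API instance,
    # without passing through the render server.
    location /api/ {
        proxy_pass http://api_app;
    }

    # Everything else is rendered by the SSR instance.
    location / {
        proxy_pass http://render_app;
    }
}
```

Adding more server lines to an upstream block is how you would load balance several copies of the same tier (nginx round-robins between them by default).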

Using http protocol between servers

I have a configuration of two servers working on an intranet.
The first one is a web server that produces HTML pages for the browser; this HTML sends requests to the second server, which produces and returns reports (also HTML) according to the value of some GET parameters.
Since this solution is insecure (the passed parameter is exposed), I thought about having the HTML (produced by the first server) send the report requests back to the first server instead. There, a security check would be made, and the report request would be sent to the reports server over HTTP between the servers, instead of from browser to server.
The report's markup would then be returned to the first server (as a string?), added to the response object, and presented in the browser.
Is this a common practice with HTTP?
Yes, it's a common practice. In fact, it works the same way when your web server needs to fetch some data from a database (not publicly exposed, i.e. not in the web server's DMZ, for example).
But you need to be able to use dynamic page generation, not static HTML (let's suppose your web server allows PHP or Java, for example):
- your page does the equivalent of an HTTP GET (or POST, or whatever you like) to your second server, sending any required parameters; you can use cURL libraries, fopen("http://..."), etc.
- it receives the result, checks the return code, and can also do optional content manipulation (like replacing some text or URLs)
- it sends the result back to the user's browser (see the sketch after this list).
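A minimal Node/Express sketch of those three steps (the answer itself is language-agnostic; the server names, parameter names and the use of Node 18+'s built-in fetch are assumptions for illustration):

```javascript
// Sketch: the first server checks the request, fetches the report from
// the second server over the intranet, and returns the markup to the browser.
const express = require('express');
const app = express();

app.get('/reports', async (req, res) => {
  // 1. Security check happens here, on the first server
  //    (placeholder check; use your real authentication).
  if (!req.headers['x-user']) {
    return res.status(403).send('Forbidden');
  }

  // 2. Server-to-server request to the report server; the sensitive
  //    parameters never appear in the browser.
  const upstream = await fetch(
    'http://server2/internal/reports?param1=value1&param2=value2'
  );
  if (!upstream.ok) {
    return res.status(502).send('Report server error');
  }

  // 3. The report markup comes back as a string and is sent on
  //    to the user's browser as part of this response.
  const html = await upstream.text();
  res.type('html').send(html);
});

app.listen(80);
```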
If you can't (or won't) use dynamic page generation, you can configure your web server to proxy some requests to the second server (for example with Apache's mod_proxy).
For example, when a request comes to server 1 for the URL "http://server1/reports", the web server proxies a request to "http://server2/internal/reports?param1=value1&param2=value2&etc".
The user will get the result of "http://server2/internal/reports?param1=value1&param2=value2&etc", but will never see where it came from (from his point of view, he only knows http://server1/reports).
You can do more complex manipulations by combining proxying with URL rewriting (so you can use some parameters of the request to server1 in the request to server2).
If it's not clear enough, don't hesitate to give more details (OS, web server technology, URLs, etc.) so I can give you more hints.
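For the mod_proxy variant, a hedged Apache fragment might look like this (the hostnames and paths are just the illustrative ones from the answer):

```apacheconf
# Sketch of the mod_proxy approach described above.
<IfModule mod_proxy.c>
    ProxyPass        "/reports" "http://server2/internal/reports"
    ProxyPassReverse "/reports" "http://server2/internal/reports"
    # Adding or rewriting query parameters (param1, param2, ...) would be
    # done with mod_rewrite rules using the [P] (proxy) flag.
</IfModule>
```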
Two other options:
- Configure the Internet-facing HTTP server with a proxy (e.g. mod_proxy in Apache)
- Leave the server as it is and add an application firewall
