I'm currently learning Node.js and loving it. I've noticed, however, that out of the box it seems fit for only one site. It's great for hosting mydomain.com, but what if I want to build an actual full web server with it? In other words, I would like to host mydomain.com, example.com, yourdomain.com, and so on. What solutions (modules) are available for this?
I was thinking of simply parsing the URL from the request object and reading from the appropriate directory: if I get a request for example.com, read from the example_com directory, or if I get a request for mydomain.com, read from the mydomain_com directory. The issue is that I don't know how this will affect performance and scalability.
I've looked into Multi-node, but I don't fully follow the idea of processes yet (I'm a Node beginner).
Any suggestions are welcome.
You can do this a few different ways. One way is to build it directly into your web application: check which domain the request was made to and route it within your application. Unless your application is very basic, though, this can make it fairly bloated and messy. A good time to do something like this might be if you're writing a blogging platform where everything is pretty much the same across all your domains; the key difference might be how you query your data to display the right content.
In this case you'd probably use the request to see which blog is being accessed.
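A minimal sketch of that in-app approach, with a made-up domain-to-directory mapping (the hostnames and directory layout are just assumptions for illustration):

// Sketch: route requests inside one Node process based on the Host header.
var http = require('http');
var path = require('path');

// Hypothetical mapping of domains to content directories.
var sites = {
  'example.com': path.join(__dirname, 'example_com'),
  'mydomain.com': path.join(__dirname, 'mydomain_com')
};

http.createServer(function (req, res) {
  // Strip any :port suffix from the Host header before the lookup.
  var host = (req.headers.host || '').split(':')[0];
  var root = sites[host];
  if (!root) {
    res.writeHead(404);
    return res.end('Unknown host');
  }
  // From here you would serve files from `root` or hand off to a per-site handler.
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Serving ' + host + ' from ' + root);
}).listen(80);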
If you want to host a few different domains on the same server, all using port 80 (like most websites do), you will want to proxy each request off to a different process. You can do this with nginx or even with Node itself; it all comes down to what best fits your needs. bouncy is a quick way to get set up doing this, as it's a Node.js module and has some pretty impressive benchmarks. nginx (proxying with nginx) is probably the most widely used method, though, as a lot of Node.js servers use nginx to serve static content anyway.
http://blog.noort.be/2011/03/07/node-js-on-nginx.html
https://github.com/substack/bouncy/
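For reference, a quick sketch of the bouncy approach, where each bounce() call forwards the raw connection to the process listening on that port (hostnames and ports here are placeholders):

var bouncy = require('bouncy');

bouncy(function (req, res, bounce) {
  if (req.headers.host === 'mydomain.com') {
    bounce(8001); // mydomain.com's app runs on this port
  } else if (req.headers.host === 'example.com') {
    bounce(8002); // example.com's app runs here
  } else {
    res.statusCode = 404;
    res.end('No such host');
  }
}).listen(80);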
You can use connect's vhost middleware (which is also available in express) to dispatch requests to separate request handlers based on the Host: header. This assumes that everything is being handled by the same node process on the same port; if you really need separate processes, then the suggestion about using nginx as a reverse proxy is probably the way to go.
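A minimal sketch of the vhost approach; note that older Express releases bundled this as express.vhost, while newer ones use the standalone vhost package (the domains and handlers below are placeholders):

var express = require('express');
var vhost = require('vhost'); // standalone module; formerly express.vhost

var mainApp = express();
mainApp.get('/', function (req, res) { res.send('mydomain.com content'); });

var otherApp = express();
otherApp.get('/', function (req, res) { res.send('example.com content'); });

// One process, one port; the Host: header picks the sub-app.
var server = express();
server.use(vhost('mydomain.com', mainApp));
server.use(vhost('example.com', otherApp));
server.listen(80);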
Related
Is it possible to use the same Node.js server for two or three different domains (aliases)? (I don't want to redirect my users; I want them to see the exact URL they typed in the address bar. However, all three domains are exactly the same!)
I want my users to be logged in on all three domains at the same time, in order to avoid any confusion.
What is the simplest way to do this and avoid cross-domain issues?
Thanks!
If you mean that all domains will serve the same Node.js app, then yes, you can do that.
If each domain should open a different application, however, you must have a reverse proxy running on the server to handle and manage the sites/vhosts.
You can install nginx and use it as a reverse proxy server, or look at http-proxy, a proxying library for Node.js.
If you would like to manage the vhosts inside your app, you can look at the vhost middleware for Node.js and use that.
Choose one of:
Use some other server (like nginx) as a reverse proxy.
Use node-http-proxy as a reverse proxy (see the sketch after this list).
Use the vhost middleware if each domain can be served from the same Connect/Express codebase and node.js instance.
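A minimal node-http-proxy sketch of the second option; the domain-to-port table is a made-up example:

var http = require('http');
var httpProxy = require('http-proxy');

// Hypothetical mapping: each domain's app runs on its own local port.
var targets = {
  'mydomain.com': 'http://127.0.0.1:8001',
  'example.com': 'http://127.0.0.1:8002'
};

var proxy = httpProxy.createProxyServer({});

http.createServer(function (req, res) {
  var host = (req.headers.host || '').split(':')[0];
  var target = targets[host];
  if (!target) {
    res.writeHead(502);
    return res.end('No backend for ' + host);
  }
  proxy.web(req, res, { target: target });
}).listen(80);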
This is a very broad question. Moreover, it is generally a pretty bad idea, SEO-wise, to have multiple independent domains that each serve the same content.
Logging in is generally done either through cookies or through extra parameters in the URL. Cookies are always domain-specific, for obvious security reasons. If you want folks to be logged in to all the domains at once, you can create an internal, purpose-built domain to handle authentication; it would never show in the URL bar and would only be used for HTTP redirects. That domain stores the login state for all the rest, and the other domains pick up the login state from it through those redirects.
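A rough sketch of that redirect flow with Express and cookie-parser; everything here (the domain names, token format, and signing helpers) is hypothetical, and a real implementation needs expiring tokens and constant-time comparison rather than this toy:

var express = require('express');
var cookieParser = require('cookie-parser');
var crypto = require('crypto');

var SECRET = 'shared-secret'; // assumption: the auth domain and the sites share this key

function sign(value) {
  return value + '.' + crypto.createHmac('sha256', SECRET).update(value).digest('hex');
}
function verify(token) {
  var i = token.lastIndexOf('.');
  return i > 0 && sign(token.slice(0, i)) === token ? token.slice(0, i) : null;
}

// auth.internal-sso.example (hypothetical): holds the real session for all domains.
var auth = express();
auth.use(cookieParser());
auth.get('/check', function (req, res) {
  var user = req.cookies.sso_user; // set at login time (not shown)
  var back = req.query.back;       // e.g. http://mydomain.com/sso
  res.redirect(back + '?token=' + encodeURIComponent(user ? sign(user) : ''));
});

// Each public domain bounces new visitors through the auth domain once,
// then sets its own domain-local cookie from the verified token.
var site = express();
site.use(cookieParser());
site.get('/sso', function (req, res) {
  var user = verify(req.query.token || '');
  if (user) res.cookie('user', user);
  res.redirect('/');
});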
In general, however, this sounds like too much trouble. Consider that, perhaps, some users specifically want to use different domains for different accounts, so, you'll effectively break their usage if you mandate that a single login be used for all of them. And, back to the original point, doing this is pretty bad for SEO, so, just don't do it.
Can somebody explain the architecture of this website (link to a picture) to me? I am struggling to understand the different elements in the front-end section, as well as the fields on top, which seem to be related to AWS S3 and CDNs. The backend section seems clear enough, although I don't understand the memcache. I also don't get why an nginx proxy is needed in the front-end section, or why it is there.
I am an absolute beginner, so it would be really helpful if somebody could just once talk me through how these things are connected.
Memcache is probably used to cache the results of frequent database queries. It can also be used as a session store so that authenticated users' sessions work consistently across multiple servers, eliminating the need for server affinity (memcache is one of several ways of doing this).
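To make the query-caching idea concrete, here is a hedged sketch using the memcached npm package; the key naming and the loadUserFromDb helper are assumptions:

var Memcached = require('memcached');
var memcached = new Memcached('127.0.0.1:11211');

// Hypothetical helper that runs the real (slow) database query.
function loadUserFromDb(id, callback) {
  // ... SELECT ... FROM users WHERE id = ? ...
  callback(null, { id: id });
}

function getUser(id, callback) {
  var key = 'user:' + id;
  memcached.get(key, function (err, cached) {
    if (!err && cached) return callback(null, cached); // cache hit: skip the DB
    loadUserFromDb(id, function (err, user) {
      if (err) return callback(err);
      memcached.set(key, user, 60, function () {}); // cache for 60 seconds
      callback(null, user);
    });
  });
}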
The CDN on the left caches images in its edge locations as they are fetched from S3, which is where they are pushed by the WordPress part of the application. The CDN isn't strictly necessary, but it improves performance by caching frequently-requested objects closer to where the viewers are, and it lowers transport costs somewhat.
The nginx proxy is an HTTP router that selectively routes certain path patterns to one group of servers and other paths to other groups of servers. It appears that part of the site is powered by WordPress, part of it by Node.js, and part of it is static React code that the browsers need to fetch; this is one way of separating the paths behind a single hostname and routing them to different server clusters. Other ways to do this (in AWS) are the Application Load Balancer and CloudFront, either of which can route to a specific server based on the request path, e.g. /assets/* or /css/*.
I am looking for a way to dynamically route requests through a proxy webserver. I will explain exactly what I need and what I have found so far.
I would like to have some lightweight webserver (I'm thinking about Node.js or nginx) set up as a proxy webserver with a public IP. It would route requests to different local webservers based on URLs, and not only on the hostname but on the full URL.
My idea is that this proxying webserver would use either a local memory cache, memcached, or redis to look up key-value information mapping URLs to local webservers.
I have found these projects:
https://github.com/nodejitsu/node-http-proxy
https://www.steve.org.uk/Software/node-reverse-proxy/
https://github.com/hipache/hipache
They all seem to do similar things, but not exactly what I am looking for, which is:
URL based proxying (absolute URLs routing to different local webservers)
use of memory based configuration storage / cache
dynamically change configuration using API without reloading proxy webserver
Is there any better-suited project, or is there a way to configure one of the three projects above to fit my requirements?
Thank you for your time and effort in advance.
I think this does exactly what you want: https://openresty.org/en/dynamic-routing-based-on-redis.html
It's basically nginx with precompiled modules. You can set up the same thing yourself with nginx + the Lua module + Redis (plus, of course, the necessary Lua rocks). OpenResty just makes it easier.
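If you'd rather stay in Node, the same idea can be sketched with node-http-proxy plus a Redis client; the route:<host><prefix> key scheme here is a made-up convention, and changing a key in Redis changes the routing with no proxy restart:

const http = require('http');
const httpProxy = require('http-proxy');
const { createClient } = require('redis');

const proxy = httpProxy.createProxyServer({});
const redis = createClient(); // assumes Redis on localhost:6379

async function main() {
  await redis.connect();
  http.createServer(async (req, res) => {
    // Hypothetical key scheme: route:<host><first path segment>
    const host = (req.headers.host || '').split(':')[0];
    const prefix = '/' + (req.url.split('/')[1] || '');
    const target = await redis.get('route:' + host + prefix);
    if (!target) {
      res.writeHead(502);
      return res.end('No route for ' + host + prefix);
    }
    proxy.web(req, res, { target: target }); // e.g. "http://127.0.0.1:3001"
  }).listen(80);
}
main();

Repointing a URL at a different backend is then just a matter of SET route:example.com/app http://127.0.0.1:3002 in redis-cli; no reload of the proxy is needed.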
We currently use Node.js to handle all our AJAX calls, which works brilliantly, but we (unfortunately) still rely on Perl to draw inline page content when absolutely necessary (for instance, when Facebook or some other third-party site needs to call our site and read specific meta tags that are generated from the DB), as well as to handle file uploads and things of that nature.
I am trying to figure out what I'd need to do to have my web server (Apache) reach out to Node.js directly, so as to replace Perl in our mix.
Would really appreciate a pointer at some documentation that explains how to set this up or (less preferably) a response saying that at present such a configuration is not possible.
I'm rather confused by most of your question. My interpretation is that you have some URLs which you want to handle with Node.js and some that you want to handle with Apache+Perl.
The simplest solution is to make the 2 servers available at different URLs, e.g.
http://www.example.com:80 for the node.js
and
http://www.example.com:81 for the apache + perl
Alternatively, you could use one of the servers as a proxy for the other, based on a particular prefix in the path. Which way around you put the servers has a lot of implications for the performance profile of the system, but the safest solution is to use Node.js at the front end. It should be straightforward to implement this (although I'm not an expert on Node.js). It's also quite possible to do this the other way around, with Apache at the front using mod_rewrite (and optionally mod_proxy if you want caching):
RewriteEngine on
RewriteRule ^/?ajax/(.*)$ http://127.0.0.1:82/$1 [P]
(you'll need mod_proxy enabled, and a ProxyPassReverse directive to clean up any redirects returned by the Node.js server)
Another approach would be to run a content director in front of both servers, although this adds additional overhead. For example, using nginx:
location /perl/ {
proxy_pass http://127.0.0.1:81;
}
location /ajax/ {
proxy_pass http://127.0.0.1:82;
}
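For completeness, the Node.js side of either setup is just an ordinary HTTP server bound to the loopback port the proxy targets (82 in the examples above); a minimal sketch:

var http = require('http');

// Listens only on loopback; the front-end proxy (Apache or nginx) is the
// only thing that should reach this port directly.
http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ path: req.url }));
}).listen(82, '127.0.0.1');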
If you mean that you want all requests to be handled by Node.js, with some of the responses partially made up from data accessible via Apache+Perl, then that's a lot more complex.
If you really love Perl like me, you may like this module:
npm install exec_perl
see: https://github.com/tlqtangok/exec_perl
I have a CppCMS-based application, and I can't use IIS's FastCGI connector, as it is broken for my use case. I therefore want to try to use the internal HTTP server, which is designed for debug purposes, behind IIS.
It is quite a simple web server for an application that handles basic HTTP/1.0 requests and does not care too much about security issues like DoS, file serving, and more.
So I'd like to know if it is possible to use IIS in front of such an application, such that it would:
Sanitize all requests - ensure that they are proper HTTP
Handle all DoS issues like timeouts
Serve the static files.
Is this something that can be configured and done at all?
I would suggest this is the wrong way of doing it. I would use a web server like nginx to proxy the requests through to the backend server. It is very configurable, and you will find a lot of articles about doing it with Apache.
We just did something like this. You want the URL Rewrite module. You can use it to sanitize the URLs; however, it isn't going to sanitize the payload. Which is to say, you can make sure that the URLs that hit your box are very specific ones, e.g. not attempts to hit CGI, but you can't use it to make sure that the contents of an upload are safe.
ModSecurity is out for IIS now; it can handle a lot of the security-related issues.