So I am building a website using Node.js, with Nginx as a reverse proxy in front of my app/apps. I will be using Jade, sharing some layouts between subdomains and displaying specific content according to the subdomain. Despite a lot of research, I am still trying to figure out the best way to structure the app. Is it best to run each subdomain as a separate app on the same server, or can I link them together as one app? Please share your ideas and suggestions so I can make a decision and begin coding :)
The main issue with using the same domain across multiple apps is security with regard to cookies. If the apps are meant to be independent, you might want to ensure that a vulnerability in one app cannot affect your other apps.
Otherwise, with nginx there is really no limitation on your setup, however you decide to go. You can use nginx to easily join or split multiple domains, ports, and servers into whatever arrangement you wish.
Whether you decide to go with multiple domains or multiple paths on a single domain has more to do with what kinds of apps you have in mind, and how logically separate they appear to be from one another. With the help of the rewrite directive, even if you make a "wrong" choice initially, you can always fix it later on (preserving all existing links flawlessly), pretty much without any ill effect.
I am running multiple web applications (totally separated in different folders and running on different ports) on a server with nginx as a proxy for the different subdomains. However, if you want multiple subdomains for a single application, the best way is to structure it by URL.
For example, you have mysite.com/books, but you want books.mysite.com to be the go-to domain for books. You set up a proxy in your nginx config to redirect traffic from mysite.com/books to books.mysite.com.
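On the "one app" side of the original question, here is a minimal sketch of a single Express app serving every subdomain while sharing Pug (the renamed Jade) layouts. It assumes nginx forwards the original Host header (proxy_set_header Host $host), and all names and paths are hypothetical:

const express = require('express');
const app = express();

app.set('view engine', 'pug'); // Pug is the renamed Jade

// For books.mysite.com, req.subdomains is ['books'].
app.use((req, res, next) => {
  req.site = req.subdomains[0] || 'www';
  next();
});

app.get('/', (req, res) => {
  // views/books/index.pug, views/www/index.pug, etc. can all
  // "extends ../layout" to share one layout between subdomains.
  res.render(`${req.site}/index`, { site: req.site });
});

app.listen(3000); // nginx proxies all subdomains to this one port

Whether this single-app layout or fully separate apps is better comes down to the cookie and isolation concerns described above.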
I have a single back-end running Node/Express providing API endpoints, and two static (React) front-ends. The front-ends interact with the users and communicate with the back-end.
I need to use HTTPS throughout once I enter the production stage.
The front-ends will require their own domain names.
I’ve been thinking about the simplest way to configure these and have come up with Option 1 (see diagram): the Node.js API server runs on one VPS, and as the front-ends are static sites, these can be loaded on separate servers (UPDATE: meant to say hosting providers) and hence get their own domains. As an option, and I’m unsure if it’s needed, add Cloudflare in front of the front-ends to provide a layer of security.
This will allow front-ends to have separate domain names.
As this is a start-up project and I doubt there will be a large number of visitors, I’m wondering if the above is over-engineered and unnecessarily complicated.
So I’m considering Option 2: hosting the back-end API app and the two front-ends on the same Linux VPS. As the front-ends are static, I added them to the Node.js app's public folder, which allowed me to access the front-ends as http://serverIP:8080/siteA.
As I want to access each front-end as http://siteA.com, I’m assuming I require a reverse proxy (nginx).
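To illustrate what I mean by Option 2, here is a rough sketch of serving everything from one Node process, assuming Express and the vhost middleware package; the domain names and folder paths are placeholders. A reverse proxy in front would still be the usual way to get everything onto ports 80/443:

const express = require('express');
const vhost = require('vhost');
const path = require('path');

const app = express();

// Each static front-end is served only for its own hostname.
app.use(vhost('sitea.com', express.static(path.join(__dirname, 'public/siteA'))));
app.use(vhost('siteb.com', express.static(path.join(__dirname, 'public/siteB'))));

// The API, reachable only via its own hostname.
const api = express();
api.get('/status', (req, res) => res.json({ ok: true }));
app.use(vhost('api.sitea.com', api));

app.listen(8080);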
The questions to help me decide between the two options are:
For a start-up operation, and given the above scenario, which option is best?
I understand that Node.js requires a port number to work regardless; for the API I don’t mind having a port number (as it’s not visible to end users, i.e. http://10.20.30.40:3000). However, the two front-ends require their own domain names (www.siteA.com, www.siteB.com), so will I need to employ a reverse proxy (nginx) regardless of whether they are static sites or not?
I’m concerned that someone could attack the API endpoints directly (http://10.20.30.40:3000). Is it true that Option 2 is safer than Option 1 in this respect - that because all sites are hosted on the same VPS, I could block malicious direct API calls and the API could easily be secured so it is not exposed to the outside world?
My developer once told me that Option 1 is best, as nginx adds unneeded complication, but I’m not sure what he meant - hence my confusion. To be honest, I don’t think he wanted to add nginx to the server.
I’m looking for high-level guidance to get me on the right track. Thanks!
This is - as you have also suspected - unnecessarily complex, and incorrect in some cases. Here's a better (and widely used across the industry) design. I strongly recommend dropping the whole VM approach and going for a shared computing unit, unless you are using that machine for some other computation and utilizing it that way saves your company a lot of money; I strongly doubt this is the case. Otherwise, you're just creating problems for yourself.
One of the most common mistakes you can make when using Node.js is hosting the static content through the public folder (for serious projects). Don't. Use a CDN instead: depending on the CDN, you'll get better telemetry, redundancy, faster delivery, etc. If you aren't expecting high volumes of traffic and the performance of delivering that static content isn't outrageously important at the moment, you can even go for a regular hosting server. I've done this with Namecheap and GoDaddy before.
Use shared (or dedicated, depending on size) Node.js hosting for your app, and use CI/CD to deploy it. You can use CNAME records to map whatever domain name you want your app on (e.g. https://something.com) to the cloud-hosting provider's URL for your app. I've used multiple platforms - Azure, Heroku, and Namecheap for the apps, and primarily Azure DevOps to manage the CI/CD pipeline, although Jenkins is super popular as well. I'd recommend Heroku, since it provides a super easy setup.
When designing any API on HTTP, you should assume people will call your API directly. See this answer for more details: How to prevent non-browser clients from sending requests to my server. I'm not suggesting you put something like Cloudflare in front - you may be overthinking it; look at your traffic first and get it when you need it. As long as you have the right authentication/authorization mechanisms in place, the security of the API shouldn't be a big problem on these platforms. If you deploy on one of these platforms, you won't have to deal with ports either. Unless you reach absolutely massive scale, it will definitely be cheaper for you to operate with high reliability this way.
You won't need to deal with nginx anymore.
I am trying to create a web app with Node.js, and this app will have a different profile for different users.
When a user signs up at "www.site.com/signup", it should create a personal URL for that user, e.g. "user_name.site.com".
What you are looking for is called a subdomain. Subdomains are not handled at the application level. You need to add an A record in your DNS for every subdomain (or a single wildcard record such as *.site.com). Usually this is done using the API provided by your domain provider (or wherever your nameservers are located). Then you'll need to proxy each subdomain to your application using some other web server, like Apache or nginx.
The solution depends on:
Who your domain provider is.
What web server you're using (if any). Apache, nginx, etc.
The OS of the server.
And probably a lot more depending on your specific use-case.
Essentially, what you're looking for isn't quite straightforward and will probably involve a ton of work to get right and stable. There are many ways to do this, and it really depends on the rest of your technology stack. Not much of this actually has anything to do with Node.js.
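That said, the application-level part is simple once the wildcard DNS record and the proxy are in place: the app reads the username from the Host header. A minimal sketch assuming Express (all names are hypothetical, and trust proxy may be needed behind nginx/Apache so req.hostname reflects the original host):

const express = require('express');
const app = express();

// Honour X-Forwarded-* headers set by the proxy in front.
app.set('trust proxy', true);

app.use((req, res, next) => {
  // user_name.site.com  ->  req.subdomains is ['user_name']
  const sub = req.subdomains[0];
  req.profileUser = sub && sub !== 'www' ? sub : null;
  next();
});

app.get('/', (req, res) => {
  if (req.profileUser) {
    return res.send(`Profile page for ${req.profileUser}`);
  }
  res.send('Main site');
});

app.listen(3000);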
I'm using Drupal 8. Multiple sites sharing a single codebase. One .htaccess file for all.
I am receiving the same "page not found" error across all sites. Hackers attempting to break into the site, presumably.
For example, someone tries to visit https://domain1/wp-admin/admin-ajax.php and https://domain2/wp-admin/admin-ajax.php ... different domain names, but always the same addresses.
Other addresses include /phpmyadmin/scripts/setup.php and /1/wp-includes/wlwmanifest.xml and so on.
Using .htaccess, what is the most efficient means of redirecting all of these to an internal or external site so that my pages are not even served?
Thank you!
So, the way Drupal and the web server work together is that when a request arrives, if it matches a ServerName and document root that point to Drupal, the web server hands the request over to Drupal to handle.
So you have to ask whether the request is destined for Drupal, and if so, handle the redirect in Drupal (probably using the Redirect module).
If you want to set it up at the web-server level and you have access, or you are using .htaccess, then something like:
RedirectMatch ^/wp-admin/(.*)$ http://example.com/404/$1
Note that there are plenty of other ways to write the above, but this is the simplest and lightest.
I think this is a very common issue to do with CMS vulnerabilities and hosting security. Security is not something that can be handled by a single static action, because there is always a new vulnerability. So be careful to always run:
composer update
to always have the latest bug fixes and security updates, especially when you use modules like Webform. At the moment Drupal offers more than one module for better securing your app, and in your case you need to identify the IP addresses used by hacking robots and block them using Perimeter.
The good news is that the community around Drupal is very concerned about security. For further reading and securing Drupal you can use the modules below, but note that the more modules you install, the more performance issues you will have:
https://www.drupal.org/project/clamav
https://www.drupal.org/project/file_upload_secure_validator
https://www.drupal.org/project/key
https://www.drupal.org/project/csp
https://www.drupal.org/project/noopener_filter
https://www.drupal.org/project/hsts
https://www.drupal.org/project/securelogin
...
I also recommend using fast 404/403 Drupal error pages, so that serving that kind of page does not touch the database or run more code than necessary.
I want to make the move to Webflow for a client project that requires a CMS. I would like some more information about the logistics and best practices of adding domains, though.
Say, for instance, I have a client’s home page and a blog hosted on Webflow, accessed via their custom domain. What if I still need to host additional files and other pages on a traditional hosting platform with cPanel?
Would it be best to point www.clientwebsite.com to Webflow and keep clientwebsite.com pointing at the traditional host, with a 301 redirect to www.clientwebsite.com?
That way I could still have pages on the traditional host, for example clientwebsite.com/page.html, while being able to add additional pages on Webflow, e.g. www.clientwebsite.com/page.html.
Basically, I want to be able to use the same domain name on both Webflow and traditional cPanel hosting. What is the best way to do this? Is there a better way to achieve it, and is there anything to be careful of or that would be considered bad practice?
Thanks in advance
Typically, one hostname will resolve to one IP address, so one hosting platform has to be the entry point. If the sites can be logically separated, you should probably just use different hostnames (blog.example.com, www.example.com, something.example.com) and point them at different hosting platforms.
If you need content from the two platforms served under the same hostname, then one platform will be the entry point, and it will have internal rewrite/proxy rules to fetch and serve content from the other. This is easily doable in all modern web servers (nginx, Apache, ...), but I am not sure your CMS platform will allow it.
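As a rough Node illustration of that entry-point idea (a real setup would normally do this with nginx/Apache proxy rules; the hostname and path prefix here are made up):

const http = require('http');

http.createServer((req, res) => {
  if (req.url.startsWith('/legacy/')) {
    // Relay this path from the other hosting platform.
    const upstream = http.request(
      {
        host: 'cpanel-host.example.com',
        path: req.url,
        method: req.method,
        headers: { ...req.headers, host: 'cpanel-host.example.com' },
      },
      (upstreamRes) => {
        res.writeHead(upstreamRes.statusCode, upstreamRes.headers);
        upstreamRes.pipe(res);
      }
    );
    req.pipe(upstream);
  } else {
    // Everything else is served by the entry-point platform itself.
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Served by the entry point');
  }
}).listen(8080);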
I have two domains, abc.com and xyz.com, pointing to the same Node.js server. Based on the domain, I want to load the configuration that will persist for that domain, i.e. for xyz.com I want to connect to Database1 and for abc.com I want to connect to Database2.
How do I go about doing this? Is it even possible or recommended to do so?
I started by loading the configuration on the first request, getting the hostname from req.hostname. Is there a better way to do this?
There are multiple strategies:
1. Deploy the same code multiple times but on different ports.
Your reverse proxy sends each request to the correct server. I currently do this to host multiple Ghost blogs on the same VPS: one runs on port 3000, another on 3010. (A concrete sketch follows the pros and cons below.)
Pros - Less fragile setup, easier scaling, and the application need not be aware of its operational environment. If one domain comes under attack, the other is not automatically a casualty.
Cons - Might not be possible in resource-constrained environments. Deployments can involve repetitive work.
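To make strategy 1 concrete: the same code simply reads its port from the environment and is started once per domain (a sketch; the file name and ports are examples):

const express = require('express');
const app = express();

app.get('/', (req, res) => res.send('One deployment per domain'));

// Started once per domain behind the reverse proxy, e.g.:
//   PORT=3000 node server.js
//   PORT=3010 node server.js
app.listen(process.env.PORT || 3000);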
2. Read the hostname
A great option if the feature set is pretty much the same and only the domain name changes. You read the configuration, as you stated, depending on the hostname (sketched after the pros and cons below).
Pros - Easier deployment; a great option for resource-constrained environments.
Cons - Unnecessarily tight coupling, all domains become unavailable in case of server errors, and scaling could be an issue.
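A sketch of strategy 2, close to what you describe: a lookup table keyed by req.hostname, resolved on every request rather than only the first one (the domain and database names are your placeholders):

const express = require('express');
const app = express();

// Per-domain configuration, loaded once at startup.
const configs = {
  'abc.com': { database: 'Database2' },
  'xyz.com': { database: 'Database1' },
};

app.use((req, res, next) => {
  req.siteConfig = configs[req.hostname];
  if (!req.siteConfig) {
    return res.status(404).send('Unknown host');
  }
  next();
});

app.get('/', (req, res) => {
  res.send(`Using ${req.siteConfig.database}`);
});

app.listen(3000);

In practice you would also open both database connections at startup and pick one per request, rather than connecting lazily on the first request.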
Personally, I prefer deploying on different ports unless and until the code genuinely depends on hostnames. If you are building a product where a unique identifier needs to be present in the URL, like mycompany.slack.com, then handling subdomains in DNS might be the better idea.