Create multiple front-ends hitting same data source - web

I want to create and host 4-5 websites using the same database. The only difference between the sites will be:
branding (colours and header)
data will be filtered per website (through a SQL query), and
each site will be on a separate domain (but can be hosted on the same server)
My first thought was to use an API/REST model and provision five front-ends, each on its own sub-domain. But since the sites can be hosted on the same server (I'm assuming one hosting account that enables multiple sub-domains), I think I can simply connect all sites to the same database via a connection string, avoiding the complexity of using REST.
Is this possible, and would I run into database conflicts doing this?
If, later, I wanted to add a mobile app client, would I need to build out a REST interface anyway?
Thanks

The right thing to do here depends a lot on your specific use case, expected load, preferred backend/edge technology, future plans, etc.
Site domains and servers -
The main point here is that you can host your domains/subdomains on the same or different servers. You simply need to update the DNS to point to the correct IP (update the subdomain's A record).
Note: If these sites are all public-facing, then I highly recommend using an edge/proxy server, and even consider a load balancer depending on the expected number of visitors (nginx or Apache HTTP Server, for example).
Decoupled architecture is almost always preferred -
I would definitely have an API/REST layer to abstract the database from the sites. This ensures that you establish a contract through which any client can interact with the backend, including your mobile application. You also don't have to duplicate DB-specific code across the various clients. What if you decided to change your schema? Or even your database solution? Then every client would break and your customers would be unhappy. As a guiding principle, ask: if I change any one thing in my architecture, how many other things will need to change as a result? In terms of scalability, this architecture will also allow you to easily spin up more instances of whatever you need (databases, the REST service, etc.) should the need arise.
How do I build and deploy a REST API?
Re: #2, to set up a simple custom REST service running on Node.js (and express), this is a good tutorial. The example also walks through setting up and integrating with an in-memory MongoDB database.
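For illustration, here is a minimal sketch of what such a service could look like, using Express with an in-memory array standing in for the shared database (the route name, port, and per-site filter field are assumptions for illustration, not the tutorial's exact code):

```javascript
// Minimal REST sketch: one shared data source, filtered per site (illustrative only)
const express = require('express');
const app = express();

// In-memory stand-in for the shared database
const articles = [
  { id: 1, site: 'site-a', title: 'Hello from site A' },
  { id: 2, site: 'site-b', title: 'Hello from site B' },
];

// Each front-end asks only for its own slice of the data, e.g. GET /api/articles?site=site-a
app.get('/api/articles', (req, res) => {
  const rows = articles.filter(a => a.site === req.query.site);
  res.json(rows);
});

app.listen(3000, () => console.log('REST API listening on port 3000'));
```

Each branded front-end (and later the mobile app) would call this endpoint rather than holding its own database connection string.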
Database collisions?
If you follow the above steps, this should be a moot point. Node.js/express and the databases expose ways to configure connection pools if the defaults do not suffice. Again, this will depend on your needs - how many concurrent users you expect.
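For instance, assuming a MySQL database and the mysql2 package (the host, credentials, and pool size below are placeholders), a shared connection pool might be configured like this:

```javascript
// Illustrative connection pool configuration (assumes MySQL and the mysql2 package)
const mysql = require('mysql2/promise');

const pool = mysql.createPool({
  host: 'localhost',        // placeholder host
  user: 'app_user',         // placeholder credentials
  password: 'secret',
  database: 'shared_db',
  connectionLimit: 10,      // tune to the number of concurrent users you expect
  waitForConnections: true, // queue requests rather than failing when the pool is busy
});

// All sites' requests go through the same pool, so connections are reused safely
async function getArticlesForSite(site) {
  const [rows] = await pool.query('SELECT * FROM articles WHERE site = ?', [site]);
  return rows;
}
```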

Related

Hosting NodeJS service application

I would like to build a service using NodeJS. However, this question is more of an architectural nature. Let's say I have 2 companies with their own network security. Company A has a SQL Server instance, while Company B would host the NodeJS service application. In order to get data, the NodeJS service has to go to the SQL Server instance in Company A. Is this considered "bad practice"? If that's the case, what's the alternative? As a note, there is also the option of connecting to the SQL Server instance from AWS.
From an architectural standpoint, it's definitely not desirable for an application to access a database through multiple network layers (potentially over the Internet), for several reasons: latency overhead, security (possibly), and management overhead (if the DB is owned by another company).
Generally, the DB should be as close as possible to the app, because usually it's the main bottleneck of a system, and it will impact the throughput of the application at some point.
However, the right answer here depends on the requirements of your app. If the traffic volumes are not very big and the performance hit is acceptable, then you can use that approach (with all the pros and cons it entails).
Ideally you should not do this. Instead, you could set up a replica of the database on your application's network and sync the replica over a VPN connection.
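If you do end up connecting directly across the networks, a minimal sketch of the connection from Node.js might look like this, assuming the mssql package (hostname, credentials, and query are placeholders):

```javascript
// Illustrative cross-network SQL Server connection (assumes the mssql package)
const sql = require('mssql');

const config = {
  server: 'db.company-a.example.com', // placeholder host reachable over the VPN/Internet
  database: 'SalesDb',                // placeholder database
  user: 'svc_nodeapp',
  password: 'secret',
  options: {
    encrypt: true,                    // encrypt traffic that crosses network boundaries
  },
  pool: { max: 10 },                  // keep the pool small to limit cross-network connections
};

async function getRecentOrders() {
  const pool = await sql.connect(config);
  const result = await pool.request().query('SELECT TOP 10 * FROM Orders ORDER BY CreatedAt DESC');
  return result.recordset;
}
```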

Splitting load of an API between multiple servers

I'm planning to build an API for one of my projects. But I'm looking for a good way to manage it, and manage server load.
Would I be better off just creating everything on one server, or should I create multiple?
Thoughts:
If I create one server and that server crashes, the whole system would go down. But if I create multiple servers to handle this, and one of them crashes, only that part would go down.
How I was thinking to accomplish this:
1) Create one API ENDPOINT
2) When a user sends a REQUEST to that API ENDPOINT, the ENDPOINT would send another request to the correct server for that specific task; when the task is done, the data would be returned to the user.
AKA:
User => ENDPOINT => ENDPOINT 1, ENDPOINT 2, ENDPOINT 3, => ENDPOINT => User
Is this how I should do it?
P.S. I don't know if this is the right terminology, but I'm trying to learn how to scale my ENDPOINTS/API/code.
For the load balancer, you should use a dedicated web server application such as nginx or Apache. These web servers already implement load-balancing mechanisms; you just need to configure them.
Also, I recommend packaging your server in Docker images. That way you can use Docker Swarm or Kubernetes to deploy and scale your application up and down. It makes it easier to manage your services, check application state and deploy new versions.
You could use Docker with nginx, where each container runs an instance of your application and nginx takes care of distributing requests between the instances.
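To make the mechanism concrete, here is a minimal sketch of the round-robin behaviour such a proxy provides, written with the http-proxy package (the instance ports and listening port are assumptions; in practice nginx or a similar server would do this for you):

```javascript
// Illustrative round-robin reverse proxy (assumes the http-proxy package)
const http = require('http');
const httpProxy = require('http-proxy');

// Assumed upstream app instances (e.g. Docker containers publishing these ports)
const targets = ['http://127.0.0.1:3001', 'http://127.0.0.1:3002', 'http://127.0.0.1:3003'];
let next = 0;

const proxy = httpProxy.createProxyServer({});

http.createServer((req, res) => {
  // Naive round-robin: rotate through the upstream instances
  const target = targets[next];
  next = (next + 1) % targets.length;
  proxy.web(req, res, { target }, () => {
    res.statusCode = 502;          // chosen instance is down or unreachable
    res.end('Upstream unavailable');
  });
}).listen(8080, () => console.log('Load balancer listening on port 8080'));
```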
What you are basically looking for is a comparison between microservices based architecture (or SOA) and a monolith.
In microservices, there are multiple services performing specific tasks. They are combined to perform complex tasks. A monolith, on the other hand, is one big server which does everything and is also the single point of failure, as you pointed out.
Should you move to microservices?
It is widely agreed that a project should start as a monolith and be moved to microservices as the complexity grows. Martin Fowler's article explains this concept well.
This is because there are certain disadvantages and tradeoffs associated with this architecture -- inconsistency and latency, for instance.
TLDR; Stick to one server if starting, break into services when it becomes complex.

Sails.js (Node.js) server architecture, scaling and performance

I want to create Sails.js (Node.js) server app, which will provide API for single-page-app. This server will consist of multiple modules:
user management
forum
chat
admin GUI
content management
payment gateway
...
All these modules will share one database. The server must be able to handle as many requests and web sockets as possible. Clean architecture and performance are my primary goals.
My questions:
Should I create multiple servers running on multiple ports? I mean, one server for the content management module, another server for the forum module, and so on.
Or is it better to create one big universal server, which consists of multiple separate modules (hooks in Sails.js) and runs on one port? Will the server's performance decrease in this case?
I was thinking about vertically scaling one big universal server, running on a single port with pm2. Or is it better to scale Node.js horizontally and split the server into multiple smaller servers?
I'm new to Node.js, so I appreciate any advice.
I think it really boils down to the scale of the project.
For very simple things there's no real reason to scale past a single, reliable server, is there?
However, for broader projects with a resource-intensive back-end and a lot of users and traffic, you may want to split the back-end and front-end aspects depending on the requirements.
In that case you might have a single server (or more) dealing with specific administrative requests or routines, then have the client/user API running through a load balancer and spread across multiple servers in multiple regions, or break it down further into an auto-scaling group to accommodate fluctuations in traffic.
It's worth noting that this approach really suits higher volumes of traffic or resource usage, since you're dedicating server infrastructure to the purpose. For smaller applications with infrequent usage, breaking things down into microservices from the start and being billed for runtime rather than dedicated infrastructure might make more sense; you could take a look at the AWS API Gateway and Lambda services for more information on that (I am not affiliated with AWS in any way, I just appreciate what they have managed to put together there).
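If you do start with one universal Sails app and scale it vertically with pm2 (as the question suggests), a minimal cluster-mode configuration might look like the sketch below; the app name and entry point are assumptions:

```javascript
// Illustrative ecosystem.config.js for pm2 cluster mode (app name and entry point assumed)
module.exports = {
  apps: [
    {
      name: 'sails-app',     // placeholder app name
      script: 'app.js',      // Sails' conventional entry point
      exec_mode: 'cluster',  // run several workers behind pm2's built-in load balancing
      instances: 'max',      // one worker per CPU core
      env: {
        NODE_ENV: 'production',
      },
    },
  ],
};
```

Started with `pm2 start ecosystem.config.js`, this uses every core of the machine without changing the application code.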

accessing updates from database to my application

I would like to know how to get data from a MySQL database to my application without using any REST API or PHP code. I was looking over the internet for a solution to this problem, but the answers say you can use PHP code as a REST API and then communicate with the database. For that I would need a host and a domain, which I don't want to use. Is there any other way to communicate with a MySQL database? Can I use the MySQL module of Node.js in a Titanium application?
There is no way to have a direct connection between your mobile client and a MySQL database. To retrieve data from MySQL you need to build an application that receives a request from your app, retrieves data from MySQL, processes it and returns it as a response.
If you don't want to build the mobile and server applications at the same time, you can try the Appcelerator Cloud service, which plays really nicely with the Titanium SDK and allows you to persist user data.
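For illustration, such a middle tier can be very small; here is a minimal sketch in Node.js, assuming Express and the mysql2 package (the endpoint, table, and credentials are placeholders):

```javascript
// Illustrative middle tier the Titanium app would call over HTTP (names are placeholders)
const express = require('express');
const mysql = require('mysql2/promise');

const app = express();
const pool = mysql.createPool({
  host: 'localhost',   // assumes MySQL runs next to this service
  user: 'app_user',
  password: 'secret',
  database: 'appdb',
});

// The mobile app requests GET /items instead of talking to MySQL directly
app.get('/items', async (req, res) => {
  try {
    const [rows] = await pool.query('SELECT id, name FROM items');
    res.json(rows);
  } catch (err) {
    res.status(500).json({ error: 'Database error' });
  }
});

app.listen(3000);
```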
There are two answers to this problem, depending on your situation:
If Your Data Is Specific to One Device...
If you want to store data locally on one device, and that one device is the only one that will ever use it, then you want to use a SQLite database. This is very commonly used in mobile apps, and is very well documented. If you already have a MySQL database with the schema you want to use, then you could really easily convert it to a SQLite db file.
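As an example of the local option, here is a minimal sketch using Titanium's built-in SQLite support (the database and table names are placeholders):

```javascript
// Illustrative local SQLite usage in a Titanium app (names are placeholders)
var db = Ti.Database.open('myappdb');   // creates the database file if it does not exist
db.execute('CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)');
db.execute('INSERT INTO items (name) VALUES (?)', 'example');

var rows = db.execute('SELECT id, name FROM items');
while (rows.isValidRow()) {
  Ti.API.info(rows.fieldByName('name')); // log each row's name
  rows.next();
}
rows.close();
db.close();
```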
If Your Data Is Centralized...
If you need to store data remotely, in one central place, that the mobile app can access, then you need to use a remote database.
MySQL is one such option. You say that hosting PHP (which is itself run through something like Apache or IIS) is not something you want to do. But if you can host MySQL somewhere, or run it on a machine that your mobile app can access, then you can also easily host PHP and Apache.
If you don't want to spend money on a domain, then use one of the free dynamic DNS providers, which map a domain name (such as foo.hopto.org) to an IP address. If you don't want to pay for a server, then use your home computer and keep it on whenever the mobile app needs to access it. There are easy, well-documented ways around any of the issues you're having.
Alternatively, as #daniula pointed out, use Appcelerator Cloud Services. Then you can interact with simple objects, and they'll be stored for you in a central server. You can control who can access what data, and more. (Full disclosure -- I work for Appcelerator.)

What are my options when it comes to node.js lifecycle?

Are there any examples or conventions out there of how to use node.js to host multiple web apps?
I'm already aware that node itself can be used to build a server, but I'm curious as to whether there have been implementations where you aren't necessarily running it all the time. Strictly for the reason that perhaps there are multiple sites being hosted, each with their own copy of a framework, static files and custom functionality.
Or maybe you do run one instance of node and code a multiple-site architecture to ensure one bad site doesn't take the server down in some way?
Virtual hosts, ensuring that one site can't crash the others... these are all things that have been considered on other platforms, but I have had some difficulty finding the equivalents for node! :)
I am already aware of connect, express and other middleware, however it doesn't cover what I'm asking here.
If you're worried about runtime isolation, each "site" should run its own node process. Then use a proxy like node-http-proxy that will do host-header-based routing. Another great node-based option is bouncy, but you don't necessarily need to use node to do the host-based routing. You could just as well use haproxy, nginx, etc.
The baseline RAM overhead of each node process is very small (~10mb - 15mb). Also, if you do HTTP based routing you can spread your sites easily across machines, user home directories, etc.
If you want to handle the site/host registration programmatically, I would use seaport and then communicate the hostname and host + port details back to the proxy so that the routing table can be dynamic. This would also make it fairly easy to scale a site across multiple node processes.
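For a concrete picture, here is a minimal sketch of host-header routing with node-http-proxy, assuming two sites running as separate node processes on local ports (hostnames and ports are placeholders):

```javascript
// Illustrative host-header based routing (assumes the http-proxy package)
const http = require('http');
const httpProxy = require('http-proxy');

// Each site runs as its own node process on its own port
const routes = {
  'site-a.example.com': 'http://127.0.0.1:3001',
  'site-b.example.com': 'http://127.0.0.1:3002',
};

const proxy = httpProxy.createProxyServer({});

http.createServer((req, res) => {
  const target = routes[req.headers.host];
  if (!target) {
    res.statusCode = 404;
    return res.end('Unknown site');
  }
  proxy.web(req, res, { target }, () => {
    res.statusCode = 502;          // that site's process crashed or is unreachable
    res.end('Site unavailable');
  });
}).listen(80, () => console.log('Proxy listening on port 80'));
```

If one site's process crashes, the others keep running and the proxy simply returns an error for that host until it restarts.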
Good luck!

Resources