I would like to build a service using Node.js. However, this question is more architectural in nature. Let's say I have two companies, each with its own network security. Company A has a SQL Server instance, while Company B would host the Node.js service application. In order to get data, the Node.js service has to go to the SQL Server instance in Company A. Is this considered "bad practice"? If that's the case, what's the alternative? As a note, there is also the option of connecting to the SQL Server instance from AWS.
From an architectural standpoint, it's definitely not desirable for an application to access a database across multiple network layers (potentially over the Internet), for several reasons: latency overhead, security (possibly), and management overhead (if the DB is owned by another company).
Generally, the DB should be as close as possible to the app, because it is usually the main bottleneck of a system and will limit the application's throughput at some point.
However, the right answer here depends on the requirements of your app. If the traffic volumes are not very big and the performance hit is acceptable, then you can use that approach (with all the pros and cons it may have).
Ideally, you should not do this. You could set up a replica of the database on your application's network and sync the replica over a VPN connection.
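If a direct cross-network connection is the accepted trade-off rather than a replica, at minimum encrypt it. Here is a minimal sketch using the mssql package for Node.js; the hostname, credentials, and database name are placeholders, not anything from the question:

```js
// Hedged sketch: connect from the Node.js service (Company B or AWS) to the
// SQL Server instance in Company A over an encrypted connection.
const sql = require('mssql');

const config = {
  server: 'sqlserver.company-a.example.com', // placeholder hostname
  database: 'ReportingDb',                   // placeholder database
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  options: {
    encrypt: true,                 // TLS on the wire, important across networks
    trustServerCertificate: false, // require a valid certificate
  },
  pool: { max: 10 },               // reuse connections to hide some latency
};

async function getCustomers() {
  const pool = await sql.connect(config);
  const result = await pool
    .request()
    .query('SELECT TOP (10) * FROM Customers');
  return result.recordset;
}

getCustomers().then(console.log).catch(console.error);
```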
I want to create and host 4-5 websites using the same database. The only differences between the sites will be:
branding (colours and header),
data will be filtered per website (through a SQL query), and
each site will be on a separate domain (but can be hosted on the same server).
My first thought was to use an API/REST model and provision five front ends on their own subdomains. But since the sites can be hosted on the same server (I'm assuming one hosting account that allows multiple subdomains), I think I can simply point all sites at the same database via a connection string and avoid the complexity of REST.
Is this possible, and would I run into database conflicts doing this?
If I later wanted to add a mobile app client, would I need to build out a REST interface anyway?
Thanks
The right thing to do here depends a lot on your specific use case, expected load, preferred backend/edge technology, future plans, etc.
Site domains and servers -
The main point here is that you can host your domains/subdomains on the same or different servers. You simply need to update the DNS to point to the correct IP (update the subdomain's A record).
Note: If these sites are all public-facing, then I highly recommend using an edge/proxy server and even considering a load balancer, depending on the expected number of visitors (Nginx or Apache HTTP Server).
Decoupled architecture is almost always preferred -
I would definitely have an API/REST layer to abstract the database from the sites. This ensures that you establish a contract through which any clients can interact with the backend, including your mobile application. You also don't have to duplicate DB-specific code across the various clients. What if you decided to change your schema? Or even your database solution? Then all clients would be broken and your customers would be unhappy. As a guiding principle, think: if I change any one thing in my architecture, how many other things will need to change as a result? In terms of scalability, this architecture will also allow you to easily spin up more instances of whatever it is you need (databases, REST services, etc.) should the need arise.
How do I build and deploy a REST API? -
To set up a simple custom REST service running on Node.js (and Express), this is a good tutorial. The example also walks through setting up and integrating with an in-memory MongoDB database.
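To make the decoupling concrete, here is a hedged sketch of what one endpoint in that REST layer might look like; the route, the per-site filter, and the data-access helper are invented names, not part of the tutorial:

```js
const express = require('express');
const app = express();

// Hypothetical data-access helper; in practice this would run a query such as
// SELECT * FROM articles WHERE site_id = ?
async function getArticlesForSite(siteId) {
  return [{ siteId, title: 'example article' }];
}

// All five sites (and a future mobile app) call the same endpoint;
// only the siteId in the URL differs, which drives the per-site filtering.
app.get('/api/sites/:siteId/articles', async (req, res) => {
  try {
    res.json(await getArticlesForSite(req.params.siteId));
  } catch (err) {
    res.status(500).json({ error: 'internal error' });
  }
});

app.listen(3000, () => console.log('REST API listening on 3000'));
```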
Database collisions? -
If you follow the above steps, this should be a moot point. Node.js/Express and the databases expose ways to configure connection pools if the defaults do not suffice. Again, this will depend on your needs - how many concurrent users you expect.
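As an example, a hedged sketch of an explicit pool using the mysql2 package; the driver choice, database name, and limits are assumptions, and most other drivers expose equivalent options:

```js
const mysql = require('mysql2/promise');

// One shared pool for the whole process; tune connectionLimit to the number
// of concurrent users you actually expect rather than guessing high.
const pool = mysql.createPool({
  host: 'localhost',
  user: 'app',
  password: process.env.DB_PASSWORD,
  database: 'sites',        // assumed database name
  connectionLimit: 10,
  waitForConnections: true, // queue queries instead of failing when busy
});

async function articlesForSite(siteId) {
  const [rows] = await pool.query(
    'SELECT id, title FROM articles WHERE site_id = ?',
    [siteId]
  );
  return rows;
}
```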
I'm planning to build an API for one of my projects, but I'm looking for a good way to manage it and the server load.
Would I be better off just creating everything on one server, or should I create multiple?
Thoughts:
If I create one server and that server crashes, the whole system would go down. But if I create multiple servers to handle this, and one of them crashes, only that part would go down.
How I was thinking to accomplish this:
1) Create one API ENDPOINT
2) When a user sends a REQUEST to that API ENDPOINT, the ENDPOINT would send another request to the correct server containing the special task; when the task is done, the data would be returned to the user.
AKA:
User => ENDPOINT => ENDPOINT 1 / ENDPOINT 2 / ENDPOINT 3 => ENDPOINT => User
Is this how I should do it?
P.S. I don't know if this is the right terminology, but I'm trying to learn how to scale my ENDPOINTS/API/code.
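For reference, the flow described above is essentially an API gateway that proxies each request to a task-specific service. A minimal, hedged sketch in Node.js/Express; the service URLs and route names are invented, and Node 18+ is assumed for the global fetch:

```js
const express = require('express');
const app = express();

// Hypothetical internal services, each handling one kind of task.
const services = {
  images: 'http://10.0.0.11:4001',
  reports: 'http://10.0.0.12:4002',
};

// The single public ENDPOINT: forward the request to the matching service
// and relay its response back to the user.
app.get('/api/:task/*', async (req, res) => {
  const target = services[req.params.task];
  if (!target) return res.status(404).json({ error: 'unknown task' });
  try {
    const upstream = await fetch(`${target}/${req.params[0]}`);
    res.status(upstream.status).json(await upstream.json());
  } catch (err) {
    res.status(502).json({ error: 'service unavailable' });
  }
});

app.listen(8080, () => console.log('gateway listening on 8080'));
```

In practice the answers below point at off-the-shelf tools (nginx, Docker) for exactly this job, so treat the snippet as an illustration of the idea rather than a recommendation.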
About the load balancer: you should use dedicated web server applications to do that, like nginx or Apache. These web servers already have load-balancing mechanisms built in; you just need to configure them.
Also, I recommend packaging your server as Docker images. That way you can use Docker Swarm or Kubernetes to deploy your application and scale it up or down. It's easier to manage your services, check application state, and deploy new versions.
You could use Docker with nginx, where each Docker container runs an instance of your application and nginx takes care of distributing requests across the instances.
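As a hedged sketch of what each containerized instance might look like (the port and response body are made up), every container runs the same app and only the environment differs; nginx's upstream configuration then lists the containers:

```js
// server.js -- one copy of this runs in every container; nginx spreads
// incoming requests across them.
const http = require('http');

const port = process.env.PORT || 3000; // set per container via the environment

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  // Returning the pid makes it easy to see the load balancing while testing.
  res.end(JSON.stringify({ instance: process.pid, path: req.url }));
});

server.listen(port, () => console.log(`instance listening on ${port}`));
```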
What you are basically looking for is a comparison between a microservices-based architecture (or SOA) and a monolith.
In microservices, there are multiple services, each performing a specific task, and they are in turn composed to perform more complex tasks. A monolith, on the other hand, is one big server which does everything and is also a single point of failure, like you pointed out.
Should you move to microservices?
It is widely agreed that a project should start out with a monolithic architecture and be moved to microservices as its complexity grows. Martin Fowler's article explains this concept well.
This is because there are certain disadvantages and trade-offs associated with microservices -- data inconsistency and network latency, for instance.
TL;DR: Stick to one server when starting out; break it into services when it becomes complex.
I want to create a Sails.js (Node.js) server app which will provide an API for a single-page app. This server will consist of multiple modules:
user management
forum
chat
admin GUI
content management
payment gateway
...
All these modules will share one database. The server must be able to handle as many requests and web sockets as possible. Clean architecture and performance are my primary goals.
My questions:
Should I create multiple servers running on multiple ports? I mean, one server for the content management module, another server for the forum module.
Or is it better to create only one big universal server, which consists of multiple separate modules (hooks in Sails.js) and runs on one port? Will the server's performance decrease in this case?
I was thinking about vertically scaling one big universal server running on a single port with pm2. Or is it better to scale Node.js horizontally and split the server into multiple smaller servers?
I'm new to Node.js, so I appreciate any advice.
I think it really boils down to the scale of the project.
For very simple things, there's no real reason to scale past a single (but reliable) server, is there?
However, for broader projects with a resource-intensive back end and a lot of users and traffic, you may want to split the back-end and front-end aspects depending on the requirements.
In that case you might have a single server (or more) dealing with the specific administrative requests or routines, then have the client/user API running through a load balancer and spread across multiple servers in multiple regions, or break it down further into an auto-scaling group to accommodate fluctuations in traffic.
It is also worth noting that this really suits higher volumes of traffic or resource usage, since you are dedicating server infrastructure to the purpose. For smaller applications with infrequent usage, breaking things down into microservices from the start and being billed for runtime rather than dedicated infrastructure might make more sense. You could take a look at the AWS API Gateway and Lambda services for more information on that (I am not affiliated with AWS in any way; I just appreciate what they have managed to put together there).
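If you do start with the single "universal" server from the question, pm2's cluster mode is a low-effort first step: one app, one port, one worker per CPU core. A minimal sketch; the app name, entry script, and port are assumptions:

```js
// ecosystem.config.js -- start with: pm2 start ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'sails-app',    // assumed name
      script: 'app.js',     // assumed entry point
      exec_mode: 'cluster', // pm2 forks workers that share the port
      instances: 'max',     // one worker per CPU core
      env: { NODE_ENV: 'production', PORT: 1337 },
    },
  ],
};
```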
I'm new to microservices. I envision them as a set of processes running on two or more machines (I suppose for a given process two instances must be run on separate machines for reliability). In that setup, depending on the kind of clients I have, there may be one process working as a TCP server serving on a specific high port and speaking a non-HTTP protocol.
However, for my low-bandwidth, testing purposes, I haven't found a free cloud service which provides that kind of environment (machines to run processes on – say, Java on Linux – while keeping a high port open).
Maybe the facilities I'm expecting are only available to paying customers, or maybe implementing a microservice architecture in the cloud goes beyond simply running processes on machines and sharing a database? Could someone clarify? (And if possible, direct me to one such free service.)
Yes, you are right: microservices are essentially independent services (processes) that can be deployed on one or more cloud machines. The services can communicate with each other using non-HTTP mechanisms like message brokers, Thrift, remote procedure calls (RPC), etc.
From an architectural point of view, services should be decoupled enough to handle the complexity of distributed computing; see the image at the Microservices Architecture link.
There's the concept of an API Gateway, which can be used for authentication as well as service registration and discovery.
Coming back to your question: you can test microservices on a single cloud machine (by running each service on a different port) and use an API Gateway to discover the service path. For reference, here are some links worth looking at:
For concepts: Microservices.io and this Stack Overflow question
For implementation: ZooKeeper and Auth0 (this is what I'm using)
If you are a Java lover, the InfoQ article is well worth a look
Some free services that can help in building and testing microservices are Google App Engine and hook.io.
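For the low-bandwidth testing in the question, the "TCP server on a high port" can also be simulated on one machine by running several small processes on different ports. The question mentions Java, but since the rest of this thread is Node.js, here is a hedged sketch in Node; the port and the toy protocol are arbitrary:

```js
// A toy service speaking a plain TCP (non-HTTP) protocol on a high port.
// Run several copies with different PORT values to simulate a few services.
const net = require('net');

const port = Number(process.env.PORT) || 9001; // arbitrary high port

const server = net.createServer((socket) => {
  socket.on('data', (chunk) => {
    // Trivial protocol: reply with an uppercased copy of the input.
    socket.write(chunk.toString().toUpperCase());
  });
});

server.listen(port, () => console.log(`TCP service listening on ${port}`));
```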
I am considering a Cassandra cluster deployment to Google Compute Engine. However, one of our principal db clients would be an App Engine app. Since GCE firewalls do not include App Engine instances (meaning App Engine instances are considered "outside" the firewall), we would need to open ports in the firewall to the Cassandra nodes, effectively putting our database on the public Internet.
Is this reasonable to do? I have read up on Cassandra's authentication scheme (http://www.datastax.com/documentation/cassandra/2.0/cassandra/security/securityTOC.html) but I'm certainly not an expert and thus I don't trust that I can properly evaluate whether this scheme is strong enough to protect a publicly available database server.
If this is a bad idea, what's our best alternative? Writing some kind of authenticating app in front of each database is rather unappealing since (1) we obviously want the db to be fast, so any extra steps in the way are counter to that goal, and (2) it might necessitate custom changes to the standard Cassandra client libs/programs.
Is there a standard practice here?