How to host multiple databases for a micro-service application? - node.js

I have an application in mind that would be built using Node, MongoDB plus other databases, Kubernetes, RabbitMQ, and Docker, with React as the front end. The application will be built in a microservice architecture. We all know that for a monolith app all you need is one DB (MongoDB, MySQL, etc.), but for a microservices app you can have multiple databases. My question is: do I need to buy multiple, separate databases and connect each service to one of them? Or how does it work in a microservices design?
At the moment I have a sample microservices app running on my local machine using Docker, connected to multiple databases (one database per service). I am just trying to get an idea of how this works with companies like DigitalOcean or AWS.
Any input on this would be great.
I am just trying to figure out how this is going to work when it comes to production later, so that I am aware of costs and deployments. I have done some research on DigitalOcean, AWS, etc., but I still can't figure out how they work.
Thanks in advance.

You don't need to run multiple instances of a DBMS. You can easily use one VM with a single MongoDB instance running on it.
When you scale, you might want separate machines running DB instances for your services, but at the start you can just separate the data logically, making sure services never communicate with each other through the database.
Chris Richardson on his microservices.io website says:
There are a few different ways to keep a service's persistent data private. You do not need to provision a database server for each service. For example, if you are using a relational database then the options are:
- Private-tables-per-service – each service owns a set of tables that must only be accessed by that service
- Schema-per-service – each service has a database schema that's private to that service
- Database-server-per-service – each service has its own database server.
Source: https://microservices.io/patterns/data/database-per-service.html
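The same logical separation works for MongoDB: one mongod server can host a private database per service. A minimal Node.js sketch, assuming a local MongoDB and hypothetical service names:

```js
// One MongoDB server, logically separated: each service gets its own
// database and only ever touches its own handle.
const { MongoClient } = require('mongodb');

async function main() {
  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();

  // Hypothetical per-service databases on the same server.
  const ordersDb = client.db('orders_service');
  const usersDb = client.db('users_service');

  // The orders service writes only to its own database...
  await ordersDb.collection('orders').insertOne({ item: 'book', qty: 1 });

  // ...and the users service to its own; neither reads the other's data.
  await usersDb.collection('users').insertOne({ name: 'alice' });

  await client.close();
}

main().catch(console.error);
```

Moving from this setup to database-server-per-service later is then just a matter of changing each service's connection string.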

Related

Is it possible to host Cassandra nodes on-premise (vs cloud)?

I'm trying to build a decentralised social media platform using Cassandra. To do this, I would like the instances or nodes of the Cassandra database to be hosted on the client side rather than hosted in the cloud. I would like to know if it would be possible for the user to somehow run an instance on their side with part of the data. This would allow the information to be distributed between many computers globally.
You can deploy Cassandra nodes:
- on-premise,
- on a private cloud,
- on a public cloud, or
- in a hybrid environment of on-premise + cloud.
It is also possible to deploy Cassandra on any combination of the above. Cheers!
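Whichever combination you pick, client code just needs reachable contact points; the topology is transparent to the driver. A minimal Node.js sketch using the cassandra-driver package (the hosts, data center name, and keyspace are placeholders):

```js
// Connecting to a Cassandra cluster whose nodes may live on-premise,
// in the cloud, or both -- the driver just needs reachable contact points.
const cassandra = require('cassandra-driver');

const client = new cassandra.Client({
  contactPoints: ['10.0.0.5', 'node1.example.com'], // placeholder hosts
  localDataCenter: 'datacenter1',                   // placeholder DC name
  keyspace: 'social_media',                         // placeholder keyspace
});

client.execute('SELECT release_version FROM system.local')
  .then((result) => console.log('Connected:', result.rows[0].release_version))
  .catch(console.error);
```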

How should we set up our database (MongoDB) and backend (Express) so that everyone can access the database remotely?

The Problem
I am a student assigned to a project to create a rudimentary social media app. We are planning to use Flutter to build the app, and we are going to use MongoDB and Express for the database and API respectively. The goal is to be able to use continuous integration for our project through Fastlane and GitLab.
Initially, I thought to put the API and Flutter in separate Docker containers and to host the database on my desktop, but I realize that might not be the best solution.
The Question(s)
How should we set up the database and the server so that we all have access to the same data in the database? Basically, how should we best set up our project environment to work as a team, in terms of:
hosting the database?
setting up Express and Flutter for continuous integration?
If you are using MongoDB, just set up a cluster on Atlas; it's free as long as it's a relatively small application (up to 500 MB). After you sign up, you will create a cluster, and then Atlas will give you instructions on how to connect to that cluster using Node.js.
Basically, all you do is put the link to your cluster, with your configured password, into your database connection string. This is all in the cloud, so you can access it from anywhere after you whitelist the IPs that will be accessing it remotely. (Alternatively, you can whitelist all IPs, which is the easier way of doing things; it's just A LOT less secure, but it's an okay option for a school project.)
You can then use Heroku to host your app, which allows for a custom server setup like the one you will have with Express.
You will need to use dotenv on Heroku as well, for securing your database link and password, so read up on that too.
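Putting those pieces together, a minimal sketch of the Atlas connection with the credentials kept in an environment variable (the Mongoose library is an assumption; the plain MongoDB driver works the same way, and the URI shape is the placeholder Atlas gives you):

```js
// .env (never commit this file):
// MONGODB_URI=mongodb+srv://<user>:<password>@cluster0.example.mongodb.net/myapp

require('dotenv').config(); // loads .env into process.env
const mongoose = require('mongoose');

mongoose.connect(process.env.MONGODB_URI)
  .then(() => console.log('Connected to Atlas'))
  .catch((err) => console.error('Connection failed:', err));
```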

Need some guidance on deploying/hosting a web app

I recently developed a web app locally, with a React frontend that interacts via proxy with a Node.js backend that interacts with MongoDB Atlas. Everything works locally, and I am ready to actually deploy the web app for public use.
How does hosting work with a full stack web application? Do I host the entire web app in the same place (e.g. S3 bucket), or should the backend and frontend be deployed separately? I have never done this before, so I appreciate any help I can get.
Yes, you can have two different servers for frontend and backend.
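If you would rather deploy them together in one place instead, a common pattern is to have the Express backend serve the React production build; a minimal sketch (the build path is an assumption):

```js
const path = require('path');
const express = require('express');
const app = express();

// API routes first...
app.get('/api/health', (req, res) => res.json({ ok: true }));

// ...then serve the static React build (assumed to live in client/build).
app.use(express.static(path.join(__dirname, 'client', 'build')));

// Fall back to index.html so client-side routing keeps working.
app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, 'client', 'build', 'index.html'));
});

app.listen(process.env.PORT || 3000);
```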
You can use the Heroku platform to deploy your backend app, and mLab to provision a Mongo database. These platforms have free tiers where you can experiment and learn about deployments and the cloud.
Once you are comfortable with these, you can move to Elastic Beanstalk on AWS to provision servers and also a database.
mLab is no longer available, as it has been acquired by MongoDB, so I would recommend creating the database on an Atlas cluster, which also offers a free tier.
Rather than Heroku, I would suggest MongoDB Stitch, which is a backend-as-a-service. If you use Stitch you can also get support from the MongoDB people, whereas with Heroku you will not receive any support from them.
You can refer to the Stitch documentation for more information: https://docs.mongodb.com/stitch/. It has complete guidance on how to deploy your app using Stitch with a MongoDB database.
However, if you need more help, please ping me anytime.

Docker Microservice Architecture - Communication between different containers

I've just started working with Docker and I'm currently trying to work out how to set up a project using a microservice architecture.
My goal is to move different services out of the API and instead have each one in its own container.
Current architecture (diagram omitted)
Desired architecture (diagram omitted)
Questions
How does the API gateway communicate with the internal services? Should all microservices have their own API which only accepts communication from the API gateway? Any other means of communication?
What would be the ideal authentication between the gateway and the microservices? JWT token? Basic Auth?
Do you see any problems with this architecture if hosted in Azure?
Is integration testing even possible in the desired architecture? For example, I use EF SQLite in-memory for integration testing, and it's easily accessible within the API, but I don't see this working if the database is located in its own container.
Anything important here that I've missed?
I created an application with a completely microservice-based architecture running on AWS ECS (Container Service); each microservice is pushed to a container as a Docker image. Two EC2 instances run for high availability, with the same microservices running on both, so if one instance goes down the other can take care of requests.
Each microservice uses its own database, and inter-service communication happens over HTTP using a client registry and discovery; Spring Cloud Consul and Netflix Eureka can be used for service discovery and registry.
(Architecture diagram omitted.)
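For a Node.js service, the equivalent of the registration step described above might look like this sketch using the consul npm client (the service name, port, and health-check endpoint are all hypothetical):

```js
// Register this microservice with a local Consul agent so the gateway
// and other services can discover it by name.
const Consul = require('consul');
const consul = new Consul(); // assumes an agent on localhost:8500

consul.agent.service.register({
  name: 'orders-service',                 // hypothetical service name
  port: 4000,
  check: {
    http: 'http://localhost:4000/health', // hypothetical health endpoint
    interval: '10s',
  },
}, (err) => {
  if (err) throw err;
  console.log('Registered with Consul');
});
```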

Microservices Architecture in NodeJS

I was working on a side project and I decided to redesign my skeleton project as microservices. So far I haven't found any open-source project that follows this pattern. After a lot of reading and searching I arrived at this design, but I still have some questions and thoughts.
Here are my questions and thoughts:
How do I make the API gateway smart enough to load balance requests if I have 2 nodes of the same microservice?
If one of the microservices is down, how should the discovery service know?
Is there any similar implementation? Is my design right?
Should I use Eureka or something similar?
Your design seems OK. We are also building our microservice project using the API Gateway approach. All the services, including the Gateway service (GW), are containerized (we use Docker) Java applications (Spring Boot or Dropwizard). A similar architecture could be built in Node.js as well. Some topics to mention related to your questions:
Authentication/Authorization: The GW service is the single entry point for clients. All authentication/authorization operations are handled in the GW using JSON Web Tokens (JWT), which has a Node.js library as well. We keep authorization information, like the user's roles, in the JWT token. Once the token is generated in the GW and returned to the client, the client sends the token in an HTTP header on each request; we then check whether the client has the required role to call the specific service and whether the token has expired. In this approach, you don't need to keep track of the user's session on the server side. Actually, there is no session; the required information is in the JWT token.
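In a Node gateway, the check described above might look like this sketch using the jsonwebtoken library (the header format, secret source, and roles claim are assumptions):

```js
const jwt = require('jsonwebtoken');

// Express middleware: verify the JWT from the Authorization header and
// check that the caller has a required role before proxying to a service.
function requireRole(role) {
  return (req, res, next) => {
    const header = req.headers.authorization || '';
    const token = header.replace(/^Bearer /, '');
    try {
      // Secret shared with whatever issued the token (assumption).
      const claims = jwt.verify(token, process.env.JWT_SECRET);
      if (!claims.roles || !claims.roles.includes(role)) {
        return res.status(403).json({ error: 'forbidden' });
      }
      req.user = claims; // no server-side session needed
      next();
    } catch (err) {
      res.status(401).json({ error: 'invalid or expired token' });
    }
  };
}

// Usage: app.get('/orders', requireRole('customer'), ordersHandler);
```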
Service Discovery / Load balancing: We use Docker and docker swarm, a clustering tool bundled into the Docker engine (since Docker 1.12). Our services are Docker containers. The containerized approach makes it easy to deploy, maintain, and scale the services. At the beginning of the project we used HAProxy, Registrator, and Consul together to implement service discovery and load balancing, similar to your drawing. Then we realized we didn't need them for service discovery and load balancing as long as we created a docker network and deployed our services with docker swarm. With this approach you can easily create isolated environments for your services, like dev, beta, and prod, on one or multiple machines by creating a different network for each environment. Once you create the network and deploy the services, service discovery and load balancing are no longer your concern. In the same docker network, each container has the DNS records of the other containers and can communicate with them. With docker swarm you can easily scale services with one command. On each request to a service, Docker distributes (load balances) the request to an instance of that service.
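Concretely, once two services share a docker network, one can call the other simply by its service name; a minimal sketch (the service name and port are assumptions):

```js
// Inside the same docker network, 'user-service' resolves through Docker's
// built-in DNS -- no registry or load balancer configuration needed, and
// docker swarm spreads requests across the service's replicas.
// (Uses the global fetch available in Node 18+.)
async function getUser(id) {
  const res = await fetch(`http://user-service:3000/users/${id}`);
  if (!res.ok) throw new Error(`user-service responded ${res.status}`);
  return res.json();
}
```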
Your design is OK.
If your API gateway needs to implement (and that's probably the case) CAS or some other kind of auth (via one of the services, i.e. some kind of user service), and should also track all requests and modify the headers to carry requester metadata (for internal ACL/scoping usage), then your API gateway should be done in Node, but it should sit behind HAProxy, which will take care of load balancing and HTTPS.
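A Node gateway in that position can stay thin; a minimal sketch using the http-proxy-middleware package to forward a route to an internal service (the route and upstream host are assumptions):

```js
// A thin Node gateway that forwards /api/users traffic to an internal
// service; HAProxy in front of this process terminates HTTPS and
// load-balances across gateway instances.
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

app.use('/api/users', createProxyMiddleware({
  target: 'http://user-service:3000', // hypothetical internal upstream
  changeOrigin: true,
}));

app.listen(8080); // plain HTTP; HAProxy handles TLS
```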
Discovery is in the correct position; if you're looking for one that fits your design, look no further than Consul.
You can use consul-template, or your own micro-discovery framework for the services and the API gateway, so they share endpoint data on boot.
ACL/authorization should be implemented per service, and the first request from the API gateway should be subject to all authorization middleware.
It's smart to track requests by having the API gateway attach a request ID to each one, so its lifecycle can be traced within the "inner" system.
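That tracking can be a few lines of Express middleware at the gateway; a minimal sketch (the x-request-id header name is a common convention, not a standard):

```js
// Tag every incoming request with an ID and pass it downstream so a
// single request can be traced across the "inner" services.
const crypto = require('crypto');
const express = require('express');
const app = express();

app.use((req, res, next) => {
  // Reuse an incoming ID (e.g. set by HAProxy) or mint a new one.
  const requestId = req.headers['x-request-id'] || crypto.randomUUID();
  req.headers['x-request-id'] = requestId;  // forwarded to inner services
  res.setHeader('x-request-id', requestId); // echoed back to the client
  next();
});
```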
I would add Redis for messaging/workers/queues and fast in-memory needs like caching and cache invalidation (you can't run a full microservices architecture without one), or take RabbitMQ if you have many more distributed transactions and a lot of messaging.
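For the cache/invalidation part, a minimal sketch using the node redis client (the key name and TTL are assumptions):

```js
const { createClient } = require('redis');

async function main() {
  const client = createClient(); // assumes Redis on localhost:6379
  await client.connect();

  // Cache a computed value with a 60-second TTL...
  await client.set('user:42:profile', JSON.stringify({ name: 'alice' }), { EX: 60 });

  // ...read it back on the hot path...
  const cached = await client.get('user:42:profile');
  console.log(cached ? JSON.parse(cached) : 'cache miss');

  // ...and invalidate it explicitly when the underlying data changes.
  await client.del('user:42:profile');

  await client.quit();
}

main().catch(console.error);
```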
Spin all of this up in containers (Docker) so it will be easier to maintain and assemble.
As for BI, why would you need a service for that? You can have an external ELK stack (Elasticsearch, Logstash, Kibana) and get dashboards, log aggregation, and a huge big-data warehouse all at once.
