Node.js backend APIs, when dockerized, are taking more time to connect to MongoDB

I moved my Node.js backend to Docker (previously it was deployed on an EC2 instance).
My MongoDB is deployed on another EC2 instance, which I didn't move to Docker (and I want to keep it this way).
After dockerizing my backend (it is deployed on ECS), the APIs are taking longer for DB queries. I don't understand what went wrong. Or is it supposed to be like this? Any suggestions?

So, I found the issue here.
The backend and the database ended up in different availability zones, so every query made one extra network hop, and that hop was adding the extra time.
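If anyone hits something similar, a quick way to confirm that the slowdown is network latency rather than the application is to time a few ping commands against the database, once from the old host and once from inside the container. This is only a minimal sketch using the official mongodb driver; the URI and credentials below are placeholders, not values from the question.

// measure-latency.js - rough round-trip check against MongoDB (placeholder URI)
const { MongoClient } = require('mongodb');

const uri = process.env.MONGO_URI || 'mongodb://appUser:appPassword@10.0.1.23:27017/admin';

async function main() {
  const client = new MongoClient(uri);
  await client.connect();

  // Time a handful of ping commands to estimate the network round trip.
  for (let i = 0; i < 5; i++) {
    const start = process.hrtime.bigint();
    await client.db('admin').command({ ping: 1 });
    const ms = Number(process.hrtime.bigint() - start) / 1e6;
    console.log(`ping ${i + 1}: ${ms.toFixed(1)} ms`);
  }

  await client.close();
}

main().catch(console.error);

Running it from the old EC2 host and again from the ECS task should make a cross-AZ hop show up immediately in the numbers.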

Related

AWS multiple services inside one EC2 instance for great speed?

Since I used separate hosting for the database and the Node.js server, the speed is not as good. If everything is on one machine, the local data exchange is faster. How can I run these services on one AWS instance (node.js, redis, mongodb)? In production it is not recommended to run the database together with the server. Is it possible to fine-tune AWS so that the speed between the database and the server is the same as on a single computer?
Please help, and don't spare your advice!

Deploy MongoDB cluster, node, express, mongoose app on AWS

I have been trying to deploy a Node.js web app which uses mongoose and express on AWS for a week now.
I'm new to AWS and not the best at networking, so please have patience with my lack of networking understanding.
So far I have used AWS's Quickstart Guide to launch a new VPC with mongoDB.
Found here: http://docs.aws.amazon.com/quickstart/latest/mongodb/welcome.html
I verified that the Mongo database was working by SSHing into the private Mongo IPs through the NAT gateway (using the key pair). It appears to be working fine, and I have a username and password for admin-level access to the MongoDB setup.
I then launched an Elastic Beanstalk Node.js application within this VPC (or at least I think it is in there - the security rules include the subnets of the MongoDB instances), with a call in my code as follows:
mongoose.connect('mongodb://<MongoUsername>:<MongoPassword>@test.amazonaws.com:27017/admin')
where admin is the database name.
When I try to launch this node.js instance though, it does not run.
I have, however, verified that the Node.js app runs independently of the VPC by launching a completely separate Elastic Beanstalk instance. It runs my code fine (but obviously doesn't connect to a DB, so forms do not work).
What am I missing here? Why can I not connect this cluster to my node app? I'm super confused and frustrated with the whole process and would really appreciate any advice. Thanks.
If you need any further info to help me debug this let me know.
Edit: To the person who wants this closed as too broad: what extra information do you want? I specified in the question that I'm new to this and asked you to tell me what else you need, so I find this classification without any clarification pretty rude and unhelpful. Cheers
So the issue here was that I was not applying the correct security group rules to my Elastic Beanstalk instance, and the app was therefore not being served at its IP.
Thanks to those who tried to help; if anyone has a similar issue and needs a hand, feel free to message me.
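For anyone debugging a similar setup, it helps to make the connection fail fast and log the error instead of letting the app hang silently. Below is only a minimal sketch assuming a recent version of mongoose; the host and credentials are placeholders, not the actual cluster address.

// connect.js - fail fast if the database is unreachable (placeholder host and credentials)
const mongoose = require('mongoose');

const uri = 'mongodb://appUser:appPassword@10.0.2.15:27017/admin';

mongoose
  .connect(uri, { serverSelectionTimeoutMS: 5000 }) // give up after 5s instead of hanging
  .then(() => console.log('Connected to MongoDB'))
  .catch((err) => {
    // A timeout here usually points at security groups or subnets, not bad credentials.
    console.error('MongoDB connection failed:', err.message);
    process.exit(1);
  });

With that in place, the Elastic Beanstalk logs should show right away whether the app can even reach port 27017, which narrows the problem down to security groups or subnets.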

Setting up ELB on AWS for Node.js

I have a very basic question but I have been searching the internet for days without finding what I am looking for.
I currently run one instance on AWS.
That instance has my node server and my database on it.
I would like to make use of ELB by separating the one machine that hosts both the server and the database:
One machine that is never terminated, which hosts the database
One machine that runs the basic node server, which is also never terminated
A policy to deploy (and subsequently terminate) additional EC2 instances that run the server when traffic demands it.
First of all I would like to know if this setup makes sense.
Secondly,
I am very confused about the way this should work in practice:
Do all deployed instances run using the same volume, or is a snapshot of the volume used?
In general, how do I set up such a system? Again, I searched the web, and all of the tutorials and documentation are so generalized for every case that I cannot seem to figure out exactly what to do in my case.
Any tips? Links? Articles? Videos?
Thank you!
You would have an Auto Scaling group with a minimum size of 1 that is configured to use an AMI based on your Node.js server. The Auto Scaling group registers and deregisters instances with the ELB as instances are created and terminated.
EBS volumes cannot be attached to more than one instance at a time. If you need a shared disk volume, you would need to look into the EFS service.
Yes, you need to move your database onto a separate server that is not a member of the Auto Scaling group.
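As a rough illustration of how those pieces fit together (the group name, launch configuration, ELB name, subnets, and region below are all hypothetical), the same setup can be scripted with the AWS SDK for Node.js:

// create-asg.js - sketch of an Auto Scaling group registered with a classic ELB
// (all names, the launch configuration, subnets, and region are placeholders)
const AWS = require('aws-sdk');
const autoscaling = new AWS.AutoScaling({ region: 'us-east-1' });

async function createGroup() {
  await autoscaling
    .createAutoScalingGroup({
      AutoScalingGroupName: 'node-app-asg',
      LaunchConfigurationName: 'node-app-launch-config', // built from the Node.js AMI
      MinSize: 1, // always keep one app server running
      MaxSize: 4, // scale out when traffic demands it
      LoadBalancerNames: ['node-app-elb'], // new instances register with the ELB automatically
      VPCZoneIdentifier: 'subnet-aaaa1111,subnet-bbbb2222', // the app subnets
    })
    .promise();
  console.log('Auto Scaling group created');
}

createGroup().catch(console.error);

Scaling policies (for example on average CPU) then add and terminate instances behind the ELB, while the database host stays outside the group on its own instance.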

Server segregation of nodejs and mongo in amazon

Why are there hosted services dedicated just to MongoDB? Unlike with LAMP, where I would just install everything on my EC2 instance. So now that I'm deploying a MEAN stack, should I separate MongoDB and my Node server? I'm confused. I don't see any limitation in mixing Node with mongod on one single instance, and I could use tools like MongoLab as well.
Ultimately it depends on how much load you expect your application to have and whether or not you care about redundancy.
With Mongo and Node you can install everything on one instance. When you start scaling, the first separation is to split the application from the database. It is often easier to set everything up that way from the start, especially if you know you will have the load to require it.
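In practice, the application-side change when you do split them is usually just the connection string; here is a short sketch with placeholder addresses and database names.

// db.js - the app code barely changes when MongoDB moves to its own instance
const mongoose = require('mongoose');

// Everything on one box: connect over the loopback interface.
// const uri = 'mongodb://localhost:27017/myapp';

// Database on a separate instance: point at its private IP or DNS name
// (placeholder address) and open port 27017 only to the app's security group.
const uri = 'mongodb://10.0.3.42:27017/myapp';

mongoose.connect(uri).then(() => console.log('connected to MongoDB'));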

Connecting Google Compute Engine MongoDB instances

I'm a systems/architecture noob, and am trying to get an understanding of how to use multiple GCE instances to run a Meteor app. This walkthrough seems pretty straightforward for getting Meteor running on a single instance, but if I want to add more instances it isn't clear to me how to connect them together.
From what I understand, I'll add each instance to an instance group and use a load balancer to direct incoming traffic evenly across them. It also seems like I want to attach a persistent disk to each instance, which the OS will boot from and which will include a MongoDB installation that participates in a replica set.
Is that accurate? And if so, how do I actually tell the MongoDB installation on each instance's disk to be part of the replica set?
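For context, membership in a replica set is configured at the MongoDB level rather than on the disk: each mongod is started with the same replica set name (for example with --replSet rs0), and the set is then initiated once from any one member. A minimal mongo-shell sketch, with hypothetical hostnames standing in for the GCE instances' internal DNS names:

// Run once in the mongo shell on any one of the instances
// (the hostnames below are placeholders for the instances' internal DNS names).
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "meteor-db-1:27017" },
    { _id: 1, host: "meteor-db-2:27017" },
    { _id: 2, host: "meteor-db-3:27017" }
  ]
});

// Verify membership and see which node is primary:
rs.status();

The Meteor app then connects with a connection string that lists all the members along with the replica set name.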
