What is best for a Node app with MongoDB using Docker containers?
Both Node and MongoDB in the same Docker container, or interlinked separate containers for the Node app and MongoDB?
I have tried both approaches and both worked for me. In the first case I took an Ubuntu-based image, installed Node and MongoDB via the Dockerfile, and started a single container holding both environments. In the second case I used the official node and mongo base images and ran them as separate containers. But I'm confused: which approach should I select?
Using separate containers gives you several advantages. The first is that you can scale them independently of each other.
In addition, it lets you use more lightweight images, since each image only requires a very specific set of dependencies.
It also gives you a more flexible environment for the future: if you ever add containers that depend on only one of these services (or vice versa), you reduce the number of interactions between components. If both ran in the same container, it would not be possible to give another container access to only MongoDB, for example, or to let an expanded Node application connect to another backend container without also coupling that backend server to Mongo.
TL;DR: use the approach with two separate containers; that is what Docker is meant for, and it provides the most flexibility.
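As a minimal sketch, the two-container setup could be wired together with Docker Compose roughly like this (service names, image tags, ports, and the MONGO_URL variable are illustrative assumptions, not something your app already expects):

```yaml
# docker-compose.yml -- one service per concern
version: "3"
services:
  app:
    image: node:lts
    working_dir: /usr/src/app
    volumes:
      - ./:/usr/src/app          # mount your app code
    command: sh -c "npm install && npm start"
    environment:
      # the app reaches Mongo by service name on the compose network
      - MONGO_URL=mongodb://mongo:27017/mydb
    depends_on:
      - mongo
    ports:
      - "3000:3000"
  mongo:
    image: mongo:4
    volumes:
      - mongo-data:/data/db      # persist data outside the container
volumes:
  mongo-data:
```

With this layout you can later run `docker-compose up --scale app=3` to scale the app tier without touching the database container.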
Considering scalability, it would be ideal to use separate containers for Node and MongoDB.
It gives you flexibility if you ever want to migrate only your MongoDB container to some other instance or server.
Related
1. My technology stack for the above application is Express.js, Node.js, MongoDB, Redis, and S3 (storage).
2. The API is hosted on a Linux AMI.
3. I need to create a Docker container image for my application.
First of all, you need to decide whether to keep everything inside a single container (monolithic; I cannot really recommend it) or to separate the concerns and run a separate Express/Node.js container, a MongoDB container, and a Redis container. S3 is a managed service, so it is not something you run yourself.
If you choose the latter approach, there are already officially supported images on Docker Hub for redis and mongo. For the actual app server, set express as a dependency of your Node project and start the official node image with an npm install command (which pulls Express in), followed by npm start (or whatever start command you use). Don't forget to mount your code as a volume for this to work.
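If you would rather bake the code into the image than mount it as a volume, a minimal Dockerfile for the app might look like this (paths and the exposed port are assumptions about your project layout):

```dockerfile
# Dockerfile for the Express/Node app -- assumes package.json and
# the server code live in the build context
FROM node:lts
WORKDIR /usr/src/app
# copy the manifest first so dependency installs are cached
COPY package*.json ./
RUN npm install --production
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
```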
Now, bear in mind that if your app uses any reference data inside MongoDB, you should make sure to insert it when the MongoDB container starts, or create an image based on the official mongo image that already contains that data!
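One way to do the second option: the official mongo image runs any scripts placed in /docker-entrypoint-initdb.d on the container's first start. A sketch of an image with reference data baked in (seed.js is a hypothetical script that inserts your documents):

```dockerfile
# Image based on the official mongo image with reference data included.
FROM mongo:4
# Scripts in this directory are executed automatically the first time
# the container starts against an empty data directory.
COPY seed.js /docker-entrypoint-initdb.d/
```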
Another valuable note: pass all connection settings into your Express app as environment variables. That way you can change them when deploying your app container (useful when you distribute your system across several hosts).
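A minimal sketch of that idea in the Express app — variable names like MONGO_URL and REDIS_URL are assumptions for illustration, not a convention the stack imposes:

```javascript
// config.js -- read connection settings from the environment,
// falling back to local defaults for development.
function connectionConfig(env) {
  return {
    mongoUrl: env.MONGO_URL || 'mongodb://localhost:27017/app',
    redisUrl: env.REDIS_URL || 'redis://localhost:6379',
  };
}

// In the app itself you would call it with the real environment:
// const { mongoUrl, redisUrl } = connectionConfig(process.env);
module.exports = { connectionConfig };
```

At deploy time you then only change `-e MONGO_URL=...` on `docker run` (or the `environment:` section of a compose file) instead of rebuilding the image.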
At the end of the day you would start the containers in this order: mongodb, redis, then node/express. The connection to S3 is handled inside your Node app, so it is irrelevant in this context; just make sure the app can reach the bucket!
If you do want to build a monolithic container, start with a debian:jessie image, get a shell inside the container, install everything as you would on a server, get your code running, and commit the image to your repository; then use it to run your app. Still, I cannot recommend this approach at all!
BR,
Do RethinkDB and a Node.js + Express app fit well in one container in a cluster environment?
The situation inside a Docker container is as follows:
1. RethinkDB and the Node.js + Express app run in one container.
2. During boot, the Node.js app checks whether a specific database and table exist; if not, it creates them.
Running everything in one Docker container works fine. But the problem is that we also need to cluster RethinkDB and maintain a specific number of replicas of the table.
Putting all that clustering and replica logic in the Node.js app does not seem like a good idea, and I'm stuck on how to proceed.
Help is very much appreciated.
Running rethinkdb and nodejs+express app in one container
You should typically not do this. Put RethinkDB in its own container and put your application in a separate container.
I'd recommend using docker-compose: set up a docker-compose.yml file for your services, and use the depends_on property on the web application service so that Docker starts the rethinkdb container before the application container.
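A sketch of such a compose file (service names, ports, and the RETHINKDB_HOST variable are illustrative). Note that depends_on only controls start order, not readiness, so your app should still retry its initial connection:

```yaml
# docker-compose.yml -- app container starts after rethinkdb
version: "3"
services:
  web:
    build: .
    environment:
      - RETHINKDB_HOST=rethinkdb   # reach the DB by service name
    ports:
      - "3000:3000"
    depends_on:
      - rethinkdb
  rethinkdb:
    image: rethinkdb
    volumes:
      - rethink-data:/data         # persist the database files
volumes:
  rethink-data:
```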
If you're spinning up your RethinkDB containers by hand you should be all set, but if you're using Swarm or some other scheduler, continue reading.
One problem RethinkDB currently has in automated/scheduled/containerized environments is the ephemerality of containers: they may restart and come back with a different IP address. This requires some additional tooling around RethinkDB to update the config tables.
For a bit of reading I'd recommend checking out how this was achieved in Kubernetes.
So my current set-up is a Node.js application that serves up an Angular front-end, a second Node.js application that has Express and serves as an API, and a MongoDB instance. Simply put, the client-side app talks to the back-end app, and the back-end app talks to MongoDB.
I was looking into how to Dockerize these applications, and some examples use linking. So my question is: does linking only work on the same host (meaning a single EC2 instance on AWS), or across multiple EC2 instances? If only the former, and I have both apps and Mongo containerized on one instance, how do I scale out? If I spin up a second EC2 instance, would I put both containerized Node apps and Mongo on that second instance again? Is having a Mongo container on the same instance as the Node apps a single point of failure? How is that fault tolerant?
Just trying to wrap my head around this; apologies for my ignorance on the subject. Thanks!
You should put each app, as well as the MongoDB server, in a separate container (which is what I think you intend); the linking (via Docker Compose or another method) is just networking. If you use Docker links, a private network is created. You can create other networks for the containers to talk to each other, and also to a LAN, a WAN, whatever.
Yes, putting them all on the same EC2 instance creates a SPOF.
If that's a concern, look into: https://docs.docker.com/swarm/networking/
Docker Swarm is fully compatible with Docker’s networking features.
This includes the multi-host networking feature which allows creation
of custom container networks that span multiple Docker hosts.
Or load balance your apps and use an AWS-hosted MongoDB cluster. There are many possible approaches based on your needs and budget.
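A user-defined overlay network of the kind the Swarm docs describe could be sketched in a compose file like this (the network name, image names, and the assumption that you are running against a Swarm cluster are all illustrative):

```yaml
# docker-compose.yml for a Swarm cluster -- containers attached to the
# same overlay network can reach each other by service name even when
# they are scheduled on different Docker hosts
version: "3"
services:
  api:
    image: my-node-api      # hypothetical app image
    networks:
      - app-net
  mongo:
    image: mongo
    networks:
      - app-net
networks:
  app-net:
    driver: overlay
```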
Why is there a separate web service just for MongoDB? With LAMP, I would just install everything on my EC2 instance. Now that I'm deploying a MEAN stack, should I separate MongoDB from my Node server? I'm confused. I don't see any limitation in running node and mongod together on one instance, and I could use a hosted tool like MongoLab as well.
Ultimately it depends how much load you expect your application to have and whether or not you care about redundancy.
With Mongo and Node you can install everything on one instance. When you start scaling, the first separation is splitting the application from the database. It's often easier to set everything up that way from the start, especially if you know you will have the load to require it.
I've just been introduced to Docker and the concept is awesome. I've found simple Dockerfiles for building MongoDB and Node images, and I was wondering: do I just combine those images into one image containing my whole project (a custom Node app built on Express, a NodeBB forum, backed by MongoDB, all wired together with Passport providing single sign-on)? Or should I make them all separate images?
Can a Docker image contain its own VPN, with the various services running on different VMs?
Docker does not have a standardized way to package and provision applications consisting of multiple images, so if you want to share your application, it's probably easiest to put everything into a single Dockerfile. Having said that, if sharing your application isn't a huge priority, multiple Docker images may be easier to maintain (plus you'll be able to use existing MongoDB images). You could then use something like Fig (http://orchardup.github.io/fig/) to orchestrate the entire application.
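A sketch of what a Fig file for this might look like — Fig uses the same YAML format that Docker Compose later adopted, and the service names here are illustrative:

```yaml
# fig.yml -- one service per image, linked together
web:
  build: .            # your Node/Express + NodeBB image
  ports:
    - "3000:3000"
  links:
    - db              # exposes the mongo container to the app
db:
  image: mongo
```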
As for communication between Docker containers, Docker has two options: enabling communication between all containers (this is the default), or disabling all communication except what is explicitly specified. You can enable the second option by passing the flag "--icc=false" to the Docker daemon. Afterwards, you'll need to explicitly "expose" and "link" containers in order for them to communicate. The relevant documentation can be found here.