How to manage multiple backend stacks for development? - node.js

I am looking for the best/simplest way to manage a local development environment for multiple stacks. For example on one project I'm building a MEAN stack backend.
I was recommended to use Docker, but I believe it would complicate the deployment process, because shouldn't you have one container for mongo, one for express, etc.? (As found in this question on Stack Overflow.)
How do developers manage multiple environments without VMs?
And in particular, what are best practices for doing this on Ubuntu?
Thanks a lot.

With Docker Compose you can easily create multiple containers in one go. For development, the containers are usually configured to mount a local folder into the container's filesystem. This way you can easily work on your code and have live reloading. A sample docker-compose.yml could look like this:
version: '2'
services:
  node:
    build: ./node
    ports:
      - "3000:3000"
    volumes:
      - ./node:/src
      - /src/node_modules
    links:
      - mongo
    command: nodemon --legacy-watch /src/bin/www
  mongo:
    image: mongo
You can then just type
docker-compose up
and your stack will be up in seconds.
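The build: ./node entry above expects a Dockerfile inside the ./node folder. A minimal sketch of what that Dockerfile could look like for this setup (the Node base image and the global nodemon install are assumptions, not part of the original answer):

# Hypothetical ./node/Dockerfile for the compose file above
FROM node:16

# Install dependencies inside the image; the anonymous volume /src/node_modules
# in the compose file keeps them from being hidden by the ./node bind mount
WORKDIR /src
COPY package*.json ./
RUN npm install && npm install -g nodemon

# The source is bind-mounted at runtime for live reloading, so this COPY
# mainly matters when building a standalone image
COPY . .

EXPOSE 3000
CMD ["nodemon", "--legacy-watch", "/src/bin/www"]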

Related

ECS Fargate does not support bind mounts

I am trying to deploy a Node.js docker-compose app to AWS ECS. Here is how my docker-compose file looks:
version: '3.8'
services:
  sampleapp:
    image: jeetawt/njs-backend
    build:
      context: .
    ports:
      - 3000:3000
    environment:
      - SERVER_PORT=3000
      - CONNECTIONSTRING=mongodb://mongo:27017/isaac
    volumes:
      - ./:/app
    command: npm start
  mongo:
    image: mongo:4.2.8
    ports:
      - 27017:27017
    volumes:
      - mongodb:/data/db
      - mongodb_config:/data/configdb
volumes:
  mongodb:
  mongodb_config:
However, when I try to run it using docker compose up after creating the ECS context, it throws the error below:
WARNING services.build: unsupported attribute
ECS Fargate does not support bind mounts from host: incompatible attribute
I am not specifying anywhere that I would like to use Fargate for this. Is there any way I can still deploy the application using EC2 instead of Fargate?
The default launch mode is Fargate. You presumably have not specified an ECS cluster with EC2 instances in your run command.
Your docker-compose has a bind mount, so your task would need to be deployed to an instance where the mount would work.
This example discusses deploying to an EC2-backed cluster:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-cli-tutorial-ec2.html
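A condensed sketch of the flow from that tutorial (cluster name, key pair, region and instance type below are placeholders, not values from the question):

# Configure a cluster that uses the EC2 launch type instead of Fargate
ecs-cli configure --cluster ec2-tutorial --default-launch-type EC2 --region us-east-1 --config-name ec2-tutorial

# Bring up EC2 container instances for the cluster
ecs-cli up --keypair my-keypair --capability-iam --size 1 --instance-type t3.medium --cluster-config ec2-tutorial

# Deploy the compose file as a task on those instances
ecs-cli compose up --cluster-config ec2-tutorial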
Fargate is the default, and there is no direct way to tell the integration that you want to deploy on EC2 instead. It only falls back to EC2 when Fargate can't provide a required feature (e.g. GPUs).
If you really really need to use bind mounts and need an EC2 instance you may use this trick (I haven't done it so I am basically brainstorming here):
configure your task to use a GPU (see examples here)
Convert your compose using docker compose convert
Manually edit the CFN template to use a different instance type (to avoid deploying a GPU based instance with its associated price)
Deploy the resulting CFN template.
You may even be able to automate this with some sed circus if you really need to.
As I said, I have not tried it and I am not sure how viable this could be. But it wouldn't be too complex I guess.
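A rough, untested sketch of that workflow (the context name, file names and instance types below are placeholders, not values from the question):

# Emit the CloudFormation template instead of deploying it
docker context use myecscontext
docker compose convert > stack.cfn.yml

# Swap the GPU instance type the template asks for with a cheaper one
sed -i 's/g4dn.xlarge/t3.medium/g' stack.cfn.yml

# Deploy the edited template directly with CloudFormation
aws cloudformation deploy \
  --template-file stack.cfn.yml \
  --stack-name sampleapp-ec2 \
  --capabilities CAPABILITY_IAM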

Dockerize and reuse NodeJS dependency

I'm developing an application based on a microfrontend architecture, and in a production environment, the goal is to have each microfrontend as a dockerized NodeJS application.
Right now, each microfrontend depends on an internal NPM package developed by the company, and I would like to know if it's possible to have that dependency as an independent image, where each microfrontend would somehow reuse it instead of installing it multiple times (once for each microfrontend)?
I've been making some tests and have managed to dockerize the internal dependency, but haven't been able to make it reachable to the microfrontends. I was hoping there was a way to set it up in package.json, similar to how it's done for a local path, but since each image's scope is isolated, they can't find that dependency.
Thanks in advance.
There are at least 2 solutions to your question:
create a package and import it in every project (see Verdaccio for a local npm registry)
use a single Docker image with shared node_modules and change the command per service in docker-compose
Solution 2
Basically the idea is to put all your microservices into a single Docker image, in a structure like this:
/service1
/service2
/service3
/node_modules
/package.json
Then in your docker-compose.yaml:
version: '3'
services:
  service1:
    image: my-image:<version or latest>
    command: npm run service1:start
    environment:
      ...
  service2:
    image: my-image:<version or latest>
    command: npm run service2:start
    environment:
      ...
  service3:
    image: my-image:<version or latest>
    command: npm run service3:start
    environment:
      ...
The advantage is that you now have a single image to deploy in production, and all the shared code is in one place.
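A minimal sketch of how that single shared image could be built (the Node version and exact file layout are assumptions based on the structure above):

# Hypothetical Dockerfile for the shared image
FROM node:16

WORKDIR /app

# Install all dependencies, including the internal company package, exactly once
COPY package.json package-lock.json ./
RUN npm ci

# Copy every microservice into the same image; docker-compose picks which one runs
COPY service1 ./service1
COPY service2 ./service2
COPY service3 ./service3

# Default command; each compose service overrides it with npm run serviceN:start
CMD ["npm", "run", "service1:start"]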

ECONNREFUSED Inside of Docker Container; Accessing Remote API

Recently, I've been building a Node.js bot to run games on a phpBB forum. I have the bot running locally on my machine (logging in, posting, scraping threads, etc.), so, naturally, I've started to dockerize it.
However, even when running through the login workflow from inside of my Docker container, I'm receiving ECONNREFUSED errors from inside of the container when trying to hit the forum's API. There are no official docs for the aforementioned API, so a lot of this has been self-investigation.
I'm running Node v14.4.0 with axios v0.19.2 + tough-cookie v4.0.0 for requests. I'm able to successfully curl the endpoint with the same payload from inside of the container, so I suspect there's something going over my head with axios/Node.js. I've tried to look at other issues with Docker and ECONNREFUSED, but most of them on Stack Overflow relate to inter-container communication rather than issues with external APIs/access.
The container is running as a root user, and I don't have any sort of proxies or wonky networking set up.
Does anyone have any advice or inklings? My docker-compose is pretty bare bones, but I've included it below for reference.
version: '3.2'
services:
bot:
build: .
env_file:
- .env
ports:
- '80:80'
Any tips or theories would be much appreciated; I've hit the end of my list!
Cheers!
I assume you are trying to reach the host machine from the container. A simple way of accomplishing this is to use host networking instead of container networking. Try:
version: '3.2'
services:
  bot:
    build: .
    network_mode: host
    env_file:
      - .env
A better way is probably to put the rest of your application in containers as well and use the service discovery in Compose, e.g.
version: '3.2'
services:
  bot:
    build: .
    env_file:
      - .env
  webapp:
    ...
Then you can reach the PHP app by connecting to the host name "webapp" in the above example.

How to run nodejs and reactjs in Docker

I have a Node.js app for the backend and a separate React app for the frontend of a website, and I want to put them into a Docker image. But I don't know how to deal with the CMD command in the Dockerfile. Does Docker have any command to solve this?
I thought that I could use docker-compose to build 2 separate images, but it seems wasteful because the node image has to be installed 2 times.
Does anyone have a solution?
Rule of thumb: a single process per container.
I thought that I could use docker-compose to build 2 separate images but it seems to be wasted because the node image has to be installed 2 times.
First thing, managing 2 separate Docker images is fine, but running two processes in one container is not fine at all.
Second thing, you do not need to build 2 separate images: if you can run both processes from the same code, then you can run both applications from a single docker-compose.
version: '3.7'
services:
  react-app:
    image: myapp:latest
    command: react-scripts start
    ports:
      - 3000:3000
  node-app:
    image: myapp:latest
    command: node server.js
    ports:
      - 3001:3001
Each container should have only one concern. Decoupling applications into multiple containers makes it easier to scale horizontally and reuse containers. For instance, a web application stack might consist of three separate containers, each with its own unique image, to manage the web application, database, and an in-memory cache in a decoupled manner.
Limiting each container to one process is a good rule of thumb.
Dockerfile best practices
Whether to put your backend and frontend inside the same container is a design choice (remember that Docker containers are designed to share a lot of resources from the host machine).
You can use a shell script and run that shell script with CMD in your Dockerfile, as sketched below.
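A minimal sketch of that approach, assuming the backend entry point is server.js and the React production build is served with the npm package serve (all file and package names here are assumptions, not from the question):

start.sh:

#!/bin/sh
# Hypothetical launcher that starts both processes in one container.
# Note this goes against the one-process-per-container guideline above.
node server.js &        # backend in the background
npx serve -s build      # serve the React build in the foreground (keeps the container alive)

and at the end of the Dockerfile:

COPY start.sh .
RUN chmod +x start.sh
CMD ["./start.sh"]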

Docker Express Node.js app container not connecting with MongoDB container giving error TransientTransactionError

I am in a weird dilemma. I have created a Node application, and this application needs to connect to MongoDB (through a Docker container). I created a docker-compose file as follows:
version: "3"
services:
mongo:
image: mongo
expose:
- 27017
volumes:
- ./data/db:/data/db
my-node:
image: "<MY_IMAGE_PATH>:latest"
deploy:
replicas: 1
restart_policy:
condition: on-failure
working_dir: /opt/app
ports:
- "2000:2000"
volumes:
- ./mod:/usr/app
networks:
- webnet
command: "node app.js"
networks:
webnet:
I am using the official mongo image. I have omitted my own image path from the above configuration. I have tried many configurations, but I am unable to connect to MongoDB (yes, I have changed the MongoDB URI inside the Node.js application too). Whenever I deploy my docker-compose, my application always gives me a MongoNetworkError of TransientTransactionError on startup. I have been unable to figure out where the problem is for many hours.
One more weird thing: when running my docker-compose file I receive the following logs:
Creating network server_default
Creating network server_webnet
Creating service server_mongo
Creating service server_feed-grabber
Could it be that both services are in different networks? If yes, how do I fix that?
Other info:
The MongoDB URI I tried in the Node.js application is
mongodb://mongo:27017/MyDB
I am running my docker-compose with the command: docker stack deploy -c docker-compose.yml server
My Node.js image is based on Ubuntu 18.
Can anyone help me with this?
OK, so I tried a few things and finally figured it out after spending many, many hours. There were two things I was doing wrong, and they were hitting me at the last point:
First, the Docker startup logs showed me that it was creating two networks, server_default and server_webnet; this was the first mistake. Both containers should be in the same network to talk to each other.
Second, I needed the Mongo container to start first, as my Node.js application depends on the Mongo container being up. This is exactly what I did in my docker-compose configuration by introducing the depends_on property.
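A hedged sketch of what the corrected compose file could look like after those two changes (attaching both services to the same webnet network and adding depends_on; everything else is carried over from the compose file in the question):

version: "3"
services:
  mongo:
    image: mongo
    expose:
      - 27017
    volumes:
      - ./data/db:/data/db
    networks:
      - webnet
  my-node:
    image: "<MY_IMAGE_PATH>:latest"
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
    # note: docker stack deploy ignores depends_on; it only takes effect with docker-compose up
    depends_on:
      - mongo
    working_dir: /opt/app
    ports:
      - "2000:2000"
    volumes:
      - ./mod:/usr/app
    networks:
      - webnet
    command: "node app.js"
networks:
  webnet: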
For me it was:
1- Get your IP by running the command
docker-machine ip
2- Don't go to localhost:port, go to your-ip:port, for example: http://192.168.99.100:8080
