Dockerize and reuse NodeJS dependency - node.js

I'm developing an application based on a microfrontend architecture, and in a production environment, the goal is to have each microfrontend as a dockerized NodeJS application.
Right now, each microfrontend depends on an internal NPM package developed by the company, and I would like to know if it's possible to have that dependency as an independent image, where each microfrontend would somehow reuse it instead of installing it multiple times (once per microfrontend)?
I've been running some tests, and I've managed to dockerize the internal dependency, but I haven't been able to make it reachable to the microfrontends. I was hoping there was a way to set it up in package.json, similar to how it's done for a local path, but since each image's scope is isolated, they can't find out where that dependency is.
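For reference, the "local path" style in package.json that I'm referring to looks like this (the package name here is just an example):

"dependencies": {
  "@company/internal-package": "file:../internal-package"
}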
Thanks in advance.

There are at least 2 solutions to your question:
1. Create a package and import it in every project (see Verdaccio for a local npm registry); a minimal setup is sketched right after this list.
2. Use a single Docker image with shared node_modules and change the command in docker-compose.
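For solution 1, a sketch of running Verdaccio via Docker Compose (the image tag and port are Verdaccio's defaults, the same ones used further below):

version: '3'
services:
  verdaccio:
    image: verdaccio/verdaccio:4
    ports:
      - "4873:4873"

The internal package could then be published once with npm publish --registry http://localhost:4873 (after an npm adduser against the same registry) and installed normally in every microfrontend that points its registry at Verdaccio.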
Solution 2
Basically, the idea is to put all your microservices into a single Docker image, with a structure like this:
/service1
/service2
/service3
/node_modules
/package.json
Then, in your docker-compose.yaml:
version: '3'
services:
  service1:
    image: my-image:<version or latest>
    command: npm run service1:start
    environment:
      ...
  service2:
    image: my-image:<version or latest>
    command: npm run service2:start
    environment:
      ...
  service3:
    image: my-image:<version or latest>
    command: npm run service3:start
    environment:
      ...
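For this to work, the shared package.json would need one start script per service; a sketch (the actual script contents are assumptions about how each service starts):

"scripts": {
  "service1:start": "node service1/index.js",
  "service2:start": "node service2/index.js",
  "service3:start": "node service3/index.js"
}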
The advantage is that you now have a single image to deploy in production, and all the shared code is in one place.
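A sketch of the Dockerfile behind that single my-image (the base image is an assumption); the point is that the internal dependency is installed only once, into the shared node_modules:

FROM node:lts-alpine
WORKDIR /app
# install the shared dependencies (including the internal package) once
COPY package*.json ./
RUN npm install
# copy every microfrontend into the same image
COPY service1 ./service1
COPY service2 ./service2
COPY service3 ./service3
# no CMD here: each service's command is set in docker-compose.yaml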

Related

How can I connect to my Verdaccio service launched as docker container from another docker container?

I am trying to build an npm repository that will be used on an offline system. My idea is to build a ready-made Docker container that already contains all the packages needed for a given project; which packages to download will be determined by the package.json file.
To implement my idea, I need to run a Verdaccio server in one container, and the other container will then run the npm install command, which will generate the appropriate files with the ready npm packages.
However, I cannot cope with waiting for the first container to start. So far I have tried to use the wait-for-it.sh and wait-for.sh scripts (https://docs.docker.com/compose/startup-order/), but they are not able to connect to the given address.
P.S. I am using Docker for Windows.
docker-compose.yml
version: '3.1'
services:
  listen:
    build: listen
    image: listen-img
    container_name: listen
    environment:
      - VERDACCIO_PORT=4873
    ports:
      - "4873:4873"
  download:
    build: download
    image: download-img
    container_name: download
    depends_on:
      - listen
networks:
  node-network:
    driver: bridge
Server Dockerfile
FROM verdaccio/verdaccio:4
'npm install trigger' Dockerfile
FROM node:15.3.0-alpine3.10
WORKDIR /usr/src/cached-npm
COPY package.json .
COPY wait-for.sh .
COPY /config/htpasswd /verdaccio/conf/htpasswd
USER root
RUN npm set registry http://host.docker.internal:4873
RUN chmod +x /usr/src/cached-npm/wait-for.sh
RUN /usr/src/cached-npm/wait-for.sh host.docker.internal:4873 -- echo "Listen is up"
RUN npm install
Is the problem something like a lack of shared ports in my solution, or are there other issues causing my approach to fail?
It turned out that the problem was mixing up two processes: building and launching the respective containers. In my solution so far, I wanted to build both containers at the same time, while one of them needed an already running instance of the first in order to be built.
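One way to act on that (a sketch, not necessarily the exact fix applied here) is to keep the registry-dependent steps out of the image build and run them when the container starts, so only the verdaccio container needs to be built and running first:

FROM node:15.3.0-alpine3.10
WORKDIR /usr/src/cached-npm
COPY package.json .
COPY wait-for.sh .
RUN chmod +x ./wait-for.sh
RUN npm set registry http://host.docker.internal:4873
# wait for the registry and install at container start time, not at build time
CMD ["sh", "-c", "./wait-for.sh host.docker.internal:4873 -- npm install"]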

Substituting the npm links for Docker container (case with Node.js "node:12.4" and "docker-compose" file)

Project
There are dependencies provided via Lerna (I suppose it uses npm link) inside node_modules (assume dependency_a and dependency_b). The project builds without Docker, but Docker doesn't see the links to dependency_a and dependency_b. With the docker-compose.yaml below:
version: "3"
services:
logic:
image: node:12.4
command: npm run 'SPA incremental building' && npm run 'Run server'
volumes:
- .:/Application
working_dir: /Application
I get the error:
Error: Cannot find module '/packages/dependency_a/bin/dependency_a'.
Logic of the solution
1. From the viewpoint of npm and Lerna, nothing has been done wrong (at least, the linking works without Docker). But this "right" way is "wrong" for Docker.
2. Docker must not break npm's basic concept of dependency linking. That means that when we tell Docker where dependency_a is, the normal 'SPA incremental building' without Docker must still work. (Imagine what would happen if every tool decided which rules are right!)
3. Because of point 2, Docker must offer its own way of being told where dependency_a is. If such a way exists, it is the solution.
Conceptual solution
For the Docker container only (not the local environment), replace the npm links inside node_modules with links that Docker can understand. It's important that Docker sees the changes in these dependencies, because they are updated frequently.
It could be something like:
version: "3"
services:
logic:
image: node:12.4
command: npm run 'SPA incremental building' && npm run 'Run server'
volumes:
- .:/Application
- ../../packages/dependency_a:/Application/node_modules/dependency_a
- ../../packages/dependency_b:/Application/node_modules/dependency_a
working_dir: /Application
# ports:
# - "3000:3000"
But:
1. I am not sure what I am actually doing.
2. I don't feel good about the intersection of volumes (the second and third volumes override part of the first).
3. The above settings do not work.
Of course, I am talking about local development mode, not about production.
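One note on the error above: the missing module is reported at the absolute path /packages/dependency_a/bin/dependency_a, so a variant of the same idea is to mount the package sources at that exact path inside the container and leave the existing links untouched; a sketch (the relative host path ../../packages is an assumption based on the volumes above):

version: "3"
services:
  logic:
    image: node:12.4
    command: npm run 'SPA incremental building' && npm run 'Run server'
    volumes:
      - .:/Application
      # assumption: the links inside node_modules resolve to /packages/<name>
      - ../../packages:/packages
    working_dir: /Application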
Related article analysis
A similar situation is considered in the article Developing a new Node module in a Docker container without using NPM link. The solution for the case in that article:
RUN mkdir -p /usr/src/node_modules
ENV PATH /usr/src/node_modules/.bin:$PATH
and then in docker-compose.yaml:
volumes:
  - .:/usr/src/app
  - ../redux-beacon-slack:/usr/src/node_modules/redux-beacon-slack
This solution does not suit my case, because I am using the node:12.4 image. I suppose paths like /usr/src are relevant for operating-system images, but I don't understand whether I need to create that file system (like bin, boot, cdrom, home, root, etc.) in my node:12.4 image.
Inappropriate solutions
Use only published npm libraries, without npm links
A well-organized development environment is a basic requirement of the modern IT industry. In relation to locally developed npm dependencies:
1. No excess releases (to npm).
2. If we change something in a locally developed dependency, the changes must be reflected in the application immediately (which is what npm link provides). E.g. if we add console.log() somewhere in dependency_a, we instantly see its output in our application.
Changing the project structure
IMHO, good software must adapt to a custom project structure, not force its own project structure.

How to run nodejs and reactjs in Docker

I have a nodejs app for the backend and another reactjs app for the frontend of a website, and I want to put them into a Docker image. But I don't know how to deal with the CMD command in the Dockerfile. Does Docker have any command to solve this?
I thought that I could use docker-compose to build 2 separate images, but it seems wasteful because the node image has to be installed 2 times.
Does anyone have a solution?
Rule of thumb: single process per container.

I thought that I could use docker-compose to build 2 separate images, but it seems wasteful because the node image has to be installed 2 times.

First, managing 2 separate Docker images is fine, but running two processes in one container is not fine at all.
Second, you do not need to build 2 separate images; if you can run two processes from the same code, then you can run both applications from a single docker-compose file.
version: '3.7'
services:
  react-app:
    image: myapp:latest
    command: react-scripts start
    ports:
      - 3000:3000
  node-app:
    image: myapp:latest
    ports:
      - 3001:3001
    command: node server.js
Each container should have only one concern. Decoupling applications
into multiple containers makes it easier to scale horizontally and
reuse containers. For instance, a web application stack might consist
of three separate containers, each with its own unique image, to
manage the web application, database, and an in-memory cache in a
decoupled manner.
Limiting each container to one process is a good rule of thumb
Dockerfile Best practice
Whether to put your backend and frontend inside the same container is a design choice (remember that Docker containers are designed to share a lot of resources with the host machine).
You can use a shell script and run that shell script with CMD in your Dockerfile.
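A sketch of that shell-script approach (the file names, such as server.js, are assumptions; running two processes in one container is still the trade-off discussed above):

#!/bin/sh
# start.sh: start the Node backend in the background, then the React dev server in the foreground
node server.js &
exec npx react-scripts start

and at the end of the Dockerfile:

COPY start.sh .
RUN chmod +x start.sh
CMD ["./start.sh"]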

Codeship independent CI for microservices in monorepo

Currently we have a NodeJS monolith app. The tests run in Codeship and if the tests are green then the code will be deployed to Heroku. That is pretty easy.
So we would like to break up our monolith app into microservices, and we prefer a monorepo solution.
For example, we have service-1 and service-2 in the repo. We would like to set up an independent CI and deployment pipeline for each service on Codeship.
my-repo
  - service-1
    - src
    - package.json
    - docker-compose.yml
    - codeship-steps.yml
  - service-2
    - src
    - package.json
    - docker-compose.yml
    - codeship-steps.yml
Do you have any idea how can we setup the ideal CI?
Yes, CodeShip Pro provides a Docker Compose-like approach to setting up multiple services from the same project space. Assuming the microservices are already split up into their own folders, a codeship-services.yml may look like the following:
service-a:
  build:
    context: ./service-a
    dockerfile: Dockerfile # The Dockerfile in the ./service-a directory
service-b:
  build:
    context: ./service-b
Please check out our comprehensive intro guide for more information.
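A matching codeship-steps.yml could then run each service's pipeline independently; a sketch (the test commands are assumptions about your npm scripts):

- name: test-service-a
  service: service-a
  command: npm test
- name: test-service-b
  service: service-b
  command: npm test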

How to manage multiple backend stacks for development?

I am looking for the best/simplest way to manage a local development environment for multiple stacks. For example, on one project I'm building a MEAN stack backend.
I was recommended to use Docker; however, I believe it would complicate the deployment process, because shouldn't you have one container for mongo, one for express, etc., as found in this question on Stack Overflow?
How do developers manage multiple environments without VMs?
And in particular, what are the best practices for doing this on Ubuntu?
Thanks a lot.
With Docker Compose you can easily create multiple containers in one go. For development, the containers are usually configured to mount a local folder into the container's filesystem. This way you can easily work on your code and have live reloading. A sample docker-compose.yml could look like this:
version: '2'
services:
  node:
    build: ./node
    ports:
      - "3000:3000"
    volumes:
      - ./node:/src
      - /src/node_modules
    links:
      - mongo
    command: nodemon --legacy-watch /src/bin/www
  mongo:
    image: mongo
You can then just type:
docker-compose up
and your stack will be up in seconds.
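To manage several stacks side by side, each project can keep its own docker-compose.yml and get its own project name; a sketch (the folder names are just examples):

# each stack lives in its own folder with its own docker-compose.yml;
# -p gives each one an isolated project name, so containers and networks don't clash
cd ~/projects/mean-app && docker-compose -p mean-app up -d
cd ~/projects/other-api && docker-compose -p other-api up -d

# stop one stack without touching the other
cd ~/projects/mean-app && docker-compose -p mean-app down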
