How to run nodejs and reactjs in Docker

I have a Node.js app for the backend and a React app for the frontend of a website, and I want to put them into a Docker image. But I don't know how to deal with the CMD instruction in the Dockerfile. Does Docker have a command that solves this?
I thought I could use docker-compose to build 2 separate images, but that seems wasteful because the Node image would have to be installed 2 times.
Does anyone have a solution?

Rule of thumb, single process per container.
I thought that I could use docker-compose to build 2 separate image
but it seems to be wasted because node image has to be installed 2
times.
First thing: managing 2 separate Docker images is fine, but running two processes in one container is not fine at all.
Second thing: you do not need to build 2 separate images. If you can run both processes from the same code base, then you can run both applications from a single docker-compose file.
version: '3.7'
services:
  react-app:
    image: myapp:latest
    command: react-scripts start
    ports:
      - 3000:3000
  node-app:
    image: myapp:latest
    command: node server.js
    ports:
      - 3001:3001
Each container should have only one concern. Decoupling applications
into multiple containers makes it easier to scale horizontally and
reuse containers. For instance, a web application stack might consist
of three separate containers, each with its own unique image, to
manage the web application, database, and an in-memory cache in a
decoupled manner.
Limiting each container to one process is a good rule of thumb
Dockerfile best practices

Whether to put your backend and front-end inside the same container is a design choice (remember that Docker containers are designed to share a lot of resources from the host machine).
You can use a shell script that starts both processes and run that shell script with CMD in your Dockerfile, as sketched below.
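A minimal sketch of such a wrapper script (the paths, file names, and the use of react-scripts are assumptions; adapt them to your project layout):
#!/bin/sh
# start.sh - hypothetical wrapper: run the backend in the background,
# keep the frontend in the foreground so the container stays alive
node /app/backend/server.js &
BACKEND_PID=$!
cd /app/frontend && npx react-scripts start
# if the frontend exits, stop the backend as well
kill "$BACKEND_PID"
In the Dockerfile you would then COPY start.sh /start.sh, make it executable, and set CMD ["/start.sh"]. Keep the "one concern per container" advice above in mind, though; this is only a sketch of the single-container option.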

Related

how to make container for multiple servers in one code base to deploy golang app with docker?

I have a repo that has multiple servers that are going to run. The structure is like this:
// Golang Apps
- account = port 4001
- event = port 4002
- place = port 4003
// Node js
- gateway = port 4000
I usually run them locally using a script like this:
// script.sh here:
#!/bin/bash
EnvAPP="${ENV_APP:-dev}"
function cleanup {
  kill "$ACCOUNTS_PID"
  kill "$EVENTS_PID"
  kill "$PLACES_PID"
}
trap cleanup EXIT
go build -tags $EnvAPP -o ./tmp/srv-accounts ./cmd/server/accounts
go build -tags $EnvAPP -o ./tmp/srv-events ./cmd/server/events
go build -tags $EnvAPP -o ./tmp/srv-places ./cmd/server/places
./tmp/srv-accounts &
ACCOUNTS_PID=$!
./tmp/srv-events &
EVENTS_PID=$!
./tmp/srv-places &
PLACES_PID=$!
sleep 1
node ./cmd/gateway/index.js
Is it possible to create one Dockerfile for this case for production? Should I run script.sh in the Dockerfile? Which image should I use in the Dockerfile? I have no idea how to do this with Docker, because it is one code base with multiple servers running, and there is also the problem of the ports the servers run on.
Maybe one of you has had this case before? It would be great to know how to solve this problem.
I am using GraphQL Federation (Go) for this case, so I have multiple services and a Gateway (NodeJS).
I want to deploy this to production.
You need four separate Dockerfiles for this, to launch four separate containers with four different programs. The Go component Dockerfiles can be fairly straightforward:
# Dockerfile.accounts
# build stage: compile the accounts binary
FROM golang:1.16 AS build
WORKDIR /app
COPY . .
ARG ENV_APP=dev
RUN go build -tags "$ENV_APP" -o /accounts ./cmd/server/accounts

# runtime stage: copy only the compiled binary
FROM ubuntu:20.04
COPY --from=build /accounts /usr/local/bin
CMD ["accounts"]
(If the three images are really identical aside from the specific command directory being built, you could pass that in as an ARG as well. I'm assuming the ./cmd/server/* packages require packages elsewhere in your source directory like a ./pkg/support or whatever, which would require the Dockerfiles to be at the top level.)
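A possible sketch of that variant, with the built package passed in as a build argument (the SERVICE argument name and the Dockerfile.go file name are ones I'm introducing for illustration):
# Dockerfile.go - hypothetical shared Dockerfile for the three Go services
FROM golang:1.16 AS build
WORKDIR /app
COPY . .
ARG ENV_APP=dev
ARG SERVICE=accounts
RUN go build -tags "$ENV_APP" -o /server ./cmd/server/"$SERVICE"

FROM ubuntu:20.04
COPY --from=build /server /usr/local/bin/server
CMD ["server"]
You would then build it once per service, e.g. docker build -f Dockerfile.go --build-arg SERVICE=events -t events .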
Since your script is just running the four programs, I'd generally recommend using Docker Compose as a way to launch the four containers together. "Launch some containers with known options" is the only thing Compose does, but it would do everything your script does.
# docker-compose.yml
version: '3.8'
services:
  accounts:
    build:
      context: .
      dockerfile: Dockerfile.accounts
  events:
    build:
      context: .
      dockerfile: Dockerfile.events
  places:
    build:
      context: .
      dockerfile: Dockerfile.places
  gateway:
    build:
      context: .
      dockerfile: Dockerfile.gateway
    # (Since a Node app can't reuse Go code, this could also
    # reasonably be `build: cmd/gateway` using a
    # `cmd/gateway/Dockerfile`)
    ports:
      - 3000:3000
Just running docker-compose up will start all four containers in the foreground; once it's up, pressing Ctrl+C will stop them all. You can configure the gateway to use the other container names accounts, events, places as host names; http://accounts/graphql for example.
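If the gateway reads its upstream addresses from environment variables, that could look roughly like this in the gateway service (the variable names are assumptions; the 4001-4003 ports come from the layout in the question):
  gateway:
    # ...
    environment:
      ACCOUNTS_URL: http://accounts:4001/graphql
      EVENTS_URL: http://events:4002/graphql
      PLACES_URL: http://places:4003/graphql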
You could also adapt your launcher script as-is: run docker build instead of go build to build images, docker run to start containers (probably with fixed --name values), and docker stop && docker rm to stop them. You should docker network create a network and docker run --net all of the containers on it so they can communicate in the same way as in the Compose setup, as in the rough sketch below.
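A rough sketch of that script-based approach (the image, container, and network names are assumptions):
#!/bin/bash
# build one image per Dockerfile
docker build -f Dockerfile.accounts -t myapp/accounts .
docker build -f Dockerfile.events -t myapp/events .
docker build -f Dockerfile.places -t myapp/places .
docker build -f Dockerfile.gateway -t myapp/gateway .
# shared network so the containers can reach each other by name
docker network create myapp-net
docker run -d --name accounts --net myapp-net myapp/accounts
docker run -d --name events --net myapp-net myapp/events
docker run -d --name places --net myapp-net myapp/places
docker run -d --name gateway --net myapp-net -p 3000:3000 myapp/gateway
# later, to tear everything down:
# docker stop accounts events places gateway
# docker rm accounts events places gateway
# docker network rm myapp-net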

Dockerize and reuse NodeJS dependency

I'm developing an application based on a microfrontend architecture, and in a production environment, the goal is to have each microfrontend as a dockerized NodeJS application.
Right now, each microfrontend depends on an internal NPM package developed by the company, and I would like to know if it's possible to have that dependency as an independent image, where each microfrontend would somehow reuse it instead of installing it multiple times (once for each microfrontend).
I've been running some tests, and I've managed to dockerize the internal dependency, but I haven't been able to make it reachable to the microfrontends. I was hoping there was a way to set it up in package.json, something similar to how it's done for local paths, but since each image's scope is isolated, they can't find that dependency.
Thanks in advance.
There are at least 2 solutions to your question:
1. Create a package and import it in every project (see Verdaccio for a local npm registry).
2. Use a single Docker image with a shared node_modules and change the command in docker-compose.
Solution 2
Basically, the idea is to put all your microservices into a single Docker image, in a structure like this:
/service1
/service2
/service3
/node_modules
/package.json
Then, in your docker-compose.yaml:
version: '3'
services:
  service1:
    image: my-image:<version or latest>
    command: npm run service1:start
    environment:
      ...
  service2:
    image: my-image:<version or latest>
    command: npm run service2:start
    environment:
      ...
  service3:
    image: my-image:<version or latest>
    command: npm run service3:start
    environment:
      ...
The advantage is that you now have a single image to deploy in production, and all the shared code is in one place.
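For the command lines above to work, the root package.json would need one start script per service; a minimal sketch (the script names match the compose file above, but the entry-point paths are assumptions):
{
  "name": "my-services",
  "private": true,
  "scripts": {
    "service1:start": "node service1/index.js",
    "service2:start": "node service2/index.js",
    "service3:start": "node service3/index.js"
  }
}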

How to serve/update content from one Docker container in another

I'm trying to run NodeJS served by nginx. I want to proxy to the NodeJS server, and serve static content like images, css, js.
Here's my docker-compose file:
version: "3.7"
services:
web:
build: .
image: 127.0.0.1:5000/test
volumes:
- public:/app/public
deploy:
replicas: 2
nginx:
image: nginx:stable-alpine
ports:
- 80:80
volumes:
- ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf:ro
# share files from web
- public:/static:ro
volumes:
public:
My Dockerfile runs npx webpack -p to build all static files (/app/public), and then runs the NodeJS server with node /app/src/server.min.js. The NodeJS serves server-side React with ajax endpoints for minimal page updates. As I mentioned above the nginx container serves the static content (css, js, images, etc).
The problem is that I can't update the static files. Once the volume is created and populated, those files aren't able to be altered, i.e. I can't update CSS or JS.
You can see this behavior with docker-compose up or docker stack deploy.
Is there some way that I could recreate the volume, or serve the files in some other way between the containers?
There are three basic approaches you can take here.
The first is, at build time, copy those static assets somewhere outside of Docker space where Nginx or something else can host them. If you're running in AWS anyways, you can serve them directly out of S3; if you're in a local environment, use a host path instead of a named volume. This mostly just avoids the "only on first use" volume behavior, but it requires some work outside of Docker.
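In the local case that just means a bind mount instead of the named volume, roughly like this in the compose file (the ./dist host path is an assumption; it would be wherever your webpack build writes its output):
  nginx:
    image: nginx:stable-alpine
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - ./dist:/static:ro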
You can build the same content into two images, and not try to share it using a volume. You'd have to add a second Dockerfile for the nginx image, and if there's some sort of build pipeline (Webpack?) to build the static content you'd have to ensure that's run up front. A Dockerfile line like
COPY --from=127.0.0.1:5000/test /app/public /static
might work.
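A sketch of what that second Dockerfile could look like (the Dockerfile.nginx name is an assumption; the image reference matches the web image tag from your compose file):
# Dockerfile.nginx
FROM 127.0.0.1:5000/test AS assets

FROM nginx:stable-alpine
COPY nginx/nginx.conf /etc/nginx/conf.d/default.conf
# copy the webpack output that was baked into the web image
COPY --from=assets /app/public /static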
You can also have the web image copy its own data manually at startup, instead of relying on Docker to do this for you. You can have an entrypoint script like
#!/bin/sh
# copy the baked-in static assets into the shared volume at startup
if [ -d /static ]; then
  cp -r /app/public/* /static
fi
exec "$@"
Add that to your image, and mount the shared volume on /static in both containers.
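The wiring for that could look roughly like this (entrypoint.sh is an assumed file name), in the web image's Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["node", "/app/src/server.min.js"]
and in the compose file, the web service would mount the same named volume at /static:
  web:
    volumes:
      - public:/static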
All of these cases avoid the behavior of Docker automatically populating volumes. I try to avoid that in general because of exactly this issue: it only happens the first time you run a container, but in reality you're frequently making updates, redeploying, etc. but that volume already exists so Docker won't update it. (This behavior also doesn't work at all in Kubernetes, in spite of it otherwise being able to run standard Docker images without modification.)

How setup a Node.js development environment using Docker Compose

I want to create a complete Node.js environment for developing any kind of application (script, API service, website, etc.), also using different services (e.g. MySQL, Redis, MongoDB). I want to use Docker for this in order to have a portable and multi-OS environment.
I've created a Dockerfile for the container in which Node.js is installed:
FROM node:8-slim
WORKDIR /app
COPY . /app
RUN yarn install
EXPOSE 80
CMD [ "yarn", "start" ]
And a docker-compose.yml file where I add the services that I need to use:
version: "3"
services:
app:
build: ./
volumes:
- "./app:/app"
- "/app/node_modules"
ports:
- "8080:80"
networks:
- webnet
mysql:
...
redis:
...
networks:
webnet:
I would like to ask what the best patterns are to achieve these goals:
Having the whole working directory shared between the host and the Docker container, in order to edit the files and see the changes from both sides.
Having the node_modules directory visible on both the host and the Docker container, so that it is also debuggable from an IDE on the host.
Since I want a development environment suitable for every project, I would like a container that, once started, I can log into with a command like docker-compose exec app bash. So I'm trying to find another way to keep the container alive instead of running a Node.js server or using the trick of CMD ["tail", "-f", "/dev/null"].
Thank you in advance!
Having the whole working directory shared between the host and the Docker container, in order to edit the files and see the changes from both sides.
Use the -v volume option (or the volumes: key in docker-compose) to share the host directory inside the Docker container.
Having the node_modules directory visible on both the host and the Docker container, so that it is also debuggable from an IDE on the host.
Same as above.
Since I want a development environment suitable for every project, I would like a container that, once started, I can log into with a command like docker-compose exec app bash. So I'm trying to find another way to keep the container alive instead of running a Node.js server or using the trick of CMD ["tail", "-f", "/dev/null"].
In docker-compose.yml, define these for interactive mode:
stdin_open: true
tty: true
Then attach to the running container with docker exec -it <container> bash (or docker-compose exec app bash), as in the sketch below.
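Putting those pieces together, the app service from the question could look roughly like this for development (a sketch under the assumptions above; sleep infinity is just one way to keep the container alive without running a server):
services:
  app:
    build: ./
    volumes:
      - "./app:/app"          # share the working directory with the host
      - "/app/node_modules"   # keep container-installed modules out of the bind mount
    stdin_open: true
    tty: true
    command: sleep infinity
You can then get a shell in it with docker-compose exec app bash.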

How to manage multiple backend stacks for development?

I am looking for the best/simplest way to manage a local development environment for multiple stacks. For example on one project I'm building a MEAN stack backend.
I was recommended to use Docker; however, I believe it would complicate the deployment process, because shouldn't you have one container for mongo, one for express, etc.? As found in this question on Stack Overflow.
How do developers manage multiple environments without VMs?
And in particular, what are best practices doing this on ubuntu?
Thanks a lot.
With Docker Compose you can easily create multiple containers in one go. For development, the containers are usually configured to mount a local folder into the container's filesystem. This way you can easily work on your code and have live reloading. A sample docker-compose.yml could look like this:
version: '2'
services:
  node:
    build: ./node
    ports:
      - "3000:3000"
    volumes:
      - ./node:/src
      - /src/node_modules
    links:
      - mongo
    command: nodemon --legacy-watch /src/bin/www
  mongo:
    image: mongo
You can then just type
docker-compose up
and your stack will be up in seconds.
