How to set up a Node.js development environment using Docker Compose

I want to create a complete Node.js environment for developing any kind of application (script, API service, website, etc.), possibly together with different services (e.g. MySQL, Redis, MongoDB). I want to use Docker so the environment is portable and works across operating systems.
I've created a Dockerfile for the container in which Node.js is installed:
FROM node:8-slim
WORKDIR /app
COPY . /app
RUN yarn install
EXPOSE 80
CMD [ "yarn", "start" ]
And a docker-compose.yml file where I add the services that I need to use:
version: "3"
services:
app:
build: ./
volumes:
- "./app:/app"
- "/app/node_modules"
ports:
- "8080:80"
networks:
- webnet
mysql:
...
redis:
...
networks:
webnet:
I would like to ask what the best patterns are to achieve these goals:
Having the whole work directory shared between the host and the Docker container, so I can edit the files and see the changes from both sides.
Having the node_modules directory visible on both the host and the Docker container, so it can also be debugged from an IDE on the host.
Since I want a development environment suitable for every project, I would like a container that, once started, I can log into using a command like docker-compose exec app bash. So I'm trying to find another way to keep the container alive, instead of running a Node.js server or using the trick of CMD ["tail", "-f", "/dev/null"].
Thank you in advance!

Having the whole work directory shared between the host and the Docker container, so I can edit the files and see the changes from both sides.
Use the -v volume option (or a volumes: entry in docker-compose.yml) to mount the host directory inside the Docker container.
Having the node_modules directory visible on both the host and the Docker container, so it can also be debugged from an IDE on the host.
Same as above.
Since I want a development environment suitable for every project, I would like a container that, once started, I can log into using a command like docker-compose exec app bash. So I'm trying to find another way to keep the container alive, instead of running a Node.js server or using the trick of CMD ["tail", "-f", "/dev/null"].
In docker-compose.yml, define these options to keep the container running in interactive mode:
stdin_open: true
tty: true
Then attach to the running container with docker exec -it <container> bash (or docker-compose exec app bash).
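Putting those pieces together, a minimal development-oriented docker-compose.yml could look like the sketch below. It reuses the bind mount and the anonymous /app/node_modules volume from the question; the optional command: bash override is my own assumption, useful if you want only a shell instead of yarn start.

# docker-compose.yml (development sketch, not a definitive setup)
version: "3"
services:
  app:
    build: ./
    stdin_open: true        # keep STDIN open so the container stays alive
    tty: true               # allocate a pseudo-TTY for an interactive shell
    # command: bash         # optionally override CMD so no Node.js server is started
    volumes:
      - "./app:/app"        # share the work directory with the host
      - "/app/node_modules" # keep node_modules inside the container
    ports:
      - "8080:80"
    networks:
      - webnet
networks:
  webnet:

With this in place, docker-compose up -d starts the container and docker-compose exec app bash drops you into it.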

Related

How to make containers for multiple servers in one code base to deploy a Golang app with Docker?

I have a repo that has multiple servers running. The structure is like this:
// Golang Apps
- account = port 4001
- event = port 4002
- place = port 4003
// Node js
- gateway = port 4000
I usually run it locally using a script like this:
// script.sh here:
#!/bin/bash
EnvAPP="${ENV_APP:-dev}"
function cleanup {
  kill "$ACCOUNTS_PID"
  kill "$EVENTS_PID"
  kill "$PLACES_PID"
}
trap cleanup EXIT
go build -tags $EnvAPP -o ./tmp/srv-accounts ./cmd/server/accounts
go build -tags $EnvAPP -o ./tmp/srv-events ./cmd/server/events
go build -tags $EnvAPP -o ./tmp/srv-places ./cmd/server/places
./tmp/srv-accounts &
ACCOUNTS_PID=$!
./tmp/srv-events &
EVENTS_PID=$!
./tmp/srv-places &
PLACES_PID=$!
sleep 1
node ./cmd/gateway/index.js
Is it possible to create one Dockerfile for this case for production? Should I run script.sh in that Dockerfile? Which image should I use in the Dockerfile? I have no idea how to handle this case with Docker, because it is one code base with multiple servers running, and there is also the question of the ports the servers listen on.
Maybe one of you has had this case before? It would be great to know how to solve this problem.
I am using GraphQL Federation (Go) for this case, so I have multiple services and a Gateway (NodeJS).
I want to deploy this to production.
You need four separate Dockerfiles for this, to launch four separate containers with four different programs. The Go component Dockerfiles can be fairly straightforward:
# Dockerfile.accounts
FROM golang:1.16 AS build
WORKDIR /app
COPY . .
ARG ENV_APP=dev
RUN go build -tags "$ENV_APP" -o /accounts ./cmd/server/accounts
FROM ubuntu:20.04
COPY --from=build /accounts /usr/local/bin
CMD accounts
(If the three images are really identical aside from the specific command directory being built, you could pass that in as an ARG as well. I'm assuming the ./cmd/server/* packages require packages elsewhere in your source directory like a ./pkg/support or whatever, which would require the Dockerfiles to be at the top level.)
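As an illustration of that ARG idea, a single shared Dockerfile might look like the sketch below; the SERVICE build argument and the Dockerfile.service name are mine, not from the question.

# Dockerfile.service -- hypothetical shared Dockerfile, parameterized per Go service
FROM golang:1.16 AS build
WORKDIR /app
COPY . .
ARG ENV_APP=dev
ARG SERVICE=accounts
RUN go build -tags "$ENV_APP" -o /srv ./cmd/server/"$SERVICE"

FROM ubuntu:20.04
COPY --from=build /srv /usr/local/bin/srv
CMD ["/usr/local/bin/srv"]

Each Compose service would then pass its own SERVICE (and optionally ENV_APP) under build: args:.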
Since your script is just running the four programs, I'd generally recommend using Docker Compose as a way to launch the four containers together. "Launch some containers with known options" is the only thing Compose does, but it would do everything your script does.
# docker-compose.yml
version: '3.8'
services:
  accounts:
    build:
      context: .
      dockerfile: Dockerfile.accounts
  events:
    build:
      context: .
      dockerfile: Dockerfile.events
  places:
    build:
      context: .
      dockerfile: Dockerfile.places
  gateway:
    build:
      context: .
      dockerfile: Dockerfile.gateway
      # (Since a Node app can't reuse Go code, this could also
      # reasonably be `build: cmd/gateway` using a
      # `cmd/gateway/Dockerfile`)
    ports:
      - 3000:3000
Just running docker-compose up will start all four containers in the foreground; once it's up, pressing Ctrl+C will stop them all. You can configure the gateway to use the other container names accounts, events, places as host names; for example, http://accounts/graphql.
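For example, if the gateway is Apollo Gateway, its configuration could look like the sketch below. The service ports come from the question, but the use of the older serviceList option is an assumption on my part (newer @apollo/gateway versions configure this via a supergraph SDL instead), and the gateway port matches the 3000:3000 mapping above.

// cmd/gateway/index.js -- sketch only
const { ApolloServer } = require('apollo-server');
const { ApolloGateway } = require('@apollo/gateway');

const gateway = new ApolloGateway({
  // Compose service names resolve as host names on the shared network
  serviceList: [
    { name: 'accounts', url: 'http://accounts:4001/graphql' },
    { name: 'events', url: 'http://events:4002/graphql' },
    { name: 'places', url: 'http://places:4003/graphql' },
  ],
});

const server = new ApolloServer({ gateway, subscriptions: false });
server.listen({ port: 3000 }).then(({ url }) => console.log(`Gateway ready at ${url}`));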
You could also adapt your launcher script as-is: run docker build instead of go build to build images, docker run to start the containers (probably with fixed --names), and docker stop && docker rm to stop them. You should docker network create a network and docker run --net all of the containers on it so they can communicate in the same way as in the Compose setup.
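A rough sketch of that adapted launcher, assuming image and container names of my own choosing (accounts, events, places, gateway):

#!/bin/bash
# deploy.sh -- illustrative adaptation of script.sh to plain docker commands
set -e

# shared network so the containers can reach each other by name
docker network create srv-net 2>/dev/null || true

docker build -f Dockerfile.accounts -t accounts .
docker build -f Dockerfile.events -t events .
docker build -f Dockerfile.places -t places .
docker build -f Dockerfile.gateway -t gateway .

function cleanup {
  docker stop accounts events places gateway >/dev/null 2>&1 || true
  docker rm accounts events places gateway >/dev/null 2>&1 || true
}
trap cleanup EXIT

docker run -d --name accounts --net srv-net accounts
docker run -d --name events --net srv-net events
docker run -d --name places --net srv-net places
docker run --name gateway --net srv-net -p 3000:3000 gateway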

Running docker compose inside Docker Container

I have a Dockerfile I am building. It will use Localstack to spin up a mock AWS environment. At the minute I do this locally with my docker compose file, so I was thinking I could just copy my docker-compose.yml over when building my Dockerfile, run docker-compose up from the Dockerfile, and then I would be able to run my application from the container created from the Dockerfile.
Here is the docker compose file
version: '3.1'
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      - AWS_DEFAULT_REGION=us-east-1
      - EDGE_PORT=4566
      - SERVICES=lambda,s3,cloudformation,sts,apigateway,iam,route53,dynamodb
    ports:
      - '4566-4597:4566-4597'
    volumes:
      - "${TEMPDIR:-/tmp/localstack}:/temp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
Here is my Dockerfile:
FROM node:16-alpine
RUN apk update
RUN npm install -g serverless; \
npm install -g serverless-localstack;
WORKDIR /app
COPY serverless.yml ./
COPY localstack_endpoints.json ./
COPY docker-compose.yml ./
COPY --from=library/docker:latest /usr/local/bin/docker /usr/bin/docker
COPY --from=docker/compose:latest /usr/local/bin/docker-compose /usr/bin/docker-compose
EXPOSE 3000
RUN docker-compose up
CMD ["sls","deploy" ]
But the error I am receiving is
#17 0.710 Couldn't connect to Docker daemon at http+docker://localhost - is it running?
#17 0.710
#17 0.710 If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
I'm new to Docker. When I researched the error online I saw people saying it needs to be run with sudo, although I think in this case it is something to do with my volumes linking to the host running the container, but I'm really not sure.
Inside the Docker container, the Docker client tries to reach the socket but cannot. When you run your container, mount the host's socket:
-v /var/run/docker.sock:/var/run/docker.sock
That should fix the problem.
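For example, assuming a hypothetical image name my-app:

docker run -v /var/run/docker.sock:/var/run/docker.sock my-app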
As a general rule, you can't do things in your Dockerfile that affect persistent state or processes running outside the container. Imagine docker building your image, docker pushing it to a registry, and docker pulling it on a new system; if the build step was able to start other running containers, they wouldn't be running with the same image on a different system.
At a more mechanical level, the build sequence doesn't have access to bind-mounted host directories or a variety of other runtime settings. That's why you get the "couldn't connect to Docker daemon" message: the build container isn't running a Docker daemon and it doesn't have access to the host's daemon.
Rather than try to have a container embed the Compose tool and Compose setup, you might find it easier to just distribute a docker-compose.yml file, and make the standard way to run your composite application be running docker-compose up on the host. Access to the Docker socket is incredibly powerful -- you can almost trivially use it to root the host -- and I wouldn't require it to avoid needing a fairly standard tool on the host.
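As a sketch of that approach, you could add your application as a second service next to localstack in the same docker-compose.yml and drop the docker/docker-compose lines from your Dockerfile; the app service name and its build context here are assumptions:

# docker-compose.yml (sketch: run the app beside Localstack on the host)
version: '3.1'
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      - AWS_DEFAULT_REGION=us-east-1
      - EDGE_PORT=4566
      - SERVICES=lambda,s3,cloudformation,sts,apigateway,iam,route53,dynamodb
    ports:
      - '4566-4597:4566-4597'
    volumes:
      - "${TEMPDIR:-/tmp/localstack}:/temp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
  app:
    build: .            # the existing Dockerfile, minus the docker/docker-compose COPY and RUN lines
    depends_on:
      - localstack
    ports:
      - '3000:3000'

Then a single docker-compose up on the host builds the app image and starts both containers together.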

How to use node.js in a container that has a "WORKDIR /app" in its image to access a shared volume (volumes_from)?

I have 2 containers, the first ('client') writes files to a volume while the second ('server') needs to read them (this is a simplified version of my requirement). My problem is that I don't know how to access the files from the second container using node.js when it is set with a WORKDIR /app.
(I've seen examples of how to access volume using volumes_from, like this one: https://phoenixnap.com/kb/how-to-share-data-between-docker-containers which works on my tests, but it doesn't demonstrate my settings)
This is my docker-compose file (simplified):
volumes:
  aca_storage:
services:
  server-test:
    image: aca-server:0.0.1
    container_name: aca-server-test
    command: sh -c "npm install && npm run start:dev"
    ports:
      - 8080:8080
    volumes_from:
      - client-test:ro
    environment:
      NODE_ENV: development
  client-test:
    image: aca-client:0.0.1
    container_name: aca-client-test
    ports:
      - 81:80
    volumes:
      - aca_storage:/app/files_to_share
This is the docker file for the aca-server image:
FROM node:alpine
WORKDIR /app
COPY ["./package*.json", "./"]
RUN npm install -g nodemon
RUN npm install --production
COPY . .
CMD [ "node", "main.js"]
On my server's node app I'm trying to read files like this:
fs.readdir(PATH_TO_SHARED_VOLUME, function (err, files) {
  if (err) {
    return console.log('Unable to scan directory: ' + err);
  }
  files.forEach(function (file) {
    console.log(file);
  });
});
But all my tests to fill PATH_TO_SHARED_VOLUME with a valid path failed. For example:
/aca_storage
/aca_storage/_data
/acaproject_aca_storage
/acaproject_aca_storage/_data
(acaproject is the VS Code workspace name which I noticed is being added automatically)
Using the Docker CLI inside the 'aca-server-test' container, I'm getting:
/app #
and ls there shows only the files/folders of my node.js app, but doesn't give access to the volume 'aca_storage' as happens in the examples I can find on the internet.
If relevant, my environment is:
Windows 10 Home with WSL2
Docker Desktop set as Linux Containers
I'm a noob with Linux and Docker, so as many details as possible will be appreciated.
You can mount the same storage into different containers on different paths. I'd avoid using volumes_from:, which is inflexible and a little bit opaque.
version: '3.8'
volumes:
  aca_storage:
services:
  client:
    volumes:
      - aca_storage:/app/data
  server:
    volumes:
      - aca_storage:/app/files_to_share
In each container the mount path needs to match what the application code is expecting, but they don't necessarily need to be the same path. With this configuration, in the server code, you'd set PATH_TO_SHARED_VOLUME = '/app/files_to_share', for example.
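For example, in the server code (a sketch; the environment variable name is my own, not from the question):

// server: read from the path where the shared volume is mounted in this container
const PATH_TO_SHARED_VOLUME = process.env.SHARED_FILES_DIR || '/app/files_to_share';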
If VSCode is adding a bind mount to replace the server image's /app directory with your local development code, volumes_from: will also copy this mount into the client container. That could result in odd behavior.
Sharing files between containers adds many complications; it makes it hard to scale the setup and to move it to a clustered setup like Docker Swarm or Kubernetes. A simpler approach could be for the client container to HTTP POST the data to the server container, which can then manage its own (unshared) storage.
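As an illustration of that alternative, the client could send the file contents over HTTP instead of writing to a shared volume. This is only a sketch; the endpoint path and file name are assumptions, while server-test and port 8080 come from the Compose file above:

// client: upload a file to the server instead of sharing a volume
const http = require('http');
const fs = require('fs');

const body = fs.readFileSync('./files_to_share/report.json'); // hypothetical file
const req = http.request(
  {
    host: 'server-test',        // Compose service name resolves on the shared network
    port: 8080,
    path: '/files/report.json', // hypothetical endpoint on the server
    method: 'POST',
    headers: { 'Content-Type': 'application/json', 'Content-Length': body.length },
  },
  (res) => console.log('upload status:', res.statusCode)
);
req.on('error', (err) => console.error('upload failed:', err));
req.end(body);

The server would then write the uploaded data into its own, unshared storage.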

How can I connect to my Verdaccio service, launched as a Docker container, from another Docker container?

I am trying to build an npm repository which will be used on an offline system. My idea is to build a ready docker container, which will already contain all the packages needed for a given project - downloading the packages will be based on the package.json file.
To implement my idea, I need to run the Verdaccio server in one container; then the other container will run the npm install command, which will generate the appropriate files with the ready npm packages.
However, I cannot cope with waiting for the launch of the first container. So far I have tried to use the wait-for-it.sh and wait-for.sh scripts (https://docs.docker.com/compose/startup-order/), but they are not able to connect to the given address.
P.S. I am using Docker for Windows.
docker-compose.yml
version: '3.1'
services:
  listen:
    build: listen
    image: listen-img
    container_name: listen
    environment:
      - VERDACCIO_PORT=4873
    ports:
      - "4873:4873"
  download:
    build: download
    image: download-img
    container_name: download
    depends_on:
      - listen
networks:
  node-network:
    driver: bridge
Server Dockerfile:
FROM verdaccio/verdaccio:4
'npm install trigger' Dockerfile:
FROM node:15.3.0-alpine3.10
WORKDIR /usr/src/cached-npm
COPY package.json .
COPY wait-for.sh .
COPY /config/htpasswd /verdaccio/conf/htpasswd
USER root
RUN npm set registry http://host.docker.internal:4873
RUN chmod +x /usr/src/cached-npm/wait-for.sh
RUN /usr/src/cached-npm/wait-for.sh host.docker.internal:4873 -- echo "Listen is up"
RUN npm install
Is there something missing from my solution, like a lack of shared ports, or are there other issues causing my approach to fail?
It turned out that the problem was mixing up two processes: building and launching the containers. In my solution so far, I wanted to build both containers at the same time, while one of them needed an already running instance of the other in order to be built.
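In other words, anything that needs the Verdaccio container to be reachable belongs in the trigger container's start command, not in its image build. A sketch of that change to the 'npm install trigger' Dockerfile, assuming the same file layout as in the question (unrelated COPY lines omitted):

FROM node:15.3.0-alpine3.10
WORKDIR /usr/src/cached-npm
COPY package.json .
COPY wait-for.sh .
USER root
RUN npm set registry http://host.docker.internal:4873
RUN chmod +x /usr/src/cached-npm/wait-for.sh
# wait for Verdaccio and install only when the container starts, not while the image builds
CMD ["sh", "-c", "./wait-for.sh host.docker.internal:4873 -- npm install"]

On the Compose network you may also be able to reach the registry by its service name (for example listen:4873) instead of host.docker.internal.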

Google Sheets API v4, How to authenticate from Node.js (Docker Container)

I'm learning how to use the Google Sheets API v4 to download data from a sheet into my Node.js server. I'm using Docker containers for my node app. It fails in Docker, whether on localhost or online at a server. It works fine on localhost outside of Docker, but not in a Docker container. I've whitelisted the IP address at the Google API console. (Note: I'm easily able to use the Firebase API from this node server, just not the Google Sheets v4 API.)
ref: https://developers.google.com/sheets/api/quickstart/nodejs#step_4_run_the_sample
The first time you run the app, the command line on the node server displays:
Authorize this app by visiting this url:
https://accounts.google.com/o/oauth2/auth?access_type=offline&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fspreadsheets.readonly&response_type=code&client_id=xxx.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob
You go to that URL, and that Google page displays:
Sign in
Please copy this code, switch to your application and paste it there.
4/xxxxxxxxxxxx
And here's the rub. No way will that work. I can copy and paste the 4/xxx token into the command line, but it's a fail. No error message, no nothing. No function either. Is there a way to get there from here? I know this works fine in a standalone Node server on my desktop computer, but not in a Docker container (either localhost or online). Is there a manual method for the authentication?
-----------Edit---------------------------------------------------------
I started looking at the code again, and the issue is a failure of Node's readline while running in a Docker container.
var rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout
});
And that issue already exists here on StackOverflow:
Unable to get standard input inside docker container
duplicate of:
how to get docker container to read from stdin?
You need to run the container in interactive mode with --interactive or -i.
Whoa... and how do you do that in a docker-compose deployment?
Interactive shell using Docker Compose
Ouch. No go on that posting. Didn't work at all for me. See the answer provided below.
Info provided here in case anybody else hits this bump in the road.
So it turns out the solution was nowhere near that provided by Interactive shell using Docker Compose
I'm running a node server in a docker container. I wanted to use the terminal to insert a token upon container startup in response to Google sheet API call, using the Node readline method.
Instead, the solution I came up with was the result of a note I saw in a Docker Compose GitHub issue. A long, slow read of the Docker Compose functions got me to a better solution. It was as simple as:
$ docker-compose build
$ docker-compose run -p 8080:80 node
One important issue here... the word node is the name of my service, as called out in the docker-compose.yml file below. This solution worked fine both on my localhost and on an online server via an SSH terminal.
Dockerfile:
FROM node:8
RUN mkdir -p /opt/app
# set our node environment, either development or production
ARG NODE_ENV=production
ENV NODE_ENV $NODE_ENV
# default to port 80 for node, and 5858 or 9229 for debug
ARG PORT=80
ENV PORT $PORT
EXPOSE $PORT 5858 9229
# install dependencies first, in a different location for easier app bind mounting for local development
WORKDIR /opt
COPY package.json package-lock.json* ./
RUN npm install && npm cache clean --force
ENV PATH /opt/node_modules/.bin:$PATH
# copy in our source code last, as it changes the most
WORKDIR /opt/app
COPY . /opt/app
CMD [ "node", "./bin/www" ]
docker-compose.yml
version: '3.1'
services:
  node:   # <---- name of the service in the container
    build:
      context: .
      args:
        - NODE_ENV=development
    command: ../node_modules/.bin/nodemon ./bin/www --inspect=0.0.0.0:9229
    ports:
      - "80:80"
      - "5858:5858"
      - "9229:9229"
    volumes:
      - .:/opt/app
      # this is a workaround to prevent host node_modules from accidentally getting mounted in container
      # in case you want to use node/npm both outside container for test/lint etc. and also inside container
      # this will overwrite the default node_modules dir in container so it won't conflict with our
      # /opt/node_modules location. Thanks to PR from #brnluiz
      - notused:/opt/app/node_modules
    environment:
      - NODE_ENV=development
    # tty: true ## tested, not needed
    # stdin_open: true ## tested, not needed
volumes:
  notused:
Many thanks to Bret Fisher for his work on node docker defaults.
