Debug Node (>=6.3) from a docker container

I have a docker container which uses pm2 to run node like so:
# process.yml
apps:
  - script: ./index.js
    name: client
    watch: true
    args: --inspect

# Dockerfile
CMD pm2-docker process.yml
As described in that post, the Node inspector has arrived in Node.js core, and running a script like:
node --inspect <somescript.js>
prints a URL like chrome-devtools://… on the command line; navigating to that URL in Chrome fires up the inspector.
How can I do that for a Node instance that lives inside a container but should be debugged from the host?
UPDATE
I managed to start the debug process by changing two things:
node_args: --inspect=localhost:9080
docker run ... -p 9080:9080
But that brings up one problem: the URL to use is displayed on the command line right after the node --inspect=... command is executed, but when running the Docker container that information disappears into the logs somewhere. So how can I access the URL from there?

You simply publish the required port with -p 9229:9229, or
ports:
  - 9229:9229
in docker-compose, and then start it with pm2 and the --inspect arg, or directly with node --inspect index.
The URL will then be printed out and you can use it in Chrome just as you would without Docker.
To find that line afterwards you can use
docker-compose logs service-name | grep chrome-devtools
or
docker logs container-id 2>&1 | grep chrome-devtools
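For reference, a minimal end-to-end setup might look like this (a sketch; the image name and port are assumptions, and pm2's node_args is used so the flag reaches Node itself rather than your script):
# process.yml
apps:
  - script: ./index.js
    name: client
    node_args: --inspect=0.0.0.0:9229
# start the container with the inspector port published
docker run -p 9229:9229 my-node-image
Alternatively, open chrome://inspect in Chrome: it lists running Node targets on known inspector ports, so you don't have to fish the chrome-devtools:// URL out of the logs at all.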

Related

Docker and Node .mjs files

I have an express application with all the JS files using the *.mjs extension.
So, to start the server I do node index.mjs and it works as expected.
Now I'm trying to containerize the app.
I have this basic Dockerfile
FROM mhart/alpine-node:14
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
CMD node index.mjs
EXPOSE 80
After building (with no errors) and tagging, I try to run my application (docker run my-app:latest); the console just shows a line break, and I don't see the console logs of my server.
If I try to hit localhost at port 80, it doesn't work.
I check the containers with docker container ls and I see the container
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce7ca2a0db96 my-app:latest "/bin/sh -c 'node in…" 6 minutes ago Up 6 minutes 80/tcp clever_bhabha
If I look for logs, nothing.
Does anyone have this issue? Could it be related to .mjs files? If so, is there a way to use them in Docker?
Thanks
I think you need to publish the container's port 80 to a port on your host. You should try
docker run -p 8080:80 my-app
Then in localhost:8080 you should reach your app.
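Note that EXPOSE 80 in the Dockerfile only documents the port; it does not publish it to the host. The docker-compose equivalent of the mapping above would be something like this (a sketch; the service name is an assumption):
services:
  my-app:
    build: .
    ports:
      - "8080:80" # host port 8080 -> container port 80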

NPM start script runs from local shell but fails inside Docker container command

I have a Node app which consists of three separate Node servers, each run by pm2 start. I use concurrently to run the three servers, as a start-all script in package.json:
"scripts": {
  ...
  "start-all": "concurrently \"pm2 start ./dist/foo.js\" \"pm2 start ./dist/bar.js\" \"pm2 start ./dist/baz.js\"",
  "stop-all": "pm2 stop all",
  "reload-all": "pm2 reload all",
  ...
}
This all runs fine when running from the command line on localhost, but when I run it as a docker-compose command - or as a RUN command in my Dockerfile - only one of the server scripts (a random one each time I try it!) will launch, but then immediately exit. In my --verbose docker-compose output I can see the pm2 panel (listing name, version, mode, pid, etc.), but then this error message:
pm2 start ./dist/foo.js exited with code 0.
N.B: This is all with Docker running locally (on a Mac Mini with 16GB of RAM), not on a remote server.
If I docker exec -it <container_name> /bin/bash into the container and then run npm run start-all manually from the top level of the src directory (which I COPY over in my Dockerfile), everything works. Here is my Dockerfile:
FROM node:latest
# Create the workdir
RUN mkdir /myapp
WORKDIR /myapp
# Install packages
COPY package*.json ./
RUN npm install
# Install pm2 and concurrently globally.
RUN npm install -g pm2
RUN npm install -g concurrently
# Copy source code to the container
COPY . ./
In my docker-compose file I simply list npm run start-all as a command for the Node service. But it makes no difference if I add it to the Dockerfile like this:
RUN npm run start-all
What could possibly be going on? The pm2 logs report nothing other than that the app has started.
The first reason: pm2 start app.js starts the application in the background, which is why your container stops as soon as it runs pm2 start.
You need to start the application with pm2-runtime, which starts it in the foreground. You also do not need concurrently; a pm2 process.yml will do that job.
Docker Integration
Using Containers? We got your back. Start today using pm2-runtime, a
perfect companion to get the most out of Node.js in production
environment.
The goal of pm2-runtime is to wrap your applications into a proper
Node.js production environment. It solves major issues when running
Node.js applications inside a container like:
- Second Process Fallback for High Application Reliability
- Process Flow Control
- Automatic Application Monitoring to keep it always sane and high performing
- Automatic Source Map Discovery and Resolving Support
docker-pm2-nodejs
The second important thing: you should put all your applications in the pm2 config file, as Docker can only run one process from CMD.
Ecosystem File
PM2 empowers your process management workflow. It allows you to
fine-tune the behavior, options, environment variables, logs files of
each application via a process file. It’s particularly useful for
micro-service based applications.
pm2 config application-declaration
Create file process.yml
apps:
  - script: ./dist/bar.js
    name: 'bar'
  - script: ./dist/foo.js
    name: 'worker'
    env:
      NODE_ENV: development
then add CMD in Dockerfile
CMD ["pm2-runtime", "process.yml"]
and remove the command from docker-compose.
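Putting it together, the Dockerfile from the question would then end like this (a sketch based on the question's own steps, minus concurrently, which is no longer needed):
FROM node:latest
WORKDIR /myapp
COPY package*.json ./
RUN npm install
RUN npm install -g pm2
COPY . ./
# pm2-runtime stays in the foreground, so the container keeps running
CMD ["pm2-runtime", "process.yml"]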
Docker and pm2 provide overlapping functionality: both have the ability to restart processes and manage logs, for example. In Docker it's generally considered a best practice to run only one process inside a container, and if you do that, you don't necessarily need pm2. What is the point of using pm2 and docker together? discusses this in more detail.
When you run your image you can specify the command to run, and you can start multiple containers off of the same image. Given the Dockerfile you show initially you can launch these as
docker run --name foo myimage node ./dist/foo.js
docker run --name bar myimage node ./dist/bar.js
docker run --name baz myimage node ./dist/baz.js
This will let you do things like restart only one of the containers when its code changes while leaving the rest untouched.
You hint at Docker Compose; its command: directive sets the same property.
version: '3'
services:
  foo:
    build: .
    command: node ./dist/foo.js
  bar:
    build: .
    command: node ./dist/bar.js
  baz:
    build: .
    command: node ./dist/baz.js
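One practical payoff of this layout: you can rebuild and restart a single server while leaving the others running, for example
docker-compose up -d --build foo
which rebuilds the image and recreates only the foo service.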

Node.js app running in docker container is not reachable

I want to run a node.js app in a docker container using docker-compose. The app is TiddlyWiki, there are other containers and the whole thing runs in a vagrant VM and is set up with ansible, but I don't think any of that matters for this problem.
This is my docker-compose config:
wiki:
  image: node:12-alpine
  container_name: nodejs
  restart: always
  working_dir: /home/node/app
  environment:
    NODE_ENV: production
  volumes:
    - "/srv/docker_wiki/:/home/node/app"
  ports:
    - "8080:8080"
  command: "node node_modules/tiddlywiki/tiddlywiki.js mywiki --listen debug-level=debug"
The app seems to start up and run without issues:
vagrant#vserver:~$ sudo docker logs nodejs
Serving on http://127.0.0.1:8080
(press ctrl-C to exit)
syncer-server-filesystem: Dispatching 'save' task: $:/StoryList
But I cannot reach it:
vagrant#vserver:~$ curl http://localhost:8080
curl: (52) Empty reply from server
vagrant#vserver:~$ curl http://localhost:8080
curl: (56) Recv failure: Connection reset by peer
It seems random which of the two different error messages comes up.
An interesting detail: if I use the default node image, which itself comes with curl, then I can in fact reach the app from within the container itself after running docker exec -it nodejs /bin/bash.
I have also tried to use a different port on the host, with the same result.
Any idea what could be going wrong here?
An interesting detail: if I use the default node image, which itself comes
with curl, then I can in fact reach the app from within the container
itself after running docker exec -it nodejs /bin/bash.
If you are able to reach the app from inside the container, it means the application is bound to 127.0.0.1, the localhost of the container.
Serving on http://127.0.0.1:8080
(press ctrl-C to exit)
You need to bind it to 0.0.0.0 instead.
So change the command to
command: "node node_modules/tiddlywiki/tiddlywiki.js mywiki --host 0.0.0.0 --listen debug-level=debug"
or
command: "node node_modules/tiddlywiki/tiddlywiki.js mywiki --listen debug-level=debug host=0.0.0.0"
You can explore the ListenCommand further here.
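The same pitfall applies to any Node server in a container: a socket bound to 127.0.0.1 is unreachable through a published port, because Docker forwards traffic to the container's external interface. A minimal illustration (a hypothetical server, not TiddlyWiki's actual code):
const http = require('http');

const server = http.createServer((req, res) => res.end('ok\n'));

// '0.0.0.0' accepts connections arriving from outside the container;
// '127.0.0.1' would only accept connections originating inside it
server.listen(8080, '0.0.0.0', () => {
  console.log('Serving on http://0.0.0.0:8080');
});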

Google Sheets API v4, How to authenticate from Node.js (Docker Container)

I'm learning how to use the Google Sheets API v4 to download data from a sheet into my Node.js server. I'm using Docker containers for my Node app. It works fine when run as a plain Node server on localhost, but fails inside a Docker container, whether that container runs on localhost or on a remote server. I've whitelisted the IP address at the Google API console. (Note: I'm easily able to use the Firebase API from this Node server, just not the Google Sheets v4 API.)
ref: https://developers.google.com/sheets/api/quickstart/nodejs#step_4_run_the_sample
The first time you run the app, the command line on the Node server displays:
Authorize this app by visiting this url:
https://accounts.google.com/o/oauth2/auth?access_type=offline&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fspreadsheets.readonly&response_type=code&client_id=xxx.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob
You go to that URL, and that Google page displays:
Sign in
Please copy this code, switch to your application and paste it there.
4/xxxxxxxxxxxx
And here's the rub: no way will that work. I can copy and paste the 4/xxx token into the command line, but it's a fail. No error message, no nothing. No function either. Is there a way to get there from here? I know this works fine in a standalone Node server on my desktop computer, but not in a Docker container (either localhost or online). Is there a manual method for the authentication?
EDIT
I started looking at the code again, and the issue is that Node's readline fails when running in a Docker container.
var rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout
});
And that issue already exists here on StackOverflow:
Unable to get standard input inside docker container
duplicate of:
how to get docker container to read from stdin?
You need to run the container in interactive mode with --interactive
or -i:
Whoa... and how do you do that in a docker-compose deployment?
Interactive shell using Docker Compose
Ouch. No go on that posting; it didn't work at all for me. See the answer provided below.
Info provided here in case anybody else hits this bump in the road.
So it turns out the solution was nowhere near that provided by Interactive shell using Docker Compose
I'm running a Node server in a Docker container. I wanted to use the terminal to insert a token upon container startup, in response to a Google Sheets API call, using Node's readline method.
Instead, the solution I came up with grew out of a note I saw in a docker-compose GitHub issue. A long, slow read through docker-compose's functionality got me to a better solution. It was as simple as:
$ docker-compose build
$ docker-compose run -p 8080:80 node
One important issue here... the word node is the name of my service as called out in the docker-compose.yml file below. This solution worked fine on both my localhost and at an online server via SSH terminal.
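As an aside, docker-compose run does not publish the ports declared in the compose file by default, which is why the explicit -p 8080:80 is needed. An alternative (assuming the docker-compose.yml below) is the --service-ports flag, which publishes the service's declared ports instead:
$ docker-compose run --service-ports node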
Dockerfile:
FROM node:8
RUN mkdir -p /opt/app
# set our node environment, either development or production
ARG NODE_ENV=production
ENV NODE_ENV $NODE_ENV
# default to port 80 for node, and 5858 or 9229 for debug
ARG PORT=80
ENV PORT $PORT
EXPOSE $PORT 5858 9229
# install dependencies first, in a different location for easier app bind mounting for local development
WORKDIR /opt
COPY package.json package-lock.json* ./
RUN npm install && npm cache clean --force
ENV PATH /opt/node_modules/.bin:$PATH
# copy in our source code last, as it changes the most
WORKDIR /opt/app
COPY . /opt/app
CMD [ "node", "./bin/www" ]
docker-compose.yml
version: '3.1'
services:
  node:  # <-- name of the service, as used in `docker-compose run ... node`
    build:
      context: .
      args:
        - NODE_ENV=development
    command: ../node_modules/.bin/nodemon ./bin/www --inspect=0.0.0.0:9229
    ports:
      - "80:80"
      - "5858:5858"
      - "9229:9229"
    volumes:
      - .:/opt/app
      # this is a workaround to prevent host node_modules from accidentally getting mounted in container
      # in case you want to use node/npm both outside container for test/lint etc. and also inside container
      # this will overwrite the default node_modules dir in container so it won't conflict with our
      # /opt/node_modules location. Thanks to PR from @brnluiz
      - notused:/opt/app/node_modules
    environment:
      - NODE_ENV=development
    # tty: true ## tested, not needed
    # stdin_open: true ## tested, not needed
volumes:
  notused:
Many thanks to Bret Fisher for his work on node docker defaults.
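For context, the part of the quickstart that needs an attached stdin is the prompt that consumes the pasted authorization code, roughly like this (a paraphrased sketch of Google's sample; rl is the readline interface from the question and oAuth2Client comes from the quickstart code):
rl.question('Enter the code from that page here: ', (code) => {
  rl.close();
  // exchange the pasted code for an access token
  oAuth2Client.getToken(code, (err, token) => {
    if (err) return console.error('Error while trying to retrieve access token', err);
    oAuth2Client.setCredentials(token);
  });
});
Under docker-compose up there is no terminal attached to the container's stdin, so rl.question never receives the pasted code; docker-compose run attaches one.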

Why doesn't the container start?

My docker-compose file is simple:
npm:
  image: node
  volumes:
    - C:\Users\Samir\npm\:/home/dev
  container_name: npm
When I run it:
docker-compose up -d
I get:
Starting npm
Then, nothing happens:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Any idea?
By default the node image is designed for interactive use and simply starts a Node REPL. When you run docker-compose up -d, the REPL starts and exits immediately because no interactive terminal is available. Try:
docker-compose run npm
(docker-compose run attaches an interactive terminal by default.) Note that this is currently not supported on Windows; there you will need to use docker run -it -v ... node instead.
If you want to start your application instead simply override the default command in your docker-compose.yml
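For example, to run an app from the mounted volume instead of the REPL (a sketch; the working directory and script name are assumptions):
npm:
  image: node
  volumes:
    - C:\Users\Samir\npm\:/home/dev
  working_dir: /home/dev
  command: node index.js
  container_name: npm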
