nodemon not starting when run in kubernetes environment - node.js

I am working on containerizing a Node.js app to run on GKE.
The scripts section of package.json looks like this:
"scripts": {
"dev": "npm-run-all --parallel dev:build-server dev:build-client dev:server",
"dev:server": "nodemon -L --watch build --exec node build/bundle.js",
"dev:build-server": "webpack --config webpack.server.js --watch",
"dev:build-client": "webpack --config webpack.client.js --watch"
},
So I use npm run dev to start them all.
This works perfectly while running on a VM.
But when run as a container in Kubernetes, the nodemon process won't start,
nor does anything listen on the port; the browser shows a 502 error.
Yet when you SSH into the pod and run the command by hand, the process starts and listens on port 3001, but then the browser reports routes not found, since nothing is wired up as expected.
Below is the Dockerfile:
FROM node:10.19-stretch
ENV USER="vue" UID="1001"
RUN apt-get update --fix-missing
RUN apt-get install -yq curl
RUN rm /bin/sh && ln -s /bin/bash /bin/sh && \
mkdir -p /opt/vue && \
addgroup --system --gid $UID $USER && \
adduser --system --uid $UID --gid $UID $USER
WORKDIR /opt/vue
COPY dashboard/. /opt/vue/
RUN npm cache clean --force && \
npm install && \
npm cache verify && \
chown $USER:$USER -R /opt/vue
USER $USER
EXPOSE 3001
# ENTRYPOINT ["/usr/bin/dumb-init","--"]
# CMD ["npm run dev"]
CMD ["npm", "run", "dev" ]
I tried different base images (node:10.19-stretch, node:10.19.0-alpine3.11).
Some had suggested installing inotify, but even that didn't work.
What am I missing? Please help.
UPDATE ---
When run in either Docker or Kubernetes, standard output logs the following,
with no errors (verbose output enabled):
[nodemon] Looking in package.json for nodemonConfig
Usage: nodemon [nodemon options] [script.js] [args]
See "nodemon --help" for more.
[nodemon] exiting

After some peer review and posting this on other networks, I found the correct answer.
The underlying issue was simple:
in the scripts section, all the dev:* scripts ran in parallel. So while the two build steps were still running, nodemon was already trying to start the server. Since build/bundle.js did not exist yet, node failed with file not found, but nodemon interpreted that as invalid arguments and printed the command help.
So what I did was move the build steps into the Dockerfile by running
npm run dev
during the image build, and then in the CMD run
npm run prod:server
which actually sped up the startup time too.
Here is the part of the Dockerfile that changed:
RUN npm cache clean --force && \
npm install && \
npm cache verify && \
chown $USER:$USER -R /opt/vue
USER $USER
EXPOSE 3001
CMD ["npm", "run", "prod:server" ]
I also removed nodemon, since it's not needed: the files will not change inside the container anyway, and it is too bulky.
"scripts": {
"dev": "npm-run-all --parallel dev:build-server dev:build-client",
"prod:server": "node build/bundle.js",
"dev:build-server": "webpack --config webpack.server.js --watch",
"dev:build-client": "webpack --config webpack.client.js --watch"
},
Finally... voilà! It worked.
Thanks to everyone who put their precious time into this.

Related

Docker container is refusing connection

I am using a Dockerfile for a Node.js project, but it returns a Connection refused error.
When I run the app without Docker, it works absolutely fine.
The commands I use to build and run the Docker container are as follows:
docker build -t myapp .
docker run -it -p 8080:8080 myapp
The above commands run without any error, but when I hit http://localhost:8080/test-url it fails.
My Dockerfile is as follows:
FROM node:16.16.0-alpine
ADD . /opt
COPY . .
RUN npm install
EXPOSE 8080
RUN chmod +x /opt/deploy.sh
RUN apk update && apk add bash
CMD ["/bin/bash", "/opt/deploy.sh"]
And my package.json is as follows (truncated to show only scripts):
"scripts": {
"start": "DEBUG=app* node index.js",
"build": "rimraf build && babel-node ./src --out-dir build/src && npm run docs",
"dev": "DEBUG=app* nodemon --exec babel-node index.js",
"lint": "eslint 'index.js' 'src/**/*.js' 'src/index.js'",
"docs": "apidoc -i src/ -o public/docs",
"prepare": "husky install",
"lint-staged": "lint-staged"
},
For development I use the following command, which works fine:
npm run dev
For deployment I run deploy.sh, which sets env variables and ends with:
npm run build
npm run start
Even when I try http://localhost:8080/test-url from inside the container's interactive terminal, it returns the same error: Connection refused.
Your port mapping looks right to me. Did you check your firewall? Maybe it is blocking the connection. It could also help to test whether you can run an NGINX container on port 8080; that way you can check whether it is a general Docker configuration problem or something specific to your image.
Also, did you try setting your Node server to listen on 0.0.0.0 instead of localhost? I'm not sure exactly how Docker handles IPs inside containers, but the container is likely reached via its internal IP, and if your server listens on localhost only, it will refuse the connection.
I hope one of these things points you in the right direction.

What is the `pm2` equivalent of the command `yarn run start`?

I run the Node.js app with yarn run start; what command should I use with pm2?
pm2 yarn run start gives me an error.
My package.json content:
"scripts": {
"start": "budo main.js:dist/bundle.js --live --host 0.0.0.0",
"watch": "watchify main.js -v --debug -o dist/bundle.js",
"prep": "yarn && mkdirp dist",
"build": "browserify main.js -o dist/bundle.js",
"lint": "eslint main.js --fix",
"deploy": "yarn build && uglifyjs dist/bundle.js -c -m -o dist/bundle.min.js"
},
The error you're getting happens because a bash script (yarn) is being executed with node,
since pm2's default interpreter is node.
To run yarn you'll have to set the interpreter to bash.
Try the command below:
pm2 start yarn --interpreter bash --name api -- start
For me (on Ubuntu 20),
pm2 start yarn --name api -- start
did the trick. With the bash interpreter flag it would error in pm2.
My pm2 version is 5.2.0.
pm2 start "yarn start" --name yourProject
pm2 start yarn --name api -- start
didn't work for me. It displays:
0|api | /usr/share/yarn/bin/yarn:2
0|api | argv0=$(echo "$0" | sed -e 's,\\,/,g')
0|api | SyntaxError: missing ) after argument list
which means the yarn shell script is being launched with Node.js.
How about this command:
pm2 start "yarn start" --name api
It works like a charm for me.

webpack --watch && http-server ./dist -p 8080 --cors -o not working

"server:dev": "webpack --watch && http-server ./dist -p 8080 --cors -o -d false"
When I run npm run server:dev, the script starts --watch but http-server never runs. If I swap the two commands, http-server runs and webpack doesn't watch. I know webpack-dev-server solves the problem, but I want one simple command that watches for changes and starts the server in the browser when I build.
Could anyone help me with this?
The && only runs the second command after the first one exits, and webpack --watch never exits. If you are looking for a cross-platform solution, take a look at the package npm-run-all:
{
"scripts": {
"server:webpack": "webpack --watch",
"server:start": "http-server ./dist -p 8080 --cors -o -d false",
"serve": "npm-run-all --parallel server:*"
}
}

How to Dockerize an Angular 2 web app with Node.js?

I have tried many ways to Dockerize an Angular 2 web app using Node.js, but none have worked: the app runs locally but not in a Docker container. Does anyone have a proper Dockerfile and package.json for Dockerizing an Angular 2 app?
FROM node:boron
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 5655
CMD [ "npm","start" ]
The site can't be reached when I access it.
Below are some possible approaches I found on Stack Overflow that did not work:
docker run --rm --name my-container -it -p 8080:4200 -v $(pwd):/var/www -w "/var/www" node npm start
In package.json I even bound the port to all interfaces: "ng serve --host 0.0.0.0",
Also, please suggest which server I should use for Dockerizing an Angular 2 web app: nginx or node.
"scripts": {
"start": "node ./bin/www",
"build": "del-cli public/js/app && webpack --config webpack.config.dev.js --progress --profile --watch",
"build:prod": "del-cli public/js/app && ngc -p tsconfig.aot.json && ngc -p tsconfig.aot.json && webpack --config webpack.config.prod.js --progress --profile --bail && del-cli 'public/js/app/**/*.js' 'public/js/app/**/*.js.map' '!public/js/app/bundle.js' '!public/js/app/*.chunk.js' 'assets/app/**/*.ngfactory.ts' 'assets/app/**/*.shim.ts'"
}
First of all, I would suggest running "npm start" on your machine
and checking that you can reach your Angular app in a browser;
if this works, note which port your Angular app is served on;
add a new RUN section "RUN npm run build:prod" right before the EXPOSE line;
set the correct port in the EXPOSE section;
build the image and run your container: "docker build -t my-app ." then "docker run --rm --name my-container -it -p PORT:PORT my-app";
open a browser at http://127.0.0.1:HERE_PORT_FROM_EXPOSE_SECTION
Here is a small example:
https://github.com/karlkori/dockerized-angular-app
Also I want to add that for production it will be better to use Nginx or CloudFront.

How-to run a node.js docker instance dropping into a shell that is auto-tailing logs

SO...
I am trying to create an elegant Docker/Node setup for my team on a greenfield prototype project. My team will need Node/npm and the Docker CLI installed beforehand; after that I will be using npm to manage everything. Previously I had...
"scripts": {
"docker": "npm run docker-build && npm run docker-start",
"docker-build": "docker build -t docker_foo .",
"docker-start": "docker run -it -p 8080:8080 --rm docker_foo",
"start": "node server.js"
}
...and the Dockerfile contains the CMD...
# Other stuff...
EXPOSE 8080
CMD ["npm", "start"]
...that eventually starts the Node server. This works really well for seeing logs and cleaning up containers, but I want to make it better. I would like to start the container in the background with the -d option and then attach to it with an initial command that tails the logs, simulating the same behavior, except that when the user terminates the tail they are still inside the container and can evaluate its current state. This led me to have...
"scripts": {
"docker": "npm run docker-build && npm run docker-start && npm run docker-attach",
"docker-build": "docker build -t docker_foo .",
"docker-start": "docker run -d -p 8080:8080 --name docker_foo docker_foo",
"docker-attach": "docker exec -it docker_foo /bin/ash",
"docker-clean": "npm run docker-clean-containers && npm run docker-clean-images",
"docker-clean-containers": "docker ps -a -q | xargs docker rm -f -v",
"docker-clean-images": "docker images -f 'dangling=true' -q | xargs docker rmi",
"start": "node server.js"
}
...but I am having trouble finding where the Node server logs are stored, either in the container or on my local host, to enable this workflow. Is there some way to redirect stdout and stderr to a location inside the container for historical purposes, and a way to extend my attach command above so it initially tails those logs?
What I ended up doing was simply starting the Node server in the background...
"scripts": {
"prebuild": "npm run clean",
"build": "docker build -t docker_foo .",
"clean": "npm run clean:containers && npm run clean:images && npm run clean:volumes",
"clean:containers": "docker ps -a -q | xargs docker rm -f -v",
"clean:images": "docker images -f 'dangling=true' -q | xargs docker rmi",
"clean:volumes": "docker volume ls -qf dangling=true | xargs docker volume rm",
"prerun": "npm run build",
"run": "docker run -d -p 8080:8080 --name docker_foo -v $(pwd)/app:/usr/src/docker_foo/app -v $(pwd)/test:/usr/src/docker_foo/test docker_foo",
"start": "nodemon --watch app server.js",
"logs": "docker logs -f docker_foo",
"pretest": "npm run build",
"test": "docker run -p 8080:8080 --name docker_foo -it --rm docker_foo node_modules/.bin/istanbul cover node_modules/.bin/_mocha",
"prewatch": "npm run run",
"watch": "docker exec -it docker_foo node_modules/.bin/nodemon --exec \"node_modules/.bin/_mocha -w\""
}
...so I can run "npm run run" to start my Docker instance in the background, with the service's filesystem overlaid by my host's and watched by nodemon for file changes for rapid application development, and then run "npm run logs" to tail the logs during development.
Alternatively, I can now run "npm run watch" to have the Docker image/container/volume cleaned, rebuilt, and run, with nodemon restarting the server and mocha watching the app and test directories to re-run the tests and continually output the results.
