I am working on a NestJS project that runs with docker-compose. Among the many containers started by docker-compose, one runs the application with nodemon (so I can attach a debugger if necessary) and another runs the unit tests whenever changes in the code are detected.
Is there a way to run the application and to run unit tests on code changes in the same container? Is that good practice? The full set of containers is quite heavy on resources, so having a single container both run the application and run the unit tests on the fly would let me remove the container used only for unit tests and make my machine run faster.
The nodemon config file is this:
{
  "watch": ["src"],
  "ext": "ts,json",
  "ignore": ["src/**/*.spec.ts"],
  "exec": "nest build && node --inspect=0.0.0.0 ./dist/main.js"
}
The unit tests in the second container are executed with jest --watch.
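For context, the compose setup looks roughly like this (a sketch; the service names, paths, and volumes here are illustrative, not my real file):

# docker-compose.yml (sketch)
services:
  app:
    build: .
    command: npx nodemon              # picks up the nodemon.json above
    ports:
      - "9229:9229"                   # debug port
    volumes:
      - ./:/usr/src/app
  unit-tests:
    build: .
    command: npx jest --watch         # re-runs the *.spec.ts tests on change
    volumes:
      - ./:/usr/src/app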
I am using one container for both running the app and executing tests, and I see no problem with it. Since I'm using sqlite3 for e2e tests, my Dockerfile looks like this:
FROM node:12.18.1
RUN apt-get update \
    && apt-get install -y sqlite3   # -y keeps the install non-interactive during the build
Also in docker-compose.yml my command for this node container is:
command: npm run start:debug-remote
because why not. This npm command is:
"start:debug-remote": "nest start --debug 0.0.0.0:9229 --watch"
In order for the debugger to work, you have to expose this port (9229) in docker-compose.yml (or in the Dockerfile) and point the .vscode/launch.json configuration at it.
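A minimal attach configuration might look like this (a sketch; remoteRoot is an assumption and must match the app's path inside the container):

// .vscode/launch.json (sketch)
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "attach",
      "name": "Attach to container",
      "address": "localhost",
      "port": 9229,
      "restart": true,
      "localRoot": "${workspaceFolder}",
      "remoteRoot": "/usr/src/app"
    }
  ]
}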
Related
I want to provide uninterrupted service for a NestJS app using pm2.
I pull the changes with the git pull origin master command.
After that, I build the new changes with the yarn build command.
At that moment the service stops with an error saying dist/main.js cannot be found.
I tried moving the freshly built dist folder outside the working folder using mv, but the service still stopped, and it only started again after I entered the reload command.
Below is my code. How can I run an uninterrupted service?
// ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'my_api',
      script: 'dist/main.js',
      watch: '.',
      instances: 2,
      exec_mode: 'cluster',
      wait_ready: true,
      listen_timeout: 20000,
      kill_timeout: 5000,
    },
  ],
};
// package.json
"scripts": {
  "prebuild": "rimraf dist",
  "start": "yarn build && pm2 start ecosystem.config.js"
}
You need to remove the dist folder before creating the build of the application: stop the pm2 service, create a fresh build, and then restart the pm2 service. It will be fine.
I removed the watch: '.' option; the service no longer stopped during the build,
and I was able to roll the new build out normally with the pm2 reload my_api command.
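One more detail worth noting: with wait_ready: true, pm2 delays the reload handover until the process sends a ready signal (or until listen_timeout expires), so the old instance keeps serving while the new one boots. A minimal sketch of the NestJS bootstrap (the port and file layout are the usual NestJS defaults, assumed here):

// src/main.ts (sketch)
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  await app.listen(3000);
  // With wait_ready: true, pm2 waits for this signal before it considers
  // the new instance up and stops the old one during `pm2 reload`.
  if (process.send) {
    process.send('ready');
  }
}
bootstrap();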
I am using a Dockerfile for a Node.js project, but it's returning a Connection refused error.
When I run the app without Docker it works absolutely fine.
The commands I use to build and run the docker container are as follows:
docker build -t myapp .
docker run -it -p 8080:8080 myapp
The above commands run without any error, but when I hit http://localhost:8080/test-url it fails.
My dockerfile is as follows:
FROM node:16.16.0-alpine
ADD . /opt
COPY . .
RUN npm install
EXPOSE 8080
RUN chmod +x /opt/deploy.sh
RUN apk update && apk add bash
CMD ["/bin/bash", "/opt/deploy.sh"]
And my package.json is as follows (truncated to show only script):
"scripts": {
"start": "DEBUG=app* node index.js",
"build": "rimraf build && babel-node ./src --out-dir build/src && npm run docs",
"dev": "DEBUG=app* nodemon --exec babel-node index.js",
"lint": "eslint 'index.js' 'src/**/*.js' 'src/index.js'",
"docs": "apidoc -i src/ -o public/docs",
"prepare": "husky install",
"lint-staged": "lint-staged"
},
For development I use the following command, which works fine:
npm run dev
For deployment I run deploy.sh, which sets env variables and then runs the final commands:
npm run build
npm run start
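deploy.sh is essentially this (a sketch; the real env variable names are omitted here):

#!/bin/bash
# deploy.sh (sketch) - exports the env variables, then builds and starts
export SOME_ENV_VAR=value   # placeholder for the actual variables
npm run build
npm run start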
Even when I try http://localhost:8080/test-url after logging into the docker interactive terminal, it returns the same error - Connection refused.
Your port mapping looks right to me. Did you check your firewall? Maybe it is blocking the connection. It could also be helpful to test whether you can run an NGINX container on port 8080. That way you can check whether it is a general configuration problem with Docker or a problem specific to your image.
Also, did you try to set your node server to listen on 0.0.0.0 instead of localhost? Inside Docker the container is reached via its internal IP, so if your server listens on localhost only, it won't accept connections arriving from outside the container.
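For an Express app that is a one-line change (a sketch; Express itself and index.js are assumptions, since the framework isn't shown in the question):

// index.js (sketch)
const express = require('express');
const app = express();

app.get('/test-url', (req, res) => res.send('ok'));

// Bind to 0.0.0.0 so the server accepts connections arriving via the
// container's network interface, not only from inside the container.
app.listen(8080, '0.0.0.0', () => console.log('Listening on 8080'));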
I hope one of these things can point you in the right direction.
I'm trying to build a docker image of my Node backend for deployment, but when I run it in a container and open it in the browser I get a "This site can't be reached" error and the following log in dev tools:
crbug/1173575, non-JS module files deprecated
My backend is based on GraphQL Apollo server. Dockerfile is as following:
FROM node:16
WORKDIR /app
COPY ./package*.json ./
RUN npm ci --only=production
# RUN npm install
COPY . .
# RUN npm run build
EXPOSE 4000
CMD [ "node", "dist/main.js" ]
I've also tried to use the commented code, with no result.
The image builds without a problem and after running the container I get 🚀 Server ready at localhost:4000 in the docker logs, so I'd expect it to work properly.
"scripts": {
"build": "tsc",
"start": "node dist/main.js",
"dev": "concurrently \"tsc -w\" \"nodemon dist/main.js\""
},
That's the scripts part of my package.json. I've also tried CMD ["npm", "start"] in the Dockerfile, but that doesn't work either. When I run the backend from the terminal using npm start I can access the GraphQL playground at localhost:4000 - I assume that should be the same with docker?
I'm still new to docker, so I'd be grateful for any hints. Thanks!
EDIT:
I run the container with the following command:
docker run --rm -d -p 4000:80 image-name:latest
Seemingly it's running on 0.0.0.0:4000, as that's what docker ps shows under PORTS.
Run the docker inspect command to get the container's IP, and then open the app through that IP in the browser.
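For example (a sketch; the container ID is a placeholder, and the format string prints the IP of each network the container is attached to):

# Print the container's internal IP
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-id>
# Then probe the port the app actually listens on inside the container
curl http://<container-ip>:4000/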
I'm trying to run a functional test for a node app.
In my package.json I have the following scripts:
"scripts": {
"web-server": "NODE_ENV=test node app.js &",
"test": "npm run web-server && mocha ./tests/functional/*.js --exit",
"posttest": "pkill -f node"
}
But when I run it, the tests start before the server has finished starting.
How can I wait for the server?
I found wait-on today and like its approach. It only does the wait; it doesn't do other things like command launching.
Using it with concurrently, like so:
"scripts": {
"xxx": "concurrently -n server,mocha \"npm run web-server\" \"npx wait-on http://localhost:8080 && npx mocha ...\"",
A note for new visitors: I think wait-on is currently the answer that best fits the title's question.
As far as I understand,
you'd like to run your local server, and once the server is up the test cycle should be triggered.
I suggest using the package start-server-and-test, which sounds well suited to your case; the NPM package page is here.
Let's take your current package.json scripts object and rewrite it.
The start and test scripts are the two basic scripts you need to maintain your app easily.
start - to start your app (I suggest using nodemon or pm2)
test - to call your test script
Notes:
To develop tests you will need to handle two terminals, one for each of the above.
I'm assuming you're running on port 8080.
The package also handles the termination of both processes (node and mocha) in both the success and failure cases, so there's no need for posttest:ci, --exit, etc.
There is no need for the child process (the &) at the end of your web-server script in package.json.
Here is the new scripts object, from my POV:
"scripts": {
"start": "node app.js",
"test": "NODE_ENV=test mocha ./tests/functional/*.js",
"test:ci": "NODE_ENV=test start-server-and-test start \"http://localhost:8080\" test"
}
Now, from your CLI:
npm run test:ci
The ci suffix indicates that this process is fully automated.
In a real CI environment you're expected to define CI=true, just as all CI tools do by default;
it's not necessary for local usage.
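So the same script works in both places, just with the flag flipped (a sketch):

# Local run
npm run test:ci

# Typical CI run (most CI providers export CI=true for you)
CI=true npm run test:ci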
I have a ReactJS application and I'm deploying it using Kubernetes.
I'm trying to wrap my head around how to inject environment variables into my config.js file from within the Kubernetes deployment file.
I currently have these:
config.js file:
export const CLIENT_API_ENDPOINT = {
  default: process.env.URL_TO_SERVICE,
};
and here's my Kubernetes deployment variables:
"spec": {
"containers": [
{
"name": "container_name",
"image": "image_name",
"env": [
{
"name": "URL_TO_SERVICE",
"value": "https://www.myurl.com"
}
]
I'm kind of clueless as to why I can't see the environment variable in my config.js file. Any help would be highly appreciated.
Here's my dockerfile:
# Dockerfile (tag: v3)
FROM node:9.3.0
RUN npm install webpack -g
WORKDIR /tmp
COPY package.json /tmp/
RUN npm config set registry http://registry.npmjs.org/ && npm install
WORKDIR /usr/src/app
COPY . /usr/src/app/
RUN cp -a /tmp/node_modules /usr/src/app/
#RUN webpack
ENV NODE_ENV=production
ENV PORT=4000
#CMD [ "/usr/local/bin/node", "./index.js" ]
ENTRYPOINT npm start
EXPOSE 4000
The kubernetes environment variables are available in your container. So you would think the task here is a version of getting server side configuration variables shipped to your client side code.
But, If your react application is running in a container, you are most likely running your javascript build pipeline when you build the docker image. Something like this:
RUN npm run build
# Run app using nodemon
CMD [ "npm", "start" ]
When docker is building your container, the environment variables injected by kubernetes aren't yet available. They won't exist until you run the built container on a cluster.
One solution, and this is maybe your shortest path, is to stop building your client-side code in the docker file and to combine the build and run steps in the npm start command. Something like this if you are using webpack:
"start": "webpack -p --progress --config webpack.production.config.js && node index.js"
If you go this route, then you can use any of the well documented techniques for shipping server side environment variables to your client during the build step : Passing environment-dependent variables in webpack. There are similar techniques and tools for all other javascript build tools.
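For instance, with webpack you could bake the variable into the bundle at build time via DefinePlugin, which in this setup runs on the cluster where the kubernetes env vars exist (a sketch; only the URL_TO_SERVICE variable from the deployment above is assumed):

// webpack.production.config.js (relevant part only)
const webpack = require('webpack');

module.exports = {
  // ...entry, output, loaders, etc. as in your existing config...
  plugins: [
    new webpack.DefinePlugin({
      // Replaces process.env.URL_TO_SERVICE in the client bundle with the
      // value present when `npm start` runs inside the pod.
      'process.env.URL_TO_SERVICE': JSON.stringify(process.env.URL_TO_SERVICE),
    }),
  ],
};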
A second option: if you are running node, you can continue building your client app in the container, but have the node app write a config.js to the file system when the node application starts up.
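A minimal sketch of that approach (the public directory and the global variable name are assumptions):

// index.js (sketch) - runs at container startup, after kubernetes has
// injected the env vars, so their values can be written out for the client
const fs = require('fs');
const path = require('path');

const clientConfig = `window.CLIENT_CONFIG = ${JSON.stringify({
  CLIENT_API_ENDPOINT: process.env.URL_TO_SERVICE,
})};`;

// 'public' is assumed to be where the static client assets are served from;
// the client then loads /config.js via a plain <script> tag.
fs.writeFileSync(path.join(__dirname, 'public', 'config.js'), clientConfig);

// ...then start the server as usual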
You could do even more complicated things like exposing your config via an api (a variation on the second approach), but this seems like throwing good money after bad.
I wonder if there isn't an easier way, though. If you have a purely client-side app, why not just deploy it as a static site to, say, an Amazon or gcloud bucket, Firebase, or Netlify? That way you just run the build process and deploy to the correct environment. No container needed.