npm scripts hang indefinitely on Concourse CI - node.js

On Concourse, I am running integration tests driven by npm scripts. One particular script builds my backend/frontend and then runs the tests. However, once the tests are done (pass or fail), the npm script does not stop. It hangs indefinitely without erroring out, whether the tests fail or succeed. I have run this script on a local machine and in a local container, and it works fine. Only on Concourse does the script hang forever.
To give more context on my setup, here is a sample of the npm scripts run on the frontend:
"ci:start:backend": "npm run --prefix ../emailservice/mock-service dev & npm run --prefix ../server-repo ci:start:server & sleep 3"
"ci:test:system": "npm run ci:start:backend && npm run build:dist:serve & sleep 90 && npm run test:browser:ci"
npm run ci:test:system is the main script. It starts an email service, a server, and the frontend all at once in order to run the tests. It is a messy way of doing things, but it works both locally and in containers. The same approach has been used for similar server tests, and those run on Concourse fine.
The pipeline task can be seen below:
# runs unit tests for frontend
- name: run-tests
  plan:
  - get: frontend-repo
  - get: server-repo
  - get: emailservice
  - task: run-npm-tests
    privileged: true
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: jonoc/techradar-integration
      inputs:
        - name: frontend-repo
        - name: server-repo
        - name: emailservice
      run:
        path: sh
        args:
          - -exc
          - |
            mongod --fork --logpath /var/log/mongodb.log
            export SHELL=/bin/bash
            cd server-repo
            npm install --silent
            cd ../emailservice/mock-service
            npm install --silent
            cd ../../frontend-repo
            npm install --silent
            npm rebuild node-sass --silent
            npm run postinstall --silent
            npm run ci:test:system
Nothing seems out of the ordinary, but Concourse refuses to give a green or red build. I suspect the scripts that keep running in the background prevent Concourse from ending the task. However, running npm run ci:start:backend on Concourse works fine, while npm run test:browser:ci hangs forever, which only adds to the confusion about what the problem is.
Concourse version: 3.3.2
Deployment type (BOSH/Docker/binary): Docker
Infrastructure/IaaS: AWS/EC2
Browser (if applicable): Chrome
Did this used to work? Never

Are you sure that your resources are available in the task's Docker container?
You specify multiple inputs here:
- name: frontend-repo
- name: server-repo
- name: emailservice
But Concourse requires you to specify a proper path for each input if you have more than one.
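If the inputs do need explicit locations, a sketch could look like this (the path values are assumptions based on the cd commands in the task script):
inputs:
  - name: frontend-repo
    path: frontend-repo
  - name: server-repo
    path: server-repo
  - name: emailservice
    path: emailservice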
Try to hijack the task container after execution and check whether the resources are available. You can also execute the script in the container, which makes debugging easier.
fly -t <your_target> hijack -j demo_job/demo_task

My issue was resolved by changing up my npm scripts. It turns out that chaining npm run --prefix ../emailservice/mock-service dev & npm run --prefix ../server-repo ci:start:server & sleep 3 with the other scripts causes issues on Concourse.
I modified the npm scripts to use npm-run-all with the -r flag so the script finishes when my tests are done.
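The final scripts are not shown here, but a minimal sketch of the npm-run-all approach could look like this (the helper script names are assumptions; -r/--race kills the remaining parallel tasks once one of them, here the browser tests, exits):
"scripts": {
  "ci:start:email": "npm run --prefix ../emailservice/mock-service dev",
  "ci:start:server": "npm run --prefix ../server-repo ci:start:server",
  "ci:serve:frontend": "npm run build:dist:serve",
  "ci:test:system": "npm-run-all --parallel --race ci:start:email ci:start:server ci:serve:frontend test:browser:ci"
}
Something like wait-on (or the original sleep) may still be needed so test:browser:ci does not start before the servers are listening.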

Related

How to run commands in parallel Gitlab CI/CD pipeline?

I have a test command in my repo that should work when my server is up, because the tests interact with the server once it is running.
Locally I use two commands: in the first terminal, npm run dev gets the server running, and in the second terminal I run npm run test, which only passes while the first command is running. How do I achieve this in my GitLab CI/CD test stage job?
Currently I am doing this:
test_job:
  stage: test
  script:
    - npm run dev
    - npm run test
The pipeline executes npm run dev, which doesn't self-terminate, so my pipeline gets stuck. I can't seem to find the solution. Help and suggestions are appreciated. The stack is TypeScript, Express, and GraphQL.
You have two options that you can use for this:
If you're building your server into a container prior to this test job, you can run the container as a service, which will allow you to access it via its service alias. That would look something like this:
test_job:
  stage: test
  services:
    - name: my_test_container:latest
      alias: test_container
  script:
    - npm run test # should hit http://test_container:<port> to access the service
You can use the nohup Linux command to run your service in the background. The command keeps running after it starts up, and it dies when the CI job ends (as part of shutting down the job). That would look like this:
test_job:
  stage: test
  script:
    - nohup npm run dev &
    - npm run test
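One caveat with the nohup approach: npm run test can fire before the server is listening. Polling the port first guards against that (port 3000 and the availability of nc in the job image are assumptions):
test_job:
  stage: test
  script:
    - nohup npm run dev &
    # wait up to 30s for the server to accept connections before testing
    - timeout 30 sh -c 'until nc -z localhost 3000; do sleep 1; done'
    - npm run test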

Build loopback 3 project

I have a LoopBack 3 project that I want to build. I am creating a Bitbucket pipeline for this.
For deployment, I want to know how to build a LoopBack 3 project so that I can use those commands in my bitbucket.yml file.
I checked the documentation, but for LB3 there is nothing about building the project. I found this in the documentation: Preparing-for-deployment. But I am not sure how I can use it in the yml file.
For LoopBack 4 we can use @loopback/build, and it works fine there. But I couldn't find anything for LoopBack 3.
Is there any other way to build a LoopBack 3 project?
Thanks in advance!
You can't build a LoopBack 3 server; you can only run it.
To run a LoopBack server you simply use npm start, or node ., or even node server/server.
Your posttest script is running a linter and an audit, not the actual server.
What is running your server is not the script in package.json; it's the AZURE_EXTENSION_COMMAND part.
It runs pm2 start server/server.js; pm2 is a process manager that runs your Node server.
Using pm2 is correct, and making a separate step for testing and linting is also correct; the problem is that you are confusing which part does what role.
This resulted in a response to the wrong question.
I didn't find anything for creating a bundle of my LoopBack 3 app; we can't make a bundle of LB3. We can run the server.js file, and that's what I did using PM2. In AZURE_EXTENSION_COMMAND you can see that I have pulled the code from my branch and run the server.js file from it.
I used the following in my bitbucket.yml:
pipelines:
  branches:
    master:
      - step:
          script:
            - npm install
            - npm run posttest
      - step:
          name: Deploy to master
          deployment: production
          script:
            - echo "Deploying to master"
            - pipe: microsoft/azure-vm-linux-script-deploy:1.0.1
              variables:
                AZURE_APP_ID: '<appid>'
                AZURE_PASSWORD: '<pass>'
                AZURE_TENANT_ID: '<tenantid>'
                AZURE_RESOURCE_GROUP: '<rg>'
                AZURE_VM_NAME: '<vm name>'
                AZURE_EXTENSION_COMMAND: 'cd <path to my folder> && git remote add origin <my repo> && git pull origin master && npm install -g npm && npm install && sudo -E pm2 start server/server.js'
And in my package.json I have used the below script for auditing:
"scripts": {
  "posttest": "npm run lint && npm audit --audit-level high"
}
And it is working fine.
I am not sure if this is the right method, but I found it useful.
Hope it can help someone as well.
Thanks!

NPM start script runs from local shell but fails inside Docker container command

I have a Node app which consists of three separate Node servers, each run by pm2 start. I use concurrently to run the three servers, as a start-all script in package.json:
"scripts": {
...
"start-all": "concurrently \" pm2 start ./dist/foo.js \" \"pm2 start ./dist/bar.js \" \"pm2 start ./dist/baz.js\"",
"stop-all": "pm2 stop all",
"reload-all": "pm2 reload all",
...
}
This all runs fine when running from the command line on localhost, but when I run it as a docker-compose command - or as a RUN command in my Dockerfile - only one of the server scripts (a random one each time I try it!) will launch, but then immediately exit. In my --verbose docker-compose output I can see the pm2 panel (listing name, version, mode, pid, etc.), but then this error message:
pm2 start ./dist/foo.js exited with code 0.
N.B: This is all with Docker running locally (on a Mac Mini with 16GB of RAM), not on a remote server.
If I docker exec -it <container_name> /bin/bash into the container and then run npm run start-all manually from the top level of the src directory (which I COPY over in my Dockerfile), everything works. Here is my Dockerfile:
FROM node:latest
# Create the workdir
RUN mkdir /myapp
WORKDIR /myapp
# Install packages
COPY package*.json ./
RUN npm install
# Install pm2 and concurrently globally.
RUN npm install -g pm2
RUN npm install -g concurrently
# Copy source code to the container
COPY . ./
In my docker-compose file I simply list npm run start-all as a command for the Node service. But it makes no difference if I add it to the Dockerfile like this:
RUN npm run start-all
What could possibly be going on? The pm2 logs report nothing other than that the app has started.
The first reason is that pm2 start app.js starts the application in the background, which is why your container stops as soon as it runs pm2 start.
You need to start the application with pm2-runtime, which starts it in the foreground. You also do not need concurrently; a pm2 process.yml will do this job.
Docker Integration
Using Containers? We got your back. Start today using pm2-runtime, a perfect companion to get the most out of Node.js in production environment.
The goal of pm2-runtime is to wrap your applications into a proper Node.js production environment. It solves major issues when running Node.js applications inside a container like:
- Second Process Fallback for High Application Reliability
- Process Flow Control
- Automatic Application Monitoring to keep it always sane and high performing
- Automatic Source Map Discovery and Resolving Support
docker-pm2-nodejs
The second important thing: you should put all your applications in a pm2 config file, since Docker can only run one process from CMD.
Ecosystem File
PM2 empowers your process management workflow. It allows you to fine-tune the behavior, options, environment variables, and log files of each application via a process file. It's particularly useful for micro-service based applications.
pm2 config application-declaration
Create the file process.yml:
apps:
  - script: ./dist/bar.js
    name: 'bar'
  - script: ./dist/foo.js
    name: 'worker'
    env:
      NODE_ENV: development
Then add the CMD in the Dockerfile:
CMD ["pm2-runtime", "process.yml"]
and remove command from docker-compose.
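With the CMD baked into the image, the compose service then only needs the build, roughly like this (the service name app is an assumption):
version: '3'
services:
  app:
    build: .
    # no command: needed; the image's CMD runs pm2-runtime process.yml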
Docker and pm2 provide overlapping functionality: both have the ability to restart processes and manage logs, for example. In Docker it's generally considered a best practice to only run one process inside a container, and if you do that, you don't necessarily need pm2. The question "what is the point of using pm2 and docker together?" discusses this in more detail.
When you run your image you can specify the command to run, and you can start multiple containers off the same image. Given the Dockerfile you show initially, you can launch these as:
docker run --name foo myimage node ./dist/foo.js
docker run --name bar myimage node ./dist/bar.js
docker run --name baz myimage node ./dist/baz.js
This will let you do things like restart only one of the containers when its code changes while leaving the rest untouched.
You hint at Docker Compose; its command: directive sets the same property.
version: '3'
services:
  foo:
    build: .
    command: node ./dist/foo.js
  bar:
    build: .
    command: node ./dist/bar.js
  baz:
    build: .
    command: node ./dist/baz.js

GitLab CI stuck on running NodeJS server

I'm trying to use GitLab CI to build, test and deploy an Express app on a server (the Runner uses the shell executor). However, the test:async and deploy_staging jobs do not terminate. Yet when checking the terminal inside GitLab, the Express server does indeed start. What gives?
stages:
  - build
  - test
  - deploy

### Jobs ###

build:
  stage: build
  script:
    - npm install -q
    - npm run build
    - knex migrate:latest
    - knex seed:run
  artifacts:
    paths:
      - build/
      - node_modules/
  tags:
    - database
    - build

test:lint:
  stage: test
  script:
    - npm run lint
  tags:
    - lint

# Run the Express server
test:async:
  stage: test
  script:
    - npm start &
    - curl http://localhost:3000
  tags:
    - server

deploy_staging:
  stage: deploy
  script:
    - npm start
  environment:
    name: staging
    url: my_url_here
  tags:
    - deployment
The npm start is just node build/bundle.js. The build script is using Webpack.
Note: this solution works fine when using a GitLab Runner with the shell executor.
Generally in GitLab CI we run ordered jobs with specific tasks that are executed one after another.
So for the build job, the npm install -q command runs and terminates with an exit status (0 if the command was successful), then the next command npm run build runs, and so on until the job terminates.
For the test job, the npm start & process keeps running, so the job won't be able to terminate.
The problem is that sometimes we need a process to run in the background, or a process that keeps living between tasks. For example, in some kinds of tests we need to keep the server running, something like this:
test:
  stage: test
  script:
    - npm start
    - npm test
In this case npm test will never start, because npm start keeps running without terminating.
The solution is to use before_script, where we run a shell script that leaves the npm start process running, and then after_script to kill that npm start process.
So in our .gitlab-ci.yml we write:
test:
  stage: test
  before_script:
    - ./serverstart.sh
  script:
    - npm test
  after_script:
    - kill -9 $(ps aux | grep '\snode\s' | awk '{print $2}')
and in serverstart.sh:
#!/bin/bash
# start the server and send the console and error logs on nodeserver.log
npm start > nodeserver.log 2>&1 &
# keep waiting until the server is started
# (in this case wait for mongodb://localhost:27017/app-test to be logged)
while ! grep -q "mongodb://localhost:27017/app-test" nodeserver.log
do
sleep .1
done
echo -e "server has started\n"
exit 0
Thanks to that, serverstart.sh terminates while keeping the npm start process alive, which lets us move on to the script step where we have npm test.
npm test terminates and passes to after_script, where we kill all Node.js processes.
You are starting a background job during your test phase which never terminates - therefore the job runs forever.
GitLab CI jobs are meant to be short-running tasks - like compiling, executing unit tests or gathering information such as code coverage - which are executed in a predefined order. In your case, the order is build -> test -> deploy; since the test job doesn't finish, deploy isn't even executed.
Depending on your environment, you will have to create a different job for deploying your node app. For example, you can push the build output to a remote server using a tool like scp or upload it to AWS; after that, you reference the final URL in the url: field in your .gitlab-ci.yml.
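As a rough sketch, such a deploy job could push the build output to a remote host over scp (host, user, paths and the pm2 restart are all assumptions):
deploy_staging:
  stage: deploy
  script:
    # ship the build/ artifacts from the build job to the target host
    - scp -r build/ deploy@staging.example.com:/var/www/app
    # restart whatever serves the app on that host (assumes pm2 is installed there)
    - ssh deploy@staging.example.com 'pm2 restart app || pm2 start /var/www/app/bundle.js --name app'
  environment:
    name: staging
    url: https://staging.example.com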

.ebextensions with CodePipeline and Elastic Beanstalk

I started working on my first CodePipeline with a Node.js app hosted on GitHub. I would like to create a simple pipe as follows:
GitHub repo triggers the pipe
Test env (Elastic Beanstalk app) is built from an S3 .zip file
Test env runs npm test and npm lint
If everything is OK, then the QA env (another EB app) is built
For the above pipe I've created .config files under the .ebextensions directory:
I would like to use npm install --production for the QA and PROD envs, but it seems that EC2 can't find node or npm. I checked the logs: EC2 triggered npm install by default in a temporary folder, then it fails on my first script, and the app catalogue is always empty.
container_commands:
  install-dev:
    command: "npm install"
    test: "[ \"$NODE_ENV\" = \"TEST\" ]"
    ignoreErrors: false
  install-prod:
    command: "npm install --production"
    test: "[ \"$NODE_ENV\" != \"TEST\" ]"
    ignoreErrors: false
Is it possible to run unit tests and linting without Jenkins?
container_commands:
  lint:
    command: "npm run lint"
    test: "[ \"$NODE_ENV\" = \"TEST\" ]"
    ignoreErrors: false
  test:
    command: "npm run test"
    test: "[ \"$NODE_ENV\" = \"TEST\" ]"
    ignoreErrors: false
I set NODE_ENV for each Elastic Beanstalk instance. No matter what I do, my pipe fails every time because npm is not recognized; but how is that possible if I'm running 64-bit Amazon Linux with Node.js? What's more, I cannot find any examples of CodePipeline with Node.js in the AWS docs. Thanks in advance!
If you're using AWS for CI/CD, you can use CodeBuild. However, GitHub provides a great feature called Actions for running unit tests, which I find much simpler than AWS. Anyway, I will walk you through both examples:
Using AWS for running Unit Tests
Essentially, you could add a new stage to your CodePipeline and configure CodeBuild to run the unit tests, e.g.
First, add a buildspec.yml file in the root folder of your app; you can use the following example:
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 10
    commands:
      - echo Installing Mocha globally...
      - npm install -g mocha
  pre_build:
    commands:
      - echo Installing dependencies...
      - npm install
      - npm install unit.js
  build:
    commands:
      - echo Build started on `date`
      - echo Run Unit Tests and so on
      - npm run test
      - npm run lint
  post_build:
    commands:
      - echo Build completed on `date`
# THIS IS OPTIONAL
artifacts:
  files:
    - app.js
    - package.json
    - src/app/*
    - node_modules/**/*
You can find everything you need in the BackSpace Academy; this course is free:
AWS DevOps CI/CD - CodePipeline, Elastic Beanstalk and Mocha
Using Github for running Unit Tests
You could create your custom actions using GitHub; it will automatically set up everything you need in your root folder, e.g.
After choosing the appropriate workflow, it will automatically generate the file .github/workflows/nodejs.yml.
It will look like this:
name: Node CI
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [8.x, 10.x, 12.x]
    steps:
      - uses: actions/checkout@v1
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - name: npm install, build, and test
        run: |
          npm install
          npm run build --if-present
          npm test
        env:
          CI: true
I hope you find everything you need in this answer. Cheers!
Have you incorporated CodeBuild into your pipeline?
You should:
1) Create a pipeline whose source is your GitHub account. Go through the setup procedure so that commits on a particular branch trigger the CodePipeline.
2) Create a test stage in your CodePipeline which leverages the CodeBuild service. In order to run your Node tests, you might need to provide a configured build environment, and you probably also need to provide a buildspec file that specifies the tests to run, etc.
3) Assuming the test stage passes, the pipeline can continue to another stage linked to an Elastic Beanstalk app environment that supports the Node platform. These environments are purely for artifacts that have passed testing, so I see no need for the .ebextensions commands written above.
Have a read of what CodeBuild can do to help you run tests for Node:
https://aws.amazon.com/codebuild/
Good luck!
